% Source: https://arxiv.org/abs/2007.02590
\title{Angle sums of random polytopes}
\begin{abstract}
For two families of random polytopes we compute explicitly the expected sums of the conic intrinsic volumes and the Grassmann angles at all faces of any given dimension of the polytope under consideration. As special cases, we compute the expected sums of internal and external angles at all faces of any fixed dimension. The first family are the Gaussian polytopes, defined as convex hulls of i.i.d.\ samples from a non-degenerate Gaussian distribution in $\mathbb{R}^d$. The second family are convex hulls of random walks with exchangeable increments satisfying a certain mild general position assumption. The expected sums are expressed in terms of the angles of the regular simplices and the Stirling numbers, respectively. There are non-trivial analogies between these two settings. Further, we compute the angle sums for Gaussian projections of arbitrary polyhedral sets, of which the Gaussian polytopes are a special case. Also, we show that the expected Grassmann angle sums of a random polytope with a rotationally invariant law are invariant under affine transformations. Results on the faces of linear images of polyhedral sets may also be of independent interest; these results are well known, but it seems that no detailed proofs can be found in the existing literature.
\end{abstract}
\section{Introduction}
\subsection{Angles and face numbers}
For a convex polytope $P\subset\mathbb{R}^d$ denote by $\mathcal{F}(P)$ the set of its faces including $P$ itself. The classical Euler relation (see, e.g.,~\cite[Chapter~8]{bG03}) states that for every polytope $P$,
\begin{align}\label{916}
\sum_{F\in\mathcal{F}(P)}(-1)^{\dim F}=1.
\end{align}
A similar, although less widely known, result exists for the internal solid angles of $P$. Let $\beta(F,P)$ denote the internal solid angle of $P$ at the face $F$. It can be defined as
$$
\beta(F, P)
:=
\lim_{r\downarrow 0} \frac{\mathop{\mathrm{Vol}}\nolimits(\mathbb{B}_r(z)\cap P) }{\mathop{\mathrm{Vol}}\nolimits(\mathbb{B}_r(z))},
$$
where $\mathop{\mathrm{Vol}}\nolimits$ denotes the Lebesgue measure in $\mathbb{R}^d$, $\mathbb{B}_r(z)$ is the $d$-dimensional ball with radius $r>0$ centered at $z$, and $z$ is any point in $F$ not belonging to a face of smaller dimension.
Then the following Gram--Euler relation holds:
\begin{align}\label{934}
\sum_{F\in\mathcal{F}(P)}(-1)^{\dim F}\beta(F,P)=0,
\end{align}
see~\cite{jG74} for $d=3$ and~\cite[\S14.1]{bG03} for arbitrary dimension.
For $d=2$, this relation reduces to a theorem from plane geometry stating that the angle-sum of any $n$-gon equals $(n-2)\pi$.
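For instance, one can verify~\eqref{934} directly for a triangle $T\subset\mathbb{R}^2$: in the normalization adopted here the full angle has measure $1$, so the internal angles at the three vertices sum to $\frac 12$, each of the three edges has $\beta(F,T)=\frac 12$, and $\beta(T,T)=1$, whence
$$
\sum_{F\in\mathcal{F}(T)}(-1)^{\dim F}\beta(F,T)=\frac 12-3\cdot \frac 12+1=0.
$$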
Perles and Shephard~\cite[\S2]{PS67} found an elegant derivation of~\eqref{934} from~\eqref{916}. To this end, they considered a random orthogonal projection $\Pi_{d-1}P$ of $P$ onto a random $(d-1)$-dimensional hyperplane whose normal vector is uniformly distributed on the unit sphere in $\mathbb{R}^d$. They observed~\cite[Eq.~(8)]{PS67} that for all $j\in\{0,\ldots, d-1\}$,
\begin{align}\label{eq:sum_beta}
\sum_{F\in\mathcal{F}_j(P)} \beta(F,P) = \frac12 f_j(P)- \frac12 \mathbb E\, f_j(\Pi_{d-1} P),
\end{align}
where $\mathcal{F}_j(P)$ denotes the set of all $j$-dimensional faces of a polytope $P$, and $f_j(P)= |\mathcal{F}_j(P)|$ is their number.
Multiplying~\eqref{eq:sum_beta} by $(-1)^j$, taking the sum over all dimensions $j\in \{0,\ldots,d-1\}$, and making use of the Euler relation~\eqref{916} for $P$ and $\Pi_{d-1}P$, Perles and Shephard derived~\eqref{934}.
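Explicitly, with $f_j$ counting the proper $j$-faces, the Euler relation~\eqref{916} applied to $P$ and to the $(d-1)$-dimensional polytope $\Pi_{d-1}P$ gives
$$
\sum_{j=0}^{d-1}(-1)^j f_j(P)=1-(-1)^d,
\qquad
\sum_{j=0}^{d-1}(-1)^j \mathbb E\, f_j(\Pi_{d-1} P)=1+(-1)^d,
$$
so that summing~\eqref{eq:sum_beta} with weights $(-1)^j$ and adding the term $(-1)^d\beta(P,P)=(-1)^d$ yields
$$
\sum_{F\in\mathcal{F}(P)}(-1)^{\dim F}\beta(F,P)
=\frac{\big(1-(-1)^d\big)-\big(1+(-1)^d\big)}{2}+(-1)^d=0.
$$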
Shortly after that, Gr\"unbaum~\cite{bG68} generalized this approach to the so-called Grassmann angles (to be defined in Section~\ref{559}) and proved various linear relations for these angles. Relating expected face numbers of the random projections to the angles of the polytope is a crucial step in the work of Affentranger and Schneider~\cite{AS92}. Later, similar ideas were also used in \cite{FK09,kabluchko_angles,beta_polytopes}.
\subsection{Outline of the paper}
Our goal is to apply the idea of Perles and Shephard to compute the expected sums of the angles for \emph{random} convex polytopes. We will consider two basic models: the \emph{Gaussian polytopes} (and, more generally, Gaussian projections of arbitrary polyhedral sets) and the \emph{convex hulls of random walks}. A detailed description of these models will be given in Section~\ref{2320} and Section~\ref{subsec:gauss_proj}. Necessary preliminaries are collected in Section~\ref{559}. The expected number of $j$-faces is known for both models, see~\cite{AS92} (combined with~\cite{BV94}) for the Gaussian polytopes and~\cite{KVZ17} for the convex hulls of random walks. Moreover, the two models share an important property: a random projection of the polytope has the same distribution as the corresponding model in lower dimension. This makes it possible to use the approach of Perles and Shephard to compute the expected angle sums for these models. Furthermore, as in~\cite{bG68}, we will also generalize these results to sums of Grassmann angles, which include both internal and external angles as special cases. The main results and their proofs are collected in Sections~\ref{1025} and~\ref{631}.
\section{Convex cones and Grassmann angles}\label{559}
In this section we collect some necessary definitions from stochastic and convex geometry. The reader may skip this section and return to it when necessary.
\subsection{Notation}
For a set $M\subset\mathbb{R}^d$ denote by $\mathop{\mathrm{lin}}\nolimits M$ (respectively, $\mathop{\mathrm{aff}}\nolimits M)$ its linear (respectively, affine) hull, that is, the minimal linear (respectively, affine) subspace containing $M$.
Equivalently, $\mathop{\mathrm{lin}}\nolimits M$ (respectively, $\mathop{\mathrm{aff}}\nolimits M)$ is the set of all linear (respectively, affine) combinations of elements of $M$. The interior of $M$ will be denoted by $\mathop{\mathrm{int}}\nolimits M$.
We write $\mathop{\mathrm{relint}}\nolimits M$ for the relative interior of $M$ which is the interior of $M$ taken with respect to its affine hull $\mathop{\mathrm{aff}}\nolimits M$. The dimension of a convex set $M$, denoted by $\dim M$, is the dimension of $\mathop{\mathrm{aff}}\nolimits M$.
For an arbitrary set $M\subset \mathbb{R}^d$ let $\mathop{\mathrm{pos}}\nolimits M$ denote its \emph{positive} (or \emph{conic}) \emph{hull}:
\[
\mathop{\mathrm{pos}}\nolimits M: =\Big\{\sum_{i=1}^m\lambda_i t_i:\, m\in \mathbb{N},\, t_1, \ldots,t_m\in M,\, \lambda_1,\ldots,\lambda_m\geq 0\Big\}.
\]
\subsection{Grassmann angles}
A set $C\subset\mathbb{R}^d$ is called a \emph{polyhedral cone} if it can be represented as a positive hull of finitely many vectors. Equivalently, a polyhedral cone is an intersection of finitely many half-spaces whose boundaries pass through the origin.
The \emph{solid angle} of a polyhedral cone $C\subset\mathbb{R}^d$ is defined as
\begin{equation}\label{933}
\alpha(C):=\P[Z\in C],
\end{equation}
where $Z$ is uniformly distributed on the centered unit sphere in the linear hull $\mathop{\mathrm{lin}}\nolimits C$. The maximal possible value of the solid angle in this normalization is $\alpha(C)=1$; it is attained if and only if $C$ is a linear subspace. If the dimension of $C$ is $d$ but $C\ne\mathbb{R}^d$, then $\P[Z\in C,-Z\in C]=0$ and, denoting by $W_1$ the random line passing through $Z$ and $-Z$, we obtain that~\eqref{933} is equivalent to
\begin{equation}\label{1304}
\alpha(C) = \frac12\P[W_1\cap C\ne\{0\}].
\end{equation}
This definition of the solid angle can be generalized as follows. Fix some $k\in \{0,\ldots,d\}$. Let $W_{d-k}$ be a random $(d-k)$-dimensional linear subspace having the uniform distribution on the Grassmann manifold of all such subspaces.
Following Gr\"unbaum~\cite{bG68}, define (with the index order reversed) the $k$-th \emph{Grassmann angle} of $C$ as the probability that $C$ is intersected non-trivially by the random, uniform $(d-k)$-plane $W_{d-k}$:
\begin{equation}\label{1138}
\gamma_k(C):=\P[W_{d-k}\cap C\ne\{0\}], \quad k\in \{0,\ldots,d\}.
\end{equation}
For example, taking $k=d-1$ and assuming that the dimension of $C$ is $d$, we have
\begin{align}\label{739}
\alpha(C)=\frac 1 2\gamma_{d-1}(C)+\frac 1 2\mathbbm{1}[C=\mathbb{R}^d].
\end{align}
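These definitions can be illustrated on the positive quadrant $C=[0,\infty)^2\subset\mathbb{R}^2$, for which $\alpha(C)=\frac 14$ and, by~\eqref{739}, $\gamma_1(C)=\frac 12$. A small Monte Carlo sketch in Python (variable names are ours) estimates both quantities by sampling directions on the unit circle:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000

# alpha(C) for C the positive quadrant in R^2: P[Z in C] for Z uniform on the unit circle
theta = rng.uniform(0.0, 2.0 * np.pi, N)
alpha = float(np.mean(theta <= np.pi / 2))

# gamma_1(C) = P[W_1 meets C non-trivially]: a uniform random line through 0
# with angle phi in [0, pi) hits the quadrant iff phi lies in [0, pi/2]
phi = rng.uniform(0.0, np.pi, N)
gamma_1 = float(np.mean(phi <= np.pi / 2))
```

Up to sampling error, the estimates agree with $\alpha(C)=\frac 14=\frac 12\gamma_1(C)$.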
It follows from~\eqref{1138} that for any convex cone $C\subset\mathbb{R}^d$ with $C\neq \{0\}$,
$$
1= \gamma_0(C) \geq \gamma_1(C) \geq \ldots \geq \gamma_{d}(C) = 0.
$$
The \textit{lineality space} of a polyhedral cone $C$, defined as $C\cap (-C)$, is the maximal linear subspace contained in $C$. If the lineality space of $C$ has dimension $j\in \{0,\ldots,d-1\}$ and $C$ is not a linear subspace, then~\eqref{1138} yields the stronger statement
\begin{equation}\label{eq:grassmann_angles_lineality}
1= \gamma_0(C) =\ldots = \gamma_{j}(C) \geq \gamma_{j+1}(C) \geq \ldots \geq \gamma_{d}(C) = 0.
\end{equation}
On the other hand, it follows directly from~\eqref{1138} that for a $j$-dimensional linear subspace $L_j\subset\mathbb{R}^d$ with $j\in \{0,\ldots,d\}$ we have
\begin{equation}\label{2256}
\gamma_k(L_j)=
\begin{cases}
1,\quad \text{ if } 0\leq k\leq j-1,\\
0,\quad \text{ if } j\leq k\leq d.
\end{cases}
\end{equation}
If $C$ is not a linear subspace, then the quantity $\frac 12 \gamma_k(C)$ is also known as the \textit{$k$-th conical quermassintegral} $U_k(C)$ of $C$, see~\cite[Eqs.~(1)--(4)]{HugSchneider2016}, or as the \emph{half-tail functional} $h_{k+1}(C)$ defined in~\cite{amelunxen_edge}.
It was shown in~\cite[Eq.~(2.5)]{bG68} that, as with the classical intrinsic volumes, the Grassmann angles do not depend on the dimension of the ambient space: if we embed $C$ in $\mathbb{R}^N$ with $N\geq d$, the result will be the same. In particular, it is convenient to define
\[
\gamma_N(C):=0\quad\text{for all} \quad N\geq\dim C.
\]
\subsection{Angles of polyhedral sets}
A \textit{polyhedral set} is an intersection of finitely many closed half-spaces (whose boundaries need not pass through the origin).
If a polyhedral set is bounded, it is a polytope. Polyhedral cones are also special cases of polyhedral sets.
Denote by $\mathcal{F}_j(P)$ the set of $j$-dimensional faces of a polyhedral set $P\subset \mathbb{R}^d$. The \textit{tangent cone} at a face $F\in \mathcal{F}_j(P)$ is defined by
\begin{equation}\label{eq:def_tangent_cone}
T_F(P) = \{v\in\mathbb{R}^d\colon f_0 +\varepsilon v \in P \text{ for some } \varepsilon>0\},
\end{equation}
where $f_0$ is any point in the relative interior of $F$.
The \textit{normal cone} at the face $F\in \mathcal{F}_j(P)$ is defined as the polar of the tangent cone, that is
\begin{equation}\label{eq:def_normal_cone}
N_F(P) = T_F^\circ (P) = \{w\in\mathbb{R}^d \colon \langle w, u\rangle\leq 0 \text{ for all } u\in T_F(P)\}.
\end{equation}
The \textit{internal angle} of $P$ at $F$ is defined as the solid angle of its tangent cone:
$$
\beta(F,P) := \alpha(T_F(P)).
$$
The \textit{external angle} of $P$ at $F$ is the solid angle of its normal cone
$$
\gamma(F,P) := \alpha (N_F(P)).
$$
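As an illustration, consider the unit square $Q=[0,1]^2$. At each vertex $F$, both the tangent cone and the normal cone are quadrants, so $\beta(F,Q)=\gamma(F,Q)=\frac 14$; at each edge, the tangent cone is a closed half-plane and the normal cone is a ray, so $\beta(F,Q)=\gamma(F,Q)=\frac 12$. In particular, the external angles at the four vertices sum to $4\cdot\frac 14=1$, an instance of the general fact that the normal cones at the vertices of a polytope tile the ambient space.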
\section{Two models of random polytopes}\label{2320}
\subsection{Gaussian polytopes}\label{2320a}
Let $X_1,\ldots, X_n$ be independent $d$-dimensional standard Gaussian random vectors. Their convex hull
\begin{align}\label{2224}
\mathcal{P}_{n,d}:=\mathop{\mathrm{conv}}\nolimits(X_1,\ldots,X_n)
\end{align}
is called the \emph{Gaussian polytope}. Most of the time, it will be convenient to impose the assumption $n\geq d+1$, which guarantees that $\mathcal{P}_{n,d}$ has full dimension $d$ a.s. Fix some $j\in\{0,\ldots,d-1\}$. An exact formula for the expected number of $j$-dimensional faces of $\mathcal{P}_{n,d}$ can be obtained by combining the results of~\citet{AS92} and~\citet{BV94}. To state this formula, we need to introduce some notation. Let $e_1,\ldots,e_n$ be the standard orthonormal basis in $\mathbb{R}^n$. The internal, respectively external, angle sums of the regular $n$-vertex simplex $\Delta_n:=\mathop{\mathrm{conv}}\nolimits(e_1,\ldots,e_n)$ at its $k$-vertex faces are denoted by
$$
\sigma\stirlingsec{n}{k},
\;\;\;
\text{respectively,}
\;\;\;
\sigma\stirling{n}{k}.
$$
This notation is intentionally chosen to resemble the standard notation for Stirling numbers~\cite[\S6.1]{Graham1994}; the analogy between these notions will be discussed below. Since the number of $k$-vertex faces of $\Delta_n$ equals $\binom nk$ and since the angles at all such faces are equal, we can choose one $k$-vertex face, say $\Delta_k:=\mathop{\mathrm{conv}}\nolimits(e_1,\ldots,e_k)$, and define
$$
\sigma\stirlingsec{n}{k}:= \binom nk \cdot \alpha (T_{\Delta_k} (\Delta_n)),
\qquad
\sigma\stirling{n}{k}:= \binom nk \cdot \alpha (N_{\Delta_k} (\Delta_n)),
$$
for all $n\in \mathbb{N}$ and all $k\in \{1,\ldots,n\}$. Here, $T_{\Delta_k} (\Delta_n)$,
respectively $N_{\Delta_k} (\Delta_n)$, denotes the tangent (respectively, normal) cone of $\Delta_n$ at $\Delta_k$, while $\alpha(C)$ is the solid angle of a cone $C$; see Section~\ref{559}. It is convenient to extend the above definition by putting
$$
\sigma\stirlingsec{n}{k}
:=
\sigma\stirling{n}{k}
:=
0,
$$
for all $n\in \mathbb{N}$ and all $k\notin \{1,\ldots,n\}$. With this notation, the formula of~\citet{AS92} (taking into account also the observation of~\citet{BV94}) takes the form
\begin{equation}\label{eq:E_f_k_P_n_d}
\mathbb E\, f_j(\mathcal{P}_{n,d})
=
2\sum_{l=0}^{\infty} \sigma\stirling{n}{d-2l} \sigma\stirlingsec{d-2l}{j+1},
\end{equation}
for all $j\in \{0,\ldots,d-1\}$.
In fact, \citet{AS92} proved the same formula for the expected number of $j$-dimensional faces of the projection of the simplex $\mathop{\mathrm{conv}}\nolimits(e_1,\ldots,e_n)$ onto a uniform, random $d$-dimensional subspace in $\mathbb{R}^n$. Then, \citet{BV94} argued that this expected number of faces is the same as for the Gaussian polytope.
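This equality of expected face numbers can be probed by simulation. The following Python sketch (our variable names; modest sample sizes) compares the mean vertex count of the Gaussian polytope $\mathcal{P}_{5,2}$ with that of the projection of $\mathop{\mathrm{conv}}\nolimits(e_1,\ldots,e_5)$ onto a uniform random plane, obtained by orthonormalizing a Gaussian matrix:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
n, d, trials = 5, 2, 10_000

def hull_vertex_count(points):
    return len(ConvexHull(points).vertices)

# Gaussian polytope P_{5,2}: convex hull of 5 i.i.d. standard Gaussian points in R^2
gauss = float(np.mean([hull_vertex_count(rng.standard_normal((n, d)))
                       for _ in range(trials)]))

# projection of the regular simplex conv(e_1,...,e_5) onto a uniform random 2-plane:
# QR of a Gaussian n x d matrix gives an orthonormal basis of a Haar-distributed
# d-subspace; row i of q holds the coordinates of the projection of e_i
proj = []
for _ in range(trials):
    q, _ = np.linalg.qr(rng.standard_normal((n, d)))
    proj.append(hull_vertex_count(q))
proj = float(np.mean(proj))
```

Both sample means estimate $\mathbb E\, f_0(\mathcal{P}_{5,2})$ and agree up to Monte Carlo error.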
Explicit formulas for $\sigma\stirlingsec{n}{k}$ and $\sigma\stirling{n}{k}$ are available; see~\cite{KZ17a} for a review of this topic. For example, it is known that
\begin{align}
\sigma\stirlingsec{n}{k}
&=
\binom nk \cdot \frac 1 {\sqrt {2\pi}} \int_{0}^{\infty} \left(\Phi^{n-k} \left( \frac{{\rm{i}} x}{\sqrt n}\right) + \Phi^{n-k} \left(- \frac{{\rm{i}} x}{\sqrt n}\right)\right) {\rm e}^{-x^2/2} {\rm d} x
,\label{eq:regular_simpl_internal}\\
\sigma\stirling{n}{k}
&=
\binom nk \cdot \frac 1 {\sqrt {2\pi}} \int_{0}^{\infty} \left(\Phi^{n-k} \left(\frac{x}{\sqrt k}\right) + \Phi^{n-k} \left(- \frac{x}{\sqrt k}\right)\right) {\rm e}^{-x^2/2} {\rm d} x,
\label{eq:regular_simpl_external}
\end{align}
where ${\rm{i}} = \sqrt {-1}$, and $\Phi$ denotes the distribution function of the standard normal law.
It is known that $\Phi$ admits an analytic continuation to the entire complex plane, namely
$$
\Phi(z) = \frac 12 + \frac 1 {\sqrt{2\pi}} \sum_{n =0}^{\infty} \frac{(-1)^n }{(2n+1) 2^n n!} z^{2n+1}, \qquad z\in \mathbb{C}.
$$
In the above formulas for the angle sums, we need the values of $\Phi$ on the real and imaginary axes only, namely
\begin{equation}\label{eq:Phi_def}
\Phi(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^z e^{-t^2/2} {\rm d} t,
\quad
\Phi( {\rm{i}} z) = \frac 12 + \frac {{\rm{i}}}{\sqrt{2\pi}} \int_0^{z} e^{t^2/2} {\rm d} t,
\quad z\in\mathbb{R}.
\end{equation}
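Formulas~\eqref{eq:regular_simpl_internal} and~\eqref{eq:regular_simpl_external} can be checked numerically against elementary cases: for the segment $\Delta_2$ the internal angle sum at the vertices is $2\cdot\frac 12=1$, while for the equilateral triangle $\Delta_3$ the internal angles at the vertices sum to $\frac 12$, and the external angle sums at the vertices and edges equal $1$ and $\frac 32$. A SciPy sketch (function names are ours; it uses $\Phi({\rm{i}} x/\sqrt n)=\frac 12+\frac{{\rm{i}}}{2}\,\mathrm{erfi}(x/\sqrt{2n})$ and truncates the integrals at $x=40$, beyond which the integrands are negligible):

```python
from math import comb
import numpy as np
from scipy.integrate import quad
from scipy.special import erfi, ndtr  # ndtr(x) = Phi(x), the standard normal CDF

SQRT_2PI = np.sqrt(2.0 * np.pi)

def sigma_internal(n, k):
    # internal angle sum of Delta_n at its k-vertex faces; the integrand
    # Phi^{n-k}(ix/sqrt(n)) + Phi^{n-k}(-ix/sqrt(n)) = 2 Re z^{n-k} is real
    def f(x):
        z = 0.5 + 0.5j * erfi(x / np.sqrt(2.0 * n))
        return 2.0 * (z ** (n - k)).real * np.exp(-0.5 * x * x)
    val, _ = quad(f, 0.0, 40.0)
    return comb(n, k) * val / SQRT_2PI

def sigma_external(n, k):
    # external angle sum of Delta_n at its k-vertex faces
    def f(x):
        p = ndtr(x / np.sqrt(k))
        return (p ** (n - k) + (1.0 - p) ** (n - k)) * np.exp(-0.5 * x * x)
    val, _ = quad(f, 0.0, 40.0)
    return comb(n, k) * val / SQRT_2PI
```

For instance, `sigma_external(3, 2)` reduces to $3\cdot\frac 12$ because the bracket in the integrand is identically $1$ for $n-k=1$.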
\subsection{Convex hulls of random walks}\label{2320b}
Let $\xi_1,\ldots,\xi_n$ be (possibly dependent) random $d$-dimensional vectors with partial sums
$$
S_i = \xi_1 + \ldots + \xi_i,\quad 1\leq i\leq n,\quad S_0=0.
$$
The sequence $S_0,S_1,\ldots,S_n$ will be referred to as a \emph{random walk}.
Consider its \emph{convex hull}
\begin{align}\label{2225}
\mathcal{Q}_{n,d} &:= \mathop{\mathrm{conv}}\nolimits(S_0,S_1,\ldots, S_n).
\end{align}
We impose the following assumptions on the joint distribution of the increments.
\begin{enumerate}
\item[$(\text{Ex})$] \textit{Exchangeability:} For every permutation $\sigma$ of the set $\{1,\ldots,n\}$, we have the distributional equality
$$
(\xi_{\sigma(1)},\ldots, \xi_{\sigma(n)}) \stackrel{d}{=} (\xi_1,\ldots,\xi_n).
$$
\item[$(\text{GP})$] \textit{General position:}
For every $1\leq i_1 < \ldots < i_d\leq n$, the probability that the vectors $S_{i_1}, \ldots,S_{i_d}$ are linearly dependent is $0$.
\end{enumerate}
Under these assumptions, it was shown in~\cite{KVZ17} that for all $j\in\{0,\ldots,d-1\}$,
\begin{align}\label{800}
\mathbb E\, f_j(\mathcal{Q}_{n,d})= \frac{2\cdot j!}{n!} \sum_{l=0}^{\infty}\stirling{n+1}{d-2l} \stirlingsec{d-2l}{j+1}.
\end{align}
The right-hand side contains the (signless) \emph{Stirling numbers of the first kind} $\stirling{n}{m}$ and the \emph{Stirling numbers of the second kind} $\stirlingsec{n}{m}$, which are defined as the number of permutations of an $n$-element set with exactly $m$ cycles and the number of partitions of an $n$-element set into $m$ non-empty subsets, respectively, for $n\in\mathbb{N}$ and $m \in \{1,\ldots,n\}$. For $n\in\mathbb{N}$ and $m\notin \{1,\ldots,n\}$ one defines the Stirling numbers to be $0$, so that~\eqref{800} and all similar formulas contain a finite number of non-vanishing terms only. For the basic properties of the Stirling numbers, we refer to~\cite[\S6.1]{Graham1994}. The exponential generating functions of the Stirling numbers are given by
\begin{equation}\label{eq:stirling_def}
\sum_{n=m}^{\infty} \stirling{n}{m}\frac{t^n}{n!} = \frac 1 {m!} \left(\log \frac 1 {1-t}\right)^m,
\quad
\sum_{n=m}^{\infty} \stirlingsec{n}{m}\frac{t^n}{n!} = \frac 1 {m!} (e^{t}-1)^m.
\end{equation}
With the convention $\stirling{0}{0} = \stirlingsec{0}{0} =1$, the two-variable generating functions are given by
\begin{align}\label{eq:stirling_def_2_variables}
\sum_{m=0}^\infty\sum_{n=m}^{\infty} \stirling{n}{m}\frac{t^n}{n!}y^m=(1-t)^{-y}, \quad\sum_{m=0}^\infty\sum_{n=m}^{\infty} \stirlingsec{n}{m}\frac{t^n}{n!}y^m=e^{(e^t-1)y}.
\end{align}
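For concreteness, the Stirling numbers entering~\eqref{800} can be generated by the standard recurrences $\stirling{n+1}{m}=n\stirling{n}{m}+\stirling{n}{m-1}$ and $\stirlingsec{n+1}{m}=m\stirlingsec{n}{m}+\stirlingsec{n}{m-1}$~\cite[\S6.1]{Graham1994}. A short Python sketch (function names are ours), which also exhibits the identity $\stirling{n+1}{2}=n!\,H_n$ used in the low-dimensional examples below:

```python
from fractions import Fraction
from math import factorial

def stirling_tables(N):
    # c[n][m]: signless Stirling numbers of the first kind (n-permutations with m cycles),
    # s2[n][m]: Stirling numbers of the second kind (partitions of n into m blocks)
    c = [[0] * (N + 1) for _ in range(N + 1)]
    s2 = [[0] * (N + 1) for _ in range(N + 1)]
    c[0][0] = s2[0][0] = 1
    for n in range(N):
        for m in range(1, n + 2):
            c[n + 1][m] = n * c[n][m] + c[n][m - 1]
            s2[n + 1][m] = m * s2[n][m] + s2[n][m - 1]
    return c, s2

c, s2 = stirling_tables(12)
```

With exact rational arithmetic one checks, e.g., that `c[n + 1][2] / n!` equals the harmonic number $H_n$.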
\section{Main results}\label{1025}
\subsection{Expected sums of Grassmann angles}
Our main results are the following two theorems in which we compute the expected sums of the Grassmann angles at the faces of any fixed dimension for each of the random polytopes $\mathcal{P}_{n,d}$ and $\mathcal{Q}_{n,d}$ defined in Section~\ref{2320}.
\begin{theorem}\label{2219}
Fix some $d\in\mathbb{N}$ and $n\geq d+1$. Then, for every
$j\in \{0,\ldots,d-1\}$ and $k\in \{0,\ldots,d\}$
the expected sum of the $k$-th Grassmann angles at the $j$-dimensional faces of $\mathcal{P}_{n,d}$ equals
\begin{equation}\label{eq:theo_sum_grassmann_P}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{P}_{n,d})}\gamma_k(T_F(\mathcal{P}_{n,d}))
=
2\sum_{l=0}^{\infty} \sigma\stirling{n}{d-2l} \sigma\stirlingsec{d-2l}{j+1}
-
2\sum_{l=0}^{\infty} \sigma \stirling{n}{k-2l} \sigma\stirlingsec{k-2l}{j+1}.
\end{equation}
Here, the notation for the internal and external angle sums of the regular simplex introduced in Section~\ref{2320a} has been used.
\end{theorem}
In the special case when $k=d-1$, the above theorem combined with~\eqref{739} yields the following formula for the expected internal solid-angle sums at the $j$-dimensional faces of $\mathcal{P}_{n,d}$.
\begin{corollary}\label{cor:angle_sum_P}
Fix some $d\in\mathbb{N}$ and $n\geq d+1$. For every $j\in\{0,\ldots,d-1\}$ the expected sum of internal angles of $\mathcal{P}_{n,d}$ at its $j$-dimensional faces is given by
\begin{equation*}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{P}_{n,d})} \alpha(T_F(\mathcal{P}_{n,d}))
=
\sum_{s=0}^{\infty} (-1)^s \sigma\stirling{n}{d-s} \sigma\stirlingsec{d-s}{j+1}.
\end{equation*}
\end{corollary}
\vspace*{2mm}
Next we are going to state analogous results for convex hulls of random walks.
\begin{theorem}\label{1138thm}
Fix some $d\in\mathbb{N}$ and $n\geq d$. Then, for every
$j\in \{0,\ldots,d-1\}$ and $k\in \{0,\ldots,d\}$
the expected sum of the $k$-th Grassmann angles at the $j$-dimensional faces of $\mathcal{Q}_{n,d}$ equals
\begin{equation}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{Q}_{n,d})}\gamma_k(T_F(\mathcal{Q}_{n,d}))
= \frac{2\cdot j!}{n!} \sum_{l=0}^{\infty}\stirling{n+1}{d-2l} \stirlingsec{d-2l}{j+1}
- \frac{2\cdot j!}{n!} \sum_{l=0}^{\infty}\stirling{n+1}{k-2l} \stirlingsec{k-2l}{j+1}.
\end{equation}
Here, the notation for the Stirling numbers introduced in Section~\ref{2320b} has been used.
\end{theorem}
Let us mention some special and low-dimensional cases of Theorem~\ref{1138thm}. Taking $k=d-1$ in Theorem~\ref{1138thm} and making use of~\eqref{739}, we compute the expected internal solid-angle sums at the $j$-dimensional faces of $\mathcal{Q}_{n,d}$.
\begin{corollary}\label{cor:angle_sum_Q}
Fix some $d\in\mathbb{N}$ and $n\geq d$. Then, for every $j\in\{0,\ldots,d-1\}$ the expected sum of internal angles of $\mathcal{Q}_{n,d}$ at its $j$-dimensional faces is given by
\begin{equation*}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{Q}_{n,d})} \alpha(T_F(\mathcal{Q}_{n,d}))
= \frac{j!}{n!} \sum_{s=0}^{\infty} (-1)^s \stirling{n+1}{d-s} \stirlingsec{d-s}{j+1}.
\end{equation*}
\end{corollary}
For example, for $d=2$, the expected sum of angles of the random polygon $\mathcal{Q}_{n,2}$ at its vertices is given by
$$
\mathbb E\, \sum_{F\in\mathcal{F}_0(\mathcal{Q}_{n,2})} \alpha(T_F(\mathcal{Q}_{n,2})) = \frac 1 {n!}\left(\stirling{n+1}{2}\stirlingsec{2}{1} - \stirling{n+1}{1}\stirlingsec{1}{1} \right)
=
H_n-1,
$$
where
$$
H_n = 1 + \frac 12 + \frac 13 +\ldots + \frac 1n
$$
is the $n$-th harmonic number. Since the angle sum of a polygon with $v$ vertices equals $(v-2)/2$ times the full solid angle $2\pi$, this agrees with the result of Baxter~\cite{baxter} (see also~\cite{baxter_nielsen} and~\cite[Lemma 4.1]{snyder_steele} for generalizations), who proved that the expected number of vertices of $\mathcal{Q}_{n,2}$ is
$$
\mathbb E\, f_{0}(\mathcal{Q}_{n,2}) = 2 H_n.
$$
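Baxter's formula, and thereby the $d=2$ case of Corollary~\ref{cor:angle_sum_Q}, is easy to test by simulation. A Python sketch (our variable names; i.i.d.\ Gaussian increments satisfy $(\text{Ex})$ and $(\text{GP})$):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
n, trials = 10, 4000

counts = []
for _ in range(trials):
    steps = rng.standard_normal((n, 2))                             # i.i.d. increments
    path = np.vstack([np.zeros((1, 2)), np.cumsum(steps, axis=0)])  # S_0, S_1, ..., S_n
    counts.append(len(ConvexHull(path).vertices))

H_n = sum(1.0 / i for i in range(1, n + 1))
mean_f0 = float(np.mean(counts))   # should be close to 2 * H_n
```

For $n=10$ the sample mean is close to $2H_{10}\approx 5.86$.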
In dimension $d=3$, the expected sum of internal angles of $\mathcal{Q}_{n,3}$ at its vertices and a similar sum for edges are given by
\begin{align*}
\mathbb E\, \sum_{F\in\mathcal{F}_0(\mathcal{Q}_{n,3})} \alpha(T_F(\mathcal{Q}_{n,3}))
&=
\frac 1 {n!}\left(\stirling{n+1}{3}\stirlingsec{3}{1} - \stirling{n+1}{2}\stirlingsec{2}{1} + \stirling{n+1}{1}\stirlingsec{1}{1} \right)\\
&=
\frac 12 (H_n)^2 - H_n -\frac 12 H_n^{(2)} + 1,\\
\mathbb E\, \sum_{F\in\mathcal{F}_1(\mathcal{Q}_{n,3})} \alpha(T_F(\mathcal{Q}_{n,3}))
&=
\frac 1 {n!}\left(\stirling{n+1}{3}\stirlingsec{3}{2} - \stirling{n+1}{2}\stirlingsec{2}{2} \right)\\
&=
\frac 32 (H_n)^2 - H_n -\frac 32 H_n^{(2)},
\end{align*}
where
$$
H_n^{(2)} = 1 + \frac 1{2^2} + \frac 1{3^2} +\ldots + \frac 1{n^2}.
$$
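The two displayed identities reduce, via $\stirlingsec{3}{1}=\stirlingsec{2}{1}=\stirlingsec{1}{1}=\stirlingsec{2}{2}=1$ and $\stirlingsec{3}{2}=3$, to identities for the numbers $\stirling{n+1}{m}$, which can be confirmed in exact rational arithmetic (a Python sketch; function names are ours):

```python
from fractions import Fraction
from math import factorial

def signless_stirling1(N):
    # signless Stirling numbers of the first kind via c(n+1,m) = n*c(n,m) + c(n,m-1)
    c = [[0] * (N + 1) for _ in range(N + 1)]
    c[0][0] = 1
    for n in range(N):
        for m in range(1, n + 2):
            c[n + 1][m] = n * c[n][m] + c[n][m - 1]
    return c

def vertex_angle_sum(n, c):
    # (1/n!) * ( [n+1,3] S(3,1) - [n+1,2] S(2,1) + [n+1,1] S(1,1) )
    return Fraction(c[n + 1][3] - c[n + 1][2] + c[n + 1][1], factorial(n))

def edge_angle_sum(n, c):
    # (1/n!) * ( [n+1,3] S(3,2) - [n+1,2] S(2,2) )
    return Fraction(3 * c[n + 1][3] - c[n + 1][2], factorial(n))

def harmonic(n, power=1):
    return sum(Fraction(1, i ** power) for i in range(1, n + 1))
```

Comparing with $\frac 12 H_n^2 - H_n - \frac 12 H_n^{(2)} + 1$ and $\frac 32 H_n^2 - H_n - \frac 32 H_n^{(2)}$ gives exact equality for each $n$.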
\begin{remark}
Using relations stated in Lemma~\ref{lem:identity} and in Remark~\ref{rem:stirling_relations}, below, one can rewrite Theorems~\ref{2219} and~\ref{1138thm} as follows:
\begin{align}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{P}_{n,d})}\gamma_k(T_F(\mathcal{P}_{n,d}))
&=
2\sum_{l=1}^{\infty} \sigma \stirling{n}{k+2l} \sigma\stirlingsec{k+2l}{j+1}
-
2\sum_{l=1}^{\infty} \sigma\stirling{n}{d+2l} \sigma\stirlingsec{d+2l}{j+1},\label{eq:alternative_P}\\
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{Q}_{n,d})}\gamma_k(T_F(\mathcal{Q}_{n,d}))
&=
\frac{2\cdot j!}{n!} \sum_{l=1}^{\infty}\stirling{n+1}{k+2l} \stirlingsec{k+2l}{j+1}
-\frac{2\cdot j!}{n!} \sum_{l=1}^{\infty}\stirling{n+1}{d+2l} \stirlingsec{d+2l}{j+1}. \label{eq:alternative_Q}
\end{align}
\end{remark}
\begin{remark}
Let us also mention one more result on angle sums of random polytopes. For typical cells in stationary tessellations, it is possible to compute the expected angle-sums explicitly in terms of the cell intensities; see Theorem~10.1.3 and Equation~(10.4) in~\cite{SW08}.
\end{remark}
\subsection{Method of proof of Theorems~\ref{2219} and~\ref{1138thm}}
The main ingredient in the proofs of Theorems~\ref{2219} and~\ref{1138thm} is the following stochastic representation of the Grassmann angles of a polyhedral set. We recall that $W_k$ denotes a uniformly distributed random linear subspace of dimension $k$ in $\mathbb{R}^d$ and that $\Pi_k$ denotes the orthogonal projection onto $W_k$.
The next theorem was stated by Gr\"unbaum~\cite[p.~298]{bG68} with the comment that it is a simple application of the separation theorem for convex sets. Since its proof does not seem trivial to us and since the result has been used many times since then (most notably, by Affentranger and Schneider~\cite{AS92}, see also~\cite[Section~8.3]{SW08}), we shall provide a proof in Sections~\ref{631} and~\ref{2157}.
\begin{theorem}\label{820}
Let $P\subset \mathbb{R}^d$ be a polyhedral set with non-empty interior. Then, for all integer $0\leq j < k \leq d$ and all $F\in\mathcal{F}_j(P)$ we have
\begin{align}\label{eq:gamma_k_proof}
\gamma_k(T_F(P)) = \P[\Pi_kF \not \in \mathcal{F}(\Pi_k P)] = \P[\Pi_kF \not \in \mathcal{F}_j(\Pi_k P)].
\end{align}
\end{theorem}
Taking the sum over all faces $F\in \mathcal{F}_j(P)$ and noting that for almost every choice of $W_k$, every $j$-face of $\Pi_kP$ is the projection of a unique $j$-face of $P$ (which will be shown in Proposition~\ref{1640}), one arrives at the following result.
\begin{theorem}\label{623}
Let $P\subset \mathbb{R}^d$ be a polyhedral set with non-empty interior. Then for all integer $0\leq j < k \leq d$ we have
\begin{align*}
\sum_{F\in\mathcal{F}_j(P)}\gamma_k(T_F(P)) =f_j(P)-\mathbb E\, f_j(\Pi_k P).
\end{align*}
\end{theorem}
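Theorem~\ref{623} can be illustrated on a deterministic example. For the cube $P=[0,1]^3$ with $j=0$ and $k=2$, the tangent cone at each of the $8$ vertices is congruent to an orthant, so $\gamma_2(T_F(P))=2\alpha(T_F(P))=\frac 14$ by~\eqref{1304}, and Theorem~\ref{623} predicts $\mathbb E\, f_0(\Pi_2 P)=8-8\cdot\frac 14=6$: the shadow of the cube on a uniform random plane is a.s.\ a hexagon. A Python sketch (our variable names):

```python
import itertools
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)
cube = np.array(list(itertools.product([0.0, 1.0], repeat=3)))  # the 8 vertices of [0,1]^3

counts = set()
for _ in range(500):
    # uniform random 2-plane W_2 in R^3: orthonormalize a 3 x 2 Gaussian matrix
    q, _ = np.linalg.qr(rng.standard_normal((3, 2)))
    counts.add(len(ConvexHull(cube @ q).vertices))  # vertices of the shadow of the cube
```

Every trial yields exactly $6$ shadow vertices, as predicted.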
\vskip 10pt
The proofs of Theorems~\ref{820} and~\ref{623} are postponed to Section~\ref{2157}. In Section~\ref{631} we will collect some properties of convex cones which are essential for these proofs. At this point, we provide the proofs of Theorems~\ref{2219} and~\ref{1138thm} assuming Theorem~\ref{623}.
\begin{proof}[Proof of Theorem~\ref{2219} assuming Theorem~\ref{623}]
First of all, let us establish the statement for all $j\in \{0,\ldots,d-1\}$ and $k\in \{0,\ldots,d\}$ such that $k\leq j$.
Since the lineality space of $T_F(\mathcal{P}_{n,d})$ has dimension $j$ for every $F\in \mathcal{F}_j(\mathcal{P}_{n,d})$, we have $\gamma_k(T_F(\mathcal{P}_{n,d})) = 1$ by~\eqref{eq:grassmann_angles_lineality}, and hence
$$
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{P}_{n,d})}\gamma_k(T_F(\mathcal{P}_{n,d}))
=
\mathbb E\, f_j(\mathcal{P}_{n,d})
=
2\sum_{l=0}^{\infty} \sigma\stirling{n}{d-2l} \sigma\stirlingsec{d-2l}{j+1},
$$
where in the last step we used~\eqref{eq:E_f_k_P_n_d}. This proves~\eqref{eq:theo_sum_grassmann_P} because the second term on the right-hand side there vanishes.
In the following, let $0\leq j < k \leq d$. Projecting $X_1,\ldots,X_n$ onto the random uniform $k$-plane $W_k$ gives $n$ independent standard Gaussian vectors in $W_k$ which can be identified with $\mathbb{R}^k$. Applying~\eqref{eq:E_f_k_P_n_d} to $\Pi_k\mathcal{P}_{n,d}$ (which is the convex hull of $\Pi_kX_1,\ldots, \Pi_k X_n$) leads to
\begin{align*}
\mathbb E\, f_j(\Pi_k\mathcal{P}_{n,d}) =
2\sum_{l=0}^{\infty} \sigma \stirling{n}{k-2l} \sigma\stirlingsec{k-2l}{j+1}.
\end{align*}
On the other hand, for the original Gaussian polytope $\mathcal{P}_{n,d}$ \eqref{eq:E_f_k_P_n_d} states that
$$
\mathbb E\, f_j(\mathcal{P}_{n,d})
=
2\sum_{l=0}^{\infty} \sigma\stirling{n}{d-2l} \sigma\stirlingsec{d-2l}{j+1}.
$$
Combining these two equations with Theorem~\ref{623} completes the proof.
\end{proof}
\begin{remark}
An alternative way to prove Theorem~\ref{2219} is to apply Corollary~3.6 in~\cite{goetze_kabluchko_zaporozhets} to the tangent cones of the polytope $\mathcal{P}_{n,d}$ which can be viewed as a Gaussian projection of the regular simplex. The Grassmann angles of the regular simplex appearing in that corollary can be computed using~\eqref{eq:crofton_conic} and~\eqref{eq:eq:upsilon_reg_simplex}, below. Note also that the case when $n\leq d$ omitted in Theorem~\ref{2219} (meaning that $\mathcal{P}_{n,d}$ is a simplex of dimension $n-1$ in $\mathbb{R}^d$), was treated in Theorem~4.1 of~\cite{goetze_kabluchko_zaporozhets}. Translated into the notation of the present paper, this result shows that~\eqref{eq:alternative_P} (but not Theorem~\ref{2219}) continues to hold under the assumptions $d\in\mathbb{N}$, $n\in \{2,\ldots, d\}$, $j,k\in \{0,\ldots,n-2\}$. Let us also mention that Theorem~\ref{2219} is related to Theorems~1.12 and~1.13 of~\cite{beta_polytopes}, where the expected conic intrinsic volumes of the tangent cones of the so-called beta polytopes have been computed. The Gaussian polytopes considered here can be viewed as the limiting case $\beta\to+\infty$ of the beta polytopes.
\end{remark}
\begin{remark}
All results on the polytope $\mathcal{P}_{n,d}$ remain true if it is replaced by the random polytope $\mathcal{P}_{n,d}'$ defined as a random projection of the regular simplex $\mathop{\mathrm{conv}}\nolimits(e_1,\ldots,e_n)$ onto a random uniform $d$-dimensional subspace in $\mathbb{R}^n$. Indeed, \eqref{eq:E_f_k_P_n_d} remains true for $\mathcal{P}'_{n,d}$ by the original result of~\cite{AS92}, and a projection of $\mathcal{P}_{n,d}'$ onto a random uniform subspace of dimension $k < d$ has the same distribution as $\mathcal{P}_{n,k}'$, so that the above proof applies.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{1138thm} assuming Theorem~\ref{623}]
In the case $k\leq j$ the statement can be proven in the same way as in the proof of Theorem~\ref{2219}, but this time we have to appeal to~\eqref{800}. In the following, let $0\leq j < k \leq d$.
Projecting the path $S_0,\ldots,S_n$ onto the random $k$-plane $W_k$ gives a random walk in $W_k$. We can identify $W_k$ with $\mathbb{R}^k$. The increments of the projected random walk are given by
\begin{align*}
\xi'_1 := \Pi_k \xi_{1}, \;\;\; \ldots,\;\;\; \xi'_{n} := \Pi_k \xi_n.
\end{align*}
It is straightforward to check that the projected random walk satisfies conditions $(\text{Ex})$ and $(\text{GP})$ as well. In particular, for $(\text{GP})$ note that any $k$ vectors among $S_1,\ldots,S_n$ are a.s.\ linearly independent (since $k\leq d$ and $(\text{GP})$ holds for the original random walk), hence their projections onto an independent $k$-plane $W_k$ are also linearly independent a.s. Therefore applying~\eqref{800} leads to
\begin{align*}
\mathbb E\, f_j(\Pi_k\mathcal{Q}_{n,d}) = \frac{2\cdot j!}{n!} \sum_{l=0}^{\infty}\stirling{n+1}{k-2l} \stirlingsec{k-2l}{j+1}.
\end{align*}
On the other hand, \eqref{800} applied to the original random walk states that
$$
\mathbb E\, f_j(\mathcal{Q}_{n,d})= \frac{2\cdot j!}{n!} \sum_{l=0}^{\infty}\stirling{n+1}{d-2l} \stirlingsec{d-2l}{j+1}.
$$
Combining these two equations with Theorem~\ref{623} completes the proof.
\end{proof}
\subsection{Expected sums of conic intrinsic volumes}
From the above Theorems~\ref{2219} and~\ref{1138thm} we can deduce formulas for the expected sums of conic intrinsic volumes of the tangent cones of the random polytopes $\mathcal{P}_{n,d}$ and $\mathcal{Q}_{n,d}$. Given a polyhedral cone $C\subset \mathbb{R}^d$, its $k$-th \emph{conic intrinsic volume} $\upsilon_k(C)$ is defined as
\begin{equation}\label{eq:upsilon_def}
\upsilon_k(C)
=
\sum_{F\in\mathcal{F}_k(C)}\alpha(F)\alpha(N_F(C)),
\end{equation}
for $k\in \{0,\ldots,d\}$.
There are other equivalent definitions using, for example, the conic Steiner formula or Euclidean projections; see~\cite[Section~6.5]{SW08}, \cite{glasauer_phd}, \cite{amelunxen_comb}, \cite{amelunxen_edge}, \cite[Section 2]{HugSchneider2016}.
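For example, let $C\subset\mathbb{R}^2$ be the positive quadrant, with the convention $\alpha(\{0\})=1$. Its faces are $\{0\}$, the two bounding rays, and $C$ itself, so that
$$
\upsilon_0(C)=\alpha(\{0\})\,\alpha(-C)=\frac 14,
\qquad
\upsilon_1(C)=2\cdot\frac 12\cdot\frac 12=\frac 12,
\qquad
\upsilon_2(C)=\alpha(C)\,\alpha(\{0\})=\frac 14,
$$
and the conic intrinsic volumes sum to $1$, as they do for every closed convex cone.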
\begin{theorem}\label{2219v}
Fix some $d\in\mathbb{N}$ and $n\geq d+1$. Then, for all $j\in\{0,\ldots,d-1\}$ and $k\in \{j,\ldots,d-1\}$ we have
\begin{align*}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{P}_{n,d})}\upsilon_k(T_F(\mathcal{P}_{n,d})) = \sigma \stirling{n}{k+1} \sigma\stirlingsec{k+1}{j+1}.
\end{align*}
In the remaining case when $j\in\{0,\ldots,d-1\}$ and $k = d$ we have
\begin{align*}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{P}_{n,d})}\upsilon_d(T_F(\mathcal{P}_{n,d}))
&=
\sum_{s=0}^\infty (-1)^{s} \sigma\stirling{n}{d-s} \sigma \stirlingsec{d-s}{j+1}
=
\sum_{s=1}^\infty (-1)^{s+1} \sigma\stirling{n}{d+s} \sigma \stirlingsec{d+s}{j+1}
.
\end{align*}
\end{theorem}
\begin{theorem}\label{1138v}
Fix some $d\in\mathbb{N}$ and $n\geq d$. Then, for all $j\in\{0,\ldots,d-1\}$ and $k\in \{j,\ldots,d-1\}$ we have
\begin{equation*}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{Q}_{n,d})}\upsilon_k(T_F(\mathcal{Q}_{n,d}))= \frac{j!}{n!}\stirling{n+1}{k+1}\stirlingsec{k+1}{j+1}.
\end{equation*}
In the remaining case when $j\in\{0,\ldots,d-1\}$ and $k = d$ we have
\begin{align*}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{Q}_{n,d})}\upsilon_d(T_F(\mathcal{Q}_{n,d}))
&=
\frac{j!}{n!}\sum_{s=0}^{\infty} (-1)^s\stirling{n+1}{d-s}\stirlingsec{d-s}{j+1}
=
\frac{j!}{n!} \sum_{s=1}^{\infty} (-1)^{s+1} \stirling{n+1}{d+s}\stirlingsec{d+s}{j+1}
.
\end{align*}
\end{theorem}
Note that in both theorems, the case $k=d$ yields a formula for the expected sum of internal angles of $\mathcal{P}_{n,d}$ and $\mathcal{Q}_{n,d}$ already (partially) obtained in Corollaries~\ref{cor:angle_sum_P} and~\ref{cor:angle_sum_Q}. At the other extreme, taking $k=j$ and noting that $\upsilon_{j}(T_F(P)) = \alpha(N_F(P))$ for all $F\in \mathcal{F}_j(P)$, because the only face of dimension $j$ in $T_F(P)$ is its lineality space (which is a shift of $\mathop{\mathrm{aff}}\nolimits F$), we obtain the following expressions for the sums of the external angles.
\begin{corollary}\label{cor_ext_angle_sum_P}
Fix some $d\in\mathbb{N}$ and $n\geq d+1$. Then, for every $j\in\{0,\ldots,d-1\}$ we have
\begin{align*}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{P}_{n,d})} \alpha (N_F(\mathcal{P}_{n,d})) = \sigma\stirling{n}{j+1}.
\end{align*}
\end{corollary}
\begin{corollary}\label{cor_ext_angle_sum_Q}
Fix some $d\in\mathbb{N}$ and $n\geq d$. Then, for every $j\in\{0,\ldots,d-1\}$ we have
\begin{equation*}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{Q}_{n,d})} \alpha(N_F(\mathcal{Q}_{n,d}))= \frac{j!}{n!}\stirling{n+1}{j+1}.
\end{equation*}
\end{corollary}
For the proof of Theorems~\ref{2219v} and~\ref{1138v} we use a relation, known as the \emph{conic Crofton formula}, between the Grassmann angles of a cone and its conic intrinsic volumes. Precisely, according to~\cite[p.~261]{SW08} we have
\begin{align}\label{eq:crofton_conic}
\gamma_k(C)=2\sum_{i=1,3,5,\ldots} \upsilon_{k+i}(C)
\end{align}
for every cone $C\subset \mathbb{R}^d$ which is not a linear subspace and for all $k\in \{0,\ldots,d\}$. Consequently,
\begin{align}\label{relation_quer_intr}
\upsilon_d(C)=\frac{1}{2}\gamma_{d-1}(C),
\;\;\;
\upsilon_k(C) = \frac{1}{2}\gamma_{k-1}(C)-\frac{1}{2}\gamma_{k+1}(C),
\end{align}
for all $k\in\{0,\ldots,d-1\}$. Here, in the case $k=0$, we define $\gamma_{-1}(C) = 1$. The proof of~\eqref{relation_quer_intr} follows from~\eqref{eq:crofton_conic} together with the identity $\upsilon_0(C) + \upsilon_2(C) +\ldots = 1/2$.
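For a quick illustration of~\eqref{eq:crofton_conic} and~\eqref{relation_quer_intr}, consider the quadrant $C=\mathbb{R}^2_+\subset\mathbb{R}^2$, whose conic intrinsic volumes are $\upsilon_0(C)=\upsilon_2(C)=\tfrac 14$ and $\upsilon_1(C)=\tfrac 12$, while $\gamma_0(C)=1$, $\gamma_1(C)=\tfrac 12$ (a uniformly distributed random line in $\mathbb{R}^2$ intersects $C$ non-trivially with probability $\tfrac 12$) and $\gamma_2(C)=0$. Indeed,
\begin{align*}
\gamma_1(C) = 2\upsilon_2(C) = \tfrac 12,
\qquad
\upsilon_1(C) = \tfrac 12\gamma_0(C) - \tfrac 12\gamma_2(C) = \tfrac 12,
\qquad
\upsilon_0(C) = \tfrac 12\gamma_{-1}(C) - \tfrac 12\gamma_1(C) = \tfrac 14.
\end{align*}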
\begin{proof}[Proof of Theorem~\ref{2219v}]
Let us start by observing that in the case $k=0$ (which implies $j=0$), we can use the fact that the external angles at the vertices of any polytope sum up to $1$. This yields
$$
\mathbb E\, \sum_{F\in\mathcal{F}_0(\mathcal{P}_{n,d})}\upsilon_0(T_F(\mathcal{P}_{n,d}))
=
1
=
\sigma\stirling{n}{1}\sigma\stirlingsec{1}{1},
$$
which is the desired result. In the following we exclude the case $k=j=0$.
In the general case, we can use the linear relation~\eqref{relation_quer_intr} between the Grassmann angles $\gamma_k$ and the conic intrinsic volumes $\upsilon_k$. Then, applying Theorem~\ref{2219}, it follows that for all $k\in\{j,\ldots,d-1\}$,
\begin{align*}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{P}_{n,d})}\upsilon_k(T_F(\mathcal{P}_{n,d}))
&=\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{P}_{n,d})}\Big(\frac{1}{2}\gamma_{k-1}(T_F(\mathcal{P}_{n,d}))- \frac{1}{2}\gamma_{k+1}(T_F(\mathcal{P}_{n,d}))\Big)\\
&= \sum_{l=0}^{\infty} \sigma \stirling{n}{k-2l+1} \sigma \stirlingsec{k-2l+1}{j+1}- \sum_{l=0}^{\infty} \sigma \stirling{n}{k-2l-1}\sigma\stirlingsec{k-2l-1}{j+1}\\
&= \sigma \stirling{n}{k+1} \sigma \stirlingsec{k+1}{j+1}.
\end{align*}
In the case $k=d$, we get
\begin{align*}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{P}_{n,d})}\upsilon_{d}(T_F(\mathcal{P}_{n,d}))&=\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{P}_{n,d})}\frac{1}{2}\gamma_{d-1}(T_F(\mathcal{P}_{n,d}))\\
&=\sum_{l=0}^\infty \sigma\stirling{n}{d-2l}\sigma\stirlingsec{d-2l}{j+1} - \sum_{l=0}^\infty \sigma\stirling{n}{d-1-2l}\sigma\stirlingsec{d-1-2l}{j+1}\\
&=\sum_{s=0}^\infty(-1)^s \sigma\stirling{n}{d-s} \sigma\stirlingsec{d-s}{j+1}.
\end{align*}
The second formula in the case $k=d$ then follows from the identity (see Lemma~\ref{lem:identity} below)
$$
\sum_{m=j+1}^{n} (-1)^{n-m} \sigma \stirling{n}{m} \sigma\stirlingsec{m}{j+1} = \delta_{n,j+1}
$$
together with the observation that the Kronecker symbol on the right-hand side vanishes because $j+1 \leq d < n$.
\end{proof}
\begin{lemma}\label{lem:identity}
For all $n,k\in\mathbb{N}$ with $n\geq k$ we have
$$
\sum_{m=k}^{n} (-1)^{n-m} \sigma \stirling{n}{m} \sigma\stirlingsec{m}{k} = \delta_{n,k},
\;\;\;
\sum_{m=k}^{n} \sigma \stirling{n}{m} \sigma\stirlingsec{m}{k} = \binom nk.
$$
\end{lemma}
\begin{proof}
To prove these identities, consider the tangent cone of the regular simplex $\Delta_n= \mathop{\mathrm{conv}}\nolimits(e_1,\ldots,e_n)$ at its face $\Delta_k=\mathop{\mathrm{conv}}\nolimits(e_1,\ldots,e_k)$. Its $(m-1)$-st conic intrinsic volume can be computed using formula~\eqref{eq:upsilon_def} by observing that the $(m-1)$-dimensional faces of the tangent cone correspond to the $m$-vertex faces of $\Delta_n$ containing $\Delta_k$ and that the internal (respectively, normal) angles at these faces correspond to the internal (respectively, external) angles of these faces. Since the number of such $m$-vertex faces is $\binom {n-k}{m-k}$, one obtains the following formula (which can be found already in~\cite{AS92}):
\begin{equation}\label{eq:eq:upsilon_reg_simplex}
\upsilon_{m-1}(T_{\Delta_k}(\Delta_n))
=
\binom {n-k}{m-k} \frac{\sigma\stirlingsec{m}{k}}{\binom mk} \frac{\sigma\stirling{n}{m}}{\binom nm}
=
\frac 1 {\binom nk}\sigma \stirling{n}{m} \sigma \stirlingsec{m}{k},
\end{equation}
for all $m\in \{k,\ldots,n\}$. Moreover, the conic intrinsic volumes $\upsilon_{m-1}(T_{\Delta_k}(\Delta_n))$ with $m\in \{1,\ldots, k-1\}$ vanish because all faces of $T_{\Delta_k}(\Delta_n)$ have dimension at least $k-1$. The claim of the lemma follows from the identities $\sum_{m=1}^n \upsilon_{m-1}(C) = 1$ and $\sum_{m=1}^{n} (-1)^m \upsilon_{m-1}(C) = 0$ that are valid for every $(n-1)$-dimensional polyhedral cone $C$ which is not a linear subspace. Let us also note that Lemma~\ref{lem:identity} can be viewed as the limiting case, as $\beta\to +\infty$, of the identities for the expected angle sums of the random beta simplices stated in~\cite[Proposition~2.1]{kabluchko_algorithm}.
\end{proof}
\begin{remark}\label{rem:stirling_relations}
Identities similar to those stated in Lemma~\ref{lem:identity} are well known for Stirling numbers. Namely, for all $n,k\in\mathbb{N}$ with $n\geq k$, we have
$$
\sum_{m=k}^{n} (-1)^{n-m} \stirling{n}{m} \stirlingsec{m}{k} = \delta_{n,k},
\;\;\;
\sum_{m=k}^{n} \stirling{n}{m} \stirlingsec{m}{k} = L(n,k),
$$
where $L(n,k) = \frac {n!}{k!} \binom {n-1}{k-1}$ are the Lah numbers. An even more interesting analogy between the Stirling numbers and the angles of the regular simplex is related to the identity $\stirlingsec{n}{k} = \stirling{-k}{-n}$ which becomes valid after a natural extension of the Stirling numbers to negative parameters~\cite[\S6.1]{Graham1994}. It follows directly from~\eqref{eq:regular_simpl_external} and~\eqref{eq:regular_simpl_internal} that the individual angles of the regular simplex (rather than the angle sums $\sigma \stirling{n}{k}$ and $\sigma\stirlingsec{n}{k}$) satisfy a similar identity. Given these analogies, one may ask whether the Stirling numbers can be interpreted as angles of some polytope. This is indeed the case and it turns out that this polytope is the Schl\"afli orthoscheme. These questions will be studied in more detail elsewhere.
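To illustrate the first pair of identities, take $n=3$ and $k=2$. Using the values $\stirling{3}{2}=3$, $\stirling{3}{3}=1$, $\stirlingsec{2}{2}=1$ and $\stirlingsec{3}{2}=3$, we get
$$
\sum_{m=2}^{3} (-1)^{3-m} \stirling{3}{m}\stirlingsec{m}{2} = -3\cdot 1 + 1\cdot 3 = 0 = \delta_{3,2},
\qquad
\sum_{m=2}^{3} \stirling{3}{m}\stirlingsec{m}{2} = 3\cdot 1 + 1\cdot 3 = 6 = L(3,2),
$$
since $L(3,2) = \frac{3!}{2!}\binom 21 = 6$.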
\end{remark}
\begin{proof}[Proof of Theorem~\ref{1138v}]
Let us first assume that $k\neq 0$.
Again, we can use the linear relation~\eqref{relation_quer_intr} and obtain for $k\in\{j,\ldots,d-1\}$,
\begin{align*}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{Q}_{n,d})}\upsilon_k(T_F(\mathcal{Q}_{n,d}))
& =\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{Q}_{n,d})}\Big(\frac{1}{2}\gamma_{k-1}(T_F(\mathcal{Q}_{n,d}))-\frac{1}{2} \gamma_{k+1}(T_F(\mathcal{Q}_{n,d}))\Big)\\
& =\frac{j!}{n!}\sum_{l=0}^{\infty}\stirling{n+1}{k-2l+1}\stirlingsec{k-2l+1}{j+1}-\frac{j!}{n!}\sum_{l=0}^{\infty}\stirling{n+1}{k-2l-1}\stirlingsec{k-2l-1}{j+1}\\
& =\frac{j!}{n!}\stirling{n+1}{k+1}\stirlingsec{k+1}{j+1},
\end{align*}
where we applied Theorem~\ref{1138thm} twice.
For $k=d$, relation~\eqref{relation_quer_intr} and Theorem~\ref{1138thm} yield
\begin{align*}
\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{Q}_{n,d})}\upsilon_{d}(T_F(\mathcal{Q}_{n,d}))
&=\mathbb E\, \sum_{F\in\mathcal{F}_j(\mathcal{Q}_{n,d})}\frac{1}{2}\gamma_{d-1}(T_F(\mathcal{Q}_{n,d}))\\
&=\frac{j!}{n!}\sum_{l=0}^\infty\stirling{n+1}{d-2l}\stirlingsec{d-2l}{j+1}-\frac{j!}{n!}\sum_{l=0}^\infty\stirling{n+1}{d-1-2l} \stirlingsec{d-1-2l}{j+1}\\
&=\frac{j!}{n!}\sum_{s=0}^\infty(-1)^s\stirling{n+1}{d-s}\stirlingsec{d-s}{j+1}.
\end{align*}
The second formula in the case $k=d$ then follows from the identity
$$
\sum_{m=j+1}^{n+1} (-1)^{n+1-m} \stirling{n+1}{m}\stirlingsec{m}{j+1} = \delta_{n,j},
$$
where the Kronecker symbol on the right-hand side vanishes because $j\leq d-1 <n$.
In order to treat the remaining case $k=0$ (which implies that $j=0$), we make use of the fact that the sum of external angles at all vertices in any polytope is $1$, hence
$$
\mathbb E\, \sum_{F\in\mathcal{F}_0(\mathcal{Q}_{n,d})}\upsilon_0(T_F(\mathcal{Q}_{n,d}))
=
1
=
\frac{0!}{n!}\stirling{n+1}{1}\stirlingsec{1}{1},
$$
which is the desired result.
\end{proof}
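As a quick consistency check of Theorem~\ref{1138v}, consider the case $d=1$, $j=0$ and $k=d=1$. The polytope $\mathcal{Q}_{n,1}$ is a.s.\ a segment, and the tangent cone at each of its two vertices is a half-line with $\upsilon_1=\frac 12$, so the angle sum equals $1$ deterministically. The first formula indeed gives
$$
\frac{0!}{n!}\sum_{s=0}^{\infty} (-1)^s\stirling{n+1}{1-s}\stirlingsec{1-s}{1} = \frac 1{n!}\stirling{n+1}{1}\stirlingsec{1}{1} = \frac{n!}{n!}=1,
$$
since only the summand with $s=0$ is non-zero.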
\subsection{Expected angle sums of Gaussian projections of polyhedral sets} \label{subsec:gauss_proj}
In this section, we are going to see that for a polyhedral set $P\subset \mathbb{R}^n$ and a Gaussian random matrix $A\in\mathbb{R}^{d\times n}$ (meaning that the entries of $A$ are independent standard Gaussian random variables), the angle sums of the so-called Gaussian projection $AP$ can be expressed in terms of the angle sums of $P$. As we shall see below, this setting includes the angle sums of Gaussian polytopes and some other interesting examples as special cases.
\begin{theorem} \label{theorem:grassmann_sums_proj_polytope}
Fix some $d\in\mathbb{N}$ and $n\ge d$. Let $P\subset \mathbb{R}^n$ be a polyhedral set with non-empty interior and let $A\in\mathbb{R}^{d\times n}$ be a Gaussian matrix. Then, we have
\begin{align*}
\mathbb E\,\sum_{F\in\mathcal{F}_j(AP)}\gamma_k(T_F(AP))=\sum_{G\in\mathcal{F}_j(P)}\big(\gamma_k(T_G(P))-\gamma_d(T_G(P))\big)
\end{align*}
for all $j\in\{0,\dots,d-1\}$ and $k\in\{j-1,\dots,d-1\}$, where we recall the convention $\gamma_{-1} (C) = 1$.
\end{theorem}
The proof of Theorem~\ref{theorem:grassmann_sums_proj_polytope} is postponed to Section~\ref{2157}.
\begin{corollary}\label{cor:angle_sum_gauss_proj}
Fix some $d\in\mathbb{N}$ and $n\ge d$. Let $P\subset \mathbb{R}^n$ be a polyhedral set with non-empty interior and let $A\in\mathbb{R}^{d\times n}$ be a Gaussian matrix. Then, for all $j\in\{0,\dots,d-1\}$ and $k\in\{j,\dots,d-1\}$ we have
\begin{align*}
\mathbb E\,\sum_{F\in\mathcal{F}_j(AP)}\upsilon_k(T_F(AP))=\sum_{G\in\mathcal{F}_j(P)}\upsilon_k(T_G(P)).
\end{align*}
In the remaining case when $j\in\{0,\dots,d-1\}$ and $k=d$, we obtain
\begin{align*}
\mathbb E\,\sum_{F\in\mathcal{F}_j(AP)}\upsilon_d(T_F(AP))
&=\sum_{G\in\mathcal{F}_j(P)}\sum_{s=0}^{n-d}(-1)^s\upsilon_{d+s}(T_G(P)).
\end{align*}
\end{corollary}
\begin{proof}
We use Theorem~\ref{theorem:grassmann_sums_proj_polytope} and the linear relation~\eqref{relation_quer_intr} between the Grassmann angles $\gamma_k$ and the conic intrinsic volumes $\upsilon_k$. Thus, we obtain
\begin{align*}
\mathbb E\,\sum_{F\in\mathcal{F}_j(AP)}\upsilon_k(T_F(AP))
& =\frac{1}{2}\mathbb E\,\left[\sum_{F\in\mathcal{F}_j(AP)}\big(\gamma_{k-1}(T_F(AP))-\gamma_{k+1}(T_F(AP))\big)\right]\\
& =\frac{1}{2}\sum_{G\in\mathcal{F}_j(P)}\big(\gamma_{k-1}(T_G(P))-\gamma_d(T_G(P))-\gamma_{k+1}(T_G(P))+\gamma_d(T_G(P))\big)\\
& =\frac{1}{2}\sum_{G\in\mathcal{F}_j(P)}\big(\gamma_{k-1}(T_G(P))-\gamma_{k+1}(T_G(P))\big)\\
& =\sum_{G\in\mathcal{F}_j(P)}\upsilon_k(T_G(P))
\end{align*}
for $k\in\{j,\dots,d-1\}$.
Note that we used that neither $T_F(AP)$ nor $T_G(P)$ is a linear subspace. To justify this, note first that both cones are full-dimensional since $\dim AP = d$ (because $A$ has rank $d$, as we shall show in the proof of Theorem~\ref{theorem:grassmann_sums_proj_polytope}) and $\dim P = n$. To complete the argument, note that the lineality spaces of $T_F(AP)$ and $T_G(P)$ have dimension $j$, which is strictly smaller than $d$ and $n$, respectively.
In the remaining case $k=d$, we use~\eqref{relation_quer_intr} combined with Theorem~\ref{theorem:grassmann_sums_proj_polytope} again and obtain
\begin{align*}
\mathbb E\,\sum_{F\in\mathcal{F}_j(AP)}\upsilon_d(T_F(AP))
& = \frac 12\, \mathbb E\,\sum_{F\in\mathcal{F}_j(AP)}\gamma_{d-1}(T_F(AP))\\
& =\sum_{G\in\mathcal{F}_j(P)}\frac{1}{2}\big(\gamma_{d-1}(T_G(P))-\gamma_{d}(T_G(P))\big).
\end{align*}
Applying~\eqref{eq:crofton_conic} yields
\begin{align*}
\mathbb E\,\sum_{F\in\mathcal{F}_j(AP)}\upsilon_d(T_F(AP))
& =\sum_{G\in\mathcal{F}_j(P)}\sum_{i=1,3,5,\dots}\big(\upsilon_{d-1+i}(T_G(P))-\upsilon_{d+i}(T_G(P))\big)\\
& =\sum_{G\in\mathcal{F}_j(P)}\sum_{s=0}^\infty(-1)^s\upsilon_{d+s}(T_G(P)),
\end{align*}
which completes the proof.
\end{proof}
Let us consider some special cases of Corollary~\ref{cor:angle_sum_gauss_proj}.
\begin{remark}
In the case $P=\mathop{\mathrm{conv}}\nolimits(0,e_1,e_1+e_2,\dots,e_1+\ldots+e_n)$, where $e_1,\dots,e_n$ denote the standard Euclidean basis vectors in $\mathbb{R}^n$, we observe that $AP$, for a Gaussian matrix $A\in\mathbb{R}^{d\times n}$, has the same distribution as the convex hull of a random walk in $\mathbb{R}^d$ with independent standard Gaussian increments. Using Corollary~\ref{cor:angle_sum_gauss_proj} and the formula of Theorem~\ref{1138v}, we obtain
\begin{align*}
\sum_{G\in\mathcal{F}_j(P)}\upsilon_k(T_G(P))
=\mathbb E\,\sum_{F\in\mathcal{F}_j(AP)}\upsilon_k(T_F(AP))
=\frac{j!}{n!}\stirling{n+1}{k+1}\stirlingsec{k+1}{j+1}
\end{align*}
for $0\le j\le k\le d-1$. Note that $P$ is also called the Schl\"afli orthoscheme of type $B$, which was extensively studied in~\cite{GK2020_Schlaefli_orthoschemes}. The above formula recovers Theorem 3.1 from~\cite{GK2020_Schlaefli_orthoschemes}.
\end{remark}
\begin{remark}
In the case of the regular simplex $P=\mathop{\mathrm{conv}}\nolimits(e_1,e_2,\dots,e_n)$ with $n\geq d+1$, we observe that $AP$, for a Gaussian matrix $A\in\mathbb{R}^{d\times n}$, has the same distribution as the Gaussian polytope $\mathcal{P}_{n,d}$.
Using Corollary~\ref{cor:angle_sum_gauss_proj} and the formula of Theorem~\ref{2219v}, we obtain
\begin{align*}
\sum_{G\in\mathcal{F}_j(P)}\upsilon_k(T_G(P))
=\mathbb E\,\sum_{F\in\mathcal{F}_j(\mathcal{P}_{n,d})}\upsilon_k(T_F(\mathcal{P}_{n,d}))=\sigma\stirling{n}{k+1}\sigma\stirlingsec{k+1}{j+1}
\end{align*}
for $0\le j\le k\le d-1$. Since the simplex $P$ is not full-dimensional, we have to apply Corollary~\ref{cor:angle_sum_gauss_proj} with $\mathbb{R}^n$ replaced by the affine hull of $P$, which has dimension $n-1$. Of course, the same argument could be used in the other direction, in which case we would recover Theorem~\ref{2219v}.
\end{remark}
\begin{remark}
For independent standard Gaussian random vectors $\xi_1,\dots,\xi_n$ with values in $\mathbb{R}^d$, where $n\geq d$, we define $D=\mathop{\mathrm{pos}}\nolimits(\xi_1,\dots,\xi_n)$. Then, the random cone $D$ has the same distribution as $A\mathbb{R}^n_+$ for a Gaussian matrix $A\in\mathbb{R}^{d\times n}$. Thus, we obtain
\begin{align*}
\mathbb E\,\sum_{G\in\mathcal{F}_j(D)}\upsilon_k(T_G(D))
=\sum_{F\in\mathcal{F}_j(\mathbb{R}^n_+)}\upsilon_k(T_F(\mathbb{R}^n_+))=\binom{n}{j}\binom{n-j}{k-j}2^{j-n}
\end{align*}
for all $0\le j\le k\le d-1$. In order to prove the last step, we need to consider the tangent cones $T_F(\mathbb{R}^n_+)$ for $F\in\mathcal{F}_j(\mathbb{R}^n_+)$. Each $j$-face $F\in\mathcal{F}_j(\mathbb{R}^n_+)$ is determined by a collection of indices $1\le i_1<\ldots<i_{n-j}\le n$ and given by
\begin{align*}
F=\{(x_1,\dots,x_n)\in\mathbb{R}^n:x_{i_1}=\ldots=x_{i_{n-j}}=0,\;x_l\ge 0\; \text{ for all } l\notin\{i_1,\dots,i_{n-j}\}\}.
\end{align*}
Then, the corresponding tangent cone is given by
\begin{align*}
T_F(\mathbb{R}^n_+)=\{v\in\mathbb{R}^n:v_{i_1}\ge 0,\dots,v_{i_{n-j}}\ge 0\}
\end{align*}
which is isometric to $\mathbb{R}^{n-j}_+\times\mathbb{R}^j$. Using the well-known formula for the conic intrinsic volumes of the orthant $\mathbb{R}^n_+$, see e.g.~\cite[Example~2.8]{amelunxen_comb}, we obtain
\begin{align*}
\sum_{F\in\mathcal{F}_j(\mathbb{R}^n_+)}\upsilon_k(T_F(\mathbb{R}^n_+))=\sum_{F\in\mathcal{F}_j(\mathbb{R}^n_+)}\upsilon_{k-j}(\mathbb{R}^{n-j}_+)
=\sum_{F\in\mathcal{F}_j(\mathbb{R}^n_+)}\binom{n-j}{k-j}2^{j-n}=\binom{n}{j}\binom{n-j}{k-j}2^{j-n}.
\end{align*}
In the remaining case $k=d$, we get
$$
\mathbb E\,\sum_{G\in\mathcal{F}_j(D)}\upsilon_d(T_G(D))
=\sum_{F\in\mathcal{F}_j(\mathbb{R}^n_+)}\sum_{s=0}^{n-d}(-1)^s\upsilon_{d+s}(T_F(\mathbb{R}^n_+))
=
2^{j-n} \binom nj \sum_{s=0}^{n-d}(-1)^s
\binom{n-j}{d+s-j}.
$$
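The alternating sum in the preceding display can be evaluated in closed form. By the standard identity $\sum_{s=0}^{N-r}(-1)^s\binom{N}{r+s}=\binom{N-1}{r-1}$, valid for integers $1\le r\le N$ (it follows from Pascal's rule by telescoping), applied with $N=n-j$ and $r=d-j$, we obtain
$$
\mathbb E\,\sum_{G\in\mathcal{F}_j(D)}\upsilon_d(T_G(D))
=
2^{j-n} \binom nj \binom{n-j-1}{d-j-1}.
$$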
\end{remark}
\subsection{Invariance of angle sums under affine transformations}
In general, the solid-angle sums of a deterministic polytope (as well as the more general sums of Grassmann angles) are not invariant under affine transformations of the ambient space. For example, for a simplex in dimension at least $3$, the sum of solid angles at vertices can take any value between $0$ and $1/2$ (see~\cite{PS67}), although all simplices can be transformed into each other by affine transformations. The next theorem states that if the polytope is random and its law is rotationally invariant, then the expected angle sums become invariant under affine transformations.
\begin{theorem}\label{prop:elliptic}
Let $P$ be a random polytope (or, more generally, a random polyhedral set) with a.s.\ non-empty interior in $\mathbb{R}^d$. Assume that the law of $P$ is invariant under orthogonal maps, that is, $OP$ has the same distribution as $P$ for every deterministic orthogonal transformation $O:\mathbb{R}^d\to\mathbb{R}^d$. Let $A:\mathbb{R}^d\to\mathbb{R}^d$ be a deterministic linear map with $\det A\neq 0$. Then, for all $j\in \{0,\ldots,d-1\}$ and $k\in \{0,\ldots,d\}$,
\begin{align}
&\mathbb E\, \sum_{G\in \mathcal{F}_j(AP)} \gamma_k(T_{G}(AP)) = \mathbb E\, \sum_{F\in \mathcal{F}_j(P)} \gamma_k(T_{F}(P)), \label{eq:prop:elliptic1}\\
&\mathbb E\, \sum_{G\in \mathcal{F}_j(AP)} \upsilon_k(T_{G}(AP)) = \mathbb E\, \sum_{F\in \mathcal{F}_j(P)} \upsilon_k(T_{F}(P)). \label{eq:prop:elliptic2}
\end{align}
In the special case $k=d$, \eqref{eq:prop:elliptic2} implies that the expected angle sums are invariant in the sense that for all $j\in \{0,\ldots,d-1\}$:
$$
\mathbb E\, \sum_{G\in \mathcal{F}_j(AP)} \alpha(T_{G}(AP)) = \mathbb E\, \sum_{F\in \mathcal{F}_j(P)} \alpha(T_{F}(P)).
$$
\end{theorem}
Since an arbitrary non-degenerate Gaussian distribution can be represented as a linear image of the standard Gaussian distribution, the above theorem yields the following corollary.
\begin{corollary}
Theorems~\ref{2219}, \ref{2219v} and Corollaries~\ref{cor:angle_sum_P}, \ref{cor_ext_angle_sum_P} remain true if the points $X_1,\ldots,X_n$ generating the polytope $\mathcal{P}_{n,d}$ are sampled independently from an arbitrary non-degenerate Gaussian distribution.
\end{corollary}
\begin{proof}[Proof of Theorem~\ref{prop:elliptic}]
Let first $Q\subset \mathbb{R}^d$ be any deterministic polytope with non-empty interior and $A:\mathbb{R}^d\to\mathbb{R}^d$ a linear map with $\det A\neq 0$. All $j$-dimensional faces of $AQ$ are of the form $G= A F$ for some $F\in \mathcal{F}_j(Q)$. The tangent cone of the polytope $AQ$ at its face $AF$ coincides with $A (T_F(Q))$. If $W_{d-k}$ denotes a random uniform $(d-k)$-plane in $\mathbb{R}^d$ which is independent of everything else, then the $k$-th Grassmann angle of $AQ$ at $AF$ can be written as
\begin{multline*}
\gamma_k (T_{AF}(AQ))
=
\gamma_k (A T_F(Q))
=
\P[W_{d-k} \cap A T_F(Q) \neq \{0\}]
=
\P[A^{-1} W_{d-k} \cap T_F(Q) \neq \{0\}]
\\
=
\int_{G(d,d-k)} \mathbbm{1}_{\{V\cap T_F(Q) \neq \{0\}\}} \P_{A^{-1}W_{d-k}}({\rm d} V),
\end{multline*}
where $\P_{A^{-1}W_{d-k}}$ is the probability law of $A^{-1}W_{d-k}$ on $G(d,d-k)$, the Grassmannian of linear $(d-k)$-planes in $\mathbb{R}^d$. Taking the sum over all $F\in \mathcal{F}_j(Q)$, we obtain
$$
\sum_{G = AF \in \mathcal{F}_j(AQ)} \gamma_k (T_{G}(AQ))
=
\int_{G(d,d-k)} \left(\sum_{F\in \mathcal{F}_j(Q)} \mathbbm{1}_{\{V\cap T_F(Q) \neq \{0\}\}}\right) \P_{A^{-1}W_{d-k}}({\rm d} V).
$$
Applying this to $Q=P$ (with $W_{d-k}$ being independent of $P$), taking the expectation, and using Fubini's theorem we get
$$
\mathbb E\, \sum_{G\in \mathcal{F}_j(AP)} \gamma_k (T_{G}(AP))
=
\int_{G(d,d-k)} \mathbb E\, \left[\sum_{F\in \mathcal{F}_j(P)} \mathbbm{1}_{\{V\cap T_F(P) \neq \{0\}\}}\right] \P_{A^{-1}W_{d-k}}({\rm d} V).
$$
However, since the probability law of the random polytope $P$ is rotationally invariant, the expectation inside the integral does not depend on the choice of $V\in G(d,d-k)$ and it follows that
$$
\mathbb E\, \sum_{G\in \mathcal{F}_j(AP)} \gamma_k (T_{G}(AP))
=
\mathbb E\, \left[\sum_{F\in \mathcal{F}_j(P)} \mathbbm{1}_{\{V_0\cap T_F(P) \neq \{0\}\}}\right],
$$
where $V_0$ is any element of $G(d,d-k)$. Observe that the right-hand side does not depend on the choice of the linear map $A$. Since we can apply the above argument to the case when $A$ is the identity map, we arrive at the identity
$$
\mathbb E\, \sum_{G\in \mathcal{F}_j(AP)} \gamma_k (T_{G}(AP)) = \mathbb E\, \sum_{F\in \mathcal{F}_j(P)} \gamma_k (T_{F}(P)),
$$
which proves~\eqref{eq:prop:elliptic1}. To prove~\eqref{eq:prop:elliptic2}, recall~\eqref{relation_quer_intr}.
\end{proof}
\section{Linear images of polyhedral sets}\label{631}
In this section, we prove some facts on linear images (including projections) of polyhedral sets which will be used in the proof of Theorem~\ref{820}.
Consider a polyhedral set $P\subset \mathbb{R}^d$ with a non-empty interior. Take some $k\in \{1,\ldots,d\}$ and let $A:\mathbb{R}^d\to \mathbb{R}^k$ be a linear map of full rank $k$, which means that $\Ima A = \mathbb{R}^k$ or, equivalently, $\dim \Ker A = d-k$. We are interested in relating the faces of the polyhedral set $AP$ to the faces of the original polyhedral set $P$.
The first main result of the present section, Proposition~\ref{1640}, states that every proper face of $AP$ is an image of some face of $P$.
However, the converse is not true: not every face of $P$ is mapped to a face of $AP$.
The second main result, Proposition~\ref{1122}, states several equivalent conditions which guarantee that the image of a face of $P$ is a face of $AP$. These results require a general position assumption on $\Ker A$ with respect to $P$, which we are now going to state.
Let $M$ be a convex set in $\mathbb{R}^d$. Denote by $L$ the unique linear subspace in $\mathbb{R}^d$ such that for some $t\in\mathbb{R}^d$,
\begin{align*}
\mathop{\mathrm{aff}}\nolimits M = t+ L.
\end{align*}
In other words, $L$ is the translate of the affine hull of $M$ that passes through the origin.
We say that $M$ is \textit{in general position with respect to a linear subspace} $L'\subset \mathbb{R}^d$ if
\begin{align*}
\dim (L \cap L') = \max(\dim L-\mathop{\mathrm{codim}}\nolimits L',0).
\end{align*}
Also, we say that a linear subspace $L'\subset \mathbb{R}^d$ is \textit{in general position with respect to a polyhedral set} $P$ if it is in general position with respect to all faces of $P$ of all dimensions.
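For example, let $d=3$ and let $L'\subset\mathbb{R}^3$ be a plane, so that $\mathop{\mathrm{codim}}\nolimits L'=1$. If $M$ is a segment, then $L$ is the line spanned by its direction, and general position requires $\dim(L\cap L')=\max(1-1,0)=0$, that is, the direction of the segment must not be contained in $L'$. If $M$ is two-dimensional, the requirement reads $\dim(L\cap L')=1$, which fails only if $L=L'$.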
\begin{lemma}\label{1641}
Let $M$ be a convex set in $\mathbb{R}^d$. Fix some $k\in\{1,\ldots,d\}$ and let $A:\mathbb{R}^d\to\mathbb{R}^k$ be a linear map of full rank $k$ (that is, $\dim\Ker A=d-k$). If $\Ker A$ is in general position with respect to $M$, then
\begin{align*}
\dim A M=\min(k,\dim M).
\end{align*}
\end{lemma}
\begin{proof}
Let $\mathop{\mathrm{aff}}\nolimits M=t+L$, where $t\in\mathbb{R}^d$ and $L\subset\mathbb{R}^d$ is a linear subspace. Since the map $A$ preserves affine and linear hulls, we have $\dim AM = \dim AL$. To prove the lemma, we need to show that
\begin{align*}
\dim A L=m, \quad\text{where}\quad m:=\min(k,\dim L).
\end{align*}
Since $\mathop{\mathrm{codim}}\nolimits\Ker A = k$, it follows from the general position assumption that
\begin{align}\label{936}
\dim (L\cap \Ker A)=\dim L-m.
\end{align}
This implies that there exist linearly independent vectors $e_1,\ldots,e_m\in L$ such that
\begin{align*}
\mathop{\mathrm{lin}}\nolimits(e_1,\ldots,e_m)\cap \Ker A=\{0\}.
\end{align*}
Therefore, for any tuple $(c_1,\ldots,c_m)\ne (0,\ldots,0)$,
\begin{align*}
0\ne A(c_1e_1+\ldots+c_me_m)=c_1Ae_1+\ldots+c_mAe_m,
\end{align*}
which implies the linear independence of $Ae_1,\ldots,Ae_m$. Thus, $\dim AL\geq m$.
On the other hand, we obviously have $\dim AL\leq m$: indeed, $\dim AL\leq k$ since $AL\subset\mathbb{R}^k$, and $\dim AL\leq \dim L$ since the images under $A$ of linearly dependent vectors are linearly dependent as well.
\end{proof}
\begin{proposition}\label{1640}
Let $P\subset\mathbb{R}^d$ be a polyhedral set with non-empty interior. Fix some $k\in\{1,\ldots,d\}$ and let $A:\mathbb{R}^d\to\mathbb{R}^k$ be a linear map of full rank $k$.
\begin{enumerate}
\item[(a)] If $F$ is a proper face of $AP$, then $F = AG$ for a proper face $G\in \mathcal{F}(P)$ with $\dim G\geq \dim F$.
\item[(b)] If, moreover, $\Ker A$ is in general position with respect to $P$, then
\begin{align*}
\dim F=\dim G.
\end{align*}
Also, $G$ is unique in the following sense: If $G'\in \mathcal{F}(P)$ satisfies $AG' = F$, then $G'=G$.
\end{enumerate}
\end{proposition}
\begin{proof}
We prove (a).
By definition of a face, there exists a supporting affine hyperplane $H\subset\mathbb{R}^k$ of the polyhedral set $AP$ such that
\begin{align*}
F=H\cap AP.
\end{align*}
Since $A$ has full rank, $A^{-1}H$ is an affine hyperplane in $\mathbb{R}^d$. Moreover, we claim that $A^{-1}H$ is a supporting hyperplane of $P$. Indeed, if $H=\{y\in \mathbb{R}^k: \phi(y) = c\}$ and $AP\subset \{y\in \mathbb{R}^k: \phi(y) \geq c\}$ for some linear functional $\phi:\mathbb{R}^k\to\mathbb{R}$ (that does not vanish identically) and some constant $c\in\mathbb{R}$, then $A^{-1}H = \{x\in \mathbb{R}^d: \phi(Ax) = c\}$ and $P\subset \{x\in\mathbb{R}^d: \phi(Ax) \geq c\}$. To verify the last claim, assume that some $x\in P$ satisfies $\phi(Ax) <c$. But then $y:=Ax\in AP$ and $\phi(y) < c$, a contradiction. Since $x\mapsto \phi(Ax)$ is a linear functional on $\mathbb{R}^d$ that does not vanish identically, it follows that $A^{-1} H$ is a supporting hyperplane of $P$. It follows that
\begin{align*}
G:=(A^{-1}F)\cap P = A^{-1}(H\cap AP) \cap P = A^{-1}H\cap A^{-1}A P \cap P = A^{-1} H \cap P
\end{align*}
is a face of $P$. Let us finally check that $A G = F$. Clearly, $A G = A (A^{-1}F \cap P) \subset F$. To prove the converse inclusion $F\subset AG$, take some $f\in F = H\cap AP$.
It follows that there is $p\in P$ with $f = Ap$. Suppose that $p\notin A^{-1}H$; then $\phi(Ap)>c$, hence $\phi(f) > c$, contradicting $f\in F \subset H$. We just proved that $f=Ap$ with $p\in A^{-1} H \cap P = G$. Hence, $F = AG$, which proves (a) because the map $A$ cannot increase dimension and hence $\dim F\leq \dim G$.
Statement~(b) follows directly from Lemma~\ref{1641}. Indeed, since $\dim AP = k$ and $F$ is a proper face of $AP$, we have $\dim F <k$. By Lemma~\ref{1641}, we must have $\dim F = \dim AG = \min (k, \dim G) = \dim G$. To prove the uniqueness of $G$, one can argue as follows. If $AG'=F$, then since $F\subset H$, we must have $G' \subset A^{-1}H$ and thus also $G'\subset (A^{-1}H) \cap P = G$. If $G'$ were a proper subset of $G$, it would have a strictly smaller dimension than $\dim G$, and therefore the dimension of $F= AG'$ would also be strictly smaller than $\dim G = \dim F$, which is a contradiction.
\end{proof}
\begin{proposition}\label{1122}
Let $P\subset\mathbb{R}^d$ be a polyhedral set with non-empty interior. Fix integers $j$ and $k$ with $0\leq j < k \leq d$. Let $A:\mathbb{R}^d\to\mathbb{R}^k$ be a linear map of full rank $k$ such that $\Ker A$ is in general position with respect to $P$. If $F$ is a $j$-face of $P$, then the following statements are equivalent:
\begin{enumerate}
\item[(a)] $AF$ is a face of $AP$;
\item[(b)] $AF$ is a $j$-face of $AP$;
\item[(c)] $AF\cap\mathop{\mathrm{int}}\nolimits AP=\varnothing$;
\item[(d)] $A(T_F(P)) \neq \mathbb{R}^k$;
\item[(e)] $(\mathop{\mathrm{int}}\nolimits T_F(P)) \cap \Ker A =\varnothing$;
\item[(f)] $T_F(P) \cap \Ker A = \{0\}$.
\end{enumerate}
\end{proposition}
Before starting the proof, we state some lemmas.
\begin{lemma}\label{1146}
Let $k\in \{1,\ldots,d-1\}$. Consider some convex cone $C\subset \mathbb{R}^d$ with non-empty interior and a linear map $A:\mathbb{R}^{d} \to\mathbb{R}^k$ of full rank $k$. The following two conditions are equivalent:
\begin{enumerate}
\item[(a)] $\mathop{\mathrm{int}}\nolimits C\cap \Ker A\ne\varnothing$;
\item[(b)] $AC = \mathbb{R}^k$.
\end{enumerate}
\end{lemma}
\begin{proof}
See~\cite[Lemma~5.1]{goetze_kabluchko_zaporozhets}, where we have $\mathop{\mathrm{lin}}\nolimits (AC) = \mathbb{R}^k$ because $\mathop{\mathrm{int}}\nolimits C \neq \varnothing$ and $A$, being a linear surjection, maps open sets to open sets.
\end{proof}
\begin{lemma}\label{1134}
For any set $M\subset \mathbb{R}^k$ the following two conditions are equivalent:
\begin{enumerate}
\item[(a)] $0\in\mathop{\mathrm{relint}}\nolimits \mathop{\mathrm{conv}}\nolimits M$;
\item[(b)] $\mathop{\mathrm{pos}}\nolimits M=\mathop{\mathrm{lin}}\nolimits M$.
\end{enumerate}
\end{lemma}
\begin{proof}
See~\cite[Proposition~5.2]{goetze_kabluchko_zaporozhets}.
\end{proof}
\begin{lemma}\label{lem:intersects_boundary_intersects_interior}
Let $C\subset \mathbb{R}^d$ be a polyhedral cone of full dimension $\dim C=d$.
Let a proper linear subspace $S\subset \mathbb{R}^d$ be in general position with respect to the linear hull $\mathop{\mathrm{lin}}\nolimits(F)$ of every face $F\in \mathcal{F}(C)$.
If $S$ intersects $C$, then it also intersects its interior $\mathop{\mathrm{int}}\nolimits C$, i.e.
$$
S\cap C \neq \varnothing \Rightarrow S\cap \mathop{\mathrm{int}}\nolimits C \neq \varnothing.
$$
\end{lemma}
\begin{proof}
See the proof of Lemma~3.5 in~\cite{KVZ15} or~\cite[Lemma~5.7]{kabluchko_seidel} (which is stated for polytopes but is true for arbitrary polyhedral sets).
\end{proof}
\begin{proof}[Proof of Proposition~\ref{1122}]
Note that $\dim AP = k$. It follows from Lemma~\ref{1641} that (a) implies (b), and obviously (b) implies (c).
Let us prove that (c) implies (d). Take an arbitrary $f\in\mathop{\mathrm{relint}}\nolimits F$. Then, by (c) we have $Af \notin \mathop{\mathrm{int}}\nolimits AP$ and hence $0\notin\mathop{\mathrm{int}}\nolimits A(P-f)$. Therefore, making use of Lemma~\ref{1134} and taking into account that the set $A(P-f)$ is convex and has non-empty interior, we have $\mathop{\mathrm{pos}}\nolimits A(P-f) \neq \mathbb{R}^k$. But then
\[
A (T_F(P)) = A\mathop{\mathrm{pos}}\nolimits (P-f)=\mathop{\mathrm{pos}}\nolimits A(P-f)\neq \mathbb{R}^k,
\]
thus proving (d).
The equivalence of (d), (e) and (f) follows from Lemmas~\ref{1146} and~\ref{lem:intersects_boundary_intersects_interior}. It remains to prove that (f) implies (a). So, let
\begin{equation}\label{eq:T_F_P_Ker_A}
T_F(P) \cap \Ker A = \{0\}.
\end{equation}
Since $A$ preserves convexity, $AF$ is a convex subset of $AP$. To prove that $AF$ is a face of $AP$ it suffices to prove the following statement (which is, in fact, a definition of a face; see~\cite[p.~18]{schneider_book_brunn_mink}): If $x = Af\in AF$ can be represented as $x= \frac 12 (x_1+x_2)$ for some $x_1,x_2\in AP$, then $x_1,x_2\in AF$. Write $x_1= Ap_1$ and $x_2 = Ap_2$ for some $p_1,p_2\in P$. Then, $x = Ap$ with $p := \frac 12(p_1+p_2)\in P$.
So, $x=Ap =Af$ with $f\in F$ and $p\in P$. We claim that this implies that $p=f$. This can be verified as follows. On the one hand, we have $p-f\in \Ker A$ because $A(p-f) = Ap - Af = x-x = 0$. On the other hand, we have $p-f = (p-f_0) +(f_0-f) \in T_F(P)$, where $f_0\in \mathop{\mathrm{relint}}\nolimits F$ is arbitrary and we have used that $p-f_0 \in T_F(P)$ by the definition of the tangent cone and that $f_0 - f$ belongs to $f_0-\mathop{\mathrm{aff}}\nolimits F$, which is the lineality space of $T_F(P)$. To summarize, $p-f \in T_F(P) \cap \Ker A$, hence $p=f$ by~\eqref{eq:T_F_P_Ker_A}.
So, $p=f\in F$. But since $F$ is a face of $P$ and $p=\frac 12 (p_1+p_2)$, we must have $p_1\in F$ and $p_2 \in F$. It follows that $x_1= Ap_1\in AF$ and $x_2= Ap_2\in AF$, thus proving the claim.
\end{proof}
\section{Proofs of Theorems~\ref{820},~\ref{623} and~\ref{theorem:grassmann_sums_proj_polytope}} \label{2157}
\begin{proof}[Proof of Theorem~\ref{820}]
By the definition of the Grassmann angles, see~\eqref{1138}, we have
\begin{align*}
\gamma_k(T_F(P)) = \P[T_F(P) \cap W_{d-k}\ne \{0\}].
\end{align*}
Applying Proposition~\ref{1122} (in particular, the equivalence between (a) and (f)) in the setting when $A$ is the orthogonal projection $\Pi_{W_{d-k}^\perp}$ on $W_{d-k}^\bot$ (which we identify with $\mathbb{R}^{k}$), we arrive at
\begin{align*}
\gamma_k(T_F(P))
=
\P\big[\Pi_{W_{d-k}^\perp} F \not\in \mathcal{F}(\Pi_{W_{d-k}^\perp}P)\big].
\end{align*}
Note that the general position assumption of Proposition~\ref{1122} is fulfilled with probability $1$ for the random linear subspace $\Ker A = W_{d-k}$; see \cite[Lemma 13.2.1]{SW08}.
Observing that $W_{d-k}^\perp$ has the same distribution as $W_k$, we arrive at
$
\gamma_k(T_F(P))
=
\P\big[\Pi_{W_{k}} F \not\in \mathcal{F}(\Pi_{W_{k}}P)\big]
=
\P\big[\Pi_{k} F \not\in \mathcal{F}(\Pi_{k}P)\big].
$
To show that $\mathcal{F}(\Pi_{k}P)$ can be replaced by $\mathcal{F}_j(\Pi_{k}P)$ on the right-hand side, we can use the same argument as above, but this time appeal to the equivalence of (b) and (f) in Proposition~\ref{1122}.
\end{proof}
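In the plane this identity can be probed numerically: for a vertex $v$ of a convex polygon with interior angle $\beta$, the tangent cone is a wedge of angle $\beta$, and $\gamma_1(T_v(P)) = \P[\Pi_1 v \notin \mathcal{F}(\Pi_1 P)] = \beta/\pi$. The following Monte Carlo sketch (an illustration assuming \texttt{numpy} is available; the regular pentagon, with $\beta = 3\pi/5$, is a hypothetical test case) estimates this probability directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Regular pentagon: every interior angle is beta = 3*pi/5, so for each
# vertex v the tangent cone is a wedge of angle beta and
# gamma_1(T_v(P)) = P[Pi_1 v is not a face of Pi_1 P] = beta/pi = 0.6.
verts = np.array([[np.cos(2 * np.pi * k / 5), np.sin(2 * np.pi * k / 5)]
                  for k in range(5)])

trials, miss = 20000, 0
for _ in range(trials):
    theta = rng.uniform(0, np.pi)            # uniformly random line W_1
    u = np.array([np.cos(theta), np.sin(theta)])
    proj = verts @ u                         # Pi_1 applied to the vertices
    # Pi_1 v_0 fails to be a face of the projected segment iff it is
    # neither the maximum nor the minimum of the projected vertices
    if proj[0] != proj.max() and proj[0] != proj.min():
        miss += 1

estimate = miss / trials                     # should be close to 3/5
```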
\begin{proof}[Proof of Theorem~\ref{623}]
Taking the sum of~\eqref{eq:gamma_k_proof} over all $F\in \mathcal{F}_j(P)$, we obtain
$$
\sum_{F\in \mathcal{F}_j(P)} \gamma_k(T_F(P))
=
\sum_{F\in \mathcal{F}_j(P)} \P\big[\Pi_{k} F \not\in \mathcal{F}_j(\Pi_{k}P)\big]
=
f_j(P) - \sum_{G\in \mathcal{F}_j(P)} \P\big[\Pi_{k} G \in \mathcal{F}_j(\Pi_{k}P)\big].
$$
Writing the probabilities as the expectations of the corresponding indicator functions, we can rewrite this as
$$
\sum_{F\in \mathcal{F}_j(P)} \gamma_k(T_F(P))
=
f_j(P) - \mathbb E\, \sum_{G\in \mathcal{F}_j(P)} \mathbbm{1}\{\Pi_{k} G \in \mathcal{F}_j(\Pi_{k}P)\}.
$$
According to Proposition~\ref{1640} every $j$-face of $\Pi_{k}P$ is of the form $\Pi_k G$ for some unique $G\in \mathcal{F}_j(P)$. Thus, the sum on the right-hand side equals $f_j(\Pi_k P)$ and we arrive at
$$
\sum_{F\in \mathcal{F}_j(P)} \gamma_k(T_F(P))
=
f_j(P) - \mathbb E\, f_j(\Pi_k P),
$$
thus proving the claim.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem:grassmann_sums_proj_polytope}]
Take $d,n\in\mathbb{N}$ satisfying $n\ge d$. Furthermore, let $A\in\mathbb{R}^{d\times n}$ be a Gaussian random matrix, let $P\subset \mathbb{R}^n$ be a polyhedral set with non-empty interior and let $j\in\{0,\dots,d-1\}$ and $k\in\{j-1,\dots,d-1\}$ be given. At first, we are going to show that $\ker A$ is in general position with respect to $P$ a.s.~and that $\mathop{\mathrm{rank}}\nolimits A=d$ holds with probability $1$.
In order to prove that $\mathop{\mathrm{rank}}\nolimits A=d$ a.s., we will show that the row-vectors $\xi_1,\dots,\xi_d$ of $A$ are linearly dependent with probability $0$. Note that $\xi_1,\dots,\xi_d$ are independent standard Gaussian vectors in $\mathbb{R}^n$. Then, we have
\begin{align*}
&\P[\xi_1,\dots,\xi_d\text{ are linearly dependent}]\\
& \quad\le d\cdot \P[\xi_1\in\mathop{\mathrm{lin}}\nolimits\{\xi_2,\dots,\xi_d\}]\\
& \quad=d\cdot \int_{(\mathbb{R}^n)^{d-1}}\P(\xi_1\in\mathop{\mathrm{lin}}\nolimits\{x_2,\dots,x_d\}|\xi_2=x_2,\dots,\xi_d=x_d)\P_{(\xi_2,\dots,\xi_d)}(\text{d}(x_2,\dots,x_d))\\
& \quad=d\cdot \int_{(\mathbb{R}^n)^{d-1}}\P(\xi_1\in\mathop{\mathrm{lin}}\nolimits\{x_2,\dots,x_d\})\P_{(\xi_2,\dots,\xi_d)}(\text{d}(x_2,\dots,x_d))\\
& \quad=0,
\end{align*}
since $\dim\mathop{\mathrm{lin}}\nolimits\{x_2,\dots,x_d\}\le d-1<n$. Note that $\P_{(\xi_2,\dots,\xi_d)}$ denotes the joint Gaussian probability law of $(\xi_2,\dots,\xi_d)$. Thus, $\mathop{\mathrm{rank}}\nolimits A=d$ holds true with probability 1.
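The almost-sure full-rank claim is easy to probe numerically; the following sketch (assuming \texttt{numpy} is available, with hypothetical sizes $d=4$, $n=7$) draws a number of Gaussian matrices and checks that each has rank $d$.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 7

# Each draw is a d x n matrix with i.i.d. standard Gaussian entries;
# since d <= n, its d rows are a.s. linearly independent.
ranks = [np.linalg.matrix_rank(rng.standard_normal((d, n))) for _ in range(100)]
full_rank = all(r == d for r in ranks)
```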
Furthermore, $\ker A$ is in general position with respect to every subspace $L\subset\mathbb{R}^n$ by~\cite[Lemma 13.2.1]{SW08}, since $\ker A$ is invariant under rotations and therefore uniformly distributed on the Grassmannian of all $(n-d)$-dimensional subspaces.
Assume first that $k\in\{j,\dots,d-1\}$, meaning that the case $k=j-1$ is postponed. Using Proposition~\ref{1640}, we obtain
\begin{align*}
\mathbb E\,\sum_{F\in\mathcal{F}_j(AP)}\gamma_k(T_F(AP))
& =\mathbb E\,\sum_{G\in\mathcal{F}_j(P)}\gamma_k(T_{AG}(AP))\mathbbm{1}_{\{AG\in\mathcal{F}_j(AP)\}}.
\end{align*}
We claim that $AT_G(P)=T_{AG}(AP)$ holds for a face $G\in\mathcal{F}_j(P)$ provided its projection is also a face $AG\in\mathcal{F}_j(AP)$. In order to prove this, note that the tangent cone $T_G(P)$ can equivalently be defined as $\mathop{\mathrm{pos}}\nolimits (P-g)$ for any point $g\in\mathop{\mathrm{relint}}\nolimits G$. Since $A$ is a linear mapping, we obtain
\begin{align*}
AT_G(P)=A\mathop{\mathrm{pos}}\nolimits(P-g)=\mathop{\mathrm{pos}}\nolimits(A(P-g))=\mathop{\mathrm{pos}}\nolimits(AP-Ag).
\end{align*}
It is left to show that $Ag$ lies in the relative interior of $AG$. Obviously, we know that $Ag\in AG$. Now suppose $Ag\notin \mathop{\mathrm{relint}}\nolimits AG$, that is, $Ag$ lies in a face of $AP$ of dimension smaller than $j$ (the dimension of $AG$). Due to Proposition~\ref{1640}, this implies that $g$ lies in a face of $P$ of dimension smaller than $j$, contradicting $g\in\mathop{\mathrm{relint}}\nolimits G$. Thus, $Ag\in\mathop{\mathrm{relint}}\nolimits AG$ and we have $AT_G(P)=\mathop{\mathrm{pos}}\nolimits(AP-Ag)=T_{AG}(AP)$, which yields
\begin{align*}
\mathbb E\,\sum_{F\in\mathcal{F}_j(AP)}\gamma_k(T_F(AP))
& =\mathbb E\,\sum_{G\in\mathcal{F}_j(P)}\gamma_k(AT_{G}(P))\mathbbm{1}_{\{AG\in\mathcal{F}_j(AP)\}}.
\end{align*}
Proposition~\ref{1122} implies that $AG\in\mathcal{F}_j(AP)$ is equivalent to $AT_G(P)\neq\mathbb{R}^d$ since $\mathop{\mathrm{rank}}\nolimits A=d$ and $\ker A$ is in general position with respect to $P$ a.s. Therefore, we arrive at
\begin{align}\label{eq:123}
\mathbb E\,\sum_{F\in\mathcal{F}_j(AP)}\gamma_k(T_F(AP))
&= \mathbb E\,\sum_{G\in\mathcal{F}_j(P)}\gamma_k(AT_{G}(P))\mathbbm{1}_{\{AT_G(P)\neq\mathbb{R}^d\}}\notag\\
&= \sum_{G\in\mathcal{F}_j(P)}\mathbb E\,\big[\gamma_k(AT_{G}(P))\mathbbm{1}_{\{AT_G(P)\neq\mathbb{R}^d\}}\big].
\end{align}
From \cite[Corollary~3.6]{goetze_kabluchko_zaporozhets}, we obtain that
\begin{align*}
\mathbb E\,\big[\gamma_k(AT_{G}(P))\mathbbm{1}_{\{AT_G(P)\neq\mathbb{R}^d\}}\big]=\gamma_k(T_G(P))-\gamma_d(T_G(P)).
\end{align*}
Note that we used that $T_G(P)$ is not a linear subspace because $P$ is not a linear subspace (otherwise we would have $P=\mathbb{R}^n$ since $\mathop{\mathrm{int}}\nolimits P\neq \varnothing$, and the statement of the theorem would become empty). Applying this to~\eqref{eq:123} yields
\begin{align*}
\mathbb E\,\sum_{F\in\mathcal{F}_j(AP)}\gamma_k(T_F(AP))=\sum_{G\in\mathcal{F}_j(P)}\big(\gamma_k(T_G(P))-\gamma_d(T_G(P))\big),
\end{align*}
which is the required formula.
To complete the proof, note that in the case $k=j-1$ the required formula reduces (using that $\gamma_{-1}(C) = 1$) to the identity
$$
\mathbb E\, f_j(AP) = f_j(P) - \sum_{G\in \mathcal{F}_j(P)} \gamma_d(T_G(P)).
$$
This identity follows from Theorem~\ref{623}. The fact that the orthogonal projection can be replaced by the Gaussian projection follows from the argument of~\cite{BV94}.
\end{proof}
\bibliographystyle{plainnat}
\bigskip
\hrule
\bigskip
% Second article in this extract: https://arxiv.org/abs/2005.00566
\begin{center}
\textbf{Relationships between the number of inputs and other complexity measures of Boolean functions}
\end{center}
\noindent\emph{Abstract.} We generalize and extend the ideas in a recent paper of Chiarelli, Hatami and Saks to prove new bounds on the number of relevant variables for boolean functions in terms of a variety of complexity measures. Our approach unifies and refines all previously known bounds of this type. We also improve Nisan and Szegedy's well-known block sensitivity vs.\ degree inequality by a constant factor, thereby improving Huang's recent proof of the sensitivity conjecture by the same constant.
\section{Introduction}
Is a boolean function $f: \{0,1\}^n \to \{0,1\}$ necessarily ``complex'' simply because it takes many input variables? (Of course one has to count only the \textit{relevant} inputs, ignoring any dummy variables which $f$ does not actually need.) In 1983, Simon \cite{Simon} answered this question in the affirmative, showing that the number of relevant variables $n(f)$ of a boolean function $f$ is bounded above by $s(f)4^{s(f)}$, where $s(f)$ is the sensitivity of $f$. Combined with earlier work by Cook, Dwork and Reischuk \cite{CD} showing an $\Omega(\log s(f))$ lower bound on the parallel (CREW-PRAM) complexity of $f$, Simon's theorem implies that any function with $n$ relevant inputs takes $\Omega(\log \log n)$ time to evaluate on the worst case input, even with an \textit{unbounded} number of processors working in parallel. A decade later, Nisan and Szegedy \cite{NS} proved a similar upper bound on $n(f)$ in terms of the degree of $f$, namely
\be \label{Nisan_Szeg}
n(f) \leq \deg(f) \cdot 2^{\deg(f) - 1},
\ee
which was very recently improved to
\be \label{CHS_bd}
n(f) \leq 6.614 \cdot 2^{\deg(f)}
\ee
by Chiarelli, Hatami and Saks \cite{CHS}. While the proof of (\ref{Nisan_Szeg}) uses the average sensitivity or total influence $\textbf{I}[f]$ as a potential function, the proof of (\ref{CHS_bd}) requires a potential based on a local version of degree for each coordinate $i$, namely $\deg_i(f) = \deg(f(x) - f(x \oplus e_i))$.
In this paper, we generalize the measure $\deg_i$ to a class of measures with the same essential properties. This provides a common framework for proving nearly-tight bounds on $n(f)$ in terms of various complexity measures. In particular, we give short, unified proofs of the theorems of Simon, Nisan-Szegedy and Chiarelli-Hatami-Saks, as well as a variety of new and improved bounds, which we summarize in the following theorem.
\begin{theorem}
For any boolean function $f$,
\bea \nn
n(f) &\leq& 4.394 \cdot 2^{\deg(f)} \\ \nn
n(f) &\leq& \frac{1}{2}\cdot 4^{C(f)} \\ \nn
n(f) &\leq& 8.277\cdot 2^{\frac{\deg(f)}{2} + s(f)} \\ \nn
n(f) &\leq& (\log{s(f)} + 0.29)\cdot 4^{\frac{C(f) + s(f)}{2}}.
\eea
Moreover, if $f$ is monotone, then
\be \nn
n(f) \leq \min\left\{1.325 \cdot 2^{\deg(f)}, \, \, \frac{1}{2}\cdot 4^{s(f)}, \, \, \frac{1}{4}\cdot 2^{\emph{\text{DT}}(f)} + 2 \right\}.
\ee
\end{theorem}
In addition to our bounds on $n(f)$, we also improve another inequality from the same classic paper of Nisan and Szegedy \cite{NS}, namely
\be
\text{bs}(f) \leq \deg(f)^2
\ee
(where $\text{bs}(f)$ denotes the \textit{block sensitivity} of $f$).
\begin{theorem}\label{intro_improved_bs_deg}
For any boolean function $f$,
\be
\emph{\text{bs}}(f) \leq \sqrt{2/3}\cdot \deg(f)^2 + 1.
\ee
\end{theorem}
\noindent \textbf{Organization:} We provide definitions of all the relevant complexity measures in Section \ref{sec_prelim}. Then in Section \ref{sec_variables}, we first give a high-level overview of our method for proving bounds on $n(f)$, and then define our generalized coordinate measures in Section~\ref{sec_RRCM}. The rest of Section \ref{sec_variables} is devoted to proving those bounds for a variety of complexity measures. In Section \ref{sec_bs_d}, we prove Theorem \ref{intro_improved_bs_deg}. Finally in Section \ref{sec_future_directions}, we discuss some open problems related to our work.
\section{Preliminaries}\label{sec_prelim}
All functions $f$ in this paper will be assumed to be boolean valued on $\{0,1\}^n$. We will refer to the input variables of such functions either by $x_i$ or simply by the index $i$, for each $i \in \{1, \dots, n\} =: [n]$. We define $R(f)$ to be the set of relevant variables/coordinates for $f$, namely those $i \in [n]$ for which there exists a pair of inputs $(x, x')$ such that $x_j = x'_j$ for all $j \neq i$ and $f(x) \neq f(x')$. We write $\delta_i(f)$ for the indicator function of whether $i \in R(f)$. We also define $$n(f) := |R(f)| = \sum_{i \in [n]}\delta_i(f)$$ to be the number of relevant variables for $f$.
\subsection{Complexity measures}
For each $f$ there is a corresponding multilinear polynomial over $\{0,1\}^n$, which we call the multilinear polynomial expansion of $f$. The degree of this polynomial is the \textbf{degree} of $f$, denoted $\deg(f)$.\\
For any string $x \in \{0,1\}^n$ and a subset $S \subseteq [n]$, we let $x^S$ denote the string obtained by flipping the bits of $x$ belonging to $S$ and leaving the rest alone. If $S = \{i\}$, we simply write $x^i$ to denote $x$ with the $i$th bit flipped. If $f(x) \neq f(x^i)$, we say that $f$ is sensitive to $i$ at $x$. The sensitivity of $f$ at an input $x$, denoted $s_x(f)$, is the number of $i \in [n]$ for which $f$ is sensitive to $i$ at $x$. The maximum of $s_x(f)$ over all $x \in \{0,1\}^n$ is called the \textbf{sensitivity} of $f$ and is denoted $s(f)$. The 1-sensitivity (resp. 0-sensitivity) of $f$, denoted $s^1(f)$ (resp. $s^0(f)$), is the maximum of $s_x(f)$ over all inputs $x$ with $f(x) = 1$ (resp. 0). \\
The block sensitivity at the point $x$ of a boolean function $f: \{0,1\}^n \to \{0,1\}$, denoted $\text{bs}_x(f)$, is the maximum number $k$ such that there exist $k$ disjoint sets $B_1, \dots, B_{k} \subseteq [n]$ (called blocks) with the property that
$$f(x) \neq f(x^{B_i}), \, \, \text{for } i = 1, \dots, k.$$
We then define the \textbf{block sensitivity} of $f$ to be the maximum value of $\text{bs}_x(f)$ over all $x \in \{0,1\}^n$, and we denote it by $\text{bs}(f)$. \\
The certificate complexity at the point $x$ of a boolean function $f$, denoted $C_x(f)$, is the size of the smallest set $S \subseteq [n]$ with the property that $f$ is constant on the subcube of points which agree with $x$ on $S$, i.e. $\{y: y_i = x_i \text{ for all } i \in S\}$. The \textbf{certificate complexity} of $f$, denoted $C(f)$, is then defined as the maximum value of $C_x(f)$ over all $x \in \{0,1\}^n$. Also, let $C_{\min}(f) := \min_{x \in \{0,1\}^n} C_x(f)$. By analogy with $s^0(f)$ and $s^1(f)$, we can also define $C^0(f)$, $C^1(f)$, $C_{\min}^0(f)$ and $C_{\min}^1(f)$ in the obvious way.\\
It is easy to show (see \cite{Nisan}) that $s(f) \leq \text{bs}(f) \leq C(f)$ always, and that for any \textit{monotone} boolean function, the three measures actually coincide:
\be \label{s=C} s(f) = \text{bs}(f) = C(f). \ee
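For a small monotone function the coincidence (\ref{s=C}) can be checked by brute force; since $s(f) \leq \text{bs}(f) \leq C(f)$ always holds, verifying $s(f) = C(f)$ pins down all three measures. The sketch below (a toy example, using the majority function on three bits) does exactly this.

```python
from itertools import product, combinations

def maj3(x):                     # monotone: majority of three bits
    return int(sum(x) >= 2)

n = 3
cube = list(product((0, 1), repeat=n))

def flip(x, i):
    y = list(x); y[i] ^= 1; return tuple(y)

# sensitivity: max over x of the number of sensitive coordinates at x
s = max(sum(maj3(x) != maj3(flip(x, i)) for i in range(n)) for x in cube)

# certificate complexity at x: smallest S such that fixing x's bits
# on S makes the function constant on the corresponding subcube
def cert(x):
    for size in range(n + 1):
        for S in combinations(range(n), size):
            subcube = [y for y in cube if all(y[i] == x[i] for i in S)]
            if len({maj3(y) for y in subcube}) == 1:
                return size

C = max(cert(x) for x in cube)
```

Since $s = C = 2$ here, the sandwich $s \leq \text{bs} \leq C$ forces $\text{bs} = 2$ as well.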
The \textbf{decision tree depth} of $f$, denoted $\text{DT}(f)$, is defined to be the minimum cost of any deterministic, adaptive query algorithm which always computes $f$ correctly. (The cost of such an algorithm is defined to be the maximal number of queries used by the algorithm to compute $f(x)$, taken over all $x \in \{0,1\}^n$.) \\
The \textbf{$\varepsilon$-approximate degree} of a boolean function $f: \{0,1\}^n \to \{0,1\}$ is the smallest $d$ for which there exists a degree $d$ (multilinear) polynomial $p(x_1, \dots, x_n)$ such that
$$|p(x) - f(x)| \leq \varepsilon\, \, \, \text{ for all } x \in\{0,1\}^n,$$
and we denote this quantity by $\widetilde{\deg}_\varepsilon(f).$ If we omit the $\varepsilon$ and simply write $\widetilde{\deg}(f)$, it should be understood to mean $\widetilde{\deg}_{1/3}(f).$ This is the canonical and somewhat arbitrary choice -- replacing $1/3$ by any other constant can only change the value of $\widetilde{\deg}(f)$ by a constant factor.
\subsection{Fourier influence}
Any function $f : \{0,1\}^n \to \{0,1\}$ can also be viewed as a function from $\{\pm 1\}^n \to \{\pm 1\}$ via the obvious affine transformation, and then expressed as a linear combination $f(x) = \sum_{S \subseteq [n]} \hat{f}(S) x^S$ of monomials $x^S = \prod_{i \in S} x_i$ (here $x \in \{\pm1\}^n$, so $x^S$ denotes a monomial rather than the bit-flipping operation above). The coefficients $\hat{f}(S)$ are the Fourier coefficients of $f$. For each coordinate $i$, we define the \emph{$i$th coordinate influence} of a function $f$, denoted $\text{Inf}_i[f]$, as
\be\nn \text{Inf}_i[f] := \Pr_{x \sim \{\pm 1\}^n}[f(x) \neq f(x^i)] \ee and the \emph{total influence} of $f$, denoted by $\textbf{I}[f]$, is defined to be $\sum_{i = 1}^n \text{Inf}_i[f]$. We'll need the following well-known Fourier formulas for influence:
$$\text{Inf}_i[f] = \sum_{S \ni i} \hat{f}(S)^2, \, \, \, \, \textbf{I}[f] = \sum_{S \subseteq [n]} |S|\hat{f}(S)^2$$
as well as the following well-known fact about influence and restrictions:\footnote{We use the notation $f_{\alpha}$ for $\alpha \in \{0,1\}^H$ to denote the function obtained from $f$ by restricting the coordinates in $H$ according to the partial assignment $\alpha$.}
\begin{fact}\label{inf_avg} For any $i \in [n]$, and any set $H \subset [n]$ with $i \not \in H$,
$$\emph{\text{Inf}}_i[f] = \E_{\alpha \sim \{0,1\}^H}[\emph{\text{Inf}}_i[f_{\alpha}]].$$
\end{fact}
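Both the Fourier formula for $\text{Inf}_i[f]$ and Fact \ref{inf_avg} are easy to confirm by enumeration on a small example; the sketch below does so for the majority function on three bits (a hypothetical choice for illustration), computing the Fourier coefficients directly from the definition $\hat{f}(S) = \E[f(x)x^S]$.

```python
from itertools import product, combinations

def maj(x):                      # majority on {-1,+1}^3
    return 1 if sum(x) > 0 else -1

n = 3
cube = list(product((-1, 1), repeat=n))
subsets = [S for k in range(n + 1) for S in combinations(range(n), k)]

def chi(x, S):                   # character x^S = prod_{i in S} x_i
    p = 1
    for i in S:
        p *= x[i]
    return p

fhat = {S: sum(maj(x) * chi(x, S) for x in cube) / len(cube) for S in subsets}

def flip(x, i):
    y = list(x); y[i] = -y[i]; return tuple(y)

# combinatorial influence vs. the Fourier formula sum_{S contains i} fhat(S)^2
inf0_comb = sum(maj(x) != maj(flip(x, 0)) for x in cube) / len(cube)
inf0_four = sum(fhat[S] ** 2 for S in subsets if 0 in S)

# Fact (restriction averaging) with H = {2}: average Inf_0 over both
# restrictions of the last coordinate
def inf0_restricted(b):
    pts = [x for x in cube if x[2] == b]
    return sum(maj(x) != maj(flip(x, 0)) for x in pts) / len(pts)

inf0_avg = (inf0_restricted(-1) + inf0_restricted(1)) / 2
```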
\section{Improved bounds on the number of variables}\label{sec_variables}
\subsection{Overview}\label{sec_overview}
Our goal is to develop a unified framework for proving bounds on $n(f)$ in terms of various complexity measures like $\deg(f)$, $s(f)$ and $C(f)$. The key player in each proof is a certain ``coordinate version'' $m_i$ of each complexity measure $m$, which is engineered to behave in a certain way with respect to restrictions of variables (see Definition \ref{RRCM}). We call such $m_i$ ``restriction reducing coordinate measures'' (RRCMs) and form the corresponding potential functions
\be\label{potential}
\textbf{M}(f) := \sum_{i \in [n]} \frac{\delta_i(f)}{2^{m_i(f)}}.
\ee
The defining properties of RRCMs are chosen to guarantee that, for any $H \subseteq [n]$, $\textbf{M}$ always obeys the inequality
\be\label{H_rest}
\textbf{M}(f) \leq \sum_{i \in H}\frac{\delta_i(f)}{2^{m_i(f)}} + \E_{\alpha \sim \{0,1\}^H}[\textbf{M}(f_\alpha)].
\ee
This enables us to bound $\textbf{M}(f)$ recursively, assuming we choose the set of coordinates $H$ in such a way that the restrictions $f_{\alpha}$ are guaranteed to have \textit{lower complexity}, in some sense. Upper bounds on $\textbf{M}(f)$ naturally yield exponential upper bounds on $n(f)$ in terms of $m(f)$. We make these definitions precise below in the next subsection, and each subsequent subsection describes a different implementation of the general strategy above, yielding new bounds.
\subsection{Restriction-reducing coordinate measures}\label{sec_RRCM}
Let us say a functional $m$ on boolean functions is an \emph{i-coordinate measure} if $\delta_i(f) = 0 \implies m(f) = 0$.
\begin{defn}\label{RRCM}
We say an $i$-coordinate measure $m_i$ is \emph{restriction reducing} if, for any $j \in [n]\setminus\{i\}$, and each $b \in \{0,1\}:$
\begin{itemize}
\item[(1)] $m_i(f_{j=b}) \leq m_i(f)$
\item[(2)] if $\delta_i(f) = 1$ and $\delta_i(f_{j=b}) = 0$, then $m_i(f_{j=1-b}) \leq m_i(f) - 1.$
\end{itemize}
\end{defn}
We denote by $\mathcal{R}_i$ the set of restriction reducing $i$-coordinate measures. We abuse notation slightly and write $\{m_i\} \in \mathcal{R}_i$ to denote that $m_i \in \mathcal{R}_i$ for each $i \in [n]$. Properties (1) and (2) were essentially chosen to make the following a fact:
\begin{fact}\label{restriction_single_fact}
Let $m_i \in \mathcal{R}_i$, and let $j \in [n] \setminus \{i\}$. Then
\be\label{restriction_single} \delta_i(f)2^{-m_i(f)} \leq \frac{\delta_i(f_{j=0})2^{-m_i(f_{j=0})} + \delta_i(f_{j=1})2^{-{m_i(f_{j=1})}}}{2}.\ee
\end{fact}
\begin{proof}
If $\delta_i(f_{j=0}) = \delta_i(f_{j=1}) = 1$, then property (1) of Definition \ref{RRCM} implies that both $2^{-m_i(f_{j=0})} \geq 2^{-m_i(f)}$ and $2^{-m_i(f_{j=1})} \geq 2^{-m_i(f)}$, which implies (\ref{restriction_single}). Otherwise, suppose without loss of generality that $\delta_i(f_{j=0}) = 0$ and $\delta_i(f_{j=1}) = 1$. Then property (2) of Definition \ref{RRCM} implies that $2^{-m_i(f_{j=1})} \geq 2 \cdot 2^{-m_i(f)}$ which also implies (\ref{restriction_single}).
\end{proof}
Fact \ref{restriction_single_fact} extends easily by induction to larger restrictions:
\begin{fact}\label{m_rest}
For any $i \in [n]$ and any $H \subset [n]$ with $i \not \in H$, and any $\{m_i\} \in \mathcal{R}_i$,
\be \delta_i(f)2^{-m_i(f)} \leq \E_{\alpha \sim \{0,1\}^H}\left[\delta_i(f_\alpha)2^{-m_i(f_\alpha)}\right].\ee
\end{fact}
\begin{proof}
We proceed by induction on $|H|$. The base case $H = \{j\}$ is Fact \ref{restriction_single_fact}. For the inductive step, observe that if $\delta_i(f)2^{-m_i(f)} \leq \E_{\alpha \sim \{0,1\}^H}\left[\delta_i(f_\alpha)2^{-m_i(f_\alpha)}\right]$ holds for all $f$ with $H = H_1$ or $H_2$, then it holds for $H = H_1 \sqcup H_2$, since
\bea\nn
\delta_i(f)2^{-m_i(f)} &\leq& \E_{\alpha_1 \sim \{0,1\}^{H_1}}\left[\delta_i(f_{\alpha_1})2^{-m_i(f_{\alpha_1})}\right] \\\nn
&\leq& \E_{\alpha_1 \sim \{0,1\}^{H_1}}\left[\E_{\alpha_2 \sim \{0,1\}^{H_2}}\left[\delta_i(f_{\alpha_1, \alpha_2})2^{-m_i(f_{\alpha_1, \alpha_2})}\right]\right] \\
\nn &=& \E_{\alpha \sim \{0,1\}^{H_1 \sqcup H_2}}\left[\delta_i(f_\alpha)2^{-m_i(f_\alpha)}\right].
\eea
\end{proof}
For any $\{m_i\} \in \mathcal{R}_i$, we can define the associated potential function $\textbf{M}$ via equation (\ref{potential}). By Fact \ref{m_rest}, $\textbf{M}$ satisfies the inequality (\ref{H_rest}) for any set $H \subseteq [n]$ of restricted coordinates. Next we introduce three explicit families of RRCMs, the first of which ($\deg_i$) was introduced in \cite{CHS}:
\begin{defn}\label{RRCMs}
For each $i \in [n]$, define the $i$-coordinate measures
\bea
\label{deg_i}\deg_i(f) &:=& \deg(f(x) - f(x^i))\\
\label{sens_i}\emph{\text{sens}}_i(f) &:=& \max_{\{x \, : f(x) \neq f(x^i)\}} s_x(f) + s_{x^i}(f) \\
\label{cert_i}\emph{\text{cert}}_i(f) &:=& \max_{\{x \, : f(x) \neq f(x^i)\}} C_x(f) + C_{x^i}(f)
\eea
\end{defn}
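As a concrete check (a hypothetical toy example, not part of the argument), the sketch below computes $\deg_1$ and $\text{sens}_1$ for $f = x_1 \wedge x_2$ by brute force, obtaining the multilinear expansion of $f(x) - f(x \oplus e_1)$ via M\"obius inversion of the coefficients.

```python
from itertools import product, combinations

n = 2
def f(x):                 # AND of two bits
    return x[0] & x[1]

cube = list(product((0, 1), repeat=n))

def flip(x, i):
    y = list(x); y[i] ^= 1; return tuple(y)

def degree(g):
    # multilinear coefficient of x^S via Moebius inversion:
    # c_S = sum_{T subseteq S} (-1)^{|S|-|T|} g(1_T)
    d = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            c = sum((-1) ** (k - len(T))
                    * g(tuple(1 if i in T else 0 for i in range(n)))
                    for j in range(k + 1) for T in combinations(S, j))
            if c != 0:
                d = max(d, k)
    return d

# deg_i for i = 1 (Python index 0): degree of f(x) - f(x^1)
deg_1 = degree(lambda x: f(x) - f(flip(x, 0)))

def s_at(x):
    return sum(f(x) != f(flip(x, j)) for j in range(n))

# sens_i: max over x with f(x) != f(x^i) of s_x(f) + s_{x^i}(f)
sens_1 = max(s_at(x) + s_at(flip(x, 0))
             for x in cube if f(x) != f(flip(x, 0)))
```

Here $f(x) - f(x \oplus e_1) = (2x_1 - 1)x_2$, so $\deg_1(f) = 2 = \deg(f)$, and the only sensitive pair for coordinate $1$ is $\{(0,1),(1,1)\}$ with $s_{(0,1)} + s_{(1,1)} = 1 + 2 = 3$.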
\begin{lem}\label{rest_reduce_lem}
For each $i \in [n]$, the coordinate measures $\deg_i$, $\emph{\text{sens}}_i$, and $\emph{\text{cert}}_i$ all belong to $\mathcal{R}_i$.
\end{lem}
\begin{proof}
Since $\deg(\cdot)$, $s_x(\cdot)$ and $C_x(\cdot)$ cannot possibly increase by restricting input variables, property (1) of Definition \ref{RRCM} is trivially satisfied for each of the coordinate measures in question. To see that (2) holds, we abbreviate $f_{j=b}$ by $f_b$ and assume without loss of generality that $\delta_i(f_0) = 0$.
First we argue that $\deg_i(f_1) = \deg_i(f) - 1$. We can write $f(x) = x_jf_1(x) + (1-x_j)f_0(x)$. Since $x_i$ does not appear in $(1-x_j)f_0(x)$, it follows that $f(x) - f(x^i) = x_j(f_1(x) - f_1(x^i))$ from which it is clear that $\deg_i(f) = 1 + \deg_i(f_1)$.
Next we argue $\text{sens}_i(f_1) = \text{sens}_i(f) - 1$. Let $x$ be any input for which $f(x) \neq f(x^i)$, and let us write $y$ for the string which is $x$ with the $j$th bit omitted. Since $f_0$ does not depend on $i$, it must be that $f_0(y^i) = f_0(y)$. Therefore all such $x$ must have $x_j = 1$, so $f_1(y) = f(x) \neq f(x^i) = f_1(y^i)$. But then $j$ must be sensitive for $f$ at exactly one of $x^i$ or $x$, hence $s_x(f) + s_{x^i}(f) = s_x(f_1) + s_{x^i}(f_1) + 1$.
Finally we argue $\text{cert}_i(f_1) = \text{cert}_i(f) - 1$, which essentially follows from the previous paragraph. Indeed, as above, all $x$ for which $i$ is sensitive for $f$ must have $x_j = 1$, and $j$ must be sensitive for exactly one of $x$ or $x^i$ -- suppose it is $x$ (wlog). Then any certificate for $f$ which agrees with $x$ must assign 1 to $x_j$, since if it were allowed to be flipped, the certificate could not make $f$ constant. The claim follows.
\end{proof}
\begin{lem}\label{inf_bound}
Let $m_i$ be a restriction reducing $i$-coordinate measure and set $r := \min\{m_i(x \mapsto x_i), m_i(x \mapsto \neg x_i)\}$. Then for any boolean function $f$,
\be \label{ineq_i}\delta_i(f)2^{-m_i(f)} \leq 2^{-r} \cdot \emph{\text{Inf}}_i[f]. \ee
Hence $\emph{\textbf{M}}(f) \leq 2^{-r} \cdot \emph{\textbf{I}}[f]$ and for any $k \in \N$, at most $\emph{\textbf{I}}[f] \cdot 2^{k-r}$ relevant variables can have $m_i(f) \leq k$.
\end{lem}
\begin{proof} We proceed by induction on $n(f)$. If $n(f) = 1$, then the corollary follows from the definition of $r$ and the fact that $\text{Inf}_i[f] \leq 1$. Now suppose the desired inequality holds for all $f'$ with $n(f') < n(f)$, and we wish to show it holds for $f$ as well. Then by the induction hypothesis and Fact \ref{restriction_single_fact},
\be \delta_i(f)2^{-m_i(f)} \leq \frac{2^{-r} \cdot \text{Inf}_i[f_{j=0}] + 2^{-r} \cdot \text{Inf}_i[f_{j=1}]}{2} = 2^{-r} \cdot \text{Inf}_i[f]\ee
where the final equality is Fact \ref{inf_avg}. If we sum this inequality over $i \in R(f)$, we obtain
\be
\textbf{M}(f) = \sum_{k = 0}^{\infty}\frac{|\{i \in R(f) \, : \, m_i(f) = k\}|}{2^k} \leq 2^{-r}\textbf{I}[f]
\ee
which in particular implies that at most $\textbf{I}[f] \cdot 2^{k-r}$ variables in $R(f)$ have $m_i(f) \leq k$.
\end{proof}
\begin{obs}\emph{
Applying Lemma \ref{inf_bound} to the measures $\deg_i$ and $\text{sens}_i$ immediately yields both Nisan-Szegedy's and Simon's theorems\footnote{Note that $\textbf{I}[f] \leq \deg(f)$ and $\textbf{I}[f] \leq s(f)$.}. Indeed, $\min\{\deg_i(x \mapsto x_i), \deg_i(x \mapsto \neg x_i)\} = 1$ and $\min\{\text{sens}_i(x \mapsto x_i), \text{sens}_i(x \mapsto \neg x_i)\} = 2$, so \bea n(f) &\leq& \textbf{I}[f]\cdot 2^{\deg(f)-1} \\ \label{NS+Simon} n(f) &\leq& \textbf{I}[f] \cdot 4^{s(f)-1}.\eea
}
\end{obs}
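As a numerical sanity check of the first bound (a toy sketch, not part of the proof): for the ``not-all-equal'' function on three bits one has $\deg(f) = 2$ and $\textbf{I}[f] = 3/2$, so the bound reads $n(f) \leq \frac{3}{2}\cdot 2 = 3$, which is tight. The code below verifies all three quantities by brute force.

```python
from itertools import product, combinations

n = 3
def nae(x):               # "not all equal": degree 2 as a multilinear polynomial
    return int(len(set(x)) > 1)

cube = list(product((0, 1), repeat=n))

def flip(x, i):
    y = list(x); y[i] ^= 1; return tuple(y)

relevant = [i for i in range(n)
            if any(nae(x) != nae(flip(x, i)) for x in cube)]

total_inf = sum(sum(nae(x) != nae(flip(x, i)) for x in cube) / len(cube)
                for i in range(n))

def degree(g):            # max |S| with nonzero multilinear coefficient
    d = 0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            c = sum((-1) ** (k - len(T))
                    * g(tuple(1 if i in T else 0 for i in range(n)))
                    for j in range(k + 1) for T in combinations(S, j))
            if c != 0:
                d = max(d, k)
    return d

d = degree(nae)
bound = total_inf * 2 ** (d - 1)   # the bound n(f) <= I[f] * 2^{deg(f)-1}
```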
\subsection{Degree}
Let $\textbf{D}(f) := \sum_{i \in [n]}\frac{\delta_i(f)}{2^{\deg_i(f)}}$, and for any $H \subseteq [n]$, let $\textbf{D}(H, f) = \sum_{i \in H}\frac{\delta_i(f)}{2^{\deg_i(f)}}$. For any $d \in \N$, let $\textbf{D}_d = \max_{\{f \, : \, \deg(f) \leq d\}}\textbf{D}(f)$. In \cite{CHS}, the authors argue that one can always find a set $H$ of $\leq \deg(f)^3$ coordinates such that (i) $\deg_i(f) = \deg(f)$ $\forall i \in H$ and (ii) $\deg(f_{\alpha}) < \deg(f)$ for all $\alpha \in \{0,1\}^H$. This implies $\textbf{D}_d \leq \frac{d^3}{2^d} + \textbf{D}_{d-1}$, and hence that $\textbf{D}(f) < \sum_{d = 1}^{\infty}\frac{d^3}{2^d} = 26$ for all $f$. Combined with the observation that $\textbf{D}_d \leq \frac{d}{2}$ (see Lemma \ref{inf_bound}), this yields Chiarelli, Hatami and Saks' final bound $\textbf{D}(f) \leq \frac{11}{2} + \sum_{d = 12}^\infty \frac{d^3}{2^d} \approx 6.614$.
In this subsection, we implement their argument in a slightly different way to obtain a slightly stronger bound. In particular, rather than choosing $H$ to be a minimal set of coordinates which covers all max degree monomials in $f$, we choose $H$ to be the variables in a \textit{single monomial} of $f$. Restricting this set of coordinates may not reduce the degree of $f$, but as shown below, it \textit{will} reduce the block sensitivity of $f$. Hence, as we'll want to induct on both degree and block sensitivity simultaneously, we define
$$\textbf{D}_{b, d} : = \max_{\substack{f \text{ with }\text{bs}(f) \leq b \\ \text{ and } \deg(f) = d }} \textbf{D}(f).$$
We also define $B_d := \max_{\deg(f) = d}\text{bs}(f)$, and make the convention that $\textbf{D}_{b,d} = 0$ whenever $b > B_d$.
\begin{lem}\label{bs decr}
If $M$ is a monomial of degree $d = \deg(f)$ which appears in $f$ with non-zero coefficient, then for any assignment $\alpha: M \to \{0,1\}$, the restricted function $f_\alpha$ has \emph{$\text{bs}(f_\alpha) \leq \text{bs}(f) - 1.$ }
\end{lem}
\begin{proof}
Let us write any string $x \in \{0,1\}^n$ as $x = (x_M, y)$, where $x_M \in \{0,1\}^M$ and $y \in \{0,1\}^{[n]\setminus M}$. We claim that for any $(x_M, y)$, there is always a sensitive block for $f$ contained entirely in $M$. Indeed, for any $y$, the function $f(\cdot, y)$ has degree $d$, since nothing can cancel with the maximal monomial $\prod_{i \in M}x_i$. In particular, it is not constant, so for any input $x_M$, there is always at least one sensitive block for $f(\cdot, y)$ at $x_M$. Therefore, $\text{bs}_{y}(f_{\alpha}) + 1 \leq \text{bs}_{(\alpha, y)}(f)$, and the lemma follows.
\end{proof}
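Lemma \ref{bs decr} can be checked exhaustively on a small instance (a toy example): the ``not-all-equal'' function on three bits has degree $2$, with $x_1x_2$ among its top monomials, and $\text{bs}(f) = 3$; every restriction of $\{x_1, x_2\}$ leaves a function of block sensitivity at most $2$, as the lemma requires.

```python
from itertools import product, combinations

def nae(x):               # not-all-equal; degree 2 and x1*x2 is a top monomial
    return int(len(set(x)) > 1)

def flip(x, B):
    y = list(x)
    for i in B:
        y[i] ^= 1
    return tuple(y)

def bs(f, n):
    # brute-force block sensitivity: maximal family of disjoint blocks
    # that are all sensitive at the same point x
    cube = list(product((0, 1), repeat=n))
    def best(x, avail):
        m = 0
        for k in range(1, len(avail) + 1):
            for B in combinations(avail, k):
                if f(flip(x, B)) != f(x):
                    rest = tuple(i for i in avail if i not in B)
                    m = max(m, 1 + best(x, rest))
        return m
    return max(best(x, tuple(range(n))) for x in cube)

full_bs = bs(nae, 3)      # equals 3 (three singleton blocks at 000 or 111)

# restrict the top monomial M = {x1, x2}; each restriction depends only on x3
def restricted(a, b):
    return lambda x: nae((a, b, x[0]))

restricted_bs = [bs(restricted(a, b), 1) for a in (0, 1) for b in (0, 1)]
```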
\begin{lem}\label{bd_recursive}
For each $b,d$ with $b \leq B_d$, we have
$$\emph{\textbf{D}}_{b, d} \leq d \cdot2^{-d} + \max_{k \in \{1, \dots, d \}} \emph{\textbf{D}}_{b - 1, k} $$
\end{lem}
\begin{proof}
Suppose $f$ has $\deg(f) = d$ and $\text{bs}(f) \leq b$. Let $M$ be any degree $d$ monomial in $f$. Using (\ref{H_rest}),
\be \textbf{D}(f) \leq \underbrace{|M|\cdot 2^{-d}}_{= \, d \cdot 2^{-d}} + \underset{\alpha \sim \{0,1\}^M}{\E}[\textbf{D}(f_\alpha)].
\ee
By Lemma \ref{bs decr}, each $f_{\alpha}$ has $\text{bs}(f_\alpha) \leq b - 1$. Since $\textbf{D}_{b, d}$ is monotone in $b$ (for feasible $b \leq B_d$), it follows that for each $\alpha$, $\textbf{D}(f_\alpha) \leq \textbf{D}_{b - 1, k}$, where $k = \deg(f_{\alpha}).$ Taking the maximum over all values of $k \in \{1, \dots, d\}$ yields a uniform bound that holds for all restrictions $f_\alpha$.
\end{proof}
\begin{coro}\label{bd_cor}For every $f$, and every $d \geq 1$,
\bea\emph{\textbf{D}}(f) &\leq& \left(\emph{\textbf{D}}_{B_d, d} + \frac{(d+1)B_{d+1}}{2^{d+1}} + \sum_{k = d+2}^{\infty} \frac{k(B_k - B_{k-1})}{2^k}\right) \\ &\leq& \left(\emph{\textbf{D}}_{B_d, d} + \frac{(d+1)^3}{2^{d+1}} + \sum_{k = d+2}^{\infty} \frac{2k^2-k}{2^k}\right).
\eea
\end{coro}
Lemma \ref{bd_recursive} yields explicit bounds on $\textbf{D}_{b, d}$ for any finite $(b,d)$, which in turn yields an explicit bound on $\textbf{D}(f)$ for any $f$ via Corollary \ref{bd_cor}. Incorporating the influence bound $\textbf{D}_{b, d} \leq \frac{d}{2}$, we build up a table of upper bounds $D(b,d)$ recursively, using the rule
\be\label{dynamic}
D(b,d) = \begin{cases} \min\left\{\frac{d}{2}, \, \max_{k \in \{1, \dots, d \}}\left\{ d\cdot 2^{-d} + D(b - 1, k) \right\} \right\} & \text{for } b \leq B_d \\ 0 & \text{for } b > B_d \end{cases}
\ee
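For concreteness, the recursion (\ref{dynamic}) can be tabulated in a few lines; the Python sketch below (floating-point, so the values are indicative rather than certified) takes the boundary condition $B_d$ as a parameter.

```python
def build_table(bmax, dmax, B):
    # D[b][d] follows the recursion (dynamic): D(b, d) = 0 when b > B(d),
    # otherwise the minimum of the influence bound d/2 and the recursive
    # bound d * 2^(-d) + max_{1 <= k <= d} D(b - 1, k).
    D = [[0.0] * (dmax + 1) for _ in range(bmax + 1)]
    for b in range(1, bmax + 1):
        for d in range(1, dmax + 1):
            if b > B(d):
                continue  # no function has deg(f) = d and bs(f) = b
            rec = d * 2.0 ** (-d) + max(D[b - 1][k] for k in range(1, d + 1))
            D[b][d] = min(d / 2.0, rec)
    return D

# boundary condition B_d = d^2
table = build_table(900, 30, lambda d: d * d)
```

With $B_d = d^2$, the entry at $(b,d) = (30^2, 30)$ evaluates to just over $5.078$, consistent with the bound quoted below.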
Taking $B_d = d^2$ (which is valid by (\ref{bs_1_deg}) below) and extracting bounds recursively already shows that $\textbf{D}(f) < 5.0782$, but we can further improve this by obtaining sharper upper bounds on $B_d$. For values of $d \leq 14$ (which contribute the most to $D(b,d)$ anyway), we can obtain such bounds by manually checking feasibility of a certain linear program, as shown below. (This reduction is inspired in part by ideas of Nisan and Szegedy in \cite{NS}.)
\begin{fact}\label{bs_reduction}
If there exists a function $f : \{0,1\}^n \to \{0,1\}$ of degree $d$ with block sensitivity $b$, then there exists another function $g: \{0,1\}^b \to \{0,1\}$ of degree $\leq d$ with $g(0) = 0$ and $g(w) = 1$ for each vector $w$ of Hamming weight 1.
\end{fact}
\begin{proof}
If $f(x)$ attains maximal block sensitivity at $z$, then $f(x \oplus z)$ attains maximal block sensitivity at 0, so without loss of generality we may assume $z = 0$, and possibly replacing $f$ by $1-f$ we may also assume that $f(0) = 0$. If $B_1, \dots, B_b$ are sensitive blocks for $f$ at 0, then define $$g(y_1, \dots, y_b) = f(\underbrace{y_1, \dots, y_1}_{B_1}, \dots, \underbrace{y_b, \dots, y_b}_{B_b})$$
so that for each coordinate vector $e_i$, $g(e_i) = f(\textbf{1}_{B_i}) = 1 - f(0) = 1$, since $B_i$ is a sensitive block for $f$ at 0.
\end{proof}
For any $d \geq 1$, define the moment map $m_d: \R \to \R^d$ by $m_d(t) = (t, t^2,\dots, t^d)$.
\begin{prop}\label{bs_LP}
If there exists a degree $d$ function $f: \{0,1\}^n \to \{0,1\}$ with block sensitivity $b$, then there exists $\tau \in \{0,1\}$ such that the following set of linear inequalities has a solution $p \in \R^d$:
\bea \nn
\langle p, m_d(1) \rangle &=& 1 \\ \label{LP}
0 \leq \langle p, m_d(k) \rangle &\leq & 1 \,\, \text{ for each } k \in \{2, \dots, b-1\} \\ \nn
\langle p, m_d(b) \rangle &=& \tau
\eea
\end{prop}
\begin{proof}
If such an $f$ exists, then let $q(x_1, \dots, x_b) = \frac{1}{b!}\sum_{\sigma \in S_b}g(x_{\sigma(1)}, \dots, x_{\sigma(b)})$, where $g$ comes from Fact \ref{bs_reduction}, and set $\tau = g(1, 1, \dots, 1)$. It is well known (see \cite{BdW}) that there is a univariate polynomial $p: \R \to \R$ of degree at most $d$ such that for any $x \in \{0,1\}^b$, $q(x_1, \dots, x_b) = p(x_1 + \dots + x_b)$. For each $k \in \{1, \dots, b\}$, $p(k)$ is therefore the average value of $g$ on boolean vectors with Hamming weight $k$, so in particular $p(k) \in [0,1]$. We also know $p(0) = g(0) = 0$, $p(b) = g(1,\dots, 1) = \tau$, and $p(1) = \frac{1}{b}\sum_i g(e_i) = 1$, and hence the coefficients of $p$ provide a solution to the set of linear inequalities. \end{proof}
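In practice the feasibility check in Proposition \ref{bs_LP} is easy to script. The sketch below uses scipy's \texttt{linprog} in floating-point, so it is only indicative; the computation reported next was carried out with exact rational arithmetic.

```python
import numpy as np
from scipy.optimize import linprog

def moments(t, d):
    # the moment map m_d(t) = (t, t^2, ..., t^d)
    return [float(t) ** j for j in range(1, d + 1)]

def lp_feasible(d, b):
    """True if the system (LP) has a solution p in R^d for some tau in {0,1}."""
    for tau in (0.0, 1.0):
        A_eq = np.array([moments(1, d), moments(b, d)])
        b_eq = np.array([1.0, tau])
        rows = np.array([moments(k, d) for k in range(2, b)])
        A_ub = np.vstack([rows, -rows])  # encodes 0 <= <p, m_d(k)> <= 1
        b_ub = np.concatenate([np.ones(len(rows)), np.zeros(len(rows))])
        res = linprog(np.zeros(d), A_ub=A_ub, b_ub=b_ub,
                      A_eq=A_eq, b_eq=b_eq, bounds=[(None, None)] * d)
        if res.status == 0:  # an optimal (hence feasible) point was found
            return True
    return False

def lp_bound(d, bmax):
    # largest b in [3, bmax] for which (LP) remains feasible
    return max(b for b in range(3, bmax + 1) if lp_feasible(d, b))
```

For example, \texttt{lp\_bound(2, 8)} returns $3$, matching the entry $B_2 \leq 3$ of Table \ref{bs table}.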
Using the simplex method with exact (rational) arithmetic in Maple, we compute the largest $b$ for which the LP (\ref{LP}) is feasible for each $1 \leq d \leq 14$; this yields the upper bounds on $B_d$ summarized in Table \ref{bs table}. Recomputing the table $D(b,d)$ as in (\ref{dynamic}), with $B_d$ given by Table \ref{bs table} for $d \leq 14$ (and $B_d = d^2$ for $d > 14$), we find that $D(30^2, 30) \leq 4.4157\dots$, which implies
\be
\textbf{D}(f) \leq 4.4158
\ee
for all $f$. If we incorporate the main result of Section \ref{sec_bs_d}, which implies that
$$B_d^2 - B_d \leq \frac{2}{3}(d^4 - d^2)$$
into the table $D(b,d)$, we obtain the slightly stronger result $\textbf{D}_{\infty} \leq 4.3935$, which implies
\begin{theorem}\label{deg_thm}
For all $f$, $n(f) \leq 4.3935 \cdot 2^{\deg(f)}$.
\end{theorem}
\begin{table}\centering
\resizebox{0.8\textwidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|l|l|}
\hline
$d$ &1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 \\ \hline
$B_d \leq$ &1 & 3 & 6 & 10 & 15 & 21 & 29 & 38 & 47 & 58 & 71 & 84 & 99 & 114 \\ \hline
\end{tabular}%
}
\caption{LP bounds on block sensitivity for low degree functions.}
\label{bs table}
\end{table}
\subsection{Certificate complexity}
Now let us define the analogous quantities for certificate complexity. Let $\textbf{C}(f) := \sum_{i \in [n]}\frac{\delta_i(f)}{2^{\text{cert}_i(f)}}$, and for any $H \subseteq [n]$, let $\textbf{C}(H, f) = \sum_{i \in H}\frac{\delta_i(f)}{2^{\text{cert}_i(f)}}$. For any $d \in \N$, we also define $\textbf{C}_d = \max_{\{f \, : \, \deg(f) \leq d\}}\textbf{C}(f)$.
\begin{theorem}\label{C<1/2}
For any $d \geq 1$, $\emph{\textbf{C}}_d \leq \frac{1}{2}$.
\end{theorem}
\begin{proof}
Let $f$ be a boolean function with $\deg(f) = d$. For any certificate $C$ for $f$, let $H$ be the set of variables fixed by $C$. It follows from (\ref{H_rest}) that
\be \label{C_recur}\textbf{C}(f) \leq \textbf{C}(H, f) + \E_{\alpha \sim \{0,1\}^H}[\textbf{C}(f_{\alpha})].\ee
Since $C$ is a certificate, we know $\deg(f_{\alpha}) \leq d-1$ for all $\alpha$, and $\deg(f_{\alpha^*}) = 0$ for \emph{some} $\alpha^* \in \{0,1\}^H$. So, $\textbf{C}(f_{\alpha^*}) = 0$, and we can improve (\ref{C_recur}) to
\be\textbf{C}(f) \leq \textbf{C}(H, f) + \left(1 - 2^{-|C|}\right)\textbf{C}_{d-1}.\ee
Now take $C$ to be the \emph{globally smallest certificate} for $f$, so that $\text{cert}_i(f) \geq 2|H|$ for all $i \in H$, and in particular
\be \label{C_recur_improved}
\textbf{C}(f) \leq |H|\cdot 4^{-|H|} + \left(1 - 2^{-|H|}\right) \textbf{C}_{d-1}.
\ee
Since $c\cdot 2^{-c} \leq \frac{1}{2}$ for $c \geq 1$, inequality (\ref{C_recur_improved}) implies that $\textbf{C}_d \leq \alpha \cdot \frac{1}{2} + (1 - \alpha)\cdot \textbf{C}_{d-1}$ for some $\alpha \in [0,1]$. Therefore if $\textbf{C}_{d-1} \leq \frac{1}{2}$ for some $d$, then also $\textbf{C}_d \leq \frac{1}{2}$. The theorem then follows by induction on $d$, noting that $\textbf{C}_1 = \frac{1}{4} < \frac{1}{2}.$
\end{proof}
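The induction in the proof can also be checked numerically: iterating the worst-case form of (\ref{C_recur_improved}), namely $\textbf{C}_d \leq \max_{h \geq 1}\{h\cdot 4^{-h} + (1-2^{-h})\textbf{C}_{d-1}\}$, keeps the sequence below $\frac{1}{2}$. A minimal sketch (the cutoff $h \leq 64$ is harmless, since the maximum is attained at small $h$):

```python
def cert_potential_bounds(dmax, hmax=64):
    # iterate C_d <= max_h { h * 4^(-h) + (1 - 2^(-h)) * C_{d-1} },
    # the worst case of (C_recur_improved), starting from C_1 = 1/4
    C = [0.0, 0.25]
    for d in range(2, dmax + 1):
        C.append(max(h * 4.0 ** (-h) + (1 - 2.0 ** (-h)) * C[d - 1]
                     for h in range(1, hmax + 1)))
    return C

C = cert_potential_bounds(100)
```

The sequence increases toward the fixed point $\frac{1}{2}$ (attained at $h = 1$) without ever exceeding it.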
Since $\textbf{C}(x \mapsto x_1) = \frac{1}{4}$, Theorem \ref{C<1/2} cannot be improved by more than a factor of 2. In any case, we have the following immediate corollary:
\begin{theorem}\label{C_thm}
For any $f$, $n(f) \leq \frac{1}{2} \cdot 4^{C(f)}$.
\end{theorem}
Finally, we use an implementation similar to the one above to give a proof of a stronger bound on $n(f)$ in terms of $\deg(f)$ for \textit{monotone} functions $f$.
\begin{theorem}\label{mon_deg}
For monotone functions $f$, $n(f) \leq 1.325 \cdot 2^{\deg(f)}$.
\end{theorem}
\begin{proof}
We let $\widetilde{\textbf{D}}_d$ denote the maximum value of $\textbf{D}(f)$ over all monotone functions of degree at most $d$. Given a monotone $f$ of degree $d$, let $H$ be the set of variables fixed by any minimal 0-certificate $C$. By monotonicity, $f(0_H, 1_{\overline{H}}) = 0$, so by minimality of $H$, each $i \in H$ must be sensitive for $f$ at the input $(0_H, 1_{\overline{H}})$. Therefore restricting the variables in $\overline{H}$ to 1 yields an $\text{OR}$ function on $H$, and hence each $i \in H$ has $\deg_i(f) \geq |H|$. If we restrict all of the variables in $H$ (so that one of the restrictions, namely $C$ itself, is constant), we get the analogue of (\ref{C_recur_improved}):
\be
{\textbf{D}}(f) \leq |H| \cdot 2^{-|H|} + \left(1-2^{-|H|}\right)\widetilde{\textbf{D}}_{d-1}.
\ee
If instead we restrict only those variables $i \in H$ with $\deg_i(f) = d$, we obtain
\be
{\textbf{D}}(f) \leq |H|\cdot 2^{-d} + \widetilde{\textbf{D}}_{d-1}.
\ee
Combining these two inequalities yields
\be\label{combined_ineq}
\widetilde{\textbf{D}}_d \leq \max_{1\leq k \leq d}\left\{ \min\left(k \cdot 2^{-k} + (1-2^{-k})\widetilde{\textbf{D}}_{d-1}, \, k \cdot 2^{-d} + \widetilde{\textbf{D}}_{d-1}\right)\right\}.
\ee
Note that $\widetilde{\textbf{D}}_1 = \widetilde{\textbf{D}}_2 = \frac{1}{2}$, since the only monotone functions of degree exactly two are $\text{AND}_2$ and $\text{OR}_2$. Starting with these values and using (\ref{combined_ineq}) to recursively compute bounds on $\widetilde{\textbf{D}}_{d}$, we find that $\widetilde{\textbf{D}}_{30} \leq 1.3243$, and hence ${\textbf{D}}(f) \leq 1.3243 + \sum_{d=31}^{\infty}\frac{d}{2^d} < 1.325$.
\end{proof}
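The computation can be reproduced with a few lines; the sketch below iterates (\ref{combined_ineq}) in floating-point (indicative values only):

```python
def monotone_degree_bounds(dmax):
    # iterate recursion (combined_ineq), starting from Dt[1] = Dt[2] = 1/2
    Dt = [0.0, 0.5, 0.5]
    for d in range(3, dmax + 1):
        prev = Dt[-1]
        Dt.append(max(min(k * 2.0 ** (-k) + (1 - 2.0 ** (-k)) * prev,
                          k * 2.0 ** (-d) + prev)
                      for k in range(1, d + 1)))
    return Dt

Dt = monotone_degree_bounds(30)
# add the tail sum accounting for degrees above 30
total = Dt[30] + sum(d / 2.0 ** d for d in range(31, 100))
```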
\noindent \textbf{Remark:} In \cite{CHS}, a function of degree $d$ with $1.5 \cdot 2^d - 2$ relevant variables is constructed. Therefore, Theorem \ref{mon_deg} implies that all monotone functions of a given degree have at least $11\%$ fewer variables than do certain general functions of the same degree.
\subsection{Sensitivity}
Define $\textbf{S}(f) := \sum_{i \in [n]} \frac{\delta_i(f)}{2^{\text{sens}_i(f)}}$ and $\textbf{S}(H, f) = \sum_{i \in H} \frac{\delta_i(f)}{2^{\text{sens}_i(f)}}$ for any $H \subseteq [n]$. In light of the previous subsections, it seems natural to expect that one should be able to prove a bound $\textbf{S}(f) = O(1)$ for any $f$ using a similar inductive argument, thereby improving Simon's theorem (in the same sense that \cite{CHS} improved the Nisan--Szegedy bound). However, choosing a good $H$ to restrict for $\textbf{S}$ is tricky business -- neither choice from the previous two subsections will work in general here. Despite this challenge, we believe such a bound does hold, so we leave it as a conjecture and provide some evidence in favor of it below.
\begin{conj}\label{s_conj}
For any $f$, $n(f) \lesssim 4^{s(f)}$. More strongly, $\emph{\textbf{S}}(f) \lesssim 1$.
\end{conj}
Our first piece of supporting evidence for Conjecture \ref{s_conj} comes from a direct combination of (\ref{s=C}) with Theorem \ref{C<1/2}:
\begin{theorem}\label{mon_s}
For any monotone $f$, $n(f) \leq \frac{1}{2} \cdot 4^{s(f)}$.
\end{theorem}
Theorem \ref{mon_s} is especially interesting in light of the fact that the tightest known example in Simon's theorem is monotone. Our next two pieces of evidence are corollaries of the following lemma, which is essentially a consequence of Huang's theorem.
\begin{lem}\label{num_sens}
For any function $f$, and any monomial $M$ appearing in $f$, the number of variables $i \in M$ with $\emph{\text{sens}}_i(f) \leq k$ is at most $(k-1)^2$. The same is true of any $M$ with $\hat{f}(M) \neq 0$.
\end{lem}
\begin{proof}
Let $B = \{i \in M \, : \text{sens}_i(f) \leq k\}$, and let $M' \subseteq M$ be a \emph{minimal} monomial containing $B$, in the sense that no other monomial $N$ has $B \subseteq N \subset M'$. Let $\alpha$ be a partial assignment which sets all coordinates in $[n] \setminus M'$ to arbitrary values in $\{0, 1\}$, and sets those in $M' \setminus B$ to 1. Consider the polynomial $f_{\alpha}$, which depends only on the variables in $B$. If $f_{\alpha}$ does not have full degree $|B|$, this could only be because the term $c_{M'}\prod_{i \in B}x_i\prod_{i \in M'\setminus B} x_i$ cancelled with another term of the form $c_N\prod_{i \in B}x_i \prod_{i \in N \setminus B} x_i$ when $M'\setminus B$ was restricted to 1. But this could only happen if $B \subseteq N \subset M'$, which by minimality of $M'$ cannot happen. Therefore $|B| = \deg(f_{\alpha})$, and by Huang's theorem \cite{Huang}, $ \deg(f_{\alpha}) \leq s(f_{\alpha})^2$. But since $\text{sens}_i(f) \leq k$ for each $i \in B$, $s(f_{\alpha}) \leq \max_{i \in B} \text{sens}_i(f) - 1 \leq k - 1,$ and hence $|B| \leq (k-1)^2$.
To prove the same for a monomial $M$ appearing in the Fourier transform, we switch to $\pm 1$ notation and observe that if $z \sim \{\pm 1\}^{[n] \setminus M}$ is a random assignment to the variables outside of $M$, then
\bea\nn
\E_{z \sim \{\pm 1\}^{[n] \setminus M}}[\widehat{f_z}(M)^2] &=& \E_{z \sim \{\pm 1\}^{[n] \setminus M}}\left[\left(\sum_{T \supseteq M}\hat{f}(T)\chi_{T\setminus M}(z)\right)^2\right]\\ \nn &=& \sum_{T \supseteq M}\hat{f}(T)^2 \geq \hat{f}(M)^2 > 0
\eea
and hence there exists some restriction $f_z :\{\pm 1\}^M \to \{\pm 1\}$ with $\widehat{f_z}(M) \neq 0$. Therefore we can apply Huang's theorem as above and reach the same conclusion.
\end{proof}
As we show below, Lemma \ref{num_sens} implies a bound on the number of variables $i \in R(f)$ with $\text{sens}_i(f) \leq k$. Unlike the bound $\frac{1}{4}\textbf{I}[f] \cdot 2^k$ (from Lemma \ref{inf_bound}), this bound only depends on $k$, and not the (average) sensitivity of the function $f$, and for $k \ll \sqrt{\textbf{I}[f]}$ it says something much stronger.
\begin{coro}
Let $v: \N \to \N$ be any increasing function such that $\sum_{k=1}^{\infty} \frac{k}{v(k)} = C_v < \infty$. Then any boolean function $f$ has at most $C_v \cdot v(k)\cdot 2^k$ relevant variables with $\emph{\text{sens}}_i(f) \leq k$. In particular, the number of such variables is $O_{\epsilon}(k^{2+\epsilon}2^k)$ for any $\epsilon > 0$.
\end{coro}
\begin{proof}
For simplicity we consider only $v(k) = k^3$; the same proof works in general. Let $S$ be any set with $\hat{f}(S) \neq 0$, and let $a_k = |\{i \in S \, : \, \text{sens}_i(f) = k\}|$. Then by Lemma \ref{num_sens}, for each $\ell$ we have $\sum_{k=2}^{\ell} a_k \leq (\ell - 1)^2$. Note that the solution to the (integer) linear program
\bea \nn
&&\text{maximize } \sum_{k=2}^\infty \frac{a_k}{k^3} \\ &&\text{ subject to } \begin{cases} \sum_{k=2}^{\infty}a_k = d \\ \sum_{k=2}^{\ell} a_k \leq (\ell - 1)^2 & \text{ for all } \ell \geq 1
\end{cases}
\eea
occurs when as much weight is put on the lower values of $k$ as possible, which means setting $a_k = 2k - 3$. Therefore,
\bea\nn
\sum_{i \in S} \frac{1}{\text{sens}_i(f)^3} = \sum_{k \geq 2}\frac{a_k}{k^3} \leq \sum_{k = 2}^{\infty} \frac{2k-3}{k^3} =: c < \infty.
\eea
By Parseval's identity $\sum_{S \subseteq [n]} \hat{f}(S)^2 = 1$, this implies
\bea
c \geq \sum_{S \subseteq [n]}\sum_{i \in S} \frac{1}{\text{sens}_i(f)^3}\hat{f}(S)^2 = \sum_{i \in R(f)} \frac{\text{Inf}_i[f]}{\text{sens}_i(f)^3}
\eea
On the other hand, by Lemma \ref{inf_bound}, $\text{Inf}_i[f] \geq 2^{-\text{sens}_i(f)}$. Then
\be
c \geq \sum_{i \in R(f)}\frac{1}{2^{\text{sens}_i(f)}\text{sens}_i(f)^3} \geq \frac{1}{k^32^k} \cdot |\{i \in R(f) \, : \, \text{sens}_i(f) \leq k\}|,
\ee
from which the corollary follows.
\end{proof}
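For the record, the greedy solution and the constant $c$ can be checked with a short script (truncating the series, whose tail decays like $\sum 1/k^2$):

```python
# the greedy solution a_k = 2k - 3 saturates every partial-sum constraint
for ell in range(2, 200):
    assert sum(2 * k - 3 for k in range(2, ell + 1)) == (ell - 1) ** 2

# numerical value of c = sum_{k >= 2} (2k - 3)/k^3 (truncated)
c = sum((2 * k - 3) / k ** 3 for k in range(2, 10 ** 6))
```

Exactly, $c = 2(\zeta(2) - 1) - 3(\zeta(3) - 1) \approx 0.6837$.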
A similar argument also works to show that $\textbf{S}(M,f) = O(1)$ for any monomial occurring in $f$.
\begin{coro}\label{monomial_sens}
For any function $f$, and any monomial $M$ of degree $d$ in $f$,
$$\emph{\textbf{S}}(M, f) \leq \sum_{k = 2}^{\lfloor \sqrt{d} + 1 \rfloor} \frac{2k-3}{2^{k}} + \frac{d - \lfloor \sqrt{d} \rfloor^2}{2^{\lfloor \sqrt{d} + 2 \rfloor}} < 1.5.$$
\end{coro}
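The explicit expression in Corollary \ref{monomial_sens} is easy to evaluate; the sketch below confirms numerically that it stays below $1.5$ over a range of degrees:

```python
from math import isqrt

def monomial_sens_bound(d):
    # evaluates the explicit bound of Corollary (monomial_sens); note that
    # floor(sqrt(d) + 1) = isqrt(d) + 1 and floor(sqrt(d) + 2) = isqrt(d) + 2
    r = isqrt(d)
    return (sum((2 * k - 3) / 2.0 ** k for k in range(2, r + 2))
            + (d - r * r) / 2.0 ** (r + 2))
```

The full series $\sum_{k \geq 2} (2k-3)/2^k$ equals $1.5$ exactly, so the truncated expression above always falls strictly below it.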
Finally, we remark that most of the known functions\footnote{For example, the address function, the monotone address function, and the low-depth large-junta construction of Kane \cite{Kane} all have this property.} with low sensitivity compared to the number of relevant variables have the property that almost all of their variables never get to ``interact'' -- that is, they are never simultaneously sensitive. Below, we give a simple tensorization argument which implies that any function $f$ with this property must obey the bound on $n(f)$ in Conjecture \ref{s_conj}.
\begin{lem}
Suppose we can write $R(f) = Y \sqcup Z$, where for every input $x$, the set $s(f,x)$ of sensitive coordinates for $f$ at $x$ has $|s(f, x) \cap Y| \leq 1$. Then $|Y| < 4^{s(f)}$.
\end{lem}
\begin{proof}
Replacing each $y \in Y$ with a copy of $f$ on $n(f)$ fresh variables $x^y$, we obtain a function $f_2$ with $s(f_2) \leq s(f) + s(f) - 1$, since at most one of the $y$ variables and $s(f) - 1$ other variables are sensitive in $f$, which is really at most $s(f)$ ``new'' variables and $s(f) - 1$ ``old'' variables in $f_2$. Also, it is clear that $n(f_2) = |Z| + |Y|(|Y| + |Z|) \geq |Y|^2$. Recursively, we let $f_k$ be the function obtained from $f$ by replacing each $y \in Y$ with a copy of $f_{k-1}$ on fresh variables. By the same reasoning as the $k = 2$ case, we see that $s(f_k) \leq s(f) + s(f_{k-1}) - 1 \leq ks(f) - (k-1)$ and $n(f_k) \geq |Y|^k$. If $|Y| \geq 4^{s(f)}$, then $n(f_k) \geq 4^{ks(f)} = 4^{k-1} \cdot 4^{ks(f)-(k-1)} \geq 4^{k-1} \cdot 4^{s(f_k)}$, which contradicts Simon's theorem for $k$ large enough so that $4^{k-1} > ks(f) - (k-1) \geq s(f_k)$. Therefore, $|Y| < 4^{s(f)}$ as claimed.
\end{proof}
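The bookkeeping in the proof can be traced with a short script; the instance below (with hypothetical values $s(f) = 3$, $|Y| = 4^{s(f)} = 64$, $|Z| = 0$) iterates the recursions $s(f_k) \leq s(f) + s(f_{k-1}) - 1$ and $n(f_k) \geq |Z| + |Y| \cdot n(f_{k-1})$ and watches $n(f_k)$ outgrow $4^{s(f_k)}$ geometrically:

```python
def tensorize(s, Y, Z, kmax):
    # iterate the proof's recursions: s_k <= s + s_{k-1} - 1 and
    # n_k >= Z + Y * n_{k-1}, starting from s_1 = s, n_1 = Y + Z
    s_k, n_k = s, Y + Z
    rows = []
    for k in range(2, kmax + 1):
        s_k = s + s_k - 1
        n_k = Z + Y * n_k
        rows.append((k, s_k, n_k))
    return rows

# hypothetical instance: s(f) = 3, |Y| = 4^3 = 64, |Z| = 0
rows = tensorize(3, 64, 0, 8)
```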
\subsection{Mixing measures}
The goal of this subsection is to prove bounds on the number of relevant variables in terms of multiple complexity measures \emph{simultaneously}, e.g.
\be
n(f) \lesssim 2^{\frac{\deg(f)}{2} + s(f)}.
\ee
Note that such a bound would follow from taking the geometric mean of Theorem \ref{deg_thm} with Conjecture \ref{s_conj}; however, we can give a direct and unconditional proof using the methodology we have already developed, combined with the following simple observation:
\begin{obs}
$\mathcal{R}_i$ is convex.
\end{obs}
We therefore define, for any $\beta \in [0,1]$, the coordinate measures $ds^\beta_i, cs^\beta_i \in \mathcal{R}_i$ via
\bea
ds_i^\beta(f) &:=& \beta \cdot \deg_i(f) + (1 - \beta) \cdot \text{sens}_i(f) \\
cs_i^\beta(f) &:=& \beta \cdot \text{cert}_i(f) + (1 - \beta) \cdot \text{sens}_i(f)
\eea
and the associated potentials
\bea
\textbf{DS}^\beta(f) &:=& \sum_{i \in [n]}\frac{\delta_i(f)}{2^{ds_i^\beta(f)}} \\
\textbf{CS}^\beta(f) &:=& \sum_{i \in [n]}\frac{\delta_i(f)}{2^{cs_i^\beta(f)}}.
\eea
\begin{prop}\label{mixed} For each $\beta \in (0,1]$, $\emph{\textbf{DS}}^\beta(f) = O_\beta(1)$. In particular, for $\beta = 1/2$, $\emph{\textbf{DS}}^{\beta}(f) < 8.277$ for all $f$.
\end{prop}
\begin{proof} Since $\beta > 0$, we can essentially let $\deg_i$ do the legwork while $\text{sens}_i$ simply hangs on for a free ride. Indeed, let $$\textbf{DS}^\beta_d := \max_{\{f \, : \, \deg(f) = d\}} \textbf{DS}^\beta(f)$$ and suppose $f$ is any function of degree $d$. Let $C$ be any minimal certificate for $f$, and let $H$ be the variables $i$ which are fixed by $C$. Let $H' \subseteq H$ be those $i \in H$ with $\deg_i(f) = d$. Since any restriction $\alpha$ of $H'$ lowers the degree of $f_\alpha$, inequality (\ref{H_rest}) implies
\be \textbf{DS}^{\beta}(f) \leq \frac{d^3}{2^{2-\beta}(2^\beta)^d} + \textbf{DS}^{\beta}_{d-1},
\ee
where we have used that $ds_i^\beta(f) \geq \beta d + 2(1-\beta)$ (so that $2^{-ds_i^\beta(f)} \leq 2^{-\beta d - 2(1-\beta)}$) for each of the variables $i \in H'$, and that $|H'| \leq C(f) \leq \text{DT}(f) \leq \deg(f)^3$ by a result of Midrijanis \cite{Midj}. By induction, we then have for all $d$ that
\be \textbf{DS}_d^{\beta} \leq \sum_{i = 1}^{d}\frac{i^3}{2^{2-\beta}(2^{\beta})^i} < \infty \ee
which implies the desired conclusion with constant $\frac{1}{2^{2-\beta}}\sum_{i = 1}^{\infty}\frac{i^3}{(2^{\beta})^i}$, but this can be improved dramatically. By Lemma \ref{inf_bound}, $\textbf{DS}^{\beta}(f) \leq \frac{1}{2^{2-\beta}}\textbf{I}[f]$, so $\textbf{DS}^{\beta}(f) \leq \min_{k \geq 1} \{\frac{k}{2^{2-\beta}} + \sum_{i \geq k+1} \frac{i^3}{2^{2-\beta}(2^{\beta})^i}\}.$ For $\beta = \frac{1}{2}$, this minimum occurs at $k = 32$, giving
\be \nn
\textbf{DS}^{\frac{1}{2}}(f) \leq \frac{1}{2^{1.5}}\left(32 + \sum_{i = 33}^{\infty} \frac{i^3}{2^{i/2}}\right) = 11.602.
\ee
We can also keep track of block sensitivity and turn the crank of Lemma \ref{bd_recursive}, which ultimately yields a bound $\textbf{DS}^{\frac{1}{2}}(f) \leq 8.277$.
\end{proof}
\begin{coro}\label{ds_bound}
For any $f$, $n(f) \leq 8.277\cdot 2^{\frac{\deg(f)}{2} + s(f)}$.
\end{coro}
Corollary \ref{ds_bound} implies, in particular, that for every $\varepsilon > 0$, either $n(f) \leq \frac{1}{\varepsilon} \cdot 4^{s(f)}$ or $n(f) < 70\cdot\varepsilon \cdot 2^{\deg(f)}$. In other words, any function $f$ which fails to satisfy Conjecture \ref{s_conj} has $n(f) = o(2^{\deg(f)})$. We can prove a similar result for $\textbf{CS}^\beta(f)$, although the bound is slightly worse at $\beta = 1/2$ (by a logarithmic factor). The idea is similar to the proof of Theorem \ref{C<1/2}, although the recursive bound that arises is a bit more complicated. To deal with this, we make use of a small technical lemma.
\begin{lem}\label{technical}
Suppose a non-negative sequence $\{A_d\}_{d \in \N}$ satisfies \be A_{d+1} \leq \max_{h \in \N} \left\{ B\cdot h \cdot \alpha^{h} + \left(1-\frac{1}{2^h}\right)A_d \right\}
\ee
for some constant $B$ and some $\alpha \leq \frac{1}{2}$, and every $d \geq 1$. Then $A_d = O_{B,\alpha}(\log d)$. If $\alpha < 1/2$, then $A_d = O_{B,\alpha}(1)$.
\end{lem}
\begin{proof}
First we treat the easy case $\alpha < 1/2$. In this case, we can write $B\cdot h \cdot \alpha^h = (B\cdot h \cdot \gamma^h)\cdot \frac{1}{2^h}$, for some $\gamma < 1$. Since $m := \max_{h \in \N} B\cdot h \cdot \gamma^h < \infty$, we can set $C = \max\{A_1, m\}$. If $A_d \leq C$, then for every $h$, $\frac{B\cdot h\cdot \gamma^h}{2^h} + (1 - \frac{1}{2^h})A_d \leq C\cdot(\frac{1}{2^h} + (1-\frac{1}{2^h})) = C$, and hence $A_{d+1} \leq C$, and the conclusion follows by induction.
Now suppose $\alpha = \frac{1}{2}$. We can loosen the upper bound by allowing $h$ to take on any positive real value. By reparameterizing $h$ as $h\ln 2$ and modifying the constant $B$, the upper bound is maximized when
\be
\frac{d}{dx}\left[e^{-x}(Bx - A_d)\right] = 0 \implies x = 1 + A_d/B
\ee
and hence $A_{d+1} \leq (B + A_d)e^{-(1 + A_d/B)} + A_d(1-e^{-(1+A_d/B)}) = A_d + Be^{-(1+A_d/B)}$. Note that the function $a \mapsto a + Be^{-(1+a/B)}$ is increasing for $a \geq 0$, so any bound of the form $A_d \leq A^*$ implies the bound $A_{d+1} \leq A^* + Be^{-(1+A^*/B)}$.
We'll prove by induction on $d$ that $A_d \leq C\sum_{i=1}^d \frac{1}{i}$, where $C := \max\{A_1, B\}$. The base case is clear. Suppose the bound holds for some $d \geq 1$.
Since $A_d \leq C\sum_{i=1}^d \frac{1}{i}$, the above reasoning implies
\bea \nn
A_{d+1} &\leq& C\sum_{i=1}^d \frac{1}{i} + (B/e)e^{-(C/B)\sum_{i=1}^d \frac{1}{i}} \\ \nn &\leq& C\sum_{i=1}^d \frac{1}{i} + (B/e)e^{-\ln(d+1)} \\ \nn &=& C\sum_{i=1}^d \frac{1}{i} + \frac{B/e}{d+1} \\ \nn &\leq& C\sum_{i=1}^{d+1} \frac{1}{i},
\eea
where we have used the Riemann sum inequality $\sum_{i=1}^d \frac{1}{i} \geq \ln(d+1)$.
\end{proof}
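The $\alpha = \frac{1}{2}$ case can be sanity-checked numerically by iterating $A_{d+1} = A_d + Be^{-(1+A_d/B)}$ and comparing against $C\sum_{i=1}^d \frac{1}{i}$ (the values of $A_1$ and $B$ below are arbitrary illustrative choices):

```python
import math

def iterate_bound(A1, B, dmax):
    # A_{d+1} = A_d + B * exp(-(1 + A_d / B)), the optimized recursion
    A = [0.0, A1]
    for _ in range(2, dmax + 1):
        A.append(A[-1] + B * math.exp(-(1 + A[-1] / B)))
    return A

# arbitrary illustrative choices of A_1 and B
A = iterate_bound(0.5, 0.5, 2000)
C = max(0.5, 0.5)  # C = max{A_1, B}
```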
\begin{prop}
For each $\beta \in (\frac{1}{2},1]$, $\emph{\textbf{CS}}^{\beta}(f) = O_{\beta}(1)$. For $\beta = \frac{1}{2}$, $\emph{\textbf{CS}}^{\frac{1}{2}}(f) \leq \log s(f) + \frac{\gamma}{2}$, where $\gamma \approx 0.5772$ is the Euler-Mascheroni constant.
\end{prop}
\begin{proof} Let
$$\textbf{CS}^\beta_d := \max_{\{f \, : \, \deg(f) = d\}} \textbf{CS}^\beta(f)$$
and let $f$ be any function with degree $d$. Let $C$ be a globally minimal certificate for $f$, and let $H$ be the variables $i$ which are fixed by $C$. We note that it follows from global minimality of the certificate $C$ that $\text{cert}_i(f) \geq 2|H|$ for all $i \in H$. Then apply (\ref{H_rest}) to obtain the analogue of (\ref{C_recur_improved}):
\be
\textbf{CS}^{\beta}(f) \leq |H|4^{-\beta\cdot|H| - (1-\beta)} + (1-2^{-|H|})\textbf{CS}^{\beta}_{d-1}
\ee
Hence the sequence $\{\textbf{CS}^{\beta}_{d}\}_{d \in \N}$ satisfies the conditions of Lemma \ref{technical}, with $B = 4^{-(1-\beta)}$ and $\alpha = 4^{-\beta}$. Hence, for $\beta > \frac{1}{2}$, $\alpha < \frac{1}{2}$ and $\textbf{CS}^{\beta}(f) = O_{\beta}(1)$. For $\beta = \frac{1}{2}$, we are in the $\alpha = \frac{1}{2}$ case of the lemma, and since $\max\{B,\textbf{CS}_{1}^{\frac{1}{2}}\} = \frac{1}{2}$, we conclude
\bea \nn
\textbf{CS}^{\frac{1}{2}}(f) &\leq& \frac{1}{2}\sum_{i=1}^d \frac{1}{i} \\ \nn &\leq& \frac{1}{2}\log \deg(f) + \gamma/2 \\ \nn &\leq& \log s(f) + \gamma/2,
\eea
where the final inequality is Huang's theorem, $s(f) \geq \sqrt{\deg(f)}$.
\end{proof}
\begin{coro}
For any $\beta \in (\frac{1}{2}, 1]$ and any $f$, $n(f) \lesssim_{\beta} 4^{\beta\cdot C(f) + (1-\beta)\cdot s(f)}$, and $n(f) \leq (\log s(f) + \frac{\gamma}{2})\cdot 4^{\frac{C(f) + s(f)}{2}}$.
\end{coro}
\subsection{Decision tree depth}
For decision tree depth -- unlike the other complexity measures considered thus far -- getting a tight bound on $n(f)$ is trivial. Indeed, a depth $d$ binary tree has at most $2^d - 1$ nodes, so $n(f) \leq 2^{\text{DT}(f)} - 1$, and this is obtained by the function which queries a different variable at each node. However, the question becomes nontrivial when restricted to \emph{monotone} boolean functions. Let us denote the set of monotone boolean functions of depth $d$ by $\mathcal{M}_d$ and define the quantities
\be\nn
\textbf{R}^{\text{DT}}_d := \max_{f \in \mathcal{M}_d} n(f) \hspace{5pt} \text{ and } \hspace{5pt} \textbf{R}^{\text{DT}} := \limsup_{d \to \infty} \frac{\textbf{R}^{\text{DT}}_d}{2^d}.
\ee
It is quite possible (and, we believe, probably true) that $\textbf{R}^{\text{DT}} = 0$ -- see Section \ref{sec_future_directions} for comments. In this section, we give a proof that \be\label{mon_DT}\textbf{R}^{\text{DT}} \leq \frac{1}{4}.\ee Here we do not use the general restriction-reduction strategy of the previous sections. Instead, our main idea is in the following lemma, which essentially says that unless both of the subfunctions $f_0, f_1$ of a node in a monotone decision tree have very short certificates, they must share a significant number of relevant variables.\footnote{This property, of course, does not hold in general for non-monotone decision trees!}
\begin{lem}\label{DT_intersect} Let $f_0$, $f_1$ be the two subfunctions from the root node in a monotone decision tree. If neither $f_0$ nor $f_1$ is constant, then
$$C_{\min}^0(f_0) + C_{\min}^1(f_1) \leq |R(f_0) \cap R(f_1)| + 1.$$
\end{lem}
\begin{proof}
We first claim that every assignment to $R(f_0) \cap R(f_1)$ must either force $f_0 = 0$ or force $f_1 = 1$.
To see this, let $C:= R(f_0) \cap R(f_1)$. Let us decompose any assignment $\alpha$ to $R(f)$ into $(\alpha_x, \alpha_0, \alpha_1, \alpha_C)$, where each component is the assignments to $x$ (the root node), $R(f_0) \setminus R(f_1)$, $R(f_1) \setminus R(f_0)$, and $C$ respectively. Suppose for the sake of contradiction that there is some assignment $\beta_C$ to $C$ which does not force $f_0 = 0$ or $f_1 = 1$ -- then we can pick assignments $\alpha, \alpha'$ such that: (i) $f_0(\alpha_0, \beta_C) = 1$ and (ii) $f_1(\alpha'_1, \beta_C) = 0$. But then $f(0, \alpha_0, \alpha'_1, \beta_C) = 1$ and $f(1, \alpha_0, \alpha'_1, \beta_C) = 0$, which violates monotonicity of $f$ since $(0, \alpha_0, \alpha'_1, \beta_C) \prec (1, \alpha_0, \alpha'_1, \beta_C)$, which proves our claim.
Now fix some ordering $x_1, \dots, x_{|C|}$ of $C$ and consider the $|C|+1$ assignments $$\alpha_i := (\underbrace{1, \dots, 1}_{i}, \underbrace{0, \dots, 0}_{|C| - i}), \text{ for } i = 0, 1, \dots, |C|.$$ By the claim above, each $\alpha_i$ forces either $f_0 = 0$ or $f_1 = 1$. In particular, we know that $\alpha_0$ forces $f_0 = 0$ and $\alpha_{|C|}$ forces $f_1 = 1$. (Indeed, if $\alpha_0$ does not force $f_0 = 0$, then $f_1 \equiv 1$, which we assumed is not the case, and likewise for $\alpha_{|C|}$.) Therefore, since $\alpha_i \prec \alpha_{i+1}$, there must be some $0 \leq i \leq |C| - 1$ for which $\alpha_i$ forces $f_0 = 0$ and $\alpha_{i+1}$ forces $f_1 = 1$. Hence, by monotonicity, there is a 1-certificate for $f_1$ fixing only the variables $\{x_1, \dots, x_{i+1}\}$ to 1, and a 0-certificate for $f_0$ fixing only the variables $\{x_{i+1}, \dots, x_{|C|}\}$ to 0. This implies $C_{\min}^0(f_0) + C_{\min}^1(f_1) \leq i + 1 + |C| - i = |C| + 1$.
\end{proof}
We also need the following standard fact, whose easy proof we omit:
\begin{fact}\label{DT and/or}
Let $g$ be any function which does not depend on the variable $a$. Then $\emph{\text{DT}}(a \lor g)= \emph{\text{DT}}(a \land g) = 1 + \emph{\text{DT}}(g)$.
\end{fact}
\begin{lem}\label{DT_bound}
For $d \geq 2$, $\emph{\textbf{R}}^{\emph{\text{DT}}}_d \leq \max\left\{2 \emph{\textbf{R}}^{\emph{\text{DT}}}_{d-1} - 2, \,2+2\emph{\textbf{R}}^{\emph{\text{DT}}}_{d-2}, \, 1+\emph{\textbf{R}}^{\emph{\text{DT}}}_{d-1} \right\}.$
\end{lem}
\begin{proof}
Let $f \in \mathcal{M}_d$, for $d \geq 2$. We consider the possible values of $c_0 := C_{\min}^0(f_0)$ and $c_1:= C_{\min}^1(f_1)$. If either $c_0 = 0$ or $c_1 = 0$ (i.e. one of the subfunctions is constant), then $n(f) \leq 1 + \textbf{R}_{d-1}^{\text{DT}}$.
Otherwise, $c_0, c_1 \geq 1$ and Lemma \ref{DT_intersect} applies. If $\min\{c_0,c_1\} \geq 2$, then $c_0 + c_1 \geq 4$, and so by the lemma,
\bea \nn n(f) &\leq& 1+ n(f_0) + n(f_1) - |R(f_0) \cap R(f_1)| \\ &\leq& n(f_0) + n(f_1) - 2 \\ \nn &\leq& 2\textbf{R}_{d-1}^{\text{DT}} - 2.
\eea
If $c_0 = c_1 = 1$, then we can write $f_0 = a \land g$ and $f_1 = a \lor h$ for some functions $g$ and $h$ which do not depend on $a$. By Fact \ref{DT and/or}, $\text{DT}(g) = \text{DT}(f_0) - 1 \leq d-2$, and similarly $\text{DT}(h) \leq d-2$, so
$n(f) \leq 1 + 1 + n(g) + n(h) \leq 2 + 2\textbf{R}^{\text{DT}}_{d-2}.$
Finally, it remains to consider the case when $\{c_0, c_1\} = \{1,2\}$. Without loss of generality, suppose $c_0 = 1$. It follows that we can write $f_0 = a \land g$, for some function $g$ which does not depend on $a$. By Fact \ref{DT and/or}, $\text{DT}(g) = \text{DT}(f_0) - 1 \leq d-2$, and hence
\bea \nn
n(f) &\leq& 1 + n(f_0) + n(f_1) - |R(f_0) \cap R(f_1)|\\
&\leq& 1 + \textbf{R}_{d-1}^{\text{DT}} + 1 + \textbf{R}_{d-2}^{\text{DT}} - 3 \\ \nn
&=& {\textbf{R}}^{{\text{DT}}}_{d-1} + {\textbf{R}}^{{\text{DT}}}_{d-2} - 1 \\ \nn &\leq& 2{\textbf{R}}^{{\text{DT}}}_{d-1} - 2.
\eea
\end{proof}
\noindent\textit{Proof of (\ref{mon_DT}):} Since $\textbf{R}_{1}^{\text{DT}} = 1$, Lemma \ref{DT_bound} immediately implies $\textbf{R}_{2}^{\text{DT}} \leq 2$, $\textbf{R}_{3}^{\text{DT}} \leq 4$, $\textbf{R}_{4}^{\text{DT}} \leq 6$, and $\textbf{R}_{5}^{\text{DT}} \leq 10$. It is also easy to construct explicit examples showing that all of these inequalities are actually equalities -- in fact, if $g(x) \in \mathcal{M}_{d-2}$, then the function $$f(a,b,x,y) = ((\neg a) \land (b \land g(x))) \lor (a \land (b \lor g(y))) \in \mathcal{M}_d$$ has $2n(g) + 2$ relevant variables. Therefore $\textbf{R}^{\text{DT}}_d \geq 2\textbf{R}^{\text{DT}}_{d-2} + 2$, and the bound $\textbf{R}^{\text{DT}}_d \leq 2\textbf{R}^{\text{DT}}_{d-1} - 2$ becomes the dominant bound in the lemma for $d \geq 4$. We can rewrite this inequality as $$(\textbf{R}^{\text{DT}}_d - 2) \leq 2(\textbf{R}^{\text{DT}}_{d-1} - 2) \, \, \text{ for } d \geq 5,$$ and therefore $(\textbf{R}^{\text{DT}}_d - 2) \leq 2^{d-5}(10-2) = 2^{d-2} \implies \textbf{R}^{\text{DT}}_d \leq 2^{d-2} + 2$ and $\textbf{R}^{\text{DT}} \leq \frac{1}{4}$. \qed
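The recursion of Lemma \ref{DT_bound} (together with the boundary values $\textbf{R}^{\text{DT}}_0 = 0$ and $\textbf{R}^{\text{DT}}_1 = 1$) is easy to tabulate:

```python
def monotone_dt_bounds(dmax):
    # tabulate the recursion of Lemma (DT_bound), with R_0 = 0 and R_1 = 1
    R = [0, 1]
    for d in range(2, dmax + 1):
        R.append(max(2 * R[d - 1] - 2, 2 + 2 * R[d - 2], 1 + R[d - 1]))
    return R

R = monotone_dt_bounds(40)
```

The tabulated values match $2^{d-2} + 2$ from $d = 4$ onward, so the ratio $\textbf{R}^{\text{DT}}_d / 2^d$ tends to $\frac{1}{4}$.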
\section[]{A constant factor improvement in the sensitivity conjecture}\label{sec_bs_d}
In their seminal 1994 paper, Nisan and Szegedy \cite{NS} proved an upper bound on the block sensitivity of any boolean function $f$ in terms of its degree, namely
\be \label{bs_2_deg}
\text{bs}(f) \leq 2 \deg(f)^2.
\ee
In \cite{Tal}, Avishay Tal gives a tensorization argument showing that the constant factor 2 in (\ref{bs_2_deg}) can be reduced to 1:
\be \label{bs_1_deg}
\text{bs}(f) \leq \deg(f)^2.
\ee
In this section, we improve upon the original argument of Nisan and Szegedy to further improve the constant in (\ref{bs_1_deg}):
\begin{theorem}\label{improved_bs_deg}
For any boolean function $f$,
\be \label{b^2} \emph{\text{bs}}(f)^2 - \emph{\text{bs}}(f) \leq \frac{2}{3}(\deg(f)^4 - \deg(f)^2) \ee
and hence
\be \label{b^1} \emph{\text{bs}}(f) \leq \sqrt{2/3} \cdot \deg(f)^2 + 1. \ee
\end{theorem}
For many pairs of complexity measures, the proofs of the best-known relationships between them make use of the inequality (\ref{bs_1_deg}) as an intermediate step. Upgrading those proofs (see \cite{Midj}, \cite{Huang} and \cite{Nisan}) with Theorem \ref{improved_bs_deg} immediately improves those relations by a constant factor:
\begin{coro}
For any boolean function $f$,
\bea
\label{improved_sens}\emph{\text{bs}}(f) &\leq& \sqrt{2/3} \cdot s(f)^4 + 1 \\
\emph{\text{DT}}(f) &\leq& \sqrt{2/3} \cdot \deg(f)^3 + \deg(f)\\
C(f) &\leq& \sqrt{2/3} \cdot s(f)^5 + s(f)
\eea
\end{coro}
In particular, (\ref{improved_sens}) improves on Huang's recent result $\text{bs}(f) \leq s(f)^4$, which constitutes the best-known progress on the (strong) sensitivity conjecture, namely $\text{bs}(f) \lesssim s(f)^2$. We note that while the bound in Theorem \ref{improved_bs_deg} can probably be improved further, there is a limit to this approach. The family obtained by tensorizing the function
\bea \nn
f(x_1, \dots, x_6) := \left(\sum_{i=1}^6 x_i\right) - \left(\sum_{1\leq i < j \leq 6} x_ix_j\right) +
x_1x_3x_4 + x_1x_2x_5 + x_1x_4x_5 \\ \nn + x_2x_3x_4 + x_2x_3x_5 + x_1x_2x_6 + x_1x_3x_6 + x_2x_4x_6 +
x_3x_5x_6 + x_4x_5x_6
\eea
certifies that $\text{bs}(f) \geq \deg(f)^{1.63}$ is possible,\footnote{This example is due to Kushilevitz \cite{NW}, and achieves the best-known separation between $\text{bs}(f)$ and $\deg(f)$.} and since Huang's theorem ($\deg(f) \leq s(f)^2$) is tight, combining the two inequalities can never yield a bound stronger than $\text{bs}(f) \leq s(f)^{3.26}$. We also remark that, as a consequence of Theorem \ref{improved_bs_deg}, any function family generated by tensorizing a single example will always have a truly subquadratic separation between $\text{bs}(f)$ and $\deg(f)$. So if it \textit{is} possible to quadratically separate $\text{bs}(f)$ from $\deg(f)$, this will require a different proof technique.
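For concreteness, the claimed parameters of Kushilevitz's function can be verified by brute force over $\{0,1\}^6$ (a quick illustrative check, not part of any proof; it confirms the polynomial is boolean-valued, has degree $3$, and has full sensitivity $6$ at the origin, whence $\text{bs}(f) \geq 6 = \deg(f)^{\log_3 6} \approx \deg(f)^{1.63}$):

```python
from itertools import combinations, product

def f(x):
    # Kushilevitz's degree-3 polynomial, exactly as displayed above
    x1, x2, x3, x4, x5, x6 = x
    cubics = (x1*x3*x4 + x1*x2*x5 + x1*x4*x5 + x2*x3*x4 + x2*x3*x5
              + x1*x2*x6 + x1*x3*x6 + x2*x4*x6 + x3*x5*x6 + x4*x5*x6)
    return sum(x) - sum(a * b for a, b in combinations(x, 2)) + cubics

# f is boolean-valued on the cube, so it is the multilinear expansion of a boolean function
assert all(f(x) in (0, 1) for x in product((0, 1), repeat=6))
# f(0) = 0 while flipping any single bit gives 1: sensitivity 6 at the origin
assert f((0,) * 6) == 0
assert all(f(tuple(int(i == j) for i in range(6))) == 1 for j in range(6))
```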
\subsection{Proof of Theorem \ref{improved_bs_deg}}
We begin by recalling Fact \ref{bs_reduction}, which says that the maximal block sensitivity among functions of degree $d$ is actually obtained by a function $f$ with (i) $f(0) = 0$ and (ii) $f(x) = 1$ for all vectors $x$ of Hamming weight 1. Let us say any $f$ satisfying properties (i) and (ii) is in \emph{standard form}. It is easy to see that any function $f(x)$ in standard form has a real multilinear polynomial expansion which looks like
\be \label{polynomial_f}
f(x_1, \dots, x_b) = x_1 + \cdots + x_b + \sum_{i < j} c_{ij}x_i x_j + \text{ (higher degree terms) }
\ee
where $b = \text{bs}(f) = s(f)$. As it turns out, the coefficients $c_{ij}$ on the quadratic terms $x_ix_j$ in such functions can only take one of two values:
\begin{lem}\label{quad}
If $f(x_1, \dots, x_b)$ is in standard form, then each quadratic term $x_ix_j$ appears with coefficient $c_{ij} \in \{-1, -2\}$ in the polynomial expansion of $f$.
\end{lem}
\begin{proof}
For any pair $i, j$ of coordinates, let $e_{i,j}$ be the vector which has ones in the $i$th and the $j$th coordinates and zeroes elsewhere. Since $f$ is boolean-valued, $f(e_{i,j}) \in \{0,1\}$. On the other hand, we can compute $f(e_{i,j})$ by plugging into the polynomial (\ref{polynomial_f}), which yields $1 + 1 + c_{ij} \in \{0,1\}$, since all higher degree terms evaluate to 0.
\end{proof}
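Lemma \ref{quad} can be illustrated by computing multilinear coefficients via Möbius inversion over subsets (a small sketch; $\text{OR}$ and $\text{XOR}$ on two bits are both in standard form and realize the two possible values):

```python
from itertools import chain, combinations

def coeffs(f, n):
    # Multilinear coefficient of prod_{i in S} x_i, by Moebius inversion:
    # c_S = sum_{T subseteq S} (-1)^{|S| - |T|} f(1_T)
    def subsets(S):
        return chain.from_iterable(combinations(S, k) for k in range(len(S) + 1))
    def point(T):
        return [1 if i in T else 0 for i in range(n)]
    return {S: sum((-1) ** (len(S) - len(T)) * f(point(T)) for T in subsets(S))
            for S in subsets(range(n))}

OR2 = lambda x: x[0] | x[1]   # standard form; quadratic coefficient -1
XOR2 = lambda x: x[0] ^ x[1]  # standard form; quadratic coefficient -2
assert coeffs(OR2, 2)[(0, 1)] == -1
assert coeffs(XOR2, 2)[(0, 1)] == -2
```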
If we plug any real numbers $(\mu_1, \dots, \mu_b)$ in $[0,1]^b$ into equation (\ref{polynomial_f}) for the $x_i$, we can interpret the result as the expected value of $f(x)$ where the bits $x_i$ of $x$ are independently sampled Bernoulli$(\mu_i)$'s. In particular, taking all $\mu_i = \mu$, we obtain a univariate polynomial $p_f(\mu)$ whose relevant properties are summarized in the lemma below.
\begin{lem}\label{p_f}
If $f(x_1, \dots, x_b)$ is in standard form, then the polynomial $p_f(\mu)$ satisfies
\begin{enumerate}
\item $\deg(p_f) \leq \deg(f)$
\item $\sup_{x \in [0,1]}|p_f(x)| \leq 1$
\item $|p_f''(0)| \geq b(b-1)$.
\end{enumerate}
\end{lem}
\begin{proof}
Item (1) follows directly from the definition of $p_f$, while item (2) follows from the interpretation of $p_f(\mu)$ as the expected value of the boolean function $f$. To see (3), observe that (\ref{polynomial_f}) implies that
\be\label{polynomial_pf}
p_f(\mu) = b\cdot \mu + \left(\sum_{i < j} c_{ij}\right)\mu^2 +\text{ (higher degree terms)},
\ee
and hence by Lemma \ref{quad},
\be\nn p''_f(0) = 2\cdot \sum_{i < j} c_{ij} \in \left[-4\binom{b}{2}, -2\binom{b}{2}\right],\ee
which clearly implies (3).
\end{proof}
In light of Lemma \ref{p_f}, to bound $b$ in terms of $\deg(f)$, it suffices to bound $|p_f''(0)|$ in terms of $\deg(p_f)$. This is accomplished by the following fact, which is a direct consequence of V. A. Markov's inequality \cite{Markov}.
\begin{fact}\label{markov_fact}
If $p(x)$ is a degree $d$ polynomial satisfying $0 \leq p(x) \leq 1$ for all $x \in [0,1]$, then
$$|p''(0)| \leq \frac{2 d^2(d^2-1)}{3}.$$
\end{fact}
\begin{proof}
Recall the famous Markov brothers' inequality, which states that if $q(x)$ is a degree $d$ real polynomial, then for each $k \geq 1$,
$$\sup_{x \in [-1,1]}|q^{(k)}(x)| \leq \frac{d^2(d^2-1^2)(d^2-2^2)\cdots (d^2 - (k-1)^2)}{1 \cdot 3 \cdot 5 \cdots (2k-1)} \sup_{x \in [-1,1]}|q(x)|.$$
In particular, for $k = 2$
\be\label{markov_2}
\sup_{x \in [-1,1]}|q''(x)| \leq \frac{d^2(d^2 - 1)}{3}\sup_{x \in [-1,1]}|q(x)|.
\ee
To translate (\ref{markov_2}) from $[-1,1]$ to $[0,1]$, we simply let $q(x) := \frac{1}{2} - p\left(\frac{1+x}{2}\right).$ Since $x \mapsto \frac{1+x}{2}$ maps $[-1,1]$ to $[0,1]$, we know that $$\sup_{x \in [-1,1]} |q(x)| = \sup_{x \in [0,1]} | \frac{1}{2} - p(x)| \leq \frac{1}{2}.$$
Similarly, since $q''(x) = -\frac{1}{4} p''(\frac{1+x}{2})$, we also have
\bea
\nn|p''(0)| \leq \sup_{x \in [0,1]} |p''(x)| &=& 4\sup_{x \in [-1,1]}|q''(x)|\\ \nn &\leq& 4\cdot \frac{d^2(d^2-1)}{3} \cdot \sup_{x \in [-1,1]}|q(x)| \\ \nn &\leq& \frac{2 d^2(d^2-1)}{3},
\eea
which is what we wanted to show.
\end{proof}
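Markov's bound in Fact \ref{markov_fact} is sharp: the extremal polynomials are the Chebyshev polynomials $T_d$, which satisfy $|T_d| \leq 1$ on $[-1,1]$ and attain $\sup |T_d''| = d^2(d^2-1)/3$ at the endpoints. This can be checked numerically (an illustrative script using NumPy, not part of the proof):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# |T_d| <= 1 on [-1,1] and sup |T_d''| = d^2 (d^2 - 1) / 3, attained at +-1
grid = np.linspace(-1.0, 1.0, 20001)
for d in range(2, 9):
    Td = Chebyshev.basis(d)
    sup_second = np.abs(Td.deriv(2)(grid)).max()
    assert abs(sup_second - d**2 * (d**2 - 1) / 3) < 1e-6 * d**4
```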
Combining (1), (2), and (3) from Lemma \ref{p_f} with Fact \ref{markov_fact} yields
(\ref{b^2}). This then implies (\ref{b^1}): if $b$ is an integer with $b = (\sqrt{2/3})d^2 + \ell$ for some $\ell \geq 1$, then $$b^2 - b = (2/3)d^4 + (2\ell - 1)(\sqrt{2/3})d^2 + \ell^2 - \ell > (2/3)d^4 - (2/3)d^2,$$ which contradicts (\ref{b^2}). Therefore (\ref{b^1}) holds, and Theorem \ref{improved_bs_deg} is proved. \qed
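The final integer-rounding step can also be confirmed mechanically (a throwaway check, not needed for the proof):

```python
import math

# The largest integer b with b^2 - b <= (2/3)(d^4 - d^2)
# never exceeds sqrt(2/3) * d^2 + 1, matching the deduction of (b^1) from (b^2)
for d in range(1, 200):
    bound = (2 / 3) * (d**4 - d**2)
    b = 1
    while (b + 1)**2 - (b + 1) <= bound:
        b += 1
    assert b <= math.sqrt(2 / 3) * d**2 + 1
```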
\subsection{Block sensitivity vs. approximate degree}
In \cite{NS}, the authors also prove a bound on block sensitivity in terms of the \emph{approximate} degree, namely
\be\label{bs_2_adeg} \text{bs}(f) \leq 6 \cdot (\widetilde{\deg}_{1/3}(f))^2. \ee
Again we can streamline their argument to improve the constant, this time from $6$ to $5$. We remark that, although $\widetilde{\deg}(f \circ g) = O(\widetilde{\deg}(f) \cdot \widetilde{\deg}(g))$ (by a result of Sherstov \cite{Sherstov}), the implicit constant in the $O(\cdot)$ obstructs us from reducing the constant in (\ref{bs_2_adeg}) to 1 via tensorization. Another difference between (\ref{bs_2_adeg}) and (\ref{bs_2_deg}) is that (\ref{bs_2_adeg}) is known to be tight up to the constant -- it is shown in \cite{NS} that $\text{OR}_n$ can be $1/3$-approximated by a Chebyshev polynomial of degree $2\sqrt{n}$, and hence the 6 cannot be replaced by anything smaller than $\frac{1}{4}$ in (\ref{bs_2_adeg}).
\begin{theorem}\label{approx}
For any boolean function $f$,
$$\emph{\text{bs}}(f) \leq 5 \cdot (\widetilde{\deg}_{1/3}(f))^2$$
\end{theorem}
\begin{proof} By reasoning as in Fact \ref{bs_reduction}, we may assume that $f$ is in standard form with (block) sensitivity $b$. Let $p(x_1, \dots, x_b)$ be a polynomial of degree $d = \widetilde{\deg}_{1/3}(f)$ satisfying $|p(x) - f(x)| \leq 1/3$ for all $x \in \{0,1\}^b$. Write
\be \nn
p(x) = c_0 + c_1x_1 + \cdots + c_bx_b + (\text{higher order terms}),\ee
and observe that
\bea
|p(0) - f(0)| &\leq& 1/3 \implies |c_0| \leq 1/3\\
|p(e_i) - f(e_i)| &\leq& 1/3 \implies c_i + c_0 \geq 2/3.\eea
Therefore each $c_i \geq 1/3$, and so $\sum_{i = 1}^b c_i \geq b/3$. Viewing $p$ as a function on $[0,1]^b$ via its multilinear extension, and considering the univariate function $q(t) := \frac{1}{2} - p(\frac{1+t}{2}, \frac{1+t}{2}, \dots, \frac{1+t}{2})$, we have that
$$\sup_{-1 \leq t \leq 1} |q(t)| = \sup_{0 \leq t \leq 1} |\frac{1}{2} - p(t, t, \dots, t)| = \sup_{x \in [0,1]^b} |\frac{1}{2} - p(x)| \leq \frac{1}{2} + \frac{1}{3} = 5/6, $$
where the middle inequality is due to convexity/multilinearity. On the other hand, $q'(t) = -\frac{1}{2}\sum_{i=1}^b (\partial_i p)\left(\tfrac{1+t}{2}, \dots, \tfrac{1+t}{2}\right)$, so $|q'(-1)| = \frac{1}{2}\sum_{i=1}^b (\partial_i p)(0) = \frac{1}{2}\sum_{i=1}^b c_i \geq b/6.$ By Markov's inequality (in the $k=1$ case), this implies
\be \nn b/6 \leq \frac{5d^2}{6} \implies b \leq 5d^2.\ee \end{proof}
\section{Open problems and future directions}\label{sec_future_directions}
In addition to Conjecture \ref{s_conj}, we suggest some other questions left open by our work: \\
\noindent \textbf{Asymptotically stronger bounds on $n(f)$ for monotone functions:} For monotone functions $f$, our work shows stronger bounds on $n(f)$ in terms of $\deg(f)$, $s(f)$ and $\text{DT}(f)$ than are known for general functions. However, these bounds still fall short of the best construction. The best-known construction for each of the three measures above is due to Wegener \cite{Weg}, which we now briefly describe. For each odd integer $k \geq 1$, we define the \emph{monotone address function}
$$\text{MAF}_{k}\left(x_1, \dots, x_k, \left\{y_S\right\}_{S \in \binom{[k]}{\lfloor k/2 \rfloor}}\right) := \text{MAJ}(x_1, \dots, x_k) \lor \bigvee_{ S \in \binom{[k]}{\lfloor k/2 \rfloor}} \left( \bigwedge_{i \in S} x_i \land y_S \right).$$
It isn't hard to show that for $f = \text{MAF}_k$,
$$n(f) = \Theta\left(\frac{2^{\text{DT}(f)}}{\sqrt{\text{DT}(f)}} \right) = \Theta\left(\frac{2^{\deg(f)}}{\sqrt{\deg(f)}}\right) = \Theta\left(\frac{4^{s(f)}}{\sqrt{s(f)}}\right).$$
We conjecture that this is the best possible for monotone functions. \\
\noindent \textbf{Approximate junta size:} If $s(f) = s$, then is $f$ $\varepsilon$-close to a $O_{\varepsilon}(4^s)$ junta? Verbin, Servedio and Tan conjectured that for \emph{monotone} $f$ with $\text{DT}(f) = d$, $f$ must be $\varepsilon$-close to a $\text{poly}_{\varepsilon}(d)$ junta, which would imply the same for $s(f)$. However, Kane \cite{Kane} showed this was false, by constructing a (random) monotone function with $\text{DT}(f) = d$ which is not $0.1$-close to any $\exp(\sqrt{d})$-junta. This is tight up to a constant in the exponent by Friedgut's theorem and the OS inequality ($\textbf{I}[f] \leq \sqrt{\text{DT}(f)}$ for monotone $f$, see \cite{OS}). Since $s(f) \leq \text{DT}(f)$, Kane's construction is also a monotone function with $s(f) = s$ that is not $0.1$-close to any $\exp(\sqrt{s})$-junta. \\
\noindent \textbf{Do large juntas have smaller separations?} If $n(f)$ (the number of relevant variables) is exponential in $s(f), \deg(f), C(f), \text{DT}(f)$, then how are these measures related? For example, if $n(f) = 2^{\Omega(\deg(f))}$ then $s(f) = \Omega(\deg(f))$, by Simon's theorem; if $n(f) = 2^{\Omega(s)}$, then $\deg(f) = \Omega(s(f))$ by Nisan-Szegedy. Do the other directions hold? What can be said if $n(f) \geq 2^{s(f)^{1/100}}$? \\
\bibliographystyle{amsplain}
% arXiv:1709.01415 -- A fractal shape optimization problem in branched transport
% Abstract: We investigate the following question: what is the set of unit volume which can be best irrigated starting from a single source at the origin, in the sense of branched transport? We may formulate this question as a shape optimization problem and prove existence of solutions, which can be considered as a sort of ``unit ball'' for branched transport. We establish some elementary properties of optimizers and describe these optimal sets $A$ as sublevel sets of a so-called landscape function which is now classical in branched transport. We prove $\beta$-H\"older regularity of the landscape function, allowing us to get an upper bound on the Minkowski dimension of the boundary: $\dim \partial A \le d - \beta$ (where $\beta \coloneqq d(\alpha - (1 - 1/d)) \in (0,1)$ is a relevant exponent in branched transport, associated with the exponent $\alpha > 1 - 1/d$ appearing in the cost). We are not able to prove the lower bound, but we conjecture that $\partial A$ is of non-integer dimension $d - \beta$. Finally, we make an attempt to compute numerically an optimal shape, using an adaptation of the phase-field approximation of branched transport introduced some years ago by Oudet and the second author.
\section*{Introduction}
Given two probability measures $\mu,\nu$ on $\mathbb{R}^d$, a classical optimization problem amounts to finding a connection between the two measures which has minimal cost. In branched transport, such a connection is performed along a $1$-dimensional structure such that the cost for moving a mass $m$ over a distance $\ell$ is proportional to $m^\alpha \times \ell$, where $\alpha \in [0,1]$ is a fixed exponent. The map $t \mapsto t^\alpha$ being subadditive (even strictly subadditive for $\alpha < 1$), that is to say $(a+b)^\alpha \leq a^\alpha + b^\alpha$, it is cheaper for masses to travel together as much as possible. Consequently, optimal connections exhibit branching structures: for instance, if one wishes to transport one Dirac mass to two Dirac masses of mass $1/2$, the optimal graph is $Y$-shaped.
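This $Y$-shaped behaviour can be observed in a toy computation (an illustrative sketch with positions chosen for the example, not taken from the literature): a unit mass at $(0,2)$ feeds two masses $1/2$ at $(\pm 1,0)$ through a single branching point $(0,h)$, and one minimizes the Gilbert cost $\sum_i m_i^\alpha \ell_i$ over $h$.

```python
import math

def cost(h, alpha):
    # mass 1 travels from (0,2) down to (0,h), then two mass-1/2 branches reach (+-1, 0)
    return (2 - h) * 1.0**alpha + 2 * 0.5**alpha * math.hypot(1.0, h)

def best_h(alpha, steps=200000):
    # brute-force minimization over a grid of branching heights h in [0, 2]
    return min((2 * i / steps for i in range(steps + 1)),
               key=lambda h: cost(h, alpha))

# alpha < 1: strictly subadditive cost, the branching point is strictly interior (Y shape)
assert abs(best_h(0.5) - 1.0) < 1e-2
# alpha = 1: no gain from travelling together, the branching point collapses onto the source
assert best_h(1.0) > 2 - 1e-2
```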
An early model was proposed by Gilbert in \cite{Gil} as an extension of the Steiner problem (see \cite{GilPol}) in a discrete setting, where the connection between two atomic measures is made through weighted oriented graphs. There are two main extensions of this model to a continuous setting, i.e.\ with arbitrary probability measures. The first one was introduced in 2003 by the third author in \cite{Xia03} and can be viewed as an Eulerian model. It is based on vector measures and roughly reads as:
\[\min \quad \left\{ \int \abs*{\frac{dv}{\mathop{}\mathopen{}\mathrm{d}\!\hdm^1}(x)}^\alpha \mathop{}\mathopen{}\mathrm{d}\hdm^1(x) : \nabla \cdot v = \mu - \nu \right\},\]
minimizing among vector measures which have an $\hdm^1$-density. A Lagrangian model was introduced essentially at the same time by Maddalena, Solimini and Morel \cite{MadSolMor}, and then intensively studied by Bernot, Caselles and Morel \cite{BerCasMor05}. It is based on measures on a set of curves; the description of this model, which is a little more involved, is given in Section \ref{sec:prelim}. An almost up-to-date reference on branched transport is the book \cite{BerCasMor} by the latter authors.
Looking at the optimal branching structures computed numerically in \cite{OudSan} (in some non-atomic cases), or at natural drainage networks and their irrigation basins, one is tempted to describe them as fractal (see \cite{RodRin}). Actually, even though the underlying network has infinitely many branching points, it is still a $1$-rectifiable set, hence it is not clear in what sense fractality appears. Fractality is a notion which usually relates either to self-similarity properties of non-smooth objects, or to non-integer dimension of sets. A first rigorous result falling in the first category was proven by Brancolini and Solimini in \cite{BraSol14}: for sufficiently diffuse measures (for example the Lebesgue measure restricted to a Lipschitz open set), the number of branches of length $\sim \varepsilon$ stemming from a branch of length $\ell$ is of order $\ell/\varepsilon$. This may be read as a self-similarity property since, in a way, the total length is preserved when looking at subbranches at all scales.
The present paper leans towards the other notion of fractality, that is towards \enquote{fractal} dimension. Some sets in branched transport have already been proposed as candidates to exhibit non-integer dimension, for instance the boundary of adjacent irrigation basins (an open conjecture by J.-M. Morel). Here we are interested in another candidate which is related to branched transport: the boundary of what we call unit balls for branched transport. With the results of the present paper, we can only prove an upper bound on the dimension, which is non-integer, and conjecture that this upper bound is actually sharp.
The article is divided into five parts. In a preliminary section we properly define the Lagrangian framework of branched transport and its basic features, and we formulate our question as a shape optimization problem involving the irrigation distance. Section \ref{sec:existence} is devoted to the proof of existence of minimizers and to their elementary properties. In Section \ref{sec:hölder} we prove the $\beta$-Hölder regularity of the landscape function, which appears in the description of optimizers, and use it to derive an upper bound on the Minkowski dimension of the boundary of the optimizers in Section \ref{sec:4}. The final section is an attempt at computing optimizers numerically, which is particularly useful given that we are not fully able to answer theoretically the question of the fractal behavior of the boundary. This is done by adapting the Modica-Mortola approach introduced in \cite{OudSan}, and allows us to provide some convincing computer visualizations.
\section{Preliminaries}\label{sec:prelim}
As preliminaries, we quickly set the Lagrangian framework of branched transport and its main features. For more details, we refer to the book \cite{BerCasMor} or to \cite[Sections 1--2]{Peg} for a simpler exposition.
\subsection{The irrigation problem}
We denote by $\Gamma(\mathbb{R}^d)$ the set of $1$-Lipschitz curves in $\mathbb{R}^d$ parameterized on $[0,\infty]$, endowed with the topology of uniform convergence on compact sets.
\subsubsection*{Irrigation plans} We call \emph{irrigation plan} any probability measure $\eta \in \mathrm{Prob}({\Gamma})$ satisfying the following finite-length condition
\begin{equation}\label{fin_len}
\mathbf{L}(\eta) \coloneqq \int_\Gamma L(\gamma) \:\mathop{}\mathopen{}\mathrm{d} \eta(\gamma) < +\infty,
\end{equation}
where $L(\gamma) = \int_0^\infty \abs{\dot{\gamma}(t)} \mathop{}\mathopen{}\mathrm{d} t$. Notice that any irrigation plan is concentrated on $\Gamma^1(\mathbb{R}^d) \coloneqq \{ \gamma : L(\gamma) < \infty\}$. We denote by $\mathrm{IP}(\mathbb{R}^d)$ the set of all irrigation plans $\eta\in \mathrm{Prob}({\Gamma})$. If $\mu$ and $\nu$ are two probability measures on $\mathbb{R}^d$, one says that $\eta \in \mathrm{IP}(\mathbb{R}^d)$ irrigates $\nu$ from $\mu$ if one recovers the measures $\mu$ and $\nu$ by sending the mass of each curve respectively to its initial point and to its final point, which means that
\[
(\pi_0)_\#\eta = \mu \text{ and }
(\pi_\infty)_\#\eta = \nu,
\]
where $\pi_0(\gamma) = \gamma(0)$, $\pi_\infty(\gamma) = \gamma(\infty) \coloneqq \lim_{t \to +\infty} \gamma(t)$ and $f_\#\eta$ denotes the push-forward of $\eta$ by $f$ whenever $f$ is a Borel map\footnote{Notice that $\lim_{t\to\infty} \gamma(t)$ exists if $\gamma \in \Gamma^1(\mathbb{R}^d)$, and this is all we need since any irrigation plan is concentrated on $\Gamma^1(\mathbb{R}^d)$.}. We denote by $\mathrm{IP}(\mu,\nu)$ the set of irrigation plans irrigating $\nu$ from $\mu$:
\[\mathrm{IP}(\mu,\nu) = \{ \eta \in \mathrm{IP}(\mathbb{R}^d) : (\pi_0)_\#\eta = \mu, (\pi_\infty)_\#\eta = \nu\}.\]
If $\eta$ is a given irrigation plan, we define the multiplicity at $x$, that is the total mass passing by $x$, as
\[\theta_\eta(x) = \eta(\{\gamma \in \Gamma : x \in \gamma\}),\]
where $x\in \gamma$ means that $x$ belongs to the image of the curve $\gamma$. Finally, for any nonnegative function $f$, we denote by $\int_\gamma f(x) \mathop{}\mathopen{}\mathrm{d} x$ the line integral of $f$ along $\gamma \in \Gamma$:
\[\int_\gamma f(x) \mathop{}\mathopen{}\mathrm{d} x \coloneqq \int_0^{+\infty} f(\gamma(t)) \abs{\dot{\gamma}(t)} \mathop{}\mathopen{}\mathrm{d} t.\]
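As a quick numerical illustration of this line integral (a toy example, not from the paper), one can integrate along a parameterized half circle:

```python
import math

def line_integral(f, gamma, dgamma, T, n=200000):
    # Midpoint-rule approximation of int_gamma f = int_0^T f(gamma(t)) |gamma'(t)| dt
    dt = T / n
    return sum(f(gamma((i + 0.5) * dt)) * math.hypot(*dgamma((i + 0.5) * dt)) * dt
               for i in range(n))

# f = |x|^2 equals 1 on the unit circle, so the integral is the length of the half circle
val = line_integral(lambda p: p[0]**2 + p[1]**2,
                    lambda t: (math.cos(t), math.sin(t)),
                    lambda t: (-math.sin(t), math.cos(t)),
                    math.pi)
assert abs(val - math.pi) < 1e-4
```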
\subsubsection*{Irrigation costs}
For $\alpha \in [0,1]$ we consider the \emph{irrigation cost} $\mathbf{I}_\alpha : \mathrm{IP}(\mathbb{R}^d) \to [0,\infty]$ defined by
\[\mathbf{I}_\alpha(\eta) \coloneqq \int_\Gamma \int_\gamma \theta_\eta(x)^{\alpha -1} \mathop{}\mathopen{}\mathrm{d} x\mathop{}\mathopen{}\mathrm{d}\eta(\gamma),\]
with the conventions $0^{\alpha -1} = \infty$ if $\alpha <1$, $0^{\alpha -1} = 1$ otherwise, and $\infty \times 0 = 0$. If $\mu,\nu$ are two probability measures on $\mathbb{R}^d$, the irrigation (or branched transport) problem consists in minimizing the cost $\mathbf{I}_\alpha$ on the set of irrigation plans which send $\mu$ to $\nu$, which reads
\begin{equation}\label{lag_pb}\tag{$\text{LI}_\alpha$}
\min_{\eta \in \mathrm{IP}(\mu,\nu)}\quad \int_\Gamma \int_\gamma \theta_\eta(x)^{\alpha -1} \mathop{}\mathopen{}\mathrm{d} x \mathop{}\mathopen{}\mathrm{d}\eta(\gamma).
\end{equation}
We set $Z_\eta(\gamma) = \int_\gamma \theta_\eta(x)^{\alpha-1} \mathop{}\mathopen{}\mathrm{d} x$ so that the cost may be expressed as
\[\mathbf{I}_\alpha(\eta) = \int_\Gamma Z_\eta(\gamma) \mathop{}\mathopen{}\mathrm{d}\eta(\gamma).\]
The following results are extracted from \cite{BerCasMor05,Peg,Xia03}.
\begin{prop}[First variation inequality for $\mathbf{I}_\alpha$]
If $\eta$ is an irrigation plan with $\mathbf{I}_\alpha(\eta)$ finite, then for every irrigation plan $\tilde{\eta}$ the following holds:
\begin{equation}
\mathbf{I}_\alpha(\tilde{\eta}) \leq \mathbf{I}_\alpha(\eta) + \alpha \int Z_\eta(\gamma) \mathop{}\mathopen{}\mathrm{d}(\tilde{\eta}-\eta).
\end{equation}
Notice that the integral $\int Z_\eta \mathop{}\mathopen{}\mathrm{d}(\tilde{\eta}-\eta)$ is well-defined since $\int Z_\eta \mathop{}\mathopen{}\mathrm{d}\eta < \infty$ and $Z_\eta$ is nonnegative, though it may be infinite.
\end{prop}
\begin{thm}[Existence of minimizers]
For any pair of probability measures $\mu,\nu \in \mathrm{Prob}(\mathbb{R}^d)$ which have compact support, the problem \eqref{lag_pb} admits a minimizer.
\end{thm}
\begin{thm}[Irrigability]
If $1- \frac{1}{d} < \alpha < 1$, for any $\mu,\nu \in \mathrm{Prob}(\mathbb{R}^d)$ with compact support there exists some $\eta \in \mathrm{IP}(\mu,\nu)$ such that $\mathbf{I}_\alpha(\eta)$ is finite.
\end{thm}
From now on we assume that $\alpha \in \left]1-\frac{1}{d}, 1 \right[$.
\subsubsection*{Irrigation distance}
Let us set
\[d_\alpha(\mu,\nu) = \min \{\mathbf{I}_\alpha(\eta) \; : \; \eta \in \mathrm{IP}(\mu,\nu)\}\]
for any pair $\mu,\nu$ of probability measures on $\mathbb{R}^d$. For any compact $K \subseteq \mathbb{R}^d$, it induces a distance on $\mathrm{Prob}(K)$ which metrizes the weak-$\star$ convergence of measures in the duality with $\mathcal{C}(K)$. On non-compact subsets of $\mathbb{R}^d$, the distance $d_\alpha$ is lower semicontinuous w.r.t. the weak-$\star$ convergence of measures in the duality with bounded and continuous functions (narrow convergence)\footnote{Proving this is just an adaptation of the proof on compact sets. If $\mu$ is fixed (for example) and $\nu_n \to \nu$ with $\eta_n \in \mathrm{IP}(\mu,\nu_n)$ optimal and parameterized by arc length, assuming that the cost is bounded, the irrigation plans $\eta_n$ are tight and one may extract a subsequence converging to some $\eta$ which irrigates $\nu$ and whose cost is less than $\liminf d_\alpha(\mu,\nu_n)$ by lower semicontinuity of $\mathbf{I}_\alpha$.}.
\begin{prop}[Scaling law]\label{prp:bt_law}
For any compactly supported measures $\mu,\nu$ with equal mass, there is an upper bound on the irrigation distance depending on the mass and the diameter. We set $\mu' = \mu - \mu\wedge \nu, \nu' = \nu - \mu \wedge \nu$ the disjoint parts of the measures and $m = \abs{\mu'} = \abs{\nu'}$ their common mass. Then:
\[d_\alpha(\mu,\nu) \leq C m^\alpha \diam(\supp \mu' \cup \supp \nu').\]
\end{prop}
\subsubsection*{Landscape function}
The landscape function was introduced by the second author in \cite{San}, in the single-source case. It has been then studied by Brancolini, Solimini in \cite{BraSol11} and by the third author in \cite{Xia14}. It will be a central tool in the study of the shape optimization problem we are going to introduce. We recall here the basic definitions and properties. Given an optimal irrigation plan $\eta \in \mathrm{IP}(\delta_0,\nu)$, we say that a curve $\gamma$ is $\eta$-good if
\begin{itemize}
\item the quantity $Z_\eta(\gamma) = \int_\gamma \theta_\eta(x)^{\alpha-1} \mathop{}\mathopen{}\mathrm{d} x$ is finite,
\item for all $t < T(\gamma)$,
\[\theta_\eta(\gamma(t)) = \eta(\{\tilde{\gamma}\in \Gamma(\mathbb{R}^d) : \gamma = \tilde{\gamma} \text{ on } [0,t]\}),\]
where $T(\gamma)=\inf \{t\in [0,\infty]: \gamma(s)=\gamma(\infty) \text{ for all } s\in[ t, \infty]\}$ is the stopping time of $\gamma$.
\end{itemize}
One may prove by optimality that $\eta$ is concentrated on the set of $\eta$-good curves. Moreover it is proven in \cite{San} that for all $\eta$-good curves $\gamma$, the quantity $Z_\eta(\gamma)$ depends only on the final point $\gamma(\infty)$ of the curve, thus we may define the landscape function $z_\eta$ as follows:
\[z_\eta(x) = \begin{dcases*}
Z_\eta(\gamma)&
if $\gamma$ is an $\eta$-good curve s.t. $x = \gamma(\infty)$,\\
+\infty&
otherwise.
\end{dcases*}\]
Notice that for an optimal $\eta$ the cost may be expressed in terms of $z_\eta$:
\[\mathbf{I}_\alpha(\eta) = \int_\Gamma Z_\eta(\gamma) \mathop{}\mathopen{}\mathrm{d}\eta(\gamma) = \int_{\mathbb{R}^d} z_\eta(x) \mathop{}\mathopen{}\mathrm{d}\nu(x).\]
Finally, one may show that $z_\eta$ is lower semicontinuous and that the inequality $z_\eta(x)\geq |x|$ holds.
\subsection{The shape optimization problem}
We ask ourselves the following question: what is the set of unit volume which is closest to the origin in the sense of irrigation? To give this a precise meaning, we embed everything in the space of probability measures; hence we want to minimize the $d_\alpha$ distance between the unit Dirac mass at $0\in \mathbb{R}^d$ and sets $E$ of unit volume, seen as the uniform measure on $E$. This problem reads
\begin{equation}\tag{$\text{S}_\alpha$}\label{pb:S}
\min\quad \{d_\alpha(\delta_0, \mathbf{1}_E \lbm) : \abs{E} = 1\},
\end{equation}
where $\lbm$ denotes the Lebesgue measure on $\mathbb{R}^d$. We relax this problem by minimizing on a larger set, which is the set of probability measures with Lebesgue density bounded by $1$, thus getting:
\begin{equation}\tag{$\text{R}_\alpha$}\label{pb:R}
\min\quad \{\mathbf{X}_\alpha(\nu) : \nu \leq 1, \nu \in \mathrm{Prob}(\mathbb{R}^d)\},
\end{equation}
where $\mathbf{X}_\alpha(\nu) = d_\alpha(\delta_0, \nu)$.
In the following, we will sometimes encounter positive measures which do not have unit mass, thus we extend the functional by setting $\mathbf{X}_\alpha(\nu) \coloneqq d_\alpha(\abs{\nu}\delta_0, \nu)$ for any finite measure $\nu$.
A key tool in the analysis of this problem lies in the following proposition, proved in \cite{San} under slightly more restrictive hypotheses.
\begin{prop}[First variation inequality for $\mathbf{X}_\alpha$]\label{prop:first_var}
Suppose that $\nu\in \mathrm{Prob}(\mathbb{R}^d)$ with $\mathbf{X}_\alpha(\nu)< \infty$.
Suppose also that $\eta$ is an optimal irrigation plan between $\abs{\nu}\delta_0$ and $\nu$, with landscape function $z_\eta$. The following holds:
\[\mathbf{X}_\alpha(\tilde{\nu}) \leq \mathbf{X}_\alpha(\nu) + \alpha \int z_\eta \mathop{}\mathopen{}\mathrm{d}(\tilde{\nu}-\nu)\]
for any $\tilde{\nu}\in \mathrm{Prob}(\mathbb{R}^d)$.
\end{prop}
Notice also that the integral $\int z_\eta \mathop{}\mathopen{}\mathrm{d}(\tilde{\nu}-\nu)$ is well-defined since $\int z_\eta \mathop{}\mathopen{}\mathrm{d}\nu=\mathbf{I}_{\alpha}(\eta)=\mathbf{X}_{\alpha}(\nu) < \infty$ and $z_\eta$ is non-negative, though it may be infinite.
\begin{proof}
If $\int z_\eta \mathop{}\mathopen{}\mathrm{d} \tilde{\nu} = \infty$ then there is nothing to prove. Otherwise for $\nu$-a.e. $x$, $z_\eta(x)$ is finite hence there are $\eta$-good curves reaching $x$ and one can find a measurable\footnote{One can characterize $\eta$-good curves as those $\gamma$ such that $\tilde{Z}_\eta(\gamma) < \infty$ where $\tilde{Z}_\eta(\gamma) \coloneqq \int_0^\infty \abs{\gamma}_{t,\eta} \,\mathop{}\mathopen{}\mathrm{d} t$ is a slight variation of $Z_\eta$ defined in \cite{San} which is also lower semicontinuous. Hence the multifunction associating to every $x$ the set of $\eta$-good curves reaching $x$ can be written as $\bigcup_{\ell \in \mathbb{Q}}\{\gamma\in\Gamma\,:\,\tilde{Z}_\eta(\gamma)\leq \ell, \gamma(\infty)=x\}$, i.e. as a countable union of multifunctions with closed graph. This means that this multifunction is measurable and admits a measurable selection (see e.g. \cite{CasVal}).} map $g:\mathbb{R}^d\rightarrow \Gamma$ which associates with every $x$ an $\eta$-good curve reaching $x$. Let us build an irrigation plan $\tilde{\eta} \in \mathrm{IP}(\abs{\tilde{\nu}}\delta_0, \tilde{\nu})$ which is concentrated on $\eta$-good curves, by setting $\tilde{\eta} = g_\# \nu$, so that
\[\int_\Gamma Z_\eta \mathop{}\mathopen{}\mathrm{d}\tilde{\eta} = \int_\Gamma z_\eta(\gamma(\infty)) \mathop{}\mathopen{}\mathrm{d}\tilde{\eta}(\gamma) = \int_{\mathbb{R}^d} z_\eta(x) \mathop{}\mathopen{}\mathrm{d}\tilde{\nu}.\]
Then, by the first variation inequality for $\mathbf{I}_\alpha$, we get:
\[\mathbf{X}_\alpha(\tilde{\nu}) \doteq d_\alpha(\abs{\tilde{\nu}}\delta_0,\tilde{\nu}) \leq \mathbf{I}_\alpha(\tilde{\eta}) \leq \mathbf{I}_\alpha(\eta) + \alpha \int_\Gamma Z_\eta \mathop{}\mathopen{}\mathrm{d}(\tilde{\eta}-\eta) = \mathbf{X}_\alpha(\nu) + \alpha \int_{\mathbb{R}^d} z_\eta \mathop{}\mathopen{}\mathrm{d} (\tilde{\nu}-\nu).\qedhere\]
\end{proof}
\section{Existence and first properties}\label{sec:existence}
We will often denote by $C = C(\alpha,d)$ or $c = c(d)$ different positive constants which depend only on $\alpha,d$ or $d$ respectively.
\begin{thm}
The relaxed shape optimization problem \eqref{pb:R} admits at least one minimizer.
\end{thm}
\begin{proof}
The existence of a minimizer follows from lower semicontinuity and tightness. Indeed, any minimizing sequence $(\nu_n)$ must have uniformly bounded first moments since, taking an optimal irrigation plan $\eta_n$ for $\nu_n$ and using the inequality $z_{\eta_n}(x) \geq \abs{x}$,
$$\int |x|\mathop{}\mathopen{}\mathrm{d} \nu_n(x)\leq \int z_{\eta_n}(x)\mathop{}\mathopen{}\mathrm{d} \nu_n(x)=d_\alpha(\delta_0,\nu_n).$$
A bound on the first moments implies tightness of the sequence and, up to extracting a subsequence, one has $\nu_n\rightharpoonup\nu$. The condition $\nu_n\leq 1$ implies $\nu\leq 1$ and the lower semicontinuity of $d_\alpha$ provides the optimality of $\nu$.
\end{proof}
For $1>\alpha>1-\frac{1}{d}$, we will denote by $e_\alpha$ the optimal value of the relaxed shape optimization problem \eqref{pb:R}:
\[e_{\alpha} \coloneqq \min\{d_{\alpha}(\delta_0,\nu): \nu \le 1 \text{ and } \nu \in \mathrm{Prob}(\mathbb{R}^d) \}.\]
\begin{lem}[Scaling lemma]\label{lem:scaling}
For any finite measure $\nu$ with density bounded by $1$ we have
\[\mathbf{X}_{\alpha}(\nu) \geq e_{\alpha} \abs{\nu}^{\alpha+\frac{1}{d}}.\]
\end{lem}
\begin{proof}
For $\lambda=\abs{\nu}^{-1/d}$, let $\tilde{\nu}=\lambda^d \varphi_{\#}(\nu)$ be a scaling of $\nu$ under the map $\varphi(x)=\lambda x$ in $\mathbb{R}^d$. Then,
$\int_{\mathbb{R}^d}\mathop{}\mathopen{}\mathrm{d}\tilde{\nu}=\lambda^d \int_{\mathbb{R}^d} \mathop{}\mathopen{}\mathrm{d}\nu=\lambda^d \abs{\nu}=1$ and $\tilde{\nu} \leq 1$. Thus,
\[e_{\alpha}\leq d_{\alpha}(\tilde{\nu}, \delta_0)=\lambda^{\alpha d+1} d_{\alpha}(\nu, \abs{\nu} \delta_0)=\abs{\nu}^{-\left(\alpha+\frac{1}{d}\right)} \mathbf{X}_\alpha(\nu).\]
\end{proof}
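The scaling factor $\lambda^{\alpha d+1}$ used above can be traced through the definition of $\mathbf{I}_\alpha$ (a sketch of the computation, under the natural rescaling of plans): pushing the curves of $\eta$ forward through $\varphi(x) = \lambda x$ and renormalizing the mass by $\lambda^d$ produces a plan $\tilde{\eta}$ irrigating $\tilde{\nu}$, whose multiplicities satisfy $\theta_{\tilde{\eta}}(\lambda x) = \lambda^d \theta_\eta(x)$ and whose curves have lengths multiplied by $\lambda$, so that
\[\mathbf{I}_\alpha(\tilde{\eta}) = \int_\Gamma \int_\gamma \theta_{\tilde{\eta}}(x)^{\alpha-1} \mathop{}\mathopen{}\mathrm{d} x \mathop{}\mathopen{}\mathrm{d}\tilde{\eta}(\gamma) = \lambda^{d} \cdot \lambda^{d(\alpha-1)} \cdot \lambda \cdot \mathbf{I}_\alpha(\eta) = \lambda^{\alpha d + 1}\, \mathbf{I}_\alpha(\eta),\]
the three factors coming respectively from the mass renormalization of the plan, the rescaled multiplicities, and the rescaled lengths.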
For any $\nu$, we say that $z$ is a landscape function of $\nu$ if it is the landscape function $z_\eta$ associated with some optimal irrigation plan $\eta \in \mathrm{IP}(\delta_0,\nu)$.
\begin{thm}\label{thm:exist}
Let $\nu$ be a minimizer of \eqref{pb:R} and $z$ a landscape function of $\nu$. Then $\nu$ is the indicator of a set $A$ which is a sublevel set of $z$:
\begin{equation}
\label{eq: z*_value}
A = \{x : z(x) \leq z^\star \}, \text{ with }z^\star = \frac{e_{\alpha}}{\alpha}\left(\alpha+\frac{1}{d}\right).
\end{equation}
In particular, $A$ is a solution to problem \eqref{pb:S} and it is a compact and path-connected set.
\end{thm}
\begin{proof}
We show that $\nu$ also minimizes the first variation of $\mathbf{X}_\alpha$, that is $\mu \mapsto \int z \mathop{}\mathopen{}\mathrm{d} \mu$. Take $\tilde{\nu}$ a competitor for \eqref{pb:R}. By Proposition \ref{prop:first_var}, one has:
\[\mathbf{X}_\alpha(\tilde{\nu}) \leq \mathbf{X}_\alpha(\nu) + \alpha \int z \mathop{}\mathopen{}\mathrm{d}(\tilde{\nu}-\nu),\]
but $\mathbf{X}_\alpha(\nu) \leq \mathbf{X}_\alpha(\tilde{\nu})$, thus
\[\int z \mathop{}\mathopen{}\mathrm{d}\nu \leq \int z \mathop{}\mathopen{}\mathrm{d}\tilde{\nu}\]
for any $\tilde{\nu}$. So as to minimize this quantity, $\nu$ must concentrate its mass on the points where $z$ takes its lowest values. More precisely, there is a value $z^\star \in [0,\infty]$ such that
\[\nu(x) \begin{dcases*} = 1& if $z(x) < z^\star$,\\
\in [0,1]& if $z(x) = z^\star$,\\
= 0& if $z(x) > z^\star$.
\end{dcases*}\]
Indeed, we just take $z^\star = \sup \{t \in \mathbb{R} : \abs{\{z(x) \leq t\}} < 1\}$. Since $\int z \mathop{}\mathopen{}\mathrm{d} \nu = e_\alpha > 0$, necessarily $z^\star > 0$.
This kind of argument is typical in optimization problems under an upper density constraint, as was done for instance for crowd motion applications in \cite{MauRouSan}.
\paragraph{\textbf{Step 1:} $z^\star \leq \frac{e_\alpha}{\alpha} \left(\alpha + \frac{1}{d}\right)$}~
For $0 \leq k < z^\star$, we consider the competitor $\tilde{\nu} = \mathbf{1}_{\{z \leq k\}}$ and set $\abs{\tilde{\nu}} = 1 - m$, noting that $m > 0$ by definition of $z^\star$. Using Lemma \ref{lem:scaling} and Proposition \ref{prop:first_var}, one gets
\begin{align*}
e_\alpha (1-m)^{\alpha+\frac{1}{d}} \leq \mathbf{X}_\alpha(\tilde{\nu}) \leq \mathbf{X}_\alpha(\nu)+\alpha \int z \mathop{}\mathopen{}\mathrm{d}(\tilde{\nu}-\nu) = e_\alpha-\alpha \int_{\{z>k\}}z \mathop{}\mathopen{}\mathrm{d}\nu.
\end{align*}
Since $\nu(\{z > k\}) = 1 - \abs{\{z \leq k\}} = m$, it follows that
\begin{equation}
\label{eq: step1eqn}
e_\alpha (1-m)^{\alpha+\frac{1}{d}} \leq e_\alpha - \alpha k m.
\end{equation}
As $\alpha + \frac{1}{d} > 1$, the map $t \mapsto t^{\alpha + \frac{1}{d}}$ is (strictly) convex, thus
\[e_\alpha \left( 1 - \left(\alpha + \frac{1}{d}\right) m \right) \leq e_\alpha (1-m)^{\alpha+\frac{1}{d}} \leq e_\alpha - \alpha k m,\]
hence, forgetting the middle term, subtracting $e_\alpha$ and dividing by $m$:
\[
\alpha k \leq e_\alpha \left(\alpha + \frac{1}{d}\right).
\]
Taking the limit $k \to z^\star$ yields:
\begin{equation}\label{eq:zstar_leq}
z^\star \leq \frac{e_\alpha}{\alpha} \left(\alpha + \frac{1}{d}\right).
\end{equation}
\paragraph{\textbf{Step 2:} $\nu = \mathbf{1}_A$ where $A = \{z\leq z^\star\}$}~
Take the competitor $\tilde{\nu} = \mathbf{1}_{\{z \leq z^\star\}}$ and set $\abs{\tilde{\nu}} = 1 + m$, $m \geq 0$. Using again the scaling lemma and the first variation of $\mathbf{X}_\alpha$ one gets:
\[e_\alpha (1+m)^{\alpha + \frac{1}{d}} \leq e_\alpha + \alpha\int_{z = z^\star} z \mathop{}\mathopen{}\mathrm{d} (\tilde{\nu}-\nu) = e_\alpha + \alpha z^\star m.\]
Now by strict convexity of $t \mapsto t^{\alpha + \frac{1}{d}}$, if $m > 0$ then one has
\begin{gather*}
e_\alpha (1+m)^{\alpha + \frac{1}{d}} > e_\alpha \left(1 + \left(\alpha+\frac{1}{d}\right)m\right),
\shortintertext{thus}
e_\alpha \left(\alpha+ \frac{1}{d}\right)m < \alpha z^\star m,
\end{gather*}
which contradicts \eqref{eq:zstar_leq}. Consequently $m = 0$, hence $\nu = \tilde{\nu} = \mathbf{1}_{\{z \leq z^\star\}}$.
\paragraph{\textbf{Step 3:} Compactness and connectedness}~
$A$ is closed since $z$ is lower semicontinuous and bounded since $z(x) \geq \abs{x}$ for all $x\in \mathbb{R}^d$. It is path-connected since any point $x$ with $z(x) \leq z^\star$ is the endpoint of an $\eta$-good curve $\gamma$ starting from $0$ and $\gamma \subseteq A$ because $z$ is increasing along this curve.
\paragraph{\textbf{Step 4:} $z^\star \geq \frac{e_\alpha}{\alpha}\left(\alpha +\frac{1}{d}\right)$}~
Take $x_0 \in A$ with maximal Euclidean norm. Then the half ball $H_r(x_0) \coloneqq B_r(x_0) \cap \{ x : \langle x - x_0 , x_0 \rangle > 0\}$ is included in $A^c$. We consider the competitor $\tilde{\nu} = \mathbf{1}_{A \sqcup H_r(x_0)}$, with mass $\abs{\tilde{\nu}} = 1 + m$, where $m = \abs{H_r(x_0)} = c r^d$ for some constant $c = c(d)$. To irrigate $\tilde{\nu}$, we pay at most the cost of irrigation of $\nu$, plus the price for moving an extra mass $m$ from $0$ to $x_0$ along the irrigation plan, plus the cost for moving this mass to $B_r(x_0) \setminus A$, which we can bound by $C m^\alpha r$ thanks to Proposition \ref{prp:bt_law}, as follows:
\begin{align*}
\mathbf{X}_\alpha(\tilde{\nu}) = d_\alpha((1+m)\delta_0, \tilde{\nu}) &\leq d_\alpha((1+m)\delta_0, \nu+ m\delta_{x_0}) + d_\alpha(\nu+ m\delta_{x_0}, \nu + \mathbf{1}_{H_r(x_0)})\\
&= \mathbf{X}_\alpha(\nu + m\delta_{x_0}) + d_\alpha(m\delta_{x_0},\mathbf{1}_{H_r(x_0)})\\
&\leq e_\alpha + \alpha m z(x_0) + C r m^\alpha,
\end{align*}
where $C = C(\alpha,d)$ is some positive constant. Combining this inequality with the following convexity inequality
\[\mathbf{X}_\alpha(\tilde{\nu})\ge e_\alpha (1+m)^{\alpha + \frac{1}{d}} \geq e_\alpha \left( 1 + \left(\alpha +\frac{1}{d}\right)m\right),\] and dividing by $m >0$, one gets:
\[e_\alpha\left(\alpha + \frac{1}{d}\right) \leq \alpha z(x_0) + C r^{1+d\alpha -d}.\]
Passing to the limit $r\to 0$, we obtain
\[z^\star \geq z(x_0) \geq \frac{e_\alpha}{\alpha}\left(\alpha +\frac{1}{d}\right).\qedhere\]
\end{proof}
\section{Hölder continuity of the landscape function}\label{sec:hölder}
The Hölder regularity of the landscape function has been proved in \cite{San} under some regularity assumptions on $\nu$ using Campanato spaces (these spaces were introduced in \cite{Cam}, see \cite[Section 2.3]{Giu} for a modern exposition). Namely, if $\nu$ is of the form $\nu = f \lbm_{\mathop{\hbox{\vrule height 6pt width .5pt depth 0pt \vrule height .5pt width 4pt depth 0pt}}\nolimits E}$ where the density $f(x)$ and the fraction of mass $\Theta_E(x,r) \coloneqq \frac{\abs*{E \cap B_r(x)}}{\abs*{B_r(x)}}$ lying in $E$ are bounded from below by some constant $c > 0$ for all $x\in E$ and all $r \leq \diam E$, then $z$ is $\beta$-Hölder continuous where
\[\beta \coloneqq d \left(\alpha - \left(1- \frac{1}{d}\right)\right) = 1 + d \alpha - d,\]
is a number which is strictly between $0$ and $1$ as $1 > \alpha > 1 - \frac{1}{d}$. Another proof for more general regularity assumptions on $\nu$ has been given in \cite{BraSol11}. In our case, we do not know a priori that $A$ is regular (on the contrary we suspect it has a fractal boundary), hence the Hölder regularity of $z$ does not follow from previous works. Exploiting the fact that $A$ is optimal, we are going to show that $z$ is $\beta$-Hölder continuous adapting classical computations to pass from Campanato to Hölder spaces. More precisely, setting $A_r(x) \coloneqq A \cap B_r(x)$ and $z_r(x)$ the mean of $z$ on $A_r(x)$, we are going to prove the following sequence of inequalities, for arbitrary $r \leq \diam A$:
\begin{align}
\fint_{A_r(x)} \abs{z-z_r(x)} &\leq C r^\beta,\\
\abs{z_r(x)-z_{r/2}(x)} &\leq C r^\beta,\\
z_r(x) - z(x) &\leq C r^\beta,\\
\abs{z(x) - z_r(x)} &\leq C r^\beta,\\
\abs{z_{\abs{y-x}}(x) - z_{\abs{y-x}}(y)} &\leq C \abs{y-x}^\beta.
\end{align}
Notice that the two last inequalities imply that $z$ is indeed $\beta$-Hölder continuous:
\begin{align*}
\abs{z(y)-z(x)} &\leq \abs{z(y) - z_{\abs{y-x}}(y)} + \abs{z_{\abs{y-x}}(x) - z_{\abs{y-x}}(y)} + \abs{z(x) - z_{\abs{y-x}}(x)}\\
&\leq C \abs{y-x}^\beta.
\end{align*}
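As a quick numerical sanity check on the exponent (an illustrative snippet, not part of the argument), one can verify that $\beta = 1 + d\alpha - d$ coincides with $d\left(\alpha - \left(1-\frac{1}{d}\right)\right)$ and lies strictly between $0$ and $1$ whenever $1 - \frac{1}{d} < \alpha < 1$:

```python
def beta(d, alpha):
    # Hölder exponent of the landscape function: beta = 1 + d*alpha - d
    return 1 + d * alpha - d

# beta agrees with the equivalent expression d*(alpha - (1 - 1/d)),
# and is a valid Hölder exponent on the admissible range of alpha.
for d in range(2, 6):
    for alpha in [1 - 1 / d + 0.01, 1 - 1 / (2 * d), 0.99]:
        assert abs(beta(d, alpha) - d * (alpha - (1 - 1 / d))) < 1e-12
        assert 0 < beta(d, alpha) < 1
```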
The main difficulty we will encounter is that we will quite easily obtain estimates of the form
\[ \cdots \leq C \frac{r^\beta}{\Theta_A(x,r)^{1-\alpha}},\]
and will need to get rid of the term $\Theta_A(x,r)^{1-\alpha}$, i.e. treat the case when it becomes small.
\subsection{Main lemmas}
The following lemma will be key to prove the regularity of the landscape function.
\begin{lem}[Maximum deviation]\label{lem:dev_max}
There is a constant $C = C(d,\alpha) > 0$ such that the following holds:
\begin{equation}\label{eq:dev_max}
\forall y\in A_r(x), \qquad z^\star - z(y) \leq C \frac{r^\beta}{\Theta_{A^c}(x,r)^{1-\alpha}}.
\end{equation}
\end{lem}
\begin{proof}
We consider the competitor $\tilde{\nu} = \mathbf{1}_{A \cup B_r(x)}$, with mass $\abs{\tilde{\nu}} = 1 + m$ where $m = \abs{B_r(x) \setminus A}$. For any $y\in A_r(x)$, let us irrigate $\tilde{\nu}$ from $0$ by irrigating $\nu$ from $0$, moving an extra mass $m$ from $0$ to $y$ along the irrigation plan, then irrigating $\mathbf{1}_{B_r(x)\setminus A}$ from this mass at $y$. Using Lemma \ref{lem:scaling} and Proposition \ref{prp:bt_law}, we have
\[e_\alpha(1+m)^{\alpha + \frac{1}{d}} \leq \mathbf{X}_\alpha(\tilde{\nu}) \leq \mathbf{X}_\alpha(\nu) + \alpha \int z \mathop{}\mathopen{}\mathrm{d} (m\delta_y) + C r m^\alpha.\]
By convexity,
\[e_\alpha\left(1 + \left(\alpha+\frac{1}{d}\right)m\right) \leq (1+m)^{\alpha + \frac{1}{d}} e_\alpha \leq e_\alpha + \alpha m z(y) + C r m^\alpha,\]
thus, knowing that $e_\alpha \left(\alpha + \frac{1}{d}\right) = \alpha z^\star$ by \eqref{eq: z*_value}:
\[\alpha m z^\star \leq \alpha m z(y) + C r m^\alpha.\]
By definition, $m = \omega_d r^d \Theta_{A^c}(x,r)$ where $\omega_d$ is the volume of the unit $d$-dimensional ball, hence
\[ z^\star - z(y) \leq C r (r^d \Theta_{A^c}(x,r))^{\alpha -1} = C \frac{r^\beta}{\Theta_{A^c}(x,r)^{1-\alpha}}.\qedhere\]
\end{proof}
\begin{rem}
One can see that if $\Theta_A(x,r)$ becomes small, then $\Theta_{A^c}(x,r)$ is large (close to $1$), and actually all values of $z$ in $A_r(x)$ become close to the same value $z^\star$ up to $C r^\beta$.
\end{rem}
\begin{lem}[Mean deviation]\label{lem:mean_dev}
There is some constant $C = C(d,\alpha) > 0$ such that
\[ \fint_{A_r(x)} \abs{z(y) - z_r(x)} \mathop{}\mathopen{}\mathrm{d} y \leq C r^\beta\]
for all $r >0$ and all $x \in A$.
\end{lem}
\begin{proof}
We will first show that
\[ \fint_{A_r(x)} \abs*{z(y) - z_r(x)} \mathop{}\mathopen{}\mathrm{d} y \leq C \frac{r^\beta}{\Theta_A(x,r)^{1-\alpha}}.\]
Denoting by $\bar{z}_r(x)$ the central median of $z$ on the set $A_r(x)$, there is a disjoint union $A_r(x) = A^- \sqcup A^+$ such that $\abs{A^-} = \abs{A^+} = \frac{\abs{A_r(x)}}{2}$ and $z \leq \bar{z}_r(x)$ on $A^-$, $z \geq \bar{z}_r(x)$ on $A^+$.
Let us consider the competitor $\tilde{\nu} = \mathbf{1}_A - \mathbf{1}_{A^+} + \mathbf{1}_{A^-}$. By the first variation lemma:
\[\mathbf{X}_\alpha(\tilde{\nu}) \leq \mathbf{X}_\alpha(\nu) + \alpha \int z \mathop{}\mathopen{}\mathrm{d}(\tilde{\nu}-\nu).\]
Recall that $\mathbf{X}_\alpha(\rho) = d_\alpha(\delta_0,\rho)$ when $\rho$ is a probability measure, which is the case for $\nu$ and $\tilde{\nu}$, and that $d_\alpha$ is a distance. Thus by the triangle inequality:
\[\alpha \int z \mathop{}\mathopen{}\mathrm{d} (\nu - \tilde{\nu}) \leq d_\alpha(\delta_0,\nu) - d_\alpha(\delta_0,\tilde{\nu}) \leq d_\alpha(\nu,\tilde{\nu}).\]
We know that $d_\alpha(\nu,\tilde{\nu}) \leq C \abs{\tilde{\nu}-\nu}^\alpha \diam(\supp(\tilde{\nu}-\nu)) \leq C \abs{A_r(x)}^\alpha r$ for some $C = C(\alpha,d) > 0$. Moreover notice that
\begin{align*}
\int z \mathop{}\mathopen{}\mathrm{d}(\nu-\tilde{\nu}) &= \int_{A^+} z(y) \mathop{}\mathopen{}\mathrm{d} y - \int_{A^-} z(y) \mathop{}\mathopen{}\mathrm{d} y\\
&= \int_{A^+} (z(y)-\bar{z}_r(x)) \mathop{}\mathopen{}\mathrm{d} y + \int_{A^-} (\bar{z}_r(x)-z(y)) \mathop{}\mathopen{}\mathrm{d} y\\
&= \int_{A_r(x)} \abs{z(y) - \bar{z}_r(x)} \mathop{}\mathopen{}\mathrm{d} y.
\end{align*}
Consequently:
\[\fint_{A_r(x)} \abs{z(y) - \bar{z}_r(x)} \mathop{}\mathopen{}\mathrm{d} y \leq C \abs{A_r(x)}^{\alpha -1} r \leq C \frac{\abs{A_r(x)}^{\alpha -1}}{\abs{B_r(x)}^{\alpha -1}} r^{1+d(\alpha -1)} = C \frac{r^\beta}{\Theta_A(x,r)^{1-\alpha}}.\]
Moreover, one has
\[\abs{z_r(x) - \bar{z}_r(x)} = \abs*{\fint_{A_r(x)} z(y) - \bar{z}_r(x)\mathop{}\mathopen{}\mathrm{d} y} \leq \fint_{A_r(x)} \abs{z(y) - \bar{z}_r(x)}\mathop{}\mathopen{}\mathrm{d} y \leq C \frac{r^\beta}{\Theta_A(x,r)^{1-\alpha}}\]
which leads to
\[ \fint_{A_r(x)} \abs{z(y) - z_r(x)} \mathop{}\mathopen{}\mathrm{d} y \leq \fint_{A_r(x)} \abs{z(y) - \bar{z}_r(x)}\mathop{}\mathopen{}\mathrm{d} y + \abs{z_r(x) - \bar{z}_r(x)} \leq C \frac{r^\beta}{\Theta_A(x,r)^{1-\alpha}}.\]
Now we get rid of $\Theta_A(x,r)^{1-\alpha}$. If $\Theta_A(x,r) \geq 1/2$, we get the desired inequality. On the other hand, if $\Theta_{A^c}(x,r) \geq 1/2$, by Lemma \ref{lem:dev_max}, we have
\[0 \leq z^\star - z(y) \leq C r^\beta, \qquad \forall y \in A_r(x),\]
which also implies that
\[0 \leq z^\star - z_r(x) \leq C r^\beta.\]
By these two inequalities, we have
\[\abs{z(y)-z_r(x)} \leq C r^\beta, \qquad \forall y \in A_r(x).\]
Now, taking the mean over $A_r(x) \ni y$ leads to the wanted inequality as well:
\[ \fint_{A_r(x)} \abs{z(y) - z_r(x)} \mathop{}\mathopen{}\mathrm{d} y \leq C r^\beta.\qedhere\]
\end{proof}
\begin{rem}
Notice that the estimate
\[ \fint_{A_r(x)} \abs{z(y) - z_r(x)} \mathop{}\mathopen{}\mathrm{d} y \leq C \frac{r^\beta}{\Theta_A(x,r)^{1-\alpha}}\]
is valid in general: we only use the fact that $\nu$ is an indicator function (a density bounded from below would suffice). The optimality of $\nu$ comes into play to get rid of $\Theta_A(x,r)$.
\end{rem}
\subsection{Hölder regularity}
\begin{prop}[Small-scale difference]\label{prop: small_scale}
For all $x \in A$ and all $r > 0$ one has
\[\abs{z_r(x)-z_{r/2}(x)} \leq C r^\beta.\]
\end{prop}
\begin{proof}
First we show that
\[\abs{z_r(x)-z_{r/2}(x)} \leq C \frac{r^\beta}{\Theta_A(x,r/2)}.\]
Indeed, by Lemma \ref{lem:mean_dev},
\begin{align*}
\abs{z_r(x)-z_{r/2}(x)} &\leq \frac{\int_{A_{r/2}(x)} \abs{z(y)-z_r(x)}\mathop{}\mathopen{}\mathrm{d} y}{\abs{A_{r/2}(x)}} \leq \frac{\abs{A_r(x)}}{\abs{A_{r/2}(x)}} \fint_{A_r(x)} \abs{z(y)-z_{r}(x)}\mathop{}\mathopen{}\mathrm{d} y\\
&\leq 2^d \frac{\Theta_A(x,r)}{\Theta_A(x,r/2)} C r^\beta \leq C \frac{r^\beta}{\Theta_A(x,r/2)}.
\end{align*}
As before, if $\Theta_A(x,r/2) \geq 1/2$ we get the desired estimate. Otherwise $\Theta_{A^c}(x,r/2) \geq 1/2$ and $\Theta_{A^c}(x,r) \geq 2^{-d} \Theta_{A^c}(x,r/2) \geq 2^{-1-d}$. Now, by Lemma \ref{lem:dev_max},
\[0 \le z^\star - z(y) \leq C \frac{r^\beta}{\Theta_{A^c}(x,r)^{1-\alpha}} \leq C r^\beta, \qquad \forall y \in A_r(x).\]
Consequently $0 \le z^\star - z_{r/2}(x) \leq C r^\beta$ and $0 \le z^\star - z_r(x) \leq C r^\beta$ which implies that
\[\abs{z_r(x) - z_{r/2}(x)} \leq C r^\beta.\qedhere\]
\end{proof}
\begin{lem}[Lower deviation to the mean]
There is a constant $C = C(d,\alpha) > 0$ such that for all $x\in A$ and all $r>0$ one has:
\begin{equation}\label{low_dev_mean}
\forall y \in A_r(x), \qquad z_r(x) - z(y) \leq C r^\beta.
\end{equation}
\end{lem}
\begin{proof}
First we show that
\[z_r(x) - z(y) \leq C \frac{r^\beta}{\Theta_A(x,r)^{1-\alpha}}.\]
Remove the mass $m=\abs{A_r(x)}$ going to $A_r(x)$ from the irrigation plan, make it travel along the plan to any fixed $y \in A_r(x)$ and then send it to $A_r(x)$: since the modified plan still irrigates $\nu$, its cost is at least that of the optimal one. This implies
\[\alpha m z(y) - \alpha \int_{A_r(x)} z + C m^\alpha r \geq 0,\]
which may be rewritten as
\[z_r(x) - z(y) \leq C m^{\alpha-1} r \leq C \frac{r^\beta}{\Theta_A(x,r)^{1-\alpha}}.\]
Now if $\Theta_A(x,r) \geq 1/2$ one gets the desired result. Otherwise $\Theta_{A^c}(x,r) \geq 1/2$ and Lemma \ref{lem:dev_max} yields:
\[0 \le z^\star - z(y) \leq C r^\beta, \qquad \forall y \in A_r(x).\]
Thus $0\le z^\star - z_r(x)\leq C r^\beta$ and for any fixed $y \in A_r(x)$,
\[\abs{z_r(x) - z(y)} \leq \abs{z_r(x) - z^\star}+\abs{z^\star - z(y)} \leq C r^\beta,\]
from which we also get the wanted inequality.
\end{proof}
\begin{lem}[Deviation to the mean]\label{lem:dev_mean}
For all $x\in A$ and all $r > 0$, one has
\[\abs{z(x)-z_r(x)} \leq C r^\beta.\]
\end{lem}
\begin{proof}
By Proposition \ref{prop: small_scale}, one has
\[\abs{z(x) - z_r(x)} \leq \abs{z(x) - z_{r/2}(x)} + \abs{z_{r/2}(x) - z_r(x)} \leq \abs{z(x)-z_{r/2}(x)} + C r^\beta,\]
which means by setting $f(r) = \abs{z(x) - z_r(x)}$ for $r>0$ that:
\[f(r) \leq f(r/2) + C r^\beta.\]
Consequently for all $k \in \mathbb{N}$
\[f(r) \leq f(r \cdot 2^{-(k+1)}) + Cr^\beta \sum_{i=0}^k 2^{-i\beta}\]
thus
\[f(r) \leq \limsup_{\varepsilon\to 0} f(\varepsilon) + Cr^\beta \sum_{i=0}^\infty 2^{-i\beta} \leq \limsup_{\varepsilon\to 0} f(\varepsilon) + Cr^\beta.\]
Now let us prove that $f(\varepsilon) \to 0$ when $\varepsilon\to 0$, i.e. $z_\varepsilon(x) \xrightarrow{\varepsilon\to 0} z(x)$. We already know that $z$ is lower semi-continuous hence $z(x) \leq \liminf_{\varepsilon\to 0} z_{\varepsilon}(x)$. Moreover using \eqref{low_dev_mean}, we have
\[\limsup_{\varepsilon\to 0} z_\varepsilon(x) \leq \limsup_{\varepsilon\to 0} (z(x) + C\varepsilon^\beta) = z(x),\]
which implies that $z_\varepsilon(x) \to z(x)$ when $\varepsilon\to 0$. Therefore the inequality $f(r) \leq C r^\beta$ holds, that is to say:
\[ \abs{z(x) - z_r(x)} \leq C r^\beta.\qedhere\]
\end{proof}
\begin{lem}[Large scale difference]\label{lem:large_scale}
For any $x,y \in A$, one has:
\[\abs{z_{\abs{y-x}}(x)-z_{\abs{y-x}}(y)} \leq C \abs{y-x}^\beta.\]
\end{lem}
\begin{proof}
Set $r = \abs{y-x}$ and $\Delta_r = B_r(x) \cap B_r(y)$. Notice that, since $\Delta_r$ is a fixed fraction of $B_r(x)$ (independent of $r$), one has $\abs{\Delta_r} = c \abs{B_r}$ for some $c = c(d) \in (0,1)$.
If both $\Theta_{A^c}(x,r) \ge \frac{c}{2}$ and $\Theta_{A^c}(y,r) \ge \frac{c}{2}$, then by Lemma \ref{lem:dev_max} one has:
\begin{align*}
0\le z^\star - z_r(x) \leq C \frac{r^\beta}{\Theta_{A^c}(x,r)^{1-\alpha}} \leq C r^\beta,\text{ and }
0 \le z^\star - z_r(y) \leq C \frac{r^\beta}{\Theta_{A^c}(y,r)^{1-\alpha}} \leq C r^\beta,
\end{align*}
which implies the desired inequality
\begin{equation}\label{eq: z_rxy}
\abs{z_r(x)-z_r(y)} \leq C r^\beta.
\end{equation}
On the other hand, if either $\Theta_{A^c}(x,r)$ or $\Theta_{A^c}(y,r)$ is less than $c/2$, say $\Theta_{A^c}(x,r) \le \frac{c}{2}$, we claim the desired inequality (\ref{eq: z_rxy}) still holds. Indeed, for all $u\in A_r(x) \cap A_r(y)$ one has
\[\abs{z_r(x)-z_r(y)} \leq \abs{z_r(x) - z(u)} + \abs{z_r(y) - z(u)}\]
thus integrating over $A_r(x) \cap A_r(y)$ in $u$ one gets:
\begin{align*}
\abs{z_r(x)-z_r(y)} &\leq \frac{1}{\abs{A_r(x)\cap A_r(y)}}\left[\int_{A_r(x)} \abs{z(u)-z_r(x)} \mathop{}\mathopen{}\mathrm{d} u +\int_{A_r(y)} \abs{z(u)-z_r(y)} \mathop{}\mathopen{}\mathrm{d} u\right]\\
&\leq C r^\beta \frac{\abs{A_r(x)}+\abs{A_r(y)}}{\abs{A_r(x)\cap A_r(y)}},
\end{align*}
the last inequality resulting from Lemma \ref{lem:mean_dev}. Note that
\[\abs{A_r(x) \cap A_r(y)} = \abs{\Delta_r \cap A} \geq \abs{\Delta_r} - \abs{B_r(x) \setminus A} = c \abs{B_r(x)} - \abs{B_r(x) \setminus A}\]
which implies that
\[\frac{\abs{A_r(x)}+\abs{A_r(y)}}{\abs{A_r(x) \cap A_r(y)}} \leq \frac{2 \abs{B_r(x)}}{c \abs{B_r(x)} - \abs{B_r(x) \setminus A}}
=\frac{2}{c-\Theta_{A^c}(x,r)}\le \frac{4}{c}.\]
Thus, in this case, we still have
\[\abs{z_r(x)-z_r(y)} \leq C r^\beta\frac{\abs{A_r(x)}+\abs{A_r(y)}}{\abs{A_r(x) \cap A_r(y)}}\leq C r^\beta.\qedhere\]
\end{proof}
\begin{thm}[Hölder continuity]\label{thm: Holder}
The function $z$ is $\beta$-Hölder continuous on $A$. More precisely:
\[\forall x,y \in A, \quad \abs{z(y)-z(x)} \leq C \abs{y-x}^\beta,\]
for some constant $C = C(\alpha,d)$.
\end{thm}
\begin{proof}
By Lemma \ref{lem:dev_mean} and Lemma \ref{lem:large_scale},
\[\abs{z(y)-z(x)} \leq \abs{z(y) - z_{\abs{y-x}}(y)} + \abs{z_{\abs{y-x}}(y)-z_{\abs{y-x}}(x)} + \abs{z(x)-z_{\abs{y-x}}(x)} \leq 3 C \abs{y-x}^\beta.\qedhere\]
\end{proof}
As a consequence of this result we may quantify the minimal size of a ball one can put inside $A$ around $x$ in terms of $z^\star - z(x)$ and prove that $A$ has non-empty interior.
\begin{prop}[Interior points]\label{prop:interior_points}
For some constant $C = C(\alpha,d)$ the following holds:
\begin{equation}\label{eq:ball_inclusion}
\forall x\in A, \qquad B_{r(x)}(x)\subseteq A,
\end{equation}
where $r(x) \coloneqq C(z^\star-z(x))^{1/\beta}\ge 0$.
In particular \[\{x\in A: z(x) < z^\star\} \subseteq \overset{\circ}{A} \text{ and }\partial A \subseteq \{x\in A: z(x) = z^\star\}.\]
\end{prop}
\begin{proof}
It suffices to prove \eqref{eq:ball_inclusion} for $x_0\in A$ satisfying $z(x_0) < z^\star$. Consider a point $x \in A^c$ and take a point $y \in A$ which is closest to $x$: this is possible since $A$ is compact. By Lemma \ref{lem:dev_max}, we know that for small $r$
\[\Theta_{A^c}(y,r)^{1-\alpha} (z^\star - z(y))\leq C r^\beta.\]
But by construction $y$ is such that $\liminf_{r\to 0} \Theta_{A^c}(y,r) \geq 1/2$, and since the right-hand side tends to $0$, necessarily $z(y) = z^\star$. By the Hölder continuity of $z$ stated in Theorem \ref{thm: Holder},
\begin{align*}
z^\star - z(x_0) = \abs{z(y) - z(x_0)} \leq C \abs{y-x_0}^\beta \leq C \abs{x-x_0}^\beta,
\end{align*}
where the last inequality follows from the fact that $\abs{y-x_0} \leq \abs{y-x} + \abs{x-x_0} \leq 2 \abs{x-x_0}$ because $y$ minimizes the distance from $x$. Hence, for all $x\in A^c$, $\abs{x-x_0} \ge C(z^\star-z(x_0))^{1/\beta}=r(x_0)$, which implies the desired result.
\end{proof}
\section{On the dimension of the boundary}\label{sec:4}
We are interested in the dimension of the boundary $\partial A$, our guess being that it should be non-integer and lie between $d-1$ and $d$. Here we look at the Minkowski dimension (also called box-counting dimension). Given a set $X$, we denote by $N_\varepsilon(X)$ the maximum number of disjoint balls of radius $\varepsilon$ centered at points of $X$.
\begin{defn}[Minkowski dimension]
We define the upper Minkowski dimension of $X$ by
\[\overline{\dim}_M(X) = \limsup_{\varepsilon \to 0} \frac{\log(N_\varepsilon(X))}{-\log \varepsilon},\]
and the lower Minkowski dimension by
\[\underline{\dim}_M(X) = \liminf_{\varepsilon \to 0} \frac{\log(N_\varepsilon(X))}{-\log \varepsilon}.\]
When these coincide we just call it the Minkowski dimension and denote it by $\dim_M(X)$.
\end{defn}
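To make the definition concrete, here is a small illustrative computation (not used in the sequel; it counts occupied grid boxes instead of disjoint balls, a standard variant which defines the same dimension) estimating the Minkowski dimension of the middle-thirds Cantor set, which equals $\log 2/\log 3 \approx 0.63$:

```python
import math

def cantor_cells(depth):
    # Integer left endpoints, at scale 3**-depth, of the 2**depth
    # surviving intervals of the middle-thirds Cantor set.
    cells = [0]
    for _ in range(depth):
        cells = [3 * c for c in cells] + [3 * c + 2 for c in cells]
    return cells

def minkowski_estimate(depth):
    # With eps = 3**-depth, N_eps equals the number of occupied cells,
    # and the dimension estimate is log(N_eps) / (-log eps).
    n_eps = len(set(cantor_cells(depth)))
    return math.log(n_eps) / (depth * math.log(3))

# At every scale the estimate is log 2 / log 3 up to rounding errors.
print(minkowski_estimate(10))
```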
We shall get an upper bound on the upper Minkowski dimension. We say that $X$ is of dimension smaller than $\delta$ if $\overline{\dim}_M X \leq \delta$.
\begin{lem}\label{lem:volume_bound}
There is a constant $C=C(\alpha,d)$ such that for all $k \leq z^\star$,
\[\abs{\{x\in A: k< z(x) \leq z^\star\}} \leq C (z^\star -k).\]
\end{lem}
\begin{proof}
Consider the competitor $\tilde{\nu} = \mathbf{1}_{\{z \leq k\}}$ with total mass $\abs{\tilde{\nu}} = 1 - m$, where $m=\abs{\{x\in A: k<z(x) \le z^\star\}}$. As in \eqref{eq: step1eqn}, one has
\[e_\alpha (1-m)^{\alpha + 1/d} \leq e_\alpha - \alpha km \]
hence, knowing that $\alpha z^\star = (\alpha + 1/d) e_\alpha$ and expanding the left-hand side to second order, we obtain:
\[-\alpha m z^\star + \frac{e_{\alpha}}{2}\left(\alpha + \frac{1}{d}\right)\left(\alpha+\frac{1}{d}-1\right)m^2 \leq -\alpha km.\]
Thus
\[m \leq C(z^\star - k)\]
with $1/C = e_{\alpha}(\alpha + 1/d)(\alpha+1/d-1)/(2\alpha)$.
\end{proof}
\begin{thm}
The set $\partial A$ is of dimension less than $d-\beta$.
\end{thm}
\begin{proof}
For $\varepsilon > 0$ fixed, take disjoint balls $(B_i)_{i \in I}$ of radius $\varepsilon$ centered at points of $\partial A$, where $N \coloneqq \abs{I} = N_\varepsilon(\partial A)$. We set $B_i^+ = B_i \setminus A$, $B_i^- = B_i \cap A$. We split the set of balls into two parts: those which intersect $A^c$ more than $A$, and vice versa. Namely, we set
\begin{align*}
I^+ &= \{i \in I: \abs{B_i^+} \geq \abs{B_i}/2\},&
N^+ &= \abs{I^+},\\
I^- &= \{i \in I: \abs{B_i^-} \geq \abs{B_i}/2\},&
N^- &= \abs{I^-},
\end{align*}
so that $I = I^+ \cup I^-$ and $N \leq N^+ + N^-$. We are going to bound $N^+$ and $N^-$ by some power of $\varepsilon$.
\paragraph{\textbf{Step 1:} Bound on $N^-$}~
Since $z$ is $\beta$-Hölder continuous on $A$, one has for each $B_i = B_\varepsilon(x_i)$:
\[\forall x \in B_i \cap A,\qquad \abs{z(x) - z^\star} < C \varepsilon^\beta,\]
since the center $x_i$ lies in $\partial A \subseteq \{z = z^\star\}$ according to Proposition \ref{prop:interior_points}. Consequently
\[(\partial A)_\varepsilon \cap A \subseteq \{ z^\star - C \varepsilon^\beta < z \leq z^\star\},\]
where $(\partial A)_\varepsilon$ denotes the $\varepsilon$-neighborhood of $\partial A$,
thus because of Lemma \ref{lem:volume_bound}:
\[\abs{(\partial A)_\varepsilon \cap A} \leq \abs{\{ z^\star - C \varepsilon^\beta < z \leq z^\star\}} \leq C \varepsilon^\beta.\]
Using the previous inequality and the fact that $\abs{B_i^-} \geq \abs{B_i}/2 \geq C \varepsilon^d$ for $i\in I^-$, one has:
\[C N^- \varepsilon^d \leq \sum_{i \in I^-} \abs{B_i^-} \leq \abs{(\partial A)_\varepsilon \cap A} \leq C \varepsilon^\beta,\]
which implies:
\begin{equation}\label{eq:Nminus}
N^- \leq C \varepsilon^{-(d-\beta)}.
\end{equation}
\paragraph{\textbf{Step 2:} Bound on $N^+$}~
We consider the competitor $\tilde{\nu} = \mathbf{1}_{\tilde{A}}$ where $\tilde{A} = A \cup \bigcup_{i\in I^+} B_i^+$. Its mass is $\abs{\tilde{\nu}} = 1 + m$ where $m = \sum_{i\in I^+} \abs{B_i^+}$. To irrigate $\tilde{\nu}$, we send an extra mass $\abs{B_i^+}$ to each center $x_i$ along the irrigation plan, which costs $\alpha\abs{B_i^+} z^\star$, then we send this mass towards $B_i^+$, which costs at most $C\abs{B_i^+}^\alpha \varepsilon$. But the total cost must be at least $e_\alpha(1+m)^{\alpha + 1/d}$ by the scaling lemma. Moreover, a second-order expansion gives:
\begin{align*}
(1+m)^{\alpha + 1/d} &\geq 1 + (\alpha + 1/d)m + 1/2 \cdot (\alpha + 1/d)(\alpha + 1/d-1) (1+m)^{\alpha +1/d -2} m^2\\
&\geq 1 + (\alpha + 1/d)m + C m^2
\end{align*}
because, for $\varepsilon$ small enough, $1+m \leq 2$. Consequently:
\[e_\alpha \left(1 + (\alpha + 1/d)m + C m^2 \right) \leq e_\alpha + \alpha m z^\star + \sum_{i\in I^+} C\varepsilon \abs{B_i^+}^\alpha.\]
Recall that $\alpha z^\star = e_\alpha (\alpha +1/d)$, thus after simplifying one gets for some $C > 0$:
\begin{equation}\label{eq:step_Nplus}
m^2 \leq C \sum_{i\in I^+} \abs{B_i^+}^\alpha \varepsilon \leq C N^+ \varepsilon^{1+\alpha d}.
\end{equation}
Notice that for $i \in I^+$, $\abs{B_i^+} \geq \abs{B_i}/2 \geq C \varepsilon^d$, so that
\[m = \sum_{i\in I^+} \abs{B_i^+} \geq C N^+ \varepsilon^d.\]
Injecting this into \eqref{eq:step_Nplus}, one gets:
\[(N^+\varepsilon^d)^2 \leq C N^+ \varepsilon^{1+\alpha d},\]
thus
\begin{equation}\label{eq:Nplus}
N^+ \leq C \varepsilon^{1+\alpha d-2d} = C \varepsilon^{-(d-\beta)}.
\end{equation}
Putting \eqref{eq:Nminus} and \eqref{eq:Nplus} together yields:
\[N_\varepsilon(\partial A) = N \leq N^+ + N^- \leq \frac{C}{\varepsilon^{d-\beta}},\]
and
\[\overline{\dim}_M(\partial A) = \limsup_{\varepsilon \to 0} \frac{\log(N_\varepsilon(\partial A))}{-\log(\varepsilon)} \leq d-\beta,\]
which means that $\partial A$ is of dimension smaller than $d-\beta$.
\end{proof}
This result leads us to propose the following conjecture:
\begin{conj}
The boundary $\partial A$ is of dimension $d-\beta$, in the sense that:
\[\dim_H(\partial A) = \dim_M(\partial A) = d-\beta.\]
\end{conj}
Proving this requires establishing the inequality $\dim_H(\partial A) \geq d-\beta$, for which we do not have a working strategy yet.
\section{Numerical simulations}
Our goal now is to compute solutions to our shape optimization problem numerically. To perform numerical simulations, we use the Eulerian framework of branched transport, first defined by the third author in \cite{Xia03}. This framework is based on vector measures with a measure divergence, i.e. measures $v \in \mathcal{M}^d(\mathbb{R}^d)$ such that $\nabla \cdot v \in \mathcal{M}(\mathbb{R}^d)$, the set of such measures being denoted by $\mathcal{M}_{div}(\mathbb{R}^d)$. The cost is the so-called $\alpha$-mass:
\[M^\alpha(v) = \begin{dcases*}
\int \abs*{\frac{\mathop{}\mathopen{}\mathrm{d} v}{\mathop{}\mathopen{}\mathrm{d}\hdm^1}(x)}^\alpha \mathop{}\mathopen{}\mathrm{d}\hdm^1(x)& if $v$ is $1$-rectifiable,\\
+\infty& otherwise.
\end{dcases*}\]
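The subadditivity of $m \mapsto m^\alpha$ for $\alpha < 1$ is what makes the $\alpha$-mass favor branched structures. As a standard toy computation (the specific geometry below is an assumption made for illustration only), transporting two masses $\frac{1}{2}$ from the origin to the points $(1,\pm h)$ is strictly cheaper through a Y-shaped graph with a junction on the axis than through two disjoint segments:

```python
import math

alpha, h = 0.5, 0.5

# Two separate segments from the origin to (1, h) and (1, -h),
# each carrying mass 1/2: cost = 2 * (1/2)^alpha * length.
v_cost = 2 * 0.5 ** alpha * math.hypot(1.0, h)

# Y-shaped graph: one edge of mass 1 up to a junction (x, 0),
# then two edges of mass 1/2; the junction is optimized on a grid.
def y_cost(x):
    return 1.0 ** alpha * x + 2 * 0.5 ** alpha * math.hypot(1.0 - x, h)

best_y = min(y_cost(k / 1000) for k in range(1001))
assert best_y < v_cost  # branching is strictly cheaper for this alpha
```

For these values the optimal junction is at $x = 1/2$, with cost $3/2$ against $\approx 1.58$ for the two disjoint segments.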
An elliptic approximation of this functional was introduced by Oudet and the second author in \cite{OudSan} (see also \cite{SanCRAS}), in the spirit of Modica and Mortola \cite{ModMor}. The approximate functional is defined for $\varepsilon > 0$ by:
\[M^\alpha_\varepsilon(v) = \varepsilon^{-\sigma_1} \int \abs{v(x)}^\sigma \mathop{}\mathopen{}\mathrm{d} x + \varepsilon^{\sigma_2} \int \frac{\abs{\nabla v(x)}^2}{2} \mathop{}\mathopen{}\mathrm{d} x\]
for suitably chosen $\sigma,\sigma_1,\sigma_2$. It is proven in \cite{OudSan} that $M_\varepsilon^\alpha$ $\Gamma$-converges to $M^\alpha$ as $\varepsilon$ goes to $0$, for a suitable topology on $\mathcal{M}_{div}(\mathbb{R}^d)$. Moreover, the $\Gamma$-convergence result still holds when imposing an equality constraint on the divergence, $\nabla \cdot v = f_\varepsilon$, for a suitable sequence $f_\varepsilon\rightharpoonup f$, as proven in \cite{Mon17}. The results of \cite{OudSan} are proven in dimension $d=2$, but \cite{Mon15} shows how to extend them to higher dimensions in the case $\alpha>1-1/d$ (in dimension $d=2$ there is also a version of the $\Gamma$-convergence result for $\alpha \leq 1/2$). Also note that, recently, other phase-field approximations for branched transport and other network problems have been studied, see for instance \cite{BonOrlOud,ChaFerMer,BonLemSan}.
Here we adapt the approach of \cite{OudSan} to our shape optimization problem by adding this time an inequality constraint on the divergence.
Recall that the Lagrangian and Eulerian frameworks are equivalent \cite{Peg}, so that the irrigation distance may be computed in the following way:
\[d_\alpha(\mu,\nu) = \inf_v \quad \{ M^\alpha(v) \quad : \quad \nabla \cdot v = \mu - \nu\}.\]
Consequently the shape optimization problem \eqref{pb:R} rewrites, in relaxed form, as:
\begin{equation}\tag{ES}\label{pb:ES}
\min_v \quad \{ M^\alpha(v) \; :\; \mu -1 \leq \nabla \cdot v \leq \mu \} \quad \text{where $\mu=\delta_0$}.
\end{equation}
Setting $a = \mu-1$, $b = \mu$ and some mollified versions $a_\varepsilon = \mu_\varepsilon-1$, $b_\varepsilon = \mu_\varepsilon$, where $\mu_\varepsilon$ is for example a convolution of $\mu$ with a standard mollifier of suitable size $r_\varepsilon$ (e.g. $\varepsilon^{\sigma_2} r_\varepsilon^{-d} = o(1)$ as in \cite{Mon17}), we define the following approximate problem, for $\varepsilon > 0$:
\begin{equation}\label{pb:AS}\tag{AS}
\min_v \quad \{ M^\alpha_\varepsilon(v) \; :\; a_\varepsilon \leq \nabla \cdot v \leq b_\varepsilon \}.
\end{equation}
Let us remark that the above-mentioned $\Gamma$-convergence results do not allow us to say that this problem approximates \eqref{pb:ES}, as the inequality constraint on the divergence is not directly covered by these works. We leave this question for further investigation, as our aim for now is to make a first attempt at computing numerically an optimal shape for the original problem \eqref{pb:S}.
\subsection{Optimization methods}
We tackle problem \eqref{pb:AS} by descent methods. Two difficulties arise: first, the functional $M_\varepsilon^\alpha$ is not convex, hence there is no guarantee that the methods converge, and if they do, they may converge to a local minimizer which is not necessarily a global one; second, this is a constrained problem, hence we will need to compute projections or resort to proximal methods to handle the constraint. The simplest approach is to use a first-order method, for instance to perform a projected gradient descent on the functional $M^\alpha_\varepsilon$ for $\varepsilon$ fixed (but small):
\begin{pjgm}
\[\left|\begin{aligned}
v_0 &\in C\\
v_{n+1} &= p_C(v_n - \tau_n \nabla M^\alpha_\varepsilon(v_n)),
\end{aligned}\right.\]
where
\[C = \{ v : a_\varepsilon \leq \nabla \cdot v \leq b_\varepsilon\}\]
is the convex set of admissible vector fields for \eqref{pb:AS}.
\end{pjgm}
Computing the projection $p_C$ is not an easy task, all the more so as this projection should be computed at each step of the algorithm, which calls for fast computations. Actually, this projection step will be quite costly (at least in our approach), hence we need to pass to a higher-order method to get to an approximate minimizer in a reasonable number of iterations.
Recall that the projected gradient method is a particular case of the proximal gradient method, which we describe briefly. Consider a problem of the form
\[\min_v f(v) + g(v)\]
where $f$ is smooth and $g$ \enquote{proximable}, in the sense that one may easily compute its proximal operator
\[\prox^\tau_g(v) = \argmin_{v'} g(v') + \frac{1}{2\tau} \abs{v'-v}^2.\]
The proximal gradient method consists in doing at each step an explicit descent for $f$ and an implicit descent for $g$:
\begin{pxgm}
\[\left|\begin{aligned}
v_0 &\text{ given}\\
v_{n+1} &= \prox_g^{\tau_n}(v_n - \tau_n \nabla f(v_n)).
\end{aligned}\right.\]
\end{pxgm}
The projected gradient method corresponds to the case
\[g(v) = \begin{dcases*}
0&
if $v \in C$,\\
+\infty&
otherwise.
\end{dcases*}\]
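As an illustration of this special case (a minimal Python sketch, not the authors' Julia code; the toy objective and box constraint are our own), the prox of the indicator of $C$ is the projection onto $C$, so the iteration reduces to projected gradient steps:

```python
import numpy as np

def projected_gradient(grad_f, proj, x0, tau, iters=100):
    """Proximal gradient where g is the indicator of C: prox = projection."""
    x = x0.copy()
    for _ in range(iters):
        x = proj(x - tau * grad_f(x))
    return x

# Toy instance: min 1/2 |x - c|^2 subject to x in the box [0, 1]^2,
# whose solution is the projection of c onto the box, i.e. (1, 0).
c = np.array([2.0, -1.0])
x_star = projected_gradient(lambda x: x - c,                 # gradient of f
                            lambda x: np.clip(x, 0.0, 1.0),  # p_C
                            np.zeros(2), tau=0.5)
# x_star -> array([1., 0.])
```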
If there were no function $g$, we would recover the classical gradient descent method. Notice that there is an implicit choice in this method, since the gradients we compute depend on the scalar product, and there is no reason why the canonical scalar product should be well adapted to the function we want to minimize. Following the work of Lee, Sun and Saunders \cite{LeeSunSau} on Newton-type proximal methods, one may \enquote{twist} the scalar product, leading to the more general method:
\begin{tpgm}
\begin{equation}\label{met:gen_prox}
\left|\begin{aligned}
v_0 &\text{ given}\\
v_{n+1} &= \prox_g^{\tau_n,H_n}(v_n - \tau_n \nabla_{H_n} f(v_n)),
\end{aligned}\right.
\end{equation}
where $\nabla_H f(x)$ is the gradient of $f$ with respect to the scalar product $\langle x,y\rangle_H = \langle H x, y \rangle$ for $H$ an invertible self-adjoint operator, and
\[\prox^{\tau,H}_g(v) = \argmin_{v'} g(v') + \frac{1}{2\tau} \norm{v'-v}_H^2.\]
\end{tpgm}
The best quadratic model of $f$ around a point $x_0$ is
\begin{align*}
Qf_{x_0}(x) &= f(x_0) + \langle \nabla_H f(x_0) , x - x_0 \rangle_H + \frac 12 \langle x-x_0,x-x_0 \rangle_H,
\end{align*}
with $H = H_f(x_0)$ the Hessian of $f$ at $x_0$, thus it is natural to consider \eqref{met:gen_prox} with $H_n = H_f(x_n)$. Notice indeed that if $g$ is zero, the proximal operator is the identity and $\nabla_H f(v) = H^{-1} \nabla f(v)$, so that one recovers Newton's method:
\[v_{n+1} = v_n - \tau_n H_n^{-1} \nabla f(v_n),\]
which is known to converge quadratically for smooth enough $f$; this is why the method is called the \emph{proximal Newton method}. However, for large-scale problems, computing and storing the Hessian is very costly; an alternative is to take for $H_n$ an approximation of the Hessian of $f$ at $v_n$, leading to \emph{proximal quasi-Newton methods}. These methods were introduced in \cite{LeeSunSau}, to which we refer for further details and theoretical convergence results.
A very popular choice for $H_n$ is given by the L-BFGS method (see \cite{LiuNoc}), a quasi-Newton method which builds, in some sense, the \enquote{best} approximation of the Hessian at $v_n$ using only the points $v_k$ and the gradients $\nabla f(v_k)$ from a fixed number of previous steps $k = n, n-1, \ldots, n-L+1$. The advantage is that no matrix is stored, and there is a very efficient way to compute the matrix-vector product $H_n^{-1} \cdot v$ using simple algebra. Therefore, we decided to implement a proximal L-BFGS method, which in our case reads:
\begin{plbfgs}
\begin{equation}
\left|\begin{aligned}
v_0 &\text{ given}\\
v_{n+1} &= p_C^{\tilde{H}_n}(v_n - \tau_n \tilde{H}_n^{-1}\nabla f(v_n)),
\end{aligned}\right.
\end{equation}
where $\tilde{H}_n$ is the approximate Hessian computed with the L-BFGS method with $L$ steps and $p_C^{\tilde{H}_n}$ is the projection on $C$ with respect to the norm $\norm{\cdot}_{\tilde{H}_n}$.
\end{plbfgs}
The algorithm to compute the matrix-vector product $\tilde{H}_n^{-1} \cdot x$ is given in Section \ref{sec:algo}.
\subsection{Computing the projection}\label{sec:proj}
The difficulty lies in the computation of the projection, that is, of the proximal operator. A box constraint on the variable itself would be very easy to deal with, but here we face box constraints on $\nabla \cdot v$, that is, on a linear operator applied to $v$. Moreover, we want to compute a projection with respect to a twisted scalar product $\langle \cdot, \cdot \rangle_H$, which adds some extra difficulty. For simplicity of notation, we rename $a_\varepsilon, b_\varepsilon$ as $a,b$. By definition, finding the projection $p_C^H(v_0)$ of $v_0$ amounts to solving the optimization problem:
\begin{equation}\label{pb:P}\tag{P}
\min \quad \left\{\frac{\norm{v-v_0}_H^2}{2} \: : \: a \leq \nabla \cdot v \leq b,\, v // \partial \Omega\right\}.
\end{equation}
Note that, when one considers the divergence operator as an operator acting on vector fields defined on the whole $\mathbb{R}^d$ (extended to $0$ outside $\Omega$), the Neumann boundary condition above exactly corresponds to the fact that the divergence has no mass on $\partial\Omega$, which can be considered as included in the inequality constraints.
As a convex optimization problem, it admits a dual problem, which we are going to use. We set
\[\psi(w) = \begin{dcases*}
0& if $a \leq w \leq b$,\\
+\infty& otherwise,
\end{dcases*}\]
whose Legendre transform is
\[g(u) = \psi^\star(u) = \int bu_+ - \int a u_-,\]
so that $\psi = \psi^{\star\star} = g^\star$. Let us derive formally the dual problem by an $\inf - \sup$ exchange:
\begin{align*}
\inf_{v // \partial \Omega} \quad \left\{\frac{\norm{v-v_0}_H^2}{2} \: : \: a \leq \nabla \cdot v \leq b\right\}
&= \inf_v \frac 12 \norm{v-v_0}^2_H + \psi(\nabla \cdot v)\\
&= \inf_v \frac 12 \norm{v-v_0}^2_H + \sup_u -\langle \nabla u, v\rangle - g(u)\\
&= \inf_v \sup_u \frac 12 \norm{v-v_0}^2_H -\langle \nabla_H u, v\rangle_H - g(u)\\
&\geq \sup_u - g(u) + \inf_v \frac 12 \norm{v-v_0}^2_H -\langle \nabla_H u, v\rangle_H \\
&= - \inf_u g(u) + \sup_v \langle \nabla_H u, v\rangle_H - \frac 12 \norm{v-v_0}^2_H \\
&= - \inf_u g(u) + \frac{\norm{\nabla_H u}_H^2}{2} + \langle \nabla_H u , v_0 \rangle_H\\
&= - \inf_u g(u) + \frac 12 \int H^{-1} \nabla u \cdot \nabla u - \int u (\nabla \cdot v_0).
\end{align*}
Hence the dual problem reads:
\begin{equation}\label{pb:D}\tag{D}
\min_u \underbrace{\frac 12 \int H^{-1} \nabla u \cdot \nabla u - \int u (\nabla \cdot v_0)}_{f(u)} + \underbrace{\int bu_+ - \int a u_-}_{g(u)} .
\end{equation}
The $\inf - \sup$ exchange can be justified, with equality, via Fenchel duality \cite[Chapter 1]{Bre} in a well-chosen Banach space. Hence there is no duality gap:
\[ \min \eqref{pb:P} + \min \eqref{pb:D} = 0.\]
As a consequence, solving the dual problem provides a solution to the primal one: if $u$ is optimal for \eqref{pb:D}, then $v = v_0 + \nabla_H u$ is optimal for \eqref{pb:P}. Let us now justify why it is interesting to pass through the dual problem. Such a problem is of the form
\begin{equation}\label{pb:general}
\min_u f(u) + g(u),
\end{equation}
where $f$ is smooth, with gradient $\nabla f(u) = -\nabla \cdot (H^{-1} \nabla u) - \nabla \cdot v_0$, and $g$ is proximable:
\[\prox^\tau_g (u)(x) = \begin{dcases*}
u(x) - \tau a& if $u(x) < \tau a$,\\
0& if $\tau a \leq u(x) \leq \tau b$,\\
u(x) - \tau b& if $u(x) > \tau b$.
\end{dcases*}\]
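As a quick sanity check (an illustrative Python snippet; the parameter values are our own), one can verify that this two-sided shrinkage formula indeed solves the defining argmin, by brute-force minimization over a fine grid:

```python
import numpy as np

def prox_g(u, a, b, tau):
    """Pointwise prox of g(u) = b*u_+ - a*u_-: a two-sided shrinkage."""
    return np.where(u < tau * a, u - tau * a,
           np.where(u > tau * b, u - tau * b, 0.0))

a, b, tau = -1.0, 2.0, 0.7
grid = np.linspace(-10, 10, 200001)
# g pointwise: b*u_+ - a*u_-, with u_- = max(-u, 0)
g = b * np.maximum(grid, 0.0) - a * np.maximum(-grid, 0.0)
for u in [-3.0, -0.5, 0.0, 1.2, 5.0]:
    best = grid[np.argmin(g + (grid - u) ** 2 / (2 * tau))]
    assert abs(best - float(prox_g(np.array(u), a, b, tau))) < 1e-3
```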
We know how to compute the proximal operator and the gradient of $f$, since L-BFGS provides a simple method to compute the product $H^{-1} x$. Problems of the form \eqref{pb:general} with $f$ smooth (and computable gradient) and $g$ proximable can be tackled with first-order methods such as the proximal gradient method described in the previous section (also called ISTA) or a fast proximal gradient method called FISTA, introduced in \cite{BecTeb}. We opted for the latter, which is a slight modification of the proximal gradient method using an intermediary point:
\begin{equation}\tag{FISTA}\label{met:FISTA}
\left|\begin{aligned}
u_0 &\in H^1(\mathbb{R}^d),\\
\tilde{u}_n &= u_n + \lambda_n (u_n-u_{n-1}),\\
u_{n+1} &= \prox^\tau_g(\tilde{u}_n - \tau \nabla f (\tilde{u}_n)),
\end{aligned}\right.
\end{equation}
where $\lambda_n$ is given by a recursive formula (we refer to \cite{BecTeb} for the details). It enjoys a theoretical and practical rate of convergence which is higher than that of ISTA, namely the optimal $O(1/n^2)$ rate for first-order methods:
\[f(u_n) - f_{opt} \leq \frac{2 L_f \abs{u_0 - u_{opt}}^2}{(n+1)^2}.\]
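For concreteness, here is a minimal Python sketch of the scheme with the usual choice $t_0 = 1$ and $\lambda_n = (t_{n-1}-1)/t_n$, applied to a toy $\ell^1$-regularized problem of our own choosing (not the dual problem of the paper):

```python
import numpy as np

def fista(grad_f, prox_g, x0, tau, iters=50):
    """FISTA: proximal gradient with Nesterov-type extrapolation."""
    x, x_prev, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        t_prev, t = t, (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x + ((t_prev - 1) / t) * (x - x_prev)   # intermediary point
        x_prev = x
        x = prox_g(y - tau * grad_f(y))
    return x

# Toy instance: min 1/2 |x - c|^2 + lam * |x|_1, whose exact solution
# is the soft-thresholding of c by lam.
c, lam, tau = np.array([3.0, -0.2, 1.0]), 0.5, 1.0
soft = lambda z: np.sign(z) * np.maximum(np.abs(z) - tau * lam, 0.0)
x = fista(lambda z: z - c, soft, np.zeros(3), tau)
# x -> array([2.5, 0. , 0.5])
```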
\subsection{Algorithms and numerical experiments}\label{sec:algo}
Following the work of \cite{OudSan}, we discretize our problem on a staggered grid: we divide the cube $Q = [-1,1]^2$ into $M^2$ subcubes of side $2/M$; the functions $U$ are defined at the centers of the small cubes, while the $x$ component $V^x$ of a vector field $V$ is defined on the vertical edges of the grid and the $y$ component $V^y$ on the horizontal edges. This is quite convenient for computing the discrete divergence of a vector field and the discrete gradient of a function.
\begin{itemize}
\item Unknowns: $(V^x_{i,j})_{\substack{1\leq i \leq M+1\\1\leq j \leq M}}$, $(V^y_{i,j})_{\substack{1\leq i \leq M\\1\leq j \leq M+1}}$, with
\[V^x_{1,j} = V^x_{M+1,j} = V^y_{i,1} = V^y_{i,M+1} = 0,\]
which means that $V$ is parallel to the boundary.
\item Objective function:
\[F(V) = \varepsilon^{-\sigma_1} h^2 \sum_{i,j} N(\hat{V}_{i,j})^\sigma + \varepsilon^{\sigma_2} h^2/2 \left( \sum_{i,j} \abs{\nabla_{i,j} V^x}^2 + \sum_{i,j} \abs{\nabla_{i,j} V^y}^2\right).\]
Several definitions are needed to make sense of $F$. First of all, $N$ is a smooth approximation of the norm, of the form
\[N(x) = (\abs{x}^2 + \varepsilon_s^2)^{1/2} \quad \text{ for $\varepsilon_s$ small.}\]
The discrete vector field $\hat{V}_{i,j} = (\hat{V}^x_{i,j},\hat{V}^y_{i,j})$ is an interpolation of $(V^x,V^y)$ defined at the centers of the cubes:
\[\hat{V}^x_{i,j} = \frac{V^x_{i,j}+V^x_{i+1,j}}{2}, \quad \hat{V}^y_{i,j} = \frac{V^y_{i,j}+V^y_{i,j+1}}{2}, \quad 1 \leq i,j \leq M.\]
Finally the discrete gradient is defined as usual by
\begin{align*}
\nabla_{i,j} V^x &= ((V^x_{i,j+1}-V^x_{i,j})/h,(V^x_{i+1,j}-V^x_{i,j})/h),&
1 \leq i\leq M-1&,\ 1 \leq j\leq M,\\
\nabla_{i,j} V^y &= ((V^y_{i,j+1}-V^y_{i,j})/h,(V^y_{i+1,j}-V^y_{i,j})/h),&
1 \leq j\leq M-1&,\ 1 \leq i\leq M.
\end{align*}
\end{itemize}
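As a consistency check of this discretization (an illustrative NumPy sketch with our own array conventions, not the authors' Julia code), the discrete gradient and divergence are adjoint up to sign, $\langle \nabla U, V\rangle = -\langle U, \nabla\cdot V\rangle$, exactly, whenever $V$ is parallel to the boundary:

```python
import numpy as np

M = 8
h = 2.0 / M
rng = np.random.default_rng(0)

# V^x on vertical edges, V^y on horizontal edges, vanishing on the boundary
Vx = rng.standard_normal((M + 1, M)); Vx[0, :] = Vx[M, :] = 0.0
Vy = rng.standard_normal((M, M + 1)); Vy[:, 0] = Vy[:, M] = 0.0
U = rng.standard_normal((M, M))            # values at cell centers

# Discrete divergence at cell centers
divV = (Vx[1:, :] - Vx[:-1, :]) / h + (Vy[:, 1:] - Vy[:, :-1]) / h

# Discrete gradient of U on interior edges (zero on boundary edges)
Gx = np.zeros_like(Vx); Gx[1:M, :] = (U[1:, :] - U[:-1, :]) / h
Gy = np.zeros_like(Vy); Gy[:, 1:M] = (U[:, 1:] - U[:, :-1]) / h

lhs = np.sum(Gx * Vx) + np.sum(Gy * Vy)    # <grad U, V>
rhs = -np.sum(U * divV)                    # -<U, div V>
assert abs(lhs - rhs) < 1e-10 * max(1.0, abs(lhs))
```

This summation-by-parts identity is what makes the discrete dual problem consistent with the continuous one.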
We may now give the main algorithm and its sub-methods.
\begin{algorithm}[H]
\caption{Proximal L-BFGS for $F$}
\begin{algorithmic}
\State Data: tolerance $tol$, initial vector field $V_0$, step $\tau_0$, source $\delta$
\State $V \gets V_0$, $U \gets U_0$
\State \textbf{compute} $error$
\While {$error > tol$}
\State $\tau \gets \tau_0$
\Repeat
\State $G \gets \textsc{MultiplyBFGS}(\nabla F (V))$
\State $V,U \gets \textsc{Project}(V - \tau G,U,\delta,\tau)$
\State $\tau \gets \tau/2$
\Until{$F(V)$ has decreased}
\State \textbf{update} L-BFGS data
\State \textbf{compute} $error$
\EndWhile
\end{algorithmic}
\end{algorithm}
The update step for L-BFGS data consists in storing in $Y,Z,r$ the points and gradients of the $L$ previous steps, so that at step $n$:
\[Y_{L-k} = \nabla F(V_{n-k}) - \nabla F(V_{n-k-1}), \quad Z_{L-k} = V_{n-k} - V_{n-k-1}\]
for all $k = 0, \ldots, L-1$, and $r_k = 1/(Y_k \cdot Z_k)$ for all $k = 0, \ldots, L-1$.
Notice here that we perform a simple backtracking line search, reducing the stepsize $\tau$ until the energy has decreased (one could instead require a sufficient decrease in the sense of the Armijo rule). Also, notice that the potential $U$ computed at step $n$ is used as initial data at the next step; this trick greatly speeds up the computation of the projection. Finally, we took as error measurement a relative difference between two consecutive steps.
Now, as stated in Section \ref{sec:proj}, the projection on $C$ with respect to $\norm{\cdot}_H$ is computed via the FISTA method, as follows:
\begin{algorithm}[H]
\caption{Project $V_0$ on $C$ with respect to $\norm{\cdot}_H$}
\begin{algorithmic}
\State Data: tolerance $tol_p$, step $\tau_p$
\Function{Project}{$V_0,U_0,\delta,\tau$}
\State $D_0 \gets \nabla \cdot V_0$
\State $U \gets U_0$
\While{$error > tol_p$}
\State $t_p \gets t;\; t\gets (1+\sqrt{1+ 4 t_p^2})/2;\; s \gets (t_p-1)/t$
\State $G \gets \textsc{MultiplyBFGS}(\nabla U)$
\State $U_i \gets U + s(U-U_{old})$
\State $U_{old} \gets U$
\State $U \gets \textsc{Prox}(U_i - \tau_p(\nabla \cdot G-D_0),\delta,\tau)$
\State \textbf{compute} $error$
\EndWhile
\State $V \gets V_0 + \textsc{MultiplyBFGS}(\nabla U)$
\State \textbf{return} $V,U$
\EndFunction
\end{algorithmic}
\end{algorithm}
The \textsc{Prox} function is just the proximal operator associated with the discrete counterpart of $g : u \mapsto \int b u_+ - \int a u_-$ where $a = \delta - 1, b = \delta$. Thus $P = \textsc{Prox}(U,\delta,\tau)$ is defined by:
\[P_i = \begin{dcases*}
U_i - \tau (\delta_i -1)&
if $U_i < \tau(\delta_i -1)$,\\
0&
if $\tau(\delta_i -1) \leq U_i \leq \tau \delta_i$,\\
U_i - \tau \delta_i&
if $U_i > \tau\delta_i$.
\end{dcases*}\]
For the sake of completeness, we give a simple method to compute the L-BFGS multiplication $H^{-1} X$ (see \cite{LiuNoc,Noc} for details).
\begin{algorithm}[H]
\caption{L-BFGS multiplication $H^{-1} X$}
\begin{algorithmic}
\Function{MultiplyBFGS}{$X$}
\State $G \gets X$
\For{$ i = L, \ldots, 1$}
\State $s_i \gets r_i Z_i \cdot G$
\State $G \gets G - s_i Y_i$
\EndFor
\State $G \gets (Z_L \cdot Y_L) / (Y_L \cdot Y_L) \: G$
\For{$ i = 1, \ldots, L$}
\State $t \gets r_i Y_i \cdot G$
\State $G \gets G + (s_i - t) Z_i $
\EndFor
\State \textbf{return} $G$
\EndFunction
\end{algorithmic}
\end{algorithm}
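For reference, here is an illustrative NumPy version of this two-loop recursion (our own array conventions, with the pairs $Z_k, Y_k$ stored oldest to newest). A standard correctness check is the secant property: the resulting inverse Hessian approximation maps the most recent gradient difference $Y_L$ exactly to the most recent step $Z_L$:

```python
import numpy as np

def multiply_bfgs(x, Z, Y):
    """Two-loop recursion: apply the L-BFGS inverse Hessian H^{-1} to x.
    Z[k] are previous steps, Y[k] gradient differences, oldest to newest."""
    r = [1.0 / (y @ z) for y, z in zip(Y, Z)]
    g = x.copy()
    s = [0.0] * len(Z)
    for i in reversed(range(len(Z))):            # backward loop
        s[i] = r[i] * (Z[i] @ g)
        g = g - s[i] * Y[i]
    g = (Z[-1] @ Y[-1]) / (Y[-1] @ Y[-1]) * g    # initial scaling
    for i in range(len(Z)):                      # forward loop
        t = r[i] * (Y[i] @ g)
        g = g + (s[i] - t) * Z[i]
    return g

rng = np.random.default_rng(1)
A = np.diag([1.0, 2.0, 3.0, 4.0])            # a model SPD Hessian
Z = [rng.standard_normal(4) for _ in range(5)]
Y = [A @ z for z in Z]                       # ensures curvature y @ z > 0
assert np.allclose(multiply_bfgs(Y[-1], Z, Y), Z[-1])   # secant condition
```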
We present some numerical results obtained with $\varepsilon_s = 10^{-4}$, on a $M \times M$ grid with $M = 201$ and $\varepsilon = 3h$ where $h = 2/M$, the code being written in Julia. We have started with random initial values for $V$ and a smooth approximation $\delta$ of the Dirac $\delta_0$. After some days of computation on a standard laptop, one gets the following shapes and underlying networks.
Unsurprisingly, the shape for $\alpha=0.85$ is rounder than those obtained for $\alpha=0.55$ and $\alpha=0.65$. These two are quite similar, but a simple zoom shows that the one with the smaller value of $\alpha$ is slightly more irregular than the other. The corresponding irrigation networks are also consistent with the expected results: the branches have larger multiplicity (close to the origin) for smaller $\alpha$.
\begin{figure}[H]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{M201_rand_alpha055_nUs.png}
\caption{Norm of the vector field, $\alpha = 0.55$}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{M201_rand_alpha055_divs.png}
\caption{Irrigated measure, $\alpha = 0.55$}
\end{subfigure}
\\
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{M201_rand_alpha065_nUs.png}
\caption{Norm of the vector field, $\alpha = 0.65$}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{M201_rand_alpha065_divs.png}
\caption{Irrigated measure, $\alpha = 0.65$}
\end{subfigure}
\\
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{M201_rand_alpha085_nUs.png}
\caption{Norm of the vector field, $\alpha = 0.85$}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{M201_rand_alpha085_divs.png}
\caption{Irrigated measure, $\alpha = 0.85$}
\end{subfigure}
\caption{Algorithm output for different $\alpha$'s after $\sim 15 000$--$25 000$ iterations ($e$ stands for the computed optimal value, which is an approximation of $e_\alpha$, and $M$ for the number of discretization points on each side of the domain).}
\end{figure}
\noindent {\bf Acknowledgments and Conflicts of Interest.} The support of the ANR project ANR-12-BS01-0014-01 GEOMETRYA and of the PGMO project MACRO, of EDF and Fondation Math\'ematique Jacques Hadamard, is gratefully acknowledged. The work started during a visit of the second author to UC Davis, and also profited from a visit of the third author to Univ. Paris-Sud at Orsay; both Mathematics Departments are acknowledged for their warm hospitality.
The authors declare that no conflict of interest exists concerning this work.
\printbibliography
\end{document}
% arXiv:2107.02724 --- The proportion of derangements characterizes the symmetric and alternating groups
\section{Introduction}
\subsection{Derangements in permutation groups}
Motivated by an application to monodromy groups, we prove the following.
\begin{theorem}
\label{T:main}
Let $G$ be a subgroup of the symmetric group $S_n$ for some $n \ge 1$.
Let $C$ be a coset of $G$ in $S_n$.
If
\begin{equation}
\label{E:C}
\frac{|\{\sigma\in C : \sigma\ \textup{has no fixed points}\}|}{|C|}=\frac{|\{\sigma\in S_n : \sigma\ \textup{has no fixed points}\}|}{|S_n|},
\end{equation}
then $G=C=S_n$.
\end{theorem}
Elements of $S_n$ with no fixed points are called derangements.
Let $D_n$ be the number of derangements in $S_n$.
The right side of \eqref{E:C} is
\[
\frac{D_n}{n!} = \sum_{i=0}^n \frac{(-1)^i}{i!};
\]
see \cite{Stanley2012}*{Example~2.2.1}, for instance.
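Both the recurrence $D_n = nD_{n-1} + (-1)^n$ and this alternating-sum identity are easy to check with exact rational arithmetic; the following Python snippet is purely illustrative:

```python
from fractions import Fraction
from math import factorial

def derangements(n):
    """D_n via the recurrence D_n = n*D_{n-1} + (-1)^n, with D_0 = 1."""
    d = 1
    for k in range(1, n + 1):
        d = k * d + (-1) ** k
    return d

assert [derangements(n) for n in range(6)] == [1, 0, 1, 2, 9, 44]
for n in range(12):
    assert Fraction(derangements(n), factorial(n)) == \
        sum(Fraction((-1) ** i, factorial(i)) for i in range(n + 1))
```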
When the denominator of $D_n/n!$ in lowest terms is $n!$,
the conclusion of Theorem~\ref{T:main} follows immediately,
but controlling $\gcd(D_n,n!)$ in general is nontrivial.
Our proof requires an irrationality measure for $e$,
divisibility properties of $D_n$,
and a bound on the orders of primitive permutation groups.
\begin{remark}
The proof shows also that for $n \ge 5$, if $C$ is not necessarily a coset but just any subset of $S_n$ having the same size as $G$, then \eqref{E:C} implies that $G$ is $A_n$ or $S_n$. In fact, we prove that if a subgroup $G$ of $S_n$ has order divisible by the denominator of $D_n/n!$, then $G$ is $A_n$ or $S_n$.
\end{remark}
\begin{remark}
We also prove an analogue of Theorem~\ref{T:main} in which both appearances of $S_n$ on the right side of \eqref{E:C} are replaced by the alternating group $A_n$ for some $n \ge 7$; see Theorem~\ref{T:alternating}.
But there are counterexamples for smaller alternating groups.
For example, the order~$10$ dihedral group in $A_5$ has the same proportion of derangements as $A_5$, namely $4/10=24/60$.
\end{remark}
\subsection{Application to monodromy}
Let $\mathbb{F}_q$ be the finite field of $q$ elements.
Let $f(T)\in\mathbb{F}_q[T]$ be a polynomial of degree $n$.
Birch and Swinnerton-Dyer \cite{Birch-Swinnerton-Dyer1959} define what it means for $f$ to be ``general'' and estimate the proportion of field elements in the image of a general $f$:
\[
\frac{|f(\mathbb{F}_q)|}{q} = 1 - \sum_{i=0}^n\frac{(-1)^i}{i!} + O_n(q^{-1/2}).
\]
More generally, let $f\colon X\to Y$ be a degree~$n$ generically \'etale morphism of schemes of finite type over $\mathbb{F}_q$, with $Y$ geometrically integral.
The geometric and arithmetic monodromy groups $G$ and $A$ are subgroups of $S_n$ fitting in an exact sequence
\[
1 \longrightarrow G \longrightarrow A \longrightarrow \Gal(\mathbb{F}_{q^r}/\mathbb{F}_q) \longrightarrow 1
\]
for some $r \ge 1$; see \cite{Entin2021}*{Section~4} for an exposition.
Let $C$ be the coset of $G$ in $A$ mapping to the Frobenius generator of $\Gal(\mathbb{F}_{q^r}/\mathbb{F}_q)$.
Let $M$ be a bound on the geometric complexity of $X$ and $Y$.
Assume that $Y(\mathbb{F}_q) \ne \emptyset$, which is automatic if $q$ is large relative to $M$.
Then the Lang--Weil bound implies
\begin{equation}
\label{E:meaning of fraction}
\frac{|f(X(\mathbb{F}_q))|}{|Y(\mathbb{F}_q)|}=\frac{|\{\sigma\in C : \sigma\ \textup{has at least one fixed point}\}|}{|C|}+O_{n,M}(q^{-1/2});
\end{equation}
see \cite{Entin2021}*{Theorem 3}, for example.
In particular, if $G=S_n$, then
\begin{equation}
\frac{|f(X(\mathbb{F}_q))|}{|Y(\mathbb{F}_q)|}=1-\sum_{i=0}^n\frac{(-1)^i}{i!}+O_{n,M}(q^{-1/2}).
\label{E:image_point_count}
\end{equation}
We prove a \emph{converse}, that an estimate as
in \eqref{E:image_point_count}
on the proportion of points in the image
implies that the geometric monodromy group of $f$ is the full symmetric group $S_n$:
\begin{corollary}
Given $n$ and $M$, there exists an effectively computable constant $c=c(n,M)$ such that for any $f \colon X \to Y$
as above, with $\deg f=n$ and the complexities of $X$ and $Y$ bounded by $M$, if
\[
\frac{|f(X(\mathbb{F}_q))|}{|Y(\mathbb{F}_q)|} = 1 - \sum_{i=0}^n \frac{(-1)^{i}}{i!} + \epsilon, \quad \text{where $|\epsilon| < \frac{1}{n!} - c q^{-1/2}$,}
\]
then $G=S_n$.
\label{Cor:application}
\end{corollary}
\begin{proof}
Combine \eqref{E:meaning of fraction} and Theorem~\ref{T:main}.
\end{proof}
\begin{remark}
We originally proved Corollary~\ref{Cor:application} in order to prove a version of \cite{Poonen-Slavov2020}*{Theorem~1.9}, about specialization of monodromy groups, but later we found a more natural argument.
\end{remark}
\subsection{Structure of the paper}
The proof of Theorem~\ref{T:main} occupies the rest of the paper, which is divided into sections
according to the properties of $G$.
Throughout, we assume that $G$, $C$, and $n$ are such that \eqref{E:C} holds.
The cases with $n \le 4$ can be checked directly,
so assume that $n \ge 5$ and $G \ne S_n$.
\section{Primitive permutation groups}
\label{S:primitive}
The proportion of derangements in $A_n$ is given by the inclusion-exclusion formula;
it differs from $D_n/n!$ by the nonzero quantity $\pm(n-1)/n!$.
The proportion for $S_n$ is the average of the proportions for $A_n$ and $S_n-A_n$,
so the proportion for $S_n-A_n$ also differs from $D_n/n!$.
Thus $G \ne A_n$.
Suppose that $G$ is primitive, $n \ge 5$, and $G \ne A_n,S_n$.
The main theorem in \cite{Praeger-Saxl1980}\footnote{This is independent of the classification of finite simple groups. Using the classification, \cite{Maroti2002} gives better bounds.}
gives $|G|<4^n$.
On the other hand, $D_n/n!$ is close to $1/e$ and hence cannot equal a rational number with small denominator; this will show that $|G|$ is at least about $\sqrt{n!}$.
These will give a contradiction for large $n$.
We now make this precise.
Let $a=|\{\sigma\in C : \sigma\ \textup{has no fixed points}\}|$ and $b=|C|=|G|$, so $a \le b = |G| < 4^n$.
Then
\[\left|\frac{a}{b}-\frac{1}{e}\right|=\left|\frac{D_n}{n!}-\frac{1}{e}\right|<\frac{1}{(n+1)!}.\]
No rational number with numerator $\le 4$ is within $1/6!$ of $1/e$, so $a \ge 5$.
By the main result of \cite{Okano1992} (see also \cite{Alzer1998}),
\[
\left| e - \frac{b}{a} \right| > \frac{\log \log a}{3 a^2 \log a}.
\]
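Numerically, this irrationality measure can be spot-checked against the best rational approximations $b/a$ of $e$ (an illustrative Python sweep; the proof relies on the cited theorem, not on this check):

```python
import math

# For each denominator a, the best numerator is b = round(e * a);
# check |e - b/a| > log log a / (3 a^2 log a) over a range of a.
for a in range(5, 2001):
    b = round(math.e * a)
    gap = abs(math.e - b / a)
    bound = math.log(math.log(a)) / (3 * a * a * math.log(a))
    assert gap > bound
```

The margin is tightest at the continued-fraction denominators of $e$ (such as $a=71$ and $a=1001$), as expected.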
Combining the two displayed inequalities yields
\begin{equation}
\label{E:big inequality}
\frac{1}{(n+1)!} > \left|\frac{a}{b}-\frac{1}{e}\right|
= \frac{a}{b e}\left|e-\frac{b}{a}\right|
> \frac{1}{b e}\cdot\frac{\log\log a}{3a\log a}
> \frac{\log \log 4^n}{3 e (4^n)^2 \log 4^n};
\end{equation}
the last step uses that $a,b < 4^n$ and
that $\dfrac{\log\log x}{x\log x}$
is decreasing for $x\geq 5$.
Inequality~\eqref{E:big inequality} implies $n \le 41$.
Let $d_n$ be the denominator of the rational number $\dfrac{D_n}{n!} = \dfrac{a}{b}$.
Then $d_n \mid b$, so $d_n \le b < 4^n$.
For $11 < n \le 41$, the inequality $d_n < 4^n$ fails.
For $n \le 11$, a Magma computation \cite{Magma} shows that there are no degree $n$ primitive subgroups $G \ne A_n,S_n$ for which $d_n \mid b$.
\section{Imprimitive but transitive permutation groups}
\label{S:imprimitive}
Suppose that $G$ is imprimitive but transitive.
Then $G$ preserves a partition of $\{1,\ldots,n\}$ into $l$ subsets of equal size $k$,
for some $k,l \ge 2$ with $kl=n$.
The subgroup of $S_n$ preserving such a partition has order $(k!)^l l!$ (it is a wreath product $S_k \wr S_l$).
Thus $|G|$ divides $(k!)^l l!$.
For a prime $p$, let $\nu_p$ denote the $p$-adic valuation.
Since $\dfrac{a}{|G|} = \dfrac{D_n}{n!}$, every prime $p \nmid D_n$ satisfies $\nu_p(n!) \le \nu_p(|G|) \le \nu_p((k!)^l l!) \le \nu_p(n!)$.
Thus for every prime $p\nmid D_n$, the inequality
$ \nu_p((k!)^l l!) \le \nu_p(n!)$
is an equality.
The third of the three following lemmas will prove that this is impossible for $n \ge 5$.
\begin{lemma}
Let $k, l\geq 2$ and let $p$ be a prime. The inequality
\begin{equation}
\nu_p((k!)^l l!)\leq \nu_p((kl)!)
\label{nu_p_factorial_inequality}
\end{equation}
is an equality if and only if at least one of the following holds:
\begin{itemize}
\item $k$ is a power of $p$;
\item there are no carry operations in the $l$-term addition $k+\cdots+k$ when $k$ is written in base $p$ $($in particular, $l<p$$)$.
\end{itemize}
\label{equality_case}
\end{lemma}
\begin{proof}
Let $s_p(k)$ denote the sum of the $p$-adic digits of a positive integer $k$; then $\nu_p(k!)=\dfrac{k-s_p(k)}{p-1}$.
Thus equality in \eqref{nu_p_factorial_inequality} is equivalent to equality in
\begin{equation}
l+s_p(kl)\leq ls_p(k)+s_p(l).
\label{nu_p_digits_inequality}
\end{equation}
We always have
\begin{equation}
l+s_p(kl)\leq l+s_p(k)s_p(l)\leq ls_p(k)+s_p(l);
\label{E:two steps}
\end{equation}
the first follows from $s_p(kl)\leq s_p(k)s_p(l)$,
and the second is simply
\[
(s_p(k)-1)(l-s_p(l))\geq 0.
\]
Thus equality in~\eqref{nu_p_digits_inequality} is equivalent to equality in both inequalities of~\eqref{E:two steps}.
The second inequality of~\eqref{E:two steps} is an equality
if and only if either $k$ is a power of $p$ or $l<p$;
in each case, we must check when equality holds in the first inequality~\eqref{E:two steps},
i.e., when $s_p(kl) = s_p(k) s_p(l)$.
If $k$ is a power of $p$, then it holds.
If $l<p$, then it holds if and only if $s_p(kl) = l s_p(k)$,
which holds if and only if there are no carry operations in the $l$-term addition
$k+\cdots+k$ when $k$ is written in base $p$.
\end{proof}
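Lemma~\ref{equality_case}, together with Legendre's formula $\nu_p(k!)=\sum_{i\geq 1}\lfloor k/p^i\rfloor=(k-s_p(k))/(p-1)$, can be verified by brute force for small parameters; this Python check is illustrative only:

```python
def nu_fact(n, p):
    """nu_p(n!) by Legendre's formula sum_{i>=1} floor(n/p^i)."""
    v, q = 0, p
    while q <= n:
        v += n // q
        q *= p
    return v

def s(n, p):
    """Sum of the base-p digits of n."""
    total = 0
    while n:
        total += n % p
        n //= p
    return total

for p in (2, 3, 5, 7):
    for k in range(2, 13):
        for l in range(2, 13):
            equal = l * nu_fact(k, p) + nu_fact(l, p) == nu_fact(k * l, p)
            # criterion: k a power of p, or no carries in the l-fold sum
            crit = s(k, p) == 1 or s(k * l, p) == l * s(k, p)
            assert equal == crit
            assert nu_fact(k, p) == (k - s(k, p)) // (p - 1)
```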
The following lemma will help us produce primes $p$ not dividing $D_n$.
\begin{lemma}
\label{L:supply_primes}
For $0 \le m \le n$, we have $D_n \equiv (-1)^{n-m} D_m \pmod{n-m}$.
In particular,
\begin{align}
\label{E:D mod n} D_n &\equiv \pm 1\pmod{n}\\
\label{E:D mod n-2} D_n &\equiv \pm 1\pmod{n-2}\\
\label{E:D mod n-3} D_n &\equiv \pm 2\pmod{n-3}.
\end{align}
\end{lemma}
\begin{proof}
Reduce each term in $D_n$ modulo $n-m$; most of them are $0$.
\end{proof}
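The congruences are easy to test directly; for instance, in Python (illustrative only):

```python
def derangements(n):
    """D_n via the recurrence D_n = n*D_{n-1} + (-1)^n, with D_0 = 1."""
    d = 1
    for k in range(1, n + 1):
        d = k * d + (-1) ** k
    return d

for n in range(2, 25):
    for m in range(n - 1):                      # so that n - m >= 2
        assert (derangements(n) - (-1) ** (n - m) * derangements(m)) % (n - m) == 0
    assert derangements(n) % n in (1, n - 1)    # D_n = +-1 (mod n)
```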
\begin{lemma}
Let $k,l\geq 2$. Set $n=kl$ and assume $n>4$. Then there exists a prime $p\nmid D_n$ such that
\[\nu_p((k!)^l l!)< \nu_p(n!).\]
\label{L:imprimitive_main_lemma}
\end{lemma}
\begin{proof}
{\bfseries Case 1. $l\geq 3$ and $n-2$ is not a power of $2$.}
Let $p\geq 3$ be a prime with $p\mid n-2$.
By \eqref{E:D mod n-2}, $p\nmid D_n$, so $\nu_p((k!)^l l!) = \nu_p(n!)$.
Apply Lemma~\ref{equality_case}.
If $k$ is a power of $p$, then $p$ divides $k$, which divides $n$,
so $p\mid n-(n-2)=2$, contradicting $p \ge 3$.
Otherwise, there are no carry operations in the $l$-term addition $k+\cdots+k$ in base $p$.
This is impossible because the last digit of $n$ is $2$ (since $p\mid n-2$ and $p\geq 3$) and $l\geq 3$.
\medskip
{\bfseries Case 2. $l=2$.}
Then $2 \mid n$.
By \eqref{E:D mod n}, $2 \nmid D_n$.
By Lemma \ref{equality_case}, $k$ is a power of $2$ (the no-carry alternative would require $l < p = 2$).
Thus $n=2k$ is a power of $2$.
Since $n \ge 5$, there exists a prime $p \mid n-3$.
Since $n$ is a power of $2$, this implies $p \ge 5$.
By \eqref{E:D mod n-3}, $p\nmid D_n$. Apply Lemma~\ref{equality_case}. Note that $k$ is not a power of $p$, since $k$ is a power of $2$ and $p\neq 2$. Therefore, there are no carry operations in $k+k=n$,
so the last digit of $n$ is even.
But $p \mid n-3$ and $p \ge 5$, so the last digit of $n$ is $3$.
\medskip
{\bfseries Case 3. $l=3$ and $n-2$ is a power of $2$.}
Then $3\mid n$.
By \eqref{E:D mod n}, $3 \nmid D_n$.
By Lemma~\ref{equality_case}, $k$ must be a power of $3$ (the no-carry alternative would require $l < p = 3$).
Then $n=3k$ is a power of $3$, contradicting the fact that $n$ is even.
\medskip
{\bfseries Case 4. $l>3$ and $n-2$ is a power of $2$.}
In particular, $n = kl > 6$.
Then $n-3$ is not a power of $3$,
because otherwise we would have a solution to $3^u=2^v-1$ with $u>1$,
whereas the only solution in positive integers is $(u,v)=(1,2)$
(proof: $3\mid 2^v-1$, so $v$ is even, so $2^{v/2}-1$ and $2^{v/2}+1$ are powers of $3$ that differ by $2$, so they are $1$ and $3$).
Let $p\neq 3$ be a prime divisor of $n-3$.
Then $p\geq 5$.
Apply \eqref{E:D mod n-3} and Lemma~\ref{equality_case}.
If $k$ is a power of $p$, then $p\mid n$, so $p \mid n-(n-3)=3$, contradicting $p \ne 3$.
Therefore, there are no carry operations in the $l$-term addition $k+\dots+k$. This is impossible, since the last digit of $kl$ is $3$ (since $p\mid n-3$ and $p\geq 5$) and $l>3$.
\end{proof}
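The elementary Diophantine fact used in Case 4, that $(u,v)=(1,2)$ is the only solution in positive integers of $3^u = 2^v - 1$, can be spot-checked by brute force (illustrative; the proof above is self-contained):

```python
# Search for solutions of 3^u = 2^v - 1 with 1 <= u, v <= 30.
sols = [(u, v) for u in range(1, 31) for v in range(1, 31)
        if 3 ** u == 2 ** v - 1]
assert sols == [(1, 2)]
```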
\section{Intransitive permutation groups}
\label{S:intransitive}
Suppose that $G$ is intransitive.
Then $G$ embeds in $S_u \times S_v \subset S_n$ for some $u,v\ge 1$ with $u+v=n$.
Consider a prime $p\mid n$. By \eqref{E:D mod n}, $p\nmid D_n$.
Then, analogously to the second paragraph of Section~\ref{S:imprimitive},
$\nu_p(n!) \le \nu_p(|G|) \le \nu_p(u! \, v!) \le \nu_p(n!)$,
so $\nu_p(u!) + \nu_p(v!) = \nu_p(n!)$; equivalently, $s_p(u) + s_p(v) = s_p(n)$.
So there are no carry operations in $u+v$.
Let $e=\nu_p(n)$, so the last $e$ base $p$ digits of $n$ are zero;
then the same holds for $u$ and $v$.
In other words, $p^e\mid u,v$ as well.
Since this holds for each $p\mid n$, we conclude that $n\mid u,v$.
This contradicts $0 < u,v < n$.
This completes the proof of Theorem~\ref{T:main}.
\section{Alternating group}
\begin{theorem}
Let $G$ be a subgroup of the symmetric group $S_n$ for some $n\geq 7$. Let $C$ be a coset of $G$ in $S_n$ having the same proportion of fixed-point-free elements as $A_n$. Then $G=A_n$.
\label{T:alternating}
\end{theorem}
\begin{remark}
For $n\leq 6$, the subgroups of $S_n$ other than $A_n$ for which some coset has the same proportion as $A_n$, up to conjugacy, are
\begin{itemize}
\item the order~$4$ subgroup of $S_4$ generated by $(1423)$ and $(12)(34)$;
\item the order~$4$ subgroup of $S_4$ generated by $(34)$ and $(12)(34)$;
\item the order~$8$ subgroup of $S_4$;
\item the subgroups of $S_5$ of order $5$, $10$, or $20$;
\item the order~$36$ subgroup of $S_6$ generated by $(1623)(45)$, $(12)(36)$, $(124)(365)$, and $(142)(365)$;
\item the order~$36$ subgroup of $S_6$ generated by $(13)(25)(46)$, $(14)(36)$, $(154)(236)$, and $(145)(236)$.
\end{itemize}
\end{remark}
The proof of Theorem~\ref{T:alternating} follows the proof of Theorem~\ref{T:main};
we highlight only the differences.
The proportion of fixed-point-free elements in $A_n$ is $E_n/n!$, where
$E_n\colonequals D_n+(-1)^{n-1}(n-1)$.
\subsection{Primitive permutation groups}
Suppose $G\neq A_n$.
The first paragraph of Section~\ref{S:primitive} shows that $G \ne S_n$.
For $7 \le n \le 13$, we use Magma to check Theorem~\ref{T:alternating} for each primitive subgroup of $S_n$.
So assume $n \ge 14$.
Define $a$ and $b$ as in Section~\ref{S:primitive}.
We have
\[
\left|\frac{a}{b}-\frac{1}{e}\right| = \left| \frac{E_n}{n!} - \frac{1}{e} \right| \le \left| \frac{E_n-D_n}{n!} \right| + \left| \frac{D_n}{n!} - \frac{1}{e} \right|<
\frac{n-1}{n!} + \frac{1}{(n+1)!} =\frac{n^2}{(n+1)!}.
\]
No $a/b$ with $a<5$ is within $15^2/16!$ of $1/e$, so $a \ge 5$.
Inequality~\eqref{E:big inequality} with $1/(n+1)!$ replaced by $n^2/(n+1)!$ implies $n \le 49$.
Let $e_n$ be the denominator of $E_n/n!$, so $e_n$ divides $|G|$, which is less than $4^n$.
But for $13<n\leq 49$, the inequality $e_n<4^n$ fails.
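This is a finite computation; the following Python sketch (our code; it recomputes $D_n$ and $E_n$ from the standard recurrence $D_n=nD_{n-1}+(-1)^n$) redoes the check that $e_n\geq 4^n$ in this range:

```python
from fractions import Fraction
from math import factorial

# Derangement numbers via D_n = n*D_{n-1} + (-1)^n, with D_0 = 1, D_1 = 0.
D = [1, 0]
for n in range(2, 50):
    D.append(n * D[-1] + (-1) ** n)

for n in range(14, 50):
    E = D[n] + (-1) ** (n - 1) * (n - 1)          # E_n = D_n + (-1)^{n-1}(n-1)
    e_n = Fraction(E, factorial(n)).denominator    # denominator of E_n/n! in lowest terms
    assert e_n >= 4 ** n                           # so e_n < 4^n indeed fails
```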
\subsection{Imprimitive permutation groups that preserve a partition into blocks of equal size}
\label{Sub:imprimitive}
To rule out imprimitive permutation groups that preserve a partition into $l$ blocks of size $k$,
we argue as in Section~\ref{S:imprimitive}, but with Lemma~\ref{L:imprimitive_main_lemma} replaced by the following.
\begin{lemma}
Let $k,l\geq 2$. Set $n=kl$ and assume that $n>6$. Then there exists a prime $p\nmid E_n$ such that
\[\nu_p((k!)^l l!)<\nu_p(n!).\]
\label{Lem:strict_ineq}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Lem:strict_ineq}]
For each integer $n\in (6,30]$, we check directly that there exists a prime $p\in (n/2,n]$ such that $p\nmid E_n$. Assume from now on that $n>30$.
Suppose the statement is false. Then whenever a prime $p$ satisfies $p\nmid E_n$, \eqref{nu_p_factorial_inequality} is an equality and Lemma \ref{equality_case} applies.
By using $D_n\equiv (-1)^{n-s}D_s\pmod{n-s}$ and $E_n=D_n+(-1)^{n-1}(n-1)$, we obtain
\begin{align}
\label{E:n} E_n &\equiv 2(-1)^n\pmod{n}\\
\label{E:n-3} E_n &\equiv 4(-1)^{n-1}\pmod{n-3}\\
\label{E:n-4} E_n &\equiv 6(-1)^n\pmod{n-4}\\
\label{E:n-5} E_n &\equiv (-1)^{n-1}2^4\times 3\pmod{n-5}
\end{align}
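These congruences are immediate from the stated relations; they can also be sanity-checked numerically from the recurrence $D_n=nD_{n-1}+(-1)^n$ (our code, not part of the paper):

```python
# D_0 = 1, D_1 = 0; D_n = n*D_{n-1} + (-1)^n.
D = [1, 0]
for n in range(2, 61):
    D.append(n * D[-1] + (-1) ** n)

for n in range(10, 61):
    E = D[n] + (-1) ** (n - 1) * (n - 1)   # E_n = D_n + (-1)^{n-1}(n-1)
    assert E % n == (2 * (-1) ** n) % n
    assert E % (n - 3) == (4 * (-1) ** (n - 1)) % (n - 3)
    assert E % (n - 4) == (6 * (-1) ** n) % (n - 4)
    assert E % (n - 5) == (48 * (-1) ** (n - 1)) % (n - 5)
```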
{\bfseries Case 1. $n-4$ is a power of $2$.}
Then $n-3$ is not a power of $3$ because otherwise, we have a solution to $3^u-1=2^v$ with $u \ge 3$;
working modulo~$4$ shows that $u$ is even, and factoring the left side leads to a contradiction.
Let $p\neq 3$ be a prime with $p\mid n-3$. Since $n-3$ is odd, $p\geq 5$. By \eqref{E:n-3}, $p\nmid E_n$, so we have one of the conclusions of Lemma~\ref{equality_case}.
If $k$ is a power of $p$, then $p\mid k\mid n$, which, combined with $p\mid n-3$ gives $p=3$, a contradiction.
Suppose that there is no carry in $k+\cdots+k$ ($l$ terms). This sum has last digit $3$ in base $p$, so $l=3$, so $3\mid n$, and hence $3\nmid E_n$ by \eqref{E:n}. Apply Lemma \ref{equality_case} for the prime $3$. Since $l<3$ is violated, we deduce that $k$ is a power of $3$. Then $n=kl$ is also a power of $3$, but this contradicts the fact that $n$ is even.
\bigskip
{\bfseries Case 2. $n-3$ is a power of $2$ and $l\neq 2,4$.}
Then $n-4$ is odd and is not a power of $3$. Let $p\neq 3$ be a prime with $p\mid n-4$. Then $p\geq 5$, so $p\nmid E_n$ by \eqref{E:n-4}. If $k$ is a power of $p$, then $p\mid k\mid n$, which contradicts $p\mid n-4$ since $p \ge 5$. If there are no carry operations in the $l$-term addition $k+\cdots+k$ (which has last digit $4$ in base $p$), then $l=2$ or $l=4$, contrary to assumption.
\bigskip
{\bfseries Case 3. $l=3$.}
Then $3\mid n$, hence $3\nmid E_n$ by \eqref{E:n}. Apply Lemma \ref{equality_case} for the prime $3$. Since $3<l$ is violated, $k$ is a power of $3$.
Then $n=kl$ is also a power of $3$. Then $n-4$ is odd and not divisible by $3$. Let $q$ be a prime with
$q\mid n-4$. Then $q\geq 5$, and hence $q\nmid E_n$ by
\eqref{E:n-4}. Since $k$ is a power of $3$, it is not a power of $q$. So there is no carry in $k+k+k$ in base $q$. But this sum has last digit $4$ in base $q$, which is a contradiction.
\bigskip
{\bfseries Case 4. $l\neq 2,4$.}
By the previous cases, we may assume in addition that $n-4$ and $n-3$ are not powers of $2$ and $l\neq 3$.
Let $p\neq 2$ be a prime with $p\mid n-3$. Then $p\nmid E_n$ by \eqref{E:n-3}.
Since the $l$-term addition $k+\cdots+k$ has last digit $3$ and $l\neq 3$, there is some carry. Therefore $k$ is a power of $p$. Then $p\mid k\mid n$, which, combined with $p\mid n-3$, gives
$p=3$. In particular, $3\mid n$.
Let $q\neq 2$ be a prime with $q\mid n-4$. Since $3 \mid n$, we have $q\neq 3$ so $q\geq 5$. By \eqref{E:n-4}, $q\nmid E_n$. If $k$ is a power of $q$, then $q\mid n$, hence $q \mid 4$ --- contradiction. Therefore there is no carry in the $l$-term addition $k+\cdots+k$ in base $q$. This sum has last digit $4$ and $l\neq 2,4$, so this case is impossible.
\bigskip
{\bf Case 5. $l=2$ or $l=4$.}
Then $n$ is even, so $n-3$ and $n-5$ are odd.
\medskip
{\em Subcase 5.1: $n-3$ is not a power of $3$.}
Let $p\neq 3$ be a prime such that $p\mid n-3$. Then $p\geq 5$ and $p\nmid E_n$ by \eqref{E:n-3}. If $k$ is a power of $p$, then $p\mid k\mid n$, giving $p=3$, which is a contradiction. However, there is a carry in the $l$-term addition $k+\cdots+k$, because the sum has last digit $3$ in base $p$ while $l$ is $2$ or $4$.
\medskip
{\em Subcase 5.2: $n-3$ is a power of $3$ but $n-5$ is not a power of $5$.}
Let $p\neq 5$ be a prime with $p\mid n-5$. Then $p\geq 7$ and we apply the argument of subcase~5.1: an $l$-term sum $k+\cdots+k$ cannot have last digit $5$ in base $p$.
\medskip
{\em Subcase 5.3: $n-3=3^a$ and $n-5=5^b$ for some $a,b\geq 1$.}
Then $3^a-5^b=2$, so $a=3$ and $b=2$ by \cite{Brenner-Foster}*{Theorem~4.06}.
This contradicts $n > 30$.
\end{proof}
\subsection{Intransitive subgroups}
As in Section \ref{S:intransitive}, $G$ embeds in
$S_u \times S_v \subset S_n$ for some $u,v\ge 1$ with $u+v=n$.
Write $n=2^s m$, where $s\geq 0$ and $2\nmid m$. The argument in Section~\ref{S:intransitive} for odd $p$ with $E_n$ in place of $D_n$ and \eqref{E:n} in place of \eqref{E:D mod n} implies that $m\mid u,v$. Thus $s\geq 1$.
If $s=1$, then $n=2m$, so $u=v$. This case is covered in Section~\ref{Sub:imprimitive}.
Suppose that $s\geq 2$. Then $4 \mid n$, so \eqref{E:n} implies that $E_n/2$ is odd. Using
$\frac{a}{|G|}=\frac{E_n/2}{n!/2}$, we obtain
$\nu_2(n!/2)\leq \nu_2(|G|)\leq \nu_2(u!v!)\leq \nu_2(n!)$.
If the last inequality is an equality,
then the same argument used in Section~\ref{S:intransitive} shows that $\nu_2(u) = \nu_2(v) = \nu_2(n)$;
combining this with $m \mid u,v$ shows that $n \mid u,v$, a contradiction.
Therefore the first two inequalities must be equalities, so $\nu_2(u!v!)=\nu_2(n!)-1$; equivalently, $s_2(u)+s_2(v)=s_2(n)+1$. This means there is exactly one carry operation in $u+v$ in base $2$. This is possible only when $2^{s-1}\mid u,v$. Also, $m \mid u,v$, so $n/2 \mid u,v$, so again $u=v$, and this case is covered in Section~\ref{Sub:imprimitive}.
\section*{Acknowledgements}
We thank Andrew Sutherland for useful discussions concerning Section~\ref{S:primitive} and specifically for drawing our attention to~\cite{Okano1992}.
We thank Michael Bennett and Samir Siksek for suggesting references for the solution of $3^a-5^b=2$.
We also thank the referees for comments.
\begin{bibdiv}
\begin{biblist}
\bib{Alzer1998}{article}{
author={Alzer, Horst},
title={On rational approximation to $e$},
journal={J. Number Theory},
volume={68},
date={1998},
number={1},
pages={57--62},
issn={0022-314X},
review={\MR{1492888}},
doi={10.1006/jnth.1997.2199},
}
\bib{Birch-Swinnerton-Dyer1959}{article}{
author={Birch, B. J.},
author={Swinnerton-Dyer, H. P. F.},
title={Note on a problem of Chowla},
journal={Acta Arith.},
volume={5},
date={1959},
pages={417--423},
issn={0065-1036},
review={\MR{113844}},
doi={10.4064/aa-5-4-417-423},
}
\bib{Brenner-Foster}{article}{
author={Brenner, J. L.},
author={Foster, Lorraine L.},
title={Exponential Diophantine equations},
journal={Pacific J. Math.},
volume={101},
date={1982},
number={2},
pages={263--301},
issn={0030-8730},
review={\MR{675401}},
doi={10.2140/pjm.1982.101.263},
}
\bib{Entin2021}{article}{
author={Entin, Alexei},
title={Monodromy of hyperplane sections of curves and decomposition
statistics over finite fields},
journal={Int. Math. Res. Not. IMRN},
date={2021},
number={14},
pages={10409--10441},
issn={1073-7928},
review={\MR{4285725}},
doi={10.1093/imrn/rnz120},
}
\bib{Magma}{article}{
author={Bosma, Wieb},
author={Cannon, John},
author={Playoust, Catherine},
title={The Magma algebra system. I. The user language},
note={Computational algebra and number theory (London, 1993). Magma is available at \url{http://magma.maths.usyd.edu.au/magma/}\phantom{i}},
journal={J. Symbolic Comput.},
volume={24},
date={1997},
number={3-4},
pages={235\ndash 265},
issn={0747-7171},
review={\MR{1484478}},
label={Magma},
}
\bib{Maroti2002}{article}{
author={Mar\'{o}ti, Attila},
title={On the orders of primitive groups},
journal={J. Algebra},
volume={258},
date={2002},
number={2},
pages={631--640},
issn={0021-8693},
review={\MR{1943938}},
doi={10.1016/S0021-8693(02)00646-4},
}
\bib{Okano1992}{article}{
author={Okano, Takeshi},
title={A note on the rational approximations to $e$},
journal={Tokyo J. Math.},
volume={15},
date={1992},
number={1},
pages={129--133},
issn={0387-3870},
review={\MR{1164191}},
doi={10.3836/tjm/1270130256},
}
\bib{Poonen-Slavov2020}{article}{
author={Poonen, Bjorn},
author={Slavov, Kaloyan},
title={The exceptional locus in the Bertini irreducibility theorem for a morphism},
journal={Int.\ Math.\ Res.\ Notices},
volume={rnaa182},
date={2020-08-04},
issn={1687-0247},
doi={10.1093/imrn/rnaa182},
}
\bib{Praeger-Saxl1980}{article}{
author={Praeger, Cheryl E.},
author={Saxl, Jan},
title={On the orders of primitive permutation groups},
journal={Bull. London Math. Soc.},
volume={12},
date={1980},
number={4},
pages={303--307},
issn={0024-6093},
review={\MR{576980}},
doi={10.1112/blms/12.4.303},
}
\bib{Stanley2012}{book}{
author={Stanley, Richard P.},
title={Enumerative combinatorics. Volume 1},
series={Cambridge Studies in Advanced Mathematics},
volume={49},
edition={2},
publisher={Cambridge University Press, Cambridge},
date={2012},
pages={xiv+626},
isbn={978-1-107-60262-5},
review={\MR{2868112}},
}
\end{biblist}
\end{bibdiv}
\end{document}
Metadata for the preceding paper: arXiv:2107.02724 (https://arxiv.org/abs/2107.02724), timestamped 2021-10-20.
Title: The proportion of derangements characterizes the symmetric and alternating groups.
Subjects: Group Theory (math.GR); Algebraic Geometry (math.AG).
Abstract: Let $G$ be a subgroup of the symmetric group $S_n$. If the proportion of fixed-point-free elements in $G$ (or a coset) equals the proportion of fixed-point-free elements in $S_n$, then $G=S_n$. The analogue for $A_n$ holds if $n \ge 7$. We give an application to monodromy groups.
arXiv:0904.2115 (https://arxiv.org/abs/0904.2115)
Title: Colorful Strips
Abstract: Given a planar point set and an integer $k$, we wish to color the points with $k$ colors so that any axis-aligned strip containing enough points contains all colors. The goal is to bound the necessary size of such a strip, as a function of $k$. We show that if the strip size is at least $2k{-}1$, such a coloring can always be found. We prove that the size of the strip is also bounded in any fixed number of dimensions. In contrast to the planar case, we show that deciding whether a 3D point set can be 2-colored so that any strip containing at least three points contains both colors is NP-complete. We also consider the problem of coloring a given set of axis-aligned strips, so that any sufficiently covered point in the plane is covered by $k$ colors. We show that in $d$ dimensions the required coverage is at most $d(k{-}1)+1$. Lower bounds are given for the two problems. This complements recent impossibility results on decomposition of strip coverings with arbitrary orientations. Finally, we study a variant where strips are replaced by wedges.
\section{Introduction}
In this paper, we are interested in coloring finite point sets in $\Re^d$ so that any region
bounded by two parallel axis-aligned hyperplanes, that contains at least
some fixed number of points, also contains a point of each color.
To rephrase, we study the following problem:
What is the minimum number $p(k,d)$ for which it is always possible
to $k$-color any given point set in $\Re^d$, so that any axis-aligned
strip containing at least $p(k,d)$ points contains a point of each color?
Note that this is in fact a purely combinatorial problem. An axis-aligned strip isolates a subsequence of the points in sorted order with respect to one of the axes.
Therefore, the only thing that matters is the order in which the points appear along each axis. We can thus rephrase our problem, considering $d$-dimensional points sets, as follows:
For $d$ permutations of a set of items $S$, is it possible to color the items with $k$ colors, so that in all $d$
permutations every sequence of at least $p(k,d)$ contiguous items contains one item of each color?\medskip
We also study {\em circular} permutations, in which the first and the last elements are contiguous.
We consider the problem of finding a function $p'(k,d)$ such that, for any $d$
circular permutations of a set of items $S$, it is possible to $k$-color the items so that in every permutation, every sequence of $p'(k,d)$ contiguous items contains all colors.
A restricted geometric version of this problem in $\Re^2$ consists of coloring a point set $S$ with respect to wedges. For our purposes, a wedge is any area delimited by two half-lines with common endpoint at one of $d$ given apices. Each apex induces a circular ordering of the points in $S$. This is illustrated in Figure~\ref{fig}. We want to color $S$ so that any wedge containing at least $p'(k,d)$ points is polychromatic.
In $\Re^2$, the non-circular case corresponds to wedges with apices at infinity. Therefore the wedge coloring problem is strictly more difficult than the strip coloring problem. \medskip
\begin{figure}
\center\includegraphics[width=\textwidth]{pictures.pdf}
\caption{\label{fig} Illustration of the definitions of $p(k,2)$ and $p'(k,2)$. On the left, points are 2-colored so that any axis-aligned strip containing at least three points is bichromatic. On the right, two points $A$ and $B$ define two circular permutations of the point set. In this case, we wish to color the points so that there is no long monochromatic subsequence in either of the two circular orderings.}
\end{figure}
Finally, we study a dual problem, in which a set of axis-aligned strips is to be colored, so that sufficiently covered points are contained in strips from all
color classes.
For instance, in the planar case we study the following problem:
What is the minimum number $\bar{p}(k,d)$ for which it is always possible to $k$-color any given
set of axis-aligned strips so that any point contained in at least $\bar{p}(k,d)$ strips is contained in strips of
$k$ distinct colors?\medskip
These problems are closely tied to other previously studied problems in discrete geometry with applications
to wireless and {\em ad hoc} networks, such as the sensor cover problem~\cite{othersensors}, conflict-free colorings~\cite{shakharcf}, or covering decomposition problems~\cite{pachtoth}.
\paragraph{Definitions.}
An {\em axis-aligned strip}, or simply a {\em strip} (unless otherwise specified), is the area enclosed between two
parallel axis-aligned hyperplanes.
A {\em $k$-coloring} of a finite set assigns one of $k$ colors to each element in the set.
Let $S$ be a $k$-colored set of points in $\Re^d$. A strip is said to be {\em polychromatic}
with respect to $S$ if it contains at least one element of each color class.
The function $p(k,d)$ is the minimum number for which there always exists
a $k$-coloring of any point set in $\Re^d$ such that every strip containing at least $p(k,d)$ points is polychromatic.
The function $p'(k,d)$ is the minimum number such that, for any given $d$ circular permutations of a set of items $S$,
there always exists a $k$-coloring of $S$ such that every sequence of at least $p'(k,d)$ contiguous items is polychromatic.
Let $H$ be a $k$-colored set of strips in $\Re^d$. A point is said to be polychromatic with respect to $H$ if it is contained in strips of all
$k$ color classes. The function $\bar{p}(k,d)$ is the minimum number for which there always exists
a $k$-coloring of any set of strips in $\Re^d$ such that every point of $\Re^d$ contained in at least $\bar{p}(k,d)$ strips is polychromatic.
The functions $p(k,d)$, $p'(k,d)$ and $\bar{p}(k,d)$ are monotone and non-decreasing. We always consider the set that we color to be ``large enough'', that is, of size not bounded in terms of $k$.
\paragraph{Previous results.}
Our work is part of a broader area of research related to range space colorings.
A {\em range space} $(S,R)$ is defined by a set $S$ (called the {\em ground set}) and a set $R$ of
subsets of $S$. The main problem studied here is the coloring of a geometric range space where the ground set $S$ is a finite set of points and the set of ranges $R$ consists of all subsets of $S$ that can be isolated by a single strip; in the dual case, the ground set $S$ is a finite set of geometric shapes and the ranges are the points contained in the common intersection of a subset of $S$. Note that finite geometric range spaces are also referred to as geometric hypergraphs.
Several similar problems have been studied in this context~\cite{Pa80,TT07,Aloup1}, where the range space is not defined by strips, but rather by halfplanes, triangles, disks, pseudo-disks, or translates of a centrally symmetric convex polygon, {\em etc.}
The problem was originally stated in terms of decomposition of {\em $c$-covers} (or \emph{$f$-fold coverings}) in the plane: A $c$-cover of the plane by a convex body $Q$ ensures that every point in the plane is covered by at least $c$ translated copies of $Q$.
In 1980, Pach~\cite{Pa80} asked if, given $Q$, there exists a function $f(Q)$ such that every $f(Q)$-cover of the plane
can be decomposed into $2$ disjoint $1$-covers. A natural extension is to ask if given $Q$, there exists a function $f(k,Q)$ such that every $f(k,Q)$-cover of the plane
can be decomposed into $k$ disjoint $1$-covers.
This corresponds to a $k$-coloring of the $f(k,Q)$-cover, such that every point of the plane is polychromatic.
Partial answers to this problem are known: first, Mani and Pach~\cite{decompball} proved that any 33-cover of the plane by unit disks can be decomposed into two $1$-covers. This implies that the function $f$ exists for unit disks, but it could still be exponential in $k$. Recently, Tardos and T\'{o}th~\cite{TT07} proved that any 43-cover by translated copies of a triangle can be decomposed into two $1$-covers.
For the case of centrally symmetric convex polygons,
Pach~\cite{Pach86} proved that $f$ is at most exponential in $k$. More than 20 years later,
Pach and T{\'o}th~\cite{pachtoth} improved this by showing that
$f(k,Q) = O(k^{2})$, and recently Aloupis et al.~\cite{Aloup2} proved that $f(k,Q)=O(k)$.
On the other hand, for the range space induced by arbitrary disks, Mani
and Pach~\cite{decompball} (see also~\cite{pachindecomp}) proved that $f(2,Q)$ is
unbounded: For any constant $c$, there exists a set of points
that cannot be $2$-colored so that all open disks containing at
least $c$ points are polychromatic. Pach, Tardos and T{\'o}th~\cite{pachindecomp} obtained a similar
result for the range spaces induced by the family of either non-axis-aligned strips, or axis-aligned
rectangles. Specifically, for any integer $c$ there exist $c$-fold coverings
with non-aligned strips that cannot be decomposed into two coverings (or equivalently, that cannot be $2$-colored). The previous impossibilities constitute our main
motivation for introducing the problem of $k$-coloring
axis-aligned strips, and strips with a bounded number of
orientations.
\paragraph{Paper Organization.}
In Section~\ref{sec_plane} we give (constructive) upper bounds on the values of $p(k,2)$ and $p'(k,2)$. In Section~\ref{higher_dim} we consider higher-dimensional cases, as well as the computational complexity of finding a valid coloring. Section~\ref{sec_dual} concerns the dual problem of coloring strips with respect to points. Our lower and upper bounds are summarized in Table~\ref{tbl_res}.
\begin{table}
\renewcommand{\arraystretch}{1.5}
\center\begin{tabular}{|c|c|c|c|}
\hline
& $p(k,d)$ & $p'(k,d)$ & $\bar{p}(k,d)$ \\
\hline
upper bound & $k(4\ln k +\ln d)$& $k(4\ln k +\ln d)$ & $d(k{-}1)+1$ \\
&($2k{-}1$ for $d{=}2$)&($2k$ for $d{=}2$) & \\
\hline
lower bound & $2\cdot \lfloor \frac{(2d-1)k}{2d} \rfloor +1$ & $2\cdot \lfloor \frac{(2d-1)k}{2d} \rfloor +1$ & $2\cdot \lfloor \frac{(2d-1)k}{2d} \rfloor +1$ \\
\hline
\end{tabular}
\caption{Bounds on $p$, $p'$ and $\bar{p}.$\label{tbl_res}}
\end{table}
\section{Axis-aligned strips and circular permutations for $d=2$}
\label{sec_plane}
\subsection{Axis-aligned strips: Upper bound on $p(k,2)$}
We refer to a strip containing $i$ points as an {\em $i$-strip}.
Our goal is to show that for any integer $k$ there is a
constant $p(k,2)$ such that any finite planar point
set can be $k$-colored so that all $p(k,2)$-strips are polychromatic.\medskip
For $d=2$, there is a reduction to the recently studied
problem of $2$-coloring graphs so that monochromatic
components are small. Haxell et al.~\cite{HST} proved that the vertices of
any graph with maximum degree $4$ can be $2$-colored so
that every monochromatic connected component has size at most
$6$. For a given finite point set $S$ in the plane, let $E$ be the set
of all pairs of points $u,v \in S$ such that there is a strip containing
only $u$ and $v$. The graph $G=(S,E)$ has maximum degree $4$,
as it is the union of two paths. By the results of \cite{HST}, $G$ can be
$2$-colored so that every monochromatic connected component has
size at most $6$. In particular every path of size at
least $7$ contains points from both color classes. To finish the
reduction argument one may observe that every strip
containing at least $7$ points corresponds to a path (of size at
least $7$) in $G$.
We improve and generalize this first bound in the following.
\begin{theorem}
\label{poly-strips}
For any finite planar set $S$ and any integer $k$, $S$ can
be $k$-colored so that any $(2k{-}1)$-strip is polychromatic.
That is,
$$
p(k,2) \leq 2k-1.
$$
\end{theorem}
\begin{proof}
Let $s_1,\ldots,s_n$ be the points of $S$ sorted by (increasing) $x$-coordinates and let
$s_{\pi_1},\ldots,s_{\pi_n}$ be the sorting by $y$-coordinates. We first
assume that $k$ divides $n$, and later show how to remove the need for this assumption.
Let $V_x$ be the set of $n/k$ disjoint contiguous $k$-tuples in $s_1,\ldots,s_n$.
Namely, $V_x=\{\{s_1,\ldots,s_k\},\{s_{k+1},\ldots,s_{2k}\},\ldots,\{s_{n-k+1},\ldots,s_n\}\}$.
Similarly, let $V_y$ be the $k$-tuples defined by $s_{\pi_1},\ldots,s_{\pi_n}$.\medskip
We define a bipartite multigraph $G=(V_x,V_y,E)$ as follows:
For every pair of $k$-tuples $A\in V_x$, $B\in V_y$, we include an edge
$e_s =\{A,B\} \in E$ if there exists a point $s$ in both $A$ and $B$.
Note that an edge $\{A,B\}$ has multiplicity $\cardin{A\cap B}$ and that
the number of edges $\cardin{E}$ is $n$. The
multigraph $G$ is $k$-regular because every $k$-tuple $A$ contains
exactly $k$ points and every point $s\in A$ determines exactly one
incident edge labeled $e_s$. It is well known that the chromatic index of
any bipartite $k$-regular multigraph is $k$ (and can be
efficiently computed, see e.g., \cite{Alon03,ColeOstSchirra01}).
Namely, the edges of such a multigraph can be partitioned into
$k$ perfect matchings. Let $E_1,\ldots,E_k$ be such a
partition and $S_i \subset S$ be the set of labels of the edges
of $E_i$. The sets $S_1,\ldots,S_k$ form a partition (i.e., a
coloring) of $S$. We assign color $i$ to the points of $S_i$.\medskip
We claim that this coloring ensures that any
$(2k{-}1)$-strip is polychromatic. Let $h$ be a $(2k{-}1)$-strip and assume without
loss of generality that $h$ is parallel to the $y$-axis. Then $h$
contains at least one $k$-tuple $A \in V_x$. By the properties of
the above coloring, the edges incident to $A$ in $G$ are colored
with $k$ distinct colors. Thus, the points that correspond to the labels of
these edges are colored with $k$ distinct colors, and $h$ is
polychromatic.\medskip
To complete the proof, we must handle the case where $k$ does
not divide $n$. Let $i = n \bmod k$.
Let $Q=\{q_1,\ldots,q_{k-i}\}$
be an additional set of $k{-}i$ points, all located to the right and above
the points of $S$. We repeat our preceding construction on $S\cup Q$.
Now, any $(2k{-}1)$-strip which
is, say, parallel to the $y$-axis will also contain a $k$-tuple
$A \in V_x$ disjoint from $Q$. Thus our arguments follow as before.
\end{proof}
The proof of Theorem~\ref{poly-strips} is constructive and leads directly to
an $O(n\log n)$-time algorithm to $k$-color $n$ points in the plane so that every
$(2k{-}1)$-strip is polychromatic.
The algorithm is simple: we sort $S$, construct $G=(V_x,V_y,E)$, and color the edges of $G$ with $k$ colors.
The time analysis is as follows: sorting takes $O(n \log n)$ time. Constructing $G$ takes $O(n+|E|)$ time.
As $G$ has $\frac{2n}{k}$ vertices and is $k$-regular, it has $n$ edges; so this step takes $O(n)$ time. Finding the
edge-coloring of $G$ takes $O(n \log n)$ time~\cite{Alon03}. The total running time is therefore $O(n \log n)$.
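The construction in the proof of Theorem~\ref{poly-strips} can be sketched in code. The following Python sketch (our code; all names are ours) finds the $k$ perfect matchings by plain augmenting paths rather than by the fast edge-coloring algorithms cited above, so it does not achieve the $O(n\log n)$ bound, but it produces the same kind of coloring:

```python
import random
from collections import defaultdict

def strip_color(points, k):
    """k-color planar points so that every axis-aligned strip containing
    at least 2k-1 of them is polychromatic (sketch of Theorem 1's proof)."""
    orig_n = len(points)
    pad = (-orig_n) % k
    if pad:  # dummy points above and to the right, as in the proof
        big = 1 + max(max(x, y) for x, y in points)
        points = points + [(big + i, big + i) for i in range(pad)]
    n = len(points)
    by_x = sorted(range(n), key=lambda i: points[i][0])
    by_y = sorted(range(n), key=lambda i: points[i][1])
    gx, gy = [0] * n, [0] * n
    for r, p in enumerate(by_x):
        gx[p] = r // k          # index of the k-tuple of p in V_x
    for r, p in enumerate(by_y):
        gy[p] = r // k          # index of the k-tuple of p in V_y
    m = n // k
    edges = defaultdict(list)   # (A, B) -> points labeling parallel edges A-B
    for p in range(n):
        edges[(gx[p], gy[p])].append(p)
    nbr = defaultdict(set)
    for a, b in edges:
        nbr[a].add(b)
    color = [None] * n
    for c in range(k):          # peel off k perfect matchings, one per color
        match = [-1] * m

        def augment(a, seen):
            for b in nbr[a]:
                if edges[(a, b)] and b not in seen:
                    seen.add(b)
                    if match[b] == -1 or augment(match[b], seen):
                        match[b] = a
                        return True
            return False

        for a in range(m):
            assert augment(a, set())  # remaining multigraph is (k-c)-regular
        for b in range(m):
            color[edges[(match[b], b)].pop()] = c
    return color[:orig_n]

# Example: 60 random points, k = 3; every 5 consecutive points in either
# coordinate order then carry all 3 colors.
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(60)]
col = strip_color(pts, 3)
```

Any $2k{-}1$ consecutive points in either coordinate order contain a complete $k$-tuple, and the $k$ matching edges incident to that tuple received $k$ distinct colors.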
\subsection{Circular permutations: Upper bound on $p'(k,2)$}
\label{sec_wedges}
We now consider the value of $p'(k,d)$. Given $d$ circular permutations of a set $S$, we color $S$ so that every sufficiently long subsequence in any of the circular permutations is polychromatic.
The previous proof for $p(k,d)\leq 2k{-}1$ (Theorem~\ref{poly-strips}) does not hold when we consider circular permutations. However, a slight
modification provides essentially the same upper bound.
\begin{theorem}
\hskip 0.3in $p'(k,2) \leq 2k$
\end{theorem}
\begin{proof}
If $k$ divides $n$, we separate each circular permutation into $n/k$ groups of size $k$.
We define a multigraph whose vertices represent the groups of $k$ items, with an edge between two vertices whenever the corresponding groups share an item. This graph is $k$-regular and bipartite, and can thus be edge-colored with $k$ colors. Each edge corresponds to one item, so each group of $k$ items contains items of all $k$ colors.\medskip
If $k$ does not divide $n$, let $a=\lfloor n/k \rfloor$ and $b = n \bmod k$.
If $a$ divides $b$, we separate each of the two circular permutations into $2a$ groups, of alternating sizes $k$ and $b/a$.
Otherwise, the even groups will also alternate between size $\lceil b/a \rceil$ and $\lfloor b/a \rfloor$, instead of $b/a$.
We extend both permutations by adding dummy items to each group of size less than $k$, so that we finally have only groups of size $k$.
Dummy items appear in the same order in both permutations.
We can now define the multigraph just as before. \medskip
If we remove the dummy items, we obtain a coloring of the original set. As each color appears in every group of size $k$, the length of any subsequence between two items of the same color is at most $2(k-1) + \lceil b/a \rceil$. Therefore, $p'(k,2)\leq 2(k-1) + \lceil b/a \rceil +1$.\medskip
Finally, if $n \ge k(k-1)$, then $a \ge k-1$ and $b \le a$, so $\lceil b/a \rceil\le 1$, and thus $p'(k,2)\leq 2k$.
\end{proof}
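For small instances the bound can be confirmed by exhaustive search. The following Python sketch (our code) verifies, for one random pair of circular permutations of $8$ items and $k=2$, that a coloring making every circular $2k$-window polychromatic exists:

```python
import random
from itertools import product

def windows_polychromatic(perm, colors, length, k):
    """Every circular window of `length` consecutive items of `perm`
    must contain all k colors (colors is indexed by item)."""
    n = len(perm)
    return all(
        {colors[perm[(i + j) % n]] for j in range(length)} == set(range(k))
        for i in range(n)
    )

random.seed(1)
n, k = 8, 2
perm1 = list(range(n))
perm2 = list(range(n))
random.shuffle(perm2)

# The theorem guarantees some coloring works; brute-force all k^n colorings.
found = any(
    windows_polychromatic(perm1, c, 2 * k, k)
    and windows_polychromatic(perm2, c, 2 * k, k)
    for c in product(range(k), repeat=n)
)
assert found
```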
\section{Higher dimensional strips}
\label{higher_dim}
In this section we study the same problem for strips in higher dimensions.
We provide upper and lower bounds on
$p(k,d)$. We then analyse the complexity of the coloring problem, and show that
deciding whether a given instance $S \subset \Re^d$ can be $k$-colored
such that every 3-strip is polychromatic is NP-complete.
\subsection{Upper bound on strip size, $p(k,d)$}
\begin{theorem}
\label{lll_strips}
Any finite set of points $S \subset \Re^d$ can be $k$-colored so that every axis-aligned strip
containing at least $k(4 \ln k +\ln d)$ points is polychromatic, that is,
$$
p(k,d) \leq k(4 \ln k +\ln d).
$$
\end{theorem}
\begin{proof}
The proof uses the probabilistic method. Let $\{1,\ldots,k\}$ denote the set of $k$ colors. We randomly
color every point in $S$ independently
so that a point $s$ gets color $i$ with probability
$\frac{1}{k}$ for $i=1,\ldots,k$. For a $t$-strip $h$, let $\mathcal{B}_h$ be the ``bad'' event where $h$
is not polychromatic. It is easily seen that
$\Pr [\mathcal{B}_h] \leq k (1-\frac{1}{k})^t$.
Moreover, $\mathcal{B}_h$ depends on at most $(d-1)t^2 + 2t-2$ other events.
Indeed, $\mathcal{B}_h$ depends only on $t$-strips that share points with $h$. Assume without
loss of generality that $h$ is orthogonal to the $x_1$ axis. Then $h$ shares points with at most $2(t-1)$
other $t$-strips orthogonal to the $x_1$ axis. For each of the other $d{-}1$ axes, $h$ can intersect at most $t^2$
$t$-strips since every point in $h$ can belong to at most $t$ other $t$-strips.\medskip
By the Lov\'asz Local Lemma, (see, e.g., \cite{AS00}) we have that
if $t$ satisfies
$$
e \cdot \left((d-1)\cdot t^2+ 2t-1\right) \cdot k\left(1-\frac{1}{k}\right)^t < 1
$$
(where $e$ is the base of the natural logarithm), then
$$
\Pr\left[\bigwedge_{\cardin{h} = t} \bar{\mathcal{B}_h}\right]>0.
$$
In particular, this means that there exists a $k$-coloring for which
every $t$-strip is polychromatic. It can be verified that $t=k(4 \ln k +\ln d)$ satisfies the condition.
\end{proof}
The proof of Theorem~\ref{lll_strips} is non-constructive. We can use known
algorithmic versions of the Local Lemma (see for instance~\cite{AS00}, Chapter 5) to obtain a constructive proof,
although this yields a weaker bound.
Theorem~\ref{lll_strips} holds in the more general case where the strips are not necessarily axis-aligned.
In fact, one can have a total of $d$ arbitrary strip orientations in some fixed arbitrary dimension and the proof will hold verbatim.
Finally, the same proof yields the same upper bound for the case of circular permutations:
\begin{theorem}
\label{lll_circular}
\hskip 0.3in $p'(k,d) \leq k(4 \ln k +\ln d)$
\end{theorem}
\subsection{Lower bound on $p(k,d)$}
The following result on the decomposition of complete graphs from \cite{decompEven} is useful:
\begin{lemma}
\label{lem-decompEven}
The edges of $K_{2h}$ can be decomposed into $h$ pairwise edge-disjoint Hamiltonian paths.
\end{lemma}
Note that if the vertices of $K_{2h}$ are labeled $V=\{1, \ldots, 2h\}$, each path can be seen as a permutation of $2h$ elements. Using Lemma~\ref{lem-decompEven} we obtain:
\begin{theorem}
\label{thm_lb}
For any fixed dimension $d$ and number of colors $k$, let $s=\left\lfloor\frac{(2d-1)k}{2d}\right\rfloor$. Then,
$$
p(k,d) \geq 2s +1.
$$
\end{theorem}
\begin{proof}
Let $\sigma_1, \ldots, \sigma_d$ be any decomposition of $K_{2d}$ into $d$ Hamiltonian paths. We construct the set $P=\{p_i \mid 1 \leq i \leq 2d \}$, where $p_i=(\sigma_1(i), \ldots, \sigma_d(i))$. Note that the ordering of $P$, when projected to the $i$-th axis, gives the permutation $\sigma_i$. Since the paths $\sigma_1,\ldots,\sigma_d$ decompose $K_{2d}$, for any $i,j \leq 2d$ there exists a permutation in which $i$ and $j$ are adjacent.
We replace each point $p_i$ by a set $A_i$ of $s$ points arbitrarily close to $p_i$. By construction, for any $i,j \leq 2d$, there exists a $2s$-strip containing exactly $A_i \cup A_j$. Consider any $k$-coloring of the sets $A_i$: since $|A_i|=s$, more than $k/(2d)$ colors are missing from each set $A_i$. By the pigeonhole principle, there exist $i$ and $j$ such that the set $A_i \cup A_j$ is missing a color (otherwise there would be more than $k$ colors). In particular, the strip that contains the set $A_i \cup A_j$ is not polychromatic, which proves the theorem.
The construction above uses a point set of bounded size, but larger sets attaining the same bound are easily obtained: we may append as many dummy points as needed at the end of every permutation, which does not decrease the value of $p(k,d)$.
\end{proof}
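The pigeonhole step of the proof can be verified by brute force for small parameters. The sketch below (with hypothetical helper names of our choosing) checks, for $d=2$ and $k\in\{2,3\}$, that every $k$-coloring of the $2d$ sets $A_i$ of size $s$ leaves some union $A_i\cup A_j$ missing a color:

```python
from itertools import product

def some_pair_misses_color(coloring, k, groups):
    """coloring: a tuple of colors for all points; groups: the index blocks
    playing the role of the sets A_i. True if some A_i ∪ A_j misses a color."""
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            used = {coloring[p] for p in groups[i] + groups[j]}
            if len(used) < k:
                return True
    return False

def check_lower_bound_core(k, d):
    # s = floor((2d-1)k / 2d) as in the theorem; 2d groups of s points each
    s = ((2 * d - 1) * k) // (2 * d)
    n = 2 * d * s
    groups = [list(range(i * s, (i + 1) * s)) for i in range(2 * d)]
    return all(some_pair_misses_color(c, k, groups)
               for c in product(range(k), repeat=n))
```

For instance, `check_lower_bound_core(2, 2)` exhausts all $2$-colorings of the four sets and confirms the combinatorial core of the bound $p(2,2)\geq 3$.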
\subsection{Computational complexity}
In Section~\ref{sec_plane}, we provided an algorithm that finds
a $k$-coloring such that every planar $(2k{-}1)$-strip is polychromatic.
Thus for $d{=}2$ and $k{=}2$, this yields a $2$-coloring such that every $3$-strip is polychromatic.
Note that in this case $p(2,2)=3$, but the minimum strip size required for a given instance can be either $2$ or $3$. Testing whether it equals $2$ is easy: we can simply alternate the two colors along the first permutation and check whether they also alternate in the other. Hence the problem of minimizing the size of the largest monochromatic strip of a given instance is polynomial for $d=2$ and $k=2$. We now show that it becomes NP-hard for $d>2$ and $k=2$. The same problem for $k>2$ is left open.
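The test described above can be sketched as follows (permutations are assumed to be given as lists; for $k=2$, a coloring under which every $2$-strip is polychromatic must alternate colors along every permutation, so it suffices to try the alternation determined by the first one):

```python
def two_coloring_for_2_strips(perms):
    """Return a 2-coloring (element -> 0/1) under which any two elements
    consecutive in some permutation get different colors, or None if
    no such coloring exists."""
    # the alternation along the first permutation, fixed up to a color swap
    color = {e: i % 2 for i, e in enumerate(perms[0])}
    for p in perms:
        if any(color[p[i]] == color[p[i + 1]] for i in range(len(p) - 1)):
            return None
    return color
```

Swapping the two colors gives the only other candidate and cannot change the answer, so one trial suffices.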
\begin{theorem}
The following problem is $NP$-complete:\\
{\bf {\em Input}:} $3$ permutations $\pi _1, \pi _2,\pi _3 $ of an $n$-element set $S$.\\
{\bf {\em Question}:} Is there a 2-coloring of $S$, such that every 3
elements of $S$ that are consecutive according to one of the
permutations are not monochromatic?
\end{theorem}
\begin{proof}
We show a reduction from NAE 3SAT (not-all-equal 3SAT) which is the
following $NP$-complete problem~\cite{GareyJohnson-76}:\\
{\bf {\em Input}:} A 3-CNF Boolean formula $\Phi$.\\
{\bf {\em Question}:} Is there a $NAE$ assignment to $\Phi$? An
assignment is called $NAE$ if every clause has at least one
literal assigned True and at least one literal assigned False.\medskip
We first transform $\Phi$ into another instance $\Phi'$ in which
all variables are non-negated (i.e., we make the instance
monotone). We then show how to realize $\Phi'$ using three
permutations $\pi _1, \pi _2, \pi _3 $.
To transform $\Phi$ into $\Phi'$, for each variable $x$, we first replace the $i$th
occurrence of $x$ in its positive form by a variable
$x_i$, and the $i$th occurrence of $x$ in its negative form by
$x'_i$. The index $i$ varies between 1 and the number of
occurrences of each form (the maximal of the two). We also add the
following {\em consistency-clauses}, for each variable $x$ and for all $i$:
\begin{eqnarray*}
\left(Z^x_i,x_i,x'_i \right),
\left(x_i,x'_i,Z^x_{i+1} \right),
\left(x_i,Z'^x_{i},x'_i \right)&,&
\left(Z^x_{i},Z'^x_{i},Z^x_{i+1} \right)\\
\left(x'_i, Z^x_{i+1}, x_{i+1} \right),
\left(Z'^x_{i}, x'_i, x_{i+1} \right),
\left(x'_i, x_{i+1}, Z'^x_{i+1} \right)&,&
\left(Z'^x_{i},Z^x_{i+1},Z'^x_{i+1} \right)
\end{eqnarray*}
where $Z_i^x$ and $Z'^x_i$ are new variables. This completes the
construction of $\Phi'$. Note that $\Phi'$ is monotone, as every negated variable has been replaced.
Moreover, $\Phi'$ has a $NAE$ assignment if and only if $\Phi$ has
a $NAE$ assignment. To see this, note that a $NAE$ assignment for
$\Phi$ can be translated to a $NAE$ assignment to $\Phi'$ as
follows: for every variable $x$ of $\Phi$ and every $i$, set
$x_i\equiv x$,
\hskip 0.07in $x'_i \equiv \overline{x}$,
\hskip 0.07in $Z_i^x \equiv True,
\hskip 0.07in Z'^x_i \equiv False$.
On the other hand, if $\Phi'$ has a $NAE$ assignment, then, by the
consistency clauses,
the variables in $\Phi'$ corresponding to any variable $x$ of $\Phi$ are assigned a consistent value.
Namely, for every $i,j$ we have $x_i = x_j$ and $x_i \neq
x'_i$. This assignment naturally translates to a $NAE$ assignment
for $\Phi$, by setting $x \equiv x_1$.\medskip
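The consistency gadget can also be verified mechanically for a single index $i$: brute-forcing all assignments confirms that every $NAE$ assignment of the eight consistency clauses forces $x_i \neq x'_i$ and $x_{i+1}=x_i$. A sketch (the short variable names abbreviate $x_1, x'_1, x_2, Z^x_1, Z'^x_1, Z^x_2, Z'^x_2$):

```python
from itertools import product

def nae(a, b, c):
    # "not all equal": at least one True and at least one False among the three
    return not (a == b == c)

valid = []
for x1, x1p, x2, z1, w1, z2, w2 in product([False, True], repeat=7):
    clauses = [(z1, x1, x1p), (x1, x1p, z2), (x1, w1, x1p), (z1, w1, z2),
               (x1p, z2, x2), (w1, x1p, x2), (x1p, x2, w2), (w1, z2, w2)]
    if all(nae(*c) for c in clauses):
        valid.append((x1, x1p, x2))

# some NAE assignment exists, and every one of them is consistent
assert valid and all(a != ap and a == b for (a, ap, b) in valid)
```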
We next show how to realize $\Phi'$ by a set $S$ and three
permutations $\pi_1,\pi_2,\pi_3$. The elements of the set $S$ are
the variables of $\Phi'$, together with some additional elements
that are described below. Permutation $\pi_1$ realizes the clauses of $\Phi'$
corresponding to the original clauses of $\Phi$, while $\pi_2$ and
$\pi_3$ realize the consistency clauses of $\Phi'$.
The additional elements in $S$ are clause elements (two elements $c_{2j-1}$ and $c_{2j}$ for every clause $j$
of $\Phi$)
and \emph{dummy elements} $\star$ (the dummy elements are not indexed for the ease of presentation, but they appear in the same order in all three permutations).
Permutation $\pi_1$ encodes the clauses of $\Phi'$
corresponding to original clauses of $\Phi$ as follows (note that
all these clauses involve different variables). For each such
clause $(u,v,w)$, permutation $\pi_1$ contains the following
sequence: $$ c_{2j{-}1}, u, v, w, c_{2j}, \star, \star $$
\noindent At the end of $\pi_1$, for every variable $x$ of $\Phi'$
we have the sequence:
$$ Z^x_1, Z'^x_1, Z^x_2, Z'^x_2,
Z^x_3, Z'^x_3, \ldots, \star, \star$$
\noindent Permutation $\pi_2$ contains, for every variable $x$ of $\Phi$, the sequences:\\
$$Z^x_1, x_1, x'_1, Z^x_2, x_2, x'_2, Z^x_3, x_3, x'_3, Z^x_4, \ldots \hskip 0.5cm \text{and} \hskip 0.5cm \star, \star, Z'^x_1,
\star, \star, Z'^x_2, \star, \star, Z'^x_3, \ldots \star, \star$$
At the
end of $\pi_2$ we have the clause-elements and
remaining dummy elements:\\
\noindent $$ \hskip 0.5cm \star, \star, c_{1}, \star, \star,
c_{2}, \star, \star, c_{3}\ldots$$
\noindent Similarly, permutation $\pi_3$ contains, for every variable $x$ of $\Phi$, the sequences:\\
$$x_1, Z'^x_1, x'_1, x_2, Z'^x_2, x'_2, x_3, Z'^x_3, x'_3, \ldots
\hskip 0.5cm \text{and} \hskip 0.5cm \star, \star, Z^x_1,
\star, \star, Z^x_2, \star, \star, Z^x_3, \ldots \star, \star$$ and at the end of $\pi_3$
we have the clause-elements and
remaining dummy elements:\\
$$\star, \star, c_{1}, \star, \star, c_{2}, \star,
\star, c_{3},\ldots$$
This completes the construction of $S$ and $\pi_1,\pi_2,\pi_3$.
Note that for every clause of $\Phi'$ (whether it is derived from
$\Phi$ or is a consistency clause), the elements corresponding
to its three variables appear in sequence in one of the three
permutations. Therefore, if there is a 2-coloring of $S$, such that
every 3 elements of $S$ that are consecutive according to one of
the permutations are not monochromatic, then there is a $NAE$
assignment to $\Phi'$: each variable of $\Phi'$ is assigned True
if its corresponding element is colored `1', and False otherwise.
For the other direction, consider a $NAE$ assignment for $\Phi'$.
Observe that there is always a solution where $Z^x_i$ and $Z'^x_i$
are assigned opposite values. Then assign color `1' to elements
corresponding to variables assigned with True, and
assign color `0' to elements corresponding to variables
assigned with False. For the clause elements $c_{2j{-}1}$ and
$c_{2j}$ appearing in the subsequence $c_{2j{-}1},\ u,\ v,\ w,\
c_{2j}$, assign to $c_{2j{-}1}$ the color opposite to $u$, and to
$c_{2j}$ the color opposite to $w$. Finally, assign colors `0' and `1' to each pair of
consecutive dummy elements, respectively.
It can be verified that there is no monochromatic
consecutive triple in any permutation.
\end{proof}
\paragraph{Approximability.}
Note that the minimization problem (find a $k$-coloring that minimizes the number of
required points) can be approximated within a constant factor: it suffices to use a constructive
version of the Lov\'asz Local Lemma (see~\cite{AS00}), yielding an actual coloring. The number
of points that this coloring will require is bounded by a constant, provided the values of $k$
and $d$ are fixed. The approximation factor is therefore bounded by the ratio between that
constant and $k$.
\section{Coloring strips}
\label{sec_dual}
In this section we prove that any finite set of
strips in $\Re^d$ can be $k$-colored so that
every ``deep" point is polychromatic. For a given set of strips,
we say that a point is {\em $i$-deep} if it is contained in at
least $i$ of the strips. We begin with this simple result:
\begin{lemma}
\label{intervals} Let $\cal I$ be a finite set of intervals. Then
for every $k$, \hskip 0.02in $\cal I$ can be $k$-colored so that
every $k$-deep point is polychromatic, while any point
covered by fewer than $k$ intervals is covered by intervals of pairwise distinct colors.
\end{lemma}
\begin{proof}
We use induction on $\cardin{\cal I}$. Let $I$ be the interval
with the leftmost right endpoint. By induction, the intervals in
${\cal I} \setminus \{I\}$ can be $k$-colored with the desired
property. Sort the intervals intersecting $I$ according to their
left endpoints and let $I_1,\ldots,I_{k-1}$ be the first $k{-}1$
intervals in this order. It is easily seen that coloring $I$ with
a color distinct from the colors of those $k{-}1$ intervals
produces a coloring with the desired property, and hence a valid
coloring.
\end{proof}
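The inductive proof translates into a simple greedy algorithm: color the intervals in decreasing order of right endpoint, and give each interval a color avoiding those of the (at most $k{-}1$) already-colored intersecting intervals with smallest left endpoints. A sketch, for closed intervals $[l,r]$:

```python
def color_intervals(intervals, k):
    """Greedy k-coloring of closed intervals [l, r] following the proof of
    the lemma; returns a list of colors in 0..k-1, one per interval."""
    order = sorted(range(len(intervals)),
                   key=lambda i: intervals[i][1], reverse=True)
    color = {}
    for i in order:
        li, ri = intervals[i]
        # already-colored intervals intersecting interval i
        meet = [j for j in color
                if intervals[j][0] <= ri and li <= intervals[j][1]]
        meet.sort(key=lambda j: intervals[j][0])  # smallest left endpoints first
        forbidden = {color[j] for j in meet[:k - 1]}
        color[i] = next(c for c in range(k) if c not in forbidden)
    return [color[i] for i in range(len(intervals))]
```

Since at most $k{-}1$ colors are forbidden at each step, a free color always exists, and the coloring has both properties stated in the lemma.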
\begin{theorem}
For any $d$ and $k$, one can $k$-color any set of axis-aligned strips
in $\Re^d$ so that every $(d(k{-}1){+}1)$-deep point
is polychromatic. That is,
$$\bar{p}(k,d)\le d(k{-}1)+1.$$
\end{theorem}
\begin{proof}
We start by coloring the strips parallel
to each axis $x_i$ ($i=1,\ldots,d$) separately, using the
coloring described in Lemma~\ref{intervals}. We claim that this
procedure produces a valid polychromatic coloring for all
$(d(k{-}1){+}1)$-deep points. Indeed, assume that a given point $s$ is
$(d(k{-}1){+}1)$-deep and let $H(s)$ be the set of strips covering $s$.
Since there are $d$ possible orientations for the strips in
$H(s)$, by the pigeonhole principle at least $k$ of the strips
in $H(s)$ are parallel to the same axis. Assume without loss of
generality that this is the $x_1$-axis. Then, by the property of the
coloring of Lemma~\ref{intervals}, $H(s)$ is polychromatic.
\end{proof}
The above proof is constructive: by sorting the intervals that
correspond to each of the given directions, one can easily find
a coloring in $O(n \log n)$ time.
\begin{theorem}
For any fixed dimension $d$ and number of colors $k$, let $t=\left\lfloor\frac{(2d-1)k}{2d}\right\rfloor$. Then,
$$
\bar{p}(k,d) \geq 2t{+}1.
$$
\end{theorem}
\begin{proof}
The proof is analogous to that of Theorem~\ref{thm_lb}. In this case, $A_{2i}$ is defined as the set containing $t$ strips of the form $0< x_i <2$ (where $x_i$ denotes the $i$-th coordinate), and $A_{2i+1}$ contains $t$ strips of the form $1<x_i<3$. It remains to show that for any pair $i,j$ there is a point lying exactly in the strips of $A_i \cup A_j$.\medskip
Let $\delta(i)=1/2$ if $i$ is even and $\delta(i)=5/2$ if $i$ is odd; we define $s_{i,j}$ as the point whose $\left\lfloor i/2\right\rfloor$-th and $\left\lfloor j/2\right\rfloor$-th coordinates are $\delta(i)$ and $\delta(j)$, respectively, while all other coordinates are $-1$ (if $\left\lfloor i/2\right\rfloor = \left\lfloor j/2\right\rfloor$, that single coordinate is set to $3/2$ instead). Then $s_{i,j}$ lies inside all strips of $A_{i} \cup A_{j}$ and inside no other strip.
\end{proof}
\section*{Acknowledgements}
This research was initiated during the WAFOL'09 workshop at Universit\'e Libre de {Bruxelles}~(U.L.B.), Brussels, Belgium.
The authors want to thank all other participants, and in particular Erik D. Demaine, for helpful discussions.
\bibliographystyle{plain}
% Greedoids from flames (https://arxiv.org/abs/2008.09107)
\section{Introduction}
Subgraphs preserving some connectivity properties while having as few edges as possible have been a subject of
interest since the beginning of graph theory. Suppose that $ D $ is a digraph with $ r\in V(D) $ and let us denote the local
edge-connectivity\footnote{The local edge-connectivity from $ r $ to $ v $ is the maximal
number of pairwise edge-disjoint $ r\rightarrow
v $
paths.} from $ r $ to some $ v\in V(D)-r $ by $ \lambda_D(r,v) $.
We are looking
for a spanning subgraph $ H $ of
$ D $ with the smallest possible number of edges in which all the local edge-connectivities outwards from the root $ r $ are the
same as in $ D $, i.e., $
\lambda_H(r,v)=\lambda_D(r,v)
$ for all $ v\in V(D)-r $. In order to have $ \lambda_D(r,v) $ many pairwise edge-disjoint paths from $ r $ to $ v $ in $ H $, it is
obviously necessary that the in-degree $ \varrho_H(v) $ of $ v $ in $ H $ is at least $ \lambda_D(r,v) $. This leads to the
estimate
$ \left|E(H)\right|\geq \sum_{v\in V(D)-r}\lambda_D(r,v) $. It was shown by Lovász that, perhaps surprisingly, this trivial
lower bound is always sharp.
\begin{thm}[Lov\'asz, Theorem 2 of \cite{lovasz}]\label{large flame}
For every digraph $ D $ and $ r\in V(D) $, there is a spanning subdigraph $ H $ of $ D $ such that for every $ v\in
V(D)-r $
\[ \lambda_D(r,v)=\lambda_{H}(r,v)= \varrho_H(v).\]
\end{thm}
Calvillo-Vives rediscovered Theorem \ref{large flame} independently in \cite{calvillo-vives} and named the
rooted digraphs $ F $
with $ \lambda_{F}(r,v)= \varrho_F(v) $ for all $ v\in V(F)-r $ `$ r $-flames' .
We establish a direct connection between the extremal problem
above and the theory of greedoids. The latter were introduced by Korte and Lovász as a generalization of matroids to capture
greedy solvability in problems where
the matroid concept turned out
to be too restrictive. The field has been actively investigated since the 1980s; for a survey we refer to \cite{GreedoidBook}.
We show that the subflames of a
rooted digraph always form a greedoid whose bases are exactly the subdigraphs described in Theorem \ref{large flame}.
\begin{thm}\label{flame greedoid}
Let $ D=(V,E) $ be a digraph and $ r\in V $. Then
\[ \mathcal{F}_{D,r}:=\{ E(F)\, |\, F\subseteq D \text{ is an }r\text{-flame} \} \]
is a greedoid on $ E $. Furthermore, for each $ \subseteq $-maximal element $ E(F^{*}) $ of
$ \mathcal{F}_{D,r} $ we have $ \lambda_{F^{*}}(r,v)=\lambda_D(r,v) $ for all $ v\in V-r $.
\end{thm}
The proof of Theorem \ref{large flame} by Lov\'asz is algorithmic, but polynomial only for simple digraphs. We prove a
fractional generalization of Lovász' theorem, considering digraphs with non-negative edge-capacities and replacing
`edge-connectivity' by `flow-connectivity'. Our proof provides a simple strongly polynomial algorithm to find an $ H $ with
the properties given in Theorem \ref{large flame}.
It is worth mentioning that one can formulate a structural infinite generalization of Theorem \ref{large flame} in the same manner
as Erdős conjectured such an extension of Menger's theorem (see \cite{aharoni2009menger}). As in the case of Menger's
theorem, the problem becomes much harder in the infinite setting. The ``vertex-variant'' of this
generalization was proved for countably infinite digraphs in \cite{attila-flames}, which was then further developed in
\cite{erde2020enlarging}.
\section{Notation}
In this paper we deal only with finite combinatorial structures. An $ \mathcal{F}\subseteq 2^{E} $ is a greedoid on $ E $ if $
\varnothing\in \mathcal{F} $ and $ \mathcal{F} $ has the \emph{Augmentation property}, i.e.,
whenever $ F, F'\in
\mathcal{F} $ with $ \left|F\right|<\left|F'\right| $, there is some $ e\in F'\setminus F $ such that $ F+e\in \mathcal{F} $.
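For intuition, the two conditions in the definition can be checked by brute force on small ground sets; a minimal sketch (the function name is ours):

```python
def is_greedoid(family):
    """Check that a finite set family contains the empty set and satisfies
    the augmentation property of a greedoid."""
    fam = {frozenset(f) for f in family}
    if frozenset() not in fam:
        return False
    for F in fam:
        for G in fam:
            # whenever |F| < |G|, some element of G \ F must augment F
            if len(F) < len(G) and not any(F | {e} in fam for e in G - F):
                return False
    return True
```

For example, $\{\varnothing,\{1\},\{2\},\{1,2\}\}$ passes, while $\{\varnothing,\{1,2\}\}$ fails because $\varnothing$ cannot be augmented to a feasible singleton.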
A digraph $ D $ is an ordered pair $ (V, E) $ where $ E $ is a set of directed edges with their endpoints in $ V $; parallel
edges are allowed but loops are not. Let us fix
throughout this paper a vertex set $ V $ and a ``root vertex'' $ r\in V $. For $
U\subseteq V $, $ \mathsf{in}_D(U) $ and $ \mathsf{out}_D(U) $ stand for the set of ingoing and outgoing edges of $ U $
respectively,
furthermore, let $ \varrho_D(U):=\left|\mathsf{in}_D(U)\right| $ and $ \delta_D(U):=\left|\mathsf{out}_D(U)\right| $.
For simplicity we always assume that $ \mathsf{in}_D(r)=\varnothing $.
We write shortly $ \lambda_D(v) $ for $ \lambda_D(r,v) $ where $ v\in V-r $. Recall that this is the
local edge-connectivity (i.e., the maximal number of pairwise edge-disjoint paths) from $ r $ to $ v $. We
define
$ \mathcal{G}_{D}(v) $ to be the set of those $ I \subseteq \mathsf{in}_D(v) $ for which there exists a system $ \mathcal{P} $
of edge-disjoint $ r\rightarrow v $ paths such that the set of the last edges of the paths in $ \mathcal{P} $ is $ I $. It is known
that $ \mathcal{G}_{D}(v) $ is the family of independent sets of a matroid. Matroids representable this way were discovered
by Perfect \cite{perfect1969independence} and Pym \cite{pym1969proof} independently (using an equivalent definition based
on vertex-disjoint paths between vertex sets) and are called
gammoids.
A digraph $ F $ is a flame if $ \mathcal{G}_{F}(v) $ is a free matroid\footnote{A free matroid is a matroid where all sets are
independent.} for every
$ v\in V-r $, equivalently $
\lambda_{F}(v)=\varrho_F(v) $ for every $ v\neq r $.
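The flame condition $\lambda_F(v)=\varrho_F(v)$ can be tested directly by computing edge-disjoint path numbers with augmenting paths; a self-contained sketch (unit capacities, parallel edges counted with multiplicity, function names ours):

```python
from collections import deque

def edge_disjoint_paths(edges, r, v):
    """lambda(r, v): maximum number of pairwise edge-disjoint r -> v paths,
    computed by augmenting paths on unit residual capacities."""
    res = {}
    for (a, b) in edges:
        res[(a, b)] = res.get((a, b), 0) + 1
        res.setdefault((b, a), 0)       # reverse (residual) arc
    count = 0
    while True:
        parent, q = {r: None}, deque([r])
        while q and v not in parent:    # BFS for an augmenting path
            a = q.popleft()
            for (x, y), c in res.items():
                if x == a and c > 0 and y not in parent:
                    parent[y] = a
                    q.append(y)
        if v not in parent:
            return count
        b = v
        while parent[b] is not None:    # push one unit along the path
            res[(parent[b], b)] -= 1
            res[(b, parent[b])] += 1
            b = parent[b]
        count += 1

def is_flame(edges, r, vertices):
    """True iff in-degree(v) == lambda(r, v) for every vertex v != r."""
    indeg = {u: 0 for u in vertices}
    for (a, b) in edges:
        indeg[b] += 1
    return all(edge_disjoint_paths(edges, r, u) == indeg[u]
               for u in vertices if u != r)
```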
\section{The flame greedoid of a rooted digraph}
The core of the proof of Theorem \ref{flame greedoid} is the following lemma.
\begin{lem}\label{key lemma}
Let $ H $ and $ D $ be digraphs and assume that
$\lambda_H(u)<\lambda_D(u) $ for some $ u\in V-r $. Then there is an $e\in E(D)\setminus E(H) $ with head, say $ v $, such
that
$ e $ is a coloop\footnote{A coloop is an edge of a matroid which can be added to any independent set without ruining
independence.} of $
\mathcal{G}_{H+e}(v) $, i.e.,
\[ \mathcal{G}_{H+e}(v)\supseteq\{ I+e:I\in \mathcal{G}_{H}(v) \}.\]
\end{lem}
\begin{proof}
Let $ \mathcal{U}:=\{ U\subseteq V-r: u\in U \text{
and } \varrho_H(U)=\lambda_{H}(u) \} $. By Menger's theorem
$ \mathcal{U}\neq \varnothing $ and the submodularity of the map $ X \mapsto \varrho_H(X) $ ensures that $
\mathcal{U} $ is closed under union and intersection. Let $ U $ be the $ \subseteq $-largest element of $ \mathcal{U} $. Since
$\lambda_H(u)<\lambda_D(u) $, there exists some edge $ e\in \mathsf{in}_{D}(U)\setminus \mathsf{in}_{H}(U) $. Note
that in $ H+e $ every $ X\subseteq V-r $ with $ X \supseteq U $ has at least $
\lambda_H(u)+1=\varrho_{H+e}(U) $ many ingoing
edges because of the maximality of $ U $.
By applying Menger's theorem in $ H+e $ with $ r $ and $ U $, we find a system $ \mathcal{P} $ of edge-disjoint $ r\rightarrow
U$ paths of size $ \lambda_H(u)+1 $ (see Figure \ref{Fig1}). The set
of the last edges of the paths in $ \mathcal{P} $ is necessarily the whole $ \mathsf{in}_{H+e}(U) $. Let the
head of $ e $ be $ v $ and let $ I\in \mathcal{G}_{H}(v) $ be witnessed by the path-system $ \mathcal{Q} $. Clearly each $ Q\in
\mathcal{Q} $ enters $ U $ at least once. For $ Q\in \mathcal{Q} $, we define $ f_{Q} $ as the last edge of $ Q $ lying in
$ \mathsf{in}_H(U) $. Finally, we build a path-system $ \mathcal{R} $ witnessing $ I+e\in \mathcal{G}_{H+e}(v) $ as follows.
For $ Q\in
\mathcal{Q} $, we consider the unique $ P_Q\in \mathcal{P} $ with last edge $ f_Q $ and concatenate it with the terminal
segment of $ Q $ from $ f_Q $ to obtain $ R_Q $. Moreover, let $ R_e $ be the unique path in $ \mathcal{P} $ with last edge $
e $. Then $ \mathcal{R}:=\{ R_Q: Q\in \mathcal{Q} \}\cup \{ R_e \} $ witnesses $ I+e\in \mathcal{G}_{H+e}(v) $ as desired.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\draw (-2,0.5) rectangle (5,-3);
\node (v14) at (1.5,-0.5) {$v$};
\node at (-1,-2.5) {$u$};
\node (v1) at (0.5,3.5) {$r$};
\node[circle,inner sep=0pt,draw,minimum size=5] (v2) at (-0.5,3) {};
\node[circle,inner sep=0pt,draw,minimum size=5] (v3) at (-0.5,2) {};
\node[circle,inner sep=0pt,draw,minimum size=5] (v4) at (-1,1) {};
\node[circle,inner sep=0pt,draw,minimum size=5] (v5) at (-1,-0.5) {};
\draw [-triangle 60] (v1) edge (v2);
\draw [-triangle 60] (v2) edge (v3);
\draw [-triangle 60] (v3) edge (v4);
\draw [-triangle 60, very thick] (v4) edge (v5);
\node[circle,inner sep=0pt,draw,minimum size=5] (v6) at (0,1) {};
\node[circle,inner sep=0pt,draw,minimum size=5] (v7) at (0,-0.5) {};
\draw [-triangle 60] (v1) edge (v3);
\draw [-triangle 60] (v3) edge (v6);
\draw [-triangle 60, very thick] (v6) edge (v7);
\node[circle,inner sep=0pt,draw,minimum size=5] (v8) at (1.5,2.5) {};
\node[circle,inner sep=0pt,draw,minimum size=5] (v9) at (2,1) {};
\node[circle,inner sep=0pt,draw,minimum size=5] (v10) at (3.5,-0.5) {};
\draw [-triangle 60] (v1) edge (v8);
\draw [-triangle 60] (v8) edge (v9);
\draw [-triangle 60, very thick] (v9) edge (v10);
\node[circle,inner sep=0pt,draw,minimum size=5] (v11) at (2,3.5) {};
\node[circle,inner sep=0pt,draw,minimum size=5] (v12) at (3,2) {};
\draw [-triangle 60] (v1) edge (v11);
\draw [-triangle 60] (v11) edge (v12);
\draw [-triangle 60, very thick] (v12) edge (v10);
\node[circle,inner sep=0pt,draw,minimum size=5] (v13) at (1,1.5) {};
\node[circle,inner sep=0pt,draw,minimum size=5] (v15) at (-0.5,-1.5) {};
\node[circle,inner sep=0pt,draw,minimum size=5] (v16) at (1,-1.5) {};
\draw [-triangle 60] (v1) edge (v13);
\draw [-triangle 60, very thick] (v13) edge (v14);
\draw [-triangle 60, dashed] (v5) edge (v15);
\draw [-triangle 60, dashed] (v15) edge (v16);
\draw [-triangle 60, dashed] (v16) edge (v14);
\draw [-triangle 60, dashed] (v7) edge (v14);
\node at (1,0.8) {$e$};
\node at (4.4,0.8) {$U$};
\node at (0.4,2.4) {$\mathcal{P}$};
\node at (0.6,-1) {$\mathcal{Q}$};
\end{tikzpicture}
\caption{$ \mathsf{in}_{H+e}(U) $ consists of the thick edges, the terminal segments of the paths in $ \mathcal{Q} $ are
dashed. } \label{Fig1}
\end{figure}
\end{proof}
\begin{proof}[Proof of Theorem \ref{flame greedoid}]
Suppose that $ F_0, F_1\subseteq D $ are flames with $ \left|E(F_0)\right|< \left|E(F_1)\right| $. Then there must
be some $ u\in V-r $ for which $ \varrho_{F_0}(u)< \varrho_{F_1}(u)$. Since $ F_0 $ and $ F_1 $
are flames
\[ \lambda_{F_0}(u)= \varrho_{F_0}(u)< \varrho_{F_1}(u)=\lambda_{F_1}(u).\]
By applying Lemma \ref{key lemma} with $ F_0, F_1 $ and $ u $, we find an $ e\in E(F_1)\setminus E(F_0) $ with head $ v $
where $
e $ is a coloop of $
\mathcal{G}_{F_0+e}(v) $. On the one hand, $ \mathcal{G}_{F_0}(v) $ is a free matroid and the previous
sentence ensures that $ \mathcal{G}_{F_0+e}(v) $ is free as well. On the other hand, for $ w\in V\setminus \{ r,v \} $ any
path-system
witnessing that $ \mathcal{G}_{F_0}(w) $ is a free matroid shows the same for
$ \mathcal{G}_{F_0+e}(w) $. By combining these we may conclude that $ F_0+e $ is a flame.
In order to prove the last sentence of Theorem \ref{flame greedoid}, let $ F^{*} $ be a maximal flame in $ D $ and suppose for a
contradiction that
$ \lambda_{F^{*}}(u)<\lambda_{D}(u)$ for some $ u\in V-r $. Applying Lemma \ref{key lemma} gives again some
$ e\in E\setminus E(F^{*}) $ for which $ F^{*}+e $ is a flame contradicting the maximality of $ F^{*} $.
\end{proof}
\section{Fractional generalization and algorithmic aspects}
In this section we define a fractional version of Lovász's theorem and prove it by giving a strongly polynomial algorithm that finds
a desired optimal substructure. We consider non-negative vectors indexed by
the edge set $ E $ of a fixed digraph $ D=(V,E) $. This time we assume without loss of generality that $ D $ has no parallel edges
because
replacing a bunch of parallel
edges by a single edge whose capacity is defined to be the sum of the capacities of those will be a meaningful reduction step in all
the results we discuss. For $
x,y\in
\mathbb{R}_+^{E} $, we write $ x\leq y $ if $ x(e)\leq y(e) $ for every $ e\in E $ and for $ U\subseteq V $ let
$\varrho_x(U):=\sum_{e\in \mathsf{in}_{D}(U)}x(e) $ and $ \delta_x(U):=\sum_{e\in \mathsf{out}_{D}(U)}x(e) $. An $ x\in
\mathbb{R}_+^{E} $ is an $ r\rightarrow v $ flow if $ \varrho_x(u)=\delta_x(u) $ holds for all $ u\in
V\setminus \{ r,v \} $ and $ \varrho_x(r)=\delta_x(v)=0 $. We introduce some concepts and basic facts about flows, one can find
more details and proofs for example in subsection 3.4 of \cite{frank2011connections}. The \emph{amount} of the flow $ x $ is
defined to be $ \delta_x(r) $
which is equal to $ \varrho_x(W)-\delta_x(W) $ for every choice of $ W\subseteq V-r $ containing $ v $. Note that $ x $ can
be written
as a non-negative
combination of directed cycles and $ r\rightarrow v $ paths (more precisely, of their characteristic vectors). Such a decomposition
can be found in a greedy way. The sum of the
coefficients of the paths in any such decomposition is again $ \delta_x(r) $. For $ v\in V-r $
and $ c\in \mathbb{R}_+^{E} $, the
\emph{flow-connectivity} of $ c $
from $ r $ to $ v $ is
\[ \lambda_c(v):=\max \{ \delta_x(r) : x\text{ is an }r\rightarrow v\text{ flow with }x\leq c \}. \]
The Max flow min cut theorem (see \cite{MFMC}) guarantees that $ \lambda_c(v) $ is well-defined and equals
\[ \min \{ \varrho_c(W): W\subseteq V-r\text{ with }v\in W \}. \] For $v\in V-r $ and $ c\in \mathbb{R}_+^{E} $, we write
$ \mathcal{G}_c(v) $ for the set of those vectors in $ \mathbb{R}_+^{\mathsf{in}_D(v)} $ that can be obtained as the restriction
of an $ r\rightarrow v $ flow $ x\leq c $ to $ \mathsf{in}_D(v) $, which we denote by $ x \upharpoonright \mathsf{in}_D(v) $. It is
not too hard to prove that $ \mathcal{G}_c(v) $ is a polymatroid, and it is natural to call it a \emph{polygammoid}. An $ f\in
\mathbb{R}_+^{E} $ is a \emph{fractional flame}
if $ f\upharpoonright \mathsf{in}_D(v)\in \mathcal{G}_f(v) $ (equivalently $ \lambda_f(v)=\varrho_f(v) $) for all $ v\in V-r $.
For $ e\in E $, let $ \chi_e\in \mathbb{R}_+^{E} $ be the vector where $ \chi_e(e') $ is $ 1 $ if $ e=e' $ and $ 0 $
otherwise.
We call a vector \emph{integral} if all of its coordinates are integers.
The fractional version of Lovász' theorem can be formulated in the following way.
\begin{thm}\label{large fractional flame Alg}
Let $ D=(V,E) $ be a digraph and $ r\in V $. Then for every $ c\in \mathbb{R}_+^{E} $ there is an $ f\leq c $ such
that for every $ v\in
V-r $
\[ \lambda_c(v)=\lambda_f(v)=\varrho_f(v),\]
moreover, if $ c $ is integral then $ f $ can be chosen to be integral. Such an $ f $ can be found in strongly polynomial time.
\end{thm}
\begin{proof}
In contrast to the situation of Theorem \ref{large flame}, the following fractional analogue of Lemma \ref{key lemma} is not
by itself sufficient to provide the existence part of Theorem \ref{large fractional flame Alg}, but it will be an important tool later.
\begin{lem}\label{epsilon lemma}
Let $ x, y\in \mathbb{R}_+^{E} $ such that
$\lambda_y(u)<\lambda_x(u) $ for some $ u\in V-r $. Then there is an $e\in E $ with head, say $ v $, and
an $ \varepsilon>0 $ such that $
x(e)-y(e)\geq\varepsilon $ and
\[ \mathcal{G}_{y+\varepsilon \chi_e}(v)=\{s+\delta \chi_e :s\in \mathcal{G}_{y}(v)\wedge 0\leq\delta \leq \varepsilon
\}.\]
\end{lem}
\begin{proof}
The proof is similar to that of Lemma \ref{key lemma}. By applying the Max flow min cut theorem and the submodularity of the
function $ X\mapsto \varrho_y(X) $, we take the maximal $ U\subseteq V-r $ with $ u\in U $ and $ \varrho_y(U)=\lambda_y(u)
$. We pick some $ e\in \mathsf{in}_D(U) $ with $ x(e)>y(e) $ and let
\[ \varepsilon:= \min \{x(e)-y(e),\ \varrho_y(W)-\varrho_y(U): U\subsetneq W\subseteq V-r \}. \]
Let $ p $ be an $ r\rightarrow u $ flow of maximal amount
with respect to the capacity $ y+\varepsilon\chi_e $ in the auxiliary digraph we obtain by contracting $ U $ to $ u $ while
deleting the arising loops. By defining $ p $ on the edges with both ends in $ U $ to be $ 0 $, we ensure $ p\in
\mathbb{R}_{+}^{E} $.
The Max flow min cut theorem and the choice of $ \varepsilon $ guarantee that
\[ p\upharpoonright \mathsf{in}_D(U)=(y+\varepsilon\chi_e) \upharpoonright \mathsf{in}_D(U). \]
We may assume that $ p $ is a non-negative combination of $
r\rightarrow U $ paths. Let $ s\in \mathcal{G}_y(u) $ witnessed by the $ r\rightarrow u $ flow $ q $ which is a non-negative
combination of $
r\rightarrow u $ paths. Take the sum of the terminal segments of these weighted paths from the last common edge with $
\mathsf{in}_D(U) $ together with the trivial path $ e $ with a given weight $ \delta $ with $ 0\leq \delta \leq \varepsilon$ to
obtain a vector $ q' $.
Starting with $ p $ one can construct a $ p'\leq p $ which is a non-negative combination of $
r\rightarrow U $ paths and for which $p'\upharpoonright \mathsf{in}_D(U)=q' \upharpoonright \mathsf{in}_D(U) $. It is easy to
see that the coordinate-wise maximum of $ p' $ and $ q' $ witnesses $ s+\delta \chi_e\in \mathcal{G}_{y+\varepsilon
\chi_e}(v) $.
\end{proof}
Now we turn to the description of the algorithm. Let $ V=\{ v_0,\dots, v_n \} $ where $ v_0=r $. The algorithm starts with $
f_0:=c $. If $ f_k\in
\mathbb{R}_+^{E} $ is already
constructed and $ k<n $, then we take an $ r\rightarrow v_{k+1} $ flow $ z_{k+1}\leq f_k $ of amount $
\lambda_{f_k}(v_{k+1}) $,
which we
choose to
be integral if
$ f_k $ is integral, and define \[ f_{k+1}(e):= \begin{cases} z_{k+1}(e) &\mbox{if } e\in \mathsf{in}_D(v_{k+1}) \\
f_k(e) & \mbox{otherwise.}
\end{cases} \]
Since the flow problem can be solved in strongly polynomial time, the algorithm described above is strongly polynomial with a
suitable flow-subroutine. We claim that $ f_n $ satisfies the demands of Theorem \ref{large fractional flame Alg}. Since we start
with $ c $ and lower some values in each step, $ f_n\leq c $ holds. If $ c\in \mathbb{Z}_+^{E} $, then a straightforward
induction shows that $ f_n\in \mathbb{Z}_+^{E} $.
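The algorithm admits a compact implementation on top of any max-flow routine. The sketch below uses a simple Edmonds--Karp subroutine (so this particular version is merely polynomial, not strongly polynomial, but it computes the same $f_n$); the function names are ours:

```python
from collections import deque

def max_flow(nodes, cap, s, t):
    """Edmonds-Karp; cap maps (u, v) -> capacity. Returns (value, flow dict)."""
    res = dict(cap)                      # residual capacities
    for (u, v) in list(cap):
        res.setdefault((v, u), 0.0)      # reverse arcs start empty
    adj = {x: set() for x in nodes}
    for (u, v) in res:
        adj[u].add(v)
    value = 0.0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:     # BFS for a shortest augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and res[(u, v)] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[e] for e in path)    # bottleneck
        for (u, v) in path:
            res[(u, v)] -= b
            res[(v, u)] += b
        value += b
    return value, {e: max(cap[e] - res[e], 0.0) for e in cap}

def fractional_flame(nodes, cap, r):
    """Process v_1, ..., v_n in order; replace the capacities on the in-edges
    of v_k by the values of a maximum r -> v_k flow (the algorithm above)."""
    f = dict(cap)
    for v in nodes:
        if v == r:
            continue
        _, flow = max_flow(nodes, f, r, v)
        for e in f:
            if e[1] == v:
                f[e] = flow[e]
    return f
```

On the resulting $f$ one can check directly that $\varrho_f(v)=\lambda_f(v)=\lambda_c(v)$ for every $v\neq r$.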
\begin{lem}\label{side key lemma}
If $ z\leq x $ is an $ r\rightarrow v $ flow of amount $ \lambda_x(v) $ and $ {y(e):=
\begin{cases} z(e) &\mbox{if }
e\in \mathsf{in}_D(v) \\
x(e) & \mbox{otherwise}\end{cases}} $ then $ \lambda_y(u)=\lambda_x(u) $ for every $ u\in V-r $.
\end{lem}
\begin{proof}
Suppose for a contradiction that there exists a $ u\in V-r $ with $ \lambda_y(u)<\lambda_x(u) $. Note that $ u\neq v $ because $
\lambda_x(v)=\lambda_y(v) $ is witnessed by $ z $. By
Lemma \ref{epsilon lemma}, there is an $e\in E $ and
an $ \varepsilon $ such that $
x(e)-y(e)>\varepsilon>0 $ (which implies that the head of $ e $ must be $ v $) and
$ \mathcal{G}_{y+\varepsilon \chi_e}(v)=\{s+\delta \chi_e :s\in \mathcal{G}_{y}(v)\wedge 0\leq\delta \leq \varepsilon
\} $. Let $ s_0:= z \upharpoonright \mathsf{in}_D(v) $. Then
\[\lambda_x(v)\geq \lambda_{y+\varepsilon \chi_e}(v)\geq \left|\left|s_0 \right|\right|_1+\varepsilon=\lambda_x(v)+\varepsilon \]
which is a contradiction.
\end{proof}
By applying Lemma \ref{side key lemma} with $ x=f_k,\ y=f_{k+1} $ and $ z=z_{k+1} $ we obtain the following.
\begin{cor}\label{keep trim}
$ \lambda_{f_k}(v)=\lambda_{f_{k+1}}(v) $ for every $ k<n $ and $ v\in V-r $.
\end{cor}
It follows by induction on $ k $ that $
\lambda_{f_k}(v) =\lambda_c(v)$ for every $ v\in V-r $ and $ k\leq n $. In particular $ \lambda_{f_n}(v)=\lambda_c(v) $ for all
$ v\in V-r $. Let $ 1\leq k \leq n $ be arbitrary. Then $
\varrho_{f_k}(v_k)=\lambda_{f_k}(v_k) $ follows directly from the algorithm (the common value is $ \varrho_{z_k}(v_k) $).
On the one hand, the left side is equal to $ \varrho_{f_n}(v_k) $ since in moving from $ f_k $ to $ f_n $ the algorithm no longer
changes the values on the elements of $ \mathsf{in}_D(v_k) $. On the other hand, we have seen that $
\lambda_{f_k}(v_k)=\lambda_{f_n}(v_k)=\lambda_c(v_k) $. Combining
these, we obtain $ \varrho_{f_n}(v_k)=\lambda_{f_n}(v_k) $ for every $ k $, which completes the proof of Theorem \ref{large fractional flame Alg}.
\end{proof}
Finally, let us point out a special case of Lemma \ref{side key lemma}.
\begin{cor}
Let $ D $ be a directed graph and let $ \mathcal{P} $ be a maximum-size family of pairwise edge-disjoint $ r\rightarrow v $ paths
in $ D $. Then the deletion of those ingoing edges of $ v $ that are unused by the path-family $ \mathcal{P} $ does not reduce
any local
edge-connectivity of the form $ \lambda_D(r,u) $ with $ u\in V(D)-r $.
\end{cor}
\section{Outlook}
By Theorem \ref{large fractional flame Alg}, finding a spanning subdigraph of a given digraph $ D $ that preserves all the local
edge-connectivities from a prescribed root vertex $ r $ and has the fewest possible edges with respect to this property can be done
in polynomial time. It is natural to ask about the complexity of the weighted version:
\begin{quest}
What is the complexity of the following combinatorial optimization problem?\\
Input: digraph $ D $, $ r\in V(D) $ and cost function $ c: E(D)\rightarrow \mathbb{R}_+ $\\
Output: spanning subdigraph $ F $ of $ D $ with $ \lambda_F(r,v)=\lambda_D(r,v) $ for every $ v\in V(D)-r $ for which $
\sum_{e\in E(F)}c(e) $ is minimal with respect to this property.
\end{quest}
The special case where $ \lambda_D(r,v) $ is the same for every $ v\in V(D)-r $ can be solved in polynomial time by using
weighted matroid intersection (see \cite{MR0270945}).\\
There are more general flow models involving polymatroidal bounding functions (see for example \cite{polyflow} and
\cite{quasipolyflow}). The max-flow min-cut theorem remains valid in these models.
\begin{quest}
Is it possible to generalize Theorem \ref{large fractional flame Alg} by using the polymatroidal flow model introduced by
Hassin in \cite{hassin1982minimum} (and rediscovered later by Lawler and Martel
in \cite{polyflow} independently)?
\end{quest}
The relation between matroids and polymatroids motivates the following concept of polygreedoids:
a \emph{polygreedoid} is a compact $ \mathcal{P}\subseteq \mathbb{R}_+^{E} $ such that
\begin{enumerate}
\item[PG1] $ \underline{0}\in \mathcal{P} $,
\item[PG2] whenever $ x,y\in
\mathcal{P} $ with $\left|\left|x\right|\right|_1<\left|\left|y\right|\right|_1$, there is some $ e\in E $ with $ y(e)>x(e) $ such that $
x+\varepsilon \chi_e\in \mathcal{P} $ for all small enough $ \varepsilon>0 $.
\end{enumerate}
It follows directly from Lemma \ref{epsilon lemma} that fractional flames under a given bounding vector form a polygreedoid.
Greedoids have a property called \emph{accessibility}, which can be considered as a weakening of the downward closedness of
matroids. It says that every $ F\in \mathcal{F} $ can be enumerated in such a way that each initial segment belongs to $
\mathcal{F} $, i.e., $ F=\{ e_1,\dots, e_n \} $ with $ \{ e_1,\dots, e_k \}\in \mathcal{F} $ for every $ k\leq n $.
Accessibility is often part of the axiomatization of greedoids, via the restriction of the Augmentation axiom to pairs with
$ \left|F'\right|=\left|F\right|+1 $. It is not too hard to prove that polygreedoids satisfy the following analogous property: for every
$ x\in \mathcal{P} $ there is a continuous, strictly increasing\footnote{Strictly increasing is meant with respect to the
coordinate-wise partial ordering of $
\mathbb{R}_+^{E} $.} function $ g:[0,1]\rightarrow
\mathcal{P} $ with $ g(0)= \underline{0}$ and $ g(1)=x $. Finally, let us end the paper with the following general question.
\begin{quest}
How much of the theory of greedoids is preserved for polygreedoids?
\end{quest}
\begin{bibdiv}
\begin{biblist}
\bib{lovasz}{article}{
author={Lov\'{a}sz, L.},
title={Connectivity in digraphs},
journal={J. Combinatorial Theory Ser. B},
volume={15},
date={1973},
pages={174--177},
issn={0095-8956},
review={\MR{325439}},
doi={10.1016/0095-8956(73)90018-x},
}
\bib{calvillo-vives}{thesis}{
author={Calvillo-Vives, Gilberto},
title={Optimum branching systems},
date={1978},
type={Ph.D. Thesis},
organization={University of Waterloo},
}
\bib{Greedoid Book}{book}{
author={Korte, Bernhard},
author={Lov\'{a}sz, L\'{a}szl\'{o}},
author={Schrader, Rainer},
title={Greedoids},
series={Algorithms and Combinatorics},
volume={4},
publisher={Springer-Verlag, Berlin},
date={1991},
pages={viii+211},
isbn={3-540-18190-3},
review={\MR{1183735}},
doi={10.1007/978-3-642-58191-5},
}
\bib{aharoni2009menger}{article}{
author={Aharoni, Ron},
author={Berger, Eli},
title={Menger's theorem for infinite graphs},
journal={Invent. Math.},
volume={176},
date={2009},
number={1},
pages={1--62},
issn={0020-9910},
review={\MR{2485879}},
doi={10.1007/s00222-008-0157-3},
}
\bib{attila-flames}{article}{
author={Jo\'{o}, Attila},
title={Vertex-flames in countable rooted digraphs preserving an Erd\H{o}s-Menger separation for each vertex},
journal={Combinatorica},
volume={39},
date={2019},
pages={1317--1333},
doi={10.1007/s00493-019-3880-z},
}
\bib{erde2020enlarging}{article}{
author={Erde, Joshua},
author={Gollin, J. Pascal},
author={Jo\'{o}, Attila},
title={Enlarging vertex-flames in countable digraphs},
journal={arXiv preprint arXiv:2003.06178},
date={2020},
note={\url{https://arxiv.org/abs/2003.06178v1}},
}
\bib{perfect1969independence}{article}{
title={Independence spaces and combinatorial problems},
author={Perfect, Hazel},
journal={Proceedings of the London Mathematical Society},
volume={3},
number={1},
pages={17--30},
year={1969},
publisher={Wiley Online Library}
}
\bib{pym1969proof}{article}{
title={A proof of the linkage theorem},
author={Pym, J. S.},
journal={Journal of Mathematical Analysis and Applications},
volume={27},
number={3},
pages={636--638},
year={1969},
publisher={Elsevier}
}
\bib{frank2011connections}{book}{
title={Connections in combinatorial optimization},
author={Frank, Andr{\'a}s},
volume={38},
year={2011},
publisher={OUP Oxford}
}
\bib{MFMC}{book}{
author={Ford, L. R., Jr.},
author={Fulkerson, D. R.},
title={Flows in networks},
publisher={Princeton University Press, Princeton, N.J.},
date={1962},
pages={xii+194},
review={\MR{0159700}},
}
\bib{MR0270945}{article}{
author={Edmonds, Jack},
title={Submodular functions, matroids, and certain polyhedra},
conference={
title={Combinatorial Structures and their Applications},
address={Proc. Calgary Internat. Conf., Calgary, Alta.},
date={1969},
},
book={
publisher={Gordon and Breach, New York},
},
date={1970},
pages={69--87},
review={\MR{0270945}},
}
\bib{hassin1982minimum}{article}{
title={Minimum cost flow with set-constraints},
author={Hassin, Refael},
journal={Networks},
volume={12},
number={1},
pages={1--21},
year={1982},
publisher={Wiley Online Library}
}
\bib{polyflow}{article}{
author={Lawler, E. L.},
author={Martel, C. U.},
title={Polymatroidal flows with lower bounds},
note={Applications of combinatorial methods in mathematical programming
(Gainesville, Fla., 1985)},
journal={Discrete Appl. Math.},
volume={15},
date={1986},
number={2-3},
pages={291--313},
issn={0166-218X},
review={\MR{865009}},
doi={10.1016/0166-218X(86)90050-8},
}
\bib{quasipolyflow}{article}{
author={Kochol, M.},
title={Quasi polymatroidal flow networks},
journal={Acta Math. Univ. Comenian. (N.S.)},
volume={64},
date={1995},
number={1},
pages={83--97},
issn={0862-9544},
review={\MR{1360989}},
}
\end{biblist}
\end{bibdiv}
\end{document}
https://arxiv.org/abs/0807.3415 | On the spectral gap of the Kac walk and other binary collision processes | We give a new and elementary computation of the spectral gap of the Kac walk on the N-sphere. The result is obtained as a by-product of a more general observation which allows to reduce the analysis of the spectral gap of an N-component system to that of the same system for N=3. The method applies to a number of random 'binary collision' processes with complete-graph structure, including non-homogeneous examples such as exclusion and colored exclusion processes with site disorder. | \section{Introduction}
The following model for energy preserving binary collisions
was introduced by M.\ Kac in \cite{Kac}.
Let $\nu$ denote the uniform probability measure on the sphere
$$
S^{N-1} = \{\eta\in\bbR^N\,:\;\sum_{i=1}^N\eta_i^2=1\}\,,
$$
and consider the $\nu$--reversible
Markov process on $S^{N-1}$ with infinitesimal generator given by
\be\la{genkac}
\cL f (\eta)= \frac1{2N}\sum_{i,j=1}^N \frac1{2\pi}
\int_0^{2\pi} \left[f(R_\theta^{ij}\eta) - f(\eta)\right]\,d\theta\,,
\end{equation}
where $R_\theta^{ij}$, $i\neq j$
is a clockwise rotation of angle $\theta$
in the plane $(\eta_i,\eta_j)$. As a convention, we take
$R^{ii}_\theta = {\rm Id}$.
In words, the associated Markov process goes as follows:
we have independent Poisson clocks of rate $1/2$
at each coordinate; when coordinate $i$ rings we choose a coordinate $j$ uniformly at random (possibly $j=i$); if $j\neq i$ then we perform a rotation of angle $\theta$ in the plane $(\eta_i,\eta_j)$, with $\theta$ uniform over $[0,2\pi)$, while if $j=i$ we do nothing.
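In code, one collision step is a random planar rotation; the following minimal sketch (ours; we fix one of the two possible sign conventions for the rotation, which is immaterial since $\theta$ is uniform over $[0,2\pi)$) illustrates that each step preserves $\sum_i\eta_i^2$:

```python
import math
import random

def kac_step(eta, rng):
    """One collision of the Kac walk: pick i, then j uniformly (with
    replacement); if j != i, rotate the (eta_i, eta_j) plane by a
    uniform angle theta.  Each step preserves sum(eta_k ** 2)."""
    n = len(eta)
    i, j = rng.randrange(n), rng.randrange(n)
    if i != j:
        th = rng.uniform(0.0, 2.0 * math.pi)
        ei, ej = eta[i], eta[j]
        eta[i] = math.cos(th) * ei + math.sin(th) * ej
        eta[j] = -math.sin(th) * ei + math.cos(th) * ej
    return eta

rng = random.Random(0)
eta = [1.0] + [0.0] * 9          # a point on S^9
for _ in range(1000):
    kac_step(eta, rng)
assert abs(sum(x * x for x in eta) - 1.0) < 1e-9   # still on the sphere
```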
Note that $-\cL$ is a non--negative, bounded self--adjoint operator on
$L^2(\nu)$. Any constant is an eigenfunction with eigenvalue $0$
and the spectral gap $\l=\l(N)$ is defined as
\be\la{gapkac}
\l(N)=\inftwo{f\in L^2(\nu):}{\nu(f)=0}\,\frac{\nu(f(-\cL) f)}{\nu(f^2)}\,,
\end{equation}
where $\nu(f)$ stands for the expectation $\int f d\nu$.
M.\ Kac conjectured that $\l(N)$ stays bounded away from $0$ as $N\to\infty$.
This conjecture was first proved by Janvresse \cite{J}, using a powerful recursive approach due to H.T.\ Yau.
After that, in the beautiful paper \cite{CCL}, Carlen, Carvalho and Loss
introduced a new recursive approach which allows one to compute
the value of $\l(N)$ for every $N$:
\be\la{ccl1}
\l(N)=\frac{N+2}{4N}\,, \quad N\geq 2\,.
\end{equation}
Around the same time, Maslen \cite{Maslen} derived formulae
for all eigenvalues of $\cL$ by means of
harmonic analysis techniques. We refer to \cite{CCL} for further background, motivation and references on Kac's conjecture.
Our result below shows that the proof of
(\ref{ccl1}) can be somewhat simplified.
In particular, we do not need any recursive analysis:
in one step we go from $\l(N)$ to $\l(3)$ and the conclusion follows by direct computations in the case $N=3$.
\begin{Th}
\la{th1}
For any $N\geq 3$:
\be
\la{bound3}
\l(N)=(3\,\l(3) - 1)\left(1-\frac2N\right) + \frac1{N}\,.
\end{equation}
In particular, (\ref{ccl1}) follows from (\ref{bound3})
with $\l(3)=5/12$.
\end{Th}
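As a quick sanity check of the arithmetic (ours, not part of the proof): substituting $\l(3)=5/12$ into (\ref{bound3}) reproduces (\ref{ccl1}) exactly, as exact rational arithmetic confirms.

```python
from fractions import Fraction

def rhs_bound3(N, lam3=Fraction(5, 12)):
    """Right-hand side of (bound3): (3*lam(3) - 1)(1 - 2/N) + 1/N."""
    return (3 * lam3 - 1) * (1 - Fraction(2, N)) + Fraction(1, N)

# agrees with lam(N) = (N + 2)/(4N) from (ccl1) for every N tested
for N in range(3, 100):
    assert rhs_bound3(N) == Fraction(N + 2, 4 * N)
```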
The proof uses a well known equivalent characterization of the spectral gap as the largest constant $\l$ such that
the inequality
\be
\nu\left((\cL f)^2\right) \geq \l\,\nu\left(f(-\cL )f\right)\,,
\la{equi}
\end{equation}
holds for all $f\in L^2(\nu)$. The equivalence of (\ref{gapkac}) and
(\ref{equi}) follows from elementary spectral theory.
A similar approach has been exploited recently in \cite{BCDP} to obtain spectral gap bounds for a class of interacting particle systems.
A proof of Theorem \ref{th1} is given at the end of the
introduction. In the following sections we shall show that
variants of the same method can be used to obtain spectral gap estimates for several models sharing some of the features of the Kac walk. The argument turns out to be especially powerful in
non--homogeneous cases where other known methods are harder to apply
because of the lack of permutation symmetry. In particular, for
exclusion processes with site disorder we obtain a remarkable
simplification of a spectral gap estimate proved by the author in
\cite{C1}. The latter estimate
is at the heart of recent
results on the hydrodynamic limit of disordered lattice gases
\cite{FM,Q}. In the last section of this work we prove a new result which
extends the spectral gap estimate to the
case of colored particles.
On the other hand, we point out that for some
of the physically more relevant generalizations of the Kac walk
treated in \cite{CCL} our argument will not necessarily yield sharp
results. This is the case, for example, of the
Kac model with momentum and energy conserving collisions discussed in
the next section.
The problem of the determination of the spectral gap for the latter model has been recently solved in \cite{CJL}, where the authors
develop an interesting extension of the recursive scheme introduced in \cite{CCL}.
\subsection{Proof of Theorem \ref{th1}}
We start with some preliminaries. We write
\be\la{eij}
E_{b} f (\eta) = \frac1{2\pi}\int_0^{2\pi} f(R_\theta^{ij}\eta)\,d\theta\,,\quad b=\{i,j\}\,,
\end{equation}
for the binary average operator appearing in the definition of $\cL$.
Note that $E_{b}$ is a projection which coincides with the
$\nu$--conditional expectation given the $\si$--algebra $\cF_{b}$ generated by variables $\{\eta_\ell,\; \ell\notin b\}$.
Thus, we rewrite the Markov generator as follows:
\be
\cL f(\eta) = \frac1N \sum_{b} D_b f (\eta)\,,
\la{gen}
\end{equation}
where
the sum runs over all $\binom{N}{2}$
unordered pairs $b$ and for each
such pair
\be
D_b = E_b - {\rm Id}\,,\quad\;\, E_b f(\eta) = \nu(f\thsp | \thsp \cF_b) \,.
\la{ub}
\end{equation}
For each $b$, $D_b$ is a bounded self-adjoint operator in $L^2(\nu)$ satisfying
$D_b^2=-D_b$.
In particular,
\be
\nu\left(f(-\cL)g\right) = \frac1N \sum_b \nu[D_bf D_bg]\,.
\la{efg}
\end{equation}
On the other hand we have
\be
\nu\left((\cL f)^2\right) =
\frac1{N^2} \sum_{b,b'} \nu[D_b f D_{b'} f]\,.
\la{ufg1}
\end{equation}
We are going to use the expressions (\ref{efg}) and (\ref{ufg1}) in (\ref{equi}) to compute $\l(N)$. We start with the lower bound.
We write $b\sim b'$ when two unordered pairs have at least one common
vertex (including the case $b=b'$). Otherwise we write
$b\not\sim b'$. We observe that
if $b\not\sim b'$, then $E_b$ and $E_{b'}$ commute. Therefore,
using $D_b^2=-D_b$ and self--adjointness
\be\la{commu}
\nu[D_b f D_{b'} f] = - \nu[(D_{b'}D_b f) (D_{b'} f)]
= \nu[(D_{b'}D_b f)^2] \geq 0\,,\quad \;b\not\sim b'\,.
\end{equation}
It follows that
\be\la{argo1}
\nu\left((\cL f)^2\right) \geq \frac1{N^2} \sum_{b,b':\; b\sim b'}
\nu[D_b f D_{b'} f]\,.
\end{equation}
Unordered triples $\{i,j,k\}$ of distinct vertices
are
denoted by $T$ (triangles). We say that $b\in T$ if $b=\{i,j\}$ and
$i,j\in T$. Clearly, if $b\sim b'$
and $b\neq b'$ there is only one triangle $T$ such
that $b,b'\in T$.
We may therefore write
\begin{align}
\sum_{ b,b'\,:\; b\sim b'} & \nu[D_b f D_{b'}f] = \sum_{b,b'\,:\; b\sim b'\,,\;b\neq b'} \nu[D_b f D_{b'}f]
+ \sum_{b} \nu[(D_b f)^2] \nonumber \\
& = \sum_{T} \sum_{b,b'\in T} \nu[D_b f D_{b'}f]
- \sum_{T} \sum_{b\in T} \nu[(D_b f)^2] + \sum_{b} \nu[(D_b f)^2] \,.
\la{tria}
\end{align}
Since for every $b$ there are exactly $N-2$ triangles $T$ such that
$b\in T$ we see that
\begin{align}
\sum_{ b,b'\,:\; b\sim b'} & \nu[D_b f D_{b'}f] \nonumber\\
&\quad= \sum_{T} \sum_{b,b'\in T} \nu[D_b f D_{b'}f] - (N-3)\sum_{b} \nu[(D_b f)^2] \,.
\la{co9}
\end{align}
Let us now apply the inequality (\ref{equi}) to a fixed triangle $T$.
Let $\cF_T$ denote the $\si$--algebra generated by
$\{\eta_\ell,\;\ell\notin T\}$. The conditional probability
$ \nu[\cdot\thsp | \thsp \cF_T]$ coincides with the uniform probability measure
on the sphere $S^2(t)$ in $\bbR^3$ with radius
$$
t = \sqrt{1-\sum_{\ell\notin T}\eta_\ell^2}\,.
$$
Moreover, as noted in \cite{CCL}, it is not hard to show that
the spectral gap of the Kac model
does not depend on the radius of the sphere on which the walk is performed.
Using (\ref{equi}) on each triangle $T$, we therefore have
\be
\frac13
\sum_{b,b'\in T} \nu[D_b f D_{b'} f\thsp | \thsp\cF_T] \geq \l(3)\sum_{b\in T}
\nu[(D_b f)^2\thsp | \thsp \cF_T]\,,
\la{trgap}
\end{equation}
uniformly in $\eta\in S^{N-1}$.
Taking $\nu$--expectation we can remove the conditioning on $\cF_T$ in
(\ref{trgap}).
Using this in (\ref{co9}) gives
\begin{align*}
\sum_{ b,b'\,:\; b\sim b'} & \nu[D_b f D_{b'}f]\\
& \geq 3\,\l(3)\,(N-2) \sum_{b} \nu[(D_b f)^2] - (N-3)\sum_{b} \nu[(D_b f)^2] \\
& = \left((3\l(3)-1)(N-2) + 1\right)
\sum_{b} \nu[(D_b f)^2] \,.
\end{align*}
From (\ref{equi}) we conclude that $\l(N)$ is greater than or equal to the right hand side of (\ref{bound3}).
It remains to show that this bound is attained for a given $f$.
To this end, take
\be
f_N(\eta)=\sum_{i=1}^N \eta_i^4 + {\rm const.}
\,
\la{eigen1}
\end{equation}
Let us first check that
$\nu[D_b f_N D_{b'} f_N] = 0$ whenever
$b\not\sim b'$, so that (\ref{argo1}) is an equality for $f=f_N$.
By self--adjointness, $\nu[D_b f_N D_{b'} f_N] = \nu[f_N D_b D_{b'}
f_N]$, and the latter vanishes: if $b=\{i,j\}$ and
$b'\not\sim b$, then $D_{b'}f_N(\eta)$ depends on $\eta_i,\eta_j$ only
through $\eta_i^2+\eta_j^2 = 1- \sum_{k\notin b} \eta_k^2$, so it is
$\cF_b$--measurable and therefore $D_bD_{b'}f_N=0$.
Next, we need that (\ref{trgap})
is an equality as well. This requires checking that for $N=3$,
for any value of the conservation law $\sum_{i=1}^3 \eta_i^2=t>0$,
$f_3$ is, up to additive constants (that may depend on $t$), an
eigenfunction of $-\cL$ with eigenvalue $\l(3)$.
For the solution of this $3$--dimensional problem, as well as for the
calculation of $\l(3)=5/12$, we refer to \cite[Section 3]{CCL}.
Since the estimates (\ref{argo1}) and (\ref{trgap})
are thus turned into identities, all our bounds are saturated by the function (\ref{eigen1}). This completes the proof.
\qed
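The saturation is also easy to test numerically. Averaging $\cos^4\theta$, $\sin^4\theta$ and $\cos^2\theta\sin^2\theta$ over $[0,2\pi)$ gives the exact identity $E_b(\eta_i^4+\eta_j^4)=\tfrac34(\eta_i^2+\eta_j^2)^2$; the following sketch (ours) uses it to check that $\cL f + \tfrac5{12}f$ equals the constant $\tfrac14$ at random points of $S^2$, for $f(\eta)=\sum_i\eta_i^4$.

```python
import math
import random

def L_f(eta):
    """The generator (genkac) applied to f(eta) = sum_i eta_i^4, via the
    exact angular average E_b(eta_i^4 + eta_j^4) = (3/4)(eta_i^2 + eta_j^2)^2."""
    N = len(eta)
    s = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            s += 0.75 * (eta[i] ** 2 + eta[j] ** 2) ** 2 \
                 - eta[i] ** 4 - eta[j] ** 4
    return s / N

rng = random.Random(0)
vals = []
for _ in range(10):
    v = [rng.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(x * x for x in v))
    eta = [x / norm for x in v]          # uniform point on S^2
    f = sum(x ** 4 for x in eta)
    vals.append(L_f(eta) + (5.0 / 12.0) * f)

# L f_3 = -(5/12) f_3 + const on S^2: the combination is constant (= 1/4)
assert max(vals) - min(vals) < 1e-12
assert abs(vals[0] - 0.25) < 1e-12
```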
\section{Homogeneous models}\la{hom}
The general setting can be described as follows. We consider a product space $\O=X^N$, where $X$, the single component space
is a measurable space equipped with a probability measure
$\mu$.
On $\O$ we consider the product measure $\mu^N$. Elements of
$\O$ will be denoted by $\eta=(\eta_1,\dots,\eta_N)$. Next, we take a measurable function $\xi:X\to\bbR^d$, for a given $d\geq 1$, and
we define the probability measure $\nu=\nu_{N,\o}$ on $\O$ as
$\mu^N$ conditioned on the event
\be \O_{N,\o}:=\left\{\eta\in\O:\;\sum_{i=1}^N\xi(\eta_i)=\o\right\}\,,
\la{cons}
\end{equation}
where $\o\in\bbR^d$ is a given parameter.
We interpret the constraint on $ \O_{N,\o}$
as a {\em conservation law}.
In all the examples considered below there are no difficulties in defining the conditional probability $\nu$,
therefore we do not attempt a justification of this setting
in full generality here, but rather refer
to the examples for full rigor. The crucial property of $\nu$ that will be repeatedly used below is that, for any set of indices $A$, conditioned on the $\si$--algebra $\cF_A$ generated by
variables $\eta_i$, $i\notin A$, $\nu$ becomes the $\mu$--product law over $\eta_j$, $j\in A$, conditioned on the event
$$
\sum_{j\in A}\xi(\eta_j)=\o - \sum_{i\notin A}\xi(\eta_i)\,.
$$
We shall call this the {\em non--interference} property.
In analogy with (\ref{gen}) we consider the binary collision
Markov process described by the infinitesimal generator
\be\la{genhom}
\cL f = \frac1N \sum_{b} \left[\nu(f\thsp | \thsp\cF_b) - f\right] \,,
\end{equation}
where
the sum runs over all $\binom{N}{2}$
unordered pairs $b=\{i,j\}$
and
$\nu[ f\thsp | \thsp\cF_b] $ is the $\nu$--conditional expectation of $f$ given
the variables $\eta_\ell,\; \ell\notin b$.
This defines a bounded self--adjoint operator on $L^2(\nu)$.
Setting, as before, $D_b =\nu[\cdot\thsp | \thsp \cF_b] - {\rm Id}$, we see that
$D_b^2=-D_b$ and the operator $\cL$ satisfies
(\ref{efg}).
In principle, our arguments could be extended to more general
processes. For instance, one can consider binary ``collisions'' which are not
given by simple averages as in (\ref{genhom}) but by other mechanisms
which still preserve reversibility (see e.g.\ the non uniform models
considered in \cite{CCL}), or one could look at
``collisions'' which can involve
more than two components at a time. However, we
shall not investigate such extensions.
Returning to our model (\ref{genhom}),
the spectral gap is defined as in (\ref{gapkac}).
Note that, by definition,
$\l(2)=\frac12$ always, since for $N=2$ there is only one pair $b$ and $\nu((D_b f)^2) = \nu(f^2)$
for any $f$ such that $\nu(f)=0$.
Here we shall assume that
$\l(3)$ {\em is independent of the choice of} $\o$.
This is a strong assumption which holds only for special choices of
the model. However, it does hold in the examples considered below.
More general models are treated in the next section.
\begin{Th}
\la{th2}
Suppose $\l(3)$ is independent of $\o$.
Then, for any $N\geq 2$:
\be
\la{bar3}
\l(N)\geq (3\,\l(3) - 1)\left(1-\frac2N\right) + \frac1{N}\,.
\end{equation}
If, in addition,
there exists $\varphi:X\to\bbR$ such that the function
\be\la{sumform}
f_3(\eta_1,\eta_2,\eta_3)=\sum_{i=1}^3 \varphi(\eta_i)\,,
\end{equation}
satisfies, for $N=3$, $\cL f_3 = -\l(3) f_3 + {\rm const.}$,
regardless of the value of
$\sum_{i=1}^3\xi(\eta_i)$ (although the constant may depend on this value),
then (\ref{bar3}) can be turned into an identity for each $N\geq 2$.
\end{Th}
\proof
We repeat the steps of the proof of Theorem \ref{th1}. We start from (\ref{ufg1}) and arrive at (\ref{argo1}) with the same commutation property used in (\ref{commu}). Indeed, this is a simple consequence of the non--interference property.
The latter property also implies that the conditional probability $\nu(\cdot\thsp | \thsp\cF_T)$ is nothing but $\nu_{3,\o'}$ with $\o'=\o-\sum_{\ell\notin T} \xi(\eta_\ell)$.
Since $\l(3)$ is independent of the value $\o$ in the conservation
law we may repeat the argument leading to (\ref{trgap}).
This proves the lower bound (\ref{bar3}).
As for the reverse direction, again the arguments given in the proof
of Theorem \ref{th1} can be repeated line by line. \qed
\bigskip
Next, we examine some examples to which the theorem applies.
\subsection{Kac model} The model discussed in the introduction can be
seen as a special case of our general setting, so that Theorem
\ref{th1} becomes a special case of Theorem \ref{th2}.
Here $X=\bbR$, $\mu$ is the centered Gaussian measure with variance $\si^2>0$ and we take
$\xi(\eta_i)=\eta_i^2$ (with $d=1$). Then, for every $\o>0$,
$\nu_{N,\o}$ is the uniform probability measure on the sphere of radius
$\sqrt \o$. Clearly, the choice of $\si^2>0$ is immaterial for the determination of $\nu_{N,\o}$.
As we have seen in the introduction,
this model satisfies the two main assumptions in Theorem \ref{th2}.
\subsection{``Flat'' Kac model}
Here $X=\bbR^+$, $\mu$ is the exponential law with parameter $\g>0$. We take $\xi(\eta_i)=\eta_i$ (with $d=1$). Then, for every $\o>0$,
independently of the choice of $\g>0$,
$\nu_{N,\o}$ is the uniform probability measure on the simplex
$\O_{N,\o}$. The binary collision process (\ref{genhom}) associated to this
setting does not appear explicitly in the literature, so we shall give
more details on the computation of $\l(N)$ in this case.
Let us first
check that $\l(N)$ is independent of $\o$, for any $N$.
This is a consequence of the fact that
$\cL$ commutes with the unitary change of scale from $\O_{N,\o}$ to
$\O_{N,\o'}$, for any $\o,\o'>0$. Indeed, $\nu_{N,\o'}$ is the image
of $\nu_{N,\o}$ under the map $\cT:\eta \to \o'\eta/\o$ and if
$f_\cT(\eta)=f(\cT\eta)$, then
\be\la{comma}
\nu_{N,\o}(f_\cT\thsp | \thsp\cF_b)(\eta) = \nu_{N,\o'}(f\thsp | \thsp\cF_b)(\cT\eta)\,,
\end{equation}
for all $\eta\in\O_{N,\o}$ and for all pairs $b$.
Next, we shall prove that $\l(3)=\frac49$ and that the eigenfunction
for $N=3$ is given by
\be
\la{sumflat}
f_3(\eta_1,\eta_2,\eta_3) = \sum_{i=1}^3 \eta_i^2 + {\rm const.}
\end{equation}
for any value of the constraint $\sum_{i=1}^3\eta_i=\o$
(of course, the constant will be given by $-3\nu_{3,\o}(\eta_1^2)$, since we must have $\nu_{3,\o}(f)=0$).
From these facts and Theorem \ref{th2} we therefore obtain, for all $N\geq 2$:
\be\la{gapflat}
\l(N) = \frac{N+1}{3N}\,.
\end{equation}
To solve the $3$--dimensional problem, we observe that for $N=3$ the operator $\cL+1$ coincides with the {\em average operator} $P$ introduced in \cite{CCL}. Therefore we can apply the general analysis of
Section 2 in \cite{CCL} or equivalently that of Theorem 4.1 in \cite{C2}. The outcome is that
\be
\l(3)\geq \frac13\,\min\{2+\mu_1\,,\,2-2\mu_2\}\,,
\la{3k}
\end{equation}
where the parameters $\mu_1,\mu_2$ are given by
\be\la{mus}
\mu_1=\inf_{\varphi}
\nu(\varphi(\eta_1)\varphi(\eta_2))
\,,\quad\;
\mu_2=\sup_{\varphi}
\nu(\varphi(\eta_1)\varphi(\eta_2))
\end{equation}
with $\varphi$ chosen among all functions $\varphi: X\to \bbR$ satisfying
$\nu( \varphi(\eta_1)^2)=1$ and $\nu(\varphi(\eta_1))=0$.
Here $\nu$ stands for $\nu_{3,\o}$, but we have removed the subscripts
for simplicity. As in (\ref{comma}) one checks that
the parameters $\mu_1,\mu_2$ do not depend on $\o$.
Write $\cK\varphi(\a) = \nu[\varphi(\eta_2)\thsp | \thsp\eta_1=\a]$,
$\a\geq 0$. This defines a self--adjoint Markov operator on
$L^2(\nu_1)$, where $\nu_1$ is the marginal on $\eta_1$ of
$\nu$.
In particular,
the spectrum ${\rm Sp}(\cK)$ of $\cK$
contains $1$ (with eigenspace given by the
constants). Then $\mu_1,\mu_2$ are, respectively, the smallest and the
largest value in ${\rm Sp}(\cK)\setminus\{1\}$, as we see by writing
$\nu(\varphi(\eta_1)\varphi(\eta_2)) =
\nu[\varphi(\eta_1)\cK\varphi(\eta_1)]$. This is now a
one--dimensional problem and $\mu_1,\mu_2$ can be computed as
follows. To fix ideas we use the value $\o=1$ for the conservation law
$\eta_1+\eta_2+\eta_3$. In this case $\nu_1$ is the law on
$[0,1]$ with density $2(1-\eta_1)$.
Moreover,
$$
\cK\varphi(\eta_1) = \frac1{1-\eta_1}\int_0^{1-\eta_1}\varphi(\eta_2)d\eta_2\,, \quad \eta_1\in[0,1)\,.
$$
($\cK\varphi(1)=\varphi(0)$).
In particular, $\varphi_1(\a)=\a - \frac13$ is an eigenfunction of
$\cK$ with eigenvalue $-1/2$.
Moreover, $\cK$ preserves the degree of polynomials, so that if $Q_n$
denotes the space of all polynomials of degree at most $n$, we have $\cK Q_n\subset
Q_n$. By induction we see that for each $n\geq 1$ the
polynomial
$\a^n + q_{n-1}(\a)$, for a suitable
$q_{n-1}\in Q_{n-1}$, is an eigenfunction with eigenvalue
$\mu_n=\frac{(-1)^{n}}{n+1}$, and it is orthogonal to
$Q_{n-1}$ in $L^2(\nu_1)$. Since the union of $Q_n$, $n\geq 1$, is dense
in $L^2(\nu_1)$
this shows that there is a complete orthonormal set of eigenfunctions
$\varphi_n$, where $\varphi_n$ is a polynomial of degree $n$ with
eigenvalue $\mu_n$ and ${\rm Sp}(\cK)=\{\mu_n\,,\;n=0,1,\dots\}$.
Therefore we can take
$\mu_1=-\frac12$ and
$\mu_2 = \frac13$ in the formula (\ref{3k}). We conclude
that $\l(3)\geq \frac49$.
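Both facts can be verified mechanically with exact rational arithmetic: representing a polynomial by its coefficient list and using $(\cK\, t^k)(\eta_1)=(1-\eta_1)^k/(k+1)$, the operator $\cK$ is triangular on monomials with diagonal entries $(-1)^k/(k+1)$. A short sketch (ours):

```python
from fractions import Fraction
from math import comb

def apply_K(poly):
    """poly[k] = coefficient of t^k; returns the coefficients of K poly,
    using (K t^k)(x) = (1 - x)^k / (k + 1) expanded in powers of x."""
    out = [Fraction(0)] * len(poly)
    for k, c in enumerate(poly):
        for j in range(k + 1):
            out[j] += c * Fraction((-1) ** j * comb(k, j), k + 1)
    return out

# phi_1(t) = t - 1/3 is an eigenfunction with eigenvalue -1/2
phi1 = [Fraction(-1, 3), Fraction(1)]
assert apply_K(phi1) == [Fraction(-1, 2) * c for c in phi1]

# K maps the monomial t^k to a degree-k polynomial with leading
# coefficient mu_k = (-1)^k/(k+1), so Sp(K) on Q_n is {mu_0, ..., mu_n}
for k in range(10):
    mono = [Fraction(0)] * k + [Fraction(1)]
    assert apply_K(mono)[k] == Fraction((-1) ** k, k + 1)
```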
To end the proof we take $f=\eta_1^2+\eta_2^2+\eta_3^2$
and, using $\nu[\eta_1^2\thsp | \thsp\eta_2] = \frac13(\eta_2^2 - 2\eta_2 + 1)$ we compute
$$
\cL f (\eta) = -\frac49 \,f(\eta) + {\rm const.}\,
$$
Thus, $\l(3)=\frac49$ and the eigenfunction is given by (\ref{sumflat}).
Clearly, the unitary change of scale $\cT$ introduced above does not alter the form of the eigenfunction so that all the hypothesis of Theorem \ref{th2} apply and (\ref{gapflat}) follows.
\subsection{Momentum and energy conserving collision model} Here
$X=\bbR^3$, and $\mu$ is a centered $3$--dimensional Gaussian law
$N(0,C)$ with covariance matrix $C$ given by a multiple of the
identity. Each coordinate $\eta_i$ is a
$3$--dimensional velocity vector $\eta^\a_i$, $\a=1,2,3$. We have $d=4$ conservation laws, with
$\xi^\a(\eta_i)=\eta^\a_i$, $\a=1,2,3$ (momentum conservation)
and $\xi^4(\eta_i) = |\eta_i|^2=\sum_{\a=1}^3(\eta^\a_i)^2$
(energy conservation).
For any $\o=(\o^1,\dots,\o^4)\in\bbR^4$ with $\o^4 > 0$, $\nu_{N,\o}$
is the uniform probability measure on the manifold $\O_{N,\o}$
(whenever $\O_{N,\o}\neq\emptyset$).
We refer to \cite{CCL} for an explicit description of the
probability measure and its main properties. It is still the case that $\l(3)$ is independent of the conservation law, see \cite{CCL}. In particular, the lower bound (\ref{bar3}) holds in this case. However, a computation of $\l(3)$ for this model shows that $\l(3)=\frac13$ (see (\ref{gapbol}) below) and therefore the estimate becomes $\l(N)\geq \frac1N$ which is rather poor.
Indeed, it was shown in \cite{CCL} that the
spectral gap is bounded away from zero uniformly in $N$:
\be\la{inf}
\inf_{N\geq 2} \l(N)>0\,.
\end{equation}
Recently, by a very deep analysis of the Jacobi polynomials naturally associated to this model, Carlen, Jeronimo and Loss \cite{CJL}
succeeded in computing $\l(N)$ exactly for every $N$:
\be
\la{gapbol}
\l(N)=\frac13\,,\quad\; N\geq 3\,.
\end{equation}
This shows that our approach is too rough here.
As we know from Theorem \ref{th2}, the loss must come from the failure of the second property required in that theorem. It was shown in \cite{CJL} that the eigenspace of $\l(N)$, for the choice $\o=(0,0,0,1)$, is spanned by
the functions
$$
f_{N,\a}(\eta)=\sum_{i=1}^N |\eta_i|^2\eta^\a_i\,,\quad\a=1,2,3\,.
$$
One cannot expect that the
change of scale from $\o$ to $\o'$
transforms a linear combination of $f_{N,\a}$'s into itself
(up to multiplicative and additive constants), and
the second property in Theorem \ref{th2} must fail here.
Let us show that our approach can nevertheless be used to prove the
weaker result (\ref{inf}) without any recursive analysis. Namely, we
prove that if $\l(4)>1/4$ then (\ref{inf}) holds.
The choice of triangles in the proof of Theorem \ref{th1} and Theorem \ref{th2} can be replaced by the choice of larger cliques
(i.e.\ complete subgraphs) of the original complete graph.
Namely, if in (\ref{tria}) we sum over cliques with $4$ vertices instead of triangles we shall obtain the bound, for $N\geq 4$:
\be\la{l4}
\l(N)\geq (4\l(4) - 1)\left(\frac12-\frac1N \right) + \frac1N\,,
\end{equation}
instead of (\ref{bar3}). To see this, set for simplicity
$a_{b,b'}:=\nu[D_bf D_{b'}f]$, for a given $f\in L^2(\nu)$,
and recall that $N^2\nu[(\cL f)^2] = \sum_{b,b'}a_{b,b'}$. Denote by $Q$ the cliques of $4$ vertices and note that: for every $b$ there are $\frac12(N-2)(N-3)$ $Q$'s such that $b\in Q$; for any $b\neq b'$ with $b\sim b'$ there are $(N-3)$ $Q$'s
such that $b,b'\in Q$; for any $b, b'$ with $b\not\sim b'$ there is
only one $Q$
such that $b,b'\in Q$. Then
\begin{align*}
\sum_{b,b'\,:\, b\neq b',\, b\sim b'} a_{b,b'} & = \frac1{N-3}\sum_{Q}
\sum_{b,b'\in Q\,:\, b\neq b',\, b\sim b'} a_{b,b'} \\ & =
\frac1{N-3}\sum_{Q}
\left\{\sum_{ b, b'\in Q} a_{b,b'} - \sum_{ b, b'\in Q\,:\; b\not\sim b'} a_{b,b'} - \sum_{ b\in Q} a_{b,b}\right\}\\ &
\geq \frac1{N-3}\sum_{Q}(4\l(4) - 1) \sum_{ b\in Q} a_{b,b}
- \frac1{N-3}\sum_{Q}\sum_{ b, b'\in Q\,:\; b\not\sim b'} a_{b,b'} \\&
= \frac12(N-2)(4\l(4) - 1) \sum_{ b} a_{b,b} - \frac1{N-3}\sum_{ b, b'\,:\; b\not\sim b'} a_{b,b'} \,.
\end{align*}
Therefore
\begin{align*}
\sum_{ b,b'} a_{b,b'}
&= \sum_{ b\neq b'\,:\; b\sim b'} a_{b,b'}
+ \sum_{ b, b'\,:\; b\not\sim b'} a_{b,b'} + \sum_{ b} a_{b,b}\\
& \geq \frac12 [(N-2)(4\l(4)-1) + 2] \sum_{ b} a_{b,b}
+ \left(1-\frac1{N-3}\right)
\sum_{ b, b'\,:\; b\not\sim b'} a_{b,b'}
\\ & \geq \frac12[(N-2)(4\l(4)-1) + 2] \sum_{ b} a_{b,b}\,,
\end{align*}
where, in the last line, we have used (\ref{commu}).
Since $-N\nu(f\cL f) = \sum_{ b} a_{b,b}$, this proves the claim (\ref{l4}). Note that this argument applies in the more general setting of Theorem \ref{th2}, and similar computations can in principle be carried out for any choice of cliques of $k<N$ vertices.
Therefore, to prove (\ref{inf})
it suffices to prove $\l(4)>\frac14$. With the value $\l(4)=\frac13$ from (\ref{gapbol}) this gives $\l(N)\geq \frac16 + \frac2{3N}$, for
all $N\geq 4$.
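The plug-in of $\l(4)=\frac13$ into (\ref{l4}) can be verified mechanically. The following snippet (our own check, not from the text; the function name is ours) uses exact rational arithmetic to confirm that the right-hand side of (\ref{l4}) with $\l(4)=\frac13$ equals $\frac16+\frac2{3N}$ for every $N$:

```python
from fractions import Fraction

def clique_bound(N, lam4=Fraction(1, 3)):
    # Right-hand side of the clique-of-4 bound:
    # (4*lam(4) - 1) * (1/2 - 1/N) + 1/N
    return (4 * lam4 - 1) * (Fraction(1, 2) - Fraction(1, N)) + Fraction(1, N)

# exact identity with the stated value lam(4) = 1/3
for N in range(4, 100):
    assert clique_bound(N) == Fraction(1, 6) + Fraction(2, 3 * N)
```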
\section{Non--homogeneous models}\la{nonhom}
We generalize the setting introduced above as follows. As before we
take $\O=X^N$ but now each copy of $X$ will be equipped with
possibly distinct probability measures $\mu_i$, $i=1,\dots,N$. Again
we consider conservation laws as in (\ref{cons}), associated to a
given function $\xi$ on $X$ and a given parameter $\o$, and the
probability $\nu$ given by the product $\mu_1\times\cdots\times\mu_N$ conditioned on
$\eta\in\O_{N,\o}$. Note that the non--interference property still
holds for this setting.
The binary collision process
is defined as in (\ref{genhom}). Again, $-\cL$ is a non--negative self--adjoint operator on $L^2(\nu)$ and we may define its spectral gap just as in (\ref{gapkac}). This time, however, to keep track of the conservation law we shall write $\l(N,\o)$ instead of just $\l(N)$. Note that $\l(2,\o)=\frac12$ always. We define
\be
\bar\l(N) = \inf_{\o} \l(N,\o)\,,
\la{infl}
\end{equation}
where the infimum ranges over the set of admissible values of the parameter
$\o$. This set depends on the choice of the model and, as usual, we
refer to the examples for fully rigorous formulations of the results.
As a convention we may set $\l(N,\o)=+\infty$ if $\o$ is such that the
measure $\nu$ becomes a Dirac delta.
Thanks to the non--interference property of $\nu$
there is no difficulty in repeating the
previous arguments to prove the following estimate.
\begin{Th}
\la{th3}
For any $N\geq 2$:

\be
\la{barbar3}
\bar\l(N)\geq (3\,\bar\l(3) - 1)\left(1-\frac2N\right) + \frac1{N}\,.
\end{equation}
\end{Th}
Let us investigate some specific models.
\subsection{Non--homogeneous Kac models}
Consider the following non--homogeneous version of the ``flat'' Kac model introduced in Section \ref{hom}. Take $X=\bbR_+$ and $\mu_i$ the probability on $\bbR_+$ with density
$$
\frac1{z_i}\exp{(-\eta_i + b_i(\eta_i))}\,,$$
where $b_i$ are bounded
measurable functions and $z_i=\int_0^\infty dx\exp{(-x + b_i(x))}$
is the normalizing constant.
We set $B:=\sup_i|b_i|_\infty$. For this model, any value $\o>0$ of
the conservation law is allowed in the definition (\ref{infl}) of
$\bar \l(N)$.
We claim that
for every $\e<\frac13$ there is $\d>0$ such that
\be\la{infnonh}
\liminf_{N\to\infty} \bar\l(N) \geq \e\,,
\end{equation} provided $B<\d$.
Thanks to Theorem \ref{th3}, to prove (\ref{infnonh})
it suffices to show that
$\l(3,\o)\geq \frac4{9}(1-\e(B))$ for some $\e(B)\to 0$
as $B\to 0$, uniformly in the value $\o>0$ of the conservation law $\sum_{i=1}^3\eta_i = \o$.
To this end we shall use a standard perturbation argument.
Let $\nu$ denote the measure $\mu_1\times \mu_2\times \mu_3(\cdot\thsp | \thsp \sum_{i=1}^3\eta_i = \o)$ and call $\nu_0$ the same measure in the case $b_1=b_2=b_3=0$. Thus, for any bounded measurable function $g$ we have
\be\la{nuf}
\nu(g)= \frac1{Z_\o}\,\int_0^\o d\eta_1\int_0^{\o-\eta_1}d\eta_2\,
g(\eta_1,\eta_2,\o-\eta_1-\eta_2) \,u(\eta_1,\eta_2)\,,
\end{equation}
where $u(x,y)=\nep{-b_1(x)-b_2(y) - b_3(\o-x-y)}$ and $Z_\o=\int_0^\o dx\int_0^{\o-x}dy\,
u(x,y)$. Using
$$\nep{-3B}\leq u(\eta_1,\eta_2)\leq \nep{3B}\,,$$
it is easily seen that, for any bounded $f$ we have the bound between variances
\be\la{flat1}
\var_{\nu}(f)\leq \nep{6B}\var_{\nu_0}(f)
\,.
\end{equation}
(Use $g= (f-\nu_0(f))^2$ in (\ref{nuf}) and the fact that
$\nu(g)\geq \var_{\nu}(f)$).
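The comparison (\ref{flat1}) can be illustrated numerically on a discretized simplex. In the sketch below (our own illustration, not from the text) the perturbations $b_i$ with $\sup_i|b_i|_\infty\leq B$ and the test function $f$ are arbitrary choices; the inequality holds for any such choice, since the argument only uses the pointwise bounds on $u$:

```python
import numpy as np

B, omega, n = 0.2, 1.0, 60
xs = np.linspace(0.0, omega, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
mask = (X + Y) <= omega        # discretized simplex {x, y >= 0, x + y <= omega}

# arbitrary bounded perturbations with sup-norm at most B
b1 = lambda t: B * np.sin(5 * t)
b2 = lambda t: B * np.cos(3 * t)
b3 = lambda t: -B * np.tanh(t)

w = np.exp(-(b1(X) + b2(Y) + b3(omega - X - Y))) * mask   # perturbed weights u
w0 = 1.0 * mask                                           # flat weights (b_i = 0)

def variance(weights, f):
    prob = weights / weights.sum()
    mean = (prob * f).sum()
    return (prob * (f - mean) ** 2).sum()

f = np.cos(4 * X) + X * Y       # arbitrary test function
v, v0 = variance(w, f), variance(w0, f)
assert v <= np.exp(6 * B) * v0  # Var_nu(f) <= e^{6B} Var_{nu_0}(f), cf. (flat1)
assert v >= np.exp(-6 * B) * v0 # the symmetric bound, by exchanging the roles
```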
The same reasoning shows that
\be\la{flat2}
\nu(f(-\cL) f)\geq \nep{-10B}\,\nu_0(f(-\cL_0) f)\,,
\end{equation}
where $\cL_0$
is the generator corresponding to the choice $b_i\equiv 0$.
Indeed,
\begin{align*}
\nu(f(-\cL) f) &= \frac13 \sum_{i=1}^3\nu\left[(\nu[f\thsp | \thsp\eta_i] -
f)^2\right]\\
&=
\frac13 \sum_{i=1}^3\nu\left(\var_{\nu}(f\thsp | \thsp\eta_i)\right)\,,
\end{align*}
(where $\var_{\nu}(f\thsp | \thsp\eta_i)$ denotes the variance of $f$ w.r.t.\
$\nu(\cdot\thsp | \thsp\eta_i)$). For each $i=1,2,3$ we have (as above)
\be\la{flat3}
\var_{\nu}(f\thsp | \thsp\eta_i)\geq \nep{-4B} \var_{\nu_0}(f\thsp | \thsp\eta_i)\,,
\end{equation}
uniformly in $\eta$.
A further comparison gives
$\nu\left(\var_{\nu}(f\thsp | \thsp\eta_i)\right)\geq \nep{-10B}
\nu_0\left(\var_{\nu_0}(f\thsp | \thsp\eta_i)\right)$ which implies (\ref{flat2}).
Recall that the spectral gap for $\cL_0$ is equal to $4/9$ regardless of
the value of $\o>0$. The previous estimates therefore imply that
$$
\l(3,\o)\geq \frac49\,\nep{-16 B}\,.
$$
This proves our claim with $\e(B) = 1-\nep{-16 B}$, from which (\ref{infnonh}) follows.
\smallskip
The same argument can be used to produce uniform lower bounds on the gap of non--homogeneous versions of the Kac walk on $S^{N-1}$
and of the momentum and energy conserving collision model, under the assumption of small perturbations. In the latter model we need the argument in (\ref{l4}) to obtain a uniform estimate $\inf_N\bar\l(N)>0$.
\subsection{Non--uniform random permutations}
We take $X=\{1,\dots,N\}$ and, for each $i=1,\dots,N$ we consider a
probability $\mu_i$ on $X$ given by
$\mu_i(\eta_i=j)=\frac{\nep{-b_i(j)}}{Z_i}$, where $b_i:X\to\bbR$ are
bounded functions, and $Z_i=\sum_{j=1}^N\nep{-b_i(j)}$. To model permutations we use $N$ conservation
laws that will force all components of $\eta$ to have distinct
values: set $d=N$ and define $\xi=(\xi^j)_{j=1}^N$ with
$\xi^j(\eta_i) = 1_{\{\eta_i=j\}}$. Fixing $\o=(1,1,\dots,1)$ we see
that the set $\O_{N,\o}$ coincides with the set of $N!$ permutations of
$N$ letters.
We define the probability $\nu$ as usual by
$\mu_1\times\cdots\times\mu_N(\cdot\thsp | \thsp\eta\in\O_{N,\o})$. Note that if
the bias functions $b_i$ are all $0$ then $\nu$ is simply the uniform
probability measure over permutations.
The binary collision is now a {\em random transposition} process.
Note that only the value $\o=(1,1,\dots,1)$ is considered for the
conservation law so that $\bar\l(N) = \l(N,\o)$ for this model (there
is no real infimum in (\ref{infl}) here).
In the uniform case ($b_i\equiv 0$) a simple variant of the
argument of Theorem \ref{th2} proves that $\l(N)=\frac12$ for
every $N\geq 2$.
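In the uniform case the value $\l(N)=\frac12$ can also be confirmed by direct diagonalization for small $N$. The sketch below (our own illustration, not from the text) builds $-\cL$ on the set of permutations, using that for the uniform measure $\nu[f\thsp | \thsp\cF_b]$ averages $f(\eta)$ and $f(\eta^b)$ with equal weights, so that $-\cL = \frac1{2N}\sum_b(\mathrm{Id}-S_b)$ with $S_b$ the swap operator:

```python
import itertools
import numpy as np

def transposition_gap(N):
    """Spectral gap of -L for the uniform random transposition process,
    L f = (1/N) sum_b (nu[f|F_b] - f), where nu[f|F_b] averages f(eta)
    and f(eta^b) with weight 1/2 each (uniform case b_i = 0)."""
    states = list(itertools.permutations(range(N)))
    index = {s: k for k, s in enumerate(states)}
    n = len(states)
    minusL = np.zeros((n, n))
    for k, s in enumerate(states):
        for i, j in itertools.combinations(range(N), 2):
            t = list(s)
            t[i], t[j] = t[j], t[i]
            # contribution (1/(2N)) (f(eta) - f(eta^b))
            minusL[k, k] += 1.0 / (2 * N)
            minusL[k, index[tuple(t)]] -= 1.0 / (2 * N)
    eigs = np.sort(np.linalg.eigvalsh(minusL))
    return eigs[1]  # smallest nonzero eigenvalue

assert abs(transposition_gap(3) - 0.5) < 1e-10
assert abs(transposition_gap(4) - 0.5) < 1e-10
```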
While relaxation to equilibrium for the uniform
case is well known, the non--uniform case is
certainly less understood. Here we can show that if
$B=\sup_i|b_i|_\infty$ is sufficiently small we have
$\inf_N\bar \l(N) > 0$. More precisely, for every $\e<\frac12$ there exists
$\d>0$ such that $B<\d$ implies $$\liminf_{N\to\infty}\bar\l(N)\geq \e\,,$$ for all $N\geq
2$. To prove this we use exactly the same argument we have used to prove
(\ref{infnonh}). In particular, it suffices to show that
$\bar\l(3)\to\frac12$ as $B\to 0$.
The same perturbation argument will yield the desired estimate. To avoid
repetitions we leave the details to the reader.
\subsection{Exclusion processes with site disorder}
Here we consider a non--homogeneous version of what is sometimes called the Bernoulli--Laplace process.
The inhomogeneous
distribution models site impurities or site disorder. We take $X=\{0,1\}$ and let $\mu_i$ be the Bernoulli law with parameter $p_i\in(0,1)$, i.e.\ $\mu_i (\eta_i=1) = p_i$
and $\mu_i (\eta_i=0) = 1-p_i$. The value of $\eta_i$ is interpreted as the presence ($\eta_i=1$) or absence ($\eta_i=0$) of a particle at the vertex $i$. The function $\xi$ is given by $\xi(\eta_i)=\eta_i$ so that
for any integer $\o\in\{0,1,\dots,N\}$, the set $\O_{N,\o}$ denotes the configurations of $\o$ particles over $N$ vertices.
The binary collision process (\ref{genhom}) becomes nothing but the
well known exclusion process on the complete graph $\{1,\dots,N\}$. This can be seen as follows.
Given a pair $b=\{i,j\}$ and a configuration $\eta\in\O_{N,\o}$, write
$\o_{ij}=\o-\sum_{\ell\notin b}\eta_\ell$. Clearly, $\o_{ij}\in\{0,1,2\}$.
Observe that, if $\o_{ij}=1$ then $\nu(f\thsp | \thsp\cF_b)(\eta) $ is given by
$
\frac{p_i(1-p_j)}{p_i(1-p_j)+p_j(1-p_i)} f(\eta;1,0)
+ \frac{p_j(1-p_i)}{p_i(1-p_j)+p_j(1-p_i)} f(\eta;0,1)
$
where, for simplicity we write explicitly the $i$-th and $j$-th entries in
$f(\eta)=f(\eta;\eta_i,\eta_j)$. On the other hand in the case
$\o_{ij}\in\{0,2\}$ we have $ \nu(f\thsp | \thsp\cF_b)(\eta) = f(\eta)$.
Setting
\be\la{ratesoo}
c_{b}(\eta) = \frac{p_i(1-p_j)\eta_j(1-\eta_i)}{p_i(1-p_j)+p_j(1-p_i)}
+ \frac{p_j(1-p_i)\eta_i(1-\eta_j)}{p_i(1-p_j)+p_j(1-p_i)}\,,
\end{equation}
we therefore obtain, for any $ \eta\in\O_{N,\o}$,
\be\la{rateso}
\nu(f\thsp | \thsp\cF_b)(\eta) - f(\eta)= c_b(\eta)\left(f(\eta^b) - f(\eta)\right)\,,
\end{equation}
where $\eta^b$ denotes the configuration
in which $\eta_i$ and $\eta_j$ have been exchanged.
From (\ref{rateso}) we see that $\cL$ has the familiar form of the
exclusion process.
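The rates (\ref{ratesoo}) are reversible w.r.t.\ $\nu$ (this is stated below as (\ref{detba}) for the colored generalization). The detailed balance relation $\nu(\eta)c_b(\eta)=\nu(\eta^b)c_b(\eta^b)$ can be checked on a small example; in the sketch below (our own illustration) the parameters $p_i$ are arbitrary values in $(0,1)$, and the weight $\nu$ is left unnormalized since normalization cancels:

```python
import itertools
import numpy as np

p = np.array([0.2, 0.35, 0.6, 0.8])   # arbitrary site-dependent parameters
N, omega = len(p), 2
states = [s for s in itertools.product((0, 1), repeat=N) if sum(s) == omega]

def nu(eta):
    # unnormalized product Bernoulli weight, restricted to omega particles
    return np.prod(np.where(np.array(eta) == 1, p, 1 - p))

def rate(eta, i, j):
    # c_b(eta) from (ratesoo)
    D = p[i] * (1 - p[j]) + p[j] * (1 - p[i])
    return (p[i] * (1 - p[j]) * eta[j] * (1 - eta[i])
            + p[j] * (1 - p[i]) * eta[i] * (1 - eta[j])) / D

# detailed balance: nu(eta) c_b(eta) = nu(eta^b) c_b(eta^b) for all eta, b
for eta in states:
    for i, j in itertools.combinations(range(N), 2):
        swapped = list(eta)
        swapped[i], swapped[j] = swapped[j], swapped[i]
        lhs = nu(eta) * rate(eta, i, j)
        rhs = nu(tuple(swapped)) * rate(tuple(swapped), i, j)
        assert abs(lhs - rhs) < 1e-12
```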
If we proceed by perturbative arguments (as in the previous
two subsections) we would be able to prove a
result of the form: if $p_i$ are (uniformly) sufficiently close to
$\frac12$
then we have a uniform bound from below on the spectral gap.
However, we shall prove here a much stronger result.
We assume there exists $\e>0$ such that the parameters $p_i$ satisfy
\be
\la{pi}
\e\leq p_i\leq 1-\e\,,\quad i=1,\dots,N\,.
\end{equation}
The minimal spectral gap $\bar\l(N)$ is defined as usual by
(\ref{infl}), where the infimum ranges over all
$\o\in\{0,1,\dots,N\}$, with the convention that
$\l(N,0)=\l(N,N)=\infty$.
Under the same assumption the following theorem was proved in
\cite{C1} by means of rather technical local limit theorem
estimates. It is surprising that the simple argument of Theorem
\ref{th3} allows a straightforward proof. The uniform spectral gap
bound below is an important step in the recent works \cite{FM,Q} establishing hydrodynamic limits for exclusion processes with disorder.
\begin{Th}\la{BEgg}
Assume (\ref{pi}) for some $\e>0$. Then, there exists $c_\e>0$
such that for all $N\geq 2$
\be\la{exc1}
\bar\l(N)\geq c_\e\,.
\end{equation}
\end{Th}
\proof
Thanks to Theorem \ref{th3}, all we have to do is prove that
\begin{equation}\la{exc2}
\bar\l(3)\geq \frac13+ c_\e\,,
\end{equation}
for some $c_\e>0$.
We fix three vertices $i=1,2,3$, with their occupation probabilities $p_i$
satisfying (\ref{pi}) and with $\sum_{i=1}^3\eta_i = \o$.
We may assume that $\o=1$, i.e.\
there is one particle. Indeed if
there are two we may look at occupied vertices as empty and
vice-versa, if there is none (or three) the measure is a Dirac delta
and
by our convention (see discussion after (\ref{infl}))
the estimate $\l(3,\o)\geq \frac13+ c_\e$ becomes obvious.
Since there is one particle we shall call $x$, respectively $y$,
the probability that the particle is at $i=1$, respectively $i=2$.
We set $z=1-x-y$ for the probability that the particle is at $i=3$.
Note that, thanks to (\ref{pi}), $x,y,z$ are all bounded away from $0$ and $1$. For instance,
$$
x= \frac{p_1(1-p_2)(1-p_3)}{p_1(1-p_2)(1-p_3)+(1-p_1)p_2(1-p_3)+(1-p_1)(1-p_2)p_3}\,.
$$
It is easily checked that the process generated by $\cL$ on our three sites becomes
a $3$--state Markov chain with the $3\times 3$ transition
matrix $P=\cL+{\rm Id}$ given by
\be
P = \frac13 \,\left( \begin{array}{ccc}
1+
\frac{x}{x+y}
+
\frac{x}{x+z}
& \frac{y}{x+y} & \frac{z}{x+z} \\
\frac{x}{x+y} & 1+
\frac{y}{x+y}
+
\frac{y}{y+z} &
\frac{z}{y+z} \\
\frac{x}{x+z} & \frac{y}{y+z} & 1+
\frac{z}{x+z}
+
\frac{z}{y+z}
\end{array} \right)\,.
\la{P}
\end{equation}
We need to estimate the eigenvalues of $P$. Clearly, one
eigenvalue is $1$. From (\ref{P}) we see that ${\rm Tr}(P)=2$.
Therefore the other two
eigenvalues must satisfy $\l_1={\rm Tr}(P)-1-\l_2=1-\l_2$.
Note that $$\l(3,\o)=\min\{1-\l_1,1-\l_2\}\,.$$
To estimate $\l_i, i=1,2$ we compute the determinant of $P$.
The next lemma shows that $\det(P)=\frac29(1+\kappa)$, where
$\kappa = \frac{xyz}{(1-x)(1-y)(1-z)}$. Since, by (\ref{pi}), the probabilities
$x,y,z$ are bounded away from $0$ and $1$, we have $\kappa\geq\kappa_\e$ for some
constant $\kappa_\e>0$. Therefore, for both $i=1,2$, $\l_i(1-\l_i)=\det(P)\geq \frac29(1+\kappa_\e)$,
which implies $|\l_i - \frac12| \leq \frac16 - c_\e$ for a suitable $c_\e>0$.
In particular, $\l_i \leq \frac23 - c_\e$, and the claim (\ref{exc2}) follows. (Note that $\l_i=\frac12$ would be the value in the homogeneous case $p_i\equiv\frac12$.)
\begin{Le}
\la{detp}
\be
\det(P) = \frac29\left( 1 + \frac{xyz}{(1-x)(1-y)(1-z)}\right)
\la{detp1}
\end{equation}
\end{Le}
\proof
Set $\G=3 P$. Also, set $$
\d = \frac{x}{x+z}\,,\quad
\b = \frac{x}{x+y}\,,\quad
\g = \frac{y}{y+z}\,.
$$
From (\ref{P}) we compute
\begin{align*}
\det(\G)& = \left(1+ \b + \d\right)\left(6 -
3\,\b - 2\,\d +
\b\,\,\d
- \g\,\,\d + \g\,\,\b
\right) \\
&+ \left(\b -1 \right)
\left(3\,\b - \b\,\,\d
- \g\,\,\b - \d + \g\,\,\d
\right)\\
&+ \left(1-\d \right)
\left(\b\,\,\d
- \g\,\,\d +
\g\,\,\b - 2\,\d \right)\,.
\end{align*}
Simplifying we arrive at
$$
\det(\G)= 6 + 3\,\d - 3\,\b\,\d - 3\,\g\,\d + 3\,\b\,\g
\,.
$$
Rewriting this in terms of the probabilities $x,y,z$ we
obtain
\be
\det(\G) = 6 + \frac{6\,x\,y\,z}{(1-x)\,(1-y)\,(1-z)}\,.
\la{detp3}
\end{equation}
This implies (\ref{detp1}). \qed
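Both the trace identity and Lemma \ref{detp} lend themselves to a direct numerical check. The following sketch (our own illustration, with arbitrary parameters $p_i$) builds $P$, verifies ${\rm Tr}(P)=2$ and the determinant formula (\ref{detp1}), and confirms $|\l_i-\frac12|<\frac16$ for the two nontrivial eigenvalues:

```python
import numpy as np

def occupancy_probs(p1, p2, p3):
    # x, y, z: probabilities that the single particle sits at site 1, 2, 3
    w = np.array([p1 * (1 - p2) * (1 - p3),
                  (1 - p1) * p2 * (1 - p3),
                  (1 - p1) * (1 - p2) * p3])
    return w / w.sum()

def P_matrix(x, y, z):
    return (1.0 / 3) * np.array([
        [1 + x/(x+y) + x/(x+z), y/(x+y),               z/(x+z)],
        [x/(x+y),               1 + y/(x+y) + y/(y+z), z/(y+z)],
        [x/(x+z),               y/(y+z),               1 + z/(x+z) + z/(y+z)],
    ])

x, y, z = occupancy_probs(0.2, 0.5, 0.7)   # arbitrary p_i in (eps, 1 - eps)
P = P_matrix(x, y, z)

assert abs(np.trace(P) - 2.0) < 1e-12                 # Tr(P) = 2
lemma = (2.0 / 9) * (1 + x*y*z / ((1-x)*(1-y)*(1-z)))
assert abs(np.linalg.det(P) - lemma) < 1e-12          # formula (detp1)
eigs = np.sort(np.linalg.eigvals(P).real)
assert abs(eigs[-1] - 1.0) < 1e-9                     # top eigenvalue 1
assert all(abs(l - 0.5) < 1.0/6 for l in eigs[:2])    # |lambda_i - 1/2| < 1/6
```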
\subsection{Colored exclusion processes with site disorder}
A natural generalization of the previous model is a system
where particles can be of several different kinds, or colors.
Namely, suppose there are $m$ colors and let each
particle be painted with one of the available colors.
Configurations of colored particles are denoted by
$\eta\in\O:=\{0,1,\dots,m\}^N$ with the
interpretation that $\eta_i=0$ means that $i$ is empty, while
$\eta_i=k$, $k\in\{1,\dots,m\}$ means that $i$ is occupied by a
particle with color $k$. Thus, the single state space $X$ is
$\{0,\dots,m \}$. The conservation laws are given by
$\xi^k(\eta_i)=1_{\{\eta_i = k\}}$,
$k=1,\dots,m$ and the vector $\o=(\o_1,\dots,\o_m)$, where
$\o_i$ are non--negative integers such that $\sum_{k=1}^m\o_k \leq N$. Thus, the set $\O_{N,\o}$ denotes the configurations of particles over $N$ vertices,
with $\o_k$ particles with color $k$.
We shall use the notation
$\psi_i=1_{\{\eta_i\geq 1\}}$ so that the variables
$\psi\in\{0,1\}^N$ denote the configuration of occupied sites.
Let $p_i\,,\;i=1,\dots N$ be given parameters satisfying (\ref{pi}).
Let $\mu_i$ denote the probability on $X$ such that
$$
\mu_i(\eta_i=k)=\frac1{Z_i}\left(p_i1_{\{k \geq 1\}} + (1-p_i)1_{\{k = 0\}}\right)\,,
$$
where $Z_i=(m-1)p_i +1$ is the normalization.
In particular, w.r.t.\ $\mu_i$ the occupation variable $\psi_i$ is a Bernoulli random variable with parameter $mp_i/Z_i$. We call, as usual, $\nu$ the product
$\mu_1\times\cdots\times\mu_N$ conditioned on $\O_{N,\o}$. To avoid
degenerate cases we take $1\leq \sum_{k=1}^m\o_k \leq N-1$.
We are interested in the following exclusion--type dynamics.
For any configuration $\eta$ and any edge $b=\{i,j\}$ we write
$\eta^b$
for the configuration where the variables $\eta_i,\eta_j$ have been exchanged.
For every $\g\geq 0$ we define the Markov generator by
\be
\cL_\g f(\eta) = \frac1{N}\sum_{b}c^\g_b(\eta)
\left(f(\eta^b)-f(\eta)\right)\,,
\la{genera}
\end{equation}
where the rates are given, for $b=\{i,j\}$, by
\be
c^\g_{b}(\eta) = c_b(\psi) + \frac\g{2}\,1_{\{\psi_i=\psi_j\}}
\la{rates}
\end{equation}
where $c_b$ are the functions defined in (\ref{ratesoo}), but now evaluated at the occupation variables $\psi=\psi(\eta)$ defined above.
It is easily checked that the rates (\ref{rates}) are reversible w.r.t.\
$\nu$: for any $\eta$ and any $b$
\be
\nu(\eta)c^\g_b(\eta) = \nu(\eta^b)c^\g_b(\eta^b)\,.
\la{detba}
\end{equation}
The latter statement is equivalent to self--adjointness of $\cL_\g$
in $L^2(\nu)$. Moreover,
\be\la{diri}
-\nu(f\cL_\g f) = \frac12\sum_{b}\nu\left[c^\g_b\,(f^b-f)^2\right]\,,
\end{equation}
where $f^b(\eta):=f(\eta^b)$.
If $1\leq \sum_{k=1}^m\o_k \leq N-1$ the Markov chain
generated by $\cL_\g$ is irreducible. We shall consider the cases
$\g=0$ and $\g=1$.
The difference between $\cL_0$ and $\cL_1$ is that in $\cL_1$ we have
added the possibility of ``stirring'' between particles, i.e.\
exchange of positions of particles of different colors. The case
$\g=1$ coincides then with the usual binary collision dynamics given
by local averages (\ref{genhom}).
On the other hand, the case
$\g=0$ is a true {\em exclusion} process, with particles jumping only
to empty sites. The addition of stirring can result in a faster
relaxation to the equilibrium distribution $\nu$, or equivalently in a
larger spectral gap. Note, however, that for the case $m=1$ there is
no difference: in this case $\cL_\g=\cL_0$ for all $\g$. The case
$m=1$, of course, is the one analyzed in
Theorem \ref{BEgg}. From now on we take $m>1$.
We denote by $\l_\g(N,\o)$ the spectral gap
of the generator $\cL_\g$. The following theorem shows that
in the case $\g=1$ we have a uniform lower bound $\l_1(N,\o)\geq c_\e$ as in Theorem \ref{BEgg} and, in the case $\g=0$ we
have $ \l_0(N,\o) \geq c_\e(1-\r)$, where $\r$ is the global density:
\be
\r=\frac1N\sum_{k=1}^m \o_k\,.
\la{rho}
\end{equation}
The slow--down in the limit $\r\to 1$ is natural in view of the
absence of stirring. Moreover, we shall show that, up to a constant
the reverse inequality $ \l_0(N,\o) \leq C(1-\r)$ holds as well
for some choices of $\o$, see
the remark after the end of the proof. Similar results had been
obtained in \cite{Q1} in the homogeneous case $p_i\equiv\frac12$.
\begin{Th}
\la{main}
Assume (\ref{pi}) for some $\e>0$. For $m>1$, $\g=1$,
there exists $c_\e>0$ such that for any $N\geq 2$ and any $\o$ such that $0< \r <1$:
\be
\l_1(N,\o)\geq c_\e\,.
\la{gap2}
\end{equation}
For $m>1$, $\g=0$, there exists $c_\e>0$ such that for any $N\geq 2$ and any $\o$ with $0<\r<1$:
\be
\l_0(N,\o)\geq c_\e\,(1-\r) \,.
\la{gap3}
\end{equation}
\end{Th}
\proof
We start with some preliminary facts.
Consider functions $f$ of the occupation variables $\psi=\{\psi_i\}$ only.
Let $\cH_0$
denote the space of all such functions and observe that
$\cL_\g\cH_0\subset\cH_0$, i.e.\ $\cH_0$ is invariant, for both
$\g=0,1$. This follows from
the fact that the only dependence of the rates on the configuration
$\eta$ is through the variables $\{\psi_i\}$, see (\ref{rates}). In particular, under the
generator
$\cL_\g$, the variables $\{\psi_i\}$ evolve as the Markov chain
of the case $m=1$. Therefore, the spectral gap estimate of Theorem
\ref{BEgg} applies to functions in $\cH_0$, for both $\g=0,1$.
For any $f\in L^2(\nu)$ we may write $f=f_0+f_0^\perp$, where $f_0=\nu(f\thsp | \thsp\cF_\psi)\in\cH_0$
and $f_0^\perp=f-f_0\in\cH_0^\perp$. Here $\cF_\psi$ denotes the
$\si$--algebra generated by the functions $\psi_i$ and
$\cH_0^\perp$ is the orthogonal
complement of $\cH_0$, i.e.\ the space of $f$ such that
$\nu(f g)=0$
for all $g\in\cH_0$.
Since $\cL_\g$ leaves $\cH_0$ invariant, by self--adjointness it follows
that
$\cL_\g\cH_0^\perp\subset\cH_0^\perp$. In conclusion
$$
\nu(f\cL_\g f) = \nu(f_0\cL_\g f_0) +\nu(f_0^\perp\cL_\g f_0^\perp) \,.
$$
Moreover, $\var_\nu(f) = \var_\nu(f_0) + \var_\nu(f_0^\perp)$.
From these simple observations it is clear that
the constant $\l_\g(N,\o)$
satisfies
\be
\l_\g(N,\o)\geq \min\{\l^0_\g(N,\o),\l_{\g}^\perp(N,\o)\}\,,
\la{gammas}
\end{equation}
where
$\l_\g^0(N,\o),\l_\g^\perp(N,\o)$ are the
constants obtained in the variational principle
(\ref{gapkac}) -- applied to $\cL_\g$ -- by restricting
to $f\in\cH_0$ and $f\in\cH_0^\perp$ respectively.
From Theorem \ref{BEgg} we know that
\be\la{goodda}
\l^0_\g(N,\o)\geq c_\e\,,
\end{equation}
for some $c_\e$, for both $\g= 0,1$.
To prove Theorem \ref{main} we thus have to
estimate from below the constant $\l_\g^\perp(N,\o)$.
\subsubsection{The case $\g=1$}
If $f\in\cH_0^\perp$, then
we must have $\nu(f\thsp | \thsp\cF_\psi)=0$.
Therefore
\be
\var_\nu(f)=\nu[\var(f\thsp | \thsp \cF_\psi)] + \var_\nu\left(\nu(f\thsp | \thsp \cF_\psi)\right) = \nu[\var(f\thsp | \thsp \cF_\psi)] \,.
\la{varxi}
\end{equation}
Here $\var(\cdot\thsp | \thsp \cF_\psi)$ denotes variance w.r.t.\
$\nu(\cdot\thsp | \thsp\cF_\psi)$,
the conditional probability given the
color--blind configuration $\psi$, and we have used the standard decomposition of variance under conditioning. Let
$S=S(\eta):=\{i\in\{1,\dots,N\}:\;\,\psi_i=1\}$
denote the set of occupied sites. Clearly $|S|=\r N$, the total number of particles.
Observe that once $\psi$ is known then the distribution
of the variables $\eta$ is given by $\eta_i=0$
for $i\notin S$ (deterministically)
and $\eta_i\in\{1,\dots,m\}$ on $S$ uniformly with
the constraint that $\sum_{i\in S}1_{\{\eta_i=k\}} = \o_k$,
$k\in \{1,\dots,m\}$.
In particular there is no inhomogeneity once the set $S$ (or,
equivalently the configuration $\psi$) is
given.
Therefore $\var(f\thsp | \thsp\cF_\psi )$ can be estimated for every
$\eta$
with the well known
bound
for random transpositions without disorder (see e.g.\ \cite{C1}):
\be
\var(f\thsp | \thsp \cF_\psi)\leq
\frac1{4\,|S|}\sum_{i,j\in S}
\nu\left[(\grad_{ij}f)^2\thsp | \thsp \cF_\psi\right]
\,.
\la{varbo}
\end{equation}
Here we use the notation $\grad_{ij}f(\eta) = f(\eta^{b})-f(\eta)$, $b=\{i,j\}$,
for the exchange gradient.
Averaging with $\nu$ and using (\ref{varxi}) we obtain
(since $|S|=\r\, N$, deterministically)
\be
\var_\nu(f)\leq \frac1{4\,\r\,N}\,\,\,\nu\left(
\sum_{i,j\in S}(\grad_{ij}f)^2\right)\,.
\la{varbo2}
\end{equation}
Suppose first $\r\geq \frac12$.
Then (\ref{varbo2}), (\ref{diri}) and a uniform
lower bound on the rates $c_{b}$ imply
$$
\var_\nu(f)\leq \frac1{2\,N}\,\,
\sum_{i,j}\nu\left[(\grad_{ij}f)^2\right]
\leq C_\e
\,\nu(f(-\cL_1)f)\,,
$$
for some constant $C_\e<\infty$, with the sum now extending to all pairs $i,j$.
This shows that, with $c_\e=1/C_\e$:
\be
\l_1^\perp(N,\o) \geq c_\e\
,\quad\;\r\geq \frac12\,.
\la{lperp}
\end{equation}
Note that here we have used
$\g=1$ (if $\g=0$ there is no uniform lower bound on the rates $c_b$).
\smallskip
We turn to the proof in the case $\r\leq \frac12$.
We rewrite (\ref{varbo2}) as
\begin{align}
\var_\nu(f) &\leq \frac1{4\,\r\,N}\sum_{i,j}
\nu\left[(\grad_{ij}f)^2
\,
1_{\{i\in S\}}1_{\{j\in S\}}
\right]\nonumber\\
& = \frac1{4\,\r(1-\r)N^2}
\sum_{i,j,\ell}
\nu\left[(\grad_{ij}f)^2
\,
1_{\{i\in S\}}1_{\{j\in S\}}1_{\{\ell\notin S\}}
\right]
\,,\la{os}
\end{align}
where we use the identity $(1-\r)N = N - \sum_{k=1}^m\o_k =
\sum_{\ell}1_{\{\ell\notin S\}}$ for the number of empty sites.
Let $\eta\in\O_{N,\o}$ be fixed.
Pick $i,j\in S(\eta)$ and write
$\eta^{i,j}$ for the exchanged
configuration.
Observe that for any $\ell\notin S(\eta)$ we
can write
$$
\eta^{i,j} = \left(\left(\eta^{i,\ell}\right)^{i,j}\right)^{j,\ell}\,.
$$
Therefore \begin{align*}
\grad_{ij}f(\eta) &= [f(\eta^{i,j})-f(\eta)]\\
&= \grad_{j\ell}f\left(\left(\eta^{i,\ell}\right)^{i,j}\right)
+ \grad_{ij}f\left(\eta^{i,\ell}\right)
+\grad_{i\ell}f\left(\eta\right)
\,.
\end{align*}
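The three-transposition identity used in this telescoping can be checked mechanically. The toy sketch below (our own illustration) verifies it on a configuration with particles at $i,j$ and an empty site $\ell$:

```python
def swap(eta, a, b):
    # exchange the entries at positions a and b
    eta = list(eta)
    eta[a], eta[b] = eta[b], eta[a]
    return tuple(eta)

# eta with particles of colors 1 and 2 at i = 0, j = 1, and site l = 2 empty
eta, i, j, l = (1, 2, 0), 0, 1, 2
lhs = swap(eta, i, j)
rhs = swap(swap(swap(eta, i, l), i, j), j, l)
assert lhs == rhs == (2, 1, 0)   # eta^{i,j} = (((eta^{i,l})^{i,j})^{j,l})
```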
Each term in the sum appearing in (\ref{os}) can then be estimated by
\begin{align}
&\nu\left[(\grad_{ij}f)^2\,1_{\{i\in S\}}1_{\{j\in S\}}1_{\{\ell\notin S\}}
\right]
\la{os0}\\&\leq 3\,\nu
\left[\left\{\left(\grad_{j\ell}f
\left(\left(\eta^{i,\ell}\right)^{i,j}\right)\right)^2\,
+\left(\grad_{ij}f\left(\eta^{i,\ell}\right)\right)^2
+\left(\grad_{i\ell}f\right)^2\right\}
1_{\{i\in S\}}1_{\{j\in S\}}1_{\{\ell\notin
S\}}\right] \,.
\nonumber
\end{align}
Next, we claim that there exists $C_1 = C_1(\e)<\infty$ such that
\begin{align}
&\nu
\left[\left(\grad_{j\ell}f
\left(\left(\eta^{i,\ell}\right)^{i,j}\right)\right)^2
1_{\{i\in S\}}1_{\{j\in S\}}1_{\{\ell\notin
S\}}\right] \nonumber\\
&\quad\quad\quad\qquad\;\;\leq C_1 \,\nu
\left[\left(\grad_{j\ell}f(\eta)\right)^2
1_{\{i\in S\}}1_{\{j\notin S\}}1_{\{\ell\in
S\}}\right]\,.
\la{os1}
\end{align}
Note the change of the indicator functions in (\ref{os1}).
To prove (\ref{os1}) we write, with the change of variables
$\varphi:=\eta^{i,\ell}$ and $\b:=\varphi^{i,j}$
\begin{align*}
&\nu
\left[\left(\grad_{j\ell}f
\left(\left(\eta^{i,\ell}\right)^{i,j}\right)\right)^2
1_{\{i\in S\}}1_{\{j\in S\}}1_{\{\ell\notin
S\}}\right]\\
&\qquad\qquad = \sum_{\eta}\nu(\eta)\,\left(\grad_{j\ell}f
\left(\left(\eta^{i,\ell}\right)^{i,j}\right)\right)^2\,
1_{\{i\in S(\eta)\}}1_{\{j\in S(\eta)\}}1_{\{\ell\notin
S(\eta)\}}\\
&\qquad\qquad =\sum_{\varphi}\nu(\varphi^{i,\ell})\,\left(\grad_{j\ell}f
\left(\varphi^{i,j}\right)\right)^2\,
1_{\{i\notin S(\varphi)\}}1_{\{j\in S(\varphi)\}}1_{\{\ell\in
S(\varphi)\}}\\
&\qquad\qquad =\sum_{\b}\nu((\b^{i,j})^{i,\ell})\,
\left(\grad_{j\ell}f
\left(\b\right)\right)^2\,
1_{\{i\in S(\b)\}}1_{\{j\notin S(\b)\}}1_{\{\ell\in
S(\b)\}}\\
&\qquad\qquad= \nu
\left[\chi_{i,j,\ell}\,\left(\grad_{j\ell}f\right)^2
1_{\{i\in S\}}1_{\{j\notin S\}}1_{\{\ell\in
S\}}\right]\,,
\end{align*}
where $$
\chi_{i,j,\ell}(\eta):=\frac{\nu((\eta^{i,j})^{i,\ell})}{\nu(\eta)}\,,
$$
is the change of measure.
Since the variables $p_i$ defining our measure satisfy (\ref{pi})
it is not hard to check that $\chi_{i,j,\ell}(\eta)\leq C_\e$ uniformly for
some constant $C_\e$. This proves (\ref{os1}).
\smallskip
Moreover, in a similar way one proves that there is a constant
$C_2=C_2(\e)<\infty$ such that
\begin{align}
&\nu
\left[\left(\grad_{ij}f
\left(\eta^{i,\ell}\right)\right)^2
1_{\{i\in S\}}1_{\{j\in S\}}1_{\{\ell\notin
S\}}\right] \nonumber\\
&\quad\quad\quad\qquad\;\;\leq C_2 \,\nu
\left[\left(\grad_{ij}f(\eta)\right)^2
1_{\{i\notin S\}}1_{\{j\in S\}}1_{\{\ell\in
S\}}\right]\,.
\la{os2}
\end{align}
From (\ref{os}) and (\ref{os0}),
using (\ref{os1}) and (\ref{os2}) we obtain
for a suitable constant $C_3$:
\begin{align}
&\var_\nu(f) \leq \frac{C_3}{\r(1-\r)N^2}
\sum_{i,j,\ell}
\Big\{\nu\left[(\grad_{j\ell}f)^2
\,
1_{\{i\in S\}}1_{\{j\notin S\}}1_{\{\ell\in S\}}
\right]\nonumber\\
&+ \nu\left[(\grad_{ij}f)^2
\,
1_{\{i\notin S\}}1_{\{j\in S\}}1_{\{\ell\in S\}}
\right] +
\nu\left[(\grad_{i\ell}f)^2
\,
1_{\{i\in S\}}1_{\{j\in S\}}1_{\{\ell\notin S\}}
\right]\Big\}\,.
\la{os3}
\end{align}
Since $\sum_{i}1_{\{i\in S\}}=\r N$, setting $C_4=3\,C_3$
we see that (\ref{os3}) can be written as
\be
\var_\nu(f) \leq \frac{C_4}{(1-\r)N}
\sum_{i,j}
\nu\left[(\grad_{ij}f)^2
\,
1_{\{i\in S\}}1_{\{j\notin S\}}
\right]\,.
\la{good}
\end{equation}
Using $\r\leq \frac{1}2$ and (\ref{diri}) we see that
$$
\var_\nu(f) \leq C\,\nu(f(-\cL_1)f)\,,
$$
for some constant $C=C(\e)<\infty$ and for all $f\in\cH_0^\perp$.
Therefore
we have proved that $\l_1^\perp(N,\o)\geq c_\e\,$.
Together with
(\ref{gammas}), (\ref{goodda}) and (\ref{lperp})
we see that $\l_1(N,\o)$ is uniformly
bounded from below and the claim (\ref{gap2}) holds. This proves Theorem
\ref{main} in the case $\g=1$.
\subsubsection{The case $\g=0$}
Here we cannot use the argument leading to (\ref{lperp})
above. However,
we can use the argument giving (\ref{good}) without modification
(it never used the fact that $\g=1$).
In particular, (\ref{good}) holds
for any $0<\r <1$ and any $f \in\cH_0^\perp$.
Now, observe that for any edge $b=\{i,j\}$ such that
$i\in S$ and $j\notin S$ the rate $c_b^0$ defined in (\ref{rates})
is uniformly bounded away from
zero with constants only depending on the $\e$ from
(\ref{pi}) (this follows from the fact that for such cases either
$\psi_i(1-\psi_j)=1$ or $\psi_j(1-\psi_i)=1$). Therefore, there exists $C=C(\e)<\infty$ such that
for any $f \in\cH_0^\perp$:
\be
\var_\nu(f) \leq \frac{C}{(1-\r)}
\,\nu(f(-\cL_0)f)\,.
\la{good2}
\end{equation}
This proves that $\l_0^\perp(N,\o)\geq c\,(1-\r)$.
From
(\ref{gammas}) and (\ref{goodda}) we see that
$\l_0(N,\o)$ satisfies the claim (\ref{gap3}). This proves Theorem
\ref{main} in the case $\g=0$. \qed
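For small systems the conclusions of Theorem \ref{main} can be checked by exact diagonalization. The sketch below (our own illustration, with arbitrary $p_i$) builds $\cL_0$ and $\cL_1$ for $m=2$ colors, $N=4$, $\o=(1,1)$, verifies the reversibility (\ref{detba}), and confirms that both spectral gaps are positive with $\l_0\leq\l_1$, as expected since $c^1_b\geq c^0_b$ pointwise:

```python
import itertools
import numpy as np

p = np.array([0.3, 0.45, 0.6, 0.75])   # arbitrary site disorder in (eps, 1 - eps)
N, m = 4, 2
# one particle of each color: omega = (1, 1)
states = [s for s in itertools.product(range(m + 1), repeat=N)
          if s.count(1) == 1 and s.count(2) == 1]
idx = {s: k for k, s in enumerate(states)}
nu = np.array([np.prod(np.where(np.array(s) >= 1, p, 1 - p)) for s in states])
nu /= nu.sum()

def c_gamma(s, i, j, gamma):
    # rates (rates): c_b(psi) + (gamma/2) 1_{psi_i = psi_j}
    psi_i, psi_j = int(s[i] >= 1), int(s[j] >= 1)
    D = p[i] * (1 - p[j]) + p[j] * (1 - p[i])
    c = (p[i] * (1 - p[j]) * psi_j * (1 - psi_i)
         + p[j] * (1 - p[i]) * psi_i * (1 - psi_j)) / D
    return c + 0.5 * gamma * (psi_i == psi_j)

def generator(gamma):
    L = np.zeros((len(states), len(states)))
    for k, s in enumerate(states):
        for i, j in itertools.combinations(range(N), 2):
            t = list(s); t[i], t[j] = t[j], t[i]
            c = c_gamma(s, i, j, gamma) / N
            L[k, idx[tuple(t)]] += c
            L[k, k] -= c
    return L

def gap(L):
    # symmetrize w.r.t. nu and take the smallest nonzero eigenvalue of -L
    d = np.sqrt(nu)
    A = (d[:, None] * (-L)) / d[None, :]
    return np.sort(np.linalg.eigvalsh(A))[1]

# reversibility (detba) for both gamma = 0 and gamma = 1
for gamma in (0.0, 1.0):
    for k, s in enumerate(states):
        for i, j in itertools.combinations(range(N), 2):
            t = list(s); t[i], t[j] = t[j], t[i]
            assert abs(nu[k] * c_gamma(s, i, j, gamma)
                       - nu[idx[tuple(t)]] * c_gamma(tuple(t), i, j, gamma)) < 1e-12

g0, g1 = gap(generator(0.0)), gap(generator(1.0))
assert 0 < g0 <= g1 + 1e-12   # both gaps positive; stirring can only help
```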
\bigskip
Let us briefly address the issue of
comparable upper bounds on spectral gaps.
For example, to prove that
$\l_0(N,\o)=O(1-\r) $, as $\r\to 1$
one can
consider the case of $m=2$ colors, with $\o_1=\o_2$
such that $\o_1+\o_2 = \r\,N$
with $\r\to 1$. Then choose
$f$ in the variational principle
(\ref{gapkac}) as the indicator function of the event that
a given vertex $i$ is occupied by a particle of color $1$.
The variance of $f$ is of order $1$. On the other hand, since the
number of empty sites that can be used to change the value of $\eta_i$ is
$(1-\r)N$,
it is not hard to show that
$\nu(f(-\cL_0)f)$ is of order $(1-\r)$. It follows that
$\l_0(N,\o)=O(1-\r)$. Of
course, if $\g=1$ then $\nu(f(-\cL_1)f)$ is of order $1$ and the gap
does not vanish as $\r\to 1$ in that case.
\smallskip
Finally, we point out an interesting problem concerning
local versions of the exclusion
dynamics described by (\ref{diri}). The local dynamics is obtained
by summing over pairs $b$ which are given by the edges of a small
subgraph of the complete graph,
such as e.g.\ the box of side $L\sim N^{1/d}$ in a $d$--dimensional grid:
$\L_L=[1,L]^d\cap\bbZ^d$. In the latter case one expects the gap to
scale as $L^{-2}$. In the case $m=1$ there is a nice argument
in \cite[Lemma 5.1 and Lemma 5.2]{Q} which allows one to derive
such an estimate from the complete--graph bound (\ref{exc1}). On
the other hand, the case $m>1$ with $\g=0$
seems to be much more complicated and
we are not aware of any result in this direction, except for the
homogeneous case considered in \cite{Q1}.
\smallskip
\bigskip
\noindent
{\bf Acknowledgments.}
I would like to thank Eric A.\ Carlen, Maria C.\
Carvalho, Paolo Dai Pra, Gustavo Posta
and Prasad Tetali for several useful discussions around this work.
| {
"timestamp": "2008-07-22T10:39:40",
"yymm": "0807",
"arxiv_id": "0807.3415",
"language": "en",
"url": "https://arxiv.org/abs/0807.3415",
"abstract": "We give a new and elementary computation of the spectral gap of the Kac walk on the N-sphere. The result is obtained as a by-product of a more general observation which allows to reduce the analysis of the spectral gap of an N-component system to that of the same system for N=3. The method applies to a number of random 'binary collision' processes with complete-graph structure, including non-homogeneous examples such as exclusion and colored exclusion processes with site disorder.",
"subjects": "Probability (math.PR); Mathematical Physics (math-ph)",
"title": "On the spectral gap of the Kac walk and other binary collision processes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9861513905984457,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.708642853566933
} |
https://arxiv.org/abs/1707.04018 | Hardy's inequality in a limiting case on general bounded domains | In this paper, we study Hardy's inequality in a limiting case: $$\int_{\Omega} |\nabla u |^N dx \ge C_N(\Omega) \int_{\Omega} \frac{|u(x)|^N}{|x|^N \left(\log \frac{R}{|x|} \right)^N} dx $$ for functions $u \in W^{1,N}_0(\Omega)$, where $\Omega$ is a bounded domain in $\mathbb{R}^N$ with $R = \sup_{x \in \Omega} |x|$. We study the (non-)attainability of the best constant $C_N(\Omega)$ in several cases. We provide sufficient conditions that assure $C_N(\Omega) > C_N(B_R)$ and $C_N(\Omega)$ is attained, here $B_R$ is the $N$-dimensional ball with center the origin and radius $R$. Also we provide an example of $\Omega \subset \mathbb{R}^2$ such that $C_2(\Omega) > C_2(B_R) = 1/4$ and $C_2(\Omega)$ is not attained. | \section{Introduction}
The classical Hardy inequality in one space dimension states that
\begin{equation}
\label{Hardy_1D}
\int_0^{\infty} |u'(t)|^p \, dt \ge \( \frac{p-1}{p} \)^p \int_0^{\infty} \frac{|u(t)|^p}{t^p} \, dt
\end{equation}
holds for all $u \in W^{1,p}_0(0, +\infty)$ where $1 < p < \infty$.
This scaling invariant inequality is now very classical and there are wonderful treatises \cite{Ghoussoub-Moradifam(book)}, \cite{Mazya}, \cite{Opic-Kufner} on further generalizations of the inequality \eqref{Hardy_1D}.
It is also known that the constant $\( \frac{p-1}{p} \)^p$ is best possible and it is not achieved by any function in $W^{1,p}_0(0,+\infty)$.
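To see concretely how the best constant in \eqref{Hardy_1D} arises, one can evaluate the Hardy quotient for the illustrative family $u_a(t) = t^a e^{-t}$ with $a > 1/2$ in the case $p = 2$ (best constant $1/4$): both integrals reduce to Gamma functions, and a short computation shows the quotient equals exactly $a/2$, which decreases to $1/4$ as $a \downarrow 1/2$ without attaining it; this is consistent with the non-attainability noted above. A minimal sketch (the family $u_a$ is our illustration, not taken from the paper):

```python
import math

def hardy_ratio(a: float) -> float:
    """Hardy quotient  int |u'|^2 dt / int |u/t|^2 dt  on (0, infty)
    for the test function u(t) = t^a * exp(-t), a > 1/2.
    Using int_0^infty t^{s-1} e^{-2t} dt = Gamma(s) / 2^s, both integrals
    are explicit; algebra shows the quotient equals a/2 exactly."""
    g = math.gamma(2 * a - 1) / 2 ** (2 * a - 1)          # int (u/t)^2 dt
    num = (a * a * g                                       # from a^2 t^{2a-2}
           - 2 * a * math.gamma(2 * a) / 2 ** (2 * a)      # from -2a t^{2a-1}
           + math.gamma(2 * a + 1) / 2 ** (2 * a + 1))     # from t^{2a}
    return num / g

# The quotient stays above the best constant 1/4 and tends to it as a -> 1/2:
for a in (0.51, 0.6, 0.75):
    r = hardy_ratio(a)
    assert r > 0.25 and abs(r - a / 2) < 1e-9
```

In particular, $\inf_a \, (a/2) = 1/4$ is approached but never reached within this family, mirroring the statement that no function in $W^{1,2}_0(0,+\infty)$ attains the constant.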
The inequality \eqref{Hardy_1D} has been generalized to higher dimensions in two directions:
one is to replace the function $t$ in the right-hand side by the distance to the origin,
and the other is to replace it by the distance to the boundary.
For the former direction, let $\Omega \subset \re^N$ ($N \ge 2$) be a domain with $0 \in \Omega$ and let $p \geq 1$.
Then the classical $L^p$-Hardy inequality states that
\begin{equation}
\label{H_p}
\intO |\nabla u|^p \, dx \ge \left| \frac{N-p}{p} \right|^p \intO \frac{|u|^p}{|x|^p} \, dx
\end{equation}
holds for all $u \in W^{1,p}_0(\Omega)$ when $1 \le p < N$, and
for all $u \in W^{1,p}_0(\Omega \setminus \{ 0 \})$ when $p > N$.
It is known that for $p > 1$, the best constant $|\frac{N-p}{p}|^p$ is never attained in $W^{1,p}_0(\Omega)$ when $p < N$, or in $W^{1,p}_0(\Omega \setminus \{ 0 \})$ when $p > N$, respectively.
After the pioneering work of Brezis and V\'{a}zquez \cite{Brezis-Vazquez}, which showed that the inequality can be improved on bounded domains when $p < N$,
there are many papers that treat the improvements of the inequality (\ref{H_p})
(see \cite{ACR}, \cite{BFT1}, \cite{BFT2}, \cite{Cazacu}, \cite{DPP}, \cite{Filippas-Tertikas}, \cite{GGM}, \cite{Sano-TF},
the recent book \cite{Ghoussoub-Moradifam(book)}, and the references therein).
For the latter direction,
let $\Omega \subset \re^N$ be an open set with Lipschitz boundary and define $d(x) = {\rm dist}(x, \pd\Omega)$.
Then a version of the Hardy inequality, of so-called ``geometric type", states that for any $p > 1$,
there exists $c_p(\Omega) > 0$ such that the inequality
\begin{equation}
\label{GH_p}
\intO |\nabla u|^p \, dx \ge c_p(\Omega) \intO \frac{|u|^p}{(d(x))^p} \, dx
\end{equation}
holds for all $u \in W^{1,p}_0(\Omega)$.
For this inequality, refer to \cite{Ancona}, \cite{BFT1}, \cite{Brezis-Marcus}, \cite{DPP}, \cite{LP}, \cite{Lehrback}, \cite{MS(NA)}, \cite{Tidblom(JFA)}, \cite{Tidblom(PAMS)},
the recent book \cite{BEL(book)} and the references therein.
In \cite{MS(NA)}, it is proved that $c_p(\Omega) = \( \frac{p-1}{p} \)^p$ is the best constant on any convex domain $\Omega$,
that is,
\begin{equation}
\label{hq}
c_p(\Omega) = \inf_{u \in W^{1,p}_0(\Omega), u \not\equiv 0}
\frac{\intO |\nabla u |^p dx}{\intO \frac{|u(x)|^p}{(d(x))^p} dx} = \( \frac{p-1}{p} \)^p.
\end{equation}
In \cite{BFT1}, \cite{Tidblom(PAMS)}, the authors obtained an extra remainder term on the right-hand side of (\ref{GH_p}),
which means that the best constant $c_p(\Omega)$ is never attained on any convex domain $\Omega$.
When $\Omega$ is the half-space $\re^N_{+} = \{ x = (x_1, \cdots, x_N) \,|\, x_N > 0 \}$, the inequality (\ref{GH_p}) has the form
\begin{equation}
\label{Hardy_half}
\int_{\re^N_{+}} |\nabla u|^p \, dx \ge \( \frac{p-1}{p} \)^p \int_{\re^N_{+}} \frac{|u|^p}{x_N^p} \, dx
\end{equation}
and the best constant $\( \frac{p-1}{p} \)^p$ is never attained by functions in $W^{1,p}_0(\re^N_{+})$.
On the other hand, let $\Omega$ be a bounded domain with $C^{1, \gamma}$ boundary for some $\gamma \in (0,1)$.
Then it is proved by Marcus, Mizel, and Pinchover in \cite{MMP} that
there exists a minimizer of $c_2(\Omega)$ if and only if $c_2(\Omega) < 1/4$.
See also \cite{MMP}, \cite{Marcus-Shafrir}, \cite{LP} for the corresponding results for $1 < p < \infty$.
So the compactness of any minimizing sequence fails only at the
bottom level $\( \frac{p-1}{p} \)^p.$
In the critical case $p = N$, the weight $|x|^{-N}$ is too singular for the same type of inequality as (\ref{H_p}) to hold true for functions in $W^{1,N}_0(\Omega)$.
Instead of (\ref{H_p}), it is known that the following {\it Hardy inequality in a limiting case}
\begin{equation}
\label{Hardy_N}
\intO |\nabla u |^N dx \ge \( \frac{N-1}{N} \)^N \intO \frac{|u(x)|^N}{|x|^N \( \log \frac{R}{|x|} \)^N} dx
\end{equation}
holds true for all $u \in W^{1,N}_0(\Omega)$ where $R = \sup_{x \in \Omega} |x|$;
refer to \cite{Leray}, \cite{Ladyzhenskaya}, \cite{DP}, \cite{Ioku-Ishiwata}, \cite{TF} and references therein.
Note that the additional $\log$ term weakens the singularity of $|x|^{-N}$ at the origin,
however, the weight function
\[
W_R(x) = \frac{1}{|x|^N \( \log \frac{R}{|x|} \)^N}
\]
becomes singular also on the boundary $\pd\Omega$ since $R = \sup_{x \in \Omega} |x|$.
Indeed, since
\begin{equation}
\label{Taylor}
|x|^N \( \log \frac{R}{|x|} \)^N = (R-|x|)^N + o((R-|x|)^N)
\end{equation}
as $|x| \to R$, the weight $W_R$ has an effect similar to that of $(1/d(x))^N$ near the boundary.
In this sense, the critical Hardy inequality (\ref{Hardy_N}) has both features of the inequalities \eqref{H_p} and \eqref{GH_p}.
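The expansion \eqref{Taylor} is easy to confirm numerically; for instance, with $R = 1$ and $N = 2$, the ratio of the left-hand side to $(R-|x|)^N$ tends to $1$ as $|x| \to R$. A small sketch (the sample radii are arbitrary choices of ours):

```python
import math

def weight_ratio(r: float, R: float = 1.0, N: int = 2) -> float:
    """Ratio  |x|^N (log(R/|x|))^N / (R - |x|)^N  at |x| = r < R;
    by the Taylor expansion of the text it tends to 1 as r -> R."""
    return (r * math.log(R / r)) ** N / (R - r) ** N

# The ratio approaches 1 from below as r -> R = 1:
assert weight_ratio(0.99) < weight_ratio(0.999) < 1.0
assert abs(weight_ratio(0.9999) - 1.0) < 1e-3
```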
Note that (\ref{Hardy_N}) is invariant under the scaling
\begin{equation}
\label{scaling_N}
u_{\la}(x) = \la^{-\frac{N-1}{N}} u\( \( \frac{|x|}{R} \)^{\la-1} x \) \quad \text{for} \, \la > 0,
\end{equation}
which is different from the usual scaling $u_{\la}(x) = \la^{\frac{N-p}{p}} u(\la x)$ for (\ref{H_p}) when $\Omega = \re^N$ and $p < N$.
(Recently, however, a relation between the two scaling transformations was obtained; see \cite{Sano-TF}.)
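One can also check the scaling invariance \eqref{scaling_N} numerically. For $N = 2$, $R = 1$ and radial functions, the substitution $t = \log(1/|x|)$ turns the quotient in \eqref{Hardy_N} into the one-dimensional Hardy quotient of $v(t) = u(e^{-t})$, and \eqref{scaling_N} becomes the dilation $v_\la(t) = \la^{-1/2} v(\la t)$. A sketch with the illustrative choice $v(t) = t e^{-t}$, i.e. $u(x) = |x|\log(1/|x|)$ (our example, not from the paper), whose quotient is independent of $\la$:

```python
import math

def quotient(lam: float, T: float = 50.0, n: int = 100_000) -> float:
    """Hardy quotient  int |v'|^2 dt / int |v/t|^2 dt  over (0, infty)
    for v_lam(t) = lam^{-1/2} * v(lam * t) with v(t) = t * exp(-t),
    approximated by the midpoint rule on (0, T]."""
    h = T / n
    num = den = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        s = lam * t
        v = lam ** -0.5 * s * math.exp(-s)          # v_lam(t)
        dv = lam ** 0.5 * (1.0 - s) * math.exp(-s)  # d/dt v_lam(t)
        num += dv * dv * h
        den += (v / t) ** 2 * h
    return num / den

# The quotient (here 1/2) does not change under the scaling:
values = [quotient(lam) for lam in (0.5, 1.0, 3.0)]
assert all(abs(q - 0.5) < 1e-3 for q in values)
```

For this particular $v$ both integrals are elementary ($1/4$ and $1/2$), so the common value $1/2$ of the quotient can also be verified by hand.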
Let $C_N(\Omega)$ be the best constant of the inequality (\ref{Hardy_N}):
\begin{equation}
\label{CHN}
C_N(\Omega) = \inf_{u \in W^{1,N}_0(\Omega), u \not\equiv 0}
\frac{\intO |\nabla u |^N dx}{\intO \frac{|u(x)|^N}{|x|^N \( \log \frac{R}{|x|} \)^N} dx}.
\end{equation}
By this definition and \eqref{Hardy_N}, we see $C_N(\Omega) \ge \( \frac{N-1}{N} \)^N$ for any bounded domain $\Omega \subset B_R$ with $R = \sup_{x \in \Omega} |x|$.
Here and henceforth, $B_R$ will denote the $N$-dimensional ball with radius $R$ and center $0$.
In \cite{Ioku-Ishiwata}, the authors proved that $C_N(B_R) = \( \frac{N-1}{N} \)^N$ and $C_N(B_R)$ is never attained by any function in $W^{1,N}_0(B_R)$.
See also \cite{DFP}, \cite{DP}.
Let us recall the arguments in \cite{Ioku-Ishiwata}.
First, the authors of \cite{Ioku-Ishiwata} prove that, if the infimum $C_N(B_R)$ is attained by a radially symmetric function $u \in W^{1,N}_{0, rad}(B_R)$,
then $u \in C^1(B_R \setminus \{ 0 \})$, $u > 0$ and $u$ is unique up to multiplication of positive constants.
By using these facts and the scaling invariance (\ref{scaling_N}), the authors prove that $C_N(B_R)$ is not attained by radially symmetric functions.
Indeed, by the scaling invariance (\ref{scaling_N}) and the uniqueness up to multiplication of positive constants,
the possible radially symmetric minimizer has the form $C (\log \frac{R}{|x|})^{\frac{N-1}{N}}$ which is not in $W^{1,N}_0(B_R)$.
Finally, they prove that if there exists a minimizer of $C_N(B_R)$, then there exists also a radially symmetric minimizer.
The argument of this part is elementary and the proof of the non-attainability of $C_N(B_R)$ is established.
The main purpose of this paper is to study the (non-)attainability of the infimum $C_N(\Omega)$ for more general domains $\Omega \subset B_R$.
Some new phenomena will be shown in this paper.
We first note that if $C_N(\Omega) = \( \frac{N-1}{N} \)^N$, $C_N(\Omega)$ is not attained.
In fact, if $C_N(\Omega)$ is attained by an element $u \in W_0^{1,N}(\Omega)$, then, extending $u$ trivially to an element of $W_0^{1,N}(B_R)$,
we see that $C_N(B_R) = \( \frac{N-1}{N} \)^N$ is attained by $u$; this contradicts the result in \cite{Ioku-Ishiwata} that $C_N(B_R)$ is not attained.
In the following, we do not necessarily assume that $0 \in \Omega$.
Since the weight function $W_R(x) = (|x| (\log \frac{R}{|x|} ))^{-N}$ itself depends on the geometric quantity $R$,
it is not clear whether $C_N(\Omega)$ has the same value as $C_N(B_R)$ for all domains $\Omega \subset B_R$ or not.
Since $W_R$ becomes unbounded around the origin and also around the set $|x| = R$,
it is plausible that minimizing sequences for $C_N(\Omega)$ tend to concentrate on the origin or on the boundary portion $\pd \Omega \cap \pd B_R$
in order to minimize the quotient
\[
Q_R(u) = \frac{\intO |\nabla u |^N dx}{\intO W_R(x) |u(x)|^N dx}.
\]
This suggests that $C_N(\Omega) = C_N(B_R)$ and that $C_N(\Omega)$ is not attained
whenever the origin is an interior point of $\Omega$, or $\Omega$ has a smooth boundary portion at distance $R$ from the origin
(just like the ball $B_R$).
We will prove later that these intuitions are correct; see Theorem \ref{theorem-origin} and Theorem \ref{theorem-smooth}.
However, when we treat a domain $\Omega \subset B_R$ with $R = \sup_{x \in \Omega} |x|$
which neither contains the origin in its interior nor has a smooth boundary portion $\pd\Omega \cap \pd B_R$,
the situation is rather different.
Actually, we provide a sufficient condition on $\Omega \subset B_R$ which assures that $C_N(\Omega) > C_N(B_R)$ (Theorem \ref{theorem-inequality}).
Moreover, we prove that a stronger condition on $\Omega$ than the sufficient condition assures that $C_N(\Omega)$ is attained (Theorem \ref{theorem-existence}).
Finally, we provide an example of a domain $\Omega \subset \re^2$ for which $C_2(\Omega) > C_2(B_R) = 1/4$ and $C_2(\Omega)$ is not attained (Theorem \ref{theorem-nonexistence}).
This is quite a contrast to the result for \eqref{hq} in \cite{MMP}, which says that
if $c_2(\Omega)$ is strictly less than the critical number $\frac 14,$ the infimum $c_2(\Omega)$ is attained.
The organization of this paper is as follows:
In \S 2, we prove Theorem \ref{theorem-origin}, which says that if $0 \in \Omega$, then
$C_N(\Omega) =\( \frac{N-1}{N} \)^N$ and the infimum is not attained.
In \S 3, we prove Theorem \ref{theorem-smooth}, which says that if $\partial B_R \cap \partial \Omega$
enjoys some regularity, then $C_N(\Omega) =\( \frac{N-1}{N} \)^N$ and the infimum is not attained.
In \S 4, we prove Theorem \ref{theorem-inequality}, which says that a strict inequality
$C_N(\Omega) > \( \frac{N-1}{N} \)^N$ holds under some condition on $\Omega$
and Theorem \ref{theorem-existence}, which says that under a stronger condition than the one in Theorem \ref{theorem-inequality}, the infimum is attained.
Finally, in \S 5, we prove Theorem \ref{theorem-nonexistence}, which says that the condition for the existence of a minimizer in Theorem \ref{theorem-existence} is optimal.
Now we fix some notation and conventions.
For a bounded domain $\Omega \subset \re^N$,
the letter $R$ will be used to denote $R = \sup_{x \in \Omega} |x|$ throughout the paper.
$B_R$ will denote the $N$-dimensional ball with radius $R$ and center $0$.
The surface area $\int_{S^{N-1}} dS_{\omega}$ of the $(N-1)$ dimensional unit sphere $S^{N-1}$ in $\re^N$ will be denoted by $\omega_{N-1}$.
$S^{N-1}(r)$ will denote the sphere of radius $r$ with center $0$.
Finally, the letter $C$ will denote a positive constant whose value may vary from line to line.
\begin{section}{Hardy's inequality for the case $0 \in \Omega$}
In this section, we treat the case when $\Omega \subset B_R$ has the origin as an interior point of $\Omega$.
In this case, we prove the following theorem.
\begin{theorem}
\label{theorem-origin}
For any bounded domain $\Omega \subset \re^N$ with $0 \in \Omega$ and $R = \sup_{x \in \Omega} |x|$,
\begin{align*}
C_N(\Omega) = C_N(B_R) = \( \frac{N-1}{N} \)^N,
\end{align*}
and the infimum $C_N(\Omega)$ is not attained.
\end{theorem}
\begin{proof}
Note that by the definition of $R$, we have $\Omega \subset B_R$.
By a trivial extension of a function $u \in W^{1,N}_0(\Omega)$ on $B_R$ by $u(x) = 0$ for $x \in B_R \setminus \Omega$,
we see $W^{1,N}_0(\Omega) \subset W^{1,N}_0(B_R)$ and thus
\begin{equation}
\label{C_N_lower}
C_N(\Omega) \ge C_N(B_R) = \( \frac{N-1}{N} \)^N.
\end{equation}
For the fact $C_N(B_R) = \( \frac{N-1}{N} \)^N$, we refer to \cite{Ioku-Ishiwata}.
In \cite{Ioku-Ishiwata}, the authors prove this fact by using the test functions
\begin{align*}
\psi_{\beta} (x) = \begin{cases}
1, &\quad 0 \le |x| \le \frac{R}{e}, \\
\( \log \frac{R}{|x|} \)^\beta, &\quad \frac{R}{e} \le |x| \le R
\end{cases}
\end{align*}
for $\beta > \frac{N-1}{N}$.
Note that $\{ \psi_{\beta} \}$ will concentrate on the boundary $\pd B_R$ when $\beta \downarrow \frac{N-1}{N}$.
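In the planar case one can evaluate the quotient of $\psi_\beta$ in closed form and watch it decrease to $\( \frac{N-1}{N} \)^N = \frac14$: for $N = 2$, $R = 1$, the substitution $t = \log(1/r)$ turns the quotient of a radial function into $\int_0^\infty |\psi'(t)|^2 dt / \int_0^\infty |\psi(t)/t|^2 dt$, and $\psi_\beta$ becomes $t^\beta$ on $(0,1]$ and $1$ on $[1,\infty)$. A sketch in exact rational arithmetic (the closed forms below are our own elementary computation):

```python
from fractions import Fraction

def psi_beta_quotient(beta: Fraction) -> Fraction:
    """Critical-Hardy quotient of psi_beta on the unit disk (N = 2, R = 1),
    computed in the variable t = log(1/r), where psi_beta = t^beta for
    0 < t <= 1 and = 1 for t >= 1 (i.e. r <= 1/e).  The integrals are
    elementary for beta > 1/2:
      numerator   = int_0^1 (beta t^{beta-1})^2 dt = beta^2 / (2 beta - 1),
      denominator = int_0^1 t^{2 beta - 2} dt + int_1^inf t^{-2} dt
                  = 1 / (2 beta - 1) + 1."""
    num = beta ** 2 / (2 * beta - 1)
    den = Fraction(1, 1) / (2 * beta - 1) + 1
    return num / den

# The quotient equals beta/2 exactly, hence decreases to 1/4 as beta -> 1/2:
for p, q in ((51, 100), (3, 5), (9, 10)):
    beta = Fraction(p, q)
    assert psi_beta_quotient(beta) == beta / 2
```

The exact value $\beta/2$ tends to $1/4$ as $\beta \downarrow \frac12$, matching the concentration behavior described above.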
In our case, since $0 \in \Omega$ is an interior point, there exists a small $c \in (0,1)$ such that $B_{cR}(0) \subset \Omega$.
For $0 < \alpha < \frac{N-1}{N}$, we define a function
\begin{align*}
\phia (x) = \begin{cases}
\( \log \frac{R}{|x|} \)^{\alpha}, &\quad |x| \le \frac{cR}{2}, \\
\( \log \frac{2R}{c} \)^{\alpha}(2-\frac{2|x|}{cR}) , &\quad \frac{cR}{2} \le |x| \le cR, \\
0, &\quad cR \le |x|, \, \text{and} \; x \in \Omega.
\end{cases}
\end{align*}
Then we see that
\begin{align*}
A \equiv &\intO |\nabla \phia|^N dx
= \omega_{N-1} \int_0^{\frac{cR}{2}} \left| \alpha \( \log \frac{R}{r} \)^{\alpha-1} \( \frac{-1}{r} \) \right|^N r^{N-1} dr + O(1) \\
&= \omega_{N-1} \alpha^N \int_0^{\frac{cR}{2}} \( \log \frac{R}{r} \)^{N(\alpha-1)} \frac{1}{r} \; dr + O(1) \\
&= \omega_{N-1} \alpha^N \left[ \frac{-1}{N(\alpha-1) + 1} \( \log \frac{R}{r} \)^{N(\alpha-1) + 1} \right]_0^{\frac{cR}{2}} + O(1) \\
&= \omega_{N-1} \alpha^N \( \frac{-1}{N(\alpha-1) + 1} \) \( \log \frac{2}{c} \)^{N(\alpha-1) + 1} + O(1).
\end{align*}
Since $\alpha < \frac{N-1}{N}$, we have $N(\alpha-1) + 1 < 0$.
Thus $|\nabla \phia|^N$ is integrable near the origin and $\phia \in W^{1,N}_0(\Omega)$ for any $\alpha \in (0, \frac{N-1}{N})$.
Also we see that
\begin{align*}
B \equiv &\intO \frac{|\phia(x)|^N}{|x|^N \( \log \frac{R}{|x|} \)^N} dx
= \omega_{N-1} \int_0^{\frac{cR}{2}} \frac{(\log \frac{R}{r})^{\alpha N}}{r^N (\log \frac{R}{r})^N} r^{N-1} dr + O(1) \\
&= \omega_{N-1} \int_0^{\frac{cR}{2}} \( \log \frac{R}{r} \)^{N\alpha - N} \frac{1}{r} \; dr + O(1) \\
&= \omega_{N-1} \( \frac{-1}{N(\alpha-1) + 1} \) \( \log \frac{2}{c} \)^{N(\alpha-1) + 1} + O(1).
\end{align*}
Therefore, since $N(\alpha-1) + 1 \uparrow 0$ as $\alpha \uparrow \frac{N-1}{N}$, the main terms of $A$ and $B$ diverge while the $O(1)$ remainders stay bounded, and we conclude that
\begin{align*}
\frac{A}{B} &= \frac{\omega_{N-1} \alpha^N \( \frac{-1}{N(\alpha-1) + 1} \) \( \log \frac{2}{c} \)^{N(\alpha-1) + 1} + O(1)}{\omega_{N-1} \( \frac{-1}{N(\alpha-1) + 1} \) \( \log \frac{2}{c} \)^{N(\alpha-1) + 1} + O(1)} \\
&\to \( \frac{N-1}{N} \)^N \quad \text{as} \; \alpha \uparrow \frac{N-1}{N}.
\end{align*}
This proves that
\[
C_N(\Omega) = \( \frac{N-1}{N} \)^N,
\]
thus the infimum $C_N(\Omega)$ is not attained; see Introduction.
\end{proof}
\end{section}
\begin{section}{Hardy's inequality for smooth domains}
In this section, we prove that $C_N(\Omega)$ equals $\( \frac{N-1}{N} \)^N$ if the domain has a smooth boundary portion on $\pd B_R$.
For the smoothness on the boundary, the interior sphere condition is enough to obtain the result.
Here we say that a point $x_0 \in \pd\Omega \cap \pd B_R$ satisfies an {\it interior sphere condition} if there is an open ball $B \subset \Omega$
such that $x_0 \in \pd B$.
The idea here is to construct a (non-convergent) minimizing sequence $\{ u_n \}$ for $C_N(\Omega)$ for which the value of $Q_R(u_n)$ goes to $\( \frac{N-1}{N} \)^N$,
by modifying a minimizing sequence for the best constant of Hardy's inequality on the half-space \eqref{Hardy_half} when $p = N$:
\begin{equation}
\label{Hardy_half_inf}
\inf_{u \in C_0^\infty(\re^N_{+}) \setminus \{0\}} \frac{\int_{\re^N_{+}} |\nabla u|^N dx}{\int_{\re^N_{+}} |\frac{u}{x_N}|^N dx} = \( \frac{N-1}{N} \)^N.
\end{equation}
This is possible since the weight function $W_R(x)$ can be considered as $(1/d(x))^N$ near the smooth boundary portion $\pd\Omega \cap \pd B_R$.
\begin{theorem}
\label{theorem-smooth}
For a bounded domain $\Omega$, we assume that there exists a point $x_0 \in \pd\Omega \cap \pd B_R$ satisfying an interior sphere condition.
Then
\[
C_N(\Omega) = \( \frac{N-1}{N} \)^N
\]
and the infimum $C_N(\Omega)$ is not attained.
\end{theorem}
\begin{proof}
The following proof is inspired by \cite{MMP}.
We write $x = (x_1, \cdots, x_{N-1}, x_N) = (x^{\prime}, x_N)$ for $x \in \re^N_{+}$.
Fix $\e > 0$ arbitrary.
By \eqref{Hardy_half_inf}, we may take $v_\e \in C_0^\infty(\re^N_+)$ such that
\[
\int_{\re^N_+} \left|\frac{v_\e}{x_N} \right|^N dx = 1, \quad \text{and} \quad \int_{\re^N_+} |\nabla v_\e|^N dx \le \(\frac{N-1}{N} \)^N + \e.
\]
Since $\textrm{supp}(v_\e)$ is compact, we may assume that
\[
\textrm{supp}(v_\e) \subset \{x = (x^{\prime}, x_N) \in \re^N_+ \ | \ |x^{\prime}|^2 < A x_N, \ x_N < B \}
\]
if we take $A,B > 0$ sufficiently large depending on $\e$.
We regard $v_{\e}$ as defined on the whole $\re^N_{+}$ by setting it equal to $0$ outside its support.
For $l \in \N$, we define $v_\e^l(x) = v_\e(l x)$.
Note that for each $l > 0$, we have
\[
\int_{\re^N_+} |\nabla v^l_\e|^N dx = \int_{\re^N_+} |\nabla v_\e|^N dx, \quad \int_{\re^N_+} \left|\frac{v^l_\e}{x_N} \right|^N dx = \int_{\re^N_+} \left|\frac{v_\e}{x_N} \right|^N dx
\]
and
\[
\textrm{supp}(v^l_\e) \subset \left\{(x^{\prime}, x_N) \in \re^N_+ \ | \ |x^{\prime}|^2 < \frac{A}{l} x_N, \ x_N < \frac{B}{l} \right\}.
\]
By a rotation, we may assume that $x_0 = (-R) e_N \in \partial \Omega \cap \pd B_R$ satisfies an interior sphere condition,
where $e_N = (0, \cdots, 0, 1)$.
Then we see that for some $A^{\prime}$, $B^{\prime} > 0$,
\[
\{(x^{\prime}, x_N) \in \re^N_+ \ | \ |x^{\prime}|^2 < A^{\prime} x_N, \ x_N < B^{\prime} \} \subset \Omega + R e_N.
\]
Since \eqref{Taylor} holds for small $R - |x|$, we see that
\begin{equation}
\label{S1}
|x|^N \( \log\frac{R}{|x|} \)^N \le (x_N + R)^N + o((x_N +R)^N)
\end{equation}
for $x \in \Omega$ with small $x_N+R$.
Now we define
\[
u_\e^l(x) \equiv v_\e^l(x + R e_N)
\]
for $x \in \Omega$.
Then, for large $l > 0$, we see that
$u_\e^l \in C_0^\infty(\Omega)$ and
\[
{\rm supp}(u_\e^l) \subset \Omega \cap \{x \in B_R \ | \ x_N+R < B/l\}.
\]
Now \eqref{S1} implies that
\begin{align*}
\intO \frac{|u_\e^l(x)|^N}{|x|^N\big (\log\frac{R}{|x|}\big )^N} dx \geq \intO \frac{|u_\e^l(x)|^N}{(x_N + R)^N} dx + o_l(1) = \int_{\Omega + Re_N} \frac{|v_\e^l(y)|^N}{|y_N|^N} dy +o_l(1)
\end{align*}
where $o_l(1) \to 0$ as $l \to \infty$,
and
\begin{align*}
\intO \big |\nabla u_{\e}^l(x) \big|^N dx = \int_{\Omega + Re_N} \big |\nabla v_{\e}^l(y) \big|^N dy \leq \int_{\re^N_{+}} \big |\nabla v_{\e}^l(y) \big|^N dy.
\end{align*}
Thus we have
\begin{align*}
\frac{\intO \big |\nabla u_{\e}^l(x) \big|^N dx}{\intO \frac{|u_\e^l(x)|^N}{|x|^N\big (\log\frac{R}{|x|}\big )^N} dx}
\le
\frac{\int_{\re^N_+} \big |\nabla v_\e^l\big|^N dy}{\int_{\re^N_+} \frac{|v_\e^l(y)|^N}{|y_N|^N} dy} + o_l(1)
\le \(\frac{N-1}{N} \)^N + \e + o_l(1).
\end{align*}
This implies that
\[
\inf_{u \in W_0^{1,N}(\Omega) \setminus \{0\}} \frac{\intO \big |\nabla u \big|^N dx}{\intO \frac{|u(x)|^N}{|x|^N \(\log\frac{R}{|x|}\)^N} dx}
\le \( \frac{N-1}{N} \)^N.
\]
Since $C_N(\Omega) \ge C_N(B_R) = \(\frac{N-1}{N} \)^N$ by \eqref{C_N_lower}, we conclude the equality.
This again implies that the infimum $C_N(\Omega)$ is not attained.
\end{proof}
\end{section}
\begin{section}{Hardy's inequality for nonsmooth domains}
In this section, first we provide a sufficient condition to assure the strict inequality $C_N(\Omega) > C_N(B_R)$ for bounded domains $\Omega$ with $R = \sup_{x \in \Omega} |x|$.
First, we recall the notion of spherical symmetric rearrangement.
Let $B_r(p,s)$ denote the geodesic open ball in $S^{N-1}(r)$ with center $p \in S^{N-1}(r)$ and geodesic radius $s$.
Then for each $r \in (0,R)$, there exists a constant $a(r) \ge 0$ such that
the $(N-1)$-dimensional measure of the geodesic open ball $B_r(r e_N, a(r))$ with center $r e_N = (0,\cdots,0,r)$ and radius $a(r)$ equals $\mathcal{H}^{N-1}(\Omega \cap S^{N-1}(r))$,
here $\mathcal{H}^{N-1}$ denotes the $(N-1)$-dimensional Hausdorff measure.
Define the {\it spherical symmetric rearrangement} $\Omega^*$ of a domain $\Omega \subset B_R$ by
\[
\Omega^* \equiv \bigcup_{r \in (0,R)} B_r(r e_N, a(r))
\]
and the {\it spherical symmetric rearrangement} $u^*$ of a function $u$ on $\Omega$ by
\[
u^*(x) \equiv \sup \{ t \in \re \, | \, x \in \{ x \in \Omega \, | \, u(x) \ge t \}^{*} \}, \quad x \in \Omega^*,
\]
see Kawohl \cite{Kawohl} p.17.
Note that this is an equimeasurable rearrangement with $u^*$ rotationally symmetric around the positive $x_N$-axis,
and there hold the P\'olya-Szeg\"o type inequality
\[
\intO |\nabla u|^p \, dx \ge \int_{\Omega^*} |\nabla u^*|^p \, dx
\]
for $u \in W^{1,p}_0(\Omega)$ with $p > 1$,
and the Hardy-Littlewood inequality
\[
\intO u(x) v(x) \, dx \le \int_{\Omega^*} u^*(x) v^*(x) \, dx
\]
for nonnegative functions $u, v$ on $\Omega$, see \cite[pages 21, 23, and 26]{Kawohl}.
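The Hardy-Littlewood inequality above has a transparent finite-dimensional analogue: a sum $\sum_i a_i b_i$ of products of nonnegative numbers is largest when both sequences are ordered the same way. A small sketch of this discrete rearrangement principle (an illustration only; the spherical rearrangement used in this section is its continuum analogue):

```python
import random

def rearranged_dot(a, b):
    """Discrete analogue of  int u v <= int u* v* :
    dot product after sorting both sequences decreasingly."""
    return sum(x * y for x, y in zip(sorted(a, reverse=True),
                                     sorted(b, reverse=True)))

random.seed(0)
for _ in range(200):
    a = [random.random() for _ in range(25)]
    b = [random.random() for _ in range(25)]
    dot = sum(x * y for x, y in zip(a, b))
    # Rearranging both sequences decreasingly never decreases the sum:
    assert dot <= rearranged_dot(a, b) + 1e-12
```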
In the sequel, we use the {\it Poincar\'e inequality on a subdomain of spheres} of the following form:
\begin{proposition}
\label{prop-Poincare}
Let $S^n$ denote an $n$-dimensional unit sphere and $U \subset S^n$ be a relatively compact open set in $S^n$.
For any $1 \le p < \infty$,
there exists $C > 0$ depending on $p$ and $n$ such that the inequality
\[
\int_U | \nabla_{S^n} u |^p dS_{\omega} \ge C | U |^{-p/n} \int_U |u|^p dS_{\omega}
\]
holds for any $u \in W^{1,p}_0(U)$.
Here $|U|$ denotes the $n$-dimensional measure of $U \subset S^n$.
\end{proposition}
\begin{proof}
The inequality $\int_U | \nabla_{S^n} u |^p dS_{\omega} \ge C(U, p) \int_U |u|^p dS_{\omega}$ holds; see, for example, \cite{Saloff-Coste}, p.~86.
The constant $C(U, p)$ is bounded from below by the first Dirichlet eigenvalue $\lambda_p(U)$ of the $p$-Laplacian $-\Delta_p$ on the sphere,
and the estimate
\[
\la_p(U) \ge C(n, p) |U|^{-p/n}
\]
can be seen, for example, in \cite{Lieb} or \cite{Kawohl-Fridman} when the ambient space is $\re^n$.
Indeed, the same lower bound for the first Dirichlet eigenvalue also holds on spheres, as follows.
By spherically symmetric rearrangement, we have the Faber-Krahn type inequality
\[
\la_p(U) \ge \la_p(U^*)
\]
where $U^* \subset S^n$ is a geodesic ball with $|U| = |U^*|$.
Also we have a scaling property $\la_p(r U) = r^{-p} \la_p(U)$ for the first eigenvalue of the $p$-Laplacian.
Since $U^* = r B_1$ for some $r > 0$ where $B_1$ denotes the geodesic ball of radius $1$, we have $|U| = |U^*| = r^n |B_1|$,
which implies $r = (|U|/|B_1|)^{1/n}$.
Thus we have
\[
\la_p(U) \ge \la_p(U^*) = \la_p(r B_1) = r^{-p} \la_p(B_1) = \(\frac{|U|}{|B_1|}\)^{-p/n} \la_p(B_1).
\]
\end{proof}
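The scaling property $\la_p(r U) = r^{-p} \la_p(U)$ and the resulting $|U|^{-p/n}$ lower bound can be illustrated in the simplest flat case $p = 2$, $n = 1$, where the first Dirichlet eigenvalue of an interval of length $L$ is $(\pi/L)^2$. A sketch via inverse power iteration on the standard finite-difference Laplacian (a Euclidean toy model of the spherical statement; the discretization parameters are our own choices):

```python
import math

def first_dirichlet_eigenvalue(L: float, m: int = 400, iters: int = 80) -> float:
    """Smallest eigenvalue of -u'' on (0, L), u(0) = u(L) = 0, via inverse
    power iteration on the tridiagonal matrix with diagonal 2/h^2 and
    off-diagonal -1/h^2, where h = L/(m+1)."""
    h = L / (m + 1)
    d, off = 2.0 / h ** 2, -1.0 / h ** 2
    v = [1.0] * m
    for _ in range(iters):
        # Thomas algorithm: solve the tridiagonal system A x = v.
        c, x = [0.0] * m, v[:]
        c[0] = off / d
        x[0] /= d
        for i in range(1, m):
            denom = d - off * c[i - 1]
            c[i] = off / denom
            x[i] = (x[i] - off * x[i - 1]) / denom
        for i in range(m - 2, -1, -1):
            x[i] -= c[i] * x[i + 1]
        norm = math.sqrt(sum(t * t for t in x))
        v = [t / norm for t in x]
    # Rayleigh quotient of the normalized iterate approximates lambda_min.
    Av = [d * v[i] + off * ((v[i - 1] if i > 0 else 0.0) +
                            (v[i + 1] if i < m - 1 else 0.0)) for i in range(m)]
    return sum(a * b for a, b in zip(Av, v))

lam1 = first_dirichlet_eigenvalue(1.0)
assert abs(lam1 - math.pi ** 2) < 1e-2                          # (pi/L)^2, L = 1
assert abs(first_dirichlet_eigenvalue(2.0) - lam1 / 4) < 1e-2   # r = 2, p = 2
```

Doubling the interval ($r = 2$) divides the eigenvalue by $r^p = 4$, and since $|U| = L$ here, the value $(\pi/L)^2$ realizes the bound $\la_p(U) \ge C|U|^{-p/n}$ with $p/n = 2$.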
Define
\begin{equation}
\label{m(r)}
m(r) = \mathcal{H}^{N-1}( \{ x \in \Omega \, | \, |x| = r \}) = \mathcal{H}^{N-1}(\Omega \cap S^{N-1}(r))
\end{equation}
for $r \in (0, R)$.
Then we have the following.
\begin{theorem}
\label{theorem-inequality}
If
\begin{equation}
\label{m_0}
m_0 \equiv \limsup_{r \to 0} \, m(r)/r^{N-1} < \omega_{N-1}
\end{equation}
and
\begin{equation}
\label{m_R_finite}
m_R \equiv \limsup_{r \to R} \, m(r)/(R-r)^{N-1} < \infty,
\end{equation}
it holds that
\[
C_N(\Omega) > \( \frac{N-1}{N} \)^N.
\]
\end{theorem}
\begin{proof}
If $0 \in \Omega$, then $m(r) = r^{N-1}\omega_{N-1}$ for any small $r > 0$.
Thus, under the assumption \eqref{m_0}, the origin cannot be an interior point of $\Omega$.
We argue by contradiction: suppose that $C_N(\Omega) = \( \frac{N-1}{N} \)^N$, so that there exists a sequence $\{\phi_n\}_{n \in \N}$ in $C_0^\infty(\Omega)\setminus \{0\}$ such that
\[
\lim_{n \to \infty} \frac{\intO \big|\nabla \phi_n \big|^N dx}{\intO \frac{|\phi_n(x)|^N}{|x|^N \(\log\frac{R}{|x|} \)^N} dx} = C_N(\Omega) = \(\frac{N-1}{N} \)^N.
\]
Let $\phi_n^*$ be the spherical symmetric rearrangement of $\phi_n$.
Then by the above remarks, it follows that
\[
\lim_{n \to \infty} \frac{\int_{\Omega^*} \big |\nabla \phi^*_n \big|^N dx}{\int_{\Omega^*} \frac{|\phi^*_n(x)|^N}{|x|^N \( \log\frac{R}{|x|} \)^N} dx}
= C_N(\Omega^*) = \(\frac{N-1}{N} \)^N.
\]
Since $\textrm{supp}(\phi_n^*)$ is compact in $\Omega^*$,
we find positive constants $R_n$ and $\delta_n$ with $\lim_{n \to \infty}R_n$ $=$ $R$ and $\lim_{n \to \infty} \delta_n = 0$ such that
$\textrm{supp}(\phi^*_n) \subset B_{R_n} \setminus \ol{B_{\delta_n}}$.
We define
\[
\Omega^*_n \equiv \Omega^* \cap (B_{R_n} \setminus \ol{B_{\delta_n}}).
\]
Since the weight function $W_R$ is bounded from above and below by positive constants on $\Omega_n^*$,
there exists a minimizer $\psi_n \in W^{1,N}_0(\Omega^*_n)$ of
\[
c_n \equiv \inf \Big \{ \int_{\Omega^*_n} \big |\nabla \psi \big|^N dx \ \Big | \
\int_{\Omega^*_n} \frac{|\psi(x)|^N}{|x|^N \( \log\frac{R}{|x|} \)^N} dx = 1, \, \psi \in W_0^{1,N}(\Omega^*_n) \Big \}.
\]
We may assume that $\psi_n \ge 0$; then $\psi_n$ satisfies
\[
\textrm{div}(|\nabla \psi_n|^{N-2}\nabla \psi_n) + c_n \frac{\psi_n(x)^{N-1}}{|x|^N\big (\log\frac{R}{|x|}\big )^N} = 0 \quad \textrm {in} \ \Omega^*_n,
\]
and $\psi_n$ is rotationally symmetric with respect to $x_N$-axis.
We regard $\psi_n$ as defined on $\Omega^*$ by extending it by zero.
Then we see
\begin{equation}
\label{c_n}
\int_{\Omega^*} |\nabla \psi_n|^{N} dx = c_n \to \Big(\frac{N-1}{N}\Big )^N
\end{equation}
as $n \to \infty$.
Since $\( \frac{N-1}{N} \)^N$ is not attained by any element in $W_0^{1,N}(\Omega^*)$,
elliptic estimates imply that for any small $R^{\prime} > 0$ and any $\tilde{R} < R$ sufficiently close to $R$,
$\psi_n$ converges uniformly to $0$ on $\Omega^* \cap (B_{\tilde{R}} \setminus \ol{B_{R^{\prime}}})$
and $\psi_n$ converges weakly to $0$ in $W_0^{1,N}(\Omega^*)$ as $n \to \infty$.
We denote
\[
\Omega^*(r) \equiv \{ \omega \in S^{N-1} \ | \ r\omega \in \Omega^* \} \subset S^{N-1},
\]
so $m(r) = r^{N-1} \mathcal{H}^{N-1}(\Omega^*(r))$.
Then we note that
\begin{align}
\label{concentration}
1 &= \int_{\Omega^*} \frac{|\psi_n(x)|^N}{\big (|x|\log\frac{R}{|x|}\big )^N} dx =
\int_0^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N }{r\big (\log\frac{R}{r}\big )^N} dS_{\omega} dr \notag \\
&= \int_0^{R^\prime} \int_{\Omega^*(r)}\frac{|\psi_n(r\omega)|^N}{r\big (\log\frac{R}{r}\big )^N} dS_{\omega} dr
+ \int_{\tilde{R}}^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N }{r\big (\log\frac{R}{r}\big )^N} dS_{\omega} dr + o_n(1)
\end{align}
as $n \to \infty$.
First, let us assume
\begin{equation}
\label{concentration on zero}
\lim_{n \to \infty} \int_0^{R^{\prime}} \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{r \(\log\frac{R}{r} \)^N} dS_{\omega} dr \ge C
\end{equation}
for some $C > 0$.
Since $m_0 <\omega_{N-1}$ by assumption \eqref{m_0},
$\Omega^*(r)$ is a proper subset of $S^{N-1} \setminus \{ -e_N \} \simeq \re^{N-1}$ for any small $r >0$.
Thus there exists a constant $C > 0$ independent of small $r > 0$ and $n \in \N$ such that the Poincar\'e inequality
in Proposition \ref{prop-Poincare} (with $U = \Omega^*(r)$, $p = N$, $n = N-1$)
\begin{equation}
\label{Poincare}
\int_{\Omega^*(r)}|\nabla_{S^{N-1}} \psi_n(r\omega)|^N dS_{\omega} \ge C \int_{\Omega^*(r)}|\psi_n(r\omega)|^N dS_{\omega}
\end{equation}
holds true.
Note that
\[
\nabla \psi_n = \frac{x}{|x|}\frac{\partial \psi_n}{\partial r} + \frac{1}{r} \nabla_{S^{N-1}}\psi_n, \qquad
|\nabla \psi_n|^N \ge \left| \frac{\partial \psi_n}{\partial r} \right|^N + \frac{1}{r^N} |\nabla_{S^{N-1}}\psi_n|^N.
\]
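The pointwise bound $|\nabla \psi_n|^N \ge |\frac{\partial \psi_n}{\partial r}|^N + r^{-N}|\nabla_{S^{N-1}}\psi_n|^N$ is the elementary inequality $(a^2 + b^2)^{N/2} \ge a^N + b^N$ for $a, b \ge 0$ and $N \ge 2$ (that is, the $\ell^N$ norm is dominated by the $\ell^2$ norm in the plane), applied to the orthogonal decomposition of $\nabla \psi_n$. A quick randomized check of the scalar inequality:

```python
import random

def two_term_power_bound(a: float, b: float, N: int) -> bool:
    """Check (a^2 + b^2)^(N/2) >= a^N + b^N, the l^N <= l^2 norm bound,
    up to a small relative floating-point tolerance."""
    lhs = (a * a + b * b) ** (N / 2.0)
    return lhs + 1e-9 * (1.0 + lhs) >= a ** N + b ** N

random.seed(1)
assert all(two_term_power_bound(random.uniform(0, 10), random.uniform(0, 10), N)
           for N in range(2, 9) for _ in range(500))
```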
Then for each small $R^\prime > 0$, we have
\begin{align}
\label{poes1}
\int_{\Omega^*}|\nabla \psi_n|^N dx &= \int_0^R \int_{\Omega^*(r)} |\nabla \psi_n(r\omega)|^N r^{N-1} dS_{\omega} dr \notag \\
&\ge \int_0^{R^\prime} \int_{\Omega^*(r)} \frac{1}{r^{N}} |\nabla_{S^{N-1}} \psi_n|^N r^{N-1} dS_{\omega} dr \notag \\
&\ge C\int_{0}^{R^\prime} \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{r} dS_{\omega} dr
\end{align}
by the Poincar\'e inequality \eqref{Poincare}.
On the other hand, since
\[
\int_0^{R^\prime} \int_{\Omega^*(r)}\frac{|\psi_n(r\omega)|^N}{r} dS_{\omega} dr
\ge \( \log\frac{R}{R^\prime} \)^N
\int_0^{R^\prime} \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N }{r \(\log\frac{R}{r} \)^N} dS_{\omega} dr,
\]
we have by \eqref{concentration on zero},
\begin{equation}
\label{C2}
\int_0^{R^\prime} \int_{\Omega^*(r)}\frac{|\psi_n(r\omega)|^N}{r} dS_{\omega} dr \ge \(C + o_n(1) \) \( \log\frac{R}{R^\prime} \)^N
\end{equation}
where $o_n(1) \to 0$ as $n \to \infty$.
Then by \eqref{c_n}, \eqref{poes1}, and \eqref{C2}, we have
\begin{align*}
\( \frac{N-1}{N} \)^N + o_n(1) &= \int_{\Omega^*}|\nabla \psi_n|^N dx \ge \frac{C}{2} \( \log\frac{R}{R^\prime} \)^N
\end{align*}
as $n \to \infty$.
This inequality fails if $R^{\prime}$ is sufficiently small, a contradiction.
Thus \eqref{concentration on zero} cannot happen and
\[
\lim_{n \to \infty} \int_0^{R^{\prime}} \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{r \(\log\frac{R}{r} \)^N} dS_{\omega} dr = 0
\]
under the assumption \eqref{m_0}.
Therefore by \eqref{concentration}, we have
\begin{equation}
\label{concentration on boundary}
\lim_{n \to \infty} \int_{\tilde{R}}^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{r \(\log\frac{R}{r} \)^N} dS_{\omega} dr = 1.
\end{equation}
Next, we will prove that \eqref{concentration on boundary} cannot occur under the assumption \eqref{m_R_finite}.
In fact, we see by \eqref{concentration on boundary} and \eqref{Taylor} that
\begin{align*}
1 + o_n(1) &= \int_{\tilde{R}}^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{\( r \log \frac{R}{r} \)^N}r^{N-1} dS_{\omega} dr \\
&= (1+o(1)) R^{N-1}\int_{\tilde{R}}^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{\( R - r \)^N}dS_{\omega} dr,
\end{align*}
where $o_n(1) \to 0$ as $n \to \infty$ and $o(1) \to 0$ as $\tilde{R} \to R$.
Thus we have
\begin{equation}
\label{ap}
\lim_{n \to \infty} \int_{\tilde{R}}^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{\( R - r \)^N}dS_{\omega} dr = (1 + o(1)) R^{-(N-1)}
\end{equation}
as $\tilde{R} \to R$.
On the other hand, since $\psi_n(r\omega) \big|_{r = R} = 0$, we can apply the one-dimensional Hardy inequality
\begin{equation}
\label{1D_Hardy}
\(\frac{N-1}{N}\)^N \int_{\tilde{R}}^R \frac{|\psi_n(r\omega)|^N}{\(R - r \)^N} dr \le \int_{\tilde{R}}^R \bigg |\frac{\partial \psi_n(r\omega)}{\partial r}\bigg |^N dr
\end{equation}
to $\psi_n(r\omega)$.
Note that the best constant $\(\frac{N-1}{N}\)^N$ in the inequality \eqref{1D_Hardy} is the same as, by assumption, the value of $C_N(\Omega^*)$.
Then \eqref{1D_Hardy} implies
\begin{align*}
\(\frac{N-1}{N}\)^N \int_{\tilde{R}}^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{\(R - r \)^N}dS_{\omega} dr
&\le \int_{\tilde{R}}^R \int_{\Omega^*(r)} \bigg |\frac{\partial \psi_n}{\partial r}(r\omega) \bigg |^N dS_{\omega} dr \\
&\le (1+o(1)) R^{-(N-1)} \int_{\Omega^*} \bigg|\frac{\partial \psi_n}{\partial r}(x) \bigg|^N dx.
\end{align*}
The above inequality, \eqref{ap} and $C_N(\Omega^*) = (\frac{N-1}{N})^N = \lim_{n \to \infty} \int_{\Omega^*} |\nabla \psi_n(x)|^N dx$ by \eqref{c_n}
imply that
\[
\lim_{n \to \infty} \int_{\Omega^*}|\nabla \psi_n|^N dx \le \lim_{n \to \infty} \int_{\Omega^*}\bigg|\frac{\partial \psi_n}{\partial r}(x) \bigg|^N dx.
\]
The reverse inequality holds trivially, thus we see that
\[
\lim_{n \to \infty} \int_{\Omega^*}|\nabla \psi_n|^N dx
= \lim_{n \to \infty} \int_{\Omega^*}\bigg|\frac{\partial \psi_n}{\partial r}\bigg|^N dx,
\]
which implies
\begin{equation}
\label{angle vanish}
\lim_{n \to \infty} \int_{R^\prime}^R \int_{r \Omega^*(r)} |\nabla_{S^{N-1}(r)}\psi_n(\sigma)|^N \, d\sigma_r \, dr = 0,
\end{equation}
here $\sigma = r\omega \in S^{N-1}(r)$, $d\sigma_r = r^{N-1} dS_{\omega}$ is a volume element of a geodesic ball $r\Omega^*(r)$ with center $r e_N$ in $S^{N-1}(r)$,
and $\nabla_{S^{N-1}(r)} = (1/r) \nabla_{S^{N-1}}$.
From the assumption $m_R < \infty$ in \eqref{m_R_finite}, there exists a constant $ C > 0$ independent of $r \in (\tilde{R}, R)$ and $n$ such that
\[
r^{N-1} \mathcal{H}^{N-1}(\Omega^*(r)) \le C(R-r)^{N-1}
\]
holds true.
This implies that \[\( \mathcal{H}^{N-1}(r \Omega^*(r)) \)^{-N/(N-1)} \ge D (R - r)^{-N},\]
where $D = C^{-N/(N-1)} > 0$ independent of $r \in (\tilde{R}, R)$ and $n$.
Then, by the Poincar\'e inequality in Proposition \ref{prop-Poincare} ($n = N-1$, $p = N$) on the spherical cap $U = r \Omega^*(r) \subset S^{N-1}(r)$,
\begin{equation}
\label{Poincare2}
\int_{r\Omega^*(r)} |\nabla_{S^{N-1}(r)}\psi_n(\sigma)|^N d\sigma_r \ge D \int_{r\Omega^*(r)} \frac{|\psi_n(\sigma)|^N}{|R-r|^N} d\sigma_r
\end{equation}
holds true.
Combining \eqref{angle vanish} and \eqref{Poincare2}, we have
\begin{align*}
o_n(1) &= \int_{\tilde{R}}^R \int_{r\Omega^*(r)} |\nabla_{S^{N-1}(r)}\psi_n(\sigma)|^N d\sigma_r dr \ge D \int_{\tilde{R}}^R \int_{r \Omega^*(r)} \frac{|\psi_n(\sigma)|^N}{|R-r|^N} d\sigma_r dr \\
&= (1+o(1)) D R^{N-1} \int_{\tilde{R}}^R \int_{\Omega^*(r)} \frac{|\psi_n(r\omega)|^N}{\(R - r \)^N} dS_{\omega} dr
\end{align*}
where $o_n(1) \to 0$ as $n \to \infty$ and $o(1) \to 0$ as $\tilde{R} \to R$.
Combining this with \eqref{ap} and letting $n \to \infty$, we obtain
\[
0 \ge D (1 + o(1))R^{N-1} \times (1 + o(1)) R^{-(N-1)} = D + o(1)
\]
as $\tilde{R} \to R$.
This is a contradiction, which completes the proof.
\end{proof}
Next, we prove that a condition on $\Omega$ stronger than that in Theorem \ref{theorem-inequality} ensures the attainability of $C_N(\Omega)$.
The condition below implies that any boundary point $x \in \partial B_R \cap \pd\Omega$, if one exists, must be cuspidal,
while the origin, if $0 \in \partial \Omega$, may be a Lipschitz continuous boundary point.
\begin{theorem}
\label{theorem-existence}
For $r \in (0,R)$, let $m(r)$ be defined as \eqref{m(r)}.
If
\[
m_0 \equiv \limsup_{r \to 0} \, m(r)/r^{N-1} < \omega_{N-1}
\]
and
\begin{equation}
\label{m_R_0}
m_R \equiv \limsup_{r \to R} \, m(r)/(R-r)^{N-1} = 0,
\end{equation}
then
\[
C_N(\Omega) > \( \frac{N-1}{N} \)^N
\]
and $C_N(\Omega)$ is attained.
\end{theorem}
\begin{proof}
The strict inequality $C_N(\Omega) > \( \frac{N-1}{N} \)^N $ was proved in Theorem \ref{theorem-inequality}.
For each positive integer $n$, we define
\[
\Omega_n \equiv \Omega \cap (B_{R-1/n} \setminus \overline{B_{1/n}}).
\]
Then, since the weight function $W_R(x)$ is bounded on $\Omega_n$, there exists a minimizer $\psi_n$ of
\[
d_n \equiv \inf \Big \{ \int_{\Omega_n} \big |\nabla \psi\big|^N dx \ \Big | \
\int_{\Omega_n} \frac{|\psi(x)|^N}{|x|^N \(\log\frac{R}{|x|}\)^N} dx = 1, \, \psi \in W_0^{1,N}(\Omega_n) \Big \}.
\]
We may assume $\psi_n \ge 0$ and $\psi_n$ satisfies
\[
\textrm{div}(|\nabla \psi_n|^{N-2}\nabla \psi_n) +
d_n \frac{\psi_n(x)^{N-1}}{|x|^N \(\log\frac{R}{|x|} \)^N} = 0 \ \ \textrm { in } \ \Omega_n.
\]
We note that
\[
\int_{\Omega_n} |\nabla \psi_n|^{N} dx = d_n \to C_N(\Omega) \ \textrm { as } \ n \to \infty.
\]
Let $u$ be a weak limit of (a subsequence of) the sequence $\{\psi_n\}_{n \in \N}$ in $W_0^{1,N}(\Omega)$.
Then, we see that for each positive integer $n_0$,
$\psi_n$ converges uniformly to $u$ in $C^{1}(\Omega_{n_0})$,
and that
\[
\textrm{div}(|\nabla u|^{N-2}\nabla u) +
C_N(\Omega) \frac{|u(x)|^{N-1}}{|x|^N \(\log\frac{R}{|x|} \)^N} = 0, \ \ u \ge 0 \ \ \textrm {in} \ \Omega.
\]
Now it suffices to prove that $u \not\equiv 0$ in $\Omega$; then $u$ is a minimizer for $C_N(\Omega)$.
Suppose to the contrary that $u \equiv 0$.
Then, we see that for each positive integer $n_0$, $\psi_n$ converges uniformly to $0$ on $\Omega_{n_0}$.
We denote
\[
\Omega(r) \equiv \{ \omega \in S^{N-1} \ | \ r\omega \in \Omega \} \subset S^{N-1}.
\]
Since $m_0 <\omega_{N-1}$,
by spherically symmetric rearrangement together with the P\'olya--Szeg\H{o} and Poincar\'e inequalities,
we see there exists a constant $C > 0$, independent of small $r > 0$ and $n \in \N$, such that
\[
\int_{\Omega(r)}|\nabla_{S^{N-1}} \psi_n|^N dS_{\omega} \ge C\int_{\Omega(r)}|\psi_n|^N dS_{\omega},
\]
see the proof of Theorem \ref{theorem-inequality}.
Then, we see that for each large positive integer $n_0$,
\begin{align}
\label{poes2}
\intO |\nabla \psi_n|^N dx &\ge \int_0^{1/n_0} \int_{\Omega(r)} |\nabla_{S^{N-1}} \psi_n(r\omega)|^N r^{-1} dS_{\omega} dr \notag \\
&\ge C\int_{0}^{1/n_0} \int_{\Omega(r)}|\psi_n(r\omega)|^N r^{-1} dS_{\omega} dr.
\end{align}
Put $f_n(r) \equiv \int_{\Omega(r)} \frac{|\psi_n(r\omega)|^N}{r \(\log\frac{R}{r} \)^N} dS_{\omega}$.
Then we have
\begin{align*}
1 = &\intO \frac{|\psi_n(x)|^N}{\(|x|\log\frac{R}{|x|}\)^N} dx = \int_0^R \int_{\Omega(r)} \frac{|\psi_n(r\omega)|^N }{r \(\log\frac{R}{r} \)^N} dS_{\omega} dr \\
& = \int_0^{1/n_0} f_n(r) dr + \int_{1/n_0}^{R-1/n_0} f_n(r) dr + \int_{R-1/n_0}^{R} f_n(r) dr,
\end{align*}
and that
\begin{align*}
&\int_0^{1/n_0} \int_{\Omega(r)} \frac{|\psi_n(r\omega)|^N }{r \(\log\frac{R}{r} \)^N} dS_{\omega} dr
\le \( \log\frac{R}{1/n_0} \)^{-N} \int_0^{1/n_0} \int_{\Omega(r)}\frac{|\psi_n(r\omega)|^N}{r} dS_{\omega} dr.
\end{align*}
Then, \eqref{poes2} implies that for each large positive integer $n_0$,
\[
\int_0^{1/n_0} \int_{\Omega(r)} \frac{|\psi_n(r\omega)|^N }{r \(\log\frac{R}{r} \)^N} dS_{\omega} dr
\le \big (\log\frac{R}{1/n_0}\big )^{-N} \frac{d_n}{C}.
\]
The right-hand side of the above inequality can be made arbitrarily small by taking $n_0$ large; thus we have
$\lim_{n \to \infty} \int_0^{1/n_0} f_n(r) dr = 0$.
Since $\psi_n$ converges uniformly to $0$ on $\Omega_{n_0}$, we also have $\lim_{n \to \infty} \int_{1/n_0}^{R-1/n_0} f_n(r) dr = 0$,
and we deduce that for each large positive integer $n_0$,
\[
\lim_{n \to \infty} \int_{R-1/n_0}^R f_n(r) dr = 1.
\]
Now, as in the proof of Theorem \ref{theorem-inequality},
let $\Omega^*(r) \subset S^{N-1}$ be a geodesic ball with the center $e_N$ such that the $(N-1)$-dimensional measure of $\Omega^*(r)$ equals that of $\Omega(r)$.
Let $\psi^*_n$ be the spherical symmetric rearrangement of $\psi_n$ and
put $f^*_n(r) = \int_{\Omega^*(r)} \frac{|\psi^*_n(r\omega)|^N }{r\big (\log\frac{R}{r}\big )^N} dS_{\omega}$.
Since $r\log(R/r) = (R-r)(1 + o(1))$ as $r \to R$,
we see that
\begin{equation}
\label{E1}
f^*_n(r) = \int_{\Omega^*(r)} \frac{|\psi^*_n(r\omega)|^N }{r\big (\log\frac{R}{r}\big )^N} dS_{\omega} = (1 + o(1)) R^{N-1} \int_{\Omega^*(r)} \frac{|\psi^*_n(r\omega)|^N }{(R-r)^N} dS_{\omega}
\end{equation}
where $o(1) \to 0$ as $r \to R$, uniformly in $n$.
On the other hand, by the assumption $m_R = 0$,
there exists $h(r) > 0$ with $h(r) \to 0$ as $r \to R$ such that $\mathcal{H}^{N-1}(r \Omega^*(r)) \le h(r) (R - r)^{N-1}$.
Thus
\[
\( \mathcal{H}^{N-1}(\Omega^*(r)) \)^{-N/(N-1)} \ge r^N \( h(r) \)^{-N/(N-1)} (R - r)^{-N}.
\]
Put $g(r) = r^N ( h(r) )^{-N/(N-1)}$. Then $\lim_{r \to R} g(r) = \infty$ and the Poincar\'e inequality in Proposition \ref{prop-Poincare}
(with $U = \Omega^*(r)$, $p = N$, $n = N-1$)
\begin{equation}
\label{E2}
\int_{\Omega^*(r)} |\nabla_{S^{N-1}}\psi^*_n(r\omega)|^N dS_{\omega} \ge C g(r)\int_{\Omega^*(r)} \frac{|\psi^*_n(r\omega)|^N}{|R-r|^N} dS_{\omega}
\end{equation}
holds. Here $C = C(N) >0$ is an absolute constant.
Then by \eqref{E1} and \eqref{E2}, we see
\[
\int_{\Omega^*(r)} |\nabla_{S^{N-1}}\psi^*_n(r\omega)|^N dS_{\omega} \ge \frac{C}{2} g(r) \frac{f^*_n(r)}{R^{N-1}}
\]
and we may apply the P\'olya--Szeg\H{o} inequality
\[
\int_{\Omega(r)} |\nabla_{S^{N-1}} \psi_n(r\omega)|^N dS_{\omega} \ge \int_{\Omega^*(r)} |\nabla_{S^{N-1}} \psi^*_n(r\omega)|^N dS_{\omega}.
\]
Then for large $n_0$, we have
\begin{align*}
&\intO |\nabla \psi_n|^N dx \ge \int_{R - 1/n_0}^R \int_{\Omega(r)} |\nabla_{S^{N-1}} \psi_n(r\omega)|^N dS_{\omega} dr \\
&\ge \int_{R-1/n_0}^{R} \frac{C}{2} \frac{g(r) f^*_n(r)}{R^{N-1}} dr \ge \frac{C g(r^*)}{2 R^{N-1}} \int_{R-1/n_0}^{R} f^*_n(r) dr
= \frac{C g(r^*)}{2 R^{N-1}} (1 + o_n(1))
\end{align*}
where $g(r^*) \equiv \inf_{r \in (R-1/n_0, R)} g(r)$.
Since $\lim_{r \to R} g(r) = \infty$, $g(r^*) \to \infty$ as $n_0 \to \infty$, and
we conclude that $\lim_{n \to \infty}\int_{\Omega}|\nabla \psi_n|^N dx = \infty$.
This is a contradiction; thus $C_N(\Omega)$ is attained.
\end{proof}
\end{section}
\begin{section}{Nonexistence of a minimizer for a domain $\Omega$ with $C_2(\Omega) > \frac{1}{4} $}
In this section, we provide a Lipschitz domain $\Omega$ in $\re^2$ on which $C_2(\Omega) > 1/4$ and $C_2(\Omega)$ is not attained. Recall Hardy's inequality \eqref{Hardy_half_inf} when $N = 2$:
\[
\inf \left\{ \int_{\re^2_{+}} |\nabla u|^2 dx \ \Big | \
\int_{\re^2_{+}} \frac{u^2}{(x_2)^2} dx = 1, \, u \in W_0^{1,2}(\re^2_+) \right\} = \frac {1}{4},
\]
and the best constant $1/4$ is not attained, where $x = (x_1, x_2)$.
For $a \in [0,\pi/2),$ we define
\[
E(a) \equiv \inf \Big \{ \frac{\int_{a}^{\pi-a}(\phi_\theta)^2 d\theta} {\int_{a}^{\pi-a} (\phi^2 / \sin^2 \theta) d\theta}
\ \Big | \ \phi \in C_0^\infty((a,\pi-a)) \setminus \{ 0 \} \Big \}.
\]
From \cite[Corollary 4.4]{Davies}, we see that
\begin{equation}
\label{Davies}
E \equiv E(0) = \inf \Big \{ \frac{\int_{0}^{\pi}(\phi_\theta)^2 d\theta} {\int_{0}^{\pi} (\phi^2/ \sin^2 \theta) d\theta}
\ \Big | \ \phi \in C_0^\infty((0,\pi)) \setminus \{ 0 \} \Big \} = \frac {1}{4}
\end{equation}
and $E$ is not achieved.
We prove these facts in the Appendix for the reader's convenience.
It is obvious that for $a \in (0,\pi/2),$ $E(a)$ is achieved by a positive function $\varphi_a$ on $(a,\pi-a).$
Since $E(0)$ is not achieved in $W^{1,2}_0(0,\pi)$, we have $E(a) > E(0) = \frac 14$ for $a \in (0,\pi/2)$: otherwise a minimizer for $E(a)$, extended by zero, would achieve $E(0)$.
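As a numerical illustration (not part of the proof; the snippet below assumes Python with numpy), one can discretize the Rayleigh quotient defining $E(0)$ by finite differences. The smallest discrete eigenvalue decreases toward $1/4$ as the grid is refined but stays strictly above it, consistent with \eqref{Davies} and the non-attainment of $E$:

```python
import numpy as np

def discrete_E0(n):
    """Minimum of the discretized quotient
       (sum (phi_{i+1}-phi_i)^2 / h) / (sum h * phi_i^2 / sin^2(theta_i))
    over grid functions on (0, pi) vanishing at both endpoints
    (n interior nodes, uniform spacing h)."""
    h = np.pi / (n + 1)
    theta = h * np.arange(1, n + 1)
    # stiffness matrix of -phi'' with Dirichlet boundary conditions
    K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    # weight m_i = h / sin^2(theta_i); reduce (K, M) to a symmetric problem
    s = np.sin(theta) / np.sqrt(h)      # s = m^{-1/2}
    A = (K * s).T * s                   # A = S K S with S = diag(s)
    return np.linalg.eigvalsh(A)[0]     # smallest eigenvalue
```

Refining the grid (say from 300 to 1200 interior nodes) should push the minimum down toward, but not below, $1/4$.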
\begin{theorem}
\label{theorem-nonexistence}
There exists a domain $\Omega \subset B_1 \subset \re^2$
such that
$C_2(\Omega) > \frac{1}{4}$ and $C_2(\Omega)$ is not attained.
\end{theorem}
\begin{proof}
For $a \in (0,\pi/2)$, we define a cone
\[
\mathbf{C}_a \equiv \{ (r\cos \theta, r\sin \theta) \in \re^2_+ \ | \ r \in (0,\infty), \theta \in (a,\pi-a)\} \subset \re^2_+.
\]
We define
\begin{align*}
R(y_1,y_2) &\equiv \((y_1)^2 + (1-y_2)^2 \) \( \log \frac{1}{((y_1)^2 + (1 -y_2)^2)^{1/2}} \)^2 \\
&= \frac{1}{4} h(r, \theta) \{ \log h(r,\theta) \}^2
\end{align*}
for $(y_1, y_2) = (r \cos \theta, r \sin \theta)$, where $h(r, \theta) = r^2 - 2 r \sin \theta + 1$.
Since \[ \log h(r, \theta) = h(r,\theta) - 1 - \frac{(h(r, \theta) - 1)^2}{2} + O(r^3) \textrm{ as } r \to 0, \]
we have
\begin{align}
\label{asymp}
\frac{R(y_1, y_2)}{(y_2)^2} &= \frac{(r^2 - 2r \sin \theta + 1) (4\sin^2 \theta - 4r \sin \theta (1 - 2\sin^2 \theta) + O(r^2))}{4 \sin^2 \theta} \notag \\
&= \frac{4 \sin^2 \theta - 4r \sin \theta + O(r^2)}{4 \sin^2 \theta}
\end{align}
as $r \to 0$.
Thus we see that
\[
\lim_{y_2 \to 0, (y_1,y_2) \in \mathbf{C}_a} R(y_1,y_2)/(y_2)^2 = 1
\]
for each $a > 0$.
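The expansion \eqref{asymp} is easy to check numerically (a sanity check only, not part of the argument; assumes Python with numpy): inside the cone, $R(y_1,y_2)/(y_2)^2 = 1 - r/\sin\theta + O(r^2)$, which in particular lies strictly below $1$ for small $r > 0$:

```python
import numpy as np

def weight_ratio(r, theta):
    """R(y1, y2) / (y2)^2 at (y1, y2) = (r cos(theta), r sin(theta)),
    with R(y1, y2) = (y1^2 + (1-y2)^2) * (log(1/sqrt(y1^2 + (1-y2)^2)))^2."""
    y1, y2 = r * np.cos(theta), r * np.sin(theta)
    h = y1 ** 2 + (1.0 - y2) ** 2
    R_weight = h * (0.5 * np.log(1.0 / h)) ** 2
    return R_weight / y2 ** 2
```

For $r = 0.01$ and $\theta$ well inside $(a, \pi - a)$, the exact ratio agrees with $1 - r/\sin\theta$ to about $O(r^2)$.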
From now on, we fix $a \in (\pi/4,\pi/2)$.
We define
\[
g(r) \equiv \inf \Big \{ \frac{R(y_1,y_2)}{(y_2)^2} \ \Big | \ (y_1,y_2) \in \mathbf{C}_a, \, y_1^2+y_2^2 = r^2 \Big \}.
\]
By \eqref{asymp}, we see that $\lim_{r \to 0}g(r) = 1$.
Further, we see that $g(r) < 1$ for small $r > 0$.
We take $r_0 \in (0,1/2)$ such that $g(r) < 1$ for any $r \in (0,r_0)$.
Note that $E(a)$ is monotone non-decreasing with respect to $a \in (0, \pi/2)$.
Now for each $r \in (0,r_0)$, we take $a(r) \in (a,\pi/2)$ such that
$E(a)/E(a(r)) = g(r) \in (0,1)$.
Since $\lim_{r \to 0}g(r) =1$, it follows that $\lim_{r \to 0} a(r) = a$.
Since $E$ is continuous on $(0,\pi/2)$ and $g$ on $(0,r_0)$,
$a(r)$ is continuous with respect to $r \in (0,r_0)$.
We define
\[
\tilde{\Omega} \equiv \{(r\cos\theta,r\sin\theta) \in \re^2_+ \ | \ r \in (0, r_0), \theta \in (a(r),\pi-a(r))\}
\]
and
\[
\Omega = \{(x_1,x_2) \in B_1 \ | \ (x_1,1-x_2) \in \tilde{\Omega} \} \subset B_1 \subset \re^2.
\]
We claim that $C_2(\Omega) = E(a) >\frac14 $ and $C_2(\Omega)$ is not attained.
For any $u \in C_0^\infty(\Omega),$ we define $\tilde{u}(y_1,y_2) = u(y_1,1-y_2)$ for $y = (y_1, y_2) \in \tilde{\Omega}$.
Then, we see that $\tilde{u} \in C_0^\infty(\tilde{\Omega})$ and
\[
\int_{\Omega} |\nabla u|^2 dx_1dx_2 = \int_{\tilde{\Omega}} |\nabla \tilde{u}|^2 dy_1dy_2
= \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} r(\tilde{u}_r)^2 + r^{-1}(\tilde{u}_\theta)^2 d\theta dr
\]
and
\[
\int_{\Omega} \frac{(u(x_1,x_2))^2}{|x|^2(\log |x|)^2} dx_1dx_2 = \int_{\tilde{\Omega}} \frac{(\tilde{u}(y_1,y_2))^2}{ R(y_1,y_2) } dy_1dy_2.
\]
First of all, we claim that $C_2(\Omega) \le E(a)$.
To prove this, we note that for any $a^\prime \in (a,\pi/2)$, we can find $\delta^\prime \in (0,r_0)$ such that
\[
\{(r\cos\theta,r\sin \theta) \in \re^2_+ \ | \ r \in (0,\delta^\prime), \theta \in (a^\prime,\pi-a^\prime)\} \subset \tilde{\Omega}.
\]
For any small $\e,\delta > 0$ with $4\e < \delta < \delta^\prime$,
we find a Lipschitz continuous function $\psi_\e^\delta$ satisfying
$\psi_\e^\delta(r) = 0$ for $r \le \e$ or $r \ge \delta$, $\psi_\e^\delta(r) = 1$ for $2\e \le r \le \delta/2$,
$|(\psi_\e^\delta)^\prime(r)| = 1/\e$ for $r \in (\e,2\e)$, and $|(\psi_\e^\delta)^\prime(r)| = 2/\delta$ for $r \in (\delta/2, \delta)$.
We define that for $y=(y_1,y_2) = (r \cos \theta,r\sin\theta) \in \tilde{\Omega}$ and $x=(x_1,x_2) \in \Omega$,
\[
\tilde{u}^{\delta}_{\e}(y_1,y_2) = \tilde{u}^{\delta}_{\e}(r,\theta) = \psi_\e^\delta(r)\varphi_{a^\prime}(\theta) \textrm { and }
u^{\delta}_{\e}(x_1,x_2) = \tilde{u}^{\delta}_{\e}(x_1,1-x_2).
\]
Then we see that
\begin{align*}
&\int_{\Omega} |\nabla u^{\delta}_{\e}|^2 dx = \int_{\tilde{\Omega}} |\nabla\tilde{u}^{\delta}_{\e} |^2 dy
= \int_0^\infty \int_{a^\prime}^{\pi-a^\prime} r((\tilde{u}^{\delta}_{\e})_r)^2 + r^{-1}((\tilde{u}^{\delta}_{\e})_{\theta})^2 d\theta dr \\
& = \( \int_\e^{2\e} ((\psi_{\e}^\delta)^{\prime}(r))^2 r dr + \int_{\delta/2}^{\delta} ((\psi_{\e}^\delta)^{\prime}(r))^2 r dr \) \int_{a^\prime}^{\pi-a^\prime}(\varphi_{a^\prime}(\theta))^2d\theta \\
&\quad + \int_\e^{\delta}\int_{a^\prime}^{\pi-a^\prime} r^{-1}(\psi_\e^\delta(r))^2\Big(\frac{d\varphi_{a^\prime}}{d\theta}\Big)^2 d\theta dr \\
& = 3\int_{a^\prime}^{\pi-a^\prime}(\varphi_{a^\prime}(\theta))^2d\theta + \int_\e^{\delta} r^{-1}(\psi_\e^\delta(r))^2 dr\int_{a^\prime}^{\pi-a^\prime}\Big(\frac{d\varphi_{a^\prime}}{d\theta}\Big )^2 d\theta
\end{align*}
and
\[
\int_{\Omega} \frac{(u^{\delta}_{\e}(x))^2}{|x|^2(\log |x|)^2} dx = \int_{\tilde{\Omega}}\frac{(\tilde{u}^{\delta}_{\e}(y))^2}{R(y_1,y_2)} dy = \int_\e^{\delta}\int_{a^\prime}^{\pi-a^\prime}\frac{(y_2)^2}{R(y_1,y_2)} r^{-1}(\psi_\e^\delta(r))^2 \Big(\frac{\varphi_{a^\prime}}{\sin \theta}\Big )^2 d\theta dr.
\]
Since $\lim_{\e \to 0}\int_\e^{\delta} r^{-1}(\psi^\delta_\e(r))^2 dr = \infty$ for each $\delta > 0,$
we see that
\[
\lim_{\e \to 0} \frac{\int_{\Omega}|\nabla u^\delta_\e|^2 dx}{ \int_{\Omega} \frac{|u^\delta_\e|^2}{|x|^2(\log |x|)^2} dx } \le
E(a^\prime) (\min_{r \in [0,\delta]}g(r))^{-1}.
\]
Then, letting $\delta \to 0$ and using $\lim_{r \to 0} g(r) = 1$, we obtain $C_2(\Omega) \le E(a^\prime)$ for any $a^\prime \in (a, \pi/2)$.
Since $E$ is continuous on $(0,\pi/2)$, letting $a^\prime \to a$ yields $C_2(\Omega) \le E(a)$.
Now for any $v \in W_0^{1,2}(\Omega)$ with $\tilde{v}(y_1,y_2) \equiv v(y_1,1-y_2) \in W_0^{1,2}(\tilde{\Omega}),$
we see that
\begin{align*}
\int_{\Omega} |\nabla v|^2 dx_1dx_2
& \ge \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} r(\tilde{v}_r)^2 + E(a(r))r^{-1}\frac{(\tilde{v})^2}{\sin^2 \theta} d\theta dr \\
& = \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} \Big [ (\tilde{v}_r)^2 + E(a(r))\frac{(\tilde{v})^2}{(y_2)^2}\Big] rd\theta dr \\
&= \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} \Big [ (\tilde{v}_r)^2 + E(a(r)) \frac{R(y_1,y_2)}{(y_2)^2}\frac{(\tilde{v})^2}{R(y_1,y_2)} \Big ] rd\theta dr \\
& \ge \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} \Big [ (\tilde{v}_r)^2 + E(a(r)) g(r)\frac{(\tilde{v})^2}{R(y_1,y_2)} \Big ] rd\theta dr \\
& = \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} \Big [ (\tilde{v}_r)^2 + E(a) \frac{(\tilde{v})^2}{R(y_1,y_2)} \Big ] rd\theta dr \\
& = \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} (\tilde{v}_r)^2 rd\theta dr + E(a) \int_{\tilde{\Omega}} \frac{(\tilde{v})^2}{R(y_1,y_2)} dy_1dy_2\\
& = \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} (\tilde{v}_r)^2 rd\theta dr + E(a) \int_{\Omega} \frac{(v(x))^2}{|x|^2(\log |x|)^2} dx.
\end{align*}
This implies that $C_2(\Omega) \ge E(a).$
Combining the above upper and lower estimates, we see that $C_2(\Omega) = E(a) > \frac14.$
Moreover, the lower estimate shows that
for any $u \in W_0^{1,2}(\Omega)$ (with $\tilde{u}(y_1,y_2) \equiv u(y_1,1-y_2)$),
\begin{equation} \label{ces} \int_{\Omega} |\nabla u|^2 dx_1dx_2 \ge \int_0^{r_0} \int_{a(r)}^{\pi-a(r)} (\tilde{u}_r)^2 rd\theta dr + E(a) \int_{\Omega} \frac{(u(x_1,x_2))^2}{|x|^2(\log |x|)^2} dx_1dx_2.\end{equation}
If $C_2(\Omega)$ is attained by $u \in W_0^{1,2}(\Omega) \setminus \{0\},$
we see from \eqref{ces} that $\tilde{u}_r \equiv 0$ in $\tilde{\Omega}$.
This contradicts $u \in W_0^{1,2}(\Omega) \setminus \{0\}$, since $\tilde{u}$ would then be independent of $r$ yet vanish on the outer boundary portion $\{r = r_0\}$, forcing $u \equiv 0$.
Thus we conclude that $C_2(\Omega)$ is not attained in $W_0^{1,2}(\Omega)$.
\end{proof}
\begin{remark}
For the domain $\Omega$ in Theorem \ref{theorem-nonexistence},
let $P, Q$ be two points in $\pd \Omega \cap \pd B_r$ when $r$ is close to $1$.
Then $m(r)$ is the length of the arc $\stackrel{\frown}{PQ}$, which is larger than the length of the segment $PQ$.
Thus it is easy to see that in this case, $m_0 = 0$ and
\[
m_1 = \limsup_{r \to 1} m(r)/(1-r) \ge 2\cos a > 0;
\]
see Theorem \ref{theorem-existence}.
\end{remark}
\end{section}
| {
"timestamp": "2018-03-09T02:04:56",
"yymm": "1707",
"arxiv_id": "1707.04018",
"language": "en",
"url": "https://arxiv.org/abs/1707.04018",
"abstract": "In this paper, we study Hardy's inequality in a limiting case: $$\\int_{\\Omega} |\\nabla u |^N dx \\ge C_N(\\Omega) \\int_{\\Omega} \\frac{|u(x)|^N}{|x|^N \\left(\\log \\frac{R}{|x|} \\right)^N} dx $$ for functions $u \\in W^{1,N}_0(\\Omega)$, where $\\Omega$ is a bounded domain in $\\mathbb{R}^N$ with $R = \\sup_{x \\in \\Omega} |x|$. We study the (non-)attainability of the best constant $C_N(\\Omega)$ in several cases. We provide sufficient conditions that assure $C_N(\\Omega) > C_N(B_R)$ and $C_N(\\Omega)$ is attained, here $B_R$ is the $N$-dimensional ball with center the origin and radius $R$. Also we provide an example of $\\Omega \\subset \\mathbb{R}^2$ such that $C_2(\\Omega) > C_2(B_R) = 1/4$ and $C_2(\\Omega)$ is not attained.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "Hardy's inequality in a limiting case on general bounded domains",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9861513901914405,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7086428532744614
} |
https://arxiv.org/abs/2205.14260 | A Note on the Fibonacci Sequence and Schreier-type Sets | A set $A$ of positive integers is said to be Schreier if either $A = \emptyset$ or $\min A\ge |A|$. We give a bijective map to prove the recurrence of the sequence $(|\mathcal{K}_{n, p, q}|)_{n=1}^\infty$ (for fixed $p\ge 1$ and $q\ge 2$), where $$\mathcal{K}_{n, p, q} \ = \ \{A\subset \{1, \ldots, n\}\,:\, \mbox{either }A = \emptyset \mbox{ or } (\max A-\max_2 A = p\mbox{ and }\min A\ge |A|\ge q)\}$$ and $\max_2 A$ is the second largest integer in $A$, given that $|A|\ge 2$. When $p = 1$ and $q=2$, we have that $(|\mathcal{K}_{n, 1, 2}|)_{n=1}^\infty$ is the Fibonacci sequence. As a corollary, we obtain a new combinatorial interpretation for the sequence $(F_n + n)_{n=1}^\infty$. | \section{Introduction}
The Fibonacci sequence is defined as follows: $F_1 = F_2 = 1$ and $F_n = F_{n-1} + F_{n-2}$ for $n\ge 3$.
There have been many nice combinatorial interpretations of $(F_n)_{n=1}^\infty$ (see \seqnum{A000045} for a quick summary), and the Fibonacci sequence often appears unexpectedly and satisfyingly in many counting problems. A. Bird \cite{B} showed that for each $n\ge 1$, if we let
$$\mathcal{A}_n \ :=\ \{A\subset\{1, \ldots, n\}\,:\, n\in A\mbox{ and } \min A\ge |A|\},$$
then $|\mathcal{A}_n| = F_n$. The condition $\min A\ge |A|$ is called the \textit{Schreier condition}, and a set that satisfies the Schreier condition is called a \textit{Schreier set}. (The empty set satisfies the Schreier condition vacuously.) Schreier sets appeared in a paper of Schreier \cite{S} who used them to solve a problem in Banach space theory. The Schreier condition is also the central concept in a celebrated theorem by Odell \cite{Od}. Moreover, Schreier sets were independently discovered in combinatorics and appeared in Ramsey-type theorems for subsets of $\mathbb{N}$. Following the discovery by A. Bird, there has been research on various recurrences produced by counting Schreier-type sets (see \cite{BCF, C0, C1, C2, C3, M}). In this short note, we first retrieve the Fibonacci sequence from a different counting problem than the one by A. Bird. In particular, for $n\ge 1$, define the set
$$\mathcal{K}_n\ :=\ \{A\subset [n]\,:\, \mbox{either }A = \emptyset \mbox{ or } (\max A-1\in A\mbox{ and }\min A\ge |A|)\},$$
where $[n] = \{1, \ldots, n\}$. While we fix the maximum of sets in $\mathcal{A}_n$, we do not fix the maximum of sets in $\mathcal{K}_n$. Instead, we fix the distance between the largest and the second largest numbers of sets in $\mathcal{K}_n$.
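Bird's count $|\mathcal{A}_n| = F_n$ is easy to confirm by brute-force enumeration for small $n$ (a computational sanity check only, assuming Python; it is of course not a proof):

```python
from itertools import combinations

def bird_count(n):
    """|A_n|: subsets A of {1,...,n} with n in A and min A >= |A| (Schreier)."""
    count = 0
    for k in range(1, n + 1):
        # choose the k-1 elements of A other than n
        for rest in combinations(range(1, n), k - 1):
            A = rest + (n,)
            if A[0] >= k:
                count += 1
    return count
```

Comparing `bird_count(n)` with the Fibonacci numbers for $n = 1, \ldots, 12$ reproduces Bird's theorem in that range.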
\begin{thm}\label{m1}
For $n\ge 1$, $|\mathcal{K}_n| = F_n$.
\end{thm}
We have the following immediate corollary, which gives the sequence $(F_n + n)_{n=1}^\infty$ (see \seqnum{A002062}).
\begin{cor}
Let
$$\mathcal{K}'_n\ :=\ \{A\subset [n]\,:\, \mbox{either }|A| \le 1 \mbox{ or } (\max A-1\in A\mbox{ and }\min A\ge |A|)\}.$$
Then $|\mathcal{K}'_n| = F_n + n$ for all $n\ge 1$.
\end{cor}
\begin{proof}
Clearly, $|\mathcal{K}'_n| - |\mathcal{K}_n| = n$ for all $n\ge 1$, since $\mathcal{K}'_n\backslash \mathcal{K}_n$ consists precisely of the $n$ singletons $\{1\}, \ldots, \{n\}$. Using Theorem \ref{m1}, we are done.
\end{proof}
Our next result is a generalization of Theorem \ref{m1}. Let $\max_2 A$ be the second largest number in $A$ if $|A|\ge 2$. For $n, p\ge 1$ and $q\ge 2$, define
$$\mathcal{K}_{n, p, q} \ :=\ \{A\subset [n]\,:\, \mbox{either }A = \emptyset \mbox{ or } (\max A-\max_2 A = p\mbox{ and }\min A\ge |A|\ge q)\}.$$
Obviously, $\mathcal{K}_{n, 1, 2} = \mathcal{K}_n$.
\begin{thm}\label{m2}
Fix $n, p\ge 1$ and $q\ge 2$. We have
$$|\mathcal{K}_{n, p, q}| \ =\ \begin{cases}
1 &\mbox{ if } 1\le n\le p+2q-3,\\
|\mathcal{K}_{n-1, p, q}| + |\mathcal{K}_{n-2, p, q}| + \binom{n-p-q}{q-2} - 1 &\mbox{ if }n > p+2q-3.
\end{cases}$$
\end{thm}
\begin{rek}\normalfont
The case $p = 1$ and $q = 2$ in Theorem \ref{m2} gives Theorem \ref{m1}. Though Theorem \ref{m2} implies Theorem \ref{m1}, we decide to prove Theorem \ref{m1} independently for two reasons. First, Theorem \ref{m1} involves the Fibonacci sequence explicitly. Second and more importantly, the proof of Theorem \ref{m2} is a bit more involved, which requires the splitting of the set $\mathcal{K}_{n, p, q}\backslash \mathcal{K}_{n-1, p, q}$ into two disjoint sets, while the proof of Theorem \ref{m1} does not. However, the proof of Theorem \ref{m1} gives the intuition behind the proof of Theorem \ref{m2}.
\end{rek}
\section{Schreier-type sets and the Fibonacci sequence}
To prove Theorem \ref{m1}, we observe that $\mathcal{K}_n\subset \mathcal{K}_{n+1}$, so we need only to define a bijection between $\mathcal{K}_{n+1}\backslash \mathcal{K}_{n}$ and $\mathcal{K}_{n-1}$. For a set $A\subset\mathbb{N}$ and a number $r$, write $A + r := \{a+r: a\in A\}$.
\begin{proof}[Proof of Theorem \ref{m1}]
It is easy to check that $|\mathcal{K}_1| = |\mathcal{K}_2| = 1$. We need only to show that $|\mathcal{K}_{n+1}| - |\mathcal{K}_n| = |\mathcal{K}_{n-1}|$ for all $n\ge 2$. Fix $n\ge 2$. By definition, $\mathcal{K}_n\subset \mathcal{K}_{n+1}$ and
$$\mathcal{K}_{n+1}\backslash \mathcal{K}_n \ =\ \{A\subset [n+1]\,:\, n,n+1\in A\mbox{ and }\min A\ge |A|\}.$$
We define a bijection $\pi: \mathcal{K}_{n-1}\rightarrow \mathcal{K}_{n+1}\backslash \mathcal{K}_n$: for $A\in \mathcal{K}_{n-1}$,
$$\pi(A)\ :=\ \begin{cases}(A\backslash \{\max A\} + 1)\cup \{n, n+1\} &\mbox{ if } A\neq \emptyset,\\ \{n, n+1\}&\mbox{ if } A = \emptyset.\end{cases}$$
We now verify that $\pi$ is a well-defined bijection.
Firstly, $\pi$ is well-defined. Let $A\in \mathcal{K}_{n-1}$ be nonempty. Then $n, n+1\in \pi(A)$ and
$$\min \pi(A)\ =\ \min A + 1\ \ge\ |A| + 1\ =\ |\pi(A)|.$$
Hence, $\pi(A) \in \mathcal{K}_{n+1}\backslash \mathcal{K}_n$.
Secondly, $\pi$ is one-to-one. Pick two sets $A_1, A_2\in \mathcal{K}_{n-1}$ and suppose that $\pi(A_1) = \pi(A_2)$. If $A_1 = \emptyset$, then $|\pi(A_2)| = |\pi(A_1)| = 2$, which implies that $A_2 = \emptyset$ because otherwise, $|\pi(A_2)| = |A_2| + 1 \ge 3$. By the same reasoning, if $A_1\neq \emptyset$, then $A_2\neq \emptyset$. Hence, $\pi(A_1) = \pi(A_2)$ implies that $A_1\backslash \{\max A_1\} = A_2\backslash \{\max A_2\}$. Using the fact that $\max A_i = \max_2 A_i + 1$ for $i=1, 2$, we conclude that $A_1 = A_2$.
Lastly, $\pi$ is onto. Let $B\in \mathcal{K}_{n+1}\backslash \mathcal{K}_n$. If $B = \{n, n+1\}$, then $\pi(\emptyset) = B$. If $|B|\ge 3$, then define $E = (B\backslash \{n, n+1\})-1$ and $F = E\cup\{\max E + 1\}$. We need only to verify that $F\in \mathcal{K}_{n-1}$ as it then follows that $\pi(F) = B$. We have
\begin{align*}
\min F &\ =\ \min E\ =\ \min B-1 \ \ge\ |B| - 1 \ = \ |F|\\
\max F &\ =\ \max E + 1\ \le\ n-1.
\end{align*}
Therefore, $\pi$ is indeed onto. This completes our proof.
\end{proof}
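The bijective proof above can be accompanied by a direct enumeration of $\mathcal{K}_n$ for small $n$ (a sanity check only, assuming Python):

```python
from itertools import combinations

def count_K(n):
    """|K_n|: subsets A of [n] that are empty, or satisfy
    max A - 1 in A and min A >= |A|."""
    count = 1  # the empty set
    for k in range(2, n + 1):  # a singleton cannot contain max A - 1
        for A in combinations(range(1, n + 1), k):  # A is sorted
            if A[-1] - A[-2] == 1 and A[0] >= k:
                count += 1
    return count
```

For $n$ up to $14$ this reproduces the Fibonacci numbers, as Theorem \ref{m1} asserts.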
\section{An order-two recurrence that involves a binomial coefficient}
We prove Theorem \ref{m2} by recalling that $\mathcal{K}_{n-1, p, q}\subset \mathcal{K}_{n, p, q}$, then writing $$\mathcal{K}_{n, p, q}\backslash \mathcal{K}_{n-1, p, q} \ = \ \mathcal{S}\cup \mathcal{T}$$ for certain disjoint sets $\mathcal{S}$ and $\mathcal{T}$, and finally verifying that $|\mathcal{S}| = |\mathcal{K}_{n-2, p, q}|- 1$, while $|\mathcal{T}| = \binom{n-p-q}{q-2}$.
\begin{proof}[Proof of Theorem \ref{m2}]
Fix $p\ge 1$ and $q\ge 2$. First, we verify that for $1\le n\le p+2q-3$, $|\mathcal{K}_{n, p, q}| = 1$. Recall that
$$\mathcal{K}_{n, p, q} \ =\ \{A\subset [n]\,:\, \mbox{either }A = \emptyset \mbox{ or } (\max A-\max_2 A = p\mbox{ and }\min A\ge |A|\ge q)\}.$$ Suppose $A$ is nonempty and $A\in \mathcal{K}_{n,p,q}$. Write $A = \{a_1 < \cdots < a_k\}$; then $a_1\ge q$, $a_k\le p+2q-3$, and $a_{k-1} = a_k - p\le 2q-3$. Hence $\{a_1,\ldots, a_{k-1}\}\subseteq \{q, \ldots, 2q-3\}$, a set with $q-2$ elements, so
$$|\{a_1,\ldots, a_{k-1}\}|\ \le\ q-2$$
and so, $|A|\le q-1$, which contradicts the requirement that $|A|\ge q$. Therefore, for $1\le n\le p+2q-3$, $\mathcal{K}_{n,p,q} = \{\emptyset\}$.
For $n\ge p+2q-2$, we shall show that $|\mathcal{K}_{n,p,q}| = |\mathcal{K}_{n-1, p, q}| + |\mathcal{K}_{n-2, p, q}| + \binom{n-p-q}{q-2} - 1$. Let $\mathcal{S} = \{A\in \mathcal{K}_{n, p, q}\backslash \mathcal{K}_{n-1, p, q}: |A|\ge q+1\}$ and $\mathcal{T} = \{A\in \mathcal{K}_{n, p, q}\backslash \mathcal{K}_{n-1, p, q}: |A| = q\}$.
We define a bijection $\pi: \mathcal{K}_{n-2, p, q}\backslash \{\emptyset\}\rightarrow \mathcal{S}$: for a nonempty set $A\in \mathcal{K}_{n-2, p, q}$,
$$\pi(A)\ :=\ (A\backslash \{\max A\} + 1)\cup \{n-p, n\}.$$
Firstly, $\pi$ is well-defined. Since $n\in \pi(A)$, $\pi(A)\notin \mathcal{K}_{n-1, p, q}$. That $\max A\le n-2$ implies that $\max_2 A\le n-2-p$, so $\pi(A)$ does not contain any number strictly between $n-p$ and $n$. Hence, $$\max \pi(A) - \max_2 \pi(A)\ =\ n - (n-p)\ =\ p.$$
Also, $|\pi(A)| = |A| + 1\ge q+1$ and
$$\min \pi(A)\ =\ \min A + 1 \ \ge\ |A| + 1 = |\pi(A)|.$$
Therefore, $\pi(A)\in \mathcal{S}$.
Next, $\pi$ is one-to-one. Let $A_1, A_2\in \mathcal{K}_{n-2, p, q}\backslash \{\emptyset\}$ such that $\pi(A_1) = \pi(A_2)$. Note that $$\max (A_i\backslash \{\max A_i\} +1)\ \le\ (n-2-p)+1 \ =\ n-1-p,\mbox{ for }i = 1,2.$$
Hence, $\pi(A_1) = \pi(A_2)$ implies that $A_1\backslash \{\max A_1\} = A_2\backslash \{\max A_2\}$. So, $\max_2 A_1 = \max_2 A_2$, which, combined with $\max A_i - \max_2 A_i = p$ for $i=1, 2$, gives $A_1 = A_2$. We conclude that $\pi$ is one-to-one.
Next, $\pi$ is onto. Take $A\in \mathcal{S}$. Then $n, n-p\in A$ and $|A|\ge q+1$. Let $B = A\backslash \{n-p, n\} - 1$ and $\ell = \max B$. Let $C = B\cup \{\ell + p\}$. We claim that $C\in \mathcal{K}_{n-2, p, q}$. Indeed,
\begin{align*}\max C&\ =\ \max B + p\ \le\ n-p-1-1 + p \ =\ n-2,\\
\min C &\ =\ \min A - 1\ \ge\ |A| - 1\ =\ |B| + 1 \ =\ |C|,\mbox{ and }\\
|C| &\ =\ |B| + 1 \ =\ |A| - 1\ \ge\ (q+1)-1\ =\ q.\end{align*}
It is obvious from how we define $C$ that $\max C - \max_2 C = p$. Finally, $\pi(C) = A$ by construction.
We have shown that $|\mathcal{S}| = |\mathcal{K}_{n-2, p, q}\backslash \{\emptyset\}| = |\mathcal{K}_{n-2, p, q}| - 1$. It remains to show that
$$|\mathcal{T}|\ =\ \binom{n-p-q}{q-2}.$$
A set $A$ is in $\mathcal{T}$ if and only if $\min A\ge |A| = q$, $\max A = n$, and $\max_2 A = n-p$. Hence, we can write
a set $A$ in $\mathcal{T}$ as $A = D\cup \{n-p, n\}$, where $D\subset \{q, \ldots, n-p-1\}$ and $|D| = q-2$. Therefore,
$|\mathcal{T}| = \binom{n-p-q}{q-2}$. This completes our proof as
\begin{align*}|\mathcal{K}_{n,p,q}|&\ =\ |\mathcal{K}_{n-1, p, q}| + |\mathcal{K}_{n,p,q}\backslash \mathcal{K}_{n-1, p, q}|\\
&\ =\ |\mathcal{K}_{n-1, p, q}| + |\mathcal{S}| + |\mathcal{T}|\\
&\ =\ |\mathcal{K}_{n-1, p, q}| + |\mathcal{K}_{n-2, p, q}| + \binom{n-p-q}{q-2} - 1.
\end{align*}
\end{proof}
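The recurrence of Theorem \ref{m2} can likewise be checked by direct enumeration for small parameters (a numerical sanity check, assuming Python; it is not a substitute for the bijective proof above):

```python
from itertools import combinations
from math import comb

def count_Kpq(n, p, q):
    """|K_{n,p,q}| by direct enumeration: subsets A of [n] that are empty,
    or satisfy max A - max_2 A = p and min A >= |A| >= q."""
    count = 1  # the empty set
    for k in range(q, n + 1):
        for A in combinations(range(1, n + 1), k):  # A is sorted
            if A[-1] - A[-2] == p and A[0] >= k:
                count += 1
    return count
```

For several pairs $(p, q)$ and $n \le 12$, the counts satisfy both the base case $|\mathcal{K}_{n,p,q}| = 1$ for $n \le p+2q-3$ and the stated order-two recurrence.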
| {
"timestamp": "2022-05-31T02:03:31",
"yymm": "2205",
"arxiv_id": "2205.14260",
"language": "en",
"url": "https://arxiv.org/abs/2205.14260",
"abstract": "A set $A$ of positive integers is said to be Schreier if either $A = \\emptyset$ or $\\min A\\ge |A|$. We give a bijective map to prove the recurrence of the sequence $(|\\mathcal{K}_{n, p, q}|)_{n=1}^\\infty$ (for fixed $p\\ge 1$ and $q\\ge 2$), where $$\\mathcal{K}_{n, p, q} \\ = \\ \\{A\\subset \\{1, \\ldots, n\\}\\,:\\, \\mbox{either }A = \\emptyset \\mbox{ or } (\\max A-\\max_2 A = p\\mbox{ and }\\min A\\ge |A|\\ge q)\\}$$ and $\\max_2 A$ is the second largest integer in $A$, given that $|A|\\ge 2$. When $p = 1$ and $q=2$, we have that $(|\\mathcal{K}_{n, 1, 2}|)_{n=1}^\\infty$ is the Fibonacci sequence. As a corollary, we obtain a new combinatorial interpretation for the sequence $(F_n + n)_{n=1}^\\infty$.",
"subjects": "Combinatorics (math.CO)",
"title": "A Note on the Fibonacci Sequence and Schreier-type Sets",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9861513889704251,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7086428523970466
} |
https://arxiv.org/abs/2107.04554 | Whitney's Extension Theorem and the finiteness principle for curves in the Heisenberg group | Consider the sub-Riemannian Heisenberg group $\mathbb{H}$. In this paper, we answer the following question: given a compact set $K \subseteq \mathbb{R}$ and a continuous map $f:K \to \mathbb{H}$, when is there a horizontal $C^m$ curve $F:\mathbb{R} \to \mathbb{H}$ such that $F|_K = f$? Whitney originally answered this question for real valued mappings, and Fefferman provided a complete answer for real valued functions defined on subsets of $\mathbb{R}^n$. We also prove a finiteness principle for $C^{m,\sqrt{\omega}}$ horizontal curves in the Heisenberg group in the sense of Brudnyi and Shvartsman. | \section{Introduction}
Fix a compact set $K \subseteq \mathbb{R}^n$ and a continuous function $f:K \to \mathbb{R}$,
and consider the following question:
\smallskip
\noindent
\textbf{Whitney's question.} When is there some $F \in C^m(\mathbb{R}^n)$ such that $F|_K = f$?
\smallskip
Whenever such an $F$ exists, we will say that $f \in C^m(K)$.
Here, $C^m(\mathbb{R}^n)$
is the space of $m$-times continuously differentiable functions
$f:\mathbb{R}^n \to \mathbb{R}$ such that the following seminorm is finite:
$$
\Vert f \Vert_{C^m(\mathbb{R}^n)} := \sup_{x \in \mathbb{R}^n} \sum_{|\alpha|=m} |\partial^\alpha f(x)|.
$$
Similarly,
we write $f \in C^m(\mathbb{R}^n,\mathbb{R}^N)$ to indicate that each component of $f$
is in $C^m(\mathbb{R}^n)$.
In his classical extension theorem \cite{Whitney},
Whitney proved that $f \in C^m(K)$ if and only if
there is a family (or ``jet'')
of functions $( f_\alpha )_{|\alpha| \leq m}$
which act as the partial derivatives of $f = f_0$ in the sense of Taylor's theorem.
Moreover, the extension $F$ can be chosen in such a way that $\partial^\alpha F = f_\alpha$ on $K$.
See Theorem~\ref{t-WhitClassic} below for the statement of Whitney's result in $\mathbb{R}$.
A $C^1$ version of this result was proven in \cite{FraSerSer}
for real valued mappings defined on subsets of the
sub-Riemannian Heisenberg group $\H$.
One naturally then considers the problem of smoothly extending
a map from a subset of Euclidean space {\em into} $\H$.
In this setting, however, one also requires that the extension
respects
the sub-Riemannian geometry of $\H$.
As noted in Proposition~4.1 from \cite{ZimSpePinWhitney},
Whitney's classical assumptions
do not suffice to guarantee
the existence of such an extension in this setting,
and, as such, more assumptions on $f$ are required.
A version of
Whitney's classical extension theorem
from \cite{Whitney}
was proven by the author
for horizontal $C^1$ curves in the Heisenberg group \cite{ZimWhitney}
and by Pinamonti, Speight, and the author for horizontal $C^m$ curves \cite{ZimSpePinWhitney}.
See Theorem~\ref{t-HeisWhit} for the statement of and more discussion regarding these results.
This extension theorem has since been applied to verify the existence of
Lusin-type approximations of Lipschitz curves in the Heisenberg group by horizontal $C^m$ curves \cite{PinGarCap}.
Whitney-type extensions for horizontal $C^1$
curves
in general Carnot groups and sub-Riemannian manifolds
have been considered by Juillet, Sacchelli, and Sigalotti \cite{Pliability,SaccSiga}.
Let us return to Whitney's question.
We would like an answer
without having assigned a family of
``derivatives'' of $f$ on $K$.
In the case $n=1$, Whitney provided an answer
using the divided differences of $f$.
\begin{theorem}[\cite{Whitney2}]
\label{t-WhitFin}
Suppose $K \subseteq \mathbb{R}$ is compact
and $f:K \to \mathbb{R}$ is continuous.
Then $f \in C^m(K)$ if and only if
the $m$th divided differences
of $f$ converge uniformly on $K$.
\end{theorem}
See Subsection~\ref{s-DD} for more information about divided differences.
Glaeser later provided an answer to Whitney's question for functions in $C^1(\mathbb{R}^n)$
using a geometric argument \cite{GlaeserWhit}.
If we endow
$C^m(K)$ with the seminorm
$$
\Vert f \Vert_{C^m(K)} := \inf \left\{ \Vert F \Vert_{C^m(\mathbb{R})} \, : \, F \in C^m(\mathbb{R}), \, F|_K = f \right\},
$$
then the proof of Theorem~\ref{t-WhitFin} actually implies the following.
\begin{theorem}[\cite{Whitney2}]
\label{t-WhitCheat}
Suppose $K \subseteq \mathbb{R}$ is compact.
Then there is a bounded, linear operator $W:C^m(K) \to C^m(\mathbb{R})$ such that $Wf|_K = f$ for all $f \in C^m(K)$.
\end{theorem}
Brudnyi and Shvartsman \cite{BruShv}
observed the following reformulation of Whitney's result which has since come to be known as the {\em finiteness principle}.
\begin{theorem}[\cite{Whitney2}]
\label{t-brush}
Suppose $K \subseteq \mathbb{R}$ is compact and $f:K \to \mathbb{R}$ is continuous.
There is some $F \in C^{m,\omega}(\mathbb{R})$ with $F = f$ on $K$
if and only if,
for every subset $X \subseteq K$ with $\# X = m+2$,
there is some $F_X \in C^{m,\omega}(\mathbb{R})$ such that
$F_X = f$ on $X$, and $\sup_X \Vert F_X \Vert < \infty$.
\end{theorem}
Here,
$\omega$ is a modulus of continuity,
and,
for any interval $I \subseteq \mathbb{R}$,
$C^{m,\omega}(I)$ is the space of $m$-times continuously differentiable functions $f:I \to \mathbb{R}$
such that the following seminorm is finite:
$$
\Vert f \Vert =\Vert f \Vert_{C^{m,\omega}(I)}
:=
\sup_{\substack{a,b \in I \\ a \neq b}}
\frac{\left| f^{(m)}(b) - f^{(m)}(a) \right|}{\omega(|b-a|)}.
$$
In other words, $f^{(m)}$ is uniformly continuous with modulus of continuity $\Vert f \Vert \omega$.
The statement of Theorem~\ref{t-brush} does not involve divided differences
and allows one to consider Whitney's theorem for higher dimensional domains.
As a result, Brudnyi and Shvartsman generalized the finiteness principle to $C^{1,\omega}(\mathbb{R}^n)$
functions in \cite{BruShv}.
Their continued work on this problem
can be found in
\cite{BruShv2,BruShv,BruShv5,BruShv6,Brud1,BruShv3,Shv1,Shv2,Shv3}.
In \cite{BMP1,BMP2}, Bierstone, Milman, and Paw\l{}ucki
considered Whitney's question for extensions from subanalytic sets in $\mathbb{R}^n$,
and Fefferman answered Whitney's question fully in \cite{FefferWhitFull,FefferWhitLinear}.
He also proved versions of the finiteness principle for $C^{m,\omega}(\mathbb{R}^n)$ functions
\cite{FefferWhit,FefferWhit2}.
Recent updates on this project
by Fefferman, Israel, and Luli
can be found in \cite{FefIsrLul,FefIsrLul2}
and by
Carruth, Frei-Pearson, Israel, and Klartag in \cite{CoordFree}.
An extensive history of work related to Whitney's question from the past few decades can be found in \cite{FefferSummary}.
In this paper, we will focus on mappings into
the sub-Riemannian Heisenberg group $\H^k$
and consider a Heisenberg version of Whitney's question.
(See Subsection~\ref{s-Heis} for the appropriate definitions.)
Suppose $K \subseteq \mathbb{R}^n$ is compact,
and fix $f:K \to \mathbb{R}^{2k+1}$.
\smallskip
\noindent
\textbf{Whitney's question in $\H$.} When is there a map $F \in C^m (\mathbb{R}^n,\mathbb{R}^{2k+1})$ such that $F|_K = f$
and $F$ is horizontal?
\smallskip
For the purposes of this paper, we will consider only the setting $\H := \H^1 = \mathbb{R}^3$.
However, all results discussed below hold in higher dimensional Heisenberg groups
with the appropriate changes in notation.
As mentioned above,
Whitney's question in $\H$ was answered on subsets of $\mathbb{R}$
in \cite{ZimSpePinWhitney} and \cite{ZimWhitney}
when the derivatives of the extension
are required to have prescribed values on $K$.
This is an analogue of Whitney's original result from \cite{Whitney}.
Now, we will provide an answer to Whitney's question in $\H$
in the case $n=1$
in analogy to Theorem~\ref{t-WhitFin}.
The following are the main results of this paper.
For a compact set $K \subseteq \mathbb{R}$ and a map $\gamma:K \to \H$,
we will write $\gamma \in C_{\mathbb{H}}^{m}(K)$ to indicate that
there is a horizontal curve $\Gamma \in C^m(\mathbb{R},\mathbb{R}^3)$
with $\Gamma|_K = \gamma$.
\begin{theorem}
\label{c-m1}
Assume $K \subseteq \mathbb{R}$ is compact
and $\gamma:K \to \H$ is continuous.
Then $\gamma \in C_{\mathbb{H}}^1(K)$ if and only if
the Pansu difference quotients of $\gamma$ converge uniformly on $K$ to horizontal points.
\end{theorem}
See Definition~\ref{d-deriv} for a discussion of the Pansu difference quotients.
For higher order derivatives, we have the following.
\begin{theorem}
\label{t-supermain}
Assume $K \subseteq \mathbb{R}$ is compact with finitely many isolated points
and $\gamma:K \to \H$ is continuous.
Then $\gamma \in C_{\mathbb{H}}^m(K)$ if and only if
\begin{enumerate}
\item
the $m$th divided differences of $\gamma$ converge uniformly on $K$,
\item $\gamma$ satisfies the discrete $A/V$ condition on $K$.
\end{enumerate}
\end{theorem}
Condition {\em (1)} is clearly necessary due to Whitney's result (Theorem~\ref{t-WhitFin}).
The discrete $A/V$ condition in {\em (2)} is an analogue of
the $A/V$ condition introduced in \cite{ZimSpePinWhitney},
and both are generalizations of the Pansu difference quotient.
See Sections~\ref{s-AV} and \ref{s-dAV} for a thorough discussion of these conditions.
According to Proposition~5.2 in \cite{ZimWhitney},
the $A/V$ condition is
necessary when extending to smooth, horizontal curves in $\H$.
While the $A/V$ condition from \cite{ZimSpePinWhitney} relies on information from a jet of functions defined on $K$,
the discrete $A/V$ condition in {\em(2)} above requires knowledge only
of the values of $\gamma$.
The definition of the discrete $A/V$ condition replaces Taylor polynomials with interpolating polynomials just as Whitney did in his classical proofs.
The following result holds for arbitrary compact sets $K \subseteq \mathbb{R}$.
For $\gamma=(f,g,h)$ with $f,g,h \in C^m(K)$,
we will write $W\gamma$ to denote the curve $(Wf,Wg,Wh) \in C^m(\mathbb{R},\mathbb{R}^3)$
where $W$ is the linear operator whose existence is guaranteed by Theorem~\ref{t-WhitCheat}.
\begin{theorem}
\label{t-supermainWf}
Assume $K \subseteq \mathbb{R}$ is compact and $\gamma:K \to \H$ is continuous.
Then $\gamma \in C_{\mathbb{H}}^m(K)$ if and only if
\begin{enumerate}
\item
the $m$th divided differences of $\gamma$ converge uniformly on $K$,
\item $W\gamma$ satisfies the $A/V$ condition on $K$.
\end{enumerate}
\end{theorem}
The advantage of this result over Theorem~\ref{t-supermain} is clearly in its generality for all compact sets.
However, one might expect that the hypotheses of Theorem~\ref{t-supermainWf}
are harder to ``compute'' (in the sense of \cite{FittingI,FittingII,FittingIII})
than those of Theorem~\ref{t-supermain}.
This is summarized in the following (imprecise) question:
\begin{question}
Suppose $K \subseteq \mathbb{R}$ is compact,
$\gamma:K \to \H$ is continuous,
and the $m$th divided differences of $\gamma$ converge uniformly on $K$.
Which is easier to compute: verifying that $\gamma$ satisfies the discrete $A/V$ condition on $K$
or computing $W\gamma$ and verifying that it satisfies the $A/V$ condition on $K$?
\end{question}
Finally, we come to our discussion of the finiteness principle (Theorem~\ref{t-brush})
for curves in the Heisenberg group.
Just as in the Euclidean setting,
the statement of the following result removes all mention of divided differences.
\begin{theorem}
\label{t-finiteness}
Assume $K \subseteq \mathbb{R}$ is compact
with finitely many isolated points.
If
there exist a
modulus of continuity $\omega$ and a
constant $M > 0$ such that,
for any $X \subseteq K$ with $\#X = m+2$,
there is a curve $\Gamma_X \in C^{m,\omega}(\mathbb{R},\mathbb{R}^3)$
with $\Gamma_X = \gamma$ on $X$,
$
\sup_X \Vert \Gamma_X \Vert < \infty,
$
and
$$
\left| \frac{A(\Gamma_X;a,b)}{V(\Gamma_X;a,b)} \right| \leq M\omega(b-a)
\quad \text{for all } a,b \in K \text{ with } a<b,
$$
then there is a horizontal curve $\Gamma \in C^{m,\sqrt{\omega}}(\mathbb{R},\mathbb{R}^3)$
such that $\Gamma|_K = \gamma$.
\end{theorem}
Note the drop in regularity of the $m$th derivative.
This is a result of the construction of the horizontal $C^m$ extension theorem in \cite{ZimSpePinWhitney}.
The following proposition hints that this drop is due to the construction itself and may be possible to remedy.
\begin{proposition}
\label{p-finiteness}
If $\gamma \in C^{m,\omega}(\mathbb{R},\mathbb{R}^3)$ and $\gamma$ is horizontal, then
there is a constant $M > 0$ such that
$$
\left| \frac{A(\gamma;a,b)}{V(\gamma;a,b)} \right| \leq M\omega(b-a)
\quad \text{for all } a,b \in \mathbb{R} \text{ with } a<b.
$$
\end{proposition}
The proof of this proposition is nearly identical to that of Proposition~5.1 in \cite{ZimSpePinWhitney}, so it will not be included below. Simply replace all instances of $\varepsilon$ with a constant multiple of $\omega(b-a)$ throughout the proof.
The paper is organized as follows.
Section~\ref{s-prelim} establishes preliminary facts about the sub-Riemannian Heisenberg group,
prior Whitney-type results, and divided differences
which will be important for the later discussion.
The bulk of the new content is contained in Sections~\ref{s-AV} and \ref{s-dAV}
wherein the $A/V$ conditions are defined and several important lemmas relating the $A/V$ condition from \cite{ZimSpePinWhitney} to the discrete $A/V$ condition are established.
Using the technical tools provided in these sections,
we then prove Theorems~\ref{c-m1}, \ref{t-supermain}, \ref{t-supermainWf}, and \ref{t-finiteness} in Section~\ref{s-proofs}.
\section{Preliminaries}
\label{s-prelim}
Throughout the rest of the paper, $m$ will represent a positive integer, and $\omega$ will be a modulus of continuity, i.e.\ a continuous, increasing, concave function $\omega:[0,\infty) \to [0,\infty)$ with $\omega(0)=0$.
In what follows, given a parameter $d > 0$, we will write
$a \lesssim_d b$ to indicate that $a\leq Cb$ where $C=C(d)>0$ is a constant depending possibly on $d$.
\subsection{The Heisenberg group}
\label{s-Heis}
For any positive integer $n$,
we define the {\em $n$th sub-Riemannian Heisenberg group}
to be
$\mathbb{H}^n = \mathbb{R}^{2n+1}$
with the group law
\begin{align*}
(x,y,z)*(x',y',z')
=
\left(x+x',y+y',z+z'+2\sum_{j=1}^n(y_jx_j'-x_jy_j')\right)
\end{align*}
for $x,y,x',y' \in \mathbb{R}^n$
and $z,z' \in \mathbb{R}$.
One may check that $(x,y,z)^{-1} = (-x,-y,-z)$.
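To make the algebra concrete, the group law and inverse can be sketched in code for $\H^1$ (the case $n=1$). The snippet below is purely illustrative and not part of the formal development; the function names are our own.

```python
# Illustrative sketch of the Heisenberg group law in H^1 = R^3 (n = 1),
# following the displayed formula; not part of the formal development.
def star(p, q):
    """Group product (x, y, z) * (x', y', z') in H^1."""
    x, y, z = p
    xp, yp, zp = q
    return (x + xp, y + yp, z + zp + 2 * (y * xp - x * yp))

def inv(p):
    """Group inverse: (x, y, z)^{-1} = (-x, -y, -z)."""
    x, y, z = p
    return (-x, -y, -z)

p = (1.0, 2.0, 3.0)
print(star(p, inv(p)))   # the identity: (0.0, 0.0, 0.0)
```

One may check on sample triples that `star` is associative but not commutative, as expected for a non-abelian group law.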
With this group law, $\H^n$ is a Lie group with left invariant vector fields
$$
X_i(p) = \tfrac{\partial}{\partial x_i} + 2y_i \tfrac{\partial}{\partial z}, \quad
Y_i(p) = \tfrac{\partial}{\partial y_i} - 2x_i \tfrac{\partial}{\partial z}, \quad
Z(p) = \tfrac{\partial}{\partial z}
\qquad
\text{for } 1 \leq i \leq n
$$
for $p = (x,y,z) \in \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}$.
Since $[X_i,Y_i] = -4Z$ for each $1 \leq i \leq n$,
the Lie group $\H^n$ is a Carnot group of step 2
with horizontal distribution $\text{span} \{ X_1,Y_1,\dots,X_n,Y_n\}$.
We say that a point in $\H^n$ is {\em horizontal} if it lies in $\mathbb{R}^{2n} \times \{0 \}$,
and an absolutely continuous curve $\gamma:\mathbb{R} \to \mathbb{R}^{2n+1}$
is {\em horizontal} if $\gamma'(t) \in \text{span} \{ X_1,Y_1,\dots,X_n,Y_n\}$
for almost every $t \in \mathbb{R}$.
We may equivalently write the following.
\begin{proposition}
Suppose $\gamma=(f,g,h):\mathbb{R} \to \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}$ is absolutely continuous.
Then
$\gamma$ is horizontal if and only if
$$
h' = 2 \sum_{j=1}^n \left(f'_jg_j - f_jg'_j \right)
\qquad
\text{a.e. in } \mathbb{R}.
$$
\end{proposition}
For a proof of this, see Lemma~2.3 in \cite{GarethLusin}.
In $\H = \H^1$,
this equation is simply $h' = 2(f'g-fg')$.
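As a sanity check, this identity can be verified numerically on a model curve. The curve below is our own illustrative choice: taking $f(t)=t$ and $g(t)=t^2$ forces $h'(t) = 2(t^2 - 2t^2) = -2t^2$, i.e.\ $h(t) = -2t^3/3$ up to a constant.

```python
# Model horizontal curve in H^1 (an illustrative choice, not from the text):
# f(t) = t, g(t) = t^2, h(t) = -2 t^3 / 3, so that h' = 2(f'g - f g').
def f(t):  return t
def g(t):  return t ** 2
def h(t):  return -2.0 * t ** 3 / 3.0

def df(t): return 1.0
def dg(t): return 2.0 * t
def dh(t): return -2.0 * t ** 2

# The horizontality equation holds exactly at every t:
for t in (-1.0, 0.0, 0.5, 2.0):
    assert abs(dh(t) - 2.0 * (df(t) * g(t) - f(t) * dg(t))) < 1e-12
```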
If $\gamma \in C^m_\H (\mathbb{R})$
(i.e. $\gamma \in C^m(\mathbb{R},\mathbb{R}^3)$ and $\gamma$ is a horizontal curve),
then, according to the Leibniz rule, we have
\begin{equation}
\label{e-LeibnizRule}
h^{(k)} = 2 \sum_{i=0}^{k-1} \binom{k-1}{i} \left(f^{(k-i)}g^{(i)} - g^{(k-i)}f^{(i)} \right)
\qquad \text{for } 1 \leq k \leq m \text{ on } \mathbb{R}.
\end{equation}
The {\em dilations} $(\delta_r)_{r\in \mathbb{R} \setminus \{ 0 \}}$ defined as
$$
\delta_r(x,y,z) = (rx,ry,r^2z)
$$
form a
family of group automorphisms on $\H^n$.
Recall that the {\em Pansu derivative} of $\gamma : \mathbb{R} \to \H$ at $x \in \mathbb{R}$
is defined as
$$
\lim_{y \to x} \delta_{1/(y-x)} \left( \gamma(x)^{-1} * \gamma(y) \right)
$$
whenever this limit exists.
If $\gamma$ is Lipschitz, then this limit exists almost everywhere
and converges to a horizontal point.
See, for example, Lemma~2.1.4 in \cite{MontiThesis}.
\begin{definition}
\label{d-deriv}
Suppose $K \subseteq \mathbb{R}$ is compact and fix $\gamma:K \to \H$.
We say that the {\em Pansu difference quotients of $\gamma$ converge uniformly on $K$ to horizontal points} if,
for every $a \in K$, there is a horizontal point $p_a \in \mathbb{R}^2 \times \{ 0 \}$ such that
$$
\lim_{\substack{|b-a| \to 0 \\ a,b \in K}} \left| \delta_{1/(b-a)} \left(\gamma(a)^{-1} * \gamma(b) \right) - p_a \right| = 0.
$$
\end{definition}
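The limit in this definition can be explored numerically. The sketch below, using an illustrative horizontal curve of our own choosing, computes $\delta_{1/(b-a)}\left(\gamma(a)^{-1} * \gamma(b)\right)$ and exhibits its convergence to the horizontal point $(f'(a), g'(a), 0)$.

```python
# Sketch: Pansu difference quotients of the illustrative horizontal curve
# gamma(t) = (t, t^2, -2 t^3 / 3); not part of the formal development.
def gamma(t):
    return (t, t ** 2, -2.0 * t ** 3 / 3.0)

def star(p, q):
    x, y, z = p
    xp, yp, zp = q
    return (x + xp, y + yp, z + zp + 2 * (y * xp - x * yp))

def dilate(r, p):
    x, y, z = p
    return (r * x, r * y, r ** 2 * z)

def pansu_dq(a, b):
    """delta_{1/(b-a)} ( gamma(a)^{-1} * gamma(b) )."""
    x, y, z = gamma(a)
    return dilate(1.0 / (b - a), star((-x, -y, -z), gamma(b)))

# As b -> a = 1, the quotient tends to (f'(1), g'(1), 0) = (1, 2, 0),
# a horizontal point: the third coordinate vanishes in the limit.
for b in (1.1, 1.01, 1.001):
    print(pansu_dq(1.0, b))
```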
Compare this definition to the statement of Theorem~1.7 in \cite{ZimWhitney}.
\subsection{Prior Whitney-type results}
\label{s-WhitHist}
Fix $K \subseteq \mathbb{R}$.
We say that $F = (F^k)_{k=0}^m$ is an {\em $m$-jet} on $K$ if
each $F^k$ is a continuous, real valued function on $K$.
Define the $m$th order Taylor polynomial of an $m$-jet $F$ at $a \in K$ as
\begin{equation}
\label{e-TaylorJet}
T_a^m F(x)
:=
\sum_{k=0}^m \frac{F^{k}(a)}{k!}(x-a)^k
\quad
\text{for all } x \in \mathbb{R}.
\end{equation}
If $f:\mathbb{R} \to \mathbb{R}$ is $m$ times differentiable at $a$,
the Taylor polynomial $T_a^m f$
is defined as usual using the jet $F=(f^{(k)})_{k=0}^m$ in \eqref{e-TaylorJet}.
We will often drop the exponent and write $T_a F$ or $T_a f$ when the order of the polynomial is obvious from the context.
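Formula \eqref{e-TaylorJet} is straightforward to evaluate; the short sketch below does so for a sample jet (the jet itself is a hypothetical choice for illustration).

```python
# Sketch: evaluating the Taylor polynomial T_a^m F of an m-jet,
# per the displayed formula.  The sample jet below is hypothetical.
from math import factorial

def taylor_jet(jet, a, x):
    """jet = (F^0(a), ..., F^m(a)); returns T_a^m F(x)."""
    return sum(jet[k] / factorial(k) * (x - a) ** k for k in range(len(jet)))

# The 2-jet of exp at a = 0 is (1, 1, 1), so T_0^2 F(x) = 1 + x + x^2/2:
print(taylor_jet((1.0, 1.0, 1.0), 0.0, 1.0))   # -> 2.5
```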
If $f \in C^m(\mathbb{R})$
and $K \subseteq \mathbb{R}$ is compact,
then,
according to Taylor's theorem,
\begin{equation}
\label{e-taylorApprox}
\lim_{\substack{|b-a| \to 0 \\ a,b \in K}}
\frac{\left| f^{(k)}(b) - T_a^{m-k}f^{(k)}(b) \right|}{|b-a|^{m-k}} = 0
\qquad
\text{for } 0 \leq k \leq m.
\end{equation}
We note in particular that, when $f \in C^m(\mathbb{R})$ and $K \subseteq \mathbb{R}$ is compact, there is a modulus of continuity $\alpha$ such that
\begin{equation}
\label{e-taylor0}
|f^{(m)}(x)-f^{(m)}(y)| \leq \alpha(|x-y|),
\end{equation}
\begin{equation}
\label{e-taylor1}
|f(y)-T_x^mf(y)| \leq \alpha(|x-y|) |x-y|^m,
\end{equation}
\begin{equation}
\label{e-taylor2}
|f'(y)-(T_x^mf)'(y)| \leq \alpha(|x-y|) |x-y|^{m-1}
\end{equation}
for all $x,y \in K$
since $(T_x^mf)' = T_x^{m-1}(f')$.
For a compact set $K \subseteq \mathbb{R}$,
we say that an $m$-jet $F$ is a {\em Whitney field of class $C^m$ on $K$}
if
$$
\lim_{\substack{|b-a| \to 0 \\ a,b \in K}}
\frac{\left|F^k(b) - T_a^{m-k} F^k(b) \right|}{|b-a|^{m-k}} = 0
\qquad
\text{for } 0 \leq k \leq m
$$
where $T_a^{m-k} F^k$ is the $(m-k)$th order Taylor polynomial of the $(m-k)$-jet $(F^j)_{j=k}^{m}$.
Note that,
if $f \in C^m(\mathbb{R})$, then the $m$-jet $F= (f^{(k)})_{k=0}^m$ is a Whitney field of class $C^m$ on $K$ for any compact $K \subseteq \mathbb{R}$.
Whitney's classical extension theorem in dimension 1 may now be stated as follows:
\begin{theorem}[\cite{Whitney}]
\label{t-WhitClassic}
Suppose $K \subseteq \mathbb{R}$ is closed and $F$ is an $m$-jet on $K$.
There is some $f \in C^m(\mathbb{R})$ satisfying $f^{(k)}|_K = F^k$ for $0 \leq k \leq m$
if and only if
$F$ is a Whitney field of class $C^m$ on $K$.
\end{theorem}
See also \cite{Bierstone} for a proof.
Suppose now that $f \in C^{m,\omega}(\mathbb{R})$.
According to (2) in \cite{FollandRemainder},
there is a constant $C>0$ such that
\begin{align}
\label{e-Taylor-int}
\frac{|f^{(k)}(b) - T_a^{m-k}f^{(k)}(b)|}{|b-a|^{m-k}}
\leq C \omega(|b-a|)
\end{align}
for any $a,b \in \mathbb{R}$ and $0 \leq k \leq m$.
For a set $K \subseteq \mathbb{R}$,
we say that an $m$-jet $F$ is a {\em Whitney field of class $C^{m,\omega}$ on $K$}
if there is a constant $C >0$ such that
$$
\frac{\left|F^k(b) - T_a^{m-k} F^k(b) \right|}{|b-a|^{m-k}} \leq C \omega(|b-a|)
\qquad
\text{for } a,b \in K, \, 0 \leq k \leq m.
$$
A proof similar to that of Theorem~\ref{t-WhitClassic}
then implies the following result.
This theorem will be useful when proving the finiteness principle (Theorem~\ref{t-finiteness}).
\begin{theorem}[\cite{Whitney}]
\label{t-WhitClassicLip}
Suppose $K \subseteq \mathbb{R}$ is closed, and $F$ is an $m$-jet on $K$.
There is some $f \in C^{m,\omega}(\mathbb{R})$ satisfying $f^{(k)}|_K = F^k$ for $0 \leq k \leq m$
if and only if
$F$ is a Whitney field of class $C^{m,\omega}$ on $K$.
\end{theorem}
The following version of Theorem~\ref{t-WhitClassic}
was proven for $C^1$ horizontal curves in the Heisenberg group in \cite{ZimWhitney}
and for $C^m$ horizontal curves in \cite{ZimSpePinWhitney}.
\begin{theorem}[\cite{ZimSpePinWhitney,ZimWhitney}]
\label{t-HeisWhit}
Suppose $K \subseteq \mathbb{R}$ is compact and $F$, $G$, and $H$ are $m$-jets on $K$.
There is a curve $\Gamma \in C^m_\H(\mathbb{R})$
satisfying $\Gamma^{(k)}|_K = (F^k,G^k,H^k)$
for $0 \leq k \leq m$
if and only if
\begin{enumerate}
\item
\label{c-whitfield}
$F$, $G$, and $H$ are Whitney fields of class $C^m$ on $K$,
\item \label{c-leibniz} for every $1 \leq k \leq m$, the following holds on $K$:
$$
H^k = 2 \sum_{i=0}^{k-1} \binom{k-1}{i} \left(F^{k-i}G^i- G^{k-i}F^i \right),
$$
\item \label{c-av} $(F^0,G^0,H^0)$ satisfies the $A/V$ condition on $K$.
\end{enumerate}
\end{theorem}
Condition {\em (1)} here was discussed above, and condition {\em (2)} is a consequence of the Leibniz rule as in \eqref{e-LeibnizRule}.
Condition {\em (3)} (discussed at length in Section~\ref{s-AV})
establishes a control on the rate at which the curve gathers symplectic area in the plane,
and this area is fundamentally tied to the height of a horizontal curve in the Heisenberg group.
See \cite{ZimSpePinWhitney, GarethLusin, ZimWhitney} for more discussion on this relationship.
We will also record one direction of this result
for $C^{m,\omega}$ curves.
The proof is similar to the one found in \cite{ZimSpePinWhitney},
and the differences are noted below.
This result is why
Theorem~\ref{t-finiteness}
produces a curve of class $C^{m,\sqrt{\omega}}$
rather than $C^{m,\omega}$,
and it is not obvious to the author how the construction in \cite{ZimSpePinWhitney} can be strengthened.
\begin{theorem}[\cite{ZimSpePinWhitney}]
\label{t-HeisWhitLip}
Suppose $K \subseteq \mathbb{R}$ is compact, and $F$, $G$, and $H$ are $m$-jets on $K$.
If
\begin{enumerate}
\item
$F$, $G$, and $H$ are Whitney fields of class $C^{m,\omega}$ on $K$,
\item for every $1 \leq k \leq m$, the following holds on $K$:
$$
H^k = 2 \sum_{i=0}^{k-1} \binom{k-1}{i} \left(F^{k-i}G^i- G^{k-i}F^i \right),
$$
\item
and, writing $\gamma = (F^0,G^0,H^0)$,
$$
\left| \frac{A(\gamma;a,b)}{V(\gamma;a,b)} \right| \leq \omega(b-a)
\quad \text{for all } a,b \in K \text{ with } a<b,
$$
\end{enumerate}
then there is a horizontal curve $\Gamma \in C^{m,\sqrt{\omega}}(\mathbb{R},\mathbb{R}^3)$
satisfying $\Gamma^{(k)}|_K = (F^k,G^k,H^k)$
for $0 \leq k \leq m$.
\end{theorem}
\begin{proof}
This follows from the proof of Theorem~6.1 in \cite{ZimSpePinWhitney}.
We note the differences here.
Rather than invoking Whitney's original extension theorem (which is Theorem~\ref{t-WhitClassic} in this paper and Theorem~2.8 in \cite{ZimSpePinWhitney}), we instead use Theorem~\ref{t-WhitClassicLip} above to extend the $m$-jets $F$ and $G$ to $C^{m,\omega}$ functions $f$ and $g$ on $\mathbb{R}$.
Moreover, it follows from the definition of the Whitney field of class $C^{m,\omega}$
and condition {\em(3)} above
that we may replace the modulus of continuity $\alpha$ in the estimates (2.3), (2.4), and (6.1)-(6.6)
in \cite{ZimSpePinWhitney}
with a constant multiple of $\omega$.
Since $K$ is compact, the modulus of continuity $\beta$ in Proposition~6.2 may then be replaced by a constant multiple of $\sqrt{\omega}$.
(This is because, in \cite{ZimSpePinWhitney}, $\beta$ is bounded by a constant multiple of $\sqrt{\hat{\alpha}} = \sqrt{\alpha + \alpha^2}$.
This is where the drop in regularity occurs.)
The proofs of Lemma~6.7 and Proposition~6.8 then follow.
\end{proof}
\subsection{Divided differences}
\label{s-DD}
Fix $A \subseteq \mathbb{R}$.
For any $f:A \to \mathbb{R}$ and
any set of $m+1$ distinct points $X = \{ x_0,\dots,x_m \}\subseteq A$, define
\begin{align}
f[x_0] &= f(x_0) \nonumber \\
f[x_0,\dots,x_k] &= \frac{f[x_1,\dots, x_k] - f[x_0,\dots,x_{k-1}]}{x_k-x_0}
\quad
\text{for } 1 \leq k \leq m. \label{e-dddefine}
\end{align}
We will call $f[X] = f[x_0,\dots,x_m]$ an {\em $m$th divided difference of $f$},
and, if $K \subseteq \mathbb{R}$ is compact,
we say that the {\em $m$th divided differences of $f$ converge uniformly on $K$} if,
for every $\varepsilon > 0$, there is a $\delta > 0$ such that
$
|f[X] - f[Y]| <\varepsilon
$
whenever
$X$ and $Y$ are sets of $m+1$ distinct points in $K$ and
$\mathop\mathrm{diam}\nolimits(X \cup Y)<\delta$.
For $\gamma:K \to \mathbb{R}^3$,
we define $\gamma[X] := (f[X],g[X],h[X])$.
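The recursion \eqref{e-dddefine} translates directly into code; the following sketch is illustrative only.

```python
# Sketch of the recursive divided differences defined above; illustrative only.
def divided_difference(f, xs):
    """f[x_0, ..., x_k] for a list xs of distinct points."""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[1:])
            - divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])

# The mth divided difference of a degree-m polynomial is its leading
# coefficient (here 1 for f(x) = x^3), whatever the distinct points:
print(divided_difference(lambda x: x ** 3, [0.0, 1.0, 2.0, 4.0]))   # -> 1.0
```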
The following equivalent definition of divided differences for $C^m$ functions is Theorem~2 on page 250 of \cite{NumAnalBook}.
\begin{proposition}
\label{p-intgral}
Fix $m+1$ distinct points $x_0,\dots,x_m \in \mathbb{R}$.
If $f \in C^m(I)$ where $I$ is some interval containing $\{ x_0,\dots,x_m \}$, then
\begin{align*}
f[x_0,\dots,x_m]
&=
\int_0^1 \int_0^{t_1} \cdots \int_0^{t_{m-1}}
f^{(m)} ( t_m (x_m - x_{m-1}) + \cdots \\
& \hspace{2in} + t_1 (x_1 - x_{0}) + x_0 ) \, dt_m \cdots dt_2 dt_1.
\end{align*}
\end{proposition}
In particular,
as long as $f$ is of class $C^m$,
the map
$(x_0,\dots,x_m) \mapsto f[x_0,\dots,x_m]$ extends to a continuous function on $I^{m+1}$,
and the recursive condition \eqref{e-dddefine}
holds for sets of not necessarily distinct
points.
\subsection{Newton interpolation polynomials}
Given a
set $A \subseteq \mathbb{R}$,
a function $f:A \to \mathbb{R}$, and a
finite set $X = \{x_0,\dots,x_k\} \subseteq A$,
the associated Newton interpolation polynomial is defined as
\begin{align*}
P(X;f)(x)
= f[x_0]+(x-x_0)f[x_0,x_1]
+ \cdots+
(x-x_0)\cdots(x-x_{k-1})f[x_0,\dots,x_k].
\end{align*}
This is the unique polynomial of degree at most $k$ which satisfies
$P(x_i) = f(x_i)$ for $i= 0,\dots,k$.
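This construction can be sketched in code (illustrative only), reusing the recursive divided differences from Subsection~\ref{s-DD}.

```python
# Sketch: the Newton interpolation polynomial P(X; f); illustrative only.
import math

def divided_difference(f, xs):
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[1:])
            - divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])

def newton_poly(f, xs):
    """Return P(X; f) as a callable, where X = xs."""
    coeffs = [divided_difference(f, xs[: k + 1]) for k in range(len(xs))]
    def P(x):
        total, prod = 0.0, 1.0
        for k, c in enumerate(coeffs):
            total += c * prod
            prod *= x - xs[k]
        return total
    return P

P = newton_poly(math.sin, [0.0, 0.4, 0.9, 1.3])
# P agrees with sin at every interpolation node:
assert all(abs(P(x) - math.sin(x)) < 1e-12 for x in [0.0, 0.4, 0.9, 1.3])
```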
\begin{lemma}
\label{l-poly}
Suppose
$\alpha$ is a modulus of continuity and
$f \in C^{m,\alpha}([0,1])$.
There is a constant $C>0$
depending only on $m$ and $\Vert f \Vert_{C^{m,\alpha}([0,1])}$
such that, for any
$X \subseteq [0,1]$
with $\#X = m+1$
and $P = P(X;f)$,
\begin{equation}
\label{e-poly2}
\frac{|f(x)-P(x)|}{\mathop\mathrm{diam}\nolimits(X)^{m}} \leq C\alpha (\mathop\mathrm{diam}\nolimits(X))
\quad
\text{and}
\quad
\frac{|f'(x)-P'(x)|}{\mathop\mathrm{diam}\nolimits(X)^{m-1}} \leq C\alpha (\mathop\mathrm{diam}\nolimits(X))
\end{equation}
for all $x \in [\min X,\max X]$.
\end{lemma}
\begin{proof}
Write $M = \Vert f \Vert_{C^{m,\alpha}([0,1])}$.
For any $y_0,\dots,y_m,z_0,\dots,z_m \in [0,1]$, Proposition~\ref{p-intgral} gives
\begin{align*}
|f[y_0,\dots,y_m] &- f[z_0,\dots,z_m]|\\
&\leq
M \int_0^1 \int_0^{t_1} \cdots \int_0^{t_{m-1}}
\alpha \big( |t_m ((y_m - y_{m-1})-(z_m-z_{m-1})) + \cdots \\
&\hspace{1in} + t_1 ((y_1 - y_{0})-(z_1-z_{0})) + (y_0 - z_0)| \big) \, dt_m \cdots dt_2 dt_1\\
&\leq
M \alpha \big(|y_m-z_m| + 2|y_{m-1}-z_{m-1}| + \cdots + 2|y_1-z_1| + 2|y_0-z_0|\big)\\
&\leq M (2m+1) \alpha\left(\max_{i} |y_i - z_i|\right).
\end{align*}
Therefore, if we have $m+1$ distinct points
$X = \{x_0,\dots,x_{m}\} \subseteq [0,1]$
with $P = P(X;f)$
then, according to the definition of $P$,
we have
for any $x \in [\min X,\max X]$ with $x \neq x_0$ that
\begin{align*}
|f(x)-P(x)|
&= \left|f[x_0,\dots,x_m,x] (x-x_0)\cdots (x-x_m)\right|\\
&=\frac{|f[x_1,\dots,x_m,x] - f[x_0,\dots,x_m]|}{|x-x_0|}|x-x_0|\cdots |x-x_m|\\
&=\left| f[x_1,\dots,x_m,x] - f[x_0,\dots,x_m] \right||x-x_1|\cdots |x-x_m|\\
&\leq M (2m+1) \alpha(\mathop\mathrm{diam}\nolimits(X)) \mathop\mathrm{diam}\nolimits(X)^m.
\end{align*}
Since $f(x_0) = P(x_0)$, this gives the first inequality in \eqref{e-poly2}.
Now, for every $x \in [\min X, \max X]$
with $x \neq x_0$ and $x \neq x_1$,
Problem~7 on page 255 of \cite{NumAnalBook}
and the symmetry of divided differences
imply that
\begin{align*}
\frac{d}{dx} f[x_0,\dots,x_m,x]
&=
f[x_0,\dots,x_m,x,x]
=
\frac{f[x_1,\dots,x_m,x,x] - f[x_0,\dots,x_m,x]}{x-x_0}\\
&=
\frac{f[x_2,\dots,x_m,x,x] - f[x_1,\dots,x_m,x]}{(x-x_0)(x-x_1)} \\
& \qquad \qquad-
\frac{f[x_0,x_2\dots,x_m,x] - f[x_1,x_0,x_2,\dots,x_m]}{(x-x_0)(x-x_1)}.
\end{align*}
Thus, as above,
\begin{align*}
|f'(x)-P'(x)|
&\leq \left|\frac{d}{dx} f[x_0,\dots,x_m,x] \cdot
(x-x_0)\cdots (x-x_m)\right|\\
& \qquad +
|f[x_0,\dots,x_m,x]|\sum_{i=0}^m \prod_{j \neq i} |x-x_j|
\\
&\leq
M (2m+1) (m+3) \alpha(\mathop\mathrm{diam}\nolimits(X)) \mathop\mathrm{diam}\nolimits(X)^{m-1}.
\end{align*}
The continuity of $f'$ and $P'$ gives the second inequality in \eqref{e-poly2}.
\end{proof}
\section{The $A/V$ condition}
\label{s-AV}
The following quantities were first defined in \cite{ZimSpePinWhitney}
to establish Theorem~\ref{t-HeisWhit}.
\begin{definition}
Suppose $F$ and $G$ are $m$-jets on a set $E \subseteq \mathbb{R}$ and $h:E \to \mathbb{R}$ is continuous.
Set $\gamma=(f,g,h) := (F^0,G^0,h)$.
For each $a,b \in E$, define the {\em area discrepancy} $A(\gamma;a,b)$ and {\em velocity} $V(\gamma;a,b)$ as follows:
\begin{align*}
A(\gamma;a,b) &= h(b) - h(a) - 2 \int_a^b ((T_a F)'T_a G - (T_a G)'T_aF)\\
& \hspace{1in} +2f(a)(g(b) - T_a G(b)) - 2g(a)(f(b) - T_aF(b))\\
V(\gamma;a,b) &= (b-a)^{2m} + (b-a)^m \int_a^b \left( |(T_aF)'|+ |(T_aG)'| \right)
\end{align*}
\end{definition}
If $\gamma \in C^m(\mathbb{R},\mathbb{R}^3)$,
we use the jets $F = (f^{(k)})_{k=0}^m$ and $G = (g^{(k)})_{k=0}^m$ in this definition as before unless otherwise noted.
\begin{definition}
Suppose $F$ and $G$ are $m$-jets on $E \subseteq \mathbb{R}$ and $h:E \to \mathbb{R}$ is continuous.
Set $\gamma=(f,g,h) := (F^0,G^0,h)$.
We
say that $\gamma$ satisfies the {\em $A/V$ condition on $E$}
if
$$
\lim_{\substack{(b-a) \searrow 0 \\ a,b \in E}}
\frac{A(\gamma;a,b)}{V(\gamma;a,b)} = 0.
$$
\end{definition}
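To illustrate these definitions, consider the case $m=1$ with jets coming from a smooth curve, so that $T_aF(x) = f(a) + f'(a)(x-a)$ and similarly for $G$; the integrals in $A$ and $V$ then evaluate in closed form. The sketch below uses the illustrative horizontal curve $(t, t^2, -2t^3/3)$ (our own choice, not from the text), for which the ratio $A/V$ indeed tends to $0$.

```python
# Sketch (m = 1): A and V for the illustrative horizontal curve
# (t, t^2, -2 t^3 / 3).  For m = 1 the Taylor polynomials are affine,
# so the integrals in the definitions above evaluate in closed form.
def f(t):  return t
def g(t):  return t ** 2
def h(t):  return -2.0 * t ** 3 / 3.0
def df(t): return 1.0
def dg(t): return 2.0 * t

def A(a, b):
    # For m = 1 the integrand (T_aF)' T_aG - (T_aG)' T_aF is constant in x:
    integral = (b - a) * (df(a) * g(a) - dg(a) * f(a))
    TaF_b = f(a) + df(a) * (b - a)
    TaG_b = g(a) + dg(a) * (b - a)
    return (h(b) - h(a) - 2.0 * integral
            + 2.0 * f(a) * (g(b) - TaG_b) - 2.0 * g(a) * (f(b) - TaF_b))

def V(a, b):
    return (b - a) ** 2 + (b - a) ** 2 * (abs(df(a)) + abs(dg(a)))

# Here A(0, b) = -2 b^3 / 3 and V(0, b) = 2 b^2, so |A/V| = b/3 -> 0,
# as the A/V condition requires:
for b in (0.1, 0.01, 0.001):
    print(abs(A(0.0, b) / V(0.0, b)))
```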
As seen above, the $A/V$ condition is part of the necessary and sufficient conditions to guarantee the existence of smooth, horizontal extensions in Theorem~\ref{t-HeisWhit}.
We will now make a few observations about the quantities $A$ and $V$.
The following shows that they are left invariant with respect to the group operation on $\H$.
In particular, it will allow us to assume without loss of generality that $\gamma(a)=0$
when working with $A$ and $V$.
\begin{lemma}
\label{l-AVleftinv}
Suppose $\gamma \in C^m(\mathbb{R},\mathbb{R}^3)$.
For any $p \in \H$ and $a,b \in \mathbb{R}$,
we have
$$
A(p * \gamma;a,b) = A(\gamma;a,b) \quad \text{and} \quad V(p * \gamma;a,b) = V(\gamma;a,b).
$$
\end{lemma}
\begin{proof}
Fix $a,b \in \mathbb{R}$ and $p \in \H$.
Write $\gamma = (f,g,h)$ and $p = (x,y,z)$,
and write $\hat{\gamma} = (\hat{f},\hat{g},\hat{h}) = p * \gamma$.
We then have
$$
\hat{f} = x + f,
\qquad
\hat{g} = y + g,
\qquad
\hat{h} = z + h + 2(yf-xg).
$$
Notice also that $T_a\hat{f} = T_af +x$ and $T_a \hat{g} = T_ag +y$.
Clearly, $V(p*\gamma;a,b) = V(\gamma;a,b)$.
Assume first that $\gamma(a) = 0$.
In this case, $\hat{\gamma}(a) = p$, so
\begin{align*}
A(p * \gamma;a,b) &= \hat{h}(b) - \hat{h}(a) - 2 \int_a^b \left((T_a \hat{f})'T_a \hat{g} - (T_a \hat{g})'T_a\hat{f}\right)\\
& \hspace{1in} +2\hat{f}(a)(\hat{g}(b) - T_a \hat{g}(b)) - 2\hat{g}(a)(\hat{f}(b) - T_a\hat{f}(b))\\
&= h(b) + 2(yf(b) - xg(b)) - 2 \int_a^b \left((T_a f)'(T_a g + y) - (T_a g)'(T_af+x)\right)\\
& \hspace{1in} +2x(g(b) - T_a g(b)) - 2y(f(b) - T_af(b))\\
&= h(b) - 2 \int_a^b \left((T_a f)'T_a g - (T_a g)'T_af\right)\\
& \hspace{1in}
-2y \int_a^b (T_a f)' + 2x \int_a^b (T_a g)'
- 2xT_a g(b) + 2y T_af(b)\\
&= A(\gamma;a,b).
\end{align*}
If $\gamma(a)$ is arbitrary, then, since $\tilde{\gamma} := \gamma(a)^{-1} * \gamma$ satisfies $\tilde{\gamma}(a)=0$, we have from above
\begin{align*}
A(\gamma;a,b) = A\left(\gamma(a)*\tilde{\gamma};a,b\right)
=A\left(\tilde{\gamma};a,b\right)
= A\left((p * \gamma(a)) * \tilde{\gamma};a,b\right)
=A(p*\gamma;a,b).
\end{align*}
\end{proof}
When $\gamma$ is smooth, we are allowed to swap $a$ and $b$ in $A(\gamma;a,b)$ if we account for a small error term.
\begin{lemma}
\label{l-AVswap}
Suppose $\gamma \in C^m(\mathbb{R},\mathbb{R}^3)$ and $K \subseteq \mathbb{R}$ is compact.
Assume $\alpha$ is a modulus of continuity
such that $f$ and $g$ satisfy
\eqref{e-taylor0}--\eqref{e-taylor2}
for all $x$ and $y$ in an interval containing $K$.
Then, for any $a,b \in K$,
we have
$$
|A(\gamma;b,a)| \lesssim_{\gamma,K} |A(\gamma;a,b)| + \alpha(|b-a|)|b-a|^m.
$$
\end{lemma}
\begin{proof}
According to the previous lemma, we may assume that $\gamma(a)=0$.
Thus
\begin{align}
| A&\left(\gamma;b,a\right)| \nonumber \\
&=
\left| - h(b) - 2\int_b^a \left((T_bf)'T_bg - (T_bg)'T_bf\right)
- 2f(b) T_bg(a) +2g(b) T_bf(a)
\right| \nonumber \\
&\leq
\left| - h(b) - 2\int_b^a \left((T_af)'T_ag - (T_ag)'T_af\right)
\right| \label{e-firstline}\\
&\qquad +
2\left|\int_b^a \left((T_af)'T_ag - (T_ag)'T_af\right) - \left((T_bf)'T_bg - (T_bg)'T_bf\right)\right|
\label{e-secondline}\\
&\qquad + 2\left|f(b) T_bg(a) - g(b) T_bf(a) \right| \nonumber.
\end{align}
Note that \eqref{e-firstline} is exactly $|A(\gamma;a,b)|$.
Also, for any $x$ between $a$ and $b$, we have
\begin{align*}
\left|T_ag(x) - T_bg(x) \right|
&\leq
\left|T_ag(x) - g(x) \right| + \left|g(x) - T_bg(x) \right| \\
&\leq \alpha(|x-b|)|x-b|^m + \alpha(|x-a|)|x-a|^m\\
&\leq
2\alpha(|b-a|)|b-a|^m.
\end{align*}
A similar argument gives $\left|(T_af)'(x) - (T_bf)'(x) \right| \leq 2 \alpha(|b-a|)|b-a|^{m-1}$ so that
\begin{align*}
\left|(T_af)'T_ag - (T_bf)'T_bg\right|
&\leq |T_ag|\left|(T_af)' - (T_bf)'\right| + \left|(T_bf)'\right| \left|T_ag - T_bg\right| \\
&\lesssim_{f,g,K} \alpha(|b-a|)|b-a|^{m-1}.
\end{align*}
Swapping $f$ and $g$ and arguing similarly, we bound \eqref{e-secondline} by a constant multiple of $\alpha(|b-a|)|b-a|^m$.
Moreover, since $\gamma(a)=0$,
\begin{align*}
|f(b) T_bg(a) - g(b) T_bf(a)|
&\leq
|f(b)| |T_bg(a)-g(a)| + |g(b)||T_bf(a) - f(a)|\\
&\lesssim_{f,g,K} \alpha(|b-a|) |b-a|^m.
\end{align*}
This proves the lemma.
\end{proof}
\subsection{$A/V$ and horizontality}
The following is possibly the most useful observation from this paper.
As long as a $C^m$ curve satisfies the $A/V$ condition on a compact set $K \subseteq \mathbb{R}$,
the following lemma ensures that
we may drop the horizontality assumption (condition {\em (2)} in Theorem~\ref{t-HeisWhit}) on $K$.
\begin{lemma}
\label{l-horiz}
Suppose $f, g, h \in C^m(\mathbb{R})$
and $K \subseteq \mathbb{R}$ is compact.
If $\gamma =(f,g,h)$
satisfies the $A/V$ condition on $K$,
then there is some $\hat{h} \in C^m(\mathbb{R})$
such that $\hat{h}|_K = h|_K$
and
\begin{equation}
\label{e-horiz1}
\hat{h}' = 2(f'g-fg') \quad \text{ on } K.
\end{equation}
\end{lemma}
\begin{proof}
Assume that $K \subseteq [0,1]$.
We will prepare our setting so that we may apply Whitney's classical extension theorem.
Set $H^0 := h$ on $K$,
and, for $1 \leq k \leq m$,
define $H^k = \eta^{(k-1)}|_K$
where $\eta \in C^{m-1}(\mathbb{R})$ is defined as
\begin{align*}
\eta = 2(f'g-g'f) \quad \text{ on } \mathbb{R}.
\end{align*}
\noindent \textbf{Claim:}
$H$ is a Whitney field of class $C^m$ on $K$.
Fix $a,b \in K$.
We will first check that
$|h(b) - T_aH(b)|$
is uniformly $o(|b-a|^m)$ on $K$.
Using Lemma~\ref{l-AVleftinv} and the fact that the group operation in $\H$ is $C^\infty$ smooth,
we may assume that $\gamma(a)=0$.
Recalling the definition \eqref{e-TaylorJet} of the Taylor polynomial of a jet
and that $H^k(a) = \eta^{(k-1)}(a)$ for each $1 \leq k \leq m$,
observe that
\begin{align*}
T_a H (b)
= \int_a^b (T_a^m H)'
= \int_a^b T_a^{m-1} H^1
= \int_a^b T_a^{m-1} \eta
= 2\int_a^b T_a^{m-1} (f'g) - T_a^{m-1}(fg').
\end{align*}
Here, we used the convention that $T_a^0H^1 = H^1(a)$.
Then
\begin{align*}
|h(b) - T_aH(b)|
&\leq \left| h(b) - 2\int_a^b (T_a^mf)'T_a^mg - T_a^mf(T_a^mg)'\right| \\
&\quad +
2 \int_a^b \left| (T_a^mf)'T_a^mg - T_a^{m-1} (f'g) \right|
+2 \int_a^b \left|
T_a^{m-1} (fg') - T_a^mfT_a^{m-1}(g')\right|.
\end{align*}
Notice that, for any $x$ between $a$ and $b$,
\begin{align*}
(T_a^mf)'(x)&T_a^mg(x) - T_a^{m-1} (f'g) (x)\\
&=
\left[ T_a^{m-1}(f')(x) T_a^{m-1}g(x) - T_a^{m-1} (f'g)(x) \right]
+ (T_a^mf)'(x) \frac{g^{(m)}(x)}{m!}(x-a)^m,
\end{align*}
and recall that $T_a^{m-1} (f'g)(x)$
is simply the polynomial
consisting
of those terms in the polynomial $T_a^{m-1}(f')(x) \cdot T_a^{m-1}g(x)$ which have degree at most $m-1$.
Therefore, the quantity in brackets above
is a polynomial in $(x-a)$ whose
coefficients are
linear combinations of derivatives of $f$ and $g$
and whose terms have degree at least $m$.
Thus there is a constant $C > 0$ depending only on the derivatives of $f$ and $g$ on $K$ such that
\begin{align*}
\left|(T_a^mf)'T_a^mg - T_a^{m-1} (f'g)\right|
\leq C
|b-a|^m
\end{align*}
between $a$ and $b$. Swapping $f$ and $g$ and repeating the above discussion, we find that
\begin{align}
|h(b) - T_aH(b)|
&\leq
\left| h(b) - 2\int_a^b (T_a^mf)'T_a^mg - (T_a^mg)'T_a^mf\right|
+
4 C
|b-a|^{m+1} \nonumber\\
&= \left| A\left(\gamma;a,b\right) \right|
+ 4 C
|b-a|^{m+1}.\label{e-Aswap}
\end{align}
If $a \leq b$, then
$|h(b) - T_aH(b)|$ is uniformly $o(|b-a|^m)$ on $K$
since $\gamma$ satisfies the $A/V$ condition on $K$.
If $a > b$, we choose a modulus of continuity $\alpha$
such that $f$ and $g$ satisfy
\eqref{e-taylor0}--\eqref{e-taylor2} on $[0,1]$,
we apply Lemma~\ref{l-AVswap} to \eqref{e-Aswap}, and then we apply the previous sentence.
It remains to check that $|H^k(b) - T_a^{m-k} H^k (b)|$
is uniformly $o(|b-a|^{m-k})$ on $K$ for $1 \leq k \leq m$,
but this follows easily from the definition of $H^k$ since, for such a $k$,
$$
|H^k(b) - T_a^{m-k} H^k (b)|
=
|\eta^{(k-1)}(b) - T_a^{m-k} (\eta^{(k-1)}) (b)|
$$
which is uniformly $o(|b-a|^{(m-1)-(k-1)})$ on $K$
since $\eta$ is of class $C^{m-1}$.
This proves the claim.
Therefore, according to Whitney's classical extension theorem
(Theorem~\ref{t-WhitClassic}),
there is a $C^m$ extension $\hat{h}$ of $H$.
In particular, we have $\hat{h}(x) = H^0(x) = h(x)$ for all $x \in K$,
and, by the definition of $H$,
$$
\hat{h}' = H^1 = \eta = 2(f'g-g'f) \quad \text{on } K.
$$
This completes the proof of the lemma.
\end{proof}
We will now record a version of the above result for $C^{m,\omega}$ curves
to be used in the proof of Theorem~\ref{t-finiteness}.
\begin{lemma}
\label{l-horizLip}
Suppose $f, g, h \in C^{m,\omega}(\mathbb{R})$
and $K \subseteq \mathbb{R}$ is compact.
If $\gamma =(f,g,h)$
satisfies the $A/V$ condition on $K$,
then there is some $\hat{h} \in C^{m,\omega}(\mathbb{R})$
such that $\hat{h}|_K = h|_K$
and
$$
\hat{h}' = 2(f'g-fg') \quad \text{ on } K.
$$
\end{lemma}
\begin{proof}
The proof of this lemma is nearly identical to the previous one.
The main difference is that we must use the fact that $\eta$ is now of class $C^{m-1,\omega}$
to conclude that
$$
|H^k(b) - T_a^{m-k} H^k (b)|
=
|\eta^{(k-1)}(b) - T_a^{m-k} (\eta^{(k-1)}) (b)|
\leq C \omega(|b-a|) |b-a|^{m-k}
$$
for some constant $C>0$.
Thus, we may apply Theorem~\ref{t-WhitClassicLip} in lieu of Theorem~\ref{t-WhitClassic} to construct a $C^{m,\omega}$ extension $\hat{h}$ of $h$ and conclude the lemma.
\end{proof}
\section{The discrete $A/V$ condition}
\label{s-dAV}
\begin{definition}
Fix $E \subseteq \mathbb{R}$ and $\gamma=(f,g,h):E \to \mathbb{R}^3$.
Suppose $X \subseteq E$ with $\# X = m+1$,
and
set
$P_f = P(X;f)$ and $P_g = P(X;g)$.
For any $a,b \in X$, define the {\em discrete area discrepancy} $A[X,\gamma;a,b]$
and {\em discrete velocity} $V[X,\gamma;a,b]$ as follows:
\begin{align*}
A[X,\gamma;a,b] &= h(b) - h(a) - 2 \int_{a}^{b} (P_f'P_g - P_g'P_f)\\
V[X,\gamma;a,b] &= \mathop\mathrm{diam}\nolimits(X)^{2m} + \mathop\mathrm{diam}\nolimits(X)^m \int_a^b \left(|P_f'|+ |P_g'|\right).
\end{align*}
\end{definition}
Note in particular that the definitions of $A[X,\gamma;a,b]$ and $V[X,\gamma;a,b]$ depend only on the functions $f$, $g$, and $h$ rather than a family of functions (as the definitions of $A(\gamma;a,b)$ and $V(\gamma;a,b)$ do).
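For instance, in the simplest case $m=1$ and $X = \{a,b\}$ with $a<b$, the interpolating polynomials $P_f$ and $P_g$ are affine, and evaluating the integral directly gives
\begin{align*}
A[X,\gamma;a,b] &= h(b) - h(a) - 2\left(f(b)g(a) - f(a)g(b)\right),\\
V[X,\gamma;a,b] &= (b-a)^2 + (b-a)\left(|f(b)-f(a)| + |g(b)-g(a)|\right),
\end{align*}
so that $A[X,\gamma;a,b]$ is exactly the $z$-coordinate of $\gamma(a)^{-1} * \gamma(b)$.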
\begin{definition}
For any set $E \subseteq \mathbb{R}$ and $\gamma:E \to \mathbb{R}^3$,
say that $\gamma$ satisfies the {\em discrete $A/V$ condition on $E$}
if
$$
\lim_{\substack{\mathop\mathrm{diam}\nolimits X \to 0 \\ a,b \in X \subseteq E, \, a<b \\ \# X = m+1}}
\frac{A[X,\gamma ; a,b]}{V[X,\gamma ; a,b]} = 0.
$$
\end{definition}
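To illustrate the definition, suppose $f$ and $g$ are polynomials of degree at most $m$ and $h \in C^1(\mathbb{R})$ satisfies $h' = 2(f'g - fg')$ on $\mathbb{R}$. Then $P_f = f$ and $P_g = g$ for every $X$ with $\#X = m+1$, so
\[
A[X,\gamma;a,b] = h(b) - h(a) - \int_a^b h' = 0,
\]
and $\gamma = (f,g,h)$ satisfies the discrete $A/V$ condition on any set $E$. By contrast, for $m=2$ the curve $\gamma(t) = (t,t^2,0)$ has $A[X,\gamma;a,b] = \tfrac{2}{3}(b^3 - a^3)$, while $V[X,\gamma;a,b] \lesssim \mathop\mathrm{diam}\nolimits(X)^3$ whenever $X \subseteq [1,2]$ and $\mathop\mathrm{diam}\nolimits(X) \leq 1$, so the discrete $A/V$ condition fails on $[1,2]$.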
We once again have a left invariance property for the discrete versions of $A$ and $V$.
\begin{lemma}
\label{l-AVleftinvDisc}
Suppose $\gamma :K \to \H$ for some $K \subseteq \mathbb{R}$,
and suppose $X \subseteq K$ with $\# X = m+1$.
Fix $a,b \in X$ and $p \in \H$.
Then
$$
A[X,p *\gamma;a,b] = A[X,\gamma;a,b]
\quad
\text{and}
\quad
V[X,p * \gamma;a,b] = V[X,\gamma;a,b].
$$
\end{lemma}
\begin{proof}
The proof is almost identical to that of Lemma~\ref{l-AVleftinv}
since
$P_{\hat{f}} = P_f + x$ and $P_{\hat{g}} = P_g + y$
and since $P_f(x) = f(x)$ and $P_g(x) = g(x)$ for any $x \in X$ by the definition of the interpolating polynomials.
\end{proof}
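In more detail, the cancellation behind the proof above is the following computation. Write $p = (x,y,z)$; in the convention consistent with the formulas used in this paper, the third coordinate of $p * \gamma$ is $z + h + 2(fy - xg)$, so the third coordinate of $p*\gamma$ at $b$ minus that at $a$ equals $h(b)-h(a) + 2y(f(b)-f(a)) - 2x(g(b)-g(a))$. Since $P_{\hat{f}} = P_f + x$ and $P_{\hat{g}} = P_g + y$, and since these polynomials interpolate $f+x$ and $g+y$ at $a,b \in X$,
\begin{align*}
2\int_a^b \left(P_{\hat{f}}'P_{\hat{g}} - P_{\hat{g}}'P_{\hat{f}}\right)
= 2\int_a^b \left(P_f'P_g - P_g'P_f\right) + 2y\left(f(b)-f(a)\right) - 2x\left(g(b)-g(a)\right),
\end{align*}
and the two extra terms cancel against those in the difference of third coordinates, giving $A[X,p*\gamma;a,b] = A[X,\gamma;a,b]$. The invariance of $V$ is immediate since $P_{\hat{f}}' = P_f'$ and $P_{\hat{g}}' = P_g'$.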
\subsection{Equivalence of the conditions for $C^m$ curves}
Here, we will compare the $A/V$ and discrete $A/V$ conditions for $C^m$ curves.
\begin{lemma}
\label{l-AV}
Suppose $\gamma \in C^m(\mathbb{R},\mathbb{R}^3)$ and $K \subseteq \mathbb{R}$ is a compact set containing at least $m+1$ points.
If $\gamma$ satisfies the $A/V$ condition on $K$,
then
$$
\lim_{\substack{\mathop\mathrm{diam}\nolimits X \to 0 \\ a,b \in X \subseteq K, \, a<b \\ \# X = m+1}}
\left| \frac{A(\gamma;a,b)}{V(\gamma;a,b)} - \frac{A[X,\gamma ; a,b]}{V[X,\gamma ; a,b]}\right| = 0.
$$
In particular, if $\gamma$ satisfies the $A/V$ condition on $K$, then
$\gamma$ satisfies the discrete $A/V$ condition on $K$.
\end{lemma}
\begin{proof}
Assume without loss of generality that $K \subseteq [0,1]$,
and write $\gamma = (f,g,h)$.
We may choose a modulus of continuity $\alpha$ and a constant $C>0$
such that
$f$ and $g$ satisfy
\eqref{e-poly2}
for any
$X \subseteq [0,1]$
with $\#X = m+1$
and
\eqref{e-taylor0}--\eqref{e-taylor2}
for all $x,y \in [0,1]$.
Suppose $X$ is a set of $m+1$ distinct
points in $K$. Choose $a,b \in X$ with $a <b$.
By Lemmas~\ref{l-AVleftinv} and \ref{l-AVleftinvDisc},
we may assume without loss of generality that $\gamma(a) = 0$.
For simplicity,
write $A = A(\gamma;a,b)$ and $V=V(\gamma;a,b)$,
and write
$A_X = A[X,\gamma;a,b]$ and $V_X = V[X,\gamma;a,b]$.
Then
\begin{equation}
\left| \frac{A}{V} - \frac{A_X}{V_X}\right|
\leq
\left| \frac{A}{V} - \frac{A}{V_X}\right|
+
\left| \frac{A - A_X}{V_X}\right|
=
\left| \frac{A}{V} \right|
\left| \frac{V_X - V}{V_X}\right|
+
\left| \frac{A - A_X}{V_X}\right|.
\label{e-AV}
\end{equation}
Write $\alpha := \alpha(b-a)$.
We will prove that $|(A-A_X)/V_X|$
is bounded by a constant multiple of $\alpha$
and that $|(V_X-V)/V_X|$ is bounded.
To begin, notice that
\begin{align}
\label{e-AminusA}
A - A_X
=
2 \int_a^b \left[ (P_f'P_g - (T_af)'T_ag) + ((T_ag)'T_af - P_g'P_f) \right].
\end{align}
Let us bound the first term in this integrand. Note that
\begin{align*}
P_f'P_g - (T_af)'T_ag
=
P_f'(P_g - T_ag)
&+P_g(P_f' - (T_af)')
\\
&\quad +((T_af)' - P_f')(P_g-T_ag).
\end{align*}
Now
Lemma~\ref{l-poly} gives
\begin{align}
\label{e-1st}
|P_g - T_ag|
\leq
|P_g - g| + |g - T_ag|
\leq
2C \alpha \mathop\mathrm{diam}\nolimits(X)^m
\end{align}
and
\begin{align}
\label{e-2nd}
|P_f' - (T_af)'|
\leq
|P_f' - f'| + |f' - (T_af)'|
\leq
(C+1)\alpha \mathop\mathrm{diam}\nolimits(X)^{m-1}
\end{align}
on $[a,b]$.
Therefore,
\begin{align}
\label{e-3rd}
|(T_af)' - P_f'||P_g-T_ag| \leq 2C(C+1)\alpha^2 \mathop\mathrm{diam}\nolimits(X)^{2m-1}
\end{align}
on $[a,b]$.
Combining \eqref{e-1st}, \eqref{e-2nd}, and \eqref{e-3rd} gives
\begin{align*}
\int_a^b |P_f'P_g - (T_af)'T_ag|
\lesssim_C
\alpha \mathop\mathrm{diam}\nolimits(X)^m \int_a^b |P_f'|
&+
\alpha \mathop\mathrm{diam}\nolimits(X)^{m-1} \int_a^b |P_g|\\
&\quad +
\alpha^2 \mathop\mathrm{diam}\nolimits(X)^{2m}.
\end{align*}
According to Corollary~2.11 in \cite{ZimSpePinWhitney}
applied to the polynomial $P_g$,
we have
$$
\int_a^b |P_g|
\leq 8m^2(b-a)\int_a^b |P_g'|.
$$
Hence
\begin{align*}
\int_a^b |P_f'P_g - (T_af)'T_ag|
\lesssim_C
\alpha^2 \mathop\mathrm{diam}\nolimits(X)^{2m}
+
(1+8m^2)\alpha \mathop\mathrm{diam}\nolimits(X)^m \int_a^b |P_f'| + |P_g'|.
\end{align*}
Similar arguments give
\begin{align*}
\int_a^b |(T_ag)'T_af - P_g'P_f|
\lesssim_C
\alpha^2 \mathop\mathrm{diam}\nolimits(X)^{2m}
+
(1+8m^2)\alpha \mathop\mathrm{diam}\nolimits(X)^m \int_a^b |P_f'| + |P_g'|,
\end{align*}
and inputting these bounds into \eqref{e-AminusA} gives
\begin{align*}
|A - A_X |
&\lesssim_{m,C}
\alpha^2 \mathop\mathrm{diam}\nolimits(X)^{2m}
+
\alpha \mathop\mathrm{diam}\nolimits(X)^m \int_a^b |P_f'| + |P_g'|\\
&\lesssim_{m,C}
(\alpha^2
+
\alpha) V_X.
\end{align*}
This bounds the first term in \eqref{e-AV}.
To bound the second term, notice that
\begin{align*}
|V_X - V|
&\leq
\mathop\mathrm{diam}\nolimits(X)^{2m}
+
(b-a)^{2m}
+
\left|\mathop\mathrm{diam}\nolimits(X)^m - (b-a)^m\right|
\int_a^b\left(|P_f'| + |P_g'|\right)\\
& \hspace{1.5in}
+
(b-a)^m\int_a^b\left||P_f'| - |(T_af)'| + |P_g'| - |(T_ag)'|\right|.
\end{align*}
As above, we have
\begin{align*}
\left||P_f'| - |(T_af)'|\right| \leq |P_f' - (T_af)'|
\leq(C+1)
\alpha \mathop\mathrm{diam}\nolimits(X)^{m-1}
\end{align*}
and
\begin{align*}
\left||P_g'| - |(T_ag)'|\right| \leq |P_g' - (T_ag)'|
\leq(C+1)
\alpha \mathop\mathrm{diam}\nolimits(X)^{m-1}.
\end{align*}
Hence,
\begin{align*}
|V_X - V|
\lesssim_C
\mathop\mathrm{diam}\nolimits(X)^{2m}
+
\mathop\mathrm{diam}\nolimits(X)^{m} \int_a^b\left(|P_f'| + |P_g'|\right)
=
V_X.
\end{align*}
Thus, by \eqref{e-AV},
$$
\left|\frac{A}{V} - \frac{A_X}{V_X} \right|
\lesssim_{m,C} \left|\frac{A}{V}\right| + \alpha.
$$
Since $\gamma$ satisfies the $A/V$ condition on $K$, the proof is complete.
\end{proof}
\begin{lemma}
\label{l-AV2}
Suppose $\gamma \in C^m(\mathbb{R},\mathbb{R}^3)$
and $K \subseteq \mathbb{R}$ is a compact set containing at least $m+1$ points
with finitely many isolated points.
If $\gamma$ satisfies the discrete $A/V$ condition on $K$, then
it satisfies the $A/V$ condition on $K$.
Moreover, if $m=1$, then $K$ can be any compact set containing at least 2 points.
\end{lemma}
\begin{proof}
Again,
assume without loss of generality that $K \subseteq [0,1]$.
Choose
a modulus of continuity $\alpha$
as in the proof of Lemma~\ref{l-AV}
which also satisfies
\begin{equation}
\label{e-AV-discrete-ass}
\frac{A[X,\gamma ; a,b]}{V[X,\gamma ; a,b]} \leq \alpha(\mathop\mathrm{diam}\nolimits(X))
\end{equation}
for all $a,b \in X \subseteq K$ with $a<b$ and $\# X = m+1$.
Let $a,b \in K$ with $a<b$.
By our assumption on $K$,
we may assume that either $a$ or $b$ is a limit point of $K$.
Thus we can choose a finite set $X$ consisting of $a$, $b$, and $m-1$ other distinct points in $K$ within $(b-a)/2$ of $a$ or $b$.
In particular, $\mathop\mathrm{diam}\nolimits (X) \leq 2(b-a)$.
(When $m=1$, we may skip this argument since $X = \{a,b\}$, and so $\mathop\mathrm{diam}\nolimits(X)=b-a$.
In particular, there is no need to concern ourselves with limit points or isolated points in this case.)
The proof of this lemma will now be nearly identical to that of the previous lemma.
We will only note the main differences.
Write $A$, $V$, $A_X$, $V_X$, $\alpha$, and $C$ as before.
Also, as before (but slightly differently), write
\begin{align}
\left| \frac{A}{V} - \frac{A_X}{V_X}\right|
\leq
\left| \frac{A_X}{V_X} \right|
\left| \frac{V - V_X}{V}\right|
+
\left| \frac{A - A_X}{V}\right|.
\label{e-AV2}
\end{align}
Again, Lemmas~\ref{l-AVleftinv} and \ref{l-AVleftinvDisc} ensure that
we may assume $\gamma(a) = 0$.
With \eqref{e-AminusA} in mind, write
\begin{align*}
P_f'P_g - (T_af)'T_ag
=
(T_af)'(P_g - T_ag)
&+T_ag(P_f' - (T_af)')
\\
&\quad +(P_f' - (T_af)')(P_g-T_ag).
\end{align*}
By applying Corollary~2.11 in \cite{ZimSpePinWhitney}
to the polynomials $T_ag$ and $T_af$,
we may use \eqref{e-1st}, \eqref{e-2nd}, and \eqref{e-3rd} and
the fact that $\mathop\mathrm{diam}\nolimits(X) \leq 2(b-a)$
to conclude as before that
\begin{align*}
|A - A_X|
&\lesssim_{m,C}
\alpha^2 (b-a)^{2m}
+
\alpha(b-a)^m \int_a^b |(T_af)'| + |(T_ag)'|\\
&\lesssim_{m,C}
(\alpha^2
+
\alpha) V.
\end{align*}
Moreover, arguing as above using the fact that $\mathop\mathrm{diam}\nolimits(X) \leq 2(b-a)$, we have
\begin{align*}
|V - V_X|
&\leq
(b-a)^{2m}
+
\mathop\mathrm{diam}\nolimits(X)^{2m}
+
\left|(b-a)^m - \mathop\mathrm{diam}\nolimits(X)^m\right|
\int_a^b\left(|(T_af)'| + |(T_ag)'|\right)\\
& \hspace{1.5in}
+
\mathop\mathrm{diam}\nolimits(X)^m\int_a^b\left||(T_af)'| - |P_f'|+ |(T_ag)'| - |P_g'| \right|\\
&\lesssim_m
(b-a)^{2m} + (b-a)^m
\int_a^b\left(|(T_af)'| + |(T_ag)'|\right) = V.
\end{align*}
By \eqref{e-AV-discrete-ass}, the proof is complete.
\end{proof}
\subsection{Stronger equivalence for $C^{m,\omega}$ curves}
We now record two analogous results which will be important in the proof of the finiteness principle Theorem~\ref{t-finiteness}.
The additional regularity on $|A/V|$ provides more control on the difference between the $A/V$ fractions.
\begin{lemma}
\label{l-AVLip}
Suppose $\gamma \in C^{m,\omega}(\mathbb{R},\mathbb{R}^3)$ and $K \subseteq \mathbb{R}$ is compact,
and suppose there is a constant $M>0$ such that $|A(\gamma;a,b)/V(\gamma;a,b)| \leq M \omega(b-a)$ for all $a,b \in K$ with $a<b$.
Then for any $X \subseteq K$ with $\#X = m+1$ and $a,b \in X$ with $a<b$, we have
$$
\left| \frac{A(\gamma;a,b)}{V(\gamma;a,b)} - \frac{A[X,\gamma ; a,b]}{V[X,\gamma ; a,b]}\right|
\leq C_0
\omega(\mathop\mathrm{diam}\nolimits(X))
$$
where $C_0 \geq 1$ is a polynomial combination of $m$, $M$, and $\Vert \gamma \Vert$.
\end{lemma}
\begin{proof}
The proof of this lemma is nearly identical to
that of Lemma~\ref{l-AV}.
Indeed, we may simply replace all instances of $\alpha$ with a constant multiple of $\omega$.
Our added assumption that $|A/V| \leq M \omega(b-a) \leq M \omega(\mathop\mathrm{diam}\nolimits(X))$
completes the proof.
\end{proof}
The proof of the final lemma follows from the proof of Lemma~\ref{l-AV2} in the same way that the proof of Lemma~\ref{l-AVLip} followed from the proof of Lemma~\ref{l-AV}.
\begin{lemma}
\label{l-AVLip2}
Suppose $\gamma \in C^{m,\omega}(\mathbb{R},\mathbb{R}^3)$ and $K \subseteq \mathbb{R}$ is compact
with finitely many isolated points.
Assume that there
is a constant $M>0$ such that
$|A[Y,\gamma;a,b]/V[Y,\gamma;a,b]| \leq M \omega(\mathop\mathrm{diam}\nolimits(Y))$
for all $Y \subseteq K$ with $\# Y = m+1$ and $a,b \in Y$ with $a<b$.
For any $a,b \in K$ with $a<b$,
there is a set $X \subseteq K$ containing $a$ and $b$ with $\# X = m+1$ and $\mathop\mathrm{diam}\nolimits(X) \leq 2(b-a)$
such that
$$
\left| \frac{A(\gamma;a,b)}{V(\gamma;a,b)} - \frac{A[X,\gamma ; a,b]}{V[X,\gamma ; a,b]}\right|
\leq C_1
\omega(b-a)
$$
where $C_1 \geq 1$ is a polynomial combination of $m$, $M$, and $\Vert \gamma \Vert$.
\end{lemma}
\section{Proofs of the main theorems}
\label{s-proofs}
In this section, we will prove Theorems~\ref{c-m1}, \ref{t-supermain}, \ref{t-supermainWf}, and \ref{t-finiteness}.
\subsection{Answering Whitney's question in $\H$ for $n =1$}
We will first observe that the assumptions of Theorems~\ref{c-m1} and \ref{t-supermain} are necessary
even when $K$ is an arbitrary compact set.
\begin{proposition}
If
$\gamma \in C^m_\mathbb{H}(\mathbb{R})$
and $K \subseteq \mathbb{R}$ is compact,
then
\begin{enumerate}
\item \label{nec1}
the $m$th divided differences of $\gamma$ converge uniformly in $K$, and
\item \label{nec3} $\gamma$ satisfies the discrete $A/V$ condition on $K$.
\end{enumerate}
\end{proposition}
\begin{proof}
Condition {\em (\ref{nec1})} follows from Theorem~\ref{t-WhitFin},
and Theorem~\ref{t-HeisWhit} implies that $\gamma$ satisfies the $A/V$ condition on $K$.
Thus Lemma~\ref{l-AV} gives {\em (\ref{nec3})}.
\end{proof}
We will now prove sufficiency in Theorem~\ref{t-supermain}.
Our restriction on the set $K$ appears here
only because we will be applying Lemma~\ref{l-AV2}.
\begin{theorem}
\label{t-backward}
Suppose $K \subseteq \mathbb{R}$ is compact with finitely many isolated points, and fix $\gamma :K \to \mathbb{H}$.
Assume
\begin{enumerate}
\item
the $m$th divided differences of $\gamma$ converge uniformly in $K$, and
\item $\gamma$ satisfies the discrete $A/V$ condition on $K$.
\end{enumerate}
Then there is a horizontal curve $\Gamma \in C^m(\mathbb{R},\mathbb{R}^3)$ such that $\Gamma|_K = \gamma$.
\end{theorem}
\begin{proof}
We may clearly assume that $K$ contains at least $m+1$ points.
By {\em (1)} and Theorem~\ref{t-WhitFin},
there is some $\tilde{\gamma} =(f,g,h) \in C^m(\mathbb{R},\mathbb{R}^3)$ such that $\tilde{\gamma}|_K = \gamma$.
By {\em (2)} and Lemma~\ref{l-AV2},
$\tilde{\gamma}$ satisfies the $A/V$ condition on $K$.
Therefore, Lemma~\ref{l-horiz}
implies that there is some $\hat{h} \in C^m(\mathbb{R})$
such that $\hat{h}|_K = h|_K$,
and
$\hat{h}' = 2(f'g-fg')$ on $K$.
By the Leibniz rule, then, we have
\begin{equation*}
\hat{h}^{(k)} = 2 \sum_{i=0}^{k-1} \binom{k-1}{i}
\left(f^{(k-i)}g^{(i)} - g^{(k-i)}f^{(i)}\right)
\quad
\text{ on } K.
\end{equation*}
Set $\hat{\gamma} = (f,g,\hat{h}):\mathbb{R} \to \mathbb{H}$.
By our above construction, we have that $\hat{\gamma}|_K = \tilde{\gamma}|_K=\gamma$
and that the $m$-jets $(F^k,G^k,H^k)_{k=0}^m = \left( \hat{\gamma}^{(k)} \right)_{k=0}^m$ on $K$ satisfy the assumptions of
Theorem~\ref{t-HeisWhit}.
Therefore, there is a $C^m$ horizontal curve $\Gamma:\mathbb{R} \to \mathbb{H}$ such that $\Gamma|_K = \hat{\gamma}|_K = \gamma$.
\end{proof}
\subsection{The special case $m=1$}
Here we prove Theorem~\ref{c-m1} which holds for arbitrary compact sets.
\begin{proof}
The necessity follows from Theorem~1.7 in \cite{ZimWhitney}.
Now fix a compact set $K \subseteq \mathbb{R}$ and
a continuous map $\gamma=(f,g,h):K \to \mathbb{R}^3$
such that the Pansu difference quotients of $\gamma$ converge uniformly on $K$ to horizontal points.
That is,
for every $a \in K$, there is a horizontal point $p_a=(x,y,0) \in \mathbb{R}^2 \times \{ 0 \}$ such that
$$
\lim_{\substack{|b-a| \to 0 \\ a,b \in K}} \left| \delta_{1/(b-a)} \left(\gamma(a)^{-1} * \gamma(b) \right) - p_a \right| = 0.
$$
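For the computations below, recall the convention for the group operation, inverse, and dilations on $\H$ (this is the convention consistent with the formulas appearing in this proof):
\begin{equation*}
(x,y,z)*(x',y',z') = \left(x+x',\, y+y',\, z+z'+2(x'y - xy')\right), \qquad (x,y,z)^{-1} = (-x,-y,-z),
\end{equation*}
and $\delta_r(x,y,z) = (rx, ry, r^2z)$ for $r \neq 0$.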
By the definition of the group law,
this implies that the classical Euclidean difference quotients of $f$ and $g$ converge uniformly in $K$.
Moreover,
since the $z$-coordinate of $\gamma(a)^{-1} * \gamma(b)$ is
$
h(b) - h(a) - 2 (f(b)g(a) - f(a)g(b))
$,
we have
\begin{align*}
|h(b)-h(a)| &\leq |h(b) - h(a) - 2 (f(b)g(a) - f(a)g(b))|
+
2|f(b)g(a) - f(a)g(b)|\\
&\leq \alpha(|b-a|)(|b-a|^2)
+ 2 |g(a)||f(b)-f(a)| + 2|f(a)||g(a)-g(b)|\\
&\leq \alpha(|b-a|)\left(|b-a|^2 + |b-a|\right)
\end{align*}
for some modulus of continuity $\alpha$.
That is, the difference quotients of $h$ converge uniformly on $K$.
In other words, the 1st divided differences of $\gamma$ converge uniformly in $K$,
and so there is some $\tilde{\gamma} = (\tilde{f},\tilde{g},\tilde{h}) \in C^1(\mathbb{R},\mathbb{R}^3)$ such that $\tilde{\gamma}|_K = \gamma$.
In particular, we have that $p_a = (\tilde{f}'(a),\tilde{g}'(a),0)$ for any $a \in K$.
Now suppose $X=\{a,b\} \subseteq K$ with $a<b$,
and
set
$P_f = P(X;f)$ and $P_g = P(X;g)$.
Since $P_f$ and $P_g$ are affine,
we have
\begin{align*}
A[X,\gamma;a,b]
&= h(b) - h(a) - 2 \int_{a}^{b} (P_f'P_g - P_g'P_f)\\
&= h(b) - h(a) - 2 (f(b)g(a) - f(a)g(b)),\\
V[X,\gamma;a,b] &= \mathop\mathrm{diam}\nolimits(X)^{2} + \mathop\mathrm{diam}\nolimits(X) \int_a^b \left|\tfrac{f(b)-f(a)}{b-a}\right|+ \left|\tfrac{g(b)-g(a)}{b-a}\right|
\geq \mathop\mathrm{diam}\nolimits(X)^2
= (b-a)^2.
\end{align*}
Therefore,
since $\tilde{\gamma} = \gamma$ on $K$,
$$
\left| \frac{A[X,\tilde{\gamma};a,b] }{V[X,\tilde{\gamma};a,b]}\right|
=
\left| \frac{A[X,\gamma;a,b] }{V[X,\gamma;a,b]}\right|
\leq
\frac{|h(b) - h(a) - 2 (f(b)g(a) - f(a)g(b))|}{(b-a)^2}
$$
which is precisely the absolute value of the $z$-coordinate of the Pansu difference quotient
$\delta_{1/(b-a)} \left(\gamma(a)^{-1} * \gamma(b) \right)$.
This vanishes uniformly on $K$ as $|b-a| \to 0$ by assumption,
so $\tilde{\gamma}$ satisfies the discrete $A/V$ condition on $K$.
It follows from Lemma~\ref{l-AV2}
that
$\tilde{\gamma}$ satisfies the $A/V$ condition on $K$ as well since $m=1$.
Hence, Lemma~\ref{l-horiz} gives
an $\hat{h} \in C^1(\mathbb{R})$ such that
$\hat{h}|_K = \tilde{h}|_K = h$
and $\hat{h}' = 2(\tilde{f}'\tilde{g}-\tilde{f}\tilde{g}')$ on $K$.
Setting $\hat{\gamma} = (\tilde{f},\tilde{g},\hat{h})$,
it follows that $\hat{\gamma}|_K$ and $\hat{\gamma}'|_K$
satisfy the assumptions of Theorem~1.7 in \cite{ZimWhitney}.
Thus there is a horizontal curve $\Gamma \in C^1(\mathbb{R},\mathbb{R}^3)$ such that $\Gamma|_K = \gamma$.
\end{proof}
\subsection{Answering Whitney's question in $\H$ using $W$}
Here we prove Theorem~\ref{t-supermainWf}.
\begin{proof}
Necessity once again follows from Theorems~\ref{t-WhitFin} and \ref{t-HeisWhit}.
Assume now that $K$ is compact, $\gamma:K \to \H$ is continuous, and the $m$th divided differences of $\gamma$ converge uniformly on $K$.
In particular, $W\gamma = (f,g,h)$ exists.
Assume $W\gamma$ satisfies the $A/V$ condition on $K$.
Lemma~\ref{l-horiz} then gives a function $\hat{h} \in C^m(\mathbb{R})$ such that $\hat{h}|_K = h|_K$ and $\hat{h}' = 2(f'g-fg')$
on $K$.
Once again defining $\hat{\gamma} = (f,g,\hat{h})$,
the $m$-jet $\left( \hat{\gamma}^{(k)} \right)_{k=0}^m$ on $K$ satisfies the assumptions of
Theorem~\ref{t-HeisWhit},
so there is a horizontal curve $\Gamma \in C^m(\mathbb{R},\mathbb{R}^3)$ such that $\Gamma|_K = \hat{\gamma}|_K = (W\gamma)|_K = \gamma$.
\end{proof}
\subsection{The finiteness principle}
This subsection is devoted to the proof of Theorem~\ref{t-finiteness}.
\begin{proof}
Suppose $K \subseteq \mathbb{R}$ is compact with finitely many isolated points.
We may assume that $K$ contains at least $m+2$ points.
Suppose $M>0$ and $\gamma:K\to \H$ satisfy the following:
for any $X \subseteq K$ with $\#X = m+2$,
there is a function $\Gamma_X \in C^{m,\omega}(\mathbb{R},\mathbb{R}^3)$
such that $\Gamma_X = \gamma$ on $X$,
$
\sup_X \Vert \Gamma_X \Vert =: C_2 < \infty,
$
and
\begin{equation}
\label{e-assump}
\left| \frac{A(\Gamma_X;a,b)}{V(\Gamma_X;a,b)} \right| \leq M \omega(b-a)
\quad \text{ for all } a,b \in X \text{ with } a<b.
\end{equation}
Suppose $X=\{x_0,\dots,x_{m+1}\}$ is
a set of $m+2$ distinct points in $K$
with $x_0 < x_1 < \cdots < x_{m+1}$.
Choose $\Gamma_X \in C^{m,\omega}(\mathbb{R},\mathbb{R}^3)$ as above.
Then
\begin{align*}
|\gamma[X]|
=|\Gamma_X[X]|
= \frac{|\Gamma_X[x_1,\dots,x_{m+1}] - \Gamma_X[x_0,\dots,x_{m}]|}{|x_{m+1} - x_0|}
&= \frac{|\Gamma_X^{(m)}(y_1) - \Gamma_X^{(m)}(y_2)|}{m!\,|x_{m+1} - x_0|}\\
&\leq \frac{C_2}{m!} \frac{\omega(\mathop\mathrm{diam}\nolimits(X))}{\mathop\mathrm{diam}\nolimits(X)}
\end{align*}
for some $y_1 \in (x_1,x_{m+1})$ and $y_2 \in (x_0,x_{m})$
by the mean value theorem for divided differences.
Therefore, there is some $\tilde{\gamma} =(f,g,h) \in C^{m,\omega}(\mathbb{R},\mathbb{R}^3)$ such that $\tilde{\gamma}|_K = \gamma$.
(See, for example, Theorem~A in \cite{BruShv}.)
We will now observe that $\tilde{\gamma}$ satisfies
\begin{align}
\left| \frac{A[Y,\tilde{\gamma};a,b]}{V[Y,\tilde{\gamma};a,b]} \right|
\lesssim_{m,M,C_2} \omega(\mathop\mathrm{diam}\nolimits(Y)) \label{e-discretechain}
\end{align}
for any $a,b \in Y \subseteq K$ with $a<b$ and $\# Y = m+1$.
Indeed, choose such a set $Y$, set $X = Y \cup \{ x \}$ for some $x \in K \setminus Y$,
and choose $\Gamma_X$ as above.
Since $\Gamma_X = \gamma = \tilde{\gamma}$ on $Y$,
it follows from
\eqref{e-assump} and
Lemma~\ref{l-AVLip} that,
for any $a,b \in Y$ with $a<b$,
\begin{align*}
\left| \frac{A[Y,\tilde{\gamma};a,b]}{V[Y,\tilde{\gamma};a,b]} \right|
=
\left| \frac{A[Y,\Gamma_X;a,b]}{V[Y,\Gamma_X;a,b]} \right|
&\leq
\left| \frac{A[Y,\Gamma_X;a,b]}{V[Y,\Gamma_X;a,b]} - \frac{A(\Gamma_X;a,b)}{V(\Gamma_X;a,b)} \right|
+
\left| \frac{A(\Gamma_X;a,b)}{V(\Gamma_X;a,b)} \right| \nonumber\\
&\lesssim_{m,M,C_2} \omega(\mathop\mathrm{diam}\nolimits(Y)).
\end{align*}
In particular, $\tilde{\gamma}$ satisfies the discrete $A/V$ condition on $K$ and thus,
by Lemma~\ref{l-AV2}, it satisfies the $A/V$ condition on $K$ as well.
Now, by Lemma~\ref{l-horizLip},
there is some
$\hat{h} \in C^{m,\omega}(\mathbb{R})$
such that $\hat{h}|_K = h|_K$
and $\hat{h}' = 2(f'g-fg')$ on $K$.
Set $\hat{\gamma} = (f,g,\hat{h})$.
As before, the $m$-jet
$\left( \hat{\gamma}^{(k)} \right)_{k=0}^m$ on $K$
satisfies
conditions {\em (1)} and {\em (2)} from Theorem~\ref{t-HeisWhitLip}.
We will now verify condition {\em (3)}.
Note that \eqref{e-discretechain} holds with $\hat{\gamma}$ in place of $\tilde{\gamma}$
since
$\hat{\gamma} = \tilde{\gamma}$ on $K$.
Fix $a,b \in K$ with $a<b$.
According to Lemma~\ref{l-AVLip2},
there is a set $Y \subseteq K$ containing $a$ and $b$ with $\# Y = m+1$
and $\mathop\mathrm{diam}\nolimits(Y) \leq 2(b-a)$
such that
\begin{align*}
\left| \frac{A(\hat{\gamma};a,b)}{V(\hat{\gamma};a,b)} \right|
\leq
\left| \frac{A(\hat{\gamma};a,b)}{V(\hat{\gamma};a,b)} - \frac{A[Y,\hat{\gamma};a,b]}{V[Y,\hat{\gamma};a,b]} \right|
+ \left| \frac{A[Y,\hat{\gamma};a,b]}{V[Y,\hat{\gamma};a,b]} \right|
\lesssim_{m,M,C_2} \omega(b-a).
\end{align*}
We may therefore apply Theorem~\ref{t-HeisWhitLip}
to find a horizontal curve $\Gamma \in C^{m,\sqrt{\omega}}(\mathbb{R},\mathbb{R}^3)$ such that $\Gamma|_K = \hat{\gamma}|_K = \tilde{\gamma}|_K = \gamma$.
\end{proof}
| {
"timestamp": "2021-07-12T02:22:50",
"yymm": "2107",
"arxiv_id": "2107.04554",
"language": "en",
"url": "https://arxiv.org/abs/2107.04554",
"abstract": "Consider the sub-Riemannian Heisenberg group $\\mathbb{H}$. In this paper, we answer the following question: given a compact set $K \\subseteq \\mathbb{R}$ and a continuous map $f:K \\to \\mathbb{H}$, when is there a horizontal $C^m$ curve $F:\\mathbb{R} \\to \\mathbb{H}$ such that $F|_K = f$? Whitney originally answered this question for real valued mappings, and Fefferman provided a complete answer for real valued functions defined on subsets of $\\mathbb{R}^n$. We also prove a finiteness principle for $C^{m,\\sqrt{\\omega}}$ horizontal curves in the Heisenberg group in the sense of Brudnyi and Shvartsman.",
"subjects": "Metric Geometry (math.MG)",
"title": "Whitney's Extension Theorem and the finiteness principle for curves in the Heisenberg group"
} |
https://arxiv.org/abs/2302.03715 | Decompositions and Terracini loci of cubic forms of low rank | We study Waring rank decompositions for cubic forms of rank $n+2$ in $n+1$ variables. In this setting, we prove that if a concise form has more than one non-redundant decomposition of length $n+2$, then all such decompositions share at least $n-3$ elements, and the remaining elements lie in a special configuration. Following this result, we give a detailed description of the $(n+2)$-th Terracini locus of the third Veronese embedding of $n$-dimensional projective space. | \section{Introduction}
For an integer $n$, let $V$ be a complex vector space of dimension $n+1$ and let $\mathbb{P}^n = \mathbb{P} V$ be the projective space of lines in $V$. Identify $V$ with the space of linear forms on $V^*$ and let $S^d V$ be the space of homogeneous polynomials of degree $d$ on $V^*$. The Waring rank of a form $F \in S^d V$ is
\[
\mathrm{R}(F) = \min \{ r : F = L_1^d + \cdots + L_r^d \text{ for some $L_1 , \dots , L_r \in V$}\}.
\]
The $d$-th Veronese embedding of $\mathbb{P} V$ is the map $v_d : \mathbb{P} V \to \mathbb{P} S^d V$ defined by $v_d([L]) = [L^d]$. If $A$ is a set of $r$ points in $\mathbb{P} V$, we say that $A$ has length $r$, and we write $\ell(A) = r$. Geometrically, the Waring rank of $F$ can be equivalently defined as
\[
\mathrm{R}(F) = \min \{ r : [F] \in \langle v_d(A) \rangle \text{ for a set $A \subseteq \mathbb{P} V$ with $\ell(A) = r$}\}.
\]
Here $\langle - \rangle$ denotes the (projective) linear span. A finite set $A$ with $\ell(A) = r$ such that $[F] \in \linspan{v_d(A)}$ is a \emph{decomposition} of $F$ of length $r$. If $A = \{ [L_1] , \dots , [L_r]\}$, this is equivalent to the existence of an expression
\begin{equation}\label{eqn: decomposition F}
F = \alpha_1 L_1^d + \cdots + \alpha_r L_r^d
\end{equation}
for some $\alpha_1 , \dots , \alpha_r \in \mathbb{C}$. We say that the decomposition is \emph{non-redundant} (or irredundant) if there is no proper subset $A' \subsetneq A$ such that $[F] \in \linspan{v_d(A')}$; this is equivalent to saying that $L_1^d , \dots , L_r^d$ are linearly independent and no coefficient $\alpha_i$ in \eqref{eqn: decomposition F} is $0$. We say that the decomposition is \emph{minimal} if its length is the minimal possible, namely $r = \mathrm{R}(F)$. If $F$ is a form of rank $r$ which has a unique minimal decomposition of length $r$, we say that $F$ is identifiable.
We say that $F \in S^d V$ is concise if there is no proper subspace $V' \subsetneq V$ such that $F \in S^d V'$; equivalently, there is no change of coordinates in $V$ after which $F$ can be written in fewer than $n+1$ variables. It is a classical fact that $F$ is concise if and only if its first order partial derivatives are linearly independent; equivalently $F$ is concise if and only if its partial derivatives of order $d-1$ span $V$; we refer to \cite{Car:ReducingNumberVariables} for a proof of this fact. If $F$ is concise, then every decomposition $A$ of $F$ satisfies $\langle A \rangle = \mathbb{P} V$; in other words, no decomposition of $F$ is contained in a hyperplane. In particular $\mathrm{R}(F) \geq n+1$. A partial converse of this property in low rank is given in \autoref{lemma: minimal plus one concise}.
For a subset $A \subseteq \mathbb{P} V$ with $\ell(A) = r$, we say that $A$ is linearly independent if $\dim \langle A \rangle = \ell(A) - 1$, which is the maximum possible value; here $\dim$ is the dimension in projective space. The Kruskal rank of $A$ is the largest integer $k$ such that every subset of $k$ elements of $A$ is linearly independent. We say that $A$ is in \emph{linear general position} (LGP) if every subset of at most $n+1$ elements of $A$ is linearly independent, namely the Kruskal rank of $A$ is $\min \{ \ell(A) , n+1\}$, which is the maximum possible value.
In this work, we study non-redundant decompositions of length $n+2$ for concise cubic forms in $n+1$ variables. This is the first case where Kruskal's criterion does not guarantee uniqueness of the decomposition and in fact there are examples of forms having multiple decompositions, see e.g. \cite{Derk:KruskalSharp}. However, our main result shows that if the form is concise, then non-uniqueness of a decomposition can only occur under very special conditions, and in particular very low Kruskal rank.
\begin{theorem}\label{thm: main theorem}
Let $V$ be a vector space of dimension $n+1$, with $n \geq 1$. Let $F \in S^3 V$ be concise with $\mathrm{R}(F) \leq n+2$. Then one of the following occurs:
\begin{enumerate}[(I)]
\item there is a unique non-redundant decomposition $A$ of length $n+2$ for $F$; in this case the Kruskal rank of $A$ is at least $4$;
\item there are infinitely many non-redundant decompositions of $F$ of length $n+2$; any two decompositions $A,B$ of length $n+2$ intersect in at least $n-3$ points; moreover $(A \setminus B) \cup (B \setminus A)$ lies in the union of two lines; in particular the Kruskal rank of any decomposition is $2$;
\item there are infinitely many non-redundant decompositions of $F$ of length $n+2$; any two decompositions $A,B$ of length $n+2$ intersect in at least $n-2$ points; moreover $(A \setminus B) \cup (B \setminus A)$ is contained in the union of two planes; in particular the Kruskal rank of any decomposition is at most $3$.
\end{enumerate}
\end{theorem}
\autoref{thm: main theorem} is built on an induction argument for $n \geq 4$. The base of the induction is given by the cases $n=1,2,3$, where special configurations of points yield some pathological behaviour. The general cases are obtained as extensions of a special configuration on a subspace.
As a consequence of \autoref{thm: main theorem}, in \autoref{sec:terraciniloci}, we obtain a detailed description of the $(n+2)$-th Terracini locus of the Veronese variety $v_3(\mathbb{P}^n)$ in the concise setting.
The study of identifiability of homogeneous polynomials, and more generally of tensors, is a classical topic in algebraic geometry and invariant theory. Generic forms in $S^d V$ are identifiable only for a few combinations of the degree $d$ and the number of variables $n+1$, namely $(d,n) = (2k-1,1), (3,3), (5,2)$: identifiability in these cases was known classically \cite{Sylv:PrinciplesCalculusForms,Hilblet}; the full proof that these are the only cases where generic identifiability holds is recent \cite{GalMel:IdentifiabilityHomPoly}. On the other hand, generic forms of subgeneric rank are identifiable with only few exceptions \cite{BocChiOtt:Refined_identifiability_tensors,ChiOttVan:GenericIdentifSubgenRank}. In general, however, little is known about the explicit genericity condition providing identifiability. The most general tool in this context is Kruskal's criterion \cite{Krusk:ThreeWayArrays}, which guarantees uniqueness of decomposition under the hypothesis of large enough \emph{Kruskal rank}; a variant of Kruskal's criterion is given in \cite{LovPet:GeneralizationKruskal}. A geometric study of the subvarieties of non-identifiable forms, in special cases, is given in \cite{AngChiVan:IdentifiabilityBeyondKruskal,AngChi:IdentifiabilityTernaryForms,ChiOtt:FootnoteFootnote}, which are based on properties of the Hilbert function of points \cite{Chi:HilbertFunctionsTensorAn} and on apolarity theory \cite{IarrKan:PowerSumsBook}.
\subsection*{Acknowledgements} This work is partially supported by the Thematic Research Programme ``Tensors: geometry, complexity and quantum entanglement'', University of Warsaw, Excellence Initiative -- Research University and the Simons Foundation Award No. 663281 granted to the Institute of Mathematics of the Polish Academy of Sciences for the years 2021-2023. We thank IMPAN and the AGATES organizers for providing an excellent research environment throughout the semester.
\section {Preliminaries}
Let $X \subseteq \mathbb{P}^N$ be an irreducible projective variety. The $r$-th secant variety of $X$ is
\[
\sigma_r(X) = \overline{\{ p \in \mathbb{P}^N : p \in \langle x_1 , \dots , x_r \rangle \text{ for some } x_1 , \dots , x_r \in X \}};
\]
here the closure can be taken equivalently in the Zariski or the Euclidean topology. The $r$-th symmetric product of $X$ is $X^{(r)} = X^{\times r} / \mathfrak{S}_r$, where the symmetric group $\mathfrak{S}_r$ acts by permuting the factors of $X^{\times r}$. One can verify that $X^{(r)}$ is a projective variety. The $r$-th abstract secant variety of $X$ is
\[
A\sigma_r(X) = \overline{\{ ( (x_1 , \dots , x_r), p) : p \in \langle x_1 , \dots , x_r \rangle \}} \subseteq X^{(r)} \times \mathbb{P}^N.
\]
There are two natural projections $\pi_X : A\sigma_r(X) \to X^{(r)}$, which is surjective, and $\pi_\sigma : A\sigma_r(X) \to \mathbb{P}^N$, which surjects onto the secant variety $\sigma_r(X)$. It is easy to verify that $ A\sigma_r(X)$ is irreducible and $\dim A\sigma_r(X) = r \dim X + (r-1)$; in fact, $A\sigma_r(X)$ is birational to a projective bundle over $X^{(r)}$ whose fibers are copies of $\mathbb{P}^{r-1}$. We say that $\sigma_r(X)$ is non-defective, or that $X$ is not $r$-defective, if $\dim \sigma_r(X) = \min \{ N , r \dim X + (r-1)\}$: if $\sigma_r(X) \subsetneq \mathbb{P}^N$, this is equivalent to the condition that $\pi_\sigma$ has generically finite fibers.
\begin{definition} Let $X \subseteq \mathbb{P}^N$ be an irreducible variety and let $r \geq 1$ be an integer such that $N \geq r \dim X + (r-1)$. The $r$-th Terracini locus of $X$ is
\[
\mathbb{T}_r(X) = \overline{\left\{ (x_1 , \dots , x_r) : \begin{array}{l} x_i \in X^{\mathrm{sm}} \text{ are linearly independent}, \\
T_{x_1} X , \cdots , T_{x_r} X \text{ are not linearly independent}
\end{array}
\right\}} \subseteq X^{(r)},
\]
where $X^{\mathrm{sm}}$ is the open set of the smooth points of $X$ and $T_xX$ denotes the affine tangent space to $X$ at $x$.
\end{definition}
Terracini's Lemma \cite[Lemma 1]{BCCGO:HitchhikerGuide} guarantees that if $\sigma_r(X)$ is non-defective and it does not fill the space, then $\mathbb{T}_r(X)$ is a proper subvariety of $X^{(r)}$.
The preimage $\pi_X^{-1} ( \mathbb{T}_r(X)) \subseteq A\sigma_r(X)$ is contained in the locus where the differential of $\pi_\sigma$ drops rank. If a point $q \in \mathbb{P}^N$ is in the image $\pi_\sigma \pi_X^{-1}(\mathbb{T}_r(X)) \subseteq \sigma_r(X)$ then either $\pi_\sigma^{-1}(q)$ is positive dimensional, or $q$ is a cuspidal singularity of $\sigma_r(X)$.
By definition, the Terracini locus is the set of points $\{x_1 , \dots , x_r\}$ with the property that they impose independent conditions on $\mathcal{O}_X(1)$ but the double points $\{ 2x_1 , \dots , 2x_r\}$ do not impose independent conditions on $\mathcal{O}_X(1)$. In fact, with this formulation, one can give the definition of the Terracini locus for an abstract variety and any (ample) line bundle, see \cite{BalChi:TerraciniLocus}.
An important tool in the study of sets of points in projective space is given by the Hilbert function. Let $\mathbb{C}[V] = \Sym(V^*)$ be the homogeneous coordinate ring of $\mathbb{P} V$, which is a polynomial ring in $n+1$ variables. If $Z \subseteq \mathbb{P} V$ is a subvariety with homogeneous ideal $I(Z)$, the Hilbert function of $Z$ is
\begin{align*}
h_Z : \mathbb{N} &\to \mathbb{N} \\
t &\mapsto \dim ( \mathbb{C}[V]_t / I(Z)_t).
\end{align*}
If $Z$ is a set of points, $Z = \{ [v_1] , \dots , [v_r]\}$ for $v_i \in V$, let $\mathrm{ev}_i : \mathbb{C}[V] \to \mathbb{C}$ be the evaluation map at $v_i$. For every $t$, the restriction $\mathrm{ev}_i : \mathbb{C}[V]_t \to \mathbb{C}$ is linear, and the Hilbert function of $Z$ is characterized as
\[
h_Z(t) = \mathrm{rank}( (\mathrm{ev}_1 , \dots , \mathrm{ev}_r) : \mathbb{C}[V]_t \to \mathbb{C}^r).
\]
The \emph{first difference} of the Hilbert function of $Z$ is $Dh_Z(t) = h_Z(t) - h_Z(t-1)$. It is often identified simply with the sequence of its non-zero values, which is called the h-vector of $Z$. We record three main properties of the Hilbert function and the h-vector, which will be useful in the following. We refer to \cite{Chi:HilbertFunctionsTensorAn} for details and proofs.
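For example, if $Z \subseteq \mathbb{P}^2$ is a set of four points in linear general position, then $h_Z(0) = 1$, $h_Z(1) = 3$ because the points span the plane, and $h_Z(t) = 4$ for $t \geq 2$ because the points impose independent conditions on forms of degree at least $2$; hence the h-vector of $Z$ is $Dh_Z = (1,2,1)$.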
\begin{proposition}\label{prop: basics HF}
Let $Z' \subseteq Z \subseteq \mathbb{P} V$ be finite sets of points. Then
\begin{enumerate}[(i)]
\item $Dh_{Z'}(t) \leq Dh_{Z}(t)$ for every $t$; in particular $h_{Z'}(t) \leq h_{Z}(t)$ for every $t$.
\item There is an integer $\tau \geq 0$ such that $h_Z(t)$ is strictly increasing for $t \leq \tau$ and $h_Z(t) = \ell(Z)$ for $t \geq \tau$. In particular $Dh_Z (t) > 0$ if $t\leq \tau$ and $Dh_Z(t) = 0$ if $t > \tau$.
\item If $Dh_Z(t) \leq t$, then $Dh_Z(t+1) \leq Dh_Z(t)$.
\end{enumerate}
\end{proposition}
Understanding what functions $h$ can occur as Hilbert functions of sets of points is the object of a long line of research. Macaulay characterized the \emph{maximal growth} of $h_Z$. In \cite{BigGerMig:GeometricConsequencesMacaulay}, strong consequences of Macaulay's result are given; we state here a restricted version of \cite[Theorem 3.6]{BigGerMig:GeometricConsequencesMacaulay}, extending a result from \cite{Davis:CompleteIntersections} in the case of $\mathbb{P}^2$.
\begin{theorem}\label{thm: BGM}
Let $Z \subseteq \mathbb{P} V$ be a finite set of points. If $s := Dh_Z(t_0) = Dh_Z(t_0+1) \leq t_0$ for some $t_0$ then there exists a reduced curve $C \subseteq \mathbb{P} V$ of degree $s$ such that, setting $Z' = Z \cap C$, one has
\begin{itemize}
\item $Dh_{Z'}(t) = Dh_C(t)$ for $t \leq t_0+1$;
\item $Dh_{Z'}(t) = Dh_Z(t)$ for $t \geq t_0$.
\end{itemize}
\end{theorem}
Intuitively, \autoref{thm: BGM} says that if $Dh_Z$ is constant and small on an interval, then a subset of points of $Z$ lies on a low degree curve. This is useful to deduce pathologies of certain sets of points and prove that under suitable genericity assumptions only certain Hilbert functions are possible.
Sets of points defining non-redundant decompositions satisfy particular conditions that are reflected in their Hilbert function. An important one concerns the Cayley-Bacharach property. A finite set $Z \subseteq \mathbb{P} V$ satisfies the \emph{Cayley-Bacharach property} in degree $t$, denoted $\mathit{CB}(t)$, if, for every $p \in Z$, every form of degree $t$ vanishing on $Z \setminus \{p\}$ vanishes at $p$ as well; in other words, $Z$ satisfies $\mathit{CB}(t)$ if for every $p \in Z$, $I(Z \setminus \{p\})_t = I(Z)_t$.
The role of the Cayley-Bacharach property in the study of decompositions of homogeneous polynomials is stated in the following result.
\begin{proposition}[\cite{AngChi:IdentifiabilityTernaryForms}, Proposition 2.25]\label{prop: CB for nonredundant}
Let $F \in S^d V$ and let $A,B \subseteq \mathbb{P} V$ be non-redundant disjoint decompositions of $F$. Then $A \cup B$ satisfies the Cayley-Bacharach property in degree $d$.
\end{proposition}
In particular, the failure to satisfy the Cayley-Bacharach property allows one to rule out the existence of certain decompositions. This will be achieved using the fact that the Cayley-Bacharach property poses strong conditions on the Hilbert function of a set of points. More precisely, the following holds, see \cite[Thm. 4.9]{AngChiVan:IdentifiabilityBeyondKruskal}.
\begin{theorem}\label{thm: CB implies HF}
Let $Z \subseteq \mathbb{P} V$ be a set of points satisfying $\mathit{CB}(t)$. Then, for every $s \leq t+1$
\[
Dh_Z(0) + \cdots + Dh_Z(s) \leq Dh_Z(t+1) + \cdots + Dh_Z(t+1-s).
\]
\end{theorem}
\subsection{First results}
In this section we prove some simple technical results toward the proof of \autoref{thm: main theorem}. The first immediate observation is that $n+2$ points in LGP in $\mathbb{P} V$ can be normalized via the action of $\mathrm{GL}(V)$. We record a slightly more general fact in the following result, see e.g. \cite[Ch. 10]{Harris:AlgGeo}.
\begin{lemma}\label{lemma: orbits}
Let $V$ be a vector space with $\dim V = n+1$ and let $e_0 , \dots , e_n$ be a basis of $V$. Let
\[
\Omega = \{ A \subseteq \mathbb{P} V: \ell(A) = n+2, \langle A \rangle = \mathbb{P} V\} \subseteq (\mathbb{P} V)^{(n+2)}.
\]
Then $\Omega$ is Zariski open in $(\mathbb{P} V)^{(n+2)}$; the action of $\mathrm{GL}(V)$ on $\Omega$ has exactly $n$ orbits $K_2 , \dots , K_{n+1}$. The elements of $K_r$ are subsets having Kruskal rank exactly $r$ and they are all equivalent to
\[
A_r = \{ [e_0] , \dots , [e_n] , [e_0 + \cdots + e_{r-1}]\}.
\]
Moreover, $K_r \subseteq \overline{K_{r+1}}$ and for every $r$, $\overline{K_r}$ is irreducible of dimension $n(n+1) + r-1$.
\end{lemma}
\begin{proof}
The first part of the statement is classical. If $A \in (\mathbb{P} V)^{(n+2)}$ is a set of points having Kruskal rank $r$ and such that $\langle A \rangle = \mathbb{P} V$, then we may assume $A = \{ [v_0] , \dots , [v_n], [v_{n+1}]\}$ with $v_0 , \dots , v_n$ linearly independent and $v_{n+1} \in \langle v_0 , \dots , v_{r-1}\rangle$. Since $v_0 , \dots , v_n$ are linearly independent, there is an element $g \in \mathrm{GL}(V)$ such that $g v_i = e_i$ for $i = 0 , \dots , n$; as a result $g v_{n+1} = \lambda_0 e_0 + \cdots + \lambda_{r-1} e_{r-1}$ for some $\lambda_j \in \mathbb{C} \setminus \{ 0 \}$. Define an element $h \in \mathrm{GL}(V)$ by $h e_i = \lambda_i^{-1} e_i$ for $i \leq r-1$ and $h e_i = e_i$ for $i \geq r$. We conclude that $hg A = A_r$.
The condition that $K_r \subseteq \overline{K_{r+1}}$ is clear; moreover $\overline{K_r}$ is irreducible, because it is the closure of an orbit under the action of $\mathrm{GL}(V)$. Finally, to compute the dimension of $\overline{K_r}$, consider the diagram
\[
\xymatrix{
& (\mathbb{P} V)^{ n+2} \ar[dl]_{\pi_\mathfrak{S}} \ar[dr]^{\pi_{n+2}} & \\
(\mathbb{P} V)^{(n+2)} & & (\mathbb{P} V)^{n+1}
}
\]
where $\pi_\mathfrak{S}$ is the projection onto the quotient $(\mathbb{P} V)^{(n+2)} = (\mathbb{P} V)^{n+2}/\mathfrak{S}_{n+2}$ and $\pi_{n+2}$ is the projection onto the first $n+1$ factors. The preimage $\pi_\mathfrak{S}^{-1} (K_r)$ is the union of several irreducible components, all isomorphic and all surjecting onto $K_r$; one of these components is
\[
J_r = \left \{ ([v_0] , \dots , [v_{n+1}]) \in (\mathbb{P} V)^{ n+2} :
\begin{array}{l}
\mathbb{P} V = \langle [v_0] , \dots , [v_n] \rangle \\
v_{n+1} \in \langle v_0 , \dots , v_{r-1}\rangle
\end{array}
\right\}
\]
Since $\pi_\mathfrak{S}$ is generically finite, $\dim K_r = \dim J_r$. The restriction $\pi_{n+2} : J_r \to (\mathbb{P} V)^{n+1}$ is dominant; the fiber over a generic element $([v_0] , \dots , [v_n])$ is the linear span $\mathbb{P} \langle v_0 , \dots , v_{r-1}\rangle$ which has dimension $r-1$. We conclude
\[
\dim K_r = \dim J_r = \dim \left( (\mathbb{P} V)^{n+1} \right) + (r-1) = n(n+1) + r-1.
\]
This concludes the proof.
\end{proof}
We observe that forms admitting non-redundant decompositions of length $n+2$ are necessarily concise:
\begin{lemma}\label{lemma: minimal plus one concise}
Let $F \in S^d V$ be a homogeneous form with $\dim V = n+1$. Let $A \subseteq \mathbb{P} V$ be a set of $n+2$ points with $\langle A \rangle = \mathbb{P} V$. If $A$ is a non-redundant decomposition of $F$ then $F$ is concise.
\end{lemma}
\begin{proof}
By \autoref{lemma: orbits}, we may assume $A = A_r$ for some $r$; for simplicity of notation we treat the case $r = n+1$, the other cases being analogous with $L = x_0 + \cdots + x_{r-1}$. Write $L = x_0 + \cdots + x_n$ and assume
\[
F = \alpha_0 x_0^d + \cdots + \alpha_n x_n^d + \alpha_{n+1} L^d
\]
where $\alpha_j \neq 0$ for every $j$.
Let $H$ be the space of $(d-1)$-th order partial derivatives of $F$. Up to scaling, we have
\[
\frac{\partial^{d-1}}{\partial x_0^{d-2}\partial x_1} F = \alpha_{n+1} L;
\]
hence $L \in H$. Moreover, up to scaling, $\frac{\partial^{d-1}}{\partial x_j^{d-1}} F = \alpha_j x_j + \alpha_{n+1} L$, showing $x_j \in H$ for every $j$. This shows $H = V$, therefore $F$ is concise.
\end{proof}
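To illustrate the argument in a small case, let $n = 1$, $d = 3$ and $F = x_0^3 + x_1^3 + L^3$ with $L = x_0+x_1$, corresponding to the decomposition $A_2$. Then
\[
\frac{\partial^{2}}{\partial x_0 \partial x_1} F = 6L, \qquad \frac{\partial^{2}}{\partial x_0^{2}} F = 6x_0 + 6L,
\]
so $H$ contains $L$ and $x_0$, hence also $x_1 = L - x_0$; therefore $H = V$ and $F$ is concise.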
Similarly, forms admitting non-redundant decompositions of length exactly $n+1$ are concise, and their decomposition is unique. The uniqueness of the decomposition can be obtained via an easy calculation with the Hilbert function, but it is also a consequence of the classical Kruskal criterion for tensors, \cite{Krusk:ThreeWayArrays}.
\begin{lemma}\label{cor: uniqueness for LI decomp}
Let $F \in S^d V$ be a homogeneous form, with $\dim V = n+1$ with $d \geq 3$. Let $A \subseteq \mathbb{P} V$ be a non-redundant decomposition of $F$ with $A$ linearly independent in $\mathbb{P} V$ and $\ell(A) = n+1$. Then $F$ is concise, $\mathrm{R}(F) = n+1$ and $A$ is the unique minimal decomposition of $F$.
\end{lemma}
\begin{proof}
Since $A$ is linearly independent with $\ell(A) = n+1$, we may normalize it via the action of $\mathrm{GL}(V)$ and assume $A = \{ [x_0] , \dots , [x_n]\}$. Therefore $\langle v_d(A) \rangle = \langle [x_0^d] , \dots , [x_n^d]\rangle$ and $F = x_0^d + \cdots + x_n^d$. This shows that $F$ is concise with $\mathrm{R}(F) = n+1$.
The uniqueness of the decomposition follows by Kruskal's criterion, see \cite{Krusk:ThreeWayArrays}, together with the immediate fact that if $A \subseteq \mathbb{P} V$ is a set of linearly independent elements with $\ell(A) = n+1$, then $A$ is in LGP and in particular it has maximal Kruskal rank.
\end{proof}
A stronger result holds for forms admitting two non-redundant decompositions of length $n+1$ and $n+2$, respectively.
\begin{lemma}\label{lemma: minimal plus one for fermat}
Let $F \in S^d V$ be a concise form with $d \geq 3$. Let $A \subseteq \mathbb{P} V$, $B \subseteq \mathbb{P} V$ be non-redundant decompositions of $F$ of length $n+1$ and $n+2$ respectively. Then $d = 3$, $\ell(A \cap B) \geq n-1$ and $(A \setminus B) \cup (B \setminus A)$ is collinear.
\end{lemma}
\begin{proof}
We proceed by induction on $n$. If $n = 1$, then \autoref{prop: sylvester's d+2} gives $\ell(A) + \ell(B) = 2 + 3 \geq d+2$, hence $d = 3$. Moreover, every concise $F \in S^3 V$ has at most one decomposition of length $2$ and a two-parameter family of decompositions of length $3$, and all decompositions are collinear because $\mathbb{P} V$ is a line.
Assume $n \geq 2$. First assume $A \cap B = \emptyset$. Let $Z = A \cup B$. Since $F$ is concise, we have $Dh_A = (1,n)$, which implies $Dh_Z(0) + Dh_Z(1) = n+1$; by the Cayley-Bacharach property of $Z$, we have $Dh_Z(d+1) + Dh_Z(d) \geq n+1$. This implies $Dh_Z(2) \leq 1$, which arithmetically is a contradiction by \autoref{prop: basics HF}.
Therefore $A \cap B \neq \emptyset$. Without loss of generality, assume $A = \{ [x_0] , \dots , [x_n]\}$ and $F = x_0^d + \cdots + x_n^d$. Since $A \cap B \neq \emptyset$, assume $[x_n] \in B$ and write $B = \{ [L_0] , \dots , [L_n],[x_n]\}$, so that $F = L_0^d + \cdots + L_n^d + \beta x_n^d$ for some nonzero $\beta \in \mathbb{C}$. Consider
\begin{equation}\label{eqn: two decomps F'}
F' = F - \beta x_n^d = x_0^d + \cdots + x_{n-1}^d + (1-\beta) x_n^d = L_0^d + \cdots + L_n^d.
\end{equation}
Let $B' = \{ [L_0] , \dots , [L_n]\}$; clearly $[x_n] \notin B'$. If $\beta \neq 1$, then $F'$ is concise and \eqref{eqn: two decomps F'} gives two non-redundant decompositions of $F'$ of length $n+1$. By \autoref{cor: uniqueness for LI decomp}, the two decompositions coincide, in contradiction with the fact that $[x_n] \notin B'$. Therefore $\beta = 1$ and $F'$ is non-concise. Moreover, if $B'$ were linearly independent in $\mathbb{P} V$, \autoref{cor: uniqueness for LI decomp} would imply that $F'$ is concise; hence $B'$ is not linearly independent.
Let $V' = \langle x_0 , \dots , x_{n-1} \rangle$. Notice $\langle B'\rangle = \mathbb{P} V'$ and $F'$ is concise in $S^d V'$. Let $A' = \{ [x_0] , \dots , [x_{n-1}]\}$. Now $F'$ is concise and it has two non-redundant decompositions $A',B'$ of length $n$ and $n+1$ respectively. By the inductive hypothesis, we deduce $d = 3$, $\ell(A' \cap B') \geq n-2$ and $(A' \setminus B') \cup (B' \setminus A')$ is collinear. Since $A = A' \cup \{ [x_n]\}$ and $B = B' \cup \{[x_n]\}$, we have $A \setminus B = A' \setminus B'$ and $B \setminus A = B' \setminus A'$, hence the statement holds for $A$ and $B$ as well. This concludes the proof.
\end{proof}
In the study of identifiability of forms, it is often useful to consider the case where two potential decompositions of a form are disjoint. We illustrate a general method to reduce the case of non-disjoint decompositions to the one of disjoint decompositions:
\begin{remark}\label{rmk: pass to disjoint}
Let $F \in S^d V$ and suppose $A,B \subseteq \mathbb{P} V$ give two non-redundant decompositions of $F$. Write $A = \{ [L_1] , \dots , [L_r]\}$ and $B = \{ [M_1] , \dots , [M_s]\}$. Then
\[
F = \alpha_1 L_1^d + \cdots + \alpha_r L_r^d = \beta_1 M_1^d + \cdots + \beta_s M_s^d.
\]
After possibly reordering the elements of $A$ and $B$, assume $L_i = M_i$ for $i = 1 , \dots , p$, where $ p = \ell(A\cap B)$. In this case, define
\[
F' = F - (\alpha_1 L_1^d + \cdots + \alpha_p L_p^d) = (\beta_1 - \alpha_1)M_1^d + \cdots + (\beta_p - \alpha_p)M_p^d + \beta_{p+1} M_{p+1}^d + \cdots + \beta_s M_s^d;
\]
note
\[
F' = \alpha_{p+1} L_{p+1}^d + \cdots + \alpha_r L_r^d.
\]
Let $A' = A \setminus B$; let $B' = \{ [M_i] : \alpha _i \neq \beta_i , i = 1 , \dots , p\} \cup \{[M_{p+1}] , \dots , [M_s]\}$; then $A',B'$ are two disjoint non-redundant decompositions of $F'$. Moreover, if $A$ is a minimal decomposition for $F$, then $A'$ is a minimal decomposition for $F'$.
\end{remark}
We conclude this section with a classical result for which we do not know an explicit reference in the form needed in the following. The result dates back to \cite{Sylv:PrinciplesCalculusForms}; analogous statements, in a more general setting, are discussed in \cite[Section 7.1]{BalBerChrGes:PartiallySymRkW}.
\begin{proposition}\label{prop: sylvester's d+2} Let $F$ be a concise form in $S^d V$, with $\dim V = n+1$. Let $A,B$ be distinct non-redundant decompositions of $F$ of length $r$ and $s$ respectively. Then $r+s \geq d+2n$.
\end{proposition}
\begin{proof} Let $p$ be the length of $A\cap B$. If $p=0$, set $F'=F$. If $p>0$, consider the form $F'$ with disjoint decompositions $A',B'$ of length $r',s'$ respectively, as constructed in \autoref{rmk: pass to disjoint}. Notice that $r-r'=p\geq s-s'$; specifically $s-s'$ is the number of elements of $A \cap B$ appearing with the same coefficient in the two decompositions of $F$. Observe $\dim \langle B' \rangle \geq n - (s-s')$: indeed $F$ is concise, which implies that $B$ spans $\mathbb{P} V$; since $\dim \langle B \setminus B' \rangle \leq s-s'-1$, we deduce $\dim \langle B' \rangle \geq n - (s-s')$.
Set $Z' = A'\cup B'$ and let $n' = \dim \langle Z' \rangle$. We have $n' \geq \dim \langle B' \rangle \geq n-(s-s')$, so $n-n' \leq s-s' \leq r-r'$. Since $Z'$ is the union of two disjoint decompositions of a form of degree $d$, we have $Dh_{Z'}(d+1) \geq 1$, which implies $Dh_{Z'}(t) \geq 1$ for $t \leq d+1$. Moreover, $Dh_{Z'}(1) = \dim \langle Z' \rangle = n'$. Finally, \autoref{prop: CB for nonredundant} implies that the Cayley-Bacharach property holds for $Z'$ in degree $d$, and \autoref{thm: CB implies HF} implies $Dh_{Z'}(d+1) + Dh_{Z'}(d) \geq Dh_{Z'}(0) + Dh_{Z'}(1) = 1+n'$.
We deduce $Dh_{Z'} = (1,n', \delta_2 , \dots , \delta_{d+1}, \ldots)$ with $\delta_t \geq 1$ for $t \leq d+1$ and $\delta_{d+1} + \delta_d \geq 1+n'$. We conclude $r' + s' = \ell(Z') \geq \sum_{t=0}^{d+1} \delta_t \geq 2+2n' + (d-2) = d + 2n'$. Therefore
\[
r+s=(r-r')+(s-s')+(r'+s')\geq (n-n')+(n-n')+d+2n' = d + 2n
\]
as desired.
\end{proof}
The bound of \autoref{prop: sylvester's d+2} is sharp. An example where the bound is attained can be constructed, for every $n$ and $d$, as follows. When $d=2$, general forms of maximal rank have many decompositions of length $n+1$, which attains the bound.
For higher values of $d$, write $V = V' \oplus V''$ with $\dim V' = 2$ and $\dim V'' = n-1$. Let $F' \in S^d V'$ be a binary form having two minimal disjoint decompositions $A',B'$ of length $r',s'$ with $r'+s'=d+2$; it is a classical result that such forms exist \cite{Sylv:PrinciplesCalculusForms,ComaSigu:RankBinaryForms}. Let $F'' = L_1^d+\cdots +L_{n-1}^d$ be a sum of $d$-th powers of $n-1$ generic linear forms in $V''$. Then $F = F' + F''$ is concise and it has two distinct non-redundant decompositions $A=A'\cup\{[L_1],\dots,[L_{n-1}]\}$ and $B=B'\cup \{[L_1],\dots,[L_{n-1}]\}$ with $\ell(A) = r' + (n-1)$ and $\ell(B) = s' + (n-1)$, so that $\ell(A) + \ell(B) = r'+s'+2(n-1) = d+2n$.
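For $d = 3$ an explicit instance of such a binary form is $F' = x^3+y^3$: if $\omega$ is a primitive cube root of unity, then, since $1 + \omega + \omega^2 = 0$, a direct expansion shows
\[
x^3 + y^3 = \tfrac{1}{3}\left( (x+y)^3 + (x+\omega y)^3 + (x+\omega^2 y)^3 \right),
\]
so that $A' = \{[x],[y]\}$ and $B' = \{[x+y],[x+\omega y],[x+\omega^2 y]\}$ are disjoint non-redundant decompositions with $r'+s' = 5 = d+2$.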
\section{First cases}
In this section, we characterize decompositions of length $n+2$ in $\mathbb{P}^n$ for $n =1,2$.
\subsection*{Case $n=1$} ~
Let $V$ be a vector space with $\dim V= 2$. The generic Waring rank in $\mathbb{P} S^3 V$ is $2$. In this case \autoref{thm: main theorem} is easily verified. Let $F \in S^3 V$ be a concise form. If $\mathrm{R}(F) = 2$, then $F$ has a unique decomposition of length $2$ by Kruskal's criterion, see in particular \autoref{cor: uniqueness for LI decomp}; moreover $F$ has a $2$-dimensional family of decompositions of length $3$. If $\mathrm{R}(F) = 3$, then $[F]$ is an element of $\tau(v_3(\mathbb{P}^1)) \setminus v_3(\mathbb{P}^1)$, where $\tau(v_3(\mathbb{P}^1)) = \{ [L_1^2 L_2] \in \mathbb{P} S^3 V: [L_1],[L_2] \in \mathbb{P}^1\}$; in this case $F$ has a $2$-dimensional family of decompositions of length $3$.
In both cases, the two dimensional family of decompositions can be described as follows. Let $\Lambda$ be a generic plane passing through $[F]$. Since $v_3(\mathbb{P} V)$ is a curve of degree $3$, its intersection with $\Lambda$ consists of three points counted with multiplicity. The genericity assumption guarantees that the three intersection points are distinct, $v_3(\mathbb{P} V) \cap \Lambda = \{ [L_1^3],[L_2^3], [L_3^3]\}$ for some $L_1,L_2,L_3 \in V$. Moreover, any three points on $v_3(\mathbb{P} V)$ are linearly independent, therefore $\Lambda = \langle [L_1^3],[L_2^3], [L_3^3] \rangle$. This shows that $A = \{ [L_1],[L_2],[L_3]\}$ is a decomposition of length $3$. The genericity assumption on $\Lambda$ guarantees it is non-redundant, otherwise one would obtain several decompositions of length $2$, in contradiction with \autoref{cor: uniqueness for LI decomp}.
In other words, every generic enough plane $\Lambda$ through $[F]$ determines uniquely a non-redundant decomposition of length $3$. Planes through $[F]$ form a hyperplane in $\mathbb{P} S^3V^*$, hence this is a $2$-dimensional family.
Since $\mathbb{P} V= \mathbb{P}^1$, all decompositions are defined by collinear points. In particular, both case (II) and case (III) of \autoref{thm: main theorem} trivially hold.
\medskip
\subsection*{Case $n=2$}~
Let $V$ be a vector space with $\dim V= 3$. The generic Waring rank in $\mathbb{P} S^3 V$ is $4$. Let $F \in S^3 V$ be a concise form. It is known that $3 \leq \mathrm{R}(F) \leq 5$ \cite{LanTei:RanksBorderRanksSymTensors} and we are interested in the dense subset of forms satisfying $\mathrm{R}(F) \leq 4$. Notice $\dim A\sigma_4(v_3(\mathbb{P}^2)) = 4 \cdot 2 + 3 = 11$, while $\dim \mathbb{P} S^3 V = 9$. Therefore the projection map $A\sigma_4(v_3(\mathbb{P}^2)) \to \mathbb{P} S^3 V$ has generic fibers of dimension $2$. This guarantees that if $\mathrm{R}(F) \leq 4$, then $F$ has at least a $2$-dimensional family of decompositions of length $4$.
In particular, case (III) of \autoref{thm: main theorem} trivially holds. We also provide a geometric characterization of the decompositions.
\begin{theorem}
Let $F \in S^3 V$ be a concise form with $\dim V = 3$. Let $A,B$ be non-redundant decompositions of $F$, with $\ell(A) = \ell(B) = 4$. Then one of the following holds:
\begin{itemize}
\item $A \cap B = \emptyset$ and there is a conic passing through $A \cup B$; if $A$ has three collinear points, so does $B$.
\item $\ell(A \cap B) \geq 1$ and all non-redundant decompositions of $F$ of length $4$ have three collinear points, on a line $\Lambda$.
\end{itemize}
\end{theorem}
\begin{proof}
The statement follows from \autoref{prop: unique conic} and \autoref{prop: three collinear on plane} below. \autoref{prop: unique conic} analyzes the case where $A$ and $B$ are disjoint and \autoref{prop: three collinear on plane} the one where they are not.
\end{proof}
\begin{proposition}\label{prop: unique conic}
Let $F \in S^3 V$ be a concise form with $\dim V = 3$. Let $A,B$ be non-redundant decompositions of $F$, with $\ell(A) = \ell(B) = 4$. If $A$ and $B$ are disjoint then there is a unique conic $C$ vanishing on $A \cup B$. Moreover, if $\Lambda$ is a line with $\ell(A \cap \Lambda) \geq 3$, then $C = \Lambda \cup \Lambda'$ for a line $\Lambda'$ and $\ell(B \cap \Lambda') =3$.
\end{proposition}
\begin{proof}
Since $F$ is concise, there are no linear forms vanishing on $A$. Therefore the h-vector of $A$ is $Dh_A = (1,2,1)$ and there is a pencil of conics vanishing on $A$.
Let $Z = A \cup B$; since $A \cap B =\emptyset$, $\ell(Z) = 8$. Since $[F] \in \linspan {v_3(A)}\cap \linspan {v_3(B)}$, we have that $v_3(Z)$ is not linearly independent, namely $\dim \linspan{v_3(Z)} < \ell(Z) - 1 = 7$. In particular, $Dh_Z(4)>0$. Since $Dh_A = (1,2,1)$, we have $Dh_Z(0) + Dh_Z(1) \geq 3$. Since $A,B$ are non-redundant, they satisfy the Cayley-Bacharach property in degree $3$ by \autoref{prop: CB for nonredundant}: therefore \autoref{thm: CB implies HF} provides $Dh_Z(3)+Dh_Z(4)\geq 3$, so $Dh_Z(2) \leq 2$. If $Dh_Z(2) \leq 1$, by \autoref{prop: basics HF}, we have $Dh_Z(i) \leq 1$ for $i\geq 2$, which provides a contradiction; therefore $Dh_Z(2) = 2$. We obtain $Dh_Z = (1,2,2,2,1)$. In particular, $h_Z(2) = 1+2+2 = 5$, which implies that there is a unique conic vanishing on $Z$.
Now suppose $A$ contains three collinear points, lying on a line $\Lambda \subseteq \mathbb{P} V$. Since $A \subseteq C$, we deduce that the conic $C$ passing through $Z$ is reducible with $C = \Lambda \cup \Lambda'$ for some other line $\Lambda' \subseteq \mathbb{P} V$. We are going to show that $B \cap \Lambda'$ consists of three points. Suppose by contradiction $B \cap \Lambda'$ contains fewer than three points; since $B \subseteq C$, we deduce that $B \cap \Lambda$ contains at least two points. Notice $h_Z(3) = 1+2+2 +2 = 7 = \ell(Z)-1$: this guarantees $ \dim ( \linspan {v_3(A)} \cap \linspan{v_3(B)}) = 0$ therefore the two linear spans only intersect at $[F]$. However,
\[
\ell(\Lambda \cap Z) = \ell(\Lambda \cap A) + \ell(\Lambda \cap B) \geq 3+2 = 5;
\]
therefore $v_3(\Lambda \cap Z)$ is linearly dependent. This implies $ \linspan {v_3(A \cap \Lambda )} \cap \linspan{v_3(B \cap \Lambda)} \neq \emptyset$, in contradiction with the fact that the two spans only intersect at $[F]$, which is not an element of $\linspan{v_3(\Lambda)}$ because $F$ is concise. This shows that $B \cap \Lambda'$ contains three points.
\end{proof}
The case where $A$ and $B$ are not disjoint is characterized in the following result.
\begin{proposition}\label{prop: three collinear on plane}
Let $F \in S^3 V$ be a concise form with $\dim V = 3$. Let $A,B$ be non-redundant decompositions of $F$, with $\ell(A) = \ell(B) = 4$. If $\ell(A \cap B) \geq 1$, then there is a line $\Lambda \subseteq \mathbb{P} V$ such that $A \setminus \Lambda = B \setminus \Lambda$ consists of exactly one point. In particular, both $A$ and $B$ have three collinear points on $\Lambda$.
\end{proposition}
\begin{proof}
Let $p = \ell(A \cap B)$. Let $F'$, $A',B'$ be constructed from $F,A,B$ as in \autoref{rmk: pass to disjoint}. Then $A'$ is a non-redundant decomposition of $F'$ of length $4-p \leq 3$ and $B'$ is a non-redundant decomposition of $F'$ of length at most $4$.
Suppose $A'$ is not contained in a line. In particular, $\ell(A') =3$, the set $A'$ is linearly independent and $F'$ is concise by \autoref{cor: uniqueness for LI decomp}; moreover $A'$ is the unique decomposition of $F'$ of length $3$. If $\ell(B') \leq 3$, then $\ell(A')+\ell(B') \leq 6 < 7$, contradicting \autoref{prop: sylvester's d+2}; if $\ell(B') = 4$, then \autoref{lemma: minimal plus one for fermat} gives $\ell(A' \cap B') \geq 1$, contradicting the fact that $A'$ and $B'$ are disjoint.
Therefore there exists $V' \subseteq V$, with $\dim V' = 2$ and $A' \subseteq \Lambda = \mathbb{P} V'$. In particular $F' \in S^3 V'$ is not concise in $S^3 V$. If $F'$ is not concise in $S^3 V'$, then $F' = L^3$ for some $L \in V'$; in this case any non-redundant decomposition $B''$ of $F'$ other than $\{[L]\}$ must have length at least $4$: otherwise $\{[L]\} \cup B''$ would be a set of at most $4$ points satisfying $\mathit{CB}(3)$ by \autoref{prop: CB for nonredundant}, which is impossible by \autoref{thm: CB implies HF}. On the other hand $\ell(B') \leq 3$, because if $\ell(B') = 4$, then $B = B'$, and $F'$ would be concise by \autoref{lemma: minimal plus one concise}. Therefore the existence of $B'$, or of $A'$ in case $B' = \{[L]\}$, provides a contradiction. This guarantees $F'$ is concise in $S^3 V'$.
To conclude, we analyze two possibilities. If $\ell(A') = 2$, then $A'$ is the unique decomposition of $F'$ of length $2$ and $\ell(B') = 3$. In this case $\ell(A \cap B) = 2$ and $A \cap B$ consists of one point on $\Lambda$ and one not on $\Lambda$, because $B'$ contains three of the four points of $B$ and $B' \subseteq \Lambda$. In particular $A \setminus \Lambda = B \setminus \Lambda = \{ [L]\}$ with $F = F' + L^3$. If $\ell(A') = \ell(B') = 3$, then $A \cap B = A \setminus \Lambda = B \setminus \Lambda = \{ [L]\}$ with $F = F' + L^3$.
\end{proof}
\section{The general case}
In this section, we give the proof of \autoref{thm: main theorem}. This will follow from an induction argument, where the cases $n=1,2,3$ give the base of the induction. We provide first a result which analyzes the case $n=3$.
If $\dim V= 4$, the generic Waring rank in $\mathbb{P} S^3 V$ is $5$. Sylvester's Pentahedral Theorem \cite{Sylv:PrinciplesCalculusForms,Cleb:TheorieFlachen} guarantees that a generic form in $S^3 V$ has a unique decomposition. We first study the situation of two disjoint decompositions.
\begin{proposition}\label{prop: surfaces on union of two lines}
Let $\dim V = 4$. Let $F \in S^3 V$ be a concise form. Let $A,B \subseteq \mathbb{P} V$ be two disjoint non-redundant decompositions of $F$ with $\ell(A) = \ell(B) = 5$. Then $A \cup B$ is contained in the union of two lines.
\end{proposition}
\begin{proof}
Since $A \cap B =\emptyset$, the union $Z=A\cup B$ is a set of length $10$. Since $F$ is concise, the decomposition $A$ is not contained in a hyperplane; therefore $Dh_A(0) = 1$, $Dh_A(1) = 3$. Since $A,B$ are non-redundant, by \autoref{prop: CB for nonredundant} $Z$ satisfies the Cayley-Bacharach property in degree $3$. Therefore $Dh_Z(4) + Dh_Z(3) \geq Dh_Z(0) + Dh_Z(1) \geq 4$. We deduce $Dh_Z(2)\leq 2$, which implies $Dh_Z(3), Dh_Z(4) \leq 2$ by \autoref{prop: basics HF}(iii). This guarantees $Dh_Z = (1,3,2,2,2)$.
We apply \autoref{thm: BGM}: we have $Dh_Z(3) = Dh_Z(4) = 2 \leq 2$. Therefore there is a curve $C$ of degree $2$ such that, setting $Z' = Z \cap C$, one has $Dh_{Z'}(t) = Dh_{C} (t)$ for $t \leq 4$. Since $\deg(C) = 2$, $C$ is either a plane conic or a union of two lines. If $C$ is a plane conic, we have $Dh_{Z'} = (1,2,2,2,2,\delta_5)$, with $\delta_{5} \in \{0,1\}$; in this case $Z' = Z \cap C$ contains at least $9$ points, so either $A \subseteq C$ or $B \subseteq C$, in contradiction with the conciseness of $F$, which implies $\langle A \rangle = \langle B \rangle = \mathbb{P} V$. Therefore $C$ is a union of two lines, and we have $Dh_{Z'} = (1,3,2,2,2)$; this guarantees $Z = Z' \subseteq C$, as desired.
\end{proof}
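The numerical step in the proof above, namely that the stated constraints force $Dh_Z = (1,3,2,2,2)$, can be double-checked by exhaustive search. The following sketch (ours, purely illustrative; the function name \texttt{admissible} is not from the paper) encodes exactly the constraints used in the proof: $Dh_Z(0)=1$, $Dh_Z(1)=3$, total degree $10$, the Cayley--Bacharach inequality $Dh_Z(3)+Dh_Z(4)\geq 4$, and the property that $Dh_Z$ is non-increasing once $Dh_Z(t)\leq t$.

```python
from itertools import product

# Known initial values and total degree from the proof.
DH0, DH1, LENGTH = 1, 3, 10

def admissible(a, b, c):
    """Check (Dh(2), Dh(3), Dh(4)) = (a, b, c) against the proof's constraints."""
    if DH0 + DH1 + a + b + c != LENGTH:      # the entries of Dh_Z sum to deg(Z) = 10
        return False
    if b + c < DH0 + DH1:                    # Cayley-Bacharach: Dh(3) + Dh(4) >= 4
        return False
    # Decay: once Dh_Z(t) <= t, the function is non-increasing afterwards.
    if a <= 2 and b > a:
        return False
    if b <= 3 and c > b:
        return False
    return True

solutions = [(a, b, c) for a, b, c in product(range(LENGTH + 1), repeat=3)
             if admissible(a, b, c)]
print(solutions)  # [(2, 2, 2)]
```

Running it prints the single triple $(2,2,2)$, that is, $Dh_Z = (1,3,2,2,2)$.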
On the other hand, when $n \geq 4$, two distinct decompositions always intersect, as shown in the next result.
\begin{proposition} \label{prop: A and B must intersect}
Let $n \geq 4$ and let $V$ be a vector space with $\dim V = n+1$. Let $F \in S^3 V$ be a concise form. Let $A$ be a non-redundant decomposition of $F$ of length $n+2$ and let $B$ be a non-redundant decomposition of $F$ of length $s \leq n+2$. Then $A \cap B \neq \emptyset$.
\end{proposition}
\begin{proof}
Assume $A \cap B=\emptyset$, so that $Z=A\cup B$ is a set of length at most $2n+4$. Since $F$ is concise, $A$ is not contained in a hyperplane; therefore $Dh_A(0) = 1$, $Dh_A(1) = n$. Since $A,B$ are non-redundant, $Z$ satisfies the Cayley-Bacharach property; therefore $Dh_Z(4) + Dh_Z(3) \geq Dh_Z(0) + Dh_Z(1) \geq n+1$. We deduce $Dh_Z(2)\leq 2$, which implies $Dh_Z(3), Dh_Z(4) \leq 2$; hence $Dh_Z(3) + Dh_Z(4) \leq 4 < n+1$, a contradiction for $n \geq 4$.
\end{proof}
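The same exhaustive search, now with $n$ as a parameter, illustrates why no admissible Hilbert function exists for $n \geq 4$. The sketch below is ours and purely illustrative (the function name \texttt{compatible\_triples} is not from the paper); it encodes the constraints of the proof: $Dh_Z(0)=1$, $Dh_Z(1)=n$, total degree at most $2n+4$, the Cayley--Bacharach inequality $Dh_Z(3)+Dh_Z(4) \geq n+1$, and the same decay property as before.

```python
from itertools import product

def compatible_triples(n):
    """Triples (Dh(2), Dh(3), Dh(4)) meeting the proof's constraints for a given n."""
    bound = 2 * n + 4                      # ell(Z) = ell(A) + ell(B) <= 2n + 4
    out = []
    for a, b, c in product(range(bound + 1), repeat=3):
        if 1 + n + a + b + c > bound:      # partial sums of Dh_Z cannot exceed deg(Z)
            continue
        if b + c < n + 1:                  # Cayley-Bacharach inequality
            continue
        if a <= 2 and b > a:               # decay once Dh_Z(t) <= t
            continue
        if b <= 3 and c > b:
            continue
        out.append((a, b, c))
    return out

for n in range(4, 15):
    assert compatible_triples(n) == []     # no admissible Hilbert function: contradiction
```

For $n = 3$ the search instead returns the single triple $(2,2,2)$, matching the Hilbert function $(1,3,2,2,2)$ appearing in the proof of \autoref{prop: surfaces on union of two lines}.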
The proof of \autoref{thm: main theorem} will follow because in the setting of \autoref{prop: A and B must intersect} one can guarantee that the intersection between two decompositions must be surprisingly large, similarly to \autoref{lemma: minimal plus one for fermat}.
\begin{proposition} \label{prop: main prop}
Let $V$ be a vector space with $\dim V = n+1$. Let $F \in S^3 V$ be a concise form. Let $A$ be a non-redundant decomposition of $F$ of length $n+2$ and let $B$ be a non-redundant decomposition of $F$ of length $s \leq n+2$. Then one of the following holds:
\begin{itemize}
\item $\ell(A\cap B) \geq n-2$, and the points of $(A \setminus B) \cup (B \setminus A)$ are contained in the union of two planes;
\item $\ell(A \cap B) \geq n-3$ and the points of $(A \setminus B) \cup (B \setminus A)$ are contained in the union of two lines.
\end{itemize}
\end{proposition}
\begin{proof}
Since $F$ is concise, any non-redundant decomposition of $F$ has length at least $n+1$, so $s \geq n+1$. If $s = n+1$, then the statement follows immediately from \autoref{lemma: minimal plus one for fermat}. Therefore assume $s = n+2$.
Let $A = \{ [L_1 ] , \dots , [L_{n+2}] \}$, $B = \{ [M_1 ] , \dots , [M_{n+2}] \}$. We proceed by induction on $n$. The cases $n=1$ and $n=2$ are clearly verified. So assume $n \geq 3$.
If $ n =3$ and $A \cap B = \emptyset$, then by \autoref{prop: surfaces on union of two lines} condition (ii) is verified. If $ n >3$, then $A \cap B \neq \emptyset$ by \autoref{prop: A and B must intersect}. Therefore assume $A \cap B \neq \emptyset$ with $[L_{n+2}] = [M_{n+2}] \in A\cap B$.
Write, as usual,
\begin{align*}
F &= \alpha_1 L_1^3 + \cdots + \alpha_{n+1} L_{n+1}^3 + \alpha_{n+2} L_{n+2}^3 = \\
&= \beta_1 M_1^3 + \cdots + \beta_{n+1} M_{n+1}^3 + \beta_{n+2} L_{n+2}^3.
\end{align*}
Let $F' = F - \beta_{n+2} L_{n+2}^3$. Then $F'$ has a decomposition $B' = B \setminus \{ [L_{n+2}]\}$ of length $n+1$.
If $B'$ is linearly independent, by \autoref{cor: uniqueness for LI decomp}, we deduce that $F'$ is concise and $B'$ is its unique decomposition of length $n+1$. If $\alpha_{n+2} \neq \beta_{n+2}$, then $A$ defines a non-redundant decomposition of $F'$ of length $n+2$; in this case \autoref{lemma: minimal plus one for fermat} applies, and we deduce $\ell(A \cap B') \geq n-1$ and $A \setminus B'$ is collinear. Since $A \setminus B' = A \setminus B$, condition (ii) holds. If $\alpha_{n+2} = \beta_{n+2}$, then $A' = A \setminus \{[L_{n+2}]\}$ is a decomposition of $F'$ of length $n+1$; by Kruskal's criterion \autoref{cor: uniqueness for LI decomp}, we conclude $A' = B'$ which implies $A = B$ and the statement follows.
Now assume $B'$ is not linearly independent, which implies that $F'$ is not concise. Let $V' \subseteq V$ be the subspace in which $F'$ is concise. Since $F$ is concise, we have $\dim V' = \dim V - 1 = n$, and $\langle B' \rangle = V'$. Now, if $\alpha_{n+2} \neq \beta_{n+2}$, $A$ defines a non-redundant decomposition of $F'$ of length $n+2$. Moreover, $\langle A \rangle = V$. By \autoref{lemma: minimal plus one concise}, this would imply that $F'$ is concise, which is a contradiction. Therefore $\alpha_{n+2} = \beta_{n+2}$.
We have reduced to the following setting. The form $F'$ is concise in $S^3 V'$, and $A',B'$ are two non-redundant decompositions of $F'$ of length $n+1$. Therefore the induction hypothesis applies. We deduce that condition (i) or condition (ii) holds for $A'$ and $B'$. Since $A = A' \cup\{ [L_{n+2}]\}$ and $B = B' \cup\{ [L_{n+2}]\}$, condition (i) or condition (ii) holds for $A$ and $B$ as well.
\end{proof}
The proof of \autoref{thm: main theorem} is obtained directly from the results of this section.
\begin{proof}[Proof of \autoref{thm: main theorem}]
If $F$ has a unique decomposition $A$ of length $n+2$, then the Kruskal rank of $A$ must be at least $4$. Indeed, let $A' \subseteq A$ be a minimal linearly dependent set, so that the Kruskal rank of $A$ equals $\ell(A')-1$. Let $A = A' \cup A''$: $A''$ is linearly independent and $\langle v_3(A') \rangle \cap \langle v_3(A'') \rangle = \emptyset$. Correspondingly $F = F'+F''$, with $A'$ non-redundant decomposition of $F'$ and $A''$ non-redundant decomposition of $F''$. If $\ell(A') \leq 4$, then $F'$ has many decompositions of length $\ell(A')$, by the discussion on the case $n=2$. This implies that $A$ is not the unique decomposition of $F$, in contradiction with the assumption.
If $F$ does not have a unique decomposition, then either case (II) or case (III) occurs, by \autoref{prop: main prop}.
\end{proof}
We record two easy consequences of \autoref{thm: main theorem}.
\begin{corollary}
Let $F \in S^3 V$ be a concise form admitting at least two distinct non-redundant decompositions of length $n+2$. Then the Kruskal rank of any non-redundant decomposition of length $n+2$ is at most $3$. Moreover, the family of decompositions of $F$ of length $n+2$ has dimension at least $2$.
\end{corollary}
\begin{proof}
Since $F$ has at least two non-redundant decompositions of length $n+2$, either case (II) or case (III) of \autoref{thm: main theorem} is verified. This guarantees every non-redundant decomposition of $F$ of length $n+2$ has Kruskal rank at most $3$.
Write $A = \{ [L_0] , \dots , [L_{n+1}]\}$ for a non-redundant decomposition of length $n+2$ and assume $[L_0] , \dots , [L_3]$ are linearly dependent. Write $F = L_0^3 + \cdots + L_{n+1}^3$ and define $F' = L_0^3 + \cdots + L_3^3$. Then $F' \in S^3 V'$ with $\dim V' = 3$ and it has a non-redundant decomposition of length $4$. From the discussion on the case $n=2$, we deduce that $F'$ has a $2$-dimensional family of decompositions of length $4$. For every such decomposition $B'$, define $B = B' \cup \{ [L_4] , \dots , [L_{n+1}]\}$. Then $B$ is a non-redundant decomposition of $F$, showing that the family of decompositions of $F$ has dimension at least $2$.
\end{proof}
As a byproduct of these results, we completely characterize the genericity condition of Sylvester's Pentahedral Theorem.
\begin{theorem}
Let $\dim V = 4$ and let $F \in S^3 V$ be a concise form. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $F$ has at least two decompositions as sum of five powers of linear forms;
\item $F$ has a $2$-dimensional family of decompositions as sum of five powers of linear forms;
\item $F = F' + L^3$ for $F' \in S^3 V'$ where $V' \subseteq V$ is a subspace of dimension $3$ and $L$ is a linear form with $L \notin V'$.
\end{enumerate}
\end{theorem}
\begin{proof}
\underline{$(i)\Rightarrow (iii)$.} Since the decomposition is not unique, either condition (I) or condition (II) of \autoref{thm: main theorem} holds. If condition (II) holds, then $F = F' + L^3$ as desired. If condition (I) holds, then $F = F'' + L_1^3 + L_2^3$ where $F'' \in S^3 V''$ for a subspace $V'' \subseteq V$ of dimension $2$; setting $F' = F'' + L_1^3$ we obtain the desired expression.
\underline{$(iii) \Rightarrow (ii)$} Let $F = F' + L^3$. Every non-redundant decomposition of $F'$ provides a non-redundant decomposition of $F$. Since $\dim V'=3$, $F'$ has at least a $2$-dimensional family of decompositions.
\underline{$(ii) \Rightarrow (i)$} This is trivially satisfied.
\end{proof}
\begin{corollary}
Let $\dim V = 4$. A form $F \in S^3V$ has a unique decomposition as sum of five powers of linear forms if and only if
\[
[ F ] \notin \mathbf{J}( \mathrm{Sub}_3 , v_3(\mathbb{P} V))
\]
where $\mathrm{Sub}_3 $ denotes the variety of non-concise forms and $\mathbf{J}(-,-)$ denotes the geometric join of two varieties.
\end{corollary}
The variety $\mathrm{Sub}_3$ is called \emph{subspace variety} in \cite[Ch. 3]{Lan:TensorBook} and it is defined by the vanishing of $4 \times 4$ minors of the first partial derivative map, see, e.g., \cite{Car:ReducingNumberVariables}. We refer to \cite[Ch. 8]{Harris:AlgGeo} for the definition and the basic properties of the join of two varieties.
\section{Characterization of the Terracini locus}\label{sec:terraciniloci}
In this section, we characterize a subvariety of the $(n+2)$-th Terracini locus of the third Veronese embedding of $\mathbb{P}^n$. Define the \emph{$r$-th concise Terracini locus} to be the subvariety of the Terracini locus arising from points in $\mathbb{P}^n$ whose span is the entire space. More precisely, for a vector space $V$ of dimension $n+1$, define
\[
\mathbb{T}^{\mathrm{con}}_{r}(v_3(\mathbb{P} V)) = \overline{\mathbb{T}_{r}(v_3(\mathbb{P} V)) \cap \{ v_3(A) : A \in \mathbb{P} V^{(r)} \text{ and } \langle A \rangle = \mathbb{P} V \}}.
\]
It is clear that if $ \mathbb{T}^{\mathrm{con}}_{r}(v_3(\mathbb{P} V)) $ is non-empty, then it is a union of irreducible components of $\mathbb{T}_r(v_3(\mathbb{P} V))$. \autoref{thm: main theorem} allows us to describe $ \mathbb{T}^{\mathrm{con}}_{n+2}(v_3(\mathbb{P} V))$ as the closure of the set $v_3(K_3) \subseteq \mathbb{P} V^{(n+2)}$, where $K_3$ is the orbit, for the action of $\mathrm{GL}(V)$, described in \autoref{lemma: orbits}. In particular, this guarantees that it is irreducible.
\begin{theorem}\label{thm: terracini locus characterization}
The concise Terracini locus $\mathbb{T}^{\mathrm{con}}_{n+2}(v_3(\mathbb{P} V))$ is the closure of
\begin{equation}\label{eqn: concise terracini}
v_3(K_3) = \{ v_3(A) \in v_3(\mathbb{P} V) ^{(n+2)}: A \text{ concise with four coplanar points}\}.
\end{equation}
In particular, $\mathbb{T}^{\mathrm{con}}_{n+2}(v_3(\mathbb{P} V))$ is irreducible of dimension $n(n+1) + 2$.
\end{theorem}
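Before the proof, we note that the dimension in \autoref{thm: terracini locus characterization} agrees with a naive parameter count (a heuristic consistency check on our part, not part of the proof): a configuration in $K_3$ consists of $n+2$ points of $\mathbb{P}^n$, one of which is constrained to lie in the plane spanned by three of the others, a condition of codimension $n-2$. This gives
\[
(n+2)\,n - (n-2) = n^2 + n + 2 = n(n+1) + 2 .
\]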
The proof of \autoref{thm: terracini locus characterization} is based on \autoref{lemma: orbits} and two technical results presented below, \autoref{noint} and \autoref{noint2}.
\begin{lemma}\label{noint}
Let $d \geq 3$. Let $\mathbb{P} V_1 ,\mathbb{P} V_2 \subseteq \mathbb{P} V$ be disjoint linear spaces. For $i = 1,2$, let
\[
H_i = \linspan {T_{[M^d]} v_d(\mathbb{P} V) : M \in V_i }.
\]
Then $\mathbb{P} H_1 \cap \mathbb{P} H_2 = \emptyset$.
\end{lemma}
\begin{proof} We have $T_{[M^d]} v_d(\mathbb{P} V) = V \cdot M^{d-1}$. Therefore
\[
H_i = \linspan {T_{[M^d]} v_d(\mathbb{P} V) : M \in V_i } = V \cdot S^{d-1} V_i;
\]
since $V_1 \cap V_2 = 0$ and $d \geq 3$, every monomial appearing in an element of $V \cdot S^{d-1} V_1$ has degree at least $d-1 \geq 2$ in the variables from $V_1$, while every element of $V \cdot S^{d-1} V_2$ has degree at most $1$ in them; hence $H_1 \cap H_2 = 0$ and $\mathbb{P} H_1 \cap \mathbb{P} H_2 = \emptyset$.
\end{proof}
The next result is essentially contained in \cite[Lemma 5.10]{ChiCilib}.
\begin{lemma}\label{noint2} Let $V' \subseteq V$ be a proper subspace of $V$ and let $L_1,\dots, L_k \in V'$. If the tangent spaces to $v_d(\mathbb{P} V)$ at $[L_1^d] , \dots , [L_k^d]$ are linearly dependent, so are the tangent spaces to $v_d(\mathbb{P} V')$ at $[L_1^d] , \dots , [L_k^d]$.
\end{lemma}
\begin{proof} If $[L_1^{d-1}] , \dots , [L_k^{d-1}]$ are linearly dependent then the statement is clearly satisfied. Therefore suppose they are independent. If $T_{[L_1^d]} v_d(\mathbb{P} V) , \dots , T_{[L_k^d]} v_d(\mathbb{P} V)$ are linearly dependent, then there exist $M_1 , \dots , M_k \in V$, not all zero, such that
\[
M_1 L_1^{d-1} + \cdots + M_k L_k^{d-1} = 0.
\]
In fact, we show $M_j \in V'$ for every $j$. To see this, let $V''$ be a complement to $V'$, so that $V = V' \oplus V''$; write $M_j = M_j' + M_j''$ for some $M_j' \in V'$, $M_j'' \in V''$. We have
\begin{align*}
& M_1 L_1^{d-1} + \cdots + M_k L_k^{d-1} = \\ & (M_1' L_1^{d-1} + \cdots + M_k' L_k^{d-1}) + (M_1'' L_1^{d-1} + \cdots + M_k'' L_k^{d-1}) = 0.
\end{align*}
Since $L_j \in V'$, the two summands in the expression above are elements of $S^{d} V'$ and $V'' \cdot S^{d-1} V'$ respectively; since these two subspaces intersect trivially, each summand must vanish. Since $V' \cap V'' = 0$, the elements of $V'' \cdot S^{d-1} V'$ can be regarded as elements of $V'' \otimes S^{d-1} V' \subseteq S^d V$; in particular, we have $M_1'' \otimes L_1^{d-1} + \cdots + M_k'' \otimes L_k^{d-1} = 0$. If some $M_j''$ is nonzero, we deduce that $L_1^{d-1} , \dots , L_k^{d-1}$ are linearly dependent, which contradicts the initial assumption. Therefore $M_j'' = 0$ for every $j$ and $M_j = M_j' \in V'$. Hence the relation $M_1 L_1^{d-1} + \cdots + M_k L_k^{d-1} = 0$ holds inside $S^d V'$, showing that the tangent spaces $T_{[L_1^d]} v_d(\mathbb{P} V') , \dots , T_{[L_k^d]} v_d(\mathbb{P} V')$ are linearly dependent.
\end{proof}
We can now provide a complete proof of \autoref{thm: terracini locus characterization}.
\begin{proof}[Proof of \autoref{thm: terracini locus characterization}]
Let $\Omega \subseteq \mathbb{P} V^{(n+2)}$ be the subset of sets of points spanning the entire $\mathbb{P} V$. By definition of concise Terracini locus, $v_3(\Omega) \cap \mathbb{T}^{\mathrm{con}}_{n+2}(v_3(\mathbb{P} V))$ is Zariski open in $\mathbb{T}^{\mathrm{con}}_{n+2}(v_3(\mathbb{P} V))$ and non-empty if $\mathbb{T}^{\mathrm{con}}_{n+2}(v_3(\mathbb{P} V))$ is non-empty. We prove that $v_3(\Omega) \cap \mathbb{T}^{\mathrm{con}}_{n+2}(v_3(\mathbb{P} V)) = v_3(K_3)$.
Let $A \in \Omega$ and write $A = \{ [L_1] , \dots , [L_{n+2}]\}$. Let $r$ be the Kruskal rank of $A$: by \autoref{lemma: orbits}, $A \in K_r$ and it can be normalized as $A = \{[x_0] , \dots , [x_n],[x_0 + \cdots + x_{r-1}]\}$. Write $T_j = T_{[x_j]}v_3(\mathbb{P} V)$, $T_+ = T_{[x_0 + \cdots + x_{r-1}]} v_3(\mathbb{P} V)$; let $V' = \langle x_0 , \dots , x_{r-1}\rangle$ and let $T_j' = T_{[x_j]}v_3(\mathbb{P} V')$, $T_+' = T_{[x_0 + \cdots + x_{r-1}]} v_3(\mathbb{P} V')$.
If $v_3(A)$ is an element of the Terracini locus, then $T_0 , \dots , T_n , T_+$ are linearly dependent. By \autoref{noint}, the same must hold for the tangent spaces $T_0 , \dots , T_{r-1}, T_+$. Further, by \autoref{noint2}, the same must hold for $T'_0 , \dots , T'_{r-1},T_+'$. In particular, it is enough to consider $A' \subseteq A$ defined by $A' = \{ [x_0] , \dots , [x_{r-1}], [x_0 + \cdots +x_{r-1}]\} \in (\mathbb{P} V')^{(r+1)}$.
We deduce that $v_3(A')$ belongs to the concise Terracini locus $\mathbb{T}^{\mathrm{con}}_{r+1}(v_3(\mathbb{P} V'))$. By the action of $\mathrm{GL}(V')$, the same holds for every set of $r+1$ points in linear general position in $\mathbb{P} V'$. Passing to the closure, we obtain that $v_3(\mathbb{P} V')^{(r+1)}$ is entirely contained in the Terracini locus. By Terracini's Lemma, we deduce that all fibers of $\pi_\sigma : A\sigma_{r+1}(v_3(\mathbb{P} V')) \to \mathbb{P} S^3 V'$ have positive dimension. \autoref{thm: main theorem} guarantees that this is the case only if $r \leq 3$. This shows that $A \in K_3$, or equivalently that it has four coplanar points. We obtained $v_3(\Omega) \cap \mathbb{T}^{\mathrm{con}}_{n+2}(v_3(\mathbb{P} V)) \subseteq v_3(K_3)$.
The reverse inclusion holds because of \autoref{thm: main theorem} and this concludes the proof of the equality in \eqref{eqn: concise terracini}.
Finally, by \autoref{lemma: orbits}, we conclude that $\mathbb{T}^{\mathrm{con}}_{n+2}(v_3(\mathbb{P}^n)) = \overline{v_3(K_3)}$ is irreducible, and of dimension $n(n+1) + 2$.
\end{proof}
\bibliographystyle{alphaurl}
% arXiv:2302.03715: Decompositions and Terracini loci of cubic forms of low rank
% arXiv:1010.4101: Boundary-twisted normal form and the number of elementary moves to unknot
% Abstract: Suppose $K$ is an unknot lying in the 1-skeleton of a triangulated 3-manifold with $t$ tetrahedra. Hass and Lagarias showed there is an upper bound, depending only on $t$, for the minimal number of elementary moves to untangle $K$. We give a simpler proof, utilizing a normal form for surfaces whose boundary is contained in the 1-skeleton of a triangulated 3-manifold. We also obtain a significantly better upper bound of $2^{120t+14}$ and improve the Hass--Lagarias upper bound on the number of Reidemeister moves needed to unknot to $2^{10^5 n}$, where $n$ is the crossing number.

\section{Introduction}
Suppose $M$ is a triangulated, compact 3-manifold with $t$ tetrahedra and $K$ is an unknot in the 1-skeleton. Recall that $K$ can be isotoped in $M$ using polygonal moves across triangles called \emph{elementary moves}. J. Hass and J. Lagarias obtained an upper bound of $2^{10^7t}$ on the minimum number of elementary moves to take $K$ to a triangle in one tetrahedron \cite{hass-lagarias2001}.
The central idea of their proof is to use normal surface theory. Take a double barycentric subdivision of $M$, and let $N(K)$ denote the simplicial neighborhood of $K$. Since $K$ is the unknot, a homotopically nontrivial curve $l$ on $\partial N(K)$ bounds a normal disc $D$ in $M - \operatorname{int}(N(K))$. Normal surface theory gives an exponential upper bound in $t$ on the number of triangles in a minimal such $D$ \cite{hass-lagarias-pippenger1999}. Naively we might think to move $K$ across $D$ and obtain the bound that way, but this overlooks that $K$ must first be moved to $l = \partial D$. Thus Hass and Lagarias break up the complete isotopy of $K$ to a triangle into three parts: first isotope $K$ to $\partial N(K)$, then across $\partial N(K)$ to $l$, and finally across $D$. The bound on $D$ gives a similar bound for the number of elementary moves to isotope across $D$. The bounds for the number of elementary moves realizing the other isotopies are obtained from an involved analysis, which takes up a good part of \cite{hass-lagarias2001}. Independently, but also using normal surface theory, S. Galatolo worked out a larger bound, but in his announcement the problem of how to maneuver $K$ to $l$ is not adequately addressed \cite{galatolo1998}.
By considering a normal form for surfaces whose boundary is contained in the 1-skeleton of a triangulated 3-manifold, we avoid the retriangulation and subsequent isotopies related to $N(K)$. This not only substantially simplifies the proof but improves the upper bound on the number of moves to $2^{120t+14}$.
A corollary of bounding the number of elementary moves is a bound on the number of Reidemeister moves to unknot an unknot diagram with crossing number $n$. Our result lets us improve the bound given by Hass and Lagarias from $2^{10^{11}n}$ to $2^{10^5n}$.
In Section~\ref{sec:btnf} we define the normal form and relate it to a restricted version of normal surface theory in truncated tetrahedra. Section~\ref{sec:enumeration} explains how to enumerate the normal discs, which is a key technical detail for this paper. In Section~\ref{sec:normalization} we explain how an essential surface spanning a link in the 1-skeleton can be isotoped into boundary-twisted normal form. Section~\ref{sec:fund} proves the existence of a fundamental spanning disc for the unknot. In Section~\ref{sec:disc_bound}, using the previous results, we obtain an improved upper bound on the number of triangles needed in a spanning disc for an unknot. From this bound, we obtain bounds which improve the main results of \cite{hass-lagarias2001}.
\section{Boundary-twisted and boundary-restricted normal forms}\label{sec:btnf}
\subsection{Marked triangulations}
Given a pair $(M, L)$ consisting of a compact 3--manifold $M$ and link $L$ in $M$, we say $\mathcal T$ is a \emph{marked triangulation} for $(M, L)$ if $\mathcal T$ is a triangulation of $M$ with $L$ contained in the 1--skeleton. The vertices and edges of $\mathcal T$ which belong to $L$ are called \emph{marked vertices and edges}.
If two edges of the same face of a tetrahedron are marked and not part of a triangle component of $L$, there is an isotopy of these edges to the third edge of the face reducing the number of marked edges. It will suffice to consider that $L$ has no triangle component and the number of marked edges is minimal.
\begin{conv}
No component of $L$ is a triangle, and in any tetrahedron of a marked triangulation, every face has at most one marked edge.
\end{conv}
\subsection{Boundary-twisted normal form}
A tetrahedron $T$ of a marked triangulation is of one of 9 possible types and is called a \emph{marked tetrahedron}. Normal disc types in a marked tetrahedron are more complicated than the usual triangles and quads.
\begin{figure}[htbp]
\begin{center}
\subfigure[edge-edge arc \label{ee_arc}]{
\includegraphics[width=2.6cm]{normal_arc-bdy}
}
\hspace{.3cm}
\subfigure[vertex-edge arc \label{ve_arc}]{
\includegraphics[width=2.6cm]{normal_arc-bdy2}
}
\hspace{.3cm}
\subfigure[vertex-vertex arc \label{vv_arc}]{
\includegraphics[width=2.6cm]{normal_arc-bdy3}
}
\caption{Normal arcs in a face}
\label{fig:normal_arc-bdy}
\end{center}
\end{figure}
\begin{defn}
We define a \emph{normal arc} in a face of $T$ to be a properly embedded arc such that one of the following holds:
\begin{itemize}
\item starts in the interior of an edge and ends in the interior of another edge (Figure~\ref{ee_arc})
\item starts at a marked vertex and ends in the interior of the opposite edge (Figure~\ref{ve_arc})
\item starts at a marked vertex and ends at a different marked vertex (Figure~\ref{vv_arc})
\end{itemize}
\end{defn}
\begin{defn}\label{defn:twisted}
We define a \emph{twisted normal disc} $D$ in a marked tetrahedron $T$ to be a properly embedded disc such that its boundary $\partial D$ satisfies the following conditions ($e$ is an edge and $\Delta$ is a face):
\begin{enumerate}
\item $\partial D \cap \operatorname{int}(\Delta)$ consists of normal arcs for every face $\Delta$
\item $\partial D \cap e$ is one of the following: a) empty, b) one endpoint of $e$, c) an arc and $e$ is marked, d) an interior point of $e$ and $e$ is unmarked, e) both endpoints of $e$ and $e$ is unmarked
\item a pair of normal arcs of $\partial D$ abutting the same point or arc of $\partial D \cap e$ must be in different faces
\item $\partial D \cap \Delta$ does not have two normal arcs with endpoints at the same marked vertex
\end{enumerate}
\end{defn}
\begin{defn}[Boundary-twisted normal form]
Let $(M, L)$ be a 3-manifold with link $L$ and marked triangulation $\mathcal T$. Suppose also $S$ is a surface with boundary contained in $L$. We say $S$ is in \emph{boundary-twisted normal form} if its intersection with every tetrahedron consists of twisted normal discs.
\end{defn}
In attempting normal surface theory with boundary-twisted normal surfaces, a technical obstacle arises: it is necessary to consider an additional kind of surface punctured by marked edges. Rather than work with that setup, we have chosen to work in a more familiar setting. We will truncate each marked tetrahedron at its marked vertices and edges. Then we do normal surface theory with respect to these truncated tetrahedra while imposing some additional conditions on the surface's boundary behavior in the truncated regions.
\begin{defn}
Let $T$ be a marked tetrahedron. Then the truncation of $T$, denoted $T_{tr}$, is a polyhedron obtained by the process indicated in Figure~\ref{fig:truncate}. Given a marked triangulation $\mathcal T$, the \emph{truncated triangulation} $\mathcal T_{tr}$ is the polyhedral decomposition given by truncating every marked tetrahedron of $\mathcal T$.
\end{defn}
\begin{figure}
\begin{center}
\subfigure[Start by truncating (cut off) marked edges]
{\includegraphics[height=4.0cm]{truncate_edge}}
\hspace{.6cm}
\subfigure[Then truncate marked vertices]
{\includegraphics[width=2.2cm]{truncate_vertex}}
\end{center}
\caption{Obtaining truncated tetrahedra from marked tetrahedra}
\label{fig:truncate}
\end{figure}
\begin{defn}
A properly-embedded disc $D$ in a truncated tetrahedron $T_{tr}$ is \emph{normal} if it is the restriction to $T_{tr}$ of a twisted normal disc in $T$. A surface in a truncated triangulation is normal if it intersects each truncated tetrahedron in normal discs.
\end{defn}
\begin{rmk}
This is not the same as the usual generalization of the concept of normal disc from a tetrahedron to a polyhedron, e.g. \cite{brittenham1991} or \cite{lackenby2001b}. Our definition follows naturally from the view of boundary-twisted normal surfaces and has the advantage of reducing the number of disc types.
\end{rmk}
Since the truncation $T_{tr}$ can be considered a subset of $T$, a normal disc in $T_{tr}$ naturally sits inside $T$. It can be extended to a twisted normal disc of $T$ in a natural way. In general, this extension is far from unique, since one can choose different directions to twist and in some cases twist either in the interior of an edge or at an end (see Figure~\ref{fig:twisting}).
\begin{figure}
\begin{center}
\subfigure[In the interior of a marked edge]
{\includegraphics[width=4.8cm]{interior_twisting}}
\hspace{.5cm}
\subfigure[At an end of a marked edge]
{\includegraphics[width=4.8cm]{exterior_twisting}}
\caption{Discs differing by an opposite twist along a marked edge}
\label{fig:twisting}
\end{center}
\end{figure}
\subsection{Boundary-restricted normal form}
A boundary-twisted normal surface is a normal surface in the truncated triangulation $\mathcal T_{tr}$, whose boundary has been extended through the truncated region to $L$. In order for this extension to happen in a simple way, the normal surface must satisfy additional conditions on its boundary. We call this type of normal surface in $\mathcal T_{tr}$ a \emph{boundary-restricted normal surface}.
There are two kinds of boundary regions given by the truncation: triangles and rectangles. Rectangles should be visualized as ``long'', with the long sides corresponding to the longitudinal direction and the short sides corresponding to the meridional direction.
We define a boundary-restricted normal surface to be a normal surface $S$ in $\mathcal T_{tr}$ such that for any rectangle $R$, the intersection $\partial S \cap R$ looks as in Figure~\ref{fig:rectangle}. We either have one longitudinal normal arc, $n$ meridional arcs, or $n$ meridional arcs with one or two corner-cutting arcs ($n \geq 0$). If there are two corner-cutting arcs, they must be at opposite corners.
\begin{figure}
\begin{center}
\subfigure[One longitudinal arc]
{\includegraphics[width=3cm]{rectangle-horizontal}}
\hspace{.5cm}
\subfigure[$n$ meridional arc(s)]
{\includegraphics[width=3cm]{rectangle-vertical}}\\
\subfigure[$n$ meridional arc(s) and one corner-cutting arc]
{\includegraphics[width=3cm]{rectangle-twisting2}}
\hspace{.5cm}
\subfigure[$n$ meridional arc(s) and two corner-cutting arcs]
{\includegraphics[width=3cm]{rectangle-twisting}}
\caption{Pictures of the restrictions on the boundary of a boundary-restricted normal surface}
\label{fig:rectangle}
\end{center}
\end{figure}
\section{Enumeration of twisted normal discs}\label{sec:enumeration}
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{flipping}
\caption{Discs that differ from flipping a vertex-vertex arc to an adjacent face}
\label{fig:flipping}
\end{center}
\end{figure}
Figures~\ref{fig:tet_nv0e}-\ref{fig:tet_0v2e} show pictures of some twisted normal discs in the different marked tetrahedra. To obtain further disc types from a particular figure, apply a symmetry of the tetrahedron (preserving markings), change the direction of twisting along a marked edge (Figure~\ref{fig:twisting}), and/or flip a vertex-vertex arc to an adjacent face (Figure~\ref{fig:flipping}). Each figure's caption includes a number denoting the total number of disc types obtainable from a figure in this manner. Also note that some of the discs in one kind of marked tetrahedron also show up in other kinds, but the figures illustrate only the non-redundant ones. For example, all of the disc types for a tetrahedron with one marked vertex and one marked edge show up in the tetrahedron with two marked vertices and one marked edge.
Except for these operations and the non-illustration of redundant disc types, these figures are complete. The organization of the figures is meant to suggest the method of enumeration. We now explain the enumeration in further detail.
\subsection{Marked tetrahedron with only marked vertices}
For an unmarked tetrahedron, the only twisted normal discs are the standard normal discs: triangles and quads (Figure~\ref{fig:usual}). This gives the usual 7 disc types.
For a marked tetrahedron with one or more marked vertices but no marked edges, in addition to the previous triangles and quads, we have vertex touching triangles (Figure~\ref{fig:vertex_touching}) and possibilities utilizing vertex-vertex arcs (Figures~\ref{fig:bigon}, \ref{fig:triangle_vv_arc}, \ref{fig:triangle_vv_arc2}, and \ref{fig:quad_vv_arc}).
For a tetrahedron with one marked vertex, we have 7 discs from the unmarked case and the three vertex touching triangles, making 10 disc types.
For a tetrahedron with two marked vertices, we have 4 triangles, 3 quads, 6 vertex-touching triangles, 2 triangles with a vertex-vertex arc, and 1 bigon. This gives 16 total.
For a tetrahedron with three marked vertices, we have 4 triangles, 3 quads, 9 vertex-touching triangles, 6 triangles with one vertex-vertex arc, 4 triangles with all vertex-vertex arcs, and 3 bigons. This gives 29 total.
For a tetrahedron with four marked vertices, we have 4 triangles, 3 quads, 6 quads with all vertex-vertex arcs, 12 vertex-touching triangles, 12 triangles with one vertex-vertex arc, 16 triangles with all vertex-vertex arcs, and 6 bigons, giving 59 total.
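As a quick arithmetic check (not part of the enumeration itself), summing the component counts quoted above reproduces the stated totals for a tetrahedron with $n$ marked vertices and no marked edges:

```python
# Component counts quoted in the text, keyed by number of marked vertices.
component_counts = {
    0: [4, 3],                      # triangles and quads only
    1: [4, 3, 3],                   # plus the three vertex-touching triangles
    2: [4, 3, 6, 2, 1],
    3: [4, 3, 9, 6, 4, 3],
    4: [4, 3, 6, 12, 12, 16, 6],
}
totals = {n: sum(parts) for n, parts in component_counts.items()}
print(totals)  # {0: 7, 1: 10, 2: 16, 3: 29, 4: 59}
```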
\begin{figure}[htbp]
\begin{center}
\subfigure[The usual suspects: triangle (4) and quad (3) \label{fig:usual}]
{\includegraphics[width=6.5cm]{unmarked_tet}}\\
\subfigure[A vertex-touching triangle (3) \label{fig:vertex_touching} ]
{\includegraphics[width=3cm]{vertex_touching_triangle}}
\hspace{.3cm}
\subfigure[A bigon (1)\label{fig:bigon} ]
{\includegraphics[width=3cm]{bigon}}
\hspace{.3cm}
\subfigure[Triangle with one vertex-vertex arc (2)\label{fig:triangle_vv_arc} ]
{\includegraphics[width=3cm]{two_marked_vertices}}\\
\subfigure[Triangle with all vertex-vertex arcs (4) \label{fig:triangle_vv_arc2} ]
{\includegraphics[width=3cm]{three_marked_vertices}}
\hspace{.3cm}
\subfigure[Quad with all vertex-vertex arcs (6) \label{fig:quad_vv_arc} ]
{\includegraphics[width=3cm]{four_marked_vertices}}
\hspace{.3cm}
\caption{Twisted normal discs in a tetrahedron with $n$ marked vertices ($n \geq 0$)}
\label{fig:tet_nv0e}
\end{center}
\end{figure}
\subsection{Tetrahedron with one marked edge}
For the one marked edge case, we have 30 total disc types.
There are two triangles and one quad that don't use the marked edge at all. Moving on to discs that utilize an endpoint of the marked edge, there are 6 vertex-touching triangles, as in a previous case.
The remaining discs utilize the entire marked edge or a subarc of it (see Figure~\ref{fig:tet_0v1e}). The first row illustrates those using the entire marked edge, while the last two rows show disc types using an interior or exterior subarc, resp.
\begin{figure}
\begin{center}
\subfigure[1\label{fig:triangle_full}]
{\includegraphics[width=3cm]{triangle_full_marked_edge}}\\
\subfigure[4 \label{fig:0v1e-int_quad}]
{\includegraphics[width=3cm]{0v1e-interior_quad}}
\hspace{.3cm}
\subfigure[4\label{fig:0v1e-int_pent}]
{\includegraphics[width=3cm]{0v1e-interior_pentagon}}\\
\subfigure[4\label{fig:0v1e-ext_quad}]
{\includegraphics[width=3cm]{0v1e-exterior_quad}}
\hspace{.3cm}
\subfigure[4\label{fig:0v1e-ext_quad2}]
{\includegraphics[width=3cm]{0v1e-exterior_quad2}}
\hspace{.3cm}
\subfigure[4\label{fig:0v1e-ext_pent}]
{\includegraphics[width=3cm]{0v1e-exterior_pentagon}}
\caption{Twisted normal discs in a tetrahedron with one marked edge}
\label{fig:tet_0v1e}
\end{center}
\end{figure}
\subsection{Tetrahedron with one marked edge and one marked vertex}
There are 47 total disc types, with 30 types from the 1 marked edge case, and 17 new types, which we will enumerate and illustrate. The 17 new disc types must utilize the marked vertex (see Figure~\ref{fig:tet_1v1e}). The first row of the figure shows 2 discs not utilizing a subarc of the marked edge: a vertex-touching triangle and a bigon. Each subsequent row of the figure, as before, illustrates the disc types utilizing a particular kind of subarc of the marked edge.
\begin{figure}
\begin{center}
\subfigure[1\label{fig:1v1e-vertex-touching}]
{\includegraphics[width=3cm]{1v1e-vertex_touching_triangle}}
\subfigure[1\label{fig:1v1e-bigon}]
{\includegraphics[width=3cm]{1v1e-bigon}}\\
\subfigure[3\label{fig:1v1e-triangle_full}]
{\includegraphics[width=3cm]{1v1e-triangle_full_edge}}
\subfigure[2\label{fig:1v1e-ext_triangle}]
{\includegraphics[width=3cm]{1v1e-exterior_triangle}}
\subfigure[4\label{fig:1v1e-ext_quad}]
{\includegraphics[width=3cm]{1v1e-exterior_quad}}
\subfigure[2\label{fig:1v1e-ext_quad3}]
{\includegraphics[width=3cm]{1v1e-exterior_quad3}}\\
\subfigure[4\label{fig:1v1e-int_quad}]
{\includegraphics[width=3cm]{1v1e-interior_quad}}
\caption{Twisted normal discs in a tetrahedron with one marked edge and one marked vertex}
\label{fig:tet_1v1e}
\end{center}
\end{figure}
\subsection{Tetrahedron with one marked edge and two marked vertices}
There are a total of 93 disc types. From the 1 marked edge case, we have 30. From the 1 marked edge and 1 marked vertex case, we get double contributions (since we now have two marked vertices), giving $2 \times 17 = 34$. We enumerate the 29 new disc types below. Note that they must utilize both of the marked vertices.
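The bookkeeping above can be checked arithmetically: 30 types carried over from the one-marked-edge case, a doubled contribution from the one-edge-one-vertex case, and the 29 new types enumerated below.

```python
# Arithmetic check of the total for one marked edge and two marked vertices.
from_one_edge = 30
from_one_edge_one_vertex = 2 * 17      # two choices of marked vertex
new_types = 1 + 8 + 4 + 12 + 4         # bigon, triangles, and three quad families
total = from_one_edge + from_one_edge_one_vertex + new_types
print(total)  # 93
```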
There are 9 disc types not using any arc of the marked edge: 1 bigon (Figure~\ref{fig:2v1e-bigon}) and 8 triangles with all vertex-vertex arcs (Figure~\ref{fig:2v1e-triangle_vv_arcs}).
The remaining disc types are all quads. There are 4 quads with three vertex-vertex arcs which utilize the full marked edge (Figure~\ref{fig:2v1e-quad_full_edge}). There are 12 quads utilizing an exterior subarc of the marked edge (Figure~\ref{fig:2v1e-ext_quad}). There are 4 quads utilizing an interior subarc of the marked edge (Figure~\ref{fig:2v1e-int_quad}).
\begin{figure}[htbp]
\begin{center}
\subfigure[1\label{fig:2v1e-bigon}]
{\includegraphics[width=3cm]{2v1e-bigon}}
\hspace{.3cm}
\subfigure[8\label{fig:2v1e-triangle_vv_arcs}]
{\includegraphics[width=3cm]{2v1e-triangle_vv_arcs}}
\hspace{.3cm}
\subfigure[4\label{fig:2v1e-quad_full_edge}]
{\includegraphics[width=3cm]{2v1e-quad_full_edge}}\\
\subfigure[12\label{fig:2v1e-ext_quad}]
{\includegraphics[width=3cm]{2v1e-exterior_quad}}
\hspace{.3cm}
\subfigure[4\label{fig:2v1e-int_quad}]
{\includegraphics[width=3cm]{2v1e-interior_quad}}
\caption{Twisted normal discs in a tetrahedron with marked edge and two marked vertices}
\label{fig:tet_2v1e}
\end{center}
\end{figure}
\begin{figure}[htbp]
\subfigure[8\label{fig:truncate_disc}]
{\includegraphics[width=3cm]{0v2e-quad_both_full}}\hspace{1cm}
\subfigure[16]
{\includegraphics[width=3cm]{0v2e-quad_ext_full}}\hspace{1cm}
\subfigure[4]
{\includegraphics[width=3cm]{0v2e-quad_int_full}}\\
\subfigure[2\label{fig:truncate_disc2}]
{\includegraphics[width=3cm]{0v2e-quad_ext_ext}}\hspace{1cm}
\subfigure[4]
{\includegraphics[width=3cm]{0v2e-pentagon_2ext2}}\hspace{1cm}
\subfigure[8]
{\includegraphics[width=3cm]{0v2e-pentagon_2ext3}}\\
\subfigure[16]
{\includegraphics[width=3cm]{0v2e-pentagon_ext_int}}\hspace{1cm}
\subfigure[4]
{\includegraphics[width=3cm]{0v2e-hexagon_2int}}\hspace{1cm}
\subfigure[4\label{fig:truncate_disc3}]
{\includegraphics[width=3cm]{0v2e-hexagon_2int2}}
\caption{Twisted normal discs in a tetrahedron with two marked edges}
\label{fig:tet_0v2e}
\end{figure}
\subsection{Tetrahedron with two marked edges}
There are a total of 148 disc types. There are 66 new disc types, which are illustrated in Figure~\ref{fig:tet_0v2e}.
As before, the discs from the previous cases should be counted, but not all of them are compatible with this type of marked tetrahedron. From the 1 marked edge case, we have 1 normal quad, 4 vertex-touching triangles, 8 quads with an interior subarc, and 8 quads with an exterior subarc. The total is 21.
From the 1 marked edge and 1 marked vertex case, we have 2 bigons, and a quadruple contribution of the rest of the disc types, except for the vertex-touching triangles which are considered in the previous paragraph. This gives $2 + 4 \times 15 = 62$ total.
The non-redundant discs from the 1 marked edge and 2 marked vertex case are not allowable here.
\subsection{Obtaining normal discs from twisted normal discs by truncating}
For normal surface theory, we need to know the maximal number of normal discs in any truncated tetrahedron.
Table~\ref{big_table} shows the total number of normal discs in a truncated tetrahedron. These are obtained by truncating the tetrahedron and observing carefully how different twisted normal disc types become the same normal disc type. For example, after truncation of the edges, the two discs illustrated in Figure~\ref{fig:truncate_disc2} are amongst the discs obtained from Figure~\ref{fig:truncate_disc}. The last column of the table will be used later in Section~\ref{sec:disc_bound}. It is the maximum number of disc types whose boundary has a common normal arc in a face.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{@{}lr@{}}
\toprule
\textbf{Tetrahedron type} & \textbf{Total}\\
\midrule
No marked vertices or edges & 7\\
one marked vertex & 10\\
two marked vertices & 16\\
three marked vertices & 29\\
four marked vertices & 59\\
1 marked edge & 30 \\
1 marked edge, 1 marked vertex & 47 \\
1 marked edge, 2 marked vertices & 93 \\
2 marked edges & 148\\
\bottomrule
\end{tabular}
\end{center}
\caption{Number of twisted normal disc types in each type of marked tetrahedron}
\label{table-twisted_normal}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{@{}lrc@{}}
\toprule
\textbf{Tetrahedron type} & \textbf{Total} & \quad \textbf{Max arc \#}\\
\midrule
No truncation & 7 & 2\\
One truncated vertex & 10 & 3\\
Two truncated vertices & 16 & 3\\
Three truncated vertices & 29 & 3\\
Four truncated vertices & 59 & 3\\
One truncated edge & 15 & 3\\
Two truncated edges & 17 & 6\\
One truncated edge, one truncated vertex & 22 & 3\\
One truncated edge, two truncated vertices & 40 & 6\\
\bottomrule
\end{tabular}
\end{center}
\caption{Number of normal disc types in each type of truncated tetrahedron}
\label{big_table}
\end{table}
\section{Isotoping to boundary-twisted normal form}\label{sec:normalization}
\begin{thm}\label{thm:btnf}
Let $M$ be a compact irreducible $3$-manifold with triangulation $\mathcal T$ and $L$ be a polygonal link contained in the 1-skeleton of $\mathcal T$. Suppose $L$ has at most one edge in each triangle of $\mathcal T$ and bounds an incompressible surface $S$. Then $S$ can be isotoped (rel $\partial$) into boundary-twisted normal form.
\end{thm}
\begin{proof}
Isotope $S$ to be in general position with respect to the 2-skeleton of $\mathcal T$. Then $\operatorname{Int}(S) \cap \partial\Delta^3$ for any tetrahedron $\Delta^3$ consists of simple closed curves and/or embedded (open) arcs with endpoints on marked edges or vertices.
Consider the portions of the marked edges of $\Delta^3$ which abut pieces of $S$ inside of $\Delta^3$. By a small isotopy of $S$, we can assume that they are arcs or endpoints.
We call a simple closed curve in $S \cap \partial \Delta^3$ which is a boundary component of an annulus contained in $ S \cap \Delta^3$ a \emph{circle of intersection} of $S$ and $\partial\Delta^3$. Note that strictly speaking, this is an abuse of terminology as a circle of intersection should refer to a component of the intersection which is a circle.
Define the \emph{weight} of $S$ to be the sum of the number of components of $S \cap \operatorname{int}(\Delta)$ over all faces $\Delta$ of $\mathcal T$. We will now perform a series of weight-reducing isotopies until $S$ is in boundary-twisted normal form. Each type of isotopy will eliminate an unwanted situation and bring $S$ closer to being in boundary-twisted normal form. After an isotopy, we may have disturbed our work in previous stages, but we can repeat the entire simplification procedure up to that point, which drives down the weight, to ensure all the previous conditions still hold. We will assume this is done in the following descriptions.
Consider a particular $\Delta^3$. For each circle of intersection, starting with innermost ones, we can take the disc bound by it in $\partial \Delta^3$ and push it in slightly. By doing so we obtain a compression disc for $S$. Since $S$ is incompressible and $M$ is irreducible, we can isotope $S$ until its intersection with $\Delta^3$ coincides with these compression discs. Therefore we can arrange that each circle of intersection on $\partial \Delta^3$ bounds a disc inside it.
Consider a circle of intersection which is in the interior of a face. We can isotope the disc of $S$ bound by the circle through the face, removing one or more circles of intersection. We repeat this to remove any further circles of intersection inside a face.
Now consider any circle of intersection whose restriction to a face contains a non-normal arc. The arc either has both its endpoints on a marked vertex, a \emph{monogon} (Figure~\ref{monogon}), or at least one endpoint in the interior of an edge, a \emph{$D$-curve} (Figure~\ref{D-curve}). A monogon can be eliminated by pushing the disc it bounds into the next tetrahedron.
\begin{figure}
\begin{center}
\subfigure[monogon \label{monogon}]{
\includegraphics[width=3cm]{monogon}
}
\hspace{.3cm}
\subfigure[D-curve \label{D-curve}]{
\includegraphics[width=3cm]{d-curve}
}
\caption{Some non-normal curves}
\end{center}
\end{figure}
In the $D$-curve case, we can suppose it is innermost. Such an innermost curve and a segment of the marked edge bound a disc in the face. This is a compression disc, so we can isotope $S$ to the disc and then push through the face. This isotopy must reduce the weight.
Because we are decreasing the weight, by repeating this procedure, we can ensure that the surface $S$ in $\Delta^3$ satisfies:
\begin{itemize}
\item Its intersection with each tetrahedron consists of discs
\item The boundary of each disc consists of normal arcs and arcs of marked edges (Definition~\ref{defn:twisted} (1) and (2c))
\end{itemize}
We can eliminate a disc with two normal arcs in a face touching the same vertex by pushing the disc through the face near the vertex (Definition~\ref{defn:twisted} (4)). This reduces the weight by combining the two arcs into one.
It may be the case that the boundary of a disc $D$ intersects an edge $e$ more than once, including at least once in its interior. We can suppose $D$ is innermost. We now have two cases: 1) $e$ is not marked. 2) $e$ is marked.
Case 1): There is a disc $E$, whose interior is disjoint from $S$, such that $\partial E = \alpha \cup \beta$, where $\alpha$ is an arc on $S$ connecting two points of $\partial D \cap e$ and $\beta$ is a subarc of $e$. We can push $S$ across $E$ so that $\alpha$ is taken to $\beta$. A further small isotopy of $S$ through $e$ will split $D$ into two discs in the tetrahedron. If the two points of $\partial D \cap e$ were in the interior of $e$, these two discs have combined weight less than that of $D$. Otherwise, one point was at an endpoint of $e$ and it is possible the two discs have the same weight as that of $D$. Nonetheless, the intersection of $S$ with the 1-skeleton is simplified. Since none of our weight-reducing isotopies increase intersection with the 1-skeleton, we can eliminate this situation also.
Case 2): If $D$ intersects $e$ only at its endpoints, we need to avoid violating Definition~\ref{defn:twisted} (2e). In this case, there must be an arc $\alpha$ on $D$ joining the endpoints of $e$. Then $\alpha$ and $e$ bound a compression disc which gives a weight-reducing compression. Otherwise $D$ intersects $e$ in its interior in at least one arc. By taking an arc $\alpha$ of the marked edge adjacent to two innermost components of $\partial D \cap e$ and an arc $\beta$ in the interior of $S$ connecting the endpoints of $\alpha$, we obtain a simple closed curve $\gamma =\alpha \cup \beta$. $\gamma$ bounds a compression disc in $\Delta^3$ for $S$. Thus we can isotope $S$ to the disc, reducing the weight.
Now consider a circle of intersection which touches a marked edge but does not pass through from one face to another (this violates Definition~\ref{defn:twisted} (3)). Suppose the curve bounds the disc $D$. We can reduce the weight by pushing the part of $D$ near the marked edge through a face.
Eventually we arrive at a situation where the pieces of $S$ inside $\Delta^3$ are exactly what we defined previously as twisted normal discs (see Definition~\ref{defn:twisted}). In particular, if Definition~\ref{defn:twisted} (2) is not satisfied, clearly we can do one of the above isotopies.
Now we move onto another tetrahedron and repeat the entire process until $S$ is cleaned up to the requisite form inside the tetrahedron. Note this may ruin our work in the previous tetrahedron. We move onto yet another tetrahedron and do the process. By cycling through the tetrahedra and noting that the weight is strictly decreasing, eventually the weight is at a minimum and $S$ is in boundary-twisted normal form.
\end{proof}
\section{Existence of a fundamental unknotting disc}\label{sec:fund}
Normal surface theory in truncated triangulations is analogous to that in standard triangulations. Each normal disc type in a truncated tetrahedron is represented by a variable. There are integer linear equations in these variables, \emph{matching equations}, given by each pair of truncated tetrahedra sharing a face. A coordinate vector is obtained from a normal surface by simply counting the number of normal discs of each type it contains. A normal surface's vector must be a solution to the equations, but non-negative integral solutions are not necessarily normal surfaces. Non-negative integer solutions which satisfy a no-intersection condition are \emph{uniquely} realized as a normal surface. The condition is that certain pairs of coordinates cannot both be nonzero; the pairs correspond to normal disc types that cannot be realized at the same time as disjoint discs.
A normal surface with vector $v$ such that $v \neq v_1 + v_2$ where each $v_i\neq 0$ is a non-negative integer solution to the matching equations is called \emph{fundamental}. All such $v$ constitute the minimal Hilbert basis for the system of equations. This basis is a finite, generating set for all non-negative integer solutions and can be found using methods of integer linear programming.
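As a toy illustration of fundamental solutions (this is not the paper's actual system of matching equations), one can brute-force the non-decomposable non-negative integer solutions of a single matching-style equation $x_1 + x_2 - x_3 = 0$ up to a search bound:

```python
from itertools import product

BOUND = 4

def is_solution(v):
    # one toy matching-style equation: x1 + x2 = x3
    return v[0] + v[1] - v[2] == 0

# all nonzero non-negative integer solutions with entries <= BOUND
solutions = [v for v in product(range(BOUND + 1), repeat=3)
             if is_solution(v) and any(v)]

def decomposable(v, sols):
    # v is decomposable if v = a + b for nonzero solutions a, b
    sol_set = set(sols)
    return any(tuple(x - y for x, y in zip(v, a)) in sol_set
               for a in sols
               if a != v and all(x >= y for x, y in zip(v, a)))

fundamental = [v for v in solutions if not decomposable(v, solutions)]
print(sorted(fundamental))  # [(0, 1, 1), (1, 0, 1)]
```

Here the minimal Hilbert basis consists of the two obvious generators; every other solution is a sum of them.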
Let $S$ be a normal surface with solution vector $v(S) = v_1 + v_2$, where each $v_i$ is a nonnegative integer solution vector. The no-intersection condition on $v(S)$ passes to each $v_i$, so that $v_i=v(S_i)$ for a normal surface $S_i$. There is a \emph{Haken sum}, a cut-and-paste addition of normal surfaces, such that the Haken sum of $S_1$ and $S_2$ is $S$ (Figure~\ref{fig:haken_sum}). Two important properties are that Euler characteristic is additive under Haken sum, and there is a way of cutting and pasting the ``wrong way'', the so-called \emph{irregular switch}, that creates a non-normal surface.
\begin{figure}
\begin{center}
\subfigure[Example of intersection of two summands of a normal surface]
{\includegraphics[width=3.2cm]{haken_sum}}
\hspace{.3cm}
\subfigure[A cut-and-paste giving normal discs]
{\includegraphics[width=3.2cm]{haken_sum3}}
\hspace{.3cm}
\subfigure[A cut-and-paste resulting in non-normal discs]
{\includegraphics[width=3.2cm]{haken_sum2}}
\caption{The Haken sum}
\label{fig:haken_sum}
\end{center}
\end{figure}
In our setup, we work with boundary-restricted normal surfaces. In order that our arguments work, we need only ensure that passing to a summand lets us continue working with boundary-restricted normal surfaces. That is the content of the following simple, but crucial, lemma.
\begin{lem}
Let $S$ be a boundary-restricted normal surface and suppose $v(S) = v_1 + v_2$, where each $v_i$ is a nonzero solution to the matching equations. Then $v_i = v(S_i)$ for a unique boundary-restricted normal surface $S_i$ and $S$ is the Haken sum $S_1 + S_2$.
\end{lem}
\begin{proof}
From the remarks about normal surface theory preceding the lemma, clearly $v_i = v(S_i)$ for a unique normal surface $S_i$ and $S = S_1 + S_2$. It remains only to check that $S_i$ is boundary-restricted. But the conditions on the boundary (Figure~\ref{fig:rectangle}) are inherited from $v$ by each $v_i$, so this follows.
\end{proof}
Now recall that the \emph{weight} of a normal surface is the number of points of intersection with the 1-skeleton.
\begin{thm}\label{thm:fund}
Let $(M, K)$ have a marked triangulation $\mathcal T$ with $K$ a knot in $\mathcal T^{(1)}$. Suppose $\mathcal T_{tr}$ is the truncation of $\mathcal T$ and $D$ is a boundary-restricted normal disc in $\mathcal T_{tr}$. If $D$ is of least weight over all such discs, then it is fundamental.
\end{thm}
\begin{proof}
Standard techniques as in \cite{jaco-oertel1984} are applicable here. So we are brief on some points and refer the reader to \cite{jaco-oertel1984} for more details.
Suppose $D$ is not fundamental. Then $D = D_1 + D_2$. By Schubert's lemma (\cite{jaco-oertel1984}, 1.9, p. 199), we can suppose the $D_i$'s are connected. The condition on $\partial D$ means that one summand, say $D_2$, is a surface with only meridional boundary components (if any), while the other spans $K$. Since Euler characteristic is additive under Haken sum, we must have two cases: 1) $D_1$ a disc and $D_2$ is a torus, Klein bottle, annulus, or M\"obius band, or 2) $D_1$ is a punctured torus or Klein bottle and $D_2$ is a sphere.
In the first case, $D_1$ is a normal disc spanning $K$ of smaller weight than $D$, contradicting the minimality of $D$. In the second case, Schubert's lemma also says we can suppose that no curve of intersection is separating on both $D_i$'s. Thus, since every curve separates on a sphere, no curve of intersection separates $D_1$. Pick such an innermost curve on $D_2$ that bounds a disc $E$ of least weight over all innermost curves. This curve must be 2-sided on $D_1$. By compressing along $E$, we obtain a disc $D'$ with weight at most that of $D$. If the disc is not normal, then normalization will reduce the weight, contradicting the definition of $D$.
Suppose $D'$ is normal. Since $D$ is of minimal weight, $D'$ must have the same weight. This implies that the weight of $D_2 - E$ equals the weight of $E$. If there is no other curve of intersection, compressing along the disc $D_2 - \operatorname{Int}(E)$ will result in a non-normal disc and we obtain a contradiction as before. If there is another curve of intersection, examination shows that there is another innermost curve of intersection on $D_2$ which has less weight than $E$, a contradiction.
\end{proof}
\section{A bound on the number of elementary moves to unknot}\label{sec:disc_bound}
J. Hass and J. Lagarias obtained an upper bound of $2^{10^7t}$ on the minimum number of elementary moves to take $K$ to a triangle in one tetrahedron \cite{hass-lagarias2001}. Their key insight was to use normal surface theory to obtain a normal spanning disc $D$ for the unknot $K$, which was of exponential size (in $t$). But this disc $D$ was normal with respect to a double barycentric subdivision of $M$ with a simplicial neighborhood of $K$ removed. Before we can move $K$ across the disc $D$, $K$ must first be moved to $\partial D$. Recall that the bulk of their efforts was on working out how to do this while bounding the number of elementary moves.
The Hass--Lagarias bound is obtained by isotoping $K$ by elementary moves across an annulus connecting $K$ to a curve on the boundary of the removed neighborhood of $K$, isotoping across the torus boundary of the regular neighborhood to the boundary of a normal disc, and finally isotoping across the normal disc to a single triangle. Since the number of triangles in the normal disc is at most $2^{8t+6}$, the chief culprit for their large final bound of $2^{10^7 t}$ is their large bound for the first two isotopies.
We will now show how to improve the upper bound on the number of moves to $2^{120t+14}$, and subsequently improve the upper bound on the number of Reidemeister moves to unknot an unknot diagram with crossing number $n$ from $2^{10^{11}n}$ to $2^{10^5n}$. The idea is to use boundary-twisted normal form to implement normal surface theory in the most direct way possible: Theorem~\ref{thm:fund} implies there is a boundary-twisted normal disc $D$ of bounded size. We straighten out the discs to be piecewise-linear, and move the unknot along $D$ using elementary moves until it becomes a triangle in a tetrahedron.
First, we need to understand how to bound the size of the fundamental normal disc given by Theorem~\ref{thm:fund}. In \cite{hass-lagarias-pippenger1999} an upper bound was given for the maximal coordinate of any fundamental solution. This bound depends only on two particular features of the system of integer linear equations: the number of variables, $n$, and the maximum, $m$, over the sum of the absolute values of the coefficients of an equation. The upper bound is $n \cdot m^\frac{n - 1}{2}$.
In the last column of Table~\ref{big_table}, for each type of truncated tetrahedron, we give the maximum number of normal disc types that share a particular normal arc type on a face. Since 6 is the max over all tetrahedra types, we see that any matching equation will have at most 12 as the sum of the absolute values of its coefficients.
Thus plugging these numbers into the bound from \cite{hass-lagarias-pippenger1999}, we obtain $59t\cdot 12^\frac{59t - 1}{2}$. We get a slightly nicer form by relaxing the bound to $59t \cdot2^{118t-2}$.
So the total number of discs of a fundamental surface is at most $59t \cdot 59t \cdot2^{118t-2} \leq 2^{120t + 10}$.
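The two estimates above are easy to verify numerically; the following sketch checks, for a range of $t$, that $12^{(59t-1)/2} \leq 2^{118t-2}$ (in squared form, to avoid fractional exponents) and that $(59t)^2 \cdot 2^{118t-2} \leq 2^{120t+10}$:

```python
# Sanity check of the relaxations used in the disc-count bound.
for t in range(1, 21):
    n = 59 * t
    # squaring both sides of 12**((n-1)/2) <= 2**(2*(n-1)):
    assert 12 ** (n - 1) <= 2 ** (4 * (n - 1))
    # (59t)^2 * 2^(118t-2) <= 2^(120t+10):
    assert n * n * 2 ** (118 * t - 2) <= 2 ** (120 * t + 10)
print("estimates hold for t = 1..20")
```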
This bound with the next basic result will give a bound on the number of elementary moves to deform $K$ to a triangle. Recall that an \emph{elementary move} of a polygonal link in a piecewise-linear 3-manifold with specified triangulation consists of two kinds of moves (and their inverses) in a tetrahedron (Figure~\ref{elem_moves}): 1) a segment of the link is divided into two by inserting a vertex 2) two segments which are edges of a triangle otherwise disjoint from the link are moved to the third edge of the triangle.
\begin{figure}
\begin{center}
\subfigure{\includegraphics[width=1.8in]{elem_move1}}
\hspace{1.3cm}
\subfigure{\includegraphics[width=2in]{elem_move2}}
\caption{Elementary moves taking place in a tetrahedron}
\label{elem_moves}
\end{center}
\end{figure}
\begin{lem}\cite{hass-lagarias2001}\label{lem:elem_move}
Let $M$ be a triangulated $3$-manifold with $S$ a normal disc in $M$ with $w$ triangles. Then $\partial S$ can be isotoped to a triangle by a series of at most $2w$ elementary moves in $M$, each of which takes place in a triangle or edge in $S$.
\end{lem}
\begin{thm}[Bounding elementary moves to unknot]\label{thm:main}
Let $M$ be a triangulated $3$--manifold with $t$ tetrahedra and $K$ is a knot in the 1-skeleton with at most one edge in each face. Suppose $K$ is an unknot, i.e. bounds a disc in $M$. Then there is a series of at most $2^{120t + 14}$ elementary moves taking $K$ to a triangle lying in a tetrahedron.
\end{thm}
\begin{proof}
By Theorem~\ref{thm:btnf}, $K$ bounds a boundary-twisted normal disc $D$, and thus there is a boundary-restricted normal disc $D'$ in the truncated triangulation. We will suppose $D'$ is of least weight so that Theorem~\ref{thm:fund} applies to $D'$ and obtain a bound of $2^{120t + 10}$ on the number of normal discs in $D'$. After extending each normal disc to a twisted normal disc, the number of twisted normal discs of $D$ is also bounded by the same number. Before we can isotope $K$ across $D$ by elementary moves, we need to straighten each twisted normal disc to a piecewise-linear disc.
Almost every twisted normal disc can be straightened to a piecewise-linear disc by straightening any vertex-vertex arcs to become an edge of the tetrahedron.
The only exception is a ``bigon'', which has boundary composed of exactly two vertex-vertex normal arcs.
After straightening twisted normal discs and collapsing bigons to edges, a priori, the result may not be embedded. This can only happen when two of the tetrahedra around an unmarked edge contain twisted normal discs which have vertex-vertex arcs with endpoints on that edge. Since the disc is not embedded, we can pick two such arcs so that they do not bound a chain of bigons. The arcs bound a compression disc, and after the compression we have a disc with fewer points of intersection with the 1-skeleton. Normalizing the disc to boundary-twisted normal form and then truncation will give a boundary-restricted normal disc of lesser weight than $D'$, which is a contradiction.
By examining twisted normal discs, we see that there are at most 6 sides. So after straightening, every disc can be divided up into at most 6 piecewise-linear triangles. Using Lemma~\ref{lem:elem_move} with our bound, we obtain $2 \cdot 6 \cdot 2^{120t + 10} \leq 2^{120t + 14}$ as an upper bound on elementary moves.
\end{proof}
\subsection{Bounding the number of Reidemeister moves}
We recall some results from \cite{hass-lagarias2001}:
\begin{lem}[Triangulating a knot diagram]\label{thm:triangulate_diagram}
Given a knot diagram $D$ of crossing number $n$, there is a triangulated convex polyhedron $P$ in $\mathbb R^3$ with at most $140(n+1)$ tetrahedra so that it contains a knot in the 1-skeleton which orthogonally projects to $D$ on a plane. Furthermore, each face of a tetrahedron contains at most one edge of the knot.
\end{lem}
\begin{rmk}
Hass and Lagarias actually get a larger number because they need to assume the knot is in the interior of $P$.
\end{rmk}
\begin{lem}[Reidemeister bound for a projected elementary move]\label{thm:bound_projection}
Let $L$ and $L'$ be polygonal links in $\mathbb R^3$. Suppose that $L$ (resp. $L'$) has at most $n$ edges and has a link diagram $D$ (resp. $D'$) under orthogonal projection to the plane $z=0$.
If $k$ elementary moves take $L$ to $L'$, then at most $2k(n + \frac{1}{2}k + 1)^2$ Reidemeister moves take $D$ to $D'$.
\end{lem}
Now we can prove
\begin{thm}
Let $D$ be an unknot diagram with $n$ crossings. Then there is a sequence of at most $2^{10^5 n}$ Reidemeister moves taking $D$ to the standard unknot.
\end{thm}
\begin{proof}
We use Lemma~\ref{thm:triangulate_diagram} to obtain a triangulated polyhedron $P$ of at most $140(n+1)$ tetrahedra such that a knot in its 1-skeleton orthogonally projects to $D$. Let $t$ be the number of tetrahedra.
By Theorem~\ref{thm:main} there is a sequence of at most $2^{120t + 14}$ elementary moves taking $K$ to a triangle in a tetrahedron. Using the bound on the projection of an elementary move (Lemma~\ref{thm:bound_projection}) and noting $K$ contains at most $2t$ edges, we obtain the following bound on Reidemeister moves:
\[2^{120t + 14}(2\cdot t + \frac{2^{120t+14}}{2} + 1)^2 \]
This quantity is less than $2^{360t +43} \leq 2^{360\cdot 140(n+1) +43}$. Except for $n=1$, for which the Reidemeister bound obviously holds, the last is less than $2^{10^5 n}$, which was the desired bound. \end{proof}
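The final chain of estimates can be checked numerically (Python handles the large integers exactly): the straightening bound $2 \cdot 6 \cdot 2^{120t+10} \leq 2^{120t+14}$, the Reidemeister bound $2^{120t+14}(2t + 2^{120t+13} + 1)^2 \leq 2^{360t+43}$, and $360 \cdot 140(n+1) + 43 \leq 10^5 n$ for $n \geq 2$:

```python
# Exact integer verification of the estimates in the proof, for small t and n.
for t in range(1, 6):
    elem = 2 ** (120 * t + 14)
    assert 2 * 6 * 2 ** (120 * t + 10) <= elem
    reide = elem * (2 * t + elem // 2 + 1) ** 2   # elem // 2 == 2**(120t+13)
    assert reide <= 2 ** (360 * t + 43)
for n in range(2, 2000):
    assert 360 * 140 * (n + 1) + 43 <= 10 ** 5 * n
print("all estimates verified")
```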
| {
"timestamp": "2010-10-21T02:00:59",
"yymm": "1010",
"arxiv_id": "1010.4101",
"language": "en",
"url": "https://arxiv.org/abs/1010.4101",
"abstract": "Suppose $K$ is an unknot lying in the 1-skeleton of a triangulated 3-manifold with $t$ tetrahedra. Hass and Lagarias showed there is an upper bound, depending only on $t$, for the minimal number of elementary moves to untangle $K$. We give a simpler proof, utilizing a normal form for surfaces whose boundary is contained in the 1-skeleton of a triangulated 3-manifold. We also obtain a significantly better upper bound of $2^{120t+14}$ and improve the Hass--Lagarias upper bound on the number of Reidemeister moves needed to unknot to $2^{10^5 n}$, where $n$ is the crossing number.",
"subjects": "Geometric Topology (math.GT)",
"title": "Boundary-twisted normal form and the number of elementary moves to unknot"
} |
https://arxiv.org/abs/1910.02856 | Combinatorial considerations on the invariant measure of a stochastic matrix | The invariant measure is a fundamental object in the theory of Markov processes. In finite dimensions a Markov process is defined by transition rates of the corresponding stochastic matrix. The Markov tree theorem provides an explicit representation of the invariant measure of a stochastic matrix. In this note, we given a simple and purely combinatorial proof of the Markov tree theorem. In the symmetric case of detailed balance, the statement and the proof simplifies even more. | \section{A stochastic matrix and its invariant measure}
We consider a finite state space $Z:=\{1, \dots, N\}$ where the number of states $N\in\mathbb N$ is fixed. A stochastic matrix $M=(m_{ij})_{i,j=1, \dots N}$ (also called a Markov operator) on $\mathbb{R}^N$ is a real matrix with non-negative entries which satisfies $M 1\!\! 1 =1\!\! 1$, where $1\!\! 1 := (1,\dots, 1)^T$. This condition is equivalent to the fact that its adjoint $M^*$ maps the set of probability vectors, i.e. non-negative vectors $v\in\mathbb{R}^N$ with $\sum_{j=1}^N v_j=1$, to itself. See \cite{KemenySnell, Norris} for introductory reading on Markov chains.
It is well-known that there is always a probability vector $w$ such that $M^*w=w$, or equivalently $w^T M = w^T$. The famous Theorem of Frobenius-Perron states that this eigenvector is positive if $M$ is irreducible.
\begin{thm}[Perron (1907) - \cite{Perron}, Frobenius (1912) - \cite{Frobenius}]
Let $A\in\mathbb{R}^{N\times N}\geq 0$ be an irreducible matrix with spectral radius $\rho(A)$. Then $\rho(A)$ is a simple eigenvalue of the matrix $A$, the corresponding eigenspace is one-dimensional and there is a positive eigenvector.
\end{thm}
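As a numerical sanity check (a sketch not contained in this note; the matrix entries are illustrative), the invariant measure of a small irreducible stochastic matrix can be computed as a left eigenvector for the eigenvalue 1:

```python
import numpy as np

# An illustrative irreducible stochastic matrix: rows sum to 1.
M = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# w^T M = w^T is equivalent to M^T w = w, so take the eigenvector of M^T
# belonging to the eigenvalue closest to 1.
vals, vecs = np.linalg.eig(M.T)
w = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
w = w / w.sum()                      # normalize to a probability vector

assert np.allclose(w @ M, w)         # invariance
assert np.all(w > 0)                 # positivity, as Frobenius-Perron guarantees
```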
The normalized vector $w$ satisfying $w^T M = w^T$ is called the invariant measure of the stochastic matrix. The invariant measure is of great importance for stochastic processes. For a given Markov operator $M$ and initial state $p_0$, the sequence $p_n=M^{*n}p_0$ is called a Markov chain, and if the limit $p_\infty :=\lim_{n\rightarrow \infty} p_n$ exists, it is an invariant measure of $M^*$. Moreover, invariant measures are also stationary measures, i.e. for $p_0 = w$ the chain is constant.
A Markov process (sometimes also called a continuous-time Markov chain) is given by a family $T(t) = \mathrm{e}^{tA}$ of Markov operators. The Theorem of Kakutani-Markov provides the existence of an invariant measure $w$ such that $w^TT(t)=w^T$ for all $t\geq 0$. This is equivalent to $A^*w=0$ or $w^TA=0$, where $A= T'(0)$ is the generator of the semigroup, a Markov generator. Hence, $w$ is an element of the null space of the adjoint generator $A^*$. If $M$ is a stochastic matrix (or Markov operator) then $A=M-I$ is a Markov generator. Conversely, if $A$ is a bounded Markov generator then there is a positive number $\alpha>0$ such that $M=\alpha A + I$ is a stochastic matrix. That means both problems, finding the null space of a Markov generator and finding the invariant measure of a Markov operator, can be solved in the same way. But note that the set of stochastic matrices is larger than the set of operators represented by $\mathrm{e}^{tA}$ for some $t\geq 0$ and a Markov generator $A$. For example, there is no $t\geq0$ and Markov generator $A$ with $\mathrm{e}^{tA}=\begin{pmatrix} 0&1\\1&0 \end{pmatrix}$.
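The correspondence between Markov generators and stochastic matrices can be checked numerically; the following sketch (with illustrative matrix entries, not from the text) verifies that $A = M - I$ has zero row sums and non-negative off-diagonal entries, and that the null space of $A^*$ yields the invariant measure of $M$:

```python
import numpy as np

M = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.3, 0.0, 0.7]])   # illustrative irreducible stochastic matrix
A = M - np.eye(3)                 # Markov generator A = M - I

assert np.allclose(A.sum(axis=1), 0)          # rows sum to 0
assert np.all(A - np.diag(np.diag(A)) >= 0)   # off-diagonal entries >= 0

# A vector in the null space of A^* is an invariant measure of M:
vals, vecs = np.linalg.eig(A.T)
w = np.real(vecs[:, np.argmin(np.abs(vals))])
w = w / w.sum()
assert np.allclose(w @ A, 0) and np.allclose(w @ M, w)
```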
The Theorem of Frobenius-Perron is a pure existence result. An explicit formula for $w\in\mathbb{R}^N$
is provided by the so-called \textit{Markov tree theorem} (see Section \ref{SectionMarkovTreeTheorem} for the exact statement). The Markov Tree Theorem has the Theorem of Frobenius and Perron as an immediate consequence.
There are many different proofs of the Markov tree theorem: algebraic proofs (like in \cite{KruckmanGreenwaldWicks}) which compute determinants and minors and are similar to Kirchhoff's proof of the Matrix Tree Theorem \cite{Kirchhoff} (see e.g. \cite{Bollobas} for a smooth version of Kirchhoff's Matrix Tree Theorem), and stochastic proofs \cite{AnantharamTsoucas} which define a Markov process on the set of trees and investigate its time reversal. The aim of this note is to give an easy proof which is purely combinatorial. Moreover, a similar reasoning can be used for determining the invariant measure of a symmetric stochastic process, i.e. one whose stochastic matrix satisfies detailed balance (see Section \ref{SectionDetailedBalance}).
\section{A stochastic matrix and the corresponding reaction graph}\label{SectionStochMatrixAndReactionGraph}
The entries $m_{ij}$ of a stochastic matrix correspond to transition probabilities from the state $i$ to the state $j$. It is convenient to illustrate the action of a stochastic matrix with a reaction network or graph, so let us recall some graph theory (see e.g. \cite{Bollobas} for further references). A graph $\gamma=(V,E)$ consists of vertices $v\in V(\gamma)$ and edges $e\in E(\gamma)$. We have finitely many vertices, labelled by $i\in Z=\{1, \dots, N\}$. The edge going from $i$ to $j$ is denoted by $e_{ij}$, and the transition probability $m_{ij}$ corresponds to the edge $e_{ij}$. If $m_{ij}=0$, there is no such edge in the graph. Clearly, we deal with directed graphs, i.e. graphs in which the edges $e_{ij}$ and $e_{ji}$ are distinguished. In Section \ref{SectionDetailedBalance} we also deal with undirected graphs, where the edges $e_{ij}$ and $e_{ji}$ are not distinguished.
A (directed) path in a graph $\gamma$ is a subset of vertices $i_1, \dots, i_m$ such that $e_{i_1i_2}, \dots, e_{i_{m-1}i_m}\in E(\gamma)$.
Two states $i$ and $j$ communicate if there is a directed path from $i$ to $j$ and a directed path from $j$ to $i$. Clearly, this defines an equivalence relation on the state space $Z$, and hence the state space $Z$ decomposes into disjoint equivalence classes $C_1, \dots, C_m$ of communicating states.
It can happen that some of the classes are totally disconnected from the other classes, i.e. there is no path in either direction. It can also happen that some classes are connected only in one direction, i.e. there is a connection from one class to another but not back; this is sometimes called \textit{weakly connected}. Let $Z_1$ be the union of all classes $C_k$ that are totally disconnected, $Z_2$ the union of all classes $C_k$ such that there are only paths ending in $C_k$ and none starting out of $C_k$, and $Z_R$ the union of all remaining classes. So $Z=Z_1\cup Z_2 \cup Z_R$ with (maybe after renumbering) $Z_1 = C_1\cup\dots \cup C_k$, $Z_2 = C_{k+1}\cup\dots \cup C_{k+l}$ and $Z_R = C_{k+l+1}\cup\dots \cup C_m$.
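The decomposition into communicating classes can be computed from the reachability relation of the graph; a minimal sketch (the function name is ours, states are indexed from 0):

```python
from itertools import product

def communicating_classes(m):
    """Decompose states {0,...,n-1} into classes of mutually reachable states."""
    n = len(m)
    # reach[i][j]: there is a directed path (possibly empty) from i to j
    reach = [[m[i][j] > 0 or i == j for j in range(n)] for i in range(n)]
    for k, i, j in product(range(n), repeat=3):   # Floyd-Warshall closure
        if reach[i][k] and reach[k][j]:
            reach[i][j] = True
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = {j for j in range(n) if reach[i][j] and reach[j][i]}
            classes.append(sorted(cls))
            seen |= cls
    return classes

# Two classes: states 0 and 1 communicate, state 2 is absorbing.
M = [[0.5, 0.3, 0.2],
     [0.4, 0.4, 0.2],
     [0.0, 0.0, 1.0]]
print(communicating_classes(M))  # → [[0, 1], [2]]
```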
With this definition we get the following general form of (the adjoint of) a stochastic matrix
\begin{align*}
M^* =
\small{\left(
\begin{array}{ccc|ccc|cccc}
\boxed{M_1^*} & 0 & 0 & 0 & \dots & 0 & 0 & 0 & 0 & 0\\
0 & \ddots & 0 & 0 & \dots & 0 & 0 & 0 & 0 & 0\\
0 & 0 & \boxed{M_k^*} & 0 & \dots & 0 & 0 & 0 & 0 & 0\\
\hline
0 & 0 & 0 & \boxed{M_{k+1}^*} & \dots & 0 & \boxed{X} & \boxed{X} & \boxed{X} & \boxed{X}\\
0 & 0 & 0 & 0 & \ddots & 0 & \boxed{X} & \boxed{X} & \boxed{X} & \boxed{X}\\
0 & 0 & 0 & 0 & \dots & \boxed{M_{k+l}^*}& \boxed{X} & \boxed{X} & \boxed{X} & \boxed{X}\\
\hline
0 & 0 & 0 & 0 & 0 & 0 & \boxed{M_{k+l+1}^*} & \boxed{X} & \dots & \boxed{X} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \boxed{M_{k+l+2}^*} & \dots & \boxed{X} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ddots & \boxed{X} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \dots & \boxed{M_{m}^*}
\end{array}
\right)}
\end{align*}
Here the boxed entries stand for matrices, and $\boxed X$ stands for (possibly different) non-zero matrices which describe the transitions between communicating classes.
The matrices $M_j$ for $j=1,\dots, k+l$ are stochastic matrices acting on the equivalence class $C_j$. Each of them has an invariant measure by the Theorem of Frobenius-Perron. By definition, all states in $C_j$ communicate and the stochastic matrix $M_j$ is irreducible. Hence, it has a unique invariant measure $\mu_j$, which is positive, i.e. $M^*_j \mu_j = \mu_j$. By $\widetilde\mu_j$ we denote the continuation of $\mu_j$ to $\mathbb{R}^N$ by zeros. Obviously, it is also an invariant measure of $M$.
\begin{prop}
Any invariant measure of $M$ is given by a convex combination of $\widetilde \mu_j$
\begin{align*}
\widetilde \mu = \sum_{j=1}^{l+k}\lambda_j \widetilde \mu_j, ~~~\lambda_j\geq 0,~~\sum_{j=1}^{l+k}\lambda_j =1.
\end{align*}
In particular, the entries of $\widetilde \mu$ with index larger than $k+l$ are zero.
\end{prop}
\begin{proof}
As above, let $M$ be given in $m\times m$ blocks. Since $M^*_j\mu_j = \mu_j$, it also holds $M^*\widetilde \mu = \widetilde \mu$. Hence $\widetilde \mu$ defined by the above representation is indeed an invariant measure of $M$. Now, let $\eta = (\eta_1, \dots, \eta_m)^T$ be an arbitrary invariant measure of $M$.
By the above considerations, the first $k+l$ components of $\eta$ are uniquely determined by the irreducible components $M_j^*$ via the Theorem of Frobenius-Perron. Hence, it suffices to show that the entries with index larger than $k+l$ are zero.
Let us look at $\eta_m$ and assume that $\eta_m\neq 0$. We write $M^*=\begin{pmatrix} \widetilde M^*_1 & X \\ 0 & M^*_m \end{pmatrix}$ and partition $\eta=(\tilde\eta_1, \eta_m)^T$ accordingly, where $\tilde\eta_1$ is an invariant measure of $\widetilde M_1^*$. Hence, we have
\begin{align*}
X \eta_m = 0, ~~ M^*_m \eta_m = \eta_m,
\end{align*}
where $M^*_m$ is irreducible and non-negative and $X$ is non-zero and non-negative. Since the sums of the columns of $M_m^*$ are less than or equal to 1, the corresponding matrix norm is less than or equal to 1. Hence, also the spectral radius $\rho(M_m^*)$ is less than or equal to 1. Since $\eta_m$ is an eigenvector for the eigenvalue 1 (it is $\neq 0$), we have $\rho(M_m^*)=1$. By the Theorem of Frobenius-Perron, we conclude that $\eta_m>0$. But this contradicts $X\eta_m =0$, since $X\neq 0$. That shows $\eta_m =0$.
As above, we can show iteratively that also $\eta_j=0$ for $j=k+l+1, \dots, m$. This proves the claim.
\end{proof}
Summarizing, we showed that the invariant measure of a stochastic matrix is completely determined by the invariant measures of its irreducible components. Moreover, in each irreducible component the invariant measure is unique. The next aim is to obtain an explicit formula for that unique invariant measure.
\section{Rooted trees}\label{SectionRootedTrees}
In the whole section, we fix an irreducible component $C_j$, $j=1,\dots, k+l$, with stochastic matrix $M_j$. We denote the component simply by $C$, the stochastic matrix by $M=(m_{ij})$, and the graph induced by the stochastic matrix by $\gamma_0$. Let the number of states in $C$ be $n\in \mathbb{N}$.
By definition, a \textit{directed loop} is a closed directed path, i.e. a subset of edges $\{e_{i_1i_2}, \dots, e_{i_{m}i_1}\}\subset E(\gamma)$. The graph is called \textit{acyclic} if it does not contain any directed loop. In the following we consider a special subset of acyclic graphs: we define a \textit{tree} as a connected acyclic subgraph. A special and important type of tree are the \textit{directed rooted trees}.
\begin{defi}
Fix a state $j\in C$. We define $\Gamma_j$ as the set of all directed trees rooted at $j\in C$, i.e. all graphs $\gamma$ with the following two properties:
\begin{itemize}
\item[a)] $\gamma$ is a directed acyclic graph with $n$ vertices.
\item[b)] Each vertex $\bar j \in C\setminus \{j\}$ has exactly one outgoing edge and $j$ has no outgoing edge.
\end{itemize}
\end{defi}
The edges in a rooted directed tree are oriented towards the root.
Note, each $\gamma\in \Gamma_j$ is a subgraph of the complete directed graph on $n$ vertices, but not necessarily a subgraph of $\gamma_0$ (which is defined by the stochastic matrix $M$). A graph $\gamma\in \Gamma_j$ has $n-1$ edges in total, none of which starts at $j$. On the other hand, the graph necessarily contains an edge that ends at $j$: otherwise each of the other $n-1$ vertices would have its outgoing edge among these vertices, forcing a loop, which is impossible since $\gamma$ is acyclic.
\begin{exam}
Let us fix $j=1$ and look at the graphs in $\Gamma_1$. There are many graphs $\gamma\in\Gamma_1$, but some of them are similar in the sense that they differ only by a permutation of states $j\leftrightarrow k$ for some $j\neq 1$ and $k\neq 1$. We call graphs that are not similar \textit{topologically different} and do not specify the vertices in the graph. The number of such different configurations is stated in the box.\\
$n=3$:\\
\begin{center}
\begin{minipage}[t]{12cm}
\unitlength=1.6cm
\begin{picture}(2.3, 1.1)
\linethickness{0.2mm}
\put(0.1,0.0){\circle*{0.1}}
\put(1.1,0.0){\circle*{0.1}}
\put(1.1,1.0){\circle*{0.1}}
\put(0.1,0.0){\vector(1,1){0.9}}
\put(1.1,0.0){\vector(0,1){0.9}}
\put(1.1, 1.3){\makebox(0.0,0.0){1}}
\put(0.4,0.8){\makebox(0.0,0.0){$\boxed{1}$}}
\end{picture}\hspace{3cm}
\begin{picture}(2.3, 1.5)
\linethickness{0.2mm}
\put(0.1,0.0){\circle*{0.1}}
\put(1.1,0.0){\circle*{0.1}}
\put(1.1,1.0){\circle*{0.1}}
\put(0.1,0.0){\vector(1,1){0.9}}
\put(1.1,0.0){\vector(-1,0){0.9}}
\put(1.1, 1.3){\makebox(0.0,0.0){1}}
\put(0.3,0.8){\makebox(0.0,0.0){$\boxed{2}$}}
\end{picture}
\vspace{1cm}
\end{minipage}
\end{center}
$n=4$:\\
\begin{minipage}[t]{12cm}
\unitlength=1.4cm
\begin{picture}(2.3, 1.1)
\linethickness{0.2mm}
\put(0.1,0.0){\circle*{0.1}}
\put(1.1,0.0){\circle*{0.1}}
\put(2.1,0.0){\circle*{0.1}}
\put(1.1,1.0){\circle*{0.1}}
\put(0.1,0.0){\vector(1,1){0.9}}
\put(1.1,0.0){\vector(0,1){0.9}}
\put(2.1,0.0){\vector(-1,1){0.9}}
\put(1.1, 1.3){\makebox(0.0,0.0){1}}
\put(0.4,0.8){\makebox(0.0,0.0){$\boxed{1}$}}
\end{picture}~
\begin{picture}(2.3, 1.1)
\linethickness{0.2mm}
\put(0.1,0.0){\circle*{0.1}}
\put(1.1,0.0){\circle*{0.1}}
\put(2.1,0.0){\circle*{0.1}}
\put(1.1,1.0){\circle*{0.1}}
\put(0.1,0.0){\vector(1,1){0.9}}
\put(1.1,0.0){\vector(0,1){0.9}}
\put(2.1,0.0){\vector(-1,0){0.9}}
\put(1.1, 1.3){\makebox(0.0,0.0){1}}
\put(0.4,0.8){\makebox(0.0,0.0){$\boxed{6}$}}
\end{picture}~
\begin{picture}(2.3, 1.5)
\linethickness{0.2mm}
\put(0.1,0.0){\circle*{0.1}}
\put(1.1,0.0){\circle*{0.1}}
\put(2.1,0.0){\circle*{0.1}}
\put(1.1,1.0){\circle*{0.1}}
\put(0.1,0.0){\vector(1,1){0.9}}
\put(1.1,0.0){\vector(-1,0){0.9}}
\put(2.1,0.0){\vector(-1,0){0.9}}
\put(1.1, 1.3){\makebox(0.0,0.0){1}}
\put(0.3,0.8){\makebox(0.0,0.0){$\boxed{6}$}}
\end{picture}~
\begin{picture}(2.3, 1.1)
\linethickness{0.2mm}
\put(0.1,0.0){\circle*{0.1}}
\put(1.1,0.0){\circle*{0.1}}
\put(2.1,0.0){\circle*{0.1}}
\put(1.1,1.0){\circle*{0.1}}
\put(0.1,0.0){\vector(1,0){0.9}}
\put(1.1,0.0){\vector(0,1){0.9}}
\put(2.1,0.0){\vector(-1,0){0.9}}
\put(1.1, 1.3){\makebox(0.0,0.0){1}}
\put(0.4,0.8){\makebox(0.0,0.0){$\boxed{3}$}}
\end{picture}
\vspace{1cm}
\end{minipage}
\\
$n=5$:\\
\begin{minipage}[t]{12cm}
\unitlength=1.4cm
\begin{picture}(3.3, 1.4)
\linethickness{0.2mm}
\put(0.1,0.0){\circle*{0.1}}
\put(1.1,0.0){\circle*{0.1}}
\put(2.1,0.0){\circle*{0.1}}
\put(3.1,0.0){\circle*{0.1}}
\put(1.6,1.0){\circle*{0.1}}
\put(0.1,0.0){\vector(3,2){1.4}}
\put(1.1,0.0){\vector(1,2){0.4}}
\put(2.1,0.0){\vector(-1,2){0.4}}
\put(3.1,0.0){\vector(-3,2){1.4}}
\put(1.6, 1.2){\makebox(0.0,0.0){1}}
\put(0.4,0.8){\makebox(0.0,0.0){$\boxed{1}$}}
\end{picture}\hspace{5cm}
\begin{picture}(3.3, 1.4)
\linethickness{0.2mm}
\put(0.1,0.0){\circle*{0.1}}
\put(1.1,0.0){\circle*{0.1}}
\put(2.1,0.0){\circle*{0.1}}
\put(3.1,0.0){\circle*{0.1}}
\put(1.6,1.0){\circle*{0.1}}
\put(0.1,0.0){\vector(3,2){1.4}}
\put(1.1,0.0){\vector(1,2){0.4}}
\put(2.1,0.0){\vector(-1,2){0.4}}
\put(3.1,0.0){\vector(-1,0){0.9}}
\put(1.6, 1.2){\makebox(0.0,0.0){1}}
\put(0.4,0.8){\makebox(0.0,0.0){$\boxed{12}$}}
\end{picture}
\begin{picture}(3.3, 1.4)
\linethickness{0.2mm}
\put(0.1,0.0){\circle*{0.1}}
\put(1.1,0.0){\circle*{0.1}}
\put(2.1,0.0){\circle*{0.1}}
\put(3.1,0.0){\circle*{0.1}}
\put(1.6,1.0){\circle*{0.1}}
\put(0.1,0.0){\vector(3,2){1.4}}
\put(1.1,0.0){\vector(1,2){0.4}}
\put(2.1,0.0){\vector(-1,0){0.9}}
\put(3.1,0.0){\vector(-1,0){0.9}}
\put(1.6, 1.2){\makebox(0.0,0.0){1}}
\put(0.4,0.8){\makebox(0.0,0.0){$\boxed{24}$}}
\end{picture}
\hspace{5.3cm}
\begin{picture}(3.3, 1.4)
\linethickness{0.2mm}
\put(0.1,0.0){\circle*{0.1}}
\put(1.1,0.0){\circle*{0.1}}
\put(2.1,0.0){\circle*{0.1}}
\put(3.1,0.0){\circle*{0.1}}
\put(1.6,1.0){\circle*{0.1}}
\put(0.1,0.0){\vector(3,2){1.4}}
\put(1.1,0.0){\vector(-1,0){0.9}}
\put(2.1,0.0){\vector(-1,0){0.9}}
\put(3.1,0.0){\vector(-1,0){0.9}}
\put(1.6, 1.2){\makebox(0.0,0.0){1}}
\put(0.4,0.8){\makebox(0.0,0.0){$\boxed{24}$}}
\end{picture}
\end{minipage}
\vspace{0.3cm}\\
\begin{minipage}[t]{12cm}
\unitlength=1.4cm
\begin{picture}(2.3, 2.3)
\linethickness{0.2mm}
\put(0.1,1.0){\circle*{0.1}}
\put(1.1,1.0){\circle*{0.1}}
\put(0.6,2.0){\circle*{0.1}}
\put(0.6,0.0){\circle*{0.1}}
\put(1.6,0.0){\circle*{0.1}}
\put(0.1,1.0){\vector(1,2){0.45}}
\put(1.1,1.0){\vector(-1,2){0.45}}
\put(0.6,0.0){\vector(1,2){0.45}}
\put(1.6,0.0){\vector(-1,2){0.45}}
\put(0.6, 2.2){\makebox(0.0,0.0){1}}
\put(0.2,1.8){\makebox(0.0,0.0){$\boxed{12}$}}
\end{picture}~
\begin{picture}(2.3, 2.3)
\linethickness{0.2mm}
\put(0.1,1.0){\circle*{0.1}}
\put(1.1,1.0){\circle*{0.1}}
\put(0.6,2.0){\circle*{0.1}}
\put(0.6,0.0){\circle*{0.1}}
\put(1.6,0.0){\circle*{0.1}}
\put(0.1,1.0){\vector(1,2){0.45}}
\put(1.1,1.0){\vector(-1,2){0.45}}
\put(0.6,0.0){\vector(-1,2){0.45}}
\put(1.6,0.0){\vector(-1,2){0.45}}
\put(0.6, 2.2){\makebox(0.0,0.0){1}}
\put(0.2,1.8){\makebox(0.0,0.0){$\boxed{12}$}}
\end{picture}~
\begin{picture}(2.3, 2.3)
\linethickness{0.2mm}
\put(0.1,1.0){\circle*{0.1}}
\put(1.1,1.0){\circle*{0.1}}
\put(0.6,2.0){\circle*{0.1}}
\put(0.6,0.0){\circle*{0.1}}
\put(1.6,0.0){\circle*{0.1}}
\put(0.1,1.0){\vector(1,0){0.9}}
\put(1.1,1.0){\vector(-1,2){0.45}}
\put(0.6,0.0){\vector(1,2){0.45}}
\put(1.6,0.0){\vector(-1,2){0.45}}
\put(0.6, 2.2){\makebox(0.0,0.0){1}}
\put(0.2,1.8){\makebox(0.0,0.0){$\boxed{4}$}}
\end{picture}~
\begin{picture}(2.3, 2.3)
\linethickness{0.2mm}
\put(0.1,1.0){\circle*{0.1}}
\put(1.1,1.0){\circle*{0.1}}
\put(0.6,2.0){\circle*{0.1}}
\put(0.6,0.0){\circle*{0.1}}
\put(1.6,0.0){\circle*{0.1}}
\put(0.1,1.0){\vector(1,-2){0.45}}
\put(1.1,1.0){\vector(-1,2){0.45}}
\put(0.6,0.0){\vector(1,2){0.45}}
\put(1.6,0.0){\vector(-1,0){0.9}}
\put(0.6, 2.2){\makebox(0.0,0.0){1}}
\put(0.2,1.8){\makebox(0.0,0.0){$\boxed{12}$}}
\end{picture}~
\begin{picture}(2.3, 2.3)
\linethickness{0.2mm}
\put(0.1,1.0){\circle*{0.1}}
\put(1.1,1.0){\circle*{0.1}}
\put(0.6,2.0){\circle*{0.1}}
\put(0.6,0.0){\circle*{0.1}}
\put(1.6,0.0){\circle*{0.1}}
\put(0.1,1.0){\vector(1,0){0.9}}
\put(1.1,1.0){\vector(-1,2){0.45}}
\put(0.6,0.0){\vector(1,2){0.45}}
\put(1.6,0.0){\vector(-1,0){0.9}}
\put(0.6, 2.2){\makebox(0.0,0.0){1}}
\put(0.2,1.8){\makebox(0.0,0.0){$\boxed{24}$}}
\end{picture}
\vspace{1cm}
\end{minipage}
\end{exam}
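The boxed counts in the example can be verified by brute-force enumeration. The sketch below (our own helper, not from the text) represents a candidate graph by the map $f$ sending each non-root vertex to the head of its unique outgoing edge, and keeps it exactly when every vertex reaches the root; the totals match $1+2=3$ for $n=3$, $1+6+6+3=16$ for $n=4$, and $125$ for $n=5$, i.e. $n^{n-2}$:

```python
from itertools import product

def _reaches_root(f, v, root, n):
    # Follow outgoing edges for at most n steps; acyclicity <=> root is reached.
    for _ in range(n):
        if v == root:
            return True
        v = f[v]
    return v == root

def rooted_trees(n, root):
    """All directed trees on vertices 0..n-1 rooted at `root`: each non-root
    vertex has exactly one outgoing edge and the graph is acyclic."""
    others = [v for v in range(n) if v != root]
    trees = []
    for targets in product(range(n), repeat=n - 1):
        f = dict(zip(others, targets))          # f[v] = head of the edge out of v
        if any(v == f[v] for v in others):      # no self-loops
            continue
        if all(_reaches_root(f, v, root, n) for v in others):
            trees.append(frozenset(f.items()))
    return trees

for n in (3, 4, 5):
    assert len(rooted_trees(n, 0)) == n ** (n - 2)
```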
The following lemma is easy but important.
\begin{lem}\label{LemmaDirectedPaths}
Let $\gamma \in \Gamma_j$ for some $j\in C$ be a fixed directed rooted tree. For every vertex $\bar j\neq j$ there is a directed path in $\gamma$ which starts from $\bar j$ and ends at $j$.
\end{lem}
\begin{proof}
Let us assume w.l.o.g. that $\bar j =1$ and $j\neq1$. We prove the claim by contradiction: assume that there is no directed path from $\bar j=1$ to $j$ in $\gamma\in\Gamma_j$. We construct a path iteratively, using the fact that by definition every vertex in $\gamma$ except $j$ has exactly one outgoing edge. There is an edge starting from 1 which does not go to $j$ (by assumption) but to another vertex, say 2. Then the edge starting from 2 does not go to $j$ (again by assumption, since otherwise there would be a path from $\bar j$ to $j$) and not to $1$ since the graph is acyclic; hence it goes to another vertex, say 3. The edge starting from 3 goes neither to $j$ nor to 1 or 2, hence it goes to, say, 4. Skipping the vertex $j$, we continue until the last vertex $n$. But then there is no suitable edge starting from $n$, since it can go to none of the vertices $1,\dots, n-1$ nor to $j$. This is a contradiction.
\end{proof}
In fact, the above proof also shows that the path in the directed rooted tree is unique.
\section{The Markov tree theorem}\label{SectionMarkovTreeTheorem}
Let a stochastic matrix $M=(m_{ij})_{i,j=1,\dots,n}$ be given, where $\# C = n \in \mathbb{N}$.
We define $w\in\mathbb{R}^n$ by
\begin{align}\label{StationMeasure}
w_j = \sum_{\gamma\in\Gamma_j} \prod_{e_{ik}\in E(\gamma)} m_{ik}.
\end{align}
Note that $w$ is not normalized. The normalizing factor is $Z=\sum_{j=1}^n\sum_{\gamma\in\Gamma_j} \prod_{e_{ik}\in E(\gamma)} m_{ik}$ which contains all directed rooted trees.
\begin{exam}\label{Example3States}
For $n=3$, we get $
w=\begin{pmatrix}
m_{21}m_{31} + m_{23}m_{31} + m_{21}m_{32}\\
m_{12}m_{32} + m_{12}m_{31} + m_{13}m_{32}\\
m_{13}m_{23} + m_{13}m_{21} + m_{12}m_{23}
\end{pmatrix}.$
\end{exam}
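Formula (\ref{StationMeasure}) can also be tested numerically before proving it. The sketch below (illustrative, with a random stochastic matrix; function names are ours) enumerates the directed rooted trees, evaluates $w$, and checks $w^T M = w^T$:

```python
from itertools import product
import random

def _hits(f, v, root, n):
    for _ in range(n):
        if v == root:
            return True
        v = f[v]
    return v == root

def tree_weight_vector(m):
    """w_j = sum over directed trees rooted at j of the product of edge weights."""
    n = len(m)
    w = [0.0] * n
    for root in range(n):
        others = [v for v in range(n) if v != root]
        for targets in product(range(n), repeat=n - 1):
            f = dict(zip(others, targets))      # f[v] = head of the edge out of v
            if any(v == f[v] for v in others):
                continue
            if not all(_hits(f, v, root, n) for v in others):
                continue
            p = 1.0
            for v in others:
                p *= m[v][f[v]]                 # weight m_{ik} of edge e_{ik}
            w[root] += p
    return w

# Random 3x3 stochastic matrix: rows are normalized to sum to 1.
random.seed(0)
rows = [[random.random() for _ in range(3)] for _ in range(3)]
M = [[x / sum(r) for x in r] for r in rows]
w = tree_weight_vector(M)
for k in range(3):
    assert abs(w[k] - sum(M[j][k] * w[j] for j in range(3))) < 1e-12
```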
We want to show that $w^T M = w^T$, or equivalently that
\begin{align*}
\forall k\in C: w_k = \sum_{j\in C} m_{jk} w_j
\end{align*}
Since for any $k\in C$ it holds that $\sum_{j\in C} m_{kj} =1$, the above condition is equivalent to
\begin{align*}
\forall k\in C: \sum_{j\in C} m_{kj} w_k = \sum_{j\in C}m_{jk}w_j.
\end{align*}
\begin{thm}[Markov tree theorem]\label{MainTheorem}
It holds $w^T M =w^T$ for $w=(w_j)_{j=1,\dots,n}$ defined by $w_j = \sum_{\gamma\in\Gamma_j} \prod_{e_{ik}\in E(\gamma)} m_{ik}$.
\end{thm}
In the proof, we compute both sides and compare. To do so, we focus on $k=1$, but the other cases can be treated exactly the same way. We want to show
\begin{align}\label{ToShow}
\sum_{j\geq 2} m_{1j} w_1 = \sum_{j\geq 2}m_{j1}w_j.
\end{align}
\begin{exam}
Let us compute the left- and right-hand side for $n=3$. Using Example \ref{Example3States}, we have
\begin{align*}
\text{LHS} &= m_{12}w_1 + m_{13}w_1 \\
&= m_{12}m_{21}m_{31} + m_{12}m_{23}m_{31} + m_{12}m_{21}m_{32} + m_{13}m_{21}m_{31} + m_{13}m_{23}m_{31} + m_{13}m_{21}m_{32},\\
\text{RHS} &= m_{21} w_2 + m_{31} w_3 \\
&= m_{21}m_{12}m_{32} + m_{21}m_{12}m_{31} + m_{21}m_{13}m_{32} + m_{31}m_{13}m_{23} + m_{31}m_{13}m_{21} + m_{31}m_{12}m_{23}.
\end{align*}
Hence, (\ref{ToShow}) holds.
\end{exam}
Observe that in formula (\ref{StationMeasure}) only edges that do not start at $j$ are taken into account. In the identity (\ref{ToShow}), the matrix entries that correspond to edges starting from $j$ are multiplied by $w_j$. That means we have to treat graphs which emerge from directed rooted trees by adding one additional edge. If $e_{jk}$ is not an edge of a graph $\gamma$, let $\gamma\cup e_{jk}$ denote the graph that results from adding the edge $e_{jk}$ to $\gamma$.
\begin{lem}\label{LemmaExactlyOneLoop}
Let $\gamma\in\Gamma_j$ and $k\in C$ be arbitrary. Then the graph $\gamma \cup e_{jk}$ contains exactly one loop. Moreover, this loop goes through $j\in C$.
\end{lem}
\begin{proof}
We know by Lemma \ref{LemmaDirectedPaths} that from every vertex in $\gamma$ there is a directed path to $j$. Hence, adding the edge $e_{jk}$ necessarily creates a loop. More than one loop is not possible, since there are no loops in $\gamma$ and only one edge starts from each vertex. So there is exactly one loop in $\gamma\cup e_{jk}$, and it obviously goes through $j$.
\end{proof}
We define two sets:
\begin{align*}
S_1&:=\{\gamma\cup e_{1j}: \gamma\in \Gamma_1,~ j\in\{2,\dots, n\} \}\\
S_2&:=\{\gamma_k\cup e_{k1}: \gamma_k\in \Gamma_k,\mathrm{~~for~~} k\in\{2,\dots, n\}\}
\end{align*}
Firstly, observe that the elements of $S_1$ are pairwise different; the same holds for the elements of $S_2$. The key step is to show that the sets $S_1$ and $S_2$ are equal.
\begin{prop}\label{PropositionS1equalsS2}
It holds $S_1=S_2$.
\end{prop}
\begin{proof}
The proof is done in two steps.\\
1. Step: $S_1\subset S_2$. Let $\gamma\cup e_{1j}\in S_1$ for $\gamma\in \Gamma_1$ and $j\in\{2, \dots, n\}$ be arbitrary and fixed. That means that one edge starting from 1, with arbitrary end, is added to some graph $\gamma \in \Gamma_1$. By Lemma \ref{LemmaExactlyOneLoop}, there is exactly one loop in $\gamma\cup e_{1j}$. In particular, there is a unique edge in the loop which ends at 1; say it starts at $\bar j$.
Let us consider the graph $\gamma\cup e_{1j}$ without the edge $e_{\bar j1}$. We call it $\bar \gamma$ and want to show that $\bar\gamma \in \Gamma_{\bar j}$. Firstly, we observe that for each vertex $k$ apart from $\bar j$ there is exactly one edge in $\bar \gamma$ which starts at $k$. Hence, it remains to show that there is no loop in $\bar \gamma$. The graph $\gamma \cup e_{1j}$ has exactly one loop, and removing the edge $e_{\bar j 1}$ destroys it. That shows $\bar\gamma \in \Gamma_{\bar j}$, and hence $\gamma\cup e_{1j} = \bar \gamma \cup e_{\bar j 1}$ for some $\bar\gamma \in \Gamma_{\bar j}$. This proves the first claim.\\
2. Step: $S_2\subset S_1$. Let $\gamma_k\cup e_{k1}\in S_2$ for $\gamma_k\in \Gamma_k$ and $k\in\{2, \dots, n\}$ be arbitrary and fixed. That means that one edge starting from $k$ and ending at 1 is added to $\gamma_k\in\Gamma_k$. As above, by Lemma \ref{LemmaExactlyOneLoop} it follows that there is exactly one loop in $\gamma_k\cup e_{k1}$, which goes through $k$ and hence also through 1. Moreover, the loop defines a unique edge that starts at 1 and goes to another vertex, say $j$. Let us consider the graph $\gamma_k\cup e_{k1}$ without the edge $e_{1j}$. As above one can show that this graph is in $\Gamma_1$, i.e. there is $\gamma\in\Gamma_1$ such that $\gamma_k\cup e_{k1} = \gamma\cup e_{1j}$. This proves $S_2\subset S_1$.
\end{proof}
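Proposition \ref{PropositionS1equalsS2} can be confirmed by enumeration for small $n$. In the sketch below (our own encoding, not from the text) a graph is a frozenset of (tail, head) pairs, the root 1 is encoded as vertex 0, and $S_1=S_2$ is checked directly:

```python
from itertools import product

def trees(n, root):
    """Directed trees on vertices 0..n-1 rooted at `root` (cf. Section 3)."""
    others = [v for v in range(n) if v != root]
    result = []
    for targets in product(range(n), repeat=n - 1):
        f = dict(zip(others, targets))
        if any(v == f[v] for v in others):
            continue
        def reaches(v):
            for _ in range(n):
                if v == root:
                    return True
                v = f[v]
            return v == root
        if all(reaches(v) for v in others):
            result.append(frozenset(f.items()))
    return result

n = 4
# S1: trees rooted at 0 plus one edge out of 0; S2: trees rooted at k plus (k, 0).
S1 = {t | {(0, j)} for t in trees(n, 0) for j in range(1, n)}
S2 = {t | {(k, 0)} for k in range(1, n) for t in trees(n, k)}
assert S1 == S2
assert len(S1) == len(trees(n, 0)) * (n - 1)   # elements of S1 are pairwise distinct
```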
We are now able to prove Theorem \ref{MainTheorem}.
\begin{proof}
As mentioned above, we prove only $(w^TM)_1 = w_1$, i.e. the identity (\ref{ToShow}).
Using Proposition \ref{PropositionS1equalsS2}, we compute
\begin{align*}
\sum_{j\geq 2} m_{1j}w_1 &= \sum_{j\geq 2} m_{1j} \sum_{\gamma\in\Gamma_1}\prod_{e_{ik}\in E(\gamma)} m_{ik}=\sum_{j\geq 2} \sum_{\gamma\in\Gamma_1}m_{1j}\prod_{e_{ik}\in E(\gamma)} m_{ik} =\\
&=\sum_{j\geq 2} \sum_{\gamma\in\Gamma_1}\prod_{e_{ik}\in E(\gamma \cup e_{1j})} m_{ik}= \sum_{\gamma \in S_1} \prod_{e_{ik}\in E(\gamma)} m_{ik}=\\
&\stackrel{\mathclap{\tiny{S_1=S_2}}}{=}\;\sum_{\gamma \in S_2} \prod_{e_{ik}\in E(\gamma)} m_{ik} =\sum_{j\geq 2}\sum_{\gamma \in \Gamma_j} \prod_{e_{ik}\in E(\gamma\cup e_{j1})} m_{ik} =\\
&= \sum_{j\geq 2} \sum_{\gamma\in\Gamma_j}m_{j1}\prod_{e_{ik}\in E(\gamma)} m_{ik}= \sum_{j\geq 2} m_{j1}\sum_{\gamma\in\Gamma_j}\prod_{e_{ik}\in E(\gamma)} m_{ik} = \sum_{j\geq 2} m_{j1}w_j.
\end{align*}
\end{proof}
\begin{exam}
Let us consider $n=5$ states with the following reaction graph and the associated stochastic matrix.
\begin{minipage}[c]{4cm}
\unitlength=1.6cm
\hspace{1cm}
\begin{picture}(2.3, 2.3)
\linethickness{0.2mm}
\put(0.1,1.0){\circle*{0.1}}
\put(2.1,1.0){\circle*{0.1}}
\put(1.1,2.0){\circle*{0.1}}
\put(0.6,0.0){\circle*{0.1}}
\put(1.6,0.0){\circle*{0.1}}
\put(0.1,1.0){\vector(1,1){0.9}}
\put(1.1,2.0){\vector(1,-1){0.9}}
\put(2.1,1.0){\vector(-1,-2){0.45}}
\put(1.6,0.0){\vector(-1,0){0.9}}
\put(0.6,0.0){\vector(-1,2){0.45}}
\put(0.1,1.2){\makebox(0.0,0.0){1}}
\put(2.1,1.2){\makebox(0.0,0.0){3}}
\put(1.1,1.8){\makebox(0.0,0.0){2}}
\put(0.63,0.2){\makebox(0.0,0.0){5}}
\put(1.53,0.2){\makebox(0.0,0.0){4}}
\put(0.4,1.6){\makebox(0.0,0.0){$m_{12}$}}
\put(1.6,1.8){\makebox(0.0,0.0){$m_{23}$}}
\put(1.6,0.5){\makebox(0.0,0.0){$m_{34}$}}
\put(1.1,0.1){\makebox(0.0,0.0){$m_{45}$}}
\put(0.58,0.5){\makebox(0.0,0.0){$m_{51}$}}
\end{picture}
\end{minipage}
\hspace{1cm}
$\longleftrightarrow$
\hspace{1cm}
\begin{minipage}[c]{8cm}
$
\begin{pmatrix}
1- m_{12}& m_{12} & 0 & 0 & 0\\
0 & 1- m_{23}& m_{23} & 0 & 0 \\
0 & 0 & 1- m_{34}& m_{34} & 0 \\
0 & 0 & 0 & 1- m_{45}& m_{45}\\
m_{51} & 0 & 0 & 0 & 1-m_{51}
\end{pmatrix}.$
\end{minipage}
\vspace{0.3cm}\newline
Formula (\ref{StationMeasure}) yields
\begin{align*}
w&=(
m_{23}m_{34}m_{45}m_{51},
m_{12}m_{34}m_{45}m_{51},
m_{12}m_{23}m_{45}m_{51},
m_{12}m_{23}m_{34}m_{51},
m_{12}m_{23}m_{34}m_{45})^T \\ &\sim(
1/m_{12},
1/m_{23},
1/m_{34},
1/m_{45},
1/m_{51})
\end{align*}
as its invariant measure (up to normalization).
\end{exam}
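The cycle example can be double-checked numerically; the transition probabilities below are illustrative:

```python
# Directed 5-cycle 1 -> 2 -> 3 -> 4 -> 5 -> 1 with illustrative probabilities.
m12, m23, m34, m45, m51 = 0.2, 0.5, 0.8, 0.3, 0.6
w = [m23 * m34 * m45 * m51,
     m12 * m34 * m45 * m51,
     m12 * m23 * m45 * m51,
     m12 * m23 * m34 * m51,
     m12 * m23 * m34 * m45]

# Up to the common factor m12*m23*m34*m45*m51, w equals (1/m12, ..., 1/m51):
ref = [1 / m12, 1 / m23, 1 / m34, 1 / m45, 1 / m51]
ratios = [a / b for a, b in zip(w, ref)]
assert all(abs(r - ratios[0]) < 1e-12 for r in ratios)

# And w is indeed invariant for the cycle's stochastic matrix:
M = [[1 - m12, m12, 0, 0, 0],
     [0, 1 - m23, m23, 0, 0],
     [0, 0, 1 - m34, m34, 0],
     [0, 0, 0, 1 - m45, m45],
     [m51, 0, 0, 0, 1 - m51]]
for k in range(5):
    assert abs(sum(w[j] * M[j][k] for j in range(5)) - w[k]) < 1e-12
```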
\begin{rem}
We want to stress that formula (\ref{StationMeasure}) always defines a vector $w$ such that $w^T M = w^T$, regardless of whether $M$ is reducible or not. But it can happen that formula (\ref{StationMeasure}) defines a vector that is identically zero. This case is treated in Section \ref{SectionPositivity}.
\end{rem}
\begin{rem}
In the proof of Theorem \ref{MainTheorem} we did not use $m_{ij}\geq 0$, i.e. negative matrix entries are also possible. The only property of the matrix $M$ that we used is that the entries in each row sum to one. Consider for example the matrix
\begin{align*}
M=\begin{pmatrix}
1 & -1 & 1\\
1 & 1 & -1\\
-1 & 1 & 1
\end{pmatrix}.
\end{align*}
We have $M=I + A$, where $A$ is an incidence matrix, often used to model electric circuits or mechanical systems of springs and masses.
$M$ has eigenvalues $1$ and $1\pm i\sqrt 3$. Formula (\ref{StationMeasure}) yields $(1,1,1)$ as the invariant measure of $M$.
\end{rem}
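A short check of this remark, with the matrix taken from the text:

```python
# Row sums are 1 even though some entries are negative.
M = [[1, -1, 1],
     [1, 1, -1],
     [-1, 1, 1]]
assert all(sum(row) == 1 for row in M)

# Tree formula for n = 3 (cf. the explicit formula for w above);
# m(i, j) translates the 1-based indices of the text to 0-based lists.
m = lambda i, j: M[i - 1][j - 1]
w = [m(2, 1) * m(3, 1) + m(2, 3) * m(3, 1) + m(2, 1) * m(3, 2),
     m(1, 2) * m(3, 2) + m(1, 2) * m(3, 1) + m(1, 3) * m(3, 2),
     m(1, 3) * m(2, 3) + m(1, 3) * m(2, 1) + m(1, 2) * m(2, 3)]
assert w == [1, 1, 1]                    # as claimed in the remark

for k in range(3):                       # w^T M = w^T holds exactly
    assert sum(w[j] * M[j][k] for j in range(3)) == w[k]
```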
\section{Positivity and Uniqueness}\label{SectionPositivity}
The aim of this section is to show that whenever formula (\ref{StationMeasure}) provides a reasonable (i.e. non-zero) vector, the invariant measure is unique; and vice versa, if the invariant measure is unique, then formula (\ref{StationMeasure}) defines a non-zero vector. Let $\gamma_0$ be the graph defined by a given stochastic matrix $M=(m_{jk})$, i.e. $\gamma_0$ consists of all edges $e_{jk}$ with $m_{jk}>0$. For $k\in Z$ we define
\begin{align*}
Z_k = \{j\in Z: \mathrm{there ~is~a~directed~path~from~}j\mathrm{~to~}k\mathrm{~in~}\gamma_0\}.
\end{align*}
Obviously, $M$ is irreducible if and only if $\bigcap_{k\in Z} Z_k = Z$. The next proposition is helpful.
\begin{prop}\label{PropCharacterizazionPositivity}
Let $w$ be defined as (\ref{StationMeasure}). Then
$Z_k =Z$ if and only if $w_k >0$.
\end{prop}
\begin{proof}
We focus again on the case $k=1$.\\
1. Step: Let $Z_1=Z$. We want to show that $w_1> 0$.\\
The problem reduces to the following question. Let a graph $\widetilde \gamma$ with $Z_1=Z$ be given, i.e. from any vertex $j\neq 1$ there is a directed path to 1. Is it possible to obtain, by removing edges from $\widetilde \gamma$, a subgraph which is a directed tree rooted at 1, i.e. some $\gamma\in\Gamma_1$? It is not hard to see that this is indeed possible, and there are in fact many ways to construct a suitable $\gamma\in\Gamma_1$. In the following we present one possible construction.
For a given graph $\widetilde \gamma$ with $Z_1=Z$, let us define $V_0=\{1\}$ and let $V_1$ be the set of all vertices from which an edge to the vertex 1 starts. Collect all these edges (we call this set $E_1$) and remove every other edge that starts from a vertex in $V_1$. Obviously, the graph spanned by $V_0$, $V_1$ and $E_1$ is a subgraph of a spanning tree rooted at 1. Now, let $V_2$ be the set of all vertices from which an edge to some vertex in $V_1$ starts. For every vertex in $V_2$ collect exactly one edge that goes to some vertex in $V_1$; if there are several choices, take an arbitrary one. Remove every other edge that starts from some vertex in $V_2$ and call the resulting set of edges $E_2$. Again, it is clear that the graph spanned by $V_0$, $V_1$, $E_1$, $V_2$ and $E_2$ is a subgraph of a spanning tree rooted at 1. Proceeding in the same way, we obtain sets of vertices $V_k$ and edges $E_k$. Observe that $V_k$ contains vertices at distance $k$ from the vertex $1$, and hence the construction necessarily stops after at most $n-1$ steps. Define $\gamma$ as the union of $V_0$ and all $V_k$ and $E_k$. Since $Z_1=Z$, in the end every vertex is contained in $\gamma$, and by construction it is clear that $\gamma\in\Gamma_1$. Hence, $w_1>0$.
2. Step: Let $w_1>0$. Then there is at least one graph $\gamma\in\Gamma_1$ such that $m_{ki}>0$ for every edge $e_{ki}\in E(\gamma)$. Hence, there is a path starting from any $j\neq 1$ and ending at $1$, and we get $Z_1=Z$.
\end{proof}
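The layered construction in Step 1 is essentially a breadth-first search on the reversed graph; a minimal sketch (the function name is ours, states are indexed from 0 with root 0 playing the role of vertex 1):

```python
from collections import deque

def spanning_tree_to_root(edges, n, root=0):
    """Layered construction from the proof: V_1 = vertices with an edge to the
    root, V_2 = vertices with an edge into V_1, etc.; keep one edge per vertex."""
    rev = [[] for _ in range(n)]            # rev[v] = tails of edges ending at v
    for (u, v) in edges:
        rev[v].append(u)
    parent = {root: None}                   # parent[u] = head of the kept edge u -> .
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for u in rev[v]:
            if u not in parent:
                parent[u] = v               # keep the edge u -> v, drop all others
                queue.append(u)
    return {(u, v) for u, v in parent.items() if v is not None}

# Every vertex of this graph has a directed path to 0, so Z_1 = Z and a rooted
# tree exists (hence w_1 > 0 whenever the kept edges have positive weight).
edges = [(1, 0), (2, 1), (3, 1), (2, 3), (3, 2), (1, 2)]
tree = spanning_tree_to_root(edges, 4, root=0)
assert len(tree) == 3                       # n - 1 edges, one per non-root vertex
assert all(any(e[0] == v for e in tree) for v in (1, 2, 3))
```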
\begin{cor}
Let $n\geq 3$. The vector $w$ defined by (\ref{StationMeasure}) is zero if and only if the invariant measure is not unique.
\end{cor}
\begin{proof}
If the invariant measure is not unique, there are at least two equivalence classes $C_1, C_2$ such that there is no path from $C_1$ to $C_2$ and no path from $C_2$ to $C_1$ (see Section \ref{SectionStochMatrixAndReactionGraph}). By Proposition \ref{PropCharacterizazionPositivity}, we conclude $w_j = 0$ for every $j\in Z$. Hence $w=0$.
Let $w=0$. If the graph is totally disconnected, then following the ideas in Section \ref{SectionStochMatrixAndReactionGraph} the invariant measure is surely not unique. So let us assume that the graph is (weakly) connected, i.e. there is at least one path in one direction connecting the different equivalence classes. We want to show that there are at least two communicating classes with no paths starting from them, i.e., using the notation of Section \ref{SectionStochMatrixAndReactionGraph}, belonging to $Z_2$. Surely, there is one such communicating class, say $C_1$. Since $w|_{C_1} =0$, there is a state (say 2) without any path to $C_1$. Let us denote the communicating class of 2 by $C_2$ and consider the graph $\tilde \gamma$ defined by all communicating classes that can be reached from $C_2$. There is certainly a communicating class in $\tilde \gamma$ with no paths starting to other communicating classes, and this class is not $C_1$ since we assumed there is no path from $2$ to $C_1$. Hence, we have found two communicating classes without outgoing paths, i.e. the invariant measure is not unique.
\end{proof}
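The tree representation (\ref{StationMeasure}) can be checked numerically on a small chain. The sketch below uses the row-stochastic convention (`p[i][j]` is the transition probability from `i` to `j`; this may be the transpose of the text's convention, but the statement is the same up to relabelling) and enumerates the spanning trees oriented toward each root by brute force:

```python
from fractions import Fraction as F
from itertools import product

def tree_weight_sum(p, root):
    """Sum over all spanning trees oriented toward `root` of the product
    of the edge probabilities p[v][parent[v]]."""
    n = len(p)
    others = [v for v in range(n) if v != root]
    total = F(0)
    for parents in product(range(n), repeat=len(others)):
        par = dict(zip(others, parents))
        if any(par[v] == v for v in others):
            continue
        ok = True
        for v in others:             # every vertex must reach the root
            seen, u = set(), v
            while u != root:
                if u in seen:        # hit a cycle: not a tree
                    ok = False
                    break
                seen.add(u)
                u = par[u]
            if not ok:
                break
        if ok:
            w = F(1)
            for v in others:
                w *= p[v][par[v]]
            total += w
    return total

p = [[F(1, 2), F(3, 10), F(1, 5)],
     [F(1, 10), F(3, 5), F(3, 10)],
     [F(1, 5), F(3, 10), F(1, 2)]]
w = [tree_weight_sum(p, i) for i in range(3)]
s = sum(w)
pi = [wi / s for wi in w]
# stationarity pi P = pi, checked exactly in rational arithmetic:
balance = [sum(pi[i] * p[i][j] for i in range(3)) for j in range(3)]
```

Exact fractions make the stationarity check an identity rather than a floating-point approximation.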
\section{Cardinality of the sets of graphs}\label{SectionCardinality}
Now we compute the number of addends in (\ref{StationMeasure}). To do this, we introduce the following subsets of directed acyclic graphs. For $k\in\{1, \dots, n\}$, mark $k$ vertices among all $n$ vertices, say $\{j_1, \dots, j_k\}$. We define $\Gamma_{\{j_1, \dots, j_k\}}$ as the set of directed graphs $\gamma$ with the following properties:
\begin{itemize}
\item[a)] $\gamma$ is a directed acyclic graph with $n$ vertices.
\item[b)] From each vertex $j \in \{j_1, \dots, j_k\}$ exactly one directed edge starts, ending at some other vertex $\bar j \in \{1, \dots, n\}\setminus\{j\}$.
\end{itemize}
Remembering the notation in Section \ref{SectionRootedTrees}, we see that $\Gamma_{\{1, \dots, n\}\setminus \{j\}}=\Gamma_j$.
Let us compute the cardinality of $\Gamma_{\{j_1, \dots, j_k\}}$.
\begin{prop}
It holds that $\# ~\Gamma_{\{j_1, \dots, j_k\}} = (n-k)n^{k-1}$.
\end{prop}
\begin{proof}
Let $b_k^n := \#~ \Gamma_{\{j_1, \dots, j_k\}}$.
The proof is done in two steps. Firstly, we derive a recursive formula for $b_k^n$. Secondly, we show inductively the claimed expression.
\newline
1. Step:
We define $b_0^n=1$ and derive a recursion formula for $b_k^n$.
To compute $b_1^n$, fix one marked vertex, say $j_1=1$. There are $n-1$ possible edges from $j_1$, so $b_1^n=n-1$. Let us compute $b_2^n$. Fix two marked vertices, say $1$ and $2$. For the edge from $1$ we have two choices: it can go to one of the $n-2$ unmarked vertices or to $2$. Choosing an edge to an unmarked vertex leaves $b_1^n$ possibilities for the edge from $2$, giving $(n-2)b_1^n$ in this case. Choosing the edge to $2$ leaves $n-2$ options for the edge from $2$, since it can return neither to $2$ itself nor to $1$. So $b_2^n = (n-2) (b_1^n + b_0^n)$ in total. Let us compute $b_3^n$. Fix three marked vertices, say $1$, $2$ and $3$. For the edge from $1$ we again have two choices: it can go to one of the $n-3$ unmarked vertices or to a marked vertex ($2$ or $3$).
Choosing an edge to an unmarked vertex leaves $b_2^n$ possibilities for the two remaining marked vertices, giving $(n-3)b_2^n$ in this case. Choosing the edge to a marked vertex, we have $2$ options; say we go to $2$. The edge from $2$ cannot return to $1$. If it goes to the
marked vertex $3$, the edge from $3$ may go to none of $1$, $2$, $3$, which leaves $(n-3)b_0^n$ possibilities. If it goes to one of the $n-3$ unmarked vertices, there remain $b_1^n$ possibilities for the edge from $3$, i.e. $(n-3)b_1^n$ in total. Hence, $b_3^n = (n-3) (b_2^n + 2(b_1^n + b_0^n))$. Stepping further, we conclude the following recursion formula for $b_k^n$:
\begin{align*}
b_k^n &= (n-k)\left[b_{k-1}^n + (k-1)\left(b_{k-2}^n + (k-2) \left(b_{k-3}^n + \dots +2\left(b_1^n + b_0^n\right)\right)\dots \right)\right] \\
&= (n-k) \sum_{j=1}^k b_{k-j}^n \frac{(k-1)!}{(k-j)!}.
\end{align*}
2. Step: We prove inductively that $b_k^n = (n-k)n^{k-1}$. For $k=0$, we get by definition $b_0^n = 1$. We already computed $b_1^n = n-1$ and $b_2^n = (n-2)n$. Let us assume that $b_{k-j}^n = (n-k+j)n^{k-j-1}$ holds for any $j = 1, \dots, k$. We want to prove the claim for $j=0$. In particular it suffices to show that
\begin{align*}
\sum_{j=1}^k (n-k+j) n^{k-j-1}\frac{(k-1)!}{(k-j)!} = n^{k-1}.
\end{align*}
The left-hand side is
\begin{align*}
&\frac 1 n\sum_{j=1}^k (n-k+j) n^{k-j}\frac{(k-1)!}{(k-j)!} = \frac 1 n\sum_{l=0}^{k-1} (n-l) n^l\frac{(k-1)!}{l!} \\
&= \frac {(k-1)!}{n}\left(n + \sum_{l=1}^{k-1} \frac{n^{l+1}}{l!} - \sum_{l=1}^{k-1} \frac{n^l}{(l-1)!} \right) = \frac {(k-1)!}{n}\frac{n^k}{(k-1)!} = n^{k-1}.
\end{align*}
This proves the claim.
\end{proof}
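The count can be confirmed by brute force for small parameters: enumerate all assignments of one outgoing edge per marked vertex and keep the acyclic ones (a sketch, with vertices numbered from $0$):

```python
from itertools import product

def count_marked_acyclic(n, marked):
    """Number of graphs on n vertices in which each marked vertex has
    exactly one outgoing edge (to a different vertex) and no directed
    cycle occurs."""
    count = 0
    for targets in product(range(n), repeat=len(marked)):
        succ = dict(zip(marked, targets))
        if any(succ[v] == v for v in marked):
            continue
        acyclic = True
        for v in marked:
            seen, u = set(), v
            while u in succ:          # walk until we leave the marked set
                if u in seen:         # revisited a vertex: cycle found
                    acyclic = False
                    break
                seen.add(u)
                u = succ[u]
            if not acyclic:
                break
        if acyclic:
            count += 1
    return count

checks = [(5, 1), (5, 2), (5, 3), (4, 4), (6, 2)]
results = [count_marked_acyclic(n, list(range(k))) == (n - k) * n**(k - 1)
           for n, k in checks]
```

Note that the case $k=n$ correctly gives $0$: if every vertex has an outgoing edge, the graph must contain a cycle.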
\begin{cor}
It holds that $\# ~\Gamma_{j} = n^{n-2}$, and hence every entry of $w$ consists of $n^{n-2}$ addends with $n-1$ factors each.
\end{cor}
\section{Symmetric case of detailed balance}\label{SectionDetailedBalance}
A special situation occurs if the Markov process satisfies a symmetry condition. We may assume that the reaction network is connected; otherwise each separate component can be treated independently. By definition, a Markov process is detailed balanced with respect to its invariant measure $w$ if it is weakly reversible, i.e. $m_{ij}\neq 0$ implies $m_{ji}\neq 0$, and, moreover, $m_{ij}w_j = m_{ji}w_i$ holds. This means that the stochastic matrix $M$ is symmetric in $L^2(w)$, the $L^2$ space over the invariant measure $w>0$. The first property, weak reversibility, implies that the invariant measure is unique. The second property, as we will see, simplifies the formula for the invariant measure greatly.
Firstly, we have the following.
\begin{lem}\label{LemOrientationDetailedBalance}
Let the stochastic matrix $M$ be detailed balanced w.r.t. the invariant measure $w>0$. Let $j_1\mapsto j_2 \mapsto \dots \mapsto j_k \mapsto j_1$ be a loop in the graph of $M$. Then
\begin{align}
m_{j_1j_2} m_{j_2j_3} \cdots m_{j_kj_1} = m_{j_1j_k} m_{j_kj_{k-1}} \cdots m_{j_2j_1}\label{eqLoops}.
\end{align}
\end{lem}
\begin{proof}
It holds for any $i= 1, \dots, k$ that $m_{j_ij_{i+1}}w_{j_{i+1}} = m_{j_{i+1}j_i}w_{j_{i}}$, where we use the notation $j_{k+1}=j_1$. Taking the product of these equations over $i= 1, \dots, k$ and dividing by $\prod_{i=1}^k w_{j_i}>0$ yields the claim.
\end{proof}
\begin{rem}
The above relation (\ref{eqLoops}) is indeed equivalent to $M$ being detailed balanced.
\end{rem}
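Lemma \ref{LemOrientationDetailedBalance} can be checked on a concrete chain. A convenient source of detailed-balanced matrices is the random walk on a weighted undirected graph, `p[i][j] = c[i][j]/d[i]` with symmetric conductances `c` and row sums `d` (this construction and the row-stochastic convention are ours; the loop identity does not depend on the convention):

```python
from fractions import Fraction as F

# symmetric conductances on 4 vertices (a complete graph)
c = [[0, 1, 2, 3],
     [1, 0, 4, 5],
     [2, 4, 0, 6],
     [3, 5, 6, 0]]
d = [sum(row) for row in c]
p = [[F(c[i][j], d[i]) for j in range(4)] for i in range(4)]
# this walk is detailed balanced w.r.t. w_i proportional to d_i:
assert all(d[i] * p[i][j] == d[j] * p[j][i] for i in range(4) for j in range(4))

def loop_product(p, loop):
    """Product of transition probabilities along a closed loop."""
    out = F(1)
    for a, b in zip(loop, loop[1:] + loop[:1]):
        out *= p[a][b]
    return out

loop = [0, 1, 2, 3]                 # the loop 0 -> 1 -> 2 -> 3 -> 0
forward = loop_product(p, loop)
backward = loop_product(p, loop[::-1])
```

Equality of `forward` and `backward` is exactly relation (\ref{eqLoops}) for this loop.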
To simplify the formula for the invariant measure of a stochastic matrix, we need the notion of an \textit{undirected tree}.
An undirected graph $\gamma=(V,E)$ consists of vertices $V$ and edges $E$, where the edges $e\in E$ do not carry any orientation, i.e. there is no difference between the edge going from $i$ to $j$ and the edge going from $j$ to $i$. Paths and loops are defined as in the directed case above.
\begin{defi}
Let $\gamma=(V,E)$ be an undirected graph. An \textit{(undirected) tree} in $\gamma$ is a subgraph $t=(V, E')$, $E'\subset E$, containing all vertices of $V$ connected by edges $e\in E'$ such that there are no loops in $t$.
\end{defi}
Now observe the following. Pick any tree $t$ and any vertex $k$. Then $t$ canonically defines a unique spanning tree rooted at $k$ by orienting each edge of $t$ toward $k$. This spanning tree is easily constructed inductively by considering the vertices of the tree at distance $1$, $2$, and so on from the vertex $k$. Let us denote this spanning tree by $t_k$. Now we define $w^t = (w_k^t)_{k}$ by
\begin{align}\label{eqStationaryMeasureDetailedBalance}
w_k^t = \prod_{e_{ij}\in t_k} m_{ij}.
\end{align}
Observe that the only difference between the spanning trees defined by $i$ and $j$ is the orientation of the path between $i$ and $j$:
\begin{center}
\begin{minipage}[t]{12cm}
\unitlength=1.4cm
\begin{picture}(3.3, 2.3)
\linethickness{0.2mm}
\put(0.6,1.0){\circle*{0.1}}
\put(0.1,2.0){\circle*{0.1}}
\put(0.1,0.0){\circle*{0.1}}
\put(1.6,1.0){\circle*{0.1}}
\put(2.1,2.0){\circle*{0.1}}
\put(2.1,0.0){\circle*{0.1}}
\put(2.6,1.0){\circle*{0.1}}
\put(0.1,0.0){\vector(1,2){0.45}}
\put(0.1,2.0){\vector(1,-2){0.45}}
\put(1.6,1.0){\vector(-1,0){0.9}}
\put(2.1,2.0){\vector(-1,-2){0.45}}
\put(2.1,0.0){\vector(-1,2){0.45}}
\put(2.6,1.0){\vector(-1,-2){0.45}}
\put(0.62, 1.2){\makebox(0.0,0.0){$i$}}
\put(2.12, 0.24){\makebox(0.0,0.0){$j$}}
\end{picture}\hspace{3cm}
\begin{picture}(3.3, 2.3)
\linethickness{0.2mm}
\put(0.6,1.0){\circle*{0.1}}
\put(0.1,2.0){\circle*{0.1}}
\put(0.1,0.0){\circle*{0.1}}
\put(1.6,1.0){\circle*{0.1}}
\put(2.1,2.0){\circle*{0.1}}
\put(2.1,0.0){\circle*{0.1}}
\put(2.6,1.0){\circle*{0.1}}
\put(0.1,0.0){\vector(1,2){0.45}}
\put(0.1,2.0){\vector(1,-2){0.45}}
\put(0.6,1.0){\vector(1,0){0.9}}
\put(2.1,2.0){\vector(-1,-2){0.45}}
\put(1.6,1.0){\vector(1,-2){0.45}}
\put(2.6,1.0){\vector(-1,-2){0.45}}
\put(0.62, 1.2){\makebox(0.0,0.0){$i$}}
\put(2.12, 0.24){\makebox(0.0,0.0){$j$}}
\end{picture}
\end{minipage}
\end{center}
\vspace{2mm}
The next aim is to show that $w^t$ is indeed the (unique) invariant measure of $M$ and, moreover, that $w^t$ is independent of the undirected tree $t$ chosen before up to a scaling factor, i.e. any fixed tree $t$ defines the same invariant measure up to normalization.
\begin{thm}
Let the stochastic matrix $M$ be detailed balanced w.r.t. the invariant measure $w>0$ and take any tree $t$. Then $w^t$, defined by (\ref{eqStationaryMeasureDetailedBalance}), is (up to normalization) the invariant measure of $M$. In fact, different trees just correspond to different normalization factors.
\end{thm}
\begin{proof} 1. Step: We show that formula (\ref{eqStationaryMeasureDetailedBalance}) defines an invariant measure.
Let us fix one tree $t$. We want to show that for any $k=1,\dots, n$ it holds
\begin{align*}
\sum_{j\neq k} m_{kj} w_k^t = \sum_{j\neq k}m_{jk}w_j^t.
\end{align*}
To be more precise, we show that even $m_{kj}w_k^t = m_{jk}w_j^t$ holds for any $j\neq k$. As in the non-symmetric case before (Lemma \ref{LemmaExactlyOneLoop}), the product $m_{kj}w_k^t$ defines a subgraph with exactly one cycle, which passes through the vertices $k$ and $j$. The graph defined by $m_{jk}w_j^t$ has exactly the same structure apart from the orientation of this cycle. Lemma \ref{LemOrientationDetailedBalance} shows that the two products are equal, which proves the claim.
2. Step: Now we show that formula (\ref{eqStationaryMeasureDetailedBalance}) is in fact independent of the chosen tree $t$, in the sense that for any two trees $t_1$ and $t_2$ the two invariant measures are proportional. Take two arbitrary trees $t_1$ and $t_2$ with associated invariant measures $w^1$ and $w^2$, and take $i\neq j$. Then the claim is equivalent to $\frac{w^1_i}{ w^1_j} = \frac{w^2_i}{ w^2_j}$.
Consider $w^1_i$ and $w^1_j$. Their only difference is the orientation of the path in $t_1$ connecting $i$ and $j$, i.e. $\frac{w^1_i}{w^1_j} = \frac{\mathrm{Path~in~}t^1: j\mapsto i}{\mathrm {Path~in~}t^1: i\mapsto j}$, where $\mathrm{Path~in~}t: i\mapsto j$ denotes the product of the entries of $M$ along the oriented path in $t$ from $i$ to $j$. The same relation holds for $w^2_i$ and $w^2_j$ with respect to the tree $t_2$, i.e. $\frac{w^2_i}{w^2_j} = \frac{\mathrm{Path~in~}t^2: j\mapsto i}{\mathrm {Path~in~}t^2: i\mapsto j}$. So the claim is equivalent to
\begin{align*}
\frac{\mathrm{Path~in~}t^1: j\mapsto i}{\mathrm{Path~in~}t^1: i\mapsto j} = \frac{\mathrm{Path~in~}t^2: j\mapsto i}{\mathrm {Path~in~}t^2: i\mapsto j}
\end{align*}
or, in other words, equivalent to
\begin{align*}
(\mathrm{Path~in~}t^1: j\mapsto i )\cdot (\mathrm {Path~in~}t^2: i\mapsto j) = (\mathrm{Path~in~}t^2: j\mapsto i )\cdot (\mathrm{Path~in~}t^1: i\mapsto j).
\end{align*}
Both sides describe a cycle consisting of the same edges but with opposite orientations. Lemma \ref{LemOrientationDetailedBalance} again yields that both terms are indeed equal.
\end{proof}
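Formula (\ref{eqStationaryMeasureDetailedBalance}) can be tested on a small reversible chain. The sketch below builds a detailed-balanced row-stochastic matrix from symmetric conductances (`p[i][j] = c[i][j]/d[i]`, our own construction), represents a tree by its edge list, and checks that two different trees yield measures proportional to the known stationary vector `d`:

```python
from fractions import Fraction as F

c = [[0, 1, 0, 2],        # symmetric conductances on a 4-cycle graph
     [1, 0, 3, 0],
     [0, 3, 0, 4],
     [2, 0, 4, 0]]
d = [sum(row) for row in c]
n = len(c)
p = [[F(c[i][j], d[i]) for j in range(n)] for i in range(n)]

def measure_from_tree(p, edges):
    """w_k = product, over the tree edges oriented toward k, of the
    transition probabilities; `edges` lists the undirected tree edges."""
    n = len(p)
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    w = []
    for k in range(n):
        # orient every tree edge toward k by a traversal from k
        prod, stack, seen = F(1), [k], {k}
        while stack:
            v = stack.pop()
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    prod *= p[u][v]   # edge u -> v points toward the root k
                    stack.append(u)
        w.append(prod)
    return w

w1 = measure_from_tree(p, [(0, 1), (1, 2), (2, 3)])
w2 = measure_from_tree(p, [(0, 3), (3, 2), (2, 1)])
# both are invariant measures, hence proportional to each other and to d
```

The two trees produce different scalings of the same measure, as the theorem predicts.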
\begin{exam}
The scaling factor for different trees is not $1$ in general. Consider a stochastic matrix on three states with the transition weights shown below; detailed balance amounts to the loop condition $abc=def$.
\begin{center}
\begin{minipage}[t]{12cm}
\unitlength=1.6cm
\begin{picture}(2.3, 1.5)
\linethickness{0.2mm}
\put(0.6,1.2){\circle*{0.1}}
\put(0.1,0.2){\circle*{0.1}}
\put(1.1,0.2){\circle*{0.1}}
\put(0.08,0.2){\vector(1,2){0.45}}
\put(0.62,1.2){\vector(-1,-2){0.45}}
\put(1.08,0.2){\vector(-1,2){0.45}}
\put(0.62,1.2){\vector(1,-2){0.45}}
\put(1.1,0.18){\vector(-1,0){0.9}}
\put(0.1,0.22){\vector(1,0){0.9}}
\put(0.0, 0.2){\makebox(0.0,0.0){$1$}}
\put(1.2, 0.2){\makebox(0.0,0.0){$2$}}
\put(0.6, 1.35){\makebox(0.0,0.0){$3$}}
\put(0.25, 0.7){\makebox(0.0,0.0){$a$}}
\put(0.45, 0.7){\makebox(0.0,0.0){$d$}}
\put(0.72, 0.6){\makebox(0.0,0.0){$f$}}
\put(1.0, 0.7){\makebox(0.0,0.0){$b$}}
\put(0.6, 0.1){\makebox(0.0,0.0){$c$}}
\put(0.6, 0.3){\makebox(0.0,0.0){$e$}}
\end{picture}~
\begin{picture}(2.3, 1.3)
\linethickness{0.2mm}
\put(0.6,1.2){\circle*{0.1}}
\put(0.1,0.2){\circle*{0.1}}
\put(1.1,0.2){\circle*{0.1}}
\put(0.6,1.2){\line(1,-2){0.5}}
\put(1.1,0.2){\line(-1,0){1.0}}
\put(0.0, 0.2){\makebox(0.0,0.0){$1$}}
\put(1.2, 0.2){\makebox(0.0,0.0){$2$}}
\put(0.6, 1.35){\makebox(0.0,0.0){$3$}}
\end{picture}~
\begin{picture}(2.3, 1.3)
\linethickness{0.2mm}
\put(0.6,1.2){\circle*{0.1}}
\put(0.1,0.2){\circle*{0.1}}
\put(1.1,0.2){\circle*{0.1}}
\put(0.1,0.2){\line(1,2){0.5}}
\put(0.6,1.2){\line(1,-2){0.5}}
\put(0.0, 0.2){\makebox(0.0,0.0){$1$}}
\put(1.2, 0.2){\makebox(0.0,0.0){$2$}}
\put(0.6, 1.35){\makebox(0.0,0.0){$3$}}
\end{picture}
\end{minipage}
\end{center}
Let us fix two trees: $1\mapsto 2 \mapsto 3$ and $1\mapsto 3\mapsto 2$. Then formula (\ref{eqStationaryMeasureDetailedBalance}) yields $w^1 = (bc,be,ef)^T$ and $w^2 = (df,ab,af)^T$, and the proportionality factor is $e/a$.
\end{exam}
% Source of the preceding text: "Combinatorial considerations on the invariant measure of a stochastic matrix", https://arxiv.org/abs/1910.02856

% Source of the following text: https://arxiv.org/abs/1403.8139
\title{A Generalization of Tokuyama's Formula to the Hall-Littlewood Polynomials}
\begin{abstract}
A theorem due to Tokuyama expresses Schur polynomials in terms of Gelfand-Tsetlin patterns, providing a deformation of the Weyl character formula and two other classical results, Stanley's formula for the Schur $q$-polynomials and Gelfand's parametrization for the Schur polynomial. We generalize Tokuyama's formula to the Hall-Littlewood polynomials by extending Tokuyama's statistics. Our result, in addition to specializing to Tokuyama's result and the aforementioned classical results, also yields connections to the monomial symmetric function and a new deformation of Stanley's formula.
\end{abstract}
\section{Introduction}
\indent Schur polynomials, a special class of symmetric polynomials, play an important role in representation theory. They encode the characters of irreducible representations of general linear groups, which may be computed via the Weyl character formula. Tokuyama \cite{tokuyama} gave a deformation of the Weyl character formula for $\text{GL}_n(\mathbb{C})$ (Cartan type $A_n$). This formula expresses Schur polynomials in terms of statistics obtained from strict Gelfand-Tsetlin (GT) patterns and includes two other classical results as specializations, the Gelfand parameterization formula for Schur polynomials \cite{gelfand} and Stanley's formula for the Schur $q$-polynomials \cite{stanley}.
The ideas in \cite{tokuyama} have been extended to other Cartan types. For example, combinatorial expressions of deformations of the Weyl denominator for Cartan types $B_n$, $C_n$, and $D_n$ were given by Okada \cite{okadadenom} and Simpson \cite{simpson1}, \cite{simpson2}. Hamel and King in \cite{hamel02} replicate Tokuyama's deformation of the Weyl character formula in type $C_n$, and Friedberg and Zhang \cite{friedberg2014tokuyama} derived a similar result for type $B_n$. These results are also often expressed using other combinatorial objects such as Young tableaux \cite{fulton1997young}, alternating sign matrices \cite{tokuyama}, \cite{okadadenom}, \cite{hamel2007bijective}, and 6-vertex or ice-type models \cite[Chapter 19]{brubaker2006weyl}, \cite{brubaker2011schur}, \cite{tabony2011deformations}. Hamel and King express the type $A_n$ case \cite{hamel2007bijective} and type $C_n$ case \cite{hamel02} using 6-vertex partition functions, and Brubaker and Schultz \cite{brubaker20146} give Tokuyama-type deformations for types $A_n$, $B_n$, $C_n$, and $D_n$ using modified ice models. One can also try to generalize Tokuyama's ideas to other symmetric polynomials, such as the Hall-Littlewood polynomials, and this is the problem we consider.
The Hall-Littlewood polynomials are a class of symmetric polynomials which may be viewed as a generalization of the Schur polynomials by a deformation along a parameter $t$. The Hall-Littlewood polynomials also interpolate between the dual bases of the Schur polynomials and the monomial symmetric functions at $t=0$ and $t=1$, respectively. These polynomials are used to determine characters of Chevalley groups over local and finite fields \cite{tokuyama}. Stanley's formula expresses the Hall-Littlewood polynomials at the singular value $t=-1$ (commonly known as the Schur $q$-polynomials \cite[Chapter III]{macdonaldbook}) as a summation over strict GT patterns. However, there does not exist an analogue of Tokuyama's formula expressing the Hall-Littlewood polynomials as a summation over combinatorial statistics from Gelfand-Tsetlin patterns. In this paper we provide such a result. Theorem \ref{MainTheorem2*}, in addition to linking the classical specializations of Tokuyama, reduces to a different deformation of Stanley's formula at $t=-1$, and a formula for the monomial symmetric functions in terms of GT patterns at $t=1$.
\section{Preliminary Notation and a Theorem due to Tokuyama}\label{prelim}
A partition $\l = (\l_1,\l_2,\ldots,\l_n)$ is a finite tuple of nonnegative integers, referred to as \emph{parts}. Unless otherwise stated, a partition will be assumed to be weakly decreasing, i.e. $\l_i \geq \l_{i+1}$ for all $i$. The \emph{length} of a partition $\l$ is the number of parts in $\l$, and the \emph{size} of $\l$ is defined as $|\l|=\sum_{i=1}^n \lambda_i$. Addition of partitions of equal length is done component-wise, and given two partitions $\l$ and $\mu$ of lengths $n$ and $m$ respectively, we express their concatenation as the tuple $\conc{\l}{\mu} = (\l_1,\ldots,\l_n,\mu_1,\ldots,\mu_m)$. We will typically use $\a$ to denote some strictly decreasing partition, often taking $\a = \lambda + \rho$ when $\lambda$ is defined.
\noindent Define the partition $\r_n$ as
\begin{equation}
\rho_n=(n-1,n-2,\ldots,1,0).
\end{equation}
We often write $\r$ in place of $\r_n$ as the value of $n$ is clear from context.
We write the polynomial $f(x)$ as short for $f(x_1,\ldots, x_n)$, and similarly $x^\lambda = x_1^{\lambda_1}x_2^{\l_2} \ldots x_n^{\lambda_n}$. Furthermore, a permutation $\sigma \in S_n$ acts on $f(x)$ by permuting the variables $x_i$.
\noindent The Weyl denominator $\WD{n}$ is given by the formula
\begin{equation}
\WD{n} = \prod_{1 \leq i<j \leq n} (x_i-x_j).
\end{equation}
A deformation of the Weyl denominator $\defWD{n}{t}$ is given by the similar formula
\begin{equation}\label{defWD}
\defWD{n}{t} = \prod_{1 \leq i<j \leq n} (x_i-tx_j).
\end{equation}
Note that $\defWD{n}{1} = \WD{n}$ and $\defWD{n}{0} = x^\rho$.
\begin{Theorem}[Weyl Character Formula for $\text{GL}_n$]\label{schur}
The Schur polynomial corresponding to the partition $\l$ of length $n$ is
\begin{equation} s_\lambda(x)
= \sum_{\sigma \in S_n} \sigma \left(\frac{x^{\lambda + \rho}}{\WD{n}} \right)
. \end{equation}
\end{Theorem}
We may define the Hall-Littlewood polynomials analogously using the deformation of the Weyl denominator as follows.
\begin{Definition}
\label{HL}
The Hall-Littlewood polynomial for a partition $\l$ of length $n$ is
\begin{equation}
\HL{\lambda} = \sum_{\sigma \in S_n} \sigma \left( x^{\l} \frac{\defWD{n}{t}}{\WD{n}}\right)
\end{equation}
\end{Definition}
\noindent It is not difficult to see that $\HLtwo{\l}{x;0} = s_\lambda(x)$ and $\HLtwo{\lambda}{x;1} = \sum_{\sigma \in S_n} \sigma(x^\l)$, the monomial symmetric function $m_{\l}(x)$.
Note: The Hall-Littlewood polynomials defined in \cite[Chapter III]{macdonaldbook} are given by
\begin{equation}
\MHL{\l}{x; t} = v_{\l}(t)\HL{\l},
\end{equation}
for a stabilizing factor $v_{\l}(t)$. Since the stabilizing factor may easily be multiplied to Theorem \ref{MainTheorem2*} if necessary, we choose to omit it in this paper, and refer to the polynomials $\HL{\l}$ as the Hall-Littlewood polynomials.
\newpage
\begin{Definition}
A Gelfand-Tsetlin (GT) pattern is a triangular array of nonnegative integers of the form
\vspace{3mm}
\centerline{$\tau_{1,1} \;\;\;\;\;\;\; \tau_{1,2} \;\;\;\;\;\;\; \tau_{1,3} \;\;\;\;\;\;\; \ldots \;\;\;\;\;\;\; \tau_{1,n}$}
\centerline{$\,\tau_{2,2} \;\;\;\;\;\;\;\; \tau_{2,3} \;\;\;\;\;\;\;\; \ldots \;\;\;\;\;\;\;\; \tau_{2,n} $}
\centerline{ $ \ldots \;\;\;\;\;\;\;\; \ldots \;\;\;\;\;\;\;\; \ldots$}
\centerline{$\,\,\, \tau_{n-1,n-1} \;\; \tau_{n-1,n}$}
\centerline{$\,\,\,\,\tau_{n,n}$}
\vspace{3mm}
\noindent where each row $\row{i} = (\tau_{i,i},\tau_{i,i+1},\ldots,\tau_{i,n})$
is a \emph{weakly decreasing} partition, and two consecutive rows $r_{i-1} = (\tau_{i-1,i-1},\ldots,\tau_{i-1,n})$ and $r_{i} = (\tau_{i,i},\ldots,\tau_{i,n})$ satisfy the \emph{interleaving condition}:
\begin{equation} \tau_{i-1,j-1}\geq \tau_{i,j} \geq \tau_{i-1,j}. \end{equation}
\end{Definition}
For a partition $\a$, let $GT(\a)$ be the set of all GT patterns of top row $\a=r_1$. A \emph{strict} GT pattern is one in which each row $r_i$ is strictly decreasing. Given a partition $\a$, write $\GT(\a) \subseteq GT(\a)$ to be the set of all strict GT patterns with top row $\a$.
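For context, GT patterns with top row $\lambda$ are in bijection with semistandard Young tableaux of shape $\lambda$, so $\# \,GT(\lambda)$ equals the dimension $\prod_{i<j}(\lambda_i-\lambda_j+j-i)/(j-i)$ of the irreducible $\mathrm{GL}_n$ representation with highest weight $\lambda$. A brute-force enumeration confirms this for small $\lambda$ (the enumeration code is our own sketch):

```python
from itertools import product as iproduct
from math import prod

def gt_patterns(top):
    """All GT patterns with the given (weakly decreasing) top row,
    built row by row using the interleaving condition."""
    if len(top) == 1:
        return [[list(top)]]
    out = []
    ranges = [range(top[j + 1], top[j] + 1) for j in range(len(top) - 1)]
    for nxt in iproduct(*ranges):
        # interleaving already forces weak decrease; check it explicitly
        if all(nxt[j] >= nxt[j + 1] for j in range(len(nxt) - 1)):
            for rest in gt_patterns(nxt):
                out.append([list(top)] + rest)
    return out

def weyl_dim(lam):
    """Weyl dimension formula for GL_n, highest weight lam."""
    n = len(lam)
    num = prod(lam[i] - lam[j] + j - i for i in range(n) for j in range(i + 1, n))
    den = prod(j - i for i in range(n) for j in range(i + 1, n))
    return num // den

count = len(gt_patterns((2, 1, 0)))
```

For $\lambda=(2,1,0)$ and $n=3$ this recovers the $8$-dimensional adjoint representation.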
\begin{Definition}[\cite{tokuyama}] \label{GTstats}
An entry $\tau_{i,j}$ in a GT pattern is
\begin{itemize}
\item \emph{left-leaning} if $\tau_{i,j}=\tau_{i-1,j-1}$,
\item \emph{right-leaning} if $\tau_{i,j} = \tau_{i-1,j}$, and
\item \emph{special} if it is neither left-leaning nor right-leaning.
\end{itemize}
The quantities $l(T)$, $r(T)$, and $z(T)$ denote the number of left-leaning, right-leaning and special entries in a GT pattern respectively.
\end{Definition}
\noindent Given a GT pattern with $n$ rows, define the statistic $m_i(T)$ as
\begin{equation}
m_i(T) = \begin{cases}
|\row{i}|-|\row{i+1}| & \text{ for }\;1\leq i \leq n-1 \\ |\row{i}| & \text{ for }\;i=n \end{cases},
\end{equation}
and $m(T)$ as
\begin{equation}
m(T)=\left(m_1(T),\ldots,m_n(T)\right).
\end{equation}
We now state the following theorem due to Tokuyama, which we generalize in the rest of this paper.
\begin{Theorem}[\cite{tokuyama}] \label{toks}
For any weakly decreasing partition $\lambda$ of length $n$, we have
\begin{equation} \label{tokseq}
\defWD{n}{q} \cdot s_\l(x) = \sum_{T\in \GT{(\lambda + \rho)}} (1-q)^{z(T)}(-q)^{l(T)}x^{m(T)}.
\end{equation}
\end{Theorem}
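Theorem \ref{toks} can be verified by brute force for small $\lambda$: enumerate the strict GT patterns with top row $\lambda+\rho$, accumulate the statistics of Definition \ref{GTstats}, and compare with the left-hand side evaluated exactly at rational points (the enumeration and the bialternant evaluation of $s_\lambda$ are our own sketch):

```python
from fractions import Fraction as F
from itertools import product as iproduct

def strict_gt_patterns(top):
    """All strict GT patterns (every row strictly decreasing) with top row `top`."""
    if len(top) == 1:
        return [[list(top)]]
    out = []
    ranges = [range(top[j + 1], top[j] + 1) for j in range(len(top) - 1)]
    for nxt in iproduct(*ranges):
        if all(nxt[j] > nxt[j + 1] for j in range(len(nxt) - 1)):
            for rest in strict_gt_patterns(nxt):
                out.append([list(top)] + rest)
    return out

def tokuyama_rhs(alpha, xs, q):
    """Sum over strict GT patterns of (1-q)^z(T) (-q)^l(T) x^m(T)."""
    total = F(0)
    for pat in strict_gt_patterns(alpha):
        coeff = F(1)
        for i in range(1, len(pat)):
            for j, v in enumerate(pat[i]):
                if v == pat[i - 1][j]:        # left-leaning entry
                    coeff *= -q
                elif v == pat[i - 1][j + 1]:  # right-leaning entry
                    pass
                else:                          # special entry
                    coeff *= 1 - q
        for i, x in enumerate(xs):             # the monomial x^m(T)
            below = sum(pat[i + 1]) if i + 1 < len(pat) else 0
            coeff *= x ** (sum(pat[i]) - below)
        total += coeff
    return total

def det(a):
    a = [row[:] for row in a]
    n, sign, out = len(a), 1, F(1)
    for col in range(n):
        piv = next(r for r in range(col, n) if a[r][col] != 0)
        if piv != col:
            a[col], a[piv] = a[piv], a[col]
            sign = -sign
        out *= a[col][col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            a[r] = [a[r][k] - f * a[col][k] for k in range(n)]
    return sign * out

def schur(lam, xs):
    n = len(xs)
    num = [[x ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs]
    den = [[x ** (n - 1 - j) for j in range(n)] for x in xs]
    return det(num) / det(den)

lam, xs, q = (2, 1, 0), [F(2), F(3), F(5)], F(1, 7)
alpha = tuple(l + len(lam) - 1 - i for i, l in enumerate(lam))  # lambda + rho
lhs = schur(lam, xs)
for i in range(len(xs)):
    for j in range(i + 1, len(xs)):
        lhs *= xs[i] - q * xs[j]          # the deformed Weyl denominator
rhs = tokuyama_rhs(alpha, xs, q)
```

Both sides are exact rationals, so the comparison is an identity check, not an approximation.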
\section{Additional Statistics on Gelfand-Tsetlin Patterns}\label{addstats}
To generalize Theorem \ref{toks} to the Hall-Littlewood polynomials, the previous statistics from Definition \ref{GTstats} of \cite{tokuyama} prove inadequate. Instead of only labelling each entry as left-leaning, right-leaning or special, we need to give each entry both a \emph{left-sided} property $p_l(\tau_{i,j})$ and a \emph{right-sided} property $p_r(\tau_{i,j})$. The left-sided property encodes the relationship the entry $\tau_{i,j}$ has to the entry directly above it and to its left, namely $\tau_{i-1,j-1}$. Similarly, the right-sided property encodes the relationship that $\tau_{i,j}$ has to $\tau_{i-1,j}$.
\noindent The left-sided properties of an entry $p_l(\tau_{i,j})$ are assigned as
\begin{equation} \label{pl}
p_l(\tau_{i,j}) =
\begin{cases}
l \text{ (left)} & \text{if $\tau_{i,j} = \tau_{i-1,j-1}$}\\
al \text{ (almost-left)} & \text{if $\tau_{i,j} = \tau_{i-1,j-1} - 1$} \\
s \text{ (special)} & \text{otherwise}
\end{cases},
\end{equation}
and similarly, the right-sided properties of an entry $p_r(\tau_{i,j})$ are assigned as
\begin{equation} \label{pr}
p_r(\tau_{i,j}) =
\begin{cases}
r \text{ (right)} & \text{if $\tau_{i,j} = \tau_{i-1,j}$}\\
ar \text{ (almost-right)} & \text{if $\tau_{i,j} = \tau_{i-1,j} + 1$} \\
s \text{ (special)} & \text{otherwise}
\end{cases}.
\end{equation}
\begin{Definition} \label{c,d}
For an entry $\tau_{i,j}$ with $i>1$, we define
\begin{equation}
c(\tau_{i,j}) =
\begin{cases}
0 & \text{if $p_l(\tau_{i,j}) = l$ or $p_r(\tau_{i,j})=r$}\\
(1-t)(1-q) & otherwise \\
\end{cases}
\end{equation}
and for a property $p$, we define
\begin{equation}
g(p) =
\begin{cases}
-q & \text{if $p=l$} \\
t & \text{if $p=al$} \\
1 & \text{if $p=r$} \\
-qt & \text{if $p=ar$} \\
0 & \text{if $p=s$} \\
\end{cases}
\end{equation}
\end{Definition}
With these we define two more functions: the first is a generalization of the expressions $(-q)$ and $(1-q)$ from Tokuyama's formula; and the second considers the relation of an entry $\tau_{i,j}$ to the two entries above it in the GT pattern.
\begin{Definition}\label{d,w}
For an entry $\tau_{i,j}$ with $i>1$, we define
\begin{equation}\label{def-w} w(\tau_{i,j}) = c(\tau_{i,j}) + g(p_l(\tau_{i,j})) + g(p_r(\tau_{i,j})).\end{equation}
For an entry $\tau_{i,j}$ with $i<j<n$, we define
\begin{equation}\label{def-d} d(\tau_{i,j})=g(p_r(\tau_{i+1,j}))\cdot g(p_l(\tau_{i+1,j+1})).\end{equation}
\end{Definition}
\begin{Example}\label{example:w,d}
Given the following segment of a GT pattern:
\centerline{$ 5 \; \; 3 \; \; 1 $}
\centerline{$4 \; \; 3 $}
\noindent We see that the $4$ has properties $al$ and $ar$. Thus $c(4) = (1-q)(1-t)$ and $w(4) = (1-q)(1-t) + t - qt = 1-q$. Similarly, we have $c(3) = 0$ and $w(3) = 0 - q + 0 = -q$. For the entries in the second row, we find $g(p_r(4)) = -qt$ and $g(p_l(3))= -q$, thus the 3 in the first row gives $d(3)=(-q t)\cdot (-q) = q^2t$.
\end{Example}
For the reader's convenience, we provide Table \ref{table:wd} which lists all possible values for $w(\tau_{i,j})$ and $d(\tau_{i,j})$ that we may need to consider. One may notice that we omit the case for $w(\tau_{i,j})$ when $(p_l(\tau_{i,j}),p_r(\tau_{i,j})) = (l,r)$; this is simply because we need not consider this case at any point in our work.
\vspace{-1ex}
\begin{table}[h]
\caption{Possible $w(\tau_{i,j})$ and $d(\tau_{i,j})$ values for an entry $\tau_{i,j}$.} \label{table:wd}
\centering
\begin{tabular}{|c|c|c|c|c|c|} \cline{1-2} \cline{4-6}
$(p_l(\tau_{i,j}),p_r(\tau_{i,j}))$ & $w(\tau_{i,j})$ & \hspace{2ex} & $p_r(\tau_{i+1,j})$ & $p_l(\tau_{i+1,j+1})$ & $d(\tau_{i,j})$ \\ \cline{1-2} \cline{4-6}
$(l,s)$ & $-q$ & \hspace{2ex} &
$s$ & $s$ & $0$ \\ \cline{1-2} \cline{4-6}
$(s,r)$ & $1$ & \hspace{2ex} &
$s$ & $l$, $al$ & $0$ \\ \cline{1-2} \cline{4-6}
$(l,ar)$ & $-q-qt$ & \hspace{2ex} &
$r$, $ar$ & $s$ & $0$\\ \cline{1-2} \cline{4-6}
$(al,r)$ & $1+t$ & \hspace{2ex} &
$r$ & $l$ & $-q$ \\ \cline{1-2} \cline{4-6}
$(s,ar)$ & $1-q-t$ & \hspace{2ex} &
$ar$ & $l$ & $q^2t$ \\ \cline{1-2} \cline{4-6}
$(al,s)$ & $1-q+qt$ & \hspace{2ex} &
$r$ & $al$ & $t$ \\ \cline{1-2} \cline{4-6}
$(al,ar)$ & $1-q$ & \hspace{2ex} &
$ar$ & $al$ & $-qt^2$ \\ \cline{1-2} \cline{4-6}
$(s,s)$ & $(1-q)(1-t)$ & \multicolumn{3}{c}{} \\ \cline{1-2}
\end{tabular}
\end{table}
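The entries of Table \ref{table:wd} follow mechanically from Definitions \ref{c,d} and \ref{d,w}, which a short script can confirm at arbitrary rational values of $q$ and $t$ (the property names `'l'`, `'al'`, `'r'`, `'ar'`, `'s'` are our own encoding):

```python
from fractions import Fraction as F

q, t = F(2, 3), F(1, 5)   # arbitrary test values for the parameters

def g(p):
    """g from Definition: -q, t, 1, -qt, 0 for l, al, r, ar, s."""
    return {'l': -q, 'al': t, 'r': F(1), 'ar': -q * t, 's': F(0)}[p]

def c(pl, pr):
    """c vanishes when the entry is left- or right-leaning."""
    return F(0) if (pl == 'l' or pr == 'r') else (1 - t) * (1 - q)

def w(pl, pr):
    return c(pl, pr) + g(pl) + g(pr)

table = {
    ('l', 's'): -q,
    ('s', 'r'): F(1),
    ('l', 'ar'): -q - q * t,
    ('al', 'r'): 1 + t,
    ('s', 'ar'): 1 - q - t,
    ('al', 's'): 1 - q + q * t,
    ('al', 'ar'): 1 - q,
    ('s', 's'): (1 - q) * (1 - t),
}
ok = all(w(pl, pr) == val for (pl, pr), val in table.items())
```

The $d$ column follows the same way, e.g. $g(ar)\cdot g(l) = q^2 t$ as listed.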
Example \ref{example:w,d} illustrates that to define $w(\tau_{i,j})$ and $d(\tau_{i,j})$, we only need to know two consecutive rows of a GT pattern. This leads us to the following definition.
\begin{Definition}
Suppose $\alpha$ is a strictly decreasing partition. We define ${GT_2}(\alpha)$ to be the set of all partitions $\mu$ such that the length of $\mu$ is one less than that of $\alpha$, and $\alpha_i \geq \mu_i \geq \alpha_{i+1}$ for all $i$. For $\alpha$ of length 1, we let ${GT_2}(\alpha) = \{\emptyset\}$.
\end{Definition}
\noindent This definition ensures that $\alpha$ and $\mu$ satisfy the interleaving condition, and so $\mu$ would be a valid weakly decreasing row directly below a row $\alpha$ in a GT pattern. Arranging $\alpha$ and $\mu$ in this manner, we are able to extend Definition \ref{d,w} to parts of $\mu$ and $\alpha$, and define $w(\mu_i)$ and $d(\alpha_i)$ for all appropriate $i$.
\begin{Definition}\label{matrix}
Let $\alpha$ and $\mu$ be partitions with $\alpha$ strictly decreasing of length $n$ and $\mu \in {GT_2}(\alpha)$. Then we define
\begin{equation}
\M{\alpha}{\mu} = \det
\begin{bmatrix}
w(\mu_1) & 1 & 0 & \; \ldots & 0 & 0\\
d(\alpha_2) & w(\mu_2) & 1 & \; \ldots & 0 & 0 \\
0 & d(\alpha_3) & w(\mu_3) & \; \ldots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots &\vdots \\
0 & 0 & 0 & \ldots & w(\mu_{n-2}) & 1 \\
0 & 0 & 0 & \ldots & d(\alpha_{n-1})& w(\mu_{n-1}) \\
\end{bmatrix}. \end{equation}
\end{Definition}
\noindent If $\alpha$ is of length 1, and $\mu = \emptyset \in {GT_2}(\alpha)$, we define $\M{\alpha}{\mu} =1$. We also extend this notation to any pair $(\alpha; \mu)$ by defining $\M{\alpha}{\mu} = 0$ whenever $\mu \notin {GT_2}(\alpha)$.
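Since the matrix in Definition \ref{matrix} is tridiagonal with superdiagonal $1$, its determinant obeys the three-term recursion $D_k = w(\mu_k) D_{k-1} - d(\alpha_k) D_{k-2}$, which is what cofactor expansion along the last row gives. A quick sketch checking the recursion against a direct determinant (with arbitrary rational entries standing in for the $w$ and $d$ values):

```python
from fractions import Fraction as F

def det(a):
    """Exact determinant by Gaussian elimination over the rationals."""
    a = [row[:] for row in a]
    n, sign, out = len(a), 1, F(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if a[r][col] != 0), None)
        if piv is None:
            return F(0)
        if piv != col:
            a[col], a[piv] = a[piv], a[col]
            sign = -sign
        out *= a[col][col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            a[r] = [a[r][k] - f * a[col][k] for k in range(n)]
    return sign * out

def tridiag(ws, ds):
    """Matrix with diagonal ws, superdiagonal 1, subdiagonal ds."""
    m = len(ws)
    a = [[F(0)] * m for _ in range(m)]
    for i in range(m):
        a[i][i] = ws[i]
        if i + 1 < m:
            a[i][i + 1] = F(1)
            a[i + 1][i] = ds[i]
    return a

ws = [F(1, 2), F(-2), F(3, 4), F(5)]
ds = [F(1, 3), F(-1), F(2)]
# three-term recursion: D_k = w_k D_{k-1} - d_{k-1} D_{k-2}
D = [F(1), ws[0]]
for k in range(1, len(ws)):
    D.append(ws[k] * D[-1] - ds[k - 1] * D[-2])
match = D[-1] == det(tridiag(ws, ds))
```

This recursion is the computational backbone of the inductive proof sketched after Theorem \ref{maintheorem}.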
\section{A Recursive Statement of Main Theorem}
We first notice that Tokuyama's formula can be restated as a summation over the set ${GT_2}(\a)$, as shown in \eqref{tokGT2}, where $\a = \lambda + \rho$ for some weakly decreasing partition $\lambda$.
Let $S_{\infty}$ denote the symmetric group on $\mathbb{N}$ and take $\zeta \in S_{\infty}$ to be the permutation which maps $k \mapsto k+1$ for each $k \in \mathbb{N}$. Then \eqref{toks} is equivalent to
\begin{equation}\label{tokGT2}
\defWD{n}{q} \cdot s_{\lambda}(x) = \sum_{\substack{\mu \in {GT_2}(\a) \\ \mu \text{ strict}}} (-q)^{l(\a;\mu)} (1-q)^{z(\a;\mu)} \ x_1^{|\a| - |\mu|} \ \zeta \left(\defWD{n-1}{q} \cdot s_{\mu-\rho}(x)\right).
\end{equation}
Here, the function $l(\a;\mu)$ is the number of `left-leaning' parts of $\mu$ with respect to $\a$, i.e. the number of parts $\mu_i$ which satisfy $\mu_i = \a_i$. The function $z(\a;\mu)$ is similarly defined, counting the `special' parts.
Re-expressing \eqref{tokseq} recursively as \eqref{tokGT2} motivates the search for a generalization of Tokuyama's formula expressed as a summation over the set ${GT_2}(\a)$, as below. This theorem is equivalent to Theorem \ref{MainTheorem2*}.
\begin{restatable}{thm}{Mainthree}\label{maintheorem}
Suppose $\lambda$ is a weakly decreasing partition of length $n$, and set $\a = \lambda + \rho$. Then, using the notation defined in Section \ref{addstats}, we have
\begin{equation} \label{maintheoremeq}
\defWD{n}{q} \cdot \HL{\lambda} = \sum_{\mu \in {GT_2}(\a)} \M{\a}{\mu} x_1^{|\a| - |\mu|} \ \zeta\left(\defWD{n-1}{q}\cdot \HL{\mu-\rho}\right). \end{equation}
\end{restatable}
\noindent It is worth noticing that unlike \eqref{tokGT2}, this expression requires all partitions in ${GT_2}(\a)$, including those that are non-strict, distinguishing it from Tokuyama's formula.
Theorem \ref{maintheorem} uses a determinant $\M{\a}{\mu}$ to determine the coefficient of the expression $x_1^{|\a| - |\mu|} \ \zeta\left(\defWD{n-1}{q}\cdot \HL{\mu-\rho}\right)$ in the relevant expansion of $\defWD{n}{q} \cdot \HL{\lambda}$. We use induction on the length of $\l$ and cofactor expansion of the determinant $\M{\a}{\mu}$ to prove Theorem \ref{maintheorem}. We omit the computations of the base cases in which $\l$ has length $1$ or $2$, which are easy to verify.
We move on to the general case of some weakly decreasing partition $\lambda$ of length $n>2$. For the remainder of this proof, we fix some general notation for partitions. Firstly, the partition $\lambda$ and its length $n$ are now fixed, and in any new notation to follow, these will remain independent of the variables in the expression. We consequently fix the partition $\a = \lambda + \rho$, also of length $n$. When referring to an arbitrary partition, we use $\kappa$ of length $m$. Finally, the partition $\mu$ will consistently be used as an arbitrary element of some set of the form ${GT_2}(\kappa)$.
Given a partition $\kappa = (\kappa_1,\ldots,\kappa_m)$, we will write
\begin{equation}
\exone{\kappa} = (\kappa_2,\ldots, \kappa_m) \text{ and }\extwo{\kappa} = (\kappa_3,\ldots,\kappa_m).
\end{equation}
Also, we define
\begin{equation}
\defdeltaone{i}{l}{t} := \prod_{\substack{1 \leq a \leq n \\ a \neq i}} (x_l - tx_a),
\end{equation}
and similarly
\begin{equation}
\defdeltatwo{i}{j}{l}{t} := \prod_{\substack{1 \leq a \leq n \\ a \neq i,j}} (x_l - tx_a), \qquad \text{and} \qquad
\defdeltathree{i}{j}{k}{l}{t} := \prod_{\substack{1 \leq a \leq n \\ a \neq i,j,k}} (x_l - tx_a).
\end{equation}
As with $\WD{n}$, we define $\deltaone{i}{l}=\defdeltaone{i}{l}{1}$ and similarly $\deltatwo{i}{j}{l}=\defdeltatwo{i}{j}{l}{1}$ and $\deltathree{i}{j}{k}{l} = \defdeltathree{i}{j}{k}{l}{1}$.
Our inductive hypotheses will be
\begin{equation}\label{indhyp1}
\HL{\exone{\l}} \cdot \defdeltatwo{1}{n}{1}{q} = \sum_{\mu \in {GT_2}(\exone{\a})} \M{\exone{\a}}{\mu} x_1^{|\exone{\a}| - |\mu|} \zeta\left( \HL{\mu-\rho} \right),
\end{equation}
and
\begin{equation}\label{indhyp2}
\HL{\extwo{\l}} \cdot \defdeltathree{1}{n-1}{n}{1}{q} = \sum_{\mu \in {GT_2}(\extwo{\a})} \M{\extwo{\a}}{\mu} x_1^{|\extwo{\a}| - |\mu|} \zeta\left(\HL{\mu-\rho}\right).
\end{equation}
Note that multiplying both sides of \eqref{indhyp1} by $\zeta\left(\defWD{n-1}{q}\right)$ and \eqref{indhyp2} by $\zetatwo{1}{2}\left( \defWD{n-2}{q}\right)$ gives the form seen in Theorem \ref{maintheorem}.
Prior to the proof, we require another operator: We generalize the notion of $\zeta$ to $\zeta_i \in S_{\infty}$ which is defined to take $k \mapsto k+1$ for each $k \in \mathbb{N}$ with $k \geq i$. Furthermore, we also define $\zeta_{i,j} = \zeta_{j,i} = \zeta_j \zeta_i \in S_{\infty}$ when $i < j$. We notice that these operators act on a polynomial $f(x) = f(x_1, \ldots, x_m)$ to obtain
\begin{equation}
\zeta_i(f(x))=f(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_{m+1}),
\end{equation}
and
\begin{equation}
\zeta_{i,j} (f(x)) = \zeta_{j,i} (f(x))=f(x_1, \ldots x_{i-1} , x_{i+1} , \ldots x_{j-1}, x_{j+1}, \ldots , x_{m+2}).
\end{equation}
Notice that $\zeta_1 = \zeta$, and we shall use the latter in the proof to follow.
We begin with a series of lemmas.
\begin{Lemma}\label{lambda12}
For an arbitrary partition $\kappa$ of length $m > 2$, we express $\HL{\kappa}$ recursively as
\begin{equation}\label{lambda1}
\HL{\kappa}=\sum_{1\leq i \leq m} x_i^{\kappa_1} \left(\defProdone{i}{i} \right) \zeta_i\left(\HL{\exone{\kappa}}\right),
\end{equation}
and
\begin{equation}\label{lambda2}
\HL{\kappa}=\sum_{1\leq i \leq m} \sum_{\substack{1 \leq j \leq m \\ j \neq i}} x_i^{\kappa_1}x_j^{\kappa_2} \left( \defProdone{i}{i} \defProdtwo{i}{j}{j} \right) \zeta_{i,j} \left(\HL{\extwo{\kappa}}\right).
\end{equation}
\end{Lemma}
\begin{proof}
Let $\psi_i$ be the cyclic permutation $(i\;i-1\;\cdots\;1)$, and let $H$ be the symmetric group acting on the $(m-1)$ indices $(2,3,\ldots,m)$. Then
\begin{align*}
\HL{\kappa}
& = \sum_{1 \leq i \leq m} \sum_{\sigma \in H} \psi_i \ \sigma \left(\frac{x^{{\kappa}} \defWD{m}{t}}{\WD{m}}\right)
= \sum_{1 \leq i \leq m} \sum_{\sigma \in H} \psi_i \ \sigma \left(x_1^{\kappa_1}\defProdone{1}{1} \; \zeta\left(\frac{x^{\exone{\kappa}} \defWD{m-1}{t}}{\WD{m-1}}\right)\right).
\end{align*}
Because $\sigma \in H$ does not permute $x_1$, we have
\begin{align*}
\HL{\kappa}
& =\sum_{1 \leq i \leq m} \psi_i \left(x_1^{\kappa_1}\defProdone{1}{1} \; \zeta\left(\HL{\exone{\kappa}}\right)\right) = \sum_{1\leq i \leq m} x_i^{\kappa_1} \left(\defProdone{i}{i} \right) \zeta_i\left(\HL{\exone{\kappa}}\right).
\end{align*}
We further obtain \eqref{lambda2} by applying \eqref{lambda1} to the $\HL{\exone{\kappa}}$ in \eqref{lambda1}.
\end{proof}
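As a sanity check on the recursion \eqref{lambda1}, both sides can be evaluated numerically for small $m$, taking $\HL{\kappa} = \sum_{\sigma\in S_m}\sigma\big(x^{\kappa}\,\defWD{m}{t}/\WD{m}\big)$ as in the proof above. The following sketch (exact rational arithmetic; the helper names are ours, not from the paper) compares the two sides at generic rational points:

```python
from fractions import Fraction
from itertools import permutations

def hl(kappa, xs, t):
    """Symmetrization formula: sum over S_m of
    sigma( x^kappa * prod_{i<j}(x_i - t*x_j) / prod_{i<j}(x_i - x_j) )."""
    m = len(kappa)
    total = Fraction(0)
    for p in permutations(xs):
        term = Fraction(1)
        for k, x in zip(kappa, p):
            term *= x ** k
        for i in range(m):
            for j in range(i + 1, m):
                term *= (p[i] - t * p[j]) / (p[i] - p[j])
        total += term
    return total

def rhs_recursion(kappa, xs, t):
    """Right-hand side of (lambda1): sum_i x_i^{kappa_1}
    * prod_{a != i} (x_i - t*x_a)/(x_i - x_a) * HL_{kappa'}(x with x_i removed)."""
    m = len(kappa)
    total = Fraction(0)
    for i in range(m):
        coeff = xs[i] ** kappa[0]
        for a in range(m):
            if a != i:
                coeff *= (xs[i] - t * xs[a]) / (xs[i] - xs[a])
        total += coeff * hl(kappa[1:], xs[:i] + xs[i + 1:], t)
    return total

xs = (Fraction(2), Fraction(3), Fraction(5))
t = Fraction(1, 7)
assert hl((2, 1, 0), xs, t) == rhs_recursion((2, 1, 0), xs, t)
```

Since both sides are rational functions of $x$ and $t$, agreement at generic rational points is strong evidence of the polynomial identity.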
\begin{Lemma}\label{originalproof}
Let $O_i= \big( (x_i - tx_1) x_i^{\lambda_1} - (x_1 -tx_i)x_1^{\lambda_1-\lambda_2}x_i^{\lambda_2} \big) / (x_i-tx_1)$. Then we have
\begin{multline}\label{originalproofeq}
\HL{\lambda} = \sum_{2 \leq i \leq n} O_i \left( \defProdone{i}{i}\right)
\zeta_i\left(\HL{\exone{\lambda}}\right) \\
-x_1^{\lambda_1-\lambda_2} \sum_{2 \leq i \leq n}\sum_{\substack{ 2 \leq j \leq n \\ j \neq i}}
tx_i^{\lambda_2} x_j^{\lambda_2} \defProdtwo{1}{i}{i}\defProdthree{1}{i}{j}{j} \zeta_{i,j}\left(\HL{\extwo{\lambda}}\right).
\end{multline}
\end{Lemma}
\begin{proof}
We begin by observing that
\begin{equation*}
0=\sum_{2 \leq i \leq n} \sum_{2\leq j \leq n} (-1)^{i+j}x_i^{\lambda_2} x_j^{\lambda_2}(x_i-x_j) \; \zetatwo{i}{j} \left( \WD{n-2} \cdot \HL{\extwo{\lambda}} \right)
\defdeltatwo{1}{i}{i}{t} \defdeltatwo{1}{j}{j}{t}.
\end{equation*}
This can easily be seen by swapping the subscripts $i$ and $j$ in the right hand side, revealing RHS $= -$RHS.
We divide through the equality above by $\WD{n}$, altering the products and the bounds of the summation, and multiply by $x_1(1-t)$ to find
\begin{equation*}
0 = \sum_{2 \leq i \leq n}\sum_{\substack{ 2 \leq j \leq n \\ j \neq i}} \frac{ x_1 (1-t)(x_j - tx_i) x_i^{\lambda_2} x_j^{\lambda_2}}{(x_j-x_1)(x_i-tx_1)} \zeta_{i,j}(\HL{\extwo{\lambda}})
\defProdone{i}{i} \defProdthree{1}{i}{j}{j} .
\end{equation*}
Using the identity \begin{equation*}
\frac{x_1(1-t)(x_j-tx_i)}{(x_j-x_1)(x_i-tx_1)} = \frac{(x_1-tx_i)(x_j-tx_1)}{(x_i-tx_1)(x_j-x_1)} + t\frac{x_i-x_1}{x_i-tx_1},
\end{equation*}
we break the double summation into two parts; in particular, if we take
\begin{equation}\label{L}
L = \sum_{2 \leq i \leq n}\sum_{\substack{ 2 \leq j \leq n \\ j \neq i}}
tx_i^{\lambda_2} x_j^{\lambda_2} \defProdtwo{1}{i}{i}\defProdthree{1}{i}{j}{j} \zeta_{i,j}\left(\HL{\extwo{\lambda}}\right),
\end{equation}
then
\begin{equation} \label{last}
0 =
L + \sum_{2 \leq i \leq n}\sum_{\substack{ 2 \leq j \leq n \\ j \neq i}}
\frac{(x_1-tx_i)x_i^{\lambda_2} x_j^{\lambda_2}}{(x_i-tx_1)}\zeta_{i,j}(\HL{\extwo{\lambda}})
\defProdone{i}{i} \defProdtwo{i}{j}{j}.
\end{equation}
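The rational-function identity invoked above can be confirmed with exact arithmetic at a few generic sample points; this short sketch (variable names ours) does so with Python's `fractions` module:

```python
from fractions import Fraction

# Check: x1(1-t)(xj - t*xi) / ((xj - x1)(xi - t*x1))
#      = (x1 - t*xi)(xj - t*x1) / ((xi - t*x1)(xj - x1)) + t(xi - x1)/(xi - t*x1)
def lhs(x1, xi, xj, t):
    return x1 * (1 - t) * (xj - t * xi) / ((xj - x1) * (xi - t * x1))

def rhs(x1, xi, xj, t):
    return ((x1 - t * xi) * (xj - t * x1)) / ((xi - t * x1) * (xj - x1)) \
           + t * (xi - x1) / (xi - t * x1)

for x1, xi, xj, t in [(Fraction(2), Fraction(3), Fraction(5), Fraction(1, 7)),
                      (Fraction(-1), Fraction(4), Fraction(9), Fraction(2, 3))]:
    assert lhs(x1, xi, xj, t) == rhs(x1, xi, xj, t)
```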
Now, one can see that
\begin{equation}\label{Weird2}
-x_1^{\lambda_2} \defProdone{1}{1} \zeta\left(\HL{\exone{\lambda}}\right)
= x_1^{\lambda_2} \sum_{2 \leq i \leq n} \defProdone{i}{i} \defProdtwo{1}{i}{1}
\frac{(x_1-tx_i) x_i^{\lambda_2}}{(x_i-tx_1)}\zeta_{1,i}\left(\HL{\extwo{\lambda}}\right)
\end{equation}
holds by expressing $\HL{\exone{\lambda}}$ of the left hand side explicitly using \eqref{lambda1} and rearranging the result.
We notice that the right hand side of \eqref{Weird2} is equivalent to setting $j=1$ in the double summation of \eqref{last}. Thus, adding either side of \eqref{Weird2} to either side of \eqref{last} respectively, we have
\begin{equation*}
- x_1^{\lambda_2} \defProdone{1}{1} \zeta\left(\HL{\exone{\lambda}}\right)
= L + \sum_{2 \leq i \leq n}\sum_{\substack{ 1 \leq j \leq n \\ j \neq i}}
\frac{(x_1-tx_i)x_i^{\lambda_2} x_j^{\lambda_2}}{(x_i-tx_1)}\zeta_{i,j}(\HL{\extwo{\lambda}})
\defProdone{i}{i} \defProdtwo{i}{j}{j} .
\end{equation*}
Multiplying through by $-x_1^{\lambda_1-\lambda_2}$, and adding
\[\displaystyle \sum_{2 \leq i \leq n} x_i^{\lambda_1} \defProdone{i}{i} \zeta_i(\HL{\exone{\lambda}}),\] to both sides, we may apply \eqref{lambda1} once on either side of the equality to obtain
\begin{equation*}
\HL{\lambda} =
-x_1^{\lambda_1-\lambda_2} L + \sum_{2 \leq i \leq n} \left( \defProdone{i}{i} \right) \left( x_i^{\lambda_1} - x_1^{\lambda_1-\lambda_2}x_i^{\lambda_2}\frac{x_1-tx_i}{x_i-tx_1} \right)
\zeta_{i}(\HL{\exone{\lambda}}).
\end{equation*}
Recalling $O_i$ as
\begin{equation*}
O_i = \frac{(x_i - tx_1) x_i^{\lambda_1} -(x_1 -tx_i)x_1^{\lambda_1-\lambda_2}x_i^{\lambda_2} }{x_i-tx_1}
= x_i^{\lambda_1} - x_1^{\lambda_1-\lambda_2}x_i^{\lambda_2}\frac{x_1-tx_i}{x_i-tx_1} ,
\end{equation*}
we combine the two summations on the right hand side and recall $L$ from \eqref{L} to write
\begin{multline*}
\HL{\lambda} = \sum_{2 \leq i \leq n} O_i \left( \defProdone{i}{i}\right)
\zeta_i\left(\HL{\exone{\lambda}}\right) \\
-x_1^{\lambda_1-\lambda_2} \sum_{2 \leq i \leq n}\sum_{\substack{ 2 \leq j \leq n \\ j \neq i}}
tx_i^{\lambda_2} x_j^{\lambda_2} \defProdtwo{1}{i}{i}\defProdthree{1}{i}{j}{j} \zeta_{i,j}\left(\HL{\extwo{\lambda}}\right).
\end{multline*}\qedhere
\end{proof}
We introduce two related functions that will be used in the upcoming lemmas. For some non-negative integers $u$ and $v$, we define
\begin{equation}
F_{\exone{\lambda}}(u) : = \sum_{\mu \in {GT_2}(\exone{\a})} \M{\exone{\a}}{\mu} x_1^{|\exone{\a}| - |\mu|} \zeta\left(\HL{\conc{(u)}{\mu-\rho}}\right),
\end{equation}
and
\begin{equation}
F_{\extwo{\lambda}}(u,v) : = \sum_{\mu \in {GT_2}(\extwo{\a})} \M{\extwo{\a}}{\mu} x_1^{|\extwo{\a}| - |\mu|} \zeta\left(\HL{\conc{(u,v)}{\mu-\rho}}\right).
\end{equation}
\begin{Lemma}\label{GT12}
Suppose $u$ and $v$ are some non-negative integers. Then, assuming the inductive hypotheses in \eqref{indhyp1} and \eqref{indhyp2}, we have
\begin{equation}\label{GT1}
F_{\exone{\lambda}}(u) = \sum_{2 \leq i \leq n} x_i^{u} \defProdtwo{1}{i}{i} \zeta_{i}\left(\HL{\exone{\lambda}}\right) \defdeltatwo{1}{i}{1}{q},
\end{equation}
and
\begin{equation}\label{GT2}
F_{\extwo{\lambda}}(u,v) = \sum_{2\leq i \leq n} \sum_{\substack{2 \leq j \leq n \\ j \neq i}} x_i^{u}x_j^{v}
\defProdtwo{1}{i}{i}\defProdthree{1}{i}{j}{j} \zeta_{i,j}\left(\HL{\extwo{\lambda}}\right) \defdeltathree{1}{i}{j}{1}{q}.
\end{equation}
\end{Lemma}
\begin{proof}
The proof of \eqref{GT2} is almost identical to that of \eqref{GT1}, apart from using \eqref{lambda2} and \eqref{indhyp2} in place of \eqref{lambda1} and \eqref{indhyp1}. For brevity, we only present the detailed proof of \eqref{GT1}.
By applying \eqref{lambda1} to write $\HL{ \conc{(u)}{(\mu-\rho)}}$ in terms of $u$ and $\HL{\mu-\rho}$, the left hand side of \eqref{GT1} becomes
\begin{equation*}
\sum_{\mu \in {GT_2}(\exone{\a})} \M{\exone{\a}}{\mu} x_1^{|\exone{\a}| - |\mu|} \; \zeta\left(\sum_{1\leq i \leq n-1} x_i^{u} \defProdtwo{i}{n}{i} \zeta_i\left(\HL{\mu-\rho}\right)\right).
\end{equation*}
We rearrange this expression to write this as
\begin{equation*}
\sum_{2 \leq i \leq n} x_i^{u} \defProdtwo{1}{i}{i} \zeta_i \left( \sum_{\mu \in {GT_2}(\exone{\a})} \M{\exone{\a}}{\mu} x_1^{|\exone{\a}| - |\mu|} \zeta\left(\HL{\mu-\rho}\right) \right).
\end{equation*}
Finally, we replace the argument of $\zeta_i$ using the inductive hypothesis in \eqref{indhyp1} to give the desired result.
\end{proof}
\begin{Lemma}\label{w}
Recall $O_i= \big( (x_i - tx_1) x_i^{\lambda_1} - (x_1 -tx_i)x_1^{\lambda_1-\lambda_2}x_i^{\lambda_2} \big) / (x_i-tx_1)$ from Lemma \ref{originalproof}. Then we have
\begin{equation}
\sum_{\mu \in {GT_2}(\a)} w(\mu_1)\M{\exone{\a}}{\exone{\mu}} x_1^{|\a| - |\mu|} \zeta\left(\HL{\mu-\rho}\right)
= \sum_{2 \leq i \leq n}
O_i \left( \defProdone{i}{i} \right)
\zeta_i\left(\HL{\exone{\l}}\right) \defdeltaone{1}{1}{q}.
\end{equation}
\end{Lemma}
\begin{proof}
First, notice that
$(x_i-tx_1)O_i/(x_i-x_1) = Q_i/({x_1-qx_i})$, where
\begin{align*}
Q_i & = -q x_i^{\lambda_1+1} +tx_1 x_i^{\lambda_1} -qtx_1^{\lambda_1-\lambda_2} x_i^{\lambda_2+1} \\ & \quad +x_1^{\lambda_1-\lambda_2+1} x_i^{\lambda_2} +\sum_{\lambda_2 < i \leq \lambda_1}(1-q)(1-t)x_1^{\lambda_1+1-i}x_i^{i}.
\end{align*}
This can be shown through simple algebraic manipulation, considering three cases for $\lambda$, namely $(1)$: $\lambda_1 = \lambda_2$, $(2)$: $\lambda_1 = 1 + \lambda_2$ and $(3)$: $\lambda_1 > 1 + \lambda_2$.
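The claimed identity $(x_i-tx_1)O_i/(x_i-x_1) = Q_i/(x_1-qx_i)$ can also be confirmed numerically in all three cases at once. In the sketch below (our own helper names), `l1, l2` play the roles of $\lambda_1, \lambda_2$, and the inner summation index of $Q_i$ is written `k` to avoid clashing with the variable $x_i$:

```python
from fractions import Fraction

def oi_numerator(l1, l2, x1, xi, t):
    # (x_i - t*x_1) * O_i, cleared of its denominator
    return (xi - t * x1) * xi**l1 - (x1 - t * xi) * x1**(l1 - l2) * xi**l2

def qi(l1, l2, x1, xi, t, q):
    # Q_i as displayed above; the sum is empty when l1 == l2
    return (-q * xi**(l1 + 1) + t * x1 * xi**l1
            - q * t * x1**(l1 - l2) * xi**(l2 + 1)
            + x1**(l1 - l2 + 1) * xi**l2
            + sum((1 - q) * (1 - t) * x1**(l1 + 1 - k) * xi**k
                  for k in range(l2 + 1, l1 + 1)))

x1, xi = Fraction(2), Fraction(5)
t, q = Fraction(1, 3), Fraction(2, 7)
# One instance of each of the three cases for lambda:
for l1, l2 in [(3, 3), (4, 3), (6, 2)]:
    assert oi_numerator(l1, l2, x1, xi, t) / (xi - x1) == qi(l1, l2, x1, xi, t, q) / (x1 - q * xi)
```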
Substituting the claim in the right hand side of the lemma gives
\begin{equation*}
\text{RHS} = \sum_{2 \leq i \leq n} \defProdtwo{1}{i}{i} \zeta_i\left(\HL{\exone{\lambda}}\right) Q_i \ \defdeltatwo{1}{i}{1}{q} .
\end{equation*}
Then, expanding $Q_i$ and applying \eqref{GT1}, we have
\begin{align*}
\text{RHS} &= -q \cdot F_{\exone{\lambda}}(\lambda_1+1)
+ tx_1\cdot F_{\exone{\lambda}}(\lambda_1)
-qtx_1^{\lambda_1-\lambda_2} \cdot F_{\exone{\lambda}}(\lambda_2+1) \\
& \qquad +x_1^{\lambda_1-\lambda_2+1}\cdot F_{\exone{\lambda}}(\lambda_2)
+ \sum_{\lambda_2 < k \leq \lambda_1} (1-q)(1-t)x_1^{\lambda_1+1-k}\cdot F_{\exone{\lambda}}(k).
\end{align*}
Examining Definition \ref{c,d}, we see that the first two coefficients above are precisely the nonzero possibilities of $g(p_l(\mu_1))$; the next two are precisely the nonzero possibilities of $g(p_r(\mu_1))$; and the final summation is over all the nonzero possibilities of $c(\mu_1)$. Recalling from Definition \ref{d,w} that $w(\mu_1)=c(\mu_1)+g(p_l(\mu_1))+g(p_r(\mu_1))$, we simply have
\begin{equation}
\text{RHS } = \sum_{\mu \in {GT_2}(\a)} w(\mu_1)\M{\exone{\a}}{\exone{\mu}} x_1^{|\a| - |\mu|} \; \zeta\left(\HL{\mu - \rho}\right). \qedhere
\end{equation}
\end{proof}
\begin{Lemma}\label{d}
We have
\begin{multline}
\sum_{\mu \in {GT_2}(\a)} d(\a_2)\M{\extwo{\a}}{\extwo{\mu}} x_1^{|\a| - |\mu|} \; \zeta\left(\HL{\mu - \rho}\right) \\
= x_1^{\lambda_1-\lambda_2}\sum_{2 \leq i \leq n}\sum_{\substack{ 2 \leq j \leq n \\ j \neq i}}
tx_i^{\lambda_2} x_j^{\lambda_2} \defProdtwo{1}{i}{i}\defProdthree{1}{i}{j}{j} \zeta_{i,j}\left(\HL{\extwo{\lambda}}\right) \defdeltaone{1}{1}{q}.
\end{multline}
\end{Lemma}
\begin{proof}
Expanding the factor $(x_1-qx_i)(x_1-qx_j)$ from the product $ \defdeltaone{1}{1}{q}$, the right hand side of the above equality becomes
\hspace{1ex}
\begin{equation*}
\sum_{2\leq i \leq n} \sum_{\substack{2 \leq j \leq n \\ j \neq i}} tx_1^{\lambda_1-\lambda_2}x_i^{\lambda_2}x_j^{\lambda_2} (x_1-qx_i)(x_1-qx_j) \defProdtwo{1}{i}{i}\defProdthree{1}{i}{j}{j} \zeta_{i,j}\left(\HL{\extwo{\lambda}}\right) \defdeltathree{1}{i}{j}{1}{q}.
\end{equation*}
We see from Proposition \ref{shift} that $F_{\extwo{\lambda}}(\lambda_2,\lambda_2+1) = t\cdot F_{\extwo{\lambda}}(\lambda_2+1,\lambda_2)$. Then distributing $(q^2 x_i x_j -q x_1 x_i -q x_1 x_j + x_1^{2})$ over the summation and applying \eqref{GT2} yields
\begin{align*}
\text{RHS} & = q^2t x_1^{\lambda_1-\lambda_2} \cdot F_{\extwo{\lambda}}(\lambda_2+1,\lambda_2+1) -q x_1^{\lambda_1-\lambda_2+1} \cdot F_{\extwo{\lambda}}(\lambda_2,\lambda_2+1) \\
& \quad -qt^2 x_1^{\lambda_1-\lambda_2+1} \cdot F_{\extwo{\lambda}}(\lambda_2+1,\lambda_2) +t x_1^{\lambda_1-\lambda_2+2} \cdot F_{\extwo{\lambda}}(\lambda_2,\lambda_2) .\\
\intertext{Recalling from Definition \ref{c,d} that $d(\a_2)=g(p_r(\mu_1)) \cdot g(p_l(\mu_2))$, we notice that each of the four coefficients in the previous expression corresponds exactly to each of the four possible nonzero values for $d(\a_2)$. Thus, we have}
\text{RHS} & = \sum_{\mu \in {GT_2}(\a)} d(\a_2)\M{\extwo{\a}}{\extwo{\mu}} x_1^{|\a| - |\mu|} \zeta\left(\HL{\mu - \rho}\right). \qedhere
\end{align*}
\end{proof}
We return to the proof of Theorem \ref{maintheorem}.
\begin{proof}
Lemma \ref{originalproof} gave us that
\begin{multline*}
\HL{\lambda} = \sum_{2 \leq i \leq n} O_i \left( \defProdone{i}{i}\right)
\zeta_i\left(\HL{\exone{\lambda}}\right) \\
-x_1^{\lambda_1-\lambda_2} \sum_{2 \leq i \leq n}\sum_{\substack{ 2 \leq j \leq n \\ j \neq i}}
tx_i^{\lambda_2} x_j^{\lambda_2} \defProdtwo{1}{i}{i}\defProdthree{1}{i}{j}{j} \zeta_{i,j}\left(\HL{\extwo{\lambda}}\right).
\end{multline*}
Furthermore, assuming the inductive hypotheses \eqref{indhyp1} and \eqref{indhyp2}, Lemmas \ref{w} and \ref{d} state that
\begin{equation*}
\sum_{\mu \in {GT_2}(\a)} w(\mu_1)\M{\exone{\a}}{\exone{\mu}} x_1^{|\a| - |\mu|} \zeta\left(\HL{\mu-\rho}\right)
= \sum_{2 \leq i \leq n}
O_i \left( \defProdone{i}{i} \right)
\zeta_i\left(\HL{\exone{\lambda}}\right) \defdeltaone{1}{1}{q},
\end{equation*}
and
\begin{multline*}
\sum_{\mu \in {GT_2}(\a)} d(\a_2)\M{\extwo{\a}}{\extwo{\mu}} x_1^{|\a| - |\mu|} \; \zeta\left(\HL{\mu-\rho}\right) \\
= x_1^{\lambda_1-\lambda_2}\sum_{2 \leq i \leq n}\sum_{\substack{ 2 \leq j \leq n \\ j \neq i}}
tx_i^{\lambda_2} x_j^{\lambda_2} \defProdtwo{1}{i}{i}\defProdthree{1}{i}{j}{j} \zeta_{i,j}\left(\HL{\extwo{\lambda}}\right) \defdeltaone{1}{1}{q}.
\end{multline*}
Hence, it is clear that
\vspace{-1ex}
\begin{multline*}
\defdeltaone{1}{1}{q} \cdot \HL{\lambda} = \sum_{\mu \in {GT_2}(\a)} w(\mu_1)\M{\exone{\a}}{\exone{\mu}} x_1^{|\a| - |\mu|} \zeta(\HL{\mu-\rho}) \\
- \sum_{\mu \in {GT_2}(\a)} d(\a_2)\M{\extwo{\a}}{\extwo{\mu}} x_1^{|\a| - |\mu|} \zeta(\HL{\mu-\rho}) .
\end{multline*}
Finally, recalling that $\M{\a}{\mu} = w(\mu_1)\M{\exone{\a}}{\exone{\mu}} - d(\a_2)\M{\extwo{\a}}{\extwo{\mu}}$ and observing that $\defWD{n}{q} = \defdeltaone{1}{1}{q} \cdot \zeta(\defWD{n-1}{q})$, we multiply by $\zeta(\defWD{n-1}{q})$ to conclude
\begin{equation*}
\defWD{n}{q} \cdot \HL{\lambda} = \sum_{\mu \in {GT_2}(\a)} \M{\a}{\mu} x_1^{|\a| - |\mu|} \zeta(\WD{n-1}(q)\cdot \HL{\mu-\rho}). \qedhere
\end{equation*}
\end{proof}
\section{Weakly Decreasing Partitions and Raising Operators}
Theorem \ref{maintheorem} expresses a Hall-Littlewood polynomial recursively in terms of Hall-Littlewood polynomials in one fewer variable.
The partitions $\mu \in {GT_2}(\a)$ indexing these polynomials are guaranteed to be weakly decreasing by the interleaving condition, but they are not necessarily strictly decreasing.
To express $\mu \in {GT_2}(\a)$ in terms of strictly decreasing partitions, we relate the Hall-Littlewood polynomial associated to a weakly decreasing partition to one associated to a specific strictly decreasing partition, which is related to the weakly decreasing partition through a specified sequence of Young's raising operators.
\begin{Definition}
A \emph{raising operator} ${\phi}$ is a product of operations $[i\;j]$ with $i\leq j$ acting on some finite tuple $\l$ of nonnegative integers such that
\begin{equation}
[i\;j] \cdot (\lambda_1,\ldots, \lambda_n)=(\lambda_1,\ldots,\lambda_i-1,\ldots,\lambda_j+1,\ldots,\lambda_n).
\end{equation}
\end{Definition}
\noindent (Note that these are the inverses of Young's raising operators as defined in \cite[Chapter I]{macdonaldbook}.)
The length of a raising operator ${\phi}$, denoted $l({\phi})$, is defined as the number of operators in the minimal decomposition of ${\phi}$ into elementary operators of the form $[i \; i+1]$. The identity raising operator Id acts trivially on the partition and is assigned length zero.
\begin{Definition}\label{thetaset}
Given a strictly decreasing partition $\a$ of length $n$, we recursively define ${\Omega}(\a)$ to be the set of raising operators such that
\begin{itemize}
\item The identity raising operator $\text{Id }\in {\Omega}(\a)$, and
\item for all raising operators ${\phi} \in {\Omega}(\a)$, if the tuple ${\phi}(\a) = (\a_1', \ldots, \a_n')$ contains consecutive parts $\a_i'$ and $\a_{i+1}'$ such that $\a_i' = \a_{i+1}'+ 2$, then $[i \; i+1] \cdot {\phi} \in {\Omega}(\a)$.
\end{itemize}
\end{Definition}
\begin{Example}
For the partition $\a=(6,4,3,1)$, we have the set
\[{\Omega}(\a) = \{\text{Id},[1\;2],[1\;3],[2\;4],[3\;4],[1\;2][3\;4] \}.\]
\end{Example}
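The set ${\Omega}(\a)$ in this example can be generated mechanically from Definition \ref{thetaset} by breadth-first search, identifying each raising operator with the tuple ${\phi}(\a)$ it produces together with its length $l({\phi})$. A small sketch (our own code, not from the paper):

```python
def omega_orbit(alpha):
    """Map phi(alpha) -> l(phi) for phi in Omega(alpha): repeatedly apply the
    elementary operator [i i+1] wherever consecutive parts differ by exactly 2."""
    orbit = {alpha: 0}
    frontier = [alpha]
    while frontier:
        new = []
        for a in frontier:
            for i in range(len(a) - 1):
                if a[i] == a[i + 1] + 2:
                    b = list(a)
                    b[i] -= 1
                    b[i + 1] += 1
                    b = tuple(b)
                    if b not in orbit:
                        orbit[b] = orbit[a] + 1
                        new.append(b)
        frontier = new
    return orbit

# Images of Id, [1 2], [3 4], [1 3], [2 4], [1 2][3 4] applied to (6,4,3,1):
assert omega_orbit((6, 4, 3, 1)) == {
    (6, 4, 3, 1): 0, (5, 5, 3, 1): 1, (6, 4, 2, 2): 1,
    (5, 4, 4, 1): 2, (6, 3, 3, 2): 2, (5, 5, 2, 2): 2}
```

Here, for instance, the operator $[1\;3]$ arises as the composition $[2\;3][1\;2]$, of length $2$, with image $(5,4,4,1)$.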
\begin{Prop}\label{shift}
Suppose $\l$ is some weakly decreasing partition and $\a = \l + \r$. Then, for each ${\phi}\in {\Omega}(\a)$, the following identity holds:
\begin{equation}\label{shifteq} \HL{{\phi}(\lambda)}= t^{l({\phi})} \cdot \HL{\lambda}. \end{equation}
\end{Prop}
\begin{proof}
It is clear that \eqref{shifteq} holds for ${\phi} = $ Id. Therefore, by the recursive definition of ${\Omega}$, it suffices to prove that for an arbitrary tuple $\l = (\l_1, \ldots, \l_n)$ and $\a = \l + \r$ such that $\a_i = \a_{i+1} + 2$, or equivalently $\l_i = \l_{i+1} + 1$, we have
\begin{equation}\label{shifteqn}
\HL{[i\;i+1](\lambda)}= t \cdot \HL{\lambda}.
\end{equation}
To prove \eqref{shifteqn}, let $g(x) = \defWD{n}{t}/(x_i - t x_{i+1})$ and $f(x)=x_1^{\lambda_1}\cdots x_{i-1}^{\lambda_{i-1}} \cdot x_{i+2}^{\lambda_{i+2}}\cdots x_n^{\lambda_n}$, noting that both $g(x)$ and $f(x)$ are invariant under the permutation $(i \; i+1)$. Then, if we take $\lambda_{i+1} = a$ and $\lambda_i = a+1$ for some integer $a$, and let $\sgn \sigma$ be the standard sign function for a permutation $\sigma \in S_n$, we see that
\begin{equation*}
\sum_{\sigma \in S_n}\left(\text{sgn }\sigma\right) \sigma\left(x^\lambda \defWD{n}{t}\right)
= \sum_{\sigma \in S_n}\left(\text{sgn }\sigma\right) \sigma\left(f(x)g(x)x_i^{a+2} x_{i+1}^{a} - tf(x)g(x)x_i^{a+1} x_{i+1}^{a+1}\right).
\end{equation*}
If some polynomial $h(x) = (i \; j)(h(x))$ for $(i \; j) \in S_n$, then it is known that $\sum_{\sigma \in S_n}\left(\text{sgn }\sigma\right) \sigma\left(h(x)\right)=0$.
Therefore, since $tf(x)g(x)x_i^{a+1} x_{i+1}^{a+1}$ is invariant under the permutation $(i \; i+1)$, we have
\begin{equation}\label{unraisedWD}
\sum_{\sigma \in S_n}\left(\text{sgn }\sigma\right) \sigma\left(x^\lambda \defWD{n}{t}\right) = \sum_{\sigma \in S_n}\left(\text{sgn }\sigma\right) \sigma\left(f(x)g(x)x_i^{a+2} x_{i+1}^{a}\right).
\end{equation}
Similarly, we can find that
\begin{align}
\sum_{\sigma \in S_n}\left(\text{sgn }\sigma\right) \sigma\left(x^{[i\;i+1](\lambda)}\defWD{n}{t}\right)
& = (- t) \cdot \sum_{\sigma \in S_n}\left(\text{sgn }\sigma\right) \sigma\left(f(x)g(x)x_i^{a} x_{i+1}^{a+2}\right) \nonumber \\ \label{raisedWD}
& = t \cdot \sum_{\sigma \in S_n}\left(\text{sgn }\sigma\right) \sigma\left(f(x)g(x)x_i^{a+2} x_{i+1}^{a}\right),
\end{align}
and combining \eqref{unraisedWD} and \eqref{raisedWD} returns
\begin{equation*}
\sum_{\sigma \in S_n}\left(\text{sgn }\sigma\right) \sigma\left(x^{[i\;i+1](\lambda)}\defWD{n}{t}\right)= t\cdot\sum_{\sigma \in S_n}\left(\text{sgn }\sigma\right) \sigma\left(x^{\lambda}\defWD{n}{t}\right).
\end{equation*}
Dividing through by $\WD{n}$ gives us \eqref{shifteqn}, as desired.
\end{proof}
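Proposition \ref{shift} is easy to test numerically with the symmetrization formula for $\HL{\lambda}$ used in the proof. In this sketch (helper names ours), $\lambda=(2,1,0)$ satisfies $\lambda_1=\lambda_2+1$ and $\lambda_2=\lambda_3+1$, so both $[1\;2]$ and $[2\;3]$ should multiply $\HL{\lambda}$ by $t$:

```python
from fractions import Fraction
from itertools import permutations

def hl(lam, xs, t):
    # sum over S_n of sigma( x^lam * prod_{i<j}(x_i - t*x_j) / prod_{i<j}(x_i - x_j) )
    n = len(lam)
    total = Fraction(0)
    for p in permutations(xs):
        term = Fraction(1)
        for k, x in zip(lam, p):
            term *= x ** k
        for i in range(n):
            for j in range(i + 1, n):
                term *= (p[i] - t * p[j]) / (p[i] - p[j])
        total += term
    return total

xs = (Fraction(2), Fraction(3), Fraction(5))
t = Fraction(1, 4)
base = hl((2, 1, 0), xs, t)
assert hl((1, 2, 0), xs, t) == t * base   # [1 2](lambda)
assert hl((2, 0, 1), xs, t) == t * base   # [2 3](lambda)
```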
\section{A Deformation of Tokuyama's Formula}
By enabling us to express Hall-Littlewood polynomials $\HL{\mu-\rho}$ with non-strict $\mu$ in terms of those with strict $\mu$, Proposition \ref{shift} allows us to restate Theorem \ref{maintheorem} non-recursively as Theorem \ref{MainTheorem2*}, providing a deformation of Tokuyama's formula.
\begin{restatable}[]{thm}{Maintwo}
\label{MainTheorem2*}
Suppose $\l$ is a weakly decreasing partition of length $n$. Then, the product $\defWD{n}{q} \cdot \HL{\lambda}$ of the deformed Weyl denominator and the Hall-Littlewood polynomial can be expressed as a summation over the set $\GT(\l+\r)$ of all strict Gelfand-Tsetlin patterns of top row $\l+\r$ by
\begin{equation}\label{MainTheorem2eq}
\defWD{n}{q}\cdot \HL{\lambda} = \sum_{T \in \GT(\lambda + \r)} \prod_{i=1}^{n-1}\left(\sum_{{\phi} \in {\Omega}(\row{i+1})}t^{l({\phi})}\M{\row{i}}{{\phi}(\row{i+1})}\right)x^{m(T)}.
\end{equation}
Here, the determinant $\M{r_i}{{\phi}(r_{i+1})}$ and the set ${\Omega}(r_{i+1})$ are defined in Definitions \ref{matrix} and \ref{thetaset} respectively.
\end{restatable}
\begin{proof}
We use induction, where our inductive hypothesis is that \eqref{MainTheorem2eq} holds for all Hall-Littlewood polynomials in $(n-1)$ variables. The base case formula of one variable is easy to check.
We now prove that \eqref{MainTheorem2eq} holds for a Hall-Littlewood polynomial corresponding to an arbitrary weakly decreasing partition $\lambda$ of length $n>1$, assuming the inductive hypothesis. Let $\a = \l+\r$.
We notice that, while $\a$ must be strictly decreasing, a partition $\mu \in {GT_2}(\a)$ need only be weakly decreasing.
If $\mu$ is not strictly decreasing, then the tuple $\mu - \rho$ is not weakly decreasing, so $\HL{\mu-\rho}$ is not indexed by a partition. Proposition \ref{shift} allows us to express these Hall-Littlewood polynomials in terms of some Hall-Littlewood polynomials that are obtained from strictly decreasing $\tilde \mu \in {GT_2}(\a)$. In particular, for each non-strict $\mu \in {GT_2}(\a)$, there exists a unique strict $\tilde \mu \in {GT_2}(\a)$ and a unique element ${\phi} \in {\Omega}(\tilde \mu)$ such that ${\phi}(\tilde \mu) = \mu$. Furthermore, for each ${\phi} \in {\Omega}(\tilde \mu)$, the tuple ${\phi}(\tilde \mu)$ will either be a valid element of ${GT_2}(\a)$ or will cause $\M{\a}{{\phi}(\tilde \mu)}=0$ and can be neglected. Hence, we may rewrite \eqref{maintheoremeq} of Theorem \ref{maintheorem} as an equivalent summation over strictly decreasing partitions in the set ${GT_2}(\a)$:
\begin{equation}\label{recurs}
\HL{\l} \cdot \defWD{n}{q} = \sum_{\substack{\mu \in {GT_2}(\a) \\ \mu \text{ strict}}} \sum_{{\phi} \in {\Omega}(\mu)} t^{l({\phi})}\M{\a}{{\phi}(\mu)} x_1^{|\a| - |\mu|} \zeta(\defWD{n-1}{q} \cdot \HL{\mu-\rho}).
\end{equation}
By applying the inductive hypothesis to the Hall-Littlewood polynomials $\HL{\mu-\rho}$ in $(n-1)$ variables in \eqref{recurs}, we arrive at \eqref{MainTheorem2eq}.
\end{proof}
\begin{Example}
\input{sA_example.tex}
\end{Example}
\section{Specializations}
The results of this paper generalize Tokuyama's formula and several other existing results. We demonstrate a few of these specializations.
\subsection{Tokuyama's formula.}
Recall from Definition \ref{HL} that $\HLtwo{\l}{x;0} = s_\l(x)$. We know that for all raising operators ${\phi} \neq \text{Id}$, the length of ${\phi}$ is at least $1$. Therefore, setting $t=0$ reduces Theorem \ref{MainTheorem2*} to
\begin{equation*}
\defWD{n}{q}\cdot s_{\lambda} = \sum_{T \in \GT(\a)} \prod_{i=1}^{n-1}\M{\row{i}}{\row{i+1}}x^{m(T)}.
\end{equation*}
Since only the identity raising operators survive at $t=0$, the sum runs over strict Gelfand-Tsetlin patterns, in which every row is strictly decreasing. This implies that consecutive entries cannot have $p_r(\tau_{i,j}) = r$ and $p_l(\tau_{i,j+1}) = l$, and consequently $d(\tau_{i,j})$ cannot be $-q$. All the remaining possibilities of $d(\tau_{i,j})$ reduce to $0$ when $t=0$. Thus, if we let $\row{i+1} = (\mu_1, \ldots, \mu_{n-i})$, the matrix $M(\row{i};\row{i+1})$ simplifies to
\begin{equation*}
M(\row{i};\row{i+1}) =
\begin{pmatrix}
w(\mu_1) & 1 & \; \ldots & 0\\
0 & w(\mu_2) & \; \ldots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \ldots & w(\mu_{n-i}) \\
\end{pmatrix}
\end{equation*}
Therefore, we have
\begin{equation}\label{toksimp}
\M{\row{i}}{\row{i+1}}=\prod_{k=1}^{n-i} w(\mu_k).\end{equation}
Finally, returning to Tokuyama's terminology of left-leaning, right-leaning and special entries from Definition \ref{GTstats}, we find that $w(\tau_{i,j})$ simplifies to
\begin{equation}
w(\tau_{i,j}) =
\begin{cases}
-q & \text{if $\tau_{i,j}$ is left-leaning}\\
1-q & \text{if $\tau_{i,j}$ is special} \\
1 & \text{if $\tau_{i,j}$ is right-leaning}
\end{cases},
\end{equation}
and, substituting this into \eqref{toksimp} from earlier, we conclude with Tokuyama's formula:
\begin{equation*}
\defWD{n}{q}\cdot s_{\lambda} = \sum_{T \in \GT(\a)} (-q)^{l(T)}(1-q)^{z(T)} x^{m(T)}.
\end{equation*}
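Tokuyama's formula can be verified directly for small $n$ by enumerating strict Gelfand-Tsetlin patterns. The sketch below is our own code, under our reading of the conventions (Definition \ref{GTstats} is not reproduced here): an entry equal to its upper-left neighbour is left-leaning, equal to its upper-right neighbour is right-leaning, otherwise special, and $x_k$ carries the difference of the $k$-th and $(k+1)$-st row sums:

```python
from fractions import Fraction
from itertools import permutations

def schur(lam, xs):
    # bialternant formula, via the t = 0 specialization of the HL symmetrization
    n = len(lam)
    total = Fraction(0)
    for p in permutations(xs):
        term = Fraction(1)
        for k, x in zip(lam, p):
            term *= x ** k
        for i in range(n):
            for j in range(i + 1, n):
                term *= p[i] / (p[i] - p[j])
        total += term
    return total

def strict_gt(top):
    # all strict Gelfand-Tsetlin patterns with the given strictly decreasing top row
    if len(top) == 1:
        return [[top]]
    rows = [[]]
    for j in range(len(top) - 1):
        rows = [r + [v] for r in rows
                for v in range(top[j + 1], top[j] + 1)
                if not r or r[-1] > v]
    return [[top] + rest for r in rows for rest in strict_gt(tuple(r))]

def tokuyama_term(T, xs, q):
    coeff = Fraction(1)
    for i in range(1, len(T)):
        for j, v in enumerate(T[i]):
            if v == T[i - 1][j]:        # left-leaning: factor -q
                coeff *= -q
            elif v != T[i - 1][j + 1]:  # special: factor 1-q (right-leaning: factor 1)
                coeff *= 1 - q
    sums = [sum(row) for row in T] + [0]
    for k in range(len(T)):
        coeff *= xs[k] ** (sums[k] - sums[k + 1])
    return coeff

lam, rho = (1, 0, 0), (2, 1, 0)
xs, q = (Fraction(2), Fraction(3), Fraction(5)), Fraction(1, 7)
lhs = schur(lam, xs)
for i in range(3):
    for j in range(i + 1, 3):
        lhs *= xs[i] - q * xs[j]
top = tuple(l + r for l, r in zip(lam, rho))
assert lhs == sum(tokuyama_term(T, xs, q) for T in strict_gt(top))
```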
Comparing the results of this paper with Tokuyama's formula reveals some interesting distinctions regarding the structure of the Hall-Littlewood polynomials in relation to the Schur polynomials. Theorem \ref{maintheorem} demonstrates that when expressing $\HL{\lambda}$ recursively in terms of $\HL{\mu-\rho}$, it is more natural to include several non-strictly decreasing partitions $\mu$ in the summation. This is not an issue for Tokuyama's formula, as the Schur polynomials $s_{\mu-\rho}(x)$ attached to such non-strictly decreasing $\mu$ simply vanish.
In fact, whilst Theorem \ref{MainTheorem2*} is stated as summations over strict GT patterns, the use of ${\Omega}$ is to allow an implicit summation over all possible non-strict rows. We thus naturally seek a way to consider the contributions of such rows directly, eliminating the more \emph{ad hoc} use of ${\Omega}$. Both theorems also highlight the added complexity in a Hall-Littlewood polynomial as they account for the ordering among the entries in a GT pattern instead of simply counting entries as Tokuyama does with $z(T)$ and $l(T)$.
\subsection{Stanley's formula.}
In \cite{stanley}, Stanley gave a formula for the Hall-Littlewood polynomials at the singular value $t=-1$, also known as the Schur $q$-polynomials, as a generating function of strict GT patterns of top row $\l$:
\begin{equation}\label{stanley's}
\HLtwo{\l}{x;-1} = \sum_{T \in \GT(\l)}2^{z(T)}x^{m(T)}.
\end{equation}
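This formula, too, can be spot-checked for small strict $\l$ by comparing the symmetrization formula for $\HLtwo{\l}{x;-1}$ with the generating function over strict GT patterns, weighting each pattern by $2^{z(T)}$ and by the differences of consecutive row sums. The code below is our own sketch, under the same weight conventions used for Tokuyama's formula above:

```python
from fractions import Fraction
from itertools import permutations

def hl(lam, xs, t):
    # sum over S_n of sigma( x^lam * prod_{i<j}(x_i - t*x_j) / prod_{i<j}(x_i - x_j) )
    n = len(lam)
    total = Fraction(0)
    for p in permutations(xs):
        term = Fraction(1)
        for k, x in zip(lam, p):
            term *= x ** k
        for i in range(n):
            for j in range(i + 1, n):
                term *= (p[i] - t * p[j]) / (p[i] - p[j])
        total += term
    return total

def strict_gt(top):
    # all strict Gelfand-Tsetlin patterns with the given strictly decreasing top row
    if len(top) == 1:
        return [[top]]
    rows = [[]]
    for j in range(len(top) - 1):
        rows = [r + [v] for r in rows
                for v in range(top[j + 1], top[j] + 1)
                if not r or r[-1] > v]
    return [[top] + rest for r in rows for rest in strict_gt(tuple(r))]

def stanley_term(T, xs):
    coeff = Fraction(1)
    for i in range(1, len(T)):
        for j, v in enumerate(T[i]):
            if v != T[i - 1][j] and v != T[i - 1][j + 1]:
                coeff *= 2            # special entry contributes a factor 2
    sums = [sum(row) for row in T] + [0]
    for k in range(len(T)):
        coeff *= xs[k] ** (sums[k] - sums[k + 1])
    return coeff

xs = (Fraction(2), Fraction(3), Fraction(5))
for lam in [(2, 0), (2, 1, 0), (3, 1, 0)]:
    n = len(lam)
    assert hl(lam, xs[:n], Fraction(-1)) == sum(stanley_term(T, xs[:n]) for T in strict_gt(lam))
```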
Tokuyama subsequently showed in \cite{tokuyama} that his formula yields \eqref{stanley's} when the deformation parameter $q$ is set to $-1$. Theorem \ref{MainTheorem2*} thus specializes to \eqref{stanley's} at $t=0$ and $q=-1$, by virtue of specializing to Tokuyama's result.
However, setting $t$ to $-1$ in Theorem \ref{MainTheorem2*} also gives a deformation along $q$ of \eqref{stanley's}, and we can show that this deformation reduces to \eqref{stanley's} at $q=0$.
If $\tau_{i,j}$ is an entry in a GT pattern, then we call $\tau_{i,j}$ a $p$ entry (where $p$ is a property) if either $p_l(\tau_{i,j})=p$ or $p_r(\tau_{i,j})=p$.
With this terminology, examining Theorem \ref{MainTheorem2*} at the singular values of $t=-1$ and $q=0$, we see that any pattern containing an $l$ entry gives an overall coefficient of zero, which is evident through cofactor expansion of $\M{\row{i}}{\row{i+1}}$.
Thus we may simply sum over the set $\GT l(\a) \subset \GT(\a)$ that contains all GT patterns without $l$ entries. Furthermore, any non-trivial element $\phi \in {\Omega}(r_i)$ will either cause an element of $\phi(r_i)$ to be an $l$ entry with respect to $r_{i+1}$, or result in $\phi(r_i) \notin {GT_2}(r_{i-1})$, both resulting in an overall coefficient of zero. Finally, we show that GT patterns with consecutive $r$ then $al$ entries give coefficient $0$.
Let $D=\prod_{i=2}^{n}\vert M(\row{i-1};\row{i})\vert$. Assume for the sake of contradiction that there is a GT pattern with consecutive $r$ then $al$ entries for which $D \neq 0$. Let $\tau_{i,j}$ and $\tau_{i,j+1}$ be the lowest consecutive $r$ then $al$ entries (there may be others on the same row, but none below). Then the $r$ entry $\tau_{i,j} = u$ for some $u \in \mathbb{N}$, and the $al$ entry $\tau_{i,j+1} = u-1$. Now consider the entry $\tau_{i+1,j+1}$. Since it cannot be $l$, or else $D=0$, we must have $\tau_{i+1,j+1}=u-1$. Then $w(\tau_{i+1,j+1}) = 0$; so to ensure $M(\row{i};\row{i+1})\neq 0$ (and consequently that $D \neq 0$), we must have that either $p_r(\tau_{i+1,j})=r$ or $p_l(\tau_{i+1,j+2})=al$. But this contradicts our hypothesis. Therefore, any GT pattern containing consecutive $r$ then $al$ entries yields an overall coefficient of zero.
Let the set $\GT l^*(\a) \subset \GT l(\a)$ contain all GT patterns without $l$ entries or consecutive $r$ then $al$ entries. For an entry $\tau_{i,j}$ in a strict GT pattern $T$, recall that $q$ divides $d(\tau_{i,j})$, and hence $d(\tau_{i,j})=0$ at $q=0$, unless $p_r(\tau_{i+1,j})=r$ and $p_l(\tau_{i+1,j+1})=al$. If $T \in \GT l^*(\a)$, then this cannot occur, so $d(\tau_{i,j})=0$ for all entries in any GT pattern $T \in \GT l^*(\a)$. Thus Theorem \ref{MainTheorem2*} simplifies to
\begin{equation}\label{lar}
\defWD{n}{0}\cdot \HLtwo{\l}{x;-1} = \sum_{T \in \GT l^*(\a)} \prod w(\tau_{i,j}) x^{m(T)},
\end{equation}
where the product is taken over all possible $\tau_{i,j}$.
This leads us to introduce a bijective mapping, similar to one used in \cite{tokuyama}:
\begin{equation}
\begin{array}{ll}
\; & \GT l^*(\a) \longrightarrow \GT(\l) \\
{\Theta} : & \tau_{i,j} \longmapsto \tau_{i,j}+j-n \\
\; & m(T) \longmapsto m(T)-\rho
\end{array}.
\end{equation}
After applying this mapping, we have that $w(\tau_{i,j})=2$ for all \emph{special} entries and $w(\tau_{i,j}) = 1$ otherwise, and thus \eqref{lar} reduces to
\begin{equation*}
x^\r\cdot \HLtwo{\l}{x;-1} = x^\r\cdot \sum_{T \in \GT(\lambda)} 2^{z(T)} x^{m(T)}.
\end{equation*}
Dividing out by $x^\r$, we obtain \eqref{stanley's}.
\subsection{Monomial symmetric function.}
Recall from Definition \ref{HL} that the monomial symmetric function
\begin{equation} \label{monomial}
m_\l(x) = \sum_{\sigma \in S_n} \sigma(x^\l).
\end{equation}
We note that, since $\HLtwo{\l}{x;1}=m_\l(x)$, we may obtain a new $q$-deformation of \eqref{monomial} by setting $t=1$ in Theorem \ref{MainTheorem2*}, although the result does not appear as elegant as the $t=0$ and $t=-1$ cases. This specialization is, however, not difficult to prove independently.
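The $t=1$ specialization itself is immediate from the symmetrization formula, since at $t=1$ the factor $\defWD{n}{t}/\WD{n}$ collapses to $1$; a short numerical check (our own code):

```python
from fractions import Fraction
from itertools import permutations

def hl(lam, xs, t):
    # sum over S_n of sigma( x^lam * prod_{i<j}(x_i - t*x_j) / prod_{i<j}(x_i - x_j) )
    n = len(lam)
    total = Fraction(0)
    for p in permutations(xs):
        term = Fraction(1)
        for k, x in zip(lam, p):
            term *= x ** k
        for i in range(n):
            for j in range(i + 1, n):
                term *= (p[i] - t * p[j]) / (p[i] - p[j])
        total += term
    return total

xs = (Fraction(2), Fraction(3), Fraction(5))
lam = (3, 1, 0)
# at t = 1 every ratio (p_i - t*p_j)/(p_i - p_j) equals 1, leaving sum_sigma sigma(x^lam)
monomial_sum = sum(p[0]**lam[0] * p[1]**lam[1] * p[2]**lam[2] for p in permutations(xs))
assert hl(lam, xs, Fraction(1)) == monomial_sum
```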
\section{Acknowledgments}
The authors thank Paul Gunnells for suggesting this area of research and for his invaluable guidance, and are likewise grateful to Lucia Mocz for her dedicated mentoring. In addition, they would like to thank Glenn Stevens and the PROMYS program, without which this research could not have been pursued.
arXiv:1403.8139, "A Generalization of Tokuyama's Formula to the Hall-Littlewood Polynomials" (Combinatorics, math.CO). https://arxiv.org/abs/1403.8139
https://arxiv.org/abs/2006.16562 | Nonlinear Matrix Concentration via Semigroup Methods

Abstract: Matrix concentration inequalities provide information about the probability that a random matrix is close to its expectation with respect to the $\ell_2$ operator norm. This paper uses semigroup methods to derive sharp nonlinear matrix inequalities. In particular, it is shown that the classic Bakry--{\'E}mery curvature criterion implies subgaussian concentration for "matrix Lipschitz" functions. This argument circumvents the need to develop a matrix version of the log-Sobolev inequality, a technical obstacle that has blocked previous attempts to derive matrix concentration inequalities in this setting. The approach unifies and extends much of the previous work on matrix concentration. When applied to a product measure, the theory reproduces the matrix Efron--Stein inequalities due to Paulin et al. It also handles matrix-valued functions on a Riemannian manifold with uniformly positive Ricci curvature.

\section{Motivation}
Matrix concentration inequalities describe the probability that a random matrix is close to its expectation,
with deviations measured in the $\ell_2$ operator norm.
The basic models---sums of independent random matrices
and matrix-valued martingales---have been studied extensively,
and they admit a wide spectrum of applications~\cite{tropp2015introduction}.
Nevertheless, we lack a complete understanding of more general
random matrix models. The purpose of this paper is to develop
a systematic approach for deriving ``nonlinear'' matrix
concentration inequalities.
In the scalar setting, functional inequalities
offer a powerful framework for studying nonlinear concentration.
For example, consider a real-valued Lipschitz function $f(Z)$ of
a real random variable $Z$ with distribution $\mu$.
If the measure $\mu$ satisfies a Poincar{\'e} inequality,
then the variance of $f(Z)$ is controlled by the squared
Lipschitz constant of $f$. If the measure satisfies
a log-Sobolev inequality, then $f(Z)$
enjoys subgaussian concentration
on the scale of the Lipschitz constant.
Now, suppose that we can construct a semigroup,
acting on real-valued functions, with stationary distribution $\mu$.
Functional inequalities for the measure $\mu$
are intimately related to the convergence of the semigroup.
In particular, the measure admits a Poincar{\'e} inequality
if and only if the semigroup rapidly tends to
equilibrium (in the sense that the variance is exponentially ergodic).
Meanwhile, log-Sobolev inequalities are associated with
finer types of ergodicity.
In recent years, researchers have attempted to use functional
inequalities and semigroup tools to prove matrix concentration
results. So far, these arguments have met some success,
but they are not strong enough to reproduce the results that are
already available for the simplest random matrix models.
The main obstacle has been the lack of a suitable extension of
the log-Sobolev inequality to the matrix setting.
See Section~\ref{sec:concentration_history} for an account of prior work.
The purpose of this paper is to advance the theory of
semigroups acting on matrix-valued functions and to
apply these methods to obtain matrix concentration
inequalities for nonlinear random matrix models.
To do so, we argue that the classic Bakry--{\'E}mery
curvature criterion for a semigroup acting on real-valued
functions ensures that an associated matrix semigroup
also satisfies a curvature condition. This property
further implies local ergodicity of the matrix semigroup,
which we can use to prove strong bounds on the trace
moments of nonlinear random matrix models.
The power of this approach is that the Bakry--{\'E}mery
condition has already been verified for a large number
of semigroups. We can exploit these results to identify
many new settings where matrix concentration is in force.
This program entirely evades the question about the proper way
to extend log-Sobolev inequalities to matrices.
Our approach reproduces many existing results from
the theory of matrix concentration, such as the matrix
Efron--Stein inequalities~\cite{paulin2016efron}.
Among other new results, we can achieve subgaussian concentration
for a matrix-valued ``Lipschitz'' function on a positively curved
Riemannian manifold. Here is a simplified formulation of this fact.
\begin{theorem}[Euclidean submanifold: Subgaussian concentration]
\label{thm:riemann-simple}
Let $M$ be a compact $n$-dimensional Riemannian submanifold
of a Euclidean space, and let $\mu$ be the uniform measure on $M$.
Suppose that the eigenvalues of the Ricci curvature tensor of $M$ are uniformly
bounded below by $\rho$. Let $\mtx{f} : M \to \mathbb{H}_d$ be a differentiable function.
For all $t \geq 0$, $$
\mathbb{P}_{\mu} \big\{ \norm{ \smash{\mtx{f} - \Expect_{\mu} \mtx{f}} } \geq t \big\}
\leq 2d \, \exp\left( \frac{-\rho t^2}{2 v_{\mtx{f}}} \right)
\quad\text{where}\quad
v_{\mtx{f}} := \sup\nolimits_{x \in M} \norm{ \sum_{i=1}^n (\partial_i \mtx{f}(x))^2 }.
$$
Furthermore, for $q = 2$ and $q\geq 3$,
$$
\left[ \operatorname{\mathbbm{E}}_{\mu} \operatorname{tr} ( \mtx{f} - \operatorname{\mathbbm{E}}_{\mu} \mtx{f} )^q \right]^{1/q}
\leq \rho^{-1/2} \sqrt{q - 1} \left[
\operatorname{\mathbbm{E}}_{\mu} \operatorname{tr} \left( \sum_{i=1}^n (\partial_i \mtx{f})^2 \right)^{q/2} \right]^{1/q}.
$$
The real-linear space $\mathbb{H}_d$ contains all $d \times d$ Hermitian matrices,
and $\norm{\cdot}$ is the $\ell_2$ operator norm. The operators $\partial_i$
compute partial derivatives in local (normal) coordinates.
\end{theorem}
Theorem~\ref{thm:riemann-simple} follows from abstract
concentration inequalities (Theorem~\ref{thm:polynomial_moment} and Theorem~\ref{thm:exponential_concentration})
and the classic fact that the Brownian motion on a positively curved
Riemannian manifold satisfies the Bakry--{\'E}mery criterion~\cite[Sec.~1.16]{bakry2013analysis}.
See Section~\ref{sec:concentration_results_Riemannian} for details.
Particular settings where the theorem is valid
include the unit Euclidean sphere and the special orthogonal group. The variance proxy $v_{\mtx{f}}$ is analogous to the squared Lipschitz
constant that appears in scalar concentration results. We emphasize that
$\partial_i \mtx{f}$ is an Hermitian matrix, and the variance proxy involves
a sum of the matrix squares. Thus, the ``Lipschitz constant'' is tailored to
the matrix setting.
As a concrete example, consider the $n$-dimensional sphere $\mathbb{S}^n \subset \mathbbm{R}^{n+1}$,
with uniform measure $\sigma_n$ and curvature $\rho = n - 1$.
Let $\mtx{A}_1, \dots, \mtx{A}_{n+1} \in \mathbb{H}_d$
be fixed matrices. Construct the random matrix
$$
\mtx{f}(x) = \sum_{i=1}^{n+1} x_i \mtx{A}_i
\quad\text{where $x \sim \sigma_n$.}
$$
By symmetry, $\Expect_{\sigma_n} \mtx{f} = \mtx{0}$.
Moreover, the variance proxy
$v_{\mtx{f}} \leq \norm{ \sum_{i=1}^{n+1} \mtx{A}_i^2 }$.
Thus, Theorem~\ref{thm:riemann-simple} delivers the bound
$$
\mathbb{P}_{\sigma_n} \big\{ \norm{ \mtx{f} } \geq t \big\}
\leq 2d \exp\left( \frac{-(n-1) t^2}{2 v_{\mtx{f}}} \right).
$$
See Section~\ref{sec:riemann-exp} for more instances of Theorem~\ref{thm:riemann-simple} in action.
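As a numerical sanity check (ours, not from the paper), one can sample the uniform measure on the sphere with arbitrary Gaussian coefficient matrices $\mtx{A}_i$ and compare the empirical tail of $\norm{\mtx{f}}$ with the bound above. The sketch below picks $t$ so that the theoretical bound equals $0.05$ and confirms that the observed tail stays below it (up to sampling slack):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, N = 100, 2, 1000                      # sphere S^n in R^{n+1}, d x d matrices, N samples

# Illustrative fixed Hermitian coefficients A_1, ..., A_{n+1}
G = rng.standard_normal((n + 1, d, d))
A = (G + G.transpose(0, 2, 1)) / 2          # symmetrize -> real symmetric (Hermitian)

v = np.linalg.norm(np.einsum('ikl,ilm->km', A, A), 2)   # v_f <= || sum_i A_i^2 ||

# Uniform samples on the sphere: normalized Gaussian vectors
X = rng.standard_normal((N, n + 1))
X /= np.linalg.norm(X, axis=1, keepdims=True)

norms = np.array([np.linalg.norm(np.einsum('i,ikl->kl', x, A), 2) for x in X])

# Choose t so that the bound 2d exp(-(n-1) t^2 / (2v)) equals 0.05
t = np.sqrt(2 * v * np.log(2 * d / 0.05) / (n - 1))
empirical = np.mean(norms >= t)
assert empirical <= 0.05 + 0.05             # empirical tail below the bound, with slack
```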
\begin{remark}[Noncommutative moment inequalities]
After this paper was complete, we learned that Junge \& Zeng~\cite{junge2015noncommutative}
have developed a similar method, based on a noncommutative Bakry--{\'E}mery criterion,
to obtain moment inequalities in the setting of a von Neumann algebra equipped
with a noncommutative diffusion semigroup. Their results are not fully comparable
with ours, so we will elaborate on the relationship as we go along.
\end{remark}
\section{Matrix Markov semigroups: Foundations}
\label{sec:matrix_Markov_semigroups}
To start, we develop some basic facts about an important class of Markov semigroups that acts on matrix-valued functions. Given a Markov process, we define the associated matrix Markov semigroup and its infinitesimal generator. Then we construct the matrix carr\'e du champ operator and the Dirichlet form. Afterward, we outline the connection between convergence properties of the semigroup and Poincar{\'e} inequalities. Parts of our treatment are adapted from~\cite{cheng2017exponential,ABY20:Matrix-Poincare}, but some elements appear to be new.
\subsection{Notation}
Let $\mathbb{M}_d$ be the algebra of all $d \times d$ complex matrices.
The real-linear subspace $\mathbb{H}_d$ contains all Hermitian matrices, and $\mathbb{H}_d^+$ is the cone of all positive-semidefinite matrices. Matrices are written in boldface. In particular, $\mathbf{I}_d$ is the $d$-dimensional identity matrix, while $\mtx{f}$, $\mtx{g}$ and $\mtx{h}$ refer to matrix-valued functions. We use the symbol $\preccurlyeq$ for the semidefinite partial order on Hermitian matrices:
For matrices $\mtx{A}, \mtx{B} \in \mathbb{H}_d$, the inequality $\mtx{A}\preccurlyeq \mtx{B}$ means that $\mtx{B}-\mtx{A}\in \mathbb{H}_d^+$.
For a matrix $\mtx{A}\in\mathbb{M}_d$, we write $\|\mtx{A}\|$ for the $\ell_2$ operator norm, $\|\mtx{A}\|_\mathrm{HS}$ for the Hilbert--Schmidt norm, and $\operatorname{tr} \mtx{A}$ for the trace. The normalized trace is defined as
$\operatorname{\bar{\trace}} \mtx{A} := d^{-1} \operatorname{tr} \mtx{A}$. Nonlinear functions bind before the trace.
Given a scalar function $\varphi:\mathbb{R}\rightarrow \mathbb{R}$, we construct
the \emph{standard matrix function} $\phi : \mathbb{H}_d \to \mathbb{H}_d$ using the eigenvalue decomposition:
\[\phi(\mtx{A}) := \sum_{i=1}^d \varphi(\lambda_i) \, \vct{u}_i\vct{u}_i^* \quad \text{where}\quad \mtx{A} = \sum_{i=1}^d \lambda_i \,\vct{u}_i\vct{u}_i^*. \]
We constantly rely on basic tools from matrix theory; see~\cite{carlen2010trace}.
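As a concrete illustration (ours, with an arbitrary test matrix), the spectral construction above can be implemented directly and checked against matrix operations computed another way:

```python
import numpy as np
from scipy.linalg import expm

def standard_matrix_function(phi, A):
    """Apply a scalar function phi spectrally to a Hermitian matrix A:
    phi(A) = sum_i phi(lambda_i) u_i u_i^*."""
    lam, U = np.linalg.eigh(A)
    return (U * phi(lam)) @ U.conj().T

A = np.array([[2.0, 1.0], [1.0, 3.0]])      # arbitrary Hermitian test matrix

# phi(x) = x^2 agrees with the ordinary matrix square; phi(x) = e^x with expm
assert np.allclose(standard_matrix_function(np.square, A), A @ A)
assert np.allclose(standard_matrix_function(np.exp, A), expm(A))
```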
Let $\Omega$ be a Polish space equipped with a probability measure $\mu$. Define $\Expect_\mu$ and $\mathrm{Var}_{\mu}$ to be the expectation and variance of a real-valued function with respect to the measure $\mu$. When applied to a random matrix, $\Expect_\mu$ computes the entrywise expectation.
Nonlinear functions bind before the expectation.
\subsection{Markov semigroups acting on matrices}
This paper focuses on a special class of Markov semigroups acting on matrices.
In this model, a classical Markov process drives the evolution of a matrix-valued function.
Remark~\ref{rem:nc-semigroup} mentions some generalizations.
Suppose that $(Z_t)_{t\geq0} \subset \Omega$ is a time-homogeneous Markov process
on the state space $\Omega$ with stationary measure $\mu$.
For each matrix dimension $d \in \mathbbm{N}$, we can construct a Markov semigroup $(P_t)_{t\geq0}$
that acts on a (bounded) measurable matrix-valued function
$\mtx{f} : \Omega\rightarrow\mathbb{H}_d$ according to
\begin{equation} \label{eqn:semigroup}
(P_t\mtx{f})(z) := \Expect[\mtx{f}(Z_t)\,|\,Z_0 = z]\quad \text{for all $t\geq 0$ and all $z\in \Omega$}.
\end{equation}
The semigroup property $P_{t+s} = P_{t}P_{s} = P_{s}P_{t}$ holds for all $s, t \geq 0$
because $(Z_t)_{t\geq 0}$ is a homogeneous Markov process.
Note that the operator $P_0$ is the identity map: $P_0 \mtx{f} = \mtx{f}$.
For a fixed $\mtx{A} \in \mathbb{H}_d$, regarded as a constant function on $\Omega$,
the semigroup also acts as the identity: $P_t \mtx{A} = \mtx{A}$ for all $t\geq0$. Furthermore,
$\operatorname{\mathbbm{E}}_{\mu}[ P_t \mtx{f} ] = \operatorname{\mathbbm{E}}_{\mu}[ \mtx{f} ]$ because $Z_0 \sim \mu$ implies
that $Z_t \sim \mu$ for all $t \geq 0$. We use these facts without comment.
Although~\eqref{eqn:semigroup} defines a family of semigroups indexed by the matrix dimension $d$,
we will abuse terminology and speak of this collection as if it were a single semigroup.
A major theme of this paper is that facts about the action of the semigroup~\eqref{eqn:semigroup}
on real-valued functions ($d = 1$) imply parallel facts about the action on matrix-valued functions
($d \in \mathbbm{N}$).
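The defining properties of~\eqref{eqn:semigroup} are easy to observe on a toy model. The following Python sketch (our illustration; the two-state generator $Q$ and the matrix values are arbitrary choices, not taken from the paper) verifies the semigroup property and the invariance of the stationary mean:

```python
import numpy as np
from scipy.linalg import expm

# Toy two-state Markov jump process: generator Q with zero row sums,
# stationary measure mu satisfying mu @ Q = 0.
Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
mu = np.array([1/3, 2/3])

def P(t):
    return expm(t * Q)                      # transition semigroup P_t = e^{tQ}

# A matrix-valued function f: state i -> Hermitian matrix F[i]
F = np.array([[[1.0, 0.5], [0.5, 0.0]],
              [[0.0, -0.5], [-0.5, 2.0]]])

def Pt_f(t):
    # (P_t f)(i) = E[f(Z_t) | Z_0 = i] = sum_j P_t(i, j) F[j], entrywise
    return np.einsum('ij,jkl->ikl', P(t), F)

# Semigroup property P_{t+s} = P_t P_s and invariance of the stationary mean
assert np.allclose(P(0.3 + 0.7), P(0.3) @ P(0.7))
assert np.allclose(np.einsum('i,ikl->kl', mu, Pt_f(1.5)),
                   np.einsum('i,ikl->kl', mu, F))
```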
\begin{remark}[Noncommutative semigroups] \label{rem:nc-semigroup}
There is a very general class of noncommutative semigroups acting
on a von Neumann algebra
where the action is determined by a family of completely
positive unital maps~\cite{junge2015noncommutative}. This framework includes~\eqref{eqn:semigroup}
as a special case; it covers quantum semigroups~\cite{cheng2017exponential}
acting on $\mathbb{H}_d$ with a fixed matrix dimension $d$;
it also includes more exotic examples. We will not study
these models, but we will discuss the relationship
between our results and prior work.
\end{remark}
\subsection{Ergodicity and reversibility}
We say that the semigroup $(P_t)_{t\geq0}$ defined in~\eqref{eqn:semigroup}
is \emph{ergodic} if
\begin{equation*}\label{eqn:ergodicity_scalar}
P_tf \rightarrow \Expect_\mu f\quad \text{in $L_2(\mu)$}\quad \text{as}\quad t\rightarrow+\infty \quad \text{for all $f:\Omega\rightarrow\mathbb{R}$}.
\end{equation*}
Furthermore, $(P_t)_{t\geq0}$ is \emph{reversible} if each operator $P_t$
is a \emph{symmetric} operator
on $L_2(\mu)$. That is,
\begin{equation}\label{eqn:reversibility_scalar}
\Expect_\mu [(P_tf) \, g] = \Expect_\mu [f \, (P_tg)] \quad \text{for all $t\geq0$ and all $f,g:\Omega\rightarrow\mathbb{R}$}.
\end{equation}
Note that these definitions involve only real-valued functions $(d = 1)$.
In parallel, we say that the Markov process $(Z_t)_{t\geq0}$ is reversible (resp.~ergodic) if the associated Markov semigroup $(P_t)_{t\geq0}$ is reversible (resp.~ergodic). The reversibility of the process $(Z_t)_{t\geq0}$ implies that, when $Z_0 \sim \mu$, the pair $(Z_t, Z_0)$ is \emph{exchangeable} for all $t \geq 0$. That is, $(Z_t,Z_0)$ and $(Z_0,Z_t)$ follow the same distribution for all $t\geq0$.
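For a chain on a finite state space, exchangeability of $(Z_0, Z_t)$ amounts to symmetry of the joint law $\mu_i (P_t)_{ij}$, and reversibility to the detailed-balance condition $\mu_i Q_{ij} = \mu_j Q_{ji}$. A quick check on a toy chain of our choosing:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative reversible two-state chain: generator Q, stationary measure mu
Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
mu = np.array([1/3, 2/3])

# Detailed balance: diag(mu) Q is a symmetric matrix
assert np.allclose(np.diag(mu) @ Q, (np.diag(mu) @ Q).T)

# Exchangeability of (Z_0, Z_t): the joint law mu_i P_t(i, j) is symmetric
for t in (0.5, 2.0):
    joint = np.diag(mu) @ expm(t * Q)
    assert np.allclose(joint, joint.T)
```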
Our matrix concentration results require ergodicity and reversibility
of the semigroup action on matrix-valued functions.
These properties are actually a consequence of the
analogous properties for real-valued functions.
Evidently, the ergodicity of $(P_t)_{t\geq0}$ is equivalent with
the statement
\begin{equation}\label{eqn:ergodicity}
P_t\mtx{f} \rightarrow \Expect_\mu \mtx{f}\quad \text{in $L_2(\mu)$}\quad \text{as}\quad t\rightarrow+\infty \quad \text{for all $\mtx{f}:\Omega\rightarrow\mathbb{H}_d$ and each $d \in \mathbbm{N}$.}
\end{equation}
Note that the $L_2(\mu)$ convergence in the matrix setting means $\lim_{t\rightarrow \infty}\Expect_\mu(P_t\mtx{f}-\Expect_\mu \mtx{f})^2= \mtx{0}$, which is readily implied by the $L_2(\mu)$ convergence of all entries of $P_t\mtx{f}-\Expect_\mu \mtx{f}$.
As for reversibility, we have the following result.
\begin{proposition}[Reversibility] \label{prop:reversibility}
Let $(P_t)_{t \geq 0}$ be the family of semigroups defined in~\eqref{eqn:semigroup}.
The following are equivalent.
\begin{enumerate}
\item The semigroup acting on real-valued functions is symmetric, as in~\eqref{eqn:reversibility_scalar}.
\item The semigroup acting on matrix-valued functions is symmetric. That is, for each $d \in \mathbbm{N}$,
\begin{equation}\label{eqn:reversibility_1}
\Expect_\mu [(P_t\mtx{f}) \, \mtx{g}] = \Expect_\mu [\mtx{f} \, (P_t\mtx{g})] \quad \text{for all $t\geq0$ and all $\mtx{f},\mtx{g}:\Omega\rightarrow\mathbb{H}_d$}.
\end{equation}
\end{enumerate}
\end{proposition}
\noindent
Let us emphasize that~\eqref{eqn:reversibility_1} now involves matrix products.
The proof of Proposition~\ref{prop:reversibility} appears below in
Section~\ref{sec:reversibility-pf}.
\subsection{Convexity}
Given a convex function $\Phi:\mathbb{H}_d\rightarrow\mathbb{R}$ that is bounded below, the semigroup satisfies
a Jensen inequality of the form
\begin{equation*}\label{eqn:semigroup_Jensen_1}
\Phi(P_t\mtx{f}(z)) = \Phi(\Expect[\mtx{f}(Z_t)\,|\,Z_0 = z]) \leq \Expect[\Phi(\mtx{f}(Z_t))\,|\,Z_0 = z]\quad \text{for all $z\in \Omega$}.
\end{equation*}
This is an easy consequence of the definition~\eqref{eqn:semigroup}. In particular,
\begin{equation}\label{eqn:semigroup_Jensen_2}
\Expect_\mu \Phi(P_t\mtx{f}) \leq \Expect_{Z\sim\mu} \Expect[\Phi(\mtx{f}(Z_t))\,|\,Z_0 = Z] = \Expect_{Z_0\sim\mu}[\Phi(\mtx{f}(Z_t))] = \Expect_\mu \Phi(\mtx{f}) .
\end{equation}
A typical choice of $\Phi$ is the trace function $\operatorname{tr} \phi$, where $\phi : \mathbb{H}_d \to \mathbb{H}_d$ is a standard matrix function.
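For instance, with $\Phi = \operatorname{tr} \exp$, convexity gives $\Phi(\Expect \mtx{f}) \leq \Expect \Phi(\mtx{f})$, which one can observe on a two-point distribution; the matrices below are arbitrary test values of ours:

```python
import numpy as np
from scipy.linalg import expm

# Two Hermitian "sample values" of f, each taken with probability 1/2
F0 = np.array([[1.0, 0.3], [0.3, -0.5]])
F1 = np.array([[-1.0, 0.8], [0.8, 2.0]])
mean = 0.5 * (F0 + F1)

Phi = lambda A: np.trace(expm(A))   # tr exp is convex on Hermitian matrices

# Jensen: Phi(E f) <= E Phi(f)
assert Phi(mean) <= 0.5 * (Phi(F0) + Phi(F1)) + 1e-12
```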
\subsection{Infinitesimal generator}
The \emph{infinitesimal generator} $\mathcal{L}$ of the semigroup~\eqref{eqn:semigroup}
acts on a (nice) measurable function
$\mtx{f}:\Omega\rightarrow\mathbb{H}_d$ via the formula
\begin{equation}\label{eqn:Markov_generator}
(\mathcal{L}\mtx{f})(z) := \lim_{t\downarrow 0}\frac{(P_t\mtx{f})(z) -\mtx{f}(z)}{t}
\quad\text{for all $z \in \Omega$.}
\end{equation}
Because $(P_t)_{t \geq 0}$ is a semigroup, it follows immediately that
\begin{equation}\label{eqn:derivative_relation}
\frac{\diff{} }{\diff t}P_t = \mathcal{L} P_t = P_t\mathcal{L}\quad \text{for all}\ t\geq0.
\end{equation}
The null space of $\mathcal{L}$ contains all constant functions:
$\mathcal{L} \mtx{A} = \mtx{0}$ for each fixed $\mtx{A} \in \mathbb{H}_d$.
Moreover,
\begin{equation}\label{eqn:mean_zero}
\Expect_\mu[\mathcal{L} \mtx{f} ] = \mtx{0}\quad \text{for all $\mtx{f}:\Omega\rightarrow\mathbb{H}_d$}.
\end{equation}
That is, the infinitesimal generator converts an arbitrary function into a zero-mean function.
We say that the infinitesimal generator $\mathcal{L}$ is {symmetric} on $L_2(\mu)$ when
its action on real-valued functions is symmetric:
\[
\Expect_\mu [(\mathcal{L} f)\,g] = \Expect_\mu [f \, (\mathcal{L} g)] \quad \text{for all $f,g:\Omega\rightarrow\mathbb{R}$}.
\]
The generator $\mathcal{L}$ is symmetric if and only if the semigroup $(P_t)_{t \geq 0}$ is symmetric (i.e., reversible).
In this case, the action of $\mathcal{L}$ on matrix-valued functions is also symmetric:
\begin{equation}\label{eqn:reversibility_2}
\Expect_\mu [(\mathcal{L} \mtx{f})\,\mtx{g}] = \Expect_\mu [\mtx{f} \, (\mathcal{L}\mtx{g})] \quad \text{for all $\mtx{f},\mtx{g}:\Omega\rightarrow\mathbb{H}_d$}.
\end{equation}
This point follows from Proposition~\ref{prop:reversibility}.
As we have alluded,
the limit in \eqref{eqn:Markov_generator} need not exist for all functions.
The set of functions $\mtx{f}:\Omega\rightarrow\mathbb{H}_d$ for which $\mathcal{L}\mtx{f}$
is defined $\mu$-almost everywhere is called the \emph{domain}
of the generator.
It is highly technical, but usually unimportant, to characterize the domain of the generator
and related operators.
For our purposes, we may restrict attention to an unspecified
algebra of \emph{suitable} functions (say, smooth and compactly supported)
where all operations involving limits, derivatives, and integrals
are justified. By approximation, we can extend the main results
to the entire class of functions where the statements make sense.
We refer the reader to the monograph~\cite{bakry2013analysis}
for an extensive discussion about how to make these arguments
airtight.
\subsection{Carr\'e du champ operator and Dirichlet form}
For each $d \in \mathbbm{N}$, given the infinitesimal generator $\mathcal{L}$,
the matrix \textit{carr\'e du champ operator} is the bilinear form
\begin{equation}\label{eqn:definition_Gamma}
\Gamma(\mtx{f},\mtx{g}) := \frac{1}{2}\left[ \mathcal{L}(\mtx{f}\mtx{g}) - \mtx{f}\mathcal{L}(\mtx{g}) - \mathcal{L}(\mtx{f})\mtx{g} \right] \in \mathbb{M}_d \quad \text{for all suitable $\mtx{f},\mtx{g} : \Omega \to \mathbb{H}_d$}.
\end{equation}
The matrix \emph{Dirichlet form} is the bilinear form obtained by integrating the carr{\'e} du champ:
\begin{equation} \label{eqn:Dirichlet_form}
\mathcal{E}(\mtx{f},\mtx{g}) := \Expect_\mu \Gamma(\mtx{f},\mtx{g}) \in \mathbb{M}_d \quad \text{for all suitable $\mtx{f},\mtx{g} : \Omega \to \mathbb{H}_d$}.
\end{equation}
We abbreviate the associated quadratic forms as $\Gamma(\mtx{f}):=\Gamma(\mtx{f},\mtx{f})$ and $\mathcal{E}(\mtx{f}):=\mathcal{E}(\mtx{f},\mtx{f})$. Proposition~\ref{prop:Gamma_property}
states that both these quadratic forms
are positive operators in the sense that they take values in the cone of
positive-semidefinite Hermitian matrices.
In many instances, the carr\'e du champ $\Gamma(\mtx{f})$
has a natural interpretation as the squared magnitude of the derivative of $\mtx{f}$,
while the Dirichlet form $\mathcal{E}(\mtx{f})$
reflects the total energy of the function $\mtx{f}$.
Using~\eqref{eqn:mean_zero}, we can rewrite the Dirichlet form as
\begin{align}\label{eqn:Dirichlet_expression_1}
\mathcal{E}(\mtx{f},\mtx{g}) = \Expect_\mu\Gamma(\mtx{f},\mtx{g}) = -\frac{1}{2} \Expect_\mu \left[\mtx{f}\mathcal{L}(\mtx{g}) + \mathcal{L}(\mtx{f})\mtx{g}\right].
\end{align}
When the semigroup $(P_t)_{t \geq 0}$ is reversible,
then~\eqref{eqn:reversibility_2} and~\eqref{eqn:Dirichlet_expression_1} indicate that
\begin{align}\label{eqn:Dirichlet_expression_2}
\mathcal{E}(\mtx{f},\mtx{g}) = -\Expect_\mu [\mtx{f}\mathcal{L}(\mtx{g})] = -\Expect_\mu [\mathcal{L}(\mtx{f})\mtx{g}].
\end{align}
These alternative expressions are very useful for calculations.
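On a finite state space these objects are directly computable. The sketch below (a toy reversible two-state chain of our choosing) builds $\Gamma$ from the definition~\eqref{eqn:definition_Gamma}, confirms that $\Gamma(\mtx{f})$ is positive semidefinite at every state, and checks the reversible expression~\eqref{eqn:Dirichlet_expression_2}:

```python
import numpy as np

# Toy reversible two-state chain: generator Q, stationary measure mu
Q = np.array([[-2.0, 2.0], [1.0, -1.0]])
mu = np.array([1/3, 2/3])
F = np.array([[[1.0, 0.5], [0.5, 0.0]],
              [[0.0, -0.5], [-0.5, 2.0]]])          # f: state i -> Hermitian F[i]

mul = lambda G, H: np.einsum('ikl,ilm->ikm', G, H)  # pointwise matrix product
L   = lambda G: np.einsum('ij,jkl->ikl', Q, G)      # generator acts entrywise

def Gamma(G, H):
    # carre du champ: 2 Gamma(f, g) = L(fg) - f L(g) - L(f) g
    return 0.5 * (L(mul(G, H)) - mul(G, L(H)) - mul(L(G), H))

gam = Gamma(F, F)
# Gamma(f)(z) is positive semidefinite at every state z
assert all(np.all(np.linalg.eigvalsh(gam[i]) >= -1e-12) for i in range(2))

# Reversibility: E(f) = E_mu Gamma(f) = -E_mu[f L(f)]
lhs = np.einsum('i,ikl->kl', mu, gam)
rhs = -np.einsum('i,ikl->kl', mu, mul(F, L(F)))
assert np.allclose(lhs, rhs)
```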
\subsection{The matrix Poincar\'e inequality}
For each function $\mtx{f}:\Omega\rightarrow\mathbb{H}_d$,
the \textit{matrix variance} with respect to the distribution $\mu$ is defined as
\begin{equation} \label{eqn:matrix_variance}
\mVar_\mu[\mtx{f}] := \Expect_\mu\big[(\mtx{f}-\Expect_\mu\mtx{f})^2\big] = \Expect_\mu[\mtx{f}^2] - (\Expect_\mu\mtx{f})^2\in \mathbb{H}_d^+.
\end{equation}
We say that the Markov process satisfies a \textit{matrix Poincar\'e inequality} with constant $\alpha>0$ if
\begin{equation}\label{eqn:matrix_Poincare}
\mVar_\mu(\mtx{f})\preccurlyeq \alpha \cdot \mathcal{E}(\mtx{f})\quad \text{for all suitable $\mtx{f} : \Omega \to \mathbb{H}_d$}.
\end{equation}
This definition seems to be due to Chen et al.~\cite{cheng2017exponential};
see also Aoun et al.~\cite{ABY20:Matrix-Poincare}.
When the matrix dimension $d = 1$, the inequality~\eqref{eqn:matrix_Poincare} reduces to the usual scalar Poincar{\'e} inequality for the semigroup. For the semigroup~\eqref{eqn:semigroup},
the scalar Poincar{\'e} inequality ($d = 1$) already implies the
matrix Poincar{\'e} inequality (for all $d \in \mathbbm{N}$). Therefore, to check the validity of~\eqref{eqn:matrix_Poincare},
it suffices to consider real-valued functions.
\begin{proposition}[Poincar{\'e} inequalities: Equivalence] \label{prop:poincare_equiv}
For each $d \in \mathbbm{N}$, let $(P_t)_{t\geq 0}$ be the semigroup defined in~\eqref{eqn:semigroup}.
The following are equivalent:
\begin{enumerate}
\item \label{Poincare_inequality_scalar}
\textbf{Scalar Poincar\'e inequality.}
$\Var_\mu[f]\leq \alpha \cdot \mathcal{E}(f)$ for all suitable $f:\Omega\to \mathbb{R}$.
\item \label{Poincare_inequality_matrix}
\textbf{Matrix Poincar\'e inequality.}
$\mVar_\mu[\mtx{f}]\preccurlyeq \alpha \cdot \mathcal{E}(\mtx{f})$ for all suitable $\mtx{f}: \Omega \to \mathbb{H}_d$ and all $d \in \mathbbm{N}$.
\end{enumerate}
\end{proposition}
\noindent
The proof of Proposition~\ref{prop:poincare_equiv} appears in Section~\ref{sec:scalar_matrix}.
We are grateful to Ramon van Handel for this observation.
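For a finite reversible chain, the Poincar{\'e} constant $\alpha$ can be taken to be the inverse spectral gap of the generator, and the matrix inequality~\eqref{eqn:matrix_Poincare} can be verified in the semidefinite order. The two-state generator and matrix-valued function below are our illustrative choices:

```python
import numpy as np

Q = np.array([[-2.0, 2.0], [1.0, -1.0]])   # toy generator; spectral gap = 3
mu = np.array([1/3, 2/3])
alpha = 1 / 3                              # inverse spectral gap
F = np.array([[[1.0, 0.5], [0.5, 0.0]],
              [[0.0, -0.5], [-0.5, 2.0]]])

mul = lambda G, H: np.einsum('ikl,ilm->ikm', G, H)
L   = lambda G: np.einsum('ij,jkl->ikl', Q, G)
Gamma = lambda G: 0.5 * (L(mul(G, G)) - mul(G, L(G)) - mul(L(G), G))

mean = np.einsum('i,ikl->kl', mu, F)
mvar = np.einsum('i,ikl,ilm->km', mu, F, F) - mean @ mean
dirichlet = np.einsum('i,ikl->kl', mu, Gamma(F))    # E(f) = E_mu Gamma(f)

# Matrix Poincare inequality: mVar[f] <<= alpha * E(f) in the semidefinite order
assert np.all(np.linalg.eigvalsh(alpha * dirichlet - mvar) >= -1e-10)
```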
\subsection{Poincar{\'e} inequalities and ergodicity}
As in the scalar case, the matrix Poincar{\'e} inequality~\eqref{eqn:matrix_Poincare} is a powerful tool for understanding
the action of a semigroup on matrix-valued functions. Assuming ergodicity, the Poincar{\'e}
inequality is equivalent with the exponential convergence of the Markov semigroup $(P_t)_{t \geq 0}$
to the expectation operator $\Expect_{\mu}$. The constant $\alpha$ determines the rate
of convergence. The following result makes this principle precise.
\begin{proposition}[Poincar\'e inequality: Consequences] \label{prop:matrix_poincare}
Consider a Markov semigroup $(P_t)_{t \geq 0}$
with stationary measure $\mu$ acting on suitable functions $\mtx{f} : \Omega \to \mathbb{H}_d$
for a fixed $d \in \mathbbm{N}$, as defined in~\eqref{eqn:semigroup}.
The following are equivalent:
\begin{enumerate}
\item \label{Poincare_inequality}
\textbf{Poincar\'e inequality.}
$\mVar_\mu[\mtx{f}]\preccurlyeq \alpha \cdot \mathcal{E}(\mtx{f})$ for all suitable $\mtx{f}: \Omega \to \mathbb{H}_d$.
\item \label{variance_convergence}
\textbf{Exponential ergodicity of variance.}
$\mVar_\mu[P_t\mtx{f}]\preccurlyeq \mathrm{e}^{-2t/\alpha} \cdot \mVar_\mu[\mtx{f}]$ for all $t\geq 0$ and for all suitable $\mtx{f}:\Omega \to \mathbb{H}_d$.
\end{enumerate}
Moreover, if the semigroup $(P_t)_{t\geq 0}$ is reversible and ergodic, then the statements above are also equivalent to the following:
\begin{enumerate}[resume]
\item \label{energy_convergence}
\textbf{Exponential ergodicity of energy.}
$\mathcal{E}(P_t\mtx{f})\preccurlyeq \mathrm{e}^{-2t/\alpha} \cdot \mathcal{E}(\mtx{f})$ for all $t \geq 0$ and for all suitable $\mtx{f}:\Omega \to \mathbb{H}_d$.
\end{enumerate}
\end{proposition}
\noindent
Section~\ref{sec:equivalence_Poincare}
contains the proof of Proposition~\ref{prop:matrix_poincare},
which is essentially the same as in the scalar case~\cite[Theorem 2.18]{van550probability}.
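The exponential variance decay in Proposition~\ref{prop:matrix_poincare}\eqref{variance_convergence} can be observed numerically. For the toy two-state chain below (our illustrative choice), the spectral gap of $Q$ is $3$, so $\alpha = 1/3$:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0, 2.0], [1.0, -1.0]])   # spectral gap = 3, so alpha = 1/3
mu = np.array([1/3, 2/3])
F = np.array([[[1.0, 0.5], [0.5, 0.0]],
              [[0.0, -0.5], [-0.5, 2.0]]])

def mvar(G):
    # matrix variance: E_mu[g^2] - (E_mu g)^2
    m = np.einsum('i,ikl->kl', mu, G)
    sq = np.einsum('i,ikl,ilm->km', mu, G, G)
    return sq - m @ m

def Pt_f(t):
    return np.einsum('ij,jkl->ikl', expm(t * Q), F)

alpha = 1 / 3
for t in (0.1, 0.5, 2.0):
    # mVar[P_t f] <<= exp(-2t/alpha) mVar[f] in the semidefinite order
    gap = np.exp(-2 * t / alpha) * mvar(F) - mvar(Pt_f(t))
    assert np.all(np.linalg.eigvalsh(gap) >= -1e-10)
```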
\begin{remark}[Quantum semigroups]
Proposition~\ref{prop:matrix_poincare} only concerns the action of
a semigroup on matrices of fixed dimension $d$. As such, the result
can be adapted to quantum Markov semigroups.
A partial version of the result for this general setting
already appears in \cite[Remark IV.2]{cheng2017exponential}.
\end{remark}
\subsection{Iterated carr{\'e} du champ operator}
To better understand how quickly a Markov semigroup converges to equilibrium, it is valuable to consider the \textit{iterated carr\'e du champ operator}. In the matrix setting, this operator is defined as
\begin{equation}\label{eqn:definition_Gamma2}
\Gamma_2(\mtx{f},\mtx{g}) := \frac{1}{2}\left[ \mathcal{L}\Gamma(\mtx{f},\mtx{g}) - \Gamma(\mtx{f},\mathcal{L}(\mtx{g})) - \Gamma(\mathcal{L}(\mtx{f}),\mtx{g}) \right] \in \mathbb{M}_d \quad \text{for all suitable $\mtx{f},\mtx{g} : \Omega \to \mathbb{H}_d$}.
\end{equation}
As with the carr\'e du champ, we abbreviate the quadratic form $\Gamma_2(\mtx{f}) := \Gamma_2(\mtx{f},\mtx{f})$. We remark that this quadratic form is not necessarily a positive operator.
Rather, $\Gamma_2(\mtx{f})$ reflects the ``magnitude'' of the squared Hessian
of $\mtx{f}$ plus a correction factor that reflects the ``curvature'' of the matrix semigroup.
When the underlying Markov semigroup $(P_t)_{t \geq 0}$ is reversible, it holds that
\[\Expect_\mu \Gamma_2(\mtx{f},\mtx{g}) = \Expect_\mu\left[\mathcal{L}(\mtx{f}) \, \mathcal{L}(\mtx{g})\right]\quad \text{for all suitable $\mtx{f},\mtx{g} : \Omega \to \mathbb{H}_d$}.\]
Thus, for a reversible semigroup, the average value $\operatorname{\mathbbm{E}}_{\mu} \Gamma_2(\mtx{f})$ is a positive-semidefinite matrix.
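The identity $\Expect_\mu \Gamma_2(\mtx{f}) = \Expect_\mu[(\mathcal{L}\mtx{f})^2]$ is easy to confirm numerically for a reversible chain; the two-state generator and the matrix-valued function below are our illustrative choices:

```python
import numpy as np

Q = np.array([[-2.0, 2.0], [1.0, -1.0]])   # reversible toy two-state chain
mu = np.array([1/3, 2/3])
F = np.array([[[1.0, 0.5], [0.5, 0.0]],
              [[0.0, -0.5], [-0.5, 2.0]]])

mul = lambda G, H: np.einsum('ikl,ilm->ikm', G, H)   # pointwise matrix product
L   = lambda G: np.einsum('ij,jkl->ikl', Q, G)       # generator, entrywise

def Gamma(G, H):
    return 0.5 * (L(mul(G, H)) - mul(G, L(H)) - mul(L(G), H))

def Gamma2(G):
    # iterated carre du champ: 2 Gamma_2(f) = L Gamma(f) - 2 Gamma(f, Lf)
    return 0.5 * (L(Gamma(G, G)) - Gamma(G, L(G)) - Gamma(L(G), G))

lhs = np.einsum('i,ikl->kl', mu, Gamma2(F))          # E_mu Gamma_2(f)
rhs = np.einsum('i,ikl->kl', mu, mul(L(F), L(F)))    # E_mu[(Lf)(Lf)]
assert np.allclose(lhs, rhs)
# hence the average iterated carre du champ is positive semidefinite
assert np.all(np.linalg.eigvalsh(lhs) >= -1e-12)
```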
\subsection{Bakry--{\'E}mery criterion}
\label{sec:local_matrix_Poincare_inequality}
When the iterated carr{\'e} du champ is comparable with the carr{\'e} du champ, we can obtain more information about the convergence of the Markov semigroup.
We say the semigroup satisfies the \textit{matrix Bakry--\'Emery criterion} with constant $c>0$ if
\begin{equation}\label{Bakry-Emery}
\Gamma(\mtx{f}) \preccurlyeq c \cdot \Gamma_2(\mtx{f}) \quad \text{for all suitable $\mtx{f} : \Omega \to \mathbb{H}_d$}.
\end{equation}
Since $\Gamma(\mtx{f})$ and $\Gamma_2(\mtx{f})$ are functions, one interprets this condition
as a pointwise inequality that holds $\mu$-almost everywhere in $\Omega$. It reflects
uniform positive curvature of the semigroup.
When the matrix dimension $d = 1$, the condition~\eqref{Bakry-Emery}
reduces to the classic Bakry--{\'E}mery criterion~\cite[Sec.~1.16]{bakry2013analysis}.
For a semigroup of the form~\eqref{eqn:semigroup}, the scalar result actually
implies the matrix result for all $d \in \mathbbm{N}$.
\begin{proposition}[Bakry--{\'E}mery: Equivalence] \label{prop:BE_equiv}
Let $(P_t)_{t\geq 0}$ be the family of semigroups defined in~\eqref{eqn:semigroup}.
The following statements are equivalent:
\begin{enumerate}
\item \label{Bakry-Emery_criterion_scalar} \textbf{Scalar Bakry--\'Emery criterion.}
$\Gamma(f)\leq c \cdot \Gamma_2(f)$ for all suitable $f:\Omega\to \mathbb{R}$.
\item \label{Bakry-Emery_criterion_matrix} \textbf{Matrix Bakry--\'Emery criterion.}
$\Gamma(\mtx{f})\preccurlyeq c \cdot \Gamma_2(\mtx{f})$ for all suitable $\mtx{f}:\Omega \to \mathbb{H}_d$
and all $d \in \mathbbm{N}$.
\end{enumerate}
\end{proposition}
\noindent
See Section~\ref{sec:scalar_matrix} for the proof
of Proposition~\ref{prop:BE_equiv}.
Proposition~\ref{prop:BE_equiv} is a very powerful tool, and it is a key
part of our method.
Indeed, it is already known~\cite{bakry2013analysis} that many kinds of Markov processes
satisfy the scalar Bakry--{\'E}mery criterion~\eqref{Bakry-Emery_criterion_scalar}.
When contemplating novel settings, we only need to check the scalar criterion,
rather than worrying about matrix-valued functions.
In all these cases, we obtain the matrix extension for free.
\begin{remark}[Curvature]\label{rmk:curvature}
The scalar Bakry--\'Emery criterion, Proposition~\ref{prop:BE_equiv}\eqref{Bakry-Emery_criterion_scalar}, is also known as the curvature condition $CD(\rho,\infty)$ with $\rho=c^{-1}$. In the scenario where the infinitesimal generator $\mathcal{L}$ is the Laplace--Beltrami operator $\Delta_{\mathfrak{g}}$ on a Riemannian manifold $(M,\mathfrak{g})$ with metric $\mathfrak{g}$, the Bakry--\'Emery criterion holds if and only if the Ricci curvature tensor is everywhere positive definite, with eigenvalues bounded from below by $\rho>0$. See~\cite[Section 1.16]{bakry2013analysis} for a discussion. We will return to this example
in Section~\ref{sec:Riemannin_intro}.
\end{remark}
\subsection{Bakry--{\'E}mery and ergodicity}
The scalar Bakry--\'Emery criterion, Proposition~\ref{prop:BE_equiv}\eqref{Bakry-Emery_criterion_scalar},
is equivalent to a local Poincar\'e inequality,
which is strictly stronger than the scalar Poincar\'e inequality, Proposition~\ref{prop:poincare_equiv}\eqref{Poincare_inequality_scalar}.
It is also equivalent to a powerful local ergodicity property~\cite[Theorem 2.35]{van550probability}.
The next result states that the matrix Bakry--{\'E}mery criterion~\eqref{Bakry-Emery}
implies counterparts of these facts.
\begin{proposition}[Bakry--{\'E}mery: Consequences]
\label{prop:local_Poincare}
Let $(P_t)_{t \geq 0}$ be a Markov semigroup acting on
suitable functions $\mtx{f} : \Omega \to \mathbb{H}_d$ for fixed $d \in \mathbbm{N}$,
as defined in~\eqref{eqn:semigroup}.
The following are equivalent:
\begin{enumerate}
\item \label{Bakry-Emery_criterion} \textbf{Bakry--\'Emery criterion.}
$\Gamma(\mtx{f})\preccurlyeq c \cdot \Gamma_2(\mtx{f})$ for all suitable $\mtx{f}:\Omega \to \mathbb{H}_d$.
\item \label{local_ergodicity}
\textbf{Local ergodicity.}
$\Gamma(P_t\mtx{f})\preccurlyeq \mathrm{e}^{-2t/c} \cdot P_t\Gamma(\mtx{f})$ for all $t \geq 0$ and for all suitable $\mtx{f}:\Omega \to \mathbb{H}_d$.
\item \label{local_Poincare}
\textbf{Local Poincar\'e inequality.} $P_t(\mtx{f}^2) - (P_t\mtx{f})^2 \preccurlyeq c \,(1-\mathrm{e}^{-2t/c}) \cdot P_t\Gamma(\mtx{f})$ for all $t \geq 0$ and for all suitable $\mtx{f}:\Omega \to \mathbb{H}_d$. \end{enumerate}
\end{proposition}
\noindent
The proof of Proposition~\ref{prop:local_Poincare} appears in Section~\ref{sec:equivalence_local_Poincare}.
It follows along the same lines as the scalar result~\cite[Theorem 2.36]{van550probability}.
Proposition~\ref{prop:local_Poincare} plays a central role in this paper.
With the aid of Proposition~\ref{prop:BE_equiv}, we can verify the
Bakry--{\'E}mery criterion~\eqref{Bakry-Emery_criterion} for many
particular Markov semigroups. Meanwhile, the local ergodicity
property~\eqref{local_ergodicity} supports short derivations of
trace moment inequalities for random matrices.
The results in Proposition~\ref{prop:local_Poincare} refine the statements
in Proposition~\ref{prop:matrix_poincare}.
Indeed, the carr\'e du champ operator $\Gamma(\mtx{f})$ measures the local fluctuation of a function $\mtx{f}$,
so the local ergodicity condition~\eqref{local_ergodicity} means that the fluctuation of
$P_t\mtx{f}$ at every point $z\in \Omega$ decays exponentially fast in $t$.
By applying $\Expect_\mu$ to both sides of the local ergodicity inequality,
we obtain the ergodicity of energy, Proposition~\ref{prop:matrix_poincare}\eqref{energy_convergence}.
If $(P_t)_{t\geq 0}$ is ergodic, applying the expectation $\Expect_\mu$ to the local Poincar\'e inequality~\eqref{local_Poincare} and then taking $t\rightarrow +\infty$
yields the matrix Poincar\'e inequality, Proposition~\ref{prop:matrix_poincare}\eqref{Poincare_inequality}
with constant $\alpha = c$.
In fact, a standard method for establishing a Poincar\'e inequality
is to check the Bakry--\'Emery criterion.
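To make these implications concrete, here is a numerical sanity check of the local Poincar\'e inequality in the scalar case $d = 1$, for the one-dimensional Ornstein--Uhlenbeck semigroup (which satisfies the Bakry--\'Emery criterion with $c = 1$; see Section~\ref{sec:examples}). The test function and the evaluation point are arbitrary choices for this sketch, and expectations are computed by Gauss--Hermite quadrature.

```python
import numpy as np

# Scalar (d = 1) check of the local Poincare inequality for the 1-D
# Ornstein--Uhlenbeck semigroup, which satisfies Bakry--Emery with c = 1.
# Mehler's formula: P_t f(z) = E f(e^{-t} z + sqrt(1 - e^{-2t}) Z) with
# Z standard normal, and Gamma(f) = (f')^2.  The function f and the
# point (z, t) are arbitrary choices for this sketch.
nodes, weights = np.polynomial.hermite_e.hermegauss(60)
weights = weights / weights.sum()        # quadrature for N(0,1) expectations

f  = lambda y: np.sin(y) + 0.3 * y**2
df = lambda y: np.cos(y) + 0.6 * y       # derivative of f

def Pt(g, t, z):
    # semigroup action via Mehler's formula, evaluated by quadrature
    return np.sum(weights * g(np.exp(-t) * z
                              + np.sqrt(1 - np.exp(-2 * t)) * nodes))

z, t = 0.7, 0.5
local_var = Pt(lambda y: f(y)**2, t, z) - Pt(f, t, z)**2
rhs = (1 - np.exp(-2 * t)) * Pt(lambda y: df(y)**2, t, z)
```

The quantity `local_var` is $P_t(f^2)(z) - (P_t f)^2(z)$, and `rhs` is $c\,(1-\mathrm{e}^{-2t/c})\,P_t\Gamma(f)(z)$ with $c = 1$, so the local Poincar\'e inequality predicts `local_var <= rhs`.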
\begin{remark}[Noncommutative semigroups]
Junge \& Zeng have investigated the implications of
the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} for noncommutative diffusion processes on a von Neumann algebra.
For this setting, a partial version of Proposition~\ref{prop:local_Poincare} appears in~\cite[Lemma 4.6]{junge2015noncommutative}.
\end{remark}
\subsection{Basic examples}\label{sec:examples} This section contains some examples of Markov semigroups that satisfy the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery}. In Section~\ref{sec:main_results}, we will use these semigroups to derive matrix concentration results for several random matrix models.
\subsubsection{Product measures}\label{sec:product_measure_all_intro} Consider a product space $\Omega = \Omega_1\otimes \Omega_2\otimes \cdots\otimes \Omega_n $ equipped with a product measure $\mu = \mu_1\otimes \mu_2\otimes \cdots\otimes\mu_n$. In Section~\ref{sec:product_measure_all}, we present the standard construction of
the associated Markov semigroup, adapted to the matrix setting.
This semigroup is ergodic and reversible, and its carr\'e du champ operator takes the form
of a discrete squared derivative:
\begin{equation}\label{eqn:variance_proxy}
\Gamma(\mtx{f})(z) = \mtx{V}(\mtx{f})(z) := \frac{1}{2}\sum_{i=1}^n\Expect_Z \left[ (\mtx{f}(z) - \mtx{f}((z;Z)_i))^2 \right] \quad \text{for all $z\in \Omega$}.
\end{equation}
In this expression, $Z = (Z^1,\dots,Z^n)\sim\mu$ and $(z;Z)_i = (z^1,\dots,z^{i-1},Z^i,z^{i+1},\dots,z^n)$ for each $i=1,\dots,n$. Superscripts denote the coordinate index.
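As a minimal illustration of the discrete squared derivative~\eqref{eqn:variance_proxy}, the following sketch evaluates $\Gamma(\mtx{f}) = \mtx{V}(\mtx{f})$ by direct enumeration on a toy product space: two uniform $\pm 1$ coordinates and a linear matrix-valued function, both arbitrary choices for this example.

```python
import numpy as np

# Toy model (an arbitrary choice for illustration): Omega_i = {-1, +1}
# with the uniform measure, and f(z) = z1*A1 + z2*A2 in H_2.
A = [np.array([[1.0, 0.0], [0.0, -1.0]]), np.array([[0.0, 1.0], [1.0, 0.0]])]

def f(z):
    return z[0] * A[0] + z[1] * A[1]

def carre_du_champ(z):
    # Gamma(f)(z) = (1/2) sum_i E_Z [ (f(z) - f((z;Z)_i))^2 ]
    total = np.zeros((2, 2))
    for i in range(2):
        for zi in (-1.0, 1.0):            # resample coordinate i uniformly
            w = list(z)
            w[i] = zi
            diff = f(z) - f(w)
            total += 0.5 * (diff @ diff)  # each value of Z^i has weight 1/2
    return 0.5 * total

G = carre_du_champ([1.0, 1.0])
```

For this linear $\mtx{f}$, the computation returns $\sum_i \mtx{A}_i^2 = 2\,\mathbf{I}_2$ at every point, matching the general fact that $\mtx{V}(\mtx{f})$ is constant for a linear function of independent Rademacher coordinates.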
Aoun et al.~\cite{ABY20:Matrix-Poincare} have shown that this Markov semigroup satisfies
the matrix Poincar{\'e} inequality~\eqref{eqn:matrix_Poincare} with constant $\alpha = 1$.
In Section~\ref{sec:product_measure_all}, we will show that the semigroup also satisfies the Bakry--\'Emery criterion~\eqref{Bakry-Emery} with constant $c = 2$.
\subsubsection{Log-concave measures}\label{sec:log-concave_intro}
Log-concave distributions~\cite{Pre73:Logarithmic-Concave,ambrosio2009existence,saumard2014log} are a fundamental class of probability measures on $\Omega = \mathbb{R}^n$ that are closely related to diffusion processes. A log-concave measure takes the form $\diff \mu \propto \mathrm{e}^{-W(z)}\idiff z$ where the potential $W:\mathbb{R}^n\rightarrow \mathbb{R}$ is a convex function, so it captures a form of negative dependence.
The associated diffusion process naturally induces a semigroup whose
carr{\'e} du champ operator takes
the form of the squared ``magnitude'' of the gradient:
\[\Gamma(\mtx{f})(z) = \sum_{i=1}^n(\partial_i\mtx{f}(z))^2\quad \text{for all $z\in \mathbb{R}^n$}.\]
As usual, $\partial_i := \partial/\partial z_i$ for $i = 1, \dots, n$.
Many interesting results follow from the condition that the potential $W$
is uniformly strongly convex on $\mathbb{R}^n$.
In other words, for a constant $\eta > 0$,
we assume that the Hessian matrix satisfies
\begin{equation} \label{eqn:hess-sc-intro}
(\operatorname{Hess} W)(z) := \big[ (\partial_{ij} W)(z) \big]_{i,j=1}^n \succcurlyeq \eta \cdot \mathbf{I}_n
\quad\text{for all $z \in \mathbb{R}^n$.}
\end{equation}
The partial derivative $\partial_{ij} := \partial^2/(\partial z_i \partial z_j)$ for $i,j=1,\dots,n$.
It is a standard result~\cite[Sec. 4.8]{bakry2013analysis} that the strong convexity condition~\eqref{eqn:hess-sc-intro} implies the scalar Bakry--\'Emery criterion with constant $c = \eta^{-1}$. Therefore, according to Proposition~\ref{prop:BE_equiv},
the matrix Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} is valid for every $d \in \mathbbm{N}$.
One of the core examples of a log-concave measure is the standard Gaussian measure on $\mathbb{R}^n$, which is given by the potential $W(z) = z^\mathsf{T} z/2$. The associated diffusion process induces the Ornstein--Uhlenbeck semigroup, which satisfies the Bakry--\'Emery criterion~\eqref{Bakry-Emery} with constant $c = 1$.
A more detailed discussion on log-concave measures is presented in Section~\ref{sec:log-concave}.
\subsection{Measures on Riemannian manifolds} \label{sec:Riemannin_intro}
The theory of diffusion processes on Euclidean spaces can be generalized to the setting of Riemannian manifolds. Although this exercise may seem abstract, it allows us to treat some interesting and important examples in a unified way. We refer to~\cite{bakry2013analysis} for more background on this subject, and we instate their conventions.
Consider an $n$-dimensional compact Riemannian manifold $(M,\mathfrak{g})$. Let $\mathfrak{g}(x) = (g^{ij}(x) : 1 \leq i,j \leq n )$ be the matrix representation of the co-metric tensor $\mathfrak{g}$ in local coordinates, which is a symmetric and positive-definite matrix defined for every $x \in M$.
The manifold is equipped with a canonical Riemannian probability measure $\mu_\mathfrak{g}$ that has local density $\diff \mu_\mathfrak{g} \propto \det(\mathfrak{g}(x))^{-1/2} \idiff{x}$ with respect to the Lebesgue measure in local coordinates. This measure $\mu_\mathfrak{g}$ is the stationary measure of the diffusion process on $M$ whose infinitesimal generator $\mathcal{L}$ is the Laplace--Beltrami operator $\Delta_\mathfrak{g}$. This diffusion process is called the \emph{Riemannian Brownian motion}.\footnote{Many authors use the convention that Riemannian Brownian motion has infinitesimal generator $\tfrac{1}{2} \Delta_{\mathfrak{g}}$.}
The associated matrix carr{\'e} du champ operator coincides with the squared ``magnitude'' of the differential:
\begin{equation}\label{eqn:gamma_Riemannian_0}
\Gamma(\mtx{f})(x) = \sum_{i,j=1}^ng^{ij}(x) \,\partial_i\mtx{f}(x)\, \partial_j\mtx{f}(x)\quad \text{for suitable $\mtx{f} : M \to \mathbb{H}_d$.}
\end{equation}
Here, $\partial_i$ for $i=1,\dots,n$ are the components of the differential, computed in local coordinates. We emphasize that the matrix carr{\'e} du champ operator is intrinsic; expressions for the carr{\'e} du champ resulting from different choices of local coordinates are equivalent under change of variables. See Section~\ref{sec:extension_Riemannian_manifold} for a more detailed discussion.
As mentioned in Remark~\ref{rmk:curvature}, the scalar Bakry--\'{E}mery criterion holds with $c=\rho^{-1}$ if and only if the Ricci curvature tensor of $(M, \mathfrak{g})$ is everywhere positive, with eigenvalues bounded from below by $\rho>0$. In other words, for Brownian motion on a manifold, the Bakry--{\'E}mery criterion is equivalent to the uniform positive curvature of the manifold. Proposition~\ref{prop:BE_equiv} ensures
that the matrix Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} holds with $c = \rho^{-1}$
under precisely the same circumstances.
Many examples of positively curved Riemannian manifolds are discussed in \cite{ledoux2001concentration,gromov2007metric,cheeger2008comparison,bakry2013analysis}.
We highlight two particularly interesting cases.
\begin{example}[Unit sphere]
Consider the $n$-dimensional unit sphere $\mathbb{S}^{n} \subset \mathbbm{R}^{n+1}$ for $n \geq 2$.
The sphere is equipped with the Riemannian manifold structure induced by
$\mathbbm{R}^{n+1}$. The canonical Riemannian measure on the
sphere is simply the uniform probability measure.
The sphere has a constant Ricci curvature tensor, whose eigenvalues all equal $n - 1$.
Therefore, the Brownian motion on $\mathbb{S}^n$ satisfies
the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery}
with $c = (n-1)^{-1}$.
See~\cite[Sec.~2.2]{bakry2013analysis}.
\end{example}
\begin{example}[Special orthogonal group]
The special orthogonal group $\mathrm{SO}(n)$ can be regarded as a Riemannian submanifold
of $\mathbbm{R}^{n \times n}$. The canonical Riemannian measure is the Haar probability measure
on $\mathrm{SO}(n)$. It is known that the eigenvalues of the Ricci curvature tensor
are uniformly bounded below by $(n-1)/4$. Therefore, the Brownian motion on $\mathrm{SO}(n)$ satisfies the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} with $c = 4/(n-1)$.
See~\cite[pp.~26ff]{ledoux2001concentration}.
\end{example}
The lower bound on Ricci curvature is stable under (Riemannian) products of manifolds, so similar results are valid for products of spheres or products of the special orthogonal group; cf.~\cite[p.~27]{ledoux2001concentration}.
\subsection{History}
In the scalar setting, much of the classic research on Markov processes concerns the behavior of diffusion processes on Riemannian manifolds. Functional inequalities connect the convergence of these Markov processes to the geometry of the manifold.
The rate of convergence to equilibrium of a Markov process plays a core role in developing concentration properties
for the measure. The treatise \cite{bakry2013analysis} contains a comprehensive discussion. Other references include \cite{ledoux2001concentration,boucheron2013concentration,van550probability}.
Matrix-valued Markov processes were originally introduced to model the evolution of quantum systems \cite{davies1969quantum,lindblad1976generators,accardi1982quantum}. In recent years,
the long-term behavior of quantum Markov processes has received significant attention in the field of quantum information.
A general approach to exponential convergence of a quantum system
is to establish quantum log-Sobolev inequalities for density operators \cite{majewski1998dissipative,olkiewicz1999hypercontractivity,kastoryano2013quantum}.
In this paper, we consider a mixed classical-quantum setting,
where a classical Markov process drives a matrix-valued function.
The papers~\cite{cheng2017exponential,cheng2019matrix,ABY20:Matrix-Poincare}
contain some foundational results for this model.
Our work provides a more detailed understanding
of the connections between the ergodicity of the
semigroup and matrix functional inequalities.
The companion paper~\cite{HT20:Trace-Poincare}
contains further results on trace Poincar{\'e} inequalities,
which are equivalent to the Poincar{\'e} inequality~\eqref{eqn:matrix_Poincare}.
A general framework for noncommutative diffusion processes
on von Neumann algebras can be found in \cite{junge2006h,junge2015noncommutative}.
In particular, the paper~\cite{junge2015noncommutative} shows that a
noncommutative Bakry--{\'E}mery criterion implies local ergodicity
of a noncommutative diffusion process.
In spite of its generality, the presentation in~\cite{junge2015noncommutative}
does not fully contain our treatment. On the one hand,
the noncommutative semigroup model includes the
mixed classical-quantum model~\eqref{eqn:semigroup} as a special case.
On the other hand, we do not need the underlying Markov process to
be a diffusion (with continuous sample paths), while Junge \& Zeng
impose a diffusion assumption.
\section{Nonlinear Matrix Concentration: Main Results}
\label{sec:main_results}
The matrix Poincar{\'e} inequality~\eqref{eqn:matrix_Poincare} has been associated
with subexponential concentration inequalities for random matrices~\cite{ABY20:Matrix-Poincare,HT20:Trace-Poincare}.
The central purpose of this paper is to establish that the (scalar) Bakry--{\'E}mery criterion
leads to matrix concentration inequalities via a straightforward semigroup method.
This section outlines our main results; the proofs appear in Section~\ref{sec:trace_to_moment}.
\begin{remark}[Noncommutative setting]
After this paper was written, we learned that Junge \& Zeng~\cite{junge2015noncommutative}
have used the (noncommutative) Bakry--{\'E}mery criterion to obtain subgaussian moment
bounds for elements of a von Neumann algebra using a martingale approach. Their setting
is more general (if we ignore the diffusion assumptions),
but we will see that their results are weaker in several respects.
\end{remark}
\subsection{Markov processes and random matrices}
Let $Z$ be a random variable, taking values in the state space $\Omega$, with the distribution $\mu$.
For a matrix-valued function $\mtx{f} : \Omega \to \mathbb{H}_d$, we can
define the random matrix $\mtx{f}(Z)$, whose distribution is the push-forward
of $\mu$ by the function $\mtx{f}$. Our goal is to understand how well
the random matrix $\mtx{f}(Z)$ concentrates around its expectation
$\operatorname{\mathbbm{E}} \mtx{f}(Z) = \operatorname{\mathbbm{E}}_{\mu} \mtx{f}$.
To do so, suppose that we can construct a reversible, ergodic Markov process
$(Z_t)_{t \geq 0} \subset \Omega$ whose stationary distribution is $\mu$.
We have the intuition that the faster the process $(Z_t)_{t \geq 0}$ converges
to equilibrium, the more sharply the random matrix $\mtx{f}(Z)$ concentrates
around its expectation.
To quantify the rate of convergence of the matrix Markov process,
we use the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery}
to obtain local ergodicity of the semigroup. This property allows
us to prove strong bounds on the trace moments of the random matrix.
Using standard arguments (Appendix~\ref{apdx:matrix_moments}), these moment bounds imply
nonlinear matrix concentration inequalities.
\subsection{Polynomial concentration}
We begin with a general estimate on the polynomial trace moments
of a random matrix under a Bakry--\'Emery criterion.
\begin{theorem}[Polynomial moments]\label{thm:polynomial_moment} Let $\Omega$ be a Polish space equipped with a probability measure $\mu$. Consider a reversible, ergodic Markov semigroup~\eqref{eqn:semigroup} with stationary measure $\mu$ that acts on (suitable) functions $\mtx{f} : \Omega \to \mathbb{H}_d$.
Assume that the Bakry--\'Emery criterion \eqref{Bakry-Emery} holds for a constant $c>0$.
Then, for $q=1$ and $q\geq1.5$,
\begin{equation}\label{eqn:polynomial_moment_1}
\left[ \Expect_\mu \operatorname{tr}|\mtx{f}-\Expect_\mu\mtx{f}|^{2q}\right]^{1/(2q)}\leq \sqrt{c\,(2q-1)}\left[ \Expect_\mu\operatorname{tr}\Gamma(\mtx{f})^q\right]^{1/(2q)}.
\end{equation}
If the variance proxy $v_{\mtx{f}} := \norm{ \|\Gamma(\mtx{f})\| }_{L_{\infty}(\mu)} <+\infty$,
then
\begin{equation}\label{eqn:polynomial_moment_2}
\left[ \Expect_\mu \operatorname{tr}|\mtx{f}-\Expect_\mu\mtx{f}|^{2q}\right]^{1/(2q)}\leq d^{1/(2q)}\sqrt{c\,(2q-1) \,\smash{v_{\mtx{f}}}} .
\end{equation}
\end{theorem}
\noindent
We establish this theorem in Section~\ref{sec:trace_to_moment}.
For noncommutative diffusion semigroups,
Junge \& Zeng~\cite{junge2015noncommutative} have developed polynomial moment bounds
similar to Theorem~\ref{thm:polynomial_moment}, but they only obtain moment growth
of $O(q)$ in the inequality~\eqref{eqn:polynomial_moment_1}. We can trace this
discrepancy to the fact that they use a martingale argument based on the
noncommutative Burkholder--Davis--Gundy inequality. At present, our proof only
applies to the mixed classical-quantum semigroup~\eqref{eqn:semigroup}, but it
seems plausible that our approach can be generalized.
For now, let us present some concrete results that follow when we apply Theorem~\ref{thm:polynomial_moment}
to the semigroups discussed in Section~\ref{sec:examples}. In each of these cases,
we can derive bounds for the expectation and tails of $\norm{ \smash{\mtx{f} - \Expect_{\mu} \mtx{f}} }$
using the matrix Chebyshev inequality (Proposition~\ref{prop:matrix_Chebyshev}).
In particular, when $v_{\mtx{f}} < + \infty$, we obtain subgaussian concentration.
\subsubsection{Polynomial Efron--Stein inequality for product measures}
The first consequence of Theorem~\ref{thm:polynomial_moment} is a polynomial moment inequality for product measures.
This result exactly reproduces the matrix polynomial Efron--Stein inequalities established by Paulin et al.~\cite[Theorem 4.2]{paulin2016efron}.
\begin{corollary}[Product measure: Polynomial moments]\label{cor:product_measure_Efron--Stein} Let $\mu = \mu_1\otimes \mu_2\otimes \cdots\otimes\mu_n$ be a product measure on a product space $\Omega = \Omega_1\otimes \Omega_2\otimes \cdots\otimes \Omega_n $. Let $\mtx{f}:\Omega \rightarrow \mathbb{H}_d$ be a suitable function.
Then, for $q= 1$ and $q\geq1.5$,
\begin{equation}\label{eqn:product_measure_Efron--Stein}
\left[ \Expect_\mu \operatorname{tr}|\mtx{f}-\Expect_\mu\mtx{f}|^{2q}\right]^{1/(2q)}\leq \sqrt{2(2q-1)}\left[\Expect_\mu\operatorname{tr}\mtx{V}(\mtx{f})^q\right]^{1/(2q)}.
\end{equation}
The matrix variance proxy $\mtx{V}(\mtx{f})$ is defined in \eqref{eqn:variance_proxy}.
\end{corollary}
\noindent
The details appear in Section~\ref{sec:concentration_results_product}.
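The $q = 1$ case of this corollary can be verified by exact enumeration on a small product space. In the sketch below, the nonlinear function $\mtx{f}$ and the matrices $\mtx{A}_i$ are arbitrary choices; all expectations are computed exactly over the discrete cube.

```python
import itertools
import numpy as np

# Exact check of the q = 1 case on the toy product space {-1,+1}^n with
# the uniform product measure; the nonlinear f and the matrices A_i are
# arbitrary choices.  All expectations are computed by enumeration.
n, d = 3, 2
rng = np.random.default_rng(0)
A = [(M + M.T) / 2 for M in rng.standard_normal((n, d, d))]

def f(z):
    S = sum(z[i] * A[i] for i in range(n))
    return S @ S                      # a nonlinear Hermitian-valued function

cube = list(itertools.product([-1.0, 1.0], repeat=n))
Ef = sum(f(z) for z in cube) / len(cube)

lhs = 0.0                             # E tr |f - E f|^2
rhs = 0.0                             # E tr V(f)
for z in cube:
    D = f(z) - Ef
    lhs += np.trace(D @ D) / len(cube)
    V = np.zeros((d, d))
    for i in range(n):
        for zi in (-1.0, 1.0):        # resample coordinate i uniformly
            diff = f(z) - f(z[:i] + (zi,) + z[i + 1:])
            V += 0.5 * 0.5 * (diff @ diff)
    rhs += np.trace(V) / len(cube)
```

The corollary with $q = 1$ predicts $\texttt{lhs}^{1/2} \leq \sqrt{2}\,\texttt{rhs}^{1/2}$; in fact, the matrix Poincar\'e inequality with $\alpha = 1$ already gives $\texttt{lhs} \leq \texttt{rhs}$ here.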
\subsubsection{Log-concave measures}
The second result is a new polynomial moment inequality for matrix-valued
functions of a log-concave measure.
To avoid domain issues, we restrict our attention to the Sobolev space
\begin{equation}\label{def:H2_function}
\mathrm{H}_{2,\mu}(\mathbb{R}^n;\mathbb{H}_d) := \left\{\mtx{f} : \mathbb{R}^n\rightarrow\mathbb{H}_d:\Expect_\mu \|\mtx{f}\|_\mathrm{HS}^2+\sum_{i=1}^n\Expect_\mu \|\partial_i\mtx{f}\|_\mathrm{HS}^2 + \sum_{i,j=1}^n\Expect_\mu \|\partial_{ij}\mtx{f}\|_\mathrm{HS}^2 <\infty\right\}.
\end{equation}
For these functions, we have the following matrix concentration inequality.
\begin{corollary}[Log-concave measure: Polynomial moments]\label{cor:log-concave_polynomial_inequality}
Let $\diff \mu \propto \mathrm{e}^{-W(z)}\idiff z$ be a log-concave measure on $\mathbb{R}^n$
whose potential $W:\mathbb{R}^n\rightarrow\mathbb{R}$ satisfies a uniform strong convexity
condition: $\operatorname{Hess} W \succcurlyeq \eta \cdot \mathbf{I}_n$
with constant $\eta > 0$.
Let $\mtx{f}\in \mathrm{H}_{2,\mu}(\mathbb{R}^n;\mathbb{H}_d)$.
Then, for $q=1$ and $q\geq 1.5$,
\begin{equation*}\label{eqn:log-concave_polynomial_inequality}
\left[\Expect_\mu \operatorname{tr}|\mtx{f}-\Expect_\mu\mtx{f}|^{2q}\right]^{1/(2q)}\leq \sqrt{\frac{2q-1}{\eta}}\left[\Expect_\mu\operatorname{tr}\left(\sum_{i=1}^n(\partial_i\mtx{f})^2\right)^q\right]^{1/(2q)}.
\end{equation*}
\end{corollary}
\noindent
The details appear in Section~\ref{sec:concentration_results_log-concave}.
\subsection{Exponential concentration}
As a consequence of the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery},
we can also derive exponential matrix concentration inequalities.
In principle, polynomial moment inequalities are stronger,
but the exponential inequalities often lead to better constants and more
detailed information about tail decay.
\begin{theorem}[Exponential concentration]\label{thm:exponential_concentration}
Let $\Omega$ be a Polish space equipped with a probability measure $\mu$.
Consider a reversible, ergodic Markov semigroup \eqref{eqn:semigroup} with stationary measure $\mu$
that acts on (suitable) functions $\mtx{f} : \Omega \to \mathbb{H}_d$.
Assume that the Bakry--\'Emery criterion \eqref{Bakry-Emery}
holds for a constant $c>0$.
Then
\begin{align}\label{eqn:tail_bound_1}
\mathbb{P}_{\mu}\left\{\lambda_{\max}(\mtx{f}-\Expect_\mu\mtx{f})\geq t \right\} \leq&\ d\cdot \inf_{\beta>0} \exp \left(\frac{-t^2}{2cr_{\mtx{f}}(\beta) + 2t\sqrt{c/\beta} }\right) \quad\text{for all $t \geq 0$.}
\end{align}
The function $r_{\mtx{f}}$ computes an exponential mean of the carr{\'e} du champ:
\[
r_{\mtx{f}}(\beta):=\frac{1}{\beta}\log \Expect_\mu\operatorname{\bar{\trace}} \mathrm{e}^{ \beta\Gamma(\mtx{f}) }
\quad\text{for $\beta > 0$.}
\]
In addition, suppose that the variance proxy $v_{\mtx{f}} := \norm{ \|\Gamma(\mtx{f}) \| }_{L_{\infty}(\mu)} <+\infty$.
Then
\begin{equation*}\label{eqn:tail_bound_2}
\mathbb{P}_{\mu}\left\{\lambda_{\max}(\mtx{f}-\Expect_\mu\mtx{f})\geq t \right\} \leq d\cdot \exp \left(\frac{-t^2}{2cv_{\mtx{f}}}\right)
\quad\text{for all $t \geq 0$.}
\end{equation*}
Furthermore,
\begin{equation*}\label{eqn:expectation_bound}
\Expect_\mu\lambda_{\max}(\mtx{f}-\Expect_\mu\mtx{f}) \leq \sqrt{2cv_{\mtx{f}}\log d}.
\end{equation*}
Parallel inequalities hold for the minimum eigenvalue $\lambda_{\min}$.
\end{theorem}
\noindent
We establish Theorem~\ref{thm:exponential_concentration} in Section~\ref{sec:exponential_concentration_proof}
as a consequence of an exponential moment inequality, Theorem~\ref{thm:exponential_moment}, for random matrices.
By combining Theorem~\ref{thm:exponential_concentration} with the examples in Section~\ref{sec:examples},
we obtain concentration results for concrete random matrix models.
A partial version of Theorem~\ref{thm:exponential_concentration} with slightly worse constants
appears in \cite[Corollary 4.13]{junge2015noncommutative}.
When comparing these results, note that the probability measure in \cite{junge2015noncommutative}
is normalized to absorb the dimensional factor $d$.
\subsubsection{Exponential Efron--Stein inequality for product measures}
We can reproduce the matrix exponential Efron--Stein inequalities of Paulin et al.~\cite[Theorem 4.3]{paulin2016efron}
by applying Theorem~\ref{thm:exponential_moment} to a product measure (Section~\ref{sec:product_measure_all_intro}).
For instance, we obtain the following subgaussian inequality.
\begin{corollary}[Product measure: Subgaussian concentration]\label{cor:product_measure_tailbound} Let $\mu = \mu_1\otimes \mu_2\otimes \cdots\otimes\mu_n$ be a product measure on a product space $\Omega = \Omega_1\otimes \Omega_2\otimes \cdots\otimes \Omega_n $. Let $\mtx{f}:\Omega \rightarrow \mathbb{H}_d$ be a suitable function.
Define the variance proxy $v_{\mtx{f}} := \norm{ \|\mtx{V}(\mtx{f}) \| }_{L_{\infty}(\mu)}$,
where $\mtx{V}(\mtx{f})$ is given by \eqref{eqn:variance_proxy}. Then \begin{align*}\label{eqn:product_measure_tailbound}
\Prob{\lambda_{\max}(\mtx{f}-\Expect_\mu\mtx{f})\geq t } \leq d\cdot\exp\left(-\frac{t^2}{4v_{\mtx{f}}}\right)
\quad\text{for all $t \geq 0$.}
\end{align*}
Furthermore,
\begin{equation*}\label{eqn:product_measure_expectation}
\Expect_\mu \lambda_{\max}(\mtx{f}-\Expect_\mu\mtx{f}) \leq 2\sqrt{v_{\mtx{f}}\log d}.
\end{equation*}
Parallel results hold for the minimum eigenvalue $\lambda_{\min}$.
\end{corollary}
\noindent
We defer the proof to Section~\ref{sec:concentration_results_product}.
\subsubsection{Log-concave measures}
We can also obtain exponential concentration for a matrix-valued function
of a log-concave measure by combining Theorem~\ref{thm:exponential_concentration}
with the results in Section~\ref{sec:log-concave_intro}.
\begin{corollary}[Log-concave measure: Subgaussian concentration]\label{cor:log-concave_concentration}
Let $\diff \mu \propto \mathrm{e}^{-W(z)}\idiff z$ be a log-concave probability measure on $\mathbb{R}^n$
whose potential $W : \mathbb{R}^n \to \mathbb{R}$ satisfies a uniform strong convexity condition:
$\operatorname{Hess} W \succcurlyeq \eta \cdot \mathbf{I}_n$ where $\eta > 0$.
Let $\mtx{f} \in \mathrm{H}_{2,\mu}(\mathbbm{R}^n; \mathbb{H}_d)$, and define the variance proxy
\[
v_{\mtx{f}} := \sup\nolimits_{z \in \mathbbm{R}^n} \norm{ \sum_{i=1}^n(\partial_i\mtx{f}(z))^2 }.
\]
Then
\[\mathbb{P}_{\mu}\left\{\lambda_{\max}(\mtx{f}-\Expect_\mu\mtx{f})\geq t \right\} \leq d\cdot\exp\left(\frac{-\eta t^2}{2v_{\mtx{f}}}\right)
\quad\text{for all $t \geq 0$.}
\]
Furthermore,
\[\Expect_\mu \lambda_{\max}(\mtx{f}-\Expect_\mu\mtx{f}) \leq \sqrt{2 \eta^{-1} v_{\mtx{f}}\log d }.\]
Parallel results hold for the minimum eigenvalue $\lambda_{\min}$.
\end{corollary}
\noindent
See Section~\ref{sec:concentration_results_log-concave} for the proof.
\begin{example}[Matrix Gaussian series] \label{ex:matrix-gauss}
Consider the standard normal measure $\gamma_n$ on $\mathbbm{R}^n$.
Its potential, $W(z) = z^\mathsf{T} z / 2$, is uniformly strongly convex
with parameter $\eta = 1$. Therefore, Corollary~\ref{cor:log-concave_concentration}
gives subgaussian concentration for matrix-valued functions of a Gaussian
random vector.
To make a comparison with familiar results, we construct the matrix Gaussian series
\begin{equation*} \label{eqn:gauss-series_1}
\mtx{f}(z) = \sum_{i=1}^n Z_i \mtx{A}_i
\quad\text{where $z = (Z_1, \dots, Z_n) \sim \gamma_n$ and $\mtx{A}_i \in \mathbb{H}_d$ are fixed.}
\end{equation*}
In this case, the carr{\'e} du champ is simply
$$
\Gamma(\mtx{f})(z) = \sum_{i=1}^n \mtx{A}_i^2.
$$
Thus, the expectation bound states that
\begin{equation*} \label{eqn:gauss-series_2}
\operatorname{\mathbbm{E}}_{\gamma_n} \lambda_{\max}(\mtx{f}(z)) \leq \sqrt{2 v_{\mtx{f}} \log d}
\quad\text{where}\quad
v_{\mtx{f}} = \norm{ \sum_{i=1}^n \mtx{A}_i^2 }.
\end{equation*}
Up to and including the constants, this matches the sharp bound that follows from
``linear'' matrix concentration techniques~\cite[Chapter 4]{tropp2015introduction}.
\end{example}
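As a quick empirical sanity check of the expectation bound in Example~\ref{ex:matrix-gauss}, one can sample a matrix Gaussian series and compare the average largest eigenvalue against $\sqrt{2 v_{\mtx{f}} \log d}$. The coefficient matrices below are arbitrary choices for this sketch.

```python
import numpy as np

# Monte Carlo check of E lambda_max(f) <= sqrt(2 * v * log d) for a matrix
# Gaussian series f(z) = sum_i Z_i A_i.  The A_i are arbitrary symmetric
# matrices chosen for this sketch.
rng = np.random.default_rng(1)
n, d = 5, 4
A = [(M + M.T) / 2 for M in rng.standard_normal((n, d, d))]

v = np.linalg.norm(sum(Ai @ Ai for Ai in A), 2)   # variance proxy ||sum A_i^2||
bound = np.sqrt(2 * v * np.log(d))

samples = rng.standard_normal((10000, n))
lmax = [np.linalg.eigvalsh(sum(z[i] * A[i] for i in range(n)))[-1]
        for z in samples]
```

The empirical mean of `lmax` stays below `bound`, consistent with the expectation bound; for generic coefficients, the gap is typically substantial, since the bound is attained only in extreme cases.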
Van Handel (private communication) has outlined an alternative proof of
Corollary~\ref{cor:log-concave_concentration} with slightly worse constants.
His approach uses Pisier's method~\cite[Thm.~2.2]{pisier1986probabilistic}
and the noncommutative Khintchine inequality~\cite{buchholz2001operator} to
obtain the statement for the standard normal measure. Then Caffarelli's
contraction theorem~\cite{Caf00:Monotonicity-Properties} implies that the
same bound holds for every log-concave measure whose potential is
uniformly strongly convex with $\eta \geq 1$. This approach is short
and conceptual, but it is more limited in scope.
\subsection{Riemannian measures}
\label{sec:riemann-exp}
As discussed in Section~\ref{sec:Riemannin_intro},
the Brownian motion on a Riemannian manifold with uniformly positive curvature
satisfies the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery}.
Therefore, we can apply both Theorem~\ref{thm:polynomial_moment} and
Theorem~\ref{thm:exponential_concentration}
in this setting. Let us give a few concrete examples of the
kind of results that can be derived with these methods.
\subsubsection{The sphere}
Consider the uniform distribution $\sigma_n$ on the $n$-dimensional unit sphere
$\mathbb{S}^n \subset \mathbbm{R}^{n+1}$ for $n \geq 2$. The Brownian motion on the sphere satisfies
the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} with $c = (n-1)^{-1}$.
Therefore, Theorem~\ref{thm:polynomial_moment} implies that, for any suitable function $\mtx{f} : \mathbb{S}^n \to \mathbb{H}_d$,
\[
\left[ \Expect_{\sigma_n} \operatorname{tr}|\mtx{f}-\Expect_{\sigma_n} \mtx{f}|^{2q}\right]^{1/(2q)}
\leq \sqrt{\frac{2q-1}{n-1}}\left[ \Expect_{\sigma_n} \operatorname{tr}\Gamma(\mtx{f})^q\right]^{1/(2q)},
\]
where the carr{\'e} du champ $\Gamma(\mtx{f})$ is defined by~\eqref{eqn:gamma_Riemannian_0}.
We can also obtain subgaussian tail bounds in terms of the variance proxy
$
v_{\mtx{f}} := \norm{ \norm{ \Gamma(\mtx{f}) } }_{L_{\infty}(\sigma_n)}.
$
Indeed, Theorem~\ref{thm:exponential_concentration} yields the bound
\[
\mathbb{P}_{\sigma_n}\left\{\lambda_{\max}(\mtx{f}-\Expect_{\sigma_n}\mtx{f})\geq t \right\}
\leq d\cdot \exp \left(\frac{-(n-1)t^2}{2v_{\mtx{f}}} \right)
\quad\text{for all $t \geq 0$.}
\]
To use these concentration inequalities, we need to compute the carr{\'e} du champ $\Gamma(\mtx{f})$ and bound the variance proxy $v_{\mtx{f}}$ for particular functions $\mtx{f}$.
We give two illustrations, postponing the detailed calculations to Section~\ref{sec:Riemannian_gamma}.
In each case, let $x = (x_1, \dots, x_{n+1}) \in \mathbb{S}^n$
be a random vector drawn from the uniform probability measure $\sigma_n$.
Suppose that $(\mtx{A}_1, \dots, \mtx{A}_{n+1}) \subset \mathbb{H}_d$ is a list of deterministic Hermitian matrices.
\begin{example}[Sphere I]\label{example:sphere_I}
Consider the random matrix $\mtx{f}(x) = \sum_{i=1}^{n+1}x_i\mtx{A}_i$. We can compute the carr{\'e} du champ as
\begin{equation}\label{eqn:gamma_sphere_I}
\Gamma(\mtx{f})(x) = \sum_{i=1}^{n+1}\mtx{A}_i^2 - \left(\sum_{i=1}^{n+1} x_i\mtx{A}_i\right)^2
\succcurlyeq \mtx{0}.
\end{equation}
Since $\big(\sum_{i=1}^{n+1} x_i\mtx{A}_i\big)^2 \succcurlyeq \mtx{0}$, we have $\Gamma(\mtx{f})(x) \preccurlyeq \sum_{i=1}^{n+1}\mtx{A}_i^2$ for all $x\in \mathbb{S}^n$, so the variance proxy $v_{\mtx{f}}\leq \norm{ \sum_{i=1}^{n+1}\mtx{A}_i^2 }$.
Compare this calculation with Example~\ref{ex:matrix-gauss},
where the coefficients follow the standard normal distribution.
For the sphere, the carr{\'e} du champ operator is
smaller because a finite-dimensional sphere has
slightly more curvature than the Gauss space.
\end{example}
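The two claims in Example~\ref{example:sphere_I}, that $\Gamma(\mtx{f})(x)$ is positive semidefinite and dominated by $\sum_i \mtx{A}_i^2$, can be checked numerically at random points of the sphere. The dimensions and matrices below are arbitrary choices for this sketch.

```python
import numpy as np

# Numerical check of the Sphere I claims: Gamma(f)(x) >= 0 and
# Gamma(f)(x) <= sum_i A_i^2 at every point x of the sphere.
rng = np.random.default_rng(2)
n, d = 6, 3
A = [(M + M.T) / 2 for M in rng.standard_normal((n + 1, d, d))]
S2 = sum(Ai @ Ai for Ai in A)

for _ in range(100):
    x = rng.standard_normal(n + 1)
    x /= np.linalg.norm(x)                        # a point on S^n
    Fx = sum(x[i] * A[i] for i in range(n + 1))
    G = S2 - Fx @ Fx                              # carre du champ at x
    assert np.linalg.eigvalsh(G).min() >= -1e-9        # Gamma >= 0
    assert np.linalg.eigvalsh(S2 - G).min() >= -1e-9   # Gamma <= sum A_i^2
```

The first assertion reflects the matrix Cauchy--Schwarz inequality $(\sum_i x_i \mtx{A}_i)^2 \preccurlyeq (\sum_i x_i^2)(\sum_i \mtx{A}_i^2)$ with $\sum_i x_i^2 = 1$.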
\begin{example}[Sphere II]\label{example:sphere_II}
Consider the random matrix $\mtx{f}(x) = \sum_{i=1}^{n+1}x_i^2\mtx{A}_i$. The carr{\'e} du champ admits the expression
\begin{equation}\label{eqn:gamma_sphere_II}
\Gamma(\mtx{f})(x) = 2\sum_{i,j=1}^{n+1}x_i^2x_j^2(\mtx{A}_i-\mtx{A}_j)^2.
\end{equation}
A simple bound shows that the variance proxy $v_{\mtx{f}} \leq 2 \max_{i, j} \norm{ \smash{\mtx{A}_i - \mtx{A}_j} }^2$.
It is possible to make further improvements in some cases.
\end{example}
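The closed form~\eqref{eqn:gamma_sphere_II} can be cross-checked against the intrinsic expression~\eqref{eqn:gamma_Riemannian_0}. For this sketch we assume the ambient-coordinate co-metric of the unit sphere, $g^{ij}(x) = \delta_{ij} - x_i x_j$, together with the ambient partial derivatives $\partial_k \mtx{f}(x) = 2 x_k \mtx{A}_k$; the dimensions and matrices are arbitrary.

```python
import numpy as np

# Cross-check of the Sphere II carre du champ: the displayed double sum
# should agree with sum_ij g^{ij} (d_i f)(d_j f), using (an assumption of
# this sketch) the ambient co-metric g^{ij}(x) = delta_ij - x_i x_j and
# the ambient derivatives d_k f = 2 x_k A_k.
rng = np.random.default_rng(3)
n, d = 4, 3
A = [(M + M.T) / 2 for M in rng.standard_normal((n + 1, d, d))]

x = rng.standard_normal(n + 1)
x /= np.linalg.norm(x)                            # a point on S^n

# displayed formula: 2 * sum_ij x_i^2 x_j^2 (A_i - A_j)^2
G1 = 2 * sum(x[i]**2 * x[j]**2 * (A[i] - A[j]) @ (A[i] - A[j])
             for i in range(n + 1) for j in range(n + 1))

# intrinsic formula: sum_k (d_k f)^2 - (sum_k x_k d_k f)^2
P = [2 * x[k] * A[k] for k in range(n + 1)]
T = sum(x[k] * P[k] for k in range(n + 1))
G2 = sum(Pk @ Pk for Pk in P) - T @ T
```

Expanding the double sum and using $\sum_i x_i^2 = 1$ shows that both expressions equal $4\sum_i x_i^2 \mtx{A}_i^2 - 4\big(\sum_i x_i^2 \mtx{A}_i\big)^2$, which the numerical check confirms.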
\subsubsection{The special orthogonal group}
The Riemannian manifold framework also encompasses matrix-valued functions of random orthogonal matrices.
For instance, suppose that $\mtx{O}_1, \dots, \mtx{O}_n \in \mathrm{SO}(d)$ are drawn independently
and uniformly from the Haar measure $\mu$ on the special orthogonal group $\mathrm{SO}(d)$.
As discussed in Section~\ref{sec:Riemannin_intro}, the Brownian motion on the product
space satisfies the Bakry--{\'E}mery criterion with constant $c = 4/(d-1)$.
In particular, if $\mtx{f} : \mathrm{SO}(d)^{\otimes n} \to \mathbb{H}_d$,
$$
\mathbb{P}_{\mu^{\otimes n}}\left\{ \lambda_{\max}(\mtx{f} - \Expect_{\mu^{\otimes n}} \mtx{f}) \geq t \right\}
\leq d \cdot \exp\left( \frac{-(d-1) t^2}{8 v_{\mtx{f}}} \right)
\quad\text{for all $t \geq 0$.}
$$
Here is a particular example where we can bound the variance proxy.
\begin{example}[Special orthogonal group]\label{example:SO_d}
Let $(\mtx{A}_1, \dots, \mtx{A}_n) \subset \mathbb{H}_d(\mathbb{R})$ be a fixed list of real, symmetric matrices.
Consider the random matrix $\mtx{f}(\mtx{O}_1, \dots, \mtx{O}_n) = \sum_{i=1}^n \mtx{O}_i \mtx{A}_i \mtx{O}_i^\mathsf{T}$.
The carr{\'e} du champ is
\begin{equation}\label{eqn:gamma_SO_d}
\Gamma(\mtx{f})(\mtx{O}_1, \dots, \mtx{O}_n) = \frac{1}{2}\sum_{i=1}^n\mtx{O}_i\left[ \left(\operatorname{tr}[\mtx{A}_i^2]-d^{-1}\operatorname{tr}[\mtx{A}_i]^2\right)\cdot\mathbf{I}_d + d\left(\mtx{A}_i-d^{-1}\operatorname{tr}[\mtx{A}_i]\cdot \mathbf{I}_d \right)^2\right] \mtx{O}_i^\mathsf{T}.
\end{equation}
Each matrix $\mtx{O}_i$ is orthogonal, so the variance proxy satisfies
\[
v_{\mtx{f}} \leq \frac{1}{2}\sum_{i=1}^n \left[ \operatorname{tr}[\mtx{A}_i^2]-d^{-1}\operatorname{tr}[\mtx{A}_i]^2 + d\cdot \norm{\mtx{A}_i - d^{-1}\operatorname{tr}[\mtx{A}_i]\cdot \mathbf{I}_d }^2 \right].
\]
The details of the calculation appear in Section~\ref{sec:Riemannian_gamma}.
\end{example}
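The variance proxy bound in Example~\ref{example:SO_d} follows from~\eqref{eqn:gamma_SO_d} by the triangle inequality, since orthogonal conjugation preserves the spectral norm. The following sketch confirms the bound numerically; the matrices $\mtx{A}_i$ are arbitrary, and the orthogonal points $\mtx{O}_i$ are sampled via QR factorization.

```python
import numpy as np

# Check that the stated variance proxy bound dominates ||Gamma(f)|| when
# Gamma is evaluated through the displayed formula at random orthogonal
# points O_i (sampled here by QR factorization of Gaussian matrices).
rng = np.random.default_rng(4)
n, d = 3, 4
A = [(M + M.T) / 2 for M in rng.standard_normal((n, d, d))]
I = np.eye(d)

def gamma(Os):
    G = np.zeros((d, d))
    for O, Ai in zip(Os, A):
        C = Ai - np.trace(Ai) / d * I                       # traceless part
        B = (np.trace(Ai @ Ai) - np.trace(Ai)**2 / d) * I + d * (C @ C)
        G += 0.5 * O @ B @ O.T
    return G

bound = 0.5 * sum(np.trace(Ai @ Ai) - np.trace(Ai)**2 / d
                  + d * np.linalg.norm(Ai - np.trace(Ai) / d * I, 2)**2
                  for Ai in A)

Os = [np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(n)]
```

Each summand of $\Gamma$ is a conjugate of a fixed matrix $\mtx{B}_i$, so $\norm{\Gamma(\mtx{f})} \leq \tfrac{1}{2}\sum_i \norm{\mtx{B}_i}$, which is exactly the quantity `bound`.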
\subsection{Extension to general rectangular matrices}
By a standard formal argument, we can extend the results in this section to a function $\mtx{h}:\Omega\rightarrow \mathbb{M}^{d_1\times d_2}$ that takes rectangular matrix values. To do so, we simply apply the theorems to the self-adjoint dilation
\[\mtx{f}(z) = \left[\begin{array}{cc}
\mtx{0} & \mtx{h}(z)\\
\mtx{h}(z)^*& \mtx{0} \end{array}\right] \in \mathbb{H}_{d_1+d_2}.\]
See~\cite{tropp2015introduction} for many examples of this methodology.
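The key property of the dilation is that its extreme eigenvalues are $\pm$ the singular values of $\mtx{h}$, so eigenvalue bounds for the Hermitian dilation transfer directly to norm bounds for the rectangular matrix. A minimal numerical sketch (with an arbitrary real $3 \times 5$ matrix):

```python
import numpy as np

# The self-adjoint dilation of a rectangular matrix h is Hermitian of
# size (d1 + d2), and its largest eigenvalue equals the spectral norm
# (largest singular value) of h.
rng = np.random.default_rng(5)
h = rng.standard_normal((3, 5))                  # arbitrary rectangular matrix

dil = np.block([[np.zeros((3, 3)), h],
                [h.T, np.zeros((5, 5))]])        # Hermitian dilation
```

Thus $\lambda_{\max}$ of the dilation equals $\norm{\mtx{h}}$, which is how the theorems for Hermitian matrices yield norm bounds in the rectangular case.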
\subsection{History}\label{sec:concentration_history}
Matrix concentration inequalities are noncommutative extensions of their scalar counterparts.
They have been studied extensively, and they have had a profound impact on a
wide range of areas in computational mathematics and statistics.
The models for which the most complete results are available
include a sum of independent random matrices~\cite{lust1986inegalites,rudelson1999random,oliveira2010sums,tropp2012user,huang2019generalized}
and a matrix-valued martingale sequence~\cite{pisier1997non,oliveira2009concentration,tropp2011freedman,junge2015noncommutative,howard2018exponential}.
We refer to the monograph \cite{tropp2015introduction} for an introduction and an extensive bibliography. Very recently, some concentration results for products of random matrices have
also been established~\cite{henriksen2020concentration,huang2020matrix}.
In recent years, many authors have sought concentration results for more general random matrix models.
One natural idea is to develop matrix versions of scalar concentration techniques based on functional inequalities
or based on Markov processes.
In the scalar setting, the subadditivity of the entropy plays a basic
role in obtaining modified log-Sobolev inequalities for product spaces,
a core ingredient in proving subgaussian concentration results.
Chen and Tropp \cite{chen2014subadditivity}
established the subadditivity of matrix trace entropy quantities.
Unfortunately, the approach in \cite{chen2014subadditivity}
requires awkward additional assumptions to derive matrix
concentration from modified log-Sobolev inequalities.
Cheng et al.~\cite{cheng2016characterizations,cheng2017exponential,cheng2019matrix}
have extended this line of research.
Mackey et al.~\cite{mackey2014,paulin2016efron} observed that the method
of exchangeable pairs~\cite{stein1972,stein1986approximate,chatterjee2005concentration}
leads to more satisfactory matrix concentration inequalities,
including matrix generalizations of the Efron--Stein--Steele inequality.
The argument in~\cite{paulin2016efron} can be viewed as a discrete version
of the semigroup approach that we use in this paper;
see Appendix~\ref{apdx:Stein_method} for more discussion.
Very recently, Aoun et al.~\cite{ABY20:Matrix-Poincare} showed how to derive
exponential matrix concentration inequalities from the matrix Poincar{\'e} inequality~\eqref{eqn:matrix_Poincare}.
Their approach is based on the classic iterative argument, due to
Aida \& Stroock~\cite{aida1994moment}, that operates in the scalar setting.
For matrices, it takes serious effort to implement this technique.
In our companion paper~\cite{HT20:Trace-Poincare}, we have shown that
a trace Poincar{\'e} inequality leads to stronger exponential concentration
results via an easier argument.
Another appealing contribution of the paper~\cite{ABY20:Matrix-Poincare} is to
establish the validity of a matrix Poincar\'e inequality for
particular matrix-valued Markov processes. Unfortunately,
Poincar{\'e} inequalities are apparently not strong
enough to capture subgaussian concentration.
In the scalar case, log-Sobolev inequalities lead to subgaussian concentration inequalities.
At present, it is not clear how to extend the theory of log-Sobolev inequalities to matrices,
and this obstacle has delayed progress on studying matrix concentration via functional inequalities.
In the scalar setting, one common technique for establishing a log-Sobolev inequality is
to prove that the Bakry--{\'E}mery criterion holds~\cite[Problem 3.19]{van550probability}.
Inspired by this observation, we have chosen to investigate the implications
of the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} for Markov semigroups acting on matrix-valued functions.
Our work demonstrates that this type of curvature condition allows us to establish matrix moment
bounds directly, without the intermediation of a log-Sobolev inequality.
As a consequence, we can obtain subgaussian and subgamma concentration
for nonlinear random matrix models.
After establishing the results in this paper, we discovered that Junge \& Zeng~\cite{junge2015noncommutative}
have also derived subgaussian matrix concentration inequalities from the (noncommutative) Bakry--{\'E}mery criterion.
Their approach is based on a noncommutative version of the Burkholder--Davis--Gundy inequality and a martingale argument that applies to a wider class of noncommutative diffusion semigroups acting on von Neumann algebras. As a consequence, their results apply to a larger family of examples, but the moment growth bounds are somewhat worse.
In contrast, our paper develops a direct argument for the
mixed classical-quantum semigroup~\eqref{eqn:semigroup}
that does not require any sophisticated tools from operator
theory or noncommutative probability.
Instead, we establish a new trace inequality (Lemma~\ref{lem:key_Gamma})
that mimics the chain rule for a scalar diffusion semigroup.
\section{Matrix Markov semigroups: Properties and proofs}
\label{sec:matrix_Markov_semigroups_more}
This section presents further fundamental facts about matrix Markov semigroups.
We also provide proofs of the propositions from Section~\ref{sec:matrix_Markov_semigroups}.
\subsection{Properties of the carr\'e du champ operator}
Our first proposition gives the matrix extension of some classic
facts about the carr\'e du champ operator $\Gamma$.
Parts of this result are adapted from~\cite[Prop.~2.2]{ABY20:Matrix-Poincare}.
\begin{proposition}[Matrix carr{\'e} du champ] \label{prop:Gamma_property}
Let $(Z_t)_{t \geq 0}$ be a Markov process.
The associated matrix bilinear form $\Gamma$ has the following properties:
\begin{enumerate}
\item \label{limit_formula}
For all suitable $\mtx{f},\mtx{g}: \Omega \rightarrow \mathbb{H}_d$ and all $z \in \Omega$,
\begin{equation}\label{eqn:limit_formula_Gamma}
\Gamma(\mtx{f},\mtx{g})(z) = \lim_{t\downarrow 0} \frac{1}{2t} \Expect\big[\big(\mtx{f}(Z_t)-\mtx{f}(Z_0)\big)\big(\mtx{g}(Z_t)-\mtx{g}(Z_0)\big)\,\big|\,Z_0=z\big].
\end{equation}
\item\label{gamma_psd}
In particular, the quadratic form $\mtx{f} \mapsto \Gamma(\mtx{f})$ is positive: $\Gamma(\mtx{f})\succcurlyeq \mtx{0}$.
\item\label{gamma_young}
For all suitable $\mtx{f},\mtx{g}: \Omega \rightarrow \mathbb{H}_d$ and all $s > 0$,
\[\Gamma(\mtx{f},\mtx{g}) + \Gamma(\mtx{g},\mtx{f})\preccurlyeq s \,\Gamma(\mtx{f}) + s^{-1}\,\Gamma(\mtx{g}).\]
\item \label{convexity}
The quadratic form induced by $\Gamma$ is operator convex:
\[\Gamma\big(\tau\mtx{f}+(1-\tau)\mtx{g}\big)\preccurlyeq \tau \,\Gamma(\mtx{f}) + (1-\tau)\,\Gamma(\mtx{g})\quad \text{for each}\ \tau\in[0,1].\]
\end{enumerate}
Similar results hold for the matrix Dirichlet form, owing
to the definition~\eqref{eqn:Dirichlet_form}.
\end{proposition}
\begin{proof}
\textit{Proof of \eqref{limit_formula}.}
The limit form of the carr\'e du champ can be verified with a short calculation:
\begin{align*}
\Gamma(\mtx{f},\mtx{g})(z) =&\ \lim_{t\downarrow 0}\frac{1}{2t} \big[ \Expect[\mtx{f}(Z_t)\mtx{g}(Z_t)\,|\,Z_0=z]-\mtx{f}(z)\mtx{g}(z) \big] \\
&\quad - \lim_{t\downarrow 0}\frac{1}{2t} \big[\mtx{f}(z)\big(\Expect[\mtx{g}(Z_t)\,|\,Z_0=z]-\mtx{g}(z)\big) \big] - \lim_{t\downarrow 0}\frac{1}{2t}\big[\big(\Expect[\mtx{f}(Z_t)\,|\,Z_0=z]-\mtx{f}(z)\big)\mtx{g}(z)\big] \\
=&\ \lim_{t\downarrow 0}\frac{1}{2t} \Expect\big[\mtx{f}(Z_t)\mtx{g}(Z_t) - \mtx{f}(z)\mtx{g}(Z_t) -\mtx{f}(Z_t)\mtx{g}(z) + \mtx{f}(z)\mtx{g}(z)\,|\,Z_0=z\big]\\
=&\ \lim_{t\downarrow 0}\frac{1}{2t} \Expect
\big[(\mtx{f}(Z_t)-\mtx{f}(Z_0))(\mtx{g}(Z_t)-\mtx{g}(Z_0))\,|\,Z_0=z\big].
\end{align*}
The first relation depends on the definition~\eqref{eqn:definition_Gamma} of $\Gamma$
and the definition~\eqref{eqn:Markov_generator} of $\mathcal{L}$.
\textit{Proof of \eqref{gamma_psd}.} The fact that $\mtx{f} \mapsto \Gamma(\mtx{f})$
is positive follows from~\eqref{limit_formula} because the square
of a Hermitian matrix is positive semidefinite and the expectation preserves positivity.
\textit{Proof of \eqref{gamma_young}.}
The Young inequality for the carr{\'e} du champ follows from the fact that $\Gamma$
is positive:
\[\mtx{0}\preccurlyeq \Gamma(s^{1/2}\mtx{f} - s^{-1/2}\mtx{g}) = s \, \Gamma(\mtx{f}) + s^{-1} \,\Gamma(\mtx{g}) - \Gamma(\mtx{f},\mtx{g}) - \Gamma(\mtx{g},\mtx{f}). \]
The second relation holds because $\Gamma$ is a bilinear form.
\textit{Proof of \eqref{convexity}.}
To establish operator convexity, we use bilinearity again:
\begin{align*}
\Gamma(\tau\mtx{f}+(1-\tau)\mtx{g}) &= \tau^2\,\Gamma(\mtx{f}) + (1-\tau)^2\,\Gamma(\mtx{g}) + \tau(1-\tau)\left(\Gamma(\mtx{f},\mtx{g}) + \Gamma(\mtx{g},\mtx{f})\right) \\
&\preccurlyeq \tau^2\,\Gamma(\mtx{f}) + (1-\tau)^2\,\Gamma(\mtx{g}) + \tau(1-\tau)\left(\Gamma(\mtx{f}) + \Gamma(\mtx{g})\right)
=\tau \,\Gamma(\mtx{f}) + (1-\tau)\,\Gamma(\mtx{g}).
\end{align*}
The first semidefinite inequality follows from~\eqref{gamma_young} with $s = 1$.
\end{proof}
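On a finite state space, where the generator acts entrywise as $(\mathcal{L}\mtx{f})(z) = \sum_w Q(z,w)\,\mtx{f}(w)$ for a rate matrix $Q$, the positivity property \eqref{gamma_psd} can be verified numerically. A minimal sketch under that standard finite-state assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, d = 3, 2

# Generator of a continuous-time chain: nonnegative off-diagonal rates,
# rows summing to zero.
Q = rng.random((n_states, n_states))
np.fill_diagonal(Q, 0)
np.fill_diagonal(Q, -Q.sum(axis=1))

def L(F):
    """Apply the generator entrywise: (Lf)(z) = sum_w Q[z, w] f(w).
    F has shape (n_states, d, d)."""
    return np.einsum('zw,wij->zij', Q, F)

def Gamma(F, G):
    """Matrix carre du champ: (1/2)[L(fg) - f L(g) - L(f) g]."""
    return 0.5 * (L(F @ G) - F @ L(G) - L(F) @ G)

F = rng.standard_normal((n_states, d, d))
F = F + F.transpose(0, 2, 1)            # symmetric matrix at every state

# positivity: Gamma(f) is positive semidefinite at every state
eigs = np.linalg.eigvalsh(Gamma(F, F))
assert eigs.min() > -1e-10
```

Indeed, in this setting $\Gamma(\mtx{f})(z) = \frac{1}{2}\sum_{w\neq z} Q(z,w)\,(\mtx{f}(w)-\mtx{f}(z))^2$, a nonnegative combination of squares of symmetric matrices.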
The next lemma is an extension of Proposition~\ref{prop:Gamma_property}\eqref{limit_formula}.
We use this result to establish the all-important chain rule inequality in Section~\ref{sec:trace_to_moment}.
\begin{lemma}[Triple product] \label{lem:three_limit} Let $(Z_t)_{t\geq0}$ be a reversible Markov process with a stationary measure $\mu$ and infinitesimal generator $\mathcal{L}$. For all suitable $\mtx{f},\mtx{g},\mtx{h}:\Omega \rightarrow \mathbb{H}_d$
and all $z \in \Omega$,
\begin{align*}
& \lim_{t\downarrow 0} \frac{1}{t} \operatorname{tr} \Expect\big[\big(\mtx{f}(Z_t)-\mtx{f}(Z_0)\big)\big(\mtx{g}(Z_t)-\mtx{g}(Z_0)\big)\big(\mtx{h}(Z_t)-\mtx{h}(Z_0)\big)\,\big|\,Z_0=z\big] \\
&\qquad= \operatorname{tr}\big[ \mathcal{L}(\mtx{f}\mtx{g}\mtx{h}) - \mathcal{L}(\mtx{f}\mtx{g})\mtx{h} - \mathcal{L}(\mtx{h}\mtx{f})\mtx{g} - \mathcal{L}(\mtx{g}\mtx{h})\mtx{f} + \mathcal{L}(\mtx{f})\mtx{g}\mtx{h}+ \mathcal{L}(\mtx{g})\mtx{h}\mtx{f} + \mathcal{L}(\mtx{h})\mtx{f}\mtx{g}\big](z).
\end{align*}
In particular,
\[\Expect_{Z\sim\mu} \lim_{t\downarrow 0} \frac{1}{t} \operatorname{tr} \Expect\big[\big(\mtx{f}(Z_t)-\mtx{f}(Z_0)\big)\big(\mtx{g}(Z_t)-\mtx{g}(Z_0)\big)\big(\mtx{h}(Z_t)-\mtx{h}(Z_0)\big)\,\big|\,Z_0=Z\big] = 0.\]
\end{lemma}
\begin{proof}
For simplicity, we abbreviate
\[\mtx{f}_t = \mtx{f}(Z_t),\quad \mtx{g}_t = \mtx{g}(Z_t),\quad \mtx{h}_t = \mtx{h}(Z_t)\quad\text{and}\quad \mtx{f}_0 = \mtx{f}(Z_0),\quad \mtx{g}_0 = \mtx{g}(Z_0),\quad \mtx{h}_0 = \mtx{h}(Z_0).\]
Direct calculation gives
\begin{align*}
& \lim_{t\downarrow 0} \frac{1}{t} \operatorname{tr} \Expect\left[\big(\mtx{f}(Z_t)-\mtx{f}(Z_0)\big)\big(\mtx{g}(Z_t)-\mtx{g}(Z_0)\big)\big(\mtx{h}(Z_t)-\mtx{h}(Z_0)\big) \,\big|\,Z_0=z\right] \\
&\quad= \lim_{t\downarrow 0} \frac{1}{t} \operatorname{tr} \Expect\left[ \mtx{f}_t\mtx{g}_t\mtx{h}_t - \mtx{f}_t\mtx{g}_t\mtx{h}_0 -\mtx{f}_t\mtx{g}_0\mtx{h}_t + \mtx{f}_t\mtx{g}_0\mtx{h}_0 -\mtx{f}_0\mtx{g}_t\mtx{h}_t + \mtx{f}_0\mtx{g}_t\mtx{h}_0 + \mtx{f}_0\mtx{g}_0\mtx{h}_t - \mtx{f}_0\mtx{g}_0\mtx{h}_0 \,\big|\, Z_0 = z\right]\\
&\quad = \lim_{t\downarrow 0} \frac{1}{t} \operatorname{tr} \Expect\big[ \big(\mtx{f}_t\mtx{g}_t\mtx{h}_t - \mtx{f}_0\mtx{g}_0\mtx{h}_0\big) - \big((\mtx{f}_t\mtx{g}_t - \mtx{f}_0\mtx{g}_0)\mtx{h}_0\big) - \big((\mtx{h}_t\mtx{f}_t - \mtx{h}_0\mtx{f}_0)\mtx{g}_0\big) + \big((\mtx{f}_t - \mtx{f}_0)\mtx{g}_0\mtx{h}_0\big) \\
&\qquad\qquad\qquad\qquad -\ \big((\mtx{g}_t\mtx{h}_t - \mtx{g}_0\mtx{h}_0)\mtx{f}_0\big) + \big((\mtx{g}_t - \mtx{g}_0)\mtx{h}_0\mtx{f}_0\big) + \big((\mtx{h}_t - \mtx{h}_0)\mtx{f}_0\mtx{g}_0\big) \,\big|\,Z_0=z \big]\\
&\quad= \operatorname{tr}\big[ \mathcal{L}(\mtx{f}\mtx{g}\mtx{h})(z) - \mathcal{L}(\mtx{f}\mtx{g})(z)\mtx{h}(z) - \mathcal{L}(\mtx{h}\mtx{f})(z)\mtx{g}(z) - \mathcal{L}(\mtx{g}\mtx{h})(z)\mtx{f}(z) \\
& \qquad\qquad\quad +\ \mathcal{L}(\mtx{f})(z)\mtx{g}(z)\mtx{h}(z) + \mathcal{L}(\mtx{g})(z)\mtx{h}(z)\mtx{f}(z) + \mathcal{L}(\mtx{h})(z)\mtx{f}(z)\mtx{g}(z)\big].
\end{align*}
We have applied the cyclic property of the trace.
Using the reversibility~\eqref{eqn:reversibility_2} of the Markov process
and the zero-mean property~\eqref{eqn:mean_zero} of the infinitesimal generator,
we have
\begin{align*}
& \Expect_\mu \operatorname{tr}\left[ \mathcal{L}(\mtx{f}\mtx{g}\mtx{h}) - \mathcal{L}(\mtx{f}\mtx{g})\mtx{h} - \mathcal{L}(\mtx{h}\mtx{f})\mtx{g} - \mathcal{L}(\mtx{g}\mtx{h})\mtx{f} + \mathcal{L}(\mtx{f})\mtx{g}\mtx{h}+ \mathcal{L}(\mtx{g})\mtx{h}\mtx{f} + \mathcal{L}(\mtx{h})\mtx{f}\mtx{g}\right]\\
&\quad= \operatorname{tr}\left[ \Expect_\mu[\mathcal{L}(\mtx{f}\mtx{g}\mtx{h})] - \Expect_\mu[\mathcal{L}(\mtx{f}\mtx{g})\mtx{h} - \mtx{f}\mtx{g}\mathcal{L}(\mtx{h})] -\Expect_\mu[\mathcal{L}(\mtx{h}\mtx{f})\mtx{g} - \mtx{h}\mtx{f}\mathcal{L}(\mtx{g})] - \Expect_\mu[\mathcal{L}(\mtx{g}\mtx{h})\mtx{f} - \mtx{g}\mtx{h}\mathcal{L}(\mtx{f})] \right] \\
&\quad= 0.
\end{align*}
This establishes the second statement of the lemma.
\end{proof}
\subsection{Reversibility}
\label{sec:reversibility-pf}
In this section, we establish Proposition~\ref{prop:reversibility}, which states that
reversibility of the semigroup~\eqref{eqn:semigroup}
on real-valued functions
is equivalent to the reversibility of the semigroup
on matrix-valued functions. The pattern of argument
was suggested to us by Ramon van Handel, and it will
be repeated below in the proofs that certain functional
inequalities for real-valued functions are equivalent
to functional inequalities for matrix-valued functions.
\begin{proof}[Proof of Proposition~\ref{prop:reversibility}]
The implication that matrix reversibility~\eqref{eqn:reversibility_1}
for all $d \in \mathbbm{N}$ implies scalar reversibility is obvious: just take $d = 1$.
To check the converse,
we require an elementary identity.
For all vectors $\vct{u},\vct{v}\in \mathbb{C}^d$ and all matrices $\mtx{A},\mtx{B}\in \mathbb{H}_d$,
\begin{align}
\vct{u}^*(\mtx{A}\mtx{B})\vct{v}
&= \sum_{j=1}^d (\vct{u}^*\mtx{A}\mathbf{e}_j)(\mathbf{e}_j^*\mtx{B}\vct{v}) =: \sum_{j=1}^da_j\bar{b}_j\\
&= \sum_{j=1}^d\left[\operatorname{Re}(a_j)\operatorname{Re}(b_j) + \operatorname{Im}(a_j)\operatorname{Im}(b_j) - \mathrm{i}\operatorname{Re}(a_j)\operatorname{Im}(b_j) + \mathrm{i}\operatorname{Im}(a_j)\operatorname{Re}(b_j)\right]. \label{eqn:Ramon}
\end{align}
We have defined $a_j:= \vct{u}^*\mtx{A}\mathbf{e}_j$ and $b_j:= \vct{v}^*\mtx{B}\mathbf{e}_j$ for each $j=1,\dots,d$.
As usual, $(\mathbf{e}_j : 1 \leq j \leq d)$ is the standard basis for $\mathbbm{C}^d$.
Now, consider two matrix-valued functions $\mtx{f},\mtx{g}:\Omega\rightarrow\mathbb{H}_d$. Introduce the scalar functions $f_j := \vct{u}^*\mtx{f}\mathbf{e}_j$ and $g_j := \vct{v}^*\mtx{g}\mathbf{e}_j$ for each $j=1,\dots,d$.
The definition~\eqref{eqn:semigroup} of the semigroup $(P_t)_{t\geq0}$ as an expectation ensures that
\[\vct{u}^*(P_t\mtx{f})\mathbf{e}_j = P_tf_j = P_t(\operatorname{Re}(f_j)) + \mathrm{i}\,P_t(\operatorname{Im}(f_j)) = \operatorname{Re}(P_t f_j) + \mathrm{i} \operatorname{Im}(P_tf_j).\]
The parallel statement holds for $\vct{v}^*(P_t\mtx{g})\mathbf{e}_j$.
Therefore, we can use formula \eqref{eqn:Ramon} to compute that
\begin{align*}
& \vct{u}^*\Expect_\mu [(P_t\mtx{f}) \, \mtx{g}]\vct{v} \\
&\quad = \sum_{j=1}^d \Expect_\mu [\vct{u}^*(P_t\mtx{f})\mathbf{e}_j\mathbf{e}_j^*\mtx{g}\vct{v}] = \sum_{j=1}^d \Expect_\mu [(P_tf_j)\,\bar{g}_j]\\
&\quad = \sum_{j=1}^d\Expect_\mu \left[(P_t\operatorname{Re}(f_j))\operatorname{Re}(g_j) + (P_t\operatorname{Im}(f_j))\operatorname{Im}(g_j) - \mathrm{i}(P_t\operatorname{Re}(f_j))\operatorname{Im}(g_j) + \mathrm{i}(P_t\operatorname{Im}(f_j))\operatorname{Re}(g_j)\right]\\
&\quad = \sum_{j=1}^d\Expect_\mu \left[\operatorname{Re}(f_j)(P_t\operatorname{Re}(g_j)) + \operatorname{Im}(f_j)(P_t\operatorname{Im}(g_j)) - \mathrm{i}\operatorname{Re}(f_j)(P_t\operatorname{Im}(g_j)) + \mathrm{i}\operatorname{Im}(f_j)(P_t\operatorname{Re}(g_j))\right]\\
&\quad= \sum_{j=1}^d \Expect_\mu [f_j\,(P_t\bar{g}_j)] = \sum_{j=1}^d \Expect_\mu [\vct{u}^*\mtx{f}\mathbf{e}_j\mathbf{e}_j^*(P_t\mtx{g})\vct{v}]
= \vct{u}^*\Expect_\mu [\mtx{f} \, (P_t\mtx{g})]\vct{v}.
\end{align*}
The matrix identity \eqref{eqn:reversibility_1} follows immediately because $\vct{u},\vct{v}\in \mathbb{C}^d$
are arbitrary.
\end{proof}
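The equivalence can be illustrated numerically on a finite state space, where a symmetric rate matrix yields a chain that is reversible with respect to the uniform measure and the semigroup is the matrix exponential $P_t = \mathrm{e}^{tQ}$ applied entrywise. A sketch under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, t = 4, 2, 0.5

# symmetric rate matrix => uniform stationary law and a reversible chain
M = rng.random((n, n))
Q = (M + M.T) / 2
np.fill_diagonal(Q, 0)
np.fill_diagonal(Q, -Q.sum(axis=1))

evals, V = np.linalg.eigh(Q)
P_t = V @ np.diag(np.exp(t * evals)) @ V.T      # semigroup e^{tQ}

def herm(A):
    return A + A.conj().transpose(0, 2, 1)      # Hermitian at each state

F = herm(rng.standard_normal((n, d, d)) + 1j * rng.standard_normal((n, d, d)))
G = herm(rng.standard_normal((n, d, d)) + 1j * rng.standard_normal((n, d, d)))

apply_P = lambda P, F: np.einsum('zw,wij->zij', P, F)
lhs = (apply_P(P_t, F) @ G).mean(axis=0)   # E_mu[(P_t f) g], mu uniform
rhs = (F @ apply_P(P_t, G)).mean(axis=0)   # E_mu[f (P_t g)]
assert np.allclose(lhs, rhs)
```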
\subsection{Dimension reduction}
The following lemma explains how to relate the carr{\'e} du champ operator of a matrix-valued function to the carr{\'e} du champ operators of some scalar functions. It will help us transform the scalar Poincar{\'e} inequality and the scalar Bakry--{\'E}mery criterion to their matrix equivalents.
\begin{lemma}[Dimension reduction of carr{\'e} du champ]\label{lem:dimension_reduction}
Let $(P_t)_{t \geq 0}$ be the semigroup defined in~\eqref{eqn:semigroup}.
The carr{\'e} du champ operator $\Gamma$ and the iterated carr{\'e} du champ operator $\Gamma_2$ satisfy
\begin{align}
\vct{u}^*\Gamma(\mtx{f})\vct{u} &= \sum_{j=1}^d\left(\Gamma\big(\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big) + \Gamma\big(\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big)\right); \label{eqn:scalar_gamma}\\
\vct{u}^*\Gamma_2(\mtx{f})\vct{u} &= \sum_{j=1}^d\left(\Gamma_2\big(\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big) + \Gamma_2\big(\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big)\right). \label{eqn:scalar_gamma2}
\end{align}
These formulae hold for all $d \in \mathbbm{N}$, for all suitable functions $\mtx{f}:\Omega\to \mathbb{H}_d$, and for all vectors $\vct{u}\in\mathbb{C}^d$. \end{lemma}
\begin{proof}
The definition~\eqref{eqn:Markov_generator} of $\mathcal{L}$ implies that
\[\vct{u}^*\mathcal{L}(\mtx{f})\vct{v} = \mathcal{L}(\vct{u}^*\mtx{f}\vct{v}) = \mathcal{L}(\operatorname{Re}(\vct{u}^*\mtx{f}\vct{v})) + \mathrm{i}\cdot \mathcal{L}(\operatorname{Im}(\vct{u}^*\mtx{f}\vct{v})).\]
Introduce the scalar function $f_j := \vct{u}^*\mtx{f}\mathbf{e}_j$ for each $j=1,\dots,d$.
Then we can use the definition~\eqref{eqn:definition_Gamma} of $\Gamma$ and formula \eqref{eqn:Ramon} to compute that
\begin{align*}
\vct{u}^*\Gamma(\mtx{f})\vct{u} &= \frac{1}{2}\left(\vct{u}^*\mathcal{L}(\mtx{f}^2)\vct{u} - \vct{u}^*\mtx{f}\mathcal{L}(\mtx{f})\vct{u} - \vct{u}^*\mathcal{L}(\mtx{f})\mtx{f}\vct{u}\right)\\
&= \frac{1}{2}\sum_{j=1}^d\left(\vct{u}^*\mathcal{L}(\mtx{f}\mathbf{e}_j\mathbf{e}_j^*\mtx{f})\vct{u} - \vct{u}^*\mtx{f}\mathbf{e}_j\mathbf{e}_j^*\mathcal{L}(\mtx{f})\vct{u} - \vct{u}^*\mathcal{L}(\mtx{f})\mathbf{e}_j\mathbf{e}_j^*\mtx{f}\vct{u}\right)\\
&= \frac{1}{2}\sum_{j=1}^d\left(\mathcal{L}(f_j\,\bar{f}_j) - f_j\,\mathcal{L}(\bar{f}_j) - \mathcal{L}(f_j)\,\bar{f}_j\right)\\
&= \frac{1}{2}\sum_{j=1}^d\left(\mathcal{L}(\operatorname{Re}(f_j)^2) + \mathcal{L}(\operatorname{Im}(f_j)^2) - 2\operatorname{Re}(f_j)\,\mathcal{L}(\operatorname{Re}(f_j)) - 2\operatorname{Im}(f_j)\,\mathcal{L}(\operatorname{Im}(f_j)) \right)\\
&= \sum_{j=1}^d\left(\Gamma(\operatorname{Re}(f_j)) + \Gamma(\operatorname{Im}(f_j))\right).
\end{align*}
This is the first identity~\eqref{eqn:scalar_gamma}. The second identity \eqref{eqn:scalar_gamma2} follows from a similar argument based on the definition~\eqref{eqn:definition_Gamma2} of $\Gamma_2$ and the relation \eqref{eqn:scalar_gamma}.
\end{proof}
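The identity \eqref{eqn:scalar_gamma} can be checked numerically in the finite-state setting, where the generator acts entrywise through a rate matrix $Q$ and the scalar carr{\'e} du champ of a real function $g$ is $\Gamma(g) = \frac{1}{2}[\mathcal{L}(g^2) - 2g\,\mathcal{L}(g)]$. A sketch under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 4, 3

Q = rng.random((n, n))
np.fill_diagonal(Q, 0)
np.fill_diagonal(Q, -Q.sum(axis=1))

def L_mat(F):
    """Generator applied entrywise; F has shape (n, d, d)."""
    return np.einsum('zw,wij->zij', Q, F)

def Gamma_mat(F):
    return 0.5 * (L_mat(F @ F) - F @ L_mat(F) - L_mat(F) @ F)

def Gamma_scal(g):
    """Scalar carre du champ of a real function g on the state space."""
    return 0.5 * (Q @ (g * g) - 2 * g * (Q @ g))

A = rng.standard_normal((n, d, d)) + 1j * rng.standard_normal((n, d, d))
F = A + A.conj().transpose(0, 2, 1)        # Hermitian at each state
u = rng.standard_normal(d) + 1j * rng.standard_normal(d)

# left-hand side of (scalar_gamma): u* Gamma(f) u, state by state
lhs = np.einsum('i,zij,j->z', u.conj(), Gamma_mat(F), u).real
# right-hand side: scalar carre du champs of Re/Im of f_j = u* f e_j
fj = [F[:, :, j] @ u.conj() for j in range(d)]
rhs = sum(Gamma_scal(f.real) + Gamma_scal(f.imag) for f in fj)
assert np.allclose(lhs, rhs)
```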
\subsection{Equivalence of scalar and matrix inequalities}
\label{sec:scalar_matrix}
In this section, we verify Proposition~\ref{prop:poincare_equiv} and Proposition~\ref{prop:BE_equiv}.
These results state that functional inequalities for the action of the semigroup~\eqref{eqn:semigroup}
on real-valued functions induce functional inequalities for its action on matrix-valued functions.
\begin{proof}[Proof of Proposition~\ref{prop:poincare_equiv}]
It is evident that the validity of the matrix Poincar\'e inequality \eqref{Poincare_inequality_matrix} for all $d \in \mathbbm{N}$ implies the scalar Poincar\'e inequality \eqref{Poincare_inequality_scalar}, which is simply the $d = 1$ case. For the reverse implication, we invoke formula \eqref{eqn:Ramon} to learn that
\[
\vct{u}^*\mVar_\mu[\mtx{f}]\vct{u} = \sum_{j=1}^d\left(\Var_\mu\big[\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big]+\Var_\mu\big[\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big]\right).
\]
Moreover, we can take the expectation $\Expect_\mu$ of formula \eqref{eqn:scalar_gamma} to obtain
\[
\vct{u}^*\mathcal{E}(\mtx{f})\vct{u} = \sum_{j=1}^d\left(\mathcal{E}\big(\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big)+\mathcal{E}\big(\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big)\right).
\]
Applying the scalar Poincar{\'e} inequality \eqref{Poincare_inequality_scalar} to the real scalar functions $\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)$ and $\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)$, we obtain \[\vct{u}^*\mVar_\mu[\mtx{f}]\vct{u} \leq \alpha\cdot \vct{u}^*\mathcal{E}(\mtx{f})\vct{u}\quad \text{for all $\vct{u}\in \mathbb{C}^d$}.\]
This immediately implies the matrix Poincar{\'e} inequality \eqref{Poincare_inequality_matrix}.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:BE_equiv}]
It is evident that the validity of the matrix Bakry--{\'E}mery criterion~\eqref{Bakry-Emery_criterion_matrix} for all $d \in \mathbbm{N}$ implies the validity of the scalar criterion~\eqref{Bakry-Emery_criterion_scalar}, as we only need to set $d = 1$.
To develop the reverse implication, we use Lemma~\ref{lem:dimension_reduction} to compute that
\begin{align*}
\vct{u}^*\Gamma(\mtx{f})\vct{u} &= \sum_{j=1}^d\left(\Gamma\big(\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big) + \Gamma\big(\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big)\right)\\
&\leq c \sum_{j=1}^d\left(\Gamma_2\big(\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big) + \Gamma_2\big(\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)\big)\right)\\
&= c\cdot \vct{u}^*\Gamma_2(\mtx{f})\vct{u}.
\end{align*}
The inequality follows by applying \eqref{Bakry-Emery_criterion_scalar} to the real scalar functions $\operatorname{Re}(\vct{u}^*\mtx{f}\mathbf{e}_j)$ and $\operatorname{Im}(\vct{u}^*\mtx{f}\mathbf{e}_j)$ for each $j=1,\dots,d$.
Since $\vct{u} \in \mathbbm{C}^d$ is arbitrary, we immediately obtain \eqref{Bakry-Emery_criterion_matrix}.
\end{proof}
\subsection{Derivative formulas}
A standard way to establish the equivalence between the Poincar\'e inequality and the exponential ergodicity property is by studying derivatives with respect to the time parameter $t$. The following result, extending~\cite[Lemma 2.3]{ABY20:Matrix-Poincare}, calculates the derivatives of the matrix variance and the Dirichlet form along the semigroup $(P_t)_{t\geq0}$. The result parallels the scalar case.
\begin{lemma}[Dissipation of variance and energy] \label{lem:derivative_formula}
Let $(P_t)_{t\geq 0}$ be a Markov semigroup with stationary measure $\mu$,
infinitesimal generator $\mathcal{L}$, and Dirichlet form $\mathcal{E}$.
For all suitable $\mtx{f}:\Omega\rightarrow \mathbb{H}_d$,
\begin{equation}\label{eqn:variance_derivative}
\frac{\diff{} }{\diff t}\mVar_\mu[P_t\mtx{f}] = -2\mathcal{E}(P_t\mtx{f})\quad \text{for all $t>0$}.
\end{equation}
Moreover, if the semigroup is reversible,
\begin{equation}\label{eqn:energy_derivative}
\frac{\diff{} }{\diff t}\mathcal{E}(P_t\mtx{f}) = -2\Expect_\mu\big[(\mathcal{L}(P_t\mtx{f}))^2\big]\quad \text{for all $t>0$}.
\end{equation}
\end{lemma}
\begin{proof}
By the definition~\eqref{eqn:matrix_variance} of the matrix variance and the stationarity property $\Expect_{\mu} P_t = \Expect_\mu$, we can calculate that
\begin{align*}
\frac{\diff{} }{\diff t}\mVar_\mu[P_t\mtx{f}] = \frac{\diff{} }{\diff t}\big[\Expect_\mu (P_t\mtx{f})^2 - (\Expect_\mu\mtx{f})^2\big]
=\Expect_\mu\big[\mathcal{L}(P_t\mtx{f})(P_t\mtx{f}) + (P_t\mtx{f})\mathcal{L}(P_t\mtx{f})\big]
= -2\mathcal{E}(P_t\mtx{f}).
\end{align*}
The second equality above uses the derivative relation \eqref{eqn:derivative_relation} for the generator,
and the third equality is the expression~\eqref{eqn:Dirichlet_expression_1} for the Dirichlet form.
Similarly, we can calculate that
\begin{align*}
\frac{\diff{} }{\diff t} \mathcal{E}(P_t\mtx{f})
&= - \frac{\diff{} }{\diff t} \Expect_\mu\big[(P_t\mtx{f})\mathcal{L}(P_t\mtx{f})\big]\\
&= - \Expect_\mu\big[\mathcal{L}(P_t\mtx{f})\mathcal{L}(P_t\mtx{f}) + (P_t\mtx{f})\mathcal{L}(\mathcal{L}(P_t\mtx{f}))\big]
= - 2\Expect_\mu\big[(\mathcal{L}(P_t\mtx{f}))^2\big].
\end{align*}
The first equality is \eqref{eqn:Dirichlet_expression_2}. The last equality
holds because $\mathcal{L}$ is symmetric.
\end{proof}
The matrix Poincar\'e inequality \eqref{eqn:matrix_Poincare} allows us to convert the derivative formulas
in Lemma~\ref{lem:derivative_formula} into differential inequalities for matrix-valued functions.
The next lemma gives the solution to these differential inequalities.
\begin{lemma}[Differential matrix inequality] \label{lem:matrix_differential_inequality}
Assume that $\mtx{A}:[0,+\infty) \rightarrow \mathbb{H}_d$ is a differentiable
matrix-valued function that satisfies the differential inequality
\[\frac{\diff{} }{\diff t}\mtx{A}(t) \preccurlyeq \nu \cdot \mtx{A}(t) \quad \text{for all $t > 0$,}\]
where $\nu \in \mathbb{R}$ is a constant. Then
\[\mtx{A}(t) \preccurlyeq \mathrm{e}^{\nu t}\cdot\mtx{A}(0)\quad \text{for all $t\geq 0$}. \]
\end{lemma}
\begin{proof}
Consider the matrix-valued function $\mtx{B}(t):= \mathrm{e}^{-\nu t} \mtx{A}(t)$ for $t \geq 0$.
Then $\mtx{B}(0) = \mtx{A}(0)$, and
\[\frac{\diff{} }{\diff t}\mtx{B}(t) = \mathrm{e}^{-\nu t}\frac{\diff{} }{\diff t}\mtx{A}(t) - \nu \mathrm{e}^{-\nu t} \mtx{A}(t)\preccurlyeq \mtx{0}. \]
Since integration preserves the semidefinite order,
\[\mathrm{e}^{-\nu t}\mtx{A}(t) = \mtx{B}(t) \preccurlyeq \mtx{B}(0) = \mtx{A}(0).\]
Multiply by $\mathrm{e}^{\nu t}$ to arrive at the stated result.
\end{proof}
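The conclusion of the lemma can be checked by a crude forward-Euler integration: if the derivative of $\mtx{A}$ stays below $\nu\,\mtx{A}$ in the semidefinite order, the trajectory stays below $\mathrm{e}^{\nu t}\mtx{A}(0)$. A numerical sketch (the discretization and tolerance are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
d, nu, dt, n_steps = 3, 0.7, 1e-3, 1000

def rand_psd():
    M = rng.standard_normal((d, d))
    return M @ M.T                          # random positive semidefinite

A0 = rand_psd()
A = A0.copy()
for _ in range(n_steps):
    # dA/dt = nu*A - R(t) with R(t) psd, so dA/dt <= nu*A in the psd order
    A = A + dt * (nu * A - rand_psd())

t = n_steps * dt
slack = np.linalg.eigvalsh(np.exp(nu * t) * A0 - A)
assert slack.min() > -1e-8                  # A(t) <= e^{nu t} A(0)
```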
\subsection{Consequences of the Poincar\'e inequality}
\label{sec:equivalence_Poincare}
This section contains the proof of Proposition~\ref{prop:matrix_poincare}, the equivalence between the matrix Poincar\'e inequality and exponential ergodicity properties. This proof is adapted from its scalar analog~\cite[Theorem 2.18]{van550probability}.
\begin{proof}[Proof of Proposition~\ref{prop:matrix_poincare}]
\textit{Proof that \eqref{Poincare_inequality} $\Rightarrow$ \eqref{variance_convergence}.}
To see that the matrix Poincar{\'e} inequality~\eqref{Poincare_inequality}
implies exponential ergodicity~\eqref{variance_convergence} of the variance,
combine Lemma~\ref{lem:derivative_formula} with the matrix Poincar{\'e}
inequality to obtain a differential inequality:
\[\frac{\diff{} }{\diff t}\mVar_\mu[P_t\mtx{f}] = -2\mathcal{E}(P_t\mtx{f}) \preccurlyeq -\frac{2}{\alpha} \mVar_\mu[P_t\mtx{f}].\]
Lemma~\ref{lem:matrix_differential_inequality} gives the solution:
\[\mVar_\mu[P_t\mtx{f}] \preccurlyeq \mathrm{e}^{-2t/\alpha} \mVar_\mu[P_0\mtx{f}] = \mathrm{e}^{-2t/\alpha} \mVar_\mu[\mtx{f}].\]
This is the ergodicity of the variance.
\textit{Proof that \eqref{variance_convergence} $\Rightarrow$ \eqref{Poincare_inequality}.}
To obtain the matrix Poincar{\'e} inequality~\eqref{Poincare_inequality} from exponential ergodicity~\eqref{variance_convergence}
of the variance, use the derivative~\eqref{eqn:variance_derivative} of the variance
and the fact that $P_0$ is the identity map to see that
\[\mathcal{E}(\mtx{f}) = \lim_{t\downarrow0} \frac{\mVar_\mu[\mtx{f}]- \mVar_\mu[P_t\mtx{f}]}{2t} \succcurlyeq \lim_{t\downarrow0} \frac{1-\mathrm{e}^{-2t/\alpha}}{2t} \cdot \mVar_\mu[\mtx{f}] = \frac{1}{\alpha} \mVar_\mu[\mtx{f}].\]
The inequality follows from \eqref{variance_convergence}.
\textit{Proof that \eqref{Poincare_inequality} $\Rightarrow$ \eqref{energy_convergence} under reversibility.}
Next, we argue that the matrix Poincar\'e inequality \eqref{Poincare_inequality}
implies exponential ergodicity~\eqref{energy_convergence} of the energy, assuming that the semigroup is reversible.
In this case, the zero-mean property \eqref{eqn:mean_zero} implies that
$\Expect_\mu[\mtx{g}\mathcal{L}(\mtx{f})] = \Expect_\mu[(\mtx{g}-\Expect_\mu\mtx{g})\mathcal{L}(\mtx{f})]$ and $\Expect_\mu[\mathcal{L}(\mtx{f})\mtx{g}] = \Expect_\mu[\mathcal{L}(\mtx{f})(\mtx{g}-\Expect_\mu\mtx{g})]$ for all suitable $\mtx{f},\mtx{g}$. Therefore,
\begin{align*}
\mathcal{E}(\mtx{f}) &= - \frac{1}{2}\Expect_\mu\left[\mtx{f}\mathcal{L}(\mtx{f})+\mathcal{L}(\mtx{f})\mtx{f}\right] = - \frac{1}{2}\Expect_\mu\left[(\mtx{f}-\Expect_\mu\mtx{f})\mathcal{L}(\mtx{f}) + \mathcal{L}(\mtx{f})(\mtx{f}-\Expect_\mu\mtx{f})\right]\\
&\preccurlyeq \frac{1}{2\alpha}\Expect_\mu \big[(\mtx{f}-\Expect_\mu\mtx{f})^2\big] + \frac{\alpha}{2}\Expect_\mu \big[\mathcal{L}(\mtx{f})^2\big] \preccurlyeq \frac{1}{2} \mathcal{E}(\mtx{f}) + \frac{\alpha}{2}\Expect_\mu \big[\mathcal{L}(\mtx{f})^2\big].
\end{align*}
The first inequality holds because $\mtx{A}\mtx{B} + \mtx{B}\mtx{A}\preccurlyeq \mtx{A}^2+\mtx{B}^2$ for all $\mtx{A},\mtx{B}\in \mathbb{H}_d$, and the second follows from the matrix Poincar{\'e} inequality~\eqref{Poincare_inequality}. Rearranging, we obtain the relation $\mathcal{E}(\mtx{f})\preccurlyeq \alpha \Expect_\mu [\mathcal{L}(\mtx{f})^2]$ for all suitable $\mtx{f}$. Combine this fact with the derivative formula \eqref{eqn:energy_derivative} to reach
\[\frac{\diff{} }{\diff t} \mathcal{E}(P_t\mtx{f}) = - 2\Expect_\mu\big[\mathcal{L}(P_t\mtx{f})^2\big] \preccurlyeq - \frac{2}{\alpha} \mathcal{E}(P_t\mtx{f}).\]
Lemma~\ref{lem:matrix_differential_inequality} gives the solution to the differential inequality:
\[\mathcal{E}(P_t\mtx{f})\preccurlyeq \mathrm{e}^{-2t/\alpha} \mathcal{E}(P_0\mtx{f}) = \mathrm{e}^{-2t/\alpha} \mathcal{E}(\mtx{f}).\]
This is the ergodicity of the energy.
\textit{Proof that \eqref{energy_convergence} $\Rightarrow$ \eqref{Poincare_inequality} under ergodicity.}
To see that exponential ergodicity~\eqref{energy_convergence} of the energy implies the matrix Poincar\'e inequality \eqref{Poincare_inequality} when the semigroup is ergodic, we combine \eqref{energy_convergence} with the derivative~\eqref{eqn:variance_derivative}
of the variance to obtain
\[\frac{\diff{} }{\diff t}\mVar_\mu[P_t\mtx{f}] = -2\mathcal{E}(P_t\mtx{f}) \succcurlyeq -2\mathrm{e}^{-2t/\alpha}\mathcal{E}(\mtx{f}).\]
Using the ergodicity assumption~\eqref{eqn:ergodicity} on the semigroup, we have
\begin{align*}
\mVar_\mu[\mtx{f}] &= \mVar_\mu[P_0\mtx{f}] - \lim_{t\rightarrow\infty}\mVar_\mu[P_t\mtx{f}] = -\int_0^\infty \frac{\diff{} }{\diff t}\mVar_\mu[P_t\mtx{f}] \idiff t \\
&\preccurlyeq 2\int_0^\infty\mathrm{e}^{-2t/\alpha} \idiff t \cdot \mathcal{E}(\mtx{f}) = \alpha \mathcal{E}(\mtx{f}).
\end{align*}
The first equality follows from the ergodicity relation
\[\lim_{t\rightarrow\infty}\mVar_\mu[P_t\mtx{f}] = \lim_{t\rightarrow\infty}\Expect_\mu(P_t\mtx{f}-\Expect_\mu\mtx{f})^2 = \mtx{0}.\]
This completes the proof of Proposition~\ref{prop:matrix_poincare}.
\end{proof}
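The implication \eqref{Poincare_inequality} $\Rightarrow$ \eqref{variance_convergence} can be observed numerically for a finite reversible chain, using the standard fact that such a chain satisfies a scalar (hence, by Proposition~\ref{prop:poincare_equiv}, a matrix) Poincar\'e inequality with $\alpha$ equal to the inverse spectral gap of the generator. A sketch under these assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, t = 5, 2, 0.8

# symmetric rate matrix => uniform stationary law, reversible chain
M = rng.random((n, n))
Q = (M + M.T) / 2
np.fill_diagonal(Q, 0)
np.fill_diagonal(Q, -Q.sum(axis=1))

evals, V = np.linalg.eigh(Q)                # ascending; evals[-1] ~ 0
gap = -evals[-2]                            # spectral gap, alpha = 1/gap
P_t = V @ np.diag(np.exp(t * evals)) @ V.T  # semigroup e^{tQ}

F = rng.standard_normal((n, d, d))
F = F + F.transpose(0, 2, 1)                # symmetric at each state

def mvar(G):
    """Matrix variance under the uniform stationary measure."""
    return (G @ G).mean(axis=0) - np.linalg.matrix_power(G.mean(axis=0), 2)

PtF = np.einsum('zw,wij->zij', P_t, F)
slack = np.exp(-2 * gap * t) * mvar(F) - mvar(PtF)
assert np.linalg.eigvalsh(slack).min() > -1e-10
```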
\subsection{Equivalence result for local Poincar\'e inequality}
\label{sec:equivalence_local_Poincare}
Proposition~\ref{prop:local_Poincare} states that the matrix Bakry--\'Emery criterion, the local Poincar\'e inequality, and the local ergodicity of the carr\'e du champ operator are mutually equivalent. This section is dedicated to the proof, which is modeled on the scalar argument~\cite[Theorem 2.36]{van550probability}.
\begin{proof}[Proof of Proposition~\ref{prop:local_Poincare}]
\textit{Proof that \eqref{Bakry-Emery_criterion} $\Rightarrow$ \eqref{local_ergodicity}.}
Let us show that the matrix Bakry--\'Emery criterion \eqref{Bakry-Emery_criterion} implies local ergodicity~\eqref{local_ergodicity} of the carr\'e du champ operator. Given any suitable $\mtx{f}$ and any $t\geq 0$, construct the function $\mtx{A}(s) := P_{t-s}\Gamma(P_s\mtx{f})$ for $s\in[0,t]$. Then we have
\begin{align*}
\frac{\diff{} }{\diff s} \mtx{A}(s) &= - \mathcal{L} P_{t-s} \Gamma(P_s\mtx{f}) + P_{t-s}\Gamma(\mathcal{L} P_s\mtx{f},P_s\mtx{f}) + P_{t-s}\Gamma(P_s\mtx{f},\mathcal{L} P_s\mtx{f}) \\
&= - P_{t-s}\big( \mathcal{L} \Gamma(P_s\mtx{f}) - \Gamma(\mathcal{L} P_s\mtx{f},P_s\mtx{f}) -\Gamma(P_s\mtx{f},\mathcal{L} P_s\mtx{f})\big) \\
&= -2 P_{t-s} \Gamma_2(P_s\mtx{f})\\
&\preccurlyeq -2c^{-1} P_{t-s}\Gamma(P_s\mtx{f})\\
&= -2c^{-1} \mtx{A}(s).
\end{align*}
The inequality follows from \eqref{Bakry-Emery_criterion}. Apply Lemma~\ref{lem:matrix_differential_inequality} to reach the bound $\mtx{A}(t)\preccurlyeq \mathrm{e}^{-2t/c} \mtx{A}(0)$. This yields \eqref{local_ergodicity} because $\mtx{A}(t) = \Gamma(P_t\mtx{f})$ and $\mtx{A}(0) = P_t\Gamma(\mtx{f})$.
\textit{Proof that \eqref{local_ergodicity} $\Rightarrow$ \eqref{local_Poincare}.}
Next, we argue that local ergodicity of the carr\'e du champ operator \eqref{local_ergodicity} implies the local matrix Poincar\'e inequality \eqref{local_Poincare}. Construct the function $\mtx{B}(s) := P_{t-s}((P_s\mtx{f})^2)$ for $s\in[0,t]$. Taking the derivative with respect to $s$ gives
\begin{align*}
\frac{\diff{} }{\diff s} \mtx{B}(s) =&\ - \mathcal{L} P_{t-s} ((P_s\mtx{f})^2) + P_{t-s}(\mathcal{L}(P_s\mtx{f})P_s\mtx{f}) + P_{t-s}(P_s\mtx{f}\mathcal{L}(P_s\mtx{f})) \\
=&\ - P_{t-s}\left( \mathcal{L} ((P_s\mtx{f})^2) - \mathcal{L}(P_s\mtx{f})P_s\mtx{f} - P_s\mtx{f}\mathcal{L}(P_s\mtx{f})\right) \\
=&\ -2 P_{t-s} \Gamma(P_s\mtx{f})\\
\succcurlyeq&\ -2\mathrm{e}^{-2s/c} P_{t-s}P_s\Gamma(\mtx{f})\\
=&\ -2\mathrm{e}^{-2s/c} P_t\Gamma(\mtx{f}).
\end{align*}
Therefore,
\[P_t(\mtx{f}^2) - (P_t\mtx{f})^2 = \mtx{B}(0) - \mtx{B}(t) \preccurlyeq 2\int_0^t\mathrm{e}^{-2s/c}\idiff s \cdot P_t\Gamma(\mtx{f}) = c \, (1-\mathrm{e}^{-2t/c})\,P_t\Gamma(\mtx{f}). \]
This is the local matrix Poincar\'e inequality \eqref{local_Poincare}.
\textit{Proof that \eqref{local_Poincare} $\Rightarrow$ \eqref{Bakry-Emery_criterion}.}
Last, we show that the local matrix Poincar\'e inequality \eqref{local_Poincare} implies the matrix Bakry--\'Emery criterion \eqref{Bakry-Emery_criterion}. Construct the function $\mtx{C}(t) := P_t(\mtx{f}^2) - (P_t\mtx{f})^2 - c\,(1-\mathrm{e}^{-2t/c})\,P_t\Gamma(\mtx{f})$. Evidently, $\mtx{C}(0) = \mtx{0}$, and the local Poincar{\'e} inequality \eqref{local_Poincare} implies that $\mtx{C}(t)\preccurlyeq \mtx{0}$ for all $t\geq 0$. Now, the first derivative satisfies
\begin{align*}
\frac{\diff{} }{\diff t}\bigg|_{t=0} \mtx{C}(t)
= \mathcal{L}(\mtx{f}^2) -\mathcal{L}(\mtx{f})\mtx{f} - \mtx{f}\mathcal{L}(\mtx{f}) - 2\Gamma(\mtx{f}) = \mtx{0}.
\end{align*}
The second derivative takes the form
\begin{align*}
\frac{\diff{^2}}{\diff t^2}\bigg|_{t=0} \mtx{C}(t) &= \mathcal{L}^2(\mtx{f}^2)-\mathcal{L}^2(\mtx{f})\mtx{f} - \mtx{f}\mathcal{L}^2(\mtx{f}) - 2(\mathcal{L}\mtx{f})^2 + 4c^{-1}\Gamma(\mtx{f}) - 4\mathcal{L}\Gamma(\mtx{f}) \\
&= 4c^{-1}\left(\Gamma(\mtx{f}) - c\Gamma_2(\mtx{f})\right).
\end{align*}
Therefore,
\[\Gamma(\mtx{f}) - c\Gamma_2(\mtx{f}) = \frac{c}{4}\frac{\diff{^2}}{\diff t^2} \bigg|_{t=0} \mtx{C}(t) = \frac{c}{2}\lim_{t\rightarrow 0}\frac{\mtx{C}(t)}{t^2} \preccurlyeq \mtx{0}.\]
This verifies the validity of the matrix Bakry--\'Emery criterion with constant $c$.
\end{proof}
\section{From curvature conditions to matrix moment inequalities}
\label{sec:trace_to_moment}
The main results of this paper, Theorems~\ref{thm:polynomial_moment} and~\ref{thm:exponential_moment},
demonstrate that the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} leads to
trace moment inequalities for random matrices.
This section is dedicated to the proofs of these theorems.
These arguments appear to be new, even in the scalar setting,
but see~\cite{Led92:Heat-Semigroup,Sch99:Curvature-Nonlocal}
for some precedents.
\subsection{Overview}
Let $(P_t)_{t \geq 0}$ be a reversible, ergodic semigroup
acting on matrix-valued functions. Assume that the semigroup satisfies a Bakry--{\'E}mery criterion~\eqref{Bakry-Emery},
so Proposition~\ref{prop:local_Poincare} implies that it is locally ergodic.
Without loss of generality, we may assume that the matrix-valued function $\mtx{f}$
is zero-mean: $\Expect_\mu\mtx{f}=\mtx{0}$.
For a standard matrix function $\varphi$, the basic idea is to estimate a trace moment
of the form $\Expect_\mu\operatorname{tr}[\mtx{f}\,\varphi(\mtx{f})]$ via a classic semigroup argument:
\[\Expect_\mu\operatorname{tr}[\mtx{f}\,\varphi(\mtx{f})] =\Expect_\mu\operatorname{tr}[P_0(\mtx{f})\,\varphi(\mtx{f})] = \lim_{t\rightarrow\infty}\Expect_\mu\operatorname{tr}[P_t(\mtx{f})\,\varphi(\mtx{f})] - \int_0^\infty\frac{\diff{}}{\diff t}\Expect_\mu\operatorname{tr}[P_t(\mtx{f})\,\varphi(\mtx{f})]\idiff t.\]
By ergodicity \eqref{eqn:ergodicity}, $\lim_{t\rightarrow\infty}\Expect_\mu\operatorname{tr}[P_t(\mtx{f})\,\varphi(\mtx{f})] = \Expect_\mu\operatorname{tr}[(\Expect_\mu\mtx{f})\,\varphi(\mtx{f})] = 0$.
In the second term on the right-hand side, the time derivative places the infinitesimal generator $\mathcal{L}$ in the integrand,
which then becomes
\begin{equation} \label{eqn:overview_gamma}
-\Expect_\mu\operatorname{tr}[\mathcal{L}(P_t\mtx{f})\,\varphi(\mtx{f})] = \Expect_\mu\operatorname{tr} \Gamma(P_t\mtx{f},\varphi(\mtx{f})).
\end{equation}
This familiar formula is the starting point for our method.
To control the trace of the carr{\'e} du champ, we employ the following fundamental lemma,
which is related to the Stroock--Varopoulos
inequality~\cite{Str84:Introduction-Theory,Var85:Hardy-Littlewood-Theory}.
\begin{lemma}[Chain rule inequality]\label{lem:key_Gamma}
Let $\varphi:\mathbb{R}\rightarrow\mathbb{R}$ be a function such that $\psi := |\varphi'|$ is convex.
For all suitable $\mtx{f},\mtx{g}:\Omega\rightarrow \mathbb{H}_d$,
\begin{equation*}\label{eqn:general_Gamma}
\Expect_\mu \operatorname{tr}\Gamma(\mtx{g},\varphi(\mtx{f}))\leq \Big(\Expect_\mu\operatorname{tr} \left[\Gamma(\mtx{f})\,\psi(\mtx{f})\right]\cdot\Expect_\mu\operatorname{tr} \left[\Gamma(\mtx{g})\,\psi(\mtx{f})\right]\Big)^{1/2}.
\end{equation*}
\end{lemma}
\noindent
The proof of this lemma appears below in Section~\ref{sec:key_lemma}.
Lemma~\ref{lem:key_Gamma} isolates the contributions
from the matrix $P_t \mtx{f}$ and the matrix $\varphi(\mtx{f})$
in the formula~\eqref{eqn:overview_gamma}.
To estimate $\Gamma(P_t \mtx{f})$,
we invoke the local ergodicity property,
Proposition~\ref{prop:local_Poincare}\eqref{local_ergodicity}.
Last, we apply matrix decoupling techniques, based on H{\"o}lder and Young trace inequalities,
to bound $\Expect\operatorname{tr} \left[\Gamma(\mtx{f})\,\psi(\mtx{f})\right]$ and $\Expect\operatorname{tr} \left[\Gamma(P_t\mtx{f})\,\psi(\mtx{f})\right]$
in terms of the original quantity of interest $\Expect_{\mu}\operatorname{tr}[\mtx{f} \, \varphi(\mtx{f})]$.
The following sections supply full details.
Our approach incorporates some techniques and ideas from~\cite[Theorems 4.2 and 4.3]{paulin2016efron},
but the argument is distinct. Appendix~\ref{apdx:Stein_method} gives more details about the connection.
\subsection{Proof of chain rule inequality}
\label{sec:key_lemma}
To prove Lemma~\ref{lem:key_Gamma}, we require a novel trace inequality.
\begin{lemma}[Mean value trace inequality]\label{lem:mean_value_inequality} Let $\varphi:\mathbb{R}\rightarrow\mathbb{R}$ be a function such that $\psi:= |\varphi'|$ is convex. For all $\mtx{A},\mtx{B},\mtx{C}\in\mathbb{H}_d$,
\[\operatorname{tr}\left[\mtx{C} \, \big(\varphi(\mtx{A})-\varphi(\mtx{B})\big)\right]\leq \inf_{s>0} \frac{1}{4}\operatorname{tr}\left[\left(s\,(\mtx{A}-\mtx{B})^2+s^{-1}\,\mtx{C}^2\right)\big(\psi(\mtx{A})+\psi(\mtx{B})\big)\right].\]
\end{lemma}
Lemma~\ref{lem:mean_value_inequality} is a common generalization
of \cite[Lemmas 9.2 and 12.2]{paulin2016efron}. Roughly speaking, it exploits convexity to bound
the difference $\varphi(\mtx{A})-\varphi(\mtx{B})$
in the spirit of the mean value theorem.
We defer the proof of Lemma~\ref{lem:mean_value_inequality}
to Appendix~\ref{apdx:mean_value}.
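Before turning to the proof, it may help to see Lemma~\ref{lem:mean_value_inequality} in action. The following numerical sketch is our own illustration, not part of the formal development: the choice $\varphi(x)=x^3$ (so that $\psi(x)=3x^2$ is convex), the random instances, and all function names are ours. It samples real symmetric matrices and checks the bound over a grid of values of $s$.

```python
import numpy as np

rng = np.random.default_rng(42)

def apply_fn(M, fn):
    """Apply a scalar function to a symmetric matrix spectrally."""
    w, V = np.linalg.eigh(M)
    return (V * fn(w)) @ V.T

def sym(a):
    """Symmetrize a square matrix."""
    return (a + a.T) / 2

phi = lambda x: x ** 3      # varphi(x) = x^3
psi = lambda x: 3 * x ** 2  # psi = |varphi'| is convex

A, B, C = (sym(rng.standard_normal((3, 3))) for _ in range(3))

# Left-hand side: tr[C (varphi(A) - varphi(B))].
lhs = np.trace(C @ (apply_fn(A, phi) - apply_fn(B, phi)))
psi_sum = apply_fn(A, psi) + apply_fn(B, psi)

# The lemma asserts lhs <= rhs(s) for every s > 0; check it on a grid.
for s in np.logspace(-2, 2, 41):
    rhs = 0.25 * np.trace((s * (A - B) @ (A - B) + (C @ C) / s) @ psi_sum)
    assert lhs <= rhs + 1e-9
```

Since the infimum over $s>0$ dominates every grid value, a failure at any grid point would falsify the inequality; the check passes for this instance.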
\begin{proof}[Proof of Lemma~\ref{lem:key_Gamma} from Lemma~\ref{lem:mean_value_inequality}]
For simplicity, we abbreviate
\[\mtx{f}_t = \mtx{f}(Z_t),\quad \mtx{g}_t = \mtx{g}(Z_t) \quad\text{and}\quad \mtx{f}_0 = \mtx{f}(Z_0), \quad \mtx{g}_0 = \mtx{g}(Z_0).\]
By Proposition~\ref{prop:Gamma_property}\eqref{limit_formula},
\begin{equation}\label{step:key_lemma_1}
\begin{split}
\Expect_\mu \operatorname{tr}\Gamma(\mtx{g},\varphi(\mtx{f})) =&\ \Expect_{Z\sim\mu}\operatorname{tr}\lim_{t\downarrow 0}\frac{1}{2t} \Expect\left[\left(\mtx{g}_t-\mtx{g}_0\right)\left(\varphi(\mtx{f}_t)-\varphi(\mtx{f}_0)\right)\,\big|\,Z_0=Z\right]\\
=&\ \Expect_{Z\sim\mu}\lim_{t\downarrow 0}\frac{1}{2t} \Expect \left[\operatorname{tr}\left[\left(\mtx{g}_t-\mtx{g}_0\right)\left(\varphi(\mtx{f}_t)-\varphi(\mtx{f}_0)\right)\right]\,\big|\,Z_0=Z\right].
\end{split}
\end{equation}
Fix a parameter $s > 0$. For each $t > 0$, the mean value trace inequality,
Lemma~\ref{lem:mean_value_inequality}, yields
\begin{equation}\label{step:key_lemma_2}
\begin{split}
\operatorname{tr} \left[\left(\mtx{g}_t-\mtx{g}_0\right)\big(\varphi(\mtx{f}_t)-\varphi(\mtx{f}_0)\big)\right]
&\leq \frac{1}{4}\operatorname{tr}\left[\left(s\,(\mtx{f}_t-\mtx{f}_0)^2+s^{-1}\,(\mtx{g}_t-\mtx{g}_0)^2\right)\big(\psi(\mtx{f}_t)+\psi(\mtx{f}_0)\big)\right]\\
&= \frac{1}{2} \operatorname{tr}\left[\left(s\,(\mtx{f}_t-\mtx{f}_0)^2+s^{-1}\,(\mtx{g}_t-\mtx{g}_0)^2\right)\psi(\mtx{f}_0)\right]\\
&\qquad + \frac{1}{4} \operatorname{tr}\left[\left(s(\mtx{f}_t-\mtx{f}_0)^2+s^{-1}\,(\mtx{g}_t-\mtx{g}_0)^2\right)\big(\psi(\mtx{f}_t)-\psi(\mtx{f}_0)\big)\right].
\end{split}
\end{equation}
It follows from the triple product result, Lemma~\ref{lem:three_limit}, that
the second term satisfies
\begin{equation}\label{step:key_lemma_3}
\Expect_{Z\sim\mu}\lim_{t\downarrow 0}\frac{1}{t} \operatorname{tr}\Expect \left[ \left(s\,(\mtx{f}_t-\mtx{f}_0)^2+s^{-1}\,(\mtx{g}_t-\mtx{g}_0)^2\right)\big(\psi(\mtx{f}_t)-\psi(\mtx{f}_0)\big) \,\big|\,Z_0=Z\right] =0.
\end{equation}
Sequence the displays \eqref{step:key_lemma_1}, \eqref{step:key_lemma_2}, and \eqref{step:key_lemma_3} to reach
\begin{align*}
\Expect_\mu \operatorname{tr}\Gamma(\mtx{g},\varphi(\mtx{f}))&\leq \frac{1}{2}\Expect_{Z\sim\mu}\lim_{t\downarrow 0} \frac{1}{2t} \operatorname{tr} \Expect\left[\left(s\,(\mtx{f}_t-\mtx{f}_0)^2+s^{-1}\,(\mtx{g}_t-\mtx{g}_0)^2\right)\psi(\mtx{f}_0) \,\big|\,Z_0=Z\right] \\
&= \frac{1}{2}\Expect_{Z\sim\mu}\operatorname{tr}\Big[\Big(s\,\lim_{t\downarrow0}\frac{1}{2t} \Expect[(\mtx{f}_t-\mtx{f}_0)^2\,|\,Z_0=Z] \\
&\qquad\qquad\qquad\ + s^{-1}\,\lim_{t\downarrow0}\frac{1}{2t} \Expect[(\mtx{g}_t-\mtx{g}_0)^2\,|\,Z_0=Z] \Big)\psi(\mtx{f}(Z))\Big] \\
&= \frac{1}{2} \Expect_\mu\operatorname{tr} \left[\left(s\,\Gamma(\mtx{f}) +s^{-1}\,\Gamma(\mtx{g})\right)\,\psi(\mtx{f})\right].
\end{align*}
The last relation is Proposition~\ref{prop:Gamma_property}\eqref{limit_formula}.
Minimize the right-hand side over $s\in(0,\infty)$ to arrive at
\[\Expect_\mu \operatorname{tr}\Gamma(\mtx{g},\varphi(\mtx{f}))\leq \big(\Expect_\mu\operatorname{tr} \left[\Gamma(\mtx{f})\,\psi(\mtx{f})\right]\big)^{1/2}\cdot\big(\Expect_\mu\operatorname{tr} \left[\Gamma(\mtx{g})\,\psi(\mtx{f})\right]\big)^{1/2}.\]
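The minimization is explicit. Abbreviating $a := \Expect_\mu\operatorname{tr} \left[\Gamma(\mtx{f})\,\psi(\mtx{f})\right]$ and $b := \Expect_\mu\operatorname{tr} \left[\Gamma(\mtx{g})\,\psi(\mtx{f})\right]$, which are nonnegative as traces of products of positive-semidefinite matrices, the map $s\mapsto \tfrac{1}{2}(sa+s^{-1}b)$ attains the infimum
\[\inf_{s>0}\ \frac{1}{2}\left(sa+s^{-1}b\right) = \sqrt{ab} \quad\text{at}\quad s=\sqrt{b/a}\]
when $a>0$; the degenerate case $a=0$ follows by letting $s\rightarrow\infty$.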
This completes the proof of Lemma~\ref{lem:key_Gamma}.
\end{proof}
\subsection{Polynomial moments}
\label{sec:polynomial_moments_proof}
This section is dedicated to the proof of Theorem~\ref{thm:polynomial_moment}, which states that the Bakry--{\'E}mery criterion implies matrix polynomial moment bounds.
\subsubsection{Setup}
Consider a reversible, ergodic Markov semigroup $(P_t)_{t \geq 0}$
with stationary measure $\mu$.
Assume that the semigroup satisfies the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery}
with constant $c > 0$.
By Proposition~\ref{prop:local_Poincare}, this is equivalent to local ergodicity.
Fix a suitable function $\mtx{f}:\Omega \rightarrow \mathbb{H}_d$.
Proposition~\ref{prop:Gamma_property}\eqref{limit_formula} implies that the carr{\'e}
du champ is shift invariant. In particular, $\Gamma(\mtx{f}) = \Gamma(\mtx{f}-\Expect_\mu\mtx{f})$.
Therefore, we may assume that $\Expect_\mu\mtx{f}=\mtx{0}$.
The quantity of interest is
\[
\Expect_\mu\operatorname{tr} |\mtx{f}|^{2q}
= \Expect_\mu\operatorname{tr} \left[\mtx{f}\cdot \sgn(\mtx{f})\cdot |\mtx{f}|^{2q-1}\right]
=: \Expect_{\mu} \operatorname{tr} \left[ \mtx{f} \, \varphi(\mtx{f}) \right].
\]
We have introduced the signed moment function $\varphi: x\mapsto \sgn(x)\cdot \abs{x}^{2q-1}$
for $x \in \mathbbm{R}$. Note that the absolute derivative $\psi(x) := \abs{\varphi'(x)} = (2q-1)\abs{x}^{2q-2}$
is convex when $q= 1$ or when $q\geq 1.5$.
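To verify the convexity claim, compute for $x\neq 0$
\[\psi''(x) = (2q-1)(2q-2)(2q-3)\,\abs{x}^{2q-4},\]
which is nonnegative if and only if $q\geq 1.5$. For $q=1$, the function $\psi\equiv 1$ is trivially convex, while for $q\in(1,1.5)$ the exponent $2q-2$ lies in $(0,1)$, so $x\mapsto\abs{x}^{2q-2}$ is concave on each half-line.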
\begin{remark}[Missing powers]
A similar argument holds when $q \in (1, 1.5)$.
It requires a variant of Lemma~\ref{lem:key_Gamma}
that holds for monotone $\psi$, but has an
extra factor of $2$ on the right-hand side.
\end{remark}
\subsubsection{A Markov semigroup argument}
By the ergodicity assumption \eqref{eqn:ergodicity}, it holds that
\[\lim_{t\rightarrow\infty}\Expect_\mu\operatorname{tr}[P_t(\mtx{f})\,\varphi(\mtx{f})] = \Expect_\mu\operatorname{tr}[(\Expect_\mu\mtx{f})\,\varphi(\mtx{f})] = 0.\]
Therefore,
\begin{equation}\label{step:polynomial_1}
\begin{split}
\Expect_\mu\operatorname{tr} \abs{\mtx{f}}^{2q} &= \Expect_\mu\operatorname{tr} \left[P_0\mtx{f}\, \varphi(\mtx{f})\right] - \lim_{t\rightarrow\infty}\Expect_\mu\operatorname{tr}[P_t(\mtx{f})\,\varphi(\mtx{f})]\\
&= -\int_0^\infty\frac{\diff{} }{\diff t}\Expect_\mu\operatorname{tr}\left[(P_t\mtx{f}) \, \varphi(\mtx{f})\right]\idiff t = -\int_0^\infty\Expect_\mu\operatorname{tr}\left[\mathcal{L}(P_t\mtx{f}) \, \varphi(\mtx{f})\right]\idiff t.
\end{split}
\end{equation}
By convexity of $\psi$, we can invoke the chain rule inequality, Lemma~\ref{lem:key_Gamma}, to obtain
\begin{equation}\label{step:polynomial_2}
\begin{split}
- \Expect_\mu\operatorname{tr}\left[\mathcal{L}(P_t\mtx{f})\, \varphi(\mtx{f})\right] =&\ \Expect_\mu\operatorname{tr}\Gamma(P_t\mtx{f},\varphi(\mtx{f}))\\
\leq&\ \left(\Expect_\mu\operatorname{tr}\left[\Gamma(\mtx{f})\,\psi(\mtx{f}) \right]\cdot \Expect_\mu\operatorname{tr}\left[\Gamma(P_t\mtx{f})\,\psi(\mtx{f})\right]\right)^{1/2} \\
=&\ (2q-1)\left(\Expect_\mu\operatorname{tr}\left[\Gamma(\mtx{f})\abs{\mtx{f}}^{2q-2} \right]\cdot \Expect_\mu\operatorname{tr}\left[\Gamma(P_t\mtx{f})\abs{\mtx{f}}^{2q-2}\right]\right)^{1/2}\\
\leq&\ (2q-1)\,\mathrm{e}^{-t/c}\left(\Expect_\mu\operatorname{tr}\left[\Gamma(\mtx{f})\abs{\mtx{f}}^{2q-2} \right]\cdot \Expect_\mu\operatorname{tr}\left[(P_t\Gamma(\mtx{f}))\abs{\mtx{f}}^{2q-2}\right]\right)^{1/2}.
\end{split}
\end{equation}
The last inequality is the local ergodicity condition,
Proposition~\ref{prop:local_Poincare}\eqref{local_ergodicity}.
\subsubsection{Decoupling}
Apply H\"older's inequality for the trace followed by H\"older's inequality for the expectation
to obtain
\begin{equation} \label{step:polynomial_2.5}
\begin{aligned}
\Expect_\mu\operatorname{tr}\left[\Gamma(\mtx{f}) \abs{\mtx{f}}^{2q-2} \right] &\leq \left(\Expect_\mu\operatorname{tr} \Gamma(\mtx{f})^q \right)^{1/q}\cdot \left(\Expect_\mu\operatorname{tr}|\mtx{f}|^{2q}\right)^{(q-1)/q} \quad\text{and} \\
\Expect_\mu\operatorname{tr}\left[(P_t\Gamma(\mtx{f})) \abs{\mtx{f}}^{2q-2}\right] &\leq \left(\Expect_\mu\operatorname{tr}{} (P_t\Gamma(\mtx{f}))^q \right)^{1/q}\cdot \big(\Expect_\mu\operatorname{tr} \abs{\mtx{f}}^{2q}\big)^{(q-1)/q}.
\end{aligned}
\end{equation}
Introduce the bounds~\eqref{step:polynomial_2.5} into \eqref{step:polynomial_2} to find that
\begin{equation}\label{step:polynomial_3}
\begin{split}
& - \Expect_\mu\operatorname{tr}\left[\mathcal{L}(P_t\mtx{f}) \,\varphi(\mtx{f})\right] \\
&\qquad\qquad \leq (2q-1)\,\mathrm{e}^{-t/c}\left(\Expect_\mu\operatorname{tr} \Gamma(\mtx{f})^q \cdot \Expect_\mu\operatorname{tr}{}(P_t\Gamma(\mtx{f}))^q \right)^{1/(2q)} \big(\Expect_\mu\operatorname{tr}\abs{\mtx{f}}^{2q}\big)^{(q-1)/q}.
\end{split}
\end{equation}
Substitute \eqref{step:polynomial_3} into \eqref{step:polynomial_1} and rearrange the expression to reach
\begin{equation}\label{step:polynomial_4}
\big(\Expect_\mu\operatorname{tr} \abs{\mtx{f}}^{2q}\big)^{1/q}\leq (2q-1)\left(\Expect_\mu\operatorname{tr} \Gamma(\mtx{f})^q \right)^{1/(2q)} \int_0^{\infty} \mathrm{e}^{-t/c}\left(\Expect_\mu\operatorname{tr}{} (P_t\Gamma(\mtx{f}))^q\right)^{1/(2q)}\idiff t.
\end{equation}
It remains to remove the semigroup from the integral.
\subsubsection{Endgame}
The trace power $\operatorname{tr}[ (\cdot)^q ]$ is convex on $\mathbb{H}_d$ for $q\geq1$; see~\cite[Theorem 2.10]{carlen2010trace}.
Therefore, the Jensen inequality \eqref{eqn:semigroup_Jensen_2} for the semigroup implies that
\begin{equation}\label{step:polynomial_5}
\Expect_\mu\operatorname{tr}{} (P_t\Gamma(\mtx{f}))^q \leq \Expect_\mu \operatorname{tr} \Gamma(\mtx{f})^q.
\end{equation}
Substituting \eqref{step:polynomial_5} into \eqref{step:polynomial_4} yields
\[\big(\Expect_\mu\operatorname{tr} \abs{\mtx{f}}^{2q}\big)^{1/q}\leq (2q-1) \left(\Expect_\mu\operatorname{tr} \Gamma(\mtx{f})^q \right)^{1/q} \int_0^\infty \mathrm{e}^{-t/c} \idiff t = c \, (2q-1)\left(\Expect_\mu\operatorname{tr}\Gamma(\mtx{f})^q\right)^{1/q}.\]
This establishes \eqref{eqn:polynomial_moment_1}.
Define the uniform bound $v_{\mtx{f}} := \norm{ \norm{ \Gamma(\mtx{f}) } }_{L_{\infty}(\mu)}$.
We have the further estimate
\[\left(\Expect_\mu\operatorname{tr}\left[\Gamma(\mtx{f})^q\right]\right)^{1/(2q)}\leq d^{1/(2q)} \sqrt{v_{\mtx{f}}}.\]
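Indeed, since $\mtx{0}\preccurlyeq \Gamma(\mtx{f})\preccurlyeq v_{\mtx{f}}\,\mathbf{I}$ almost surely,
\[\operatorname{tr}\left[\Gamma(\mtx{f})^q\right] \leq d\,\lambda_{\max}\big(\Gamma(\mtx{f})\big)^{q} \leq d\, v_{\mtx{f}}^{q},\]
and taking the expectation followed by the $(2q)$-th root yields the display.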
The statement \eqref{eqn:polynomial_moment_2} now follows from \eqref{eqn:polynomial_moment_1}.
This step completes the proof of Theorem~\ref{thm:polynomial_moment}.
\subsection{Exponential moments}
\label{sec:exponential_moments_proof}
In this section, we establish Theorem~\ref{thm:exponential_concentration},
the exponential matrix concentration inequality. The main technical ingredient
is a bound on exponential moments:
\begin{theorem}[Exponential moments]\label{thm:exponential_moment}
Instate the hypotheses of Theorem~\ref{thm:exponential_concentration}.
For all $\theta\in(-\sqrt{\beta/c},\sqrt{\beta/c})$,
\begin{equation}\label{eqn:exponential_moment_1}
\log\Expect_\mu \operatorname{\bar{\trace}} \mathrm{e}^{\theta(\mtx{f}-\Expect_\mu\mtx{f})} \leq \frac{c\theta^2 r_{\mtx{f}}(\beta)}{2(1-c\theta^2/\beta)}.
\end{equation}
Moreover, if $v_{\mtx{f}} <+\infty$, then
\begin{equation}\label{eqn:exponential_moment_2}
\log\Expect_\mu \operatorname{\bar{\trace}} \mathrm{e}^{\theta(\mtx{f}-\Expect_\mu\mtx{f})} \leq \frac{cv_{\mtx{f}}\theta^2}{2}
\quad\text{for all $\theta \in \mathbbm{R}$.}
\end{equation}
\end{theorem}
\noindent
The proof of Theorem~\ref{thm:exponential_moment} occupies the rest of this subsection.
Afterward, in Section~\ref{sec:exponential_concentration_proof}, we derive
Theorem~\ref{thm:exponential_concentration}.
\subsubsection{Setup}
As usual, we consider a reversible, ergodic Markov semigroup $(P_t)_{t \geq 0}$
with stationary measure $\mu$.
Assume that the semigroup satisfies the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery}
for a constant $c > 0$, so it is locally ergodic.
Choose a suitable function $\mtx{f}:\Omega \rightarrow \mathbb{H}_d$.
We may assume that $\Expect_\mu\mtx{f}=\mtx{0}$. Furthermore, we only
need to consider the case $\theta \geq 0$. The results for $\theta < 0$
follow formally under the change of variables $\theta \mapsto - \theta$
and $\mtx{f} \mapsto - \mtx{f}$.
The quantity of interest is the normalized trace mgf:
\[m(\theta) := \Expect_\mu \operatorname{\bar{\trace}} \mathrm{e}^{\theta\mtx{f}}\quad \text{for}\ \theta\geq0.\]
We will bound the derivative of this function:
\[
m'(\theta) = \Expect_\mu \operatorname{\bar{\trace}} \left[\mtx{f} \, \mathrm{e}^{\theta\mtx{f}}\right]
=: \Expect_{\mu} \operatorname{\bar{\trace}} [ \mtx{f} \, \varphi(\mtx{f}) ].
\]
We have introduced the function $\varphi : x \mapsto \mathrm{e}^{\theta x}$ for $x \in \mathbbm{R}$.
Note that its absolute derivative $\psi(x) := \abs{ \varphi'(x) } = \theta \mathrm{e}^{\theta x}$
is a convex function, since $\theta \geq 0$.
Here and elsewhere, we use the properties of the trace mgf that are
collected in Lemma~\ref{prop:trace_mgf}.
\subsubsection{A Markov semigroup argument}
By the ergodicity assumption \eqref{eqn:ergodicity}, we have
\begin{equation}\label{step:exponential_1}
\begin{split}
m'(\theta) &= \Expect_\mu \operatorname{\bar{\trace}} \left[P_0\mtx{f}\,\mathrm{e}^{\theta\mtx{f}}\right] - \lim_{t\rightarrow\infty}\Expect_\mu\operatorname{\bar{\trace}}\left[P_t(\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right] \\
&= -\int_0^\infty \frac{\diff{}}{\diff{t}}\Expect_\mu \operatorname{\bar{\trace}} \left[P_t(\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right]\idiff t = -\int_0^\infty \Expect_\mu \operatorname{\bar{\trace}} \left[\mathcal{L}(P_t\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right]\idiff t.
\end{split}
\end{equation}
Invoke the chain rule inequality, Lemma~\ref{lem:key_Gamma}, to obtain
\begin{equation}\label{step:exponential_2}
\begin{split}
-\Expect_\mu \operatorname{\bar{\trace}} \left[\mathcal{L}(P_t\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right] &= \Expect_\mu \operatorname{\bar{\trace}} \Gamma(P_t\mtx{f},\mathrm{e}^{\theta\mtx{f}})\\
&\leq \theta \left(\Expect_\mu\operatorname{\bar{\trace}} \left[\Gamma(\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right]\cdot \Expect_\mu\operatorname{\bar{\trace}} \left[\Gamma(P_t\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right]\right)^{1/2} \\
&\leq \theta \mathrm{e}^{-t/c}\left(\Expect_\mu\operatorname{\bar{\trace}} \left[\Gamma(\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right]\cdot \Expect_\mu\operatorname{\bar{\trace}} \left[(P_t\Gamma(\mtx{f}))\, \mathrm{e}^{\theta\mtx{f}}\right]\right)^{1/2}.
\end{split}
\end{equation}
The second inequality is the local ergodicity condition, Proposition~\ref{prop:local_Poincare}\eqref{local_ergodicity}.
\subsubsection{Decoupling}
The next step is to use an entropy inequality to separate the carr\'e du champ operator in \eqref{step:exponential_2} from the matrix exponential. The following trace inequality appears as \cite[Proposition A.3]{mackey2014}; see also \cite[Theorem 2.13]{carlen2010trace}.
\begin{fact}[Young's inequality for matrix entropy]\label{lem:Young_inequality} Let $\mtx{X}$ be a random matrix in $\mathbb{H}_d$, and let $\mtx{Y}$ be a random matrix in $\mathbb{H}_d^+$ such that $\Expect\operatorname{\bar{\trace}} \mtx{Y} = 1$. Then
\begin{equation*}\label{eqn:Young_inequality}
\Expect \operatorname{\bar{\trace}}\left[\mtx{X}\mtx{Y}\right] \leq \log\Expect\operatorname{\bar{\trace}}\mathrm{e}^{\mtx{X}} + \Expect\operatorname{\bar{\trace}}\left[\mtx{Y}\log \mtx{Y}\right].
\end{equation*}
\end{fact}
Apply Fact~\ref{lem:Young_inequality} to see that, for any $\beta>0$,
\begin{equation}\label{step:exponential_3}
\begin{split}
\Expect_\mu\operatorname{\bar{\trace}} \left[\Gamma(\mtx{f}) \, \mathrm{e}^{\theta\mtx{f}}\right] &= \frac{m(\theta)}{\beta} \Expect_\mu\operatorname{\bar{\trace}} \left[\beta \Gamma(\mtx{f})\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\right]\\
&\leq \frac{m(\theta)}{\beta} \left(\log\Expect_\mu\operatorname{\bar{\trace}} \exp\left(\beta\Gamma(\mtx{f})\right) + \Expect_\mu\operatorname{\bar{\trace}}\left[\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\log\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\right]\right) \\
&= m(\theta)\, r(\beta) + \frac{1}{\beta}\Expect_\mu\operatorname{\bar{\trace}}\left[\mathrm{e}^{\theta\mtx{f}}\log\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\right].
\end{split}
\end{equation}
We have identified the exponential mean $r(\beta) := \beta^{-1}\log\Expect_\mu\operatorname{\bar{\trace}} \exp\left(\beta\Gamma(\mtx{f}) \right)$; this is the quantity $r_{\mtx{f}}(\beta)$ from Theorem~\ref{thm:exponential_moment}, with the subscript suppressed for brevity.
Likewise,
\begin{equation*}
\Expect_\mu\operatorname{\bar{\trace}} \left[(P_t\Gamma(\mtx{f}))\,\mathrm{e}^{\theta\mtx{f}}\right]\leq \frac{m(\theta)}{\beta} \log\Expect_\mu\operatorname{\bar{\trace}} \exp\left(\beta P_t\Gamma(\mtx{f})\right) + \frac{1}{\beta}\Expect_\mu\operatorname{\bar{\trace}}\left[\mathrm{e}^{\theta\mtx{f}}\log\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\right].
\end{equation*}
The trace exponential $\operatorname{\bar{\trace}} \exp(\cdot)$ is convex; see \cite[Theorem 2.10]{carlen2010trace}.
The Jensen inequality \eqref{eqn:semigroup_Jensen_2} for the semigroup implies that
\begin{equation*}\label{step:exponential_5}
\Expect_\mu\operatorname{\bar{\trace}} \exp\left(\beta P_t\Gamma(\mtx{f})\right) \leq \Expect_\mu\operatorname{\bar{\trace}} \exp\left(\beta\Gamma(\mtx{f})\right)
= \exp \left(\beta r(\beta)\right).
\end{equation*}
Combine the last two displays to obtain
\begin{equation}\label{step:exponential_4}
\Expect_\mu\operatorname{\bar{\trace}} \left[(P_t\Gamma(\mtx{f}))\,\mathrm{e}^{\theta\mtx{f}}\right]
\leq m(\theta)\, r(\beta) + \frac{1}{\beta}\Expect_\mu\operatorname{\bar{\trace}}\left[\mathrm{e}^{\theta\mtx{f}}\log\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\right].
\end{equation}
Thus, the two terms on the right-hand side of~\eqref{step:exponential_2}
have matching bounds.
Sequence the displays \eqref{step:exponential_2}, \eqref{step:exponential_3}, and \eqref{step:exponential_4}
to reach
\begin{equation}\label{step:exponential_6}
-\Expect_\mu \operatorname{\bar{\trace}} \left[\mathcal{L}(P_t\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right] \leq \mathrm{e}^{-t/c}\theta \left( m(\theta)\, r(\beta) + \frac{1}{\beta}\Expect_\mu\operatorname{\bar{\trace}}\left[\mathrm{e}^{\theta\mtx{f}}\log\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\right] \right).
\end{equation}
This is the integrand in \eqref{step:exponential_1}.
Next, we simplify this expression to arrive at a differential inequality.
\subsubsection{A differential inequality}
In view of Proposition~\ref{prop:trace_mgf}\eqref{eqn:m.g.f_Property_1}, we have $\log m(\theta)\geq0$ and hence
\[\log\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)} = \theta \mtx{f} - \log m(\theta) \cdot \mathbf{I} \preccurlyeq \theta \mtx{f}.\]
It follows that
\begin{equation}\label{step:exponential_7}
\Expect_\mu\operatorname{\bar{\trace}}\left[\mathrm{e}^{\theta\mtx{f}}\log\frac{\mathrm{e}^{\theta\mtx{f}}}{m(\theta)}\right]\leq \theta \Expect_\mu \operatorname{\bar{\trace}}\left[\mtx{f}\,\mathrm{e}^{\theta\mtx{f}}\right] = \theta\, m'(\theta).
\end{equation}
Combine \eqref{step:exponential_6} and \eqref{step:exponential_7} to reach
\[- \Expect_\mu \operatorname{\bar{\trace}} \left[\mathcal{L}(P_t\mtx{f})\,\mathrm{e}^{\theta\mtx{f}}\right] \leq \mathrm{e}^{-t/c}\theta \left(m(\theta)\,r(\beta) + \frac{\theta}{\beta} m'(\theta) \right).\]
Substitute this bound into \eqref{step:exponential_1} and compute the integral
to arrive at the differential inequality
\begin{equation}\label{eqn:differential_inequality}
m'(\theta)\leq c\theta \,m(\theta)\,r(\beta) + \frac{c\theta^2}{\beta} m'(\theta)\quad\text{for $\theta\geq 0$.}
\end{equation}
Finally, we need to solve for the trace mgf.
\subsubsection{Solving the differential inequality}
Fix parameters $\theta$ and $\beta$ where $0\leq \theta <\sqrt{\beta/c}$.
By rearranging the expression \eqref{eqn:differential_inequality},
we find that
\[\frac{\diff{} }{\diff \zeta}\log m(\zeta) \leq \frac{c\zeta \, r(\beta)}{1-c\zeta^2/\beta}\leq \frac{c\zeta\,r(\beta)}{1-c\theta^2/\beta}
\quad\text{for $\zeta \in (0, \theta]$.}
\]
Since $\log m(0) = 0$, we can integrate this bound over $[0,\theta]$ to obtain
\[\log m(\theta) \leq \frac{c\theta^2 r(\beta)}{2(1-c\theta^2/\beta)}.\]
This is the first claim \eqref{eqn:exponential_moment_1}.
Moreover, it is easy to check that $r(\beta)\leq v_{\mtx{f}}$.
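Indeed, $\Gamma(\mtx{f})\preccurlyeq v_{\mtx{f}}\,\mathbf{I}$ almost surely, so the monotonicity of the trace exponential gives
\[r(\beta) = \beta^{-1}\log\Expect_\mu\operatorname{\bar{\trace}} \exp\left(\beta\Gamma(\mtx{f})\right) \leq \beta^{-1}\log\operatorname{\bar{\trace}}\, \mathrm{e}^{\beta v_{\mtx{f}}}\mathbf{I} = \beta^{-1}\log \mathrm{e}^{\beta v_{\mtx{f}}} = v_{\mtx{f}}.\]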
Since this bound is independent of $\beta$, we can take $\beta\rightarrow +\infty$ in \eqref{eqn:exponential_moment_1} to achieve \eqref{eqn:exponential_moment_2}. This completes the proof of Theorem~\ref{thm:exponential_moment}.
\subsection{Exponential matrix concentration}
\label{sec:exponential_concentration_proof}
We are now ready to prove Theorem~\ref{thm:exponential_concentration},
the exponential matrix concentration inequality,
as a consequence of the moment bounds of Theorem~\ref{thm:exponential_moment}.
To do so, we use the standard matrix Laplace transform method,
summarized in Appendix~\ref{apdx:matrix_moments}.
\begin{proof}[Proof of Theorem~\ref{thm:exponential_concentration} from Theorem~\ref{thm:exponential_moment}]
To obtain inequalities for the maximum eigenvalue $\lambda_{\max}$,
we apply Proposition~\ref{prop:matrix_exponential_concentration} to the random matrix $\mtx{X} = \mtx{f}(Z) -\Expect_\mu\mtx{f}$
where $Z \sim \mu$.
To do so, we first need to weaken the moment bound \eqref{eqn:exponential_moment_1}:
\[\log\Expect_\mu \operatorname{\bar{\trace}} \mathrm{e}^{\theta(\mtx{f}-\Expect_\mu\mtx{f})} \leq \frac{c\theta^2r(\beta)}{2(1-c\theta^2/\beta)}\leq \frac{c\theta^2r(\beta)}{2(1-\theta\sqrt{c/\beta})}\quad \text{for $0\leq \theta < \sqrt{\beta/c}$}.\]
The second inequality holds because $c\theta^2/\beta = (\theta\sqrt{c/\beta})^2\leq \theta\sqrt{c/\beta}$ on this range of $\theta$.
Then substitute $c_1=c r(\beta)$ and $c_2 = \sqrt{c/\beta}$ into Proposition~\ref{prop:matrix_exponential_concentration}
to achieve the results stated in Theorem~\ref{thm:exponential_concentration}.
To obtain bounds for the minimum eigenvalue $\lambda_{\min}$, we apply Proposition~\ref{prop:matrix_exponential_concentration} instead
to the random matrix $\mtx{X} = -(\mtx{f}(Z) -\Expect_\mu\mtx{f})$
where $Z \sim \mu$.
\end{proof}
\section{Bakry--{\'E}mery criterion for product measures}
\label{sec:product_measure_all}
In this section, we introduce the classic Markov process for a product measure. We check the Bakry--{\'E}mery criterion for this Markov process, which leads to matrix concentration results for product measures.
\subsection{Product measures and Markov processes}
Consider a product space $\Omega = \Omega_1\times \Omega_2\times \cdots\times \Omega_n$ equipped with a product measure $\mu = \mu_1\otimes \mu_2\otimes \cdots\otimes\mu_n$. We can construct a Markov process $(Z_t)_{t\geq0} = (Z^1_t,Z^2_t,\dots,Z^n_t)_{t\geq 0}$ on $\Omega$ whose stationary measure is $\mu$. Let $\{N_t^i\}_{i=1}^n$ be a sequence of independent Poisson processes. Whenever $N_t^i$ increases for some $i$, we replace the value of $Z_t^i$ in $Z_t$ by an independent sample from $\mu_i$ while keeping the remaining coordinates fixed.
To describe the Markov semigroup associated with this Markov process, we need some notation.
For each subset $I\subseteq \{1,\dots,n\}$ and all $z,w\in\Omega$, define the interlacing operation
\[(z;w)_I := (\eta^1,\eta^2,\dots,\eta^n)\quad \text{where}\quad
\begin{cases}
\eta^i = w^i, & i\in I; \\
\eta^i = z^i, & i\notin I.
\end{cases}
\]
In particular, $(z;w)_{\emptyset} = z$, and we abbreviate $(z;w)_i = (z^1,\dots,z^{i-1},w^i,z^{i+1},\dots,z^n)$. In this section, the superscript stands for the index of the coordinate.
Let $Z = (Z^1,Z^2,\dots,Z^n)\in\Omega$ be a random vector drawn from the measure $\mu$; that is, each coordinate $Z^{i}\in\Omega_i$ is drawn independently from the measure $\mu_i$. Throughout this section, we write $\Expect_Z := \Expect_{Z\sim\mu}$. The Markov semigroup $(P_t)_{t\geq 0}$ induced by the Markov process is given by
\begin{equation}\label{eqn:tensor_Markov}
P_t\mtx{f}(z) = \sum_{I\subseteq \{1,\dots,n\}}(1-\mathrm{e}^{-t})^{|I|}\mathrm{e}^{-t(n-|I|)} \cdot \Expect_Z \mtx{f}\big((z;Z)_I\big) \quad\text{for all $z\in\Omega$.}
\end{equation}
This formula is valid for every $\mu$-integrable function $\mtx{f}:\Omega \rightarrow \mathbb{H}_d$. The ergodicity \eqref{eqn:ergodicity} of the semigroup follows immediately from~\eqref{eqn:tensor_Markov}
because $\lim_{t\rightarrow\infty}(1-\mathrm{e}^{-t})^{|I|}\mathrm{e}^{-t(n-|I|)}=0$ whenever $|I|<n$.
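The coefficients in \eqref{eqn:tensor_Markov} have a probabilistic reading: for rate-one Poisson clocks, $1-\mathrm{e}^{-t}$ is the probability that coordinate $i$ has been refreshed at least once by time $t$, so $(1-\mathrm{e}^{-t})^{|I|}\mathrm{e}^{-t(n-|I|)}$ is the probability that exactly the coordinates in $I$ have been refreshed. The following sketch is our own illustration (the variable names are ours); it confirms numerically that the weights form a probability distribution, as the binomial theorem dictates, and that the mass concentrates on the full index set as $t$ grows.

```python
from itertools import chain, combinations
from math import exp, isclose

def refresh_weights(n, t):
    """Weight of each subset I of {0, ..., n-1} in the product-measure
    semigroup: (1 - e^{-t})^{|I|} * e^{-t(n - |I|)}."""
    indices = range(n)
    subsets = chain.from_iterable(combinations(indices, k) for k in range(n + 1))
    return {I: (1 - exp(-t)) ** len(I) * exp(-t * (n - len(I))) for I in subsets}

n, t = 4, 2.0
w = refresh_weights(n, t)

# The weights sum to one: ((1 - e^{-t}) + e^{-t})^n = 1.
assert isclose(sum(w.values()), 1.0)

# For large t, only the term I = {1, ..., n} survives, so
# P_t f -> E_mu f, which is the ergodicity property.
w_large = refresh_weights(n, 20.0)
assert w_large[tuple(range(n))] > 0.999999
```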
The infinitesimal generator $\mathcal{L}$ of the semigroup admits the explicit form
\begin{equation}\label{eqn:tensor_L}
\mathcal{L}\mtx{f} = \lim_{t\downarrow 0}\frac{P_t\mtx{f}-\mtx{f}}{t} = -\sum_{i=1}^n\delta_i\mtx{f}.
\end{equation}
The difference operator $\delta_i$ is given by
\[\delta_i\mtx{f}(z) := \mtx{f}(z) - \Expect_Z \mtx{f}\big((z;Z)_i\big)\quad \text{for all $z\in\Omega$}.\]
This infinitesimal generator $\mathcal{L}$ is well defined for all integrable functions, so the class of suitable functions contains $L_1(\mu)$. It follows from the definition of $\delta_i$ that
\[\Expect_\mu[\mtx{f} \,\delta_i(\mtx{g})] = \Expect_\mu[\delta_i(\mtx{f}) \,\delta_i(\mtx{g})] = \Expect_\mu[\delta_i(\mtx{f}) \, \mtx{g}]\quad \text{for each $1\leq i\leq n$}.\]
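These identities follow by writing $\delta_i = \operatorname{id} - \Expect_i$, where $\Expect_i$ averages out the $i$-th coordinate. Since $\Expect_i$ is a conditional expectation, it is idempotent and self-adjoint on $L_2(\mu)$, so each $\delta_i$ is an orthogonal projection. For instance,
\[\Expect_\mu[\mtx{f}\,\delta_i(\mtx{g})] = \Expect_\mu[\mtx{f}\mtx{g}] - \Expect_\mu[\Expect_i(\mtx{f})\,\Expect_i(\mtx{g})] = \Expect_\mu[\delta_i(\mtx{f})\,\delta_i(\mtx{g})],\]
where both equalities rely on the tower property $\Expect_\mu[\mtx{f}\,\Expect_i(\mtx{g})] = \Expect_\mu[\Expect_i(\mtx{f})\,\Expect_i(\mtx{g})]$.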
Thus, the infinitesimal generator $\mathcal{L}$ is symmetric on $L_2(\mu)$. As a consequence, the semigroup is reversible, and the Dirichlet form is given by
\[\mathcal{E}(\mtx{f},\mtx{g}) = \Expect_\mu\left[\sum_{i=1}^n\delta_i(\mtx{f})\delta_i(\mtx{g})\right]=\sum_{i=1}^n\Expect_Z\left[\left(\mtx{f}(Z)-\Expect_{\tilde{Z}}\mtx{f}((Z;\tilde{Z})_i)\right)\left(\mtx{g}(Z)-\Expect_{\tilde{Z}}\mtx{g}((Z;\tilde{Z})_i)\right)\right]\]
for any $\mtx{f},\mtx{g}:\Omega \rightarrow \mathbb{H}_d$, where $\tilde{Z}$ is an independent copy of $Z$. All the results above and their proofs can be found in \cite{van550probability,ABY20:Matrix-Poincare}.
\subsection{Carr\'e du champ operators} The following lemma gives the formulas for the matrix carr\'e du champ operator and the iterated matrix carr\'e du champ operator.
\begin{lemma}[Product measure: Carr{\'e} du champs] \label{lem:tensor_Gamma} The matrix carr\'e du champ operator $\Gamma$ and the iterated matrix carr\'e du champ operator $\Gamma_2$ of the semigroup~\eqref{eqn:tensor_Markov} are given by the formulas
\begin{align}\label{eqn:tensor_Gamma}
\Gamma(\mtx{f},\mtx{g})(z) &= \frac{1}{2}\sum_{i=1}^n \Expect_Z\left[\big(\mtx{f}(z)-\mtx{f}((z;Z)_i)\big)\cdot\big(\mtx{g}(z)-\mtx{g}((z;Z)_i)\big)\right]
\intertext{and}
\Gamma_2(\mtx{f},\mtx{g})(z) &= \frac{1}{4}\sum_{i=1}^n \Expect_{\tilde{Z}}\Expect_Z \Big[\big(\mtx{f}(z)-\mtx{f}((z;Z)_i)\big)\cdot\big(\mtx{g}(z)-\mtx{g}((z;Z)_i)\big)\\
& \qquad\qquad\qquad\qquad\qquad + \big(\mtx{f}((z;\tilde{Z})_i)-\mtx{f}((z;Z)_i)\big)\cdot\big(\mtx{g}((z;\tilde{Z})_i)-\mtx{g}((z;Z)_i)\big)\Big] \\
& + \frac{1}{4}\sum_{i\neq j}\Expect_{\tilde{Z}}\Expect_Z \Big[\big(\mtx{f}(z)-\mtx{f}((z;\tilde{Z})_i) - \mtx{f}((z;Z)_j) + \mtx{f}(((z;\tilde{Z})_i;Z)_j) \big)\\
&\qquad\qquad\qquad\qquad\qquad \times\big(\mtx{g}(z)-\mtx{g}((z;\tilde{Z})_i) - \mtx{g}((z;Z)_j) + \mtx{g}(((z;\tilde{Z})_i;Z)_j) \big) \Big].
\label{eqn:tensor_Gamma2}
\end{align}
These expressions are valid for all suitable $\mtx{f},\mtx{g}:\Omega \rightarrow \mathbb{H}_d$ and all $z\in \Omega$.
The random variables $Z$ and $\tilde{Z}$ are independent draws from the measure $\mu$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:tensor_Gamma}] The expression \eqref{eqn:tensor_Gamma} is a consequence of the form \eqref{eqn:tensor_L} of the infinitesimal generator and the definition \eqref{eqn:definition_Gamma} of the carr\'e du champ operator $\Gamma$. Further, the following displays are consequences of \eqref{eqn:tensor_L} and \eqref{eqn:tensor_Gamma}.
\begin{align*}
& \mathcal{L}\Gamma(\mtx{f},\mtx{g})(z) \\
&\qquad = -\sum_{i=1}^n\delta_i\Gamma(\mtx{f},\mtx{g})(z)\\
&\qquad = -\frac{1}{2}\sum_{i,j=1}^n\Expect_{\tilde{Z}}\Expect_{Z} \Big[\big(\mtx{f}(z)-\mtx{f}((z;Z)_j)\big)\cdot\big(\mtx{g}(z)-\mtx{g}((z;Z)_j)\big) \\
&\qquad\qquad\qquad\qquad\qquad\qquad\quad - \big(\mtx{f}((z;\tilde{Z})_i)-\mtx{f}(((z;\tilde{Z})_i;Z)_j)\big)\cdot\big(\mtx{g}((z;\tilde{Z})_i)-\mtx{g}(((z;\tilde{Z})_i;Z)_j)\big)\Big].\\
& \Gamma(\mtx{f},\mathcal{L}\mtx{g})(z)\\
&\qquad = -\sum_{i=1}^n\Gamma(\mtx{f},\delta_i\mtx{g})(z)\\
&\qquad = -\frac{1}{2}\sum_{i,j=1}^n\Expect_{Z}\Big[\big(\mtx{f}(z)-\mtx{f}((z;Z)_j)\big)\\
& \qquad\qquad\qquad\qquad\qquad\qquad\quad \times \big(\mtx{g}(z)-\Expect_{\tilde{Z}}\big[\mtx{g}((z;\tilde{Z})_i)\big] - \mtx{g}((z;Z)_j) + \Expect_{\tilde{Z}}\big[\mtx{g}(((z;Z)_j;\tilde{Z})_i)\big]\big)\Big]\\
&\qquad = -\frac{1}{2}\sum_{i,j=1}^n\Expect_{\tilde{Z}}\Expect_{Z}\Big[\big(\mtx{f}(z)-\mtx{f}((z;Z)_j)\big)\\
&\qquad \qquad\qquad\qquad\qquad\qquad\quad \times \big(\mtx{g}(z)-\mtx{g}((z;\tilde{Z})_i) - \mtx{g}((z;Z)_j) + \mtx{g}(((z;Z)_j;\tilde{Z})_i)\big)\Big].\\
&\Gamma(\mathcal{L}\mtx{f},\mtx{g})(z)\\
&\qquad = -\sum_{i=1}^n\Gamma(\delta_i\mtx{f},\mtx{g})(z)\\
&\qquad = -\frac{1}{2}\sum_{i,j=1}^n\Expect_{Z}\Big[\big(\mtx{f}(z)-\Expect_{\tilde{Z}}\big[\mtx{f}((z;\tilde{Z})_i)\big] - \mtx{f}((z;Z)_j) + \Expect_{\tilde{Z}}\big[\mtx{f}(((z;Z)_j;\tilde{Z})_i)\big]\big)\\
&\qquad\qquad\qquad\qquad\qquad\qquad\quad \times \big(\mtx{g}(z)-\mtx{g}((z;Z)_j)\big)\Big]\\
&\qquad = -\frac{1}{2}\sum_{i,j=1}^n\Expect_{\tilde{Z}}\Expect_{Z}\Big[\big(\mtx{f}(z)-\mtx{f}((z;\tilde{Z})_i) - \mtx{f}((z;Z)_j) + \mtx{f}(((z;Z)_j;\tilde{Z})_i)\big)\\
& \qquad\qquad\qquad\qquad\qquad\qquad\quad \times \big(\mtx{g}(z)-\mtx{g}((z;Z)_j)\big)\Big].
\end{align*}
If $j=i$, then $((z;\tilde{Z})_i;Z)_j = (z;Z)_i$ and $((z;Z)_j;\tilde{Z})_i = (z;\tilde{Z})_i$. But if $j\neq i$, then $((z;Z)_j;\tilde{Z})_i = ((z;\tilde{Z})_i;Z)_j$. Therefore, by the definition \eqref{eqn:definition_Gamma2} of the iterated carr\'e du champ operator $\Gamma_2$, we can compute that
\begin{align*}
&\Gamma_2(\mtx{f},\mtx{g})(z)\\
&\qquad = \frac{1}{4}\sum_{i,j=1}^n\Expect_{\tilde{Z}}\Expect_{Z}\Big[\big(\mtx{f}(z)-\mtx{f}((z;Z)_j)\big)\cdot\big(\mtx{g}(z)-\mtx{g}((z;Z)_j)\big) \\
& \qquad\qquad\qquad\qquad\qquad\qquad\quad + \big(\mtx{f}((z;\tilde{Z})_i)-\mtx{f}(((z;\tilde{Z})_i;Z)_j)\big)\cdot\big(\mtx{g}((z;\tilde{Z})_i)-\mtx{g}(((z;\tilde{Z})_i;Z)_j)\big)\\
& \qquad\qquad\qquad\qquad\qquad\qquad\quad - \big(\mtx{f}(z)-\mtx{f}((z;Z)_j)\big)\cdot\big(\mtx{g}((z;\tilde{Z})_i) - \mtx{g}(((z;Z)_j;\tilde{Z})_i)\big)\\
& \qquad\qquad\qquad\qquad\qquad\qquad\quad - \big(\mtx{f}((z;\tilde{Z})_i) - \mtx{f}(((z;Z)_j;\tilde{Z})_i)\big)\cdot\big(\mtx{g}(z)-\mtx{g}((z;Z)_j)\big)\Big]\\
&\qquad = \frac{1}{4}\sum_{i=1}^n\Expect_{\tilde{Z}}\Expect_Z \Big[\big(\mtx{f}(z)-\mtx{f}((z;Z)_i)\big)\cdot\big(\mtx{g}(z)-\mtx{g}((z;Z)_i)\big) \\
& \qquad\qquad\qquad\qquad\qquad\qquad\quad + \big(\mtx{f}((z;\tilde{Z})_i)-\mtx{f}((z;Z)_i)\big)\cdot\big(\mtx{g}((z;\tilde{Z})_i)-\mtx{g}((z;Z)_i)\big)\Big] \\
&\qquad\qquad + \frac{1}{4}\sum_{i\neq j}\Expect_{\tilde{Z}}\Expect_Z \Big[\big(\mtx{f}(z)-\mtx{f}((z;\tilde{Z})_i) - \mtx{f}((z;Z)_j) + \mtx{f}(((z;\tilde{Z})_i;Z)_j) \big)\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad \times \big(\mtx{g}(z)-\mtx{g}((z;\tilde{Z})_i) - \mtx{g}((z;Z)_j) + \mtx{g}(((z;\tilde{Z})_i;Z)_j) \big) \Big].
\end{align*}
This gives the expression \eqref{eqn:tensor_Gamma2}.
\end{proof}
\subsection{Bakry--\'Emery criterion}
It is clear from Lemma~\ref{lem:tensor_Gamma} that the formula~\eqref{eqn:tensor_Gamma} for $\Gamma$ appears within the formula~\eqref{eqn:tensor_Gamma2} for $\Gamma_2$. We immediately conclude that the Bakry--{\'E}mery criterion holds.
\begin{theorem}[Product measure: Bakry--{\'E}mery] \label{thm:product_measure_localPoincare} For the semigroup~\eqref{eqn:tensor_Markov}, the Bakry--\'Emery criterion \eqref{Bakry-Emery} holds with $c = 2$.
That is, for any suitable function $f:\Omega \rightarrow \mathbb{R}$,
\begin{equation*}\label{eqn:product_measure_localPoincare}
\Gamma(f)\leq 2\Gamma_2(f).
\end{equation*}
\end{theorem}
\begin{proof} Comparing the two expressions in Lemma~\ref{lem:tensor_Gamma} with $f=g$ gives
\begin{align*}
\Gamma_2(f)(z) &= \frac{1}{4}\sum_{i=1}^n \Expect_{\tilde{Z}}\Expect_Z \Big[\big(f(z)-f((z;Z)_i)\big)^2 + \left(f((z;\tilde{Z})_i)-f((z;Z)_i)\right)^2\Big] \\
&\qquad + \frac{1}{4}\sum_{i\neq j} \Expect_{\tilde{Z}}\Expect_Z \Big[\Big(f(z)-f((z;\tilde{Z})_i) - f((z;Z)_j) + f(((z;\tilde{Z})_i;Z)_j) \Big)^2\Big]\\
&\geq \frac{1}{4}\sum_{i=1}^n \Expect_Z \left[\big(f(z)-f((z;Z)_i)\big)^2\right]\\
&= \frac{1}{2}\Gamma(f)(z),
\end{align*}
which is the stated inequality.
\end{proof}
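The constant $c = 2$ cannot be improved in general. Here is a quick sketch, under the illustrative assumptions $d=1$, $\Omega_1 = \mathbb{R}$, and $\mu_1$ with mean zero and variance $\sigma^2$. For the coordinate function $f(z) = z_1$, every cross term with $i\neq j$ in Lemma~\ref{lem:tensor_Gamma} vanishes, and
\[\Gamma(f)(z) = \frac{1}{2}\Expect_Z\big[(z_1 - Z^1)^2\big] = \frac{z_1^2+\sigma^2}{2}
\quad\text{while}\quad
\Gamma_2(f)(z) = \frac{1}{4}\Expect_{\tilde{Z}}\Expect_Z\big[(z_1-Z^1)^2 + (\tilde{Z}^1 - Z^1)^2\big] = \frac{z_1^2+3\sigma^2}{4}.\]
The ratio $\Gamma(f)(z)/\Gamma_2(f)(z) = 2(z_1^2+\sigma^2)/(z_1^2+3\sigma^2)$ approaches $2$ as $|z_1|\rightarrow\infty$.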
After completing this paper, we learned that Theorem~\ref{thm:product_measure_localPoincare}
appears in \cite[Example 6.6]{junge2015noncommutative} with a different style of proof.
\begin{remark}[Matrix Poincar\'e inequality: Constants]
Following the discussion in Section~\ref{sec:local_matrix_Poincare_inequality}, Theorem~\ref{thm:product_measure_localPoincare} implies the matrix Poincar\'e inequality~\eqref{eqn:matrix_Poincare} with $\alpha = 2$. However, Aoun et al.~\cite{ABY20:Matrix-Poincare} proved that the Markov process~\eqref{eqn:tensor_Markov} actually satisfies the matrix Poincar\'e inequality with $\alpha = 1$; see also \cite[Theorem 5.1]{cheng2016characterizations}. This gap is not surprising because the averaging operation that is missing in the local Poincar\'e inequality contributes to the global convergence of the Markov semigroup.
\end{remark}
\subsection{Matrix concentration results}
\label{sec:concentration_results_product}
In this subsection, we complete the proofs of the matrix concentration results for product measures stated in Section~\ref{sec:main_results}.
For a product measure $\mu = \mu_1\otimes\mu_2\otimes\cdots\otimes\mu_n$, Theorem~\ref{thm:product_measure_localPoincare} shows that there is a reversible ergodic Markov semigroup whose stationary measure is $\mu$ and which satisfies the Bakry--\'Emery criterion \eqref{Bakry-Emery} with constant $c=2$. We then apply Theorem~\ref{thm:polynomial_moment} with $c=2$ to obtain the polynomial moment bounds in Corollary~\ref{cor:product_measure_Efron--Stein}. Similarly, we apply Theorem~\ref{thm:exponential_concentration} with $c=2$ to obtain the subgaussian concentration inequalities in Corollary~\ref{cor:product_measure_tailbound}.
\section{Bakry--{\'E}mery criterion for log-concave measures}
\label{sec:log-concave}
In this section, we study a class of log-concave measures; the most important example in this class is the standard Gaussian measure. First, we introduce the standard diffusion process associated with a log-concave measure. We verify that the associated semigroup is reversible and ergodic via standard arguments. Then we show that the Bakry--{\'E}mery criterion follows from the uniform strong convexity of the potential.
\subsection{Log-concave measures and Markov processes}
Consider the Markov process $(Z_t)_{t\geq 0}$ on $\mathbb{R}^n$ generated by the stochastic differential equation:
\begin{equation}\label{eqn:SDE}
\diff Z_t = -\nabla W(Z_t)\idiff{t} + \sqrt{2}\idiff{B_t},
\end{equation}
where $B_t$ is the standard $n$-dimensional Brownian motion and $W:\mathbb{R}^n\rightarrow \mathbb{R}$ is a smooth convex function. The stationary measure $\mu$ of this process has the density $\diff \mu = \rho^\infty(z)\idiff{z} = M^{-1}\mathrm{e}^{-W(z)}\idiff{z}$, where $M := \int_{\mathbb{R}^n}\mathrm{e}^{-W(z)}\idiff z$ is a normalization constant. The infinitesimal generator $\mathcal{L}$ is given by
\begin{equation}\label{eqn:log-concave_L}
\mathcal{L}\mtx{f}(z) = -\sum_{i=1}^n\partial_iW(z)\cdot\partial_i\mtx{f}(z) + \sum_{i=1}^n\partial_i^2\mtx{f}(z)
\quad\text{for all $z=(z_1,\dots,z_n)\in \mathbb{R}^n$.}
\end{equation}
The class of suitable functions is the Sobolev space $\mathrm{H}_{2,\mu}(\mathbb{R}^n;\mathbb{H}_d)$, defined in~\eqref{def:H2_function}. Here and elsewhere, $\partial_i$ means $\partial/\partial z_i$ and $\partial_{ij}$ means $\partial^2/(\partial z_i\partial z_j)$ for all $i,j=1,\dots,n$.
\subsubsection{Reversibility} The reversibility of this Markov process $(Z_t)_{t\geq0}$ can be verified with a standard calculation. We restrict our attention to functions in $\mathrm{H}_{2,\mu}(\mathbb{R}^n;\mathbb{H}_d)$. Integration by parts yields
\begin{align*}
\Expect_\mu[\mathcal{L}(\mtx{f})\mtx{g}] =&\ \int_{\mathbb{R}^n}\left(-\sum_{i=1}^n\partial_iW(z)\cdot\partial_i\mtx{f}(z) + \sum_{i=1}^n\partial_i^2\mtx{f}(z)\right)\mtx{g}(z)\rho^\infty(z)\idiff z\\
=&\ -\sum_{i=1}^n\int_{\mathbb{R}^n}\partial_i\mtx{f}(z)\cdot \partial_i\mtx{g}(z)\cdot\rho^\infty(z)\idiff z\\
=&\ \int_{\mathbb{R}^n}\mtx{f}(z)\left(-\sum_{i=1}^n\partial_iW(z)\cdot\partial_i\mtx{g}(z) + \sum_{i=1}^n\partial_i^2\mtx{g}(z)\right)\rho^\infty(z)\idiff z\\
=&\ \Expect_\mu[\mtx{f}\mathcal{L}(\mtx{g})].
\end{align*}
This shows that $\mathcal{L}$ is symmetric on $L_2(\mu)$ and thus $(Z_t)_{t\geq 0}$ is reversible. From the calculation above, we also obtain a simple formula for the associated Dirichlet form:
\[\mathcal{E}(\mtx{f},\mtx{g}) = \sum_{i=1}^n\Expect_\mu\left[\partial_i\mtx{f}\cdot \partial_i\mtx{g}\right]\quad \text{for all $\mtx{f},\mtx{g}\in \mathrm{H}_{2,\mu}(\mathbb{R}^n;\mathbb{H}_d)$}.\]
These results parallel the scalar case, but the partial derivatives are matrix-valued.
\subsubsection{Ergodicity} We now turn to the ergodicity of the Markov process given by \eqref{eqn:SDE}, which generally reduces to studying the convergence of the corresponding Fokker--Planck equation:
\begin{equation}\label{eqn:Fokker-Planck}
\begin{cases}
\frac{\partial}{\partial t}\rho_{x}(z,t) = \mathcal{L}^*\rho_{x}(z,t) := \sum_{i=1}^n\partial_i(\partial_iW(z)\rho_{x}(z,t)) + \sum_{i=1}^n\partial_i^2\rho_{x}(z,t); \\[3pt]
\rho_{x}(z,0) = \delta(z-x).
\end{cases}
\end{equation}
We define $\rho_{x}(z,t)$ to be the density of $Z_t$, conditional on $Z_0 = x\in\mathbb{R}^n$. As usual, $\delta(z-x)$ is the Dirac distribution centered at $x$. The associated Markov semigroup $(P_t)_{t\geq 0}$ can be recognized as
\begin{equation}\label{eqn:log-concave_semigroup}
P_t \mtx{f}(x) = \Expect_\mu\left[\mtx{f}(Z_t) \,|\, Z_0 = x \right] = \int_{\mathbb{R}^n} \mtx{f}(z) \rho_{x}(z,t) \idiff z \quad \text{for all $t\geq0$ and all $x\in \mathbb{R}^n$ }.
\end{equation}
The semigroup $(P_t)_{t\geq 0}$ is ergodic in the sense of \eqref{eqn:ergodicity} if and only if $\rho_{x}(\cdot,t)$ converges weakly to $\rho^\infty$ for all $x\in \mathbb{R}^n$.
A fundamental way to prove the convergence of \eqref{eqn:Fokker-Planck} to the stationary density $\rho^\infty$ is through the method of Lyapunov functions~\cite{hairer2010convergence,ji2019convergence}. However, ergodicity in the weak sense follows more easily from the assumption that the function $W$ is uniformly strongly convex. That is,
\[
(\operatorname{Hess} W)(z) := \big[\partial_{ij} W(z)\big]_{i,j=1}^n
\succcurlyeq \eta \cdot \mathbf{I}_n
\quad\text{for all $z \in \mathbb{R}^n$.}
\]
To see this, recall the Brascamp--Lieb inequality~\cite[Theorem 4.1]{BRASCAMP1976366},
which states that the (ordinary) variance of a scalar function $h:\mathbb{R}^n\rightarrow \mathbb{R}$
is bounded as
\[\Var_\mu[h]\leq \int_{\mathbb{R}^n} (\nabla h(z))^\mathsf{T}\big((\operatorname{Hess} W)(z)\big)^{-1}\nabla h(z) \idiff \mu(z).\]
Combine the last two displays to arrive at the Poincar\'e inequality $\Var_\mu[h]\leq \eta^{-1}\mathcal{E}(h)$.
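In detail: the strong convexity assumption gives $\big((\operatorname{Hess} W)(z)\big)^{-1}\preccurlyeq \eta^{-1}\mathbf{I}_n$ for all $z\in\mathbb{R}^n$, so the Brascamp--Lieb bound becomes
\[\Var_\mu[h]\leq \eta^{-1}\int_{\mathbb{R}^n}\|\nabla h(z)\|^2\idiff\mu(z) = \eta^{-1}\mathcal{E}(h).\]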
Next, consider the scalar function $\phi_{x}(z,t) := (\rho_{x}(z,t) - \rho^\infty(z))/\rho^\infty(z)$. Let us check that its variance $\Var_\mu[\phi_{x}(\cdot,t)]$ converges to $0$ exponentially fast. Indeed, it is not hard to verify that $\phi_{x}(z,t)$ satisfies the partial differential equation
\[\frac{\partial}{\partial t} \phi_{x}(z,t) = \mathcal{L}\phi_{x}(z,t) \quad \text{for $t\geq0$ and $z\in \mathbb{R}^n$}.\]
Along with the Poincar\'e inequality and the fact that $\Expect_\mu\phi_{x}(\cdot,t) = 0$, this implies
\[\frac{\diff{} }{\diff t} \Var_\mu[\phi_{x}(\cdot,t)] = - 2\mathcal{E}(\phi_{x}(\cdot,t))\leq - 2\eta \Var_\mu[\phi_{x}(\cdot,t)]. \]
Therefore, the quantity $\Var_\mu[\phi_{x}(\cdot,t)]$ converges to $0$ exponentially fast because
\[\Var_\mu[\phi_{x}(\cdot,t)] \leq \mathrm{e}^{-2\eta (t-t_0)} \Var_\mu[\phi_{x}(\cdot,t_0)]\quad \text{for}\ t\geq t_0>0.\]
As a consequence, for any $f\in \mathrm{H}_{2,\mu}(\mathbb{R}^n;\mathbb{R})$ and any $x\in \mathbb{R}^n$,
\begin{align*}
\left|P_tf(x) - \Expect_\mu f\right| &= \left|\int_{\mathbb{R}^n} f(z)(\rho_{x}(z,t)-\rho^\infty(z))\idiff z \right| = \left|\int_{\mathbb{R}^n} f(z)\rho^\infty(z)\phi_{x}(z,t)\idiff z\right| \\
&\leq \int_{\mathbb{R}^n} |f(z)|\cdot\rho^\infty(z)\cdot|\phi_{x}(z,t)|\idiff z \leq \left(\Expect_\mu |f|^2\right)^{1/2}\Var_\mu[\phi_{x}(\cdot,t)]^{1/2} \rightarrow 0.
\end{align*}
This justifies the pointwise convergence of $P_tf$, which is stronger than the $L_2(\mu)$ ergodicity \eqref{eqn:ergodicity} of the semigroup $(P_t)_{t\geq0}$.
\subsection{Carr\'e du champ operators}
After checking reversibility and ergodicity, we now turn to the derivation of the matrix carr\'e du champ operator and the iterated matrix carr\'e du champ operator. Their explicit forms are given in the next lemma.
\begin{lemma}[Log-concave measure: Carr{\'e} du champs] \label{lem:log-concave_Gamma} The matrix carr\'e du champ operator $\Gamma$ and the iterated matrix carr\'e du champ operator $\Gamma_2$ of the Markov process defined by~\eqref{eqn:SDE} are given by the formulas
\begin{equation}\label{eqn:log-concave_Gamma}
\Gamma(\mtx{f},\mtx{g})= \sum_{i=1}^n\partial_i\mtx{f}\cdot \partial_i\mtx{g}
\end{equation}
and
\begin{equation}\label{eqn:log-concave_Gamma2}
\Gamma_2(\mtx{f},\mtx{g}) = \sum_{i,j=1}^n\partial_{ij}W\cdot \partial_i\mtx{f} \cdot \partial_j\mtx{g} + \sum_{i,j=1}^n\partial_{ij}\mtx{f}\cdot \partial_{ij}\mtx{g}
\end{equation}
for all suitable $\mtx{f},\mtx{g}:\mathbb{R}^n\rightarrow\mathbb{H}_d$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:log-concave_Gamma}]
Knowing the explicit form \eqref{eqn:log-concave_L} of the Markov generator $\mathcal{L}$, we can compute the carr\'e du champ operator $\Gamma$ as
\begin{align*}
\Gamma(\mtx{f},\mtx{g})=&\ \frac{1}{2}\sum_{i=1}^n\left(-\partial_i W\cdot \partial_i(\mtx{f}\mtx{g}) + \partial_i^2(\mtx{f}\mtx{g}) - \big(-\partial_i W\cdot\partial_i \mtx{f} + \partial_i^2\mtx{f}\big)\mtx{g} - \mtx{f}\big(-\partial_i W\cdot \partial_i\mtx{g} + \partial_i^2\mtx{g}\big)\right)\\
=&\ \sum_{i=1}^n\partial_i\mtx{f}\cdot \partial_i\mtx{g}.
\end{align*}
Moreover, combining the expressions \eqref{eqn:log-concave_L} and \eqref{eqn:log-concave_Gamma} yields the following:
\begin{align*}
\mathcal{L}\Gamma(\mtx{f},\mtx{g}) =&\ -\sum_{i=1}^n\partial_iW\cdot \partial_i\left(\sum_{j=1}^n\partial_j\mtx{f}\cdot \partial_j\mtx{g}\right) + \sum_{i=1}^n\partial_i^2\left(\sum_{j=1}^n\partial_j\mtx{f}\cdot \partial_j\mtx{g}\right)\\
=&\ \sum_{i,j=1}^n\left(-\partial_iW\cdot \partial_{ij}\mtx{f}\cdot \partial_j\mtx{g} - \partial_iW\cdot \partial_j\mtx{f}\cdot \partial_{ij}\mtx{g} + \partial_i^2(\partial_j\mtx{f})\cdot \partial_j\mtx{g}+2\partial_{ij}\mtx{f}\cdot\partial_{ij}\mtx{g}+ \partial_j\mtx{f}\cdot \partial_i^2(\partial_j\mtx{g})\right).\\
\Gamma(\mathcal{L}\mtx{f},\mtx{g}) =&\ \sum_{j=1}^n\partial_j\left(\sum_{i=1}^n \big(- \partial_iW\cdot \partial_i\mtx{f} + \partial_i^2\mtx{f})\right)\cdot \partial_j\mtx{g}\\
=&\ \sum_{i,j=1}^n\left(-\partial_{ij}W\cdot \partial_i\mtx{f}\cdot \partial_j\mtx{g} - \partial_iW\cdot \partial_{ij}\mtx{f}\cdot \partial_j\mtx{g} + \partial_i^2(\partial_j\mtx{f})\cdot \partial_j\mtx{g}\right).\\
\Gamma(\mtx{f},\mathcal{L}\mtx{g}) =&\ \sum_{j=1}^n\partial_j\mtx{f}\cdot \partial_j\left(\sum_{i=1}^n \big(- \partial_iW \cdot\partial_i\mtx{g} + \partial_i^2\mtx{g})\right)\\
=&\ \sum_{i,j=1}^n\left(-\partial_{ij}W\cdot \partial_j\mtx{f}\cdot \partial_i\mtx{g} - \partial_iW\cdot \partial_j\mtx{f}\cdot \partial_{ij}\mtx{g} + \partial_j\mtx{f}\cdot \partial_i^2(\partial_j\mtx{g})\right).
\end{align*}
Then we can compute that
\begin{align*}
\Gamma_2(\mtx{f},\mtx{g}) =&\ \frac{1}{2}\left(\mathcal{L}\Gamma(\mtx{f},\mtx{g}) -\Gamma(\mathcal{L}\mtx{f},\mtx{g}) -\Gamma(\mtx{f},\mathcal{L}\mtx{g})\right)\\
=&\ \frac{1}{2}\sum_{i,j=1}^n\left(\partial_{ij}W\cdot \partial_i\mtx{f}\cdot \partial_j\mtx{g}+ \partial_{ij}W\cdot \partial_j\mtx{f}\cdot \partial_i\mtx{g}\right) + \sum_{i,j=1}^n\partial_{ij}\mtx{f}\cdot \partial_{ij}\mtx{g}\\
=&\ \sum_{i,j=1}^n\partial_{ij}W\cdot \partial_i\mtx{f}\cdot \partial_j\mtx{g} + \sum_{i,j=1}^n\partial_{ij}\mtx{f}\cdot \partial_{ij}\mtx{g}.
\end{align*}
This gives the expression \eqref{eqn:log-concave_Gamma2}.
\end{proof}
\subsection{Bakry--\'Emery criterion} It is a well-known result that a Bakry--\'Emery criterion follows from the uniform strong convexity of $W$. For example, see the discussion in \cite[Sec. 4.8]{bakry2013analysis}. Nevertheless, we provide a short proof here for the sake of completeness.
\begin{fact}[Log-concave measure: Matrix Bakry--{\'E}mery] \label{fact:log-concave_localPoincare}
Consider the Markov process defined by \eqref{eqn:SDE}. If the potential $W:\mathbb{R}^n\rightarrow\mathbb{R}$ satisfies $(\operatorname{Hess} W)(z)\succcurlyeq \eta \cdot \mathbf{I}_n $ for all $z\in \mathbb{R}^n$ for some constant $\eta>0$, then the Bakry--\'Emery criterion \eqref{Bakry-Emery} holds with $c = \eta^{-1}$. That is, for any suitable function $f:\mathbb{R}^n\rightarrow\mathbb{R}$,
\begin{equation*}\label{eqn:log-concave_localPoincare}
\Gamma(f)\preccurlyeq \eta^{-1}\Gamma_2(f).
\end{equation*}
\end{fact}
\begin{proof} Comparing the two expressions in Lemma~\ref{lem:log-concave_Gamma} with $f=g$ gives that
\begin{align*}
\Gamma_2(f) &= \sum_{i,j=1}^n\partial_{ij}W\cdot \partial_if\cdot \partial_jf + \sum_{i,j=1}^n(\partial_{ij}f)^2 \\
&\geq (\nabla f)^\mathsf{T}(\operatorname{Hess} W)\nabla f
\geq \eta \sum_{i=1}^n(\partial_if)^2
= \eta\cdot\Gamma(f).
\end{align*}
The second inequality follows from the uniform strong convexity of $W$.
Proposition~\ref{prop:BE_equiv} extends the scalar Bakry--{\'E}mery criterion
to matrices.
\end{proof}
\subsection{Standard normal distribution}\label{sec:Gaussian}
The most important example of a strongly log-concave measure
occurs for the potential
\[W(z) = \frac{1}{2}z^\mathsf{T} z\quad\text{for all $z\in \mathbb{R}^n$.}\]
In this case, the corresponding log-concave measure $\mu$ is
the $n$-dimensional standard Gaussian distribution $N(\vct{0},\mathbf{I}_n)$, with density
\[\diff \mu = \frac{1}{\sqrt{(2\pi)^n}}\exp\left(-\frac{1}{2}z^\mathsf{T} z\right) \idiff{z}\quad \text{for all $z\in \mathbb{R}^n$.}\]
The associated Markov process is known as the Ornstein--Uhlenbeck process. The semigroup $(P_t)_{t\geq 0}$
has a simple form, given by the Mehler formula:
\[P_t\mtx{f}(z) = \Expect \mtx{f}\left(\mathrm{e}^{-t}z + \sqrt{1-\mathrm{e}^{-2t}}\xi\right)\quad\text{where $\xi\sim N(\vct{0},\mathbf{I}_n)$.} \]
The ergodicity of this Markov semigroup is obvious from the above formula because $\mathrm{e}^{-t}\rightarrow 0$ as $t\rightarrow +\infty$. Lemma~\ref{lem:log-concave_Gamma} gives the matrix carr\'e du champ operator $\Gamma$ and the iterated matrix carr\'e du champ operator $\Gamma_2$ for the Ornstein--Uhlenbeck process:
\[\Gamma(\mtx{f},\mtx{g}) = \sum_{i=1}^n\partial_i\mtx{f}\cdot \partial_i\mtx{g}\quad \text{and}\quad \Gamma_2(\mtx{f},\mtx{g}) = \sum_{i=1}^n\partial_i\mtx{f}\cdot \partial_i\mtx{g} + \sum_{i,j=1}^n\partial_{ij}\mtx{f}\cdot \partial_{ij}\mtx{g}.\]
Clearly, $\Gamma(\mtx{f})\preccurlyeq \Gamma_2(\mtx{f})$. Therefore, the Bakry--{\'E}mery criterion~\eqref{Bakry-Emery} holds with $c = 1$.
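This constant is sharp: for a linear scalar function $f(z) = \langle \vct{a}, z\rangle$ with $\vct{a}\in\mathbb{R}^n$, all second partial derivatives vanish, so
\[\Gamma(f) = \sum_{i=1}^n a_i^2 = \Gamma_2(f),\]
and the inequality $\Gamma(f)\preccurlyeq c\,\Gamma_2(f)$ fails for every $c<1$.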
\subsection{Matrix concentration results}
\label{sec:concentration_results_log-concave}
Finally, we prove the matrix concentration results for log-concave measures stated in Section~\ref{sec:main_results}.
Consider a log-concave probability measure $\diff \mu \propto \mathrm{e}^{-W(z)}\idiff z$ on $\mathbb{R}^n$,
where the potential satisfies the strong convexity condition $\operatorname{Hess} W\succcurlyeq \eta\mathbf{I}_n$
for $\eta > 0$.
Fact~\ref{fact:log-concave_localPoincare} states that the associated semigroup \eqref{eqn:log-concave_semigroup} satisfies the Bakry--\'Emery criterion with constant $c=\eta^{-1}$. We then apply Theorem~\ref{thm:polynomial_moment} with $c=\eta^{-1}$ to obtain the polynomial moment bounds in Corollary~\ref{cor:log-concave_polynomial_inequality}. Similarly, we apply Theorem~\ref{thm:exponential_concentration} with $c=\eta^{-1}$ to obtain the subgaussian concentration inequalities in Corollary~\ref{cor:log-concave_concentration}.
\section{Extension to Riemannian manifolds}\label{sec:extension_Riemannian_manifold}
In this section, we give a high-level discussion about diffusion processes on Riemannian manifolds. The book \cite{bakry2013analysis} contains a comprehensive treatment of the subject. For an introduction to calculus
on Riemannian manifolds, references include \cite{petersen2016riemannian,lee2018introduction}.
\subsection{Measures on Riemannian manifolds}
Let $(M, \mathfrak{g})$ be an $n$-dimensional Riemannian manifold whose co-metric tensor $\mathfrak{g}(x) = (g^{ij}(x) : 1 \leq i,j \leq n)$ is symmetric and positive definite for every $x \in M$. We write $\mtx{G}(x) = (g_{ij} : 1 \leq i, j \leq n)$ for the metric tensor, which satisfies the relation $\mtx{G}(x) = \mathfrak{g}(x)^{-1}$.
The Riemannian measure $\mu_\mathfrak{g}$ on the manifold $(M,\mathfrak{g})$ has density $\diff \mu_\mathfrak{g} \propto w_\mathfrak{g}(x(z)) \idiff{z}$ with respect to the Lebesgue measure in local coordinates, where the weight is $w_\mathfrak{g} := \det(\mathfrak{g})^{-1/2}$. Whenever this measure is finite, we normalize it to obtain a probability measure.
In particular, a compact Riemannian manifold always admits a Riemannian probability measure.
The matrix Laplace--Beltrami operator $\Delta_{\mathfrak{g}}$ on the manifold is defined as
\[\Delta_\mathfrak{g}\mtx{f}(x) := \frac{1}{w_\mathfrak{g}} \sum_{i,j=1}^n\partial_i\left(w_\mathfrak{g}g^{ij}\partial_j\mtx{f}(x)\right)\quad \text{for suitable $\mtx{f} : M \to \mathbb{H}_d$ and $x\in M$.}\]
Here, $\partial_i$ and the like represent the components of the differential with respect to local coordinates.
The diffusion process on $M$ whose infinitesimal generator is $\Delta_{\mathfrak{g}}$
is called the Riemannian Brownian motion. The measure $\mu_\mathfrak{g}$ is the
stationary measure for the Brownian motion.
To generalize, one may consider a weighted measure $\diff \mu \propto \mathrm{e}^{-W} \diff \mu_\mathfrak{g}$ where the potential $W:M\rightarrow \mathbb{R}$ is sufficiently smooth. The associated infinitesimal generator is then the Laplace--Beltrami operator plus a drift term:
\begin{equation} \label{eqn:mL-drift}
\mathcal{L}\mtx{f}(x) := -\sum_{i,j=1}^n g^{ij}\,\partial_iW\, \partial_j\mtx{f} + \frac{1}{w_\mathfrak{g}} \sum_{i,j=1}^n\partial_i\left(w_\mathfrak{g}g^{ij}\,\partial_j\mtx{f}(x)\right)\quad \text{for suitable $\mtx{f} : M \to \mathbb{H}_d$.}
\end{equation}
It is not hard to check that $\mathcal{L}$ is symmetric with respect to $\mu$, and hence the induced diffusion process with drift is reversible.
\subsection{Carr{\'e} du champ operators}
Next, we present expressions for the matrix carr{\'e} du champ operators associated with the infinitesimal generator $\mathcal{L}$ defined in~\eqref{eqn:mL-drift}. The derivation follows from a standard symbol calculation, as in the scalar setting.
\subsubsection{Carr\'{e} du champ operator} The carr{\'e} du champ operator coincides with the squared ``magnitude'' of the differential:
\begin{equation}\label{eqn:gamma_Riemannian}
\Gamma(\mtx{f}) = \sum_{i,j=1}^ng^{ij}\,\partial_i\mtx{f}\, \partial_j\mtx{f}\quad \text{for suitable $\mtx{f}:M\rightarrow\mathbb{H}_d$}.
\end{equation}
Note that this expression contains a matrix product. The calculation of the carr{\'e} du champ involves a choice of local coordinates; nevertheless, the resulting expressions in different local coordinates agree under a change of variables.
Another way to calculate the carr{\'e} du champ $\Gamma(\mtx{f})$ is by relating it to the tangential gradient of $\mtx{f}$ on the manifold. For a point $x\in M$, let $T_xM$ denote the tangent space at $x$. The tangential gradient $\nabla_M\mtx{f}(x)$ of a matrix-valued function $\mtx{f}:M\rightarrow\mathbb{H}_d$ can be written as
\[\nabla_M\mtx{f}(x) = \sum_{i=1}^N \vct{v}_i\otimes \mtx{A}_i\]
for some vectors $\{\vct{v}_i\}_{i=1}^N\subset T_xM$ and some matrices $\{\mtx{A}_i\}_{i=1}^N\subset \mathbb{H}_d$ that depend on the representation of the manifold $M$. The integer $N$ is not necessarily the dimension of $M$. When $d=1$, the tangential gradient $\nabla_M\mtx{f}(x)$ is also a vector in $T_xM$. Now, the carr{\'e} du champ at the point $x$ is given by an equivalent expression:
\begin{equation}\label{eqn:gamma_Riemannian_alternative}
\Gamma(\mtx{f})(x) = \langle \nabla_M\mtx{f}(x),\nabla_M\mtx{f}(x)\rangle_{\mtx{G}} := \sum_{i,j=1}^N\langle \vct{v}_i,\vct{v}_j\rangle_{\mtx{G}}\cdot \mtx{A}_i\mtx{A}_j
\end{equation}
where $\langle \cdot ,\cdot\rangle_{\mtx{G}}$ is the inner product on $T_xM$ associated with the metric tensor $\mtx{G}$.
The expression \eqref{eqn:gamma_Riemannian_alternative} coincides with \eqref{eqn:gamma_Riemannian} if we choose $N = n$ and
$(\vct{v}_i : 1 \leq i \leq n)$ to be the moving frame associated with the local coordinates. In this case,
$\langle \vct{v}_i(x),\vct{v}_j(x)\rangle_{\mtx{G}} = g_{ij}(x)$ for $i,j=1,\dots,n$. Moreover, the tangential gradient can be written as
\[\nabla_M\mtx{f}(x) = \sum_{i=1}^n \vct{v}_i(x)\otimes \nabla_M^i\mtx{f}(x),\]
where $\nabla_M^i\mtx{f}(x):= \sum_{j=1}^ng^{ij}\partial_j\mtx{f}$ for $i=1,\dots,n$. Then one can rewrite the expression \eqref{eqn:gamma_Riemannian_alternative} in the form \eqref{eqn:gamma_Riemannian} by recalling that $\mtx{G}=\mathfrak{g}^{-1}$.
The expression \eqref{eqn:gamma_Riemannian_alternative} is especially useful when the Riemannian manifold $M$ is embedded into a higher-dimensional Euclidean space $\mathbb{R}^N$ with the metric tensor $\mtx{G}$ induced by the Euclidean metric. That is, $M$ is a Riemannian submanifold of $\mathbb{R}^N$. In this case, for a function $\mtx{f} : \mathbb{R}^{N} \to \mathbb{H}_d$, the tangential gradient $\nabla_M\mtx{f}(x)$ is simply the projection of $\nabla_{\mathbb{R}^N}\mtx{f}(x)$ onto the tangent space $T_xM$, where $\nabla_{\mathbb{R}^N}\mtx{f}$ is the ordinary gradient of $\mtx{f}$ in the embedding space $\mathbb{R}^N$. Let us elaborate. Suppose that $x = (x_1,\dots,x_N)$ is the representation of a point $x\in M$ with respect to the standard basis $\{\mathbf{e}_i\}_{i=1}^N$ of $\mathbb{R}^N$. Define the orthogonal projection $\mathrm{Proj}_x$ onto the tangent space $T_xM$. Then the tangential gradient satisfies
\[\nabla_M\mtx{f}(x) = (\mathrm{Proj}_x \otimes \mathbf{I})\left(\sum_{i=1}^N \mathbf{e}_i\otimes \frac{\partial \mtx{f}(x)}{\partial x_i}\right) = \sum_{i=1}^N (\mathrm{Proj}_x\mathbf{e}_i)\otimes \frac{\partial \mtx{f}(x)}{\partial x_i}.\]
This expression of the tangential gradient helps simplify the calculation of the carr{\'e} du champ operator in many interesting examples.
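For instance, for the unit sphere $\mathbb{S}^{n}\subset\mathbb{R}^{n+1}$, the orthogonal projection onto the tangent space at $x$ is $\mathrm{Proj}_x = \mathbf{I}_{n+1} - xx^\mathsf{T}$, so for a scalar function $f$,
\[\nabla_M f(x) = (\mathbf{I}_{n+1}-xx^\mathsf{T})\,\nabla_{\mathbb{R}^{n+1}}f(x)
\quad\text{and}\quad
\Gamma(f)(x) = \big\|\nabla_{\mathbb{R}^{n+1}}f(x)\big\|^2 - \langle x, \nabla_{\mathbb{R}^{n+1}}f(x)\rangle^2,\]
since $x$ is a unit normal vector to the sphere at $x$.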
\subsubsection{Iterated carr{\'e} du champ operator} To introduce the iterated matrix carr{\'e} du champ operator, we first define the Hessian
$\nabla^2 \mtx{f} := (\nabla^2_{ij} \mtx{f} : 1 \leq i,j \leq n)$ of a matrix-valued function $\mtx{f}:M\rightarrow \mathbb{H}_d$,
where
\[\nabla^2_{ij} \mtx{f} := \partial_{ij}\mtx{f} - \sum_{k=1}^n\gamma_{ij}^k\partial_k\mtx{f} \quad \text{for $i,j=1,\dots,n$}.\]
The Christoffel symbols $\gamma_{ij}^k$ are the quantities
\[
\gamma_{ij}^k := \frac{1}{2}\sum_{l=1}^ng^{kl}(\partial_{j}g_{il} + \partial_{i}g_{jl} - \partial_{l}g_{ij})
\quad \text{for $i,j,k=1,2,\dots,n$}.\]
When the matrix dimension $d > 1$, the Hessian $\nabla^2 \mtx{f}$ is a 4-tensor.
Now, the iterated matrix carr{\'e} du champ operator $\Gamma_2$ admits the formula
\begin{equation} \label{eqn:gamma2-riemann}
\Gamma_2(\mtx{f}) = \sum_{i,j,k,l=1}^ng^{ij}g^{kl} \, \nabla^2_{ik}\mtx{f}\, \nabla^2_{jl}\mtx{f} + \sum_{i,j,k,l=1}^ng^{ik}g^{jl}\left(\operatorname{Ric}_{kl} + \nabla^2_{kl}W\right) \partial_i\mtx{f}\, \partial_j\mtx{f}.
\end{equation}
Again, this expression involves matrix products.
The Ricci tensor $\operatorname{\mtx{Ric}} = (\operatorname{Ric}_{ij} : 1 \leq i,j \leq n)$ is given by
\[\operatorname{Ric}_{ij} := \sum_{k=1}^n \left(\partial_k\gamma_{ij}^k - \partial_i\gamma_{kj}^k\right) + \sum_{k,l=1}^n\left(\gamma_{kl}^k\gamma_{ij}^l - \gamma_{il}^k\gamma_{jk}^l\right).\]
The Ricci tensor expresses the curvature of the manifold.
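As a consistency check, consider the flat case $M = \mathbb{R}^n$ with $g^{ij} = \delta_{ij}$. Then all Christoffel symbols vanish, $\nabla^2_{ij}\mtx{f} = \partial_{ij}\mtx{f}$, and $\operatorname{\mtx{Ric}} = \mtx{0}$, so the formula \eqref{eqn:gamma2-riemann} reduces to
\[\Gamma_2(\mtx{f}) = \sum_{i,j=1}^n\partial_{ij}\mtx{f}\cdot\partial_{ij}\mtx{f} + \sum_{i,j=1}^n\partial_{ij}W\cdot\partial_i\mtx{f}\cdot\partial_j\mtx{f},\]
which recovers the expression \eqref{eqn:log-concave_Gamma2} for log-concave measures with $\mtx{g} = \mtx{f}$.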
\subsection{Bakry--\'Emery criterion}\label{sec:BE_Riemannian}
Since the first sum in the expression~\eqref{eqn:gamma2-riemann} for $\Gamma_2(\mtx{f})$ is a positive-semidefinite matrix, we have the inequality
\begin{equation}\label{eqn:gamma_2_Riemannian}
\Gamma_2(\mtx{f}) \succcurlyeq \sum_{i,j,k,l=1}^ng^{ik}g^{jl}\left(\operatorname{Ric}_{kl} + \nabla^2_{kl}W\right) \partial_i\mtx{f}\, \partial_j\mtx{f}.
\end{equation}
In a Euclidean space, the Ricci tensor is everywhere zero, so the Bakry--\'Emery criterion~\eqref{Bakry-Emery} relies on the strong convexity of the potential $W$, as we have seen in Section~\ref{sec:log-concave}. In contrast, on a Riemannian manifold, the Ricci tensor plays an important role.
Let us now assume that the Riemannian manifold is unweighted; that is, the potential $W = 0$ identically. By comparing the displays \eqref{eqn:gamma_Riemannian} and \eqref{eqn:gamma_2_Riemannian} for a scalar function $f:M\to\mathbb{R}$, we can see that the scalar Bakry--\'Emery criterion holds with constant $c=\rho^{-1}$, provided that
\[
\mathfrak{g}(x)\operatorname{\mtx{Ric}}(x)\mathfrak{g}(x)\succcurlyeq \rho \mathfrak{g}(x)\quad \text{or equivalently}\quad \operatorname{\mtx{Ric}}(x) \succcurlyeq \rho \mtx{G}(x) \quad \text{for all $x\in M$}.
\]
That is, the eigenvalues of $\operatorname{\mtx{Ric}}$ relative to the metric $\mtx{G}$ are bounded from below by $\rho$. This is often referred to as the curvature condition $CD(\rho,\infty)$.
Proposition~\ref{prop:BE_equiv} allows us to lift the scalar Bakry--{\'E}mery
criterion to matrix-valued functions; we can also achieve this goal by a direct
argument.
We remark that the uniform positivity of the Ricci curvature tensor also leads to a Poincar\'e inequality for the diffusion process on the manifold; see \cite[Section 4.8]{bakry2013analysis}. Therefore, Proposition~\ref{prop:matrix_poincare} implies that the associated Markov semigroup is ergodic in the sense of \eqref{eqn:ergodicity}.
As a typical example, consider the $n$-dimensional unit sphere $\mathbb{S}^n \subset \mathbbm{R}^{n+1}$,
equipped with the induced Riemannian structure. The associated Riemannian measure is the
uniform distribution. For the sphere, the Ricci curvature tensor is constant:
$\operatorname{\mtx{Ric}} = (n-1)\mtx{G}$; see \cite[Section 2.2]{bakry2013analysis}.
Therefore, the Brownian motion on $\mathbb{S}^n$ satisfies a Bakry--\'Emery criterion \eqref{Bakry-Emery} with $c = (n-1)^{-1}$ for $n\geq 2$.
Next, consider the special orthogonal group $\mathrm{SO}(n) \subset \mathbbm{R}^{n \times n}$
with the induced Riemannian structure. The canonical measure is the Haar probability measure.
For this manifold, it is known that the eigenvalues of the Ricci tensor are bounded below by $\rho = (n-1)/4$;
see~\cite[p.~27]{ledoux2001concentration}.
Therefore, the special orthogonal group $\mathrm{SO}(n)$ satisfies the Bakry--{\'E}mery
criterion~\eqref{Bakry-Emery} with $c = 4/(n-1)$.
There are many other Riemannian manifolds where a lower bound on the Ricci curvature is available.
We refer the reader to~\cite[Sec.~2.2.1]{ledoux2001concentration} for more examples and references.
\subsection{Calculations of carr{\'e} du champ operators}\label{sec:Riemannian_gamma}
In this section, we provide calculations of carr{\'e} du champ operators for the concrete examples in Section~\ref{sec:riemann-exp}.
\subsubsection{Example~\ref{example:sphere_I}: Sphere I}
In this example, we consider the unit sphere $\mathbb{S}^n \subset \mathbbm{R}^{n+1}$ as a Riemannian submanifold
of $\mathbbm{R}^{n+1}$ for $n \geq 2$. The canonical Riemannian measure is the uniform probability measure $\sigma_n$
on the sphere.
Let $(\mtx{A}_1, \dots, \mtx{A}_{n+1}) \subset \mathbb{H}_d$ be a fixed collection of Hermitian matrices.
Draw a random vector $\vct{x} = (x_1, \dots, x_{n+1}) \in \mathbb{S}^n$ from the uniform measure; we use
boldface to emphasize that $\vct{x}$ is a vector in the embedding space. Consider the matrix-valued function
$$
\mtx{f}(\vct{x}) = \sum_{i=1}^{n+1} x_i \mtx{A}_i.
$$
We can use the expression~\eqref{eqn:gamma_Riemannian_alternative} to compute the carr{\'e} du champ
of $\mtx{f}$.
Indeed, the ordinary gradient of $\mtx{f}$ as a function on $\mathbb{R}^{n+1}$ is given by
\[\nabla_{\mathbb{R}^{n+1}} \mtx{f}(\vct{x}) = \sum_{i=1}^{n+1}\mathbf{e}_i\otimes \frac{\partial \mtx{f}(\vct{x})}{\partial x_i} = \sum_{i=1}^{n+1}\mathbf{e}_i\otimes \mtx{A}_i\quad \text{for all $\vct{x}\in \mathbb{R}^{n+1}$}.\]
As usual, $\{\mathbf{e}_i\}_{i=1}^{n+1}$ is the standard basis of $\mathbb{R}^{n+1}$.
Define the orthogonal projection $\mathrm{Proj}_{\vct{x}} = \mathbf{I} - \vct{x}\vct{x}^\mathsf{T}$ onto the tangent space $T_{\vct{x}}\mathbb{S}^n = \{\vct{y}\in \mathbb{R}^{n+1}: \vct{y}^\mathsf{T}\vct{x}=0 \}$.
Thus, the tangential gradient is the projection of the ordinary gradient onto the tangent space:
\[\nabla_{\mathbb{S}^n}\mtx{f}(\vct{x}) = (\mathrm{Proj}_{\vct{x}} \otimes \mathbf{I})\nabla_{\mathbb{R}^{n+1}} \mtx{f}(\vct{x}) = \sum_{i=1}^{n+1}(\mathbf{e}_i - x_i\vct{x})\otimes \mtx{A}_i.\]
By the expression \eqref{eqn:gamma_Riemannian_alternative}, we can compute the carr{\'e} du champ at each point $\vct{x}\in \mathbb{S}^{n}$ as
\begin{align*}
\Gamma(\mtx{f})(\vct{x}) &= \sum_{i,j=1}^{n+1}(\mathbf{e}_i - x_i\vct{x})^\mathsf{T}(\mathbf{e}_j - x_j\vct{x})\cdot \mtx{A}_i\mtx{A}_j = \sum_{i,j=1}^{n+1}(\delta_{ij} - x_ix_j)\,\mtx{A}_i\mtx{A}_j \\
&= \sum_{i=1}^{n+1}\mtx{A}_i^2 - \sum_{i,j=1}^{n+1}x_ix_j\,\mtx{A}_i\mtx{A}_j = \sum_{i=1}^{n+1}\mtx{A}_i^2 - \left(\sum_{i=1}^{n+1}x_i\mtx{A}_i\right)^2.
\end{align*}
This calculation verifies the formula \eqref{eqn:gamma_sphere_I}.
It is now evident that
$$
\mtx{0} \preccurlyeq \Gamma(\mtx{f})(\vct{x}) \preccurlyeq \sum_{i=1}^{n+1}\mtx{A}_i^2 \quad\text{for all $\vct{x} \in \mathbb{S}^n$.}
$$
Therefore, the variance proxy $v_{\mtx{f}} \leq \norm{ \sum_{i=1}^{n+1} \mtx{A}_i^2 }$.
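The identity $\Gamma(\mtx{f}) = \sum_i \mtx{A}_i^2 - (\sum_i x_i\mtx{A}_i)^2$ and the two-sided bound above can be confirmed numerically. The following sketch is our illustration; the random Hermitian matrices are arbitrary stand-ins for $\mtx{A}_1,\dots,\mtx{A}_{n+1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3

# Random complex Hermitian matrices A_1, ..., A_{n+1} (arbitrary stand-ins)
A = rng.standard_normal((n + 1, d, d)) + 1j*rng.standard_normal((n + 1, d, d))
A = (A + A.conj().transpose(0, 2, 1)) / 2
x = rng.standard_normal(n + 1)
x /= np.linalg.norm(x)                       # a point on the sphere S^n

# Carre du champ from the projected gradients: sum_{ij} (delta_ij - x_i x_j) A_i A_j
G_proj = sum((float(i == j) - x[i]*x[j]) * (A[i] @ A[j])
             for i in range(n + 1) for j in range(n + 1))

# Closed form: sum_i A_i^2 - (sum_i x_i A_i)^2
S = np.einsum('i,ijk->jk', x, A)
G_closed = np.einsum('ijk,ikl->jl', A, A) - S @ S
```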
\subsubsection{Example~\ref{example:sphere_II}: Sphere II}
We maintain the setup and notation from the last subsection, and we consider the matrix-valued function
$$
\mtx{f}(\vct{x}) = \sum_{i=1}^{n+1}x_i^2\mtx{A}_i
\quad\text{where $\vct{x} \sim \sigma_n$ on $\mathbb{S}^n$.}
$$
Treating $\mtx{f}$ as a function on the embedding space $\mathbb{R}^{n+1}$,
the ordinary gradient is given by
\[\nabla_{\mathbb{R}^{n+1}} \mtx{f}(\vct{x}) = \sum_{i=1}^{n+1}\mathbf{e}_i\otimes \frac{\partial \mtx{f}(\vct{x})}{\partial x_i} = 2\sum_{i=1}^{n+1}x_i\mathbf{e}_i\otimes \mtx{A}_i\quad \text{for all $\vct{x}\in \mathbb{R}^{n+1}$}.\]
Thus, the tangential gradient of $\mtx{f}$ at a point $\vct{x}\in \mathbb{S}^{n}$ can be computed as
\[\nabla_{\mathbb{S}^n}\mtx{f}(\vct{x}) = (\mathrm{Proj}_{\vct{x}} \otimes \mathbf{I})\nabla_{\mathbb{R}^{n+1}} \mtx{f}(\vct{x}) = 2\sum_{i=1}^{n+1}(x_i\mathbf{e}_i - x_i^2\vct{x})\otimes \mtx{A}_i.\]
By the expression \eqref{eqn:gamma_Riemannian_alternative} of the carr{\'e} du champ operator, we can compute that
\begin{align*}
\Gamma(\mtx{f})(\vct{x}) &= 4\sum_{i,j=1}^{n+1}(x_i\mathbf{e}_i - x_i^2\vct{x})^\mathsf{T}(x_j\mathbf{e}_j - x_j^2\vct{x})\cdot \mtx{A}_i\mtx{A}_j = 4\sum_{i=1}^{n+1}x_i^2\mtx{A}_i^2 - 4\sum_{i,j=1}^{n+1}x_i^2x_j^2\,\mtx{A}_i\mtx{A}_j\\
&= 4\sum_{i,j=1}^{n+1}x_i^2x_j^2\mtx{A}_i^2 - 4\sum_{i,j=1}^{n+1}x_i^2x_j^2\,\mtx{A}_i\mtx{A}_j = 2\sum_{i,j=1}^{n+1}x_i^2x_j^2(\mtx{A}_i - \mtx{A}_j)^2.
\end{align*}
This establishes the formula \eqref{eqn:gamma_sphere_II}.
Using this result, we can obtain some bounds for the variance proxy.
First, introduce the maximum norm difference $a:= \max_{i,j}\norm{\smash{\mtx{A}_i-\mtx{A}_j}}$.
Then the carr{\'e} du champ satisfies
\[\Gamma(\mtx{f})(\vct{x}) \preccurlyeq 2\sum_{i,j=1}^{n+1}x_i^2x_j^2\norm{\smash{\mtx{A}_i-\mtx{A}_j}}^2\cdot \mathbf{I}_d \preccurlyeq 2a^2\sum_{i,j=1}^{n+1}x_i^2x_j^2\cdot \mathbf{I}_d = 2a^2\mathbf{I}_d.\]
Thus, $v_{\mtx{f}} \leq 2 a^2$.
Here is an alternative approach. For an arbitrary matrix $\mtx{B} \in \mathbb{H}_d$,
we can write
\begin{align*}
\Gamma(\mtx{f})(\vct{x}) &= 2\sum_{i,j=1}^{n+1}x_i^2x_j^2(\mtx{A}_i - \mtx{B} + \mtx{B} - \mtx{A}_j)^2\\
&= 4\sum_{i=1}^{n+1}x_i^2(\mtx{A}_i - \mtx{B})^2 - 4\left(\sum_{i=1}^{n+1}x_i^2\mtx{A}_i - \mtx{B}\right)^2\\
&\preccurlyeq 4\sum_{i=1}^{n+1}x_i^2(\mtx{A}_i - \mtx{B})^2.
\end{align*}
Defining $b:=\min_{\mtx{B}\in \mathbb{H}_d} \max_i \norm{\mtx{A}_i-\mtx{B}}$,
we see that the variance proxy $v_{\mtx{f}} \leq 4 b^2$.
Modulo an extra factor of two, the second bound represents
a qualitative improvement over the first.
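Both bounds rest on the centering identity displayed above. A quick numerical check (our illustration, with random symmetric matrices standing in for the $\mtx{A}_i$ and an arbitrary center $\mtx{B}$) is:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 4
A = rng.standard_normal((n + 1, d, d))
A = (A + A.transpose(0, 2, 1)) / 2           # real symmetric stand-ins for A_i
B = rng.standard_normal((d, d))
B = (B + B.T) / 2                            # arbitrary center
x = rng.standard_normal(n + 1)
x /= np.linalg.norm(x)

# Gamma(f)(x) = 2 sum_{ij} x_i^2 x_j^2 (A_i - A_j)^2
G = 2*sum(x[i]**2 * x[j]**2 * (A[i] - A[j]) @ (A[i] - A[j])
          for i in range(n + 1) for j in range(n + 1))

# Centered form: 4 sum_i x_i^2 (A_i - B)^2 - 4 (sum_i x_i^2 A_i - B)^2
M = np.einsum('i,ijk->jk', x**2, A) - B
G_cent = 4*sum(x[i]**2 * (A[i] - B) @ (A[i] - B) for i in range(n + 1)) - 4*(M @ M)

# The first bound: ||Gamma|| <= 2 a^2 with a = max_{ij} ||A_i - A_j||
amax = max(np.linalg.norm(A[i] - A[j], 2)
           for i in range(n + 1) for j in range(n + 1))
```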
\subsubsection{Example~\ref{example:SO_d}: Special orthogonal group}
Let $(\mtx{A}_1, \dots, \mtx{A}_n) \subset \mathbb{H}_d(\mathbbm{R})$ be fixed, real, symmetric matrices.
Draw $(\mtx{O}_1, \dots, \mtx{O}_n) \subset \mathrm{SO}(d)$ independently and uniformly from
the Haar measure on the special orthogonal group $\mathrm{SO}(d)$. Consider the random matrix
$$
\mtx{f}(\mtx{O}_1, \dots, \mtx{O}_n) = \sum_{i=1}^n \mtx{O}_i \mtx{A}_i \mtx{O}_i^\mathsf{T}.
$$
To study this random matrix model, we will use local geodesic/normal coordinates
on the product manifold $\mathrm{SO}(d)^{\otimes n}$ to compute the carr{\'e} du champ;
for example, see~\cite[Sec.~5]{lee2018introduction} and \cite[Sec.~3]{hall2015lie}.
Since $\mathrm{SO}(d)^{\otimes n}$ is a Lie group, we only need to consider the
geodesic frame of the tangent space at the identity element $(\mathbf{I}_d, \dots, \mathbf{I}_d)$.
For each $1\leq k<l\leq d$, let $\mtx{S}_{kl}\in \mathbb{M}_d$ be the unit skew-symmetric matrix:
\[(\mtx{S}_{kl})_{kl} = 1/\sqrt{2}\quad\text{and}\quad (\mtx{S}_{kl})_{lk} = -1/\sqrt{2}\quad \text{and other entries of $\mtx{S}_{kl}$ are zero}.\]
Define the tangent vectors
\[\mtx{V}_{kl}^i = \underbrace{(\mtx{0},\dots,\mtx{S}_{kl},\dots,\mtx{0})}_{\text{The $i$th coordinate is $\mtx{S}_{kl}$}} \quad \text{for $i=1,\dots,n$ and $1\leq k<l\leq d$}.\]
Then $( \mtx{V}_{kl}^i : 1\leq i\leq n \text{ and } 1\leq k<l\leq d )$ forms an orthonormal basis for the tangent space at the identity element of the Lie group $\mathrm{SO}(d)^{\otimes n}$, with respect to the Hilbert--Schmidt inner product:
\[\langle (\mtx{P}_1,\dots,\mtx{P}_n), (\mtx{Q}_1,\dots,\mtx{Q}_n)\rangle_\mathrm{HS} = \sum_{i=1}^n \operatorname{tr}[\mtx{P}_i^*\mtx{Q}_i]\quad \text{for $\mtx{P}_1,\dots,\mtx{P}_n,\mtx{Q}_1,\dots,\mtx{Q}_n\in\mathbb{M}_d$}.\]
This basis $\{\mtx{V}_{kl}^i\}_{1\leq i\leq n,1\leq k<l\leq d}$ can be translated to an orthonormal basis of the tangent space at another point $(\mtx{O}_1, \dots, \mtx{O}_n)$ by the group operation: $(\mtx{0},\dots,\mtx{S}_{kl},\dots,\mtx{0})\mapsto (\mtx{0},\dots,\mtx{S}_{kl}\mtx{O}_i,\dots,\mtx{0})$.
Now, for each $(\mtx{O}_1, \dots, \mtx{O}_n)\in\mathrm{SO}(d)^{\otimes n}$, consider the local geodesic map corresponding to the direction $\mtx{V}_{kl}^i$:
\[(\mtx{O}_1, \dots, \mtx{O}_i, \dots, \mtx{O}_n) \mapsto (\mtx{O}_1, \dots, \mathrm{e}^{\varepsilon\mtx{S}_{kl}}\mtx{O}_i, \dots, \mtx{O}_n)\quad \text{for some small $\varepsilon\geq0$}.\]
Then the directional derivative of $\mtx{f}$ in local geodesic coordinates, evaluated at the point $(\mtx{O}_1,\dots,\mtx{O}_n)$ where $\varepsilon = 0$, is given by
\[\frac{\partial \mtx{f}}{ \partial \mtx{V}_{kl}^i}(\mtx{O}_1, \dots, \mtx{O}_n) = \mtx{S}_{kl}\mtx{O}_i\mtx{A}_i\mtx{O}_i^\mathsf{T} - \mtx{O}_i\mtx{A}_i\mtx{O}_i^\mathsf{T} \mtx{S}_{kl} =: \mtx{S}_{kl}\mtx{B}_i - \mtx{B}_i\mtx{S}_{kl},\]
where $\mtx{B}_i := \mtx{O}_i\mtx{A}_i\mtx{O}_i^\mathsf{T}$. In local geodesic coordinates, the co-metric tensor $\mathfrak{g}$ at the origin equals the identity. Using the formula \eqref{eqn:gamma_Riemannian}, we can compute the carr{\'e} du champ as
\begin{align*}
\Gamma(\mtx{f})(\mtx{O}_1, \dots, \mtx{O}_n) &=\sum_{i=1}^n\sum_{1\leq k<l\leq d} \left(\frac{\partial \mtx{f}}{ \partial \mtx{V}_{kl}^i}\right)^2 = \sum_{i=1}^n\sum_{1\leq k<l\leq d}\left(\mtx{S}_{kl}\mtx{B}_i - \mtx{B}_i\mtx{S}_{kl}\right)^2\\
&= \sum_{i=1}^n\sum_{1\leq k<l\leq d}\left(-\mtx{S}_{kl}\mtx{B}_i^2\mtx{S}_{kl} - \mtx{B}_i\mtx{S}_{kl}^2\mtx{B}_i + \mtx{S}_{kl}\mtx{B}_i\mtx{S}_{kl}\mtx{B}_i + \mtx{B}_i\mtx{S}_{kl}\mtx{B}_i\mtx{S}_{kl}\right).
\end{align*}
It is not hard to check that, for any real matrix $\mtx{M}\in \mathbb{M}_d(\mathbb{R})$,
\[\sum_{1\leq k<l\leq d} \mtx{S}_{kl}\mtx{M}\mtx{S}_{kl} = -\frac{1}{2}(\operatorname{tr}[\mtx{M}] \cdot \mathbf{I}_d - \mtx{M}^\mathsf{T}).\]
Therefore, we can obtain that
\begin{align*}
\Gamma(\mtx{f})(\mtx{O}_1, \dots, \mtx{O}_n) &= \frac{1}{2}\sum_{i=1}^n\left( \operatorname{tr}[\mtx{B}_i^2] \cdot \mathbf{I}_d - \mtx{B}_i^2 + (d-1)\mtx{B}_i^2 - (\operatorname{tr}[\mtx{B}_i]\cdot \mathbf{I}_d - \mtx{B}_i)\mtx{B}_i - \mtx{B}_i(\operatorname{tr}[\mtx{B}_i]\cdot \mathbf{I}_d - \mtx{B}_i) \right) \\
&= \frac{1}{2}\sum_{i=1}^n\left( \operatorname{tr}[\mtx{B}_i^2]\cdot \mathbf{I}_d + d\cdot \mtx{B}_i^2 - 2\operatorname{tr}[\mtx{B}_i]\cdot \mtx{B}_i\right)\\
&= \frac{1}{2}\sum_{i=1}^n\mtx{O}_i\left( \operatorname{tr}[\mtx{A}_i^2]\cdot \mathbf{I}_d + d\cdot \mtx{A}_i^2 - 2\operatorname{tr}[\mtx{A}_i]\cdot \mtx{A}_i\right)\mtx{O}_i^\mathsf{T}\\
&= \frac{1}{2}\sum_{i=1}^n\mtx{O}_i\left\{ \left(\operatorname{tr}[\mtx{A}_i^2]-\frac{\operatorname{tr}[\mtx{A}_i]^2}{d}\right)\cdot \mathbf{I}_d + d\cdot \left(\mtx{A}_i - \frac{\operatorname{tr}[\mtx{A}_i]}{d}\cdot \mathbf{I}_d \right)^2\right\}\mtx{O}_i^\mathsf{T}.
\end{align*}
This justifies the formula \eqref{eqn:gamma_SO_d}. Since each $\mtx{O}_i$ is an orthogonal matrix,
the variance proxy satisfies
\begin{align*}
v_{\mtx{f}} &=
\max\nolimits_{\mtx{O}_i} \norm{\Gamma(\mtx{f})(\mtx{O}_1, \dots, \mtx{O}_n)} \\
&\leq \frac{1}{2}\sum_{i=1}^n\norm{ \left(\operatorname{tr}[\mtx{A}_i^2]-d^{-1}\operatorname{tr}[\mtx{A}_i]^2\right)\cdot \mathbf{I}_d + d\cdot \left(\mtx{A}_i - d^{-1}\operatorname{tr}[\mtx{A}_i]\cdot \mathbf{I}_d \right)^2 }\\
&= \frac{1}{2}\sum_{i=1}^n \left( \operatorname{tr}[\mtx{A}_i^2]-d^{-1}\operatorname{tr}[\mtx{A}_i]^2 + d\cdot \norm{\mtx{A}_i - d^{-1}\operatorname{tr}[\mtx{A}_i]\cdot \mathbf{I}_d }^2 \right).
\end{align*}
Note that this bound is sharp because we can always choose a particular point $(\mtx{O}_1, \dots, \mtx{O}_n)$ that achieves equality.
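The trace identity for $\sum_{k<l}\mtx{S}_{kl}\mtx{M}\mtx{S}_{kl}$ and the closed form for the carr{\'e} du champ can both be verified numerically. In the sketch below (our illustration), a random matrix $M$ tests the identity, and a single random symmetric matrix $B$ plays the role of one summand $\mtx{B}_i = \mtx{O}_i\mtx{A}_i\mtx{O}_i^\mathsf{T}$.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
M = rng.standard_normal((d, d))

def S(k, l):
    # unit skew-symmetric matrix S_kl
    out = np.zeros((d, d))
    out[k, l], out[l, k] = 1/np.sqrt(2), -1/np.sqrt(2)
    return out

pairs = [(k, l) for k in range(d) for l in range(k + 1, d)]

# Identity: sum_{k<l} S_kl M S_kl = -(tr[M] I - M^T)/2
lhs = sum(S(k, l) @ M @ S(k, l) for k, l in pairs)
rhs = -0.5*(np.trace(M)*np.eye(d) - M.T)

# One summand of Gamma: directional derivatives vs closed form
B = (M + M.T)/2
G_dir = sum((S(k, l) @ B - B @ S(k, l)) @ (S(k, l) @ B - B @ S(k, l))
            for k, l in pairs)
G_closed = 0.5*(np.trace(B @ B)*np.eye(d) + d*(B @ B) - 2*np.trace(B)*B)
```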
\subsection{Matrix concentration results}\label{sec:concentration_results_Riemannian}
Finally, we provide a proof of Theorem~\ref{thm:riemann-simple} from Theorem~\ref{thm:polynomial_moment} and Theorem~\ref{thm:exponential_concentration}.
Consider a compact $n$-dimensional Riemannian submanifold $M$ of a Euclidean space. The uniform measure $\mu$ on $M$ is the stationary measure of the associated Brownian motion on $M$. As discussed in Section~\ref{sec:BE_Riemannian}, the Brownian motion satisfies a Bakry--\'Emery criterion with constant $c=\rho^{-1}$ if the eigenvalues of the Ricci curvature tensor are bounded below by $\rho$. We then apply Theorem~\ref{thm:polynomial_moment} and Theorem~\ref{thm:exponential_concentration} with $c=\rho^{-1}$ to obtain the matrix concentration inequalities in Theorem~\ref{thm:riemann-simple}.
For any point $x\in M$, we can compute the carr\'e du champ $\Gamma(\mtx{f})(x)$ in local normal coordinates centered at $x$. In this case, the co-metric tensor $\mathfrak{g}$ is the identity matrix $\mathbf{I}_n$ when evaluated at $x$. The expression of the variance proxy $v_{\mtx{f}}$ in Theorem~\ref{thm:riemann-simple} then follows from formula \eqref{eqn:gamma_Riemannian} of the carr\'e du champ operator.
% Source: ``Nonlinear Matrix Concentration via Semigroup Methods'', arXiv:2006.16562 (math.PR).
% Source: ``Curves orthogonal to a vector field in Euclidean spaces'', arXiv:1908.02834.
\title{Curves orthogonal to a vector field in Euclidean spaces}
\begin{abstract}
A curve is rectifying if it lies on a moving hyperplane orthogonal to its curvature vector. In this work, we extend the main result of [Chen 2017, Tamkang J. Math. 48, 209] to any space dimension: we prove that rectifying curves are geodesics on hypercones. We later use this association to characterize rectifying curves that are also slant helices in three-dimensional space as geodesics of circular cones. In addition, we consider curves that lie on a moving hyperplane normal to (i) one of the normal vector fields of the Frenet frame and to (ii) a rotation minimizing vector field along the curve. The former class is characterized in terms of the constancy of a certain vector field normal to the curve, while the latter contains spherical and plane curves. Finally, we establish a formal mapping between rectifying curves in an $(m + 2)$-dimensional space and spherical curves in an $(m + 1)$-dimensional space.
\end{abstract}
\section{Introduction}
In Euclidean space we may ask ``When does the position vector of a regular curve always lie orthogonal to a vector field?''. In other words, the problem consists in characterizing the curves $\alpha:I\to\mathbb{E}^{m+2}$ for which $\langle\alpha-p,\mathbf{V}\rangle=0$ on $I$, where $p$ is a fixed point and $\mathbf{V}$ is a vector field along $\alpha$. Naturally, the answer will greatly depend on the properties of $\mathbf{V}$. For example, if $\alpha$ is a \emph{normal curve} (here $\mathbf{V}=\alpha'$), then the curve is spherical. On the other hand, if $\alpha$ is an \emph{osculating curve} (here $\mathbf{V}$ is the multinormal vector field, i.e., the last Frenet vector field, from which we define the torsion \cite{Kuhnel2015}), then $\alpha$ is a hyperplane curve. In the 2000s, Chen introduced the notion of a \emph{rectifying curve} in the three-dimensional (3d) Euclidean space by imposing that $\alpha$ always lies in its rectifying plane \cite{ChenMonthly2003}, i.e., in the plane spanned by the tangent and binormal vectors. Rectifying curves have remarkable properties \cite{ChenBIMAS2005} and, in addition, they can be characterized as geodesics on a cone \cite{ChenTJM2017}. The notion of a rectifying curve may be extended to higher dimensional Euclidean spaces \cite{CambieTJM2016,IlarslanTJM2008} by requiring $\alpha$ to lie in the (moving) hyperplane normal to its \emph{curvature vector} $\mathbf{k}=\kappa\frac{\mathbf{T}'}{\Vert\mathbf{T}'\Vert}$, where $\mathbf{T}=\frac{\alpha'}{\Vert\alpha'\Vert}$ is the unit tangent and $\kappa$ is the \emph{curvature function} of $\alpha$.
Naturally, we can also consider curves orthogonal to one of the remaining vector fields of the Frenet frame (a problem originally proposed by Cambie \emph{et al.} \cite{CambieTJM2016}) or, more generally, curves orthogonal to vector fields coming from frames distinct from Frenet, such as the so-called rotation minimizing (RM) frames \cite{BishopMonthly1975} (a problem investigated in 3d space by the first named author \cite{DaSilvaArXiv2017}). In addition, an equation relating the curvatures and torsion characterizing these special classes of curves has been obtained for rectifying curves, first in dimensions 3 and 4, \cite{ChenMonthly2003,KimHMJ1993} and \cite{IlarslanTJM2008}, respectively, and later generalized to any dimension \cite{CambieTJM2016}.
In this work, we extend the main result of Chen in Ref. \cite{ChenTJM2017} to any space dimension: we prove that rectifying curves are geodesics on the hypersurface of higher dimensional cones (Theorem \ref{ThrRecCurvAsConeGeod}). We later use this relation with cones to present a characterization of curves that are simultaneously rectifying and slant helices, i.e., curves whose curvature vector makes a constant angle with a fixed direction (a problem raised in 3d space by Deshmukh \textit{et al.} \cite{AlghanemiFilomat2019}). Indeed, we show that in dimension 3 these curves are characterized as geodesics of circular cones (Theorem \ref{thr::SlantAndRectfCurves}). In higher dimensions, we show that geodesics of circular hypercones are slant helices (Corollary \ref{cor::SlantRectAsGeodCircCone}). Additionally, we consider curves that lie on a moving hyperplane normal to the $j$-th vector field of the Frenet frame and characterize them in terms of the constancy of a certain vector field normal to the curve, namely, the projection of the curve on the hyperplane spanned by the $(j+1)$-th, $(j+2)$-th, $\dots$, and multinormal vector fields of the Frenet frame (Theorem \ref{Theo::CharjRectCurves}). Later, by investigating the behavior of the coordinates of the curve with respect to a given orthonormal moving frame, we establish a formal mapping between spherical and rectifying curves (Theorem \ref{TheoMapSpherCurveInRectCurve}). Finally, we characterize spherical and plane curves as those curves whose position vector lies orthogonal to a rotation minimizing normal vector field (Theorem \ref{ThrCharRMrectifying}).
The remainder of this work is organized as follows. In Section \ref{secRectifyingCurves}, we study rectifying curves in Euclidean spaces. In Section \ref{sect::RectCurvAndSlntHlx}, we characterize those rectifying curves that are also slant helices. In Section \ref{sectJ-rect}, we investigate curves normal to a Frenet vector field. In Section \ref{sectMapSphrclAndRectfyngCrv}, we establish a map between spherical and rectifying curves and, finally, in Section \ref{secRMrectifyingCurves}, we consider curves normal to a rotation minimizing vector field.
\section{Rectifying curves in Euclidean spaces}
\label{secRectifyingCurves}
In this section, we generalize the main result of Chen \cite{ChenTJM2017} and show that rectifying curves in $\mathbb{E}^{m+2}$ are geodesics in the hypersurface of cones. This characterization follows
as a consequence of Theorem \ref{thr::CharRectCurvesUsingTangentialComponent}, which is a generalization of Theorems 1 and 2 of \cite{ChenMonthly2003}. Such extensions already appeared in \cite{IlarslanTJM2008} for dimension 4 and in \cite{CambieTJM2016} for any dimension. The attentive reader will notice that our proofs are similar to those of \cite{CambieTJM2016,ChenMonthly2003,IlarslanTJM2008}, but we included them here for the sake of completeness.
Let $\alpha:I\to \mathbb{E}^{m+2}$ be a regular $C^2$ curve parameterized by the arc-length $s$, i.e., for all $s\in I$, $\langle\mathbf{T}(s),\mathbf{T}(s)\rangle=1$, where $\mathbf{T}(s)=\alpha'(s)$. We say that a $C^2$ regular curve $\alpha$ is \emph{rectifying} with vertex $p$ if $\langle\alpha(s)-p,\mathbf{k}(s)\rangle=0,$ where $\mathbf{k}(s)=\alpha''(s)$ is the curvature vector of $\alpha$ and $p$ is constant.
\begin{theorem}\label{thr::CharRectCurvesUsingTangentialComponent}
The following conditions are equivalent:
\begin{enumerate}
\item The curve $\alpha(s)$ is rectifying.
\item There exist constants $b$ and $c\in\mathbb{R}$ such that $\langle\alpha(s)-p,\mathbf{T}(s)\rangle=s+b$ and $\rho(s)\equiv\Vert\alpha(s)-p\Vert=\sqrt{s^2+2bs+c}$.
\item There exist a reparameterization $t=t(s)$ and a unit velocity spherical curve $\beta:J\to\mathbb{S}^{m+1}(p,1)$ such that
$$
\alpha(t)=(a\sec t)\beta(t),
$$
where $a\in\mathbb{R}$ is a positive constant. (Notice that $t$ is the arc-length of $\beta$.)
\item The normal component of $\alpha(s)-p$ has constant length and $\rho(s)$ is a non-constant function.
\end{enumerate}
\end{theorem}
\begin{proof}
$(1)\Leftrightarrow(2)$: Taking the derivative of $\langle\alpha(s)-p,\mathbf{T}(s)\rangle$ and using the definition of rectifying curves give
\begin{equation}
\langle\alpha(s)-p,\mathbf{T}(s)\rangle' = \langle\mathbf{T}(s),\mathbf{T}(s)\rangle+\langle\alpha(s)-p,\mathbf{k}(s)\rangle=1.
\end{equation}
Thus, we conclude that $\langle\alpha(s)-p,\mathbf{T}(s)\rangle=s+b$ for some constant $b$. In addition,
\begin{equation}
(\rho^2)'(s)=2\langle\alpha(s)-p,\mathbf{T}(s)\rangle=2s+2b\Rightarrow \exists\,c\in\mathbb{R},\, \rho^2(s)=s^2+2bs+c.
\end{equation}
Conversely, if $\langle\alpha(s)-p,\alpha(s)-p\rangle=s^2+2bs+c$, then taking the derivative twice gives $1+\langle\alpha(s)-p,\mathbf{T}'(s)\rangle=1$, which implies $\langle\alpha(s)-p,\mathbf{T}'(s)\rangle=0$, i.e., $\alpha$ is a rectifying curve.
\newline
$(2)\Leftrightarrow(3)$: First, we write $\rho^2=(s+b)^2+a^2$, where $a^2=c-b^2>0$ (notice $\rho^2>0$). Translating $s$, we may simply write $\rho^2=s^2+a^2$ and $\langle\alpha-p,\mathbf{T}\rangle=s$. Let us define the spherical curve $\beta(s)=\frac{1}{\rho(s)}(\alpha(s)-p)$. Then,
\begin{equation}
\alpha(s)-p=\sqrt{s^2+a^2}\,\beta(s)\Rightarrow \mathbf{T}(s)=\frac{s}{\sqrt{s^2+a^2}}\beta(s)+\sqrt{s^2+a^2}\,\beta'(s).\label{eqDefSphrCurvFromRectifyinCurv}
\end{equation}
Since $\langle\beta,\beta'\rangle=0$, we deduce that $\Vert\beta'(s)\Vert=\frac{a}{s^2+a^2}$. The arc-length, $t$, of $\beta$ is
\begin{equation}
t = \int_0^s\frac{a}{u^2+a^2}\mathrm{d} u = \arctan\left(\frac{s}{a}\right)\Rightarrow s = a\tan t.
\end{equation}
Finally, substitution in Eq. \eqref{eqDefSphrCurvFromRectifyinCurv} leads to the desired result: $\alpha(t)-p=(a\sec t)\beta(t)$.
Conversely, if $\alpha(t)-p=(a\sec t)\beta(t)$, then $\alpha'(t)=(a\sec t)[\tan(t)\beta(t)+\beta'(t)]$. The arc-length parameter of $\alpha$ is $s=\int\Vert\alpha'(t)\Vert\mathrm{d} t=a\int\sec^2t\,\mathrm{d} t=a\tan t$. Finally, $\rho^2(s)=\langle\alpha(s)-p,\alpha(s)-p\rangle=a^2\sec^2t=s^2+a^2$, which gives (2).
\newline
$(1)\Rightarrow(4)$: We can assume that (2) and (3) are valid [they are a consequence of (1)], from which follows that $\rho^2=\langle \alpha(t)-p,\alpha(t)-p\rangle=s^2+a^2=a^2\sec^2t$, $\langle\alpha-p,\mathbf{T}\rangle=s=a\tan t$, and
\begin{equation}
\alpha'(t)=(a\sec t)[\tan(t)\beta(t)+\beta'(t)]\Rightarrow \Vert \alpha'(t)\Vert=a\sec^2t.
\end{equation}
The normal component $\alpha^N$ of $\alpha(t)-p=(a\sec t)\beta(t)$ is
\begin{equation}
\alpha^N(t)=(\alpha(t)-p)-\frac{\langle\alpha(t)-p,\alpha'(t)\rangle}{\Vert\alpha'(t)\Vert^2}\,\alpha'(t),
\end{equation}
which finally implies
\begin{eqnarray}
\langle\alpha^N(t),\alpha^N(t)\rangle &=& \langle\alpha(t)-p,\alpha(t)-p\rangle-\frac{\langle\alpha(t)-p,\alpha'(t)\rangle^2}{\Vert\alpha'(t)\Vert^2} \nonumber\\
&=& \langle\alpha(t)-p,\alpha(t)-p\rangle-\langle\alpha(t)-p,\mathbf{T}(t)\rangle^2 \nonumber\\
& = & a^2\sec^2t-a^2\tan^2t=a^2.
\end{eqnarray}
$(4)\Rightarrow(1)$: Writing $\alpha-p=\langle\alpha-p,\mathbf{T}\rangle\,\mathbf{T}+\alpha^N$, where $\langle\alpha^N,\mathbf{T}\rangle=0$, and $C=\langle\alpha^N,\alpha^N\rangle$ constant, it follows
\begin{equation}
\langle\alpha-p,\alpha-p\rangle = \langle\alpha-p,\mathbf{T}\rangle^2+\langle\alpha^N,\alpha^N\rangle=\langle\alpha-p,\mathbf{T}\rangle^2+C.
\end{equation}
Taking the derivative,
\begin{equation}
2\langle\alpha-p,\mathbf{T}\rangle = 2\langle\alpha-p,\mathbf{T}\rangle\Big(\langle \mathbf{T},\mathbf{T}\rangle+\langle\alpha-p,\mathbf{T}'\rangle\Big)\Rightarrow 1=1+\langle\alpha-p,\mathbf{k}\rangle,
\end{equation}
where we used that $\rho$ being non-constant implies $\langle\alpha-p,\mathbf{T}\rangle\not=0$. Finally, we deduce that $\langle\alpha-p,\mathbf{T}'\rangle=0$, i.e., $\alpha$ is a rectifying curve.
\end{proof}
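The equivalences of Theorem \ref{thr::CharRectCurvesUsingTangentialComponent} can be illustrated numerically. In the sketch below (our addition; the constants $a$ and $r$ are arbitrary choices), $\beta$ is a unit-speed latitude circle on $\mathbb{S}^2$, the vertex is $p=0$, and finite differences check that $\alpha(s)=\sqrt{s^2+a^2}\,\beta(\arctan(s/a))$ is unit speed, satisfies $\langle\alpha,\mathbf{T}\rangle=s$, has normal component of constant length $a$, and is rectifying.

```python
import numpy as np

a, r = 1.5, 0.6          # arbitrary cone parameter and circle radius (assumptions)

def alpha(s):
    # alpha(s) = sqrt(s^2 + a^2) * beta(t(s)) with t = arctan(s/a) and beta a
    # unit-speed latitude circle on the unit sphere S^2; the vertex is p = 0
    t = np.arctan(s / a)
    beta = np.array([r*np.cos(t/r), r*np.sin(t/r), np.sqrt(1 - r**2)])
    return np.sqrt(s**2 + a**2) * beta

h = 1e-4
checks = []
for s0 in (-1.0, 0.3, 2.0):
    T = (alpha(s0 + h) - alpha(s0 - h)) / (2*h)                # tangent T(s)
    k = (alpha(s0 + h) - 2*alpha(s0) + alpha(s0 - h)) / h**2   # curvature vector k(s)
    checks.append((np.linalg.norm(T),                          # should be 1 (unit speed)
                   alpha(s0) @ k,                              # should be 0 (rectifying)
                   alpha(s0) @ T - s0,                         # condition (2) with b = 0
                   alpha(s0) @ alpha(s0) - (alpha(s0) @ T)**2 - a**2))  # |alpha^N|^2 = a^2
```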
A cone hypersurface $\mathcal{C}^{m+1}(p)$ in $\mathbb{E}^{m+2}$ with vertex at $p$ can be parameterized in terms of a spherical submanifold as
\begin{equation}\label{eq::SphericalRepOfCones}
C_{\beta}(t_1,\dots,t_{m},u) = p + u\,(\beta(t_1,\dots,t_{m})-p),\mbox{ where }\beta(t_1,\dots,t_{m})\in\mathbb{S}^{m+1}(p,1).
\end{equation}
For a fixed $\mathbf{t}_0=(t_1,\dots,t_{m})$, the straight lines $u\mapsto C_{\beta}(\mathbf{t}_0,u)$ are geodesics of the cone; these are the so-called \emph{rulings}. If $\beta$ parameterizes a great sphere, i.e., the intersection of $\mathbb{S}^{m+1}(p,1)$ with a hyperplane passing through $p$, the corresponding cone is just a hyperplane, whose geodesics are all straight lines. Thus, in the following we assume that this is not the case. The next theorem characterizes the remaining geodesics on the hypersurface of a cone as rectifying curves and generalizes the main result in Ref. \cite{ChenTJM2017}.
\begin{theorem}\label{ThrRecCurvAsConeGeod}
A regular $C^2$ curve $\alpha:I\to\mathbb{E}^{m+2}$ is rectifying with vertex $p$ if and only if it is a geodesic on a cone $\mathcal{C}^{m+1}(p)$ which is not a ruling.
\end{theorem}
\begin{proof}
Let $\alpha(t)=u(t)\beta(t_1(t),\dots,t_{m}(t))\equiv u(t)\beta(t)$ be a geodesic on $\mathcal{C}^{m+1}(p)$ with $\beta(t)\in\mathbb{S}^{m+1}(p,1)$ a unit speed curve. (Notice that $\alpha$ is not a ruling.) We have $\alpha'(t)=u'(t)\beta(t)+u(t)\beta'(t)$ and, therefore, the length functional of $\alpha$, which is a function of $t,u$, and $u'$ only, is given by
$L(t,u,u') =\int E\,\mathrm{d} t= \int\sqrt{u^2+u'^2}\,\mathrm{d} t$. The corresponding Euler-Lagrange equation is
\begin{equation}
\frac{\partial E}{\partial u}-\frac{\mathrm{d}}{\mathrm{d} t}\frac{\partial E}{\partial u'}=0\Rightarrow uu''-2u'\,^2-u^2=0.
\end{equation}
The general solution is of the form $u(t)=a\sec(t+b)$ for some constants $a,b\in\mathbb{R}$. Indeed, defining $v(u)=\mathrm{d} u/\mathrm{d} t$ leads to $uv(u)v'(u)-2v(u)^2-u^2=0$ and, dividing by $u/2$, $2v(u)v'(u)-4v(u)^2/u=2u$. We may now define $w=v^2$ and, therefore, $w'(u)-4w(u)/u=2u$. Multiplying this equation by $\mu=1/u^4$ (integrating factor), we have $(w/u^4)'(u)=2/u^3=-(1/u^2)'$. Then, there exists a constant $c$, such that $c^2=w/u^4+1/u^2\Rightarrow c^2\,u^4=v^2+u^2=u'(t)^2+u^2$ or, equivalently, $(c\,u)^4=(c\,u')^2+(c\,u)^2$, whose general solution is a secant function. Then, every geodesic of a cone is a rectifying curve.
Conversely, from Theorem \ref{thr::CharRectCurvesUsingTangentialComponent}, it follows that a rectifying curve can be written as $\alpha(t)=a\sec(t)\beta(t)$, where $\beta:I\to \mathbb{S}^{m+1}(p,1)$ is a unit speed curve. Using the reasoning above, $\alpha$ is a geodesic of the 2-cone $\Sigma^2:(u,s)\mapsto p+u(\alpha(s)-p)$. Thus, $\alpha''(s)$ is orthogonal to $T_{\alpha(s)}\Sigma^2$. Now, let $V_2(s),\dots,V_{m}(s)$ be unit vector fields orthogonal to $\alpha''$ and $\Sigma^2$. We may use these vector fields to build a hypercone such that $\alpha$ is a geodesic in it. Indeed, define
\begin{equation}
\mathcal{C}^{m+1}(p):(u,s_1,\dots,s_m)\mapsto p+u\,[\alpha(s_1)+\sum_{i=2}^ms_iV_i(s_1)-p].
\end{equation}
Notice that $\alpha(s)$ has coordinates $(u,s_1,\dots,s_m)=(1,s,0,\dots,0)$. Then, the tangent vectors along $\alpha$ are
\begin{equation}
\left\{
\begin{array}{ccl}
\partial_u\vert_{\alpha} &=& (\alpha(s_1)+\sum_{j=2}^{m}s_jV_{j}(s_1)-p)\vert_{\alpha(s)}=\alpha(s)-p \\[4pt]
\partial_{s_1}\vert_{\alpha} &=& (u\alpha'(s_1)+u\sum_{j=2}^{m}s_jV_{j}'(s_1))\vert_{\alpha}=\alpha'(s) \\[4pt]
\partial_{s_j}\vert_{\alpha} &=& V_{j}(s),\,j\in\{2,\dots,m\}\\
\end{array}
\right..
\end{equation}
By construction, $\alpha''$ is orthogonal to all $\partial_{s_j}\vert_{\alpha}$ and, since $\alpha$ is rectifying and parameterized by arc-length, we also have that $\alpha''$ is orthogonal to $\partial_{u}\vert_{\alpha}$ and $\partial_{s_1}\vert_{\alpha}$. Therefore, $\alpha''$ is parallel to the normal of $\mathcal{C}^{m+1}(p)$ and, consequently, $\alpha$ is a geodesic.
\end{proof}
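The secant solution of the Euler--Lagrange equation $uu''-2u'\,^2-u^2=0$ obtained in the proof can be verified symbolically (our illustration, using \texttt{sympy}):

```python
import sympy as sp

t, a, b = sp.symbols('t a b', positive=True)
u = a / sp.cos(t + b)                        # candidate solution u(t) = a*sec(t + b)
ode = u*sp.diff(u, t, 2) - 2*sp.diff(u, t)**2 - u**2
residual = sp.simplify(ode)                  # should vanish identically
```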
We now provide an alternative proof for the characterization of geodesics on a cone. The strategy of the proof is similar to that found in the study of rectifying curves in the 3d sphere and hyperbolic space \cite{LucasJMAA2015,LucasMJM2016}.
\begin{proof}[Alternative proof of Theorem \ref{ThrRecCurvAsConeGeod}.] Given $\alpha:I\to\mathcal{C}^{m+1}(p)$, it follows by definition of a cone that the straight line $X_s(u)=p+u(\alpha(s)-p)$ is a curve in $\mathcal{C}^{m+1}(p)$ for every $s\in I$. Consequently, the velocity vector $X_s'(u)=\alpha(s)-p$ is tangent to $\mathcal{C}^{m+1}(p)$. Thus, if $\alpha$ is a geodesic, we must have $\langle\alpha'',\alpha-p\rangle=0$, i.e., $\alpha$ is a rectifying curve.
Conversely, let $\alpha$ be a rectifying curve centered at $p$. Now, consider the 2-cone $\Sigma^2:X(u,s)=p+u(\alpha(s)-p)$, $u\not=0$ and $s\in I$. By hypothesis, $\partial_uX=\alpha-p$ is orthogonal to $\alpha''$. On the other hand, since $\langle\alpha',\alpha'\rangle=1\Rightarrow\langle\alpha',\alpha''\rangle=0$ and $\alpha'=\frac{1}{u}\partial_sX$, we have $\langle\partial_sX,\alpha''\rangle=0$. Thus, we conclude that $\alpha''$ is normal to $\Sigma^2$, $\alpha''\in\Gamma(N\Sigma^2)$. Therefore, $\alpha$ is a geodesic of the 2-cone $\Sigma^2$. To conclude the proof, i.e., show that $\alpha$ is a geodesic of a hypercone, we may employ the same strategy used in the end of the first proof of Theorem \ref{ThrRecCurvAsConeGeod}.
\end{proof}
\begin{remark}
A careful examination of the proofs of Theorem \ref{ThrRecCurvAsConeGeod} reveals that the cone containing a rectifying curve may not be unique. (Uniqueness is only assured for 2-cones.) Then, it would be interesting to ask whether the cones sharing a common rectifying curve have any special geometric property.
\end{remark}
\section{Rectifying curves and slant helices}
\label{sect::RectCurvAndSlntHlx}
A curve is a \emph{slant helix} if its curvature vector makes a constant angle with a fixed direction \cite{IzumiyaTJM2004}. In this section, we are interested in characterizing those rectifying curves that are also slant helices. We mention that a characterization of rectifying slant helices in terms of their curvature and torsion was established by Altunkaya \emph{et al.} \cite{AltunkayaKJM2016}. (This problem has been recently raised also by Deshmukh \textit{et al.} \cite{AlghanemiFilomat2019}.) Here, we are going to provide a geometric answer to this question in terms of the spherical curve associated with the cone that contains a given rectifying curve as a geodesic. The strategy will consist in taking into account that \emph{constant angle surfaces} (also known as \emph{helix surfaces}), i.e., surfaces whose unit normal makes a constant angle with a fixed direction, are characterized by the fact that their geodesics are slant helices \cite{LucasBBMS2016}. Then, we may characterize curves that are simultaneously rectifying and slant helices by determining the cones of constant angle (Proposition \ref{prop::HelixConesAreCircular}), which finally leads to the characterization of rectifying slant helices as geodesics of circular cones (Theorem \ref{thr::SlantAndRectfCurves}). In higher dimensions, we show that geodesics of circular hypercones are slant helices (Corollary \ref{cor::SlantRectAsGeodCircCone}), while the converse remains an open problem.
\begin{proposition}\label{prop::HelixConesAreCircular}
A cone $\mathcal{C}_{\beta}^2(p)\subset\mathbb{E}^3$ makes a constant angle with a fixed direction if and only if it is circular. In addition, the fixed direction coincides with the axis of the circular cone.
\end{proposition}
\begin{proof}
The unit normal of $\mathcal{C}_{\beta}^2$ along $\beta$ is the normal to $\beta$ with respect to the unit sphere, namely $\xi\vert_{\beta}=\beta\times\beta'$. On the remaining points of $\mathcal{C}_{\beta}^2$, the normal is obtained through parallel transport along the rulings. In addition, the Frenet equations of $\beta$ on the sphere are $\nabla_{\beta'}\beta'=\kappa_g\xi$ and $\xi'=-\kappa_g\beta'$, where $\nabla_{\beta'}\beta'\equiv\mbox{Proj}_{T_{\beta}\mathbb{S}^2}(\beta'')=\beta''+\beta$ and $\kappa_g$ is the geodesic curvature of $\beta$ with respect to the unit sphere.
The spherical curve associated with a circular cone is a small circle described by the equation $\langle\beta,\mathbf{d}\rangle=\mbox{const.}$, which gives $\langle\beta',\mathbf{d}\rangle=0$. Then, taking the derivative of $f(t)=\langle\xi,\mathbf{d}\rangle$ leads to $f'=-\kappa_g\langle\beta',\mathbf{d}\rangle=0$. Therefore, $f=\mbox{const.}$ and, consequently, a circular cone makes a constant angle with a fixed direction.
Conversely, assume that the cone $\mathcal{C}_{\beta}^2$ makes a constant angle with the fixed direction $\mathbf{d}$, $\langle\xi,\mathbf{d}\rangle=\mbox{constant}$. We have $0=\langle\xi',\mathbf{d}\rangle=-\kappa_g\langle\beta',\mathbf{d}\rangle$. If $\kappa_g=0$, then $\beta$ is a great circle and $\mathcal{C}_{\beta}^2$ is a plane, which is a helix surface. On the other hand, if $\kappa_g\not=0$, we deduce that $\langle\beta',\mathbf{d}\rangle=0\Rightarrow \langle\beta,\mathbf{d}\rangle=\mbox{const.}$ and, therefore, $\beta$ is a small circle and $\mathcal{C}_{\beta}^2$ is a circular cone.
\end{proof}
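As a quick numerical sanity check of the forward direction of Proposition \ref{prop::HelixConesAreCircular}, the sketch below evaluates the cone normal $\xi=\beta\times\beta'$ along a small circle on the unit sphere (centered at the origin, so $p=0$) and verifies that its angle with the axis $\mathbf{d}=e_3$ is constant; the polar angle $\theta=0.4$ is an arbitrary choice.

```python
import numpy as np

theta = 0.4                                   # polar angle of the small circle (arbitrary)
t = np.linspace(0.0, 2*np.pi, 9)
# small circle on the unit sphere centered at the origin: <beta, e3> = cos(theta)
beta = np.stack([np.sin(theta)*np.cos(t),
                 np.sin(theta)*np.sin(t),
                 np.cos(theta)*np.ones_like(t)])
dbeta = np.stack([-np.sin(theta)*np.sin(t),
                  np.sin(theta)*np.cos(t),
                  np.zeros_like(t)])
xi = np.cross(beta.T, dbeta.T).T              # xi = beta x beta'
xi /= np.linalg.norm(xi, axis=0)              # unit normal of the cone along beta
angle = xi[2]                                 # <xi, d> for the axis d = e3
assert np.allclose(angle, np.sin(theta))      # constant angle with the axis
```

Here the unit normal works out to $(-\cos\theta\cos t,-\cos\theta\sin t,\sin\theta)$, so the constant angle function equals $\sin\theta$.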
\begin{theorem}\label{thr::SlantAndRectfCurves}
A rectifying curve $\alpha:I\to\mathbb{E}^3$ is a slant helix if and only if it is the geodesic of a circular cone.
\end{theorem}
\begin{proof}
The principal normal of a rectifying curve coincides with the cone normal since it is a geodesic. Thus, if the associated cone is circular, the curve will make a constant angle with a fixed direction, the axis of the cone. Therefore, any circular rectifying curve is a slant helix.
Conversely, if $\alpha$ is a rectifying slant helix, then the unit normal of the cone $\mathcal{C}^2(p)$ containing $\alpha$ has to make a constant angle with a fixed direction along $\alpha$. For cones, the unit normal is parallel transported along the rulings and, consequently, the portion of $\mathcal{C}^2(p)$ given by $r(u,s) = p + u (\alpha(s)-p)$, $u \in [0,1]$,
should be a circular cone according to Proposition \ref{prop::HelixConesAreCircular}. Thus, a rectifying curve which is also a slant helix has to be circular rectifying and, in addition, the fixed direction is nothing but the axis of the corresponding circular cone.
\end{proof}
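Both conclusions of Theorem \ref{thr::SlantAndRectfCurves} can be probed numerically. Assuming the standard unrolling isometry of a circular cone onto the plane, the sketch below builds a geodesic as the image of a straight line in the unrolled plane (the half-angle $\theta=1/3$ and the line $r=1/\cos\psi$ are arbitrary choices, with apex $p$ at the origin) and checks that the curve is rectifying, $\langle\alpha-p,\alpha''\rangle=0$, and that its principal normal makes a constant angle with the cone axis.

```python
import numpy as np
import sympy as sp

th = sp.Rational(1, 3)             # cone half-angle (arbitrary choice)
psi = sp.symbols('psi')
r = 1 / sp.cos(psi)                # a straight line in the unrolled plane
phi = psi / sp.sin(th)             # unrolling isometry: plane angle -> cone azimuth
alpha = r * sp.Matrix([sp.sin(th)*sp.cos(phi), sp.sin(th)*sp.sin(phi), sp.cos(th)])
a1, a2 = alpha.diff(psi), alpha.diff(psi, 2)
v2 = a1.dot(a1)
kvec = (a2 - (a2.dot(a1) / v2) * a1) / v2     # alpha'' w.r.t. arc length
f_a = sp.lambdify(psi, alpha, 'numpy')
f_k = sp.lambdify(psi, kvec, 'numpy')
angles = []
for pv in np.linspace(-1.0, 1.0, 7):
    al = np.asarray(f_a(pv), float).ravel()
    kv = np.asarray(f_k(pv), float).ravel()
    assert abs(al @ kv) < 1e-9                # rectifying: <alpha - p, alpha''> = 0
    angles.append(kv[2] / np.linalg.norm(kv)) # cosine of angle between N and the axis
assert np.ptp(angles) < 1e-9                  # slant helix: constant angle
```

The curvature vector points along the inward cone normal, so the constant value recovered is $\sin\theta$.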
In \cite{LucasBBMS2016} it is noted that the geodesics of circular cones provide examples of rectifying slant helices. We just showed that this is a characteristic property. Now, we address the same problem in $\mathbb{E}^{m+2}$. Mimicking Eq. (\ref{eq::SphericalRepOfCones}), we can define $n$-cones by taking $\beta$ as the parameterization of an $(n-1)$-dimensional submanifold of the unit sphere $\mathbb{S}^{m+1}(p,1)$. An $n$-cone is said to be \emph{circular} if $\langle\beta,\mathbf{d}\rangle=\mbox{const.}$ for some fixed vector $\mathbf{d}$.
\begin{proposition}
A circular hypercone $\mathcal{C}_{\beta}^{m+1}(p)\subset\mathbb{E}^{m+2}$ makes a constant angle with a fixed direction. In addition, the fixed direction coincides with the axis of the circular cone.
\end{proposition}
\begin{proof}
First, notice that the unit normal $\xi$ of $\mathcal{C}_{\beta}^{m+1}$ along $\beta(t_1,\dots,t_m)$ has to be the normal to $\beta$ with respect to the unit sphere. On the remaining points of $\mathcal{C}_{\beta}^{m+1}$, the normal is obtained through parallel transport along the rulings in $\mathbb{E}^{m+2}$. In addition, if $\nabla$ is the covariant differentiation in $\mathbb{S}^{m+1}(p,1)$, given at a point $q$ of the sphere by $(\nabla_XY)(q)=\frac{\partial Y}{\partial X}\vert_q+\langle X,Y\rangle\,(q-p)$, we may conclude that $\nabla_{\partial_i}\xi=\frac{\partial \xi}{\partial t_i}$, where $\partial_i$ is the velocity vector of the $i$-th coordinate curve $t_i\mapsto\beta(t_1,\dots,t_i,\dots,t_m)$.
The spherical submanifold associated with a circular hypercone is a small sphere described by an equation $\langle\beta,\mathbf{d}\rangle=\mbox{const.}$, which gives $\langle\partial\beta/\partial t_i,\mathbf{d}\rangle=0$ for all $i\in\{1,\dots,m\}$. Taking the derivative of the angle function $f(t_1,\dots,t_m)=\langle\xi,\mathbf{d}\rangle$ leads to $\partial f/\partial t_i=\langle\nabla_{\partial_i}\xi,\mathbf{d}\rangle=0$, where we used that $\nabla_{\partial_i}\xi$ has to be a tangent vector to $\beta$ in $\mathbb{S}^{m+1}$. Thus, $f=\mbox{const.}$ and, therefore, a circular hypercone makes a constant angle with a fixed direction.
\end{proof}
\begin{corollary}\label{cor::SlantRectAsGeodCircCone}
A circular rectifying curve, i.e., a geodesic of a circular hypercone, is also a slant helix.
\end{corollary}
In higher dimensions, the converse of the above result is subtler. If a rectifying curve $\alpha$ is also a slant helix, then $\alpha''$ coincides with the hypercone normal $\xi$ and, in addition, it is straightforward to conclude that $\xi$ makes a constant angle with a fixed direction along the 2-cone $\Sigma_{\alpha}^2:(u,s)\mapsto p+u(\alpha(s)-p)$. (Notice that any hypercone containing $\alpha$ should necessarily contain $\Sigma_{\alpha}^2$.) Then, the challenge to establish a converse is to show that it is possible to find a hypercone $\mathcal{C}^{m+1}_{\beta}$ whose associated spherical submanifold $\beta$ is a small sphere.
\section{Curves normal with respect to a Frenet vector field}
\label{sectJ-rect}
It is known that rectifying curves can be characterized in terms of the constancy of the length of their normal component \cite{CambieTJM2016,ChenMonthly2003} [see Theorem \ref{thr::CharRectCurvesUsingTangentialComponent}, item (4)]. The problem of characterizing curves normal to one of the Frenet vectors was first proposed by Cambie \emph{et al.} \cite{CambieTJM2016}: here we shall call a curve \emph{$j$-rectifying} if its position vector is orthogonal to the $j$-th Frenet vector field. In this section we provide a characterization for $j$-rectifying curves in terms of the constancy of a certain normal component (Theorem \ref{Theo::CharjRectCurves}), which then generalizes the characterization of rectifying curves, or 1-rectifying in our notation. First, we need some preliminary results.
Let $\alpha:I\to \mathbb{E}^{m+2}$ be a regular curve parameterized by arc-length. We say that $\alpha$ is a \emph{twisted curve} if it is of class $C^{m+2}$ and $\{\alpha'(s),\alpha''(s),\dots,\alpha^{(m+2)}(s)\}$ is linearly independent for all $s\in I$ \cite{Kuhnel2015}. We may associate with a twisted curve its Frenet frame $\{\mathbf{T},\mathbf{N}_1,\dots,\mathbf{N}_m,\mathbf{B}\}$ whose equations of motion in $\mathbb{E}^{m+2}$ are
\begin{equation}
\left\{
\begin{array}{ccc}
\mathbf{T}' & = & \kappa_0 \mathbf{N}_1 \\
\mathbf{N}_i' & = & -\kappa_{i-1} \mathbf{N}_{i-1} + \kappa_i\mathbf{N}_{i+1} \\
\mathbf{B}' & = & - \kappa_{m}\mathbf{N}_{m}\\
\end{array}
\right.,\,i\in\{1,\dots,m\},
\end{equation}
where $\mathbf{N}_0=\mathbf{T}$ is the unit tangent whose derivative gives the curvature function $\kappa=\kappa_0$ and $\mathbf{N}_{m+1}=\mathbf{B}$ is the \emph{multinormal vector} whose derivative gives the torsion $\tau=\kappa_{m}$. In analogy to what happens in dimension 3, a hyperplane curve is characterized by $\tau\equiv0$. Moreover, if $\alpha$ is twisted, then $\kappa_i\not=0$ and $\tau\not=0$.
\begin{definition}
We say that $\alpha$ is a \emph{$j$-rectifying curve}, $j\in\{0,\dots,m+1\}$, when
\begin{equation}
\forall\,s\in I,\,\langle\alpha(s)-p,\mathbf{N}_j(s)\rangle=0\Rightarrow \alpha-p=\sum_{i=0,\, i\not=j}^{m+1}A_i(s)\mathbf{N}_i(s),
\end{equation}
where $A_{i}(s) = \langle\alpha(s)-p, \mathbf{N}_{i}(s) \rangle$.
\end{definition}
Notice that for $j=0,1$, and $m+1$ we have normal, rectifying, and osculating curves, respectively. Thus, it remains to investigate the cases where $j\in\{2,\dots,m\}$.
\begin{lemma}\label{lemmaFrenetSystemForCoord}
Let $\alpha$ be \emph{any} $C^{2}$ regular curve and $\{\mathbf{V}_0=\mathbf{T},\mathbf{V}_1,\dots,\mathbf{V}_{m+1}\}$ be \emph{any} orthonormal moving frame along $\alpha$ whose equations of motion are
$$\mathbf{V}_i'(s)=\sum_{j=0}^{m+1}k_{ij}(s)\mathbf{V}_j(s),\,\mbox{ where }k_{ij}=-k_{ji}.$$
If we write $\alpha(s)-p=\sum_{i=0}^{m+1}A_i(s)\mathbf{V}_i(s)$,
then the coordinate functions $\{A_i\}_{i=0}^{m+1}$ satisfy the system of equations
\begin{equation}\label{eqODEsForCoordWithRespetMovFrame}
A_{0}'(s) = 1+ \sum_{j=0}^{m+1} k_{0j}(s) A_{j}(s) \mbox{ and }
A_{i}'(s) = \sum_{j=0}^{m+1} k_{ij}(s) A_{j}(s),
\end{equation}
where $i\in \{1,\dots,m,m+1\}$. In addition, the derivative of the distance function $\rho=\Vert \alpha-p\Vert$ and the tangential coordinate, $A_0$, are related by $(\rho^2)'(s)=2A_0(s)$.
\end{lemma}
\begin{proof}
For $i=0$, we have $A_{0}'=1+\langle\alpha-p,\sum_{j=0}^{m+1} k_{0j}\mathbf{V}_j\rangle=1+ \sum_{j=0}^{m+1} k_{0j} A_{j}.$ Now, for $i\in\{1,\dots,m+1\}$, we have $A_{i}'=0+\langle\alpha-p,\sum_{j=0}^{m+1}k_{ij}(s)\mathbf{V}_j\rangle=\sum_{j=0}^{m+1} k_{ij} A_{j}$. In short, the coordinate functions satisfy the system of equations \eqref{eqODEsForCoordWithRespetMovFrame}.
Now, let us investigate $\rho$. First, notice that $k_{ij}=-k_{ji}$ follows as a result of the orthonormality of $\{\mathbf{V}_i\}$. In addition, noting that $\rho^2=\sum_{i=0}^{m+1}A_i^2$, we finally have
\begin{eqnarray}
(\rho^2)' & = & 2A_0A_0'+2\sum_{i=1}^{m+1}A_iA_i'= 2A_0(1+\sum_{i=0}^{m+1} k_{0j} A_{j})+2\sum_{i=1,j=0}^{m+1}k_{ij}A_iA_{j}\nonumber\\
& = & 2A_0+2\sum_{i=0,j=0}^{m+1}k_{ij}A_iA_{j}= 2A_0+2\sum_{i<j}(k_{ij}+k_{ji})A_iA_{j}=2A_0.
\end{eqnarray}
\end{proof}
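The identity $(\rho^2)'=2A_0$ in Lemma \ref{lemmaFrenetSystemForCoord} is frame-independent, since $A_0=\langle\alpha-p,\mathbf{T}\rangle$, and is easy to test numerically. A minimal sketch with a unit-speed helix (the helix parameters and the base point $p$ are arbitrary choices):

```python
import numpy as np

a, b = 1.0, 0.5
c = np.hypot(a, b)
s = np.linspace(0.0, 10.0, 2001)
# unit-speed helix alpha(s); parameters and base point p are arbitrary choices
alpha = np.stack([a*np.cos(s/c), a*np.sin(s/c), b*s/c], axis=1)
p = np.array([0.3, -0.2, 0.1])
rho2 = ((alpha - p)**2).sum(axis=1)
T = np.gradient(alpha, s, axis=0)               # tangent; |T| = 1 up to O(h^2)
A0 = ((alpha - p) * T).sum(axis=1)              # tangential coordinate A_0
lhs = np.gradient(rho2, s)
assert np.max(np.abs(lhs - 2*A0)[2:-2]) < 1e-3  # (rho^2)' = 2 A_0
```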
Equipping a curve with its Frenet frame and, in addition, taking into account that a curve is $j$-rectifying when its $j$-th coordinate function $A_j$ vanishes, the following result holds.
\begin{corollary}\label{Cor::FrenetSystemForCoord}
Let $\alpha$ be \emph{any} $C^{m+2}$ regular curve and $\{A_i\}_{i=0}^{m+1}$ be the coordinate functions with respect to the Frenet frame $\{\mathbf{N}_i\}_{i=0}^{m+1}$. Then, the coefficients $\{A_i\}$ satisfy the Frenet-like system of equations
\begin{equation}\label{eqODEsForCoordjRectifying}
\left\{
\begin{array}{ccc}
A_{0}'(s) & = & 1+ \kappa(s) A_{1}(s) \\[3pt]
A_{i}'(s) & = & -\kappa_{i-1}(s)A_{i-1}(s)+\kappa_{i}(s)A_{i+1}(s) \\[3pt]
A_{m+1}'(s) & = & -\tau(s) A_{m}(s)\\
\end{array}
\right.,\,i\in \{1,\dots,m\}.
\end{equation}
Moreover, if $\alpha$ is a $j$-rectifying curve, then we have the additional equations
\begin{equation}
A_{j-1}'=-\kappa_{j-2}A_{j-2},\,A_{j+1}'=\kappa_{j+1}A_{j+2},\mbox{ and }-\kappa_{j-1}A_{j-1}+\kappa_{j}A_{j+1}=0.
\end{equation}
\end{corollary}
\begin{lemma}\label{LemNoJandJplus1RectCurv}
Let $\alpha:I\to\mathbb{E}^{m+2}$ be a regular twisted curve and $\{A_i\}_{i=0}^{m+1}$ be the coordinate functions with respect to its Frenet frame. Then, $\alpha$ cannot be simultaneously a $j$- and a $(j+1)$-rectifying curve for any $j$.
\end{lemma}
\begin{proof}
Assume that $\alpha$ is both $j$- and $(j+1)$-rectifying for some $j$. Then, it follows that $0=A_j'=-\kappa_{j-1}A_{j-1}+\kappa_{j}A_{j+1}$. Now, since $\alpha$ is also $(j+1)$-rectifying and $\kappa_{j-1}\not=0$ ($\alpha$ twisted), we have $A_{j-1}=0$. Thus, $\alpha$ is also $(j-1)$-rectifying. By recursion, we would deduce that $\alpha$ is 1-, 2-,$\dots$, and $(j-1)$-rectifying. (Notice that $A_0'=1\Rightarrow A_0=s+b$.) Analogously, from $A_{j+1}=0$, we also have $0=-\kappa_{j}A_{j}+\kappa_{j+1}A_{j+2}=\kappa_{j+1}A_{j+2}$ and, consequently, $A_{j+2}=0$. In short, if $\alpha$ were $j$- and $(j+1)$-rectifying for some $j$, we would deduce that all $A_i$ vanish except for $A_0$, which implies $\alpha=p+(s+b)\mathbf{T}$. Thus $\alpha$ would be a straight line, which is not twisted.
\end{proof}
Now, we provide a proof for the main theorem of this section characterizing $j$-rectifying curves, which should be thought of as the generalization of item (4) of Theorem \ref{thr::CharRectCurvesUsingTangentialComponent} to this new context.
\begin{theorem}\label{Theo::CharjRectCurves}
Let $\alpha:I\to\mathbb{E}^{m+2}$ be a regular curve of class $C^{m+2}$. Then, $\alpha$ is $j$-rectifying if and only if the normal vector field $$\alpha^{N_j}\equiv\sum_{i=j+1}^{m+1}\langle\alpha-p,\mathbf{N}_i\rangle\mathbf{N}_i$$ has constant length.
\end{theorem}
\begin{proof}
Let $\alpha$ be $j$-rectifying, i.e., $A_j=0$. Since $\rho_{j}^2\equiv\langle\alpha^{N_j},\alpha^{N_j}\rangle=\sum_{i=j+1}^{m+1}A_i^2$, taking the derivative gives
$$
(\rho_{j}^2)' = 2\sum_{i=j+1}^{m}(-\kappa_{i-1}A_{i-1}A_i+\kappa_iA_iA_{i+1})+2A_{m+1}A_{m+1}' = 2(-\kappa_{j}A_{j}A_{j+1}+\tau A_mA_{m+1})-2\tau A_{m}A_{m+1}=0.
$$
Therefore, $\alpha^{N_j}$ has constant length.
Conversely, let $\rho_{j}$ be constant. We may assume, without loss of generality, that $A_{j+1}\not\equiv0$, otherwise $\rho_{j}=\rho_{j+1}$ and we can exchange $j$ and $j+1$ (see Lemma \ref{LemNoJandJplus1RectCurv}). We can write $\alpha-p$ as
\begin{equation}
\alpha(s)-p=\sum_{i=0}^jA_i\mathbf{N}_i+\alpha^{N_j}\Rightarrow \rho^2 = \sum_{i=0}^jA_i^2+\rho_{j}^2.
\end{equation}
Taking the derivative, and using Corollary \ref{Cor::FrenetSystemForCoord},
\begin{eqnarray}
(\rho^2)' & = & 2\sum_{i=0}^jA_iA_i'+0= 2A_0(1+\kappa A_1)+2\sum_{i=1}^j(-\kappa_{i-1}A_{i-1}A_i+\kappa_{i}A_iA_{i+1})\nonumber\\
& = & 2A_0 +2\kappa A_0A_1+2(-\kappa A_0A_1+\kappa_jA_jA_{j+1}) = 2A_0+2\kappa_jA_jA_{j+1}.
\end{eqnarray}
Since $(\rho^2)'=2A_0$ (Lemma \ref{lemmaFrenetSystemForCoord}), it follows that $\kappa_jA_jA_{j+1}=0$ and, consequently, $A_j=0$. In other words, $\alpha$ is a $j$-rectifying curve.
\end{proof}
\begin{remark}
The definition of $j$-rectifying curves only requires a $C^{j+1}$ condition since the Frenet frame is defined in such a way that $V_j\equiv\mbox{span} \{\mathbf{T},\mathbf{N}_1,\dots,\mathbf{N}_j\}=\mbox{span}\{\alpha',\alpha'',\dots,\alpha^{(j+1)}\}$ \cite{Kuhnel2015}. Once we equip a $j$-rectifying curve with the first $j+1$ Frenet vectors, we could later choose any set of $m-j+1$ orthonormal vector fields spanning $V_j^{\perp}$ to complete a frame along $\alpha$ and then provide a proof entirely analogous to the above.
\end{remark}
\section{A formal correspondence between spherical and rectifying curves}
\label{sectMapSphrclAndRectfyngCrv}
In $\mathbb{E}^3$, in addition to the characterization of rectifying curves in terms of $\Vert\alpha^N\Vert=\mbox{constant}$, Chen showed that $\alpha$ is rectifying if and only if $\frac{\tau}{(s+b)\kappa}=\frac{1}{a}$ for some constants $a$ and $b$ \cite{ChenMonthly2003}. Item (iv) of Chen's Theorem 1 \cite{ChenMonthly2003} was later extended to higher-dimensional spaces, now involving the remaining curvature functions, in Theorem 4.1 of \cite{CambieTJM2016}; see also \cite{IlarslanTJM2008} for a proof in $4d$. In this section, we show that these equations for the curvatures and torsion of a rectifying curve in $\mathbb{E}^{m+2}$ allow us to establish a correspondence with spherical curves in $\mathbb{E}^{m+1}$ (see Theorem \ref{TheoMapSpherCurveInRectCurve}). To illustrate this, consider $3d$ rectifying curves. If we denote $\kappa_0=\kappa$ and $\kappa_1=\tau$, then we can establish the following correspondence between circles in $\mathbb{E}^2$ and rectifying curves in $\mathbb{E}^3$ given by
\begin{equation}
\kappa \leftrightarrow \frac{\kappa_1}{(s+b)\kappa_0}=\frac{\tau}{(s+b)\kappa}=\frac{1}{a}.
\end{equation}
Analogously, rectifying curves in $\mathbb{E}^4$
are characterized by the equation \cite{CambieTJM2016,IlarslanTJM2008} (notice that our notation differs from theirs: our $\kappa_0$, $\kappa_1$, and $\tau$ correspond to their $\kappa_1$, $\kappa_2$, and $\kappa_3$, respectively)
\begin{equation}\label{eq::DEqFor4dRectCurves}
\frac{(s+b)\kappa_0}{\kappa_1}\tau+\frac{\mathrm{d}}{\mathrm{d} s}\left\{\frac{1}{\tau}\frac{\mathrm{d}}{\mathrm{d} s}\left[\frac{(s+b)\kappa_0}{\kappa_1}\right]\right\}=0.
\end{equation}
Consequently, we may establish a formal correspondence between spherical curves in $\mathbb{E}^3$ and rectifying curves in $\mathbb{E}^4$ given by
\begin{equation}
(\kappa,\tau) \leftrightarrow \left(\frac{\kappa_1}{(s+b)\kappa_0},\tau\right).
\end{equation}
Indeed, spherical curves in $\mathbb{E}^3$ are characterized by $\frac{\tau}{\kappa}+[\frac{1}{\tau}(\frac{1}{\kappa})']'=0$ \cite{DaSilvaMJM2018,Kuhnel2015}, which is equivalent to Eq. (\ref{eq::DEqFor4dRectCurves}) under the correspondence above. The next theorem states that this is a general feature of rectifying curves.
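The cited $3d$ spherical characterization can be probed symbolically. Assuming the standard non-arc-length formulas for $\kappa$ and $\tau$, the sketch below takes a hypothetical wavy latitude curve on the unit sphere and checks that $\frac{\tau}{\kappa}+[\frac{1}{\tau}(\frac{1}{\kappa})']'$ vanishes at sample points, where primes denote arc-length derivatives:

```python
import sympy as sp

t = sp.symbols('t')
# a sample curve on the unit sphere; the polar angle psi(t) is a hypothetical choice
psi = sp.Rational(1, 2) + sp.sin(t)/5
beta = sp.Matrix([sp.sin(psi)*sp.cos(t), sp.sin(psi)*sp.sin(t), sp.cos(psi)])
d1, d2, d3 = beta.diff(t), beta.diff(t, 2), beta.diff(t, 3)
speed = sp.sqrt(d1.dot(d1))                  # ds/dt
cr = d1.cross(d2)
kappa = sp.sqrt(cr.dot(cr)) / speed**3       # curvature (any parameterization)
tau = d1.dot(d2.cross(d3)) / cr.dot(cr)      # torsion (any parameterization)
Ds = lambda f: f.diff(t) / speed             # derivative w.r.t. arc length
expr = tau/kappa + Ds(Ds(1/kappa)/tau)       # should vanish for spherical curves
for tv in (sp.Rational(1, 3), 1, sp.Rational(3, 2)):
    assert abs(float(expr.evalf(30, subs={t: tv}))) < 1e-8
```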
\begin{theorem}\label{TheoMapSpherCurveInRectCurve}
Let $(\{k_i\}_{i=0}^{m-1},t)$ and $(\{\kappa_i\}_{i=0}^{m},\tau)$ denote the curvatures and torsion of regular curves in $\mathbb{E}^{m+1}$ and $\mathbb{E}^{m+2}$, respectively. Then, the correspondence
\begin{equation}
(k_0,k_1,\dots,k_{m-1},t)\leftrightarrow\left(\frac{\kappa_1}{(s+b)\kappa_0},\kappa_2,\dots,\kappa_m,\tau\right)
\end{equation}
formally maps spherical curves in $\mathbb{E}^{m+1}$ into rectifying curves in $\mathbb{E}^{m+2}$ and vice versa. In addition, if $\{C_i\}$ and $\{A_i\}$ are respectively the coordinate functions of regular spherical and rectifying curves in $\mathbb{E}^{m+1}$ and $\mathbb{E}^{m+2}$ with respect to their Frenet frames, then $(C_0,C_1)=(0,-\frac{1}{k_0})$ and $(A_0,A_1,A_2)=(s+b,0,\frac{(s+b)\kappa_0}{\kappa_1})$ and the remaining coordinate functions are related by
\begin{equation}
(C_2,\dots,C_{m})\leftrightarrow (A_3,\dots,A_{m+1}).
\end{equation}
\end{theorem}
\begin{proof}
Let $\alpha$ be a spherical curve in $\mathbb{S}^{m}(r)\subset\mathbb{E}^{m+1}$ with coordinate functions $\{C_i\}$, curvatures $k_i$, and torsion $t$. Since spherical curves can be seen as normal curves, we have $C_0=0$ and, therefore, according to Corollary \ref{Cor::FrenetSystemForCoord}, the remaining coordinates satisfy the system of equations
\begin{equation}
\left\{
\begin{array}{ccc}
0 & = & 1+ k C_{1} \\
C_{1}' & = & k_{1}C_{2} \\
C_{i}' & = & -k_{i-1}C_{i-1}+k_{i}C_{i+1} \\
C_{m}' & = & -t\, C_{m-1}\\
\end{array}
\right.,\,i\in \{2,\dots,m-1\}.
\end{equation}
On the other hand, the coordinate functions $\{A_i\}$ of a rectifying curve in $\mathbb{E}^{m+2}$ satisfy
\begin{equation}
\left\{
\begin{array}{ccc}
A_{0}' & = & 1 \\
0 & = & -\kappa A_{0}+\kappa_{1}A_{2} \\
A_{2}' & = & \kappa_{2} A_{3} \\
A_{i}' & = & -\kappa_{i-1}A_{i-1}+\kappa_{i}A_{i+1} \\
A_{m+1}' & = & -\tau A_{m}\\
\end{array}
\right.,\,i\in \{3,\dots,m\}.
\end{equation}
Comparing these two systems, we see that under the correspondences $(C_2,\dots,C_{m})\leftrightarrow (A_3,\dots,A_{m+1})$ and $(k,k_1,\dots,k_{m-1},t)\leftrightarrow\left(\frac{\kappa_1}{(s+b)\kappa},\kappa_2,\dots,\kappa_m,\tau\right)$, it is possible to establish a formal map between spherical and rectifying curves. Finally, the remaining coordinate function of the spherical curve is $C_1=-\frac{1}{k},$
while the two remaining coordinate functions of the rectifying curve are
$A_0=s+b$ and $A_2=\frac{\kappa}{\kappa_1}A_0=\frac{(s+b)\kappa}{\kappa_1}$.
\end{proof}
\begin{remark}
It is possible to write a single differential equation relating curvatures and torsion to characterize rectifying curves \cite{CambieTJM2016}. Under the correspondence given by the theorem above, we may then write a single differential equation characterizing spherical curves as well. Such an equation then generalizes the characterization of spherical curves in $\mathbb{E}^4$ and $\mathbb{E}^5$ given by the first named author and da Silva \cite{DaSilvaMJM2018} (see their Remark 2).
\end{remark}
Finally, there also exists a formal correspondence between $j$-rectifying curves in $\mathbb{E}^{m+2}$ and curves in $\mathbb{E}^{\,j}\times\mathbb{S}^{m-j}(r)$ for some $r>0$. Indeed, let $\alpha$ be a $j$-rectifying curve; its coordinate functions with respect to its Frenet frame satisfy the equations
$$
\left\{
\begin{array}{ccc}
A_{0}' & = & 1 + \kappa A_1 \\
A_i' & = & -\kappa_{i-1}A_{i-1}+\kappa_iA_{i+1}\\
A_{j-1}' & = & -\kappa_{j-2}A_{j-2}\\
\end{array}
\right., 1\leq i\leq j-2,
\left\{
\begin{array}{ccc}
A_{j+1}' & = & \kappa_{j+1} A_{j+2} \\
A_{k}' & = & -\kappa_{k-1}A_{k-1}+\kappa_{k}A_{k+1} \\
A_{m+1}' & = & -\tau A_{m}\\
\end{array}
\right.,
j+2\leq k\leq m,
$$
and $0 = -\kappa_{j-1}A_{j-1}+ \kappa_{j}A_{j+1}$.
The first $j$ functions $(A_0,A_1,\dots,A_{j-1})$ behave like the coordinates of a generic twisted curve in $\mathbb{E}^{\,j}$ with torsion $\kappa_{j-2}$, while the remaining coordinate functions together with $A_{0}=s+b$, i.e., $(s+b,A_{j+1},\dots,A_{m+1})$, behave like the coordinates of a rectifying curve in $\mathbb{E}^{m-j+2}$, which can be associated with a spherical curve in $\mathbb{E}^{m-j+1}$ according to Theorem \ref{TheoMapSpherCurveInRectCurve}.
\section{Curves normal with respect to a rotation minimizing vector field}
\label{secRMrectifyingCurves}
In addition to the Frenet frame, we may equip a regular curve with the so-called rotation minimizing frames: we say that a unit $C^1$ vector field $\mathbf{V}$, normal to $\alpha'$, is \emph{rotation minimizing} (RM) if $\mathbf{V}'(s)$ is parallel to $\mathbf{T}(s)$ \cite{BishopMonthly1975}, i.e., if it is parallel transported with respect to the normal connection of the curve. We now consider curves that always lie orthogonal to a rotation minimizing frame, a problem originally considered in 3d \cite{DaSilvaArXiv2017}, and show that this condition leads to plane and spherical curves. We say that a regular $C^2$ curve $\alpha$ is \emph{normal with respect to an RM vector field} $\mathbf{V}$ if $\langle\alpha(s)-p,\mathbf{V}(s)\rangle=0$, where $p$ is constant.
\begin{theorem}\label{ThrCharRMrectifying}
A curve is normal with respect to an RM field if and only if it is either a hyperplane or a spherical curve.
\end{theorem}
\begin{proof}
If $\alpha$ is normal with respect to an RM field $\mathbf{V}$, we extend it to an RM frame $\{\mathbf{T},\mathbf{V}_1,\dots,\mathbf{V}_m,\mathbf{V}\}$ along $\alpha$ and write $\alpha(s)-p = A(s)\mathbf{T}(s)+A_1(s)\mathbf{V}_1(s)+\cdots+A_m(s)\mathbf{V}_m(s)$. Taking the derivative gives
\begin{equation}
\mathbf{T} = (A'-\sum_{i=1}^mA_i\kappa_i)\,\mathbf{T}+\sum_{i=1}^m(A_i'+A\kappa_i)\,\mathbf{V}_i+A\lambda\mathbf{V}.\label{EqTangOfRMrectifying}
\end{equation}
From the coordinate of $\mathbf{V}$, we deduce that $A\lambda=0$. If $\lambda=0$, then $\mathbf{V}$ is constant and, consequently, $\alpha-p$ lies in a hyperplane orthogonal to $\mathbf{V}$. On the other hand, if $A=0$, then $\alpha$ is a normal curve, i.e., $\alpha$ is spherical: $\langle\alpha-p,\alpha-p\rangle=R^2\Leftrightarrow\langle\alpha-p,\mathbf{T}\rangle=0$. (Notice that the $\{A_i\}$ are all constant: from the coordinate of $\mathbf{V}_i$ in Eq. \eqref{EqTangOfRMrectifying}, $A_i'=0$; and they are related to the radius of the sphere by $R^2=\langle\alpha-p,\alpha-p \rangle=\sum_{i=1}^mA_i^2$.)
Conversely, if $\alpha$ is spherical, $\alpha:I\to\mathbb{S}^{m+1}(p,R)$, the normal to the sphere, $\xi=\frac{1}{R}(\alpha-p)$, is an RM vector field. We may equip $\alpha$ with an RM frame $\{\mathbf{T},\mathbf{V}_1,\dots,\mathbf{V}_m,\xi\}$. Noticing that each $\mathbf{V}_i$ has to be tangent to the sphere, we deduce that $\alpha-p$ is normal to an RM vector field. The same reasoning applies to a hyperplane curve and the vector field normal to the plane.
\end{proof}
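The key computational step in the converse direction, that the sphere normal $\xi=\frac{1}{R}(\alpha-p)$ is rotation minimizing along any spherical curve, follows from $\xi'=\frac{1}{R}\mathbf{T}$, so its component normal to the curve vanishes. A minimal numerical sketch with a tilted great circle (radius, center, and tilt are arbitrary choices):

```python
import numpy as np

R, p = 2.0, np.array([1.0, -1.0, 0.5])       # sphere radius and center (arbitrary)
s = np.linspace(0.0, 3.0, 1501)
# a unit-speed tilted great circle on the sphere of radius R centered at p
alpha = p + R*np.stack([np.cos(s/R),
                        np.sin(s/R)*np.cos(0.3),
                        np.sin(s/R)*np.sin(0.3)], axis=1)
xi = (alpha - p) / R                         # sphere normal along the curve
T = np.gradient(alpha, s, axis=0)            # unit tangent
dxi = np.gradient(xi, s, axis=0)
# RM condition: xi' is parallel to T, i.e., it has no component in the normal bundle
normal_part = dxi - (dxi*T).sum(axis=1, keepdims=True)*T
assert np.max(np.linalg.norm(normal_part, axis=1)[2:-2]) < 1e-4
```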
\section*{Acknowledgment}
The first named author would like to thank the financial support provided by the Mor\'a Miriam Rozen Gerber fellowship for Brazilian postdocs.
| {
"timestamp": "2020-04-17T02:04:37",
"yymm": "1908",
"arxiv_id": "1908.02834",
"language": "en",
"url": "https://arxiv.org/abs/1908.02834",
"abstract": "A curve is rectifying if it lies on a moving hyperplane orthogonal to its curvature vector. In this work, we extend the main result of [Chen 2017, Tamkang J. Math. 48, 209] to any space dimension: we prove that rectifying curves are geodesics on hypercones. We later use this association to characterize rectifying curves that are also slant helices in three-dimensional space as geodesics of circular cones. In addition, we consider curves that lie on a moving hyperplane normal to (i) one of the normal vector fields of the Frenet frame and to (ii) a rotation minimizing vector field along the curve. The former class is characterized in terms of the constancy of a certain vector field normal to the curve, while the latter contains spherical and plane curves. Finally, we establish a formal mapping between rectifying curves in an $(m + 2)$-dimensional space and spherical curves in an $(m + 1)$-dimensional space.",
"subjects": "Differential Geometry (math.DG)",
"title": "Curves orthogonal to a vector field in Euclidean spaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9861513910054509,
"lm_q2_score": 0.7185943805178138,
"lm_q1q2_score": 0.7086428479163424
} |
https://arxiv.org/abs/2002.02021 | A dichotomy for bounded degree graph homomorphisms with nonnegative weights | We consider the complexity of counting weighted graph homomorphisms defined by a symmetric matrix $A$. Each symmetric matrix $A$ defines a graph homomorphism function $Z_A(\cdot)$, also known as the partition function. Dyer and Greenhill [10] established a complexity dichotomy of $Z_A(\cdot)$ for symmetric $\{0, 1\}$-matrices $A$, and they further proved that its #P-hardness part also holds for bounded degree graphs. Bulatov and Grohe [4] extended the Dyer-Greenhill dichotomy to nonnegative symmetric matrices $A$. However, their hardness proof requires graphs of arbitrarily large degree, and whether the bounded degree part of the Dyer-Greenhill dichotomy can be extended has been an open problem for 15 years. We resolve this open problem and prove that for nonnegative symmetric $A$, either $Z_A(G)$ is in polynomial time for all graphs $G$, or it is #P-hard for bounded degree (and simple) graphs $G$. We further extend the complexity dichotomy to include nonnegative vertex weights. Additionally, we prove that the #P-hardness part of the dichotomy by Goldberg et al. [12] for $Z_A(\cdot)$ also holds for simple graphs, where $A$ is any real symmetric matrix. | \section{Hardness for $Z_A(\cdot)$ on simple graphs for real symmetric $A$}\label{sec:Goldberg-et-al-2010-dichotomy}
There is a more direct approach to prove
the \#P-hardness part of the Bulatov-Grohe dichotomy
(Theorem~\ref{thm:Bulatov-Grohe}) for simple graphs.
Although this method does not handle degree-boundedness,
we can apply it more generally to the problem $\EVAL(A, D)$
when the matrix $A$ is real symmetric
and $D$ is positive diagonal.
In particular, we will prove
the \#P-hardness part of the dichotomy for counting GH
by Goldberg et al.~\cite{Goldberg-et-al-2010} (the problem $\EVAL(A)$
without vertex weights, where $A$ is a real symmetric matrix)
for simple graphs.
We first prove the following theorem.
\begin{theorem}\label{thm:EVAL-simp-interp}
Let $A$ and $D$ be $m \times m$ matrices,
where $A$ is real symmetric and $D$ is positive diagonal.
Then $\EVAL(A, D) \le_{\mathrm T}^{\mathrm P} \EVAL_{\simp}(A, D)$.
\end{theorem}
\begin{proof}
We may assume $A$ is not identically $0$, for otherwise the problem is
trivial.
Let $G = (V, E)$ be an input graph to the problem $\EVAL(A, D)$.
For any $n \ge 1$, let $G_n = S_n^{(F)}(G)$ where $F \subseteq E$ is the
subset consisting of the edges of $G$ each of which is parallel
to at least one other edge.
In other words, we obtain $G_n$ by replacing every parallel edge $e$
by its $n$-stretching $S_n e$.
We will refer to these as paths of length $n$ in $G_n$.
Note that $G_1 = G$.
Moreover, for every $n \ge 2$, the graph $G_n$ is simple and loopless,
and has polynomial size in the size of $G$ and $n$.
A path of length $n \ge 1$ has the edge weight matrix
\[
M^{(n)}=
\underbrace{A D A \ldots A D A}_{D \text{ appears } n - 1 ~\ge~ 0 \text{ times}} = A (D A)^{n - 1} = D^{-1 / 2} (D^{1 / 2} A D^{1 / 2})^n D^{-1 / 2}.
\]
Here $D^{1 / 2}$ is a diagonal matrix
with the positive square roots
of the corresponding entries
of $D$ on the main diagonal,
and $D^{-1 / 2}$ is its inverse.
Since $A$ is real symmetric and $D$ is positive diagonal,
the matrix $\widetilde A = D^{1 / 2} A D^{1 / 2}$ is real symmetric.
Then $\widetilde A$ is orthogonally diagonalizable over $\mathbb{R}$, i.e.,
there exist a real orthogonal matrix $S$
and a real diagonal matrix $J = (\lambda_i)_{i = 1}^m$ such that $\widetilde A = S^T J S$.
If $A$ has rank $r$, then $1 \le r \le m$,
and we may assume
that $\lambda_i \ne 0$ for $1 \le i \le r$
and $\lambda_i = 0$ for $i > r$.
We have ${\widetilde A}^n = S^T J^n S$, so the edge weight matrix for a path of length $n \ge 1$ can be written as
\[
M^{(n)}
= D^{-1 / 2} {\widetilde A}^n D^{-1 / 2}
= D^{-1 / 2} S^T J^n S D^{-1 / 2}.
\]
We can write $M_{i j}^{(n)} = \sum_{\ell = 1}^r a_{i j \ell} \lambda_\ell^n$
by a formal expansion,
for every $n \ge 1$ and some real $a_{i j \ell}$'s
that are dependent on $D$ and $S$, but independent of $n$ and $\lambda_\ell$,
where $1 \le i, j \le m$ and $1 \le \ell \le r$.
By the formal expansion of the symmetric matrix $M^{(n)}$ above,
we have $a_{i j \ell} = a_{j i \ell}$.
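These identities are straightforward to confirm numerically. The sketch below, with a random symmetric $A$ and positive diagonal $D$ (the sizes $m=4$, $n=5$ are arbitrary choices), checks that $A(DA)^{n-1}=D^{-1/2}\widetilde A^{\,n}D^{-1/2}$ and recovers the symmetric coefficients $a_{ij\ell}$ from the spectral decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 5                                   # matrix size and path length (arbitrary)
A = rng.normal(size=(m, m)); A = (A + A.T)/2  # real symmetric A
d = rng.uniform(0.5, 2.0, size=m)             # positive diagonal entries of D
D = np.diag(d)
M = A.copy()
for _ in range(n - 1):                        # M = A (D A)^{n-1}
    M = M @ D @ A
Dh, Dhi = np.diag(np.sqrt(d)), np.diag(1/np.sqrt(d))
At = Dh @ A @ Dh                              # tilde A = D^{1/2} A D^{1/2}
lam, v = np.linalg.eigh(At)                   # At = v diag(lam) v^T, so S = v^T
assert np.allclose(M, Dhi @ v @ np.diag(lam**n) @ v.T @ Dhi)
S = v.T
# a_{ij l} = S_{l i} S_{l j} / sqrt(d_i d_j), independent of n
a = np.einsum('li,lj->ijl', S, S) / np.sqrt(np.outer(d, d))[:, :, None]
assert np.allclose(a, a.transpose(1, 0, 2))   # symmetry a_{ij l} = a_{ji l}
assert np.allclose(M, (a * lam**n).sum(axis=2))
```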
Let $t = |F|$, which is the number of edges in $G$
subject to the stretching operator $S_n$ to form $G_n$.
In the evaluation of the partition function $Z_{A, D}(G_n)$,
we stratify the vertex assignments in $G_n$ as follows.
Denote by $\kappa = (k_{i j})_{1 \le i \le j \le m}$
a tuple of nonnegative integers indexed by the pairs $(i, j)$ with $1 \le i \le j \le m$,
satisfying $\sum_{1 \le i \le j \le m} k_{i j} = t$.
Let $\mathcal K$ denote the set of all such possible tuples $\kappa$.
In particular, $|\mathcal K| = \binom{t + m (m + 1) / 2 - 1}{m (m + 1) / 2 - 1}$.
For a fixed $m$, this is a polynomial in $t$, and thus a
polynomial in the size of $G$.
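The count $|\mathcal K|$ is the standard stars-and-bars formula; it can be verified by brute force on a small instance (the values $m = 3$, $t = 4$ below are hypothetical):

```python
from itertools import product
from math import comb

def count_tuples(t, q):
    # brute-force count of tuples of q nonnegative integers summing to t
    return sum(1 for k in product(range(t + 1), repeat=q) if sum(k) == t)

m, t = 3, 4
q = m * (m + 1) // 2     # one coordinate k_{ij} per pair 1 <= i <= j <= m
assert count_tuples(t, q) == comb(t + q - 1, q - 1)     # stars and bars
```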
Let $c_\kappa$
be the sum over all assignments of all vertex and edge weight products
in $Z_{A, D}(G_n)$,
except the contributions by the paths of length $n$ formed by
stretching parallel edges in $G$,
such that the endpoints of precisely $k_{i j}$ constituent paths
of length $n$
receive the assignments $(i, j)$ (in either order of the end points)
for every $1 \le i \le j \le m$.
We call a vertex assignment on $G$
\emph{consistent} with $\kappa \in \mathcal K$
if it satisfies the stated property.
Note that the contribution by each such path does not include
the vertex weights of the two end points (but does include all vertex weights
of the internal $n-1$ vertices of the path).
We can write
\[
c_\kappa = \sum_{\substack{\xi \colon V(G) \to [m] \\ \xi \text{ is consistent with } \kappa}} \prod_{w \in V} D_{\xi(w)} \prod_{(u, v) \in E \setminus F} A_{\xi(u), \xi(v)}
\]
for $\kappa \in \mathcal K$.
In particular, the values $c_\kappa$ are independent of $n$.
Thus for some polynomially many values $c_\kappa$, where $\kappa
\in \mathcal K$, we have
\[
Z_{A, D}(G_n) = \sum_{\kappa \in \mathcal K} c_\kappa \prod_{1 \le i \le j \le m} (M_{i j}^{(n)})^{k_{i j}}
= \sum_{\kappa \in \mathcal K} c_\kappa \prod_{1 \le i \le j \le m} (\sum_{\ell = 1}^r a_{i j \ell} \lambda_\ell^n)^{k_{i j}}.
\]
Expanding out the last sum and rearranging the terms, for some
values $b_{i_1, \ldots, i_r}$ independent of $n$, we get
\begin{equation}\label{interpolation-lin-sys-sec4}
Z_{A, D}(G_n)
= \sum_{\substack{i_1 + \ldots + i_r = t \\ i_1, \ldots, i_r \ge 0}} b_{i_1, \ldots, i_r} ( \prod_{\ell = 1}^r \lambda_\ell^{i_\ell} )^n
\end{equation}
for all $n \ge 1$.
This can be viewed as a linear system with the unknowns $b_{i_1, \ldots, i_r}$ with the rows indexed by $n$.
The number of unknowns is $\binom{t + r - 1}{r - 1}$
which is polynomial in the size of the input graph $G$, since $r \le m$ is a constant.
The values $\prod_{\ell = 1}^r \lambda_\ell^{i_\ell}$
can all be computed in polynomial time.
We show how to compute the value $Z_{A, D}(G) = \displaystyle \sum_{\scriptsize \substack{i_1 + \ldots + i_r = t \\ i_1, \ldots, i_r \ge 0}} b_{i_1, \ldots, i_r} \prod_{\ell = 1}^r \lambda_\ell^{i_\ell}$
from the values $Z_{A, D}(G_n)$, $n \ge 2$, in polynomial time
(recall that $G_n$ is simple and loopless for $n \ge 2$).
The coefficient matrix of the linear system (\ref{interpolation-lin-sys-sec4})
is a Vandermonde matrix.
However, it might not be of full rank
because the coefficients $\prod_{\ell = 1}^r \lambda_\ell^{i_\ell}$
do not have to be pairwise distinct, and therefore
it can have repeating columns.
Nevertheless, when there are two repeating columns we replace
the corresponding unknowns $b_{i_1, \ldots, i_r}$ and $b_{i'_1, \ldots, i'_r}$
with their sum as a new variable; we repeat this replacement procedure until
there are no repeating columns.
Since all $\lambda_\ell \ne 0$, for $1 \le \ell \le r$,
after the replacement,
we have a Vandermonde system of full rank.
Therefore we can solve this modified linear system
in polynomial time.
This allows us to obtain the value $Z_{A, D}(G) = \displaystyle \sum_{\scriptsize \substack{i_1 + \ldots + i_r = t \\ i_1, \ldots, i_r \ge 0}} b_{i_1, \ldots, i_r} \prod_{\ell = 1}^r \lambda_\ell^{i_\ell}$, which also has exactly the same pattern of repeating multipliers
$\prod_{\ell = 1}^r \lambda_\ell^{i_\ell}$.
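The merge-and-solve step can be sketched on a toy system (the multipliers and coefficients below are hypothetical): repeated multipliers are combined first, then the resulting full-rank Vandermonde system built from queries at $n \ge 2$ is solved, and the evaluation at $n = 1$, which plays the role of $Z_{A, D}(G)$, is recovered.

```python
import numpy as np
from collections import defaultdict

# hidden multipliers (products of eigenvalue powers) and coefficients; the
# multipliers are intentionally not pairwise distinct, and all are nonzero
x = np.array([2.0, 2.0, -1.0, 3.0, 3.0])
b = np.array([1.5, 0.5, 2.0, -1.0, 4.0])

# merge unknowns sharing a multiplier: one combined unknown per distinct value
merged = defaultdict(float)
for xk, bk in zip(x, b):
    merged[xk] += bk
xs = np.array(sorted(merged))

# observe v_n only for n >= 2 (the simple, loopless instances G_n)
ns = np.arange(2, 2 + len(xs))
v = np.array([np.sum(b * x ** n) for n in ns])

# the merged system is a full-rank Vandermonde system; solve it
V = xs[None, :] ** ns[:, None]
c = np.linalg.solve(V, v)

# recover the n = 1 value from the merged solution
assert np.isclose(np.dot(c, xs), np.sum(b * x))
```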
We have shown how to compute the value $Z_{A, D}(G)$ in polynomial time
by querying the oracle $\EVAL(A, D)$
on polynomially many instances $G_n$, for $n \ge 2$.
It follows that $\EVAL(A, D) \le_{\mathrm T}^{\mathrm P} \EVAL_{\simp}(A, D)$.
\end{proof}
We are ready to prove the \#P-hardness part of the dichotomy
by Goldberg et al.~\cite{Goldberg-et-al-2010} (Theorem 1.1) for simple graphs.
Let $A$ be a real symmetric $m \times m$ matrix.
Assuming that $A$ does not satisfy
the tractability conditions of the dichotomy theorem of Goldberg et al.,
the problem $\EVAL(A)$ is \#P-hard.
By Theorem~\ref{thm:EVAL-simp-interp} (with $D = I_m$), $\EVAL(A) \le_{\mathrm T}^{\mathrm P} \EVAL_{\simp}(A)$.
It follows that $\EVAL_{\simp}(A)$ is \#P-hard.
Hence
the dichotomy theorem by Goldberg et al.\
can be improved to apply to simple graphs.
\begin{theorem}
Let $A$ be a real symmetric matrix.
Then either $\EVAL(A)$ is in polynomial time or $\EVAL_{\simp}(A)$ is \#P-hard
(a fortiori, $\EVAL(A)$ is \#P-hard).
Moreover, there is a polynomial time algorithm that,
given the matrix $A$, decides which case of the dichotomy it is.
\end{theorem}
\begin{remark}
The interpolation argument in Theorem~\ref{thm:EVAL-simp-interp}
works even if $G$ is a multigraph possibly with multiple loops at any vertex
in the following sense.
In Definition~\ref{def:EVAL(A,D)},
we treat the loops of $G$ as edges.
We think of them as mapped to the entries $A_{i i}$
in the evaluation of the partition function $Z_{A, D}$.
However, we need to slightly change the way we define the graphs $G_n$.
In addition to $n$-stretching the parallel edges of $G$,
we also need to $n$-stretch each loop of $G$ (i.e., replacing a loop by
a closed path of length $n$).
Now $F$ is the set of parallel edges \emph{and} loops in $G$.
This way each $G_n = S_n^{(F)}(G)$ for $n \ge 2$ is simple and loopless.
The rest of the proof goes through.
In other words, the statement of Theorem~\ref{thm:EVAL-simp-interp}
extends to a reduction from
the $\EVAL(A, D)$ problem that allows input $G$ to have multiloops,
to the standard problem $\EVAL_{\simp}(A, D)$ not allowing loops.
\end{remark}
\section{Introduction}
The modern study of graph homomorphisms originates from the work by Lov\'asz and
others several decades
ago and has been a very active area~\cite{Lovasz-1967, GH-book}.
If $G$ and $H$ are two graphs,
a graph homomorphism (GH) is a mapping $f \colon V(G) \to V(H)$
that preserves vertex adjacency, i.e.,
whenever $(u, v)$ is an edge in $G$, $(f(u), f(v))$ is also an edge in $H$.
Many combinatorial problems on graphs can be expressed as graph homomorphism
problems.
Well-known examples include the problems of finding a proper vertex coloring, vertex cover, independent set and clique.
For example, if $V(H) = \{0, 1\}$ with an edge between $0$ and $1$ and a loop at $0$,
then $f \colon V(G) \to \{0, 1\}$ is a graph homomorphism iff $f^{-1}(1)$ is an independent set in $G$;
similarly, proper vertex colorings on $G$ using at
most $m$ colors correspond to homomorphisms
from $G$ to $H = K_m$ (with no loops).
More generally, one can consider weighted graphs $H$
and aggregate all homomorphisms
from $G$ to $H$ into a weighted sum. This
is a powerful graph invariant which can express many
graph properties.
Formally, for a symmetric $m \times m$ matrix $A$,
the \emph{graph homomorphism function} on a graph $G = (V, E)$
is defined as follows:
\[
Z_A(G) = \sum_{\xi: V \rightarrow [m]} \prod_{(u, v) \in E} A_{\xi(u), \xi(v)}.
\]
Note that if $H$ is unweighted, and $A$ is its $\{0, 1\}$-adjacency
matrix, then each product $\prod_{(u, v) \in E} A_{\xi(u), \xi(v)}$ is $0$ or $1$,
and is $1$ iff $\xi$ is a graph homomorphism.
Thus in this case $Z_A(G)$ counts the number of homomorphisms
from $G$ to $H$.
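This counting interpretation can be illustrated with a brute-force evaluation of $Z_A$ (exponential in $|V(G)|$, for illustration only), using the independent-set example above:

```python
from itertools import product

def Z(A, edges, nverts):
    # brute-force Z_A(G): sum over all maps xi: V -> [m] of the edge weight products
    m, total = len(A), 0
    for xi in product(range(m), repeat=nverts):
        w = 1
        for (u, v) in edges:
            w *= A[xi[u]][xi[v]]
        total += w
    return total

# H on vertices {0, 1} with edge (0, 1) and a loop at 0; Z_A counts independent sets
A = [[1, 1], [1, 0]]
assert Z(A, [(0, 1), (1, 2)], 3) == 5            # path on 3 vertices: 5 independent sets
assert Z(A, [(0, 1), (1, 2), (0, 2)], 3) == 4    # triangle: 4 independent sets
```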
One can further allow $H$ to have vertex weights.
In this case, we can similarly define the
function $Z_{A, D}(\cdot)$ (see Definition~\ref{def:EVAL(A,D)}).
These sum-of-product functions $Z_A(\cdot)$ and $Z_{A, D}(\cdot)$
are referred to as the \emph{partition functions} in statistical physics~\cite{baxter-6-8}.
Various special cases of GH have been studied there extensively,
which include the Ising, Potts, hardcore gas, Beach, Widom-Rowlinson models, etc.~\cite{baxter-6-8}.
The computational complexity of $Z_A(\cdot)$
has been studied systematically.
Dyer and Greenhill~\cite{Dyer-Greenhill-2000, Dyer-Greenhill-corrig-2004}
proved that, for a symmetric $\{0, 1\}$-matrix $A$,
$Z_A(\cdot)$ is either in polynomial time or \#P-complete,
and they gave a succinct condition for this complexity dichotomy:
if $A$ satisfies the condition then $Z_A(\cdot)$ is computable in
polynomial time (we also call it \emph{tractable}), otherwise
it is \#P-complete.
Bulatov and Grohe~\cite{Bulatov-Grohe-2005}
(see also~\cite{Thurley-2009, Grohe-Thurley-2011})
generalized the Dyer-Greenhill dichotomy to
$Z_A(\cdot)$ for nonnegative symmetric matrices $A$.
It was further extended by Goldberg et al.~\cite{Goldberg-et-al-2010}
to arbitrary real symmetric matrices,
and finally by Cai, Chen and Lu~\cite{Cai-Chen-Lu-2013} to arbitrary
complex symmetric matrices.
In the last two dichotomies,
the tractability criteria are not trivial to state.
Nevertheless, both tractability criteria are decidable in polynomial time
(in the size of $A$).
The definition of the partition function $Z_A(\cdot)$
can be easily extended to
directed graphs $G$ and arbitrary (not necessarily symmetric) matrices $A$
corresponding to directed edge weighted graphs $H$.
Concerning the complexity of counting directed GH,
we currently have the \emph{decidable} dichotomies
by Dyer, Goldberg and Paterson~\cite{Dyer-Goldberg-Paterson-2007}
for $\{0, 1\}$-matrices corresponding to (unweighted) simple acyclic graphs $H$,
and by Cai and Chen~\cite{Cai-Chen-2019} for all nonnegative
matrices $A$.
Dyer and Greenhill in the same paper~\cite{Dyer-Greenhill-2000}
proved a stronger statement that
if a $\{0, 1\}$-matrix $A$ fails the tractability condition then
$Z_A(G)$ is \#P-complete even when restricted to
bounded degree graphs $G$.
We note that the complexity of GH for bounded degree graphs
is particularly interesting, as much work on the
approximate complexity of GH has focused on bounded degree graphs,
for which approximation
algorithms have been obtained~\cite{DyerFJ02,Weitz06,Sly10,SinclairST12,LiLY13,Barvinok-book,Barvinok-Soberon-2017,Peters-Regts-2018,HelmuthPR19}.
However, for fifteen years
the worst case complexity for bounded degree graphs
in the Bulatov-Grohe dichotomy
was open.
Since this dichotomy is used essentially
in almost all subsequent work, e.g.,~\cite{Goldberg-et-al-2010,Cai-Chen-Lu-2013},
this has been a stumbling block.
Our main contribution in this paper is to resolve this 15-year-old open problem.
We prove
that the \#P-hardness part of
the Bulatov-Grohe dichotomy still holds
for \emph{bounded degree graphs}.
It can be further strengthened to apply to
bounded degree \emph{simple} graphs.
We actually prove a broader
dichotomy for $Z_{A, D}(\cdot)$,
where in addition to the nonnegative symmetric edge weight matrix $A$
there is also a nonnegative diagonal vertex weight matrix $D$.
We will give an explicit tractability condition
such that, if $(A, D)$ satisfies the condition then
$Z_{A, D}(G)$ is computable in polynomial time for all $G$,
and if it fails the condition then
$Z_{A, D}(G)$ is \#P-hard even restricted to
\emph{bounded degree simple graphs} $G$.
$Z_A(G)$ is the special case of $Z_{A, D}(G)$ when $D$ is
the identity matrix.
Additionally, we prove that
the \#P-hardness part of the dichotomy by Goldberg et al.~\cite{Goldberg-et-al-2010}
for all real symmetric edge weight matrices $A$ still holds for \emph{simple graphs}.
(Although in this case, whether under the same condition
on $A$ the \#P-hardness still holds
for bounded degree graphs is not resolved in the present paper.)
In order to prove the dichotomy theorem on bounded degree graphs,
we have to introduce a nontrivial extension of the well-developed
interpolation method~\cite{Valiant}.
We use some of the well-established techniques in this area of research
such as stretchings and thickenings. But the main innovation
is an overall design of the interpolation for
a more abstract target polynomial than $Z_{A, D}$.
To carry out the proof there is an initial condensation step
where we combine vertices that have
proportionately the same neighboring edge weights
(technically defined by pairwise linear dependence) into a super vertex with
a combined vertex weight. Note that this creates vertex weights
even when initially all vertex weights are 1. When vertex weights are present,
a standard approach in an interpolation proof
is to arrange things so that
in the end one can redistribute vertex weights to edge weights.
However, when edge weights are not 0-1,
any gadget design must deal with a quantity at
each vertex that cannot be \emph{redistributed}.
This dependence has the form
$\sum_{j = 1}^{m_{\zeta(w)}} \alpha_{\zeta(w) j} \mu_{\zeta(w) j}^{\deg(w)}$,
resulting from combining pairwise linearly dependent rows and columns,
that depends on vertex degree $\deg(w)$
in a complicated way.
(We note that in the 0-1 case all $\mu_{\zeta(w) j} \in \{0, 1\}$, making
it in fact degree \emph{independent}.)
We overcome this difficulty by essentially introducing
a virtual level of interpolation---an interpolation to realize
some ``virtual gadget'' that cannot be physically realized, and yet
its ``virtual'' vertex weights are suitable for redistribution.
Technically we have to define an auxiliary graph $G'$,
and express
the partition function
in an extended framework, called
$Z_{\mathscr A, \mathscr D}$ on $G'$ (see Definition~\ref{def:EVAL(scrA,scrD)}).
In a typical interpolation proof, there is a polynomial
with coefficients that have a clear combinatorial meaning
defined in terms of $G$, usually consisting of certain
sums of exponentially many terms in some target partition function.
Here, we will define a target polynomial
with certain coefficients;
however these coefficients do not have a direct combinatorial meaning
in terms of $Z_{A, D}(G)$, but rather they only have
a direct combinatorial meaning in terms of
$Z_{\mathscr A, \mathscr D}$ on $G'$.
In
a suitable ``limiting'' sense,
a certain aggregate of these coefficients
forms some useful quantity in the final result.
This introduces a concomitant
``virtual'' vertex weight which depends on the vertex degree that is ``just-right''
so that it can be redistributed to become part of the incident edge weight,
thus effectively killing the vertex weight.
This leads to a reduction from $Z_{C}(\cdot)$
(without vertex weight) to $Z_{A, D}(\cdot)$, for some $C$ that inherits
the hardness condition of $A$,
thus proving the \#P-hardness of the latter.
This high level description will be made clearer
in Section~\ref{sec:Hardness-proof}.
The nature of the degree dependent vertex weight
introduces a substantial difficulty;
in particular
a direct adaptation of the proof
in~\cite{Dyer-Greenhill-2000}
does not work.
Our extended vertex-weighted version of the
Bulatov-Grohe dichotomy can be used
to correct a crucial gap in the proof by Thurley~\cite{Thurley-2010}
for a dichotomy for $Z_A(\cdot)$ with Hermitian edge
weight matrices $A$, where this degree dependence was also at
the root of the
difficulty.\footnote{In~\cite{Thurley-2010}, the proof of Lemma~4.22
uses Lemma~4.24. In Lemma~4.24, $A$ is assumed to have
pairwise linearly independent rows while Lemma~4.22 does not assume this,
and the author appeals to
a twin reduction step in~\cite{Dyer-Greenhill-2000}. However, unlike
in the 0-1 case~\cite{Dyer-Greenhill-2000},
such a step incurs degree dependent vertex weights.
This gap is fixed by our Theorem~\ref{thm:bd-hom-nonneg}.}
\subsection{Hardness part}\label{subsec:Hardness-part}
\section{Hardness proof}\label{sec:Hardness-proof}
We proceed to prove the \#P-hardness part of Theorem~\ref{thm:bd-hom-nonneg}.
Let $A$ and $D$ be $m \times m$ matrices,
where $A$ is nonnegative symmetric but not block-rank-$1$, and $D$ is positive diagonal.
The first step is to eliminate pairwise linearly dependent rows and columns
of $A$.
(We will see that this step will naturally create nontrivial vertex weights
even if we initially start with the vertex unweighted case $D=I_m$.)
If $A$ has a zero row or column $i$, then
for any connected input graph $G$ other than a single isolated vertex,
no map $\xi: V(G) \rightarrow [m]$ having
a nonzero contribution to $Z_{A, D}(G)$
can map any vertex of $G$ to $i$.
So, by crossing out all zero rows and columns (they have the same index
set since $A$ is symmetric)
we may assume that $A$ has no zero rows or columns.
We then delete the same set of rows and columns from $D$,
thereby expressing the problem
$\EVAL_{\simp}^{(\Delta)}(A, D)$ for $\Delta \ge 0$ on a smaller domain.
Also permuting the rows and columns of both $A$ and $D$ simultaneously by the same permutation
does not change the value of $Z_{A, D}(\cdot)$,
and so it does not change the complexity of
$\EVAL_{\simp}^{(\Delta)}(A, D)$ for $\Delta \ge 0$ either.
Having no zero rows and columns implies that pairwise linear dependence
is an equivalence relation, and so
we may assume that the pairwise linearly dependent rows and
columns of $A$ are contiguously arranged.
Then, after renaming the indices, the entries of $A$ are of the following form:
$A_{(i, j), (i', j')} = \mu_{i j} \mu_{i' j'} A'_{i, i'}$,
where $A'$ is a nonnegative symmetric $s \times s$ matrix
with all columns nonzero and pairwise linearly independent,
$1 \le i, i' \le s$, $1 \le j \le m_i$, $1 \le j' \le m_{i'}$,
$\sum_{i = 1}^s m_i = m$,
and all $\mu_{i j} > 0$.
We also rename the indices of the matrix $D$ so that
the diagonal entries of $D$
are of the following form:
$D_{(i, j), (i, j)} = \alpha_{i j} > 0$
for $1 \le i \le s$ and $1 \le j \le m_i$.
As $m \ge 1$ we get $s \ge 1$.
Then the partition function $Z_{A, D}(\cdot)$ can be written in
a compressed form
\[
Z_{A, D}(G) =
\sum_{\zeta: V(G) \rightarrow [s]} \left( \prod_{w \in V(G)} \sum_{j = 1}^{m_{\zeta(w)}} \alpha_{\zeta(w) j} \mu_{\zeta(w) j}^{\deg(w)} \right) \prod_{(u, v) \in E(G)} A'_{\zeta(u), \zeta(v)} = Z_{A', \mathfrak D}(G)
\]
where $\mathfrak D = \{ D^{\llbracket k \rrbracket}\}_{k = 0}^\infty$
with $D^{\llbracket k \rrbracket}_i = \sum_{j = 1}^{m_i} \alpha_{i j} \mu_{i j}^k > 0$ for $k \ge 0$ and $1 \le i \le s$.
Then all matrices in $\mathfrak D$ are positive diagonal.
Note the dependence on the vertex degree $\deg(w)$ for $w \in V(G)$.
Since the underlying graph $G$ remains unchanged,
this way we obtain the equivalence
$\EVAL_{\simp}^{(\Delta)}(A, D) \equiv_{\mathrm T}^{\mathrm P} \EVAL_{\simp}^{(\Delta)}(A', \mathfrak D)$ for any $\Delta \ge 0$.
Here the subscript $\simp$ can be included or excluded,
and the same is true for the superscript $(\Delta)$;
the statement remains true in all cases.
We also point out that
the entries of the matrices $D^{\llbracket k \rrbracket} \in \mathfrak D$
are computable in polynomial time
in the input size of $(A, D)$ as well as in $k$.
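The compression step can be verified on a toy instance (all values below are hypothetical): take $s = 2$, $m_1 = 2$ with $\mu_{1 1} = 1$, $\mu_{1 2} = 2$, $m_2 = 1$ with $\mu_{2 1} = 1$, and all $\alpha_{i j} = 1$ (so $D = I_3$), and check $Z_{A, D}(G) = Z_{A', \mathfrak D}(G)$ by brute force.

```python
from itertools import product

# grouping data: mu_{11} = 1, mu_{12} = 2 (group 1), mu_{21} = 1 (group 2)
mu = [1, 2, 1]
grp = [0, 0, 1]
Ap = [[1, 2], [2, 3]]    # A': nonzero, pairwise linearly independent columns
A = [[mu[a] * mu[b] * Ap[grp[a]][grp[b]] for b in range(3)] for a in range(3)]

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # a small test graph G
nv = 4
deg = [0] * nv
for (u, v) in edges:
    deg[u] += 1; deg[v] += 1

# original Z_{A, I}(G), brute force over xi: V -> [3]
Z_orig = 0
for xi in product(range(3), repeat=nv):
    w = 1
    for (u, v) in edges:
        w *= A[xi[u]][xi[v]]
    Z_orig += w

# compressed Z_{A', frak D}(G): D^[[k]]_1 = 1 + 2^k, D^[[k]]_2 = 1
def Dk(i, k):
    return 1 + 2 ** k if i == 0 else 1

Z_comp = 0
for zeta in product(range(2), repeat=nv):
    w = 1
    for v in range(nv):
        w *= Dk(zeta[v], deg[v])
    for (u, v) in edges:
        w *= Ap[zeta[u]][zeta[v]]
    Z_comp += w

assert Z_orig == Z_comp
```

Note how the degree dependence enters: each vertex of degree $k$ contributes the factor $1 + 2^k$ after compression.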
\subsection{Gadgets $\mathcal P_{n, p}$ and $\mathcal R_{d, n, p}$}\label{subsec:Gadget-Rdnp}
We first introduce the \emph{edge gadget}
$\mathcal P_{n, p}$, for all $p, n \ge 1$.
It is obtained by replacing each edge
of a path of length $n$ by the gadget in
Figure~\ref{fig:ADA-to-p-gadget-advanced}
from Lemma~\ref{lem:ADA-nondeg-thick}.
More succinctly $\mathcal P_{n, p}$ is $S_2 T_p S_n e$, where $e$ is an edge.
To define the gadget $\mathcal R_{d, n, p}$, for all $d, p, n \ge 1$, we start
with a cycle on $d$ vertices $F_1, \ldots, F_d$ (call it a $d$-cycle),
replace every edge of the $d$-cycle by a copy of $\mathcal P_{n, p}$,
and append a dangling
edge at each vertex $F_i$ of the $d$-cycle.
To be specific, a $2$-cycle has
two vertices with $2$ parallel edges between them,
and a $1$-cycle
is a loop on one vertex. The gadget $\mathcal R_{d, n, p}$
always has $d$ dangling edges.
Note that all
$\mathcal R_{d, n, p}$ are loopless simple graphs (i.e., without
parallel edges or loops), for $d, n, p \ge 1$.
An example of a gadget $\mathcal R_{d, n, p}$
is shown in Figure~\ref{fig:d-gon-gagdet-simplified}.
For the special cases $d = 1, 2$,
examples of gadgets $\mathcal R_{d, n, p}$
can be seen in Figure~\ref{fig:d=1,2-gadgets}.
\begin{figure}[t]
\centering
\includegraphics{figure5.pdf}
\caption{\label{fig:d-gon-gagdet-simplified}The gadget $\mathcal R_{5, 3, 4}$.}
\end{figure}
\begin{figure}[t]
\setbox1=\hbox{\includegraphics[scale=0.73]{figure6.pdf}}
\setbox2=\hbox{\includegraphics[scale=0.73]{figure7.pdf}}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.73]{figure6.pdf}
\subcaption{$\mathcal R_{1, 5, 5}$\label{subfig-1:d=1}}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\raisebox{0.5\ht1}{\raisebox{-0.5\ht2}{
\includegraphics[scale=0.73]{figure7.pdf}
}}
\subcaption{$\mathcal R_{2, 4, 3}$\label{subfig-2:d=2}}
\end{subfigure}
\caption{\label{fig:d=1,2-gadgets}Examples of gadgets $\mathcal R_{d, n, p}$ for $d = 1, 2$}
\end{figure}
We note that
vertices in $\mathcal P_{n, p}$ have degrees at most $2p$,
and vertices in $\mathcal R_{d, n, p}$ have degrees at most $2p+1$,
taking into account the dangling edges. Clearly
$|V({\mathcal R_{d, n, p}})| = d n (p+1)$ and
$|E({\mathcal R_{d, n, p}})| = (2np+1) d$, including the dangling edges.
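The stated sizes and degree bounds can be checked by constructing $\mathcal P_{n, p}$ explicitly and counting; the parameters $n = 3$, $p = 4$, $d = 5$ below are hypothetical, matching the gadget $\mathcal R_{5, 3, 4}$ shown above.

```python
def path_gadget(n, p):
    """P_{n,p} = S_2 T_p S_n e: path 0..n, each edge replaced by p parallel
    length-2 paths through fresh midpoints. Returns (#vertices, edge list)."""
    edges, fresh = [], n + 1
    for i in range(n):
        for _ in range(p):
            edges += [(i, fresh), (fresh, i + 1)]
            fresh += 1
    return fresh, edges

n, p, d = 3, 4, 5
nv, edges = path_gadget(n, p)
assert nv == (n + 1) + n * p and len(edges) == 2 * n * p
deg = {}
for (u, v) in edges:
    deg[u] = deg.get(u, 0) + 1; deg[v] = deg.get(v, 0) + 1
assert max(deg.values()) == 2 * p    # internal path vertices have degree 2p

# R_{d,n,p}: d-cycle whose edges are copies of P_{n,p} (consecutive copies share
# the cycle vertices F_i), plus one dangling edge per F_i
assert d + d * (nv - 2) == d * n * (p + 1)          # |V(R_{d,n,p})| = d n (p+1)
assert d * len(edges) + d == (2 * n * p + 1) * d    # |E(R_{d,n,p})|, dangling included
```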
By Lemma~\ref{lem:ADA-nondeg-thick},
we can fix some $p \ge 1$ such that $B = (A' D^{\llbracket 2 \rrbracket} A')^{\odot p}$ is nondegenerate, where the superscript $\llbracket 2 \rrbracket$
is from the stretching operator $S_2$ which creates those degree
$2$ vertices, and the superscript $\odot p$ is
from the thickening operator $T_p$, followed by $S_2$, which creates those
parallel paths of length $2$.
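The role of the thickening power $p$ can be seen on a toy matrix (a hypothetical instance, not literally of the form $A' D^{\llbracket 2 \rrbracket} A'$): a degenerate nonnegative symmetric matrix with pairwise linearly independent rows whose Hadamard square is already nondegenerate.

```python
import numpy as np

M = np.array([[1., 1., 2.],
              [1., 2., 3.],
              [2., 3., 5.]])   # symmetric, nonnegative; row 3 = row 1 + row 2
assert np.isclose(np.linalg.det(M), 0)          # degenerate
assert abs(np.linalg.det(M ** 2)) > 1e-9        # Hadamard square is nondegenerate
```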
The edge gadget $\mathcal P_{n, p}$ has the edge weight matrix
\begin{align}
L^{(n)} &=
\underbrace{B D^{\llbracket 2 p \rrbracket} B \ldots B D^{\llbracket 2 p \rrbracket} B}_{D^{\llbracket 2 p \rrbracket} \text{ appears } n - 1 ~\ge~ 0 \text{ times}} = B (D^{\llbracket 2 p \rrbracket} B)^{n - 1} \label{Pnp-edgeweightmatrix1} \\
&= (D^{\llbracket 2 p \rrbracket})^{-1 / 2} ((D^{\llbracket 2 p \rrbracket})^{1 / 2} B (D^{\llbracket 2 p \rrbracket})^{1 / 2})^n (D^{\llbracket 2 p \rrbracket})^{-1 / 2},
\label{Pnp-edgeweightmatrix2}
\end{align}
where in the notation $L^{(n)}$
we suppress the index $p$.
The $n-1$ occurrences of
$D^{\llbracket 2 p \rrbracket}$ in (\ref{Pnp-edgeweightmatrix1}) are due to
those $n-1$ vertices of degree $2p$.
Here $(D^{\llbracket 2 p \rrbracket})^{1 / 2}$ is a diagonal matrix
with the positive square roots
of the corresponding entries
of $D^{\llbracket 2 p \rrbracket}$ on the main diagonal,
and $(D^{\llbracket 2 p \rrbracket})^{-1 / 2}$ is its inverse.
The vertices $F_i$ are of degree $2 p + 1$ each,
but the contributions by their vertex weights are
not included in $L^{(n)}$.
The constraint function induced by $\mathcal R_{d, n, p}$
is more complicated to write down. When it is placed as a part of
a graph, for any given assignment
to the $d$ vertices $F_i$, we can express the contribution of
the gadget $\mathcal R_{d, n, p}$ in terms of $d$ copies of
$L^{(n)}$, \emph{together with} the vertex weights incurred at the
$d$ vertices $F_i$ which will depend on their degrees.
\subsection{Interpolation using $\mathcal R_{d, n, p}$}\label{subsec:Interpolation}
Assume for now that $G$ does not contain isolated vertices.
We will replace every vertex $u \in V(G)$ of degree $d = d_u = \deg(u) \ge 1$
by a copy of $\mathcal R_{d, n, p}$, for all $n, p \ge 1$.
The replacement operation
can
be described in two steps: In step one, each $u \in V(G)$ is replaced
by a $d$-cycle on vertices $F_1, \ldots, F_d$,
each having a dangling edge attached.
The $d$ dangling edges will be identified one-to-one with
the $d$ incident edges at $u$.
If $u$ and $v$ are adjacent vertices in $G$, then
the edge $(u, v)$ in $G$ will be replaced by merging a
pair of dangling edges, one from the $d_u$-cycle
and one from the $d_v$-cycle.
Thus in step one we obtain a graph $G'$, which basically replaces every
vertex $u \in V(G)$ by a cycle of $\deg(u)$ vertices.
Then in step two, for every cycle in $G'$ that
corresponds to some $u \in V(G)$ we replace each edge
on the cycle by a copy of the edge gadget
$\mathcal P_{n, p}$.
Let $G_{n, p}$ denote the graph obtained from $G$ by the
replacement procedure above.
Since all gadgets $\mathcal R_{d, n, p}$ are loopless simple graphs,
so are $G_{n, p}$ for all $n, p \ge 1$,
even if $G$ has multiple edges
(or had multiloops,
if we view a loop as adding degree $2$ to the incident vertex).
As a technical remark, if $G$ contains vertices of degree $1$,
then the intermediate graph $G'$ has loops but all graphs $G_{n, p}$
($n, p \ge 1$) do not.
Also note that all vertices in $G_{n, p}$ have degree
at most
$2 p + 1$,
which is independent of $n$.
Next, it is not hard to see that
\begin{gather*}
|V(G_{n, p})| = \sum_{u \in V(G)} d_u n (p + 1)
= 2 n (p + 1) |E(G)|, \\
|E(G_{n, p})| = |E(G)| + \sum_{u \in V(G)} 2 n p d_u
= (4 n p + 1) |E(G)|.
\end{gather*}
Hence the size of
the graphs $G_{n, p}$ is polynomially bounded in the size of $G$, $n$ and $p$.
Since we chose a fixed $p$,
and will choose $n$ to be bounded by a polynomial in the size of $G$,
whenever something is computable in polynomial time in $n$,
it is also computable in polynomial time in the size of $G$
(we will simply say in polynomial time).
We consider $Z_{A', \mathfrak D}(G)$, and substitute
$G$ by $G_{n, p}$.
We will make use of the edge weight matrix $L^{(n)}$ of $\mathcal P_{n, p}$
in (\ref{Pnp-edgeweightmatrix2}).
The vertices $F_i$ are of degree $2 p + 1$ each in $G_{n, p}$,
so will each contribute
a vertex weight according to the diagonal
matrix $D^{\llbracket 2 p + 1 \rrbracket}$
to the partition function, which are not included in $L^{(n)}$,
but now must be accounted for in $Z_{A', \mathfrak D}(G_{n, p})$.
Since $B$ is real symmetric and $D^{\llbracket 2 p \rrbracket}$ is positive diagonal,
the matrix
\[\widetilde B = (D^{\llbracket 2 p \rrbracket})^{1 / 2} B (D^{\llbracket 2 p \rrbracket})^{1 / 2}\]
is real symmetric.
Then $\widetilde B$ is orthogonally diagonalizable over $\mathbb{R}$, i.e.,
there exist a real orthogonal matrix $S$
and a real diagonal matrix $J = \operatorname{diag}(\lambda_i)_{i = 1}^s$ such that $\widetilde B = S^T J S$.
Then ${\widetilde B}^n = S^T J^n S$ so the edge weight matrix for
$\mathcal P_{n, p}$ becomes
\[
L^{(n)}
= (D^{\llbracket 2 p \rrbracket})^{-1 / 2} {\widetilde B}^n (D^{\llbracket 2 p \rrbracket})^{-1 / 2}
= (D^{\llbracket 2 p \rrbracket})^{-1 / 2} S^T J^n S (D^{\llbracket 2 p \rrbracket})^{-1 / 2}.
\]
Note that $L^{(n)}$ as a matrix is defined for any $n \ge 0$,
and $L^{(0)} = (D^{\llbracket 2 p \rrbracket})^{-1}$,
even though there is no physical gadget $\mathcal P_{0, p}$ that corresponds to it.
However, it is precisely this ``virtual'' gadget we wish to
``realize'' by interpolation.
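The virtual gadget is well defined at the matrix level: since $J^0 = I$ and $S$ is orthogonal, the spectral formula collapses to $(D^{\llbracket 2 p \rrbracket})^{-1}$ at $n = 0$. A quick numerical sanity check (toy sizes, illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
s = 3
B = rng.uniform(0.1, 1.0, (s, s)); B = B + B.T        # real symmetric
d = rng.uniform(0.5, 2.0, s)                           # diagonal of D^[[2p]]
Dh, Dmh = np.diag(np.sqrt(d)), np.diag(1 / np.sqrt(d))

lam, Q = np.linalg.eigh(Dh @ B @ Dh)                   # Btilde = Q diag(lam) Q^T
def L(n):
    # L^(n) = D^{-1/2} Btilde^n D^{-1/2}, defined as a matrix for every n >= 0
    return Dmh @ Q @ np.diag(lam ** n) @ Q.T @ Dmh

assert np.allclose(L(1), B)                            # the physical gadget P_{1,p}
assert np.allclose(L(0), np.diag(1 / d))               # virtual: (D^[[2p]])^{-1}
```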
Clearly, $\widetilde B$ is nondegenerate as $B$ and $(D^{\llbracket 2 p \rrbracket})^{1/2}$ both are, and so is $J$.
Then all $\lambda_i \ne 0$.
We can also write $L_{i j}^{(n)} = \sum_{\ell = 1}^s a_{i j \ell} \lambda_\ell^n$
for every $n \ge 0$ and some real $a_{i j \ell}$'s
which depend on $S$, $D^{\llbracket 2 p \rrbracket}$, but not on
$J$ and $n$, for all $1 \le i, j, \ell \le s$.
By the formal expansion of the symmetric matrix $L^{(n)}$ above,
we have $a_{i j \ell} = a_{j i \ell}$.
Note that for all $n, p \ge 1$,
the gadget $\mathcal R_{d_v, n, p}$ for $v \in V(G)$ employs exactly
$d_v$ copies of $\mathcal P_{n, p}$. Let
$t = \sum_{v \in V(G)} d_v = 2 |E|$; this is precisely the number of
edge gadgets $\mathcal P_{n, p}$ in $G_{n, p}$.
In the evaluation of the partition function $Z_{A', \mathfrak D}(G_{n, p})$,
we stratify the vertex assignments in $G_{n, p}$ as follows.
Denote by $\kappa = (k_{i j})_{1 \le i \le j \le s}$
a tuple of nonnegative integers, where the indexing is over all
$s(s+1)/2$ ordered pairs $(i, j)$. There are
a total of $\binom{t + s (s + 1) / 2 - 1}{s (s + 1) / 2 - 1}$
such tuples that satisfy $\sum_{1 \le i \le j \le s} k_{i j} = t$.
For a fixed $s$, this is a polynomial in $t$, and thus
a polynomial in the size of $G$.
Denote by $\mathcal K$ the set of all such
tuples $\kappa$.
We will stratify all vertex assignments in $G_{n, p}$
by $\kappa\in \mathcal K$, namely all assignments
such that there are exactly $k_{i j}$ many constituent edge gadgets $\mathcal P_{n, p}$ with the two end points (in either order of the end points)
assigned $i$ and $j$ respectively.
For each $\kappa \in \mathcal K$, the edge gadgets $\mathcal P_{n, p}$
in total contribute $\prod_{1 \le i \le j \le s} (L_{i j}^{(n)})^{k_{i j}}$
to the partition function $Z_{A', \mathfrak D}(G_{n, p})$.
If we factor this product out for each $\kappa \in \mathcal K$, we can express
$Z_{A', \mathfrak D}(G_{n, p})$ as a linear
combination of these products over all $\kappa \in \mathcal K$,
with polynomially many coefficient values $c_\kappa$
that are independent of all edge gadgets $\mathcal P_{n, p}$.
Another way to define these coefficients $c_\kappa$ is to think in terms of
$G'$: For any $\kappa = (k_{i j})_{1 \le i \le j \le s}
\in \mathcal K$,
we say a vertex assignment on $G'$ is consistent with $\kappa$
if it assigns exactly $k_{i j}$ many cycle edges of $G'$
(i.e., those that belong to the cycles that replaced vertices in $G$)
as ordered
pairs of vertices to the values $(i, j)$ or $(j,i)$.
(For any loop in $G'$, as a cycle of length $1$ that came from
a degree $1$ vertex of $G$, it can only be assigned $(i,i)$ for some
$1 \le i \le s$.)
Let $L'$ be any symmetric edge signature to be
assigned on each of these
cycle edges in $G'$,
and keep the edge signature $A'$ on the
merged dangling edges between any two such cycles,
and the suitable vertex
weights specified by $\mathfrak D$, namely each vertex receives its vertex weight according to $D^{\llbracket 2 p +1 \rrbracket}$.
Then $c_\kappa$
is the sum, over all assignments
consistent with $\kappa$, of the products of all
edge weights and vertex weights \emph{other than}
the contributions by $L'$, in the evaluation of
the partition function on $G'$.
In other words, for each $\kappa \in \mathcal K$,
\[
c_\kappa = \sum_{\substack{\zeta \colon V(G') \to [s] \\ \zeta \text{ is consistent with } \kappa}} \prod_{w \in V(G')} D_{\zeta(w)}^{\llbracket 2 p + 1 \rrbracket} \prod_{(u, v) \in \widetilde E} A'_{\zeta(u), \zeta(v)},
\]
where $\widetilde E \subseteq E(G')$ are the non-cycle edges of $G'$ that are in $1$-$1$ correspondence with $E(G)$.
In particular, the values $c_\kappa$ are independent of $n$.
Thus for some polynomially many values $c_\kappa$, where $\kappa
\in \mathcal K$, we have
\begin{equation}\label{stratification-isolating-L}
Z_{A', \mathfrak D}(G_{n, p}) = \sum_{\kappa \in \mathcal K} c_\kappa \prod_{1 \le i \le j \le s} (L_{i j}^{(n)})^{k_{i j}}
= \sum_{\kappa \in \mathcal K} c_\kappa \prod_{1 \le i \le j \le s} (\sum_{\ell = 1}^s a_{i j \ell} \lambda_\ell^n)^{k_{i j}}.
\end{equation}
Expanding out the last sum and rearranging the terms, for some
values $b_{i_1, \ldots, i_s}$ independent of $n$, we get
\[
Z_{A', \mathfrak D}(G_{n, p})
= \sum_{\substack{i_1 + \ldots + i_s = t \\ i_1, \ldots, i_s \ge 0}} b_{i_1, \ldots, i_s} ( \prod_{j = 1}^s \lambda_j^{i_j} )^n
\]
for all $n \ge 1$.
This represents a linear system with the unknowns $b_{i_1, \ldots, i_s}$
with the rows indexed by $n$.
The number of unknowns is clearly $\binom{t + s - 1}{s - 1}$
which is polynomial in the size of the input graph $G$ since $s$ is a constant.
The values $\prod_{j = 1}^s \lambda_j^{i_j}$
can be clearly computed in polynomial time.
We show how to compute the value
\[\sum_{\substack{i_1 + \ldots + i_s = t \\ i_1, \ldots, i_s \ge 0}} b_{i_1, \ldots, i_s}\]
from the values $Z_{A', \mathfrak D}(G_{n, p}),\, n \ge 1$ in polynomial time.
The coefficient matrix of this system is a Vandermonde matrix.
However, it can have repeating columns so it might not be of full rank
because the coefficients $\prod_{j = 1}^s \lambda_j^{i_j}$
do not have to be pairwise distinct.
When two of them are equal, say,
$\prod_{j = 1}^s \lambda_j^{i_j} =
\prod_{j = 1}^s \lambda_j^{i'_j}$, we replace
the corresponding unknowns $b_{i_1, \ldots, i_s}$ and $b_{i'_1, \ldots, i'_s}$
with their sum as a new variable.
Since all $\lambda_i \ne 0$,
we have a Vandermonde system of full rank after all such combinations.
Therefore we can solve this linear system
in polynomial time and find the desired value $\displaystyle \sum_{\scriptsize \substack{i_1 + \ldots + i_s = t \\ i_1, \ldots, i_s \ge 0}} b_{i_1, \ldots, i_s}$.
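This interpolation step is easy to illustrate numerically. The sketch below is our own; it uses hypothetical bases and coefficients in place of the true eigenvalue products $\prod_j \lambda_j^{i_j}$ and simulates the oracle values $Z_{A', \mathfrak D}(G_{n, p})$. It merges unknowns with equal bases and solves the resulting full-rank Vandermonde system, exactly as described above.

```python
import numpy as np

def recover_b_sum(bases, coeffs):
    """Given oracle values v_n = sum_k coeffs[k] * bases[k]**n for n >= 1
    (bases may repeat and are nonzero), merge unknowns with equal bases
    and solve the Vandermonde system to recover sum_k coeffs[k]."""
    merged = {}
    for x, b in zip(bases, coeffs):
        merged[x] = merged.get(x, 0.0) + b  # combine repeated columns
    xs = sorted(merged)                     # distinct nonzero bases
    K = len(xs)
    # simulated oracle values v_1, ..., v_K
    v = [sum(c * x ** n for x, c in merged.items()) for n in range(1, K + 1)]
    M = np.array([[x ** n for x in xs] for n in range(1, K + 1)])
    b_hat = np.linalg.solve(M, v)           # full rank: xs distinct and nonzero
    return b_hat.sum()

# repeated base 2.0; the recovered sum should be 1 + 4 + 2.5 - 1 = 6.5
print(recover_b_sum([2.0, 3.0, 2.0, 5.0], [1.0, 4.0, 2.5, -1.0]))
```
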
Now we will consider a problem in the framework of $Z_{\mathscr A, \mathscr D}$
according to Definition~\ref{def:EVAL(scrA,scrD)}.
Let $G_{0, p}$ be the (undirected) GH-grid,
with the underlying graph $G'$,
and every edge of the cycle in $G'$ corresponding
to a vertex in $V(G)$ is assigned
the edge weight matrix $(D^{\llbracket 2 p \rrbracket})^{-1}$,
and we keep the vertex-weight matrices $D^{\llbracket 2 p + 1 \rrbracket}$ at all vertices $F_i$.
The other edges, i.e., the original edges of $G$, each keep the assignment
of the edge weight matrix $A'$.
(So in the specification of $Z_{\mathscr A, \mathscr D}$,
we have $\mathscr A = \{(D^{\llbracket 2 p \rrbracket})^{-1}, A'\}$,
and $\mathscr D = \{D^{\llbracket 2 p + 1 \rrbracket}\}$.
We note that $G'$ may have loops, and
Definition~\ref{def:EVAL(scrA,scrD)} specifically allows this.)
Then
\[
Z_{\{ (D^{\llbracket 2 p \rrbracket})^{-1}, A' \}, D^{\llbracket 2 p + 1 \rrbracket}}(G_{0, p})
= \sum_{\substack{i_1 + \ldots + i_s = t \\ i_1, \ldots, i_s \ge 0}} b_{i_1, \ldots, i_s} ( \prod_{j = 1}^s \lambda_j^{i_j} )^0
= \sum_{\substack{i_1 + \ldots + i_s = t \\ i_1, \ldots, i_s \ge 0}} b_{i_1, \ldots, i_s}
\]
and we have just computed this value in polynomial time
in the size of $G$ from the values $Z_{A', \mathfrak D}(G_{n, p})$, for $n \ge 1$.
In other words, we have achieved it by querying
the oracle $\EVAL(A', \mathfrak D)$ on the instances $G_{n, p}$, for $n \ge 1$,
in polynomial time.
Equivalently, we have shown that we can simulate
a virtual ``gadget'' $\mathcal R_{d, 0, p}$
replacing every occurrence of $\mathcal R_{d, n, p}$
in $G_{n, p}$ in polynomial time.
The virtual gadget $\mathcal R_{d, 0, p}$ has
the edge signature $(D^{\llbracket 2 p \rrbracket})^{-1}$ in place of
$(D^{\llbracket 2 p \rrbracket})^{-1 / 2} {\widetilde B}^n
(D^{\llbracket 2 p \rrbracket})^{-1 / 2}$ in each $\mathcal P_{n, p}$, since
\[
(D^{\llbracket 2 p \rrbracket})^{-1 / 2} {\widetilde B}^0
(D^{\llbracket 2 p \rrbracket})^{-1 / 2}
= (D^{\llbracket 2 p \rrbracket})^{-1 / 2} I_s (D^{\llbracket 2 p \rrbracket})^{-1 / 2} = (D^{\llbracket 2 p \rrbracket})^{-1}.
\]
Additionally, each $F_i$ retains the vertex-weight contribution with the matrix $D^{\llbracket 2 p + 1 \rrbracket}$ in $\mathcal R_{d, 0, p}$.
We view it as having ``virtual'' degree $2 p + 1$.
This precisely results in the GH-grid $G_{0, p}$.
Although $G_{0, p}$ still retains the cycles,
$(D^{\llbracket 2 p \rrbracket})^{-1}$ is a diagonal matrix, so all
vertices $F_i$
in a cycle are forced to receive the same vertex
assignment value in the domain set $[s]$;
all other vertex assignments contribute
zero in the evaluation of $Z_{\{ (D^{\llbracket 2 p \rrbracket})^{-1}, A' \}, D^{\llbracket 2 p + 1 \rrbracket}}(G_{0, p})$.
This can be easily seen by traversing the vertices $F_1, \ldots, F_d$
in a cycle.
Hence we can view each cycle employing the
virtual gadget $\mathcal R_{d, 0, p}$ as a single vertex
that contributes only a diagonal matrix of positive vertex weights
$P^{\llbracket d \rrbracket} = (D^{\llbracket 2 p + 1 \rrbracket} (D^{\llbracket 2 p \rrbracket})^{-1})^d$, where $d$ is the vertex degree in $G$.
Contracting all the cycles
to a single vertex each,
we arrive at the original graph $G$.
Let $\mathfrak P = \{ P^{\llbracket i \rrbracket} \}_{i = 0}^\infty$,
where we let $P^{\llbracket 0 \rrbracket} = I_s$, and for $i>0$,
we have $P^{\llbracket i \rrbracket}_j = w_j^i$
where $w_j = \sum_{k = 1}^{m_j} \alpha_{j k} \mu_{j k}^{2 p + 1} / \sum_{k = 1}^{m_j} \alpha_{j k} \mu_{j k}^{2 p} > 0$
for $1 \le j \le s$.
This shows that we now can interpolate
the value $Z_{A', \mathfrak P}(G)$
using the values $Z_{A', \mathfrak D}(G_{n, p})$
in polynomial time in the size of $G$.
The graph $G$ is arbitrary but without isolated vertices here.
We show next how to deal with the case when $G$ has isolated vertices.
Given an arbitrary graph $G$, assume it has $h \ge 0$ isolated vertices.
Let $G^*$ denote the graph obtained from $G$ by their removal.
Then $G^*$ is of size not larger than $G$ and $h \le |V(G)|$.
Obviously, $Z_{A', \mathfrak P}(G) = (\sum_{i = 1}^s P_i^{\llbracket 0 \rrbracket})^h Z_{A', \mathfrak P}(G^*) = s^h Z_{A', \mathfrak P}(G^*)$, since $P^{\llbracket 0 \rrbracket} = I_s$.
Here the integer $s$ is a constant, so the factor $s^h > 0$ can be easily computed in polynomial time.
Thus, knowing the value $Z_{A', \mathfrak P}(G^*)$
we can compute the value $Z_{A', \mathfrak P}(G)$ in polynomial time.
Further, since we only use the graphs $G_{n, p}, n \ge 1$ during the interpolation,
each being simple of degree at most $2 p + 1$,
combining it with the possible isolated vertex removal step,
we conclude $\EVAL(A', \mathfrak P) \le_{\mathrm T}^{\mathrm P} \EVAL_{\simp}^{(2 p + 1)}(A', \mathfrak D)$.
Next, it is easy to see that for an arbitrary graph $G$
\begingroup
\allowdisplaybreaks
\begin{align*}
Z_{A', \mathfrak P}(G) &= \sum_{\zeta: V(G) \rightarrow [s]} \prod_{z \in V(G)} P^{\llbracket \deg(z) \rrbracket}_{\zeta(z)} \prod_{(u, v) \in E(G)} A'_{\zeta(u), \zeta(v)} \\
&= \sum_{\zeta: V(G) \rightarrow [s]} \prod_{z \in V(G)} w_{\zeta(z)}^{\deg(z)} \prod_{(u, v) \in E(G)} A'_{\zeta(u), \zeta(v)} \\
&= \sum_{\zeta: V(G) \rightarrow [s]} \prod_{(u, v) \in E(G)} w_{\zeta(u)} w_{\zeta(v)} A'_{\zeta(u), \zeta(v)} \\
&= \sum_{\zeta: V(G) \rightarrow [s]} \prod_{(u, v) \in E(G)} C_{\zeta(u), \zeta(v)} = Z_C(G).
\end{align*}
\endgroup
Here $C$ is an $s \times s$ matrix with
the entries $C_{i j} = A'_{i j} w_i w_j$ where $1 \le i, j \le s$.
Clearly, $C$ is a nonnegative symmetric matrix.
In the above chain of equalities,
we were able to \emph{redistribute the weights $w_i$
and $w_j$ into the edge weights} $A'_{i j}$
which resulted in the edge weights $C_{i j}$,
so that each edge $\{u,v\}$ in $G$ receives exactly two factors
$w_{\zeta(u)}$ and $w_{\zeta(v)}$, since the vertex weights at $u$ and $v$
were $w_{\zeta(u)}^{\deg(u)}$ and $w_{\zeta(v)}^{\deg(v)}$, respectively.
(This is a crucial step in our proof.)
Because the underlying graph $G$ is arbitrary, it follows that
$\EVAL(A', \mathfrak P)
\equiv_{\mathrm T}^{\mathrm P} \EVAL(C)$.
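The redistribution identity $Z_{A', \mathfrak P}(G) = Z_C(G)$ can be verified by brute force on small graphs. The sketch below is our own minimal implementation; a single positive weight vector `w` plays the role of the $w_i$ above, and both sides are computed directly from their definitions.

```python
from itertools import product

def Z_deg_weighted(A, w, V, E):
    """Z_{A', frak P}(G): vertex v contributes w[zeta(v)] ** deg(v)."""
    deg = {v: 0 for v in V}
    for u, v in E:
        deg[u] += 1
        deg[v] += 1
    total = 0.0
    for zeta in product(range(len(w)), repeat=len(V)):
        z = dict(zip(V, zeta))
        term = 1.0
        for x in V:
            term *= w[z[x]] ** deg[x]
        for u, v in E:
            term *= A[z[u]][z[v]]
        total += term
    return total

def Z_plain(C, V, E):
    """Ordinary partition function Z_C(G) with edge weights only."""
    total = 0.0
    for zeta in product(range(len(C)), repeat=len(V)):
        z = dict(zip(V, zeta))
        term = 1.0
        for u, v in E:
            term *= C[z[u]][z[v]]
        total += term
    return total

A = [[1.0, 2.0], [2.0, 0.5]]
w = [3.0, 0.5]
# redistribute the vertex weights into the edge weights: C_ij = A_ij * w_i * w_j
C = [[A[i][j] * w[i] * w[j] for j in range(2)] for i in range(2)]
V, E = ["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")]
print(abs(Z_deg_weighted(A, w, V, E) - Z_plain(C, V, E)) < 1e-6)  # True
```
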
Combining this with the previous $\EVAL$-reductions and equivalences,
we obtain
\[
\EVAL(C) \equiv_{\mathrm T}^{\mathrm P} \EVAL(A', \mathfrak P) \le_{\mathrm T}^{\mathrm P} \EVAL_{\simp}^{(2 p + 1)}(A', \mathfrak D) \equiv_{\mathrm T}^{\mathrm P} \EVAL_{\simp}^{(2 p + 1)}(A, D),
\]
so that $\EVAL(C) \le_{\mathrm T}^{\mathrm P} \EVAL_{\simp}^{(\Delta)}(A, D)$,
by taking $\Delta = 2 p + 1$.
Remembering that our goal is to prove
the \#P-hardness for the matrices $A, D$
not satisfying the tractability conditions of Theorem~\ref{thm:bd-hom-nonneg},
we finally use the assumption that $A$ is not block-rank-$1$.
Next, since all $\mu_{i j} > 0$, by construction $A'$ is not block-rank-$1$ either.
Finally, since all $w_i > 0$, the matrix $C$ is not block-rank-$1$ either,
which implies that $\EVAL(C)$ is \#P-hard by Theorem~\ref{thm:Bulatov-Grohe}.
Hence $\EVAL_{\simp}^{(2 p + 1)}(A, D)$ is also \#P-hard.
This completes the proof of the
\#P-hardness part of Theorem~\ref{thm:bd-hom-nonneg}.
We remark that
one important step in our interpolation proof happened
at the stratification step
before (\ref{stratification-isolating-L}).
In the proof we have the goal of redistributing
vertex weights to edge weights; but this redistribution is
sensitive to the degree of the vertices.
This led us to define
the auxiliary graph $G'$ and the coefficients $c_\kappa$.
Usually in an interpolation proof there are some coefficients
that have a clear combinatorial meaning in terms of the
original problem instance.
Here these values $c_\kappa$ do not have a clear combinatorial meaning in terms of
$Z_{A', \mathfrak D}(G)$; rather, they are defined in terms of
an intermediate problem instance $G'$, which is neither $G$ nor any of the
actually constructed graphs $G_{n, p}$.
It is only in a ``limiting'' sense that a certain combination
of these values $c_\kappa$
allows us to compute $Z_{A', \mathfrak D}(G)$.
\section{Dichotomy for bounded degree graphs}
In addition to their dichotomy (Theorem~\ref{thm:Dyer-Greenhill}),
Dyer and Greenhill proved in the same paper~\cite{Dyer-Greenhill-2000}
that the \#P-hardness part of their dichotomy
holds for bounded degree graphs.
The bounded degree case of the Bulatov-Grohe dichotomy
(Theorem~\ref{thm:Bulatov-Grohe})
was left open,
and all known proofs~\cite{Bulatov-Grohe-2005,Thurley-2009,Grohe-Thurley-2011} of its \#P-hardness
part require unbounded degree graphs.
All subsequent dichotomies that use the Bulatov-Grohe dichotomy,
e.g.,~\cite{Goldberg-et-al-2010,Cai-Chen-Lu-2013} also explicitly
or implicitly
(because of their dependence on the Bulatov-Grohe dichotomy)
require unbounded degree graphs.
In this paper, we extend the \#P-hardness part of the Bulatov-Grohe dichotomy
to bounded degree graphs.
\begin{theorem}\label{thm:bd-hom-nonneg-weak}
Let $A$ be a symmetric nonnegative matrix.
If $A$ is not block-rank-$1$, then for some $\Delta > 0$,
the problem $\EVAL^{(\Delta)}(A)$ is \#P-hard.
\end{theorem}
The degree bound $\Delta$ proved
in Theorem~\ref{thm:bd-hom-nonneg-weak} depends on $A$,
as is the case in Theorem~\ref{thm:Dyer-Greenhill}.
The authors
of~\cite{Dyer-Greenhill-2000} conjectured that a universal bound $\Delta =3$
works for Theorem~\ref{thm:Dyer-Greenhill}; whether
a universal bound exists for both
Theorems~\ref{thm:Dyer-Greenhill} and~\ref{thm:bd-hom-nonneg-weak} is open.
For general symmetric real or complex $A$, it is open
whether bounded degree versions of the dichotomies
in~\cite{Goldberg-et-al-2010} and~\cite{Cai-Chen-Lu-2013} hold.
Xia~\cite{MingjiXia} proved that a universal bound
does not exist for complex symmetric matrices $A$, assuming \#P
does not collapse to P.
We prove a broader dichotomy than Theorem~\ref{thm:bd-hom-nonneg-weak},
which also includes arbitrary
nonnegative vertex weights.
\begin{theorem}\label{thm:bd-hom-nonneg}
Let $A$ and $D$ be $m \times m$ nonnegative matrices, where $A$ is symmetric, and $D$ is diagonal.
Let $A'$ be the matrix obtained from $A$ by
striking out rows and columns that correspond to
$0$ entries of $D$ on the diagonal.
If $A'$ is block-rank-$1$, then the problem $\EVAL(A, D)$ is in polynomial time.
Otherwise, for some $\Delta > 0$, the problem $\EVAL_{\simp}^{(\Delta)}(A, D)$ is \#P-hard.
\end{theorem}
Every $0$ entry of $D$ on the diagonal effectively nullifies
the corresponding domain element in $[m]$, so the problem
becomes an equivalent problem on the reduced domain. Thus, for a nonnegative diagonal
$D$, without
loss of generality, we may assume the domain has already been reduced
so that $D$ is positive diagonal. In what follows, we will make this
assumption.
In Appendix~\ref{sec:Tractability-part}, we will prove the
tractability part of
Theorem~\ref{thm:bd-hom-nonneg}. This follows easily from known results.
In Appendix~\ref{sec:Two-technical-lemmas},
we will present two technical lemmas,
Lemma~\ref{lem:ADA-pairwise-lin-ind} and Lemma~\ref{lem:ADA-nondeg-thick}
to be used in
Section~\ref{sec:Hardness-proof}. Finally, in Appendix~\ref{sec:Goldberg-et-al-2010-dichotomy} we prove
Theorem~\ref{thm:EVAL-simp-interp},
showing that the \#P-hardness part of the dichotomy for counting GH
by Goldberg et al.~\cite{Goldberg-et-al-2010} for
real symmetric matrices (with mixed signs) is also valid for simple graphs.
\section{Preliminaries}
In order to state all our complexity results in the strict notion of Turing
computability, we adopt the standard model~\cite{Lenstra-1992} of computation for
partition functions, and require that all numbers be from an arbitrary but fixed
algebraic extension of $\mathbb{Q}$. We use
$\mathbb R$ and $\mathbb C$ to denote the sets
of real and complex algebraic numbers.
Many statements remain true in other fields or rings
if arithmetic operations can be carried out efficiently
in a model of computation
(see~\cite{cai-chen-book} for more discussions on this issue).
For a positive integer $n$, we use $[n]$
to denote the set $\{1, \ldots, n \}$.
When $n = 0$, $[0] = \emptyset$.
We use $[m:n]$, where $m \le n$,
to denote $\{ m, m + 1, \ldots, n \}$.
In this paper, we consider undirected graphs
unless stated otherwise.
Following standard definitions,
the graph $G$ is allowed to have multiple edges but no loops.
(However, we will touch on this issue a few times when
$G$ is allowed to have loops.)
The graph $H$ can have multiple edges and loops,
or more generally, edge weights.
For the graph $H$, we treat its loops as edges.
An edge-weighted graph $H$ on $m$ vertices can be identified with
a symmetric $m \times m$ matrix $A$ in the obvious way.
We write this correspondence by $H = H_A$ and $A = A_H$.
\begin{definition}\label{def:EVAL(A)}
Let $A \in \mathbb C^{m \times m}$ be a symmetric matrix.
The problem $\EVAL(A)$ is defined as follows:
Given an undirected graph $G = (V, E)$, compute
\[
Z_A(G) = \sum_{\xi: V \rightarrow [m]} \prod_{(u, v) \in E} A_{\xi(u), \xi(v)}.
\]
\end{definition}
The function $Z_A(\cdot)$ is called a \emph{graph homomorphism function} or a \emph{partition function}.
When $A$ is a symmetric $\{0, 1\}$-matrix, i.e.,
when the graph $H = H_A$ is unweighted,
$Z_A(G)$ counts
the number of homomorphisms from $G$ to $H$.
In this case, we denote $\EVAL(H) = \EVAL(A_H)$, and this problem is also
known as the \#$H$-coloring problem.
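For concreteness, $Z_A(G)$ can be evaluated by brute force in time $O(m^{|V|})$, summing exactly as in Definition~\ref{def:EVAL(A)}. The sketch below (names and graph encoding are our own) illustrates the \#$H$-coloring case $H = K_2$, where $Z_A(G)$ counts proper $2$-colorings.

```python
from itertools import product

def Z(A, G, m):
    """Brute-force partition function: sum over all maps xi : V -> [m]
    of the product of A[xi(u)][xi(v)] over the edges of G."""
    V, E = G
    total = 0
    for xi in product(range(m), repeat=len(V)):
        z = dict(zip(V, xi))
        term = 1
        for u, v in E:
            term *= A[z[u]][z[v]]
        total += term
    return total

A = [[0, 1], [1, 0]]  # H = K_2 (one edge, no loops)
path = (["a", "b", "c"], [("a", "b"), ("b", "c")])
print(Z(A, path, 2))  # 2: a 3-vertex path has two proper 2-colorings
```

A triangle, being non-bipartite, admits no homomorphism to $K_2$, so the same function returns $0$ on it.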
\begin{theorem}[Dyer and Greenhill \cite{Dyer-Greenhill-2000}]\label{thm:Dyer-Greenhill}
Let $H$ be a fixed undirected graph.
Then $\EVAL(H)$ is in polynomial time
if every connected component of $H$ is either
(1) an isolated vertex, or (2) a complete graph with all loops present,
or (3) a complete bipartite graph with no loops present.
Otherwise, the problem $\EVAL(H)$ is \#P-complete.
\end{theorem}
Bulatov and Grohe~\cite{Bulatov-Grohe-2005} extended Theorem~\ref{thm:Dyer-Greenhill} to
$\EVAL(A)$ where $A$ is a symmetric matrix with nonnegative entries.
In order to state their result, we need to define a few notions first.
We say a nonnegative symmetric $m \times m$ matrix $A$ is \textit{rectangular} if
there are pairwise disjoint nonempty subsets of $[m]$:
$T_1, \ldots, T_r, P_1, \ldots, P_s, Q_1, \ldots, Q_s$,
for some $r, s\ge 0$, such that $A_{i, j} > 0$ iff
\[
(i, j) \in \bigcup_{k \in [r]} (T_k \times T_k) \cup \bigcup_{l \in [s]} [(P_l \times Q_l) \cup (Q_l \times P_l)].
\]
We refer to $T_k \times T_k, P_l \times Q_l$ and $Q_l \times P_l$ as blocks of $A$.
Further, we say a nonnegative symmetric matrix $A$ is \textit{block-rank-$1$}
if $A$ is rectangular and every block of $A$ has rank one.
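The block-rank-$1$ condition can be tested directly. The sketch below uses a $2 \times 2$-submatrix criterion which, to our understanding, is equivalent to the definition above: no $2 \times 2$ submatrix of $A$ has exactly three positive entries (this enforces rectangularity), and every $2 \times 2$ submatrix with four positive entries is singular (this enforces rank one on each block).

```python
import numpy as np

def is_block_rank_one(A, tol=1e-9):
    """Test the 2x2-submatrix criterion for a nonnegative symmetric A:
    no 2x2 submatrix with exactly three positive entries, and every
    2x2 submatrix with four positive entries is singular."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    for i in range(m):
        for k in range(i + 1, m):
            for j in range(m):
                for l in range(j + 1, m):
                    sub = A[np.ix_([i, k], [j, l])]
                    pos = int((sub > tol).sum())
                    if pos == 3:
                        return False  # support is not rectangular
                    if pos == 4 and abs(np.linalg.det(sub)) > tol:
                        return False  # a block of rank >= 2
    return True

print(is_block_rank_one([[1, 2], [2, 4]]))  # True: one rank-1 block
print(is_block_rank_one([[1, 1], [1, 2]]))  # False: nonsingular 2x2 block
```
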
\begin{theorem}[Bulatov and Grohe \cite{Bulatov-Grohe-2005}]\label{thm:Bulatov-Grohe}
Let $A$ be a symmetric matrix with nonnegative entries.
Then $\EVAL(A)$ is in polynomial time if $A$ is block-rank-$1$,
and is \#P-hard otherwise.
\end{theorem}
There is a natural extension of $\EVAL(A)$
involving the use of vertex weights. Both
papers~\cite{Dyer-Greenhill-2000,Bulatov-Grohe-2005} use them
in their proofs.
A graph $H$ on $m$ vertices with vertex and edge weights is
identified with
a symmetric $m \times m$ edge weight matrix $A$ and
a diagonal $m \times m$ vertex weight matrix $D =
\operatorname{diag}(D_1, \ldots, D_m)$ in a natural way.
Then the problem $\EVAL(A)$ can be generalized
to $\EVAL(A, D)$ for vertex-edge-weighted graphs.
\begin{definition}\label{def:EVAL(A,D)}
Let $A \in \mathbb C^{m \times m}$ be a symmetric matrix
and $D \in \mathbb C^{m \times m}$ a diagonal matrix.
The problem $\EVAL(A, D)$ is defined as follows:
Given an undirected graph $G = (V, E)$, compute
\[
Z_{A, D}(G) = \sum_{\xi: V \rightarrow [m]} \prod_{w \in V} D_{\xi(w)} \prod_{(u, v) \in E} A_{\xi(u), \xi(v)}.
\]
\end{definition}
Note that $\EVAL(A)$ is the special case $\EVAL(A, I_m)$.
We also need to define another $\EVAL$ problem where
the vertex weights are specified by the
degree.
\begin{definition}\label{def:EVAL(A,frakD)}
Let $A \in \mathbb C^{m \times m}$ be a symmetric matrix and
$\mathfrak D = \{ D^{\llbracket i \rrbracket} \}_{i = 0}^\infty$ a sequence of diagonal matrices in $\mathbb C^{m \times m}$.
The problem $\EVAL(A, \mathfrak D)$ is defined as follows:
Given an undirected graph $G = (V, E)$, compute
\[
Z_{A, \mathfrak D}(G) = \sum_{\xi: V \rightarrow [m]} \prod_{w \in V} D_{\xi(w)}^{\llbracket \deg(w) \rrbracket} \prod_{(u, v) \in E} A_{\xi(u), \xi(v)}.
\]
\end{definition}
Finally, we need to define a general $\EVAL$ problem,
where the vertices and edges can individually take specific weights.
Let $\mathscr A$ be a set of (edge weight) $m \times m$ matrices and
$\mathscr D$ a set of diagonal (vertex weight) $m \times m$ matrices.
A GH-grid $\Omega = (G, \rho)$ consists of
a graph $G = (V, E)$ with possibly both directed and undirected edges,
and loops,
and $\rho$ assigns to each edge $e \in E$ or loop an
$A^{(e)} \in \mathscr A$
and to each vertex $v \in V$
a $D^{(v)} \in \mathscr D$.
(A loop is just an edge of the form $(v,v)$.)
If $e \in E$
is a directed edge then the tail and head correspond to rows and columns of
$A^{(e)}$, respectively;
if $e \in E$
is an undirected edge then $A^{(e)}$ must be
symmetric.
\begin{definition}\label{def:EVAL(scrA,scrD)}
The problem $\EVAL(\mathscr A, \mathscr D)$ is defined as follows:
Given a GH-grid $\Omega = \Omega(G)$, compute
\[
Z_{\mathscr A, \mathscr D}(\Omega) =
\sum_{\xi \colon V \to [m]} \prod_{w \in V} D_{\xi(w)}^{(w)} \prod_{e = (u, v) \in E} A_{\xi(u), \xi(v)}^{(e)}
\]
\end{definition}
We remark that $Z_{\mathscr A, \mathscr D}$ is introduced
only as a tool to express a certain quantity in a ``virtual''
interpolation;
the dichotomy theorems do not
apply to this.
Definitions~\ref{def:EVAL(A,frakD)} and~\ref{def:EVAL(scrA,scrD)}
are carefully crafted in order to carry out
the \#P-hardness part of the proof of Theorem~\ref{thm:bd-hom-nonneg}.
Notice that the problem $\EVAL(\mathscr A, \mathscr D)$ generalizes both
problems $\EVAL(A)$ and $\EVAL(A, D)$, by taking $\mathscr A$ to
be a single symmetric matrix, and
by taking $\mathscr D$ to be a single diagonal matrix.
But $\EVAL(A, \mathfrak D)$ is not naturally
expressible as $\EVAL(\mathscr A, \mathscr D)$ because the
latter does not force the
vertex-weight matrix on a vertex according to its degree.
We refer to $[m]$ as the domain of the corresponding $\EVAL$ problem.
If $\mathscr A = \{ A \}$ or $\mathscr D = \{ D \}$, then
we simply write $Z_{A, \mathscr D}(\cdot)$ or $Z_{\mathscr A, D}(\cdot)$, respectively.
We use a superscript $(\Delta)$ and/or a subscript $\simp$
to denote the restriction of a corresponding
$\EVAL$ problem
to degree-$\Delta$ bounded graphs
and/or simple graphs.
E.g., $\EVAL^{(\Delta)}(A)$ denotes the problem $\EVAL(A)$
restricted to degree-$\Delta$ bounded graphs,
$\EVAL_{\simp}(A, \mathfrak D)$ denotes the problem
$\EVAL(A, \mathfrak D)$ restricted to simple graphs,
and both restrictions apply in $\EVAL_{\simp}^{(\Delta)}(A, \mathfrak D)$.
Working within the framework of $\EVAL(A, D)$,
we define an edge gadget to be a graph with two distinguished vertices,
called $u^*$ and $v^*$.
An edge gadget $G = (V, E)$ has a signature (edge weight matrix)
expressed by an $m \times m$ matrix $F$, where
\[
F_{i j} = \sum_{\substack{\xi \colon V \to [m] \\ \xi(u^*) = i,\, \xi(v^*) = j}} \prod_{z \in V \setminus \{ u^*, v^* \}} D_{\xi(z)} \prod_{(x, y) \in E} A_{\xi(x), \xi(y)}
\]
for $1 \le i, j \le m$.
When this gadget is placed in a graph identifying $u^*$ and $v^*$
with two vertices $u$ and $v$ in that graph, then $F$ is the signature
matrix for the pair $(u, v)$.
Note that the vertex weights corresponding to $u$ and $v$
are excluded from the product in the definition of $F$.
Similar definitions can be introduced for
$\EVAL(A)$, $\EVAL(A, \mathfrak D)$ and $\EVAL(\mathscr A, \mathscr D)$.
We use $\le_{\mathrm T}^{\mathrm P}$ (and $\equiv_{\mathrm T}^{\mathrm P}$)
to denote polynomial-time Turing reductions (and equivalences, respectively).
Two simple
operations are known as \emph{thickening} and \emph{stretching}.
Let $p, r \ge 1$ be integers.
A $p$-\emph{thickening} of an edge
replaces it by $p$ parallel edges,
and
an $r$-\emph{stretching} replaces it by a path
of length $r$.
In both cases we retain the endpoints $u, v$.
The $p$-\emph{thickening} and $r$-\emph{stretching}
of $G$ with respect to $F \subseteq E(G)$,
denoted by $T_p^{(F)}(G)$ and $S_r^{(F)}(G)$ respectively,
are obtained by $p$-\emph{thickening} or $r$-\emph{stretching}
each edge from $F$, respectively.
Other edges, if any, are unchanged in both cases.
When $F = E(G)$, we call them the $p$-\emph{thickening} and $r$-\emph{stretching} of $G$
and denote them by $T_p(G)$ and $S_r(G)$, respectively.
$T_p e$ and $S_r e$ are the special cases when the graph consists of
a single edge $e$.
See Figure \ref{fig:thickening-stretching} for an illustration.
Thickenings and stretchings can be combined in any order.
Examples are shown in Figure~\ref{fig:thickenings-stretchings-composition}.
\begin{figure}[t]
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics{figure1.pdf}
\end{subfigure}\hfill
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics{figure2.pdf}
\end{subfigure}
\caption{\label{fig:thickening-stretching}The thickening $T_p e$ and the stretching $S_r e$ of an edge $e = (u, v)$.}
\end{figure}
\begin{figure}[t]
\begin{subfigure}{0.6\textwidth}
\centering
\includegraphics{figure3.pdf}
\end{subfigure}\hfill
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics{figure4.pdf}
\end{subfigure}
\caption{\label{fig:thickenings-stretchings-composition}The graphs $T_4 S_5 e$ (on the left) and $S_5 T_4 e$ (on the right) where $e = (u, v)$.}
\end{figure}
For a matrix $A$, we denote by $A^{\odot p}$ the matrix
obtained by replacing each entry of $A$ with its $p$th power.
Clearly, $Z_A(T_p G) = Z_{A^{\odot p}}(G)$ and $Z_A(S_r G) = Z_{A^r}(G)$.
More generally, for the vertex-weighted case,
we have $Z_{A, D}(T_p G) = Z_{A^{\odot p}, D}(G)$ and $Z_{A, D}(S_r G) = Z_{A (D A)^{r - 1}, D}(G)$.
Here $(D A)^0 = I_m$ if $A$ and $D$ are $m \times m$.
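Both identities are easy to confirm numerically. The sketch below is our own minimal implementation: it checks the stretching identity $Z_{A, D}(S_r G) = Z_{A (D A)^{r - 1}, D}(G)$ and the thickening identity $Z_{A, D}(T_p G) = Z_{A^{\odot p}, D}(G)$ on a small path.

```python
from itertools import product
import numpy as np

def Z(A, D, V, E):
    """Z_{A,D}(G) = sum over xi of prod_v D[xi(v)] * prod_e A[xi(u)][xi(v)]."""
    m = len(D)
    total = 0.0
    for xi in product(range(m), repeat=len(V)):
        z = dict(zip(V, xi))
        term = 1.0
        for v in V:
            term *= D[z[v]]
        for u, v in E:
            term *= A[z[u]][z[v]]
        total += term
    return total

def stretch(V, E, r):
    """S_r(G): replace each edge by a path of length r (new inner vertices)."""
    V2, E2 = list(V), []
    for idx, (u, v) in enumerate(E):
        path = [u] + [f"mid{idx}_{i}" for i in range(r - 1)] + [v]
        V2 += path[1:-1]
        E2 += list(zip(path, path[1:]))
    return V2, E2

A = np.array([[1.0, 2.0], [2.0, 3.0]])
D = [0.5, 2.0]
V, E = ["a", "b", "c"], [("a", "b"), ("b", "c")]

r = 3  # stretching: Z_{A,D}(S_r G) = Z_{A (D A)^{r-1}, D}(G)
Ar = A @ np.linalg.matrix_power(np.diag(D) @ A, r - 1)
lhs = Z(A, D, *stretch(V, E, r))
rhs = Z(Ar, D, V, E)
print(abs(lhs - rhs) < 1e-6)  # True

# thickening: Z_{A,D}(T_2 G) = Z_{A^{odot 2}, D}(G); E * 2 doubles each edge
lhs_t = Z(A, D, V, E * 2)
rhs_t = Z(A * A, D, V, E)  # entrywise (Hadamard) square
print(abs(lhs_t - rhs_t) < 1e-6)  # True
```
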
\section{Two technical lemmas}\label{sec:Two-technical-lemmas}
We need two technical lemmas. The following lemma is from \cite{Dyer-Greenhill-2000}
(Lemma 3.6); for the convenience of readers we give a proof here.
\begin{lemma}\label{lem:ADA-pairwise-lin-ind}
Let $A$ and $D$ be $m \times m$ matrices,
where $A$ is real symmetric
with all columns nonzero and pairwise linearly independent,
and $D$ is positive diagonal.
Then all columns of $A D A$ are nonzero and pairwise linearly independent.
\end{lemma}
\begin{proof}
The case $m = 1$ is trivial. Assume $m \ge 2$.
Let $D = \operatorname{diag}(\alpha_i)_{i = 1}^m$, and $\Pi = \operatorname{diag}(\sqrt{\alpha_i})_{i = 1}^m$.
Then $\Pi^2 = D$.
We have $A D A = Q^T Q$, where $Q = \Pi A$.
Let $q_i$ denote the $i$th column of $Q$.
Since $\Pi$ is invertible, the columns of $Q$ are nonzero and pairwise linearly independent.
By the Cauchy-Schwarz inequality,
\[
|q_i^T q_j| < \left( (q_i^T q_i) (q_j^T q_j) \right)^{1 / 2},
\]
whenever $i \ne j$; the inequality is strict because $q_i$ and $q_j$ are linearly independent.
Then for any $1 \le i < j \le m$, the $i$th and $j$th columns of $A D A$
contain a submatrix
\[
\begin{bmatrix}
q_i^T q_i & q_i^T q_j \\
q_i^T q_j & q_j^T q_j
\end{bmatrix},
\]
whose determinant $(q_i^T q_i)(q_j^T q_j) - (q_i^T q_j)^2$ is positive,
so these two columns are nonzero and linearly independent.
\end{proof}
The following is also adapted from~\cite{Dyer-Greenhill-2000} (Theorem 3.1).
\begin{lemma}\label{lem:ADA-nondeg-thick}
Let $A$ and $D$ be $m \times m$ matrices,
where $A$ is real symmetric
with all columns nonzero and pairwise linearly independent,
and $D$ is positive diagonal.
Then for all sufficiently large positive integers
$p$, the matrix $B = (A D A)^{\odot p}$
corresponding to the edge gadget in Figure~\ref{fig:ADA-to-p-gadget-advanced} is nondegenerate.
\end{lemma}
\begin{proof}
If $m = 1$, then any $p \ge 1$ works. Let $m \ge 2$.
Following the proof of Lemma~\ref{lem:ADA-pairwise-lin-ind},
we have
$|q_i^T q_j| < \sqrt{(q_i^T q_i) (q_j^T q_j)}$,
for all $1 \le i < j \le m$. Let
\[
\gamma = \max_{1 \le i < j \le m} \frac{|q_i^T q_j|}{\sqrt{(q_i^T q_i) (q_j^T q_j)}} < 1.
\]
Let $A' = A D A = Q^T Q$ so $A'_{i j} = q_i^T q_j$.
Then $|A'_{i j}| \le \gamma \sqrt{A'_{i i} A'_{j j}}$ for all $i \ne j$.
Consider the determinant of $A'$.
Each term of $\det(A')$ has the form
\[
\pm \prod_{i = 1}^m A'_{i \sigma(i)},
\]
where $\sigma$ is a permutation of $[m]$.
Denote $t(\sigma) = |\{ i \mid \sigma(i) \ne i \}|$. Then
\[
\left| \prod_{i = 1}^m A'_{i \sigma(i)} \right| \le \gamma^{t(\sigma)} \prod_{i = 1}^m \sqrt{A'_{i i}} \prod_{i = 1}^m \sqrt{A'_{\sigma(i) \sigma(i)}} = \gamma^{t(\sigma)} \prod_{i = 1}^m A'_{i i}.
\]
Consider the entrywise power $(A')^{\odot p}$ for $p \ge 1$, corresponding to a $p$-thickening.
Each term of $\det\left((A')^{\odot p}\right)$ has the form
$\pm \prod_{i = 1}^m (A'_{i \sigma(i)})^p$
for some permutation $\sigma$ of $[m]$.
Now
\[
| \{ \sigma \mid t(\sigma) = j \} | \le \binom{m}{j} j! \le m^j,
\]
for $0 \le j \le m$.
By separating out the identity permutation
and all other terms, for $p \ge \lfloor \ln(2 m) / \ln(1 / \gamma) \rfloor + 1$,
we have $2 m \gamma^p < 1$, and
\begin{align*}
\det\left((A')^{\odot p}\right) &\ge \left( \prod_{i = 1}^m A'_{i i} \right)^p - \left( \prod_{i = 1}^m A'_{i i} \right)^p \sum_{j = 1}^m m^j \gamma^{p j} \\
&\ge \left( \prod_{i = 1}^m A'_{i i} \right)^p \left( 1 - \frac{m \gamma^p}{1 - m \gamma^p} \right) = \left( \prod_{i = 1}^m A'_{i i} \right)^p \left( \frac{1 - 2 m \gamma^p}{1 - m \gamma^p} \right) > 0.
\end{align*} \end{proof}
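A small numerical illustration of Lemma~\ref{lem:ADA-nondeg-thick} (our own example, not from the paper): the matrix $A$ below is singular (its middle column is the sum of the other two) yet has nonzero, pairwise linearly independent columns, so $A D A$ is degenerate, while its entrywise powers become nondegenerate already at $p = 2$.

```python
import numpy as np

# A real symmetric and singular (col2 = col1 + col3), but its columns are
# nonzero and pairwise linearly independent; D = I for simplicity.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])
D = np.diag([1.0, 1.0, 1.0])
Aprime = A @ D @ A  # degenerate: det(ADA) = det(A)^2 det(D) = 0

for p in range(1, 5):
    B = Aprime ** p  # entrywise power (A D A)^{odot p}
    print(p, abs(np.linalg.det(B)) > 1e-9)  # p = 1: False; p >= 2: True
```
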
\begin{figure}
\centering
\includegraphics{figure8.pdf}
\caption{\label{fig:ADA-to-p-gadget-advanced}The edge gadget $S_2 T_p e,\, e = (u, v)$ with the edge weight matrix $(A D A)^{\odot p}$.}
\end{figure}
\subsection{Tractability part}
| {
"timestamp": "2020-02-07T02:02:28",
"yymm": "2002",
"arxiv_id": "2002.02021",
"language": "en",
"url": "https://arxiv.org/abs/2002.02021",
"abstract": "We consider the complexity of counting weighted graph homomorphisms defined by a symmetric matrix $A$. Each symmetric matrix $A$ defines a graph homomorphism function $Z_A(\\cdot)$, also known as the partition function. Dyer and Greenhill [10] established a complexity dichotomy of $Z_A(\\cdot)$ for symmetric $\\{0, 1\\}$-matrices $A$, and they further proved that its #P-hardness part also holds for bounded degree graphs. Bulatov and Grohe [4] extended the Dyer-Greenhill dichotomy to nonnegative symmetric matrices $A$. However, their hardness proof requires graphs of arbitrarily large degree, and whether the bounded degree part of the Dyer-Greenhill dichotomy can be extended has been an open problem for 15 years. We resolve this open problem and prove that for nonnegative symmetric $A$, either $Z_A(G)$ is in polynomial time for all graphs $G$, or it is #P-hard for bounded degree (and simple) graphs $G$. We further extend the complexity dichotomy to include nonnegative vertex weights. Additionally, we prove that the #P-hardness part of the dichotomy by Goldberg et al. [12] for $Z_A(\\cdot)$ also holds for simple graphs, where $A$ is any real symmetric matrix.",
"subjects": "Computational Complexity (cs.CC)",
"title": "A dichotomy for bounded degree graph homomorphisms with nonnegative weights",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9861513901914406,
"lm_q2_score": 0.7185943805178139,
"lm_q1q2_score": 0.7086428473313993
} |
https://arxiv.org/abs/2202.12045 | Pushing Blocks by Sweeping Lines | We investigate the reconfiguration of $n$ blocks, or "tokens", in the square grid using "line pushes". A line push is performed from one of the four cardinal directions and pushes all tokens that are maximum in that direction to the opposite direction. Tokens that are in the way of other tokens are displaced in the same direction, as well.Similar models of manipulating objects using uniform external forces match the mechanics of existing games and puzzles, such as Mega Maze, 2048 and Labyrinth, and have also been investigated in the context of self-assembly, programmable matter and robotic motion planning. The problem of obtaining a given shape from a starting configuration is know to be NP-complete.We show that, for every $n$, there are "sparse" initial configurations of $n$ tokens (i.e., where no two tokens are in the same row or column) that can be rearranged into any $a\times b$ box such that $ab=n$. However, only $1\times k$, $2\times k$ and $3\times 3$ boxes are obtainable from any arbitrary sparse configuration with a matching number of tokens. We also study the problem of rearranging labeled tokens into a configuration of the same shape, but with permuted tokens. For every initial "compact" configuration of the tokens, we provide a complete characterization of what other configurations can be obtained by means of line pushes. |
\subsection{All Feasible Permutations Are Even}\label{s:4.1}
In this section, we will prove Theorem~\ref{thm:even}, which states that only \emph {even} permutations are possible in the Permutation Puzzle.
For a labeled canonical configuration $C$, let $C'$ be an extension of the labeling where also the empty cells inside the bounding box of $C$ get a unique label.
Our proof strategy is to first extend permutations of $C$ to permutations of $C'$, and argue that a permutation of full and empty cells must be even.
We then introduce a \emph {dual game} played on the empty cells only, and argue that this dual game has similar properties.
Since the dual of any game is always smaller (in terms of bounding box) than the original, our theorem then follows by induction.
\subsubsection* {Permutations on Full Cells and Empty Cells}
Let $C$ be a labeled canonical configuration; that is, a function $C\colon \mathcal{L} \to \Sigma \cup \{{\rm empty}\}$.
We extend $C$ to another function $C'\colon \mathcal{L} \to \Sigma \cup \Sigma' \cup \{{\rm empty}\}$, where $\Sigma'$ is a second set of unique labels and $C'(x) \in \Sigma'$ if and only if $x$ is in the bounding box of $C$, but not in $C$ (for all other $x$, $C'(x)=C(x)$).
We now define the effect of a push operation on $C'$ (illustrated in Figure~\ref {fig:empty-labels}). We define it for a $\rightpush$ push; the other directions are symmetric. A single $\rightpush$ push affects each row as follows:
\begin {itemize}
\item For each row in which the rightmost cell is \emph {empty}, we shift all tokens and empty cells one position to the right and place the rightmost empty cell at the left; in other words, we perform a single cyclic permutation on the tokens in the row.
\item For each row in which the rightmost cell is \emph {full}, nothing changes.
\end {itemize}
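The effect of a $\rightpush$ push on $C'$ can be sketched in a few lines of Python (the grid encoding and the convention that empty-cell labels start with \texttt{"e"} are ours, purely for illustration):

```python
def push_right(grid):
    """One right push on a labeled bounding box: each row whose rightmost
    cell is empty is cyclically shifted one step to the right; rows whose
    rightmost cell is full are unchanged."""
    out = []
    for row in grid:
        if str(row[-1]).startswith("e"):        # rightmost cell is empty
            out.append([row[-1]] + row[:-1])    # one cyclic permutation
        else:
            out.append(list(row))
    return out

# an L-shaped example: tokens 1..3, one labeled empty cell "e0"
grid = [[1, "e0"],
        [2, 3]]
print(push_right(grid))  # [['e0', 1], [2, 3]]
```
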
\begin {figure}
\centering
\includegraphics [scale=0.94] {pics/q00.pdf}
\includegraphics [scale=0.94] {pics/q01.pdf}
\includegraphics [scale=0.94] {pics/q02.pdf}
\includegraphics [scale=0.94] {pics/q03.pdf}
\includegraphics [scale=0.94] {pics/q04.pdf}
\caption {A configuration with labeled empty cells and the result after the sequence of pushes $\langle\, \rightpush\ \rightpush\ \rightpush\ \uppush\, \rangle$. We use yellow for full tokens and light blue for empty cells (and later, dual tokens) in this section.}
\label {fig:empty-labels}
\end {figure}
We now argue that for any sequence of pushes which transforms $C$ into another canonical configuration, the effect of these moves on $C'$ must be an even permutation.
\begin {lemma} \label {lem:all-even}
Every achievable permutation on both the full and empty cells must be even.
\end {lemma}
\begin {proof}
By definition, every horizontal (resp., vertical) push causes a cyclic permutation on some rows (resp., on some columns) involving both labeled tokens and labeled empty cells.
We will argue that the total number of cyclic permutations on the rows (resp., on the columns) caused by these pushes is even. Since all cyclic permutations on rows (resp., on columns) have the same parity, which depends only on the width (resp., height) of the bounding box, this implies that the overall permutation is even.
\begin {figure}[h!]
\centering
\includegraphics [scale=1] {pics/q05.pdf}
\qquad
\includegraphics [scale=1] {pics/q06.pdf}
\caption {Rows and columns of the same length are always aligned.}
\label {fig:aligned}
\end {figure}
From Observations~\ref{obs:compact-shape} and~\ref{obs:compact} it follows that all rows (resp., columns) of the same length must always be aligned throughout the reconfiguration (see Figure~\ref {fig:aligned}).
In particular, observe that a \emph {vertical} push never influences the \emph {horizontal} placement of the rows of a particular length (note that here we only argue about the shape, not the labels).
Now consider the set of rows of length $k$. Every horizontal push either moves all such rows one cell to the right, or one cell to the left, or not at all. Since both at the start and at the end of the process all rows of length $k$ are aligned with the left border of the bounding box, the total number of pushes that influence the horizontal placement of these rows is even, and thus the number of cyclic permutations performed on these rows is also even.
Note that a single push may influence the placement of rows of different lengths; however, for each specific length $k$, the total number of pushes that influences them is even, and therefore the total number of cyclic permutations on rows is even.
The same holds for the cyclic permutations on columns, which concludes the proof.
\end {proof}
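The parity bookkeeping in this proof can also be checked mechanically: one cyclic shift of a line of $w$ cells is a $w$-cycle, so its sign is $(-1)^{w-1}$, and an even number of shifts on lines of the same width therefore composes to an even permutation. A minimal sketch (with hypothetical names \texttt{sign} and \texttt{cyclic\_shift}):

```python
from itertools import combinations

def sign(perm):
    """Sign of a permutation given as a tuple of images of 0..n-1."""
    inversions = sum(1 for i, j in combinations(range(len(perm)), 2)
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def cyclic_shift(w):
    """One cyclic shift of a line of w cells, as in the push rule."""
    return tuple((i + 1) % w for i in range(w))
```

In particular, two consecutive shifts of the same line (a double shift) always form an even permutation, regardless of $w$.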
Now, in order to prove our main theorem, we still need to show that the resulting permutation \emph {restricted to the tokens} is also even; see Figure~\ref {fig:restrict}.
\begin {figure}
\centering
\includegraphics [scale=1.25] {pics/q07a.pdf}
\caption {The fact that the overall permutation on full and empty cells is even does not necessarily imply that the permutation on the full cells is even.}
\label {fig:restrict}
\end {figure}
\subsubsection* {Dual Puzzles}
Given a configuration $C$, we have extended its labeling to a configuration $C'$ having labels on the empty cells, as well.
Now, consider the \emph {restriction} of $C'$ to \emph {only} the empty cells; that is, the function $D \colon \mathcal{L} \to \Sigma'\cup \{{\rm empty}\}$ which labels exactly the cells that are empty in $C$ but lie inside the bounding box of $C$. Clearly, if we can prove that a permutation on $D$ is even, then Lemma~\ref {lem:all-even} implies that the corresponding permutation on $C$ must also be even (the product of two permutations is even if and only if they have the same parity).
\begin {figure}
\centering
\includegraphics [scale=1] {pics/q07.pdf}
\qquad
\includegraphics [scale=1] {pics/q08.pdf}
\caption {A primal puzzle (yellow) with its corresponding dual puzzle (light blue). The blue rectangle represents the bounding box of the dual puzzle. Note that the tokens in the dual puzzle match the labels on the empty cells of the primal puzzle.}
\label {fig:dual}
\end {figure}
For convenience, we will consider the bounding box of $C$ as a torus and display it in such a way that all full rows and columns are aligned with the left and bottom of the rectangle (see Figure~\ref {fig:dual}).
We call $D$ a \emph {dual configuration}, and we will study the effects of push moves on $D$, which we will treat as a \emph {dual puzzle} (while the original puzzle on $C$ is the \emph{primal puzzle}).
\begin {figure}
\centering
\includegraphics [scale=1] {pics/q09.pdf}\quad
\includegraphics [scale=1] {pics/q10.pdf}\quad
\includegraphics [scale=1] {pics/q11.pdf}\quad
\includegraphics [scale=1] {pics/q12.pdf}
\caption {Rules of the dual puzzle.
The sequence of pulls $\langle\, \rightpush\ \uppush\ \uppush\, \rangle$ is shown.}
\label {fig:dual-rules}
\end {figure}
When considering a dual puzzle in isolation, we swap terminology and refer to the empty cells as full, and vice versa.
In this context, a \emph {push} move in the primal puzzle either leaves the dual puzzle unchanged or causes a \emph {pull} move in the dual puzzle. Specifically, if we perform a $\rightpush$ push move in the primal puzzle from a configuration where only the full rows are touching the right side of the bounding box, nothing happens in the dual puzzle. Otherwise, the $\rightpush$ push move in the primal puzzle causes a $\rightpush$ pull move in the dual puzzle, where exactly those rows that are farthest from the left boundary are pulled one unit to the left (refer to Figure~\ref{fig:dual-rules}, where the blue line represents a side of the bounding box of the primal puzzle).
Thus, as we play the primal puzzle, we are also playing the dual puzzle.
We say that a \emph {dual} configuration is \emph {canonical} when all tokens are aligned with the top and right borders of their bounding box; this way, the empty cells in a canonical (primal) configuration form a canonical (dual) configuration.
Most of the results obtained so far for primal puzzles automatically apply to dual puzzles, as well.
In particular, we claim that in the dual game, the equivalent of Lemma~\ref {lem:all-even} still holds.
For this, we now consider again an extension of our dual configuration $D$ which labels both the full and empty cells of the bounding box of $D$.
Crucially, this is \emph {not} the same as the original extension $C'$, because the bounding box of $D$ is smaller than the bounding box of $C$.
\begin {lemma} \label {lem:all-even-dual}
Let $D$ be a labeled canonical configuration, and let $D'$ be an extension of $D$ that labels also the empty cells of the bounding box of $D$. Every achievable permutation on $D'$ under a sequence of \emph {dual moves} must be even.
\end {lemma}
\begin {proof}
Since the number of rows (resp., columns) of any given length in the primal puzzle remains constant, then the same is true in the dual puzzle. Moreover, all of the rows (resp., columns) of the same length must always be aligned in the primal puzzle, and therefore they must be aligned in the dual puzzle, as well. The proof now proceeds exactly as in Lemma~\ref{lem:all-even}.
\end {proof}
\subsubsection* {Induction}
We are now ready to prove that all permutations in $G_C$ must be even.
\begin {figure}
\centering
\includegraphics [scale=1] {pics/q13.pdf}
\qquad
\includegraphics [scale=1] {pics/q15.pdf}
\caption {The dual of a dual puzzle is again a primal puzzle; we see the result after the sequence of pulls $\langle\, \leftpush\ \downpush\, \rangle$.}
\label {fig:dual-dual}
\end {figure}
\begin {theorem} \label {thm:even}
Let $C$ be a labeled canonical configuration, and let $\pi\in G_C$. Then, $\pi$ is an even permutation.
\end {theorem}
\begin {proof}
The proof follows from two simple observations. Firstly, if we take the dual of a dual-type puzzle, we obtain another puzzle that again follows the rules of a primal-type puzzle (refer to Figure~\ref {fig:dual-dual}). Secondly, the bounding box of a (primal or dual) puzzle is strictly larger than the bounding box of its dual.
We will prove a stronger statement: that our theorem holds for both primal-type and dual-type puzzles. The proof is by well-founded induction on the size of the bounding box.
If the bounding box of our puzzle is completely full, then moves have no effect, and the only allowed permutation is the identity, which is even.
Now, consider a canonical configuration $C$ in a (primal or dual) puzzle with a bounding box which is not completely full. Such a puzzle has a dual, with a canonical configuration $D$ corresponding to $C$. The bounding box of $D$ has smaller size, and therefore the induction hypothesis applies to the dual puzzle. After performing some moves and restoring a canonical configuration in both puzzles, the tokens in $C$ have undergone a permutation $\pi\in G_C$, while the tokens in $D$ have undergone a permutation $\sigma\in G_D$. By Lemmas~\ref {lem:all-even} and~\ref {lem:all-even-dual}, the overall permutation $\pi\sigma$ is even; by the induction hypothesis, $\sigma$ is even; hence, $\pi$ is even, as well.
\end {proof}
\subsection{Generating All Feasible Permutations}\label{s:4.2}
In this section, we will give a complete description of the permutation group $G_C$, which we already know from Section~\ref{s:4.1} to be a subgroup of the \emph{alternating group} $\Alt{n}$.
\subsubsection*{Unmovable Central Core}
Let the bounding box be an $a\times b$ rectangle. Our first observation is that, if more than half of the rows and more than half of the columns of the bounding box are full, then there is a central box of tokens that cannot be moved.
Let $a'$ (resp., $b'$) be the number of full columns (resp., full rows), and let $a''=a-a'$ and $b''=b-b'$.
\begin{definition}[Core]
If $a'>a''$ and $b'>b''$, the \emph{core} of $C$ is the set of lattice points in the bounding box of $C$ that are in the central $a'-a''$ columns and in the central $b'-b''$ rows.\footnote{Equivalently, if $a>2a''$ and $b>2b''$, the core is obtained by discarding the $a''$ leftmost columns, the $a''$ rightmost columns, the $b''$ topmost rows, and the $b''$ bottommost rows.} If $a'\leq a''$ or $b'\leq b''$, the \emph{core} of $C$ is empty.
\end{definition}
The lattice points in the core are called \emph{core points}, and the tokens in core points are called \emph{core tokens}. Figure~\ref{fig:core} shows an example of a non-empty core.
\begin {figure}[t]
\centering
\includegraphics [scale=1.25] {pics/b00.pdf}
\caption {A configuration with $a=10$, $b=9$, $a'=7$, and $b'=6$. The blue rectangle in the center surrounds the core tokens.}
\label {fig:core}
\end {figure}
\begin{observation}\label{obs:core}
No permutation in $G_C$ moves any core token.
\end{observation}
\begin{proof}
If the core is empty, there is nothing to prove; hence, let us assume that $a'>a''$ and $b'>b''$. From Section~\ref{s:2}, we know that there are always exactly $a'$ contiguous full columns and $b'$ contiguous full rows, no matter how the tokens are pushed. Hence, the central $a-2a''=a'-a''$ columns (resp., the central $b-2b''=b'-b''$ rows) are always full, and are not affected by $\uppush$ or $\downpush$ pushes (resp., $\leftpush$ or $\rightpush$ pushes). Therefore, no push can affect the core tokens.
\end{proof}
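The quantities $a'$, $b'$, $a''$, $b''$ and the core itself are easy to compute from the set of occupied lattice points; the following sketch (with a hypothetical helper \texttt{core\_points}) follows the definition and its footnote:

```python
def core_points(occupied, a, b):
    """Core of a configuration inside an a x b bounding box, per the definition."""
    full_cols = sum(1 for x in range(a)
                    if all((x, y) in occupied for y in range(b)))   # a'
    full_rows = sum(1 for y in range(b)
                    if all((x, y) in occupied for x in range(a)))   # b'
    a2, b2 = a - full_cols, b - full_rows                           # a'', b''
    if full_cols <= a2 or full_rows <= b2:
        return set()
    # Discard a'' columns on each side and b'' rows on each side.
    return {(x, y) for x in range(a2, a - a2) for y in range(b2, b - b2)}
```

On a staircase with six full rows of length $10$ and three left-aligned rows of length $7$ inside a $10\times 9$ bounding box (so $a'=7$ and $b'=6$, as in Figure~\ref{fig:core}), the core is the central $(a'-a'')\times(b'-b'')=4\times 3$ block of $12$ points.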
\subsubsection*{Permutation Groups}
In order to understand the structure of $G_C$, we will review some notions of group theory and prove three technical lemmas.
\begin{definition}
A permutation group $G$ on $\{1,2,\dots,n\}$ is \emph{2-transitive} if, for every $1\leq x,y,w,z\leq n$ with $x\neq y$ and $w\neq z$, there is a permutation $\pi\in G$ such that $\pi(x)=w$ and $\pi(y)=z$.
\end{definition}
\begin{theorem}[Jones, \cite{jones}]\label{theorem-jones}
If $G$ is a 2-transitive permutation group on $\{1,2,\dots,n\}$ and $G$ contains a cycle of length $n-3$ or less, then $G$ contains all even permutations.\footnote{Jones' theorem in \cite{jones} holds more generally for \emph{primitive} permutation groups. The fact that all 2-transitive permutation groups are primitive is an easy observation.}\qed
\end{theorem}
To express cyclic permutations, we use the standard notation $\sigma=(s_1\ s_2\ s_3\ \dots\ s_k)$, occasionally adding commas between terms when doing so improves readability. The cycle $\sigma$ is the permutation that fixes all items except $s_1$, $s_2$, $\dots$, $s_k$ such that $\sigma(s_1)=s_2$, $\sigma(s_2)=s_3$, $\dots$, $\sigma(s_k)=s_1$. Since we are studying permutation puzzles, it is visually more convenient to interpret a permutation as acting on places rather than on items. Thus, for example, $(1\ 2\ 3)$ is understood as the cycle involving the tokens occupying the \emph{locations} labeled $1$, $2$, and $3$ rather than the \emph{tokens} labeled $1$, $2$, and $3$. Also, we will follow the convention of composing chains of permutations from left to right, which is common in the permutation-group literature.
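This notation and the left-to-right composition convention can be encoded directly (a sketch; \texttt{cycle} and \texttt{compose} are hypothetical helper names, with permutations stored as dictionaries on $\{1,\dots,n\}$):

```python
def cycle(*elts, n):
    """The cycle (elts[0] elts[1] ...) as a dict on {1, ..., n}."""
    p = {i: i for i in range(1, n + 1)}
    for s, t in zip(elts, elts[1:] + elts[:1]):
        p[s] = t
    return p

def compose(p, q):
    """Left-to-right composition: apply p first, then q."""
    return {x: q[p[x]] for x in p}
```

For instance, composing $(1\ 2)$ and then $(2\ 3)$ from left to right yields the cycle $(1\ 3\ 2)$.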
\begin{lemma}\label{lemma-perm1}
Let $\alpha=(1,2,\dots,a)$ and $\beta=(a-b+1,a-b+2,\dots,2a-b)$ be two cycles spanning $n=2a-b$ items, with $a\geq 2$ and $1\leq b<a$. Then, the permutation group generated by $\alpha$ and $\beta$ acts 2-transitively on $\{1,2,\dots,n\}$.
\end{lemma}
\begin{proof}
Let $G$ be the group generated by $\alpha$ and $\beta$, and let $A$ (resp., $B$) be the set of items spanned by $\alpha$ (resp., $\beta$). Observe that $|A|=|B|=a$, $|A\cap B|=b$, and $|A\setminus B|=|B\setminus A|=a-b>0$. Assume that $w=a-b$ and $z=n$; we will prove that, for all $1\leq x,y\leq n$ with $x\neq y$, there is a permutation $\pi\in G$ such that $\pi(x)=w$ and $\pi(y)=z$. This will be sufficient to conclude that $G$ acts 2-transitively on $\{1,2,\dots,n\}$. Indeed, let $1\leq x,y,w', z'\leq n$ with $x\neq y$ and $w'\neq z'$. Since we have two permutations $\pi_1,\pi_2\in G$ such that $\pi_1(x)=w$, $\pi_1(y)=z$, $\pi_2(w')=w$, $\pi_2(z')=z$, then the permutation $\pi_1\pi_2^{-1}\in G$ maps $x$ to $w'$ and $y$ to $z'$, respectively.
Assume first that $x\in A$ and $y\in B$. Let $0\leq d<a$ be such that $\alpha^d(x)=w$, and let $0\leq d'<a$ be such that $\beta^{d'}(y)=z$. For symmetry reasons, we may assume without loss of generality that $d\leq d'$. Then, it is easy to see that $\alpha^d(y)=y'\in B$. Let $0\leq d''<a$ be such that $\beta^{d''}(y')=z$. Now, if we set $\pi=\alpha^d\beta^{d''}$, we have $\pi(x)=(\alpha^d\beta^{d''})(x)=\beta^{d''}(w)=w$ and $\pi(y)=(\alpha^d\beta^{d''})(y)=\beta^{d''}(y')=z$, as desired.
Assume now that $x\in B$ and $y\in A$. We can construct a permutation $\tau$ such that $\tau(x)=z$ and $\tau(y)=w$ as we did above. Then, we define $\sigma=\beta^2\alpha\beta^{-1}\alpha^{-1}$ if $b=1$ and $\sigma=\beta\alpha\beta^{-1}\alpha^{-1}$ if $b>1$ (composing left to right, as usual). A direct check shows that $\sigma(w)=z$ and $\sigma(z)=w$. Let $\pi=\tau\sigma$; we have $\pi(x)=(\tau\sigma)(x)=\sigma(z)=w$ and $\pi(y)=(\tau\sigma)(y)=\sigma(w)=z$.
Finally, assume that $x,y\in A\setminus B$ (the case where $x,y\in B\setminus A$ is symmetric). We can iterate $\alpha$ until either $x$ or $y$ (say $y$) is mapped to $A\cap B$. At this point, we are in a situation where $x\in A$ and $y\in B$, which we already solved.
\end{proof}
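The lemma can be spot-checked by brute force on a small instance, say $a=3$ and $b=1$, so that $n=5$, $\alpha=(1\ 2\ 3)$, and $\beta=(3\ 4\ 5)$. The sketch below (with a hypothetical breadth-first helper \texttt{generate}) enumerates the generated group and the images of the ordered pair $(1,2)$:

```python
def generate(gens, n):
    """All permutations (as tuples of images of 1..n) generated by gens."""
    identity = tuple(range(1, n + 1))
    group, frontier = {identity}, [identity]
    while frontier:
        new = []
        for p in frontier:
            for g in gens:
                q = tuple(g[p[i] - 1] for i in range(n))  # apply p, then g
                if q not in group:
                    group.add(q)
                    new.append(q)
        frontier = new
    return group

alpha = (2, 3, 1, 4, 5)            # the cycle (1 2 3)
beta = (1, 2, 4, 5, 3)             # the cycle (3 4 5)
G = generate([alpha, beta], 5)
pairs = {(p[0], p[1]) for p in G}  # images of the ordered pair (1, 2)
```

For this instance $|G|=60$ and all $5\cdot 4=20$ ordered pairs of distinct elements occur, confirming 2-transitivity.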
\begin {figure}[t]
\centering
\includegraphics [scale=1] {pics/u02.pdf}
\caption {Example of the two cycles in Lemma~\ref{lemma-perm2} with $a=2$ and $b=4$.}
\label {fig:2trans-1}
\end {figure}
\begin{lemma}\label{lemma-perm2}
Let $\alpha=(1,\dots,2a+b+1,3a+b+2,\dots,3a+2b+1)$ and $\beta=(a+1,\dots,a+b,2a+b+2,\dots,4a+2b+2)$ be two cycles spanning $n=4a+2b+2$ items, with $a\geq 0$ and $b\geq 1$. Then, the permutation group generated by $\alpha$ and $\beta$ acts 2-transitively on $\{1,2,\dots,n\}$.
\end{lemma}
\begin{proof}
The two cycles are illustrated in Figure~\ref{fig:2trans-1}. Let $G$ be the group generated by $\alpha$ and $\beta$, and let $A$ (resp., $B$) be the set of items spanned by $\alpha$ (resp., $\beta$). Assume that $x=n/2$ and $y=n$; we will prove that, for all $1\leq w,z\leq n$ with $w\neq z$, there is a permutation $\pi\in G$ such that $\pi(x)=w$ and $\pi(y)=z$. This suffices to conclude that $G$ acts 2-transitively on $\{1,2,\dots,n\}$, in a similar way to the previous lemma.
Assume first that $w,z\in A$, and let $0< d< |A|=2a+2b+1$ be such that $\alpha^d(w)=z$ (i.e., $d$ is the ``distance'' from $w$ to $z$ along $\alpha$). We construct a permutation $\tau$ as follows:
\begin{itemize}
\item If $1\leq d\leq a$ (and $a>0$), we set $\tau=\alpha^{a+b+1-d}\beta$.
\item If $a+1\leq d\leq a+b-1$ (and $b>1$), we set $\tau=\alpha^{a+b-d}\beta$.
\item If $d=a+b$, we set $\tau=\alpha^{-a}\beta^{-a-1}\alpha^{a+1}$.
\item If $a+b+1\leq d\leq 2a+b+1$, we set $\tau=\alpha^{a+b+1-d}\beta$.
\item If $2a+b+2\leq d\leq 2a+2b$ (and $b>1$), we set $\tau=\alpha^{a+b-d}\beta$.
\end{itemize}
It is straightforward to check that $\tau(y)=a+1=(\tau\alpha^d)(x)$; that is, $\tau$ places $y$ in position $a+1$ and places $x$ at the correct distance from $y$ along $\alpha$. Now we can set $\pi=\tau\alpha^i$, where $\alpha^i(a+1)=z$, so that $\pi(y)=z$ and $\pi(x)=(\tau\alpha^i)(x)=(\tau\alpha^d\alpha^i\alpha^{-d})(x)=(\alpha^i\alpha^{-d})(a+1)=\alpha^{-d}(z)=w$.
The case with $w,z\in B$ is symmetric and will be omitted.
Assume now that $w\in A\setminus B$ and $z\in B\setminus A$. Therefore, we can set $\pi=\alpha^i\beta^j$, where $\alpha^i(x)=w$ and $\beta^j(y)=z$. Since $y\notin A$, we have $\alpha^i(y)=y$; since $w\notin B$, we have $\beta^j(w)=w$. We conclude that $\pi(x)=w$ and $\pi(y)=z$.
Finally, assume that $w\in B\setminus A$ and $z\in A\setminus B$. We first construct a permutation $\rho$ that swaps $x$ and $y$ as follows:
\begin{itemize}
\item If $b=1$, we set $\rho=\alpha\beta\alpha^{a+1}\beta^a$.
\item If $b>1$, we set $\rho=\alpha\beta\alpha^{b-2}\beta\alpha^{a+1}\beta^a$.
\end{itemize}
It is straightforward to check that $\rho(x)=\rho(n/2)=n=y$ and $\rho(y)=\rho(n)=n/2=x$. Now that $x$ and $y$ are on the correct cycles, we can proceed as in the previous case.
\end{proof}
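The swap element $\rho$ in the last case can be verified mechanically for small parameters, composing left to right; the sketch below (with hypothetical helpers \texttt{cycle}, \texttt{word}, and \texttt{rho\_swaps}) builds $\alpha$ and $\beta$ as in the statement of the lemma:

```python
def cycle(elts, n):
    """The cycle through elts, as a list p with p[i] the image of i."""
    p = list(range(n + 1))  # index 0 is unused
    for s, t in zip(elts, elts[1:] + elts[:1]):
        p[s] = t
    return p

def word(perms):
    """Compose left to right: apply perms[0] first."""
    n = len(perms[0]) - 1
    out = list(range(n + 1))
    for p in perms:
        out = [p[out[x]] for x in range(n + 1)]
    return out

def rho_swaps(a, b):
    """Check that rho exchanges n/2 and n for the given parameters."""
    n = 4 * a + 2 * b + 2
    al = cycle(list(range(1, 2 * a + b + 2))
               + list(range(3 * a + b + 2, 3 * a + 2 * b + 2)), n)
    be = cycle(list(range(a + 1, a + b + 1))
               + list(range(2 * a + b + 2, n + 1)), n)
    if b == 1:
        rho = word([al, be] + [al] * (a + 1) + [be] * a)
    else:
        rho = word([al, be] + [al] * (b - 2) + [be] + [al] * (a + 1) + [be] * a)
    return rho[n // 2] == n and rho[n] == n // 2
```

For example, $\rho$ indeed swaps $n/2$ and $n$ for $(a,b)\in\{(0,1),(1,1),(0,2)\}$.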
\begin{lemma}\label{lemma-perm3}
Let $A$ and $B$ be two finite sets such that $A\cap B\neq \emptyset$ and $A\setminus B\neq\emptyset$. Let $G$ be a permutation group on $A\cup B$ whose restriction to $A$ is 2-transitive and whose restriction to $B\setminus A$ is trivial, and let $\beta$ be a cycle spanning $B$. Then, the permutation group generated by $G$ and $\beta$ acts 2-transitively on $A\cup B$.
\end{lemma}
\begin{proof}
Let us fix $x\in A\setminus B$ and $y\in A\cap B$. For all $w,z\in A\cup B$ with $w\neq z$, we will prove that there is a permutation $\pi$ in the group generated by $G$ and $\beta$ such that $\pi(x)=w$ and $\pi(y)=z$. As in the previous lemmas, this is sufficient to conclude that the group acts 2-transitively on $A\cup B$.
If $w,z\in A$, there is a permutation $\pi\in G$ that maps $x$ to $w$ and $y$ to $z$, because $x,y\in A$ and $G$ acts 2-transitively on $A$.
Assume now that $w,z\in B$, and let $0< d< |B|$ be such that $\beta^d(w)=z$. Let $y'=\beta^d(y)$, and let $\tau\in G$ be a permutation such that $\tau(x)=y$ and $\tau(y')=y'$, which exists because $G$ acts 2-transitively on $A$ and fixes $B\setminus A$. We can set $\pi=\beta^d\tau\beta^i$, where $\beta^i(y')=z$. Clearly, $\pi(y)=(\beta^d\tau\beta^i)(y)=(\tau\beta^i)(y')=\beta^i(y')=z$, and $\pi(x)=(\beta^d\tau\beta^i)(x)=(\tau\beta^i)(x)=\beta^i(y)=\beta^{i-d}(y')=\beta^{-d}(z)=w$.
Let us now consider the case where $w\in A\setminus B$ and $z\in B\setminus A$. We can set $\pi=\beta^j\rho$, where $\beta^j(y)=z$ and $\rho\in G$ such that $\rho(x)=w$. It is easy to verify that $\pi(x)=(\beta^j\rho)(x)=\rho(x)=w$ and $\pi(y)=(\beta^j\rho)(y)=\rho(z)=z$.
Finally, if $w\in B\setminus A$ and $z\in A\setminus B$, we can swap $x$ and $y$ via a permutation in $G$, and then proceed as in the previous case.
\end{proof}
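Again, a brute-force spot check is easy on a tiny instance: take $A=\{1,2,3\}$ and $B=\{3,4,5\}$, let $G$ be $\Sym{3}$ acting on $A$ (which is 2-transitive on $A$ and fixes $B\setminus A$), and let $\beta=(3\ 4\ 5)$. A sketch with a hypothetical breadth-first helper \texttt{closure}:

```python
def closure(gens, n):
    """All permutations (as tuples of images of 1..n) generated by gens."""
    identity = tuple(range(1, n + 1))
    group, frontier = {identity}, [identity]
    while frontier:
        new = []
        for p in frontier:
            for g in gens:
                q = tuple(g[p[i] - 1] for i in range(n))  # apply p, then g
                if q not in group:
                    group.add(q)
                    new.append(q)
        frontier = new
    return group

gens = [(2, 1, 3, 4, 5),   # (1 2), generating Sym on A together with
        (2, 3, 1, 4, 5),   # (1 2 3), while fixing B \ A pointwise
        (1, 2, 4, 5, 3)]   # beta = (3 4 5)
H = closure(gens, 5)
pairs = {(p[0], p[2]) for p in H}  # images of the ordered pair (1, 3)
```

All $20$ ordered pairs of distinct elements occur as images of $(1,3)$, so the generated group acts 2-transitively on $A\cup B$.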
\subsubsection*{Generating Cycles}
We will now assume that all labels are distinct. Specifically, the $n$ tokens in $C$ are labeled $1$ to $n$ from left to right and from top to bottom, as in Figure~\ref{fig:core}. The general case will be discussed later.
We introduce three types of sequences of moves:
\begin{itemize}
\item Type-A $k$-sequence: $\langle\, \rightpush^k\ \rightpush\ \uppush\ \leftpush\ \downpush\ \leftpush^k\, \rangle$ for $0\leq k< a''$.
\item Type-B $k$-sequence: $\langle\, \uppush^k\ \uppush\ \rightpush\ \downpush\ \leftpush\ \downpush^k\, \rangle$ for $0\leq k< b''$.
\item Type-C $k$-sequence: $\langle\, \rightpush^k\ \uppush\ \rightpush\ \downpush\ \leftpush\ \leftpush^k\, \rangle$ for $0\leq k< a''$.
\end{itemize}
It is straightforward to check that a type-A $k$-sequence always produces a cycle of length $2a'+2b'-1$ (refer to Figure~\ref{fig:cyclemoves-1}), which involves the lattice points $(k,i)$ and $(a'+k,i)$ for all $0\leq i< b'$ (plus some other points in the rows $(\cdot,0)$ and $(\cdot,b')$). Such cycles are called \emph{type-A cycles}.
\begin {figure}
\centering
\includegraphics [scale=1] {pics/m00.pdf}\qquad
\includegraphics [scale=1] {pics/m01.pdf}\qquad
\includegraphics [scale=1] {pics/m02.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1] {pics/m06.pdf}\qquad
\includegraphics [scale=1] {pics/m07.pdf}\qquad
\includegraphics [scale=1] {pics/m08.pdf}
\caption {Performing a type-A $k$-sequence: $\langle\, \rightpush^{k+1}\ \uppush\ \leftpush\ \downpush\ \leftpush^k\, \rangle$. Recall that a push in a direction causes the tokens to ``fall'' in the opposite direction within the bounding box.}
\label {fig:cyclemoves-1}
\end {figure}
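As a sanity check, the claim can be simulated under the single-step push semantics described at the beginning of this section. The sketch below (with a hypothetical helper \texttt{push}) runs the type-A $0$-sequence on a staircase with rows of lengths $5,5,2$, for which $a'=b'=2$ and the expected cycle length is $2a'+2b'-1=7$:

```python
def push(tokens, d, a, b):
    """One push in direction d ('R', 'L', 'U', 'D'); tokens: label -> (x, y)."""
    occupied = set(tokens.values())
    dx, dy = {'R': (1, 0), 'L': (-1, 0), 'U': (0, 1), 'D': (0, -1)}[d]
    moved = {}
    for label, (x, y) in tokens.items():
        # A row (resp., column) shifts iff its boundary cell on the pushed
        # side of the bounding box is empty; otherwise it stays put.
        edge = (a - 1 if dx > 0 else 0, y) if dx else (x, b - 1 if dy > 0 else 0)
        moved[label] = (x + dx, y + dy) if edge not in occupied else (x, y)
    return moved

# Staircase with rows of lengths 5, 5, 2 (bottom to top) in a 5 x 3 box.
rows = [5, 5, 2]
start = {f"t{x}_{y}": (x, y) for y, length in enumerate(rows) for x in range(length)}
current = dict(start)
for d in "RULD":  # the type-A 0-sequence
    current = push(current, d, 5, 3)
```

The four pushes restore the canonical shape and permute exactly seven tokens in a single cycle, matching the claimed length $2a'+2b'-1=7$.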
Symmetrically, a type-B $k$-sequence produces a \emph{type-B cycle} of length $2a'+2b'-1$ involving (among others) the lattice points $(i,k)$ and $(i,b'+k)$ for all $0\leq i< a'$ (see Figure~\ref{fig:cyclemoves-2}). Thus, the type-A and the type-B cycles collectively cover all the non-core points that lie in one of the $a'$ full columns or in one of the $b'$ full rows (see Figures~\ref{fig:cycles-1} and~\ref{fig:cycles-2}).
\begin {figure}[h!]
\centering
\includegraphics [scale=1] {pics/m00.pdf}\qquad
\includegraphics [scale=1] {pics/m09.pdf}\qquad
\includegraphics [scale=1] {pics/m10.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1] {pics/m11.pdf}\qquad
\includegraphics [scale=1] {pics/m12.pdf}\qquad
\includegraphics [scale=1] {pics/m13.pdf}
\caption {Performing a type-B $k$-sequence: $\langle\, \uppush^{k+1}\ \rightpush\ \downpush\ \leftpush\ \downpush^k\, \rangle$.}
\label {fig:cyclemoves-2}
\end {figure}
A type-C $k$-sequence produces a cycle, as well, which we call a \emph{type-C cycle} (see Figure~\ref{fig:cyclemoves-3}). However, unlike the cycles of the previous two types, the length of a type-C cycle may vary depending on $k$. It is easy to see that, if the row $(\cdot,i)$, with $0\leq i<b$, has length $\ell_i$, then the cycle produced by a type-C $k$-sequence with $a-\ell_i\leq k<a''$ involves (among others) the lattice point $(\ell_i+k-a'',i)$. Such a point lies at distance $a''-k-1$ from the rightmost full token in the row. Thus, the type-C cycles collectively cover all the tokens that are not in a full column (see Figure~\ref{fig:cycles-3}). We conclude that the type-A, type-B, and type-C cycles collectively cover all non-core points (see Figure~\ref{fig:cycles-4}).
\begin {figure}
\centering
\includegraphics [scale=1] {pics/m00.pdf}\qquad
\includegraphics [scale=1] {pics/m01.pdf}\qquad
\includegraphics [scale=1] {pics/m02.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1] {pics/m03.pdf}\qquad
\includegraphics [scale=1] {pics/m04.pdf}\qquad
\includegraphics [scale=1] {pics/m05.pdf}
\caption {Performing a type-C $k$-sequence: $\langle\, \rightpush^k\ \uppush\ \rightpush\ \downpush\ \leftpush^{k+1}\, \rangle$.}
\label {fig:cyclemoves-3}
\end {figure}
The next theorem states that, in most cases, the group $G_C$ is exactly the alternating group on the non-core tokens, i.e., the group of all even permutations on the $n-(a'-a'')(b'-b'')$ tokens not in the core (or on all $n$ tokens if the core is empty).
\begin{theorem}\label{theorem-gen1}
If $n/2\geq a'+b'+1$ and $a''b''>1$, then $G_C$ is the alternating group on the non-core tokens.
\end{theorem}
\begin{proof}
We know from Theorem~\ref{thm:even} that $G_C$ only contains even permutations; also, by Observation~\ref{obs:core}, no permutation in $G_C$ can move any core tokens. Hence, it suffices to show that $G_C$ contains \emph{all} even permutations of the non-core tokens. By applying a reflection to $C$ if necessary, we may assume that $a''\geq b''$. Also, since $a''b''>1$, we have $a''\geq 2$, and therefore there are at least two distinct type-A cycles.
Let $\alpha$ and $\beta$ be the type-A cycles for $k=0$ and $k=1$, respectively. Observe that, if $a'=1$, then $\alpha$ and (the inverse of) $\beta$ satisfy the hypotheses of Lemma~\ref{lemma-perm1}; if $a'>1$, then $\alpha$ and $\beta$ satisfy the hypotheses of Lemma~\ref{lemma-perm2}. In both cases, the group $G_1\leq G_C$ generated by $\alpha$ and $\beta$ acts 2-transitively on the points spanned by $\alpha$ and $\beta$ and acts trivially on all other points.
The group $G_1$, together with the type-A cycle for $k=2$, satisfies the assumptions of Lemma~\ref{lemma-perm3}. Therefore, $G_C$ has a subgroup $G_2$ that acts 2-transitively on the points spanned by the first three type-A cycles and acts trivially on all other points. By repeatedly applying Lemma~\ref{lemma-perm3} to all remaining type-A cycles, we conclude that $G_C$ has a subgroup that acts 2-transitively on a set that includes all the non-core points in the $b'$ full rows of $C$.
By applying Lemma~\ref{lemma-perm3} again to the type-B cycles (the first of which properly intersects all of the full rows), we obtain a subgroup of $G_C$ that acts 2-transitively on a set that includes all the non-core points in the $a'$ full columns and in the $b'$ full rows. Finally, we can apply Lemma~\ref{lemma-perm3} to the type-C cycles (all of which properly intersect the full rows) to obtain a subgroup of $G_C$ that acts 2-transitively on all non-core tokens. This implies that $G_C$ itself acts 2-transitively on all non-core tokens, as well.
To conclude the proof, we recall that each type-A cycle has length $2a'+2b'-1$. Since $n/2\geq a'+b'+1$, these cycles have length at most $n-3$. Thus, $G_C$ contains all even permutations of the non-core tokens, due to Theorem~\ref{theorem-jones}.
\end{proof}
\begin {figure}[h!]
\centering
\includegraphics [scale=1] {pics/a19.pdf}\qquad
\includegraphics [scale=1] {pics/a20.pdf}\qquad
\includegraphics [scale=1] {pics/a21.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1] {pics/a22.pdf}\qquad
\includegraphics [scale=1] {pics/a23.pdf}\qquad
\includegraphics [scale=1] {pics/a24.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1] {pics/a25.pdf}\qquad
\includegraphics [scale=1] {pics/a26.pdf}\qquad
\includegraphics [scale=1] {pics/a27.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1] {pics/a28.pdf}\qquad
\includegraphics [scale=1] {pics/a29.pdf}
\caption {The type-A cycles span, among others, all the (non-core) tokens in the $b'$ full rows.}
\label {fig:cycles-1}
\end {figure}
\newpage
\begin {figure}[h!]
\centering
\includegraphics [scale=1] {pics/a30.pdf}\qquad
\includegraphics [scale=1] {pics/a31.pdf}\qquad
\includegraphics [scale=1] {pics/a32.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1] {pics/a33.pdf}\qquad
\includegraphics [scale=1] {pics/a34.pdf}\qquad
\includegraphics [scale=1] {pics/a35.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1] {pics/a36.pdf}\qquad
\includegraphics [scale=1] {pics/a37.pdf}
\caption {The type-B cycles span, among others, all the (non-core) tokens in the $a'$ full columns.}
\label {fig:cycles-2}
\end {figure}
\newpage
\begin {figure}[h!]
\centering
\includegraphics [scale=1] {pics/a38.pdf}\qquad
\includegraphics [scale=1] {pics/a39.pdf}\qquad
\includegraphics [scale=1] {pics/a40.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1] {pics/a41.pdf}\qquad
\includegraphics [scale=1] {pics/a42.pdf}\qquad
\includegraphics [scale=1] {pics/a43.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1] {pics/a44.pdf}\qquad
\includegraphics [scale=1] {pics/a45.pdf}\qquad
\includegraphics [scale=1] {pics/a46.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1] {pics/a47.pdf}\qquad
\includegraphics [scale=1] {pics/a48.pdf}
\caption {The type-C cycles span, among others, all the tokens that are not in the $a'$ full columns.}
\label {fig:cycles-3}
\end {figure}
\newpage
\begin {figure}[h!]
\centering
\includegraphics [scale=1.25] {pics/b02.pdf}\qquad
\includegraphics [scale=1.25] {pics/b03.pdf}\qquad
\includegraphics [scale=1.25] {pics/b04.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1.25] {pics/b02.pdf}\qquad
\includegraphics [scale=1.25] {pics/b05.pdf}\qquad
\includegraphics [scale=1.25] {pics/b06.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1.25] {pics/b02.pdf}\qquad
\includegraphics [scale=1.25] {pics/b07.pdf}\qquad
\includegraphics [scale=1.25] {pics/b08.pdf}
\caption {The type-A cycles (top row), the type-B cycles (middle row), and the type-C cycles (bottom row) collectively span all the non-core tokens.}
\label {fig:cycles-4}
\end {figure}
\subsubsection*{Special Configurations}
We will now discuss all the configurations of the Permutation Puzzle not covered by Theorem~\ref{theorem-gen1}.
\begin{itemize}
\item If $a''=b''=0$, i.e., there are no empty cells in the bounding box, then clearly no token can be moved, and $G_C$ is the trivial permutation group (Figure~\ref{fig:reconf-1}, left).
\item Let $a''=b''=1$, i.e., there is exactly one empty cell, located at the top-right corner of the bounding box. The core includes all the tokens, except the ones on the perimeter of the bounding box, which are spanned by the type-A cycle with $k=0$. It is easy to see that the only possible permutations are iterations of this cycle and its inverse. Therefore, $G_C$ is isomorphic to the cyclic group $C_{2a+2b-5}$ (Figure~\ref{fig:reconf-1}, right).
\end{itemize}
\begin {figure}
\centering
\includegraphics [scale=1.5] {pics/k00.pdf}\qquad
\includegraphics [scale=1.5] {pics/k15.pdf}
\caption {Two configurations with $a''b''\leq 1$.}
\label {fig:reconf-1}
\end {figure}
In the following, we will assume that $a''\geq b''$ and $a''\geq 2$, and we will discuss all configurations where $n/2< a'+b'+1$. We will invoke some theorems from~\cite{SVU22} about \emph{cyclic shift puzzles}.
\begin{itemize}
\item Let $b=2$, and let the two rows have length $1$ and $3$, respectively (Figure~\ref{fig:reconf-2}, top row, first image). The two type-A cycles $(1\ 2\ 3)$ and $(1\ 3\ 4)$ form a 2-connected $(3,3)$-puzzle involving all tokens, and therefore $G_C=\Alt{n}$, due to~\cite[Theorem~2]{SVU22}.
\item Let $b=2$, and let the two rows have length $1$ and $4$, respectively (Figure~\ref{fig:reconf-2}, top row, second image). The first and third type-A cycles $(1\ 2\ 3)$ and $(1\ 4\ 5)$ form a 1-connected $(3,3)$-puzzle involving all tokens, and therefore $G_C=\Alt{n}$, due to~\cite[Theorem~1]{SVU22}.
\item Let $b=2$, and let the two rows have length $2$ and $4$, respectively (Figure~\ref{fig:reconf-2}, top row, third image). We denote the two type-A cycles by $\alpha=(1\ 3\ 4\ 5\ 2)$ and $\beta=(1\ 4\ 5\ 6\ 2)$. It is easy to see that $G_C$ is generated by $\alpha$ and $\beta$, because any non-trivial sequence of four pushes necessarily goes through a configuration whose canonical form yields one of the permutations $\alpha$, $\beta$, $\alpha^{-1}$, or $\beta^{-1}$.
In order to determine $G_C$, we transform $\alpha$ and $\beta$ by a suitable outer automorphism $\psi\colon \Sym{6}\to \Sym{6}$. Since $\psi$ is an automorphism, the group $G'_C$ generated by $\psi(\alpha)$ and $\psi(\beta)$ is isomorphic to $G_C$. The automorphism $\psi$ is defined on a set of generators of $\Sym{6}$ as follows (cf.~\cite[Corollary~7.13]{rotman}):
\begin{align*}
\psi((1\ 2))&=(1\ 5)(2\ 3)(4\ 6),\\
\psi((1\ 3))&=(1\ 4)(2\ 6)(3\ 5),\\
\psi((1\ 4))&=(1\ 3)(2\ 4)(5\ 6),\\
\psi((1\ 5))&=(1\ 2)(3\ 6)(4\ 5),\\
\psi((1\ 6))&=(1\ 6)(2\ 5)(3\ 4).
\end{align*}
We have
\begin{align*}
\alpha&=(1\ 3\ 4\ 5\ 2) = (1\ 2) (1\ 5) (1\ 4) (1\ 3),\\
\beta&=(1\ 4\ 5\ 6\ 2) = (1\ 2) (1\ 6) (1\ 5) (1\ 4).
\end{align*}
Therefore, the two generators of $G'_C$ are
\begin{align*}
\psi(\alpha)&=\psi((1\ 2)) \psi((1\ 5)) \psi((1\ 4)) \psi((1\ 3)) = (1\ 6\ 2\ 3\ 5),\\
\psi(\beta)&=\psi((1\ 2)) \psi((1\ 6)) \psi((1\ 5)) \psi((1\ 4)) = (1\ 3\ 2\ 6\ 5).
\end{align*}
Both generators $\psi(\alpha)$ and $\psi(\beta)$ leave the number $4$ fixed, and thus $G'_C$ is isomorphic to a subgroup of $\Sym{5}$. Moreover, the generators are cycles of odd length, and therefore they produce only even permutations. Hence, $G'_C$ is isomorphic to a subgroup of $\Alt{5}$. Observe that $\psi(\alpha)\psi(\beta)=(1\ 5\ 6)$, which is a 3-cycle involving consecutive elements of the 5-cycle $\psi(\alpha)$. These two cycles generate all even permutations on $\{1,2,3,5,6\}$ (cf.~\cite[Proposition~1]{SVU22}).
\begin {figure}
\centering
\includegraphics [scale=1.5] {pics/k01.pdf}\qquad
\includegraphics [scale=1.5] {pics/k02.pdf}\qquad
\includegraphics [scale=1.5] {pics/k03.pdf}\qquad
\includegraphics [scale=1.5] {pics/k04.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1.5] {pics/k05.pdf}\qquad
\includegraphics [scale=1.5] {pics/k06.pdf}\qquad
\includegraphics [scale=1.5] {pics/k07.pdf}\qquad
\includegraphics [scale=1.5] {pics/k08.pdf}
\caption {Some special configurations with $b=2$ and $b=3$ rows.}
\label {fig:reconf-2}
\end {figure}
We conclude that $G'_C$, and therefore $G_C$, is isomorphic to $\Alt{5}$, which is a group of order $60$ and index $12$ in $\Sym{6}$. A permutation $\pi\in\Sym{6}$ is in $G_C$ if and only if $\psi(\pi)$ is even and leaves the number $4$ fixed.
\item Let $b=2$, and let the two rows have length $2$ and $5$, respectively (Figure~\ref{fig:reconf-2}, top row, fourth image). We denote the three type-A cycles by $\alpha=(1\ 3\ 4\ 5\ 2)$, $\beta=(1\ 4\ 5\ 6\ 2)$, and $\gamma=(1\ 5\ 6\ 7\ 2)$. Consider the permutations $(\alpha\beta^{-1}\gamma)^2=(2\ 5\ 4)$ and $\beta\gamma\alpha=(1\ 3\ 5\ 4\ 2\ 6\ 7)$. We have a 3-cycle involving consecutive elements of a 7-cycle, which generate all even permutations on $\{1,2,\dots,7\}$ (cf.~\cite[Proposition~1]{SVU22}). We conclude that $G_C=\Alt{n}$.
\item Let $b=2$, and let the two rows have length $\ell\geq 3$ and $\ell+2$, respectively (Figure~\ref{fig:reconf-2}, bottom row, first image). We denote the two type-A cycles by $\alpha=(1, \ell+1, \dots, n-1, \ell, \dots, 2)$ and $\beta=(1, \ell+2, \dots, n, \ell, \dots, 2)$. The permutation $\beta^{-2}(\alpha^2\beta^{-1}\alpha^{-2}\beta)^2\beta^2=(n-3, n-2, n)$ is a 3-cycle that, together with $\alpha$, forms a 2-connected $(3,n-1)$-puzzle involving all tokens. We conclude that $G_C=\Alt{n}$, due to~\cite[Theorem~2]{SVU22}.
\item Let $b=2$, and let the two rows have length $\ell\geq 3$ and $\ell+3$, respectively (Figure~\ref{fig:reconf-2}, bottom row, second image). Observe that this configuration is the same as the previous one, except for an extra token, labeled $n$, in the bottom row. In particular, the first two type-A cycles are the same, and generate all even permutations on $\{1,2,\dots,n-1\}$, including the 3-cycle $(1,\ell+1,\ell+2)$. This 3-cycle forms a 1-connected $(3,n-2)$-puzzle with the third type-A cycle. Since all tokens are involved in this puzzle, we conclude that $G_C=\Alt{n}$, due to~\cite[Theorem~1]{SVU22}.
\item Let $b=3$, and let the three rows have length $1$, $1$, and $3$, respectively (Figure~\ref{fig:reconf-2}, bottom row, third image). The two type-A cycles $(2\ 3\ 4)$ and $(2\ 4\ 5)$ form a 2-connected $(3,3)$-puzzle, and therefore they generate all even permutations on $\{2,3,4,5\}$, due to~\cite[Theorem~2]{SVU22}. In particular, they generate the 3-cycle $(3\ 4\ 5)$, which forms a 1-connected $(3,3)$-puzzle with the type-B cycle $(1\ 2\ 4)$, involving all tokens. By~\cite[Theorem~1]{SVU22}, $G_C=\Alt{n}$.
\item Let $b=3$, and let the three rows have length $1$, $3$, and $3$, respectively (Figure~\ref{fig:reconf-2}, bottom row, fourth image). We denote the two type-A cycles by $\alpha=(1\ 2\ 5\ 6\ 3)$ and $\beta=(1\ 3\ 6\ 7\ 4)$. Consider the permutations $(\beta^2\alpha^{-1})^2=(2\ 5\ 6)$ and $\alpha\beta^{-1}=(1\ 4\ 7\ 3\ 2\ 5\ 6)$. We have a 3-cycle involving consecutive elements of a 7-cycle, which generate all even permutations on $\{1,2,\dots,7\}$ (cf.~\cite[Proposition~1]{SVU22}). We conclude that $G_C=\Alt{n}$.
\end{itemize}
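The group-theoretic computations in the cases above lend themselves to a mechanical check. The following standalone sketch (not part of the paper; the composition convention is illustrative) generates the closure of the two transformed generators $\psi(\alpha)=(1\ 6\ 2\ 3\ 5)$ and $\psi(\beta)=(1\ 3\ 2\ 6\ 5)$ from the $2$-plus-$4$ case and confirms a group of order $60$ in which every element is even and fixes the token $4$:

```python
def perm_from_cycle(n, cycle):
    """Permutation of {1..n} as a tuple p with p[i] the image of i (p[0] unused)."""
    p = list(range(n + 1))
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        p[a] = b
    return tuple(p)

def compose(p, q):
    # apply p first, then q; the generated closure is the same under either convention
    return tuple(q[p[i]] for i in range(len(p)))

def is_even(p):
    # sign from the cycle structure: a cycle of length L contributes (-1)^(L-1)
    sign, seen = 1, set()
    for i in range(1, len(p)):
        length, j = 0, i
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        if length:
            sign *= (-1) ** (length - 1)
    return sign == 1

n = 6
gen_a = perm_from_cycle(n, (1, 6, 2, 3, 5))  # psi(alpha)
gen_b = perm_from_cycle(n, (1, 3, 2, 6, 5))  # psi(beta)

# closure of the generated semigroup (= group, since the ambient group is finite)
group, frontier = set(), [gen_a, gen_b]
while frontier:
    g = frontier.pop()
    if g not in group:
        group.add(g)
        frontier += [compose(g, gen_a), compose(g, gen_b)]
```

Running the sketch confirms $|G'_C|=60$, as expected for a copy of $\Alt{5}$ acting on $\{1,2,3,5,6\}$.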
\begin {figure}
\centering
\includegraphics [scale=1.5] {pics/k09.pdf}\qquad
\includegraphics [scale=1.5] {pics/k10.pdf}\qquad
\includegraphics [scale=1.5] {pics/k11.pdf}\\
\vspace{0.5cm}
\includegraphics [scale=1.5] {pics/k12.pdf}\qquad
\includegraphics [scale=1.5] {pics/k13.pdf}\qquad
\includegraphics [scale=1.5] {pics/k14.pdf}
\caption {Some small configurations where at least three tokens are not covered by the first type-A cycle.}
\label {fig:reconf-3}
\end {figure}
We will now prove that all configurations not listed above satisfy $n/2\geq a'+b'+1$, i.e., they have at least three tokens not lying on the first type-A cycle.
\begin{itemize}
\item If $b=1$, then $a''=b''=0$. This case has already been discussed (Figure~\ref{fig:reconf-1}, left).
\item For $b=2$, we have discussed all cases where $a''\leq 3$. If $a''\geq 4$, then $n=2a'+a''$, while a type-A cycle spans $2a'+1$ tokens, and leaves out at least three tokens (Figure~\ref{fig:reconf-3}, top row, first image).
\item Let $b\geq 3$ and $a'=b'=1$. Then, a type-A cycle spans three tokens. We are assuming that $a''\geq 2$, and so $a\geq 3$, implying that $n\geq 5$. The only case where $n=5$ has already been discussed (it is the case with $b=3$ rows of length $1$, $1$, and $3$, illustrated in Figure~\ref{fig:reconf-2}, bottom row, third image); in all other cases we have $n\geq 6$, and thus at least three tokens are left out of the type-A cycle (Figure~\ref{fig:reconf-3}, top row, second image).
\item Let $b\geq 3$, $a'=1$, and $b'=2$. Then, a type-A cycle spans five tokens. We are assuming that $a''\geq 2$, and so $a\geq 3$, implying that $n\geq 7$. The only case where $n=7$ has already been discussed (it is the case with $b=3$ rows of length $1$, $3$, and $3$, illustrated in Figure~\ref{fig:reconf-2}, bottom row, fourth image); in all other cases we have $n\geq 8$, and thus at least three tokens are left out of the type-A cycle (Figure~\ref{fig:reconf-3}, top row, third image).
\item Let $b\geq 3$, $a'=1$, and $b'\geq 3$ (Figure~\ref{fig:reconf-3}, bottom row, first image). Since we are assuming that $a''\geq 2$, the rightmost column contains at least three tokens, which are not included in the first type-A cycle.
\item Let $b\geq 3$, $a'\geq 2$, and $b'=1$ (Figure~\ref{fig:reconf-3}, bottom row, second image). Since we are assuming that $a''\geq 2$, the rightmost token is not included in the first type-A cycle. Moreover, the top row contains at least two tokens, which are not in the type-A cycle. In total, there are at least three tokens not spanned by the cycle.
\item Let $b\geq 3$, $a'\geq 2$, and $b'\geq 2$ (Figure~\ref{fig:reconf-3}, bottom row, third image). Since we are assuming that $a''\geq 2$, the rightmost column contains at least two tokens, which are not included in the first type-A cycle. Moreover, the token at $(1,1)$ is not contained in the first type-A cycle, either. In total, there are at least three tokens not spanned by the cycle.
\end{itemize}
\subsubsection*{Arbitrary Labels}
We now discuss the case where not all labels are distinct. Clearly, if all the non-core tokens have distinct labels, then our previous analysis carries over verbatim.
Let us now assume that at least two non-core tokens $x$ and $y$ have the same label. In this case, we identify two permutations $\pi_1,\pi_2\in G_C$ if the configurations $C_1,C_2\colon \mathcal L\to \Sigma\cup\{{\rm empty}\}$ they produce are equal, i.e., $C_1(p)=C_2(p)$ for all $p\in\mathcal L$.
Assume that $G_C$ contains all even permutations of the non-core tokens, which we know to be always the case except in some special configurations. Let $\pi$ be any permutation of the non-core tokens. If $\pi$ is even, then $\pi\in G_C$, and we can reconfigure the tokens to match $\pi$. If $\pi$ is odd, then let $\pi'=(x\ y)\pi$. Since $\pi'$ is even, we have $\pi'\in G_C$. However, $\pi$ and $\pi'$ produce equal configurations, because $x$ and $y$ have the same label, and therefore we can reconfigure the tokens to match $\pi$, as well.
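The parity argument can also be checked concretely. In the toy sketch below (standalone Python, not part of the paper; the labeling and the convention for applying permutations are illustrative), positions $1$ and $2$ share a label, and composing any permutation with their transposition flips the parity while leaving the labeled configuration unchanged:

```python
from itertools import permutations

def apply_perm(labels, p):
    # the token at position i moves to position p[i]; return the resulting labeling
    out = [None] * len(labels)
    for i, lab in enumerate(labels):
        out[p[i]] = lab
    return tuple(out)

def parity(p):
    # 0 = even, 1 = odd, via the inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return inv % 2

labels = ("a", "b", "b", "c")   # positions 1 and 2 carry the same label
swap = (0, 2, 1, 3)             # the transposition (x y) of the two "b" tokens

for pi in permutations(range(4)):
    pi_prime = tuple(pi[swap[i]] for i in range(4))   # (x y) followed by pi
    assert apply_perm(labels, pi) == apply_perm(labels, pi_prime)
    assert parity(pi) != parity(pi_prime)
```

In particular, whenever $\pi$ is odd, $\pi'=(x\ y)\pi$ is even and yields the same labeled configuration, which is exactly the identification used above.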
We now have a complete solution to the Permutation Puzzle.
\begin{theorem}\label{theorem-final}
If the configuration $C$ is compact, then the group $G_C$ of possible permutations is as follows.
\begin{itemize}
\item If no cell in the bounding box is empty, then $G_C$ is the trivial group.
\item If exactly one cell in the bounding box is empty, then $G_C$ is generated by the cycle of the non-core tokens taken in clockwise order.
\item If there are exactly $6$ tokens and exactly $2$ empty cells in the bounding box, then $G_C$ is isomorphic to the alternating group $\Alt{5}$.
\item In all other cases, $G_C$ is the alternating group on the non-core tokens. Hence, if at least two non-core tokens have the same label, then all permutations of the non-core labels can be obtained; otherwise, only the even permutations of the non-core labels can be obtained.\qed
\end{itemize}
\end{theorem}
\section{Introduction}\label{s:1}
\input{intro.tex}
\section{Definitions and Preliminaries}\label{s:2}
\input{prelim.tex}
\section{Compaction Puzzles}\label{s:3}
\input{compaction.tex}
\section{Permutation Puzzles}\label{s:4}
\input{gravity1.tex}
\input{gravity2.tex}
\section{Conclusions and Open Problems}\label{s:5}
\input{conclusion.tex}
\input{biblio.tex}
\end{document}
| {
"timestamp": "2022-03-29T02:54:00",
"yymm": "2202",
"arxiv_id": "2202.12045",
"language": "en",
"url": "https://arxiv.org/abs/2202.12045",
"abstract": "We investigate the reconfiguration of $n$ blocks, or \"tokens\", in the square grid using \"line pushes\". A line push is performed from one of the four cardinal directions and pushes all tokens that are maximum in that direction to the opposite direction. Tokens that are in the way of other tokens are displaced in the same direction, as well.Similar models of manipulating objects using uniform external forces match the mechanics of existing games and puzzles, such as Mega Maze, 2048 and Labyrinth, and have also been investigated in the context of self-assembly, programmable matter and robotic motion planning. The problem of obtaining a given shape from a starting configuration is know to be NP-complete.We show that, for every $n$, there are \"sparse\" initial configurations of $n$ tokens (i.e., where no two tokens are in the same row or column) that can be rearranged into any $a\\times b$ box such that $ab=n$. However, only $1\\times k$, $2\\times k$ and $3\\times 3$ boxes are obtainable from any arbitrary sparse configuration with a matching number of tokens. We also study the problem of rearranging labeled tokens into a configuration of the same shape, but with permuted tokens. For every initial \"compact\" configuration of the tokens, we provide a complete characterization of what other configurations can be obtained by means of line pushes.",
"subjects": "Combinatorics (math.CO); Discrete Mathematics (cs.DM)",
"title": "Pushing Blocks by Sweeping Lines"
} |
https://arxiv.org/abs/2207.04261 | Fuzzy Clustering by Hyperbolic Smoothing | We propose a novel method for building fuzzy clusters of large data sets, using a smoothing numerical approach. The usual sum-of-squares criterion is relaxed so the search for good fuzzy partitions is made on a continuous space, rather than a combinatorial space as in classical methods \cite{Hartigan}. The smoothing allows a conversion from a strongly non-differentiable problem into differentiable subproblems of optimization without constraints of low dimension, by using a differentiable function of infinite class. For the implementation of the algorithm we used the statistical software $R$ and the results obtained were compared to the traditional fuzzy $C$--means method, proposed by Bezdek. | \section{Introduction}
Methods for making groups from data sets are usually based on the idea of disjoint sets, such as classical crisp clustering. The best known are hierarchical clustering and $k$-means \cite{Hartigan}, whose resulting clusters are sets with no intersection. However, this restriction may not be natural for some applications, where it may be more appropriate for some objects to belong to two or more clusters rather than to only one.
Several methods for constructing overlapping clusters have been proposed in the literature \cite{Pyramids,Dunn, Hartigan}.
Since Zadeh introduced the concept of fuzzy sets \cite{Zadeh}, the principle of belonging to several clusters has been used in the sense of a degree of membership to such clusters.
In this direction, Bezdek \cite{Bezdek} introduced a fuzzy clustering method that became very popular since it solved the problem of representation of clusters with centroids and the assignment of objects to clusters, by the minimization of a well-stated numerical criterion.
Several methods for fuzzy clustering have been proposed in the literature; a survey of these methods can be found in \cite{Yang}.
In this paper we propose a new fuzzy clustering method based on the numerical principle of hyperbolic smoothing \cite{Xav}.
Fuzzy $C$-Means method is presented in Section \ref{sec:FCM}
and our proposed Hyperbolic Smoothing Fuzzy Clustering method in Section \ref{sec:HSFC}.
Comparative results between these two methods are presented in Section \ref{sec:results}.
Finally, Section \ref{sec:conclusion} is devoted to the concluding remarks.
\section{Fuzzy Clustering}
\label{sec:FCM}
The most well-known method for fuzzy clustering is Bezdek's original $C$-means method \cite{Bezdek}, and
it is based on the same principles as $k$-means or dynamical clusters \cite{Bock}, that is, iterations over two main steps: i) class representation by the optimization of a numerical criterion, and ii) assignment to the closest class representative in order to construct clusters; these iterations are repeated until convergence to a local minimum of the overall quality criterion.
Let us introduce the notation that will be used and the numerical criterion for optimization.
Let $\textbf{X}$ be a $n \times p$ data matrix containing $p$ numerical observations over $n$ objects.
We look for a $K\times p$ matrix $\textbf{G}$ that represents centroids of $K$ clusters of the $n$ objects and an $n\times K$ membership matrix with elements $\mu_{ik}\in[0,1]$, such that the following criterion is minimized:
\begin{eqnarray}
\begin{array}{ll}
\multicolumn{2}{c}{W(\textbf{X}, \textbf{U},\textbf{G})=\displaystyle \sum_{i=1}^{n}\,\displaystyle \sum_{k=1}^{K}\, (\mu_{ik})^{m}\; \Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert^{2}}\\
\mbox{ subject to } & \sum_{k=1}^{K}\,\mu_{ik}=1, \mbox{ for all } i\in \{1,2,\ldots,n\}\\
& 0< \sum_{i=1}^{n}\,\mu_{ik}<n, \mbox{ for all }k\in \{1,2,\ldots,K\},
\end{array}
\label{funcional}
\end{eqnarray}
where $\textbf{x}_{i}$ is the $i$-th row of $\textbf{X}$ and $\textbf{g}_{k}$ is the $k$-th row of $\textbf{G}$, representing in $\mathbb{R}^{p}$ the centroid of the $k$-th cluster.
The parameter $m\neq 1$ in (\ref{funcional}) controls the fuzziness of the clusters. According to the literature \cite{Yang}, it is usual to take $m=2$, since greater values of $m$ tend to give very low values of $\mu_{ik}$, tending to the usual crisp partitions such as in $k$-means. We also assume that the number of clusters, $K$, is fixed.
Minimization of (\ref{funcional}) is a nonlinear optimization problem with constraints, which can be solved using Lagrange multipliers, as presented in \cite{Bezdek}. The solution for each row of the centroids matrix, given a membership matrix $\textbf{U}$, is:
\begin{equation}
\textbf{g}_{k}= { \sum_{i=1}^{n}\,(\mu_{ik})^{m}\textbf{x}_{i}} \left/ {\displaystyle \sum_{i=1}^{n}\,(\mu_{ik})^{m}}\right..
\label{centroids}
\end{equation}
The solution for the membership matrix, given a centroids matrix $\textbf{G}$, is \cite{Bezdek}:
\begin{equation}
\mu_{ik}=\left[ \sum_{j=1}^{K}\,\left( \frac{||\textbf{x}_{i}-\textbf{g}_{k}||^{2}}{||\textbf{x}_{i}-\textbf{g}_{j}||^{2}}\right)^{1/(m-1)}\right]^{-1}.
\label{membership}
\end{equation}
The following pseudo-code shows the main steps of Bezdek's Fuzzy $C$-Means method \cite{Bezdek}.
\paragraph{Bezdek's Fuzzy c-Means (FCM) Algorithm}
\begin{enumerate}
\item Initialize fuzzy membership matrix $\textbf{U}=[\mu_{ik}]_{n\times K}$
\item Compute centroids for fuzzy clusters according to (\ref{centroids})
\item Update membership matrix $\textbf{U}$ according to (\ref{membership})
\item If improvement in the criterion is less than a threshold, then stop; otherwise go to Step 2.
\end{enumerate}
Fuzzy $C$-Means method starts from an initial partition that is improved in each iteration, according to (\ref{funcional}), applying Steps 2 and 3 of the algorithm. It is clear that this procedure may lead to local optima of (\ref{funcional}) since iterative improvement in (\ref{centroids}) and (\ref{membership}) is made by a local search strategy.
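As a concrete illustration of the two alternating updates (\ref{membership}) and (\ref{centroids}), the following minimal NumPy sketch runs FCM on a toy data set (standalone code, not the \textit{fclust} implementation used later; the data and parameter values are illustrative, and the centroids are initialized from two data points for determinism instead of from a random membership matrix):

```python
import numpy as np

def fcm(X, G0, m=2.0, n_iter=100, tol=1e-9):
    """Minimal sketch of Bezdek's fuzzy C-means: alternate eqs. (3) and (2)."""
    G, prev = np.asarray(G0, dtype=float), np.inf
    for _ in range(n_iter):
        D2 = ((X[:, None, :] - G[None, :, :]) ** 2).sum(axis=2)
        D2 = np.maximum(D2, 1e-12)                  # guard: object lying on a centroid
        inv = D2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)    # eq. (3): memberships
        W = U ** m
        G = (W.T @ X) / W.sum(axis=0)[:, None]      # eq. (2): weighted centroids
        crit = (W * D2).sum()                       # criterion (1)
        if prev - crit < tol:
            break
        prev = crit
    return U, G, crit

# two well-separated groups of three points each
X = np.array([[0., 0.], [0., 1.], [1., 0.],
              [10., 10.], [10., 11.], [11., 10.]])
U, G, crit = fcm(X, G0=X[[0, 3]])
```

Each row of $\textbf{U}$ sums to one, and points within the same group end up with membership close to one in the same cluster.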
\section{Algorithm for Hyperbolic Smoothing Fuzzy Clustering}
\label{sec:HSFC}
For the clustering problem of the $n$ rows of data matrix \textbf{X} in $K$ clusters, we can seek for the minimum distance between every $\textbf{x}_{i}$ and its class center $\textbf{g}_{k}$:
\begin{equation*}
z_{i}^2=\displaystyle \min_{\textbf{g}_{k}\in \textbf{G}}\Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert^2_{2} \label{zi}
\end{equation*}
where $\Vert\cdot\Vert_{2}$ is the Euclidean norm.
The minimization can be stated as a sum-of-squares:
\begin{equation*}
\displaystyle \min \sum_{i=1}^{n}\, \min_{\textbf{g}_{k}\in \textbf{G}} \Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert_{2}^{2}= \min \sum_{i=1}^{n}\,z_{i}^{2}
\end{equation*}
leading to the following constrained problem:
$$
\min\displaystyle \sum_{i=1}^{n}\,z_{i}^{2} \mbox{ subject to } z_{i}=\displaystyle \min_{\textbf{g}_{k}\in \textbf{G}}\Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert_{2}, \mbox{ with } i=1,\ldots,n.
$$
\pagebreak
\noindent
This is equivalent to the following minimization problem:
$$
\min\displaystyle \sum_{i=1}^{n}\,z_{i}^{2} \mbox{ subject to } z_{i}-\Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert_{2}\leq 0 ,\mbox{ with } i=1,\ldots,n \mbox{ and } k=1,\ldots,K.
$$
Considering the function: $\varphi(y)=\max(0,y)$, we obtain the problem:
$$
\min\displaystyle \sum_{i=1}^{n}\,z_{i}^{2} \mbox{ subject to } \displaystyle \sum_{k=1}^{K}\, \varphi(z_{i}-\Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert_{2})=0\mbox{ for }i=1,\ldots,n.
$$
That problem can be re-stated as the following one:
$$
\min\displaystyle \sum_{i=1}^{n}\,z_{i}^{2} \mbox{ subject to } \displaystyle \sum_{k=1}^{K}\, \varphi(z_{i}-\Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert_{2})>0,\mbox{ for }i=1,\ldots,n.
$$
Given a perturbation $\epsilon>0$, this leads to the problem:
$$
\min\displaystyle \sum_{i=1}^{n}\,z_{i}^{2} \mbox{ subject to }\displaystyle \sum_{k=1}^{K}\, \varphi(z_{i}-\Vert\textbf{x}_{i}-\textbf{g}_{k}\Vert_{2})\geq \epsilon\mbox{ for }i=1,\ldots,n.
$$
It should be noted that the function $\varphi$ is not differentiable. We therefore apply a smoothing procedure in order to obtain a differentiable formulation and proceed with the minimization by a numerical method. For that, consider the function:
$
\psi(y,\tau)= \frac{y+\sqrt{y^{2}+\tau^{2}}}{2},
$
for all $y\in \mathbb{R}$, $\tau>0$, and the function:
$\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma)=\sqrt{ \sum_{j=1}^{p}\,(x_{ij}-g_{kj})^{2}+\gamma^{2}}$,
for $\gamma>0$.
Hence, the minimization problem is transformed into:
$$
\min\displaystyle \sum_{i=1}^{n}\,z_{i}^{2} \mbox{ subject to } \displaystyle \sum_{k=1}^{K}\, \psi(z_{i}-\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma),\tau)\geq \epsilon, \mbox{ for } i=1,\ldots,n.
$$
Finally, according to the Karush--Kuhn--Tucker conditions \cite{Kar, KT}, all the constraints are active and the final formulation of the problem is:
\begin{equation}
\begin{array}{ll}
& \min\displaystyle \sum_{i=1}^{n}\,z_{i}^{2}\\
\mbox{ subject to } & h_{i}(z_{i},\textbf{G})= \displaystyle\sum_{k=1}^{K}\, \psi(z_{i}-\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma),\tau)-\epsilon=0, \mbox{ for } i=1,\ldots,n,\\
& \epsilon,\tau,\gamma>0.
\end{array}
\label{problem}
\end{equation}
Based on (\ref{problem}), the Hyperbolic Smoothing Clustering Method stated in \cite{Xav} is presented in the following algorithm.
\paragraph{Hyperbolic Smoothing Clustering Method (HSCM) Algorithm}
\begin{enumerate}
\item Initialize cluster membership matrix $\textbf{U}=[\mu_{ik}]_{n\times K}$
\item Choose initial values: $\textbf{G}^{0}, \gamma^{1}, \tau^{1}, \epsilon^{1}$
\item Choose values: $0<\rho_{1}<1$, $0<\rho_{2}<1$, $0<\rho_{3}<1$
\item Let $l=1$
\item Repeat steps 6 and 7 until a stop condition is reached:
\item Solve problem (P): $\min f(\textbf{G})=\displaystyle \sum_{i=1}^{n}\,z_{i}^{2}$ with $\gamma=\gamma^{l}$, $\tau=\tau^{l}$ and $\epsilon=\epsilon^{l}$, $\textbf{G}^{l-1}$ being the initial value and $\textbf{G}^{l}$ the obtained solution
\item Let $\gamma^{l+1}=\rho_{1}\gamma^{l}$,\; $\tau^{l+1}=\rho_{2}\tau^{l}$,\; $\epsilon^{l+1}=\rho_{3}\epsilon^{l}$ and $l=l+1$.
\end{enumerate}
The most relevant task in the hyperbolic smoothing clustering method is finding the zeroes of the function
$h_{i}(z_{i},\textbf{G})= \sum_{k=1}^{K}\, \psi(z_{i}-\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma),\tau)-\epsilon=0$
for $i=1,\ldots,n$.
In this paper, we used the Newton-Raphson method for finding these zeroes \cite{Burden}, particularly the BFGS procedure \cite{Li}.
Convergence of the Newton--Raphson method was successful mainly thanks to a good choice of initial solutions.
In our implementation, these initial approximations were generated by calculating the minimum distance between the $i$-th object and the $k$-th centroid for a given partition.
Once the zeroes $z_{i}$ of the functions $h_{i}$ are obtained, the hyperbolic smoothing is applied.
The final step of the method consists of solving a finite number of optimization subproblems, corresponding to problem (P) in Step 6 of the HSCM algorithm.
Each of these subproblems was solved with the R routine \textit{optim} \cite{R}, a useful tool for solving nonlinear programming problems.
As far as we know, there is no closed-form solution for this step. In the future, we may write our own implementation, but in this paper we use this R routine.
Since at the solution we have $\sum_{k=1}^{K}\, \psi(z_{i}-\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma),\tau)=\epsilon$, each entry $\mu_{ik}$ of the membership matrix is given by:
$\mu_{ik}=\psi(z_{i}-\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma),\tau)/\epsilon.$
It is worth noting that the fuzziness is controlled by the parameter $\epsilon$.
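The zero-finding step can be sketched in a few self-contained lines (standalone Python with illustrative values; the paper's actual implementation uses R). Each $h_{i}(\cdot,\textbf{G})$ is smooth, strictly increasing and convex in $z_{i}$, so it has a unique zero and Newton's method converges from the starting point $\min_{k}\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma)$; at the zero, the memberships $\mu_{ik}$ sum to one over $k$ by construction:

```python
import numpy as np

def psi(y, tau):
    return 0.5 * (y + np.sqrt(y * y + tau * tau))      # smooth max(0, y)

def solve_z(theta_ik, tau, eps, n_iter=60):
    """Newton iterations for h(z) = sum_k psi(z - theta_ik, tau) - eps = 0."""
    z = theta_ik.min()                                 # start near the smallest distance
    for _ in range(n_iter):
        y = z - theta_ik
        r = np.sqrt(y * y + tau * tau)                 # r > 0 because tau > 0
        h = (0.5 * (y + r)).sum() - eps
        dh = (0.5 * (1.0 + y / r)).sum()               # > 0: h is strictly increasing
        z -= h / dh
    return z

# illustrative smoothed distances from one object to K = 3 centroids
theta_ik = np.array([1.0, 2.0, 3.0])
tau, eps = 1e-3, 1e-2
z = solve_z(theta_ik, tau, eps)
mu = psi(z - theta_ik, tau) / eps                      # fuzzy memberships
```

The memberships decrease with the distance to the centroid, and they sum to one up to the solver tolerance.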
The following algorithm contains the main steps of the Hyperbolic Smoothing Fuzzy Clustering (HSFC) method.
\paragraph{Hyperbolic Smoothing Fuzzy Clustering (HSFC) Algorithm}
\begin{enumerate}
\item Set $\epsilon>0$
\item Choose initial values for: $\textbf{G}^{0}$ (centroids matrix), $\gamma^{1}$, $\tau^{1}$ and $N$ (maximum number of iterations)
\item Choose values: $0<\rho_{1}<1$,\; $0<\rho_{2}<1$
\item Set $l=1$
\item While $l\leq N$:
\item Solve the problem (P): Minimize $f(\textbf{G})= \sum_{i=1}^{n}\,z_{i}^{2}$ with $\gamma=\gamma^{(l)}$ and $\tau=\tau^{(l)}$, with an initial point $\textbf{G}^{(l-1)}$ and $\textbf{G}^{(l)}$ being the obtained solution
\item Set $\gamma^{(l+1)}=\rho_{1}\gamma^{(l)}$, $\tau^{(l+1)}=\rho_{2}\tau^{(l)}$, and $l=l+1$
\item Set $\mu_{ik}=\psi(z_{i}-\theta(\textbf{x}_{i},\textbf{g}_{k},\gamma),\tau)/\epsilon$ for $i=1,\ldots,n$ and $k=1,\ldots,K$.
\end{enumerate}
\section{Comparative Results}
\label{sec:results}
Performance of the HSFC method was studied on a data table well known in the literature, Fisher's iris data \cite{Fisher}, and on 16 simulated data tables built with a semi-Monte Carlo procedure \cite{PSOclus}.
For comparing FCM and HSFC, we used the implementation of FCM in R package \textit{fclust} \cite{Gio}.
This comparison was based on the within-class sum-of-squares:\linebreak
$W(P)= \sum_{k=1}^{K}\, \sum_{i=1}^{n}\,\mu_{ik}\|\textbf{x}_{i}-\textbf{g}_{k}\|^{2}$.
Both methods were applied 50 times and the best value of $W$ is reported.
For simplicity, for HSFC we used the following parameters:
$\rho_{1}=\rho_{2}=\rho_{3}=0.25$, $\epsilon=0.01$, and $\gamma=\tau=0.001$ as initial values.
In Table \ref{tab:classic:tables} the results for Fisher's iris are shown; in this case HSFC performs slightly better. The table also contains the Adjusted Rand Index (ARI) \cite{ARI} between the HSFC solution and the best FCM result among 100 runs; the ARI compares fuzzy membership matrices crisped into hard partitions.
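For reference, the ARI of \cite{ARI} on hard partitions can be computed directly from contingency counts via the standard Hubert--Arabie formula; the sketch below (standalone Python, not the implementation used for the tables) illustrates it:

```python
from math import comb
from collections import Counter

def ari(a, b):
    """Adjusted Rand Index between two hard partitions given as label sequences."""
    n = len(a)
    pair = lambda counts: sum(comb(c, 2) for c in counts.values())
    index = pair(Counter(zip(a, b)))          # agreeing pairs, by contingency cell
    sum_a, sum_b = pair(Counter(a)), pair(Counter(b))
    expected = sum_a * sum_b / comb(n, 2)     # expectation under random labelings
    max_index = (sum_a + sum_b) / 2.0
    return (index - expected) / (max_index - expected)
```

Identical partitions give $1$ regardless of the label names, while unrelated partitions give values near $0$.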
\begin{table}
\centering
\caption{Minimum sum-of-squares (SS) reported for the Fisher's iris data table with HSFC and FCM, $K$ being the number of clusters, and ARI comparing both methods. Best method in bold.}
\label{tab:classic:tables}
\begin{tabular}{p{20mm}|p{5mm}p{20mm}p{20mm}p{8mm}}
\hline\noalign{\smallskip}
Table & $K$ & SS for HSFC & SS for FCM & ARI\\
\noalign{\smallskip}\hline\noalign{\smallskip}
& 2 & \textbf{152.348} & 152.3615 & \multicolumn{1}{c}{1}\\
{Fisher's iris} & 3 & \textbf{78.85567} & 78.86733 & 0.994\\
& 4 & 57.26934 & 57.26934 & 0.980\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}
Simulated data tables were generated in a controlled experiment as in \cite{PSOclus}, with random numbers following a Gaussian distribution. Factors of the experiment were:
\vspace{-0.2cm}
\begin{itemize}
\item The number of objects (with 2 levels, $n=105$ and $n=525$).
\item The number of clusters (with levels $K=3$ and $K=7$).
\item Cardinality (card) of clusters, with levels i) all with the same number of objects (coded as card($=$)), and ii) one large cluster with 50\% of objects and the rest with the same number (coded as card($\not=$)).
\item Standard deviation of clusters, with levels i) all Gaussian random variables with standard deviation (SD) equal to one (coded as SD($=$)), and ii) one cluster with SD=3 and the rest with SD=1 (coded as SD($\not=$)).
\end{itemize}
Table \ref{table:nombre} contains codes for simulated data tables according to the codes we used.
\begin{table}
\centering
\caption{Codes and characteristics of simulated data tables; $n$: number of objects, $K$: number of clusters, card: cardinality, SD: standard deviation.}\label{table:nombre}
\begin{tabular}{lp{5cm}|lp{4.1cm}}
\hline\noalign{\smallskip}
{Table} & {Characteristics} & {Table} & {Characteristics} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
T1 & $n=525$, $K=3$, card($=$), SD($=$) &
T9 & $n=525$, $K=3$, card($\not=$), SD($=$) \\
T2 & $n=525$, $K=7$, card($=$), SD($=$) &
T10 & $n=525$, $K=7$, card($\not=$), SD($=$) \\
T3 & $n=105$, $K=3$, card($=$), SD($=$) &
T11 & $n=105$, $K=3$, card($\not=$), SD($=$) \\
T4 & $n=105$, $K=7$, card($=$), SD($=$) &
T12 & $n=105$, $K=7$, card($\not=$), SD($=$) \\
T5 & $n=525$, $K=3$, card($=$), SD($\not=$) &
T13 & $n=525$, $K=3$, card($\not=$), SD($\not=$) \\
T6 & $n=525$, $K=7$, card($=$), SD($\not=$) &
T14 & $n=525$, $K=7$, card($\not=$), SD($\not=$)\\
T7 & $n=105$, $K=3$, card($=$), SD($\not=$) &
T15 & $n=105$, $K=3$, card($\not=$), SD($\not=$)\\
T8 & $n=105$, $K=7$, card($=$), SD($\not=$) &
T16 & $n=105$, $K=7$, card($\not=$), SD($\not=$) \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{table}
Table \ref{table:results} contains the minimum values of the sum-of-squares obtained with our HSFC method and Bezdek's FCM method; the best solution out of 100 random applications of FCM is reported, together with one run of HSFC. It also contains the ARI values comparing the HSFC solution with that best FCM solution.
It can be seen that, in general, the HSFC method tends to obtain better results than FCM, with only a few exceptions. In 23 cases HSFC obtains better results, FCM is better in 5 cases, and the results are the same in 17 cases. However, the ARI shows that the partitions obtained by the two methods tend to be very similar.
\begin{table}
\centering
\caption{Minimum sum-of-squares (SS) reported for HSFC and FCM methods on the simulated data tables.
Best method in bold.}
\label{table:results}
{\small
\begin{tabular}{p{7mm}p{3mm}p{12mm}p{12mm}p{18mm}|p{7mm}p{3mm}p{12mm}p{12mm}p{12mm}}
\hline\noalign{\smallskip}
Table & $K$ & SS for & SS for & ARI &
Table & $K$ & SS for & SS for & ARI\\
& & HSFC & FCM & &
& & HSFC & FCM &\\
\noalign{\smallskip}\hline\noalign{\smallskip}
& 2 & \textbf{7073.402} & 7073.814 & 0.780 & & 2 & 12524.31 & 12524.31 & 0.900\\
T1 & 3 & 3146.119 & 3146.119 & 1 &
T9 & 3 & \textbf{9269.361} & 9269.611 & 1\\
& 4 & 2983.651 & 2983.651 & 1 & & 4 & 6298.47 & \textbf{6298.368} & 1 \\ \hline
& 2 & \textbf{16987.19} & 16987.71 & 0.764 & & 2 & \textbf{5466.893} & 5466.912 & 0.890\\
T2 & 3 & 11653.22 & 11653.22 & 1 &
T10 & 3 & 2977.58 & 2977.58 & 1\\
& 4 & \textbf{7776.855} & 7777.396 & 1 & & 4 & \textbf{2745.721} & 2746.671 & 1 \\ \hline
& 2 & \textbf{3923.051} & 3923.062 & 0.763 & & 2 & \textbf{2969.247} & 2969.32 & 0.860\\
T3 & 3 & 2917.13 & 2917.13 & 0.754 &
T11 & 3 & 1912.323 & 1912.323 & 1\\
& 4 & 2287.523 & \textbf{2256.298} & 0.993 & & 4 & 1401.394 & 1401.394 & 1\\ \hline
& 2 & \textbf{1720.365} & 1720.374 & 0.992 & & 2 & 1816.056 & 1816.056 & 1\\
T4 & 3 & 569.3112 & 569.3112 & 1 &
T12 & 3 & 525.7118 & 525.7118 & 1 \\
& 4 & 535.5491 & \textbf{535.3541} & 1 & & 4 & \textbf{477.0593} & 477.2696 & 1\\ \hline
& 2 & 15595.67 & 15595.67 & 0.910 & & 2 & \textbf{12804.03} & 12805.05 & 0.920 \\
T5 & 3 & \textbf{11724.93} & 11725.28 & 1 &
T13 & 3 & \textbf{8816.805} & 8817.702 & 1\\
& 4 & 8409.738 & 8409.738 & 0.984 & & 4 & \textbf{6293.774} & 6293.951 & 1\\ \hline
& 2 & 11877.96 & 11877.96 & 0.970 & & 2 & \textbf{16228.07} & 16228.98 & 0.920\\
T6 & 3 & \textbf{8299.779} & 8300.718 & 1 &
T14 & 3 & \textbf{7255.113} & 7255.423 & 1\\
& 4 & \textbf{7212.611} & 7213.725 & 1 & & 4 & 6427.313 & 6427.313 & 1\\ \hline
& 2 & \textbf{4336.261} & 4336.507 & 0.955 & & 2 & \textbf{2616.286} & 2616.943 & 1 \\
T7 & 3 & 3041.076 & 3041.076 & 1 &
T15 & 3 & \textbf{1978.017} & 1978.233 & 1\\
& 4 & \textbf{2395.683} & 2421.333 & 1 & & 4 & \textbf{1526.895} & 1526.953 & 1 \\ \hline
& 2 & 1767.43 & 1767.43 & 1 & & 2 & 2226.923 & \textbf{2226.212} & 0.962 \\
T8 & 3 & \textbf{1380.766} & 1381.019 & 1 &
T16 & 3 & \textbf{1232.074} & 1232.124 & 1 \\
& 4 & 1215.302 & \textbf{1211.235} & 1 & & 4 & \textbf{982.7074} & 982.9721 & 1\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
}
\end{table}
\section{Concluding Remarks}
\label{sec:conclusion}
In hyperbolic smoothing, the parameters $\tau$, $\gamma$ and $\epsilon$ tend to zero, so the constraints in the subproblems make problem (P) tend to the solution of (\ref{funcional}).
The parameter $\epsilon$ controls the degree of fuzziness in the clustering: the larger it is, the fuzzier the solution; the smaller it is, the crisper the clustering.
In order to compare results and the efficiency of the HSFC method, the zeroes of the functions $h_{i}$ can be obtained with any method for solving equations in one variable or with a predefined routine.
According to the results obtained so far with our implementation of hyperbolic smoothing for fuzzy clustering, we can conclude that, in general, the HSFC method performs slightly better than Bezdek's original FCM on small real and simulated data tables.
Further research is required to test the performance of the HSFC method on very large data sets, with measures of efficiency, solution quality and running time.
We are also considering further comparisons between HSFC and FCM with different indices, and writing our own program for solving Step 6 of the HSFC algorithm, that is, the minimization of $f(\textbf{G})$, instead of using the \textit{optim} routine in R.
\subsection*{Acknowledgements}
D. Mas\'is acknowledges the School of Mathematics of the Costa Rica Institute of Technology for their support; this work is part of his M.Sc. dissertation at the University of Costa Rica.
E. Segura and J. Trejos acknowledge the Research Center for Pure and Applied Mathematics (CIMPA) of the University of Costa Rica for their support.
A.E. Xavier acknowledges the Federal University of Rio de Janeiro and the Federal University of Juiz de Fora for their support.
| {
"timestamp": "2022-07-12T02:09:01",
"yymm": "2207",
"arxiv_id": "2207.04261",
"language": "en",
"url": "https://arxiv.org/abs/2207.04261",
"abstract": "We propose a novel method for building fuzzy clusters of large data sets, using a smoothing numerical approach. The usual sum-of-squares criterion is relaxed so the search for good fuzzy partitions is made on a continuous space, rather than a combinatorial space as in classical methods \\cite{Hartigan}. The smoothing allows a conversion from a strongly non-differentiable problem into differentiable subproblems of optimization without constraints of low dimension, by using a differentiable function of infinite class. For the implementation of the algorithm we used the statistical software $R$ and the results obtained were compared to the traditional fuzzy $C$--means method, proposed by Bezdek.",
"subjects": "Machine Learning (stat.ML); Machine Learning (cs.LG)",
"title": "Fuzzy Clustering by Hyperbolic Smoothing",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9693242000616579,
"lm_q2_score": 0.7310585844894971,
"lm_q1q2_score": 0.7086327776084898
} |
https://arxiv.org/abs/2211.17055 | Latitudinal regionalization of rotating spherical shell convection | Convection occurs ubiquitously on and in rotating geophysical and astrophysical bodies. Prior spherical shell studies have shown that the convection dynamics in polar regions can differ significantly from the lower latitude, equatorial dynamics. Yet most spherical shell convective scaling laws use globally-averaged quantities that erase latitudinal differences in the physics. Here we quantify those latitudinal differences by analyzing spherical shell simulations in terms of their regionalized convective heat transfer properties. This is done by measuring local Nusselt numbers in two specific, latitudinally separate, portions of the shell, the polar and the equatorial regions, $Nu_p$ and $Nu_e$, respectively. In rotating spherical shells, convection first sets in outside the tangent cylinder such that equatorial heat transfer dominates at small and moderate supercriticalities. We show that the buoyancy forcing, parameterized by the Rayleigh number $Ra$, must exceed the critical equatorial forcing by a factor of $\approx 20$ to trigger polar convection within the tangent cylinder. Once triggered, $Nu_p$ increases with $Ra$ much faster than does $Nu_e$. The equatorial and polar heat fluxes then tend to become comparable at sufficiently high $Ra$. Comparisons between the polar convection data and Cartesian numerical simulations reveal quantitative agreement between the two geometries in terms of heat transfer and averaged bulk temperature gradient. This agreement indicates that spherical shell rotating convection dynamics are accessible both through spherical simulations and via reduced investigatory pathways, be they theoretical, numerical or experimental. | \section{Introduction}
It has long been known that spherical shell rotating convection significantly differs between the low latitudes
\citep[e.g.,][]{Busse77, Gillet06} situated outside the
axially-aligned cylinder that circumscribes the inner spherical
shell boundary (the tangent cylinder, TC)
and the higher latitude polar regions lying within the TC
\citep[e.g.,][]{Aurnou03, Sreenivasan06, Aujogue18, Cao18}.
Further, in the atmosphere-ocean literature, latitudinal separation into polar,
mid-latitude, extra-tropical and tropical zones is essential to accurately model
the large-scale dynamics \cite[e.g.,][]{Vallis17}.
Yet few scaling studies of spherical shell convection consider the innate
regionalization of the dynamics \cite[cf.][]{Wang21}, and instead
mostly focus on globally-averaged quantities \citep[e.g.,][]{Gastine16,Long20}.
In the turbulent rapidly-rotating limit, theory requires the convective heat
transport to be independent of the fluid diffusivities regardless of system
geometry. This yields \citep[e.g.][]{Julien12, Plumley19}
\begin{equation}
Nu \sim (Ra/Ra_c)^{3/2} \sim \widetilde{Ra}^{3/2}Pr^{-1/2} \sim Ra^{3/2} E^{2}
Pr^{-1/2}\,,
\label{eq:ult}
\end{equation}
where, defined explicitly below, the Nusselt number $Nu$ is the nondimensional
heat transfer, $Ra$ ($Ra_c$) denotes the (critical) Rayleigh number, $E$ is the
Ekman number, $Pr$ is the Prandtl number, and $\widetilde{Ra} \equiv
Ra\,E^{4/3}$ expresses the generalized convective supercriticality
\citep{Julien12}.
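As a sanity check, the three forms of this scaling are algebraically equivalent once $Ra_c \propto E^{-4/3}$ is substituted; the short sketch below verifies this numerically (the prefactor $c$ in $Ra_c = c\,E^{-4/3}$ is illustrative, not a value from the literature).

```python
# Verify that the three forms of the diffusion-free heat transfer scaling,
# Nu ~ (Ra/Ra_c)^(3/2) ~ Ra_tilde^(3/2) Pr^(-1/2) ~ Ra^(3/2) E^2 Pr^(-1/2),
# coincide (up to the critical prefactor) when Ra_c = c * E^(-4/3).

def nu_forms(Ra, E, Pr=1.0, c=1.0):
    Ra_c = c * E**(-4.0 / 3.0)          # critical Rayleigh number, Ra_c ~ E^(-4/3)
    Ra_tilde = Ra * E**(4.0 / 3.0)      # generalized supercriticality
    f1 = (Ra / Ra_c)**1.5               # supercriticality form (Pr = 1, c = 1)
    f2 = Ra_tilde**1.5 * Pr**-0.5       # form based on Ra_tilde
    f3 = Ra**1.5 * E**2 * Pr**-0.5      # form based on the raw parameters
    return f1, f2, f3

f1, f2, f3 = nu_forms(Ra=1e10, E=1e-6)
```

With $Ra = 10^{10}$ and $E = 10^{-6}$, one has $\widetilde{Ra} = 10^{10}\times 10^{-8} = 100$, so each form evaluates to $100^{3/2} = 1000$.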
Cylindrical laboratory experiments with $Pr\approx 7$ and
Cartesian (planar) numerical simulations with $Pr=(1, 7)$ and no-slip boundaries
with
$Ra/Ra_c \lesssim 10$ reveal a steep scaling $Nu \sim (Ra/Ra_c)^\beta$
with $\beta\approx 3$ \citep{King12, Cheng15, Cheng18}. By comparing numerical
models with stress-free and no-slip boundaries, \citet{Stellmach14} showed that
the steep $\beta\approx 3$ scaling is an Ekman pumping effect
\citep[cf.][]{Julien16}.
For larger supercriticalities, $\beta$ decreases and
gradually approaches (\ref{eq:ult}).
This $\beta \approx 3$ regime is expected to hold as long as the thermal
boundary layers
are in quasi-geostrophic balance, a condition approximated by $Ra\,E^{8/5}
\lesssim 1$ \citep{Julien12a}.
Globally-averaged quantities in spherical shell models present several
differences with the planar configuration. In particular, no
steep $\beta \approx 3$ exponent is observed. \citet{Gastine16} showed that the
globally-averaged heat transfer first follows a $Nu-1 \sim Ra/Ra_c -1$
weakly-nonlinear scaling for $Ra \leq 6\,Ra_c$ before transitioning to
a scaling close to (\ref{eq:ult}) for $Ra > 6\,Ra_c $ and $Ra E^{8/5} < 0.4$.
Spherical shell models with a radius ratio $r_i/r_o=0.35$ and
fixed-flux thermal
conditions recover similar global scaling behaviors,
though with a slightly larger exponent $\beta \approx 1.75$ for $E=2\times
10^{-6}$ \citep{Long20}. Because the Ekman pumping enhancement of heat
transfer is maximized when rotation and gravity are aligned, $\beta$ is lower
in the equatorial regions of spherical shells. This explains why
globally-averaged spherical $\beta$ values cannot attain the $\beta \approx 3$
values found in planar (polar-like) studies.
Recently, \citet{Wang21} analysed heat transfer
within the equatorial regions, at mid-latitudes, and inside the entire
TC. They argued that the mid-latitude scaling in their
models, similar to \citet{Gastine16}'s global scaling, follows the
diffusion-free scaling (\ref{eq:ult}), whilst the region inside the TC follows
a $\beta \approx 2.1$ trend.
This TC scaling exponent is significantly smaller than those obtained in planar
models, possibly because of the finite inclination angle between gravity and
the
rotation axis averaged over the volume of the TC.
Following \citet{Wang21}, this study aims to better characterize the
latitudinal variations in rotating convection dynamics and quantify the
differences between spherical and non-spherical geometries. To do so, we carry
out local heat transfer analyses in the polar and equatorial regions over an
ensemble of $Pr=1$ rotating spherical shell
simulations with $r_i/r_o=0.35$ and $r_i/r_o=0.6$.
\section{Hydrodynamical model}
\label{sec:model}
We consider a volume of fluid bounded by two spherical surfaces of inner radius
$r_i$ and outer radius $r_o$ rotating about the $z$-axis with a constant
rotation rate $\Omega$. Both boundaries are mechanically no-slip and are held at constant temperatures
$T_o=T(r_o)$ and $T_i=T(r_i)$.
We adopt a dimensionless formulation
of the Navier-Stokes equations using the shell gap $d=r_o-r_i$ as the reference
lengthscale, the temperature contrast $\Delta T=T_i-T_o$ as the temperature
unit, and the inverse of the rotation rate $\Omega^{-1}$ as the time scale.
Under the Boussinesq approximation, this yields the following set of
dimensionless equations for the velocity $\vec{u}$ and temperature $T$
expressed in spherical coordinates
\begin{equation}
\dfrac{\partial \vec{u}}{\partial
t}+\vec{u}\cdot\vec{\nabla}\vec{u}+2\vec{e_z}\times\vec{u}=-\vec{\nabla}p +
\dfrac{Ra E^2}{Pr} T\,g(r)\vec{e_r}+ E\,\vec{\nabla}^2\vec{u},
\quad\vec{\nabla}\cdot\vec{u}=0,
\label{eq:ns}
\end{equation}
\begin{equation}
\dfrac{\partial T}{\partial t}+\vec{u}\cdot\vec{\nabla}T =
\dfrac{E}{Pr}\vec{\nabla}^2 T\,,
\label{eq:temp}
\end{equation}
where $p$ corresponds to the non-hydrostatic pressure, $g$ to gravity and
$\vec{e_r}$
($\vec{e_z}$) denotes the unit vector in the radial (axial) direction.
The above equations
are governed by the dimensionless Rayleigh, Ekman and Prandtl numbers,
respectively defined by
\begin{equation}
Ra=\dfrac{\alpha g_o \Delta T d^3}{\nu \kappa},\ E=\dfrac{\nu}{\Omega d^2},\
Pr = \dfrac{\nu}{\kappa},
\label{eq:numbers}
\end{equation}
where $\nu$ and $\kappa$ correspond to the constant kinematic viscosity and
thermal diffusivity, and $\alpha$ is the thermal expansion coefficient.
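For concreteness, these definitions can be evaluated directly from dimensional inputs. The sketch below uses made-up, laboratory-like values (a water-like fluid in a 20 cm rotating gap), not parameters from the simulations in this paper.

```python
# Evaluate the control parameters Ra, E and Pr of Eq. (3) from dimensional inputs.

def control_parameters(alpha, g_o, dT, d, nu, kappa, Omega):
    Ra = alpha * g_o * dT * d**3 / (nu * kappa)  # Rayleigh number
    E = nu / (Omega * d**2)                      # Ekman number
    Pr = nu / kappa                              # Prandtl number
    return Ra, E, Pr

# Hypothetical water-like experiment: 20 cm gap, 5 K contrast, ~30 rpm rotation.
Ra, E, Pr = control_parameters(alpha=2e-4, g_o=9.81, dT=5.0, d=0.2,
                               nu=1e-6, kappa=1.4e-7, Omega=3.14)
```

These values land at $Ra \approx 5.6\times 10^8$, $E \approx 8\times 10^{-6}$ and $Pr \approx 7$, i.e., within the broad parameter ranges surveyed here.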
Two spherical shell configurations are employed: (\textit{i}) a
thin shell with $r_i/r_o=0.6$ under the assumption of a centrally-condensed
mass with $g=(r_o/r)^2$ \citep{Glatz1}; (\textit{ii}) a
self-gravitating thicker spherical shell model with $r_i/r_o=0.35$ and
$g=r/r_o$. The latter corresponds to the standard configuration employed in
numerical models of Earth's dynamo \citep[e.g.][]{Christensen06, Schwaiger19}.
We consider numerical simulations with $10^4 \leq Ra \leq 10^{11}$, $10^{-7}
\leq E \leq 10^{-2}$ and $Pr=1$ computed with the open source code
\texttt{MagIC}\footnote{\url{https://github.com/magic-sph/magic}}
\citep{Wicht02,Gastine12}. We mostly build the current study on existing
numerical simulations from \cite{Gastine16} and \cite{Schwaiger21} and continue
their time integration to gather additional diagnostics when required.
In the following analyses, overbars denote time averages, triangular brackets denote
azimuthal averages, and square brackets denote averages over the angular sector
between the colatitudes $\theta_0-\alpha$ and $\theta_0+\alpha$ (expressed in
radians):
\[
\bar{f} = \dfrac{1}{\tau}\int_{t_0}^{t_0+\tau}f\,\mathrm{d}t,\quad
\langle f \rangle = \dfrac{1}{2\pi}\int_{0}^{2\pi}
f(r,\theta,\phi,t)\mathrm{d}\phi, \quad
\left[ f \right]_{\theta_0}^{\alpha} =
\dfrac{1}{\mathcal{S}_{\theta_0}^\alpha}\int_{\mathcal{S}_{\theta_0}^\alpha}
f(r,\theta,\phi,t)\mathrm{d}\mathcal{S},
\]
with $\mathrm{d}\mathcal{S} = \sin\theta \mathrm{d}\theta$
and
$\mathcal{S}_{\theta_0}^\alpha=\int_{\max(\theta_0-\alpha,0)}^{
\min(\theta_0+\alpha,\pi) } \sin\theta\,\mathrm{d}\theta$.
For the sake of clarity, we introduce the following notations to characterize
the time-averaged radial distribution of temperature
\[
\vartheta(r)=[\langle \bar{T}
\rangle ]_{\pi/2}^{\pi/2}, \quad
\vartheta_e(r)= [\langle \bar{T}
\rangle ]_{\pi/2}^{\pi/36}, \quad
\vartheta_p(r)=\dfrac{1}{2}\left([\langle \bar{T}
\rangle ]_{0}^{\pi/36}+[\langle \bar{T}
\rangle ]_{\pi}^{\pi/36}\right)\,,
\]
where $\vartheta_e$ and $\vartheta_p$ correspond to the averaged radial
distribution of temperature in the equatorial and polar regions,
respectively, and $\alpha = \pi/36$ rad corresponds to $5^\circ$ in
colatitudinal angle.
The schematic shown in Fig.~\ref{fig:th_Ranu}(\textit{a}) highlights the
fluid volumes involved in these measures.
The value of $\alpha=5^\circ$ is quite arbitrary and has been adopted to allow
a comparison of polar data with local planar Rayleigh-B\'enard
convection (hereafter RBC) models while keeping a
sufficient sampling.
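The sector average $[f]_{\theta_0}^{\alpha}$ can be sketched numerically as follows; the discretization and the test field $f=\cos^2\theta$ are illustrative, chosen only so that the polar caps and the equatorial band give visibly different values.

```python
import numpy as np

# Colatitudinal sector average [f]_{theta0}^{alpha}: average of f(theta)
# over [theta0 - alpha, theta0 + alpha] (clipped to [0, pi]) with the
# spherical sin(theta) weight.

def sector_average(f, theta, theta0, alpha):
    lo = max(theta0 - alpha, 0.0)
    hi = min(theta0 + alpha, np.pi)
    mask = (theta >= lo) & (theta <= hi)
    w = np.sin(theta[mask])
    return np.sum(f[mask] * w) / np.sum(w)

theta = np.linspace(0.0, np.pi, 20001)
f = np.cos(theta)**2                     # illustrative test field
alpha = np.pi / 36                       # 5 degrees, as in the text

f_eq = sector_average(f, theta, np.pi / 2, alpha)        # equatorial band
f_pol = 0.5 * (sector_average(f, theta, 0.0, alpha)
               + sector_average(f, theta, np.pi, alpha))  # two polar caps
```

For this test field, the polar caps return values near 1 and the equatorial band values near 0, mimicking the polar/equatorial splitting of the $\vartheta_p$ and $\vartheta_e$ diagnostics.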
To quantify the differences between the heat transfer
in the polar and equatorial regions, we introduce a Nusselt
number that depends on colatitude $\theta$ via
\begin{equation}
Nu_i(\theta) = \dfrac{\left.\frac{\mathrm{d} \langle \bar{T} \rangle
}{\mathrm{d}{r}}\right|_{r_i}}{
\left.\frac{\mathrm{d} T_c}{\mathrm{d}{r}}\right|_{r_i}}, \quad
Nu_o(\theta) = \dfrac{\left.\frac{\mathrm{d} \langle \bar{T} \rangle
}{\mathrm{d}{r}}\right|_{r_o}}{
\left.\frac{\mathrm{d} T_c}{\mathrm{d}{r}}\right|_{r_o}}, \quad
\dfrac{\mathrm{d} T_c}{\mathrm{d} r} = -\dfrac{r_i r_o}{r^2}\,,
\end{equation}
where $T_c$ corresponds to the dimensionless temperature of the conducting
state.
The corresponding local Nusselt numbers in the equatorial and polar regions are
then defined by
\begin{equation}
Nu_e=[Nu(\theta)]_{\pi/2}^{\pi/36}, \quad
Nu_p=\dfrac{1}{2}\left([Nu(\theta)]_{0}^{\pi/36}+[Nu(\theta)
]_{\pi}^{\pi/36}\right)\,.
\label{eq:nuloc}
\end{equation}
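A minimal numerical sketch of this diagnostic, under an assumed discretization: the boundary temperature gradient is normalized by the conductive gradient $-r_i r_o/r^2$. As a consistency check, feeding in the conductive profile $T_c(r) = r_i r_o/r - r_i$ (which satisfies $\mathrm{d}T_c/\mathrm{d}r = -r_i r_o/r^2$ and spans a unit temperature contrast when $d = 1$) must return $Nu \approx 1$.

```python
import numpy as np

# Local Nusselt number at the inner boundary, Eq. (4): radial temperature
# gradient at r = r_i normalized by the conductive gradient -r_i r_o / r^2.
# Grid resolution and the radius ratio r_i/r_o = 0.6 are illustrative.

r_i, r_o = 1.5, 2.5                       # r_i/r_o = 0.6 with unit gap d = 1
r = np.linspace(r_i, r_o, 4001)

def nusselt_inner(T, r, r_i, r_o):
    dTdr = (T[1] - T[0]) / (r[1] - r[0])  # one-sided gradient at r = r_i
    dTcdr = -r_i * r_o / r_i**2           # conductive gradient at r = r_i
    return dTdr / dTcdr

T_c = r_i * r_o / r - r_i                 # conductive profile
Nu_cond = nusselt_inner(T_c, r, r_i, r_o)  # should be ~1 up to truncation error
```

In the actual diagnostics the gradient is taken from the azimuthally and time averaged temperature $\langle \bar{T} \rangle$ at each colatitude; the same normalization applies at $r_o$.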
We finally introduce the mid-shell time-averaged temperature gradient in the
polar region
\begin{equation}
\partial T = \dfrac{%
-\left.\frac{\mathrm{d}\vartheta_p}{
\mathrm {d} r}\right|_{r=r_m}}{%
-\left.\frac{\mathrm{d}T_c}{
\mathrm {d} r}\right|_{r=r_m}
}, \quad r_m=\dfrac{1}{2}(r_i+r_o)\,,
\label{eq:beta}
\end{equation}
where normalisation by the conductive temperature gradient allows us to
compare the scaling behaviour of $\partial T$ between spherical
shells of different radius ratio values, $r_i/r_o$, and planar models.
\section{Results}
\label{sec:results}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{th_Ranu_big}
\caption{(\textit{a}) Schematic showing the area selection to compute
(\ref{eq:nuloc}), the local polar (blue) and equatorial (red) Nusselt numbers.
(\textit{b}) Time-averaged local Nusselt numbers in the polar ($Nu_p$) and
equatorial ($Nu_e$) regions as a function of the Rayleigh number for spherical
shell simulations with $r_i/r_o=0.6$ and $g=(r_o/r)^2$ and $Pr=1$
\citep{Gastine16}. The different Ekman numbers are denoted by different symbol
shapes, the two spherical shell surfaces $r_i$ and $r_o$ are marked by open and
filled symbols, and by lower levels of opacity, respectively.}
\label{fig:th_Ranu}
\end{figure}
Figure~\ref{fig:th_Ranu}(\textit{b}) shows $Nu_p$ and $Nu_e$ as a
function of $Ra$ for various $E$ at both boundaries, $r_i$ and $r_o$, for
spherical shell simulations with $r_i/r_o=0.6$ and $g=(r_o/r)^2$. Rotation
delays the onset of convection such that the critical Rayleigh number required to trigger
convective motions increases with decreasing Ekman number, $Ra_c \sim
E^{-4/3}$.
Convection first sets in outside the tangent cylinder
\citep[e.g.][]{Dormy04}. For each Ekman number, the heat transfer
in the equatorial regions (red symbols) first rises slowly, following a weakly nonlinear scaling
\citep[e.g.][]{Gillet06}, until the vicinity of $Nu_e \approx
2$. At $Nu_e \gtrsim 2$, the heat transfer increases more steeply with $Ra$, before
gradually tapering off toward the non-rotating RBC trend
\citep[e.g.][]{Gastine15}.
For $Ra/Ra_c > \mathcal{O}(10)$, convection onsets in the polar regions and $Nu_p$
steeply rises with $Ra$ with a much larger exponent than $Nu_e$. At still larger
forcings, the slope of $Nu_p$ gradually decreases and comparable amplitudes
in polar and equatorial heat transfers are observed.
Heat transfer scalings at both spherical shell boundaries $r_i$ and $r_o$
follow similar trends.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{th_RaEkNu}
\caption{(\textit{a}) Nusselt number in the polar ($Nu_p$) and in the
equatorial ($Nu_e$) regions as a function of $\widetilde{Ra}=Ra\,E^{4/3}$
in the $r_i/r_o = 0.6$ simulations.
The symbols carry the same meaning as in
Fig.~\ref{fig:th_Ranu} but with only the $Ra\,E^{8/5} < 2$ simulations retained. (\textit{b}) Ratio of polar and equatorial heat
transfer $Nu_p/Nu_e$ as a function of $\widetilde{Ra}$ for both spherical shell
boundaries and $E \leq 10^{-4}$.}
\label{fig:thRaEkNu}
\end{figure}
Figure~\ref{fig:thRaEkNu} shows (\textit{a}) $Nu_p$ and $Nu_e$ and (\textit{b})
their ratio $Nu_p/Nu_e$ plotted at both boundaries as a function of the
supercriticality parameter $\widetilde{Ra} = Ra E^{4/3}$. For $\widetilde{Ra} <
4$, $Nu_e$ increases following the weakly nonlinear form $Nu_e-1\sim Ra/Ra_c-1$
\citep[][\S3.1]{Gastine16}. For larger supercriticalities, the $Nu_e$
scaling steepens and an additional $E$-dependence causes the data to
fan out, possibly because these highest $\widetilde{Ra}$ cases do not fulfill
$Ra\,E^{8/5}<0.4$.
There is no clear power law scaling in the $Nu_e (\widetilde{Ra} < 10)$ data,
but the steepest local slope yields $\max(\beta) \approx 1.9$ in the $5 \leq
\widetilde{Ra} \leq 10$ range.
Best fits to the Fig.~\ref{fig:thRaEkNu}(\textit{a}) data show that polar
convection onsets at $\widetilde{Ra} (E) = 11.2 \pm 0.3$ in the $r_i/r_o = 0.6$
simulations. The mean value of the critical polar Rayleigh number is
\begin{equation}
Ra_c^p = 11.2\,E^{-4/3}.
\label{RaP}
\end{equation}
Although the polar onset of convection, estimated via $Ra_c^p \, E^{4/3}$,
remains nearly constant, the global (i.e., low latitude) onset value, estimated
by $Ra_c \, E^{4/3}$, varies by a factor of $\approx 2$ over our $E$ range.
Their ratio then yields
\begin{equation}
Ra_c^p (E)/Ra_c(E) = 20 \pm 5.
\end{equation}
This means that rotating convection does not typically onset in the polar
regions until the lower latitude convection is roughly 20 times supercritical
and thus already operating under strongly supercritical conditions. This difference
between equatorial and polar convective onsets imparts a significant regionalization
to spherical shell rotating convection from the outset.
We find, throughout this investigation, that polar rotating convection compares
closely to its plane layer counterpart. However, it is not expected that the
polar critical Rayleigh number will exactly agree with plane layer
predictions, due to the effects of finite spherical curvature as well as the
radial variations of gravity in these $r_i/r_o = 0.6$ simulations. In the
rapidly-rotating thin shell limit, in which $r_i/r_o \rightarrow 1$ and $E$ is
kept asymptotically small, $Ra_c^p$ will likely approach the
planar value.
Still, the polar scaling in \eqref{RaP} is found to be 51\% of the plane layer
$E \rightarrow 0$ scaling prediction, $Ra_c = 21.9\,E^{-4/3}$
\citep[][]{Kunnen21}, and to be 56\% of \citet{Niiler65}'s finite Ekman number,
no-slip plane layer $Ra_c $ prediction at $E = 10^{-6}$. In addition to the
similarity in critical $Ra$ values, it is found that the polar heat transfer
$Nu_p$ rises sharply once polar convection onsets, following a $Nu_p \sim
\widetilde{Ra}^3$ scaling that
matches the heat transfer scalings found in no-slip planar simulations carried
out over the same $(E, Pr)$ ranges \citep{King12, Stellmach14, Aurnou15}.
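The quoted 51\% comparison follows directly from the two fitted prefactors, since the $E^{-4/3}$ factors cancel; a one-line check using only the constants cited in the text:

```python
# Ratio of the fitted polar onset, Ra_c^p = 11.2 E^(-4/3), to the E -> 0
# plane-layer prediction Ra_c = 21.9 E^(-4/3) cited in the text. The E^(-4/3)
# factors cancel, so the ratio is independent of the Ekman number.

E = 1e-6                                 # any value; it drops out of the ratio
ratio = (11.2 * E**(-4.0 / 3.0)) / (21.9 * E**(-4.0 / 3.0))  # ~0.51
```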
Figure~\ref{fig:thRaEkNu}(\textit{b}) shows the ratio of polar to equatorial
heat transport,
which follows a distinct v-shaped trend that can be decomposed into three regimes.
(\textit{i}) For $\widetilde{Ra}< 11.2$, $Nu_p\approx 1$ and the ratio depends
directly on $Nu_e=f(\widetilde{Ra})$.
(\textit{ii}) For $11.2 < \widetilde{Ra} \lesssim 30$, $Nu_p$ rises much faster than
$Nu_e$, thereby increasing $Nu_p/Nu_e$. (\textit{iii}) When rotational
effects become less influential, $Nu_p/Nu_e \approx 1$ at
$r_i$ and $Nu_p/Nu_e\approx 1.5$ at $r_o$.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{profiles}
\caption{(\textit{a-b}) Radial profiles of time-averaged temperature in
the polar regions (blue dashed line), in the equatorial region (red dot-dashed line) and
averaged over the entire spherical surface (tan solid line). For comparison, the
conducting temperature profile $T_c$ is also plotted as a black dotted line. Panel
(\textit{a}) corresponds to $r_i/r_o=0.6$, $g=(r_o/r)^2$, $E=10^{-6}$,
$Ra=6.5\times 10^8$, $Pr=1$, while panel (\textit{b}) corresponds to
$r_i/r_o=0.6$, $g=(r_o/r)^2$, $E=10^{-6}$, $Ra=3.2\times 10^9$ and $Pr=1$.
(\textit{c}) Time-averaged local Nusselt number at both spherical shell
boundaries as a function of the colatitude for simulations with $r_i/r_o=0.6$,
$g=(r_o/r)^2$, $E=10^{-6}$, $Pr=1$ and increasing supercriticalities. Solid
(dashed) lines correspond to $r_i$ ($r_o$). The vertical solid lines mark the
location of the tangent cylinder. In all panels, the shaded regions correspond
to one standard deviation about the time averages.}
\label{fig:profiles}
\end{figure}
Figure~\ref{fig:profiles}(\textit{a}-\textit{b}) shows the time-averaged
temperature profiles in the polar and equatorial regions ($\vartheta_p$ dashed
lines and $\vartheta_e$ dot-dashed lines) alongside the volume-averaged
temperature ($\vartheta$, solid line) for two numerical models with
$r_i/r_o=0.6$, $g=(r_o/r)^2$, $E=10^{-6}$ and different $Ra$. For the case with
$Ra\approx 14.1\,Ra_c$ (panel \textit{a}), low latitude convection is
active but has yet to start within the TC.
The mean temperature in the polar regions $\vartheta_p$ thus closely follows
the conductive profile $T_c$ (dotted line), while in the equatorial region
we observe the formation of a thin thermal boundary layer at $r_i$ and a
decrease of the temperature gradient in the fluid bulk.
At larger convective forcing ($Ra\approx 69.3\,Ra_c$, panel \textit{b}),
convection is space-filling. The temperature profiles in the polar and
equatorial regions become comparable and
a larger fraction of the temperature contrast is accommodated in the
thermal boundary layers.
Figure~\ref{fig:profiles}(\textit{c}) shows the latitudinal variations of the
heat flux at both spherical shell boundaries for increasing supercriticalities.
These profiles confirm that convection first sets in outside the TC whilst the
high-latitude regions remain close to the conductive $Nu = 1$ state up to
$Ra_c^p$, and that the $Ra > Ra_c^p$ polar
transfer rises quickly, thus reducing the latitudinal $Nu$ contrast.
Both spherical shell boundaries feature similar global
trends, with interesting regionalized differences. The
tangent cylinder (solid vertical lines) is visible, for instance, in the outer
boundary heat transfer $Nu_o(\theta)$, manifesting itself in local maxima that
persist between $15\,Ra_c$ and $70\, Ra_c$.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{nuBetaRa}
\caption{(\textit{a}) Nusselt number in the polar regions $Nu_p$ as a function
of the local supercriticality $Ra/Ra_c^p$. (\textit{b}) Normalised
mid-depth temperature gradient (Eq.~\ref{eq:beta}) in the polar regions
$\partial T$
as a function of the local supercriticality. Spherical shell simulations
include two configurations with $r_i/r_o=0.6$ and $g=(r_o/r)^2$ \citep[light
blue symbols, from][]{Gastine16} and $r_i/r_o=0.35$ and $g=r/r_o$
\citep[dark blue symbols, from][]{Schwaiger21}. All the simulations with $E
\leq 10^{-5}$ and $Nu_p > 1$ have been retained. Direct numerical simulations
(DNS) in Cartesian geometry with periodic horizontal boundary conditions (light
yellow symbols) come from \cite{Stellmach14}, while non-hydrostatic
quasi-geostrophic models (CNH-QGM) (red symbols) come from \cite{Plumley16}.}
\label{fig:NuPo}
\end{figure}
Figure~\ref{fig:NuPo} shows (\textit{a}) $Nu_p$ and (\textit{b})
normalized mid-depth polar temperature gradients $\partial T$ as a function of
$Ra/Ra_c^p$
for spherical shell simulations with $r_i/r_o=0.6$ and $r_i/r_o=0.35$,
and for Cartesian asymptotically reduced models \citep[e.g.,][]{Plumley16} and
$E\geq 2\times 10^{-7}$, $Pr=1$ direct numerical simulations
\citep{Stellmach14}. In this figure, $Ra_c^p$ is used for the critical $Ra$
values for spherical shell data, whereas standard planar $Ra_c$ values are used
for the plane layer data. Good quantitative agreement is found in the $Nu_p$ and
$\partial T$ data from spherical shell and
planar models, with all the data sets effectively overlying one another. The $1
\lesssim Ra/Ra_c^p \lesssim 3$ heat transfer follows a $Nu_p \sim (Ra/Ra_c)^3$
scaling in all the data sets. At larger supercriticalities, the scaling
exponent
of $Nu_p$ decreases and the
asymptotic $\beta = 3/2$ scaling
appears to be approached in the highest supercriticality planar cases.
The mid-depth temperature gradients quantitatively agree in all models as well,
attaining a relatively large minimum value, $\partial T \approx 0.5$ near $Ra
\approx 3\,Ra_c^p$, before increasing slightly in the highest supercriticality
planar models.
\section{Discussion}
\label{sec:conclu}
Globally-averaged heat transfer scalings for rotating convection differ between
spherical and
planar geometries with the latter yielding steeper $Nu$-$Ra$ scaling trends.
By introducing regionalized measures of heat
transfer, we have shown that this steep scaling can also be recovered in the
polar regions of spherical shells. The comparisons in Fig.~\ref{fig:NuPo}
reveal an almost perfect overlap in heat transfer data between the two
geometries. Importantly, this demonstrates that local, non-spherical models can
be used to understand spherical systems \citep[e.g.,][]{Julien12, Horn15,
Cabanes17, Calkins18, Cheng18, Miquel18, Gastine19}.
Our regional analysis shows that the use of global volume-averaged properties
to interpret spherical shell rotating convection can be misleading since such
averages are often made over regions with significantly differing convection
dynamics \citep[e.g.,][in rotating cylinders]{Ecke14,Lu21,Grannan22}. As such,
it is quite likely that globally-averaged $\beta$ depends on the
spherical shell radius ratio, $r_i/r_o$. In higher $r_i/r_o$ shells, more of
the fluid will lie
within the TC and the globally-averaged $\beta$ will tend towards a polar value
near $3$. In contrast, lower $r_i/r_o$ shells should trend towards regional
$\beta$ values below $2$, as found in our $Nu_e$ data. We hypothesize further
that the mid-latitude $\beta \simeq 3/2$ scaling of \citet{Wang21} may
represent a combination of the low and high latitude scalings, which could also
be tested by varying $r_i/r_o$.
A similar argument may also explain \cite{Wang21}'s higher latitude, tangent
cylinder heat transfer scaling of $\beta = 2.1$. We postulate that
measuring the rotating heat transfer away from the poles will always yield
$\beta < 3$. This may be further exacerbated if the heat transfer is measured
across the tangent cylinder, which likely acts as a radial transport barrier
\cite[e.g.,][]{Guervilly17, Cao18}. Thus, \citet{Wang21}'s $\beta \approx 2.1$
value may arise because their whole tangent cylinder measurements extend to far
lower latitudes in comparison to the far tighter, pole-adjacent $Nu_p$
measurements made here that yield $\beta \approx 3$.
The polar heat transfer data in Fig.~\ref{fig:thRaEkNu} demonstrate a sharp
convective onset value, with $Ra_c^p = (11.2 \pm 0.3)E^{-4/3}$ over our range
of $r_i/r_o = 0.6$ models and $Ra_c^p / Ra_c = 20 \pm 5$. It is likely that
convective turbulence is space-filling in planetary fluid layers. We argue
then that realistic geophysical and astrophysical models of rotating convection
require $Ra > Ra_c^p$. If the convection is rapidly-rotating as well, this
constrains the convective Rossby number $Ro_{conv} = (Ra E^2 / Pr)^{1/2}
\lesssim 0.1$ \citep[e.g.,][]{Christensen06, Aurnou20}. Thus, space-filling
rotating convective turbulence simultaneously requires $Ra \gtrsim 10 Ra_c^p$
and $Ro_{conv} \lesssim 1/10$, which in turn requires $E \lesssim 10^{-6}$
in $Pr \simeq 1$ models. Such dynamical constraints are important for building
accurate models of $Nu(\theta)$, which are essential to our interpretations of
planetary and astrophysical observations. For instance, on the icy satellites,
latitudinal changes in ice shell thickness and surface terrain likely reflect
the latitudinally-varying convective dynamics in the underlying oceans
\citep[e.g.][]{Soderlund20}. We hypothesize that the broad array of
$Nu_p/Nu_e$ solutions found in the models \citep[e.g.,][]{Soderlund19,
Amit20, Bire22} could possibly arise because convection is not active within
the tangent cylinder in some of the models, and is not rapidly-rotating in
others. Our results suggest that quantitative comparisons in heat flux
profiles can only be made between models having similar latitudinal
distributions of convective activity and comparable Rossby number values.
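The Ekman number bound quoted above can be checked by direct substitution: with $Ra = 10\,Ra_c^p = 112\,E^{-4/3}$ and $Pr = 1$, the convective Rossby number becomes $Ro_{conv} = \sqrt{112}\,E^{1/3}$, and requiring $Ro_{conv} \leq 0.1$ bounds $E$. A short sketch of this arithmetic:

```python
# Combine Ra >= 10 Ra_c^p (with Ra_c^p = 11.2 E^(-4/3)) and
# Ro_conv = sqrt(Ra E^2 / Pr) <= 0.1 at Pr = 1. Substituting Ra = 112 E^(-4/3)
# gives Ro_conv = sqrt(112) * E^(1/3), hence E <= (0.1 / sqrt(112))^3.

Ro_max = 0.1
E_max = (Ro_max / 112**0.5)**3   # ~8.4e-7, consistent with E <~ 1e-6

# Cross-check by evaluating Ro_conv directly at E = E_max:
Ra = 112.0 * E_max**(-4.0 / 3.0)
Ro_conv = (Ra * E_max**2)**0.5   # recovers Ro_max
```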
Establishing asymptotically-accurate trends for $Nu_p/Nu_e$ also
requires accurate scaling laws for the equatorial heat transfer. A
brief inspection of Fig.~\ref{fig:thRaEkNu} reveals the complexity of
$Nu_e(\widetilde{Ra})$, and its lack of any clear power law trend. To further
complicate this task, zonal jets tend to develop in no-slip cases with $E
\lesssim 10^{-6}$, which can substantively alter the patterns of convective
heat flow. Figure~\ref{fig:snaps} shows (\textit{a},\textit{b}) axial
vorticity $\omega_z
=\vec{e_z}\cdot \vec{\nabla}\times\vec{u}$ snapshots and (\textit{c})
latitudinal heat flux profiles
for two $E < 10^{-6}$ simulations with different radius ratios. Convection in
the (\textit{a})
$r_i/r_o=0.35$ case is sub-critical inside the TC, while it is space-filling in
the (\textit{b}) $r_i/r_o=0.6$ simulation.
In the latter case, polar convection develops as small-scale
axially-aligned vortices which do not drive jets within the TC. In contrast, the
convective motions outside the TC are already sufficiently turbulent in both
cases to trigger the formation of zonal jets. These jet flows manifest via
the formation of alternating, concentric rings of positive and negative axial
vorticity. These coherent zonal motions act to reduce the heat transfer
efficiency in the regions of intense shear where the zonal velocities become of
comparable amplitude to the convective flow \citep[e.g.][]{Aurnou08, Yadav16,
Guervilly17,Raynaud18,Soderlund19}. Thus, the outer boundary heat flux profile
$Nu_o(\theta)$ in Fig.~\ref{fig:snaps}(\textit{c}) adopts a strongly undulatory
structure exterior to the TC.
The asymptotic scaling behaviour of $Nu_e$ is hence intimately related to the
spatial distribution and amplitude of the zonal jets that develop in the shell,
a topic for future investigations of rotating convective turbulence
\citep[e.g.][]{Lonner22}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{vortz_fluxes}
\caption{(\textit{a}-\textit{b}) Meridional sections, equatorial cut and
radial surfaces of the axial
component of the vorticity $\omega_z =\vec{e_z}\cdot
\vec{\nabla}\times\vec{u}$. Panel (\textit{a})
corresponds to a numerical model with $r_i/r_o=0.35$, $g=r/r_o$, $E=10^{-7}$,
$Ra=10^{11}$ and $Pr=1$, while panel (\textit{b}) corresponds
to a numerical model with $r_i/r_o=0.6$, $g=(r_o/r)^2$, $E=3\times 10^{-7}$,
$Ra=1.3\times 10^{10}$ and $Pr=1$. (\textit{c}) Local Nusselt number at both
spherical shell boundaries as a function of the colatitude. The orange and
blue lines correspond to the numerical model shown in panel (\textit{a}) and
(\textit{b}), respectively. The location of the tangent cylinder for both
radius ratios are marked by vertical solid lines.}
\label{fig:snaps}
\end{figure}
\begin{acknowledgments}
We thank S.~Stellmach and K.~Julien for sharing their planar convection data. Simulations
requiring longer time integrations to gather diagnostics
were computed on GENCI (Grant 2021-A0070410095) and on the \texttt{S-CAPAD} platform at IPGP.
JMA gratefully acknowledges the support of the NSF Geophysics Program (EAR 2143939).
Lastly, we thank the University of Leiden's Lorentz Center, where this study
was resuscitated during the ``Rotating Convection: from the Lab to the Stars''
workshop.
Declaration of Interests. The authors report no conflict of interest.
\end{acknowledgments}
\bibliographystyle{jfm}
| {
"timestamp": "2022-12-01T02:17:07",
"yymm": "2211",
"arxiv_id": "2211.17055",
"language": "en",
"url": "https://arxiv.org/abs/2211.17055",
"abstract": "Convection occurs ubiquitously on and in rotating geophysical and astrophysical bodies. Prior spherical shell studies have shown that the convection dynamics in polar regions can differ significantly from the lower latitude, equatorial dynamics. Yet most spherical shell convective scaling laws use globally-averaged quantities that erase latitudinal differences in the physics. Here we quantify those latitudinal differences by analyzing spherical shell simulations in terms of their regionalized convective heat transfer properties. This is done by measuring local Nusselt numbers in two specific, latitudinally separate, portions of the shell, the polar and the equatorial regions, $Nu_p$ and $Nu_e$, respectively. In rotating spherical shells, convection first sets in outside the tangent cylinder such that equatorial heat transfer dominates at small and moderate supercriticalities. We show that the buoyancy forcing, parameterized by the Rayleigh number $Ra$, must exceed the critical equatorial forcing by a factor of $\\approx 20$ to trigger polar convection within the tangent cylinder. Once triggered, $Nu_p$ increases with $Ra$ much faster than does $Nu_e$. The equatorial and polar heat fluxes then tend to become comparable at sufficiently high $Ra$. Comparisons between the polar convection data and Cartesian numerical simulations reveal quantitative agreement between the two geometries in terms of heat transfer and averaged bulk temperature gradient. This agreement indicates that spherical shell rotating convection dynamics are accessible both through spherical simulations and via reduced investigatory pathways, be they theoretical, numerical or experimental.",
"subjects": "Fluid Dynamics (physics.flu-dyn); Earth and Planetary Astrophysics (astro-ph.EP); High Energy Astrophysical Phenomena (astro-ph.HE); Solar and Stellar Astrophysics (astro-ph.SR); Geophysics (physics.geo-ph)",
"title": "Latitudinal regionalization of rotating spherical shell convection",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9693242009478238,
"lm_q2_score": 0.7310585786300049,
"lm_q1q2_score": 0.7086327725765813
} |
https://arxiv.org/abs/2006.07013 | A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization | In this paper, we study the performance of a large family of SGD variants in the smooth nonconvex regime. To this end, we propose a generic and flexible assumption capable of accurate modeling of the second moment of the stochastic gradient. Our assumption is satisfied by a large number of specific variants of SGD in the literature, including SGD with arbitrary sampling, SGD with compressed gradients, and a wide variety of variance-reduced SGD methods such as SVRG and SAGA. We provide a single convergence analysis for all methods that satisfy the proposed unified assumption, thereby offering a unified understanding of SGD variants in the nonconvex regime instead of relying on dedicated analyses of each variant. Moreover, our unified analysis is accurate enough to recover or improve upon the best-known convergence results of several classical methods, and also gives new convergence results for many new methods which arise as special cases. In the more general distributed/federated nonconvex optimization setup, we propose two new general algorithmic frameworks differing in whether direct gradient compression (DC) or compression of gradient differences (DIANA) is used. We show that all methods captured by these two frameworks also satisfy our unified assumption. Thus, our unified convergence analysis also captures a large variety of distributed methods utilizing compressed communication. Finally, we also provide a unified analysis for obtaining faster linear convergence rates in this nonconvex regime under the PL condition. | \section{Introduction}
\label{sec:intro}
In this paper, we develop a general framework for studying and designing SGD-type methods for solving {\em nonconvex distributed/federated optimization problems} \cite{khirirat2018distributed, FEDLEARN, kairouz2019advances}. Given $m$ machines/workers/devices, each having access to their own data samples, we consider the problem
\begin{equation}\label{eq:prob-fed}
\min_{x\in {\mathbb R}^d} \left\{ f(x) := \frac{1}{m}\sum \limits_{i=1}^m{f_i(x)} \right\}
\end{equation}
in the heterogeneous (non-IID) data setting, i.e., we allow different workers to have access to different data distributions. We consider the case when the loss $f_i$ in worker $i$ is of an online/expectation form,
\begin{align}
f_i(x) := {\mathbb{E}}_{\zeta \sim {\mathcal D}_i}[f_i(x,\zeta)] , \label{prob-fed:exp}
\end{align}
and also the case when $f_i$ is of a finite-sum form,
\begin{align}
f_i(x) := \frac{1}{n}\sum \limits_{j=1}^n{f_{i,j}(x)}, \label{prob-fed:finite}
\end{align}
where $f(x), f_i(x), f_i(x,\zeta)$ and $f_{i,j}(x)$ are possibly nonconvex functions. Forms \eqref{prob-fed:exp} and \eqref{prob-fed:finite} capture the population (resp.\ empirical) risk minimization problems in distributed/federated learning.
\subsection{Single machine setting}
In particular, the single machine/node case (i.e., $m=1$) of problem \eqref{eq:prob-fed} reduces to the standard problem
\begin{equation}\label{eq:prob}
\min_{x\in {\mathbb R}^d} f(x),
\end{equation}
where $f(x)$ can be the online/expectation form
\begin{align}
f(x) := {\mathbb{E}}_{\zeta\sim {\mathcal D}}[f(x,\zeta)] \label{prob:exp}
\end{align}
or the finite-sum form
\begin{align}
f(x) := \frac{1}{n}\sum \limits_{j=1}^n{f_j(x)}, \label{prob:finite}
\end{align}
where $f(x), f(x,\zeta)$ and $f_j(x)$ are possibly nonconvex functions.
These forms capture the standard population/empirical risk minimization problems in machine learning.
There has been extensive research into solving the standard problem \eqref{eq:prob}--\eqref{prob:finite} and an enormous number of methods were proposed, e.g., \citep{nesterov2014introductory, nemirovski2009robust, ghadimi2013stochastic, johnson2013accelerating, defazio2014saga, nguyen2017sarah, ge2015escaping, lin2015universal, lan2015optimal, lan2018random, zhize2019unified, allen2017katyusha, zhize2020anderson, ghadimi2016mini,zhou2018stochastic, fang2018spider,zhize2019ssrgd, pham2019proxsarah}.
Due to the increasing popularity of distributed/federated learning, the more general distributed/federated optimization problem \eqref{eq:prob-fed}--\eqref{prob-fed:finite} has attracted significant attention as well~\citep{FEDLEARN, FL2017-AISTATS, lian2017can, lan2018random, li2018federated, localSGD-Stich, mishchenko2019distributed, DIANA2, karimireddy2019scaffold, khaled2019first, yang2019federated, li2019communication, kairouz2019advances, li2020acceleration, localSGD-AISTATS2020}.
However, all these methods are analyzed separately, often using different approaches, intuitions, and assumptions, and the single-node ($m=1$) and distributed ($m\geq 1$) cases are typically treated independently.
\subsection{Our contributions}
\label{sec:contribution}
We provide a {\em single and sharp analysis for a large family of SGD methods (Algorithm~\ref{alg:1}) for solving the nonconvex problem \eqref{eq:prob-fed}.} Our approach offers a {\em unified understanding} of many previously proposed SGD variants, which we believe helps the community make better sense of existing methods and results. More importantly, {\em our unified approach also motivates and facilitates the design of, and offers plug-in convergence guarantees for, many new and practically relevant SGD variants}.
\begin{algorithm}[t]
\caption{Framework of stochastic gradient methods}
\label{alg:1}
\begin{algorithmic}[1]
\REQUIRE
initial point $x^0$, stepsize $\eta_k$
\FOR {$k=0,1,2,\ldots$}
\STATE Compute stochastic gradient $g^k$ \label{line:grad}
\STATE $x^{k+1} = x^k - \eta_k g^k$ \label{line:update}
\ENDFOR
\end{algorithmic}
\end{algorithm}
While Algorithm~\ref{alg:1} has a seemingly tame structure, the complication arises because there are potentially infinitely many meaningful and yet sharply distinct ways in which the gradient estimator $g^k$ can be defined. The selection of an appropriate estimator is a very active and important area of research, as it directly impacts many aspects of the resulting algorithm, including tractability, memory footprint, per-iteration cost, parallelizability, iteration complexity, communication complexity, sample complexity and generalization.
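To make the framework concrete, here is a minimal Python sketch (our illustration, not part of the paper) of Algorithm~\ref{alg:1}, in which the gradient estimator is abstracted into a callable; swapping this callable is exactly what distinguishes the SGD variants discussed below.

```python
import numpy as np

def sgd_framework(x0, estimator, stepsize, num_iters):
    """Generic SGD loop (Algorithm 1): x^{k+1} = x^k - eta_k * g^k."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(num_iters):
        g = estimator(x, k)        # Line 2: compute stochastic gradient g^k
        x = x - stepsize(k) * g    # Line 3: gradient step with stepsize eta_k
    return x

# Example: plain GD on f(x) = 0.5 * ||x||^2, whose exact gradient is g^k = x^k.
x_final = sgd_framework(np.ones(3), lambda x, k: x, lambda k: 0.5, 50)
```

Passing a stochastic, compressed, or variance-reduced callable in place of the exact gradient yields the other methods covered by the unified analysis.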
The key technical idea of our approach is the design of a {\em flexible, tractable and accurate parametric model} capturing the behavior of the stochastic gradient. We want the model to be {\em flexible}: able to describe many existing variants of SGD, with the potential to describe many new ones. As we shall see, flexibility is achieved by the inclusion of a number of parameters. We want the model to be {\em tractable}, meaning that it needs to act as an assumption which can be used to perform a theoretical complexity analysis. Finally, we want the complexity results to be {\em accurate}, i.e., we want to recover the best known rates for existing methods, and obtain sharp and useful rates with predictive power for new methods. Our parametric model is described in Assumption~\ref{asp:boge}, and as we argue throughout the paper and appendices, it is indeed flexible, tractable and accurate.
\begin{assumption}[Gradient estimator]\label{asp:boge}
The gradient estimator $g^k$ in Algorithm~\ref{alg:1} is unbiased, i.e.,
${\mathbb{E}}_k[g^k] = \nabla f(x^k)$,
and there exist non-negative constants $A_1, A_2, B_1, B_2, C_1,C_2,D_1,\rho$ and a random sequence $\{\sigma_k^2\}$ such that the following two inequalities hold
\begin{align}
{\mathbb{E}}_k[\ns{g^k}] & \leq 2A_1(f(x^k)-f^*)+B_1\ns{\nabla f(x^k)} + D_1 \sigma_k^2 +C_1, \label{eq:boge1} \\
{\mathbb{E}}_k[\sigma_{k+1}^2] & \leq (1-\rho)\sigma_k^2 + 2A_2(f(x^k)-f^*)+B_2\ns{\nabla f(x^k)} +C_2. \label{eq:boge2}
\end{align}
\end{assumption}
{\vspace{1mm}\noindent\bf Flexibility:}
Our model for the behavior of the stochastic gradient for nonconvex optimization, as captured by Assumption~\ref{asp:boge}, is satisfied by a large number of specific variants of SGD proposed in the literature, including SGD with arbitrary sampling \citep{qian2019svrg, gower2019sgd, gorbunov2019unified, khaled2020better}, SGD with compressed gradients \citep{alistarh2017qsgd, wen2017terngrad, bernstein2018signsgd, khirirat2018distributed, SEGA, Cnat}, and a wide variety of variance-reduced SGD methods such as SVRG \citep{johnson2013accelerating}, SAGA \citep{defazio2014saga} and their variants (e.g., \citep{kovalev2019don, reddi2016stochastic, reddi2016proximal, allen2016variance, lei2017non, li2018simple, ge2019stable, mishchenko2019distributed, DIANA2}). Specific methods vary in the parameters for which recurrences \eqref{eq:boge1} and \eqref{eq:boge2} are satisfied. For example, SGD variants not employing variance reduction will generally have $D_1=0$, and recurrence \eqref{eq:boge2} will not be used (i.e., we can ignore it and set $\rho =1$, $A_2=0$, $B_2=0$ and $C_2=0$). This setting was considered in \cite{khaled2020better}, and was an inspiration for our work. If variance reduction is applied, then $D_1>0$ and typically $C_1=0$, and recurrence \eqref{eq:boge2} describes the variance reduction process, with parameter $\rho$ describing the speed of variance reduction. If $C_2>0$, variance reduction is not perfect. If $C_2=0$ as well, then the methods are fully variance reduced, which typically means a faster convergence rate. The specific values of all the parameters depend on how the stochastic gradient $g^k$ is constructed (e.g., via minibatching, importance sampling, variance reduction, perturbation, compression).
We design several new methods, with gradient estimators that fit Assumption~\ref{asp:boge}, for solving the general nonconvex distributed/federated problem \eqref{eq:prob-fed}--\eqref{prob-fed:finite} using compressed (e.g., quantized or sparsified) gradient communication, which is important when training deep learning models. We adopt a direct compression (DC) framework \cite{alistarh2017qsgd, khirirat2018distributed}, and a compression of gradient differences framework (DIANA) \cite{mishchenko2019distributed, DIANA2}. We develop several new specific methods belonging to the DC framework (Algorithm~\ref{alg:dc}) and the DIANA framework (Algorithm \ref{alg:diana}), show that they all satisfy Assumption \ref{asp:boge}, and thus are also captured by our unified analysis.
{\vspace{1mm}\noindent\bf Tractability:}
We use our unified assumption to prove four complexity theorems: Theorems~\ref{thm:main}, \ref{thm:main-pl-dec}, \ref{thm:dc-diff}, and \ref{thm:diana-type-diff}. Theorem~\ref{thm:main} is the main theorem, and Theorem~\ref{thm:main-pl-dec} is used to obtain sharper results under the PL condition. Theorems~\ref{thm:dc-diff} and \ref{thm:diana-type-diff} are used in combination with the previous generic Theorems \ref{thm:main} and \ref{thm:main-pl-dec} to obtain specialized results for distributed/federated optimization utilizing either direct gradient compression (DC framework (Algorithm~\ref{alg:dc})), or compression of gradient differences (DIANA framework (Algorithm \ref{alg:diana})), respectively. In Tables \ref{table:1}--\ref{table:fed-pl} we visualize how these theorems lead to corollaries which describe the detailed complexity results of various existing and new methods.
{\vspace{1mm}\noindent\bf Accuracy:}
For all existing methods, the rates we obtain using our general analysis match the best known rates.
\vspace{-2mm}
\begin{table}[!h]
\centering
\caption{Selected methods that fit our unified analysis framework for \emph{nonconvex optimization} ($m=1$, i.e., single node).}
\label{table:1}
\vspace{1mm}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Problem & Assumption & Method & Algorithm & \multicolumn{2}{c|}{Convergence result} & Recover \\
\hline
\eqref{eq:prob}
& Asp \ref{asp:lsmooth}
& GD
& Alg \ref{alg:gd}
& \multirow{4}{*}{Thm \ref{thm:main}}
& Cor \ref{cor:gd}
& \citep{nesterov2014introductory} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob} with \eqref{prob:exp} or \eqref{prob:finite}
& Asp \ref{asp:lsmooth}
& SGD
& Alg \ref{alg:sgd}
&
& Cor \ref{cor:sgd}
& \citep{ghadimi2016mini, khaled2020better} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob} with \eqref{prob:finite}
& Asp \ref{asp:avgsmooth}
& L-SVRG
& Alg \ref{alg:lsvrg}
&
&Cor \ref{cor:lsvrg}
& \citep{reddi2016stochastic, allen2016variance, li2018simple, qian2019svrg} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob} with \eqref{prob:finite}
& Asp \ref{asp:avgsmooth-saga}
& SAGA
& Alg \ref{alg:saga}
&
&Cor \ref{cor:saga}
& \citep{reddi2016proximal} \\
\hline
\end{tabular}
\vspace{1mm}
\centering
\caption{Selected methods that fit our unified analysis framework for \emph{nonconvex distributed/federated optimization} ($m\geq 1$, i.e., any number of nodes).}
\label{table:fed}
\vspace{1mm}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Problem & Assumption & Method & Algorithm & \multicolumn{2}{c|}{Convergence result} & Recover \\
\hline
\eqref{eq:prob-fed}
& Asp \ref{asp:lsmooth-diana}
& DC-GD
& Alg \ref{alg:dc-gd}
& \multirow{4}{*}{Thm \ref{thm:main}, \ref{thm:dc-diff}}
& Cor \ref{cor:dc-gd}
& \citep{khaled2020better} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob-fed} with \eqref{prob-fed:exp} or \eqref{prob-fed:finite}
& Asp \ref{asp:lsmooth-diana}
& DC-SGD
& Alg \ref{alg:dc-sgd}
&
& Cor \ref{cor:dc-sgd}
& \citep{khaled2020better, Cnat} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob-fed} with \eqref{prob-fed:finite}
& Asp \ref{asp:lsmooth-diana}, \ref{asp:avgsmooth-fed}
& DC-LSVRG
& Alg \ref{alg:dc-lsvrg}
&
& Cor \ref{cor:dc-lsvrg}
& \bf{New} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob-fed} with \eqref{prob-fed:finite}
& Asp \ref{asp:lsmooth-diana}, \ref{asp:avgsmooth-saga-fed}
& DC-SAGA
& Alg \ref{alg:dc-saga}
&
& Cor \ref{cor:dc-saga}
& \bf{New} \\
\hline
\hline
\eqref{eq:prob-fed}
& Asp \ref{asp:lsmooth-diana}
& DIANA-GD
& Alg \ref{alg:diana-gd}
& \multirow{4}{*}{Thm \ref{thm:main}, \ref{thm:diana-type-diff}}
& Cor \ref{cor:diana-gd}
& \bf{New} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob-fed} with \eqref{prob-fed:exp} or \eqref{prob-fed:finite}
& Asp \ref{asp:lsmooth-diana}
& DIANA-SGD
& Alg \ref{alg:diana-sgd}
&
& Cor \ref{cor:diana-sgd}
& \bf{New} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob-fed} with \eqref{prob-fed:finite}
& Asp \ref{asp:lsmooth-diana}, \ref{asp:avgsmooth-fed}
& DIANA-LSVRG
& Alg \ref{alg:diana-lsvrg}
&
& Cor \ref{cor:diana-lsvrg}
& \bf{New}$^\dagger$ \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob-fed} with \eqref{prob-fed:finite}
& Asp \ref{asp:lsmooth-diana}, \ref{asp:avgsmooth-saga-fed}
& DIANA-SAGA
& Alg \ref{alg:diana-saga}
&
& Cor \ref{cor:diana-saga}
& \bf{New}$^\dagger$ \\
\hline
\end{tabular}
\vspace{0.5mm}
{\begin{spacing}{0.5}\footnotesize$^\dagger$Note that \citet{DIANA2} studied a weaker version of DIANA-LSVRG and DIANA-SAGA with minibatch size $b=1$ (i.e., without minibatching).
See Section \ref{sec:diana} for more details.
\end{spacing}}
\vspace{5mm}
\centering
\caption{Selected methods that fit our unified analysis framework for {\em nonconvex optimization under the PL condition} ($m=1$).}
\label{table:pl}
\vspace{1mm}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Problem & Assumption & Method & Algorithm & \multicolumn{2}{c|}{Convergence result} & Recover \\
\hline
\eqref{eq:prob}
& Asp \ref{asp:lsmooth}, \ref{asp:pl}
& GD
& Alg \ref{alg:gd}
& \multirow{4}{*}{Thm \ref{thm:main-pl-dec}}
& Cor \ref{cor:gd-pl}
& \citep{polyak1963gradient,karimi2016linear} \\\cline{1-4}\cline{6-7}
\eqref{eq:prob} with \eqref{prob:exp} or \eqref{prob:finite}
& Asp \ref{asp:lsmooth}, \ref{asp:pl}
& SGD
& Alg \ref{alg:sgd}
&
& Cor \ref{cor:sgd-pl}
& \citep{khaled2020better} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob} with \eqref{prob:finite}
& Asp \ref{asp:avgsmooth}, \ref{asp:pl}
& L-SVRG
& Alg \ref{alg:lsvrg}
&
& Cor \ref{cor:lsvrg-pl}
& \citep{reddi2016proximal,li2018simple} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob} with \eqref{prob:finite}
& Asp \ref{asp:avgsmooth-saga}, \ref{asp:pl}
& SAGA
& Alg \ref{alg:saga}
&
& Cor \ref{cor:saga-pl}
& \citep{reddi2016proximal} \\
\hline
\end{tabular}
\vspace{1mm}
\centering
\caption{Selected methods that fit our unified analysis framework for \emph{nonconvex distributed/federated optimization under PL condition} ($m\geq 1$).}
\label{table:fed-pl}
\vspace{1mm}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Problem & Assumption & Method & Algorithm & \multicolumn{2}{c|}{Convergence result} & Recover \\
\hline
\eqref{eq:prob-fed}
& Asp \ref{asp:lsmooth-diana}, \ref{asp:pl}
& DC-GD
& Alg \ref{alg:dc-gd}
& \multirow{4}{*}{Thm \ref{thm:main-pl-dec}, \ref{thm:dc-diff}}
& Cor \ref{cor:dc-gd-pl}
& \bf{New} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob-fed} with \eqref{prob-fed:exp} or \eqref{prob-fed:finite}
& Asp \ref{asp:lsmooth-diana}, \ref{asp:pl}
& DC-SGD
& Alg \ref{alg:dc-sgd}
&
& Cor \ref{cor:dc-sgd-pl}
& \bf{New}\\ \cline{1-4}\cline{6-7}
\eqref{eq:prob-fed} with \eqref{prob-fed:finite}
& Asp \ref{asp:lsmooth-diana}, \ref{asp:avgsmooth-fed}, \ref{asp:pl}
& DC-LSVRG
& Alg \ref{alg:dc-lsvrg}
&
& Cor \ref{cor:dc-lsvrg-pl}
& \bf{New} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob-fed} with \eqref{prob-fed:finite}
& Asp \ref{asp:lsmooth-diana}, \ref{asp:avgsmooth-saga-fed}, \ref{asp:pl}
& DC-SAGA
& Alg \ref{alg:dc-saga}
&
& Cor \ref{cor:dc-saga-pl}
& \bf{New} \\
\hline
\hline
\eqref{eq:prob-fed}
& Asp \ref{asp:lsmooth-diana}, \ref{asp:pl}
& DIANA-GD
& Alg \ref{alg:diana-gd}
& \multirow{4}{*}{Thm \ref{thm:main-pl-dec}, \ref{thm:diana-type-diff}}
& Cor \ref{cor:diana-gd-pl}
& \bf{New} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob-fed} with \eqref{prob-fed:exp} or \eqref{prob-fed:finite}
& Asp \ref{asp:lsmooth-diana}, \ref{asp:pl}
& DIANA-SGD
& Alg \ref{alg:diana-sgd}
&
& Cor \ref{cor:diana-sgd-pl}
& \bf{New} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob-fed} with \eqref{prob-fed:finite}
& Asp \ref{asp:lsmooth-diana}, \ref{asp:avgsmooth-fed}, \ref{asp:pl}
& DIANA-LSVRG
& Alg \ref{alg:diana-lsvrg}
&
& Cor \ref{cor:diana-lsvrg-pl}
& \bf{New} \\ \cline{1-4}\cline{6-7}
\eqref{eq:prob-fed} with \eqref{prob-fed:finite}
& Asp \ref{asp:lsmooth-diana}, \ref{asp:avgsmooth-saga-fed}, \ref{asp:pl}
& DIANA-SAGA
& Alg \ref{alg:diana-saga}
&
& Cor \ref{cor:diana-saga-pl}
& \bf{New} \\
\hline
\end{tabular}
\end{table}
\section{Notation and Assumptions}
\label{sec:pre}
We now introduce the notation and assumptions that we will use throughout the rest of the paper.
\subsection{Notation}
Let ${\Delta_{0}} := f(x^0) - f^*$, where $f^* := \min_{x\in {\mathbb R}^d} f(x)$.
Let $[n]$ denote the set $\{1,2,\cdots,n\}$ and $\n{\cdot}$ denote the Euclidean norm of a vector.
Let $\inner{u}{v}$ denote the standard Euclidean inner product of two vectors $u$ and $v$.
We use $O(\cdot)$ notation to hide absolute constants.
For notational convenience, we consider the online form \eqref{prob-fed:exp} or \eqref{prob:exp} as the finite-sum form \eqref{prob-fed:finite} or \eqref{prob:finite} by letting $f_{i,j}(x) := f_i(x, \zeta_j)$ or $f_i(x) := f(x, \zeta_i)$ and thinking of $n$ as infinity (infinite data samples). By ${\mathbb{E}}[\cdot]$ we denote mathematical expectation.
\subsection{Assumptions}
In order to prove convergence results, one usually needs one or more of the following standard smoothness assumptions on the function $f$, depending on the setting (see e.g., \citep{nesterov2014introductory, ghadimi2016mini, lei2017non, reddi2016stochastic, allen2016variance, li2018simple, fang2018spider, pham2019proxsarah, khaled2020better}).
\begin{assumption}[$L$-smoothness]\label{asp:lsmooth}
A function $f:{\mathbb R}^d\to {\mathbb R}$ is $L$-smooth if
\begin{align}\label{eq:lsmooth}
\n{\nabla f(x) - \nabla f(y)} \leq L \n{x-y}, \quad \forall x,y \in {\mathbb R}^d.
\end{align}
\end{assumption}
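For instance, a quadratic $f(x)=\frac{1}{2}x^\top Q x$ with symmetric $Q$ is $L$-smooth with $L=\|Q\|_2$ (the spectral norm). A quick numerical sanity check of \eqref{eq:lsmooth} for this case (our illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5))
Q = Q @ Q.T                      # symmetric PSD Hessian of f(x) = 0.5 * x^T Q x
L = np.linalg.norm(Q, 2)         # spectral norm: the smallest valid L

grad = lambda x: Q @ x           # gradient of f

# Verify ||grad f(x) - grad f(y)|| <= L * ||x - y|| on random point pairs.
ok = all(
    np.linalg.norm(grad(x) - grad(y)) <= L * np.linalg.norm(x - y) + 1e-9
    for x, y in (rng.standard_normal((2, 5)) for _ in range(100))
)
```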
If one desires to obtain a refined analysis of SGD-type methods applied to finite-sum problems \eqref{prob:finite}, the $L$-smoothness assumption can be replaced by average $L$-smoothness, defined next.
\begin{assumption}[Average $L$-smoothness]\label{asp:avgsmooth}
A function $f(x) :=\frac{1}{n}\sum_{i=1}^{n}f_i(x)$ is average $L$-smooth if
\begin{align}\label{eq:avgsmooth}
{\mathbb{E}}[\ns{\nabla f_i(x) - \nabla f_i(y)}]\leq \frac{1}{n}\sum \limits_{i=1}^{n} L_i^2\ns{x-y}\leq L^2 \ns{x-y}, \quad \forall x,y \in {\mathbb R}^d.
\end{align}
\end{assumption}
Note that we slightly change the form of Assumption \ref{asp:avgsmooth} for SAGA-type methods as follows:
\begin{assumption}[Average $L$-smoothness]\label{asp:avgsmooth-saga}
A function $f(x) :=\frac{1}{n}\sum_{i=1}^{n}f_i(x)$ is average $L$-smooth if
\begin{align}\label{eq:avgsmooth-saga}
{\mathbb{E}}[\ns{\nabla f_i(x) - \nabla f_i(y_i)}] \leq L^2\frac{1}{n}\sum \limits_{i=1}^{n} \ns{x-y_i}, \quad \forall x, \{y_i\}_{i\in [n]} \in {\mathbb R}^d.
\end{align}
\end{assumption}
We now present smoothness assumptions suitable for the more general nonconvex federated problems, i.e., \eqref{eq:prob-fed}--\eqref{prob-fed:finite}.
\begin{assumption}[$L$-smoothness]\label{asp:lsmooth-diana}
For each worker $i\in [m]$, the function $f_i(x)$ is $L_i$-smooth if
\begin{align}\label{eq:lsmooth-diana}
\n{\nabla f_i(x) - \nabla f_i(y)} \leq L_i \n{x-y}, \quad \forall x,y \in {\mathbb R}^d.
\end{align}
\end{assumption}
Moreover, we define $L^2:=\frac{1}{m}\sum_{i=1}^{m}L_i^2$. Note that Assumption \ref{asp:lsmooth-diana} reduces to Assumption \ref{asp:lsmooth} in the single-node case (i.e., $m=1$). Similarly, for nonconvex federated finite-sum problems, i.e., \eqref{prob-fed:finite}, we also need the average $L$-smoothness assumption (Assumption \ref{asp:avgsmooth}) for each worker $i$.
\begin{assumption}[Average $\bar{L}$-smoothness]\label{asp:avgsmooth-fed}
A function $f_i(x) :=\frac{1}{n}\sum_{j=1}^{n}f_{i,j}(x)$ is average $\bar{L}$-smooth if
\begin{align}\label{eq:avgsmooth-fed}
{\mathbb{E}}[\ns{\nabla f_{i,j}(x) - \nabla f_{i,j}(y)}] \leq \frac{1}{n}\sum \limits_{j=1}^{n} L_{i,j}^2\ns{x-y}\leq \bar{L}^2 \ns{x-y}, \quad \forall x,y \in {\mathbb R}^d.
\end{align}
\end{assumption}
Similarly, we slightly change the form of Assumption \ref{asp:avgsmooth-fed} for SAGA-type methods as follows:
\begin{assumption}[Average $\bar{L}$-smoothness]\label{asp:avgsmooth-saga-fed}
A function $f_i(x) :=\frac{1}{n}\sum_{j=1}^{n}f_{i,j}(x)$ is average $\bar{L}$-smooth if
\begin{align}\label{eq:avgsmooth-saga-fed}
{\mathbb{E}}[\ns{\nabla f_{i,j}(x) - \nabla f_{i,j}(y_{i,j})}] \leq \bar{L}^2\frac{1}{n}\sum \limits_{j=1}^{n} \ns{x-y_{i,j}}, \quad \forall x, \{y_{i,j}\}_{j\in [n]} \in {\mathbb R}^d.
\end{align}
\end{assumption}
Moreover, we also provide a unified analysis for nonconvex (federated) optimization problems \eqref{eq:prob-fed}--\eqref{prob:finite} under the Polyak-\L{}ojasiewicz (PL) condition \citep{polyak1963gradient}.
\begin{assumption}[PL condition] \label{asp:pl}
A function $f$ satisfies the PL condition if
\begin{equation}\label{eq:pl}
\exists \mu>0, ~\mathrm{such~that} ~\ns{\nabla f(x)} \geq 2\mu (f(x)-f^*),~ \forall x\in {\mathbb R}^d,
\end{equation}
where $f^*:=\min_{x\in {\mathbb R}^d} f(x)$ denotes the optimal function value.
\end{assumption}
It is worth noting that the PL condition does not imply convexity of $f$. For example, $f(x) = x^2 + 3\sin^2 x$ is a nonconvex function, but it satisfies the PL condition with $\mu=1/32$.
\citet{karimi2016linear} showed that the PL condition is weaker than many other conditions, e.g., strong convexity (SC), weak strong convexity (WSC), and the error bound (EB) condition.
Moreover, if $f$ is convex, the PL condition is equivalent to the error bound (EB) and quadratic growth (QG) conditions \citep{luo1993error,anitescu2000degenerate}.
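The PL property of the example $f(x) = x^2 + 3\sin^2 x$ can be checked numerically: the minimum of the ratio $f'(x)^2/(2(f(x)-f^*))$ over a fine grid stays above $\mu=1/32$ (a numerical sanity check of ours, not a proof):

```python
import numpy as np

f  = lambda x: x**2 + 3 * np.sin(x)**2      # nonconvex; f* = 0, attained at x = 0
df = lambda x: 2 * x + 3 * np.sin(2 * x)    # f'(x) = 2x + 6 sin(x) cos(x)

xs = np.linspace(-10.0, 10.0, 100001)
xs = xs[np.abs(xs) > 1e-3]                  # avoid the 0/0 ratio at the minimizer

# Empirical PL constant: inf_x f'(x)^2 / (2 * (f(x) - f*)) should be >= 1/32.
mu_hat = float(np.min(df(xs)**2 / (2 * f(xs))))
```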
\section{Main Unified Theorems and Simple Special Cases}
In this section we first provide our main unified complexity results (Section~\ref{sec:bui98fg8s_09uf}), and subsequently enumerate a few special cases to showcase the flexibility of our unified approach in accurately describing specific SGD methods (Section~\ref{sec:nonconvex}).
\subsection{Main unified theorems}\label{sec:bui98fg8s_09uf}
We first state Theorem~\ref{thm:main}, which covers a large family of SGD methods (Algorithm~\ref{alg:1}) under the general parametric assumption (Assumption~\ref{asp:boge}). The theorem says that SGD converges at the rate $O(\cdot\frac{1}{\epsilon^2})$ or $O(\cdot\frac{1}{\epsilon^4})$, depending on the values of the parameters.
\begin{theorem}[Main theorem]\label{thm:main}
Suppose that Assumptions \ref{asp:boge} and \ref{asp:lsmooth} hold. Use the fixed stepsize
$$ \eta_k \equiv \eta = \min\left\{ \frac{1}{LB_1+LD_1B_2\rho^{-1}},~ \sqrt{\frac{\ln 2}{(LA_1 + LD_1A_2\rho^{-1})K}},~ \frac{\epsilon^2}{2L(C_1+D_1C_2\rho^{-1})} \right\}$$ and let
${\Delta'_{0}}:= f(x^0) - f^* + 2^{-1}L\eta^2D_1\rho^{-1} \sigma_0^2$. Then
the number of iterations performed by Algorithm~\ref{alg:1} to find an $\epsilon$-solution, i.e., a point $\widehat{x}$ such that $${\mathbb{E}}[\n{\nabla f(\widehat{x})}] \leq \epsilon,$$ can be bounded by
\begin{align}
K = \frac{8{\Delta'_{0}} L}{\epsilon^2} \max \left\{ B_1+D_1B_2\rho^{-1},~ \frac{12{\Delta'_{0}} (A_1+D_1 A_2\rho^{-1})}{\epsilon^2},~ \frac{2(C_1+D_1C_2\rho^{-1})}{\epsilon^2}\right\}. \label{eq:main-ssuhud}
\end{align}
\end{theorem}
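For concreteness, once a method's parameters are known, the iteration bound \eqref{eq:main-ssuhud} is straightforward to evaluate. Below is a small helper of ours (not from the paper), checked on the GD parameters ($B_1=1$, all other parameters zero, $\rho=1$), for which the bound reduces to the classical $K = 8\Delta_0 L/\epsilon^2$:

```python
def iteration_bound(L, delta0, eps, A1=0.0, B1=0.0, C1=0.0,
                    D1=0.0, A2=0.0, B2=0.0, C2=0.0, rho=1.0):
    """Evaluate K from the main theorem for given estimator parameters."""
    t1 = B1 + D1 * B2 / rho
    t2 = 12.0 * delta0 * (A1 + D1 * A2 / rho) / eps**2
    t3 = 2.0 * (C1 + D1 * C2 / rho) / eps**2
    return 8.0 * delta0 * L / eps**2 * max(t1, t2, t3)

# GD: B1 = 1 and all other parameters zero, so K = 8 * Delta0 * L / eps^2.
K_gd = iteration_bound(L=1.0, delta0=1.0, eps=0.1, B1=1.0)
```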
We now state Theorem~\ref{thm:main-pl-dec}, which covers a large family of SGD methods (Algorithm~\ref{alg:1}) if in addition the PL condition (Assumption~\ref{asp:pl}) is satisfied. Note that under the PL condition, one can obtain a faster linear convergence $O(\cdot\log \frac{1}{\epsilon})$ (Theorem~\ref{thm:main-pl-dec}) rather than the sublinear convergence of Theorem~\ref{thm:main}.
\begin{theorem}[Main theorem under PL condition]\label{thm:main-pl-dec}
Suppose that Assumptions \ref{asp:boge}, \ref{asp:lsmooth} and \ref{asp:pl} hold. Set the stepsize as
$$ \eta_{k} = \begin{cases}
\eta & \text {if~~} k\leq \frac{K}{2}\\
\frac{2\eta}{2+(k-\frac{K}{2})\mu\eta} &\text{if~~} k>\frac{K}{2}
\end{cases},
\text{~where~~}
\eta \leq \frac{1}{LB_1+2LD_1B_2\rho^{-1} + (L A_1 + 2LD_1A_2 \rho^{-1})\mu^{-1}}$$
and let ${\Delta'_{0}}:= f(x^0) - f^* + L\eta^2D_1\rho^{-1} \sigma_0^2$ and $\kappa := \frac{L}{\mu}$. Then the number of iterations performed by Algorithm~\ref{alg:1} to find an $\epsilon$-solution, i.e., a point $x^K$ such that $${\mathbb{E}}[f(x^K)-f^*] \leq \epsilon,$$ can be bounded by
\begin{align}
K = \max \left\{2\left(B_1+2D_1B_2\rho^{-1} +(L A_1 + 2LD_1A_2 \rho^{-1})\mu^{-1}\right)\kappa\log \frac{2{\Delta'_{0}}}{\epsilon},~ \frac{10(C_1+2D_1C_2\rho^{-1})\kappa}{\mu \epsilon}\right\}. \label{eq:main-pl-k}
\end{align}
\end{theorem}
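The two-phase stepsize schedule of Theorem~\ref{thm:main-pl-dec} (constant for the first half of the iterations, then an $O(1/k)$ decay) can be transcribed directly; the following sketch of ours implements only the schedule:

```python
def pl_stepsize(k, K, eta, mu):
    """Stepsize eta_k from the PL theorem: eta for k <= K/2,
    then 2*eta / (2 + (k - K/2) * mu * eta) afterwards."""
    if k <= K / 2:
        return eta
    return 2.0 * eta / (2.0 + (k - K / 2) * mu * eta)

# Constant for the first 50 iterations, then monotonically decaying.
schedule = [pl_stepsize(k, K=100, eta=0.1, mu=1.0) for k in range(101)]
```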
In the following sections and appendices, we show that many specific methods, existing and new, satisfy our unified Assumption \ref{asp:boge} and can thus be captured by our unified analysis (i.e., Theorems~\ref{thm:main} and \ref{thm:main-pl-dec}). We can thus plug their corresponding parameters (i.e., specific values for $A_1, A_2, B_1, B_2, C_1,C_2,D_1,\rho$) into our unified Theorems~\ref{thm:main} and \ref{thm:main-pl-dec} to obtain detailed convergence rates for these methods. See Tables~\ref{table:1} and \ref{table:pl} for an overview.
In particular, we give two general frameworks, the DC framework (Algorithm \ref{alg:dc}) and the DIANA framework (Algorithm \ref{alg:diana}), for the general nonconvex federated problems \eqref{eq:prob-fed}--\eqref{prob-fed:finite}.
We provide Theorems \ref{thm:dc-diff} and \ref{thm:diana-type-diff} showing that optimization methods belonging to the DC and DIANA frameworks also satisfy our unified Assumption~\ref{asp:boge}, and can thus also be captured by our unified analysis to obtain detailed convergence rates in the nonconvex distributed regime. See Tables~\ref{table:fed} and \ref{table:fed-pl} for an overview.
\subsection{Simple special cases ($m=1$)}
\label{sec:nonconvex}
As a case study, we first focus on the single-node case ($m=1$), i.e., the standard problem \eqref{eq:prob} with online form \eqref{prob:exp} or finite-sum form \eqref{prob:finite}:
\[ \min_{x\in {\mathbb R}^d} f(x),\]
where
\[ f(x) := {\mathbb{E}}_{\zeta\sim {\mathcal D}}[f(x,\zeta)],
\text{~~~~~or~~~~~} f(x) := \frac{1}{n}\sum \limits_{i=1}^n{f_i(x)}.\]
In the following, we prove that some classical methods such as GD (Section~\ref{sec:bi8f9g8f}), SGD (Section~\ref{sec:98g9s8gf89gs}), L-SVRG (Section~\ref{sec:bo89gfs09f}) and SAGA (Section~\ref{sec:bi98gjhf_9u9f}) satisfy our unified Assumption \ref{asp:boge}, and thus can be captured by our unified convergence analysis, i.e., Theorems \ref{thm:main} and \ref{thm:main-pl-dec}.
\subsubsection{GD method}\label{sec:bi8f9g8f}
The vanilla gradient descent (GD) method is formalized as Algorithm~\ref{alg:gd}.
\begin{algorithm}[h]
\caption{GD}
\label{alg:gd}
\begin{algorithmic}[1]
\STATE In Line \ref{line:grad} of Algorithm \ref{alg:1}: $g^k=\nabla f(x^k)$ \label{line:grad-gd}
\end{algorithmic}
\end{algorithm}
We now show that the gradient estimator used in GD, i.e., the true/full gradient, satisfies Assumption~\ref{asp:boge}.
\begin{lemma}[GD estimator satisfies Assumption \ref{asp:boge}]\label{lem:gd}
The gradient estimator $g^k=\nabla f(x^k)$ satisfies the unified Assumption \ref{asp:boge} with parameters
$$A_1=C_1=D_1=0,\quad B_1=1,\quad \sigma_k^2 \equiv 0,\quad \rho=1,\quad A_2=B_2=C_2=0.$$
\end{lemma}
\subsubsection{SGD method}
\label{sec:98g9s8gf89gs}
Many (but not all) variants of stochastic gradient descent (SGD) can be written in the form of Algorithm~\ref{alg:sgd}.
\begin{algorithm}[h]
\caption{SGD \citep{khaled2020better}}
\label{alg:sgd}
\begin{algorithmic}[1]
\STATE In Line \ref{line:grad} of Algorithm \ref{alg:1}:
\STATE $g^k=\begin{cases}
\nabla f_i(x^k), &\text{Standard SGD} \\% for Problem~} \eqref{prob:finite} \\
{\mathcal C}(\nabla f(x^k)), & \text{Compressed gradient, e.g., quantized gradient, sparse gradient, etc} \\
\nabla f(x^k) +\xi, & \text{Noisy gradient} \\
\sum_{i\in I} v_i \nabla f_i(x^k), &\text{minibatch SGD, SGD with importance or arbitrary sampling}\\
\ldots, &\text{Combinations (e.g., minibatch compressed SGD, noisy SGD) and beyond}
\end{cases}
$\label{line:grad-sgd}
\end{algorithmic}
\end{algorithm}
\citet{khaled2020better} showed that the stochastic gradient estimators used in many variants of SGD for nonconvex smooth problems, including variants performing minibatching, importance sampling, gradient compression and their combinations, satisfy the following generalized expected smoothness (ES) assumption.
\begin{assumption}[ES: Expected Smoothness \citep{khaled2020better}]\label{asp:es}
The gradient estimator $g(x)$ is unbiased, i.e.,
${\mathbb{E}}[g(x)] = \nabla f(x)$,
and there exist non-negative constants $A, B, C$ such that
\begin{align}
{\mathbb{E}}[\ns{g(x)}] \leq 2A(f(x)-f^*)+B\ns{\nabla f(x)} + C, \quad \forall x\in {\mathbb R}^d.
\end{align}
\end{assumption}
Our work can be seen as a further and substantial generalization of their approach, one that allows many more methods to be captured by a single assumption and analysis. In particular, our unified Assumption \ref{asp:boge} can capture the behavior of gradient estimators constructed by variance-reduced methods, while the ES assumption cannot. The next lemma says that, indeed, our unified Assumption \ref{asp:boge} recovers the ES assumption as a special case.
\begin{lemma}[SGD estimator satisfies Assumption \ref{asp:boge}]\label{lem:sgd}
Any gradient estimator $g^k$ satisfying the Expected Smoothness Assumption~\ref{asp:es}, i.e.,
$$ {\mathbb{E}}_k[\ns{g^k}] \leq 2A(f(x^k)-f^*)+B\ns{\nabla f(x^k)} + C,$$
(used in Line \ref{line:grad-sgd} of Algorithm \ref{alg:sgd})
satisfies the unified Assumption \ref{asp:boge} with parameters
$$A_1=A,\quad B_1=B,\quad C_1= C,\quad D_1=0,\quad \sigma_k^2 \equiv 0,\quad \rho=1,\quad A_2=B_2=C_2=0.$$
\end{lemma}
\subsubsection{L-SVRG method}\label{sec:bo89gfs09f}
The loopless (minibatch) SVRG method (L-SVRG) developed by \citet{Hofmann2015} and rediscovered by \citet{kovalev2019don} is formalized as Algorithm~\ref{alg:lsvrg}.
\citet{kovalev2019don} only studied L-SVRG in the strongly convex setting with $b=1$ (no minibatch). Recently, \citet{qian2019svrg} studied L-SVRG in the nonconvex case.
\vspace{-2mm}
\begin{algorithm}[!h]
\caption{Loopless SVRG (L-SVRG) \citep{Hofmann2015, kovalev2019don} }
\label{alg:lsvrg}
\begin{algorithmic}[1]
\REQUIRE
initial point $x^0=w^0$, stepsize $\eta_k$, minibatch size $b$, probability $p\in (0,1]$
\FOR {$k=0,1,2,\ldots$}
\STATE $g^k = \frac{1}{b} \sum \limits_{i\in I_b} (\nabla f_i(x^k)- \nabla f_i(w^k)) +\nabla f(w^k)$ \qquad ($I_b$ denotes random minibatch with $|I_b|=b$)
\label{line:lsvrg}
\STATE $x^{k+1} = x^k - \eta_k g^k$ \label{line:update-lsvrg}
\STATE $w^{k+1} = \begin{cases}
x^k, &\text{with probability } p\\
w^k, &\text{with probability } 1-p
\end{cases}$ \label{line:w_prob}
\ENDFOR
\end{algorithmic}
\end{algorithm}
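A quick sanity check of Line \ref{line:lsvrg} (toy setup, names not from the paper: scalar quadratics $f_i(x)=(x-a_i)^2/2$): enumerating all minibatches of size $b$ confirms that the L-SVRG estimator is unbiased, since the correction term $\nabla f(w^k)$ exactly compensates the subtracted anchor gradients in expectation.

```python
from itertools import combinations

# Hypothetical toy check: the L-SVRG estimator
#   g = (1/b) sum_{i in I_b} (f_i'(x) - f_i'(w)) + f'(w)
# is unbiased for f'(x), because E over a uniform minibatch of
# (1/b) sum_{i in I_b} (f_i'(x) - f_i'(w)) equals f'(x) - f'(w).
a = [1.0, -2.0, 0.5, 3.0]          # f_i(x) = (x - a_i)^2 / 2
n, b = len(a), 2
gi = lambda i, x: x - a[i]          # f_i'
gfull = lambda x: sum(gi(i, x) for i in range(n)) / n
x, w = 0.7, -1.3                    # current iterate and anchor point
ests = [sum(gi(i, x) - gi(i, w) for i in I) / b + gfull(w)
        for I in combinations(range(n), b)]   # one estimate per minibatch
mean_est = sum(ests) / len(ests)
assert abs(mean_est - gfull(x)) < 1e-12       # E[g^k] = grad f(x^k)
```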
We now show that the gradient estimator used in L-SVRG satisfies Assumption~\ref{asp:boge}.
\begin{lemma}[L-SVRG estimator satisfies Assumption \ref{asp:boge}]\label{lem:lsvrg}
Suppose that Assumption \ref{asp:avgsmooth} holds. The L-SVRG gradient estimator $$g^k=\frac{1}{b} \sum_{i\in I_b} (\nabla f_i(x^k)- \nabla f_i(w^k)) +\nabla f(w^k)$$ (see Line \ref{line:lsvrg} in Algorithm \ref{alg:lsvrg})
satisfies the unified Assumption~\ref{asp:boge} with parameters
$$A_1=A_2=C_1=C_2=0,$$
$$B_1=1, \quad D_1=\frac{L^2}{b}, \quad \sigma_k^2= \ns{x^k-w^k}, \quad \rho=\frac{p}{2}+\frac{p^2}{2}-\frac{\eta^2L^2}{b},\quad B_2=\frac{2\eta^2}{p}-\eta^2.$$
\end{lemma}
As we discussed in Section \ref{sec:contribution}, for variance-reduced methods, the gradient estimator usually satisfies the unified Assumption \ref{asp:boge} with $D_1>0$ and $C_1=0$.
The recurrence \eqref{eq:boge2} describes the variance reduction process, with parameter $\rho$ describing the speed of variance reduction.
Here $C_2$ is also $0$; thus L-SVRG is fully variance reduced, which typically translates into a faster convergence rate. This can be seen from our main Theorems \ref{thm:main} and \ref{thm:main-pl-dec}. Indeed, i) for Theorem \ref{thm:main}, note that because $A_1=A_2=C_1=C_2=0$, the two $O(\frac{1}{\epsilon^2})$ terms appearing in the max in \eqref{eq:main-ssuhud} disappear, and hence the rate of L-SVRG is $O(\frac{1}{\epsilon^2})$ as opposed to the generic rate $O(\frac{1}{\epsilon^4})$ of less refined (i.e., not variance reduced) SGD variants.
ii) for Theorem \ref{thm:main-pl-dec}, note that because $C_1=C_2=0$, the last term $O(\frac{\kappa}{\mu \epsilon})$ in \eqref{eq:main-pl-k} disappears, and hence the rate of L-SVRG is linear $O(\kappa\log \frac{1}{\epsilon})$ as opposed to the worse sublinear rate $O(\frac{\kappa}{\mu \epsilon})$ of less refined (i.e., not variance reduced) SGD variants.
\subsubsection{SAGA method}\label{sec:bi98gjhf_9u9f}
The (minibatch) SAGA method developed by \citet{defazio2014saga} (in the convex setting) is formalized as Algorithm~\ref{alg:saga}.
Here we analyze it and obtain convergence rates in the nonconvex setting.
\begin{algorithm}[!h]
\caption{SAGA \cite{defazio2014saga}}
\label{alg:saga}
\begin{algorithmic}[1]
\REQUIRE
initial point $x^0, \{w_i^0\}_{i=1}^n$, stepsize $\eta_k$, minibatch size $b$
\FOR {$k=0,1,2,\ldots$}
\STATE $g^k = \frac{1}{b} \sum \limits_{i\in I_b} (\nabla f_i(x^k)- \nabla f_i(w_i^k)) +\frac{1}{n}\sum \limits_{j=1}^{n}\nabla f_j(w_j^k)$ ~~~($I_b$ denotes random minibatch with $|I_b|=b$) \label{line:saga}
\STATE $x^{k+1} = x^k - \eta_k g^k$ \label{line:update-saga}
\STATE $w_i^{k+1} = \begin{cases}
x^k, & \text{for~~} i\in I_b\\
w_i^k, &\text{for~~} i\notin I_b
\end{cases}$ \label{line:wi}
\ENDFOR
\end{algorithmic}
\end{algorithm}
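Analogously to L-SVRG, unbiasedness of the SAGA estimator in Line \ref{line:saga} can be verified on a toy instance (assumed setup, not from the paper): the difference with L-SVRG is that each component $f_i$ keeps its own anchor $w_i$, and the correction term is the table average $\frac{1}{n}\sum_j \nabla f_j(w_j)$.

```python
from itertools import combinations

# Hypothetical toy check: the SAGA estimator with per-component anchors w_i,
#   g = (1/b) sum_{i in I_b} (f_i'(x) - f_i'(w_i)) + (1/n) sum_j f_j'(w_j),
# is unbiased for f'(x) over a uniform minibatch I_b.
a = [1.0, -2.0, 0.5, 3.0]          # f_i(x) = (x - a_i)^2 / 2
n, b = len(a), 2
gi = lambda i, x: x - a[i]          # f_i'
gfull = lambda x: sum(gi(i, x) for i in range(n)) / n
x = 0.7
w = [0.1, -0.4, 2.0, 1.5]           # stale anchor points, one per f_i
table_avg = sum(gi(j, w[j]) for j in range(n)) / n
ests = [sum(gi(i, x) - gi(i, w[i]) for i in I) / b + table_avg
        for I in combinations(range(n), b)]   # one estimate per minibatch
mean_est = sum(ests) / len(ests)
assert abs(mean_est - gfull(x)) < 1e-12       # E[g^k] = grad f(x^k)
```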
We now show that the gradient estimator used in SAGA satisfies Assumption~\ref{asp:boge}.
\begin{lemma}[SAGA estimator satisfies Assumption \ref{asp:boge}]\label{lem:saga}
Suppose that Assumption \ref{asp:avgsmooth-saga} holds. The SAGA gradient estimator $$g^k= \frac{1}{b} \sum_{i\in I_b} (\nabla f_i(x^k)- \nabla f_i(w_i^k)) +\frac{1}{n}\sum_{j=1}^{n}\nabla f_j(w_j^k)$$ (see Line \ref{line:saga} in Algorithm \ref{alg:saga})
satisfies the unified Assumption~\ref{asp:boge} with parameters $$A_1=A_2=C_1=C_2=0,$$
$$B_1=1,\quad D_1=\frac{L^2}{b},\quad \sigma_k^2=\frac{1}{n}\sum_{i=1}^{n} \ns{x^k-w_i^k},\quad \rho=\frac{b}{2n}+\frac{b^2}{2n^2}-\frac{\eta^2L^2}{b},\quad B_2=\frac{2\eta^2n}{b}-\eta^2.$$
\end{lemma}
{\bf Remark:} We obtain the detailed convergence rates for these methods by plugging their corresponding specific values of $A_1, A_2, B_1, B_2, C_1,C_2,D_1,\rho$ (see Lemmas \ref{lem:gd}--\ref{lem:saga}) into our unified Theorems~\ref{thm:main} and \ref{thm:main-pl-dec}. See Tables~\ref{table:1} and \ref{table:pl} for an overview.
\section{General Nonconvex Federated Optimization Problems}
\label{sec:fedrated}
In this section, we consider the more general nonconvex distributed/federated problem \eqref{eq:prob-fed} with online form \eqref{prob-fed:exp} or finite-sum form \eqref{prob-fed:finite}, i.e.,
\[
\min_{x\in {\mathbb R}^d} \bigg\{ f(x) := \frac{1}{m}\sum \limits_{i=1}^m{f_i(x)} \bigg\},
\]
where
\[ f_i(x) := {\mathbb{E}}_{\zeta \sim {\mathcal D}_i}[f_i(x,\zeta)],
\text{~~~~or~~~~} f_i(x) := \frac{1}{n}\sum \limits_{j=1}^n{f_{i,j}(x)}.
\]
Here we allow different machines/workers to have different data distributions, i.e., we consider the non-IID (heterogeneous) data setting. Note that in distributed/federated problems, the bottleneck usually is the communication cost among workers, which motivates the study of methods which employ {\em compressed} communication.
In the following, we provide two general algorithmic frameworks differing in whether direct gradient compression (DC) or compression of gradient differences (DIANA) is used. Previous approaches mostly focus on strongly convex or convex problems for specific instances of SGD. Ours is the first unified analysis that covers many variants of SGD in a single theorem in the nonconvex regime. In fact, many specific SGD methods arising as special cases of our general approach have not been analyzed before.
\subsection{Compression operators}
Compressed communication is modeled by the application of a (randomized) compression operator to the communicated messages, as described next.
\begin{definition}[Compression operator] \label{def_compression}
A randomized map ${\mathcal C}: {\mathbb R}^d\mapsto {\mathbb R}^d$ is an $\omega$-compression operator if
\begin{equation}\label{eq:compress-main}
{\mathbb{E}}[{\mathcal C}(x)]=x, \qquad {\mathbb{E}}[\ns{{\mathcal C}(x)-x}] \leq \omega\ns{x}, \qquad \forall x\in {\mathbb R}^d.
\end{equation}
In particular, no compression (${\mathcal C}(x)\equiv x$) satisfies \eqref{eq:compress-main} with $\omega=0$.
\end{definition}
Note that \eqref{eq:compress-main} holds for many practical compression methods, e.g., random sparsification \citep{stich2018sparsified}, quantization \citep{alistarh2017qsgd}, natural compression \citep{Cnat}. We are not going to focus on any specific compression operator; instead, we will analyze our methods for any compression operator captured by the above definition.
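One concrete operator satisfying Definition~\ref{def_compression} is random-$k$ sparsification (used here only as an assumed illustrative example): keep a uniformly random subset of $k$ coordinates and rescale by $d/k$. Enumerating all subsets verifies the exact moments ${\mathbb{E}}[{\mathcal C}(x)]=x$ and ${\mathbb{E}}\|{\mathcal C}(x)-x\|^2=(d/k-1)\|x\|^2$, i.e., $\omega = d/k-1$:

```python
from itertools import combinations

# Rand-k sparsification (assumed example compressor): C(x) keeps a uniformly
# random subset S of k coordinates and rescales them by d/k.  Enumerating all
# C(d, k) subsets gives the exact moments, showing unbiasedness and
# omega = d/k - 1 in the variance bound E||C(x)-x||^2 <= omega ||x||^2.
x = [1.0, -2.0, 0.5, 3.0]
d, k = len(x), 2
subsets = list(combinations(range(d), k))
mean = [0.0] * d       # accumulates E[C(x)]
second = 0.0           # accumulates E||C(x) - x||^2
for S in subsets:
    cx = [(d / k) * x[j] if j in S else 0.0 for j in range(d)]
    mean = [mu + c / len(subsets) for mu, c in zip(mean, cx)]
    second += sum((c - xj) ** 2 for c, xj in zip(cx, x)) / len(subsets)
sq = sum(xj ** 2 for xj in x)
assert all(abs(mu - xj) < 1e-12 for mu, xj in zip(mean, x))  # unbiased
assert abs(second - (d / k - 1) * sq) < 1e-9                 # omega = d/k - 1
```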
\subsection{DC framework for nonconvex federated optimization}
In the direct compression (DC) framework studied in this section, each machine $i\in [m]$ computes its local stochastic gradient $\widetilde{g}_i^k$, subsequently applies to it a compression operator ${\mathcal C}_i^k$, and communicates the compressed vector to a server, or to all other nodes (see Algorithm \ref{alg:dc}).
\vspace{-1mm}
\begin{algorithm}[!h]
\caption{DC framework of stochastic gradient methods for nonconvex federated optimization}
\label{alg:dc}
\begin{algorithmic}[1]
\REQUIRE
initial point $x^0\in {\mathbb R}^d$, stepsizes $\eta_k$
\FOR {$k=0,1,2,\ldots$}
\STATE {\bf{for all machines $i= 1,2,\ldots,m$ do in parallel}}
\STATE \quad Compute local stochastic gradient $\widetilde{g}_i^k$ \label{line:localgrad-dc}
\STATE \quad \blue{Compress local gradient ${\mathcal C}_i^k(\widetilde{g}_i^k)$} and send it to the server \label{line:comgrad-dc}
\STATE {\bf{end for}}
\STATE Aggregate received compressed gradient information:
$g^k = \frac{1}{m}\sum \limits_{i=1}^m {\mathcal C}_i^k(\widetilde{g}_i^k)$ \label{line:gk-dc}
\STATE $x^{k+1} = x^k - \eta_k g^k$
\ENDFOR
\end{algorithmic}
\end{algorithm}
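The aggregation step in Line \ref{line:gk-dc} preserves unbiasedness: since each ${\mathcal C}_i^k$ is unbiased and the workers compress independently, ${\mathbb{E}}[g^k]$ equals the average of the local gradients. A small exact check (toy numbers, rand-$1$ sparsification as the assumed compressor):

```python
from itertools import combinations, product

# Toy check of the DC aggregation step: each of m workers compresses its exact
# local gradient with independent rand-k sparsification (keep k of d coords,
# rescale by d/k).  Enumerating all joint compressor outcomes shows the
# aggregate g = (1/m) sum_i C_i(grad_i) is unbiased for (1/m) sum_i grad_i.
grads = [[1.0, -2.0], [0.5, 3.0], [-1.0, 0.25]]   # local gradients, d = 2
m, d, k = len(grads), 2, 1
subsets = list(combinations(range(d), k))
full = [sum(g[j] for g in grads) / m for j in range(d)]
outcomes = list(product(subsets, repeat=m))       # one subset choice per worker
mean = [0.0] * d
for choice in outcomes:
    agg = [sum((d / k) * grads[i][j] if j in choice[i] else 0.0
               for i in range(m)) / m for j in range(d)]
    mean = [mu + aj / len(outcomes) for mu, aj in zip(mean, agg)]
assert all(abs(mu - fj) < 1e-12 for mu, fj in zip(mean, full))  # E[g] = grad f
```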
Our main theoretical result describing the convergence properties of Algorithm~\ref{alg:dc} is stated next.
\begin{theorem}[DC framework]\label{thm:dc-diff}
If the local stochastic gradient $\widetilde{g}_i^k$ (see Line \ref{line:localgrad-dc} of Algorithm~\ref{alg:dc}) satisfies the recursions
\begin{align}
{\mathbb{E}}_k[\ns{\widetilde{g}_i^k}] & \leq 2A_{1,i}(f_i(x^k)-f_i^*)+B_{1,i}\ns{\nabla f_i(x^k)} + {D_{1,i} \sigma_{k,i}^2} +C_{1,i}, \label{eq:gi1-dc-diff-8}\\
{\mathbb{E}}_k[\sigma_{k+1,i}^2] & \leq {(1-\rho_i)\sigma_{k,i}^2 + 2A_{2,i}(f(x^k)-f^*)+B_{2,i}\ns{\nabla f(x^k)} + D_{2,i}{\mathbb{E}}_k[\ns{g^k}] +C_{2,i}}, \label{eq:gi2-dc-diff-8}
\end{align}
then $g^k$ (see Line \ref{line:gk-dc} of Algorithm \ref{alg:dc}) satisfies the unified Assumption \ref{asp:boge}
with
\begin{align*}
& A_1 =\frac{(1+\omega)A}{m}, \quad
B_1 =1, \quad
D_1 =\frac{1+\omega}{m}, \quad
\sigma_k^2 = \frac{1}{m}\sum_{i=1}^m D_{1,i} \sigma_{k,i}^2 , \quad
C_1 = \frac{(1+\omega)C}{m}, \\
& \rho =\min_i\rho_i-\tau, \quad
A_2 =D_A+ \tau A, \quad
B_2 =D_B+D_D, \quad
C_2 = D_C+ \tau C,
\end{align*}
where
$A:=\max_i (A_{1,i}+B_{1,i}L_i-L_i/(1+\omega))$,
$C := \frac{1}{m} \sum_{i=1}^m C_{1,i} + 2A\Delta_f^*$,
$\Delta_f^*:=f^*-\frac{1}{m}\sum_{i=1}^m f_i^*$,
$\tau:=\frac{(1+\omega)D_D}{m}$,
$D_A:=\frac{1}{m} \sum_{i=1}^m D_{1,i}A_{2,i}$,
$D_B:=\frac{1}{m} \sum_{i=1}^m D_{1,i} B_{2,i}$,
$D_D:=\frac{1}{m} \sum_{i=1}^m D_{1,i} D_{2,i}$,
and
$D_C:=\frac{1}{m} \sum_{i=1}^m D_{1,i}C_{2,i}$.
\end{theorem}
The above result means that, provided that the local gradient estimators $\widetilde{g}_i^k$ used in the DC framework (Algorithm~\ref{alg:dc}) satisfy recursions \eqref{eq:gi1-dc-diff-8}--\eqref{eq:gi2-dc-diff-8}, the global gradient estimator $g^k$ satisfies our unified Assumption~\ref{asp:boge}, and hence our main convergence results, Theorem~\ref{thm:main} and Theorem~\ref{thm:main-pl-dec} can be applied.
To showcase the generality and expressive power of our DC framework, we describe several particular ways in which such local estimators can be generated, each leading to a particular instance of a DC-type method.
In particular, we describe methods DC-GD (Algorithm~\ref{alg:dc-gd}), DC-SGD (Algorithm \ref{alg:dc-sgd}), DC-LSVRG (Algorithm \ref{alg:dc-lsvrg}) and DC-SAGA (Algorithm \ref{alg:dc-saga}).
In each case we show that recursions \eqref{eq:gi1-dc-diff-8} and \eqref{eq:gi2-dc-diff-8} are satisfied, and in doing so, we obtain complexity results in the nonconvex case with or without the PL condition.
Details can be found in the appendix.
See the first part of Table~\ref{table:fed} and the first part of Table~\ref{table:fed-pl} for a summary of these particular methods.
Note that all of these results are new, with the exception of DC-GD and DC-SGD in the nonconvex non-PL case.
\subsection{DIANA framework for nonconvex federated optimization}
\label{sec:diana}
We now highlight an inherent issue of the DC framework (Algorithm \ref{alg:dc}), which will serve as a motivation for the DIANA framework described here. Consider any stationary point $\widehat{x}$, i.e., $\nabla f(\widehat{x}) =\frac{1}{m}\sum_{i=1}^{m} \nabla f_i(\widehat{x}) =0$. The aggregated compressed gradient (even if the full gradient is used locally, i.e., $\widetilde{g}_i^k=\nabla f_i(x^k)$) is {\em not} equal to $0$, i.e., $g(\widehat{x}) = \frac{1}{m}\sum_{i=1}^m {\mathcal C}_i(\nabla f_i(\widehat{x})) \neq 0$. This effect slows down the convergence of methods in the DC framework.
To address this issue, we use the DIANA framework
to compress the {\em gradient differences} instead (see Line \ref{line:comgrad-diana} of Algorithm \ref{alg:diana}).
\begin{algorithm}[h]
\caption{DIANA framework of stochastic gradient methods for nonconvex federated optimization}
\label{alg:diana}
\begin{algorithmic}[1]
\REQUIRE
initial point $x^0$, $\{h_i^0\}_{i=1}^m$, $h^0=\frac{1}{m}\sum_{i=1}^{m}h_i^0$, stepsize parameters $\eta_k, \alpha_k$
\FOR {$k=0,1,2,\ldots$}
\STATE {\bf{for all machines $i= 1,2,\ldots,m$ do in parallel}}
\STATE \quad Compute local stochastic gradient $\widetilde{g}_i^k$ \label{line:localgrad}
\STATE \quad \blue{Compress shifted local gradient $\widehat{\Delta}_i^k= {\mathcal C}_i^k(\widetilde{g}_i^k- h_i^k)$} and send $\widehat{\Delta}_i^k$ to the server \label{line:comgrad-diana}
\STATE \quad Update local shift $h_i^{k+1}=h_i^k+\alpha_k {\mathcal C}_i^k(\widetilde{g}_i^k - h_i^k)$
\STATE {\bf{end for}}
\STATE Aggregate received compressed gradient information:
$g^k = h^k + \frac{1}{m}\sum \limits_{i=1}^m \widehat{\Delta}_i^k$ \label{line:gk}
\STATE $x^{k+1} = x^k - \eta_k g^k$
\STATE $h^{k+1} = h^k + \alpha_k \frac{1}{m}\sum \limits_{i=1}^m \widehat{\Delta}_i^k$
\ENDFOR
\end{algorithmic}
\end{algorithm}
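The stationary-point issue raised above, and how the shifts $h_i^k$ resolve it, can be seen on a tiny example (assumed toy numbers; rand-$1$ sparsification as the compressor and $\alpha=\frac{1}{1+\omega}$, a choice made here for illustration): running the shift recursion at a fixed stationary point drives $h_i^k \to \nabla f_i(\widehat{x})$, after which the transmitted differences vanish and the DIANA estimator equals $\nabla f(\widehat{x})=0$ exactly.

```python
import random

# Toy demonstration of the DIANA fix at a stationary point: heterogeneous
# local gradients grad_i sum to zero, so grad f(x) = 0.  The shift recursion
#   h_i <- h_i + alpha * C_i(grad_i - h_i)
# learns grad_i; once h_i = grad_i, the compressed differences are zero and
# g = h = (1/m) sum_i grad_i = 0 exactly.  (alpha = 1/(1+omega) is an assumed
# stepsize choice for this sketch.)
random.seed(0)
grads = [[2.0, -1.0], [-2.0, 1.0]]        # heterogeneous, sum to zero
m, d = len(grads), 2
omega = d - 1                              # rand-1 sparsification: omega = d/k - 1
alpha = 1.0 / (1 + omega)

def comp(v):                               # rand-1: keep one coordinate, scale by d
    j = random.randrange(d)
    return [d * v[i] if i == j else 0.0 for i in range(d)]

h = [[0.0] * d for _ in range(m)]
for _ in range(50):                        # run the shift recursion at fixed x
    for i in range(m):
        delta = comp([grads[i][j] - h[i][j] for j in range(d)])
        h[i] = [h[i][j] + alpha * delta[j] for j in range(d)]
err = max(abs(h[i][j] - grads[i][j]) for i in range(m) for j in range(d))
assert err < 1e-12                         # shifts learned the local gradients
g = [sum(h[i][j] for i in range(m)) / m for j in range(d)]  # compressed diffs = 0
assert max(abs(gj) for gj in g) < 1e-12    # DIANA estimator is exactly 0 here
```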
The DIANA framework was previously studied by \citet{mishchenko2019distributed, DIANA2,DFPMCI2019} for a few specific instances of SGD, mainly for strongly convex or convex problems. Ours is the first unified analysis that covers many variants of SGD in a single theorem in the nonconvex regime.
Concretely, \citet{DFPMCI2019} analyze the strongly convex or convex problem of finding a fixed point under strong assumptions. Here we analyze the more general nonconvex setting under a more general unified assumption.
\citet{mishchenko2019distributed} analyze their methods for the ternary compression operator only, and their result in the nonconvex setting makes very strong assumptions on the local gradient estimators $\widetilde{g}_i^k$. In particular, they assume that $\widetilde{g}_i^k$ is unbiased, and that there exists $\sigma_i^2\geq 0$ such that ${\mathbb{E}}_k[\ns{\widetilde{g}_i^k}] \leq \ns{\nabla f_i(x^k)} +\sigma_i^2$.
Note that this implies that $B_{1,i} = 1$ and $C_{1,i} = \sigma_i^2$ in recursion \eqref{eq:gi1-dc-diff-8}.
This corresponds to a (rather restrictive) special case of recursions \eqref{eq:gi1-dc-diff-8}--\eqref{eq:gi2-dc-diff-8} with the rest of the parameters rendered ``inactive'':
$A_{1,i} = D_{1,i}=A_{2,i} = B_{2,i} = C_{2,i} = D_{2,i} =0$, and $\rho_i=1$.
Hence, when compared to \citep{mishchenko2019distributed}, our results for the DIANA framework can be seen as a generalization to arbitrary compression operators described by Definition~\ref{def_compression}, to a much more general class of local gradient estimators (including more methods), and to the PL setting.
Further, \citet{DIANA2} lift some of the deficiencies of \citep{mishchenko2019distributed}. In particular, they consider general compression operators, and consider the finite-sum setting
in which they use two particular variance reduced estimators $\widetilde{g}_i^k$, i.e., L-SVRG and SAGA with minibatch size $b=1$ (non-minibatch version).
Our work can be seen as a generalization of their work to the potentially infinite family of local gradient estimators described by recursions \eqref{eq:gi1-dc-diff-8}--\eqref{eq:gi2-dc-diff-8}, and further to the PL setting. Note that our framework allows for the analysis of more general variants of DIANA, such as DIANA-LSVRG or DIANA-SAGA with additional additive noise and with minibatch size $b\geq 1$ (note that one can enjoy a linear speedup by adopting minibatching in parallel), which can be helpful in some situations.
Our main result for the DIANA framework is stated next.
\begin{theorem}[DIANA framework]\label{thm:diana-type-diff}
Suppose that the local stochastic gradient $\widetilde{g}_i^k$ (see Line~\ref{line:localgrad} of Algorithm~\ref{alg:diana}) satisfies \eqref{eq:gi1-dc-diff-8}--\eqref{eq:gi2-dc-diff-8}, same as in the DC framework.
Then $g^k$ (see Line \ref{line:gk} of Algorithm \ref{alg:diana}) satisfies the unified Assumption \ref{asp:boge}
with
\begin{align*}
& A_1 =\frac{(1+\omega)A}{m}, ~
B_1 =1, ~
D_1 =\frac{1+\omega}{m}, ~
\sigma_k^2 = \frac{1}{m} \sum_{i=1}^m D_{1,i} \sigma_{k,i}^2 + \frac{\omega}{(1+\omega)m}
\sum_{i=1}^m \ns{\nabla f_i(x^k) -h_i^k}, \\
& C_1 = \frac{(1+\omega)C}{m}, \quad
\rho =\min \left\{\min_i\rho_i-\tau,~ 2\alpha -(1-\alpha)\beta^{-1} -\alpha^2-\tau \right\}, \\
& A_2 =D_A+ \tau A, \quad
B_2 =D_B+B, \quad
C_2 = D_C + \tau C,
\end{align*}
where
$A:=\max_i (A_{1,i}+(B_{1,i}-1)L_i)$,
$B:=\frac{\omega(1+\beta)L^2\eta^2}{1+\omega} + D_D$,
$C := \frac{1}{m} \sum_{i=1}^m C_{1,i} + 2A\Delta_f^*$,
$\Delta_f^*:=f^*-\frac{1}{m}\sum_{i=1}^m f_i^*$,
$\tau:=\alpha^2 \omega + \frac{(1+\omega)B}{m}$,
$D_A:=\frac{1}{m} \sum_{i=1}^m D_{1,i}A_{2,i}$,
$D_B:=\frac{1}{m} \sum_{i=1}^m D_{1,i} B_{2,i}$,
$D_D:=\frac{1}{m} \sum_{i=1}^m D_{1,i} D_{2,i}$,
$D_C:=\frac{1}{m} \sum_{i=1}^m D_{1,i}C_{2,i}$,
and $\beta>0$ is an arbitrary constant.
\end{theorem}
Similarly, the above result means that, provided that the local gradient estimators $\widetilde{g}_i^k$ used in the DIANA framework (Algorithm~\ref{alg:diana}) satisfy recursions \eqref{eq:gi1-dc-diff-8}--\eqref{eq:gi2-dc-diff-8}, the global gradient estimator $g^k$ satisfies our unified Assumption~\ref{asp:boge}, and hence our main convergence results, Theorem~\ref{thm:main} and Theorem~\ref{thm:main-pl-dec} can be applied.
To showcase the generality and expressive power of our DIANA framework, we describe several particular ways in which such local estimators can be generated, each leading to a particular instance of a DIANA-type method.
In particular, we describe methods DIANA-GD (Algorithm~\ref{alg:diana-gd}), DIANA-SGD (Algorithm~\ref{alg:diana-sgd}), DIANA-LSVRG (Algorithm~\ref{alg:diana-lsvrg}) and DIANA-SAGA (Algorithm~\ref{alg:diana-saga}).
In each case we show that recursions \eqref{eq:gi1-dc-diff-8} and \eqref{eq:gi2-dc-diff-8} are satisfied, and in doing so, we obtain complexity results in the nonconvex case with or without the PL condition.
Details can be found in the appendix.
See the second part of Table~\ref{table:fed} and the second part of Table~\ref{table:fed-pl} for a summary of these particular methods.
Note that all of these results are new, with the exception of a weak version of DIANA-LSVRG and DIANA-SAGA studied by \citet{DIANA2} (as discussed above) in the nonconvex non-PL case.
\newpage
\bibliographystyle{plainnat}
\bigskip
\noindent{\bf Fast Correlation Greeks by Adjoint Algorithmic Differentiation} (arXiv:1004.1855)
\begin{quote}
We show how Adjoint Algorithmic Differentiation (AAD) allows an extremely efficient calculation of the correlation Risk of option prices computed with Monte Carlo simulations. A key point in the construction is the use of binning to simultaneously achieve computational efficiency and accurate confidence intervals. We illustrate the method for a copula-based Monte Carlo computation of claims written on a basket of underlying assets, and we test it numerically for Portfolio Default Options. For any number of underlying assets or names in a portfolio, the sensitivities of the option price with respect to all the pairwise correlations are obtained at a computational cost which is at most 4 times the cost of calculating the option value itself. For typical applications, this results in computational savings of several orders of magnitude with respect to standard methods.
\end{quote}
\section*{Forward and Adjoint Algorithmic Differentiation}
Both the Forward and Adjoint mode of AD aim at calculating the derivatives of
a computer implemented function. They differ by the direction of
propagation of the chain rule through the composition of instructions representing the function.
To illustrate this point, suppose we begin with a single input $a$, and produce a single output
$z$ after proceeding through a sequence of steps:
\[
a\ \rightarrow\ \ldots\ \rightarrow\ u\ \rightarrow\ v\ \rightarrow\ \ldots\ \rightarrow\ z.
\]
The Forward (or Tangent) mode of AD (FAD) defines $\dot{u}$ to be the sensitivity
of $u$ to changes in $a$, i.e.,
\[
\dot{u} \equiv \frac{\partial u}{\partial a}~.
\]
If the intermediate variables $u$ and $v$ are vectors, $\dot{v}$ is
calculated by differentiating the dependence of $v$ on $u$ so that
\[
\dot{v}_i = \sum_j \frac{\partial v_i}{\partial u_j}\ \dot{u}_j.
\]
Applying this to each step in the calculation, working from left to right,
we end up computing $\dot{z}$, the sensitivity of the output to changes in the input.
Note that if we have more than one input, then we need to calculate the sensitivity
to each one in turn, and so the cost is linear in the number of input variables.
Instead, the Adjoint (or Backward) mode of AD (AAD) works from right to left. Using the
standard AD notation, $\bar{u}$ is defined to be the sensitivity of the output $z$ to
changes in the intermediate variable $u$, i.e.
\[
\bar{u}_i \equiv \frac{\partial z}{\partial u_i}.
\]
Using the chain rule we get,
\[
\frac{\partial z}{\partial u_i} = \sum_j \frac{\partial z}{\partial v_j}\
\frac{\partial v_j}{\partial u_i},
\]
which corresponds to the adjoint mode equation
\[
\bar{u}_i = \sum_j \frac{\partial v_j}{\partial u_i}\ \bar{v}_j.
\]
Starting from $\bar{z}=1$, we can apply this to each step in the calculation, working from
right to left, until we obtain $\bar{a}$, the sensitivity of the output to each of the input
variables.
In the Adjoint mode, the cost does not increase with the number of inputs, but if
there is more than one output then the sensitivities for each output have to be considered
one at a time and so the cost is linear in the number of outputs.
Furthermore, because the partial derivatives depend on the values of the intermediate
variables, one first has to compute the original calculation storing the values of
all of the intermediate variables such as $u$ and $v$, before performing the Adjoint mode
sensitivity calculation.
In the above description, each step can be a distinct high-level function,
or specific mathematical operations,
or even an individual instruction in a computer code.
This last viewpoint is the one taken by computer scientists who have developed tools
which take as an input a computer code to perform some high-level function,
\[
V = \texttt{FUNCTION}(U)
\]
and produce new routines which will either perform the standard sensitivity analysis
\[
\dot V = \texttt{FUNCTION}\_{\texttt D}(U, \dot U)
\]
with suffix $\_{\texttt D}$ for ``dot'', or its adjoint counterpart
\[
\bar U = \texttt{FUNCTION}\_{\texttt B}(U, \bar V)
\]
with suffix $\_{\texttt B}$ for ``bar''
\footnote{To learn more about Automatic Differentiation tools
see e.g., \texttt{www.autodiff.org}. }.
One particularly important theoretical result is that
the number of arithmetic operations in the adjoint routine $\texttt{FUNCTION}\_{\texttt B}$ is
at most a factor 4 greater than in $\texttt{FUNCTION}$ \cite{griewank}. As a result, it is
possible to show that the execution time of $\texttt{FUNCTION}\_{\texttt B}$ is bounded by approximately 4 times the cost of
execution of the original function $\texttt{FUNCTION}$.
Thus, one can obtain the sensitivity of a single output to
an unlimited number of inputs for little more work than the original computation.
While the application of such {\em automatic} AD tools to large, inhomogeneous
pricing software is challenging, the principles of AD can
be used as a programming paradigm to design the Forward or Adjoint counterpart of
any algorithm (possibly using automatic AD tools for the implementation of smaller, simpler
components). This is especially useful in the common situation where pricing codes use a
variety of libraries written in different languages, possibly linked dynamically.
These ideas will be discussed at length in Ref.~\cite{aad2}.
\section*{AAD and the Pathwise Derivative method for Correlation Risk}
In this paper, we consider options pricing problems that can be expressed as an expectation value
of the form
\begin{equation}\label{option}
V = \mathbb{E}_\mathbb{Q}\Big[P(X)\Big]~,
\end{equation}
where $X = (X_1,\ldots,X_N)^t$ represents the state vector of $N$ market factors
(e.g., stock prices, interest rates, foreign exchange pairs, default times etc.),
$P(X)$
is the (possibly discounted) payout function of a security contingent on their future realization,
and $\mathbb{Q} = \mathbb{Q}(X) $ represents
a risk neutral probability distribution \cite{HarrKreps} according to
which the components of $X$ are distributed.
Although the proposed method easily generalizes to other kinds of joint distributions, here
we consider an $N$-dimensional Gaussian copula as a model for the co-dependence between
the components of the state vector, namely a joint cumulative density function of the form
\begin{equation} \label{gausscopula}
\mathbb{Q}(X) = \Phi_N(\Phi^{-1}(M_1(X_1)),\ldots, \Phi^{-1}(M_N(X_N));\rho)
\end{equation}
where $\Phi_N(Z_1,\ldots,Z_N;\rho)$ is an $N$-dimensional multivariate Gaussian distribution
with zero mean, and a $N\times N$ positive semidefinite correlation
matrix $\rho$; $\Phi^{-1}$ is the inverse of the standard
normal cumulative distribution, and $M_i(X_i)$, $i=1,\ldots,N$, are the
marginal distributions of the underlying factors, typically implied from the market prices of
liquid securities.
The expectation value in (\ref{option}) can be estimated by means of
MC by sampling a number $N_{\rm MC}$ of random
replicas of the underlying state vector ${X}[1],\ldots,{X}[N_{\rm MC}]$,
according to the distribution $\mathbb{Q}({ X})$, and evaluating the payout $P({X})$
for each of them. This leads to the central limit theorem \cite{CLT} estimate of the option value $V$
as
\begin{equation}\label{mcexp}
V \simeq \frac{1}{N_{\rm MC}} \sum_{i_{\rm MC} = 1}^{N_{\rm MC}} P\left({X}[i_{\rm MC}]\right)
\end{equation}
with standard error $\Sigma / \sqrt{N_{\rm MC}}$,
where $\Sigma^2 = E_{\mathbb Q}[P\left({X}\right)^2]-E_{\mathbb Q}[P\left(X\right)]^2$ is the
variance of the sampled payout.
In the Gaussian model above the dependence between the underlying factors is determined by the
correlation of a set of jointly normal random variables $Z = (Z_1,\ldots,Z_N)^t$ distributed according
to $\Phi_N(Z_1,\ldots,Z_N;\rho)$. Each $Z_i$ is distributed according to a standard normal distribution
so that $\Phi(Z_i)$ is a uniform random variable in $(0,1)$ and $X_i = M_i^{-1}(\Phi(Z_i))$ is
distributed according to $M_i$. The sampling of the $N$ jointly normal random variables $(Z_1,\ldots,Z_N)$
is efficiently implemented by means of a Cholesky factorization of the correlation matrix. The
Cholesky factorization produces a lower triangular $N\times N$ matrix $C$ such that $\rho=CC^T$ so
that one can write $Z = C \tilde Z$ where $\tilde Z = (\tilde Z_1, \ldots, \tilde Z_N)^t$ is an
$N$-dimensional vector of independent standard normal random variables. These observations naturally translate into the
standard algorithm to generate MC samples of $X$ according to (\ref{gausscopula}), namely
\begin{itemize}
\item[Step 0] Generate a sample of $N$ independent standard normal variates, $\tilde Z = (\tilde Z_1, \ldots, \tilde Z_N)^t$.
\item[Step 1] Correlate the components of $\tilde Z$ by performing the matrix vector product $Z = C \tilde Z$.
\item[Step 2] Set $U_i = \Phi(Z_i)$, $i=1,\ldots, N$.
\item[Step 3] Set $X_i = M_i^{-1}(U_i)$, $i=1,\ldots, N$.
\item[Step 4] Compute the payout estimator $P(X_1,\ldots,X_N)$.
\end{itemize}
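Steps 0--4 above can be sketched in a few lines for $N=2$ (assumed toy instance: exponential marginals $M_i(x)=1-e^{-x}$, closed-form Cholesky factor of $\begin{psmallmatrix}1&\rho\\ \rho&1\end{psmallmatrix}$, and an arbitrary basket-style payout; none of these choices come from the paper). The checks confirm that $\Phi(Z_i)$ is uniform and that $Z_1, Z_2$ carry correlation $\rho$:

```python
import math
import random

# Gaussian-copula MC sampling, Steps 0-4, for N = 2 (toy instance).
random.seed(1)
rho = 0.6
C = [[1.0, 0.0], [rho, math.sqrt(1 - rho * rho)]]       # Cholesky of [[1,rho],[rho,1]]
Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
Minv = lambda u: -math.log(1 - u)                       # inverse exponential marginal

n = 200_000
su = sz1 = sz2 = sz12 = payoff = 0.0
for _ in range(n):
    zt0, zt1 = random.gauss(0, 1), random.gauss(0, 1)   # Step 0: independent normals
    z0, z1 = zt0, C[1][0] * zt0 + C[1][1] * zt1         # Step 1: Z = C Ztilde
    u0, u1 = Phi(z0), Phi(z1)                           # Step 2: U_i = Phi(Z_i)
    x0, x1 = Minv(u0), Minv(u1)                         # Step 3: X_i = M_i^{-1}(U_i)
    payoff += max(x0 + x1 - 2.0, 0.0) / n               # Step 4: toy basket payout
    su += u0 / n
    sz1 += z0 / n; sz2 += z1 / n; sz12 += z0 * z1 / n

assert abs(su - 0.5) < 0.01                 # Phi(Z_i) is uniform on (0,1)
assert abs(sz12 - sz1 * sz2 - rho) < 0.02   # Z_1, Z_2 have correlation rho
assert payoff > 0.0
```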
Correlation Risk can be obtained in a highly efficient way by implementing the so-called Pathwise Derivative
method \cite{BrodGlass} according to the principles of AAD \cite{aad1,aad2}.
It is convenient to first express the expectation value as being over $\mathbb{P}(\tilde Z)$,
the distribution of independent $\tilde Z$ used in the MC simulation, so that
\begin{equation}
V =
\mathbb{E_Q}\Big[P\left({X}\right)\Big] =
\mathbb{E_P}\Big[P\left({X(\tilde Z)}\right)\Big].
\label{option2}
\end{equation}
The point of this subtle change is that $\mathbb{P}(\tilde Z)$ does not depend on the correlation matrix
$\rho$, whereas $\mathbb{Q}(X)$ does.
The Pathwise Derivative method allows the calculation of the sensitivities of the option price $V$ (\ref{option2})
with respect to a set of $N_\theta$ parameters $\theta = (\theta_1,\ldots, \theta_{N_\theta})$, say
\begin{equation}\label{sens}
\frac{\partial V(\theta)}{\partial\theta_k} =
\frac{\partial}{\partial \theta_k} \mathbb{E_P}\Big[P\left({X}\right)\Big]~,
\end{equation}
by defining appropriate estimators, say $\bar \theta_k(X[i_{MC}])$, that can be sampled simultaneously
in a single MC simulation. This can be achieved by observing that whenever the payout function is regular enough (e.g., Lipschitz-continuous, see
Ref.~\cite{GlassMCbook}), and the distribution $\mathbb{P}(\tilde Z)$ does not depend on $\theta$,
one can rewrite Eq.~(\ref{sens}) by taking the derivative inside the expectation value, as
\begin{equation}\label{sens2}
\frac{\partial V(\theta)}{\partial\theta_k} = \mathbb{E_P}\Big[\frac{\partial P \left({X}\right) }{\partial \theta_k} \Big]~.
\end{equation}
The calculation of Eq.~(\ref{sens2}) can be performed by applying the chain rule, and
computing the average value of the so-called Pathwise Derivative estimator
\begin{equation}\label{pwd}
\frac{\partial P(X)}{\partial \theta_k} =
\sum_{i=1}^{N} \frac{\partial P(X)}{\partial X_i}
\times\frac{\partial X_i}{\partial \theta_k} ~.
\end{equation}
The standard pathwise implementation corresponds to a Forward mode sensitivity analysis.
Applied to steps 1-4 (since the normal variates $\tilde Z$ do not depend on any
input parameters), this gives for each sensitivity:
\begin{itemize}
\item[Step 1f] Calculate $\dot{Z} = \dot{C}\, \tilde Z$ where $\dot{C}$ is the sensitivity of $C$ with respect to a given entry of the correlation matrix.
\item[Step 2f] Set $\dot{U}_i = \phi(Z_i)\, \dot{Z}_i$, $i=1,\ldots, N$.
\item[Step 3f] Set $\dot{X}_i = \dot{U}_i \,/\, m_i(X_i)$, $i=1,\ldots, N$.
\item[Step 4f] Calculate $\displaystyle \dot{P} = \sum_{i=1}^N \frac{\partial P}{\partial X_i}\,\dot{X}_i$~.
\end{itemize}
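Steps 1-4 and 1f-4f above can be sketched per path for a two-name toy example. The exponential marginals (hypothetical rate $\lambda$) and the linear payoff $P=X_1+X_2$ are illustrative assumptions, not the authors' setup; for a $2\times 2$ correlation matrix the Cholesky factor and its $\rho$-derivative are available in closed form.

```python
import math

def Phi(x):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):  # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

LAM = 0.1  # hypothetical exponential-marginal rate: M(x) = 1 - exp(-LAM x)

def path_value_and_sens(z1, z2, rho):
    """One path of Steps 1-4 plus forward steps 1f-4f, returning the
    payoff P = X1 + X2 and its pathwise sensitivity dP/drho."""
    # Step 0: closed-form Cholesky factor of [[1, rho], [rho, 1]]
    c21, c22 = rho, math.sqrt(1.0 - rho * rho)
    d21, d22 = 1.0, -rho / math.sqrt(1.0 - rho * rho)  # dC/drho entries
    # Steps 1-4
    Z = (z1, c21 * z1 + c22 * z2)
    U = (Phi(Z[0]), Phi(Z[1]))
    X = tuple(-math.log(1.0 - u) / LAM for u in U)     # X = M^{-1}(U)
    P = X[0] + X[1]
    # Steps 1f-4f (dP/dX_i = 1 for this payoff)
    Zd = (0.0, d21 * z1 + d22 * z2)
    Ud = (phi(Z[0]) * Zd[0], phi(Z[1]) * Zd[1])
    Xd = tuple(Ud[i] / (LAM * math.exp(-LAM * X[i])) for i in (0, 1))
    return P, Xd[0] + Xd[1]
```

On a fixed path the chain rule is exact, so the pathwise sensitivity agrees with a central finite difference in $\rho$ to truncation error.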
Here $\phi(x) \equiv {\partial \Phi(x)}/{\partial x}$ is the standard normal probability density function, and
$m_i(x) \equiv \partial M_i(x)/\partial x $ is the probability density function associated with the marginal $M_i(x)$ of the
$i$-th random factor.
As anticipated, the computational cost of the Forward Pathwise Derivative method scales
linearly with the number of sensitivities computed $N_\theta$, i.e., the same scaling of finite difference
approximations of the derivatives $\partial_{\theta_k} \mathbb{E_Q}[P(X)]$. As a result, in many situations,
typically involving complex payouts, the standard implementation of the Pathwise Derivative method
offers a limited computational advantage with respect to Bumping \cite{aad1}.
In contrast, AAD allows in general a much more efficient implementation of the Pathwise Derivative estimators (\ref{pwd}).
Indeed, as an immediate consequence of the computational complexity results introduced in the previous Section,
it can be shown \cite{aad2} that AAD allows the simultaneous calculation of the Pathwise Derivative
estimators for {\em any} number of sensitivities at a computational cost which is a small multiple (of order 4) of the
cost of evaluating the original payout estimator. As a result, one can calculate the MC expectation of an arbitrarily
large number of sensitivities at a {\em small fixed cost}.
\begin{figure}
\begin{center}
\includegraphics[width=80mm,]{Fig1.pdf}
\end{center}
\vspace{-4mm} \caption{\label{FigCholesky} Adjoint of the Cholesky factorization. The Forward sweep is an exact
replica of the original factorization.}
\end{figure}
Although AAD can be applied for virtually any model and payout function of interest
in Computational Finance -- including path-dependent and Bermudan options --
here we will concentrate on the calculation of correlation sensitivities in a Gaussian copula
framework. In general, for the reasons mentioned in the previous Section, the AAD implementation of the Pathwise derivative method contains a {\em forward sweep}
-- reproducing the steps followed in the calculation of the estimator of the option value $P(X)$ -- and a {\em backward sweep}.
As a result, the adjoint algorithm consists of adjoint counterparts for each of the Steps 1-4
above executed in reverse order, plus the adjoint of the Cholesky factorization.
The first step consists in the evaluation of the adjoint of step 4 of the Forward sweep, calculating the
derivatives of the Payout with respect to the components of the state vector
\begin{equation}
\bar X_k = \frac{\partial P(X)}{\partial X_k}~,
\end{equation}
with $k=1,\ldots,N$. These derivatives can be calculated efficiently using AAD, as discussed in Ref.~\cite{aad1}.
In turn, the adjoint of Step 3 of the Forward sweep is given by
\begin{equation}
\bar U_k = \bar M^{-1}_k(U_k, \bar X_k) = \frac{\bar X_k}{m_k(X_k)}~,
\end{equation}
for $k =1,\ldots, N$.
The vector $\bar U$ is then mapped into the adjoint of the correlated standard normal variables $\bar Z$ through the
counterpart of Step 2
\begin{equation}
\bar Z_k = {\bar \Phi(Z_k, \bar U_k)} = {\bar U_k}\,{\phi(Z_k)}~.
\end{equation}
The adjoint of Step 1 performing the matrix vector product $Z = C \tilde Z$ reads
\begin{equation}
\bar C_{i,j} = \sum_{k=1}^N \frac{\partial Z_k}{\partial C_{i,j}}\, \bar Z_k = \tilde Z_j\, \bar Z_i
\end{equation}
or $\bar C = \bar Z \tilde Z^t$.
By applying the chain rule, it is straightforward to realize that the adjoint $\bar w$ of each intermediate variable $w$
in the succession of Steps 0-4 represents the derivative of the Payout estimator with respect to $w$, or $\bar w = \partial P / \partial w$.
In particular the quantities $\bar C_{i,j}$ calculated at the end of the adjoint of Step 1 represent the derivatives of
the payout estimator with respect to the entries of the triangular Cholesky matrix
$C$, namely the pathwise estimator (\ref{pwd}) with $\theta_k = C_{i,j}$.
In summary, the AAD implementation of the Pathwise Derivative Estimator consists of Step 1-4 described above (forward sweep)
plus the following steps of the backward sweep:
\begin{itemize}
\item[Step 5] Evaluate the Payout adjoint $\bar X_k = \partial P/\partial X_k$, for $k=1,\ldots,N$.
\item[Step 6] Calculate $\bar U_k = {\bar X_k}/{m_k(M^{-1}_k(U_k))}$, $k=1,\ldots,N$.
\item[Step 7] Calculate $\bar Z_k = {\bar U_k}{\phi(Z_k)} $, $k=1,\ldots,N$.
\item[Step 8] Calculate $\bar C = \bar Z \tilde Z^t$.
\end{itemize}
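The two sweeps can be combined in a compact per-path sketch (again a toy two-name setup with exponential marginals of hypothetical rate $\lambda$ and payoff $P=\sum_i X_i$; illustrative assumptions, not the authors' code):

```python
import math

def Phi(x): return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
def phi(x): return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

LAM = 0.1  # hypothetical exponential-marginal rate

def payout_and_Cbar(C, zt):
    """Forward sweep (Steps 1-4) for the illustrative payoff P = sum(X),
    then backward sweep (Steps 5-8); returns P and Cbar[i][j] = dP/dC[i][j]."""
    N = len(zt)
    Z = [sum(C[i][j] * zt[j] for j in range(N)) for i in range(N)]  # Step 1
    U = [Phi(z) for z in Z]                                         # Step 2
    X = [-math.log(1.0 - u) / LAM for u in U]                       # Step 3
    P = sum(X)                                                      # Step 4
    Xb = [1.0] * N                        # Step 5: dP/dX_k for P = sum(X)
    Ub = [Xb[k] / (LAM * math.exp(-LAM * X[k])) for k in range(N)]  # Step 6
    Zb = [Ub[k] * phi(Z[k]) for k in range(N)]                      # Step 7
    Cb = [[Zb[i] * zt[j] for j in range(N)] for i in range(N)]      # Step 8
    return P, Cb
```

Each entry of the returned $\bar C$ is the pathwise estimator of Eq.~(\ref{pwd}) with $\theta_k = C_{i,j}$, and can be checked against a finite-difference bump of a single Cholesky entry on the same path.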
At this point in the calculation, there is an interesting complication.
The natural AAD approach would average the values of $\bar C$ from each of the
MC paths. This average $\bar C$ can be converted into derivatives
with respect to the entries of the correlation matrix $\rho$ by means of the adjoint of the Cholesky factorization \cite{Smith}, namely
a function of the form
\begin{equation}
\bar \rho = \texttt{CHOLESKY}\_\texttt{B}(\rho, \bar C)
\end{equation}
providing
\begin{equation}
\label{rhobar}
\bar \rho_{i,j} = \sum_{l,m=1}^N \frac{\partial C_{l,m}}{\partial \rho_{i,j}}\bar C_{l,m}~.
\end{equation}
The pseudocode for the adjoint Cholesky factorization is given in Fig.~\ref{FigCholesky}.
By inspecting the structure of the pseudocode it appears clear that its computational cost is just a small multiple (of order 2)
of the cost of evaluating the original factorization. Indeed, the adjoint algorithm essentially contains
the original Cholesky factorization plus a backward sweep with the same complexity and a similar number of operations.
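For completeness, the same map $\bar C \mapsto \bar \rho$ can also be written in closed matrix form, using the standard reverse-mode identity for the Cholesky factorization. This is a sketch (numpy assumed), not the authors' implementation; the loop-level pseudocode of Fig.~\ref{FigCholesky} computes the same quantity in place.

```python
import numpy as np

def cholesky_b(L, Lbar):
    """Reverse-mode (adjoint) Cholesky in matrix form: given A = L L^T and
    Lbar = dP/dL (lower triangular), return Abar = dP/dA as a symmetric
    matrix, so that trace(Lbar^T dL) = trace(Abar dA) for symmetric dA."""
    P = np.tril(L.T @ Lbar)             # lower-triangular part of L^T Lbar
    P[np.diag_indices_from(P)] *= 0.5   # ... with the diagonal halved
    S = 0.5 * (P + P.T)
    Linv = np.linalg.inv(L)             # fine for a sketch; use triangular
                                        # solves in production code
    return Linv.T @ S @ Linv
```

A quick directional-derivative check against finite differences confirms the identity on a random symmetric positive-definite matrix.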
The complication with this implementation is that it gives an estimate for the correlation risk, but it does not provide a corresponding
confidence interval. An alternative approach would be to convert $\bar C$ to $\bar \rho$ for each individual path,
and then compute the average and standard deviation of $\bar \rho$ in the usual way. However, the numerical results
will show that this is rather costly. An excellent compromise between these two extremes is to divide the $N_{MC}$ paths
into $N_b$ 'bins' of equal size. For each bin, an average value of $\bar C$ is computed and converted into
a corresponding value for $\bar \rho$. These $N_b$ estimates for $\bar \rho$ can then be combined in the usual way
to form an overall estimate and confidence interval for the correlation risk.
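The binning logic itself is a few lines; in this sketch `convert` stands in for the per-bin conversion $\bar C \mapsto \bar\rho$ (i.e., $\texttt{CHOLESKY}\_\texttt{B}$), and the function name is illustrative.

```python
import math, statistics

def binned_mean_and_stderr(per_path, n_bins, convert):
    """Average per-path adjoints within each bin, apply the expensive
    conversion once per bin, and combine the bin estimates into an
    overall mean and standard error."""
    size = len(per_path) // n_bins
    ests = [convert(sum(per_path[k * size:(k + 1) * size]) / size)
            for k in range(n_bins)]
    mean = sum(ests) / n_bins
    stderr = statistics.stdev(ests) / math.sqrt(n_bins)
    return mean, stderr
```

Because the conversion in Eq.~(\ref{rhobar}) is linear in $\bar C$, averaging within a bin before converting yields the same mean as converting path by path, while paying the conversion cost only $N_b$ times.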
The computational benefits can be understood by considering the computational costs for both the standard evaluation
and the adjoint Pathwise Derivative calculation. In the standard evaluation, the cost of the Cholesky factorization
is $O(N^3)$, and the cost of the MC sampling is $O(N_{MC} N^2)$, so the total cost is $O(N^3 + N_{MC} N^2)$.
Since $N_{MC}$ is always much greater than $N$, the cost of the Cholesky factorization is usually negligible.
The cost of the adjoint steps in the MC sampling is also $O(N_{MC} N^2)$, and when using $N_b$ bins the
cost of the adjoint Cholesky factorization is $O(N_b N^3)$. To obtain an accurate confidence interval, but with
the cost of the Cholesky factorisation being negligible, requires that $N_b$ is chosen so that
$1 \ll N_b \ll N_{MC} / N$. Without binning, i.e., using $N_b = N_{MC}$, the cost to calculate the average
of the estimators (\ref{rhobar}) is $O(N_{MC} N^3)$, and so the relative
cost compared to the evaluation of the option value is $O(N)$.
\begin{figure}
\begin{center}
\hspace{-2mm}
\includegraphics[width=80mm,angle=0]{Fig2.pdf}
\end{center}
\vspace{-5mm} \caption{\label{cpuratio} Ratios of the CPU time required for the calculation of the
option value, and correlation Greeks, and the CPU time spent for the computation of the value alone, as functions of the
number of names in the basket, for $N_{MC} = 10^5$. Symbols: Bumping (one-sided finite differences) (triangles), AAD
without binning (i.e.~$N_b=N_{MC}$) (stars), AAD with binning ($N_b=20$)
(empty circles). Lines are guides for the eye, and the MC uncertainties are smaller than the symbol sizes. }
\end{figure}
The binning procedure described above can be generalized to any situation in which the standard
solution procedure involves a common preprocessing step before any of the path calculations are
performed. Other examples would include calibration of model parameters to market prices,
or a cubic spline construction of a local volatility surface. In each case, there is a linear
relationship between the forward mode sensitivities before and after the preprocessing step, and
therefore a linear relationship between the corresponding adjoint sensitivities.
The algorithm described above can be applied whenever the option pricing problem can
be formulated as an expectation value over a set of random factors whose distribution
is modelled as a Gaussian copula. This includes in general a variety of Basket Options
common across all asset classes, or structured swaps whose coupon depends on a specific observation
of a set of correlated rates. In addition, the same ideas can be extended to the simulation of correlated diffusion
processes \cite{aad2}.
\section*{Numerical Tests}
As a numerical test ground we consider the case of
Basket Default Options \cite{ChenGlass}. In this context, the random factors $X_i$ represent the default
time $\tau_i$ of the $i$-th name, e.g., the time a specific company in a reference pool of $N$ names
fails to pay one of its liabilities as specified by the terms of the contract priced.
In particular, in a $n$-th to default Basket Default Swap
one party (protection buyer) makes regular payments to a counterparty (protection seller)
at times $T_1,\ldots,T_M \leq T$ provided that fewer than $n$ default events among the components of the basket are observed before time $T_M$.
On the other hand, if $n$ defaults occur before time $T$, the regular payments cease and the
protection seller makes a payment to the buyer of $(1-R_i)$ per unit notional, where $R_i$ is the
normalized recovery rate of the $i$-th asset.
The value at time zero of the Basket Default Swap on a given realization of the default
times $\tau_1,\ldots,\tau_N$, i.e., the Payout function, can be therefore expressed as
\begin{equation}
P(\tau_1,\ldots,\tau_N) = P_{prot}(\tau_1,\ldots,\tau_N)-P_{prem}(\tau_1,\ldots,\tau_N)
\end{equation}
i.e., as the difference between the so-called protection and premium legs.
The protection leg is given by
\begin{equation}\label{prem}
P_{prot}(\tau_1,\ldots,\tau_N) = (1 - R_n) D(\tau) \mathbb{I}(\tau \leq T)~,
\end{equation}
where $R_n$ and $\tau$ are the recovery rate and default time of the $n$-th to default, respectively,
$D(t)$ is the discount factor for the interval $[0,t]$ (here we assume for simplicity
uncorrelated default times and interest rates), and ${\mathbb I}(\tau \leq T)$ is
the indicator function of the event that the $n$-th default occurs before $T$.
The premium leg reads instead, neglecting for simplicity any accrued payment,
\begin{equation}\label{prot}
P_{prem}(\tau_1,\ldots,\tau_N) = \sum_{k=1}^{L(\tau)} s_k D(T_k)
\end{equation}
where $L(\tau) = \max \{k \in \{1,\ldots, M\} : T_k < \tau \}$, and $s_k$ is the premium
payment (per unit notional) at time $T_k$.
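The two legs translate directly into a per-path payout function. This sketch makes simplifying assumptions beyond the text (a flat discount rate $r$ and a single spread per period; names are illustrative):

```python
import math

def nth_to_default_payout(taus, R, n, pay_dates, spread, r):
    """Payout of an n-th to default Basket Default Swap on one realization
    of the default times: protection leg minus premium leg."""
    D = lambda t: math.exp(-r * t)                  # flat discount factor
    order = sorted(range(len(taus)), key=lambda i: taus[i])
    idx = order[n - 1]                              # name of the n-th default
    tau = taus[idx]                                 # n-th default time
    T = pay_dates[-1]                               # maturity T = T_M
    prot = (1.0 - R[idx]) * D(tau) if tau <= T else 0.0
    prem = sum(spread * D(Tk) for Tk in pay_dates if Tk < tau)
    return prot - prem
```

For example, with zero rates, three names defaulting at times $0.5, 1.5, 3.0$, recovery $0.4$, $n=2$, annual payment dates up to $T=4$ and a spread of $0.01$, the second default at $\tau=1.5$ triggers a protection payment of $0.6$ against a single premium payment of $0.01$.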
In order to apply the Pathwise Derivative method to the payout above, the indicator functions in (\ref{prot}) and (\ref{prem}),
need to be regularized \cite{GlassMCbook,ChenGlass}. One simple and practical way of doing that is to replace the indicator functions with their
smoothed counterpart, at the price of introducing a small amount of bias in the Greek estimators. For the problem at hand,
as it is also generally the case, such bias can be easily reduced to be smaller than the statistical errors that can be obtained
for any realistic number of MC iterations $N_{MC}$ (for a more complete discussion of the topic of payout regularization see Refs.~\cite{aad1,aad2,Gilesmcqmc}).
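One common smoothing choice (one possibility, not necessarily the authors') replaces the indicator with a normal CDF of bandwidth $h$, which is smooth, monotone, and converges to the indicator as $h \to 0$:

```python
import math

def smooth_indicator(tau, T, h):
    # I(tau <= T) replaced by Phi((T - tau)/h); bias is O(h) and can be
    # driven below the MC statistical error by shrinking h
    return 0.5 * (1.0 + math.erf((T - tau) / (h * math.sqrt(2.0))))
```

The derivative of this smoothed indicator with respect to $\tau$ is then well defined everywhere, which is what the Pathwise Derivative estimator requires.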
The remarkable computational efficiency of AAD is illustrated in Fig.~\ref{cpuratio} for the
Second to Default Swap. Here we plot the ratio of the CPU time required for the calculation of the
value of the option, and all its pairwise correlation sensitivities, and the CPU time spent for the computation
of the value alone, as functions of the number of names in the basket. As expected, for standard finite-difference
estimators, this ratio increases quadratically with the number of names in the basket.
Already for medium-sized baskets ($N\simeq 20$), the cost associated with Bumping is over
100 times that of AAD.
On closer inspection, however (see the inset of Fig.~\ref{cpuratio}), the relative cost of AAD without binning
is $O(N)$, for the reasons explained earlier.
However, when using $N_b=20$ bins the cost of the adjoint Cholesky computation is negligible and the
numerical results show that all the Correlation Greeks can be obtained with a mere 70\% overhead compared to
the calculation of the value of the option. This results in over 2 orders of magnitude savings in computational
time for a basket of over 40 Names.
\newpage
\section*{Conclusions}
In conclusion, we have shown how Adjoint Algorithmic Differentiation allows an extremely efficient calculation of correlation Risk in Monte Carlo.
The proposed method relies on using the Adjoint mode of Algorithmic Differentiation to organize the calculation of the Pathwise Derivative estimator, and to
implement the adjoint counterpart of the Cholesky factorization. For any number of underlying assets or names in a portfolio, the proposed method allows the calculation of
the complete pairwise correlation Risk at a computational cost which is at most 4 times the cost of calculating the option value itself, resulting in remarkable computational savings
with respect to Bumping.
We illustrated the method for a Gaussian copula-based Monte Carlo computation, and we tested it numerically
for Portfolio Default Options. In this application, the proposed method is 100 times faster than Bumping for 20 names, and over 1000 times faster for 40 names. The method generalizes
immediately to other kinds of elliptic copulas, and to a general diffusive setting. In fact, it is a specific instance of a general AAD approach to the implementation of the
Pathwise Derivative method that will be discussed in a forthcoming publication \cite{aad2}.
{\bf Acknowledgments:}
It is a pleasure to acknowledge useful discussions with Alex Prideaux, Adam and Matthew Peacock,
Jacky Lee and David Shorthouse. Valuable help provided by Mark Bowles and Anca Vacarescu in the
initial stages of this
project is also gratefully acknowledged.
The opinions and views expressed in this paper are
uniquely those of the authors, and do not necessarily represent those of Credit Suisse Group.
| {
"timestamp": "2010-04-13T02:02:04",
"yymm": "1004",
"arxiv_id": "1004.1855",
"language": "en",
"url": "https://arxiv.org/abs/1004.1855",
"abstract": "We show how Adjoint Algorithmic Differentiation (AAD) allows an extremely efficient calculation of correlation Risk of option prices computed with Monte Carlo simulations. A key point in the construction is the use of binning to simultaneously achieve computational efficiency and accurate confidence intervals. We illustrate the method for a copula-based Monte Carlo computation of claims written on a basket of underlying assets, and we test it numerically for Portfolio Default Options. For any number of underlying assets or names in a portfolio, the sensitivities of the option price with respect to all the pairwise correlations is obtained at a computational cost which is at most 4 times the cost of calculating the option value itself. For typical applications, this results in computational savings of several order of magnitudes with respect to standard methods.",
"subjects": "Computational Finance (q-fin.CP)",
"title": "Fast Correlation Greeks by Adjoint Algorithmic Differentiation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9693242000616578,
"lm_q2_score": 0.7310585786300049,
"lm_q1q2_score": 0.7086327719287421
} |
https://arxiv.org/abs/1511.02838 | Boundness of $b_2$ for hyperkähler manifolds with vanishing odd-Betti numbers | We prove that $b_2$ is bounded for hyperkähler manifolds with vanishing odd-Betti numbers. The explicit upper boundary is conjectured. Following the method described by Sawon we prove that $b_2$ is bounded in dimension eight and ten in the case of vanishing odd-Betti numbers by 24 and 25 respectively. | \section{Introduction}
A Riemannian manifold $(M, g)$ is called hyperk\"ahler if it admits a
triple of complex structures $I, J, K$ satisfying the quaternionic relations and
K\"ahler with respect to $g$. A hyperk\"ahler
manifold is always holomorphically symplectic. By the Yau Theorem \cite{Y}, a
hyperk\"ahler structure exists on a compact complex manifold
if and only if it is K\"ahler and holomorphically symplectic.
\hfill
\begin{definition}
A compact hyperk\"ahler manifold $M$ is called \textit{simple} if $\pi_1(M)= 0$, $H^{2,0} (M)
=\mathbb{C}$.
\end{definition}
\hfill
By the Bogomolov decomposition theorem \cite{B}, any hyperk\"ahler
manifold admits a finite covering which is a product of a torus and several simple hyperk\"ahler manifolds. Huybrechts \cite{H}
has proved finiteness of the number of deformation classes of holomorphic
symplectic structures on each smooth manifold. In most dimensions we only know two examples of simple hyperk\"ahler manifolds, due to Beauville \cite{Bea}. These two infinite series are the
Hilbert schemes of points on $K3$ surfaces and the generalized Kummer varieties \cite{Bea}. Besides these,
two sporadic examples of O'Grady in dimensions six and ten are known \cite{O1, O2}. It is known \cite{H} that there are only finitely many families with a given second cohomology lattice, hence bounds on the second Betti number provide restrictions on the number of deformation classes of hyperk\"ahler manifolds.
There is the following conjecture of Beauville \cite{Bea1}.
\hfill
\begin{conjecture}
The number of
deformation types of compact irreducible hyperk\"ahler manifolds is finite in
any dimension (at least for given $b_2$).
\end{conjecture}
\hfill
In complex dimension four, Guan \cite{G} proved that the second Betti number of a simple compact hyperk\"ahler manifold
is bounded above by 23. Guan also proved bounds for $b_3$
using Rozansky-Witten invariants \cite{HS}. An inequality coming from Rozansky-Witten invariants can also be obtained in dimension six \cite{K}.
In dimension six the bound for $b_2$ has been obtained by Sawon
\cite{S}. In this paper we generalize Sawon's result on the Hodge diamond structure to
dimensions eight and ten.
\hfill
{\bf Acknowledgments.} The author is grateful to his supervisor Misha Verbitsky
for discussions; he also wishes to thank Justin Sawon, who kindly described his method, and Fedor Bogomolov for some ideas. The author would like to
thank Evgeny Shinder and Tom Bridgeland for the possibility to give a talk on
this subject at Sheffield University.
\section{Preliminaries}
In this Section we recall the main properties of the cohomology groups of hyperk\"ahler manifolds.
There is the following theorem of Verbitsky \cite{V1}.
\hfill
\begin{theorem}\label{verb-cohomol}
Let $M$ be an irreducible hyperk\"ahler manifold of complex dimension
$2n$ and let $SH^2$($M$,$\mathbb{C}$)$\subset H^{\ast}(M,\mathbb{C})$ be the
subalgebra generated by $H^2(M,\mathbb{C})$. Then $SH^2(M,\mathbb{C})
= S^{\ast}H^2(M,\mathbb{C})/\langle \alpha^{n + 1} \mid q
\left( \alpha \right) = 0 \rangle$, where $q$ is the Beauville-Bogomolov-Fujiki form.
\end{theorem}
\hfill
The inclusion $S^n H^2 \left( M \right) \hookrightarrow H^{2 n}(M)$ follows from this Theorem.
\hfill
It is known that in the hyperk\"ahler case there is an action of the $\mathfrak{so}(5)$-algebra \cite{V1},
and moreover an action of $\mathfrak{so} ( b_2 + 2,\mathbb{C})$ described by
Looijenga and Lunts \cite{LL}, and Verbitsky \cite{V2}. By these results one can decompose
$H^{even}(M,\mathbb{C})$ into irreducible representations for this
$\mathfrak{so}(b_2 + 2,\mathbb{C})$-action. The Hodge diamond is the projection onto a
plane of the (higher-dimensional) weight lattice of $\mathfrak{so}(b_2 + 2,\mathbb{C})$,
so we can figure out the highest weights, which lie in an octant of the Hodge diamond.
These irreducible representations are isomorphic to $\Lambda^{i}\mathbb{C}^{b_2+2}$. Thus, their dimensions are polynomial in $b_2$. The contribution of $\Lambda^{i}\mathbb{C}^{b_2+2}$ to each even cohomology group also has dimension polynomial in $b_2$.
\hfill
In the early 1990s, Salamon \cite{Sa} used the hyperk\"ahler symmetries of Hodge numbers and an expression for the Euler
characteristic to prove the following equation for the Betti
numbers:
\[ n b_{2 n} = 2 \sum_{i = 1}^{2 n} \left( - 1 \right)^i \left( 3 i^2
- n \right) b_{2 n - i} . \]
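As a sanity check (ours, not in the original), one can verify this relation on the Hilbert scheme of two points on a $K3$ surface, whose Betti numbers $(1,0,23,0,276,0,23,0,1)$ are classical:

```python
# Betti numbers b_0, ..., b_8 of Hilb^2(K3), complex dimension 2n = 4
b = [1, 0, 23, 0, 276, 0, 23, 0, 1]
n = 2

# Salamon's relation: n b_{2n} = 2 sum_{i=1}^{2n} (-1)^i (3 i^2 - n) b_{2n-i}
lhs = n * b[2 * n]
rhs = 2 * sum((-1) ** i * (3 * i * i - n) * b[2 * n - i]
              for i in range(1, 2 * n + 1))
```

Both sides evaluate to $2 \cdot 276 = 2(10 \cdot 23 + 46 \cdot 1) = 552$.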
\section{Boundary conditions for $b_2$}
In this Section we prove the following
\hfill
\begin{theorem}\label{boundness}
Let $M$ be a hyperk\"ahler manifold of complex
dimension $2n$ with vanishing odd-Betti numbers. Then the second Betti
number is bounded.
\end{theorem}
\hfill
\begin{proof}
Consider the $2k$-th Betti number. There is an inclusion of $\Sym^k(H^2)$ into $H^{2k}(M)$. Thus, its contribution is a polynomial of degree $k$ in $b_2$. There are also contributions of several irreducible $\mathfrak{so}(b_2+2,\mathbb{C})$-modules to $H^{2k}(M)$. The dimension of each of them is polynomial in $b_2$, and the contribution of $\Lambda^{i}\mathbb{C}^{b_2+2}$ to $H^{2k}(M)$ is a polynomial of degree at most $k-1$. Recall that the highest weights lie in an octant of the Hodge diamond; denote them by $c, d$, etc. For instance, $c$ is the dimension of the primitive part $H^{3,1}_{prim}$.
Then for all even Betti numbers one can write
\[ b_{2 k} = \frac{1}{k !} \prod^{i = k - 1}_{i = 0}
\left( b_2 + i \right) + P_k \left( b_2, c, d, e, f, \ldots
\right), \]
where the first term corresponds to the dimension of the symmetric power, and the second one to all contributions of irreducible $\mathfrak{so}(b_2+2,\mathbb{C})$-modules to $H^{2k}(M)$.
From Salamon's equation
\[ n b_{2 n} = 2 \sum_{i = 1}^{2 n} \left( - 1 \right)^i \left( 3 i^2
- n \right) b_{2 n - i} . \]
using the vanishing of odd Betti numbers we obtain the following
\[ \frac{1}{\left( n - 1 \right) !} \prod^{i = n - 1}_{i = 0} \left( b_2 + i
\right) + P_n \left( b_2, c, d, e, f, \ldots \right) =\]
\[= \sum^{j =
2 n}_{j = 1, j \tmop{even}} \left[ \left( 3 j^2 - n \right)
\frac{1}{\left( n - j / 2 \right) !} \prod^{i = n - j / 2 - 1}_{i = 0}
\left( b_2 + i \right) + P_j \left( b_2, c, d, e, f, \ldots
\right) \right] . \]
Rearranging terms in the equation above (we put the terms corresponding to contributions of symmetric powers of $H^2$ on the left-hand side and all others on the right-hand side):
\begin{eqnarray*}
- \frac{1}{\left( n - 1 \right) !} \prod^{i = n - 1}_{i = 0} \left( b_2 +
i \right) + \sum^{j = 2 n}_{j = 1, j \tmop{even}} \left( 3 j^2 -
n \right) \frac{1}{\left( n - j / 2 \right) !} \prod^{i = n - j / 2 -
1}_{i = 0} \left( b_2 + i \right) =\\= P_n \left( b_2, c, d, e, f,
\ldots \right)
- \sum^{j = 2 n}_{j = 1, j \tmop{even}} P_j \left( b_2, c, d, e, f, \ldots \right) . & &
\end{eqnarray*}
Denote by $b_2^l$ the maximal root (in $b_2$) of the
polynomial $P(b_2, n)$ on the left-hand side. Then the polynomial on the left-hand side
is negative for $b_2 \geqslant b_2^l$. This polynomial
has a positive root since it is positive for $b_2 = 0$ and its leading coefficient in $b_2$ is negative.
The numbers $c, d, e, f, \ldots$ are positive since they are the numbers of
generators of irreducible $\mathfrak{so}(b_2 + 2$, $\mathbb{C}$)-modules. The leading
coefficient of the polynomial $Q_n \left( b_2, c, d, e, f, \ldots
\right)$ on the right-hand side is positive. Thus, this polynomial is
positive for $b_2 \geqslant b_2^r$, where $b_2^r$ is the maximal root of $Q_n \left( b_2, c, d, e, f, \ldots
\right)$.\\
Hence we get
$b_2 \leqslant \max \left[ b_2^l, b_2^r \right]$. If the
right-hand-side polynomial $Q_n \left( b_2, c, d, e, f, \ldots
\right)$ has no roots, then $b_2 \leqslant b_2^l$.
\end{proof}
\hfill
\begin{conjecture}\label{poly-conj}
The second Betti number $b_2$ is bounded by the maximal root
of the following polynomial
\[ P \left( b_2, n \right) = - \frac{1}{\left( n - 1 \right) !} \prod^{i = n -
1}_{i = 0} \left( b_2 + i \right) + \sum^{j = 2 n}_{j = 1, j
\tmop{even}} \left( 3 j^2 - n \right) \frac{1}{\left( n - j / 2 \right) !}
\prod^{i = n - j / 2 - 1}_{i = 0} \left( b_2 + i \right), \]
which is denoted as $b_2^l$.
\end{conjecture}
\hfill
To prove this conjecture one needs to check that $Q_n \left( b_2, c, d, e, f, \ldots \right)$ is positive for $b_2 \geq b_2^l$. The author does not know an explicit proof of Conjecture \ref{poly-conj}.
\hfill
\begin{proposition}
Under the assumptions of Theorem \ref{boundness} we have
\[P \left( b_2, n \right) = - \frac{1}{ n !}
\left( \prod^{i = n -
1}_{i = 3} \left( b_2 + i \right) \right) \cdot (b_2 + 2n) \cdot (b_2^2-21b_2+2-24n),
\]
and
\[b_2^l = \frac{21+\sqrt{433+96n}}{2}. \]
\end{proposition}
\begin{proof}
We can see that the last four terms of $P \left( b_2, n \right)$ are divisible by $(b_2+3)$, and the quotient is
\[\frac{1}{3}\left((12n^2-73n+108)b_2^2+3(12n^2-49n+48)b_2+12n(n-1)\right).
\]
Then we claim by induction (omitting explicit calculations) that the sum of the last $k$ terms of $P \left( b_2, n \right)$ is
\[2 \frac{1}{(k-1)!} \left( \prod^{i = k -
1}_{i = 3} \left( b_2 + i \right) \right) \cdot(A_k \cdot b_2^2+3B_k \cdot b_2+12n(n-1)),
\]
where $A_k = (12n^2-(73+24(k-4))n+108+60k+24 \frac{(k-4)(k-3)}{2})$, and $B_k = (12n^2-(49+16k)n+48+24k+8\frac{(k-4)(k-3)}{2})$.
Thus, we can write our polynomial $P \left( b_2, n \right)$ in the following form
\[P \left( b_2, n \right) = - \frac{1}{\left( n - 1 \right) !} \prod^{i = n - 1}_{i = 0} \left( b_2 + i
\right) + 2 \frac{1}{(n-1)!} \left( \prod^{i = n -
1}_{i = 3} \left( b_2 + i \right) \right) \cdot(A_n \cdot b_2^2+3B_n \cdot b_2+12n(n-1)).
\]
After collecting common factors we get the statement.
\end{proof}
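As a quick numerical check (ours, not the paper's) of the root formula $b_2^l = \frac{21+\sqrt{433+96n}}{2}$ stated above: its integer part reproduces Guan's bound 23 in dimension four ($n=2$, where the root is exactly 23), Sawon's bound 23 in dimension six ($n=3$), and the bounds 24 and 25 proved in the next section for dimensions eight and ten ($n=4,5$).

```python
import math

def b2_bound(n):
    # maximal root b_2^l = (21 + sqrt(433 + 96 n)) / 2 from the Proposition
    return (21.0 + math.sqrt(433.0 + 96.0 * n)) / 2.0
```

For $n=2$ the discriminant is $433+192=625=25^2$, so $b_2^l = 23$ exactly.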
\section{Dimension eight and ten}
To find explicit boundary conditions we use the method described by Justin Sawon \cite{S}. In his work he studied how different $\mathfrak{so}(b_2+2,\mathbb{C})$-modules sit inside the Hodge diamond.
\hfill
\begin{theorem}\label{Dim8}
Let $M$ be an eight-dimensional hyperk\"ahler manifold with $H^{2
k + 1} \left( M, \mathbb{C} \right) = 0$. Then $b_2 \leqslant 24$.
\end{theorem}
\begin{proof}
We will write explicit splittings of $b_4, b_6,$ and $b_8$ in terms of
$b_2$ and the generators of the primitive parts of the cohomology.
There is an action of $\mathfrak{so} (b_2 + 2,\mathbb{C})$ on the
complex cohomology of $M$. Under this action, an element of $H^{3,
1}_{\tmop{pr}} (M)$ will generate an irreducible $\mathfrak{so} (b_2 + 2,
\mathbb{C})$-module of dimension $\frac{(b_2 + 2) (b_2 + 1) b_2}{6}$. In
Hodge diamond this part sits as
{\small
\begin{center}
\begin{tabular}{ccccccccccccccccc}
& & & & & & & & 0 & & & & & & & & \\
& & & & & & & 0 & & 0 & & & & & & & \\
& & & & & & 0 & & 0 & & 0 & & & & & & \\
& & & & & 0 & & 0 & & 0 & & 0 & & & & & \\
& & & & 0 & & 1 & & $h$ & & 1 & & 0 & & & & \\
& & & 0 & & 0 & & 0 & & 0 & & 0 & & 0 & & & \\
& & 0 & & 1 & & $h$ & & $\frac{h^2 - h + 4}{2}$ & & $h$ & & 1 & & 0
& & \\
& 0 & & 0 & & 0 & & 0 & & 0 & & 0 & & 0 & & 0 & \\
0 & & 0 & & h & & $\frac{h^2 - h + 4}{2}$ & & $\frac{h^3 - 3 h^2 -
10 h - 60}{6}$ & & $\frac{h^2 - h + 4}{2}$ & & $h$ & & 0 & & 0\\
& 0 & & 0 & & 0 & & 0 & & 0 & & 0 & & 0 & & 0 & \\
& & 0 & & 1 & & $h$ & & $\frac{h^2 - h + 4}{2}$ & & $h$ & & 1 & & 0
& & \\
& & & 0 & & 0 & & 0 & & 0 & & 0 & & 0 & & & \\
& & & & 0 & & 1 & & $h$ & & 1 & & 0 & & & & \\
& & & & & 0 & & 0 & & 0 & & 0 & & & & & \\
& & & & & & 0 & & 0 & & 0 & & & & & & \\
& & & & & & & 0 & & 0 & & & & & & & \\
& & & & & & & & 0 & & & & & & & &
\end{tabular}
\end{center}
}
In the same way we have an irreducible $\mathfrak{so} (b_2 + 2,\mathbb{C})$-module
generated by additional elements of $H^{2, 2}_{\tmop{pr}}$, which sits
inside the Hodge diamond as
\begin{center}
\begin{tabular}{ccccccccccccccccc}
& & & & & & & & 0 & & & & & & & & \\
& & & & & & & 0 & & 0 & & & & & & & \\
& & & & & & 0 & & 0 & & 0 & & & & & & \\
& & & & & 0 & & 0 & & 0 & & 0 & & & & & \\
& & & & 0 & & 0 & & 1 & & 0 & & 0 & & & & \\
& & & 0 & & 0 & & 0 & & 0 & & 0 & & 0 & & & \\
& & 0 & & 0 & & 1 & & $h$ & & 1 & & 0 & & 0 & & \\
& 0 & & 0 & & 0 & & 0 & & 0 & & 0 & & 0 & & 0 & \\
0 & & 0 & & 1 & & $h$ & & $\frac{h^2 - h-4}{2}$ & & $h$ & & 1 & &
0 & & 0\\
& 0 & & 0 & & 0 & & 0 & & 0 & & 0 & & 0 & & 0 & \\
& & 0 & & 0 & & 1 & & $h$ & & 1 & & 0 & & 0 & & \\
& & & 0 & & 0 & & 0 & & 0 & & 0 & & 0 & & & \\
& & & & 0 & & 0 & & 1 & & 0 & & 0 & & & & \\
& & & & & 0 & & 0 & & 0 & & 0 & & & & & \\
& & & & & & 0 & & 0 & & 0 & & & & & & \\
& & & & & & & 0 & & 0 & & & & & & & \\
& & & & & & & & 0 & & & & & & & &
\end{tabular}
\end{center}
Recall that $b_4 = \frac{\left( b_2 + 1 \right) b_2}{2} + c b_2 + d,$ where
$d$ is the part of $H^{2, 2}_{\tmop{pr}}$ that does not come from $H^{3,
1}_{\tmop{pr}}$.
So the first module gives
\[ c \left( \frac{h^3 + 3 h^2 - 4 h -36}{6} \right)
= c \left( \frac{b_2^3 - 3 b_2^2 - 4 b_2 -24}{6} \right),\]
and the second one gives
$d \left( \frac{h^2 - h - 4}{2} + 2 h + 2 \right) = d \left( \frac{h^2 + 3 h
}{2} \right) = d \left( \frac{b_2^2 - b_2 - 2}{2} \right)$.
Clearly, in $H^8$ there are elements which come from the part of $H^6$
which is not generated by $\tmop{Sym}^3 H^2$ and from the two $\mathfrak{so} \left( b_2 +
2, \mathbb{C} \right)$-modules generated by elements of $H^{3,
1}_{\tmop{pr}}$ and $H^{2, 2}_{\tmop{pr}}$.
\[ b_6 = \frac{\left( b_2 + 2 \right) \left( b_2 + 1 \right) b_2}{6} + c
\left( \frac{b_2^2 - b_2 + 2}{2} \right) + d b_2 + e \]
Then each element of the $H^{3, 3}$ part generates an $\mathfrak{so} \left( b_2 + 2,
\mathbb{C} \right)$-module of dimension $b_2 + 2$. That
gives the term $e\, b_2$ in $b_8$:
\begin{eqnarray*}
b_8 \geqslant \frac{\left( b_2 + 3 \right) \left( b_2 + 2 \right) \left(
b_2 + 1 \right) b_2}{24} + c \left( \frac{b_2^3 - 3 b_2^2 - 4 b_2 -24}{6}
\right) + d \left( \frac{b_2^2 - b_2 - 2}{2} \right) + e b_2 & &
\end{eqnarray*}
From Salamon's relation we have
\begin{eqnarray*}
8 \cdot \frac{\left( b_2 + 2 \right) \left( b_2 + 1 \right) b_2}{6} + 8 c
\left( \frac{b_2^2 - b_2 + 2}{2} \right) + 8 d b_2 + 8 e + 44 \left(
\frac{\left( b_2 + 1 \right) b_2}{2} + c b_2 + d \right)+\\ + 104 b_2 + 188 +
b_7 - 71 b_3 - 23 b_5 \geqslant \\ \geqslant 2\frac{\left( b_2 + 3 \right) \left( b_2 +
2 \right) \left( b_2 + 1 \right) b_2}{24} + 2c \left( \frac{b_2^3 - 3 b_2^2 - 4 b_2 -24}{6} \right) + 2d \left( \frac{b_2^2 - b_2 - 2}{2} \right) + 2e b_2
& &
\end{eqnarray*}
Then after rearranging we get
\begin{eqnarray*}
- 2 \frac{\left( b_2 + 3 \right) \left( b_2 + 2 \right) \left( b_2
+ 1 \right) b_2}{24} + 8 \cdot \frac{\left( b_2 + 2 \right) \left( b_2 + 1 \right) b_2}{6} + 44
\left( \frac{\left( b_2 + 1 \right) b_2}{2} \right) + 104 b_2 + 188 + b_7 \geqslant \\
\geqslant c \left( \frac{b^3_2 - 15 b_2^2 - 124 b_2 -
48}{3} \right) + d \left( b_2^2 - 9b_2 - 46 \right) + 2 e
\left( b_2 - 4 \right) + 71 b_3 + 23 b_5
\end{eqnarray*}
The left-hand side is $\frac{- b_2^4 + 10 b_2^3 + 301 b_2^2 + 1538 b_2 + 2256}{12}
+ b_7$.\\
Now recall that all odd Betti numbers are zero. Then the left-hand side is
negative for $b_2 \geqslant 25$, while the right-hand side is non-negative. Hence $b_2 \leqslant 24$.
\end{proof}
\hfill
{\bf{Remark:}} The second Betti number $b_2$ is at most 24 for a sufficiently small $b_7$.
Indeed, in the proof of Theorem \ref{Dim8} we used the fact that
\[ F \left( b_2 \right) + b_7 := \frac{- 2 b_2^4 + 20 b_2^3 + 602 b_2^2
+ 3076 b_2 + 4512}{24} + b_7 \]
is negative for $b_2 \geqslant 25$. This remains true for $b_7 \leqslant
\left| F \left( b_2 \right)\vert_{b_2 = 25} \right| = 462$.
\hfill
\begin{theorem}\label{Dim10}
Let $M$ be a ten-dimensional hyperk\"ahler manifold with $H^{2 k
+ 1} \left( M, \mathbb{C} \right) = 0$. Then $b_2 \leqslant 25$.
\end{theorem}
\begin{proof}
The proof is very similar to the previous one. Now there are
contributions from $H^{3, 1}_{\tmop{pr}}, H^{2, 2}_{\tmop{pr}},
H^{3, 3}_{\tmop{pr}}, H^{4, 4}_{\tmop{pr}}.$ They are isomorphic to
$\bigwedge^4 \mathbb{C}^{b_2 + 2}, \bigwedge^3 \mathbb{C}^{b_2 +
2}, \bigwedge^2 \mathbb{C}^{b_2 + 2}$, and $\mathbb{C}^{b_2 + 2}$, respectively, as
irreducible $\mathfrak{so} \left( b_2 + 2,\mathbb{C} \right)$-modules.
Explicit calculations as in the eight-dimensional case give the
following:
\[ \frac{1}{60}(b_2+3)(b_2+4)(b_2+10) ( - b_2^2- 21b_2 + 118) = Q \left( b_2, c, d, e, f, g \right).\]
The polynomial $Q\left( b_2, c, d, e, f, g \right)$ is positive for $b_2 \geqslant 26$, since $c, d, e, f, g$ are the positive constants defined above ($f$ sits in the
$H^{4, 4}$-part, $g$ generates $H^{5, 5}$). On the other hand, the left-hand side is negative
for $b_2 \geqslant 26$. Hence, $b_2 \leqslant 25$.
\end{proof}
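The sign claim for the left-hand side can likewise be checked numerically; `lhs10` below is our name for $\frac{1}{60}(b_2+3)(b_2+4)(b_2+10)(-b_2^2-21b_2+118)$ with the positive factor $\frac{1}{60}$ dropped, which does not affect the sign:

```python
def lhs10(b2):
    # sign of (1/60)(b2+3)(b2+4)(b2+10)(-b2^2 - 21 b2 + 118);
    # the positive constant 1/60 is omitted
    return (b2 + 3) * (b2 + 4) * (b2 + 10) * (-b2**2 - 21 * b2 + 118)

# the quadratic factor -b2^2 - 21 b2 + 118 is already negative for b2 >= 5,
# so in particular the left-hand side is negative for all b2 >= 26
assert all(lhs10(b2) < 0 for b2 in range(26, 1001))
```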
| {
"timestamp": "2015-11-10T02:27:22",
"yymm": "1511",
"arxiv_id": "1511.02838",
"language": "en",
"url": "https://arxiv.org/abs/1511.02838",
"abstract": "We prove that $b_2$ is bounded for hyperkähler manifolds with vanishing odd-Betti numbers. The explicit upper boundary is conjectured. Following the method described by Sawon we prove that $b_2$ is bounded in dimension eight and ten in the case of vanishing odd-Betti numbers by 24 and 25 respectively.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "Boundness of $b_2$ for hyperkähler manifolds with vanishing odd-Betti numbers"
} |
https://arxiv.org/abs/2212.12199 | On splitting of the normalizer of a maximal torus in finite groups of Lie type | Let $G$ be a finite group of Lie type and $T$ a maximal torus of $G$. In this paper we complete the study of the question of the existence of a complement for the torus $T$ in its algebraic normalizer $N(G,T)$. It is proved that every maximal torus of the group $G\in\{G_2(q), {}^2G_2(q), {}^3D_4(q)\}$ has a complement in its algebraic normalizer. The remaining twisted classical groups ${}^2A_n(q)$ and ${}^2D_n(q)$ are also considered. | \section{Introduction}
The splitting problem for the normalizer of a maximal torus was first formulated by J.~Tits~\cite{Tits}. Let $\overline{G}$ be a simple connected linear algebraic group over the algebraic closure of the prime field of characteristic $p$. Let $\sigma$ be a Steinberg endomorphism and $\overline{T}$ a maximal $\sigma$-invariant torus of $\overline{G}$. It is well known that all maximal tori are conjugate in $\overline{G}$~\cite[Corollary 6.5]{MalleTester} and the quotient group $N_{\overline{G}}(\overline{T})/\overline{T}$ is isomorphic to the Weyl group $W$ of $\overline{G}$. The natural question is for which groups $\overline{G}$ the normalizer $N_{\overline{G}}(\overline{T})$ splits over $\overline{T}$. The answer was obtained independently in~\cite{AdamsHe} and as a corollary in a series of papers~\cite{Galt1,Galt2,Galt3,Galt4}. The same problem for the Lie groups was solved in~\cite{LieGroups}.
A similar question can be formulated for finite groups of Lie type. Let $G$ be a finite group of Lie type, that is $O^{p'}(\overline{G}_{\sigma})\leqslant G\leqslant\overline{G}_{\sigma}$. Suppose that $T=\overline{T}\cap G$ is a maximal torus of $G$ and $N(G,T)=N_{\overline{G}}(\overline{T})\cap G$ is its algebraic normalizer. It is known that in the case of finite groups the maximal tori need not be conjugate in the group $G$. The general problem is to describe groups $G$ and their maximal tori $T$ such that the group $N(G,T)$ splits over $T$. This problem has been solved for groups of Lie types $A_n$, $B_n$, $C_n$, $D_n$, $E_6$, $E_7$, $E_8$, and $F_4$ in~\cite{Galt2,Galt3,Galt4, GS,GS2,GS3}. Moreover, when $G=F_4(q)$ and there is no complement for $T$, we found supplements of minimal order.
J. Adams and X. He in~\cite{AdamsHe} considered a related question: what is the minimal order of a lift for an element $w\in W$ to $N_{\overline{G}}(\overline{T})$?
It is easy to see that if the order of $w$ is $d$, then the minimal order of a lift for $w$ is either $d$ or $2d$. Clearly, if $N_{\overline{G}}(\overline{T})$ splits over $\overline{T}$, then the minimal order is $d$.
For algebraic groups $\overline{G}$ of Lie type $E_6$, $E_7$, $E_8$, or $F_4$, the normalizer of the torus does not split. Nevertheless, in~\cite{GS,GS2} it is proved that in the case of finite groups $E_6(q)$, $E_7(q)$, $E_8(q)$ the minimal order of a lift is always equal to $d$. In particular, the minimal order of a lift is $d$ in the corresponding algebraic groups.
For finite groups of Lie type $F_4$, minimal orders of lifts were found in~\cite{AdamsHe} for the elements belonging to the so-called regular or elliptic conjugacy classes of $W$. In particular, J.\,Adams and X.\,He showed that there exists an elliptic element of order four every lift of which has order eight. In~\cite{GS3}, minimal orders of lifts are found for all elements of the Weyl group of type $F_4$.
This paper completes the study of the above question for the remaining finite groups of Lie type.
The first result is devoted to exceptional groups of Lie type.
\begin{theorem}\label{th}
Let $G\in\{G_2(q), {}^2G_2(q), {}^3D_4(q), {}^2F_4(q), {}^2B_2(q)\}$. If $T$ is a maximal torus of $G$ and $N$ is its algebraic normalizer, then $N$ splits over $T$.
\end{theorem}
It is worth noting that since the groups ${}^2F_4(q)$, ${}^2B_2(q)$ are defined over a field of characteristic two, the result for them immediately follows from~\cite[Remark 2.5]{GS3}, so the essential part of the theorem is the results for the first three groups.
Among the classical groups it remains to consider the twisted groups ${}^2A_n(q)$ and ${}^2D_n(q)$.
In~\cite{Galt3}, the splitting problem for maximal tori is solved for groups $A_n(q)$. For twisted groups ${}^2A_n(q)$, the result can be obtained as follows.
The structure of maximal tori in the special unitary group $\SU_n(q)$ is described in~\cite[\S2]{ButGre} and is obtained from the structure of the corresponding maximal tori in the special linear group $\SL_n(q)$ by replacing $q$ with $-q$. Repeating Lemmas~2--21 from~\cite{Galt3} with $q$ replaced by $-q$, we obtain results for the groups $\SU_n(q)$ and $\PSU_n(q)$.
For the sake of completeness, we formulate the corresponding results. We use the notation $\SL_n^\varepsilon(q)$, where $\varepsilon=\pm$, setting $\SL_n^+(q)=\SL_n(q)$ and $\SL_n ^-(q)=\SU_n(q)$. Recall that in these cases the Weyl group is isomorphic to the symmetric group $\Sym_n$ of degree~$n$.
A conjugacy class of $\Sym_n$ corresponds to a partition $\{n_1,\ldots,n_m\}$ of $n$. We assume that a partition is ordered in non-decreasing order $$n_1=\ldots=n_{l_1}<n_{l_1+1}=\ldots=n_{l_1+l_2}<\ldots<n_{l_1+\ldots+l_{r-1}+1}=\ldots=n_{l_1+\ldots+l_r},$$
and define $a_1=n_{l_1}l_1, a_2=n_{l_1+l_2}l_2,\ldots, a_r=n_{l_1+\ldots+l_r}l_r$.
\begin{theorem}\label{th_SU}
Let $T$ be a maximal torus of the group $G=\SL_{n}^\varepsilon(q)$ corresponding to an element of the Weyl group with the cycle type $(n_1)(n_2)\ldots(n_m)$.
Then $T$ has a complement in $N(G,T)$ if and only if
$q$ is even or $a_i$ is odd for some $1\leqslant i\leqslant r$.
\end{theorem}
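The criterion above is easy to evaluate mechanically. The following sketch (the function name `splits_SL` is ours) transcribes it directly: group the equal parts of the cycle type, form the numbers $a_i$, and test the parity condition.

```python
from itertools import groupby

def splits_SL(q, cycle_type):
    """Transcription of the splitting criterion: the torus of SL_n^eps(q)
    of cycle type (n_1)...(n_m) has a complement in its algebraic
    normalizer iff q is even or some a_i = n * (multiplicity of n) is odd."""
    a = [n * len(list(grp)) for n, grp in groupby(sorted(cycle_type))]
    return q % 2 == 0 or any(x % 2 == 1 for x in a)

assert splits_SL(3, [1, 2])          # a = [1, 2]: an odd a_i, so it splits
assert not splits_SL(3, [2, 2])      # a = [4]: no odd a_i and q odd
assert splits_SL(2, [2, 2])          # q even: always splits
```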
\noindent Now we formulate the result for the simple groups $\PSL_{n}^\varepsilon(q)$.
\begin{theorem}\label{th_PSU}
Let $T$ be a maximal torus of $G=\SL_{n}^\varepsilon(q)$ corresponding to an element of the Weyl group with the cycle type $(n_1)(n_2)\ldots(n_m)$. Let $\widetilde{T}$ and $\widetilde{N}$ be the images of $T$ and $N(G,T)$ in $\widetilde{G}=\PSL_{n}^\varepsilon (q)$.
Then $\widetilde{T}$ has a complement in~$\widetilde{N}$ if and only if one of the following holds:
\begin{itemize}
\item[{\em (1)}] $q$ is even;
\item[{\em (2)}] $a_i$ is odd for some $1\leqslant i\leqslant r$;
\item[{\em (3)}] $(n)_2<(\varepsilon q-1)_2$;
\item[{\em (4)}] $m=4$, $n_1,n_2,n_3,n_4$ are odd;
\item[{\em (5)}] $m=3$, $n_1=n_2$ is odd, $n_3$ is even, $(n_3)_2>2$, and $(n)_2\leqslant(\varepsilon q-1)_2$;
\item[{\em (6)}] $m=3$, $n_1=n_2$ is odd, $n_3$ is even, $(n_3)_2=2$, and $(n)_2\neq(\varepsilon q-1)_2$;
\item[{\em (7)}] $m=2$, $n_1,n_2$ are odd;
\item[{\em (8)}] $m=2$, $n_1,n_2$ are even, $n_1\neq n_2$, $(n)_2<d(\varepsilon q-1)_2$, where $d=\gcd\{(\frac{n_1}{2})_2,(\frac{n_2}{2})_2,(\varepsilon q-1)_2\}$;
\item[{\em (9)}] $m=2$, $n_1,n_2$ are even, $n_1\neq n_2$, $(n_1)_2=(n_2)_2\leqslant(\varepsilon q-1)_2,$ $(\varepsilon q-1)_2(n_1)_2\leqslant(n)_2$;
\item[{\em (10)}] $m=2$, $n_1=n_2$ is even, $(n_1)_2>2$, $(n)_2\leqslant(\varepsilon q-1)_2$;
\item[{\em (11)}] $m=2$, $n_1=n_2$ is even, $(n_1)_2=2$, $(n)_2\neq(\varepsilon q-1)_2$;
\item[{\em (12)}] $m=1$.
\end{itemize}
\end{theorem}
The structures of maximal tori and the Weyl group in the orthogonal group $\mathrm{P}\Omega_{2n}^-(q)$ are described in~\cite[\S4]{ButGre}.
Let $n=n'+n''$, $\{n_1,\ldots,n_k\}$ and $\{n_{k+1},\ldots,n_m\}$ be partitions of $n'$ and $n''$, respectively. The set $\{-n_1,\ldots,-n_k,n_{k+1},\ldots,n_m\}$ is called a {\it cycle type} and is denoted by $(\overline{n_1})\ldots(\overline{n_k})(n_{k+1})\ldots(n_m)$. There is a bijection between conjugacy classes of maximal tori and their cycle types.
We assume that partitions of $n'$ and $n''$ are ordered in non-decreasing order
$$n_1=\ldots=n_{l_1}<n_{l_1+1}=\ldots=n_{l_1+l_2}<\ldots<n_{l_1+\ldots+l_{d-1}+1}=\ldots=n_k,$$
where $k=l_1+\ldots+l_{d-1}+l_d$,
$$n_{k+1}=\ldots=n_{k+l_{d+1}}<\ldots<n_{l_1+\ldots+l_{r-1}+1}=\ldots=n_{l_1+\ldots+l_r}.$$
As above, we define $a_1=n_{l_1}l_1, a_2=n_{l_1+l_2}l_2,\ldots, a_r=n_{l_1+\ldots+l_r}l_r$.
\begin{theorem}\label{th_Omega}
Let $G=\mathrm{P}\Omega_{2n}^-(q)$ with $n\geqslant4$. Let $T$ be a maximal torus of $G$ corresponding to an element of the Weyl group with the cycle type $(\overline{n_1})\ldots(\overline{n_k})(n_{k+1})\ldots(n_m)$, where $k$ is odd. Then $T$ has a complement in~$N(G,T)$ if and only if one of the following holds:
\begin{itemize}
\item[{\em (1)}] $q\not\equiv3\pmod4;$
\item[{\em (2)}] $a_i$ is odd for some $1\leqslant i\leqslant r;$
\item[{\em (3)}] $k=m$, $n_i$ is even for all $1\leqslant i\leqslant k$.
\end{itemize}
\end{theorem}
\section{Notations and preliminary results}
If a group $G$ is the product of its normal subgroup $N$ and subgroup $K$, then $K$ is called a {\it supplement} to $N$ in $G$. If, in addition, $N\cap K=1$, then $K$ is called a {\it complement} to $N$ in $G$.
By $q$ we always mean a power of a prime $p$. We write $\overline{\mathbb{F}}_p$ for an algebraic closure of a finite field $\mathbb{F}_p$ of order $p$. The symmetric group
of degree $n$ is denoted by $\Sym_n$, the dihedral group of order $2n$ by $D_{2n}$, and the cyclic group of order $n$ by $\mathbb{Z}_n$, or simply $n$ when used in tables.
$\overline{G}$ denotes a simple simply connected linear algebraic group over $\overline{\mathbb{F}}_p$ corresponding to a root system $\Phi$ with a fundamental root system $\Delta=\{r_1,r_2,\ldots,r_l\}$.
In what follows, we will use the notation from~\cite{CarSG}, in particular, the definitions of the elements $x_r(t)$, $n_r(\lambda)$, $(r\in\Phi, t\in\overline{\mathbb{F}}_p , \lambda\in \overline{\mathbb{F}}_p^*)$.
Unlike~\cite{CarSG}, MAGMA uses the definition $h_r(\lambda) = n_r(-1)n_r(\lambda)$, which we adhere to in this paper. According to~\cite{CarSG}, $\overline{G}$ is generated by the elements $x_r(t)$: $\overline{G}=\langle x_r(t)~|~r\in\Phi,t\in \overline{\mathbb{F}}_p\rangle$.
The group $\overline{T}=\langle h_r(\lambda)~|~r\in\Delta,\lambda\in \overline{\mathbb{F}}_p^*\rangle$ is a maximal torus in $\overline{G} $ and $\overline{N}=\langle \overline{T},n_r~|~r\in\Delta\rangle$, where $n_r=n_r(1)$, is the normalizer of $\overline{T}$ in $\overline{G}$~\cite[\S 7.1, 7.2]{CarSG}. The Weyl group $\overline{N}/\overline{T}$ is denoted by $W$ and the natural homomorphism from $\overline{N}$ onto $W$ by $\pi$. Following~\cite[Definition 2.1.9]{GorLySol}, $\sigma$ denotes the Steinberg endomorphism.
Define the action of $\sigma$ on $W$ in the natural way. Elements $w_1, w_2\in W$ are called {\it $\sigma$-conjugate} if $w_1=w^{-1}w_2w^{\sigma}$ for some element $w\in W$. For $G=\overline{G}_\sigma$ and $g\in\overline{G}$ the following statements are true.
\begin{prop}{\em\cite[Propositions~3.3.1, 3.3.3]{Car}}\label{torus}.
A torus $\overline{T}^g$ is $\sigma$-invariant if and only if $g^{\sigma}g^{-1}\in\overline{N}$.
The map $\overline{T}^g\mapsto\pi(g^{\sigma}g^{-1})$ defines a bijection between the $G$-classes of $\sigma$-invariant maximal tori of the group $\overline{G}$ and the
$\sigma$-conjugacy classes of $W$.
\end{prop}
\begin{prop}{\em\cite[Lemma~1.2]{ButGre}}\label{prop2.5}.
Let $n=g^{\sigma}g^{-1}\in\overline{N}$. Then $(\overline{T}^g)_\sigma=(\overline{T}_{\sigma n})^g$, where $n$ acts on $\overline{T}$ by conjugation.
\end{prop}
\begin{prop}{\em\cite[Proposition~3.3.6]{Car}}\label{p:normalizer}.
Let $g^{\sigma}g^{-1}\in\overline{N}$ and $\pi(g^{\sigma}g^{-1})=w$. Then $$(N_{\overline{G}}({\overline{T}}^g))_{\sigma}/({\overline{T}}^g)_{\sigma}\simeq C_{W,\sigma}(w)=\{x\in W~|~x^{-1}wx^{\sigma}=w\}.$$
\end{prop}
It follows from Proposition~\ref{prop2.5} that $({\overline{T}}^g)_{\sigma}=(\overline{T}_{\sigma n})^g$ and $(N_{\overline{G}}({\overline{T}}^g))_{\sigma}=(\overline{N}^g)_{\sigma}=(\overline{N}_{\sigma n})^g$. Therefore,
$$C_{W,\sigma}(w)\simeq (N_{\overline{G}}({\overline{T}}^g))_{\sigma}/({\overline{T}}^g)_{\sigma}=(\overline{N}_{\sigma n})^g/(\overline{T}_{\sigma n})^g\simeq\overline{N}_{\sigma n}/\overline{T}_{\sigma n}.$$
\begin{remark}\label{r:nonsplit}
Let $n$ and $w$ be as in Propositions~\ref{prop2.5} and~\ref{p:normalizer}, respectively. Suppose that $n_1=g_1^{\sigma}g_1^{-1}\in\overline{N}$ and $\pi(n_1)=w$. Since $n$ and $n_1$ act on $\overline{T}$ by conjugation in the same way, we find that
$({\overline{T}}^{g_1})_{\sigma}=(\overline{T}_{\sigma n_1})^{g_1}=(\overline{T}_{\sigma n})^{g_1}$ and $(N_{\overline{G}}({\overline{T}}^{g_1}))_{\sigma}=(\overline{N}^{g_1})_{\sigma}=(\overline{N}_{\sigma n_1})^{g_1}$.
Therefore, we infer that $$C_{W,\sigma}(w)\simeq (N_{\overline{G}}({\overline{T}}^{g_1}))_{\sigma}/({\overline{T}}^{g_1})_{\sigma}=(\overline{N}_{\sigma n_1})^{g_1}/(\overline{T}_{\sigma n})^{g_1}\simeq\overline{N}_{\sigma n_1}/\overline{T}_{\sigma n}.$$
This implies that $(\overline{T}^g)_\sigma$ has a complement in the algebraic normalizer if and only if $\overline{T}_{\sigma n}$ has a complement in $\overline{N}_{\sigma n_1}$ for some $n_1\in\overline{N}$ such that $\pi(n_1)=w$. \end{remark}
For brevity, we write $h_r$ instead of $h_r(-1)$, as well as $w_i$, $h_i$, and $n_i$ instead of $w_{r_i}$, $h_{r_i}$, and $n_{r_i}$, respectively. Any element $H$ of the group $\overline{T}$ can be written as $H=h_{r_1}(\lambda_1)h_{r_2}(\lambda_2)\ldots h_{r_l}(\lambda_l)$, which we abbreviate to $(\lambda_1,\lambda_2,\ldots,\lambda_l)$.
Define $\mathcal{T}=\langle n_r~|~ r\in\Delta\rangle$ and $\mathcal{H}=\overline{T}\cap\mathcal{T}$. According to~\cite{Tits}, we have $\mathcal{H}=\langle h_r~|~r\in\Delta\rangle$ and $\mathcal{T}/\mathcal{H}\simeq W$.
In particular, if $q$ is odd, then $\mathcal{H}$ is an elementary abelian $2$-group.
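In the rank-one case these relations can be seen by hand in $\SL_2$. The sketch below (an illustration of ours, not the paper's MAGMA code) realizes $x_r(t)$, $x_{-r}(t)$, $n_r(\lambda)$, and $h_r(\lambda)$ as $2\times2$ matrices and checks that $n_r^2=h_r(-1)$ and that $h_r=h_r(-1)$ is an involution:

```python
from fractions import Fraction

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def x_r(t):   # upper-triangular root element
    return [[Fraction(1), Fraction(t)], [Fraction(0), Fraction(1)]]

def x_mr(t):  # lower-triangular (opposite root) element
    return [[Fraction(1), Fraction(0)], [Fraction(t), Fraction(1)]]

def n_r(lam):  # n_r(lambda) = x_r(lambda) x_{-r}(-lambda^{-1}) x_r(lambda)
    lam = Fraction(lam)
    return mat_mul(mat_mul(x_r(lam), x_mr(-1 / lam)), x_r(lam))

def h_r(lam):  # the MAGMA convention used in the paper: h_r(lambda) = n_r(-1) n_r(lambda)
    return mat_mul(n_r(-1), n_r(lam))

I2 = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]

assert mat_mul(n_r(1), n_r(1)) == h_r(-1)   # n_r^2 = h_r(-1)
assert mat_mul(h_r(-1), h_r(-1)) == I2      # h_r = h_r(-1) is an involution
```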
For the untwisted groups of Lie type, the following lemma holds, which will be used for the groups $G_2(q)$.
\begin{lemma}{\em \cite[Lemma~3.1]{GS}}\label{normalizer}
Let $g\in\overline{G}$ and $n=g^\sigma g^{-1}\in\overline{N}$. Suppose that $H\in \overline{T}$ and $u\in\mathcal{T}$. Then
(1) $Hu\in\overline{N}_{\sigma n}$ if and only if $H=H^{\sigma n}[n,u];$
(2) if $H\in \mathcal{H}$, then $Hu\in\overline{N}_{\sigma n}$ is equivalent to the equality $[n,Hu]=1$.
\end{lemma}
\noindent Similarly to~\cite[Theorem~7.2.2]{CarSG}, we have the following equalities:
\begin{center}
$n_s n_r n_s^{-1}=n_{w_s(r)}(\eta_{s,r}),\quad \eta_{s,r}=\pm1,$
\end{center}
\begin{center}
$n_s h_r(\lambda)n_s^{-1}=h_{w_s(r)}(\lambda).$
\end{center}
We choose the values of $\eta_{r,s}$ in the standard way (see~\cite[\S2]{GS3}).
\section{Proof of Theorem~\ref{th} for $G_2(q)$}
In this section, we suppose that $G=G_2(q)$, where $q$ is a power of a prime~$p$. Since the case $p=2$ follows from~\cite[Remark 2.5]{GS3}, we can assume that $p$ and $q$ are odd. The Dynkin diagram of type $G_2$ has the following form:
\begin{picture}(100,40)(-140,-10)
\put(50,0){\line(1,0){50}}
\put(50,3){\line(1,0){50}} \put(50,-3){\line(1,0){50}}
\put(50,0){\line(1,0){50}}
\put(50,0){\circle*{6}} \put(100,0){\circle*{6}}
\put(50,10){\makebox(0,0){$r_1$}}
\put(100,10){\makebox(0,0){$r_2$}}
\put(75,0){\makebox(0,0){$\langle$}}
\end{picture}\\
Following~\cite{Bour}, we use the following order of positive roots:
$$r_3=r_1+r_2, r_4=2r_1+r_2, r_5=3r_1+r_2, r_6=3r_1+2r_2.$$
The Weyl group $W=\langle w_1,w_2\rangle$ of the group $G_2(q)$ is isomorphic to the group $D_{12}$ and contains a central involution $w_0=w_1w_6=w_3w_5$.
Since $\sigma$ acts trivially on $W$ in this case, the six $\sigma$-conjugacy classes of $W$ are just the ordinary conjugacy classes.
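These facts about $W$ are classical; they can also be confirmed by a short computation. In the sketch below (ours), the simple reflections act on coordinates in the basis $(r_1,r_2)$, using the Cartan integers $\langle r_1,r_2^\vee\rangle=-1$ and $\langle r_2,r_1^\vee\rangle=-3$:

```python
S1 = ((-1, 3), (0, 1))   # w_1: r_1 -> -r_1,       r_2 -> r_2 + 3 r_1
S2 = ((1, 0), (1, -1))   # w_2: r_1 -> r_1 + r_2,  r_2 -> -r_2

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

I2 = ((1, 0), (0, 1))
MINUS_I = ((-1, 0), (0, -1))

# close {S1, S2} under multiplication to obtain the whole Weyl group
W = {I2}
frontier = {I2}
while frontier:
    new = {mul(w, s) for w in frontier for s in (S1, S2)} - W
    W |= new
    frontier = new

assert len(W) == 12          # W(G_2) is dihedral of order 12
assert MINUS_I in W          # the central involution w_0 acts as -1
# w_0 = w_1 w_6, where w_6 is the reflection sending r_6 = 3 r_1 + 2 r_2 to -r_6
w6 = next(w for w in W if w != MINUS_I
          and (w[0][0] * 3 + w[0][1] * 2, w[1][0] * 3 + w[1][1] * 2) == (-3, -2))
assert mul(S1, w6) == MINUS_I
```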
To prove Theorem~\ref{th} we consider each $\sigma$-conjugacy class of maximal tori separately. As an element of $W$ corresponding to a conjugacy class of some maximal torus, we choose $w$ according to Table~\ref{tableG_2}.
In each case, we present an element $n$ such that $\pi(n)=w$ and the complement to the torus $\overline{T}_{\sigma n}$ in $\overline{N}_{\sigma n}$.
The cyclic structure of maximal tori given in Table~\ref{tableG_2} can be found in~\cite[\S2]{Kantor}. The proof of the theorem uses calculations obtained with the aid of computer systems MAGMA and GAP. The corresponding commands can be found in~\cite{github}. All calculations of this kind can be checked manually. For example, the case $w=1$ is discussed in detail in~\cite[\S7]{Galt1}.
Calculations in MAGMA show that $n_0:=h_1n_1n_6$ lies in $Z(\mathcal{T})$. In what follows, we use this fact without explanation.
\textbf{Tori 1 and 4.} In this case $w=1$ or $w=w_0$, respectively.
Moreover, we see that $C_{W}(w)\simeq\langle w_1,w_2 \rangle\simeq D_{12}$.
Let $n=1$ if $w=1$ and $n=n_0$ if $w=w_0$.
Consider elements $a=h_2n_1$ and $b=h_1n_2$. It follows from the definition of $n$ that $[n,a]=[n,b]=1$.
By Lemma~\ref{normalizer}(2), we get that $a,b\in\overline{N}_{\sigma n}$.
According to~\cite[\S7]{Galt1}, it is true that $a^2=b^2=1$ and $(ab)^6=1$.
Therefore, we infer that $K=\langle a,b \rangle$ is a complement to $\overline{T}_{\sigma n}$ in $\overline{N}_{\sigma n}$.
\textbf{Torus 2.} In this case $w=w_2$ and $C_W(w)=\langle w_2,w_4\rangle\simeq \mathbb{Z}_2\times \mathbb{Z}_2$.
Consider elements $a=h_1n_2$, $b=h_1n_4$, and $n=a$.
Using MAGMA, we see that $[n, a]=[n, b]=1$. By Lemma~\ref{normalizer}(2), we get that $a, b\in\overline{N}_{\sigma n}$. Now $a^2=b^2=[a,b]=1$ and hence $K=\langle a,b \rangle$ is a homomorphic image of $\mathbb{Z}_2\times \mathbb{Z}_2$. On the other hand, the image of $K$ in $W$ has order four, so $K\simeq\mathbb{Z}_2\times \mathbb{Z}_2$ and it is a complement to $\overline{T}_{\sigma n}$ in $\overline{N}_{\sigma n}$.
\textbf{Torus 3.} In this case $w=w_1$ and $C_W(w)=\langle w_1,w_6\rangle\simeq \mathbb{Z}_2\times \mathbb{Z}_2$.
Consider elements $a=h_2n_1$, $b=h_2n_6$ and $n=a$.
Using MAGMA, we see that $[n, a]=[n, b]=1$. Therefore, $a, b\in\overline{N}_{\sigma n}$ by Lemma~\ref{normalizer}(2). Now we see that $a^2=b^2=1$ and $[a,b]=1$.
Therefore, $K=\langle a,b\rangle$ is a complement to $\overline{T}_{\sigma n}$ in $\overline{N}_{\sigma n}$.
\textbf{Torus 5.} In this case $w=w_1w_3$ and $C_W(w)=\langle w_1w_3\rangle\times\langle w_0\rangle\simeq\mathbb{Z}_3\times \mathbb{Z}_2\simeq\mathbb{Z}_6$.
Consider elements $a=n_1n_3$, $n_0$ and $n=a$.
Using MAGMA, we see that $[n, a]=[n, n_0]=1$. Therefore, $a, n_0\in\overline{N}_{\sigma n}$ by Lemma~\ref{normalizer}(2). Since $a^3=n_0^2=1$ and $[a,n_0]=1$, we infer that $K=\langle a,n_0\rangle$ is a complement to $\overline{T}_{\sigma n}$ in $\overline{N}_{\sigma n}$.
\textbf{Torus 6.} In this case $w=w_1w_2$ and $C_W(w)=\langle w\rangle\simeq\mathbb{Z}_6$.
Consider $n=n_1n_2$. Using MAGMA, we see that
$n^6=1$. Therefore, we infer that $K=\langle n\rangle$ is a complement to $\overline{T}_{\sigma n}$ in $\overline{N}_{\sigma n}$.
\begin{table}[H]
\begin{center}
\caption{Splitting of normalizers of maximal tori of $G_2(q)$\label{tableG_2}}
{\centering
\begin{tabular}{|l|l|c|l|l|l|c|}
\hline
& $w$ & $|w|$ & $|C_W(w)|$ & Structure of $C_W(w)$ & Cyclic structure $T$ & Splitting\\ \hline
1 & $1$ & 1 & 12 & $D_{12}$ & $(q-1)^2$ & + \\
2 & $w_2$ & 2 & 4 & $\mathbb{Z}_2\times \mathbb{Z}_2$ & $q^2-1$ & + \\
3 & $w_1$ & 2 & 4 & $\mathbb{Z}_2\times \mathbb{Z}_2$ & $q^2-1$ & + \\
4 & $w_1w_6$ & 2 & 12 & $D_{12}$ & $(q+1)^2$ & + \\
5 & $w_1w_3$ & 3 & 6 & $\mathbb{Z}_6$ & $q^2+q+1$ & + \\
6 & $w_1w_2$ & 6 & 6 & $\mathbb{Z}_6$ & $q^2-q+1$ & + \\
\hline
\end{tabular}}
\end{center}
\end{table}
\section{Proof of Theorem~\ref{th} for groups ${}^2G_2(q)$}
In this section, we suppose that $G={}^2G_2(q)$, where $q=3^{2m+1}$ and $m$ is a positive integer. We follow the definition of the group ${}^2G_2(q)$ from~\cite[1.15.4, 2.2.3]{GorLySol}. In particular, $\rho$ is the nontrivial symmetry of the Dynkin diagram, $\sigma=\psi^{2m+1}$, where
\begin{center}
$\psi(x_r(t))=
\begin{cases}
x_{r^\rho}(t) & \text{if } r \text{ is a long root},\\
x_{r^\rho}(t^3) & \text{if } r \text{ is a short root}.
\end{cases}$
\end{center}
As was mentioned above, the Weyl group $W=\langle w_1,w_2\rangle$ of $G_2(q)$ is isomorphic to $D_{12}$ and contains a central involution $w_0=w_1w_6=w_3w_5$.
The $\sigma$-conjugacy classes of $W$ can be found directly by the definition. For example, the $\sigma$-conjugacy class for the identity element consists of the elements
$\{w^{-1}w^\sigma~|~w\in W\}$. Elements of the Weyl group are written in the following way: $W=\{w_k, (w_1w_2)^k~|~k=1,\ldots,6 \}$.
Since $(w_1w_2)^\sigma=w_2w_1$, we see that
$$(w_1w_2)^{-k}((w_1w_2)^k)^\sigma=(w_2w_1)^k(w_2w_1)^k=(w_2w_1)^{2k}=(w_1w_2)^{-2k},\quad w_1^{-1}w_1^\sigma=w_1w_2,$$ $$w_2^{-1}w_2^\sigma=w_2w_1=(w_1w_2)^5,\quad w_3^{-1}w_3^\sigma=w_3w_5=w_0=(w_1w_2)^3,\quad w_4^{-1}w_4^\sigma=w_4w_6=(w_1w_2)^5,$$
$$w_5^{-1}w_5^\sigma=w_5w_3=w_0=(w_1w_2)^3,\quad w_6^{-1}w_6^\sigma=w_6w_4=w_1w_2.$$
Therefore, $\{w^{-1}w^\sigma~|~w\in W\}=\{(w_1w_2)^k~|~k=1,\ldots,6\}$. Similarly, we find all the other $\sigma$-conjugacy classes: $\{w_1, w_2\}$, $\{w_3, w_5\}$, and $\{w_4, w_6\}$.
Using the size of the $\sigma$-conjugacy class of $w$, we find the order of $C_{W,\sigma}(w)$ (see Table~\ref{table2G_2}).
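The same computation can be replayed mechanically. In the sketch below (the encoding is ours), $W\simeq D_{12}$ is modelled as pairs $(e,k)$ standing for $f^e r^k$ with $f=w_1$ and $r=w_1w_2$; the automorphism induced by $\sigma$ swaps $w_1$ and $w_2$:

```python
ELEMS = [(e, k) for e in (0, 1) for k in range(6)]   # f^e r^k; f^2 = r^6 = 1, f r f = r^{-1}

def mul(x, y):
    (e1, k1), (e2, k2) = x, y
    return ((e1 + e2) % 2, ((-1) ** e2 * k1 + k2) % 6)

def inv(x):
    e, k = x
    return (e, k if e else (-k) % 6)     # reflections are involutions

def sigma(x):   # the automorphism swapping w_1 = f and w_2 = f r
    e, k = x
    return (0, (-k) % 6) if e == 0 else (1, (1 - k) % 6)

def twisted_class(u):   # {x^{-1} u x^sigma | x in W}
    return frozenset(mul(mul(inv(x), u), sigma(x)) for x in ELEMS)

classes = {twisted_class(u) for u in ELEMS}
assert sorted(len(c) for c in classes) == [2, 2, 2, 6]
# the sigma-class of the identity is exactly the six rotations (w_1 w_2)^k
assert twisted_class((0, 0)) == frozenset((0, k) for k in range(6))
```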
Fixing a representative of each $\sigma$-conjugacy class, we find the structure of the corresponding maximal torus, and then construct a complement in the corresponding algebraic normalizer. The results on the structure of maximal tori and their normalizers are given in Table~\ref{table2G_2}.
\textbf{Torus~1.} In this case $w=1$.
For an element $h_{r_1}(t_1)h_{r_2}(t_2)\in\overline{T}$,
we find necessary and sufficient conditions ensuring that this element belongs to $\overline{T}_\sigma$.
Since $\psi^2$ acts as the field automorphism of order $3$ (see~\cite[1.15.4(b)]{GorLySol}), we see that
$$h_{r_1}(t_1)h_{r_2}(t_2)=(h_{r_1}(t_1)h_{r_2}(t_2))^\sigma=(h_{r_1}(t_1)h_{r_2}(t_2))^{\psi^{2m}\psi}=
(h_{r_1}(t_1^{3^m})h_{r_2}(t_2^{3^m}))^\psi= h_{r_2}(t_1^{3^{m+1}})h_{r_1}(t_2^{3^m}).$$
Therefore, we find two necessary and sufficient equalities: $t_1=t_2^{3^m}$ and $t_2=t_1^{3^{m+1}}$.
Applying equivalent transformations, we get that
$$
\begin{cases}
t_1=t_2^{3^m} \\
t_2=t_1^{3^{m+1}}
\end{cases}\Leftrightarrow
\begin{cases}
t_1=t_2^{3^m} \\
t_2=t_2^{3^m\cdot 3^{m+1}}
\end{cases}\Leftrightarrow
\begin{cases}
t_1=t_2^{3^m} \\
t_2^{q-1}=1
\end{cases}.
$$
Therefore, the torus is parametrized by the set $\{(z,z^{\sqrt{3q}})~|~z\in\overline{\mathbb{F}}_q, z^{q-1}=1 \}$ and is isomorphic to the cyclic group of order $q-1$.
Note that $C_{W,\sigma}(w)\simeq\langle w_0 \rangle\simeq\mathbb{Z}_2$. Consider the element $n=n_0=h_1n_1n_6$.
Using MAGMA, we see that $n^\sigma=n$ and $n^2=1$. Therefore, it follows from the definition of $\overline{N}_{\sigma n}$ that $n\in\overline{N}_{\sigma n}$. Hence, the group $K=\langle n \rangle$ is a complement to $\overline{T}_{\sigma n}$ in $\overline{N}_{\sigma n}$.
\textbf{Torus~2.} In this case $w=w_1$. Note that for any element $h_{r_1}(t_1)h_{r_2}(t_2)\in\overline{T}$
it is true that
$$(h_{r_1}(t_1)h_{r_2}(t_2))^{\sigma n_1}=(h_{r_1}(t_2^{3^m})h_{r_2}(t_1^{3^{m+1}}))^{n_1}=
h_{-r_1}(t_2^{3^m})h_{r_1+r_2}(t_1^{3^{m+1}})=$$
$$h_{r_1}(t_2^{-3^m})h_{r_1}(t_1^{3^{m+1}})h_{r_2}(t_1^{3^{m+1}})=
h_{r_1}(t_1^{3^{m+1}}t_2^{-3^m})h_{r_2}(t_1^{3^{m+1}}).$$
Consider the following chain of equivalent systems of equations for the torus parameters of $\overline{T}_{\sigma n_1}$:
$$
\begin{cases}
t_1=t_1^{3^{m+1}}t_2^{-3^m} \\
t_2=t_1^{3^{m+1}}
\end{cases}\Leftrightarrow
\begin{cases}
t_1=t_1^{3^{m+1}}t_1^{-3^{2m+1}} \\
t_2=t_1^{3^{m+1}}
\end{cases}\Leftrightarrow
\begin{cases}
t_1^{3^{2m+1}-3^{m+1}+1}=1 \\
t_2=t_1^{3^{m+1}}
\end{cases}
.
$$
Therefore, elements of the torus are parametrized by a set $\{(z,z^{\sqrt{3q}})~|~z\in\overline{\mathbb{F}}_q, z^{q-\sqrt{3q}+1}=1\}$,
so $\overline{T}_{\sigma n_1}$ is isomorphic to the cyclic group of order $q-\sqrt{3q}+1$.
Note that $C_{W,\sigma}(w)\simeq\langle w_1w_2 \rangle\simeq\mathbb{Z}_6$.
Consider elements $a=n_1n_2$ and $n=n_1$. Clearly, $a^{\sigma n}=nn_2n_1n^{-1}=n_1n_2=a$, that is $a\in\overline{N}_{\sigma n}$.
As was noted in the case of Torus~6 for $G=G_2(q)$, it is true that $a^6=1$. Hence $K=\langle a \rangle$ is a complement to $\overline{T}_{\sigma n}$ in $\overline{N}_{\sigma n}$.
\textbf{Torus~3.} In this case $w=w_3$. Note that
$$(h_{r_1}(t_1)h_{r_2}(t_2))^{\sigma n_3}=\left(h_{r_1}(t_2^{3^m})h_{r_2}(t_1^{3^{m+1}})\right)^{n_{r_1+r_2}}=
h_{2r_1+r_2}(t_2^{3^m})h_{-3r_1-2r_2}(t_1^{3^{m+1}})=$$
$$h_{r_1}(t_2^{2\cdot3^m})h_{r_2}(t_2^{3^{m+1}})h_{r_1}(t_1^{-3^{m+1}})h_{r_2}(t_1^{-2\cdot3^{m+1}})=
h_{r_1}(t_1^{-3^{m+1}}t_2^{2\cdot3^m})h_{r_2}(t_1^{-2\cdot3^{m+1}}t_2^{3^{m+1}}).$$
This implies that elements of $\overline{T}_{\sigma n_3}$ are given by the following two equalities on $t_1$ and $t_2$:
$$t_1=t_1^{-3^{m+1}}t_2^{2\cdot3^m},\quad t_2=t_1^{-2\cdot3^{m+1}}t_2^{3^{m+1}}.$$
Therefore,
\begin{multline*}
\begin{cases}
t_1^{3^{m+1}+1}=t_2^{2\cdot 3^m} \\
t_1^{2\cdot 3^{m+1}}=t_2^{3^{m+1}-1}
\end{cases}\Leftrightarrow
\begin{cases}
t_1^{3^{m+1}+1}=t_2^{2\cdot 3^m} \\
t_1^{3^{m+1}-1}=t_2^{3^{m}-1}
\end{cases}
\Leftrightarrow
\begin{cases}
t_1^{2}=t_2^{3^m+1} \\
t_1^{3^{m+1}-1}=t_2^{3^{m}-1}
\end{cases}
\\
\Leftrightarrow
\begin{cases}
t_1^{2}=t_2^{3^m+1} \\
t_2^{\frac{3^{2m+1}+2\cdot3^m-1}{2}}=t_2^{3^{m}-1}
\end{cases}\Leftrightarrow
\begin{cases}
t_1^{2}=t_2^{3^m+1} \\
t_2^{\frac{3^{2m+1}+1}{2}}=1
\end{cases}.
\end{multline*}
Hence, $t_2^\frac{q+1}{2}=1$ and $t_1=\pm t_2^\frac{\sqrt{q/3}+1}{2}$.
It follows that $\overline{T}_{\sigma n_3}\simeq\mathbb{Z}_2\times\mathbb{Z}_{\frac{q+1}{2}}$,
where each element decomposes in the direct product as $(t_1,t_2)=(t_1\cdot t_2^{-\frac{\sqrt{q/3}+1}{2}},1)\cdot(t_2^{\frac{\sqrt{q/3}+1}{2}},t_2)$.
Note that $C_{W,\sigma}(w)\simeq\langle w_1w_2 \rangle\simeq\mathbb{Z}_6$.
Consider elements $a=n_1n_2$ and $n=h_2n_3$. Using MAGMA, we see that
$a^{\sigma n}=a.$ Hence $K=\langle a \rangle$ is a complement to $\overline{T}_{\sigma n}$ in $\overline{N}_{\sigma n}$.
\textbf{Torus~4.} In this case $w=w_4$.
Then
$$(h_{r_1}(t_1)h_{r_2}(t_2))^{\sigma n_4}=\left(h_{r_1}(t_2^{3^m})h_{r_2}(t_1^{3^{m+1}})\right)^{n_{2r_1+r_2}}=
h_{-r_1-r_2}(t_2^{3^m})h_{r_2}(t_1^{3^{m+1}})=$$
$$h_{r_1}(t_2^{-3^m})h_{r_2}(t_2^{-3^{m+1}})h_{r_2}(t_1^{3^{m+1}}).$$
From this, we get the following equivalent systems of equations for the parameters $t_1$ and $t_2$:
$$\begin{cases}
t_1=t_2^{-3^m} \\
t_2=t_1^{3^{m+1}}t_2^{-3^{m+1}}
\end{cases}
\Leftrightarrow
\begin{cases}
t_1=t_2^{-3^m} \\
t_2=(t_2^{-3^m})^{3^{m+1}}t_2^{-3^{m+1}}
\end{cases}
\Leftrightarrow
\begin{cases}
t_1=t_2^{-3^m} \\
t_2^{3^{2m+1}+3^{m+1}+1}=1
\end{cases}.
$$
Therefore, the elements of $\overline{T}_{\sigma n_4}$ are parametrized by the set
$\{(z^{-\sqrt{q/3}},z)~|~z\in\overline{\mathbb{F}}_q, z^{q+\sqrt{3q}+1}=1\}$,
so the torus is isomorphic to the cyclic group of order $q+\sqrt{3q}+1$.
Note that $C_{W,\sigma}(w)\simeq\langle w_1w_2 \rangle\simeq\mathbb{Z}_6$.
Consider elements $a=n_1n_2$ and $n=h_2n_4$. Using MAGMA, we see that
$a^{\sigma n}=a.$
Therefore, $K=\langle a \rangle$ is a complement to $\overline{T}_{\sigma n}$ in $\overline{N}_{\sigma n}$.
\begin{table}[H]
\begin{center}
\caption{Splitting of normalizers of maximal tori of ${}^2G_2(q)$\label{table2G_2}}
{\centering
\begin{tabular}{|l|c|l|c|c|}
\hline
& $w$ & Structure of $T$ & $C_{W,\sigma}(w)$ & Splitting\\ \hline
1 & $1\sim(w_1w_2)^k$ & $\{(z,z^{\sqrt{3q}})~|~z^{q-1}=1\}$ & $\mathbb{Z}_2$ & + \\
 & & $T\simeq q-1$ & & \\
2 & $w_1\sim w_2$ & $\{(z,z^{\sqrt{3q}})~|~z^{q-\sqrt{3q}+1}=1\}$ & $\mathbb{Z}_6$ & + \\
 & & $T\simeq q-\sqrt{3q}+1$ & & \\
3 & $w_3\sim w_5$ & $\{(t_1,t_2)~|~t_1=\pm t_2^{(\sqrt{q/3}+1)/2}, t_2^{(q+1)/2}=1\}$ & $\mathbb{Z}_6$ & + \\
& & $T\simeq \mathbb{Z}_2\times\frac{q+1}{2}$ & & \\
4 & $w_4\sim w_6$ & $\{(z^{-\sqrt{q/3}},z)~|~z^{q+\sqrt{3q}+1}=1\}$ & $\mathbb{Z}_6$ & + \\
& & $T\simeq q+\sqrt{3q}+1$ & & \\
\hline
\end{tabular}}
\end{center}
\end{table}
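The four torus orders in Table~\ref{table2G_2} fit together through elementary identities: with $r=\sqrt{3q}=3^{m+1}$ one has $(q-r+1)(q+r+1)=q^2-q+1$ and $(q+1)(q^2-q+1)=q^3+1$. A quick check of ours for small $m$:

```python
for m in range(1, 6):
    q = 3 ** (2 * m + 1)
    r = 3 ** (m + 1)                 # r = sqrt(3 q) is an integer for such q
    assert r * r == 3 * q
    assert (q - r + 1) * (q + r + 1) == q * q - q + 1
    assert (q + 1) * (q * q - q + 1) == q ** 3 + 1
```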
\section{Proof of Theorem~\ref{th} for groups ${}^3D_4(q)$}
In this section, we suppose that $G={}^3D_4(q)$, where $q$ is a power of a prime $p$.
Since the case $p=2$ follows from~\cite[Remark 2.5]{GS3}, we assume that $q$ is odd. The extended Dynkin diagram of type $D_4$ has the following form:
\begin{picture}(250,70)(-170,-30) \put(0,20){\circle*{6}}
\put(0,20){\line(5,-2){50}} \put(50,0){\circle*{6}}
\put(100,20){\circle*{6}}
\put(100,-20){\circle*{6}} \put(50,0){\line(5,2){50}}
\put(50,0){\line(5,-2){50}} \put(0,30){\makebox(0,0){$r_1$}}
\put(0,10){\makebox(0,0){1}} \put(50,10){\makebox(0,0){$r_2$}}
\put(100,30){\makebox(0,0){$r_3$}}
\put(100,10){\makebox(0,0){1}} \put(0,-20){\circle*{6}}
\put(0,-20){\line(5,2){50}} \put(0,-10){\makebox(0,0){$-r_0$}}
\put(0,-30){\makebox(0,0){-1}}
\put(50,-10){\makebox(0,0){2}} \put(100,-10){\makebox(0,0){$r_4$}}
\put(100,-30){\makebox(0,0){1}}
\end{picture}\\
We denote by $\rho$ the symmetry of the Dynkin diagram such that $\rho(r_1)=r_3$, $\rho(r_2)=r_2$, $\rho(r_3)=r_4$, and $\rho(r_4)= r_1$.
We will use the following numbering of roots:
\begin{multline*}
r_5=r_1+r_2, r_6=r_2+r_3, r_7=r_2+r_4, r_8=r_1+r_2+r_3, r_9=r_1+r_2+r_4, \\r_{10}=r_2+r_3+r_4, r_{11}=r_1+r_2+r_3+r_4, r_{12}=r_1+2r_2+r_3+r_4.
\end{multline*}
According to~\cite[2.2.3]{GorLySol}, the endomorphism $\sigma$ acts on generators of $\overline{G}$ in the following way:
$(x_r(t))^\sigma=x_{r^\rho}(t^q)$. In particular,
$$n_r^\sigma=n_{r^\rho} \text{ and } (h_r(t))^\sigma=h_{r^\rho}(t^q).$$
The structure of maximal tori of ${}^3D_4(q)$ was given in~\cite[Proposition~1.2]{DerM} and is presented below in Table~\ref{table3D_4}.
It is well known that the Weyl group $W=\langle w_1,w_2,w_3,w_4\rangle$ of $D_4(q)$ is isomorphic to a subgroup of index $2$ in the group $2^4\rtimes \Sym_4$ and
contains the central involution $w_0=w_1w_3w_4w_{12}$.
The group $W$ has seven $\sigma$-conjugacy classes. We choose representatives
for these classes according to Table~\ref{table3D_4}.
Calculations in MAGMA show that $n_0:=n_1n_3n_4n_{12}$ lies in $Z(\mathcal{T})$. In what follows, we will frequently use this fact.
\begin{table}[H]
\begin{center}
\caption{Splitting of normalizers of maximal tori of $^3D_4(q)$\label{table3D_4}}
{\centering
\begin{tabular}{|l|l|c|l|l|c|}
\hline
& $w$ & $|w|$ & $C_{W,\sigma}(w)$ & Structure of $T$ & Splitting\\ \hline
1 & $1$ & 1 & $D_{12}$ & $\{(t_1,t_2,t_1^q,t_1^{q^2})~|~t_1^{q^3-1}=t_2^{q-1}=1\}$ & + \\
& & & & $T\simeq(q^3-1)\times(q-1)$ & \\ \hline
2 & $w_{12}$ & 2 & $\mathbb{Z}_2\times \mathbb{Z}_2$ & $\{(t,t^{1-q^3},t^{q^4},t^{q^2})~|~t^{(q^3-1)(q+1)}=1\}$ & + \\
& & & & $T\simeq(q^3-1)(q+1)$ & \\ \hline
3 & $w_0w_{12}$ & 2 & $\mathbb{Z}_2\times \mathbb{Z}_2$ & $\{(t,t^{q^3+1},t^{q^4},t^{q^2})~|~t^{(q^3+1)(q-1)}=1\}$ & + \\
& & & & $T\simeq(q^3+1)(q-1)$ & \\ \hline
4 & $w_{12}w_2$ & 3 & $\operatorname{SL}_2(3)$ & $\{(t_1,t_2,t_1^qt_2,(t_1^{-1}t_2)^{q+1})~|~t_i^{q^2+q+1}=1\}$ & + \\
& & & & $T\simeq(q^2+q+1)\times(q^2+q+1)$ & \\ \hline
5 & $w_0w_{12}w_2$ & 3 & $\operatorname{SL}_2(3)$ & $\{(t_1,t_2,t_1^{-q}t_2,(t_1t_2^{-1})^{q-1})~|~t_i^{q^2-q+1}=1\}$ & + \\
& & & & $T\simeq(q^2-q+1)\times(q^2-q+1)$ & \\ \hline
6 & $w_1w_2$ & 3 & $\mathbb{Z}_4$ & $\{(t,t^{q^3+1},t^q,t^{q^2})~|~t^{q^4-q^2+1}=1\}$ & + \\
& & & & $T\simeq q^4-q^2+1$ & \\ \hline
7 & $w_0$ & 2 & $D_{12}$ & $\{(t_1,t_2,t_1^{-q},t_1^{q^2})~|~t_1^{q^3+1}=t_2^{q+1}=1\}$ & + \\
& & & & $T\simeq(q^3+1)\times(q+1)$ & \\ \hline
\end{tabular}}
\end{center}
\end{table}
For each $\sigma$-conjugacy class of maximal tori, we present a complement in the corresponding algebraic normalizer.
\textbf{Tori 1 and 7.} In this case $w=1$ or $w=w_0$, respectively.
Moreover, we see that $C_{W,\sigma}(w)\simeq\langle w_2, w_1w_3w_4 \rangle\simeq D_{12}$.
Define $n=1$ if $w=1$ and $n=n_0$ if $w=w_0$.
Consider elements $a=h_1h_3h_4n_2$ and $b=h_2n_1n_3n_4$.
Since
$$n_2^{\sigma n}=n_2^n=n_2,\quad (n_1n_3n_4)^{\sigma n}=n_3n_4n_1^n=n_1n_3n_4,$$
we infer that $n_2, n_1n_3n_4\in\overline{N}_{\sigma n}$.
According to Table~\ref{table3D_4}, it is true that $h_2$, $h_1h_3h_4\in\overline{T}_{\sigma n}$. Therefore, $a,b\in\overline{N}_{\sigma n}$. Using MAGMA, we find that $a^2=b^2=1$ and $(ab)^6=1$. This implies that $K=\langle a,b \rangle$ is a complement to $\overline{T}_{\sigma n}$ in $\overline{N}_{\sigma n}$.
\textbf{Tori 2 and 3.} In this case $w=w_{12}$ and $w=w_0w_{12}$, respectively.
Moreover, we see that $C_{W,\sigma}(w)\simeq\langle w_0,w_{12} \rangle\simeq \mathbb{Z}_2\times \mathbb{Z}_2$.
Define $n=n_{12}$ and $\varepsilon=-$ if $w=w_{12}$, and $n=n_0n_{12}$ and $\varepsilon=+$ if $w=w_0w_{12}$.
Consider elements $\alpha, \beta\in\overline{\mathbb{F}}_p$ such that $\alpha^{\frac{((\varepsilon{q})^3+1)(\varepsilon{q}-1)}{2}}=-1$ and $\beta=\alpha^{\frac{((\varepsilon{q})^3+1)}{2}}$. Define elements
$$H_1=(\alpha, \alpha^{(\varepsilon{q})^3+1}, \alpha^{q^4}, \alpha^{q^2}),\quad H_2=(\beta, \beta^{(\varepsilon{q})^3+1}, \beta^{q^4}, \beta^{q^2}).$$
According to~\cite[Table~1.1]{DerM}, we conclude that $H_1, H_2\in\overline{T}_{\sigma n}$.
Consider elements $a=H_1n_0$ and $b=H_2n_{12}$. Since
$$n_0^{\sigma n}=n_0^n=n_0,\quad n_{12}^{\sigma n}=n_{12}^n=n_{12},$$
we infer that $n_0, n_{12}\in\overline{N}_{\sigma n}$.
It follows from~\cite[Lemma~3.2]{GS2} that, for $H=(\lambda_1,\lambda_2,\lambda_3,\lambda_4)$,
$$(Hn_{12})^2=(-\lambda_1^2\lambda_2^{-1}, 1, -\lambda_3^2\lambda_2^{-1}, -\lambda_4^2\lambda_2^{-1}).$$
Since $\beta^{\varepsilon{q}}=-\beta$, we find that $\beta=\beta^{q^2}=\beta^{q^4}=-\beta^{(\varepsilon{q})^3}$. Therefore,
$$b^2=(H_2n_{12})^2=(-\beta^{1-(\varepsilon{q})^3}, 1, -\beta^{2q^4-(\varepsilon{q})^3-1}, -\beta^{2q^2-(\varepsilon{q})^3-1})=(1, 1, 1, 1)=1.$$
By \cite[Lemma~3.3]{GS2}, we see that
$$[a, b]=1 \Leftrightarrow H_1^{-1}H_1^{n_{12}}=H_2^{-1}H_2^{n_0}=H_2^{-2}.$$
Since
$$H^{n_{12}}=(\lambda_1\lambda_2^{-1}, \lambda_2^{-1}, \lambda_3\lambda_2^{-1}, \lambda_4\lambda_2^{-1}),$$
it is true that
$$H_1^{-1}H_1^{n_{12}}=(\alpha^{-((\varepsilon{q})^3+1)}, \alpha^{-2((\varepsilon{q})^3+1)}, \alpha^{-((\varepsilon{q})^3+1)}, \alpha^{-((\varepsilon{q})^3+1)})=(\beta^{-2}, \beta^{-4}, \beta^{-2}, \beta^{-2}).$$
On the other hand, $\beta^{(\varepsilon{q})^3+1}=-\beta^2$ and
$$H_2^{-2}=(\beta^{-2}, \beta^{-2((\varepsilon{q})^3+1)}, \beta^{-2q^4}, \beta^{-2q^2})=
(\beta^{-2}, (-\beta^2)^{-2}, \beta^{-2}, \beta^{-2})=(\beta^{-2}, \beta^{-4}, \beta^{-2}, \beta^{-2}),$$
so the two sides coincide and $[a,b]=1$.
Note that for every $H$ it is true that $(Hn_0)^2=n_0^2=1$.
Therefore, $K=\langle a, b \rangle\simeq \mathbb{Z}_2\times \mathbb{Z}_2$ is a complement to $\overline{T}_{\sigma n}$ in $\overline{N}_{\sigma n}$.
\textbf{Tori 4 and 5.} In this case $w=w_{12}w_2$ or $w=w_0w_{12}w_2$, respectively.
We have that $C_{W,\sigma}(w)\simeq\operatorname{SL}_2(3)$.
Note that $\operatorname{SL}_2(3)$ has the following representation:
$$\operatorname{SL}_2(3)\simeq\langle a,b~|~a^4, b^3, aba^{-1}bab, (b^{-1}a)^3\rangle.$$
Moreover, the elements $w_1w_7$ and $w_1w_2w_3w_7$ generate $C_{W,\sigma}(w)$ and satisfy this set of relations.
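This presentation can be tested directly. The following Python sketch realizes $\operatorname{SL}_2(3)$ as the determinant-one $2\times2$ matrices over $\mathbb{F}_3$ and checks the four relators on one explicit pair of generators; the matrices chosen are a standard generating pair of our own choosing, not the elements $w_1w_7$, $w_1w_2w_3w_7$ of the text.

```python
from itertools import product

p = 3
I = ((1, 0), (0, 1))

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def inv(M):  # adjugate = inverse for a determinant-one matrix
    (x, y), (z, w) = M
    return ((w % p, -y % p), (-z % p, x % p))

def power(M, k):
    R = I
    for _ in range(k):
        R = mul(R, M)
    return R

# SL_2(3) = all 2x2 matrices over F_3 of determinant 1; it has 24 elements.
G = [((a, b), (c, d)) for a, b, c, d in product(range(p), repeat=4)
     if (a * d - b * c) % p == 1]
assert len(G) == 24

# Candidate generators (an explicit choice, not taken from the paper):
a = ((0, 2), (1, 0))  # order 4
b = ((1, 1), (0, 1))  # order 3

# They satisfy the four relators of the presentation:
assert power(a, 4) == I
assert power(b, 3) == I
assert mul(mul(mul(mul(mul(a, b), inv(a)), b), a), b) == I  # a b a^-1 b a b
assert power(mul(inv(b), a), 3) == I                        # (b^-1 a)^3

# and they generate the whole group:
S, frontier = {I}, [I]
while frontier:
    M = frontier.pop()
    for g in (a, b, inv(a), inv(b)):
        N = mul(M, g)
        if N not in S:
            S.add(N)
            frontier.append(N)
assert len(S) == 24
```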
Denote $n=n_{12}n_2$ if $w=w_{12}w_2$ and $n=n_0n_{12}n_2$ if $w=w_0w_{12}w_2$.
Consider elements $a=n_1n_2n_3n_7$ and $b=n_1n_7$.
Since
$$a^{\sigma n}=(n_3n_2n_4n_5)^n=a,\quad b^{\sigma n}=(n_3n_5)^n=b,$$
we infer that $a, b\in\overline{N}_{\sigma n}$.
Using MAGMA, we see that $a^4=b^3=aba^{-1}bab=(b^{-1}a)^3=1$. Therefore, $K=\langle a,b \rangle$ is a complement to $\overline{T}_{\sigma n}$ in $\overline{N}_{\sigma n}$.
\textbf{Torus~6.} In this case $w=w_1w_2$ and $C_{W,\sigma}(w)\simeq\langle w_1w_2w_3w_7 \rangle\simeq \mathbb{Z}_4$.
Consider $n=n_1n_2$ and $a=n_1n_2n_3n_7$. Since
$a^{\sigma n}=(n_3n_2n_4n_5)^n=a,$
we infer that $a\in\overline{N}_{\sigma n}$.
Using MAGMA, we find that $a^4=1$ and hence
$K=\langle a\rangle$ is a complement to $\overline{T}_{\sigma n}$ in $\overline{N}_{\sigma n}$.
\section{Proof of Theorem~\ref{th_Omega}}
In this section, we suppose that $G=\mathrm{P}\Omega_{2n}^-(q)$, where $n\geqslant4$ and $q$ is a power of a prime $p$.
Notice that~\cite[Remark 2.5]{GS3} is true for all groups of Lie type and therefore a maximal torus $T$ has a complement in its algebraic normalizer $N(G,T)$ in the case of even characteristic. In what follows, we assume that $q$ is odd.
Recall some definitions and notations concerning orthogonal groups from~\cite{ButGre}.
The group $\GO_{2n+1}(\overline{\mathbb{F}}_p, Q)$ is the orthogonal group of dimension $2n+1$ over $\overline{\mathbb{F}}_p$ associated with a non-singular quadratic form $Q$, where $Q(v)=x_0^2+x_1x_{-1}+\ldots+x_nx_{-n}$. We fix the basis $\{x,e_1,\ldots,e_n,f_1,\ldots,f_n\}$ of a vector space $V$ corresponding to $Q$. We number the rows and columns of matrices in $\GO_{2n+1}(\overline{\mathbb{F}}_p)$ in the order $0,1,2,\ldots,n,-1,-2,\ldots,-n$. Define $\overline{G}$ as the subgroup of $\overline{H}=\SO_{2n+1}(\overline{\mathbb{F}}_p)$ consisting of all matrices of the form $\bd(1, A)$, where $A$ is a matrix of size $2n\times 2n$. Then $\overline{G}\simeq\SO_{2n}(\overline{\mathbb{F}}_p)$.
A subgroup $\overline{T}$ of $\overline{H}$ consisting of all diagonal matrices of the form $\bd(1,D,D^{-1})$ is a maximal torus of groups $\overline{H}$ and $\overline{G}$.
The group $\overline{N}=N_{\overline{H}}(\overline{T})$ is a subgroup of the group of monomial matrices. There exists an embedding of the Weyl group $W_{\overline{H}}=N_{\overline{H}}(\overline{T})/\overline{T}$ into the permutation group on the set $\{1,2,\ldots,n,-1,-2,\ldots,-n\}$. The image of $W_{\overline{H}}$ under this embedding coincides with the group $\Sl_n$ of all permutations $\varphi$ such that $\varphi(-i)=-\varphi(i)$.
If we drop the signs of the elements of the set $\{1,2,\ldots,n,-1,-2,\ldots,-n\}$, we get a homomorphism from $\Sl_n$ onto $\Sym_n$. Suppose $\varphi\in\Sl_n$ is mapped to a cycle $(i_1i_2\ldots i_k)$ and fixes all other elements. If $\varphi(i_k)=i_1$, then $\varphi$ is called a {\it positive cycle of length $k$}; if $\varphi(i_k)=-i_1$, then $\varphi$ is called a {\it negative cycle of length $k$}. An arbitrary element $\varphi$ of $\Sl_n$ is uniquely expressed as a product of disjoint positive and negative cycles. The lengths of the cycles together with their signs form a set of integers called the {\it cycle-type} of $\varphi$.
The Weyl group $W_{\overline{G}}=N_{\overline{G}}(\overline{T})/\overline{T}$ is isomorphic to a subgroup $\Sl_n^+$ of $\Sl_n$ consisting of all permutations whose decomposition into disjoint cycles contains an even number of negative cycles.
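These notions can be sketched in a few lines of Python; the encoding (a signed permutation is a tuple $p$ with $p[i]=\pm j$ meaning $i+1\mapsto\pm j$) and the function name are ours.

```python
def cycle_type(p):
    """Cycle-type of a signed permutation p, with p[i] = ±j meaning
    i+1 -> ±j.  A cycle is negative iff the product of the signs along
    it is -1; the type is returned as a list of signed lengths."""
    n, seen, out = len(p), [False] * len(p), []
    for i in range(n):
        if not seen[i]:
            length, sign, j = 0, 1, i
            while not seen[j]:
                seen[j] = True
                length += 1
                sign *= 1 if p[j] > 0 else -1
                j = abs(p[j]) - 1
            out.append(sign * length)
    return sorted(out, reverse=True)

# (1 2 3)(4,-4): one positive 3-cycle and one negative 1-cycle.
assert cycle_type((2, 3, 1, -4)) == [3, -1]
# One (an odd number of) negative cycle(s), so this element lies outside Sl_n^+.
assert sum(1 for c in cycle_type((2, 3, 1, -4)) if c < 0) % 2 == 1
```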
Let $n_0=\bd(-1,A_0)$, where $A_0$ is the permutation matrix corresponding to the negative cycle $(n,-n)$. Then $n_0\in N_{\overline{H}}(\overline{T})$ and $W_{\overline{H}}=W_{\overline{G}}\cup w_0W_{\overline{G}}$, where $w_0=\pi(n_0)$.
Let $\sigma$ be a Steinberg endomorphism of $\overline{H}$, acting by the rule $(a_{ij})\mapsto(a_{ij}^q)$, $\Int n_0$ be a conjugation by the element $n_0$, and $\sigma_1=\sigma\circ\Int n_0$. Then $G=\overline{G}_{\sigma_1}\simeq\SO_{2n}^-(q)$ and $O^{p'}(G)\simeq\Omega_{2n}^-(q)$. According to~\cite[\S4]{ButGre}, two elements $w_1$ and $w_2$ are $\sigma_1$-conjugate in $W_{\overline{G}}$ if and only if $w_0w_1$ and $w_0w_2$ are conjugate by an element of $W_{\overline{G}}$. In particular,
$$C_{W_{\overline{G}},\sigma_1}(w)=C_{W_{\overline{G}},\sigma}(w_0w).$$
We say that a maximal torus $(\overline{T}^g)_{\sigma_1}$ of $G$ with $\pi(g^\sigma g^{-1})=w$ corresponds to the element $w_0w$.
For brevity, we assume $W=W_{\overline{G}}$ and $\overline{N}=N_{\overline{G}}(\overline{T})$.
According to Proposition~\ref{prop2.5}, we obtain that
$$({\overline{T}}^g)_{\sigma_1}=(\overline{T}_{\sigma_1 n})^g=(\overline{T}_{\sigma n_0n})^g, \quad (N_{\overline{G}}({\overline{T}}^g))_{\sigma_1}=(\overline{N}^g)_{\sigma_1}=(\overline{N}_{\sigma n_0n})^g.$$
Hence,
$$C_{W,\sigma}(w_0w)= C_{W,\sigma_1}(w)\simeq (N_{\overline{G}}({\overline{T}}^g))_{\sigma_1}/({\overline{T}}^g)_{\sigma_1}= (\overline{N}_{\sigma n_0n})^g/(\overline{T}_{\sigma n_0n})^g\simeq\overline{N}_{\sigma n_0n}/\overline{T}_{\sigma n_0n}.$$
Maximal tori of $\SO_{2n}^-(q)$ have the following description.
\begin{prop}{\cite{ButGre}}\label{prop2}
Let $(\overline{T}^g)_{\sigma_1}$ be a maximal torus of $\overline{G}_{\sigma_1}=\SO_{2n}^-(q)$ corresponding to an element $w_0w$ of the Weyl group with the cycle-type $(\overline{n_1})\ldots(\overline{n_k})(n_{k+1})\ldots(n_m)$, where $k$ is odd. Put $\varepsilon_i=-$ if $i\leqslant k$ and $\varepsilon_i=+$ otherwise. Let $T$ be a subgroup of $\overline{G}$ consisting of all
diagonal matrices of the form
$$\bd(1, D_1, D_2,\ldots,D_m, D_1^{-1}, D_2^{-1},\ldots, D_m^{-1}),$$
where $D_i=\diag(\lambda_i, \lambda_i^q,\ldots, \lambda_i^{q^{n_i-1}})$ and $\lambda_i^{q^{n_i}-\varepsilon_i1}=1$.
Then $\overline{T}_{\sigma_1 n}=T$.
\end{prop}
When $k$ is even, the subgroup $T$ from Proposition~\ref{prop2} is isomorphic to a maximal torus of $\overline{G}_{\sigma}=\SO_{2n}^+(q)$ and the results for this case were obtained in~\cite{Galt2}. In order to prove Theorem~\ref{th_Omega} we will refer to these results.
First of all, we note that there is an error in~\cite[Proposition~5]{Galt2}. Since the quadratic form $Q$ has already been fixed, we have $Q(x)=1$ (instead of $Q(x)=-1$ as in~\cite[Proposition~5]{Galt2}), and the statement should read as follows.
\begin{prop}\label{Omega}
Let $g\in\SO_{2n+1}(q)$ be such that
$$g(e_j)=\alpha f_j,\; g(f_j)=\alpha^{-1}e_j,\; g(x)=-x,\; g(e_i)=e_i,\; g(f_i)=f_i,$$
for $1\leqslant i \leqslant n$ with $i\neq j$, where $\alpha\in\mathbb{F}_q^*$. Then $g\in\Omega_{2n+1}(q)$ if and only if $(-\alpha)\in(\mathbb{F}_q^*)^2$.
\end{prop}
Because of this, the proofs of some lemmas in~\cite{Galt2} must be corrected, but the main results of~\cite{Galt2} remain valid. Here are the required corrections.
According to Proposition~\ref{Omega}, the spinor norm $\theta$ of the element $$\tau=(1,-1)(2,-2)\ldots(k,-k)$$
is equal to $\theta(\tau)=(-1)^k$.
In the proof of Lemma~13 from~\cite{Galt2} we should define the elements $u_i$ as follows
\[{u}_i=\begin{cases} \bd((-1)^{n_i},I_n,I_n)\tau_i & \text{if } n_i \text{ is even} \\
v_0\bd((-1)^{n_i},I_n,I_n)\tau_i & \text{if } n_i \text{ is odd}
\end{cases}.
\]
Then the spinor norm satisfies $\theta(u_i)=1$ for all $n_i$. The rest of the proof of Lemma~13 remains unchanged.
In the proof of Lemma~18 from~\cite{Galt2} we should define the elements $u_i$ as follows
$$u_1=\bd(1,T_1,T_1^{-1})\tau_1, u_2=\bd(1,T_2,T_2^{-1})\tau_2, u_3=\bd(1,T_3,T_3^{-1})\tau_3, u_4=\bd(1,T_4,T_4^{-1})\tau_4,$$
where $$T_1=\bd(-I_1,I_1,I_3,I_3), T_2=\bd(I_1,-I_1,I_3,I_3), T_3=\bd(I_1,I_1,-I_3,I_3), T_4=\bd(I_1,I_1,I_3,-I_3),$$
and $I_j$ is the identity $n_j\times n_j$ matrix. Then the spinor norm satisfies $\theta(u_i)=\det(T_i)(-1)^{n_i}=1$ for all $i$. The rest of the proof of Lemma~18 remains unchanged.
In the proof of Lemma~23 from~\cite{Galt2} we should define the elements $t_1, t_2$ as follows
$$t_1=T_1\varpi_1,\quad t_2=T_2\varpi_2, \text{ where} \quad T_1=T_2=\bd(-1,-I_1, I_1, -I_1, I_1).$$
Then the spinor norm satisfies $\theta(t_i)=\det(-I_1)\theta(\varpi_i)=(-1)^{n_i}(-1)^{n_i}=1$ for $i=1,2$. The rest of the proof of Lemma~23 remains unchanged.
\vspace{0.5em}
Let us proceed to the proof of Theorem~\ref{th_Omega}.
Lemmas~15, 16, and 19 of~\cite{Galt2}, proved there for even $k$, remain true for odd $k$, because the parity is not used in the proofs. Thus, Lemmas~15, 16, and 19 imply the theorem for $m\geqslant5$.
Let $m=4$. In this case an element of the Weyl group has one of two cycle-types:
$$(\overline{n_1})(n_2)(n_3)(n_4) \text{ or } (\overline{n_1})(\overline{n_2})(\overline{n_3})(n_4).$$
If all $n_i$ are even, then the result follows from~\cite[Lemma~19]{Galt2}. If the number of odd values among $n_1,n_2,n_3,n_4$ is not equal to two, or if there are exactly two odd values and they are distinct, then the result follows from~\cite[Lemma~15(2)]{Galt2}. The remaining case of two equal odd values is treated in the following lemma.
\begin{lemma}\label{m=4}
Let $T$ be a maximal torus of $G=\Omega_{2n}^-(q)$ corresponding to an element $w_0w$ of the Weyl group with the cycle-type $(\overline{n_1})(n_2)(n_3)(n_4)$ or $(\overline{n_1})(\overline{n_2})(\overline{n_3})(n_4)$. Let $\widetilde{T}$ and $\widetilde{N}$ be the images of $T$ and $N(G,T)$ in $\widetilde{G}=\mathrm{P}\Omega_{2n}^-(q)$, respectively. If $q\equiv3\pmod4$, $n_1,n_4$ are even, and $n_2=n_3$ is odd, then $\widetilde{T}$ does not have a complement in~$\widetilde{N}$.
\end{lemma}
\begin{proof}
Assume on the contrary that $\widetilde{T}$ has a complement $\widetilde{H}$ in $\widetilde{N}$.
Let $H$ be a preimage of $\widetilde{H}$ in $N$.
Since $n_2=n_3$, we have $\chi_2\in C_W(w_0w)$. Since $n_1$ and $n_4$
are even, we get that $\tau_1,\tau_4\in C_W(w_0w)$.
Let $v_2,u_1,u_4$ be preimages of $\chi_2,\tau_1,\tau_4$ in $H$. Then the element $v_2$ has the form
$$v_2=\bd(D_1,D_2,D_3,D_4,D_1^{-1},D_2^{-1},D_3^{-1},D_4^{-1})\chi_2,$$ where
$D_i=\diag(\lambda_i, \lambda_i^q,\ldots, \lambda_i^{q^{n_i-1}})$.
Since $\widetilde{H}$ is a complement for $\widetilde{T}$, the following equalities must hold:
$$v_2^{2}=\varepsilon I,\quad v_2u_1=\varepsilon_1 u_1v_2,\quad v_2u_4=\varepsilon_4 u_4v_2, \text{ where } \varepsilon,\varepsilon_1,\varepsilon_4\in\{1,-1\}.$$
Since $m=4$, we find that $\varepsilon_1=\varepsilon_4=1$ and hence $\lambda_1^2=\lambda_4^2=1$ by~\cite[Lemma~4]{Galt2}.
Applying~\cite[Lemma~8]{Galt2} to the equality $v_2^{2}=\varepsilon I$, we get that $D_2D_3=\varepsilon I_1$, $\lambda_1^2=\lambda_4^2=\varepsilon$. Hence, $\varepsilon=1$ and
$$\det(v_2|_{V_0})=\det(D_1D_2D_3D_4)\det(\chi_2|_{V_0})=\det(D_2D_3)\lambda_1^{n_1}\lambda_4^{n_4}(-1)^{n_2}= -1.$$ Since $q\equiv3\pmod4$, we have $\det(v_2|_{V_0})\notin(\mathbb{F}_q^*)^2$ and $v_2\notin\Omega_{2n}^-(q)$; a contradiction.
\end{proof}
\noindent
Let $m=3$. In this case an element of the Weyl group has one of two cycle-types:
$$(n_1)(n_2)(\overline{n_3}) \text{ or } (\overline{n_1})(\overline{n_2})(\overline{n_3}).$$
According to~\cite[Lemma~19]{Galt2} and~\cite[Lemma~15(2)]{Galt2}, it suffices to consider the case $n_1=n_2$ is odd and $n_3$ is even.
\begin{lemma}\label{m=3}
Let $T$ be a maximal torus of $G=\Omega_{2n}^-(q)$ corresponding to an element $w_0w$ of the Weyl group with the cycle-type $(n_1)(n_2)(\overline{n_3})$ or $(\overline{n_1})(\overline{n_2})(n_3)$. Let $\widetilde{T}$ and $\widetilde{N}$ be the images of $T$ and $N(G,T)$ in $\widetilde{G}=\mathrm{P}\Omega_{2n}^-(q)$, respectively. If $q\equiv3\pmod4$, $n_1=n_2$ is odd, and $n_3$ is even, then $\widetilde{T}$ does not have a complement in~$\widetilde{N}$.
\end{lemma}
\begin{proof}
Assume on the contrary that $\widetilde{T}$ has a complement $\widetilde{H}$ in $\widetilde{N}$.
Let $H$ be a preimage of $\widetilde{H}$ in $N$. Since $\varpi_3,\tau_1\notin W$, we have $\varpi_3\tau_1\in W$ and
$$\langle\chi_1,\varpi_3\tau_1\rangle\leqslant C_W(w_0w).$$
Let $v$ and $t$ be preimages of $\chi_1$ and $\varpi_3\tau_1$ in $H$, respectively.
Then the elements $v$ and $t$ have the form
\begin{center}
$v=\bd(D_1,D_2,D_3,D_1^{-1},D_2^{-1},D_3^{-1})\chi_1, \quad D_i=\diag(\lambda_i, \lambda_i^q, \ldots, \lambda_i^{q^{n_i-1}}),$
\end{center}
\begin{center}
$t=\bd(U_1,U_2,U_3,U_1^{-1},U_2^{-1},U_3^{-1})\varpi_3\tau_1, \quad U_i=\diag(\mu_i, \mu_i^q, \ldots, \mu_i^{q^{n_i-1}})$.
\end{center}
Since $\widetilde{H}$ is a complement for $\widetilde{T}$, we infer that the
following equalities must hold:
$$v^{2}=\varepsilon I, vt=\delta tv, \quad \text{ where } \varepsilon,\delta\in\{1,-1\}.$$
Applying~\cite[Lemma~8]{Galt2} to the equality $v^{2}=\varepsilon I$, we get that $D_1D_2=\varepsilon I_1$ and $\lambda_3^2=\varepsilon$.
If $\delta=-1$, then by~\cite[Lemma 3(2)]{Galt2} we get that $n_3$ is odd; a contradiction. Thus,
$\delta=1$ and it follows from \cite[Lemma 3(1)]{Galt2} that $\lambda_3^2=1$.
Hence, $\varepsilon=1$ and $D_1D_2=1$. Since $q\equiv3\pmod4$, we have
$$\theta(v)=\det(D_1D_2D_3)\theta(\chi_1)=\lambda_3^{n_3}(-1)^{n_1}=-1\notin(\mathbb{F}_q^*)^2$$
and $v\notin\Omega_{2n}^-(q)$, a contradiction.
\end{proof}
When $m=2$, taking into account~\cite[Lemma~15(2)]{Galt2}, it remains to consider the following case.
\begin{lemma}\label{m=2}
Let $T$ be a maximal torus of $G=\Omega_{2n}^-(q)$ corresponding to an element $w_0w$ of the Weyl group with the cycle-type
$(\overline{n_1})(n_2)$. Let $\widetilde{T}$ and $\widetilde{N}$ be the images of $T$ and $N(G,T)$ in $\widetilde{G}=\mathrm{P}\Omega_{2n}^-(q)$, respectively. If $q\equiv3\pmod4$ and $n_1=n_2$ is even, then $\widetilde{T}$ does not have a complement in~$\widetilde{N}$.
\end{lemma}
\begin{proof}
Assume on the contrary that $\widetilde{T}$ has a complement $\widetilde{H}$ in $\widetilde{N}$.
Let $H$ be a preimage of $\widetilde{H}$ in $N$. Since
$n_1,n_2$ are even, we infer that $\tau_1,\tau_2\in C_W(w_0w)$ and $C_W(w_0w)\geqslant\langle\omega_2,\tau_1,\tau_2\rangle$.
We are now in the setting of the proof of~\cite[Lemma~21]{Galt2}, where a contradiction is obtained.
\end{proof}
The remaining case $m=1$ follows from~\cite[Lemma~15(3)]{Galt2}.
| {
"timestamp": "2022-12-26T02:07:40",
"yymm": "2212",
"arxiv_id": "2212.12199",
"language": "en",
"url": "https://arxiv.org/abs/2212.12199",
"abstract": "Let $G$ be a finite group of Lie type and $T$ a maximal torus of $G$. In this paper we complete the study of the question of the existence of a complement for the torus $T$ in its algebraic normalizer $N(G,T)$. It is proved that every maximal torus of the group $G\\in\\{G_2(q), {}^2G_2(q), {}^3D_4(q)\\}$ has a complement in its algebraic normalizer. The remaining twisted classical groups ${}^2A_n(q)$ and ${}^2D_n(q)$ are also considered.",
"subjects": "Group Theory (math.GR)",
"title": "On splitting of the normalizer of a maximal torus in finite groups of Lie type"
} |
https://arxiv.org/abs/1908.04365 | On $q$-deformed real numbers | We associate a formal power series with integer coefficients to a positive real number; we interpret this series as a "$q$-analogue of a real." The construction is based on the notion of $q$-deformed rational number introduced in arXiv:1812.00170. Extending the construction to negative real numbers, we obtain certain Laurent series. |
\section{Introduction}\label{IntSec}
We take a new and experimental route to introduce a certain version of ``$q$-deformed real numbers'',
extending $q$-deformations of rationals introduced in~\cite{MGOqR}.
Given a real number~$x\geq0$, we will construct a formal power series with integer coefficients associated with~$x$.
There is no explicit formula to determine the coefficients of this series,
but there is an algorithm to calculate them.
Our construction is completely different from the classically known $q$-deformations of a number~$x\in\mathbb{R}$,
defined by the formulas
$\frac{q^x-1}{q-1}$ (or $\frac{q^x-q^{-x}}{q-q^{-1}}$) that do not give power series with integer coefficients.
The only case where our construction coincides with the classical one is that of integers.
Throughout this paper, we always use the Gauss definition:
$$
\left[a\right]_{q}=\frac{q^a-1}{q-1},
\qquad\qquad
a\in\mathbb{Z},
$$
that gives the familiar polynomials
$\left[a\right]_{q}=1+q+\cdots+q^{a-1}$
and $\left[-a\right]_{q}=-q^{-1}-q^{-2}-\cdots-q^{-a}$, for~$a\in\mathbb{N}$.
All our constructions will be in accordance with these formulas.
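As a quick illustration, the two displayed formulas can be reproduced in a computer-algebra system; the following Python/SymPy sketch (the function name \texttt{qint} is ours) checks them for $a=3$.

```python
import sympy as sp

q = sp.symbols('q')

def qint(a):
    """Gauss q-analogue [a]_q = (q^a - 1)/(q - 1), for any integer a."""
    return (q**a - 1) / (q - 1)

# [3]_q = 1 + q + q^2 and [-3]_q = -q^{-1} - q^{-2} - q^{-3}, as above:
assert sp.simplify(qint(3) - (1 + q + q**2)) == 0
assert sp.simplify(qint(-3) - (-q**-1 - q**-2 - q**-3)) == 0
```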
It is of course too early to discuss possible applications of the $q$-deformed real numbers,
since we do not know sufficiently general properties of the power series we obtain.
However, the existence of the procedure is quite surprising and the examples are captivating.
The $q$-deformation of a rational number~$\frac{r}{s}$
is a quotient of two polynomials:
$$
\left[\frac{r}{s}\right]_{q}=\frac{\mathcal{R}(q)}{\mathcal{S}(q)},
$$
where~$\mathcal{R}$ and~$\mathcal{S}$ both depend on~$r$ and~$s$ (see~\cite{MGOqR}).
In this paper, we represent these rational functions as Taylor series at $q=0$.
The definition of $q$-reals is as follows.
Let first $x\geq1$ be an irrational number, and~$(x_n)_{n\geq1}$ any sequence of rational numbers that converges to~$x$.
We $q$-deform the sequence~$(x_n)_{n\geq1}$ to
obtain a sequence of rational functions:
$\left[x_1\right]_q,\left[x_2\right]_q,\left[x_3\right]_q,\ldots$
For every $n\geq1$, we
consider the Taylor expansion of the rational function~$\left[x_n\right]_q$ at~$q=0$:
\begin{equation}
\label{TSEq}
\left[x_n\right]_q=:\sum_{k\geq0}\varkappa_{n,k}\,q^k.
\end{equation}
Abusing the notation, we use the same name, $\left[x_n\right]_q$, for the Taylor series.
The $q$-deformation of~$x$ is the series
\begin{equation}
\label{TSEqBis}
\left[x\right]_q:=\sum_{k\geq0}\varkappa_k\,q^k,
\qquad\hbox{where}\qquad
\varkappa_k=\lim_{n\to\infty}\varkappa_{n,k}.
\end{equation}
The existence of the limit and its independence of the choice of the converging sequence~$(x_n)_{n\geq1}$
is guaranteed by the following theorem.
\begin{thm}
\label{ConvThm}
Given an irrational real number $x\geq1$, for every~$k\geq0$ the coefficients~$\varkappa_{n,k}$ of the Taylor series~\eqref{TSEq} stabilize as~$n$ grows.
Moreover, the limit coefficients~$\varkappa_k$ in~\eqref{TSEqBis} are integers that do not depend on the choice of the converging sequence of rationals $(x_n)_{n\geq1}$.
\end{thm}
This statement was first observed in computer experiments, but a simple proof was then found.
Note that the coefficients of the polynomials in the numerator and denominator of
the sequence of rational functions $\left[x_n\right]_q$ do not stabilize:
at every fixed power of~$q$ they grow without bound as $n$ increases.
In practice, we always construct $q$-deformed real numbers using continued fractions.
Let $x\geq1$ be a real number, and let
$x=\left[a_1,a_2,a_3,\ldots\right],$
where the~$a_i$ are positive integers,
be its continued fraction expansion.
The sequence of rational numbers
$$
x_n:=\left[a_1,\ldots , a_n\right]
$$
approximates~$x$;
it is called the sequence of convergents.
In this case the stabilization phenomenon
of Theorem~\ref{ConvThm} can be controlled more precisely.
\begin{prop}
\label{TechLem}
Let $x\geq1$ be an irrational real number.
The Taylor expansions at $q=0$ of two consecutive $q$-deformed
convergents of the continued fraction of $x$,
namely of $x_{n-1}=[a_1,\ldots,a_{n-1}]$ and $x_n=[a_1,\ldots,a_n]$, have their first $a_1+\cdots+a_n-1$ terms identical,
while the coefficients of $q^{a_1+\cdots+a_n-1}$ differ by~$1$.
\end{prop}
With the help of a computer program, we carried out a number of tests
calculating $q$-deformations of known mathematical constants,
from the simplest golden ratio to the transcendental $e$ and~$\pi$, and checking their properties.
The most pleasant surprise for us was the appearance of the sequence
of generalized Catalan numbers (sequence A004148 of \cite{OEIS})
as coefficients~$\varkappa_k$ of the deformed golden ratio.
This remarkable ``coincidence'' and known properties of A004148 allowed us to conjecture
(and eventually prove) several properties of quadratic irrationals.
We know only very few general properties of $q$-deformed real numbers.
One of them is the action of the translation group~$\mathbb{Z}$, described by the following formulas:
\begin{equation}
\label{TransEq}
\left[x+1\right]_q=q\left[x\right]_q+1,
\qquad\qquad
\left[x-1\right]_q:=\frac{\left[x\right]_q-1}{q}.
\end{equation}
Note that the second equation in~\eqref{TransEq} allows us to extend our $q$-deformations
to the case $x<1$ (including negative real numbers);
the notation ``$:=$'' indicates that we use this equation as a definition.
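For rational $x$ the first equation in~\eqref{TransEq} can be verified symbolically. The sketch below (Python/SymPy) checks $\left[\frac52\right]_q=q\left[\frac32\right]_q+1$, using $\frac52=[2,2]$ and $\frac32=[1,2]$; the $q$-continued-fraction formula encoded in \texttt{qcf} is recalled from~\cite{MGOqR} and is an assumption of the sketch.

```python
import sympy as sp

q = sp.symbols('q')

def qint(a, t):
    return (t**a - 1) / (t - 1)

def qcf(cf):  # q-continued fraction, parameter alternating between q and 1/q
    val = sp.Integer(0)
    for i in reversed(range(len(cf))):
        t = q if i % 2 == 0 else 1 / q
        val = qint(cf[i], t) + t**cf[i] / val if val != 0 else qint(cf[i], t)
    return sp.cancel(val)

# [5/2]_q = q [3/2]_q + 1, an instance of [x+1]_q = q [x]_q + 1:
assert sp.simplify(qcf([2, 2]) - (q * qcf([1, 2]) + 1)) == 0
```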
The property~\eqref{TransEq} implies the following ``gap theorem''.
\begin{thm}
\label{GapThm}
If $k\leq{}x\leq{}k+1$, where $k\in\mathbb{Z}_{>0}$, then the $k$-th order coefficient of the series~$\left[x\right]_q$ vanishes,
while all the preceding coefficients are equal to~$1$:
$$
\left[x\right]_q=1+q+q^2+\cdots+q^{k-1}+\varkappa_{k+1}\,q^{k+1}+\varkappa_{k+2}\,q^{k+2}+\cdots
$$
\end{thm}
Theorem~\ref{GapThm} implies that for negative~$x$ we obtain Laurent series instead of power series.
More precisely, if $-k\leq{}x<1-k$, where $k\in\mathbb{Z}_{>0}$, then~$\left[x\right]_q$ is of the following general form
\begin{equation}
\label{GeneralEq}
\left[x\right]_q=-q^{-k}+\varkappa_{1-k}\,q^{1-k}+\varkappa_{2-k}\,q^{2-k}+\cdots
\end{equation}
where $\varkappa_i\in\mathbb{Z}$.
Let us sum up our understanding of $q$-deformation, or ``quantization'',
of real numbers in comparison with integers and rationals.
This quantization transforms integers into polynomials,
rationals into rational fractions, and real numbers into power series, in each case with integer coefficients:
$$
{\left[{\,.\,}\right]_q}\;:
\left\{
\begin{array}{rcl}
\mathbb{Z}_{\geq1}&\longrightarrow&\mathbb{Z}_{\geq0}[q],\\[6pt]
\mathbb{Q}_{\geq0}&{\longrightarrow}&\mathbb{Z}_{\geq0}(q),\\[6pt]
\mathbb{R}_{\geq0}&\longrightarrow&\mathbb{Z}[[q]],\\[6pt]
\mathbb{R}&\longrightarrow&\mathbb{Z}[[q]][q^{-1}].
\end{array}
\right.
$$
In the case of integers and rationals, the resulting polynomials and rational functions have positive coefficients.
In the case of real numbers, this positivity consists in the fact that the power series are obtained
as limits of rational functions with positive integer coefficients.
The paper is organized as follows.
In Section~\ref{ICFqSec} we briefly recall the notion of $q$-rational.
We start with the most elementary, recurrent way to calculate $q$-rationals,
and then give two equivalent and more explicit formulas.
The first one uses the continued fractions and the second $2\times2$ matrices.
The reader can find several other equivalent definitions in~\cite{MGOqR}.
In Section~\ref{ProoSec} we prove Theorem~\ref{ConvThm} and Proposition~\ref{TechLem}.
In Section~\ref{ExSec}
we use a computer program to
investigate several examples of $q$-deformed quadratic irrational numbers.
We start with the golden ratio and continue with the ``silver ratio'' and several examples of square roots,
giving in each case an explicit formula of the $q$-deformation.
In Section~\ref{ExSecBis} we consider two
examples of transcendental irrationals, namely~$e$ and~$\pi$.
We calculate the first terms of their $q$-deformations and make some preliminary observations.
In Section~\ref{TransSec} we discuss the action of the translation group and prove Theorem~\ref{GapThm}.
It would be interesting to investigate more concrete examples.
Note that we searched in vain for functional equations, similar to those satisfied by $q$-deformed quadratic irrationals,
in the case of algebraic numbers of higher degree,
such as $\sqrt[\leftroot{-2}\uproot{2}3]{2}$.
\section{$q$-deformed rationals}\label{ICFqSec}
In this section,
we try to give a transparent and self-contained exposition of the notion of $q$-rational introduced in~\cite{MGOqR}.
We outline an analogy with $q$-binomial coefficients,
and give a recurrent way to compute $q$-rationals from the Farey graph.
Finally we give two explicit formulas for the $q$-rationals using continued fraction expansions and the matrix form.
\subsection{Analogy with the $q$-binomials}
Recall that the classical Gaussian $q$-binomial coefficients (see~\cite{Sta}) are polynomials in~$q$ that
can be calculated recurrently via the formula
\begin{equation}
\label{PascalEq}
{r\choose s}_q=
{r-1\choose s-1}_q+q^s{r-1\choose s}_q.
\end{equation}
The $q$-binomial coefficients are the vertices of the ``weighted Pascal triangle'' which encodes the above formula:
$$
\xymatrix @!0 @R=0.6cm @C=0.8cm
{
&&&&{0\choose 0}_q\ar@{-}[rdd]^{\textcolor{red}{1}}\ar@{-}[ldd]_{\textcolor{red}{1}}
\\
\\
&&&{1\choose 0}_q\ar@{-}[rdd]^{\textcolor{red}{1}}\ar@{-}[ldd]_{\textcolor{red}{1}}
&&{1\choose 1}_q\ar@{-}[rdd]^{\textcolor{red}{1}}\ar@{-}[ldd]_{\textcolor{red}{q}}
\\
\\
&&{2\choose 0}_q\ar@{-}[rdd]^{\textcolor{red}{1}}\ar@{-}[ldd]_{\textcolor{red}{1}}
&&{2\choose 1}_q\ar@{-}[rdd]^{\textcolor{red}{1}}\ar@{-}[ldd]_{\textcolor{red}{q}}&&{2\choose 2}_q\ar@{-}[rdd]^{\textcolor{red}{1}}\ar@{-}[ldd]_{\textcolor{red}{q^2}}
\\
\\
&{3\choose 0}_q\ar@{-}[rdd]^{\textcolor{red}{1}}\ar@{-}[ldd]_{\textcolor{red}{1}}
&&{3\choose 1}_q\ar@{-}[rdd]^{\textcolor{red}{1}}\ar@{-}[ldd]_{\textcolor{red}{q}}
&&{3\choose 2}_q\ar@{-}[rdd]^{\textcolor{red}{1}}\ar@{-}[ldd]_{\textcolor{red}{q^2}}
&&{3\choose 3}_q\ar@{-}[rdd]^{\textcolor{red}{1}}\ar@{-}[ldd]_{\textcolor{red}{q^3}}
\\
\\
{4\choose 0}_q
&&{4\choose 1}_q&&{4\choose 2}_q&&{4\choose 3}_q&&{4\choose 4}_q
\\
&&&\cdots&&\cdots
}
$$
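A minimal sketch of the recurrence~\eqref{PascalEq} (Python/SymPy; function names are ours), cross-checked against the closed formula $\binom{r}{s}_q=[r]_q!/\big([s]_q!\,[r-s]_q!\big)$:

```python
import sympy as sp
from functools import lru_cache

q = sp.symbols('q')

@lru_cache(maxsize=None)
def qbin(r, s):
    """Gaussian q-binomial computed by the weighted Pascal rule."""
    if s < 0 or s > r:
        return sp.Integer(0)
    if s == 0 or s == r:
        return sp.Integer(1)
    return sp.expand(qbin(r - 1, s - 1) + q**s * qbin(r - 1, s))

def qfact(n):  # [n]_q! = [1]_q [2]_q ... [n]_q
    out = sp.Integer(1)
    for k in range(1, n + 1):
        out *= (q**k - 1) / (q - 1)
    return out

# cross-check against the closed formula [r]_q!/([s]_q! [r-s]_q!):
for r in range(7):
    for s in range(r + 1):
        assert sp.simplify(qbin(r, s) - qfact(r) / (qfact(s) * qfact(r - s))) == 0

assert qbin(4, 2) == sp.expand(1 + q + 2*q**2 + q**3 + q**4)
```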
The idea behind the definition of $q$-rationals is to use exactly the same rule,
but replace the Pascal triangle by the Farey graph.
\subsection{The weighted Farey graph}\label{FaSec}
The structure of the Farey graph is as follows (see~\cite{HW}).
The set of vertices consists of rational numbers $\mathbb{Q}$, completed by $\infty:=\frac{1}{0}$.
Two rationals,~$\frac{r}{s}$ and~$\frac{r'}{s'}$ (always written as irreducible fractions),
are connected by an edge if and only if $rs'-r's=\pm1$.
Edges of the Farey graph are often represented
as (non-crossing) geodesics of the hyperbolic plane.
Although the Farey graph is much more complicated than the Pascal triangle
(in particular, every vertex $\frac{r}{s}$ has infinitely many neighbors),
these two graphs have one property in common: every vertex has two ``parents''.
In the Farey graph these ``parents'' are characterized as follows.
Among the infinite set of neighbors of~$\frac{r}{s}$ there are exactly two, $\frac{r'}{s'}$ and $\frac{r''}{s''}$,
that are also connected to each other.
In other words, every rational~$\frac{r}{s}$ belongs to exactly one triangle
\begin{center}
\psscalebox{1.0 1.0}
{
\psset{unit=0.8cm}
\begin{pspicture}(0,-1.315)(3.385,1.315)
\definecolor{colour0}{rgb}{1.0,0.0,0.2}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](0.87,-0.685){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](2.47,-0.685){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](1.67,-0.685){1.6}{0.0}{180.0}
\rput(0.07,-1.085){$\frac{r'}{s'}$}
\rput(1.67,-1.085){$\frac{r}{s}$}
\rput(3.27,-1.085){$\frac{r''}{s''}$}
\end{pspicture}
}
\end{center}
such that $\frac{r'}{s'}<\frac{r}{s}<\frac{r''}{s''}$.
Furthermore, one has $\frac{r}{s}=\frac{r'+r''}{s'+s''}$.
Similarly to the case of $q$-binomials,
the edges of the weighted Farey graph are labeled by powers of~$q$ according to the following pattern
\begin{center}
\psscalebox{1.0 1.0}
{
\psset{unit=0.9cm}
\begin{pspicture}(0,-1.315)(3.385,1.315)
\definecolor{colour0}{rgb}{1.0,0.0,0.2}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](0.87,-0.685){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](2.47,-0.685){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](1.67,-0.685){1.6}{0.0}{180.0}
\rput[tl](0.87,0.515){\textcolor{colour0}{1}}
\rput[tl](2.47,0.515){\textcolor{colour0}{$q^\ell$}}
\rput[tl](1.67,1.315){\textcolor{colour0}{$q^{\ell-1}$}}
\rput(0.07,-1.085){$\left[\frac{r'}{s'}\right]_{q}$}
\rput(1.67,-1.085){$\left[\frac{r}{s}\right]_{q}$}
\rput(3.27,-1.085){$\left[\frac{r''}{s''}\right]_{q}$}
\end{pspicture}
}
\end{center}
The vertices are labeled by the following rule:
if $\left[\frac{r'}{s'}\right]_{q}=\frac{\mathcal{R}'}{\mathcal{S}'}$ and $\left[\frac{r''}{s''}\right]_{q}=\frac{\mathcal{R}''}{\mathcal{S}''}$, then
\begin{equation}
\label{FSEq}
\left[\frac{r}{s}\right]_{q}:=\frac{\mathcal{R}'+q^{\ell}\mathcal{R}''}{\mathcal{S}'+q^{\ell}\mathcal{S}''}.
\end{equation}
Note that \eqref{FSEq} is analogous to \eqref{PascalEq}.
The weights $q^\ell$ and the $q$-rationals can be calculated recursively along the Farey graph; see Figure~\ref{wtFg}.
\begin{figure}[htbp]
\begin{center}
\psscalebox{1.0 1.0}
{
\psset{unit=0.9cm}
\begin{pspicture}(0,-4.535)(13.757692,4.535)
\psdots[linecolor=black, dotsize=0.12](3.2788463,-2.665)
\psdots[linecolor=black, dotsize=0.12003673](3.2788463,-2.665)
\psdots[linecolor=black, dotsize=0.12](0.07884616,-2.665)
\psdots[linecolor=black, dotsize=0.12](13.678846,-2.665)
\psdots[linecolor=black, dotsize=0.12](2.478846,-2.665)
\psdots[linecolor=black, dotsize=0.12](1.6788461,-2.665)
\psdots[linecolor=black, dotsize=0.12](0.87884617,-2.665)
\psdots[linecolor=black, dotsize=0.12](0.07884616,-2.665)
\psdots[linecolor=black, dotsize=0.12](4.078846,-2.665)
\psdots[linecolor=black, dotsize=0.12](4.878846,-2.665)
\psdots[linecolor=black, dotsize=0.12](5.6788464,-2.665)
\psdots[linecolor=black, dotsize=0.12](6.478846,-2.665)
\psdots[linecolor=black, dotsize=0.12](7.2788463,-2.665)
\psdots[linecolor=black, dotsize=0.12](8.078846,-2.665)
\psdots[linecolor=black, dotsize=0.12](8.878846,-2.665)
\psdots[linecolor=black, dotsize=0.12](9.678846,-2.665)
\psdots[linecolor=black, dotsize=0.12](10.478847,-2.665)
\psdots[linecolor=black, dotsize=0.12](11.278846,-2.665)
\psdots[linecolor=black, dotsize=0.12](12.078846,-2.665)
\psdots[linecolor=black, dotsize=0.12](12.878846,-2.665)
\psarc[linecolor=black, linewidth=0.02, dimen=outer](6.878846,-2.665){6.8}{0.0}{180.0}
\psdots[linecolor=black, dotsize=0.12](13.678846,-2.665)
\psarc[linecolor=black, linewidth=0.02, dimen=outer](6.878846,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](7.6788464,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](8.478847,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](9.278846,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](10.078846,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](10.878846,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](11.678846,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](12.478847,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](13.278846,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](0.47884616,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](1.2788461,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](2.0788462,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](2.8788462,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](3.6788461,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](4.478846,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](5.2788463,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](6.078846,-2.665){0.4}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](1.6788461,-2.665){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](3.2788463,-2.665){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](4.878846,-2.665){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](6.478846,-2.665){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](8.078846,-2.665){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](9.678846,-2.665){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](11.278846,-2.665){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](12.878846,-2.665){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](2.478846,-2.665){1.6}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](4.878846,-2.665){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](5.6788464,-2.665){1.6}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](8.878846,-2.665){1.6}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](12.078846,-2.665){1.6}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](4.078846,-2.665){3.2}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](10.478847,-2.665){3.2}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](7.2788463,-2.665){6.4}{0.0}{180.0}
\rput[b](6.878846,3.735){\textcolor{red}{1}}
\rput[t](4.078846,0.935){\textcolor{red}{1}}
\rput[t](10.478847,0.935){\textcolor{red}{$q$}}
\rput[t](2.478846,-0.665){\textcolor{red}{1}}
\rput[t](5.6788464,-0.665){\textcolor{red}{$q$}}
\rput[t](8.878846,-0.665){\textcolor{red}{1}}
\rput[t](12.078846,-0.665){\textcolor{red}{$q^2$}}
\rput[t](12.878846,-1.465){\textcolor{red}{$q^3$}}
\rput[b](11.278846,-1.865){\textcolor{red}{1}}
\rput[b](8.078846,-1.865){\textcolor{red}{1}}
\rput[b](9.678846,-1.865){\textcolor{red}{$q$}}
\rput[b](1.6788461,-1.865){\textcolor{red}{1}}
\rput[b](3.2788463,-1.865){\textcolor{red}{$q$}}
\rput[t](6.478846,-1.465){\textcolor{red}{$q^2$}}
\rput[b](4.878846,-1.865){\textcolor{red}{1}}
\rput[t](0.47884616,-1.865){\textcolor{red}{1}}
\rput[t](6.078846,4.535){\textcolor{red}{$q^{-1}$}}
\rput(0.07884616,-3.065){$\frac01$}
\rput(0.87884617,-3.065){$\frac11$}
\rput(1.7,-3.065){$\left[\frac{5}{4}\right]_q$}
\rput(13.678846,-3.065){$\frac10$}
\rput(7.3,-3.065){$\left[\frac{2}{1}\right]_q$}
\rput(4.1,-3.065){$\left[\frac{3}{2}\right]_q$}
\rput(4.9,-3.065){$\left[\frac{8}{5}\right]_q$}
\rput(10.6,-3.065){$\left[\frac{3}{1}\right]_q$}
\rput(2.478846,-3.065){$\left[\frac{4}{3}\right]_q$}
\rput(3.3,-3.065){$\left[\frac{7}{5}\right]_q$}
\rput(5.7,-3.065){$\left[\frac{5}{3}\right]_q$}
\rput(6.5,-3.065){$\left[\frac{7}{3}\right]_q$}
\rput(8.178847,-3.065){$\left[\frac{5}{2}\right]_q$}
\rput(12.2,-3.065){$\left[\frac{4}{1}\right]_q$}
\rput(12.99,-3.065){$\left[\frac{5}{1}\right]_q$}
\rput[b](1.2788461,-2.265){\textcolor{red}{\footnotesize$1$}}
\rput[b](2.0788462,-2.265){\textcolor{red}{\footnotesize$q$}}
\rput[b](2.8788462,-2.265){\textcolor{red}{\footnotesize$1$}}
\rput[b](4.478846,-2.265){\textcolor{red}{\footnotesize$1$}}
\rput[b](6.078846,-2.265){\textcolor{red}{\footnotesize$1$}}
\rput[b](7.6788464,-2.265){\textcolor{red}{\footnotesize$1$}}
\rput[b](9.278846,-2.265){\textcolor{red}{\footnotesize$1$}}
\rput[b](10.878846,-2.265){\textcolor{red}{\footnotesize$1$}}
\rput[b](12.478847,-2.265){\textcolor{red}{\footnotesize$1$}}
\rput[br](3.6788461,-2.265){\textcolor{red}{\footnotesize$q^2$}}
\rput[br](5.2788463,-2.265){\textcolor{red}{\footnotesize$q$}}
\rput[br](6.878846,-2.265){\textcolor{red}{\footnotesize$q^3$}}
\rput[br](8.478847,-2.265){\textcolor{red}{\footnotesize$q$}}
\rput[br](10.078846,-2.265){\textcolor{red}{\footnotesize$q^2$}}
\rput[br](11.678846,-2.265){\textcolor{red}{\footnotesize$q$}}
\rput[br](13.278846,-2.265){\textcolor{red}{\footnotesize$q^4$}}
\rput(9.0,-3.065){$\left[\frac{8}{3}\right]_q$}
\rput(9.8,-3.065){$\left[\frac{11}{4}\right]_q$}
\rput(11.408846,-3.065){$\left[\frac{7}{2}\right]_q$}
\end{pspicture}
}
\caption{Upper part of the weighted Farey graph between~$\frac01$ and~$\frac10$}
\label{wtFg}
\end{center}
\end{figure}
\noindent
For instance, $\left[\frac{7}{5}\right]_q=\frac{1+q+2q^2+2q^3+q^4}{1+q+2q^2+q^3}$.
We will be interested in
the corresponding Taylor series at $q=0$.
For instance, for the above example, the series starts as follows
$$
\left[\frac{7}{5}\right]_q=
1+ q^3 - 2q^5 + q^6 + 3q^7 - 3q^8- 4q^9 + 7q^{10} + 4q^{11}- 14q^{12} \pm\cdots
$$
\subsection{$q$-deformed continued fractions}
We will now give another method of computing $q$-rationals.
It relies on the continued fraction expansion.
Let $\frac{r}{s}>1$ be a rational number, where $r$ and $s$ are coprime positive integers.
It has a unique continued fraction expansion with an even number of terms
\begin{equation}
\label{CFEq}
\frac{r}{s}
\quad=\quad
a_1 + \cfrac{1}{a_2
+ \cfrac{1}{\ddots +\cfrac{1}{a_{2m}} } },
\end{equation}
with $a_i\geq1$, denoted by
$[a_1,\ldots,a_{2m}]$.
Note that the choice of even length removes the ambiguity
$[a_1,\ldots,a_{n},1]=[a_1,\ldots,a_{n}+1]$ and makes the expansion unique.
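The even-length expansion is produced by the Euclidean algorithm, with the parity adjusted via the identity $[a_1,\ldots,a_{n}]=[a_1,\ldots,a_{n}-1,1]$. A minimal Python sketch of this step (our own illustration, assuming $r/s>1$ with $r,s$ coprime):

```python
def even_cf(r, s):
    """Continued fraction [a_1, ..., a_2m] of r/s with an even number
    of terms; assumes r, s coprime and r/s > 1."""
    a = []
    while s:
        a.append(r // s)
        r, s = s, r % s
    # the Euclidean algorithm gives the expansion whose last term is >= 2
    # (or the one-term expansion [a_1] for integers); fix the parity via
    # [a_1, ..., a_n] = [a_1, ..., a_n - 1, 1]
    if len(a) % 2 == 1:
        a[-1] -= 1
        a.append(1)
    return a

def cf_value(a):
    """Fold [a_1, ..., a_n] back into a fraction (numerator, denominator)."""
    num, den = a[-1], 1
    for ai in reversed(a[:-1]):
        num, den = ai * num + den, num
    return num, den

assert even_cf(7, 5) == [1, 2, 1, 1]        # 7/5 = [1, 2, 2] = [1, 2, 1, 1]
assert cf_value([1, 2, 1, 1]) == (7, 5)
```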
In order to calculate the $q$-deformation $\left[\frac{r}{s}\right]_q$,
one can use the following explicit formula.
Given a regular continued fraction $[a_{1}, \ldots, a_{2m}]$,
its $q$-deformation is given by
\begin{equation}
\label{qa}
[a_{1}, \ldots, a_{2m}]_{q}:=
[a_1]_{q} + \cfrac{q^{a_{1}}}{[a_2]_{q^{-1}}
+ \cfrac{q^{-a_{2}}}{[a_{3}]_{q}
+\cfrac{q^{a_{3}}}{[a_{4}]_{q^{-1}}
+ \cfrac{q^{-a_{4}}}{
\cfrac{\ddots}{[a_{2m-1}]_q+\cfrac{q^{a_{2m-1}}}{[a_{2m}]_{q^{-1}}}}}
} }}
\end{equation}
where we use the standard notation for the $q$-integers~$[a]_q=1+q+q^2+\cdots+q^{a-1}$.
Of course, one can get rid of negative exponents in~\eqref{qa}, but the formula becomes uglier.
\subsection{The matrix formulas}
We give another, equivalent way to define $q$-rationals.
It uses $2\times2$ matrices and is well adapted for computer programming.
Let, as before, $\frac{r}{s}$ be a rational written in the form of a continued fraction expansion~\eqref{CFEq}.
Consider the $2\times2$ matrix with polynomial coefficients that was denoted by $\widetilde{M}^{+}_{q}(a_{1},\ldots, a_{2m})$
in~\cite{MGOqR}.
It is defined by the formula
\begin{equation}
\label{DegEqOne}
\widetilde{M}^{+}_{q}(a_{1},\ldots, a_{2m}):=
\begin{pmatrix}
[a_{1}]_{q}&q^{a_{1}}\\[6pt]
1&0
\end{pmatrix}
\begin{pmatrix}
q[a_{2}]_{q}& 1\\[6pt]
q^{a_{2}}&0
\end{pmatrix}
\cdots
\begin{pmatrix}
[a_{2m-1}]_{q}&q^{a_{2m-1}}\\[6pt]
1&0
\end{pmatrix}
\begin{pmatrix}
q[a_{2m}]_{q}&1\\[6pt]
q^{a_{2m}}&0
\end{pmatrix}.
\end{equation}
This matrix is a $q$-analogue of the usual ``matrix of convergents'' of a continued fraction (see~\cite{Never,MGO}).
It is easy to prove that this matrix contains the numerator and denominator of~$\left[\frac{r}{s}\right]_q$ (up to multiplication by~$q$)
in the first column:
\begin{equation}
\label{qRegMat}
\widetilde{M}^{+}_{q}(a_{1},\ldots, a_{2m})=
\begin{pmatrix}
q\mathcal{R}&\mathcal{R}'_{2m-1}\\[6pt]
q\mathcal{S}&\mathcal{S}'_{2m-1}
\end{pmatrix},
\end{equation}
where $\frac{\mathcal{R}(q)}{\mathcal{S}(q)}=\left[\frac{r}{s}\right]_q, $ and where
$\frac{\mathcal{R}'_{2m-1}(q)}{\mathcal{S}'_{2m-1}(q)}=[a_{1}, \ldots,a_{2m-1}]_{q}$ is the previous convergent.
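The matrix formula~\eqref{DegEqOne} is indeed straightforward to program. The following Python sketch is our own illustration (polynomials are coefficient lists in increasing powers of $q$, and the helper names are ours); it recovers $\left[\frac{7}{5}\right]_q$ as computed above from the weighted Farey graph:

```python
def q_int(a):                 # [a]_q = 1 + q + ... + q^(a-1)
    return [1] * a

def q_pow(a):                 # q^a
    return [0] * a + [1]

def p_add(p, r):
    n = max(len(p), len(r))
    p, r = p + [0] * (n - len(p)), r + [0] * (n - len(r))
    return [x + y for x, y in zip(p, r)]

def p_mul(p, r):
    out = [0] * (len(p) + len(r) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(r):
            out[i + j] += x * y
    return out

def m_mul(A, B):              # product of 2x2 matrices of polynomials
    return [[p_add(p_mul(A[i][0], B[0][j]), p_mul(A[i][1], B[1][j]))
             for j in range(2)] for i in range(2)]

def strip(p):                 # drop trailing zero coefficients
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def q_rational(cf):
    """Coefficient lists (R, S) of [r/s]_q for an even-length continued
    fraction cf = [a_1, ..., a_2m], via the matrix formula."""
    M = [[[1], [0]], [[0], [1]]]                       # identity matrix
    for k, a in enumerate(cf):
        if k % 2 == 0:        # factors for a_1, a_3, ...
            M = m_mul(M, [[q_int(a), q_pow(a)], [[1], [0]]])
        else:                 # factors for a_2, a_4, ...
            M = m_mul(M, [[p_mul([0, 1], q_int(a)), [1]], [q_pow(a), [0]]])
    # the first column is (qR, qS); dividing by q drops the constant term
    return strip(M[0][0])[1:], strip(M[1][0])[1:]

# [7/5]_q = (1 + q + 2q^2 + 2q^3 + q^4)/(1 + q + 2q^2 + q^3), as above
assert q_rational([1, 2, 1, 1]) == ([1, 1, 2, 2, 1], [1, 1, 2, 1])
```

Specializing at $q=1$, i.e.\ summing the coefficient lists, recovers the fraction $\frac{r}{s}$ itself.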
\section{The stabilization phenomenon}\label{ProoSec}
In this section we
first collect some basic properties that follow from the matrix presentation.
We prove Proposition~\ref{TechLem} and Theorem~\ref{ConvThm}.
Finally, in a remark, we discuss the stabilization phenomenon in the case when $x$ is rational.
\subsection{Some simple properties of $q$-rationals}
The following statements are immediate corollaries of~\eqref{DegEqOne} and~\eqref{qRegMat}.
Let $\frac{r}{s}\geq1$ be a rational number.
Then $\left[\frac{r}{s}\right]_q=\frac{\mathcal{R}(q)}{\mathcal{S}(q)}$, where
$\mathcal{R}(q)$ and $\mathcal{S}(q)$
are polynomials with positive coefficients whose highest and lowest coefficients are equal to~$1$.
The degrees of the polynomials~${\mathcal{R}}$ and~${\mathcal{S}}$ are as follows
$$
\begin{array}{rcl}
\deg(\mathcal{R})&=&a_{1}+\ldots +a_{2m}-1,\\[4pt]
\deg(\mathcal{S})&=&a_{2}+\ldots +a_{2m}-1.
\end{array}
$$
The unimodality conjecture of~\cite{MGOqR} states that the coefficients of~${\mathcal{R}}$ and ${\mathcal{S}}$ first grow and then decrease monotonically.
Let us mention that the most important properties of $q$-rationals
are the total positivity and the combinatorial interpretation of the coefficients of~$\mathcal{R}$ and~$\mathcal{S}$.
\subsection{Proof of Proposition~\ref{TechLem}}\label{PrPSec}
Let~$x\geq1$ be a real number, and let $x_{n-1}$ and~$x_n$ be two consecutive convergents of its continued fraction.
Let
$$
\left[x_{n-1}\right]_q=\frac{\mathcal{R}_{n-1}}{\mathcal{S}_{n-1}}
\qquad\text{and}\qquad
\left[x_n\right]_q=\frac{\mathcal{R}_n}{\mathcal{S}_n}.
$$
One then has tautologically
$$
\frac{\mathcal{R}_n}{\mathcal{S}_n}-
\frac{\mathcal{R}_{n-1}}{\mathcal{S}_{n-1}}=
\frac{\mathcal{R}_n\mathcal{S}_{n-1}-\mathcal{S}_n\mathcal{R}_{n-1}}{\mathcal{S}_n\mathcal{S}_{n-1}}.
$$
The polynomial in the numerator of the right-hand side is a power of~$q$:
\begin{equation}
\label{PowerLem}
\mathcal{R}_n\mathcal{S}_{n-1}-\mathcal{S}_n\mathcal{R}_{n-1}=q^{a_1+\cdots+a_n-1}.
\end{equation}
Indeed, to prove this,
it suffices to compute the determinant of the matrix $\widetilde{M}^{+}_{q}(a_{1},\ldots, a_{2m})$
in each of the two forms~\eqref{DegEqOne} and~\eqref{qRegMat} and compare the results.
Both polynomials,
$\mathcal{S}_n$ and $\mathcal{S}_{n-1}$, start with the zero-order term~$1$, and so does the series $1/(\mathcal{S}_n\mathcal{S}_{n-1})$.
It now follows from~\eqref{PowerLem} that the series $\left[x_n\right]_q-\left[x_{n-1}\right]_q$ is of the form
$$
\left[x_n\right]_q-\left[x_{n-1}\right]_q=q^{a_1+\cdots+a_n-1}+O(q^{a_1+\cdots+a_n}).
$$
Hence, Proposition~\ref{TechLem}.
\subsection{Proof of Theorem~\ref{ConvThm}}
Let now $(y_n)_{n\geq1}$ be an arbitrary sequence of rationals converging to~$x$.
Then, for every fixed~$m$, there exists~$N$ such that
$y_n\in[x_{m-1},x_m]$ (or $y_n\in[x_{m},x_{m-1}]$) for every~$n\geq{}N$.
Here, as above, $(x_n)_{n\geq1}$ is the sequence of convergents of the continued fraction of~$x$.
By Proposition~\ref{TechLem}, the first $a=a_1+\cdots+a_{m}-1$ terms
of the Taylor series of~$\left[x_{m-1}\right]_q$ and~$\left[x_m\right]_q$ coincide.
It turns out that the same is true for every rational between $x_{m-1}$ and~$x_m$.
\begin{lem}
\label{SopLem}
For every rational~$\frac{r}{s}$ such that $x_{m-1}<\frac{r}{s}<x_m$,
the first $a=a_1+\cdots+a_{m}-1$ terms of the Taylor series of~$\left[\frac{r}{s}\right]_q$ coincide with those of~$\left[x_{m-1}\right]_q$ and~$\left[x_m\right]_q$.
\end{lem}
\begin{proof}
Let~$\left[x_{m-1}\right]_q=\frac{\mathcal{R}_{m-1}}{\mathcal{S}_{m-1}}$ and~$\left[x_{m}\right]_q=\frac{\mathcal{R}_{m}}{\mathcal{S}_{m}}$.
Recall that $x_{m-1}$ and~$x_m$ are joined by an edge in the Farey graph.
Suppose first that~$\frac{r}{s}$ is also joined to $x_{m-1}$ and~$x_m$, so that we have a triangle
\begin{center}
\psscalebox{1.0 1.0}
{
\psset{unit=0.9cm}
\begin{pspicture}(0,-1.315)(3.385,1.315)
\definecolor{colour0}{rgb}{1.0,0.0,0.2}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](0.87,-0.685){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](2.47,-0.685){0.8}{0.0}{180.0}
\psarc[linecolor=black, linewidth=0.02, dimen=outer](1.67,-0.685){1.6}{0.0}{180.0}
\rput[tl](0.87,0.515){\textcolor{colour0}{1}}
\rput[tl](2.47,0.515){\textcolor{colour0}{$q^\ell$}}
\rput[tl](1.67,1.315){\textcolor{colour0}{$q^{\ell-1}$}}
\rput(0.07,-1.085){$\frac{\mathcal{R}_{m-1}}{\mathcal{S}_{m-1}}$}
\rput(1.67,-1.085){$\left[\frac{r}{s}\right]_{q}$}
\rput(3.27,-1.085){$\frac{\mathcal{R}_m}{\mathcal{S}_m}$}
\end{pspicture}
}
\end{center}
Set $\left[\frac{r}{s}\right]_q=\frac{\mathcal{R}}{\mathcal{S}}$; then, by the definition of $q$-rationals,
$$
\frac{\mathcal{R}}{\mathcal{S}}=\frac{\mathcal{R}_{m-1}+q^\ell\mathcal{R}_m}{\mathcal{S}_{m-1}+q^\ell\mathcal{S}_m}.
$$
By~\eqref{PowerLem}, we have $\mathcal{R}_{m}\mathcal{S}_{m-1}-\mathcal{S}_{m}\mathcal{R}_{m-1}=q^a$.
Therefore,
\begin{equation}\label{eqdisym}
\begin{array}{rclcl}
\mathcal{R}\mathcal{S}_{m-1}-\mathcal{S}\mathcal{R}_{m-1}&=&\mathcal{R}_{m-1}\mathcal{S}_{m-1}+q^\ell\mathcal{R}_m\mathcal{S}_{m-1}-\mathcal{R}_{m-1}\mathcal{S}_{m-1}-q^\ell\mathcal{R}_{m-1}\mathcal{S}_m&=&q^{a+\ell}\\[4pt]
\mathcal{R}\mathcal{S}_m-\mathcal{S}\mathcal{R}_m&=&\mathcal{R}_{m-1}\mathcal{S}_m+q^\ell\mathcal{R}_m\mathcal{S}_m-\mathcal{R}_m\mathcal{S}_{m-1}-q^\ell\mathcal{R}_m\mathcal{S}_m&=&-q^{a}.
\end{array}
\end{equation}
Using the same argument as in the proof of Proposition~\ref{TechLem}
we deduce the statement of the lemma in the case where the three points form a triangle.
The general case of the lemma can then be proved inductively.
Indeed, every rational~$\frac{r}{s}$ such that $x_{m-1}<\frac{r}{s}<x_m$ can be joined to $x_{m-1}$ and~$x_m$
by a sequence of triangles.
To see this, draw a vertical line
in the Poincar\'e half-plane through~$\frac{r}{s}$ and collect all the triangles of the Farey tessellation
between $x_{m-1}$ and~$x_m$ crossed in their interior by this line.
Hence, the lemma.
\end{proof}
Theorem~\ref{ConvThm} follows from Lemma~\ref{SopLem} and Proposition~\ref{TechLem}.
\begin{rem}
We also investigated the stabilization phenomenon in the case when $x$ is rational.
If $x=\frac{r}{s}$ and $\left(\frac{r_n}{s_n}\right)_{n\geq1}$ is a sequence converging to $x$, it is natural to ask whether
$[x]_q$ defined by \eqref{TSEqBis} is equal to the $q$-rational $\left[\frac{r}{s}\right]_q$.
The answer is surprising: when the sequence $\left(\frac{r_n}{s_n}\right)$ approaches $\frac{r}{s}$ from the right, the stabilized power series defined by \eqref{TSEqBis} is equal to the $q$-rational
$\left[\frac{r}{s}\right]_q$; when the sequence approaches the rational from the left, this is no longer true. This is due to the asymmetry of the relations \eqref{eqdisym}.
Indeed, if $\frac{r_n}{s_n}>\frac{r}{s}$ for all $n\gg0$, one can use the same arguments as in the proof of Theorem~\ref{ConvThm}, replacing the sequence of convergents $x_n$ by
the sequence of right neighbors $\frac{nr+r''}{ns+s''}$, where $\frac{r''}{s''}$ is the right parent introduced in Section~\ref{FaSec}.
Considering Taylor expansions, the right neighbors $\left[\frac{nr+r''}{ns+s''}\right]_q$ share more and more initial terms with $\left[\frac{r}{s}\right]_q$ as $n$ grows.
If $\frac{r_n}{s_n}<\frac{r}{s}$ for all $n\gg0$, the above arguments no longer apply.
However, in experimental computations we observed some
stabilization phenomena for the sequences~$\left[\frac{r_n}{s_n}\right]_q$,
but the stabilized power series is different from $\left[\frac{r}{s}\right]_q$.
For instance, testing several sequences of rationals approaching the integer $2$ from below, we always obtained stabilization to the series $1+q^2$, which is not $[2]_{q}=1+q$.
\end{rem}
\section{$q$-deformations of quadratic irrationals}\label{ExSec}
In this section we discuss several examples of quadratic irrational numbers.
We start from the simplest possible case of the golden ratio and identify the coefficients of the
Taylor series as the remarkable and thoroughly studied sequence of generalized Catalan numbers.
We dwell on this first example in more detail to better explain the stabilization phenomenon.
We then calculate the $q$-deformation of the number $1+\sqrt{2}$, which is usually called the ``silver ratio''.
Finally, we consider several examples of square roots of small positive integers
and calculate the corresponding functional equations.
\subsection{The golden ratio and generalized Catalan numbers}\label{GRSec}
The simplest example of an infinite continued fraction is the expansion of the golden ratio:
$$
\varphi=\frac{1+\sqrt{5}}{2}=
\left[1, 1, 1, 1, 1, \ldots\right].
$$
The convergents are ratios of consecutive Fibonacci numbers:
$\varphi_n=F_{n+1}/F_n$.
According to~\eqref{qa}, the $q$-deformation of~$\varphi$ is given by the $2$-periodic infinite
continued fraction
\begin{equation}
\label{RRAlterEq}
\left[\varphi\right]_q=
1 + \cfrac{q^{2}}{q
+ \cfrac{1}{1
+\cfrac{q^{2}}{q
+ \cfrac{1}{\ddots
}}}}
\end{equation}
\begin{rem}
Let us mention that there exists a celebrated $q$-deformation of the golden ratio, called the Rogers-Ramanujan continued fraction.
It is aperiodic and has a great number of beautiful and sophisticated properties (for a survey, see~\cite{RR}).
Unlike for the Rogers-Ramanujan continued fraction, we do not know whether~\eqref{RRAlterEq} is a quotient of two $q$-series with positive coefficients.
\end{rem}
Consider the $q$-deformations of the convergents $\left[\varphi_n\right]_q$.
We proved in~\cite{MGOqR} that the coefficients of the polynomials of
$\left[\varphi_n\right]_q$ are the numbers appearing in the remarkable ``Fibonacci lattice''
(see A123245 of OEIS~\cite{OEIS} and its mirror A079487).
More precisely, A123245 appears in the numerator and A079487 in the denominator.
For instance,
$$
\begin{array}{rcl}
\left[\varphi_6\right]_q&=&\displaystyle\frac{1+2q+3q^2+3q^3+3q^4+q^5}{1+2q+2q^2+2q^3+q^4},
\\[12pt]
\left[\varphi_8\right]_q&=&\displaystyle\frac{1+3q+5q^2+7q^3+7q^4+6q^5+4q^6+q^7}{1+3q+4q^2+5q^3+4q^4+3q^5+q^6},
\\[12pt]
\left[\varphi_9\right]_q&=&\displaystyle\frac{1+4q+7q^2+10q^3+11q^4+10q^5+7q^6+4q^7+q^8}{1+4q+6q^2+7q^3+7q^4+5q^5+3q^6+q^7}.
\end{array}
$$
We see that the coefficients at every fixed power of~$q$ of the numerator and the denominator grow.
To illustrate the stabilization phenomenon, we give the corresponding Taylor series:
$$
\begin{array}{rcl}
\left[\varphi_6\right]_q&=&
1 + q^2 - q^3 + 2 q^4 - 3 q^5 + 3 q^6 - 3 q^7 + 4 q^8 - 5 q^9 + 5 q^{10} - 5 q^{11} + 6 q^{12}\cdots\\[4pt]
\left[\varphi_8\right]_q&=&
1 + q^2 - q^3 + 2 q^4 - 4 q^5 + 8 q^6 - 16 q^7 + 30 q^8 - 55 q^9 + 103 q^{10} - 195 q^{11} + 368 q^{12}\cdots\\[4pt]
\left[\varphi_9\right]_q&=&
1 + q^2 - q^3 + 2 q^4 - 4 q^5 + 8 q^6 - 17 q^7 + 37 q^8 - 82 q^9 + 184 q^{10} - 414 q^{11} + 932 q^{12}\cdots
\end{array}
$$
The series $\left[\varphi_9\right]_q$ approximates $\left[\varphi\right]_q$ correctly up to the $9$th term, while $\left[\varphi_6\right]_q$ only up to $q^4$.
The full series~\eqref{RRAlterEq}
starts as follows:
$$
\begin{array}{rcl}
\left[\varphi\right]_q&=&
1 + q^2 - q^3 + 2 q^4 - 4 q^5 + 8 q^6 - 17 q^7 + 37 q^8 - 82 q^9 + 185 q^{10} - 423 q^{11} + 978 q^{12}-2283q^{13}\\[4pt]
&&+ 5373q^{14}-12735q^{15}+30372q^{16}-72832q^{17}+175502q^{18}-424748q^{19}+1032004q^{20} \cdots
\end{array}
$$
Note that one needs the $n$th convergent $\left[\varphi_n\right]_q$ to calculate~$\left[\varphi\right]_q$ with accuracy up to~$q^n$.
Fix the notation
$$
\left[\varphi\right]_q=:\sum_{k\geq0}\phi_kq^k.
$$
We were able to identify the coefficients $\phi_k$ appearing in this series
as the so-called Generalized Catalan numbers
(see sequence A004148 of OEIS), but with alternating signs.
\begin{prop}
\label{GoldConj}
One has $\phi_k=(-1)^ka_{k-1}$, for $k\geq2$, where $a_k$ are the Generalized Catalan numbers; see~{\rm A004148} of~\cite{OEIS}.
\end{prop}
\begin{proof}
Let us prove
that the series $\left[\varphi\right]_q$ satisfies the following functional equation:
\begin{equation}
\label{GREq}
q\left[\varphi\right]^2_q-
\left(q^2+q-1 \right)\left[\varphi\right]_q -1 =0,
\end{equation}
which is a $q$-analogue of $\varphi^2=\varphi+1$.
In fact,~\eqref{GREq} is an immediate consequence of~\eqref{RRAlterEq}, which can also be written\footnote{This observation is due to Doron Zeilberger.}
$$
\left[\varphi\right]_q=
1 + \cfrac{q^{2}}{q
+ \cfrac{1}{\left[\varphi\right]_q
}} .
$$
Proposition~\ref{GoldConj} then follows from the known results about the generating function of the Generalized Catalan numbers.
Indeed, this generating function satisfies an equation equivalent to~\eqref{GREq} (see M. Somos' contribution to A004148).
\end{proof}
Solving~\eqref{GREq}, one obtains
$$
\left[\varphi\right]_q=
\frac{q^2+q-1+\sqrt{q^4+2q^3-q^2+2q+1}}{2q}.
$$
Obviously, at $q=1$ one recovers the golden ratio~$\varphi$.
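Comparing coefficients of $q^n$ in~\eqref{GREq} gives the recurrence $\phi_n=\phi_{n-1}+\phi_{n-2}-\bigl(\left[\varphi\right]^2_q\bigr)_{n-1}$ for $n\geq1$, with $\phi_0=1$. The following short Python check (our own illustration) reproduces the coefficients of the series displayed above:

```python
N = 13                         # compute phi_0, ..., phi_12
phi = [1] + [0] * (N - 1)      # phi_0 = 1
for n in range(1, N):
    # coefficient of q^(n-1) in the square of the series
    sq = sum(phi[i] * phi[n - 1 - i] for i in range(n))
    phi[n] = phi[n - 1] + (phi[n - 2] if n >= 2 else 0) - sq

# the expansion 1 + q^2 - q^3 + 2q^4 - 4q^5 + 8q^6 - 17q^7 + ...
assert phi == [1, 0, 1, -1, 2, -4, 8, -17, 37, -82, 185, -423, 978]
```

Up to signs, $1,1,2,4,8,17,37,\ldots$ are indeed the Generalized Catalan numbers A004148, in accordance with Proposition~\ref{GoldConj}.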
\begin{rem}
Let us also mention that
the coefficients of $\left[\varphi\right]_q$ satisfy, for all~$n\geq3$, the following linear recurrence
$$
(k+1)\phi_{k} +(2k-1)\phi_{k-1} +(2-k)\phi_{k-2} +(2k-7)\phi_{k-3} +(k-5)\phi_{k-4}=0.
$$
In the context of the generalized Catalan numbers, this recurrence was conjectured by R.J. Mathar in 2011, and recently proved; see~\cite{EYZ} (based on~\cite{Zei}).
\end{rem}
\subsection{The $q$-deformed silver ratio}
The number $1+\sqrt{2}=\left[2,2,2,2,\ldots\right]$ is often called the silver ratio.
It is denoted by~$\delta_S$,
and its convergents are the quotients of consecutive Pell numbers.
This is probably the next simplest example of an infinite continued fraction after the golden ratio.
Formula~\eqref{qa} implies the following $q$-deformation
\begin{equation}
\label{RRSEq}
\left[\delta_S\right]_q=
1 +q+ \cfrac{q^{4}}{q+q^2
+ \cfrac{1}{1 +q
+\cfrac{q^{4}}{q+q^2
+ \cfrac{1}{\ddots
}}}}
\end{equation}
The stabilization process goes twice as fast as for the golden ratio:
one needs the $n$th convergent to calculate~$\left[\delta_S\right]_q$ with accuracy up to~$q^{2n}$.
The series~$\left[\delta_S\right]_q$ starts as follows
$$
\begin{array}{rcl}
\left[\delta_S\right]_q&=&
1+q+q^4-2q^6+q^7+4q^8-5q^9-7q^{10}+ 18q^{11}+ 7q^{12}-55q^{13}+ 18q^{14}\\[4pt]
&&+ 146q^{15}- 155q^{16} - 322q^{17}+692q^{18}+ 476q^{19}- 2446q^{20}+ 307q^{21}\\[4pt]
&&+ 7322q^{22}- 6276q^{23}- 18277q^{24}+ 33061q^{25}+ 33376q^{26}- 129238q^{27}- 10899q^{28}\cdots
\end{array}
$$
We see that the coefficients grow much more slowly than those of~$\left[\varphi\right]_q$.
This sequence of coefficients is not in~OEIS.
\begin{prop}
\label{MyFla}
The series $\left[\delta_S\right]_q$ satisfies the following functional equation:
\begin{equation}
\label{SREq}
q\left[\delta_S\right]^2_q-
\left(q^3+2q-1 \right)\left[\delta_S\right]_q -1 =0.
\end{equation}
\end{prop}
\begin{proof}
Formula~\eqref{RRSEq}, rewritten in the form
$$
\left[\delta_S\right]_q=
1 +q+ \cfrac{q^{4}}{q+q^2
+ \cfrac{1}{\left[\delta_S\right]_q
}}
$$
readily implies~\eqref{SREq}.
\end{proof}
Equation~\eqref{SREq} is a $q$-analogue of~$\delta_S^2=2\delta_S+1$; the appearance of~$q^3$ in this formula is somewhat surprising.
Solving~\eqref{SREq}, one obtains
$$
\left[\delta_S\right]_q=
\frac{q^3+2q-1+\sqrt{q^6+4q^4-2q^3+4q^2+1}}{2q}.
$$
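As for the golden ratio, comparing coefficients of $q^n$ in~\eqref{SREq} yields the recurrence $f_n=2f_{n-1}+f_{n-3}-\bigl(\left[\delta_S\right]^2_q\bigr)_{n-1}$ for $n\geq1$, with $f_0=1$; this is a quick way to generate the series. A short Python check (our own illustration) against the coefficients listed above:

```python
N = 15                        # compute the coefficients of 1, q, ..., q^14
f = [1] + [0] * (N - 1)       # constant term 1
for n in range(1, N):
    sq = sum(f[i] * f[n - 1 - i] for i in range(n))   # (f^2)_(n-1)
    f[n] = 2 * f[n - 1] + (f[n - 3] if n >= 3 else 0) - sq

# matches 1 + q + q^4 - 2q^6 + q^7 + 4q^8 - 5q^9 - 7q^10 + 18q^11 + ...
assert f == [1, 1, 0, 0, 1, 0, -2, 1, 4, -5, -7, 18, 7, -55, 18]
```

The recurrence singles out the unique power-series root of~\eqref{SREq} with constant term~$1$.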
\subsection{The $q$-square roots of $2,3,5$ and $7$}
We calculate the $q$-analogues of the simplest square roots.
Recall that $\sqrt{2}=[1,\overline{2}],\, \sqrt{3}=[1,\overline{1,2}],\, \sqrt{5}=[2,\overline{4}],\, \sqrt{7}=[2,\overline{1,1,1,4}]$.
The series start as follows
$$
\begin{array}{rcl}
\left[\sqrt{2}\right]_q&=&
1+ q^3 - 2q^5 + q^6+ 4q^7- 5q^8- 7q^9+18q^{10} + 7q^{11}- 55q^{12}+ 18q^{13}\\[4pt]
&&+ 146q^{14} - 155q^{15}- 322q^{16}+ 692q^{17}+ 476q^{18}
- 2446q^{19}+ 307q^{20}\\[4pt]
&&+ 7322q^{21}- 6276q^{22}- 18277q^{23}+ 33061q^{24}+ 33376q^{25}\cdots\\[8pt]
\left[\sqrt{3}\right]_q&=&
1+q^2-q^4+2q^5-2q^6-q^7+ 7q^8- 12q^9 + 7q^{10} + 18q^{11}- 59q^{12}+ 78q^{13}\\[4pt]
&&- q^{14}- 228q^{15}+ 514q^{16}- 469q^{17}- 506q^{18}+ 2591q^{19}- 4338q^{20}\\[4pt]
&&+ 1837q^{21}+ 9405q^{22} - 27430q^{23}+ 33390q^{24}+10329q^{25}\cdots\\[8pt]
\left[\sqrt{5}\right]_q&=&
1+q+q^6-q^8- q^9- q^{10}+ 3q^{11}+ 4q^{12}- q^{13} -6q^{14}- 11q^{15}+ 2q^{16}\\[4pt]
&&+ 25q^{17}+ 22q^{18}- 10q^{19}- 70q^{20}- 71q^{21}+ 67q^{22}+ 208q^{23}+ 168q^{24}- 222q^{25}\cdots\\[8pt]
\left[\sqrt{7}\right]_q&=&
1+q+q^3-q^4+2q^5-3q^6+4q^7-6q^8+8q^9-9q^{10}+9q^{11}-5q^{12}-9q^{13}\\[4pt]
&&+ 40q^{14}- 101q^{15}+ 215q^{16}- 411q^{17}+ 724q^{18}- 1195q^{19}+ 1845q^{20}\\[4pt]
&&- 2623q^{21}+ 3324q^{22}- 3412q^{23}+ 1696q^{24}+ 4157q^{25}\cdots
\end{array}
$$
Note that the coefficients of~$\left[\sqrt{2}\right]_q$ are those of the silver ratio, but with the power of~$q$ shifted by~$1$,
in full accordance with~\eqref{TransEq}.
The following formulas can be proved in a similar way as~\eqref{GREq} and~\eqref{SREq}.
The calculations are quite long but straightforward, so we omit the details.
\begin{prop}
\label{MySFla}
The series $\left[\sqrt{2}\right]_q,\left[\sqrt{3}\right]_q,\left[\sqrt{5}\right]_q$ and~$\left[\sqrt{7}\right]_q$ satisfy the following functional equations:
\begin{eqnarray}
\label{CREq}
q^2\left[\sqrt{2}\right]^2_q-
\left(q^3-1 \right)\left[\sqrt{2}\right]_q&=&q^2 + 1,\\
\label{CRTEq}
q^2\left[\sqrt{3}\right]^2_q-
\left(q^3+q^2-q-1 \right)\left[\sqrt{3}\right]_q&=&q^2+q + 1,\\
\label{RFiveEq}
q^3\left[\sqrt{5}\right]_q^2-(q^5+q^3-q^2-1)\left[\sqrt{5}\right]_q&=&q^4+q^3+q^2+q+1,\\
\label{RFSevenEq}
q^3\left[\sqrt{7}\right]_q^2-(q^5+q^4-q-1)\left[\sqrt{7}\right]_q&=&q^4+2q^3+q^2+2q+1.
\end{eqnarray}
\end{prop}
We obtain, consequently, the following expressions for the quantized square roots.
$$
\begin{array}{rcl}
\left[\sqrt{2}\right]_q&=&
\displaystyle
\frac{q^3-1+\sqrt{q^6+4q^4-2q^3+4q^2+1}}{2q^2},
\\[8pt]
\left[\sqrt{3}\right]_q&=&
\displaystyle
\frac{q^3+q^2-q-1+\sqrt{q^6 + 2q^5 + 3q^4 + 3q^2 + 2q + 1}}{2q^2},
\\[8pt]
\left[\sqrt{5}\right]_q&=&
\displaystyle
\frac{q^5+q^3-q^2-1+\sqrt{q^{10}+2q^8+2q^7 + 5q^6 + 5q^4 + 2q^3 + 2q^2+ 1}}{2q^3},
\\[8pt]
\left[\sqrt{7}\right]_q&=&
\displaystyle
\frac{q^5+q^4-q-1+\sqrt{q^{10}+2q^9+q^8+4q^7 + 6q^6 + 6q^4 + 4q^3 + q^2+2q +1}}{2q^3}.
\end{array}
$$
Note that $\left[\sqrt{5}\right]_q$ looks quite different from the golden ratio.
This is an example of a highly non-trivial action of the homothety $x\to{}x/2$.
We wonder if some relations similar to~\eqref{GREq},~\eqref{SREq},~\eqref{CREq},~\eqref{CRTEq},~\eqref{RFiveEq} and~\eqref{RFSevenEq}
hold for $q$-deformations of arbitrary quadratic irrationals.
\section{$q$-deformations of $e$ and $\pi$}\label{ExSecBis}
In this section we write down the first terms of the $q$-deformations of two notable examples
of transcendental irrational numbers,~$e$ and~$\pi$.
We calculated several hundred terms to convince ourselves that
the coefficients of the corresponding series do not correspond to any sequence in the OEIS.
However, one can make some surprising observations.
\subsection{Computing $\left[e\right]_q$}
The continued fraction expansion of Euler's number is given by the following famous regular pattern:
$e=\left[2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10,\ldots\right]$ (see sequence A003417 in the OEIS).
To calculate the first $40$ terms in the series $\left[e\right]_q$, one needs to take the $15$th convergent
$$
e_{15}=\left[2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10\right]=517656/190435.
$$
The series $\left[e\right]_q$ starts as follows:
$$
\begin{array}{rcl}
\left[e\right]_q&=&
1+q+q^3-q^5+2q^6-3q^7+3q^8-q^9\\[4pt]
&&-3q^{10}+9q^{11}-17q^{12}+25q^{13}-29q^{14}+23q^{15}+2q^{16}\\[4pt]
&&-54q^{17}+ 134q^{18}- 232q^{19}+ 320q^{20} - 347q^{21}+ 243q^{22}+ 71q^{23}\\[4pt]
&&- 660q^{24}+1531q^{25}- 2575q^{26}+ 3504q^{27}- 3804q^{28}+ 2747q^{29}+ 488q^{30}\\[4pt]
&&- 6537q^{31}+ 15395q^{32}- 25819q^{33}+ 34716q^{34}- 36780q^{35}+ 24771q^{36}+ 9096q^{37}\\[4pt]
&& - 70197q^{38}+ 156811q^{39}\cdots
\end{array}
$$
We observe that the coefficients of~$q^{2+7k}$,
where $k\geq0$, turn out to be smaller than those of their neighbors.
The signs of the coefficients also obey a certain $7$-periodic pattern: indeed, the double plus sign,
i.e., the pattern ``$+,+$'', appears with period~$7$.
We do not know any reason for such a ``$7$-periodicity'' related to Euler's number.
\subsection{The quantum $\pi$}
The continued fraction expansion of~$\pi$ (cf. sequence A001203 in the OEIS) starts as follows:
$\pi=\left[3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 2, 1, 1, 2, 2, 2, 2, 1, 84,\ldots\right]$.
The rules governing this sequence are unknown.
The $4$th convergent $\pi_4=\left[3, 7, 15, 1\right]=355/113$ gives the first $24$ terms of~$\left[\pi\right]_q$,
and already the $5$th convergent $\pi_5=\left[3, 7, 15, 1, 292\right]=103993/33102$
allows us to calculate~$\left[\pi\right]_q$ up to degree~${317}$.
We calculated up to
$\left[\pi_{38}\right]_q$, where $\pi_{38}=11895062545096656711950/3786316004878788190109$
approximates~$\pi$ to~$43$ digits;
this gives the first $603$ terms of the series $\left[\pi\right]_q$.
The first $79$ terms of $\left[\pi\right]_q$ are:
$$
\begin{array}{rcl}
\left[\pi\right]_q&=&
1+q+q^2+q^{10}-q^{12}-q^{13}+q^{15}+q^{16} \\[4pt]
&&- q^{20}-2q^{21}- q^{22}+2q^{23}+4q^{24}+q^{25}\\[4pt]
&&-4q^{27}-4q^{28}-2q^{29}+q^{30}+5q^{31}+8q^{32}+3q^{33}\\[4pt]
&&-3q^{34}-10q^{35}-12q^{36}-5q^{37}+8q^{38}+19q^{39}+20q^{40}+2q^{41}\\[4pt]
&&-18q^{42}-32q^{43}-25q^{44}+31q^{46}+51q^{47}+45q^{48}\\[4pt]
&&-7q^{49}-65q^{50}- 94q^{51}- 57q^{52}+ 35q^{53}+122q^{54}+ 140q^{55} + 72q^{56}\\[4pt]
&&- 76q^{57}- 209q^{58}- 234q^{59}- 90q^{60}
+ 171q^{61}+ 383q^{62}+ 363q^{63}+ 76q^{64}\\[4pt]
&&- 364q^{65}- 650q^{66} - 545q^{67}- 6q^{68}+ 702q^{69}+ 1101q^{70}+ 790q^{71}
\\[4pt]
&&- 180q^{72}- 1329q^{73}- 1824q^{74} - 1113q^{75}+ 642q^{76}+ 2454q^{77}+ 2982q^{78}
+ 1415q^{79} \cdots
\end{array}
$$
The coefficients of this series grow very slowly in contrast with the other examples we have considered so far.
The ratio of two consecutive coefficients seems to tend to~$1$,
which would imply that the radius of convergence of the series equals~$1$.
A curious observation is that, for unknown reasons, the coefficient of~$q^{45}$ vanishes.
One also observes oscillations in the sequence of coefficients, and unimodality
of every (short) constant-sign subsequence of coefficients.
We were unable to find any pattern or to conjecture any functional equation for this series.
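The observations above, in particular the vanishing coefficient of $q^{45}$, can be tested directly. The sketch below again assumes the alternating $q$/$q^{-1}$ continued-fraction formula for $q$-rationals (our reading of the underlying construction); the Taylor coefficients are extracted by hand to keep the computation fast for the large convergent.

```python
import sympy as sp

q = sp.symbols('q')

def q_int(n, qq):
    # q-integer: [n]_qq = 1 + qq + ... + qq**(n-1)
    return sum(qq**k for k in range(n))

def q_cf(a):
    # q-deformation of a regular continued fraction, alternating q and q^{-1}
    x = None
    for i in reversed(range(len(a))):
        qq, s = (q, q**a[i]) if i % 2 == 0 else (1/q, q**(-a[i]))
        x = q_int(a[i], qq) if x is None else q_int(a[i], qq) + s / x
    return sp.cancel(x)

def taylor(expr, N):
    # first N Taylor coefficients of a rational function at q = 0
    num, den = [sp.Poly(p, q).all_coeffs()[::-1] for p in sp.fraction(sp.cancel(expr))]
    num, den = num + [0]*N, den + [0]*N
    assert den[0] != 0
    c = []
    for n in range(N):
        s_n = num[n] - sum(den[k]*c[n - k] for k in range(1, n + 1))
        c.append(s_n / den[0])
    return c

# the 5th convergent pi_5 = [3,7,15,1,292] determines [pi]_q up to degree 317,
# more than enough for the first 46 coefficients
c = taylor(q_cf([3, 7, 15, 1, 292]), 46)
```

The entry `c[45]` is the coefficient of $q^{45}$, whose vanishing is the curious observation made above.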
\section{Translations}\label{TransSec}
In this section we consider the properties of $q$-deformed reals
under translations of the argument.
The action of the translation group~$\mathbb{Z}$ is defined by the operator
$T:x\to{}x+1$ and its inverse, $T^{-1}:x\to{}x-1$.
We first study~$T$ which is simpler, and then~$T^{-1}$ brings us to a new ground.
In particular, we extend the notion of $q$-deformation to negative real numbers.
\subsection{Right translations}
Consider first the action of~$T$.
\begin{prop}
\label{TroP}
If $x\leq1$, then $\left[x+1\right]_q=q\left[x\right]_q+1$.
\end{prop}
\begin{proof}
The statement of the proposition is obvious when $x$ is an integer.
By the explicit formula~\eqref{qa}, the statement then also holds for rational~$x$.
The irrational case follows from the stabilization phenomenon.
\end{proof}
\subsection{The coefficient ``gap''}
\begin{prop}
\label{GaP}
The first order term of series $\left[x\right]_q$ vanishes:
$$
\left[x\right]_q=1+\varkappa_2q^2+\varkappa_3q^3+\cdots
$$
if and only if $1\leq{}x\leq2$.
\end{prop}
\begin{proof}
As in the previous proof, it suffices to prove the statement for rational~$x$.
Let $\frac{r}{s}=\left[a_1,a_2,\ldots,a_{2m}\right]$ be a rational written as a continued fraction.
An easy computation shows that the polynomials $\mathcal{R}=1+r_1q+\cdots$ and $\mathcal{S}=1+s_1q+\cdots$
in the $q$-deformed rational $\left[\frac{r}{s}\right]_q=\frac{\mathcal{R}}{\mathcal{S}}$ have identical first-order coefficients,
$r_1=s_1$, if and only if $a_1=1$.
The result then follows from the formula
$$
\frac{\mathcal{R}}{\mathcal{S}}=\frac{1+r_1q+\cdots}{1+s_1q+\cdots}=(1+r_1q+\cdots)(1-s_1q+\cdots).
$$
\end{proof}
Propositions~\ref{TroP} and~\ref{GaP} together imply Theorem~\ref{GapThm}.
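Both propositions can be illustrated on a concrete rational, say $x=1/2$. The snippet below is a sketch assuming the alternating $q$/$q^{-1}$ continued-fraction formula for $q$-rationals (our reading of the construction, not stated explicitly in this section).

```python
import sympy as sp

q = sp.symbols('q')

def q_int(n, qq):
    # q-integer: [n]_qq = 1 + qq + ... + qq**(n-1)
    return sum(qq**k for k in range(n))

def q_cf(a):
    # q-deformation of a regular continued fraction, alternating q and q^{-1}
    x = None
    for i in reversed(range(len(a))):
        qq, s = (q, q**a[i]) if i % 2 == 0 else (1/q, q**(-a[i]))
        x = q_int(a[i], qq) if x is None else q_int(a[i], qq) + s / x
    return sp.cancel(x)

half, three_half = q_cf([0, 2]), q_cf([1, 2])   # 1/2 = [0,2] and 3/2 = [1,2]

# right translation: [3/2]_q = q [1/2]_q + 1
assert sp.simplify(three_half - (q*half + 1)) == 0

# gap: since 1 <= 3/2 <= 2, the first-order term of the series vanishes
ser = sp.series(three_half, q, 0, 4).removeO()
first_four = [ser.coeff(q, k) for k in range(4)]
```

Here `first_four` starts with $1, 0$, exhibiting the vanishing first-order term of $\left[\frac{3}{2}\right]_q$.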
\subsection{Left translations and $q$-deformed negative numbers}
The heuristic definition
$$
\left[x-1\right]_q:=\frac{\left[x\right]_q-1}{q},
$$
that we adopt now,
leads to classes of $q$-deformed reals that we have not considered so far.
For~$x\geq2$, the action of~$T^{-1}$ simply inverts the formula of Proposition~\ref{TroP}.
The situation changes for $1\leq{}x\leq2$ because of the ``first order gap'' of Proposition~\ref{GaP}.
More precisely, we obtain Laurent series of the following type.
\begin{enumerate}
\item
If $0\leq{}x<1$, then the zero-order term of~$\left[x\right]_q$ vanishes:
$$
\left[x\right]_q=\varkappa_1q+\varkappa_2q^2+\varkappa_3q^3+\cdots
$$
\item
If $-1\leq{}x<0$, then
$$
\left[x\right]_q=-\frac{1}{q}+\varkappa_0+\varkappa_1q+\varkappa_2q^2+\varkappa_3q^3+\cdots
$$
\item
More generally, applying the operator~$T^{-1}$ several times, we obtain the general
form \eqref{GeneralEq} of~$\left[x\right]_q$ in the case $-k\leq{}x<1-k$.
\end{enumerate}
\begin{ex}
(a) If $n$ is a positive integer, then
$\left[-n\right]_q=q^{-n}+q^{1-n}+\cdots+q^{-1}$.
(b) For $x=\frac{1}{2}$ we have $\left[\frac{1}{2}\right]_q=\frac{q}{1+q}$, and for $x=-\frac{1}{2}$ we have $\left[-\frac{1}{2}\right]_q=-\frac{1}{q(1+q)}$,
so that
$$
\left[\frac{1}{2}\right]_q+\left[-\frac{1}{2}\right]_q=-\frac{1}{q}+1.
$$
\end{ex}
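These values are easy to check symbolically. The small sketch below uses only the heuristic definition adopted above and the closed forms stated in the Example.

```python
from sympy import symbols, cancel, simplify

q = symbols('q')

def T_inv(x):
    # heuristic left translation: [x - 1]_q = ([x]_q - 1) / q
    return cancel((x - 1) / q)

half = q / (1 + q)          # [1/2]_q, as in the Example
minus_half = T_inv(half)

assert simplify(minus_half + 1/(q*(1 + q))) == 0      # [-1/2]_q = -1/(q(1+q))
assert simplify(half + minus_half - (1 - 1/q)) == 0   # the sum equals -1/q + 1

# [-n]_q = q^{-n} + ... + q^{-1}: translate [0]_q = 0 three times
x = 0
for _ in range(3):
    x = T_inv(x)
```

After the loop, `x` is the rational function $\left[-3\right]_q=q^{-3}+q^{-2}+q^{-1}$.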
We calculated the series corresponding to the negations of several quadratic irrationals
and observed the following property:
for some of them, the sum $\left[x\right]_q+\left[-x\right]_q$ contains only finitely many terms.
\begin{ex}
\label{UltimEx}
(a) For $-\sqrt{2}$, we have
$$
\left[-\sqrt{2}\right]_q=-\frac{1}{q^2}-1+q-q^{3}+2q^{5}
-q^6- 4q^7+5q^8+7q^9-18q^{10} - 7q^{11}+ 55q^{12}- 18q^{13}\cdots
$$
Starting from the third-order term, this series is the negation of~$\left[\sqrt{2}\right]_q$.
(b) For $-\sqrt{7}$, we have
$$
\left[-\sqrt{7}\right]_q=-\frac{1}{q^3}-\frac{1}{q^2}-1+q^2-q^3+q^4-2q^5
+3q^6-4q^7+6q^8-8q^9+9q^{10}-9q^{11}+5q^{12}+9q^{13}\cdots
$$
Once again, starting from the third-order term, the series is the negation of~$\left[\sqrt{7}\right]_q$.
\end{ex}
This property is a mystery to us, since, as we can see already for $x=\frac{7}{5}$, it fails to be true even for rationals.
Note also that nothing similar to Example~\ref{UltimEx} happens for more sophisticated irrationals.
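The failure at $x=\frac{7}{5}$ can be verified symbolically: if $\left[\frac{7}{5}\right]_q+\left[-\frac{7}{5}\right]_q$ had only finitely many terms, it would be a Laurent polynomial, so after cancellation its denominator would be a pure power of $q$. A sketch (assuming, as before, the alternating continued-fraction formula for $q$-rationals, together with the translation rule $\left[x-1\right]_q=(\left[x\right]_q-1)/q$):

```python
import sympy as sp

q = sp.symbols('q')

def q_int(n, qq):
    # q-integer: [n]_qq = 1 + qq + ... + qq**(n-1)
    return sum(qq**k for k in range(n))

def q_cf(a):
    # q-deformation of a regular continued fraction, alternating q and q^{-1}
    x = None
    for i in reversed(range(len(a))):
        qq, s = (q, q**a[i]) if i % 2 == 0 else (1/q, q**(-a[i]))
        x = q_int(a[i], qq) if x is None else q_int(a[i], qq) + s / x
    return sp.cancel(x)

pos = q_cf([1, 2, 2])                                  # [7/5]_q,  7/5 = [1,2,2]
neg = sp.cancel((q_cf([0, 1, 1, 2]) - 1 - q) / q**2)   # [-7/5]_q = T^{-2}[3/5]_q
den = sp.fraction(sp.cancel(pos + neg))[1]
laurent = sp.Poly(den, q).length() == 1   # True iff the sum is a Laurent polynomial
```

For $x=\frac{7}{5}$ the flag `laurent` comes out `False`: the denominator keeps a non-monomial factor, so the sum has infinitely many terms.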
\begin{ex}
\label{UltimExBis}
For $-\pi$, we have the series that starts as follows
$$
\begin{array}{rcl}
\left[-\pi\right]_q&=&-\frac{1}{q^4}-\frac{1}{q^2}-\frac{1}{q}-q^{3}+q^{4}
-q^{10}+q^{11}-q^{17}+2q^{18}-3q^{19}+3q^{20}-q^{21}-q^{24}+3q^{25}-6q^{26}\\[4pt]
&&+6q^{27}-2q^{28}-q^{31}+4q^{32}-9q^{33}+10q^{34}-6q^{35}+3q^{36}-q^{37}-q^{38}+5q^{39}-13q^{40}\cdots
\end{array}
$$
and seems to have nothing in common with~$\left[\pi\right]_q$.
\end{ex}
\bigbreak \noindent
{\bf Acknowledgements}.
We are grateful to Vladlen Timorin for
a suggestion to look at Taylor coefficients.
We are also grateful to Fr\'ed\'eric Chapoton for a SAGE program computing $q$-rationals and to
Doron Zeilberger for the proof of the functional equations formulated as conjectures in the first version of this paper.
This paper was partially supported by the ANR project SC3A, ANR-15-CE40-0004-01.
% End of ``On $q$-deformed real numbers'' (arXiv:1908.04365).
% https://arxiv.org/abs/1806.08844
\title{A linear state feedback switching rule for global stabilization of switched nonlinear systems about a nonequilibrium point}
\begin{abstract}
A switched equilibrium of a switched system of two subsystems is a point where the vector fields of the two subsystems point strictly towards one another. Using the concept of stable convex combination that was developed by Wicks-Peleties-DeCarlo (1998) for linear systems, Bolzern-Spinelli (2004) offered a design of a state feedback switching rule capable of stabilizing an affine switched system to any switched equilibrium. The state feedback switching rule of Bolzern-Spinelli gives a nonlinear (quadratic) switching threshold passing through the switched equilibrium. In this paper we prove that the switching threshold (i.e. the associated switching rule) can be chosen linear if each of the subsystems of the switched system under consideration is stable.
\end{abstract}
\section{Introduction}\label{sec:int}
Using the concept of stable convex combination that was developed by Wicks et al \cite{eur} for linear systems, Bolzern-Spinelli \cite{bol} offered a design of a state feedback switching rule capable of stabilizing an affine switched system\footnote{Bolzern-Spinelli \cite{bol} actually considered the slightly more general case $\sigma:[0,\infty)\to\{1,\ldots,m\}$, but in this paper we stick to just two discrete states.}
\begin{equation}\label{ls}
\dot x=A^\sigma x+b^\sigma, \quad x\in\mathbb{R}^n,\ \ \sigma\in\{-1,1\}
\end{equation}
to any point $x_0$ (called {\it switched equilibrium}) that satisfies
\begin{equation}\label{se}
\lambda \left(A^+ x_0+b^+\right)+(1-\lambda)\left(A^- x_0+b^-\right)=0,
\end{equation}
for some $\lambda\in[0,1].$ If the matrix
$\lambda A^++(1-\lambda)A^-$
is Hurwitz,
then, according to Bolzern-Spinelli \cite{bol}, the switching signal $\sigma(x)$ can be defined as
\begin{equation}\label{rule1} \def1.5{1.2}
\begin{array}{l} \sigma(x)={\rm arg}\min\limits_{i\in\{-1,1\}}\{V'(x)(A^i x+b^i)\}=\\
\qquad ={\rm sign}\left(V'(x)(A^- x+b^-)-V'(x)(A^+ x+b^+)\right),
\end{array}
\end{equation}
where
$V$ is the quadratic Lyapunov function of the linear system
$$
\dot x= \lambda \left(A^+ x+b^+\right)+(1-\lambda)\left(A^- x+b^-\right).
$$
When $A^-=A^+$, the rule (\ref{rule1}) reduces to
\begin{equation}\label{rule4}
\sigma(x)={\rm sign}\left(V'(x)b^--V'(x)b^+\right),
\end{equation}
whose switching threshold $\left\{x\in\mathbb{R}^n:V'(x)b^--V'(x)b^+=0\right\}\ni x_0$ is a hyperplane, but in general the state feedback switching rule (\ref{rule1}) gives a nonlinear switching threshold (a quadratic surface) passing through the switched equilibrium $x_0.$
\vskip0.2cm
\noindent In this paper we provide a wider class of switched systems (\ref{ls}) that can be stabilized to a switched equilibrium by a linear switching rule. Specifically, we
show that the nonlinear switching rule (\ref{rule1}) can always be replaced with the linear one
\begin{equation}\label{rulelin}
\sigma(x)={\rm sign}\left<x-x_0,\left[V''(x_0)(A^-x_0+b^-)\right]^T\right>,
\end{equation}
when
the subsystems $\dot x=A^+x$ and $\dot x=A^-x$ admit a common quadratic Lyapunov function. Here $V''(x_0)$ doesn't depend on $x_0$ because $V$ is assumed quadratic. We also note that (\ref{rulelin}) coincides with (\ref{rule4}) when $A^-=A^+.$
\vskip0.2cm
\noindent The paper is organized as follows. In the next section we discuss the main idea behind the switching rule (\ref{rule1}), which is based on the construction of suitable sets $\Omega^-$ and $\Omega^+$ such that any switching rule $\sigma(x)$ with the property
$$
\sigma(x)=\left\{\begin{array}{lll}
-1 & {\rm if} & x\in\Omega^-,\\
1 & {\rm if} & x\in\Omega^+,
\end{array}\right.
$$
stabilizes (\ref{ls}) to $x_0.$
In section~3 we prove our main result (Theorem~\ref{thm1}), which offers a linear state feedback switching rule to stabilize a nonlinear switched system
\begin{equation}\label{nsw}
\dot x = f^\sigma(x), \quad x\in\mathbb{R}^n,\ \ \sigma\in\{-1,1\},
\end{equation}
to a switched equilibrium $x_0$.
We recall that, according to Demidovich \cite[Ch.~IV, \S281]{dem}, nonlinear systems (\ref{nsw}) admit a common quadratic Lyapunov function if
the symmetrized derivative
$$
f^\sigma_x(x)+\left[f^\sigma_x(x)\right]^T
$$
is negative definite uniformly in $x\in\mathbb{R}^n$ and $\sigma\in\{-1,1\},$ see also Pavlov et al \cite{pav}.
The switching rule (\ref{rule2}) proposed in Theorem~\ref{thm1} takes the form (\ref{rulelin}) when switched system (\ref{nsw}) is affine. The main discovery used in Theorem~\ref{thm1} is that, for subsystems of (\ref{nsw}) that admit a common quadratic Lyapunov function, the boundaries of $\Omega^-$ and $\Omega^+$ are contained in ellipsoids that touch one another at the point $x_0,$ see Fig.~\ref{OmegaL}. The proof uses a standard Lyapunov stability theorem that is also implicitly used in Bolzern-Spinelli \cite{bol}. Specifically, we use a Lyapunov stability theorem for Filippov systems with smooth Lyapunov functions, which is a particular case of more general results available e.g. in
Shevitz-Paden \cite{paden1} or M.-Aguilara-Garcia \cite{garcia}. But since deriving the required Lyapunov theorem (Theorem~\ref{th6}) from \cite{garcia,paden1} is not very straightforward (and since we did not find the exact required statement elsewhere in the literature), we include a proof for completeness in the Appendix.
\vskip0.2cm
\noindent In section~4 we consider an application of Theorem~\ref{thm1} to a model of boost converter and, for illustration purposes, also implement the Bolzern-Spinelli rule (\ref{rule1}) for the same model. Some further discussion on when the switching rule (\ref{rulelin}) coincides with (\ref{rule1}) is carried out in the conclusions section.
\section{The idea of Wicks et al \cite{eur} and Bolzern-Spinelli \cite{bol}}
\noindent Recall that $x_0$ is a switched equilibrium for the nonlinear switched system (\ref{nsw}), if there exists $\lambda_0\in[0,1]$ such that
\begin{equation}\label{barlambda}
\lambda_0 f^-(x_0)+(1-\lambda_0)f^+(x_0)=0.
\end{equation}
\noindent Assume that the equilibrium $x_0$ of the convex combination
\begin{equation}\label{onesystem}
\dot x= \lambda_0 f^-(x)+(1-\lambda_0)f^+(x).
\end{equation} is asymptotically stable and let $V$ be the respective Lyapunov function satisfying
\begin{equation}\label{nocommonV}
\begin{array}{l}
V'(x)\left( \lambda_0 f^-(x)+(1-\lambda_0) f^+(x)\right)<0\quad\mbox{for all}\ x\not=x_0.
\end{array}
\end{equation}
\vskip0.2cm
\noindent The fundamental idea of Bolzern-Spinelli \cite{bol} (who extended Wicks et al \cite{eur} to affine linear systems) is that for (\ref{nsw}) to stabilize to $x_0$, the switching rule $\sigma(x)$ must take the value $\sigma(x)=-1$ in the region
\begin{equation}\label{Omega-}
\Omega^-=\left\{x:V'(x)f^-(x)<0\right\}
\end{equation}
and the value $\sigma(x)=+1$ in the region
\begin{equation}\label{Omega+}
\Omega^+=\left\{x:V'(x)f^+(x)<0\right\}.
\end{equation}
\begin{figure}[t]\center
\includegraphics[scale=0.75]{OmegaLR.pdf}
\caption{\footnotesize Relative locations of sets $\Omega^L$ and $\Omega^R.$} \label{fig0}
\end{figure}
The following lemma discusses the geometry of the intersection $\Omega^-\cap\Omega^+$; in particular, it clarifies that there are situations where one cannot draw a hyperplane in $\Omega^-\cap\Omega^+$ passing through $x_0$ (Fig.~\ref{fig0}a) and situations where one can (Fig.~\ref{fig0}b). The existence of a hyperplane in
$\Omega^-\cap\Omega^+$ passing through $x_0$ corresponds to the existence of a linear switching rule $\sigma(x)$ that stabilizes (\ref{nsw}) to $x_0.$
Therefore, what this paper will really prove in Section~3 is that it is Fig.~\ref{fig0}b which takes place when both of the subsystems of (\ref{nsw}) are stable.
\begin{lem}\label{ww} (ideas of \cite{eur,bol}) Consider $f^-,\ f^+\in C^1(\mathbb{R}^n,\mathbb{R}^n)$.
Let $x_0$ be a switched equilibrium for the vector fields $f^-$ and $f^+$, i.e. (\ref{barlambda}) holds. Assume that the equilibrium $x_0$ of system (\ref{onesystem}) is asymptotically stable and the respective Lyapunov function $V\in C^1(\mathbb{R}^n,\mathbb{R})$ satisfies (\ref{nocommonV}). Then, the sets $\Omega^-$ and $\Omega^+$ satisfy the properties:
\begin{itemize}
\item [{\rm 1)}] $\Omega^-\cup\Omega^+\cup \{x_0\}=\mathbb{R}^n,$\ \ $\overline{\Omega^-}\cup\overline{\Omega^+}=\mathbb{R}^n,$
\item[{\rm 2)}] $\partial \Omega^-\backslash\{x_0\}\subset\Omega^+$,\ $\partial \Omega^+\backslash\{x_0\}\subset\Omega^-$,
\item[{\rm 3)}] $x_0\in\partial \Omega^-,$ $x_0\in\partial \Omega^+.$
\end{itemize}
\end{lem}
\noindent{\bf Proof.} {\bf Part 1.} Follows directly from (\ref{nocommonV}).
\vskip0.2cm
\noindent {\bf Part 2.} Consider $x\in\partial \Omega^-\backslash\{x_0\}$. Then $x\not\in\Omega^-$ because $\Omega^-$ is open, and hence $x\in\Omega^+$ by Part~1. The property $\partial \Omega^+\backslash\{x_0\}\subset\Omega^-$ can be proved by analogy.
\vskip0.2cm
\noindent {\bf Part 3.} It is sufficient to show that $V'(x_0)=0$. To observe this, fix an arbitrary $j\in\overline{1,n}$ and consider the vector $\xi^j\in\mathbb{R}^n$ defined as $\xi_i^j=0,$ $i\not= j$, and $\xi_j^j=1.$ Since $V(x)>0$, $x\not=x_0$, we have
$$
\begin{array}{l}
0<V(x_0+k \xi^j)-V(x_0)=V'(x_0+k_*\xi^j)\xi^j\cdot k=\\
\qquad =\frac{\partial V}{\partial x_j}(x_0+k_*\xi^j)k,\\
0<V(x_0-k \xi^j)-V(x_0)=-V'(x_0-k_{**}\xi^j)\xi^j\cdot k=\\
\qquad=-\frac{\partial V}{\partial x_j}(x_0-k_{**}\xi^j)k,
\end{array}
$$
for any $k>0$ and for some $k_*,k_{**}\in[0,k]$ (that depend on $k$). Passing to the limit as $k\to 0$, one gets $\frac{\partial V}{\partial x_j}(x_0)=0$.
\vskip0.2cm
\noindent The proof of the lemma is complete.\qed
\section{The main result}
\noindent In this section we assume that the switched equilibrium $x_0$ admits a common quadratic Lyapunov function
$$
V(x)=(x-x_0)^TP(x-x_0)
$$ with respect to each of the two systems
\begin{equation}\label{asystems}
\dot x=f^-(x)-f^-(x_0)\quad\mbox{and}\quad \dot x=f^+(x)-f^+(x_0),
\end{equation}
where $P$ is an $n\times n$ symmetric matrix and the following standard properties hold:
\begin{equation}\label{acommonV}
\begin{array}{l}
V'(x)\left(f^-(x)-f^-(x_0)\right)\le -\alpha\|x-x_0\|^2,\\ V'(x)\left(f^+(x)-f^+(x_0)\right)\le -\alpha\|x-x_0\|^2,
\end{array}
\end{equation}
for some fixed constant $\alpha>0$.
\vskip0.2cm
\begin{thm} \label{thm1} \it
Consider $f^-,\ f^+\in C^1(\mathbb{R}^n,\mathbb{R}^n)$.
Let $x_0$ be a switched equilibrium for the vector fields $f^+$ and $f^-$, i.e. (\ref{barlambda}) holds.
Assume that the systems of (\ref{asystems}) admit a common quadratic Lyapunov function $V\in C^2(\mathbb{R}^n,\mathbb{R})$ that satisfies (\ref{acommonV}). Then the switching signal
\begin{equation}\label{rule2}
\sigma(x)={\rm sign}\left<x-x_0,\left[V''(x_0)f^-(x_0)\right]^T\right>
\end{equation}
makes $x_0$ a quadratically globally stable switched equilibrium of the switched system (\ref{nsw}).
\end{thm}
\noindent Note that rule (\ref{rule2}) takes the form (\ref{rulelin}) when the nonlinear switched system (\ref{nsw}) takes the form (\ref{ls}). Also, using (\ref{barlambda}) the switching rule (\ref{rule2}) can be rewritten as
$$
\sigma(x)={\rm sign}\left<x-x_0,\left[V''(x_0)\left(f^-(x_0)-f^+(x_0)\right)\right]^T\right>.
$$
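To see the rule in action, here is a toy illustration of ours (the data below are not taken from the paper): two Hurwitz affine subsystems whose symmetrized matrices are negative definite, so that $V(x)=\|x-x_0\|^2$ (i.e. $P=I$) is a common quadratic Lyapunov function, with the switched equilibrium placed at $x_0=(1,0)$ and $\lambda_0=1/2$. An explicit Euler loop driven by the sign test of (\ref{rule2}) then steers the state to $x_0$ up to discretization chatter.

```python
import numpy as np

# toy data (ours): A + A^T is negative definite for both subsystems
Am = np.array([[-1.0, 0.0], [0.0, -2.0]])   # A^-
Ap = np.array([[-2.0, 1.0], [-1.0, -1.0]])  # A^+
x0 = np.array([1.0, 0.0])
fm0 = np.array([0.0, 1.0])                  # chosen value of f^-(x0)
bm = fm0 - Am @ x0                          # then f^-(x0) = (0, 1)
bp = -fm0 - Ap @ x0                         # and (barlambda) holds with lambda0 = 1/2

n = 2.0 * fm0                               # V''(x0) f^-(x0), since V'' = 2I

x, dt = np.array([3.0, 2.0]), 1e-3
for _ in range(20000):                      # explicit Euler with the linear rule
    sigma = np.sign(np.dot(x - x0, n))
    f = (Am @ x + bm) if sigma < 0 else (Ap @ x + bp)
    x = x + dt * f
```

Away from the switching hyperplane the state follows the active subsystem; on the hyperplane the discretization chatters around the sliding motion, so the final error is of the order of the step size.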
\vskip0.2cm
\noindent In order to prove the theorem, we
introduce two sets
$$
\begin{array}{l}
\Omega^-_\alpha=\left\{x\in\mathbb{R}^n:-\alpha\|x-x_0\|^2+V'(x)f^-(x_0)<0\right\},\\
\Omega^+_\alpha=\left\{x\in\mathbb{R}^n:-\alpha\|x- x_0\|^2+V'(x)f^+(x_0)<0\right\}
\end{array}
$$
and establish the following lemma about the relative properties of the sets $\Omega^i_\alpha$ and $\Omega^i$ as introduced in (\ref{Omega-})-(\ref{Omega+}).
\begin{lem}\label{wwa} Assume that the conditions of Theorem~\ref{thm1} hold. Then $\Omega^-_\alpha$ and $\Omega^+_\alpha$ verify the following properties:
\begin{itemize}
\item[1)] $\Omega^-\supset \Omega^-_\alpha,$\ \ $\Omega^+\supset \Omega^+_\alpha,$
\item[2)] $x_0\in\partial \Omega^-_\alpha,$ \ \ $x_0\in\partial\Omega^+_\alpha$,
\item[3)] both $\partial\Omega^-_\alpha$ and $\partial\Omega^+_\alpha$ are ellipsoids,
\item[4)] hyperplane $\sigma(x)=0$ is tangent to both $\Omega^-_\alpha$ and $\Omega^+_\alpha$ at $x_0,$
\item[5)] $\Omega^-_\alpha\subset \left\{x:\sigma(x)<0\right\},$ \ $\Omega^+_\alpha\subset \left\{x:\sigma(x)>0\right\}.$
\end{itemize}
\end{lem}
\begin{figure}[t]\center
\includegraphics[scale=0.75]{OmegaL-rep.pdf}
\caption{\footnotesize Top figure: Locations of the boundaries of $\Omega^+,$ $\Omega^+_\alpha$, $\Omega^-,$ $\Omega^-_\alpha$ with respect to the hyperplane $\sigma(x)=0$ and with respect to each other.
Bottom figures: The sets $\Omega^+$ and $\Omega^+_\alpha$ (grey regions).} \label{OmegaL}
\end{figure}
\noindent The notations and statements of Lemma~\ref{wwa} are illustrated at Fig.~\ref{OmegaL}.
\vskip0.2cm
\noindent {\bf Proof.} {\bf Part 1.} Let $x\in\Omega_\alpha^-$. Then
$$
\begin{array}{l}
V'(x)f^-(x)=V'(x)(f^-(x)-f^-(x_0))+V'(x)f^-(x_0)\le \\
\le -\alpha\|x-x_0\|^2+V'(x)f^-(x_0)<0.
\end{array}
$$
Therefore, $x\in\Omega^-$. The proof of the inclusion $\Omega_\alpha^+\subset\Omega^+$ is analogous.
\vskip0.2cm
\noindent {\bf Part 2.} Follows from $V'(x_0)=0$ established in the proof of Part~3 of Lemma~\ref{ww}.
\vskip0.2cm
\noindent {\bf Part 3.} We execute the proof for $x_0=0$. The proof in the general case doesn't differ. The change of the coordinates $y=x-\Delta$ transforms the equation
$$
-\alpha\|x-x_0\|^2+V'(x)f^-(x_0)=0
$$
into
$$
-\alpha\|y\|^2-2\alpha\left<\Delta,y\right>+2\left<Pf^-(0),y\right>-\alpha\|\Delta\|^2+2\left<\Delta,Pf^-(0)\right>=0.
$$
If $\Delta=\frac{Pf^-(0)}{\alpha},$ then we further get
$$
-\alpha\|y\|^2-\frac{1}{\alpha}\|Pf^-(0)\|^2+\frac{2}{\alpha}\|Pf^-(0)\|^2=0,
$$
which is the equation of a sphere centered at $0$ of radius $\frac{1}{\alpha}\|Pf^-(0)\|.$
\vskip0.2cm
\noindent The proof for $\partial\Omega_\alpha^+$ is analogous.
\vskip0.2cm
\noindent {\bf Part 4.} This follows from the equality
$$
\left.\frac{d}{dx}\left(-\alpha\|x-x_0\|^2+V'(x)f^-(x_0)\right)\right|_{x=x_0}=V''(x_0)f^-(x_0).
$$
and the property (\ref {barlambda}) of switched equilibrium.
\vskip0.2cm
\noindent {\bf Part 5.} Let $H(x)=-\alpha\|x-x_0\|^2+V'(x)f^-(x_0).$ The interior of the ellipsoid $\partial \Omega_\alpha^-$ corresponds to $H(x)>0$. Therefore, the exterior of the ellipsoid $\partial \Omega_\alpha^-$ (which, by definition, coincides with the set $\Omega_\alpha^-$) corresponds to $H(x)<0.$ This proves the statement of Part~5 for $\Omega_\alpha^-.$
Since $(1-\lambda_0)f^+(x_0)=-\lambda_0 f^-(x_0)$ by (\ref{barlambda}), the proof for $\Omega_\alpha^+$ follows the same lines.
\vskip0.2cm
\noindent The proof of the lemma is complete. \qed
\vskip0.2cm
\noindent The proof of our main result uses the following Lyapunov stability theorem for discontinuous systems with smooth Lyapunov functions, which is implicitly used in \cite{eur,bol}.
\begin{thm}\label{th6}{\bf (Lyapunov stability theorem for discontinuous systems with smooth Lyapunov functions)} {\rm (similar to \cite[Theorem 3.1]{paden1}, \cite[Theorem 2.3]{garcia})} Consider a system of differential equations with discontinuous right-hand-side
\begin{equation}\label{fil}
\hskip-0.65cm \dot x= g(x),\ \ {\rm with} \ \ g(x)=\left\{\begin{array}{l}
g^+(x),\ {\rm if}\ H(x)>0,\\
g^-(x),\ {\rm if}\ H(x)<0,
\end{array}\right.\ \ x\in\mathbb{R}^n,
\end{equation}
where $g^-,$ $g^+$, and $H$ are $C^1$-functions. Consider $x_0\in\mathbb{R}^n$ satisfying $H(x_0)=0.$ Let $V$ be a $C^1$-smooth Lyapunov function with $V(x_0)=0$ and $V(x)>0$ for $x\not=x_0.$
Consider a piecewise continuous scalar function $x\mapsto w(x)$, strictly positive for $x\not=x_0$, such that for any $\rho>0$ there exists $\varepsilon>0$ for which $w(x)\ge \varepsilon$ whenever $\|x-x_0\|\ge\rho.$ If
$$
V'(x)\xi\le-w(x)\quad\mbox{for any}\ \xi\in K[g](x),\ \mbox{and any}\ x\not=x_0,
$$
then $x_0$ is a globally asymptotically stable stationary point of (\ref{fil}). Here $K[g](x)$ stands for the Filippov convexification of the discontinuous function $g$ at $x$, see e.g. Shevitz-Paden \cite{paden1}.
\end{thm}
\noindent The proof of Theorem~\ref{th6} is given in Appendix.
\vskip0.2cm
\noindent {\bf Proof of Theorem~\ref{thm1}.} We will show that the conditions of Theorem~\ref{th6} hold with
$$w(x)=\left\{\begin{array}{ll}
-V'(x)f^-(x), & \sigma(x)<0,\\
-\max\{V'(x)f^-(x),V'(x)f^+(x)\}, & \sigma(x)=0, \\
-V'(x)f^+(x), & \sigma(x)>0.
\end{array}\right.
$$
If $x\in \overline{D^-}\backslash\{x_0\}$, where $D^-=\{x:\sigma(x)<0\}$, then $x\in \Omega^-_\alpha\subset \Omega^-$ by statements 5 and 1 of Lemma~\ref{wwa} (statement 5 shows that $\overline{D^-}$ does not meet $\Omega^+_\alpha$, while $\Omega^-_\alpha\cup\Omega^+_\alpha\cup\{x_0\}=\mathbb{R}^n$ by the argument of Part~1 of Lemma~\ref{ww}), which implies $w(x)>0$. Analogously, $w(x)>0$ if $x\in \overline{D^+}\backslash\{x_0\}$, where $D^+=\{x:\sigma(x)>0\}.$ Therefore $w$ is strictly positive away from $x_0$ and satisfies the lower-bound assumption of Theorem~\ref{th6}.
\vskip0.2cm
\noindent Since
$K[f](x)=\{f^-(x)\},$ when $\sigma(x)<0$, and $K[f](x)=\{f^+(x)\},$ when $\sigma(x)>0$, then condition $V'(x)\xi\le -w(x)$ of Theorem~\ref{th6} holds for $\sigma(x)\not=0.$
\vskip0.3cm
\noindent Consider $\sigma(x)=0.$ Then each $\xi\in K[f](x)$ has the form
$
\xi=\lambda f^-(x)+(1-\lambda)f^+(x),
$ where $\lambda$ is a constant from the interval $[0,1].$ We have
\begin{eqnarray*}
V'(x)\xi&=&\lambda V'(x)f^-(x)+(1-\lambda)V'(x)f^+(x)\le\\
&& \le\max\{V'(x)f^-(x),V'(x)f^+(x)\}=-w(x),
\end{eqnarray*}
that completes the proof of the theorem. \qed
\section{Application to a model of boost converter}
\begin{figure}[t]\center
\includegraphics[scale=0.75]{buck1.pdf}
\caption{\footnotesize Boost converter from Fribourg-Soulat \cite{boost2} and Beccuti et al \cite{boost1}.} \label{buck}
\end{figure}
\noindent Consider a dc-dc boost converter of Fig.~\ref{buck} with a switching feedback $\sigma(x).$ Denoting
the inductor current $i_L$ by $x_1$ and the capacitor voltage $u_C$ by $x_2$, the differential equations of the converter read as (see e.g. Fribourg-Soulat \cite{boost2}, Beccuti et al \cite{boost1})
\begin{equation}\label{ex}\def1.5{1.5}
\hskip-0.6cm \dot x=\left(\begin{array}{cc} -\frac{r_L}{x_L} & -\frac{r_0}{x_L(r_0+r_C)}\sigma \\
\frac{r_0}{x_C(r_0+r_C)}\sigma & -\frac{1}{x_C(r_0+r_C)}\end{array}\right)x+\left(\begin{array}{c} \frac{u_s}{x_L} \\ 0\end{array}\right),\ \ \sigma\in\{0,1\}.
\end{equation}
Let us view the right-hand-side of (\ref{ex}) with $\sigma=0$ and $\sigma=1$ as $f^-(x)$ and $f^+(x)$ respectively.
The equation (\ref{barlambda}) for switched equilibrium $x_0$ yields
\begin{equation}\label{ex1}\def1.5{1.5}
\hskip-0.6cm\begin{array}{l}
-r_Lx_{01}+u_s-(1-\lambda_0)\frac{r_0 r_C}{r_0+r_C}x_{01}-(1-\lambda_0)\frac{r_0}{r_0+r_C}x_{02}=0,\\
-x_{02}+(1-\lambda_0)r_0x_{01}=0,
\end{array}
\end{equation}
which can be solved for $(x_{01},\lambda_0)$ when the reference voltage $x_{02}$ is fixed.
The conditions of Theorem~\ref{thm1} hold with the Lyapunov function
$$
V(x)=\frac{1}{2x_C}(x_1-x_{01})^2+\frac{1}{2x_L}(x_2-x_{02})^2.
$$
Therefore,
$$
V''(x_0)f^-(x_0)=\left(\frac{1}{x_C}\left(-\frac{r_L}{x_L}x_{01}+\frac{u_s}{x_L}\right),\frac{1}{x_L}\left(-\frac{1}{x_C(r_0+r_C)}x_{02}\right)\right),
$$
whose transpose will be denoted by $n.$
Plugging $n$ into (\ref{rule2}), we conclude that any point $x_0$ that satisfies the switched equilibrium condition (\ref{ex1}) with $\lambda_0\in(0,1)$, can be stabilized using the switching rule
\begin{equation}\label{rule3}
\sigma(x)=\left\{\begin{array}{lll} 1, & {\rm if} & (x-x_0)n>0,\\
0, & {\rm if} & (x-x_0)n<0.
\end{array}\right.
\end{equation}
\begin{figure}[t]\center
\vskip-0.15cm
\includegraphics[scale=0.6]{newsim1.pdf}\\
\vskip0.4cm
\includegraphics[scale=0.6]{newsim2.pdf}
\caption{\footnotesize The solution (bold curve) of switched system (\ref{ex}) with the initial condition $x(0)=0$, the parameters (\ref{para}), and the switching signal $\sigma(x)$ given by (\ref{rule3}) (top figure) and by (\ref{spi}) (bottom figure). The thin curve is the switching manifold $\sigma(x)=0$ and the bold point is the switched equilibrium $x_0$. } \label{simfig}
\end{figure}
\noindent An implementation of switching rule (\ref{rule3}) with the parameters
\begin{equation}\label{para}
\hskip-0.7cm r_L=20,\ r_C=5,\ x_L=600,\ x_C=70,\ r_0=200,\ u_s=8,
\end{equation}
and the reference voltage $x_{02}=10$ (which, when plugged into (\ref{ex1}), yields $x_{01}=0.079$ and $\lambda_0=0.367$ as one of the two possible solutions)
is given in Fig.~\ref{simfig} (top).
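The switched-equilibrium computation admits a quick numerical cross-check (a sketch; the elimination step is ours): the second equation of (\ref{ex1}) gives $(1-\lambda_0)=x_{02}/(r_0x_{01})$, and substituting this into the first equation and multiplying by $x_{01}$ yields a quadratic equation in $x_{01}$.

```python
import numpy as np

# parameters (para) and the reference voltage
rL, rC, r0, us, x02 = 20.0, 5.0, 200.0, 8.0, 10.0

# quadratic a*x01**2 + b*x01 + c = 0 obtained after eliminating (1 - lambda0)
k = r0 / (r0 + rC)
a, b, c = -rL, us - (x02/r0)*k*rC, -(x02/r0)*k*x02
x01 = (-b + np.sqrt(b*b - 4*a*c)) / (2*a)   # the root matching lambda0 = 0.367
lam = 1 - x02 / (r0*x01)

# residuals of both equations of (ex1)
res1 = -rL*x01 + us - (1-lam)*k*rC*x01 - (1-lam)*k*x02
res2 = -x02 + (1-lam)*r0*x01
```

Both residuals vanish to machine precision, and the second admissible root of the quadratic gives the other solution mentioned in the text.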
\vskip0.2cm
\noindent For comparison, Fig.~\ref{simfig} (bottom) shows stabilization of (\ref{ex}) to the switched equilibrium $x_0=(0.079,10)$ using the switching rule (\ref{rule1}), which can be shown to simplify to
\begin{equation}\label{spi}
\sigma(x)={\rm sign}\left(r_Lr_Cx_1^2-(x_{01}r_Lr_C-x_{02})x_1-x_{01}x_2\right).
\end{equation}
\noindent The parameters (\ref{para}) are slightly artificial, but simulations similar to Fig.~\ref{simfig} are obtained with more realistic parameters,
e.g. those of \cite{boost1}, \cite{boost2}, or \cite{boost3}. The parameters (\ref{para}) are chosen in such a way that the nonlinear behavior of the Bolzern-Spinelli rule (\ref{spi}) is clearly seen in Fig.~\ref{simfig} (bottom). The top and bottom figures of Fig.~\ref{simfig} turn out to be indistinguishable (on the screen) for the parameters from \cite{boost1,boost2,boost3}.
\section{Conclusions} \noindent In this paper we showed that the switching rule (\ref{rule1}) of Bolzern-Spinelli \cite{bol} for quadratic stabilization of a switched equilibrium $x_0$ of switched system (\ref{ls}) can be replaced by the linear switching rule (\ref{rulelin}) when the subsystems of (\ref{ls}) admit a common quadratic Lyapunov function. Moreover, our main result (Theorem~\ref{thm1}) applies to nonlinear switched systems (\ref{nsw}), complementing the work by Mastellone et al \cite{spong}, which proposes a nonlinear extension of Bolzern-Spinelli \cite{bol} in the case where the subsystems of (\ref{nsw}) are shifts of one another (at the same time, the work \cite{spong} addresses the case of an arbitrary number of subsystems, while the present paper focuses on just two subsystems).
\vskip0.2cm
\noindent We would like to note that the seemingly nonlinear switching rule (\ref{rule1}) of Bolzern-Spinelli \cite{bol} simplifies to a linear one in wide classes of particular applications, e.g. in applications to buck converters (see e.g. Lu et al \cite{lu}), where $A^+=A^-$ in (\ref{ls}), or in applications to boost converters of Fig.~\ref{buck} with neglected resistance $r_C$ of the capacitor (see e.g. Schild et al \cite{sch}). Still, the switching rule (\ref{rule1}) stays nonlinear in other classes of applications, e.g. in more general boost converters such as the one of Fig.~\ref{buck} or its further extensions (see Gupta-Patra \cite{boost3} and references therein). In these classes of applications the linear switching rules (\ref{rulelin}) and (\ref{rule2}) proposed in this paper may simplify the engineering implementation of the feedback control.
\section{Appendix: Lyapunov stability theorem for discontinuous systems with smooth Lyapunov functions}
\noindent {\bf Proof of Theorem~\ref{th6}.} Let $x$ be a Filippov solution of (\ref{fil}), see e.g. Shevitz-Paden \cite{paden1}. We pick $\rho>0$ and prove that $x(t)\in \overline{W_\rho}$ for all $t$ beyond some $t=t_\rho,$ where
$
{W_\rho}=\{x\in\mathbb{R}^n:V(x)< \rho\}.
$
\vskip0.2cm
\noindent {\bf Step 1.} Let $r>0$ be a constant such that $x(0)\in \partial W_r.$ We claim that $x(t)\in W_r$ for all $t>0.$
We argue by contradiction: assume that $x(\tau)\not\in W_r$ for some $\tau>0.$ Without loss of generality we can assume that $x([0,\tau])\subset W,$ where $W$ is an open neighborhood of $\overline{W_r}$ such that $w(x)$ is strictly positive in $W\backslash\{x_0\}.$ For the function
$
v(t)=V(x(t))
$
we have
\begin{equation}\label{ftfftf}
v(0)=r\quad\mbox{and}\quad v(\tau)\ge r.
\end{equation}
\noindent {\bf Step 1.1} We claim that $v(t)> r/2$ for all $t\in[0,\tau]$. Indeed, if the latter is wrong, then defining
$
s=\max\left\{t\in[0,\tau]:v(t)\le r/2\right\},
$
one gets
\begin{equation}\label{tc}
\hskip-0.7cm v(s)=r/2,\ v(\tau)=r, \ v(t)\in \left[r/2,r\right],\ \mbox{for any}\ t\in[s,\tau].
\end{equation}
In particular, $x(t)\not=x_0$ for all $t\in[s,\tau]$ and, therefore,
$$
v'(t)=V'(x(t))\xi<0\ \mbox{ for some }\xi\in K[f](x(t))\mbox{ and almost all }t\in[s,\tau].
$$
This contradicts (\ref{tc}) and proves that $v(t)>r/2$ for all $t\in[0,\tau].$
\vskip0.2cm
\noindent {\bf Step 1.2} Step 1.1 implies that
$x(t)\not=x_0$ for any $t\in[0,\tau]$ and, as a consequence, $$v'(t)<0\ \mbox{ for almost all }t\in[0,\tau],
$$
which contradicts (\ref{ftfftf}) and completes the proof of the fact that $x(t)\in W_r$ for all $t>0.$
\vskip0.2cm
\noindent {\bf Step 2.} Let us show that $x(t)$ reaches $\overline{W_\rho}$ at some time moment. Assume that $x(t)$ never reaches $\overline{W_\rho}$. Then
$$
v'(t)=V'(x(t))\xi<-w(x(t)),
$$
for some $\xi\in K[f](x(t))$ and almost any $t>0.$
The definition of the function $w$ implies that
$
w_{\min}=\min\{w(x), \ x\in \overline{W_r}\backslash W_\rho\}>0.
$
Therefore,
$$
v(t)=v(0)+\int_0^t v'(s)ds<v(0)-w_{\min}t
$$
and $v(t)$ would become negative if $x(t)$ never reached $\overline{W_\rho}$, which is impossible. Since $\rho\in(0,r)$ was chosen arbitrarily, our conclusion implies that $x(t)\to x_0$ as $t\to\infty.$
\vskip0.2cm
\noindent The proof of the theorem is complete.
\qed
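As a numerical sanity check of the decrease argument above (our own illustrative sketch, not part of the proof), consider the discontinuous scalar system $\dot{x}=-\,\mathrm{sign}(x)$ with $V(x)=x^{2}/2$ and equilibrium $x_0=0$: a forward-Euler simulation shows $V$ decreasing along the trajectory until $x$ enters and then stays in a small band around $x_0$, which plays the role of $\overline{W_\rho}$. All function and variable names below are ours.

```python
# Illustrative sketch: Lyapunov decrease for dx/dt = -sign(x), V(x) = x^2/2.
# Forward Euler; the trajectory reaches, and then chatters inside, a band
# of width ~dt around the equilibrium x0 = 0 (the set W_rho of the proof).

def sgn(x):
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def simulate(x0, t_end, dt):
    x, v_values = x0, []
    steps = int(t_end / dt)
    for _ in range(steps):
        x = x + dt * (-sgn(x))        # Euler step for dx/dt = -sign(x)
        v_values.append(0.5 * x * x)  # V along the trajectory
    return x, v_values

x_final, v_vals = simulate(x0=1.0, t_end=2.0, dt=0.001)
```

With these parameters the trajectory reaches the band around $0$ at roughly $t=1$ and then oscillates with amplitude of order `dt`, mirroring the conclusion $x(t)\to x_0$.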
\section*{Acknowledgements}
The research was supported by NSF Grant CMMI-1436856.
\bibliographystyle{plain}
| {
"timestamp": "2018-06-26T02:01:47",
"yymm": "1806",
"arxiv_id": "1806.08844",
"language": "en",
"url": "https://arxiv.org/abs/1806.08844",
"abstract": "A switched equilibrium of a switched system of two subsystems is a such a point where the vector fields of the two subsystems point strictly towards one another. Using the concept of stable convex combination that was developed by Wicks-Peleties-DeCarlo (1998) for linear systems, Bolzern-Spinelli (2004) offered a design of a state feedback switching rule that is capable to stabilize an affine switched system to any switched equilibrium. The state feedback switching rule of Bolzern-Spinelli gives a nonlinear (quadratic) switching threshold passing through the switched equilibrium. In this paper we prove that the switching threshold (i.e. the associated switching rule) can be chosen linear, if each of the subsystems of the switched system under consideration are stable.",
"subjects": "Optimization and Control (math.OC)",
"title": "A linear state feedback switching rule for global stabilization of switched nonlinear systems about a nonequilibrium point"
} |
https://arxiv.org/abs/1301.5691 | The Dupire derivatives and Fréchet derivatives on continuous pathes | In this paper, we study the relation between Fréchet derivatives and Dupire derivatives, in which the latter are recently introduced by Dupire [4]. After introducing the definition of Fréchet derivatives for non-anticipative functionals, we prove that the Dupire derivatives and the extended Fréchet derivatives are coherent on continuous pathes. | \section{Introduction}
Recently Dupire \cite{Dupire.B} introduced the functional It\^{o} calculus, which was further developed by Cont and Fourni\'{e} \cite{Cont-1}-\cite{Cont-3}. The key idea of Dupire \cite{Dupire.B} is to introduce new "local" derivatives, i.e., the horizontal derivative and the vertical derivative for non-anticipative processes. Inspired by Dupire's work, Peng and Wang \cite{Peng S 3} obtained a nonlinear Feynman-Kac formula for classical solutions of path-dependent PDEs in terms of non-Markovian backward stochastic differential equations (BSDEs for short). The viscosity solutions of path-dependent PDEs are also studied in \cite{Ekren} and \cite{Peng S 2} under this new framework. All these results show that the Dupire derivative is an important tool to deal with functionals of continuous semimartingales.
The aim of this paper is to establish the relation between Dupire derivatives
and Fr\'{e}chet derivatives. Note that the Dupire derivative is a "local" one,
in the sense that it is defined by perturbing the endpoint of a given current
path. Compared with the Dupire derivative, the Fr\'{e}chet derivative is
defined by perturbing the whole path. Thus, it seems difficult to find the
relationship between them.
To overcome the above difficulty, we introduce the definition of Fr\'{e}chet derivatives for non-anticipative functionals. Inspired by Mohammed's work on stochastic functional differential equations with bounded memory (see \cite{Mohammed1} and \cite{Mohammed}), we study the weakly continuous linear and bilinear extensions of the Fr\'{e}chet derivatives. By means of an auxiliary stochastic functional system, we show that the Dupire derivatives and the extended Fr\'{e}chet derivatives are coherent on continuous paths.
This paper is organized as follows. In Section 2, we present some fundamental results on the Dupire derivatives and define the Fr\'{e}chet derivatives for non-anticipative functionals. Furthermore, the unique extensions of the Fr\'{e}chet derivatives are obtained. In Section 3, under mild assumptions, we prove that the Dupire derivatives and the extended Fr\'{e}chet derivatives are equal on continuous paths.
\section{Preliminaries}
\subsection{The Dupire derivatives}
The following notations and tools are mainly from Dupire \cite{Dupire.B}. Let $T>0$ be fixed. For each $t\in\lbrack0,T]$, we denote by $\Lambda_{t}$ the set of c\`{a}dl\`{a}g $\mathbb{R}^{d}$-valued functions on $[0,t]$, and by $C$ the set of continuous functions on $[0,T]$. For each $\gamma(\cdot)\in\Lambda_{T}$ the value of $\gamma(\cdot)$ at time $s\in\lbrack0,T]$ is denoted by $\gamma(s)$. Thus $\gamma(\cdot)=\gamma(s)_{0\leq s\leq T}$ is a c\`{a}dl\`{a}g path on $[0,T]$. The path of $\gamma(\cdot)$ up to time $t$ is denoted by $\gamma_{t}$, i.e., $\gamma_{t}=\gamma(s)_{0\leq s\leq t}\in\Lambda_{t}$. We denote $\Lambda=\bigcup_{t\in\lbrack0,T]}\Lambda_{t}$. For each $\gamma_{t}\in\Lambda$ and $x\in\mathbb{R}^{d}$ we denote by $\gamma_{t}(s)$ the value of $\gamma_{t}$ at $s\in\lbrack0,t]$ and set $\gamma_{t}^{x}:=(\gamma_{t}(s)_{0\leq s<t},\gamma_{t}(t)+x)$, which is also an element of $\Lambda_{t}$.
Let $\langle\cdot,\cdot\rangle$ and $|\cdot|$ denote the inner product and
norm in $\mathbb{R}^{n}$. We now define a distance on $\Lambda$. For each
$0\leq t,\bar{t}\leq T$ and $\gamma_{t},\bar{\gamma}_{\bar{t}}\in\Lambda$, we
denote
\[
\begin{array}{l}
\Vert\gamma_{t}\Vert:=\sup\limits_{s\in\lbrack0,t]}|\gamma_{t}(s)|,\\
\Vert\gamma_{t}-\bar{\gamma}_{\bar{t}}\Vert:=\sup\limits_{s\in\lbrack0,t\vee\bar{t}]}|\gamma_{t}(s\wedge t)-\bar{\gamma}_{\bar{t}}(s\wedge\bar{t})|,\\
d_{\infty}(\gamma_{t},\bar{\gamma}_{\bar{t}}):=\sup\limits_{0\leq s\leq t\vee\bar{t}}|\gamma_{t}(s\wedge t)-\bar{\gamma}_{\bar{t}}(s\wedge\bar{t})|+|t-\bar{t}|.
\end{array}
\]
It is obvious that $\Lambda_{t}$ is a Banach space with respect to $\Vert\cdot\Vert$, while $d_{\infty}$ is not a norm.
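To make the metric concrete, the following sketch (our own; all names are illustrative) evaluates $\Vert\gamma_{t}\Vert$ and $d_{\infty}(\gamma_{t},\bar{\gamma}_{\bar{t}})$ on a discretization grid, using the stopped extension $\gamma_{t}(s\wedge t)$ from the definition:

```python
# Numerical sketch of the Dupire metric d_infty on a grid over [0, T].
# A path gamma_t in Lambda_t is represented by a function gamma and its
# horizon t; gamma(min(s, t)) realizes the stopped extension gamma_t(s ^ t).

def sup_norm(gamma, t, grid):
    """||gamma_t|| = sup_{s in [0, t]} |gamma_t(s)| (approximated on the grid)."""
    return max(abs(gamma(min(s, t))) for s in grid)

def d_infty(gamma, t, eta, tbar, grid):
    """d_infty = sup_s |gamma(s ^ t) - eta(s ^ tbar)| + |t - tbar|."""
    sup = max(abs(gamma(min(s, t)) - eta(min(s, tbar))) for s in grid)
    return sup + abs(t - tbar)

grid = [i / 1000 for i in range(1001)]      # grid for [0, 1]
gamma = lambda s: s                         # gamma(s) = s,  horizon t = 1
eta = lambda s: 2 * s                       # eta(s) = 2s,   horizon tbar = 0.5
dist = d_infty(gamma, 1.0, eta, 0.5, grid)  # sup term 0.5 plus time term 0.5
```

Here the supremum term equals $0.5$ (attained at $s=0.5$) and the time-distance term contributes $|1-0.5|=0.5$, so `dist` is $1.0$.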
\begin{definition}
A function $u:\Lambda\mapsto\mathbb{R}$ is said to be $\Lambda$-continuous at $\gamma_{t}\in\Lambda$ if for any $\varepsilon>0$ there exists $\delta>0$ such that for each $\bar{\gamma}_{\bar{t}}\in\Lambda$ with $d_{\infty}(\gamma_{t},\bar{\gamma}_{\bar{t}})<\delta$, we have $|u(\gamma_{t})-u(\bar{\gamma}_{\bar{t}})|<\varepsilon$. $u$ is said to be $\Lambda$-continuous if it is $\Lambda$-continuous at each $\gamma_{t}\in\Lambda$.
\end{definition}
\begin{definition}
Let $u:\Lambda\mapsto\mathbb{R}$ and $\gamma_{t}\in\Lambda$ be given. If there exists $p\in\mathbb{R}^{d}$ such that
\[
u(\gamma_{t}^{x})=u(\gamma_{t})+\langle p,x\rangle+o(|x|)\ \text{as}\ x\rightarrow0,\ x\in\mathbb{R}^{d},
\]
then we say that $u$ is (vertically) differentiable at $\gamma_{t}$ and write $\tilde{D}_{x}u(\gamma_{t})=p$. $u$ is said to be vertically differentiable in $\Lambda$ if $\tilde{D}_{x}u(\gamma_{t})$ exists for each $\gamma_{t}\in\Lambda$. We can similarly define the Hessian $\tilde{D}_{xx}^{2}u(\gamma_{t})$. It is an $\mathbb{S}(d)$-valued function defined on $\Lambda$, where $\mathbb{S}(d)$ is the space of all $d\times d$ symmetric matrices.
\end{definition}
For each $\gamma_{t}\in\Lambda$ we denote
\[
\gamma_{t,s}(r)=\gamma_{t}(r)\mathbf{1}_{[0,t)}(r)+\gamma_{t}(t)\mathbf{1}_{[t,s]}(r),\ \ r\in\lbrack0,s].
\]
It is clear that $\gamma_{t,s}\in\Lambda_{s}$.
\begin{definition}
For a given $\gamma_{t}\in\Lambda$ if we have
\[
u(\gamma_{t,s})=u(\gamma_{t})+a(s-t)+o(|s-t|)\ \text{as}\ s\rightarrow t,\ s\geq t,
\]
then we say that $u(\gamma_{t})$ is (horizontally) differentiable in $t$ at
$\gamma_{t}$ and denote $\tilde{D}_{t}u(\gamma_{t})=a$. $u$ is said to be
horizontally differentiable in $\Lambda$ if $\tilde{D}_{t}u(\gamma_{t})$
exists for each $\gamma_{t}\in\Lambda$.
\end{definition}
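The two derivatives can be checked by finite differences on simple functionals (our own sketch): for $u(\gamma_{t})=\gamma_{t}(t)^{2}$ the vertical derivative is $2\gamma_{t}(t)$ and the horizontal derivative is $0$, while for $u(\gamma_{t})=\int_{0}^{t}\gamma(s)ds$ the vertical derivative is $0$ (bumping a single endpoint value does not change the integral) and the horizontal derivative is $\gamma_{t}(t)$:

```python
# Finite-difference sketch of Dupire's vertical and horizontal derivatives.
# A path gamma_t is a list of samples on a grid of step dt over [0, t].

dt = 1e-3
path = [s * dt for s in range(1001)]    # gamma(s) = s on [0, 1], so t = 1.0

def u_endpoint_sq(p):                   # u(gamma_t) = gamma_t(t)^2
    return p[-1] ** 2

def u_integral(p):                      # u(gamma_t) = int_0^t gamma(s) ds
    return sum(p[:-1]) * dt             # left Riemann sum

def vertical(u, p, h=1e-6):
    bumped = p[:-1] + [p[-1] + h]       # gamma_t^x: bump only the endpoint
    return (u(bumped) - u(p)) / h

def horizontal(u, p, h_steps=10):
    extended = p + [p[-1]] * h_steps    # gamma_{t,s}: freeze the endpoint
    return (u(extended) - u(p)) / (h_steps * dt)

dv_sq = vertical(u_endpoint_sq, path)     # expect ~ 2 * gamma(1) = 2
dh_sq = horizontal(u_endpoint_sq, path)   # expect ~ 0
dv_int = vertical(u_integral, path)       # expect ~ 0
dh_int = horizontal(u_integral, path)     # expect ~ gamma(1) = 1
```

This mirrors the "local" character of the Dupire derivatives: the vertical derivative perturbs only the current value, the horizontal derivative only extends the path flatly in time.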
\begin{definition}
Define $\mathbb{C}^{j,k}(\Lambda)$ as the set of functions $u:=(u(\gamma_{t}))_{\gamma_{t}\in\Lambda}$ defined on $\Lambda$ which are $j$ times horizontally and $k$ times vertically differentiable in $\Lambda$ such that all these derivatives are $\Lambda$-continuous.
\end{definition}
The following It\^{o} formula was firstly obtained by Dupire \cite{Dupire.B}
and then generalized by Cont and Fourni\'{e}, \cite{Cont-1},\ \cite{Cont-2}
and \cite{Cont-3}.
\begin{theorem}
\label{w2 copy(1)}Let $(\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\in\lbrack0,T]},P)$ be a probability space. If $X$ is a continuous semimartingale and $u$ is in $\mathbb{C}^{1,2}(\Lambda)$, then for any $t\in\lbrack0,T)$,
\[
\begin{split}
u(X_{t})-u(X_{0}) & =\int_{0}^{t}\tilde{D}_{s}u(X_{s})\,ds+\int_{0}^{t}\tilde{D}_{x}u(X_{s})\,dX(s)\\
& \quad+\frac{1}{2}\int_{0}^{t}\tilde{D}_{xx}^{2}u(X_{s})\,d\langle X\rangle(s),\quad P\text{-a.s.}
\end{split}
\]
\end{theorem}
\subsection{The Fr\'{e}chet Derivatives}
Let $C^{\ast}$ and $C^{\dagger}$ be the spaces of bounded linear functionals $\Phi:C\rightarrow\mathbb{R}$ and bounded bilinear functionals $\tilde{\Phi}:C\times C\rightarrow\mathbb{R}$ on $C$, respectively. They are equipped with the operator norms, which will be denoted by $\Vert\cdot\Vert^{\ast}$ and $\Vert\cdot\Vert^{\dagger}$, respectively.
Fix $t\in\lbrack0,T)$. Let $B_{t}=\{\upsilon1_{\{t\}}:\upsilon\in\mathbb{R}^{n}\}$, where $1_{\{t\}}:[0,T]\rightarrow\mathbb{R}$ is defined by
\[
1_{\{t\}}(s):=\left\{
\begin{array}{ll}
0, & s\in\lbrack0,t),\\
1, & s=t,\\
0, & s\in(t,T].
\end{array}
\right.
\]
We define the direct sum
\[
C\oplus B_{t}:=\{\phi(\cdot)+\upsilon1_{\{t\}}\mid\phi(\cdot)\in C,\ \upsilon\in\mathbb{R}^{n}\}
\]
and equip it with the norm $\Vert\cdot\Vert$ defined by
\[
\Vert\phi(\cdot)+\upsilon1_{\{t\}}\Vert=\sup_{s\in\lbrack0,T]}|\phi(s)|+|\upsilon|,\quad\phi(\cdot)\in C,\ \upsilon\in\mathbb{R}^{n}.
\]
For each $\gamma(\cdot)\in C$ we denote
\[
\gamma_{t}(s)=\left\{
\begin{array}{ll}
\gamma(s), & s\leq t,\\
\gamma(t), & s>t.
\end{array}
\right.
\]
It is clear that $\gamma_{t}(\cdot)\in C$.
\begin{definition}
A functional $\Psi:[0,T]\times C\longmapsto\mathbb{R}$ is called non-anticipative if for any $t\in\lbrack0,T]$ and $x(\cdot),y(\cdot)\in C$ satisfying the condition
\[
y(\tau)=x(\tau)\ \text{for}\ \tau\in\lbrack0,t],
\]
the equality
\[
\Psi(t,x(\cdot))=\Psi(t,y(\cdot))
\]
holds.
\end{definition}
\begin{definition}
\label{defi-1}For each $\gamma(\cdot)\in C$, if we have
\[
\Psi(s,\gamma_{t}(\cdot))=\Psi(t,\gamma(\cdot))+a(s-t)+o(|s-t|)\ \text{as}\ s\rightarrow t,\ s\geq t,
\]
then we say that $\Psi(t,\gamma(\cdot))$ is differentiable at $t$ and denote $D_{t}\Psi(t,\gamma(\cdot))=a$.
\end{definition}
$\Psi$ is said to be differentiable in $[0,T)$ if $D_{t}\Psi(t,\gamma(\cdot))$ exists for each $(t,\gamma(\cdot))\in\lbrack0,T)\times C.$
\begin{definition}
For a given $\gamma(\cdot)\in C$ and a non-anticipative $\Psi$, if we have
\[
\Psi(t,\varphi(\cdot))=\Psi(t,\gamma(\cdot))+D_{x}\Psi(t,\gamma(\cdot))(\varphi(\cdot)-\gamma(\cdot))+o(\Vert(\varphi(\cdot)-\gamma(\cdot))1_{[0,t]}\Vert)
\]
for each $\varphi(\cdot)\in C$, then we say that $\Psi(t,\gamma(\cdot))$ is Fr\'{e}chet differentiable at $\gamma(\cdot)$.
\end{definition}
$\Psi$ is said to be Fr\'{e}chet differentiable in $C$ if $D_{x}\Psi(t,\gamma(\cdot))$ exists for each $(t,\gamma(\cdot))\in\lbrack0,T)\times C.$
\begin{remark}
For a non-anticipative $\Psi$, if $\Psi(t,\gamma(\cdot))$ is Fr\'{e}chet differentiable at $\gamma(\cdot)$, then obviously
\[
D_{x}\Psi(t,\gamma(\cdot))(\eta(\cdot))=D_{x}\Psi(t,\gamma(\cdot))(\eta(\cdot)1_{[0,t]}),\quad\forall\eta(\cdot)\in C.
\]
\end{remark}
\begin{definition}
Define $C^{j,k}([0,T)\times C)$ as the set of non-anticipative functionals $\Psi$ defined on $[0,T]\times C$ which are $j$ times differentiable in time and $k$ times Fr\'{e}chet differentiable in $C$ such that all these derivatives are continuous.
\end{definition}
Using techniques similar to those in Mohammed \cite{Mohammed1} and \cite{Mohammed}, we have the following lemma.
\begin{lemma}
\label{extension-1} Suppose a non-anticipative $\Phi:[0,T]\times C\rightarrow\mathbb{R}$ is twice continuously Fr\'{e}chet differentiable. Then for all $\phi(\cdot)\in C$, the Fr\'{e}chet derivatives $D_{x}\Phi(t,\phi(\cdot))$ and $D_{xx}^{2}\Phi(t,\phi(\cdot))$ have unique weakly continuous linear and bilinear extensions
\[
\overline{D_{x}\Phi(t,\phi(\cdot))}\in(C\oplus B_{t})^{\ast},\quad\overline{D_{xx}^{2}\Phi(t,\phi(\cdot))}\in(C\oplus B_{t})^{\dagger}.
\]
\end{lemma}
\begin{proof}
It is sufficient to consider the one-dimensional case, i.e., $n=1$.

For a fixed $t\in\lbrack0,T)$ and $\phi(\cdot)\in C$, we will show that there is a unique weakly continuous extension $\overline{D_{x}\Phi(t,\phi(\cdot))}\in(C\oplus B_{t})^{\ast}$ of the first Fr\'{e}chet derivative $D_{x}\Phi(t,\phi(\cdot))$; in other words, if $\{\xi^{k}\}$ is a bounded sequence in $C$ such that $\xi^{k}(s)\rightarrow\xi(s)$ as $k\rightarrow\infty$ for all $s\in\lbrack0,T]$, where $\xi\in C\oplus B_{t}$, then $D_{x}\Phi(t,\phi(\cdot))(\xi^{k}(\cdot))\rightarrow\overline{D_{x}\Phi(t,\phi(\cdot))}(\xi)$ as $k\rightarrow\infty$. Note that $\Phi$ is non-anticipative. Then for all $\eta\in C$,
\[
D_{x}\Phi(t,\phi(\cdot))(\eta(\cdot))=D_{x}\Phi(t,\phi_{t}(\cdot))(\eta(\cdot)1_{[0,t]}).
\]
By the Riesz representation theorem, there is a unique finite Borel measure $\mu$ on $[0,T]$ such that
\begin{equation}
D_{x}\Phi(t,\phi(\cdot))(\eta(\cdot))=\int_{0}^{t}\eta(s)d\mu(s).
\label{Riese-1}
\end{equation}
Define $\overline{D_{x}\Phi(t,\phi(\cdot))}\in(C\oplus B_{t})^{\ast}$ by
\[
\overline{D_{x}\Phi(t,\phi(\cdot))}(\eta(\cdot)+\upsilon1_{\{t\}})=D_{x}\Phi(t,\phi(\cdot))(\eta(\cdot))+\upsilon\mu(\{t\}),\quad\eta\in C,\ \upsilon\in\mathbb{R}.
\]
By Lebesgue's dominated convergence theorem, $\overline{D_{x}\Phi(t,\phi(\cdot))}$ is weakly continuous. The weak extension $\overline{D_{x}\Phi(t,\phi(\cdot))}$ is unique because for any $\upsilon\in\mathbb{R}$ the function $\upsilon1_{\{t\}}$ can be approximated weakly by a sequence of continuous functions $\{\xi_{0}^{k}\}$, where
\[
\xi_{0}^{k}(s):=\left\{
\begin{array}{ll}
(k(s-t)+1)\upsilon, & t-\frac{1}{k}\leq s\leq t,\\
0, & 0\leq s<t-\frac{1}{k}.
\end{array}
\right.
\]
Similarly, we can construct a unique weakly continuous bilinear extension $\overline{D_{xx}^{2}\Phi(t,\phi(\cdot))}\in(C\oplus B_{t})^{\dagger}$ of the continuous bilinear form $D_{xx}^{2}\Phi(t,\phi(\cdot))$.
\end{proof}
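The uniqueness step rests on approximating $\upsilon1_{\{t\}}$ weakly by continuous tent functions. The sketch below (our own; we take the tent $\xi_{0}^{k}(s)=\upsilon(k(s-t)+1)$ on $[t-1/k,t]$ and $0$ elsewhere, one natural choice) checks pointwise convergence and that integrating $\xi_{0}^{k}$ against a measure with an atom at $t$ recovers $\upsilon\mu(\{t\})$ in the limit, since the absolutely continuous contribution vanishes:

```python
# Sketch: upsilon * 1_{t} as a weak limit of continuous tent functions.
# Take mu = delta_t + Lebesgue on [0, 1]; the integral of xi_k against mu
# tends to upsilon * mu({t}) = upsilon as k grows, since the Lebesgue part
# of the integral shrinks like upsilon / (2k).

t, upsilon = 0.5, 2.0

def xi(k, s):
    """Tent: 0 outside [t - 1/k, t], rising linearly to upsilon at s = t."""
    if t - 1.0 / k <= s <= t:
        return upsilon * (k * (s - t) + 1.0)
    return 0.0

def integral_against_mu(k, n=100000):
    """int xi_k d(mu), mu = delta_t + Lebesgue (Lebesgue part by Riemann sum)."""
    lebesgue = sum(xi(k, i / n) for i in range(n)) / n
    return xi(k, t) + lebesgue          # atom at t contributes xi_k(t) = upsilon

vals = [integral_against_mu(k) for k in (1, 10, 100)]
```

The values decrease toward $\upsilon=2$, illustrating why two weakly continuous extensions agreeing on $C$ must also agree on $\upsilon1_{\{t\}}$.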
\section{The relation between Dupire derivatives and Fr\'{e}chet derivatives}
In order to establish the relation between Dupire derivatives and Fr\'{e}chet derivatives, we need the following auxiliary stochastic functional differential equation: for given $t\in\lbrack0,T)$ and $\gamma(\cdot)\in\Lambda_{T}$,
\begin{align}
& dX^{\gamma_{t}}(s)=b(s,X^{\gamma_{t}}(\cdot))ds+\sigma(s,X^{\gamma_{t}}(\cdot))dW(s),\text{ \ }s\in\lbrack t,T],\label{SDE_1}\\
& X^{\gamma_{t}}(r)=\gamma_{t}(r),\quad r\in\lbrack0,t],\nonumber
\end{align}
where $\{W(s),s\in\lbrack0,T]\}$ is a $d$-dimensional standard Brownian motion; the process $\{X^{\gamma_{t}}(s),0\leq s\leq T\}$ takes values in $\mathbb{R}^{n}$; and $b:[0,T]\times{C}\rightarrow\mathbb{R}^{n}$ and $\sigma:[0,T]\times{C}\rightarrow\mathbb{R}^{n\times d}$ are non-anticipative functionals.
\begin{definition}
A process $\{X^{\gamma_{t}}(s),s\in\lbrack t,T]\}$ is said to be a strong solution of equation (\ref{SDE_1}) on the interval $[t,T]$ through the initial datum $\gamma_{t}\in{\Lambda}$ if it satisfies the following conditions:

\noindent(1) $X_{t}^{\gamma_{t}}=\gamma_{t}$;\newline(2) $X^{\gamma_{t}}(s)$ is $\mathcal{F}(s)$-measurable for each $s\in\lbrack t,T]$;\newline(3) the process $\{X^{\gamma_{t}}(s),s\in\lbrack t,T]\}$ is continuous and satisfies the following stochastic integral equation $P$-a.s.:
\[
X^{\gamma_{t}}(s)=\gamma_{t}(t)+\int_{t}^{s}b(r,X^{\gamma_{t}}(\cdot))dr+\int_{t}^{s}\sigma(r,X^{\gamma_{t}}(\cdot))dW(r).
\]
\end{definition}
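For intuition, a strong solution can be approximated by an Euler scheme whose coefficients see the entire simulated past. In the sketch below (our own; the coefficient $b$ equal to the running maximum of the path is a hypothetical example of a non-anticipative functional, and we take $\sigma\equiv0$ so the outcome is deterministic), starting from $X(0)=1$ the running maximum equals the current value, so $X$ solves $\dot{x}=x$ and $X(1)\approx e$:

```python
# Euler scheme for the functional SDE dX(s) = b(s, X(.)) ds + sigma(s, X(.)) dW(s).
# Hypothetical non-anticipative coefficients: b = running maximum of the path,
# sigma = 0 (deterministic, so the result is easy to check).

dt, n = 1e-3, 1000

def b(s_index, past):
    return max(past[: s_index + 1])     # running max: uses only the path up to s

def sigma(s_index, past):
    return 0.0                          # no noise in this deterministic sketch

X = [1.0]                               # initial datum gamma_t on [0, t], t = 0
for i in range(n):
    drift = b(i, X)
    diffusion = sigma(i, X)             # would multiply a Brownian increment dW
    X.append(X[-1] + drift * dt + diffusion * 0.0)
```

Since the path starts at $1$ and increases, `b` reduces to the current value and the scheme computes $(1+dt)^{n}\approx e\approx2.718$ at $s=1$.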
We assume $b,\sigma$ satisfy the following Lipschitz and boundedness conditions.
\begin{assumption}
\label{assu-1} $b(\cdot,x(\cdot))$ and $\sigma(\cdot,x(\cdot))$ are progressively measurable processes for each $x(\cdot)\in{C}$, and there exists a constant $c>0$ such that
\[
|b(s,x^{1}(\cdot))-b(s,x^{2}(\cdot))|+|\sigma(s,x^{1}(\cdot))-\sigma(s,x^{2}(\cdot))|\leq c\Vert x_{s}^{1}(\cdot)-x_{s}^{2}(\cdot)\Vert,
\]
for all $(s,x^{1}(\cdot)),(s,x^{2}(\cdot))\in\lbrack0,T]\times{C}$.
\end{assumption}
\begin{assumption}
\label{assu-2}There exists a constant $K>0$ such that
\[
|b(s,\varphi(\cdot))|+|\sigma(s,\varphi(\cdot))|\leq K,\quad\forall(s,\varphi(\cdot))\in\lbrack0,T]\times{C}.
\]
\end{assumption}
Then we have the following theorem (see \cite{Lipster}):
\begin{theorem}
Under assumptions (\ref{assu-1}) and (\ref{assu-2}), the equation
(\ref{SDE_1}) has a unique strong solution.
\end{theorem}
By a similar analysis as in Mohammed \cite{Mohammed1} and \cite{Mohammed}, we have the following result.
\begin{theorem}
\label{Tto-1}Let Assumptions (\ref{assu-1}) and (\ref{assu-2}) hold true and let $X^{\gamma_{t}}(\cdot)$ be the solution of (\ref{SDE_1}). Suppose a non-anticipative $\Phi$ belongs to $C^{1,2}([0,T)\times C)$. Then for given $\gamma\in C$,
\begin{equation}
\begin{array}{rl}
\lim_{\varepsilon\rightarrow0^{+}}\frac{E[\Phi(t+\varepsilon,X^{\gamma_{t}}(\cdot))]-\Phi(t,\gamma(\cdot))}{\varepsilon}= & D_{t}\Phi(t,\gamma(\cdot))+\overline{D_{x}\Phi(t,\gamma(\cdot))}(b(t,\gamma(\cdot))1_{\{t\}})\\
& +\frac{1}{2}\sum\limits_{j=1}^{n}\overline{D_{xx}^{2}\Phi(t,\gamma(\cdot))}(\sigma(t,\gamma(\cdot))e_{j}1_{\{t\}},\sigma(t,\gamma(\cdot))e_{j}1_{\{t\}}).
\end{array}
\label{Ito-1}
\end{equation}
\end{theorem}
\begin{proof}
\noindent Step 1. Fix $\gamma(\cdot)\in C$. Since $\Phi\in C^{1,2}([0,T)\times C)$, by Taylor's theorem, for $\varepsilon>0$,
\[
\begin{array}{rl}
\Phi(t+\varepsilon,X^{\gamma_{t}}(\cdot))-\Phi(t,\gamma(\cdot))= & \Phi(t+\varepsilon,\gamma_{t}(\cdot))-\Phi(t,\gamma(\cdot))+\Phi(t+\varepsilon,X^{\gamma_{t}}(\cdot))-\Phi(t+\varepsilon,\gamma_{t}(\cdot))\\
= & D_{t}\Phi(t,\gamma(\cdot))\cdot\varepsilon+D_{x}\Phi(t+\varepsilon,\gamma_{t}(\cdot))((X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]})\\
& +R(\varepsilon)+o(\varepsilon),\;a.s.,
\end{array}
\]
where
\[
\begin{array}{cl}
R(\varepsilon):= & \int_{0}^{1}(1-u)D_{xx}^{2}\Phi(t+\varepsilon,\gamma_{t}(\cdot)+u\cdot(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot)))\\
& \text{ \ }((X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]},(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]})\,du.
\end{array}
\]
Taking expectation and dividing by $\varepsilon$, we have
\begin{equation}
\begin{array}{cl}
\frac{E[\Phi(t+\varepsilon,X^{\gamma_{t}}(\cdot))]-\Phi(t,\gamma(\cdot))}{\varepsilon}= & D_{t}\Phi(t,\gamma(\cdot))+D_{x}\Phi(t+\varepsilon,\gamma_{t}(\cdot))\left(E\left[\frac{1}{\varepsilon}(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]}\right]\right)\\
& +\frac{1}{\varepsilon}ER(\varepsilon)+o(1).
\end{array}
\label{Ito-2}
\end{equation}
Note that
\[
\begin{array}{rl}
\lim_{\varepsilon\rightarrow0^{+}}\left(E\left[\frac{1}{\varepsilon}(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]}\right]\right)(s)= & \left\{
\begin{array}{ll}
\lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon}\int_{t}^{t+\varepsilon}E[b(u,X^{\gamma_{t}}(\cdot))]du, & s=t,\\
0, & 0\leq s<t,
\end{array}
\right.\\
= & \big(b(t,\gamma(\cdot))1_{\{t\}}\big)(s),\quad0\leq s\leq t.
\end{array}
\]
Since $b$ is bounded, $\Vert E[\frac{1}{\varepsilon}(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]}]\Vert$ is bounded. Hence
\[
\lim_{\varepsilon\rightarrow0^{+}}E\left[\frac{1}{\varepsilon}(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]}\right]=b(t,\gamma(\cdot))1_{\{t\}}\quad\text{weakly}.
\]
Therefore, by Lemma \ref{extension-1} and the continuity of $D_{x}\Phi$ at $\gamma(\cdot)$, we obtain
\[
\begin{array}{rl}
& \lim_{\varepsilon\rightarrow0^{+}}D_{x}\Phi(t+\varepsilon,\gamma_{t}(\cdot))\left(E\left[\frac{1}{\varepsilon}(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]}\right]\right)\\
= & \lim_{\varepsilon\rightarrow0^{+}}D_{x}\Phi(t,\gamma(\cdot))\left(E\left[\frac{1}{\varepsilon}(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]}\right]\right)\\
= & \overline{D_{x}\Phi(t,\gamma(\cdot))}(b(t,\gamma(\cdot))1_{\{t\}}).
\end{array}
\]

\noindent Step 2. Finally we compute the limit of the third term on the right-hand side of (\ref{Ito-2}) as $\varepsilon\rightarrow0^{+}$. By the martingale property of the It\^{o} integral and the Lipschitz continuity of $D_{xx}^{2}\Phi$, we have the following estimate:
\[
\begin{array}{ll}
& \Big|\frac{1}{\varepsilon}ED_{xx}^{2}\Phi(t+\varepsilon,\gamma_{t}(\cdot)+u\cdot(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot)))((X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]},(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]})\\
& -\frac{1}{\varepsilon}ED_{xx}^{2}\Phi(t,\gamma(\cdot))((X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]},(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]})\Big|\\
\leq & (E\Vert D_{xx}^{2}\Phi(t+\varepsilon,\gamma_{t}(\cdot)+u\cdot(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot)))-D_{xx}^{2}\Phi(t,\gamma(\cdot))\Vert^{2})^{\frac{1}{2}}[\frac{1}{\varepsilon^{2}}E\Vert(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]}\Vert^{4}]^{\frac{1}{2}}\\
\leq & K(\varepsilon^{2}+1)^{\frac{1}{2}}(E\Vert D_{xx}^{2}\Phi(t+\varepsilon,\gamma_{t}(\cdot)+u\cdot(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot)))-D_{xx}^{2}\Phi(t,\gamma(\cdot))\Vert^{2})^{\frac{1}{2}},
\end{array}
\]
where $t\in[0,T)$, $\gamma(\cdot)\in C$ and $K$ is a positive constant independent of $u$. The last line tends to $0$, uniformly for $u\in\lbrack0,1]$, as $\varepsilon\rightarrow0^{+}$. Because $\Phi\in C^{1,2}([0,T)\times C)$ and $D_{xx}^{2}\Phi$ is bounded on $C$, we have the following weak limit:
\[
\begin{array}{rl}
\lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon}ER(\varepsilon)= & \int_{0}^{1}(1-u)\lim_{\varepsilon\rightarrow0^{+}}\frac{1}{\varepsilon}ED_{xx}^{2}\Phi(t,\gamma(\cdot))((X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]},(X^{\gamma_{t}}(\cdot)-\gamma_{t}(\cdot))1_{[0,t+\varepsilon]})\,du\\
= & \frac{1}{2}\sum\limits_{j=1}^{n}\overline{D_{xx}^{2}\Phi(t,\gamma(\cdot))}(\sigma(t,\gamma(\cdot))e_{j}1_{\{t\}},\sigma(t,\gamma(\cdot))e_{j}1_{\{t\}}).
\end{array}
\]
\end{proof}
Note that $b$ and $\sigma$ are non-anticipative functionals. We can rewrite $b(t,\gamma(\cdot))=\tilde{b}(\gamma_{t})$ and $\sigma(t,\gamma(\cdot))=\tilde{\sigma}(\gamma_{t})$ for all $\gamma(\cdot)\in C$.
\begin{corollary}
\label{w2} Let Assumptions (\ref{assu-1}) and (\ref{assu-2}) hold true and let $X^{\gamma_{t}}(\cdot)$ be the solution of (\ref{SDE_1}). Suppose $\Phi\in\mathbb{C}^{1,2}(\Lambda)$ is non-anticipative. Then for any $t\in\lbrack0,T)$,
\begin{equation}
\lim_{\varepsilon\rightarrow0^{+}}\frac{E[\Phi(X_{t+\varepsilon}^{\gamma_{t}})]-\Phi(\gamma_{t})}{\varepsilon}=\tilde{D}_{t}\Phi(\gamma_{t})+\langle\tilde{D}_{x}\Phi(\gamma_{t}),\tilde{b}(\gamma_{t})\rangle+\frac{1}{2}\langle\tilde{D}_{xx}\Phi(\gamma_{t})\tilde{\sigma}(\gamma_{t}),\tilde{\sigma}(\gamma_{t})\rangle.
\label{Ito-3}
\end{equation}
\end{corollary}
This corollary follows easily from Theorem \ref{w2 copy(1)} (the functional It\^{o} formula).
Now we establish the relation between Fr\'{e}chet derivatives and Dupire derivatives.
\begin{theorem}
\label{w3}Suppose (i) $\Phi\in\mathbb{C}^{1,2}(\Lambda)$; (ii) when the domain of $\Phi$ is restricted to $[0,T]\times C$, it is non-anticipative and belongs to $C^{1,2}([0,T)\times C)$. Then, for any given $\gamma(\cdot)\in C$, we have the following equalities:
\[
\begin{array}{rl}
\tilde{D}_{t}\tilde{\Phi}(\gamma_{t})= & D_{t}\tilde{\Phi}(\gamma_{t}),\\
\tilde{\mu}(\{t\})= & \tilde{D}_{x}\tilde{\Phi}(\gamma_{t}),\\
\tilde{\lambda}(\{t\})= & \tilde{D}_{xx}^{2}\tilde{\Phi}(\gamma_{t}),
\end{array}
\]
where $\tilde{\Phi}(\gamma_{t})=\Phi(t,\gamma(\cdot))$, and $\tilde{\mu}$ and $\tilde{\lambda}$ are the Borel measures corresponding to $\overline{D_{x}\tilde{\Phi}(\gamma_{t})}$ and $\overline{D_{xx}^{2}\tilde{\Phi}(\gamma_{t})}$, respectively.
\end{theorem}
\begin{proof}
For given $\gamma(\cdot)\in C$, we rewrite $\tilde{\Phi}(X_{s}^{\gamma_{t}})=\Phi(s,X^{\gamma_{t}}(\cdot))$, $\tilde{b}(\gamma_{t})=b(t,\gamma(\cdot))$ and $\tilde{\sigma}(\gamma_{t})=\sigma(t,\gamma(\cdot))$. By Theorem \ref{Tto-1}, we have
\begin{equation}
\begin{array}{rl}
\lim_{\varepsilon\rightarrow0^{+}}\frac{E[\Phi(t+\varepsilon,X^{\gamma_{t}}(\cdot))]-\Phi(t,\gamma(\cdot))}{\varepsilon}= & \lim_{\varepsilon\rightarrow0^{+}}\frac{E[\tilde{\Phi}(X_{t+\varepsilon}^{\gamma_{t}})]-\tilde{\Phi}(\gamma_{t})}{\varepsilon}\\
= & D_{t}\Phi(t,\gamma(\cdot))+\overline{D_{x}\Phi(t,\gamma(\cdot))}(b(t,\gamma(\cdot))1_{\{t\}})\\
& +\frac{1}{2}\sum\limits_{j=1}^{n}\overline{D_{xx}^{2}\Phi(t,\gamma(\cdot))}(\sigma(t,\gamma(\cdot))e_{j}1_{\{t\}},\sigma(t,\gamma(\cdot))e_{j}1_{\{t\}})\\
= & D_{t}\tilde{\Phi}(\gamma_{t})+\overline{D_{x}\tilde{\Phi}(\gamma_{t})}(\tilde{b}(\gamma_{t})1_{\{t\}})\\
& +\frac{1}{2}\sum\limits_{j=1}^{n}\overline{D_{xx}^{2}\tilde{\Phi}(\gamma_{t})}(\tilde{\sigma}(\gamma_{t})e_{j}1_{\{t\}},\tilde{\sigma}(\gamma_{t})e_{j}1_{\{t\}}).
\end{array}
\label{Ito-4}
\end{equation}
As in the proof of Lemma \ref{extension-1}, there is a unique finite Borel measure $\tilde{\mu}$ on $[0,T]$ such that
\begin{equation}
D_{x}\tilde{\Phi}(\gamma_{t})(\eta(\cdot))=\int_{0}^{t}\eta(s)d\tilde{\mu}(s).
\label{Rise-2}
\end{equation}
Then we have
\[
\overline{D_{x}\tilde{\Phi}(\gamma_{t})}(\tilde{b}(\gamma_{t})1_{\{t\}})=\langle\tilde{\mu}(\{t\}),\tilde{b}(\gamma_{t})\rangle.
\]
There is also a unique finite Borel measure $\tilde{\lambda}$ on $[0,T]$ such that
\[
\frac{1}{2}\langle\tilde{\lambda}(\{t\})\tilde{\sigma}(\gamma_{t}),\tilde{\sigma}(\gamma_{t})\rangle=\frac{1}{2}\sum\limits_{j=1}^{n}\overline{D_{xx}^{2}\tilde{\Phi}(\gamma_{t})}(\tilde{\sigma}(\gamma_{t})e_{j}1_{\{t\}},\tilde{\sigma}(\gamma_{t})e_{j}1_{\{t\}}).
\]
It yields that
\begin{equation}
\lim_{\varepsilon\rightarrow0^{+}}\frac{E[\tilde{\Phi}(X_{t+\varepsilon}^{\gamma_{t}})]-\tilde{\Phi}(\gamma_{t})}{\varepsilon}=D_{t}\tilde{\Phi}(\gamma_{t})+\langle\tilde{\mu}(\{t\}),\tilde{b}(\gamma_{t})\rangle+\frac{1}{2}\langle\tilde{\lambda}(\{t\})\tilde{\sigma}(\gamma_{t}),\tilde{\sigma}(\gamma_{t})\rangle.
\label{Ito-5}
\end{equation}
By Corollary \ref{w2}, we have
\begin{equation}
\lim_{\varepsilon\rightarrow0^{+}}\frac{E[\Phi(X_{t+\varepsilon}^{\gamma_{t}})]-\Phi(\gamma_{t})}{\varepsilon}=\tilde{D}_{t}\Phi(\gamma_{t})+\langle\tilde{D}_{x}\Phi(\gamma_{t}),\tilde{b}(\gamma_{t})\rangle+\frac{1}{2}\langle\tilde{D}_{xx}\Phi(\gamma_{t})\tilde{\sigma}(\gamma_{t}),\tilde{\sigma}(\gamma_{t})\rangle.
\label{Ito-6}
\end{equation}
Notice that $b$ and $\sigma$ can take arbitrary values satisfying Assumptions (\ref{assu-1}) and (\ref{assu-2}). Comparing (\ref{Ito-5}) and (\ref{Ito-6}), we obtain the results.
\end{proof}
\textbf{Acknowledgement.} \textit{The authors would like to thank Shige Peng for pointing out that, based on our results, the Dupire derivative is a weaker concept than the Fr\'{e}chet derivative.}
\bigskip
| {
"timestamp": "2013-01-25T02:00:44",
"yymm": "1301",
"arxiv_id": "1301.5691",
"language": "en",
"url": "https://arxiv.org/abs/1301.5691",
"abstract": "In this paper, we study the relation between Fréchet derivatives and Dupire derivatives, in which the latter are recently introduced by Dupire [4]. After introducing the definition of Fréchet derivatives for non-anticipative functionals, we prove that the Dupire derivatives and the extended Fréchet derivatives are coherent on continuous pathes.",
"subjects": "Probability (math.PR)",
"title": "The Dupire derivatives and Fréchet derivatives on continuous pathes"
} |
https://arxiv.org/abs/2006.12689 | Bounds for Combinatorial Types of Non-Attacking Riders | Given q non-attacking riders with r moves, the number of combinatorial types has not been found for r greater than 2 and q greater than 3. This paper aims to create upper and lower bound functions which can be applied to any q and r, regardless of size. | \section{Introduction/Background}
\large{\null\quad Consider first an infinite chessboard upon which is placed a queen. Now, if a second queen is placed, we dictate that these queens cannot be in the line of fire of each other. With this assumption, it is clear that the second piece can be placed in 8 different locations with respect to the first piece; however, it will be explained later that this results in only 4 combinatorial types. We now expand this concept to an infinite plane upon which ``pieces'' or ``riders'' may be placed with no grid present; these two terms are interchangeable. Each rider has some number r of moves, denoting the number of different straight-line movements that can be made. From the example above, the queen would have 4 moves, and a rook would have 2. Pawns, knights, and kings do not exist in this scenario because they cannot continue in a direction indefinitely. One can also imagine a piece with 5 or more moves: such a piece could simply move in 5 or more straight directions indefinitely. Also note that these moves need not be spaced at uniform angles. We wish to find a formula which will output the number of combinatorial types for any given number of riders and moves. Note that we will not mix riders with different movement patterns; for example, we will not count the combinatorial types for a rider with 4 moves placed on a plane together with a rider with 2 moves.}
\begin{figure}[h]
\centering
\includegraphics[width=5cm, height=5cm]{figure1.jpg}
\caption{Labeling the 8 regions for a piece with 4 moves.}
\end{figure}
\section{General Syntax}
\large{\quad Two non-attacking pieces are said to have the same combinatorial type if for each pair $\mathbb{P}_i$ and $\mathbb{P}_j$, $\mathbb{P}_j$ lies in the same region of the board with respect to $\mathbb{P}_i$ in both configurations (Hanusa). We will first introduce some notation. Without loss of generality, let every piece contain a move which is vertical. Then, denote region I as the region directly clockwise of the upper half of this vertical move. Let region II be the region clockwise of region I. Continue clockwise in this manner, labeling every region of the piece. Given a combinatorial type (such as Figure 1), we record it by first choosing a piece, call it $\mathbb{P}_1$. Then, choose any other piece, $\mathbb{P}_2$, and record the region of $\mathbb{P}_1$ that $\mathbb{P}_2$ is in. Begin a list, say x. Let the first index of x be this number we recorded, i.e., x=(1). Then, choose a third piece, $\mathbb{P}_3$, and let the second index of x be the region of $\mathbb{P}_1$ that $\mathbb{P}_3$ is in. Let the third index of x be the region of $\mathbb{P}_2$ that $\mathbb{P}_3$ is in, i.e., x=(1,6,5). Continue this process, so each $\mathbb{P}_i$, $i\geq 2$, is recorded in the indices $[\frac{(i-1)(i-2)}{2}+1,\frac{i(i-1)}{2}]$ of x, the j-th of which is its location relative to $\mathbb{P}_j$, $j<i$. Note that one combinatorial type can be recorded in multiple different ways.\\
\begin{figure}[h]
\centering
\includegraphics[width=16cm, height=8cm]{figure2.jpg}
\caption{This would typically be recorded as (1,6,5). However, it could also be (6,1,2) if we label the pieces in the order ($\mathbb{P}_1,\mathbb{P}_3,\mathbb{P}_2$). There are $3!=6$ different ways to order 3 pieces, thus there are 6 possible ways to record the above figure. We say there are 6 permutations of the above combinatorial type.}
\end{figure}
\textbf{Lemma 1}: There are $q!$ ways to record a combinatorial type with $q$ riders. Call each recording of the combinatorial type a permutation.\\ \\
\null\quad Proof: To record a different permutation of the same combinatorial type, simply record the pieces in a different order; in other words, choose a different piece to denote as $\mathbb{P}_i$ when recording the permutation. There are $q!$ ways to order the list of integers from one to $q$, so there are $q!$ orderings. It remains to show that each ordering yields a unique permutation. Suppose not: two distinct orderings of the pieces yield the same permutation. For instance, one labels the pieces in the order (1,2,3) and results in the permutation (1,6,5), while the other labels them (2,1,3) but also results in the permutation (1,6,5). This is impossible, since some pair of pieces appears in opposite relative order in the two labelings and is therefore recorded from opposite sides, giving different region labels. Thus we have proved our claim.\\ \\
\null\quad Therefore, to find the number of combinatorial types for $q$ riders and $r$ moves, it suffices to find the number of permutations and divide by $q!$.}
\section{Lower Bound}
\large{Define a \emph{space} to be a maximal region of the plane crossed by no move. Consider Figure 2, which has 34 spaces. \\ \\
\null\quad \textbf{Lemma 2}: Every combinatorial type of $q$ riders and $r$ moves has the same number of spaces.\\
Proof: By induction on the number of riders, $q$. Base case: if there is one rider, then since there is only one combinatorial type, the number of spaces is the same for every type. Inductive step: suppose that for $r$ moves, every combinatorial type with $q-1$ riders has the same number of spaces. Regardless of where the new $q$-th rider is placed, each of its moves will intersect $(r-1)(q-1)$ moves from the other $q-1$ riders (each move will not intersect the parallel move of the other riders). The $q$-th rider divides the space it is placed in into $2r$ spaces, a net gain of $2r-1$, and its moves create $r(r-1)(q-1)$ additional spaces, since its $r$ moves cross $(r-1)(q-1)$ lines. So we can write a recursion for the number of spaces: letting $s(q,r)$ denote the number of spaces for $q$ riders and $r$ moves, $s(q,r)=(2r-1)+r(r-1)(q-1)+s(q-1,r)$. Since none of these counts depends on the placement of the new rider, by the inductive hypothesis each combinatorial type has the same number of spaces.\\ \\
\null\quad Moreover, we now have an inductive formula for the number of spaces, which can be rewritten in closed form as $s(q,r)=\frac{q(q+1)}{2}(r^2-r)+q(-r^2+3r-1)+1$. Next, define $p(q,r)$ as the number of permutations for $q$ riders and $r$ moves. Since the $(n+1)$-th rider may be placed in any of the $s(n,r)$ spaces left by the first $n$ riders, it is clear via Lemma 2 that $p(q,r)= \prod\limits_{n=1}^{q-1}s(n,r)$. Thus, letting $t(q,r)$ denote the number of combinatorial types, conclude via Lemma 1 that \[t(q,r)=\frac{1}{q!}\cdot \prod\limits_{n=1}^{q-1}\left[\frac{n(n+1)}{2}(r^2-r)+n(-r^2+3r-1)+1\right]\]
}
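The recursion and the closed form for $s(q,r)$, and the resulting count of permutations, can be checked numerically. The following Python sketch (the function names s\_spaces, s\_recursive, and t\_lower are ours) implements them directly:

```python
from math import factorial, prod

def s_spaces(q, r):
    """Closed-form count of spaces after q riders with r moves each."""
    return q * (q + 1) // 2 * (r * r - r) + q * (-r * r + 3 * r - 1) + 1

def s_recursive(q, r):
    """Inductive count: a new rider adds (2r-1) + r(r-1)(q-1) spaces."""
    if q == 1:
        return 2 * r  # one rider's r concurrent moves cut the plane into 2r sectors
    return (2 * r - 1) + r * (r - 1) * (q - 1) + s_recursive(q - 1, r)

def t_lower(q, r):
    """Number of combinatorial types from Lemmas 1 and 2: p(q,r) / q!."""
    return prod(s_spaces(n, r) for n in range(1, q)) / factorial(q)
```

For example, t\_lower(3, 3) evaluates to 17 and t\_lower(4, 3) to 144.5, matching the values tabulated later.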
\section{Upper Bound}
\large{\null\quad While this agrees with previous formulas (Hanusa), (Kot\v{e}\v{s}ovec) for low $q$ and $r$, it acts only as a lower bound once we reach $q\geq 4, r\geq 3$. This is due to a flaw in Lemma 2. Suppose $q-2$ riders with $r$ moves have already been placed on the board. When the $(q-1)$-th rider is placed in certain spaces, two different sets of spaces can be created depending on where within the space it is placed. Thus, when the final $q$-th rider is placed, it may have a larger than expected set of spaces to be placed in, depending on where the $(q-1)$-th rider was placed. Figure 3 demonstrates this. \\
\begin{figure}[h]
\centering
\includegraphics[width=16cm, height=8cm]{figure3.jpg}
\caption{First note that in both diagrams we have the same recording, namely (1,6,5). In diagram 1, we have a blackened triangle. If a fourth rider were placed in this triangle, we would record the configuration as (1,6,5,1,5,3). This location does not exist in diagram 2. Moreover, although the first three riders are combinatorially equivalent in both diagrams, we record diagram 2 as (1,6,5,6,4,2); once again, this space does not exist in diagram 1. Thus $s(4,3)$ will be larger than expected, due to extra spaces arising with the same orientation of the first three pieces.}
\end{figure}
Determining how many of these ``problem spaces'' occur for given $q$ and $r$ is a difficult problem, and is yet to be solved. However, an upper bound can be created. These problem spaces only occur at intersections of three or more moves when fewer than $q$ riders have been placed, hence the bounds $q\geq 4, r\geq 3$. (This part is clear, since Lemma 2 accounts for all intersections of 2 lines.) Thus, if we simply add to $s(q,r)$ every instance of a three-move intersection, we can find a formula for $t(q,r)$ for any $q$ and $r$. This, however, is also very difficult, so we instead create an upper bound on the number of 3-line intersections with some simple geometry. \\ \\
\null\quad Note that for $q-2$ riders and $r$ moves, there are $r(r-1)\frac{(q-2)(q-3)}{2}$ intersections of 2 moves. This can be seen by counting one rider at a time. First, count the intersections created by the $(q-2)$-th rider: each of its $r$ moves intersects $r-1$ moves from each of the other $q-3$ riders. Next, the $(q-3)$-th rider's $r$ moves intersect $r-1$ moves from each of the remaining $q-4$ riders (excluding the $(q-2)$-th rider), and so on. Summing these up yields $r(r-1)[(q-3)+(q-4)+\cdots+1]=r(r-1)\frac{(q-2)(q-3)}{2}$. Next, assume that the $(q-1)$-th rider can reach any 2-move intersection with any move which is not parallel to either of the two moves forming the intersection. (This is why we obtain an upper bound.) So, when placing the $(q-1)$-th rider, each move can interact with $\frac{r-2}{r}\cdot r(r-1)\frac{(q-2)(q-3)}{2}$ 2-intersections. If we define $a(q,r)$ as the number of problem spaces for $q$ riders and $r$ moves, we conclude that $a(q,r)=r(r-1)(r-2)\frac{(q-2)(q-3)}{2}$. This results in the following formula:
\begin{gather}
\frac{1}{q!}\cdot \prod\limits_{n=1}^{q-1}[\frac{n(n+1)}{2}(r^2-r)+n(-r^2+3r-1)+1]\leq t(q,r)\leq \\ \frac{1}{q!}\cdot \prod\limits_{n=1}^{q-1}[\frac{n(n+1)}{2}(r^2-r)+n(-r^2+3r-1)+1+r(r-1)(r-2)\frac{(n-2)(n-3)}{2}]
\end{gather}
\large{Here the upper bound in (2) only applies when $q\geq 4$ and $r\geq 3$. One might ask what occurs at intersections of 4 or more lines. As seen in Figure 3, an intersection of 3 lines creates one problem space, and an intersection of $x$ lines (where $x\geq 3$) creates $x-2$ problem spaces. However, $a(q,r)$ counts an intersection of $x$ lines as ${x-1 \choose 2}$ problem spaces: there are ${x-1 \choose 2}$ 2-intersections in an $(x-1)$-intersection, and the $q$-th rider turns each into a 3-intersection, which equals 1 problem space. Since ${x-1 \choose 2}\geq x-2$ for $x\geq 2$, we need not worry about larger intersections; they are still counted below the upper bound. This concludes the proof. Below are the results of these equations compared to Hanusa's results.}
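Both bounds can be evaluated numerically. The following self-contained Python sketch (the function names are ours) implements the two product formulas and reproduces several entries of the table below:

```python
from math import factorial, isclose, prod

def s_spaces(n, r):
    # Closed-form space count s(n, r) from the previous section.
    return n * (n + 1) // 2 * (r * r - r) + n * (-r * r + 3 * r - 1) + 1

def a_problem(n, r):
    # a(n, r) = r(r-1)(r-2)(n-2)(n-3)/2, the correction for 3-move intersections.
    return r * (r - 1) * (r - 2) * (n - 2) * (n - 3) // 2

def t_bounds(q, r):
    """Lower and upper bounds on t(q, r); the upper bound is meaningful
    only for q >= 4 and r >= 3, where the two formulas differ."""
    lo = prod(s_spaces(n, r) for n in range(1, q)) / factorial(q)
    hi = prod(s_spaces(n, r) + a_problem(n, r) for n in range(1, q)) / factorial(q)
    return lo, hi
```

For instance, t\_bounds(4, 3) returns (144.5, 289.0) and t\_bounds(4, 4) returns (522.0, 2088.0), matching the corresponding rows of the table.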
\begin{table}[h]
\centering
\begin{tabu}{|c|[2pt]c|c|c|c|c|c|}
\hline
q$\backslash$ r&1&2&3&4&5&6 \\ \tabucline[2pt]{-}
1&1&1&1&1&1&1 \\ \hline
2&1&2&3&4&5&6 \\ \hline
3&1&6&17&36&65&106 \\ \hline
4&1&24&144.5$\leq$ 151$\leq$ 289&522$\leq$ 574$_\mathbb{Q}\leq$ 2088 &1430$\leq$?$\leq$ 10010&3286$\leq$?$\leq$36146\\ \hline
5&1&120&1647.3$\leq$1899&10544.4$\leq$14206$_\mathbb{Q}$&44902$\leq$?&147870$\leq$? \\
~&~&~&$\leq$3641.4&$\leq$52200&$\leq$434434&$\leq$2494074\\ \hline
6&1&720&23611.3$\leq$31709&274154.4$\leq$501552$_\mathbb{Q}$& 1840982$\leq$?&8773620$\leq$?\\
~&~&~&$\leq$ 63117.6&$\leq$1983600&$\leq$ 30844814&$\leq$297626164 \\ \hline
\end{tabu}
\caption{For $q\leq 3$ and $r\leq 2$, the upper bound formula agrees exactly with results previously found by Kot\v{e}\v{s}ovec and with equations from Hanusa. For the other entries, the middle value is from Kot\v{e}\v{s}ovec's empirical formulas, while the bounds come from the formulas above. $\mathbb{Q}$ denotes that the empirical formula only applies to riders with moves identical to a chess queen; riders with 4 moves in different orientations may result in different values.}
\end{table}
\\ \\
\section{Conclusion}
\large{\null\quad Clearly much progress can be made on improving these bounds (especially the upper bound). The question that remains is how to calculate the number of problem spaces given $q$ and $r$. In addition, it is possible that different types of pieces (pieces with the same number of moves, but whose moves are set different angles apart) could result in a different number of combinatorial types despite having the same number of moves and riders.}
}
\section{References}
\large{
Hanusa, Christopher R. H., and Thomas Zaslavsky. “A q-Queens Problem. VII. Combinatorial Types of Nonattacking Chess Riders.” arXiv:1906.08981, 11 June 2020, arxiv.org/abs/1906.08981.
\\ \\
V. Kot\v{e}\v{s}ovec, Non-attacking chess pieces (chess and mathematics) [\textit{\v{S}ach a matematika -- po\v{c}ty rozm\'{i}st\v{e}n\'{i} neohro\v{z}uj\'{i}c\'{i}ch se kamen\r{u}}]. Self-published online book, Apr.\ 2010; 6th ed., Feb.\ 2013. \\
http://www.kotesovec.cz/math.htm
}
\end{document}
| {
"timestamp": "2020-06-25T02:20:45",
"yymm": "2006",
"arxiv_id": "2006.12689",
"language": "en",
"url": "https://arxiv.org/abs/2006.12689",
"abstract": "Given q non-attacking riders with r moves, the number of combinatorial types has not been found for r greater than 2 and q greater than 3. This paper aims to create upper and lower bound functions which can be applied to any q and r, regardless of size.",
"subjects": "Combinatorics (math.CO)",
"title": "Bounds for Combinatorial Types of Non-Attacking Riders",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9693241956308278,
"lm_q2_score": 0.7310585727705126,
"lm_q1q2_score": 0.708632763009798
} |
https://arxiv.org/abs/1304.6782 | Minimal Residual Methods for Complex Symmetric, Skew Symmetric, and Skew Hermitian Systems | While there is no lack of efficient Krylov subspace solvers for Hermitian systems, there are few for complex symmetric, skew symmetric, or skew Hermitian systems, which are increasingly important in modern applications including quantum dynamics, electromagnetics, and power systems. For a large consistent complex symmetric system, one may apply a non-Hermitian Krylov subspace method disregarding the symmetry of $A$, or a Hermitian Krylov solver on the equivalent normal equation or an augmented system twice the original dimension. These have the disadvantages of increasing either memory, conditioning, or computational costs. An exception is a special version of QMR by Freund (1992), but that may be affected by non-benign breakdowns unless look-ahead is implemented; furthermore, it is designed for only consistent and nonsingular problems. For skew symmetric systems, Greif and Varah (2009) adapted CG for nonsingular skew symmetric linear systems that are necessarily and restrictively of even order.We extend the symmetric and Hermitian algorithms MINRES and MINRES-QLP by Choi, Paige and Saunders (2011) to complex symmetric, skew symmetric, and skew Hermitian systems. In particular, MINRES-QLP uses a rank-revealing QLP decomposition of the tridiagonal matrix from a three-term recurrent complex-symmetric Lanczos process. Whether the systems are real or complex, singular or invertible, compatible or inconsistent, MINRES-QLP computes the unique minimum-length, i.e., pseudoinverse, solutions. It is a significant extension of MINRES by Paige and Saunders (1975) with enhanced stability and capability. | \section{Introduction} \label{sec:intro}
Krylov subspace methods \red{for linear systems} are generally divided
into two classes: \red{those} for Hermitian matrices
(e.g.\red{,}\ \CG~\cite{HS52}, \MINRES~\cite{PS75},
\SYMMLQ~\cite{PS75}, \MINRES-\QLP~\cite{CPS11,CS12,CS12b,C06}) and
those for general matrices without such symmetries
(e.g.\red{,}\ \BCG~\cite{F76}, \GMRES~\cite{SS86}, \QMR~\cite{FN91},
\BICGSTAB~\cite{V92}, \LSQR~\cite{PS82a,PS82b}, \red{and
IDR$(s)$~\cite{SV08})}. Such a division is largely due to
historical reasons in numerical linear algebra---the most prevalent
structure for matrices arising from practical applications being
Hermitian (which reduces to symmetric for real
matrices). However\red{,} other types of symmetry structures, notably
complex symmetric, skew symmetric, and skew Hermitian matrices, are
becoming increasingly common in modern applications. Currently,
\red{except} possibly for storage and matrix-vector products, these
are treated \red{as} general matrices with no symmetry structures. The
algorithms in this article go substantially further in developing
specialized Krylov subspace algorithms designed at the outset to
exploit \red{the} symmetry structures. In addition, our algorithms
constructively reveal the (numerical) compatibility and singularity of
a given linear system\red{;} users do not have to know these
properties a priori.
We are concerned with iterative methods for solving a large linear system
$Ax=b$ or the more general minimum-length least-squares (\LS) problem
\begin{equation} \label{eqn4b}
\min \norm{x}_2 \quad \text{s.t.} \quad x \in
\arg\min_{x \in \mathbb{C}^{n}}\norm{Ax-b}_2,
\end{equation}
where $A \in \mathbb{C}^{n \times n}$ is complex symmetric ($A=A^T\!$\;)
or skew Hermitian ($A=-A^*$), and possibly singular\red{,} and $b \in
\mathbb{C}^{n}$. Our results are directly applicable to problems with
symmetric or skew symmetric matrices $A=\pm A^T\! \in \mathbb{R}^{n
\times n}$ and real vectors $b$. $A$ may exist only as an operator
for returning the product $Ax$.
The solution of~\eqref{eqn4b}, called the \textit{minimum-length} or
\textit{pseudoinverse} solution~\cite{GV12}, is formally given by
$x^\dagger = (A^* A)^\dagger A^* b$,
where $A^\dagger$ denotes the pseudoinverse of $A$. The pseudoinverse
is continuous under perturbations $E$ for which
$\rank{(A+E)}=\rank{(A)}$~\cite{S69}, and $x^\dagger$ is continuous
under the same condition. Problem~\eqref{eqn4b} is then
well-posed~\cite{Had1902}.
Let $A=U \Sigma U^T\! $ be a Takagi decomposition~\cite{HJ}, a
singular-value decomposition (\SVD) specialized for a complex
symmetric matrix, with $U$ unitary ($U^*U=I$) and $\Sigma \equiv
\diag(\smat{\sigma_1,\ldots,\sigma_n})$ real non-negative and
$\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$, where $r$ is the
rank of $A$. We define the condition number of $A$ to be $\kappa(A) =
\smash[b]{\frac{\sigma_1}{\sigma_r}}$, and we say that $A$ is
ill-conditioned if $\kappa(A)\gg 1$. Hence a mathematically
nonsingular matrix (e.g., $A=\smat{1 & 0 \\ 0 & \varepsilon}$, where
$\varepsilon$ is the machine precision) could be regarded as
numerically singular. Also, a singular matrix could be
well-conditioned or ill-conditioned. For a skew Hermitian matrix, we
use its (full) eigenvalue decomposition $A=V\Lambda V^*$, where
$\Lambda$ is a diagonal matrix of imaginary numbers (possibly zeros;
in conjugate pairs if $A$ is real, i.e., skew symmetric) and $V$ is
unitary.\footnote{Skew Hermitian (symmetric) matrices are, like
Hermitian matrices, unitarily diagonalizable (i.e.,
normal~\cite[Theorem~24.8]{TB}).} We define its condition number as
$\kappa(A) = \smash[b]{\frac{|\lambda_1|}{|\lambda_r|}}$, the ratio of
the largest and smallest nonzero eigenvalues in magnitude.
\bigskip
\begin{example} We contrast the five classes of symmetric
or Hermitian matrices by their definitions and small instances of
order $n=2$:
\begin{align*}
\mathbb{R}^{n\times n} \ni A &=A^T\! =\bmat{1 & 5 \\ 5 & 1} \text{ is
symmetric}.
%
\\ \mathbb{C}^{n\times n} \ni A &=A^* =\bmat{1 & 1-2i
\\ 1+2i & 1} \text{ is Hermitian (with real diagonal)}.
%
\\ \mathbb{C}^{n\times n} \ni A &=A^T\! =\bmat{2+i & 1-2i \\ 1-2i & i}
\text{ is complex symmetric (with complex
diagonal)}.
%
\\ \mathbb{R}^{n\times n} \ni A
&=-A^T\! =\bmat{0 & 5 \\ -5 & 0} \text{ is skew symmetric (with zero
diagonal)}.
\\ \mathbb{C}^{n\times n} \ni A &=-A^* =\bmat{0 &
1-2i \\ -1-2i & \red{i}} \text{ is skew Hermitian (with \red{imaginary} diagonal)}.
\end{align*}
\end{example}
\CG, \SYMMLQ, and \MINRES are designed for solving nonsingular
symmetric systems $Ax = b$. \CG is efficient on symmetric positive
definite systems. For indefinite problems, \SYMMLQ and \MINRES are
reliable even if $A$ is ill-conditioned.
Choi~\cite{C06} \red{appears} to be the first \red{to} comparatively
\red{analyze} the algorithms on singular symmetric and Hermitian
problems. On (singular) incompatible problems \CG and \SYMMLQ
iterates $x_k$ diverge to some nullvectors of
$A$~\cite[Propositions~2.7, 2.8, and 2.15; Lemma 2.17]{C06}. \MINRES
often seems more desirable to users because its residual norms are
monotonically decreasing. On singular compatible systems, \MINRES
returns $x^\dagger$~\cite[Theorem~2.25]{C06}. On singular
incompatible systems, \MINRES remains reliable if it is terminated
with a suitable stopping rule that monitors
$\norm{Ar_k}$~\cite[Lemma~3.3]{CPS11},
but the solution is generally not
$x^\dagger$~\cite[Theorem~3.2]{CPS11}. \MINRESQLP
\cite{CPS11,CS12,CS12b,C06} is a significant extension of \MINRES,
capable of computing $x^\dagger$, simultaneously minimizing residual
and solution norms. The additional cost of \MINRESQLP is moderate
relative to \MINRES: $1$ vector in memory, $4$ axpy \red{operations}
($y \leftarrow \alpha x + y$), and $3$ vector \red{scalings} ($x
\leftarrow \alpha x$) per iteration. The efficiency of \MINRES is
partially, and in some cases almost fully, retained in \MINRESQLP by
transferring from a \emph{\MINRES phase} to a \emph{\MINRESQLP phase}
only when an estimated $\kappa(A)$ exceeds a user-specified value.
The \MINRES phase is optional, consisting of only \MINRES iterations
for nonsingular and well-conditioned subproblems. The \MINRESQLP
phase handles less well-conditioned and possibly numerically singular
subproblems. In all iterations, \MINRESQLP uses \QR factors of the
tridiagonal matrix from a Lanczos process and then applies a second
\QR decomposition on the conjugate transpose of the upper-triangular
factor to obtain and reveal the rank of a lower-tridiagonal form.
On nonsingular systems, \MINRESQLP enhances the accuracy (with
\red{smaller} rounding errors) and stability of \MINRES. It is
applicable to symmetric and Hermitian problems with no traditional
restrictions such as nonsingularity and definiteness of~$A$ or
compatibility of~$b$.
The aforementioned established Hermitian methods are not\red{,
however,} directly applicable to complex or skew symmetric
equations. For consistent complex symmetric problems, which could
arise in Helmholtz equations, linear systems that involve Hankel
matrices, or applications in quantum dynamics, electromagnetics, and
power systems, we may apply a non-Hermitian Krylov subspace method
disregarding the symmetry of $A$ or a Hermitian Krylov solver (such as
\CG, \SYMMLQ, \MINRES, or \MINRESQLP) on the equivalent normal
equation or an augmented system twice the original dimension. They
suffer increasing memory, conditioning, or computational costs. An
exception\footnote{It is noteworthy that among direct methods for
large sparse systems, MA57 and ME57~\cite{D09} are available for
real\red{, Hermitian}, and complex symmetric problems.}
is a special version of \QMR by Freund (1992)~\cite{F92}, which takes
advantage of the matrix symmetry by using an unsymmetric Lanczos
framework. Unfortunately, the algorithm may be affected by
\red{nonbenign} breakdowns unless a look-ahead strategy is
implemented. Another less than elegant feature of \QMR is \red{that}
the vector norm of choice is induced by the inner product $x^T\! y$ but
it is not a proper vector norm (e.g., $0 \neq x^T\! :=\smat{ 1 & i }$,
where $i=\sqrt{-1}$, yet $x^T\! x = 0$). Besides, \QMR is designed for
only nonsingular and consistent problems. Inconsistent complex
symmetric problems~\eqref{eqn4b} could arise from shifted problems in
inverse or Rayleigh quotient iterations; mathematically or numerically
singular or inconsistent systems, in which $A$ or $b$ are vulnerable
to errors due to measurement, discretization, truncation, or
round-off. In fact, \QMR and most non-Hermitian Krylov solvers (other
than LSQR) fail to converge to $x^\dagger$ on an
example as simple as $A = i \; \diag(\smat{1 \\ 0})$ and $b = i
\smat{1 \\ 1}$, for which $x^\dagger =\smat{1 \\ 0}$.
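This small example can be verified directly with a dense pseudoinverse computation. The following NumPy sketch (ours, independent of the Krylov solvers under discussion) confirms the minimum-length solution and its optimality:

```python
import numpy as np

# The singular, inconsistent example from the text:
# A = i*diag(1, 0) and b = i*(1, 1)^T, with pseudoinverse solution (1, 0)^T.
A = 1j * np.diag([1.0, 0.0])
b = 1j * np.array([1.0, 1.0])

# Minimum-length least-squares solution x^dagger = pinv(A) b.
x_dagger = np.linalg.pinv(A) @ b

# It agrees with the formula x^dagger = (A^* A)^dagger A^* b.
x_formula = np.linalg.pinv(A.conj().T @ A) @ A.conj().T @ b

# Optimality: the residual satisfies the normal equations A^*(A x - b) = 0.
residual = A @ x_dagger - b
```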
Here we extend the symmetric and Hermitian algorithms \MINRES and
\MINRESQLP to complex symmetric, skew symmetric, and skew Hermitian systems.
The main aim is to deal reliably with compatible or incompatible
systems and to return the \emph{unique} solution
of~\eqref{eqn4b}. Like \QMR and the Hermitian Krylov solvers, \red{our
approach} exploits the matrix symmetry.
Noting the similarities in the definitions of skew symmetric matrices
($A=-A^T\! \in \mathbb{R}^{n\times n}$\red{)} and complex symmetric matrices
and motivated by algebraic Riccati equations~\cite{I84} and more
recent, novel applications of Hodge theory in \red{data
mining}~\cite{JLYY11,GL11}, we evolve \MINRESQLP \red{further} for
solving skew symmetric linear systems. Greif and Varah~\cite{GV09}
adapted CG for nonsingular skew symmetric linear systems that are
\emph{skew-$A$ conjugate}, meaning $-A^2$ is symmetric positive
definite. The algorithm is further restricted to $A$ of \red{even
order because} a skew symmetric matrix of odd order is singular.
Our \MINRESQLP extension has no such limitations and \red{is}
applicable to singular problems. For skew Hermitian \red{systems} with
skew Hermitian matrices or operators ($A=-A^* \in \mathbb{C}^{n\times
n}$), our approach is to transform them into Hermitian systems so
that they \red{can} immediately take advantage of the original
Hermitian version of \MINRESQLP.
\subsection{Notation} \label{sec:notation}
For an incompatible system, $Ax\approx b$ is shorthand for the \LS
problem~(\ref{eqn4b}). We use ``$\simeq$'' to mean ``approximately
equal to\red{.''} The letters $i$, $j$, $k$ in subscripts or
superscripts denote integer indices\red{; $i$ may} also represent
$\sqrt{-1}$\red{. We use} $c$ and $s$ \red{for} cosine and sine of
some angle $\theta$; $e_k$ \red{is} the $k$th unit vector; $e$
\red{is} a vector of all ones; and other lower-case letters such as
$b$, $u$, and $x$ (possibly with integer subscripts) denote
\textit{column} vectors. Upper-case letters $A$, $T_k$, $V_k$,
\dots{} denote matrices, and $I_k$ is the identity matrix of order
$k$. Lower-case Greek letters denote scalars; in particular,
$\varepsilon \simeq 10^{-16}$ denotes floating-point double precision.
If a quantity $\delta_k$ is modified one or more times, we denote its
values by $\delta_k$, $\delta_k^{(2)}$, \red{and so on}. We use
$\diag(v)$ to denote a diagonal matrix with elements of a vector $v$
on the diagonal. The transpose, conjugate, and conjugate transpose of
a matrix $A$ \red{are} denoted \red{by} $A^T\!$, $\conj{A}$, and
$A^*=\conj{A}^T\!$\red{,} respectively. The symbol $\norm{\,\cdot\,}$
denotes the $2$-norm of a vector ($\norm{x}=\sqrt{x^*x}$) or a matrix
($\norm{A}=\sigma_1$ from $A$'s \SVD).
\subsection{Overview}
In Section~\ref{sec:review} we briefly review the Lanczos processes
and QLP decomposition before developing the algorithms in Sections
\ref{sec:csminres-standalone}-\ref{sec:transfer}. Preconditioned
algorithms are described in Section~\ref{sec:pcsminres}. Numerical
experiments are described in Section~\ref{sec:numerical}. We conclude
with future work and related software in
Section~\ref{sec:conclusions}. Our pseudocode and a summary of norm
estimates and stopping conditions are given in
Appendices~\ref{sec:pseudo} and~\ref{sec:QLPstop}.
\section{Review} \label{sec:review}
In the following few subsections, we summarize algebraic methods
necessary for our algorithmic development.
\subsection{\red{Saunders} and Lanczos processes} \label{sec:Lanczos}
Given a complex symmetric operator $A$ and a vector $b$, a
Lanczos-\emph{like}\footnote{We distinguish our process from the
complex symmetric Lanczos process~\cite{L56_88} used in
QMR~\cite{F92}.} process~\cite{BS99}, which we name the
\emph{Saunders process}, computes vectors $v_k$ and tridiagonal
matrices $\underline{T_k}$ according to $v_0 \equiv 0$, $\beta_1
v_1=b$, and then\footnote{Numerically, $p_k = A\conj{v}_k-\beta_k
v_{k-1}$, $\alpha_k = v_k^* p_k$, $\beta_{k+1}v_{k+1} =
p_k-\alpha_kv_k$ is slightly better~\cite{P76}.}
\begin{equation} \label{eq:savk}
p_k = A\conj{v}_k, \qquad \alpha_k = v_k^* p_k, \qquad
\beta_{k+1}v_{k+1} = p_k-\alpha_kv_k-\beta_kv_{k-1}
\end{equation}
for $k=1,2,\dots,\ell$, where we choose $\beta_k > 0$ to give
$\norm{v_k}=1$. In matrix form,
\begin{equation} \label{eq:avk}
A \conj{V}\!_k \!=\! V_{k+1}\underline{T_k},\quad
\underline{T_k} \!\equiv\! \mbox{\footnotesize
$\bmat{\alpha_{1} & \beta_{2}
\\ \beta_{2} & \alpha_{2} & \ddots
\\ & \ddots & \ddots & \beta_k
\\ & & \beta_k & \alpha_k
\\ & & & \beta_{k+1}}
$}
\!\equiv\!
\bmat{T_k \\ \beta_{k+1}e_k^T},
\quad V_k \!\equiv\! \bmat{v_1 & \!\cdots\! & v_k}.
\end{equation}
In exact arithmetic, the columns of $V_k$ are orthogonal\red{,} and the
process stops with $k = \ell$ and $\beta_{\ell+1}=0$ for some $\ell
\le n$, and then $A \conj{V}\!_\ell = V_\ell T_\ell$. For derivation
purposes we assume that this happens, though in practice it is rare
unless $V_k$ is reorthogonalized for each $k$.
In any case, \eqref{eq:avk} holds to machine precision\red{,} and the
computed vectors satisfy $\norm{V_k}_1 \simeq 1$ (even if $k \gg n$).
If instead we are given a skew symmetric $A$, the following is a
Lanczos process~\cite[Algorithm~1]{GV09}\footnote{Another Lanczos
process for skew symmetric~$A$ \red{using} a different measure to
normalize $\beta_{k+1}$ was developed in~\cite{W78,SW93}.} that
transforms $A$ to a series of expanding, skew symmetric tridiagonal
matrices $T_k$ and generates a set of orthogonal vectors in $V_k$ in
exact arithmetic:
\begin{equation}\label{eq:savk-ss}
p_k = A v_k, \qquad - \beta_{k+1}v_{k+1} = p_k -\beta_kv_{k-1}\red{,}
\end{equation}
\red{where $\beta_k > 0$ for $k < \ell$.} Its associated matrix form is
\begin{equation} \label{eq:avk-sh}
A V_k \!=\! V_{k+1}\underline{T_k},\quad
\underline{T_k} \!\equiv\! \mbox{\footnotesize
$\bmat{0 & \beta_{2}
\\ - {\beta}_{2} & 0 & \ddots
\\ & \ddots & \ddots & \beta_k
\\ & & - {\beta}_k & 0
\\ & & & - {\beta}_{k+1}}
$}
\!\equiv\!
\bmat{T_k \\ -\beta _{k+1}e_k^T}.
\end{equation}
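The two-term skew symmetric recurrence admits an equally short sketch. The following NumPy implementation (ours) checks $A V_k = V_{k+1}\underline{T_k}$, the orthonormality of $V_{k+1}$, and the skew symmetry of $T_k$ on a random real skew symmetric matrix:

```python
import numpy as np

def skew_lanczos(A, b, k):
    """k steps of the two-term skew symmetric Lanczos process:
    p = A v_j, then -beta_{j+1} v_{j+1} = p - beta_j v_{j-1}."""
    n = len(b)
    V = np.zeros((n, k + 1))
    T = np.zeros((k + 1, k))                   # skew tridiagonal underline-T_k, zero diagonal
    V[:, 0] = b / np.linalg.norm(b)
    v_prev = np.zeros(n)
    beta = 0.0
    for j in range(k):
        w = A @ V[:, j] - beta * v_prev
        beta_next = np.linalg.norm(w)
        if j > 0:
            T[j - 1, j] = beta                 # superdiagonal +beta_{j+1}
        T[j + 1, j] = -beta_next               # subdiagonal  -beta_{j+2}
        v_prev = V[:, j]
        V[:, j + 1] = -w / beta_next
        beta = beta_next
    return V, T

# A random real skew symmetric test matrix (even order, so generically nonsingular).
rng = np.random.default_rng(3)
B = rng.standard_normal((8, 8))
A = B - B.T
b = rng.standard_normal(8)
V, T = skew_lanczos(A, b, 5)
```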
If the skew symmetric process were forced on a skew Hermitian
matrix, the resultant $V_k$ would \emph{not} be orthogonal. Instead,
we multiply $Ax \approx b$ by $i$ on both sides to yield a Hermitian
problem since $(iA)^* = \conj{i} A^* = i A$. This simple
transformation by a scalar multiplication\footnote{Multiplying by $-i$
works equally well\red{,} but without loss of generality, we use $i$.}
preserves the \red{conditioning since} $\kappa(A)=\kappa(iA)$ and
allows us to adapt the original Hermitian Lanczos process with $v_0
\equiv 0$, $\beta_1 v_1= i b$, followed by
\begin{equation} \label{eq:sh-avk}
p_k = iA v_k, \qquad \alpha_k = v_k^* p_k, \qquad
\beta_{k+1}v_{k+1} = p_k-\alpha_kv_k-\beta_kv_{k-1}.
\end{equation}
Its matrix form is the same as~\eqref{eq:avk} except that the first
equation is $i A V_k = V_{k+1}\underline{T_k}$.
\subsection{Properties of the Lanczos processes}
\label{sec:Lanproperties}
The following properties of the Lanczos processes are notable:
\begin{enumerate}
\item If~$A$ and~$b$ \red{are} real, then the Saunders
process~\eqref{eq:savk} \red{for a complex symmetric system} reduces
to the symmetric Lanczos process.
\item The complex and skew symmetric properties of~$A$ carry over
to~$T_k$ by the Lanczos processes~\eqref{eq:savk}
and~\eqref{eq:savk-ss}\red{,} respectively. From the skew Hermitian
process~\eqref{eq:sh-avk}, $T_k$ is symmetric.
\item The skew symmetric Lanczos process~\eqref{eq:savk-ss} is only
two-term recurrent.
\item In~\eqref{eq:sh-avk}, there are two ways to form $p_k$: $p_k =
(iA) v_k$ or $p_k = A(iv_k)$. One may be cheaper than the other. If
$A$ is dense, $iA$ takes $\mathcal{O}(n^2)$ scalar multiplications
and storage. If $A$ is sparse or structured as in the case of
Toeplitz, $iA$ just takes $\mathcal{O}(n)$ multiplications. In
contrast, $iv_k$ takes $n\red{\wp}$ multiplications, where~$\red{\wp}$ is
theoretically bounded by the number of distinct nonzero eigenvalues
of~$A$\red{;} but in practice~$\red{\wp}$ could be an integer multiple
of~$n$.
\item While the skew Hermitian Lanczos process~\eqref{eq:sh-avk} is
applicable to a skew symmetric problem, it involves complex
arithmetic and is thus computationally more costly than the skew
symmetric Lanczos process with a real \red{vector} $b$.
\item If $A$ is changed to $A-\sigma I$ for some scalar shift
$\sigma$, \red{then} $T_k$ becomes $T_k - \sigma I$\red{,} and $V_k$ is unaltered,
showing that singular systems are commonplace. Shifted problems
appear in inverse iteration or Rayleigh quotient iteration. The \red{Saunders and}
Lanczos \red{frameworks efficiently handle} shifted problems.
\item Shifted skew symmetric matrices are not skew
symmetric. This notion also applies to the case of shifted skew
Hermitian matrices. Nevertheless they arise often in Toeplitz
problems~\cite{CJ91,CJ07}.
\item For the skew Lanczos processes, the $k$th Krylov subspace
generated by $A$ and $b$ is defined to be $\mathcal{K}_k(A,b) =
\mathop{\mathrm{range}}(V_k) = \mbox{\rm span} \{b,Ab, \dots, A^{k-1}b\}$. For the Saunders
process, we have a \emph{modified} Krylov subspace~\cite{SSY88} that
we call the \emph{Saunders subspace}, \red{$\mathcal{S}_k(A,b) \equiv \mathcal{K}_{k_1}(A\conj{A},b) \oplus
\mathcal{K}_{k_2}(A\conj{A},A\conj{b})$},
where \red{$\oplus$ is the direct-sum operator,} $k_1+k_2 = k$\red{,} and $0 \le k_1-k_2\le1 $.
\item \label{prop:fullrank} $\underline{T_k}$ has full column rank $k$ for
all $k < \ell$ \red{because} $\beta_1,\dots,\beta_{k+1} > 0$.
\end{enumerate}
\smallskip
\begin{theorem} $T_\ell$ is nonsingular if and only if $b \in \mathop{\mathrm{range}}(A)$.
Furthermore,
$\rank(T_\ell) = \ell-1$ in the case $b \notin \mathop{\mathrm{range}}(A)$.
\begin{proof}
We prove below for $A$ complex symmetric. The proofs are similar
for the skew symmetric and skew Hermitian cases.
We use $A \overline{V}_\ell = V_\ell T_\ell$ twice. First, if
$T_\ell$ is nonsingular, we can solve $T_\ell y_\ell = \beta_1 e_1$
and then $A \overline{V}_\ell y_\ell = V_\ell T_\ell y_\ell = V_\ell
\beta_1 e_1 = b$. Conversely, if $b \in \mathop{\mathrm{range}}(A)$, then
$\mathop{\mathrm{range}}(\overline{V}_\ell) \subseteq \mathop{\mathrm{range}}(\overline{A}) =
\mathop{\mathrm{range}}(A^*)$. Suppose $T_\ell$ is singular. Then there exists $z
\ne 0$ such that $T_\ell z=0$ and thus $V_\ell T_\ell z = A
\overline{V}_\ell z=0$. That is, $0 \ne \overline{V}_\ell z \in
\mathop{\mathrm{null}}(A)$. But this is impossible because $\overline{V}_\ell z \in
\mathop{\mathrm{range}}(A^*)$ and $\mathop{\mathrm{null}}(A) \cap \mathop{\mathrm{range}}(A^*) =\{ 0 \}$. Thus
$T_\ell$ must be nonsingular.
If $b \notin \mathop{\mathrm{range}}(A)$, $T_\ell = \bmat{\underline{T_{\ell-1}}&
\begin{smallmatrix}\beta_\ell e_{\ell-1}
\\ \alpha_\ell
\end{smallmatrix}}$
is singular. It follows that $\ell > \rank(T_\ell) \ge
\rank(\underline{T_{\ell-1}}) = \ell-1$ since $\rank(\underline{T_k})
= k$ for all $k<\ell$. Therefore $\rank(T_\ell) = \ell-1$.
\end{proof}
\label{thm:rankTk}
\end{theorem}
\subsection{QLP decompositions for singular matrices}
\label{sec:QLPreview}
Here we generalize, from real to complex, the \emph{pivoted QLP} matrix
decomposition introduced by Stewart in 1999~\cite{S99}.\footnote{QLP is a
special case of the \ULV decomposition, also by
Stewart~\cite{S93,HC03}.} It is equivalent to two consecutive \QR
factorizations with column interchanges, first on $A$, then on
$\smat{R & S}^*$:
\begin{equation} \label{qlpeqn1}
Q_R A \Pi_R = \bmat{ R & S \\ 0&0 }, \qquad
Q_L \bmat{R^* & 0 \\ S^* & 0} \Pi_L = \bmat{\hat{R} & 0 \\ 0&0},
\end{equation}
giving \emph{nonnegative} diagonal elements, where $\Pi_R$ and $\Pi_L$
are (real) permutations chosen to maximize the next diagonal element
of $R$ and $\hat{R}$ at each stage. This gives
\begin{equation*}
A = QLP, \qquad
Q = Q_R^* \Pi_L, \qquad
L = \bmat{\hat{R}^* & 0 \\ 0&0}, \qquad
P = Q_L \Pi_R^T,
\label{qlpeqn2}
\end{equation*}
with $Q$ and $P$ orthonormal. Stewart demonstrated that the diagonal
elements of $L$ (the \emph{$L$-values}) give better singular-value
estimates than those of $R$ (the \emph{$R$-values}), and the accuracy
is particularly good for the extreme singular values $\sigma_1$ and
$\sigma_n$:
\begin{equation} \label{qlpeqn1a}
R_{ii} \simeq \sigma_i, \quad L_{ii} \simeq \sigma_i, \quad
\sigma_1 \ge \max_i L_{ii} \ge \max_i R_{ii}, \quad \min_i R_{ii}
\ge \min_i L_{ii} \ge \sigma_n.\!\!
\end{equation}
The first permutation $\Pi_R$ in pivoted \QLP is important. The main
purpose of the second permutation $\Pi_L$ is to ensure that the
$L$-values present themselves in decreasing order, which is not always
necessary. If $\Pi_R = \Pi_L = I$, it is simply called the \emph{\QLP
decomposition}, which is applied to each $T_k$ from the Lanczos
processes (Section~\ref{sec:Lanczos}) in \MINRESQLP.
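The two-factorization construction above can be sketched numerically. The following is a minimal Python illustration (our own sketch, not part of any implementation discussed here), assuming SciPy's column-pivoted \QR; it assembles $A = QLP$ from two pivoted \QR factorizations and checks the reconstruction and the bound $\max_i |L_{ii}| \le \sigma_1$ from the inequalities above.

```python
import numpy as np
from scipy.linalg import qr

def pivoted_qlp(A):
    """Sketch of pivoted QLP via two column-pivoted QR factorizations.

    First QR:   A @ P1 = Q1 @ R1.
    Second QR:  R1^* @ P2 = Q2 @ R2.
    Then A = (Q1 @ P2) @ R2^* @ (Q2^* @ P1^T) = Q @ L @ P,
    with Q, P unitary and L lower trapezoidal.
    """
    m, n = A.shape
    Q1, R1, p1 = qr(A, pivoting=True)            # A[:, p1] = Q1 @ R1
    Q2, R2, p2 = qr(R1.conj().T, pivoting=True)  # R1^*[:, p2] = Q2 @ R2
    P1 = np.eye(n)[:, p1]
    P2 = np.eye(m)[:, p2]
    Q = Q1 @ P2
    L = R2.conj().T
    P = Q2.conj().T @ P1.T
    return Q, L, P

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))
Q, L, P = pivoted_qlp(A)
assert np.allclose(Q @ L @ P, A)
# No diagonal element of L can exceed sigma_1 in magnitude.
sigma = np.linalg.svd(A, compute_uv=False)
assert np.abs(np.diag(L)).max() <= sigma[0] + 1e-10
```

Note that SciPy's pivoting maximizes the remaining column norms, which is the same greedy rule that makes the $L$-values track the singular values.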
\subsection{Householder reflectors}
\label{sec:reflector}
Givens rotations are often used to selectively annihilate matrix
elements. Householder reflectors~\cite{TB} of the following form
may be considered their \emph{Hermitian} counterpart:
\begin{equation*}
Q_{i,j} \!\equiv\! \mbox{\footnotesize
$\bmat{1 & \cdots & 0 & \cdots & 0 & \cdots & 0 \;
\\ \vdots & \ddots & \vdots & & \vdots & & \vdots
\\ 0 & \cdots & c & \cdots & \phantom-s & \cdots & 0
\\ \vdots & & \vdots & \ddots & \vdots & & \vdots
\\ 0 & \cdots & \conj{s}& \cdots & -c & \cdots & 0
\\ \vdots & & \vdots & & \vdots & \ddots & \vdots
\\ 0 & \cdots & 0 & \cdots & 0 & \cdots & 1
}
$},
\end{equation*}
where the subscripts indicate the positions of $c=\cos(\theta) \in
\mathbb{R}$ and $s=\sin(\theta) \in \mathbb{C}$ for some angle
$\theta$, so that $c^2 + |s|^2 = 1$. These matrices are unitary, and
$Q_{i,j}^2 = I$ as for any reflector, meaning $Q_{i,j}$ is its own
inverse. We often use the shorthand $Q_{i,j} = \smat{c & \phantom-s
\\[.8ex] \conj{s} & -c}$.
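Such a reflector can be realized numerically. The following Python sketch (one possible construction of ours, not necessarily the routine used in any implementation) computes a real $c$ and complex $s$ with $c^2+|s|^2=1$ so that the reflector maps $(a,b)$ to $(r,0)$:

```python
import numpy as np

def sym_ortho(a, b):
    """Return real c and complex s with c**2 + |s|**2 == 1 such that
    [[c, s], [conj(s), -c]] maps [a, b]^T to [r, 0]^T."""
    a, b = complex(a), complex(b)
    if b == 0:
        return 1.0, 0j, a
    if a == 0:
        return 0.0, 1 + 0j, b          # [[0,1],[1,0]] swaps the entries
    d = np.hypot(abs(a), abs(b))       # sqrt(|a|^2 + |b|^2)
    c = abs(a) / d                     # real "cosine"
    s = (a / abs(a)) * np.conj(b) / d  # complex "sine"
    r = (a / abs(a)) * d               # |r| = sqrt(|a|^2 + |b|^2)
    return c, s, r

c, s, r = sym_ortho(1 + 2j, 3 - 1j)
Q = np.array([[c, s], [np.conj(s), -c]])
assert np.isclose(c**2 + abs(s)**2, 1)   # c real, c^2 + |s|^2 = 1
assert np.allclose(Q @ Q, np.eye(2))     # Q is its own inverse
assert np.allclose(Q @ [1 + 2j, 3 - 1j], [r, 0])
```

One can check by hand that $\conj{s}a - cb = 0$ with these formulas, which is exactly the annihilation property used in the \QR factorization below.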
\bigskip
In the next few sections we extend \MINRES and
\MINRESQLP to solve complex symmetric problems~\eqref{eqn4b}. Thus
we tag the algorithms with ``CS-''. The discussion and results can be
easily adapted to the skew symmetric and skew Hermitian cases, and so
we do not go into details. In fact, the skew Hermitian problems can
be solved by the implementations~\cite{MinresqlpMatlab, MinresqlpF90}
of \MINRES and \MINRESQLP for Hermitian problems. For example, we can
call the \Matlab{} solvers by \texttt{x = minres(i * A, i * b)} and
\texttt{x = minresqlp(i * A, i * b)}, achieving immediate code reuse.
\section{\CSMINRES standalone} \label{sec:csminres-standalone}
\CSMINRES is a natural way of using the complex symmetric Lanczos
process~\eqref{eq:savk}
to solve~\eqref{eqn4b}. For $k < \ell$, if $x_k =
\conj{V}\!_ky_k$ for some vector $y_k$, the associated residual is
\begin{equation} \label{eqn:rk}
r_k \equiv b-Ax_k = b - A \conj{V}\!_k y_k = \beta_1 v_1 - V_{k+1}
\underline{T_k} y_k = V_{k+1} (\beta_1 e_1 - \underline{T_k} y_k).
\end{equation}
To make $r_k$ small, $\beta_1 e_1 - \underline{T_k} y_k$
should be small. At this iteration $k$, \CSMINRES minimizes the
residual subject to $x_k\in\mathop{\mathrm{range}}(\overline{V}_k)$ by choosing
\begin{equation} \label{eqn:LSsubprob}
y_k = \arg\min_{y \in \mathbb{C}^k}
\norm{\underline{T_k} y - \beta_1 e_1}.
\end{equation}
By Theorem~\ref{thm:rankTk}, $\underline{T_k}$ has full column rank, and the
above least-squares problem has a unique solution.
\subsection{QR factorization of $\underline{T_k}$}
\label{sec:qrfac}
We apply an expanding \QR factorization to the
subproblem~\eqref{eqn:LSsubprob} by $Q_0 \equiv 1$ and
\begin{equation} \label{QRfac}
Q_{k,k+1}\!\equiv\!
\bmat{
c_k & \!\!\phantom-s_k
\\ \conj{s}_k & \!-c_k
},
\quad
Q_k \!\equiv\! Q_{k,k+1}
\bmat{Q_{k-1} \\ & \!1}\!,
\quad
Q_k \bmat{\underline{T_k} & \beta_1 e_1}
\!=\! \bmat{ R_k & t_k \\ 0 & \phi_{k}}\!,
\end{equation}
where $c_k$ and $s_k$ form the Householder reflector $Q_{k,k+1}$ that
annihilates $\beta_{k+1}$ in $\underline{T_k}$ to give upper-tridiagonal
$R_k$, with $R_k$ and $t_k$ being unaltered in later iterations. We
can write the last expression in~\eqref{QRfac} in terms of its
elements for further analysis:
\begin{equation} \label{QRfac2}
\bmat{\,R_k\, \\ 0} \equiv \mbox{\small
$\bmat{
\gamma_1 & \delta_2 & \epsilon_3
\\ & \gamma_2^{(2)} & \delta_3^{(2)} & \ddots
\\ & & \ddots & \ddots & \epsilon_k
\\ & & & \ddots & \delta_k^{(2)}
\\ & & & & \gamma_k^{(2)}
\\ & & & & 0}$},
\quad
\bmat{t_k \\ \phi_{k}} \equiv
\bmat{\tau_1 \\ \tau_2 \\ \vdots\\ \vdots\\ \tau_k \\ \phi_{k}}
= \beta_1
\bmat{c_1 \\ \conj{s}_1 c_2 \\ \vdots \\ \vdots
\\ \conj{s}_1 \cdots \conj{s}_{k-1}c_k
\\ \conj{s}_1 \cdots \conj{s}_{k-1}\conj{s}_k}
\end{equation}
(where the superscripts are defined in Section~\ref{sec:notation}).
With $\phi_1 \equiv \beta_1>0$, the full action of $Q_{k,k+1}$ in
\eqref{QRfac}, including its effect on later columns of $T_i$, $k < i
\le \ell$, is described~by
\begin{equation} \label{min7}
\bmat{ c_k & \!\!\!\phantom-s_k
\\ \conj{s}_k & \!\!-c_k}
\bmat{\begin{matrix}
\gamma_k & \delta_{k+1} & 0
\\ \beta_{k+1} & \alpha_{k+1} & \beta_{k+2}
\end{matrix}
& \biggm| &
\begin{matrix} \phi_{k-1} \\ 0 \end{matrix}
}
=
\bmat{\begin{matrix}
\gamma_k^{(2)} & \delta_{k+1}^{(2)} & \epsilon_{k+2}
\\ 0 & \gamma_{k+1} & \delta_{k+2}
\end{matrix}
& \biggm| &
\begin{matrix} \tau_k \\ \phi_{k} \end{matrix}
}.
\end{equation}
Thus for each $j \le k < \ell$ we have $s_j \gamma_j^{(2)} =
\beta_{j+1} > 0$, giving $\gamma_1, \gamma_j^{(2)} \ne 0$, and
therefore each $R_j$ is nonsingular. Also, $\tau_k = \phi_{k-1} c_k$
and $\phi_{k} = \phi_{k-1} \conj{s}_k \ne 0$. Hence
from~\eqref{eqn:rk}--\eqref{QRfac}, we obtain the following short
recurrence relation for the residual norm:
\begin{align}
\label{eq:normrk}
\norm{r_k} = \norm{\underline{T_k} y_k - \beta_1 e_1} = |\phi_k|
\quad\Rightarrow\quad \norm{r_k} = \norm{r_{k-1}} |\conj{s}_k|
= \norm{r_{k-1}} |s_k|,
\end{align}
which shows that the residual norms decrease monotonically and tend to
zero if $Ax = b$ is compatible.
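The expanding \QR factorization and the residual-norm relation can be checked numerically. Below is a self-contained Python sketch (our own toy code with generic reflector formulas, not the algorithm's implementation): it reduces a random $(k+1)\times k$ complex symmetric tridiagonal $\underline{T_k}$ with successive $2\times 2$ reflectors and confirms that $|\phi_k|$ equals the true least-squares residual norm.

```python
import numpy as np

def reflect(a, b):
    """Real c, complex s with c^2 + |s|^2 = 1 and conj(s)*a - c*b = 0."""
    d = np.hypot(abs(a), abs(b))
    if d == 0:
        return 1.0, 0j
    if abs(a) == 0:
        return 0.0, 1 + 0j
    return abs(a) / d, (a / abs(a)) * np.conj(b) / d

def qr_residual(T, beta1):
    """Reduce (k+1) x k tridiagonal T and rhs beta1*e1 with 2x2
    reflectors; the magnitude of the last rhs entry is |phi_k|."""
    kp1, k = T.shape
    R = T.astype(complex).copy()
    t = np.zeros(kp1, dtype=complex)
    t[0] = beta1
    for j in range(k):
        c, s = reflect(R[j, j], R[j + 1, j])
        G = np.array([[c, s], [np.conj(s), -c]])
        R[j:j + 2, j:] = G @ R[j:j + 2, j:]
        t[j:j + 2] = G @ t[j:j + 2]        # phi_j = conj(s_j) * phi_{j-1}
    return abs(t[k])

rng = np.random.default_rng(1)
k, beta1 = 5, 0.7
alpha = rng.standard_normal(k) + 1j * rng.standard_normal(k)
off = np.abs(rng.standard_normal(k)) + 0.1       # positive beta_{j+1}
T = np.zeros((k + 1, k), dtype=complex)
for j in range(k):
    T[j, j] = alpha[j]
    T[j + 1, j] = off[j]
    if j + 1 < k:
        T[j, j + 1] = off[j]
e1 = np.zeros(k + 1, dtype=complex)
e1[0] = beta1
y, *_ = np.linalg.lstsq(T, e1, rcond=None)
assert np.isclose(qr_residual(T, beta1), np.linalg.norm(T @ y - e1))
```

Since all the off-diagonal $\beta$'s here are positive, every $|s_j|>0$ and the computed $|\phi_k|$ is strictly positive, matching the recurrence $\norm{r_k} = \norm{r_{k-1}}|s_k|$.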
\subsection{Solving the subproblem}
When $k < \ell$,
a solution of~\eqref{eqn:LSsubprob} satisfies $R_k y_k = t_k$.
Instead of solving for $y_k$, \CSMINRES solves $R_k^T\! D_k^T\! = V_k^*$
by forward substitution, obtaining the last column $d_k$ of $D_k$ at
iteration $k$. This basis generation process can be summarized as
\begin{align}
\left\{
\begin{array}{l}
d_1 = \conj{v}_1/\gamma_1,\quad d_2 =
(\conj{v}_2-\delta_2d_1)/\gamma_2^{(2)}, \\[1ex] d_k =
({\conj{v}_k - \delta_k^{(2)} d_{k-1} - \epsilon_k d_{k-2}}) /
{\gamma_k^{(2)}}.
\end{array}
\right.
\label{eq:rdeqv}
\end{align}
At the same time, \CSMINRES updates $x_k$ via $x_0 \equiv 0$ and
\begin{equation} \label{minresxk}
x_k = \conj{V}\!_k y_k = D_k R_k y_k = D_k t_k
= x_{k-1} + \tau_k d_k.
\end{equation}
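As a sanity check on this update, the following small Python sketch (illustrative only, with random data standing in for the Lanczos quantities) verifies that $D_k$ defined by $R_k^T\! D_k^T\! = V_k^*$ satisfies $D_k t_k = \conj{V}\!_k y_k$ with $R_k y_k = t_k$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 7, 4
Vk = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
Rk = np.triu(rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k)))
Rk += 2 * np.eye(k)                     # keep this toy R_k safely nonsingular
tk = rng.standard_normal(k) + 1j * rng.standard_normal(k)

# R_k^T D_k^T = V_k^*  (done column by column by forward substitution
# in the method; a dense solve suffices for this illustration)
Dk = np.linalg.solve(Rk.T, Vk.conj().T).T   # equals conj(V_k) @ inv(R_k)
yk = np.linalg.solve(Rk, tk)                # R_k y_k = t_k
assert np.allclose(Dk @ tk, Vk.conj() @ yk) # x_k = D_k t_k = conj(V_k) y_k
```

In the algorithm, $R_k$ is upper tridiagonal, so each new column $d_k$ depends only on $d_{k-1}$, $d_{k-2}$, and $\conj{v}_k$, which is what makes the recurrence short.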
\subsection{Termination}
When $k=\ell$, we can form $T_\ell$, but nothing else expands. In
place of~\eqref{eqn:rk} and~\eqref{QRfac} we have \( r_\ell = V_\ell
(\beta_1 e_1 - T_\ell y_\ell) \) and \( Q_{\ell-1} \bmat{T_\ell &
\beta_1 e_1} = \bmat{R_\ell & t_\ell} \). It is natural to solve for
$y_\ell$ in the subproblem
\begin{align} \label{eqn:LSsubprob-ell}
\min_{y_\ell \in \mathbb{C}^{\ell}}
\norm{T_\ell y_\ell - \beta_1 e_1}
\quad \equiv \quad
\min_{y_\ell \in \mathbb{C}^{\ell}}
\norm{R_\ell y_\ell - t_\ell}.
\end{align}
Two cases must be considered:
\begin{enumerate}
\item If $T_\ell$ is nonsingular, $R_\ell y_\ell = t_\ell$ has a
unique solution. Since $A\conj{V}\!_\ell y_\ell = V_\ell T_\ell
y_\ell = b$, the problem $Ax=b$ is compatible and solved by $x_\ell
= \conj{V}\!_\ell y_\ell $ with residual $r_\ell=0$.
Theorem~\ref{theorem-singular-compatible} proves that
$x_\ell = x^\dagger$, assuring us that \CSMINRES is a useful solver
for compatible linear systems even if $A$ is singular.
\item If $T_\ell$ is singular, $A$ and $R_\ell$ are singular
($R_{\ell\ell}=0$), and both $Ax = b$ and $R_\ell y_\ell = t_\ell$
are incompatible. The optimal residual vector is unique, but
infinitely many solutions give that residual. \CSMINRES sets the
last element of $y_\ell$ to be zero. The final point and residual
stay as $x_{\ell-1}$ and $r_{\ell-1}$ with $\norm{r_{\ell-1}} =
|\phi_{\ell-1}| = \beta_1 |s_1 |\cdots| s_{\ell-1}| > 0$.
Theorem~\ref{theorem-singular-incompatible} proves that
$x_{\ell-1}$ is a \LS solution of $Ax \approx b$ (but not
necessarily $x^\dagger$).
\end{enumerate}
\smallskip
\begin{theorem} \label{theorem-singular-compatible}
If $b \in \mathop{\mathrm{range}}(A)$, the final \CSMINRES point $x_\ell=x^\dagger$
and $r_\ell=0$.
\end{theorem}
\begin{proof}
If $b \in \mathop{\mathrm{range}}(A)$, the Lanczos process gives $A\conj{V}\!_\ell =
V_\ell T_\ell$ with nonsingular $T_\ell$, and \CSMINRES terminates
with $Ax_\ell=b$ and $x_\ell = \conj{V}\!_\ell y_\ell = A^*
q=\conj{A}q$, where $q = V_\ell \conj{T}\!_\ell^{-1} y_\ell$. If some
other point $\skew{2.8}\widehat x$ satisfies $A\skew{2.8}\widehat x=b$, let $p = \skew{2.8}\widehat x - x_\ell$.
We have $Ap=0$ and $x_\ell^* p = q^* Ap = 0$. Hence $\norm{\skew{2.8}\widehat x}^2 =
\norm{x_\ell + p}^2 = \norm{x_\ell}^2 + 2\Re(x_\ell^* p) + \norm{p}^2 \ge
\norm{x_\ell}^2$. Thus $x_\ell=x^\dagger$.
Since $\beta_{\ell+1}=0$, we have $s_\ell=0$ in~\eqref{min7}. By~\eqref{eq:normrk}, $\norm{r_\ell}=0$ and $r_\ell = b-Ax_\ell = 0$.
\end{proof}
\smallskip
\begin{theorem} \label{theorem-singular-incompatible}
If $b \notin \mathop{\mathrm{range}}(A)$, then $\norm{\conj{A}r_{\ell-1}} = 0$, and the
\CSMINRES point $x_{\ell-1}$ is an \LS solution.
\end{theorem}
\begin{proof}
Since $b \notin \mathop{\mathrm{range}}(A)$, $T_\ell$ is singular and $R_{\ell\ell} =
\gamma_\ell = 0$. By Lemma~\ref{minreslemma2},
$A^*(Ax_{\ell-1}-b) = -\conj{A}r_{\ell-1} = -\norm{r_{\ell-1}} \gamma_\ell
v_\ell = 0$. Thus $x_{\ell-1}$ is an \LS solution.
\end{proof}
\section{\CSMINRESQLP standalone} \label{sec:CSMINRESQLP}
In this section we develop \CSMINRESQLP for solving ill-conditioned or
singular complex symmetric systems. The Lanczos framework is the same as in
\CSMINRES, and QR factorization is applied to $\underline{T_k}$ in
subproblem~\eqref{eqn:LSsubprob} for all $k < \ell$; see
Section~\ref{sec:qrfac}. By Theorem~\ref{thm:rankTk} and
Property~\ref{prop:fullrank} in Section~\ref{sec:Lanproperties},
$\rank(\underline{T_k}) = k$ for all $k<\ell$ and $\rank(T_\ell) \ge \ell-1$.
\CSMINRESQLP handles $T_\ell$ in~\eqref{eqn:LSsubprob-ell} with extra
care to \emph{constructively} reveal $\rank(T_\ell)$
via a \QLP decomposition, so it can compute the minimum-length
solution of the following subproblem~instead
of~\eqref{eqn:LSsubprob-ell}:
\begin{align} \label{eqn:LSsubprob-ell-2}
& \min \norm{y_\ell}_2 \quad \text{s.t.} \quad y_\ell \in
\arg\min_{y_\ell \in \mathbb{C}^{\ell}}
\norm{T_\ell y_\ell - \beta_1 e_1}.
\end{align}
Thus \CSMINRESQLP also applies the \QLP decomposition on $\underline{T_k}$
in~\eqref{eqn:LSsubprob} for all $k < \ell$.
\subsection{QLP factorization of $\underline{T_k}$}
In \CSMINRESQLP, the \QR factorization~\eqref{QRfac} of $\underline{T_k}$ is
followed by an \LQ factorization of $R_k$:
\begin{equation} \label{qlpeqn3a}
Q_k \underline{T_k} = \bmat{R_k \\ 0}, \qquad
R_k P_k = L_k, \qquad \textrm{so that}\quad
Q_k \underline{T_k} P_k = \bmat{L_k \\ 0},
\end{equation}
where $Q_k$ and $P_k$ are orthogonal, $R_k$ is upper tridiagonal, and
$L_k$ is lower tridiagonal. When $k < \ell$, both $R_k$ and $L_k$ are
nonsingular. The \QLP decomposition of each $\underline{T_k}$ is performed
without permutations, and the left and right reflectors are
interleaved~\cite{S99} in order to ensure inexpensive updating of the
factors as $k$ increases. The desired rank-revealing
properties~\eqref{qlpeqn1a} are retained in the last iteration when
$k=\ell$.
We elaborate on interleaved QLP here. As in \CSMINRES, $Q_k$
in~\eqref{qlpeqn3a} is a product of Householder reflectors;
see~\eqref{QRfac} and~\eqref{min7}. $P_k$ involves a product of
\emph{pairs} of Householder reflectors:
\begin{equation*}
Q_k = Q_{k,k+1}\ \cdots \ Q_{3,4} \ \ Q_{2,3} \ \ Q_{1,2},\qquad
P_k = P_{1,2} \ \ P_{1,3} P_{2,3}
\ \cdots \ \ P_{k-2,k} P_{k-1,k}.
\end{equation*}
For \CSMINRESQLP to be efficient, in the $k$th iteration ($k\ge 3$)
the application of the left reflector $Q_{k,k+1}$ is followed
immediately by the right reflectors $P_{k-2,k}, P_{k-1,k}$, so that
only the last $3 \times 3$ bottom right submatrix of $\underline{T_k}$ is changed.
These ideas can be understood more easily from
the following compact form, which represents
the actions of right reflectors on $R_k$ obtained from~\eqref{min7}:
\begin{align}
\label{qlpRightRef}
&\hspace*{13pt}
\bmat{\gamma_{k-2}^{(5)} & & \epsilon_k
\\ \vartheta_{k-1} & \gamma_{k-1}^{(4)} & \delta_k^{(2)}
\\ & & \gamma_{k}^{(2)}
}
\bmat{c_{k2} & & \!\!\!\phantom-s_{k2}
\\ & 1 &
\\ \conj{s}_{k2} & & \!\!-c_{k2}
}
\bmat{ 1
\\ & c_{k3} & \!\!\!\phantom-s_{k3}
\\ & \conj{s}_{k3} & \!\!-c_{k3}
}
\nonumber
\\ &=
\bmat{\gamma_{k-2}^{(6)}
\\ \vartheta_{k-1}^{(2)} & \gamma_{k-1}^{(4)} & \delta_k^{(3)}
\\ \eta_{k} & & \gamma_{k}^{(3)}
}
\bmat{ 1
\\ & c_{k3} & \!\!\!\phantom-s_{k3}
\\ & \conj{s}_{k3} & \!\!-c_{k3}
}
= \bmat{\gamma_{k-2}^{(6)}
\\ \vartheta_{k-1}^{(2)} & \gamma_{k-1}^{(5)} &
\\ \eta_{k} & \vartheta_{k} & \gamma_{k}^{(4)}
}.
\end{align}
\subsection{Solving the subproblem}
With $y_k=P_k u_k$, subproblem~\eqref{eqn:LSsubprob} after \QLP
factorization of $\underline{T_k}$ becomes
\begin{equation} \label{Lsubproblem}
u_k = \arg \min_{u \in \mathbb{C}^k}
\,\normm{\bmat{L_k \\ 0} u - \bmat{t_k \\ \phi_{k}}},
\end{equation}
where $t_k$ and $\phi_{k}$ are as in~\eqref{QRfac}. At the
\textit{start} of iteration $k$, the first $k\!-\!3$ elements of
$u_k$, denoted by $\mu_j$ for $j \le k\!-\!3$, are known from previous
iterations.
We need to solve for only the last three components of $u_k$ from the
bottom three equations of $L_k u_k = t_k$:
\begin{equation}
\bmat{\gamma_{k-2}^{(6)}
\\ \vartheta_{k-1}^{(2)} & \gamma_{k-1}^{(5)} &
\\ \eta_k & \vartheta_k & \gamma_k^{(4)}
}
\bmat{\mu_{k-2}^{(3)}
\\ \mu_{k-1}^{(2)}
\\ \mu_k
}
= \bmat{\tau_{k-2}^{(2)}
\\ \tau_{k-1}^{(2)}
\\ \tau_k
}
\equiv
\bmat{\tau_{k-2} - \eta_{k-2} \mu_{k-4}^{(4)}
- \vartheta_{k-2} \mu_{k-3}^{(3)}
\\ \tau_{k-1} - \eta_{k-1} \mu_{k-3}^{(3)}
\\ \tau_k
}.
\end{equation}
When $k < \ell$, $\underline{T_k}$ has full column rank, and hence $L_k$ and
the above $3\times 3$ lower triangular matrix are nonsingular.
\CSMINRESQLP obtains the same solution as \CSMINRES,
but by a different process (and with different rounding errors).
The \CSMINRESQLP estimate of $x$ is \( x_k =
\conj{V}\!_ky_k = \conj{V}\!_kP_ku_k = W_ku_k, \) with theoretically
orthonormal $W_k\equiv \conj{V}\!_kP_k$, where
\begin{align}
W_k
&= \bmat{ \conj{V}\!_{k-1} P_{k-1} & \conj{v}_k } P_{k-2,k} P_{k-1,k}
\label{eq:wvp}
\\ &= \bmat{ W_{k-3}^{(4)} & w_{k-2}^{(3)} & w_{k-1}^{(2)} & \conj{v}_k}
P_{k-2,k} P_{k-1,k} \nonumber
\\ &\equiv \bmat{ W_{k-3}^{(4)} & w_{k-2}^{(4)} & w_{k-1}^{(3)} & w_k^{(2)}}.
\nonumber
\end{align}
Finally, we update $x_{k-2}$ and compute $x_k$ by short-recurrence
orthogonal steps (using only the last three columns
of $W_k$):
\begin{align}
x_{k-2}^{(2)} &= x_{k-3}^{(2)} + w_{k-2}^{(4)} \mu_{k-2}^{(3)}
\text{, where } x_{k-3}^{(2)} \equiv W_{k-3}^{(4)} u_{k-3}^{(3)},
\label{qlpeqnsol1}
\\ x_k &= x_{k-2}^{(2)} + w_{k-1}^{(3)} \mu_{k-1}^{(2)}
+ w_k ^{(2)} \mu_k. \label{qlpeqnsol2}
\end{align}
\subsection{Termination} \label{sec:term}
When $k=\ell$ and $y_\ell = P_\ell u_\ell$, the final
subproblem~\eqref{eqn:LSsubprob-ell-2} becomes
\begin{align} \label{eqn:LSsubprob-ell-3}
& \min \norm{u_\ell}_2 \quad \text{s.t.} \quad u_\ell \in
\arg\min_{u \in \mathbb{C}^{\ell}} \norm{L_\ell u -
t_\ell}.
\end{align}
$Q_{\ell,\ell+1}$ is neither formed nor applied (see~\eqref{QRfac}
and~\eqref{min7}), and the \QR factorization stops. To obtain the
minimum-length solution, we still need to apply
$P_{\ell-2,\ell}P_{\ell-1,\ell}$ on the right of $R_\ell$ and
$\overline{V}_\ell$ in~\eqref{qlpRightRef} and~\eqref{eq:wvp},
respectively. If $b \in \mathop{\mathrm{range}}(A)$, then $L_\ell$ is nonsingular, and
the process in the previous subsection applies. If $b \not\in
\mathop{\mathrm{range}}(A)$, the last row and column of $L_\ell$ are zero, that is,
$L_\ell = \smat{L_{\ell-1} \\ 0 & 0}$ (see~\eqref{qlpeqn3a}), and we
need to define $u_\ell \equiv \smat{u_{\ell-1} \\ 0}$ and solve only
the last two equations of $L_{\ell-1} u_{\ell-1} = t_{\ell-1}$:
\begin{equation} \label{eq:Lut}
\bmat{\gamma_{\ell-2}^{(6)}
\\ \vartheta_{\ell-1}^{(2)} & \gamma_{\ell-1}^{(5)}
}
\bmat{\mu_{\ell-2}^{(3)}
\\ \mu_{\ell-1}^{(2)}
}
= \bmat{\tau^{(2)}_{\ell-2}
\\ \tau^{(2)}_{\ell-1}
}.
\end{equation}
Recurrence~\eqref{qlpeqnsol2} simplifies to $x_\ell =
x_{\ell-2}^{(2)} + w_{\ell-1}^{(3)} \mu_{\ell-1}^{(2)}$. The following theorem
proves that \CSMINRESQLP yields $x^{\dagger}$ in this last iteration.
\smallskip
\begin{theorem} \label{theorem-MINRES-QLP}
In \CSMINRESQLP, $x_\ell = x^\dagger$.
\end{theorem}
\smallskip
\begin{proof}
When $b \in \mathop{\mathrm{range}}(A)$, the proof is the same as that for
Theorem~\ref{theorem-singular-compatible}.
When $b \notin \mathop{\mathrm{range}}(A)$, among all $u = [u_{\ell-1}^T\! \ \,\mu_\ell]^T\!
\in \mathbb{C}^\ell$ that solve~\eqref{Lsubproblem}, \CSMINRESQLP
returns the minimum-length \LS solution $u_\ell = [u_{\ell-1}^T\! \ \,0]^T\!$
by the construction in~\eqref{eq:Lut}. For any
$x\in\mathop{\mathrm{range}}(W_{\ell})= \mathop{\mathrm{range}}(\overline{V}_\ell)
\subseteq \mathop{\mathrm{range}}(\overline{A})=\mathop{\mathrm{range}}({A^*})$ by~\eqref{eq:wvp} and
$A\overline{V}_\ell=V_\ell T_\ell$,
\begin{align*}
\norm{Ax-b} &= \norm{AW_\ell u - b} = \norm{A\conj{V}\!_\ell P_\ell u - b}
= \norm{V_\ell T_\ell P_\ell u - \beta_1V_\ell e_1}
= \norm{T_\ell P_\ell u - \beta_1 e_1}
\\ &=\normm{Q_{\ell-1} T_\ell P_\ell u - \bmat{t_{\ell-1} \\ \phi_{\ell-1}}} =
\normm{\bmat{L_{\ell-1} & 0 \\ 0 & 0} u -\bmat{t_{\ell-1} \\ \phi_{\ell-1}}}.
\end{align*}
Since $L_{\ell-1}$ is nonsingular, $ |\phi_{\ell-1}| = \min
\norm{Ax-b}$ can be achieved by $x_\ell = W_\ell u_\ell = W_{\ell-1}
u_{\ell-1}$ and $\norm{x_\ell} = \norm{W_{\ell-1} u_{\ell-1}} =
\norm{u_{\ell-1}}$. Thus $x_\ell$ is the minimum-length
\LS solution of $Ax \approx b$,
that is, $x_\ell = \arg\min\{\norm{x} \mid A^*A x=A^* b, \; x \in
\mathop{\mathrm{range}}({A^*})\}$. Likewise $y_\ell = P_\ell u_\ell$
is the minimum-length \LS solution of $\norm{T_\ell y - \beta_1 e_1}$, and
so $y_\ell \in \mathop{\mathrm{range}}(T^*_\ell)$, that is, $y_\ell = T^*_\ell z =
\conj{T}\!_\ell z$ for some $z$. Thus $x_\ell = \conj{V}\!_\ell y_\ell =
\conj{V}\!_\ell \conj{T}\!_\ell z = \conj{A} V_\ell z = A^* V_\ell z\in\!
\mathop{\mathrm{range}}(A^*)$. We know that $x^\dagger = \arg\min\{\norm{x} \mid A^* A
x=A^* b, \; x \in \mathbb{C}^n\}$ is unique and $x^\dagger \in
\mathop{\mathrm{range}}(A^*)$. Since $x_\ell \in \mathop{\mathrm{range}}(A^*)$, we must have $x_\ell =
x^\dagger$.
\end{proof}
\section{Transferring \CSMINRES to \CSMINRESQLP}
\label{sec:transfer}
\CSMINRES and \CSMINRESQLP behave
similarly on well-conditioned systems. However,
compared with \CSMINRES, \CSMINRESQLP requires one more vector of
storage, and each iteration needs 4 more axpy operations ($y \leftarrow \alpha x
+ y$) and 3 more vector scalings ($x \leftarrow \alpha x$). It is
therefore desirable to invoke \CSMINRESQLP from \CSMINRES only if
$A$ is ill-conditioned or singular. The key idea is to transfer
\CSMINRES to \CSMINRESQLP at an iteration $k < \ell$ when $\underline{T_k}$
has full column rank and is still well-conditioned. At such an
iteration, the \CSMINRES point $x_k^M$ and \CSMINRESQLP point $x_k$
are the same, so from~\eqref{minresxk}, \eqref{qlpeqnsol2},
and~\eqref{Lsubproblem}: $ x_k^M = x_k \Longleftrightarrow D_k t_k =
W_k L_k^{-1} t_k$. From \eqref{eq:rdeqv}, \eqref{qlpeqn3a},
and~\eqref{eq:wvp},
\begin{equation} \label{transfereqn1}
D_k L_k=(\conj{V}\!_kR_k^{-1})(R_kP_k)=\conj{V}\!_kP_k=W_k.
\end{equation}
The vertical arrow in Figure~\ref{fig:bases} represents this
process. In particular, we transfer only the last three \CSMINRES
basis vectors in $D_k$ to the last three \CSMINRESQLP basis vectors in
$W_k$:
\begin{equation} \label{transfereqn2}
\bmat{w_{k-2} & w_{k-1} & w_k}
= \bmat{d_{k-2} & d_{k-1} & d_k} \bmat{\gamma_{k-2}^{(6)}
\\ \vartheta_{k-1}^{(2)} & \gamma_{k-1}^{(5)} &
\\ \eta_{k} & \vartheta_{k} & \gamma_{k}^{(4)}
}.
\end{equation}
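The identities $D_k L_k = W_k$ and the $3\times 3$ transfer can be verified on random data. The Python sketch below (illustrative only; it builds $P_k$ from a generic LQ factorization rather than from interleaved reflectors) checks that, because $L_k$ is lower tridiagonal, the last three columns of $W_k$ need only the last three columns of $D_k$ and the bottom-right $3\times 3$ block of $L_k$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 8, 6
V = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))
# Upper tridiagonal R_k, as produced by the QR factorization
R = np.triu(np.tril(rng.standard_normal((k, k))
                    + 1j * rng.standard_normal((k, k)) + 2 * np.eye(k), 2))
# LQ factorization R = L Qhat via QR of R^*; then P = Qhat gives R P = L
Qhat, Rt = np.linalg.qr(R.conj().T)
L, P = Rt.conj().T, Qhat
assert np.allclose(R @ P, L)            # L is lower tridiagonal here

D = V.conj() @ np.linalg.inv(R)         # D_k = conj(V_k) R_k^{-1}
W = V.conj() @ P                        # W_k = conj(V_k) P_k
assert np.allclose(D @ L, W)            # the identity (transfereqn1)
# Only a 3x3 corner of L_k enters the basis transfer (transfereqn2)
assert np.allclose(W[:, -3:], D[:, -3:] @ L[-3:, -3:])
```

The banded structure is what makes the transfer cheap: only the last three $d$-vectors ever need to be converted to $w$-vectors.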
Furthermore, we need to generate the \CSMINRESQLP point
$\smash{x_{k-3}^{(2)}}$ in~\eqref{qlpeqnsol1} from the \CSMINRES point
$x_{k-1}^M$ by rearranging~\eqref{qlpeqnsol2}:
\begin{equation}
x_{k-3}^{(2)} = x_{k-1}^M - w_{k-2}^{(3)} \mu_{k-2}^{(2)}
- w_{k-1}^{(2)} \mu_{k-1}.
\end{equation}
Then the \CSMINRESQLP points $\smash{x_{k-2}^{(2)}}$ and $x_k$ can be
computed by~\eqref{qlpeqnsol1} and~\eqref{qlpeqnsol2}.
From~\eqref{transfereqn1} and~\eqref{transfereqn2} it is clear that we
still need to perform the right transformation $R_k P_k = L_k$ in the
\CSMINRES phase and keep the last $3 \times 3$ bottom right submatrix
of $L_k$ for each $k$, so that we are ready to transfer to \CSMINRESQLP
when necessary. We then obtain a short recurrence for $\norm{x_k}$
(see Section~\ref{subsectsolnorm}), and for this computation we save
flops relative to the standalone \CSMINRES algorithm, which computes
$\norm{x_k}$ directly in the NRBE condition associated with
$\norm{r_k}$ in Table~\ref{tab-stopping-conditions}.
In the implementation of \CSMINRESQLP, the iterates transfer from
\CSMINRES to \CSMINRESQLP when an estimate of the condition number of
$T_k$ (see \eqref{cond2AQLP}) exceeds an input parameter
$\mathit{trancond}$. Thus, $\mathit{trancond} > 1/\varepsilon$ leads
to \CSMINRES iterates throughout (that is, \CSMINRES standalone),
while $\mathit{trancond} = 1$ generates \CSMINRESQLP iterates from the
start (that is, \CSMINRESQLP standalone).
\begin{figure}
\begin{center}
\vspace{3ex}
\includegraphics[width=1.8in]{basis.pdf}
\vspace{3ex}
\end{center}
\caption{Changes of basis vectors within and between the two phases
\CSMINRES and \CSMINRESQLP; see equations~\eqref{eq:rdeqv},
\eqref{eq:wvp}, and~\eqref{transfereqn2} for details.}
\label{fig:bases}
\end{figure}
\section{Preconditioned \CSMINRES and \CSMINRESQLP} \label{sec:pcsminres}
Well-constructed two-sided preconditioners can preserve problem
symmetry and substantially reduce the number of iterations for
nonsingular problems. For singular compatible problems, we can still
solve the problems faster but generally obtain \LS solutions
that are not of minimum length. This is not a shortcoming of the
algorithms but of the way two-sided preconditioning is set up for
singular problems. For incompatible systems (which are necessarily
singular), preconditioning alters the ``least squares'' norm. To
avoid this difficulty, we could work with larger equivalent systems
that are compatible (see approaches
in~\cite[Section~7.3]{CPS11}), or we could apply a right
preconditioner $M$, preferably one such that $AM$ is complex symmetric so
that our algorithms are directly applicable. For example, if $M$ is
nonsingular (complex) symmetric and $A$ and $M$ commute, then $AMy
\approx b$ is a complex symmetric problem with $y \equiv M^{-1} x$.
This approach is efficient and straightforward. We devote the rest of
this section to deriving a two-sided preconditioning method.
We use a symmetric positive definite or a \emph{nonsingular} complex
symmetric preconditioner $M$. For such an $M$, it is known that the
Cholesky factorization exists, that is, $M = CC^T$ for some lower
triangular matrix~$C$, which is real if $M$ is real, or complex if $M$
is complex. We may employ commonly used construction techniques for
preconditioners, such as diagonal preconditioning and incomplete Cholesky
factorization, if the nonzero entries of $A$ are accessible. It may
seem unnatural to use a symmetric positive definite preconditioner for
a complex symmetric problem. However, if available, its application
may be less expensive than that of a complex symmetric preconditioner.
We denote the square root of $M$ by $M^{\frac12}$. It is known that a
complex symmetric root always exists for a nonsingular complex
symmetric $M$ even though it may not be unique;
see~\cite[Theorems~7.1, 7.2, and~7.3]{H08} or~\cite[Section
6.4]{HJ91}.
Preconditioned \CSMINRES (or \CSMINRESQLP) applies the corresponding
unpreconditioned algorithm to the equivalent system $\tilde{A} \tilde{x} = \tilde{b}$, where $\tilde{A}
\equiv M^{-\frac12} A M^{-\frac12}$, $\tilde{b} \equiv M^{-\frac12}
b$, and $x = M^{-\frac12}\tilde{x}$. Implicitly, we are solving an
equivalent complex symmetric system $C^{-1} AC^{-T} y = C^{-1} b$, where
$C^T\! x = y$. In practice, we work with $M$ itself (solving the linear
system in~(\ref{pminresd4})). For analysis, we can assume $C =
M^{\frac12}$ for convenience.
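The change of variables can be illustrated with direct solves standing in for the iterative method. In the Python sketch below (illustrative only), a real SPD $M=CC^T$ preconditions a complex symmetric system; solving $C^{-1}AC^{-T}y = C^{-1}b$ and setting $x = C^{-T}y$ recovers the solution of $Ax=b$, and the preconditioned matrix stays complex symmetric:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B + B.T                                  # complex symmetric, A^T = A
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

G = rng.standard_normal((n, n))
M = G @ G.T + n * np.eye(n)                  # real SPD preconditioner
C = np.linalg.cholesky(M)                    # M = C C^T

Atil = np.linalg.solve(C, np.linalg.solve(C, A).T).T  # C^{-1} A C^{-T}
btil = np.linalg.solve(C, b)                          # C^{-1} b
assert np.allclose(Atil, Atil.T)             # still complex symmetric

y = np.linalg.solve(Atil, btil)              # stand-in for the iteration
x = np.linalg.solve(C.T, y)                  # x = C^{-T} y
assert np.allclose(A @ x, b)
```

In practice one never forms $C^{-1}AC^{-T}$; the recurrences below work with $M$ itself through the solves $M\conj{q}_k = z_k$.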
An effective preconditioner for \CSMINRES or \CSMINRESQLP is one such
that~$\tilde{A}$ has a more clustered eigenspectrum and is better
conditioned, and such that linear systems involving~$M$ are
inexpensive to solve.
\subsection{Preconditioned Saunders process}
Let $\conj{V}_k$ denote the Saunders vectors of the $k$th extended Krylov subspace
generated by $\tilde{A}$ and $\tilde{b}$. With $v_0
= 0$ and $\beta_1 v_1 = \tilde{b}$, for $k=1,2,\ldots$ we define
\begin{equation} \label{pminresd4}
z_k = \beta_k M^{ \frac12} v_k, \qquad
q_k = \beta_k M^{-\frac12} \conj{v}_k,
\qquad \text{so that} \quad M \conj{q}_k =z_k.
\end{equation}
Then
\(
\beta_k = \norm{\beta_k v_k}
= \sqrt{ q_k^T\! z_k },
\)
and the Saunders iteration is
\begin{align*}
p_k &= \tilde{A} \conj{v}_k = M^{-\frac12} \! A M^{-\frac12} \conj{v}_k
= M^{-\frac12} \! A q_k / {\beta_k},
\\ \alpha_k &= v_k^* p_k = q_k^T\! A q_k / {\beta_k^2},
\\ \beta_{k+1}v_{k+1} &= M^{-\frac12}\! A M^{-\frac12} \conj{v}_k
-\alpha_k v_k - \beta_k v_{k-1}.
\end{align*}
Multiplying the last equation by $M^{\frac12}$, we get
\begin{align*}
z_{k+1} = \beta_{k+1} M^{\frac12} v_{k+1}
& = A M^{-\frac12} \conj{v}_k - \alpha_k M^{\frac12} v_k
- \beta_k M^{\frac12} v_{k-1}
\\ &= \frac{1}{\beta_k} A q_k - \frac{\alpha_k}{\beta_k} z_k
- \frac{\beta_k}{\beta_{k-1}} z_{k-1}.
\end{align*}
The last expression involving consecutive $z_j$'s replaces the
three-term recurrence in the $v_j$'s. In addition, we need to solve the
linear system $M \conj{q}_k = z_k$ in~\eqref{pminresd4} at each iteration.
\subsection{Preconditioned \CSMINRES}
From~\eqref{minresxk} and~\eqref{eq:rdeqv} we have the following
recurrence for the $k$th column of $D_k = \conj{V}\!_k R_k^{-1}$ and
$\tilde{x}_k$:
\[
d_k = \bigl( \conj{v}_k - \delta_k^{(2)} d_{k-1}-\epsilon_k d_{k-2} \bigr)
/ \gamma_k^{(2)},
\qquad
\tilde{x}_k = \tilde{x}_{k-1} + \tau_k^{(2)} d_k.
\]
Multiplying the above two equations by $M^{-\frac12}$ on the left and
defining $\tilde{d}_k = M^{-\frac12}d_k$, we can update the solution
of our original problem by
\[
\tilde{d}_k = \Bigl( \frac{1}{\beta_k} q_k
- \delta_k^{(2)} \tilde{d}_{k-1}
- \epsilon_k \tilde{d}_{k-2} \Bigr)
\!\bigm/\! \gamma_k^{(2)},
\qquad
x_k = M^{-\frac12} \tilde{x}_k
= x_{k-1} + \tau_k ^{(2)} \tilde{d}_k.
\]
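In code, the update above keeps only the two most recent directions. The following sketch is hypothetical: the scalar sequences $\beta_k$, $\delta_k^{(2)}$, $\epsilon_k$, $\gamma_k^{(2)}$, $\tau_k^{(2)}$ are assumed supplied (they come from the QR factorization of the tridiagonal matrix, not computed here).

```python
import numpy as np

def update_x(qs, betas, deltas2, epss, gammas2, taus2, x0):
    """Short-recurrence solution update for preconditioned MINRES.
    Only the two previous directions d_{k-1}, d_{k-2} are stored; all
    scalar sequences are hypothetical inputs."""
    d1 = np.zeros_like(x0, dtype=float)   # \tilde{d}_{k-1}
    d2 = np.zeros_like(x0, dtype=float)   # \tilde{d}_{k-2}
    x = x0.astype(float).copy()
    for q, b, d, e, g, t in zip(qs, betas, deltas2, epss, gammas2, taus2):
        d_new = (q / b - d * d1 - e * d2) / g   # \tilde{d}_k
        x = x + t * d_new                       # x_k = x_{k-1} + tau_k^{(2)} \tilde{d}_k
        d2, d1 = d1, d_new
    return x
```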
\subsection{Preconditioned \CSMINRESQLP}
\label{secPMINRESQLP}
A preconditioned \red{\CSMINRESQLP} can be derived similarly. The
additional work is to apply right reflectors $P_k$ to $R_k$, and the
new subproblem bases are $W_k \equiv \conj{V}\!_k P_k$, with
$\tilde{x}_k = W_k u_k$. Multiplying the new basis and solution
estimate by $M^{-\frac12}$ on the left, we obtain
\begin{align*}
\widetilde{W}_k &\equiv M^{-\frac12} W_k = M^{-\frac12} \conj{V}\!_k P_k,
\\ x_k & = M^{-\frac12} \tilde{x}_k
= M^{-\frac12} W_k {u}_k
= \widetilde{W}_k {u}_k
= x_{k-2}^{(2)} +
\mu_{k-1}^{(2)} \tilde{w}_{k-1}^{(3)} +
\mu_k \tilde{w}_k^{(2)}.
\end{align*}
\begin{comment}
Algorithm~\ref{pminresqlpalgo} lists all steps. Note that
$\tilde{w}_k$ is written as $w_k$ for all relevant $k$. Also, the
output $x$ solves $Ax \approx b$ but the other outputs are associated
with $\tilde{A}\tilde{x} \approx \tilde{b}$.
\paragraph{Remark}
The requirement of positive definite preconditioners $M$ in \CSMINRES
and \CSMINRESQLP may seem unnatural for a problem with indefinite $A$
because we cannot achieve $M^{-\frac12}\! A M^{-\frac12} \simeq I$.
However, as shown in~\cite{GMPS92}, we can achieve $M^{-\frac12}\! A
M^{-\frac12} \simeq \left[\begin{smallmatrix}I \\ & -I
\end{smallmatrix}\right]$ using an approximate
block-LDL$^{\text{T}}$ factorization $A \simeq LDL^T\!$ to get $M =
L|D|L^T\!$, where $D$ is indefinite with blocks of order 1 and 2, and
$|D|$ has the same eigensystem as $D$ except negative eigenvalues are
changed in sign.
\end{comment}
\begin{comment}
\subsection{Preconditioning singular $Ax = b$}
For singular compatible systems, \CSMINRES and \CSMINRESQLP find the
minimum-length solution (see Theorems
\ref{theorem-singular-compatible} and \ref{theorem-MINRES-QLP}). If
$M$ is nonsingular, the preconditioned system is also compatible and
the solvers return its minimum-length solution. The unpreconditioned
solution solves $Ax \approx b$, but is not necessarily a
minimum-length solution.
\begin{example}
Let
$ A = \left[\begin{smallmatrix}
1 & 1 & 0 & 0
\\ 1 & 1 & 1 & 0
\\ 0 & 1 & 0 & 1
\\ 0 & 0 & 1 & 0
\end{smallmatrix}
\right]$
and
$ b = \left[\begin{smallmatrix}
6 \\ 9 \\ 6 \\ 3
\end{smallmatrix}
\right].
$
Then $\rank(A)=3$ and $Ax=b$ is a singular compatible
system. The minimum-length solution is $x^{\dagger} =
\left[\begin{smallmatrix} 2 & 4 & 3 & 2
\end{smallmatrix}\right]^T\!$.
By binormalization~\cite{LG04}
we construct the matrix
$D = \diag([\begin{smallmatrix} 0.84201 & 0.81228 & 0.30957 & 3.2303
\end{smallmatrix}] )$.
The minimum-length solution of the diagonally preconditioned problem
$DAD y \!=\! Db$ is $y^{\dagger} \!=\!
\left[\begin{smallmatrix} 3.5739 & 3.6819 & 9.6909 & 0.93156
\end{smallmatrix}\right]^T\!$\!\!. Then $x = Dy^{\dagger} =
\left[\begin{smallmatrix} 3.0092 & 2.9908 & 3.0000 & 3.0092
\end{smallmatrix}\right]^T\!$ is a solution of $Ax=b$,
but $x \neq x^{\dagger}$.
\end{example}
\subsection{Preconditioning singular $Ax \approx b$}
We propose the following techniques for obtaining minimum-residual
solutions of singular incompatible problems. In each case we use an
equivalent but larger \emph{compatible} system to which \CSMINRES may
be applied. Even if the larger system is singular, Theorem
\ref{theorem-singular-compatible} shows that the minimum-length
solution of the larger system will be obtained. The required $x$ will
be part of this solution. Preconditioning still gives a
minimum-residual solution of $Ax \approx b$, and in \emph{some} cases
$x$ will be $x^\dagger$.
If the systems are ill-conditioned, it will be safer and more
efficient to apply \CSMINRES to the original incompatible system.
However, preconditioning will give an $x$ that is ``minimum length''
in a different norm.
\end{comment}
\begin{comment}
\subsubsection{Augmented system}
When $A$ is singular, so is the augmented system
\begin{align}
\label{eq:augmented}
\bmat{I & A \\ A} \bmat{r \\ x} &= \bmat{b \\ 0},
\end{align}
but it is always compatible. Preconditioning with symmetric
positive definite $M$ gives us a solution $\left[ \begin{smallmatrix}
r \\ x \end{smallmatrix} \right]$ in which $r$ is unique, but $x$
may not be $x^\dagger$.
\subsubsection{A giant KKT system}
Problem~\eqref{eqn4b} is equivalent to $\min_{r,\,x} x^T\! x$ subject
to~\eqref{eq:augmented}, which is an equality-contrained convex
quadratic program. The corresponding KKT system
\cite[Section~16.1]{NW} is both symmetric and compatible:
\begin{equation} \label{KKT}
\bmat{ & & I & A
\\ &-I & A
\\ I & A
\\ A }
\bmat{r\\x\\y\\z} = \bmat{0\\0\\b\\0}.
\end{equation}
Although this is still a singular system, the upper-left $3 \times 3$
block-submatrix is nonsingular and therefore
$r$, $x$ and $y$ are unique and a preconditioner applied to the KKT
system would give $x$ as the minimum-length solution of our original
problem.
\subsubsection{Regularization}
If the rank of a given matrix $A$ is ill-determined, we may want to
solve the \emph{regularized} problem~\cite{L77, H90} with parameter
$\delta>0$:
\begin{equation} \label{regLLS}
\min_x\ \normm{ \bmat{A \\ \;\delta I\;} x - \bmat{b\\0} }^2.
\end{equation}
The matrix $\left[\begin{smallmatrix} A \\ \delta
I \end{smallmatrix}\right]$ has full rank and is always better
conditioned than $A$. \LSQR may be applied, and its iterates $x_k$
will reduce $\norm{r_k}^2 + \delta^2\norm{x_k}^2$ monotonically.
Alternatively, we could transform~\eqref{regLLS} into the following
symmetric compatible systems and apply \CSMINRES or \CSMINRESQLP. They
tend to reduce $\norm{Ar_k - \delta^2 x_k}$ monotonically.
\smallskip
\begin{description}
\item[Normal equation:]
\begin{equation} \label{form3}
(A^2 + \delta^2 I) x =Ab.
\end{equation}
\item[Augmented system:]
\[
\bmat{I & A
\\ A & -\delta^2 I }
\bmat{r\\x} = \bmat{b\\0}.
\]
\item[A two-layered problem:] If we eliminate $v$ from the system
\begin{equation} \label{BV}
\bmat{ I & A^2
\\ A^2 & -\delta^2 A^2}
\bmat{x\\v} = \bmat{0\\Ab}.
\end{equation}
we obtain~\eqref{form3}. Thus $x$ is also a solution of our
regularized problem~\eqref{regLLS}. This is equivalent to the
two-layered formulation (4.3) in Bobrovnikova and
Vavasis~\cite{BV01} (with $A_1 = A$, $A_2 = D_1 = D_2 = I$, $b_1 =
b$, $b_2 = 0$, $\delta_1 = 1$, $\delta_2 = \delta^2$). A key
property is that $x \rightarrow x^\dagger$ as $\delta \rightarrow 0$.
\item[A KKT-like system:] If we define $y = -Av$ and $r = b - Ax -
\delta^2 y$, then we can show (by eliminating $r$ and $y$ from the
following system) that $x$ in
\begin{equation} \label{BVreferee7}
\bmat{ & & I & A
\\ & -I & A
\\ I & A & \delta^2 I
\\ A }
\bmat{r \\x \\y \\v} =\bmat{0 \\ 0 \\b \\0}
\end{equation}
is also a solution of~\eqref{BV} and thus of~\eqref{regLLS}. The
upper-left $3\times 3$ block-submatrix of~\eqref{BVreferee7} is
nonsingular, and the correct limiting behavior occurs: $x
\rightarrow x^\dagger$ as $\delta \rightarrow 0$. In fact,
\eqref{BVreferee7} reduces to~\eqref{KKT}.
\end{description}
\subsection{General preconditioners}
The construction of preconditioners is usually problem-dependent. If
not much is known about the structure of $A$, we can only consider
general methods such as diagonal preconditioning and incomplete
Cholesky factorization. These methods require access to the nonzero
elements of $A$. (They are not applicable if $A$ exists only as an
operator for returning the product $Ax$.)
For a comprehensive survey of preconditioning techniques, see Benzi
\cite{B02}. We discuss a few methods for symmetric $A$ that also
require access to the nonzero $A_{ij}$.
\subsubsection{Diagonal preconditioning}
If $A$ has entries that are very different in magnitude, diagonal
scaling might improve its condition. When $A$ is diagonally dominant
and nonsingular, we can define $D = \diag(d_1,\ldots,d_n)$ with $d_j =
1/|A_{jj}|^{1/2}$. Instead of solving $Ax=b$, we solve $DADy=Db$,
where $DAD$ is still diagonally dominant and nonsingular with all
entries $\leq1$ in magnitude, and $x=Dy$.
More generally, if $A$ is not diagonally dominant and possibly
singular, we can safeguard division-by-zero errors by choosing a
parameter $\delta > 0$ and defining
\begin{equation} \label{DAD}
d_j(\delta) = 1 / \max \{\delta,
\, \sqrt{\smash[b]{|A_{jj}|}},
\, \max_{i \ne j} |A_{ij}| \},
\qquad j=1,\ldots,n.
\end{equation}
\begin{example}
\vspace*{0.05in}
\begin{enumerate}
\item If $A =
\left[\begin{smallmatrix}
-1 & 10^{-8} &
\\ 10^{-8} & 1 & 10^4
\\ & 10^4 & 0
\\ & & & 0
\end{smallmatrix}\right]$,
then $\kappa(A)\approx10^{4}$. Let $\delta = 1$,
$D =
\left[\begin{smallmatrix}
1 & & &
\\ & 10^{-2} & &
\\ & & 10^{-2} &
\\ & & & 1
\end{smallmatrix}\right]$
in~\eqref{DAD}.
Then $DAD =
\left[\begin{smallmatrix}
-1 & 10^{-10} & &
\\ 10^{-10} & 10^{-4} & 1 &
\\ & 1 & 0 &
\\ & & & 0
\end{smallmatrix}\right]$
and $\kappa(DAD) \simeq 1$.
\item $A =
\left[\begin{smallmatrix}
10^{-4} & 10^{-8} & &
\\ 10^{-8} & 10^{-4} & 10^{-8} &
\\ & 10^{-8} & 0 &
\\ & & & 0
\end{smallmatrix}\right]$
contains mostly very small entries, and
$\kappa(A)\approx10^{10}$. Let $\delta = 10^{-8}$ and
$D =
\left[\begin{smallmatrix}
10^2 & & &
\\ & 10^2 & &
\\ & & 10^8 &
\\ & & & 10^8
\end{smallmatrix}\right]$.
Then $DAD =
\left[\begin{smallmatrix}
1 & 10^{-4} & &
\\ 10^{-4} & 1 & 10^2 &
\\ & 10^2 & 0 &
\\ & & & 0
\end{smallmatrix}\right]$
and $\kappa(DAD) \simeq 10^2$.
(The choice of $\delta$ makes a critical difference in this case:
with $\delta=1$, we have $D=I$.)
\end{enumerate}
\end{example}
\subsubsection{Binormalization (BIN)} \label{sectbin}
Livne and Golub~\cite{LG04} scale a symmetric matrix by a series of
$k$ diagonal matrices on both sides until all rows and columns of the
scaled matrix have unit $2$-norm:
$ DAD = D_k\cdots D_{1}AD_{1}\cdots D_k$.
See also Bradley~\cite{B10}.
\begin{example}
If $A =
\left[\begin{smallmatrix}
10^{-8} & 1 &
\\ 1 & 10^{-8} & 10^4
\\ & 10^4 & 0
\end{smallmatrix}\right]$,
then $\kappa(A) \simeq 10^{12}$. With just one sweep of BIN, we obtain
$D =\diag(8.1\e{-3},6.6\e{-5},1.5)$,
$DAD \simeq
\left[\begin{smallmatrix}
6.5 \e{-1} & 5.3 \e{-1} & 0
\\ 5.3 \e{-1} & 0 & 1
\\ 0 & 1 & 0
\end{smallmatrix}\right]$
and $\kappa(DAD)\simeq 2.6$ even though the rows and columns have not
converged to one in the two-norm. In contrast, diagonal scaling
\eqref{DAD} defined by $\delta=1$ and $D = \diag(1, 10^{-4},10^{-4})$
reduces the condition number to approximately $10^4$.
\end{example}
\subsubsection{Incomplete Cholesky factorization}
For a sparse symmetric positive definite matrix $A$, we could compute
a preconditioner by the incomplete Cholesky factorization that
preserves the sparsity pattern of $A$. This is known as IC0 in the
literature. Often there exists a permutation $P$ such that the IC0
factor of $PAP^T\!$ is more sparse than that of $A$.
When $A$ is semidefinite or indefinite, IC0 may not exist, but a
simple variant that may work is the incomplete Cholesky-infinity
factorization \cite[Section~5]{Z97}.
\end{comment}
\begin{comment}
\section{Effects of rounding errors in \MINRES}
Note that even using finite precision the expression for $\psi_k^2$ is
extremely accurate for the versions of the Lanczos algorithm given in
Section~\ref{sec:Lanczos}, since (taking $\norm{v_j}=1$ with
negligible error), $\norm{\conj{A}r_k}^2 = \tau_{k+1}^2(
[\gamma_{k+1}]^2 +2\gamma_{k+1}\delta_{k+2} v_{k+1}^T v_{k+2} +
[\delta_{k+2}]^2)$, where from~\eqref{min7} $|\delta_{k+2}|\leq
\beta_{k+2}$, while from \cite[(18)]{P76} $\beta_{k+2}|v_{k+1}^T
v_{k+2}|\leq O(\varepsilon)\norm{A}$, and with $|\gamma_{k+1}|\leq
\norm{A}$, see \cite[(19)]{P76}, we see that
$|\gamma_{k+1}\delta_{k+2} v_{k+1}^T v_{k+2}| \leq
O(\varepsilon)\norm{A}^2$.
\CSMINRES should stop if $R_k$ is singular (which theoretically
implies $k=\ell$ and $A$ is singular). Singularity was not discussed
by Paige and Saunders~\cite{PS75}, but they did raise the question: Is
\CSMINRES stable when $R_k$ is ill-conditioned? Their concern was
that $\norm{D_k}$ could be large in~\eqref{eq:rdeqv}, and there could
then be cancellation in forming $x_{k-1} + \tau_k^{(2)} d_k$ in
\eqref{minresxk}.
Sleijpen, Van der Vorst, and Modersitzki~\cite{SVM00} analyzed the
effects of rounding errors in \CSMINRES and reported examples of
apparent failure with a matrix of the form $A = QDQ^T\!$, where $D$ is
an ill-conditioned diagonal matrix and $Q$ involves a single plane
rotation. We were unable to reproduce \CSMINRES's performance on the
two examples defined in Figure~4 of their paper, but we modified the
examples by using an $n \times n$ Householder transformation for $Q$,
and then observed similar difficulties with \CSMINRES---see
Figure~\ref{figDPtestSing3_DP}. The recurred residual norm $\phi^M_k$
is a good approximation to the directly computed $\norm{r^M_k}$ until
the last few iterations. The recurred norms $\phi^M_k$ then keep
decreasing but the directly computed norms $\norm{r^M_k}$ become
stagnant or even increase (see the lower subplots in
Figure~\ref{figDPtestSing3_DP}).
\begin{remark}
Note that we do want $\phi_k$ to keep decreasing on compatible
systems, so that the test $\phi_k \leq \mathit{tol} (\norm{A}
\norm{x_k}+\norm{b})$ with $\mathit{tol} \ge \varepsilon$ will
eventually be satisfied even if the computed $\norm{r_k}$ is no
longer as small as $\phi_k$.
\end{remark}
The analysis in~\cite{SVM00} focuses on the rounding errors involved
in the $n$ lower triangular solves $R_k^T\! D_k^T\! = V_k^T\!$ (one solve
for each row of $D_k$), compared to the single upper triangular solve
$R_k y_k = t_k$ (followed by $x_k = V_k y_k$) that would be possible
at the final $k$ if all of $V_k$ were stored as in \GMRES \cite{SS86}.
We shall see that a key feature of \CSMINRES is that a single lower
triangular solve suffices with no need to store $V_k$, much the same
as in \SYMMLQ.
\end{comment}
\section{Numerical experiments} \label{sec:numerical}
In this section we present computational results based on \red{the} \Matlab~7.12
\red{implementations} of \CSMINRESQLP and \SSMINRESQLP, which are made
available to the public as open-source software\red{, in accord} with the
philosophy of reproducible computational research~\cite{C94, CD02}.
The computations were performed in double precision on a Mac
OS X machine with a \red{2.7 GHz} Intel Core i7 and \red{16 GB} RAM.
\subsection{Complex symmetric problems} \label{sec:num:cs}
Although the SJSU Singular Matrix Database~\cite{FSJSU} currently
contains only one complex symmetric matrix (named \texttt{dwg961a}) and
only one skew symmetric matrix (\texttt{plsk1919}), it has a sizable set of
singular symmetric matrices, which can be handled by the associated
\Matlab toolbox SJsingular~\cite{FSJSU2}. We constructed multiple
singular complex symmetric \red{matrices} of the form $H \equiv i A$, where
$A$ is symmetric and singular. \red{All} the eigenvalues of
$H$ \red{clearly} lie on the imaginary axis. For a compatible system, we simulated
$b = Hz$, where $z_i \sim i.i.d.\ U(0,1)$, \red{that is}, $z_i$ were
independent and identically distributed random variables whose values
were drawn from the standard uniform distribution with support
$[0,1]$. For a \LS problem, we generated a random $b$ with $b_{i} \sim
i.i.d.\ U(0,1)$\red{,} and it is almost always true that $b$ is \emph{not} in
the range of the test matrix. In \CSMINRESQLP, we set the parameters
$\mathtt{maxit} = 4n$, $\mathtt{tol} = \varepsilon$, and
$\mathtt{trancond} = 10^{-7}$ for the stopping conditions in
Table~\ref{tab-stopping-conditions} and the transfer process from
\CSMINRES (see Section~\ref{sec:transfer}).
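The test-problem construction just described can be sketched as follows (an illustrative helper, not the paper's test harness; in the experiments $A$ comes from the SJsingular toolbox rather than being passed in directly):

```python
import numpy as np

def make_cs_problem(A, compatible=True, seed=0):
    """Build a singular complex symmetric test problem H = i*A from a
    real symmetric singular A.  Compatible case: b = H z with
    z_i ~ i.i.d. U(0,1); LS case: random b, almost surely not in range(H)."""
    rng = np.random.default_rng(seed)
    H = 1j * A                        # complex symmetric; spectrum is i*lambda(A)
    n = A.shape[0]
    if compatible:
        b = H @ rng.uniform(0.0, 1.0, n)          # b = H z, so Hx = b is compatible
    else:
        b = rng.uniform(0.0, 1.0, n).astype(complex)
    return H, b
```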
We compare the computed results of \CSMINRESQLP and \Matlab's \QMR \red{with}
solutions computed directly by the truncated \SVD (\TSVD)
of $H$ utilizing \red{\Matlab}'s function \texttt{pinv}. For \TSVD we have
$ x_t \equiv \sum_{\sigma_i > t \norm{H} \varepsilon}
\frac{1}{\sigma_i} u_i u_i^* b,
$
with parameter $t>0$. Often $t$ is set to $1$, and sometimes to a
moderate number such as $10$ or $100$; it defines a cut-off point
relative to the largest singular value of $H$. For example, if most
singular values are of order 1 and the rest are of order
$\norm{H}\varepsilon\approx10^{-16}$, we expect \TSVD to work better
when the small singular values are excluded, while \SVD (with $t=0$)
could return an exploding solution.
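The cut-off rule can be sketched with a few lines of NumPy. We write it with the general SVD factors $H = U \Sigma V^*$, so the kept terms are $(1/\sigma_i)\,v_i u_i^* b$; this matches the behavior of \texttt{pinv} with the corresponding tolerance. The function name is illustrative.

```python
import numpy as np

def tsvd_solve(H, b, t=1.0):
    """Sketch of the TSVD solution: discard singular values at or below
    t * ||H|| * eps and apply the pseudoinverse of what remains."""
    U, s, Vh = np.linalg.svd(H)
    keep = s > t * s[0] * np.finfo(float).eps   # s[0] = ||H||_2
    # x_t = sum over kept i of (1/sigma_i) v_i (u_i^* b)
    return Vh[keep].conj().T @ ((U[:, keep].conj().T @ b) / s[keep])
```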
In Figure~\ref{cons50a} we present the results of 50 consistent
problems of the form $Hx=b$\red{. Given the computed \TSVD solution $x^\dagger$,
the figure} plots the relative error norm \red{$\norm{\hat{x}-x^\dagger} / \norm{x^\dagger}$} of
\red{each} approximate solution \red{$\hat{x}$} computed by \red{\QMR and \CSMINRESQLP}
against
\red{$\kappa(H)\varepsilon$. (}It is known that an upper bound
\red{on} the perturbation error of a singular linear
system involves the condition \red{number} of the corresponding
matrix~\cite[Theorem~5.1]{SS}.\red{)} The diagonal dotted red line represents
the best results we could expect from any numerical method in double
precision. We can see that both \red{\QMR and \CSMINRESQLP} did well
on all problems except \red{two} in each case. \CSMINRESQLP performed
slightly better\red{: compared with \QMR, a few additional problems attained
relative errors of less than $10^{-5}$}.
\begin{figure}[ht]
\includegraphics[width=4.7in]{C13Fig7_1.pdf}
\caption{50 consistent singular complex symmetric systems. This figure
is reproducible by \texttt{C13Fig7\_1.m}.}
\label{cons50a}
\end{figure}
Our second test set \red{involves} complex symmetric matrices \red{whose
eigenspectra are} more \red{widely spread} than those in the first test set. Let $A=V
\Lambda V^T\!$ be an eigenvalue decomposition of symmetric $A$ with
$|\lambda_1| \ge \cdots \ge |\lambda_n|$. For $i=1,\ldots,n$, we
define $d_i \equiv (2u_i-1) |\lambda_1|$, where $u_i \sim
i.i.d.\ U(0,1)$ if $\lambda_i \ne 0$, or $d_i \equiv 0$ otherwise.
Then the complex symmetric matrix $M \equiv V D V^T\! + i A$ has the
same (numerical) rank as $A$\red{,} and its eigenspectrum is contained in a
ball of radius approximately equal to $|\lambda_1|$ in the complex
plane. In Figure~\ref{cons50b} we summarize the results of solving 50
such complex symmetric linear systems. \CSMINRESQLP \red{clearly}
behaved as stably as it did with the first test set. However, \QMR \red{was
clearly} more sensitive to the \red{eigenvalues spreading over the complex
plane rather than lying on a line}: two problems did not
converge\red{,} and about ten additional problems converged to their
corresponding ${x}^{\dagger}$ with no more than four digits of
accuracy.
Our third test set consists of linear \LS problems~\eqref{eqn4b}, in
which $A \equiv H$ in the upper plot of Figure~\ref{incons100} and $A
\equiv M$ in the lower plot. In the case of $H$, \CSMINRESQLP did not
converge for two instances but agreed with the \TSVD solutions \red{to} five
or more digits for almost all other instances. In the
case of $M$, \CSMINRESQLP did not converge for five instances but
agreed with the \TSVD solutions \red{to} five or more digits
for almost all other instances. Thus the algorithm is to some
extent more sensitive in \LS problems to \red{an eigenspectrum spread over
the complex plane. This} is \red{consistent with} the perturbation result that an
upper bound \red{on} the relative error norm \red{of an} \LS problem involves the
square of $\kappa(A)$~\cite[Theorem~5.2]{SS}. We did not run \QMR on
these test cases \red{because} the algorithm was not designed for \LS
problems.
{
\begin{figure}
\includegraphics[width=4.5in]{C13Fig7_2.pdf}
\caption{50 consistent singular complex symmetric systems \red{of the second
test set (matrices $M$)}. This figure is reproducible by \texttt{C13Fig7\_2.m}.}
\label{cons50b}
\end{figure}
\begin{figure}
\includegraphics[width=4.5in]{C13Fig7_3.pdf}
\caption{100 \emph{inconsistent} singular complex symmetric
systems. \red{We used matrices $H$ in the upper plot and $M$ in the lower plot,
where $H$ and $M$ are defined in Section~\ref{sec:num:cs}}. This figure is reproducible by \texttt{C13Fig7\_3.m}.}
\label{incons100}
\end{figure}
}
\subsection{Skew symmetric problems}
Our fourth test collection consists of 50 skew symmetric linear
systems and 50 singular skew symmetric \LS problems~\eqref{eqn4b}.
The matrices are constructed by $S = \mathtt{tril}(A) -
\mathtt{tril}(A)^T\!$, where $\mathtt{tril}$ extracts the lower
triangular part of a matrix. In both cases---linear systems in the
upper subplot of Figure~\ref{100ss} and \LS problems in the lower
subplot---\SSMINRESQLP did not converge for six instances but agreed
with the \TSVD solutions to more than ten digits of accuracy for
almost all other instances.
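The construction of the skew symmetric test matrices is a one-liner; a minimal sketch (any square $A$ works, since the diagonal of $\mathtt{tril}(A)$ cancels):

```python
import numpy as np

def make_skew(A):
    """S = tril(A) - tril(A)^T as in the text.  The diagonal cancels,
    so S is skew symmetric (S^T = -S) for any square A."""
    L = np.tril(A)                    # lower triangle, diagonal included
    return L - L.T
```

Note that a skew symmetric matrix of odd order is always singular, which makes this recipe a convenient source of singular test systems.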
\begin{figure}
\includegraphics[width=4.5in]{C13Fig7_4.pdf}
\caption{100 singular skew symmetric systems. Upper: 50 compatible
linear systems. Lower: 50 LS problems. This figure is reproducible
by \texttt{C13Fig7\_4.m}.}
\label{100ss}
\end{figure}
\subsection{Skew Hermitian problems}
We have also created a test collection of 50 skew
Hermitian linear systems and 50 skew Hermitian \LS
problems~\eqref{eqn4b}. Each skew Hermitian matrix is
constructed \red{as} $T = S + iB$, where $S$ is skew symmetric as defined in
the last test set, and $B \equiv A - \diag([a_{11},\ldots,a_{nn}])$\red{; in other words,}
$B$ is $A$ with the diagonal elements set to zero and is thus
symmetric. We solve the problems using the original \MINRESQLP for
Hermitian problems by the transformation $(iT) x \approx ib$. In the
case of linear systems in the upper subplot of Figure~\ref{100sh},
\SHMINRESQLP did not converge for \red{six instances}. For the other instances \SHMINRESQLP computed
approximate solutions that matched the \TSVD solutions \red{to} more than
ten digits of \red{accuracy.} As for \red{the} \LS
problems in the lower subplot of Figure~\ref{100sh}, only five
instances did not converge.
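The transformation rests on two small facts: $T = S + iB$ is skew Hermitian when $S$ is real skew symmetric and $B$ real symmetric, and $iT$ is then Hermitian. A minimal sketch (illustrative function name):

```python
import numpy as np

def make_skew_hermitian(A):
    """T = S + i*B with S = tril(A) - tril(A)^T and B = A minus its
    diagonal.  For symmetric A this gives T^* = -T, so i*T is Hermitian
    and Hermitian solvers apply to (i*T) x = i*b."""
    S = np.tril(A) - np.tril(A).T     # real skew symmetric part
    B = A - np.diag(np.diag(A))       # symmetric, zero diagonal
    return S + 1j * B
```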
\begin{figure}
\includegraphics[width=4.5in]{C13Fig7_5.pdf}
\caption{100 singular skew Hermitian systems. Upper: 50 compatible
linear systems. Lower: 50 LS problems. This figure is reproducible
by \texttt{C13Fig7\_5.m}.}
\label{100sh}
\end{figure}
\begin{comment}
\subsection{A Laplacian system $Ax \approx b$ (almost compatible)}
Our first example involves a singular indefinite Laplacian matrix $A$
of order $n=400$. It is block-tridiagonal with each block being a
tridiagonal matrix $T$ of order $N=20$ with all nonzeros equal to 1:
\begin{equation} \label{eq:Laplace}
A = \bmat{T & T
\\ T & T & \ddots
\\ & \ddots & \ddots & T
\\ & & T & T}_{n\times n},
\qquad
T = \bmat{1 & 1
\\ 1 & 1 & \ddots
\\ & \ddots & \ddots & 1
\\ & & 1 & 1}_{N\times N}.
\end{equation}
\Matlab's \texttt{eig($A$)} reports the following data:
$205$ positive eigenvalues in the interval $[6.1\e{-2},8.87]$,
$39$ almost-zero eigenvalues in $[-2.18\e{-15},3.71 \e{-15}]$,
$156$ negative eigenvalues in $[-2.91,-6.65 \e{-2}]$,
numerical rank $= 361$.
We used a right-hand side with a small incompatible component: $b = Ay
+ 10^{-8}z$ with $y_i$ and $z_i \sim i.i.d.\ U(0,1)$. Results are
summarized in Table~\ref{tab:Laplacian1}. In the column labeled ``C?",
the value ``Y" denotes that the associated algorithm in the row has
converged to the desired NRBE tolerances within $\mathit{maxit}$
iterations (cf.~Table~\ref{tab-stopping-conditions}); otherwise, we
have values ``N" and ``N?", where ``N?" indicates that the algorithm
could have converged if more relaxed stopping conditions were
used. The column ``$Av$" shows the total number of matrix-vector
products, and column ``$x(1)$" lists the first element of the final
solution estimate $x$ for each algorithm. For \GMRES, the integer in
parentheses is the value of the restart parameter.
\begin{table}
\caption{Finite element problem $Ax \approx b$ with $b$ almost
compatible. Laplacian on a $20 \times 20$ grid, $n = 400$,
$\mathit{maxit} = 1200$, $\operatorname{shift} = 0$, $\mathit{tol} =
1.0 \e{-15}$, $\operatorname{maxnorm} = 100$, $\mathit{maxcond} = 1
\e{15}$, $\norm{b} = 87$. To reproduce this example, run
\texttt{test\_minresqlp\_eg7\_1(24)}.}
\label{tab:Laplacian1}
\renewcommand{\e}[1]{\text{e}{#1}}
{\footnotesize \renewcommand{\arraystretch}{1.2}
\begin{tabular}{|l@{}|l@{\,}|r@{\,}|r@{\,}|r@{\,}|r@{\,}|r@{\,}|r@{\,}|r@{\,}|r@{\,}|}
\hline
Method & C? & $Av $ & $ x(1)~$ & $\norm{x}~$ & $\norm{e}~$ & $\norm{r}~$ & $\norm{Ar}~$ & $\norm{A}~$ & $\kappa(A)$
\\ \hline
SVD & -- & -- & $ -7.39\e{5}$ & $ 4.12\e{7}$ & $ 4.1\e{7} $ & $1.7\e{-7} $ & $7.8\e{-7} $ & $8.9\e{0}$ & $ 1.1\e{17}$\\
TSVD & -- & -- & $ 3.89\e{-1}$ & $ 1.15\e{1}$ & $ 0.0\e{0} $ & $1.7\e{-8} $ & $1.4\e{-12} $ & $8.9\e{0} $ & $ 1.5\e{2}$\\
\Matlab SYMMLQ & N? & $371$ & $ 3.89\e{-1}$ & $ 1.15\e{1}$ & $ 1.4\e{-7} $ & $1.8\e{-7} $ & $5.8\e{-7} $ & -- & -- \\
SYMMLQ SOL & N & $447$ & $ -3.08\e{0}$ & $ 9.63\e{1}$ & $ 9.5\e{1} $ & $1.4\e{2} $ & $4.4\e{2} $ & $9.6\e{1}$ & $ 1.3\e{1}$\\
SYMMLQ$^+$ & N & $447$ & $ 2.94\e{6}$ & $ 4.27\e{8}$ & $ 4.3\e{8} $ & $1.8\e{2} $ & $6.5\e{2} $ & $8.6\e{0} $ & $ 1.3\e{1}$\\
\Matlab MINRES & N & $1200$ & $ -7.50\e{5}$ & $ 2.10\e{7}$ & $ 2.1\e{7} $ & $1.5\e{7}$ & $9.1\e{7}$ & -- & -- \\
MINRES SOL & N & $1200$ & $ 9.89\e{5}$ & $ 6.10\e{7}$ & $ 6.1\e{7} $ & $2.3\e{7} $ & $1.5\e{8} $ & $1.8\e{2}$ & $ 1.5\e{1}$\\
MINRES$^+$ & N & $ 611$ & $ 1.02\e{0}$ & $ 9.28\e{1}$ & $ 9.2\e{1} $ & $1.7\e{-8} $ & $2.5\e{-11} $ & $8.6\e{0}$ & $ 6.9\e{13}$\\
MINRES-QLP & Y & $ 612$ & $ 3.89\e{-1}$ & $ 1.15\e{1}$ & $ 3.7\e{-11} $ & $1.7\e{-8} $ & $9.3\e{-11} $ & $8.7\e{0}$ & $ 4.3\e{13}$\\
\Matlab LSQR & Y & $1462$ & $ 3.89\e{-1}$ & $ 1.15\e{1}$ & $ 2.3\e{-13} $ & $1.7\e{-8} $ & $3.3\e{-13} $ & -- & --\\
LSQR SOL & Y & $1464$ & $ 3.89\e{-1}$ & $ 1.15\e{1}$ & $ 2.4\e{-13} $ & $1.7\e{-8} $ & $3.9\e{-13} $ & $1.5\e{2}$ & $ 6.4\e{3}$\\
\Matlab GMRES(30) & N? & $1200$ & $ 3.90\e{-1}$ & $ 1.15\e{1}$ & $ 5.2\e{-2} $ & $3.4\e{-3} $ & $9.4\e{-4} $ & -- & --\\
SQMR & N & $1200$ & $ -2.58\e{8}$ & $ 3.74\e{10}$ & $ 3.7\e{10}$ & $4.6\e{3} $ & $2.3\e{4} $ & -- & --\\
\Matlab QMR & N? & $798$ & $ 3.89\e{-1}$ & $ 1.15\e{1}$ & $ 5.2\e{-7} $ & $1.9\e{-8} $ & $2.6\e{-8} $ & -- & --\\
\Matlab BICG & N? & $790$ & $ 3.89\e{-1}$ & $ 1.15\e{1}$ & $ 4.7\e{-7} $ & $3.9\e{-8} $ & $1.9\e{-7} $ & -- & --\\
\Matlab BICGSTAB & N? & $2035$ & $ 3.89\e{-1}$ & $ 1.15\e{1}$ & $ 4.2\e{-7} $ & $1.7\e{-8} $ & $4.3\e{-13} $ & -- & --\\
\hline
\end{tabular}}
\end{table}
\CSMINRES gives a larger solution than \CSMINRESQLP. This example has a
residual norm of about $1.7\times10^{-8}$, so it is not clear whether
to classify it as a linear system or an \LS problem. To the credit of
\Matlab \SYMMLQ, it thinks the system is linear and returns a good
solution. For \CSMINRESQLP, the first 410 iterations are in standard
``\CSMINRES mode'', with a transfer to ``\CSMINRESQLP mode'' for the last
202 iterations. \LSQR converges to the minimum-length solution but
with more than twice the number of iterations of \CSMINRESQLP. The other
solvers fall short in some way.
\subsection{A Laplacian LS problem $\min \norm{Ax-b}$}
This example uses the same Laplacian matrix $A$~\eqref{eq:Laplace} but
with a clearly incompatible $b=10\times\operatorname{rand}(n,1)$,
\red{that is}, $b_{i} \sim i.i.d.\ U(0,10)$. The residual norm is about $17$.
Results are summarized in Table~\ref{tab:Laplacian2}. \CSMINRES gives
an \LS solution, while \CSMINRESQLP is the only solver that matches the
solution of \TSVD. The other solvers do not perform satisfactorily.
\begin{table}
\caption{Finite element problem $\min \norm{Ax-b}$. Laplacian on a
$20 \times 20$ grid, $n = 400$, $\mathit{maxit} = 500$,
$\operatorname{shift} = 0$, $\mathit{tol} = 1.0 \e{-14}$,
$\operatorname{maxnorm} = 1 \e{4}$, $\mathit{maxcond} = 1 \e{14}$,
$\norm{b} = 120$. To reproduce this example, run
\texttt{test\_minresqlp\_eg7\_1(25)}.}
\label{tab:Laplacian2}
\renewcommand{\e}[1]{\text{e}{#1}}
{\footnotesize \renewcommand{\arraystretch}{1.2}
\begin{tabular}{|l@{}|l@{\,}|r@{\,}|r@{\,}|r@{\,}|r@{\,}|r@{\,}|r@{\,}|r@{\,}|r@{\,}|}
\hline
Method & C? & $Av $ & $x(1)~$ & $\norm{x}~$ & $\norm{e}~$ & $\norm{r}~$ & $\norm{Ar}~$ & $\norm{A}~$ & $\kappa(A)$
\\ \hline
\SVD & -- & -- & $-7.39\e{14}$ & $ 4.12\e{16}$& $4.1\e{16}$ & $1.8\e{2} $ & $7.9\e{2} $ & $8.9\e{0}$ & $ 1.1\e{17}$\\
\TSVD & -- & -- & $-8.75\e{0}$ & $ 1.43\e{2}$ & $0.0\e{0}$ & $1.7\e{1} $ & $4.1\e{-12}$ & $8.9\e{0} $ & $ 1.5\e{2}$\\
\Matlab SYMMLQ & N & $1$ & $ 2.74 \e{-1}$& $ 1.52\e{1}$ & $1.4\e{2} $ & $6.0\e{1} $ & $2.9\e{2}$ & -- & -- \\
SYMMLQ SOL & N & $228$ & $-7.70\e{2}$ & $ 9.93\e{3}$ & $9.9\e{3} $ & $7.0\e{3} $ & $3.4\e{4}$ & $6.8\e{1}$ & $ 9.7\e{0}$\\
SYMMLQ$^+$ & N & $228$ & $-7.70\e{2}$ & $ 9.93\e{3}$ & $9.9\e{3} $ & $7.0\e{3} $ & $3.4\e{4}$ & $7.6\e{0} $ & $ 9.7\e{0}$\\
\Matlab MINRES & N & $500$ & $ 2.80\e{14}$ & $ 4.07\e{16}$& $4.1\e{16}$ & $2.3\e{2} $ & $1.4\e{3}$ & -- & -- \\
MINRES SOL & N & $500$ & $-1.46\e{14}$ & $ 2.11\e{16}$& $2.1\e{16}$ & $1.1\e{2} $ & $6.6\e{2}$ & $1.5\e{2}$ & $ 1.4\e{1}$\\
MINRES$^+$ & N & $ 381$ & $3.88\e{1}$ & $ 6.90\e{3}$ & $6.9\e{3} $ & $1.7\e{1} $ & $1.2\e{-5}$ & $7.9\e{0}$ & $ 1.6\e{10}$\\
MINRES-QLP & Y & $ 382$ & $-8.75\e{0}$ & $ 1.43\e{2}$ & $1.7\e{-6}$ & $1.7\e{1} $ & $1.7\e{-5}$ & $8.6\e{0}$ & $ 3.5\e{10}$\\
\Matlab LSQR & Y & $1000$ & $-8.75\e{0}$ & $ 1.43\e{2}$ & $2.0\e{-5}$ & $1.7\e{1} $ & $1.4\e{-5}$ & -- & --\\
LSQR SOL & Y & $1000$ & $-8.75\e{0}$ & $ 1.43\e{2}$ & $2.3\e{-5}$ & $1.7\e{1} $ & $1.1\e{-5}$ & $1.2\e{2}$ & $ 4.4\e{3}$\\
\Matlab GMRES(30)& N & $500$ & $-8.84\e{0}$ & $ 1.25\e{2}$ & $4.8\e{1} $ & $1.7\e{1} $ & $8.2\e{-1}$ & -- & --\\
SQMR & N & $500$ & $ 9.58\e{15}$ & $ 1.39\e{18}$& $1.4\e{18}$ & $1.2\e{11}$ & $6.7\e{11}$ & -- & --\\
\Matlab QMR & N & $556$ & $-7.30\e{0}$ & $ 1.92\e{2}$ & $1.4\e{2} $ & $1.7\e{1} $ & $1.2\e{1} $ & -- & --\\
\Matlab BICG & N & $2$ & $ 1.40\e{0}$ & $ 1.71\e{1}$ & $1.4\e{2} $ & $6.0\e{1} $ & $2.6\e{2} $ & -- & --\\
\Matlab BICGSTAB & N & $104$ & $-1.12\e{1}$ & $ 1.40\e{2}$ & $9.6\e{1} $ & $2.6\e{1} $ & $1.8\e{1} $ & -- & --\\
\hline
\end{tabular}}
\end{table}
\end{comment}
\begin{comment}
\subsection{Regularizing effect of \CSMINRES}
\label{sec:regularizing}
This example illustrates the regularizing effect of \CSMINRES with the
stopping condition $\chi_k \le \mathit{maxxnorm}$. For $k \ge 18$ in
Figure~\ref{davis1177}, we observe the following values:
\begin{align*}
\displaybreak[0]
\chi_{18} &= \norm{\bmat{2.51 & \phantom{-} 3.87\e{-11}
& \phantom{-}1.38\times 10^2}}
= 1.38\times 10^2,
\\
\displaybreak[0]
\chi_{19} &= \norm{\bmat{2.51 & -8.00\e{-10} & -1.52 \times 10^2}}
= 1.52\times 10^2,
\\
\displaybreak[0]
\chi_{20} &= \norm{\bmat{2.51 & \phantom- 1.62 \e{-10} & -1.62 \times 10^6}}
= 1.62\times 10^6 > \mathit{maxxnorm} \equiv 10^4.
\end{align*}
Because the last value exceeds $\mathit{maxxnorm}$, \CSMINRES regards
the last diagonal element of $L_k$ as a singular value to be ignored
(in the spirit of truncated \SVD solutions). It discards the last
element of $u_{20}$ and updates
\[
\chi_{20} \leftarrow \norm{\bmat{2.51 & 1.62 \e{-10} & 0}} = 2.51.
\]
The full truncation strategy used in the implementation is justified
by the fact that $x_k = W_k u_k$ with $W_k$ orthogonal. When
$\norm{x_k}$ becomes large, the last element of $u_k$ is treated as
zero. If $\norm{x_k}$ is still large, the second-to-last element of
$u_k$ is treated as zero. If $\norm{x_k}$ is \emph{still} large, the
third-to-last element of $u_k$ is treated as zero.
\begin{figure}
\includegraphics[width=\textwidth]{Davis1177Case2Eig2.eps}
\caption{Recurred $\phi_k \simeq \norm{r_k}$, $\psi_k \simeq
\norm{Ar_k}$, and $\norm{x_k}$ for MINRES and MINRES-QLP. The
matrix $A$ (ID~$1177$ from~\cite{UFSMC}) is positive semidefinite,
$n=25$, and $b$ is random with $\norm{b} \simeq 1.7$. Both solvers
could have achieved essentially the TSVD solution of $Ax\simeq b$ at
iteration $11$. However, the stringent $\mathit{tol}=10^{-14}$ on
the recurred normwise relative backward errors (NRBE in
Table~\ref{tab-stopping-conditions}) prevents them from stopping
``in time". MINRES ends with an exploding solution, yet MINRES-QLP
brings it back to the TSVD solution at iteration $20$.
\textbf{Left:} $\phi_k^M$ and $\phi_k^Q$ (recurred $\norm{r_k}$
of MINRES and MINRES-QLP) and their NRBE.
\textbf{Middle:} $\psi_k^M$ and $\psi_k^Q$ (recurred
$\norm{Ar_k}$) and their NRBE.
\textbf{Right:} $\norm{x_k^M}$ (norms of solution estimates from
MINRES) and $\chi_k^Q$ (recurred $\norm{x_k}$ from
MINRES-QLP) with $\mathit{maxxnorm} = 10^{4}$. This figure can be
reproduced by \texttt{test\_minresqlp\_fig7\_1(2)}.}
\label{davis1177}
\end{figure}
\subsection{Effects of rounding errors in \CSMINRES}
\label{egDutch}
The recurred residual norms $\phi_k^M$ in \CSMINRES usually
approximate the directly computed ones $\norm{ r_k^M }$ very well
until $\norm{ r_k^M }$ becomes small. We observe that $\phi_k^M$
continues to decrease in the last few iterations, even though $\norm{
r_k^M }$ has become stagnant. This is desirable in the sense that
the stopping rule will cause termination, although the final solution
is not as accurate as predicted.
We present similar plots of \CSMINRES in the following examples, with
the corresponding quantities as $\phi_k^Q$ and $\norm{r_k^Q}$. We
observe that except in very ill-conditioned \LS problems, $\phi_k^Q$
approximates $\norm{r_k^Q}$ very closely.
Figure~\ref{figDPtestSing3_DP} illustrates four singular compatible
linear systems.
Figure~\ref{figDPtestLSSing1} illustrates four singular \LS problems.
\begin{figure}
\centering
\hspace*{-0.1in}
\includegraphics[width=\textwidth]{DPtestSing5_DP.eps}
\vspace*{-0.22in}
\caption[]{Solving $Ax=b$ with semidefinite $A$ similar to an example
of Sleijpen \etal\ \cite{SVM00}. $A = Q \diag([0_5, \eta, 2\eta,
2\!:\! \frac{1}{789}\!:\!3])Q$ of dimension $n = 797$, nullity 5,
and norm $\norm{A}=3$, where $Q = I - (2/n) ww^T\!$ is a Householder
matrix generated by $v = [0_5, 1, \ldots, 1]^T\!$, $w = v/\norm{v}$.
These plots illustrate and compare the effect of rounding errors in
MINRES and MINRES-QLP.
The upper part of each plot shows the computed and recurred residual
norms, and the lower part shows the computed and recurred normwise
relative backward errors (NRBE, defined in
Table~\ref{tab-stopping-conditions}). MINRES and MINRES-QLP
terminate when the recurred NRBE is less than the given
$\mathit{tol} = 10^{-14}$.
\smallskip
\textbf{Upper left:} $\eta=10^{-8}$ and thus $\kappa(A) \simeq 10^8$.
Also $b = e$ and therefore $\norm{x} \gg \norm{b}$. The graphs of
MINRES's directly computed residual norms $\norm{r_k^M}$ and
recurrently computed residual norms $\phi_k^M$ begin to differ at the
level of $10^{-1}$ at iteration 21, while the values
$\phi_k^Q \simeq \norm{r_k^Q}$ from MINRES-QLP decrease monotonically
and stop near $10^{-6}$ at iteration 26.
\textbf{Upper right:} Again $\eta=10^{-8}$ but $b=Ae$. Thus $\norm{x}=
\norm{e} = O(\norm{b})$. The MINRES graphs of $\norm{r_k^M}$ and
$\phi_k^M$ start to differ when they reach a much smaller level of
$10^{-10}$ at iteration 30. The MINRES-QLP $\phi_k^Q$'s are excellent
approximations of $\norm{r_k^Q}$, with both reaching $10^{-13}$ at
iteration 33.
\textbf{Lower left:} $\eta=10^{-10}$ and thus $A$ is even more
ill-conditioned than the matrix in the upper plots. Here $b= e$ and
$\norm{x}$ is again exploding. MINRES ends with $\norm{r_k^M} \simeq
10^2$, which means no convergence, while MINRES-QLP reaches a
residual norm of $10^{-4}$.
\textbf{Lower right:} $\eta=10^{-10}$ and $b=Ae$. The final MINRES
residual norm $\norm{r_k^M} \simeq 10^{-8}$, which is satisfactory
but not as accurate as $\phi_k^M$ claims at $10^{-13}$. MINRES-QLP
again has $\phi_k^Q \simeq \norm{r_k^Q} \simeq 10^{-13}$ at
iteration 37.
This figure can be reproduced by \texttt{DPtestSing7.m}.}
\label{figDPtestSing3_DP}
\end{figure}
\begin{figure}[tb]
\centering
\hspace*{-0.1in}
\includegraphics[width=\textwidth]{DPtestLSSing3_DP.eps}
\vspace*{-0.25in}
\caption[]{Solving $Ax=b$ with semidefinite $A$
similar to an example of Sleijpen \etal\ \cite{SVM00}. $A = Q
\diag([0_5, \eta, 2\eta, 2 \!:\! \frac{1}{789}\!:\!3])Q$ of
dimension $n = 797$ with $\norm{A}=3$, where $Q = I - (2/n) ee^T\!$ is
a Householder matrix generated by $e = [1, \ldots, 1]^T\!$. (We are
not plotting the NRBE quantities because $\norm{A} \norm{r_k}
\simeq 6$ throughout the iterations in this example.)
\textbf{Upper left:} $\eta=10^{-2}$ and thus
$\mathrm{cond}(A) \simeq 10^2$. Also $b=e$ and
therefore $\norm{x} \gg \norm{b}$. The graphs of MINRES's
directly computed $\norm{A r_k^M}$ and recurrently computed
$\psi_k^M$, and also $\psi_k^Q \simeq \norm{Ar_k^Q}$ from
MINRES-QLP, match very well throughout the iterations.
\textbf{Upper right:} Here, $\eta=10^{-4}$ and $A$ is more
ill-conditioned than the last example (upper left). The final MINRES
residual norm $\psi_k^M \simeq \norm{A r_k^M}$ is slightly larger
than the final MINRES-QLP residual norm $\psi_k^Q \simeq \norm{A
r_k^Q}$. The MINRES-QLP $\psi_k^Q$ are excellent approximations of
$\norm{Ar_k^Q}$.
\textbf{Lower left:}
$\eta=10^{-6}$ and $\mathrm{cond}(A) \simeq 10^6$.
MINRES's $\psi_k^M$ and $\norm{A r_k^M}$ differ starting at
iteration 21. Eventually, $\norm{Ar_k^M} \simeq 3$, which means
no convergence. MINRES-QLP reaches a residual norm of
$\psi_k^Q = \norm{A r_k^Q} = 10^{-2}$.
\textbf{Lower right:} $\eta=10^{-8}$. MINRES performs even worse than
in the last example (lower left). MINRES-QLP reaches a minimum
$\norm{A r_k^Q} \simeq 10^{-7}$ but $\mathit{tol} \!=\! 10^{-8}$
does not shut it down soon enough.
The final $\psi_k^Q =
\norm{A r_k^Q}= 10^{-2}$. The values of $\psi_k^Q$ and $\norm{A
r_k^Q}$ differ only at iterations 27--28.
This figure can be reproduced by \texttt{DPtestLSSing5.m}.
}
\label{figDPtestLSSing1}
\end{figure}
\end{comment}
\section{Conclusions} \label{sec:conclusions}
\red{We take advantage of two Lanczos-like frameworks for square
matrices or linear operators with special symmetries. In particular, the framework
for complex-symmetric problems~\cite{BS99} is a special case of the Saunders-Simon-Yip process~\cite{SSY88} with the starting vectors chosen to be $b$ and $\bar{b}$; we name the complex-symmetric process the \textit{Saunders process} and the corresponding extended Krylov subspace from the $k$th iteration \textit{Saunders subspace} $S_k(A,b)$.}
\CSMINRES constructs its $k$th solution estimate from the short recursion
$x_k = D_k t_k = x_{k-1} + \tau_k d_k$~\eqref{minresxk}, where
$n$ separate triangular systems $R_k^T\! D_k^T\! = V_k^*$ are solved to
obtain the $n$ elements of each direction $d_1, \ldots, d_k$. (Only
$d_k$ is obtained during iteration $k$, but it has $n$ elements.)
In contrast, \CSMINRESQLP constructs $x_k$ using
orthogonal steps: $x_k = W_ku_k =x_{k-2}^{(2)} + w_{k-1}^{(3)} \mu_{k-1}^{(2)}
+ w_k^{(2)} \mu_k$; see
\eqref{qlpeqnsol1}--\eqref{qlpeqnsol2}. Only one triangular system
$L_k u_k = t_k$ \eqref{Lsubproblem} is involved for each $k$.
Thus \CSMINRESQLP is numerically more stable than \CSMINRES.
\red{The} additional work and storage are moderate, and
efficiency is retained by transferring from \CSMINRES to
\CSMINRESQLP only when the estimated condition of $A$
exceeds an input parameter value.
\TSVD is known to use rank-$k$ approximations to $A$ to find
approximate solutions to $\min\norm{Ax - b}$ that serve as a form of
\textit{regularization}. It is fair to conclude from the results that
like other Krylov methods, \CSMINRES has built-in regularization
features~\cite{HO93, HN96, KS99}. Since \CSMINRESQLP monitors more
carefully and constructively the rank of $T_k$, which could be $k$ or
$k\!-\!1$, we may say that regularization is a stronger feature in
\CSMINRESQLP, as we have shown in our numerical examples.
In analogy with \CSMINRES and \CSMINRESQLP, \SSMINRES and \SSMINRESQLP are
readily applicable to skew symmetric linear systems. \red{Similarly, we have
\SHMINRES and \SHMINRESQLP for skew Hermitian problems.} We summarize and
compare these methods in Appendix~\ref{sec:compare}.
\red{\CG and \SYMMLQ for problems with these special symmetries can be derived likewise.}
\begin{comment}
It is important to develop robust techniques for estimating an a
priori bound for the solution norm since the \CSMINRES approximations
are not monotonic as is the case in \CG and \LSQR.
Ideally,
we would also like to determine a practical threshold associated with
the stopping condition $\gamma_k^{(4)} = 0$ in order to handle cases
when $\gamma_k^{(4)}$ is numerically small but not exactly zero.
These are topics for future research.
\end{comment}
\subsection*{\red{Software and reproducible research}}
\label{sec:software}
\Matlab~7.12 and Fortran~90/95 implementations of \MINRES and
\MINRESQLP for symmetric, Hermitian, skew symmetric, skew Hermitian,
and complex symmetric linear systems with short-recurrence solution
and norm estimates as well as efficient stopping conditions are
available from the \MINRESQLP project website~\cite{MinresqlpMatlab}.
Following the philosophy of reliable reproducible computational research as
advocated in~\cite{C94, CD02, C13c}, for each figure and example in this
paper we mention either the source or the specific \Matlab command.
Our \Matlab scripts are available at~\cite{MinresqlpMatlab}.
\subsection*{\red{Acknowledgments}} \label{sec:ack}
We thank our anonymous reviewers for their insightful suggestions for
improving this manuscript. We are grateful \red{for} the feedback and
encouragement from Peter Benner, Jed Brown, \red{Xiao-Wen Chang, Youngsoo Choi}, Heike Fassbender, Gregory
Fasshauer, Ian Foster, \red{Pedro Freitas}, Roland Freund, Fred Hickernell, Ilse Ipsen,
Sven Leyffer, Lek-Heng Lim, \red{Lawrence Ma}, Sayan Mukherjee, Todd Munson, Nilima
Nigam, Christopher Paige, Xiaobai Sun, Paul Tupper, Stefan Wild,
Jianlin Xia, \red{and Yuan Yao} during the development of this work. We appreciate the opportunities for presenting our algorithms in the 2012 Copper
Mountain Conference on Iterative Methods, the 2012 Structured
Numerical Linear and Multilinear Algebra Problems conference, and the
2013 SIAM Conference on Computational Science and Engineering. In
particular, we thank Gail Pieper, Michael Saunders, \red{and} Daniel Szyld for
their detailed comments, which resulted in a more polished
exposition. We thank Heike Fassbender, Roland Freund, and Michael
Saunders for helpful discussions on modified Krylov subspace methods
during Lothar Reichel's 60th birthday conference.
\clearpage
\subsubsection*{\bibname}}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[parfill]{parskip}
\usepackage
{ hyperref,
url,
booktabs,
amsfonts,
nicefrac,
microtype,
bbm,
graphicx,
amssymb,
amsmath,
amsthm,
dsfont,
xcolor,
enumitem,
mathtools,
bbold,
ifthen,
mdframed,
multirow,
subcaption,
comment
}
\input{notation.tex}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{assumption}{Assumption}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{definition}[theorem]{Definition}
\theoremstyle{definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{textalgorithm}[theorem]{Algorithm}
\renewcommand{\Pr}[1]{\textrm{Pr} \left[ #1 \right]}
\hypersetup{draft}
\begin{document}
%
%
\twocolumn[
\aistatstitle{Kernel Conditional Density Operators}
\aistatsauthor{ \unblindinfo{Ingmar Schuster \And Mattes Mollenhauer \And Stefan Klus \And Krikamol Muandet}{Anonymous authors} }
\aistatsaddress{ \unblindinfo{Zalando Research, Zalando SE \\ Berlin, Germany \And Freie Universit\"at Berlin \\Berlin, Germany \And Freie Universit\"at Berlin \\Berlin, Germany \And MPI
for Intelligent Systems\\ T\"ubingen, Germany}{Anonymous institution} } ]
\begin{abstract}
We introduce a novel conditional density estimation model termed the
\emph{conditional density operator} (CDO).
It naturally captures multivariate, multimodal output densities and
shows performance that is competitive with recent neural conditional density models and Gaussian processes.
The proposed model is based on a novel approach to the
reconstruction of
probability densities from their kernel mean embeddings by drawing connections to estimation of Radon--Nikodym derivatives
in the reproducing kernel Hilbert space (RKHS).
We prove finite sample bounds for the estimation error in a standard
density reconstruction scenario, independent of problem dimensionality.
Interestingly, when a kernel is used that is also a
probability density, the CDO allows us to
both evaluate and sample the output density efficiently.
We demonstrate the versatility and
performance of the proposed model on both synthetic and real-world data.
\end{abstract}
\section{Introduction}
Conditional density estimation is an essential task in
statistics and machine learning \citep{Tsybakov08:INE,Di17}.
Popular techniques for estimating conditional densities
include kernel density estimators
\citep[KDE,][]{Tsybakov08:INE}, Gaussian processes \citep[GP,][]{Wi06}, and deep
neural networks \citep{Di17,papamakarios2017masked}.
While the KDE is simple to use,
it is known to suffer from the curse of
dimensionality.
The GP is a flexible model for conditional densities
that enjoys a closed-form posterior thanks to the
Gaussianity assumption, but approximate inference is often
required to model complex densities.
Lastly, deep neural networks have recently been used to model
complex densities.
Despite their great representational power, they require large
amounts of training data and are prone to overfitting.
%
The conditional mean embedding (CME) has emerged as an alternative
kernel-based nonparametric representation for complex
conditional distributions \citep{SHSF09,Song2013,MFSS16}.
The CME can model complex distributions nonparametrically and can be
estimated consistently from a finite sample.
It is mathematically elegant and is less prone to the curse of
dimensionality, see, e.g., \cite{Tolstikhin17:MEK}.
However, one of the fundamental drawbacks of the CME is that a reconstruction
of the associated conditional density becomes a non-trivial task.
To recover densities, a common approach is to
approximate them via a pre-image
problem \citep{Kanagawa14:KPI,Song08:TDE}, which requires
parametric assumptions on the densities. For sampling, kernel herding can be used \citep{Chen10:SKH}, which requires restrictive
assumptions to ensure fast convergence.
%
In this paper, we present a novel kernel-based supervised
learning model for estimating conditional densities, the
\emph{conditional density operator} (CDO). It has competitive performance with conditional density models based on deep neural networks~\citep{Di17}.
To derive our model, we first present the problem of reconstructing a probability density from its associated kernel mean embedding~\citep{MFSS16,Smola07Hilbert} and
connect it to the estimation of Radon--Nikodym derivatives.
While this very general problem has been tackled before in similar scenarios~\citep{FSG13,QB13}, we provide a
characterization of conditions under which the density reconstruction as an inverse problem has
a unique analytical solution.
We show that in practical applications,
this statistical inverse problem can be solved conveniently
using Tikhonov regularization~\citep{TA77,TikEtAl95}.
Furthermore, we give finite sample concentration bounds for the
stochastic reconstruction error of the Tikhonov solution.
When applied to conditional density estimation, our approach yields solutions that can capture multivariate, multimodal and non-Gaussian conditional densities and is not constrained by a homoscedastic noise assumption.
This compares favorably with standard GPs and is on par with neural conditional density models \cite{Wi06,Di17}.
In a set of experiments on toy and real-world data, we
demonstrate that these properties lead to state-of-the-art
results in conditional density estimation.
To summarize our contributions, we
\emph{(i)} derive conditions under which a density can be
reconstructed in the RKHS, \emph{(ii)} give a consistent
estimator for the reconstructed density in the form of a
statistical inverse problem, \emph{(iii)} provide
dimensionality-independent finite sample error bounds for the
estimation error of reconstructed densities, \emph{(iv)} introduce CDOs, a multivariate,
multimodal kernel-based conditional density model.
The rest of this paper is structured as follows:
In Section~\ref{sec:Preliminaries and assumptions}, we state assumptions and introduce some preliminaries from the literature.
Our main theoretical results are presented in Sections~\ref{sec:Density reconstruction and conditional density operators} and \ref{sec:cdo}, and Section~\ref{sec:Related work} discusses related work.
Experiments on a toy dataset, rough terrain estimation and traffic prediction are reported in Section~\ref{sec:Experiments}, while concluding remarks are presented in Section~\ref{sec:Conclusion}.
\section{Preliminaries}
\label{sec:Preliminaries and assumptions}
\paragraph{Reproducing kernel Hilbert space (RKHS).}
We only state the important facts here and collect related results in the supplementary material~\cite[see also][Section~4.5]{StCh08}.
We consider a measurable space $(\inspace, \Sigma)$, where $\inspace$ is a topological space
endowed with the Borel $\sigma$-algebra $\Sigma$.
Let $\ink \colon \inspace \times \inspace \rightarrow \R$ be a symmetric
positive semidefinite kernel which induces an RKHS
$\inrkhs = \overline{\mspan\{\ink(x, \cdot) \mid x \in \inspace \}}$, where
the closure is taken with respect to
the inner product, which satisfies $\ink(x,x') = \innerprod{\phi(x)}{\phi(x')}_\inrkhs$.
Here, $\phi(x) := \ink(x, \cdot)$ is known as the \emph{canonical feature map}.
\begin{assumption}[Separability] \label{ass:separability}
The RKHS $\inrkhs$ is separable. Note that for a Polish space $\inspace$,
the RKHS induced by a continuous kernel $\ink: \inspace \times \inspace \rightarrow \R$ is separable~\citep[see][Lemma 4.33]{StCh08}.
For a more general treatment of conditions implying separability, see~\cite{OwhadiScovel2017}.
\end{assumption}
The \emph{reproducing property}
$f(x) = \innerprod{f}{\phi(x)}_\inrkhs$ holds for all $f \in \inrkhs$ and $x \in \inspace$.
We fix a finite measure $\inmeas$ on $\inspace$ such that
$\int_\inspace \norm{\phi(x)}_{\inrkhs}^2 \ts \dd \inmeas(x) < \infty$.
Then the \emph{kernel mean embedding}
$\mu_\inmeas :=\int_{\inspace} \phi(x) \ts \dd \inmeas(x) \in \inrkhs$ of the measure
$ \inmeas $ exists \citep{SFL11} and the (uncentered)
\emph{kernel covariance operator}\footnote{Note that, technically, the term \emph{covariance operator} is misleading
when $ \inmeas $ is not a probability measure.
Since we will require $ \inmeas $
to be finite, we will nevertheless use this term to reflect the standard definition.}
$ C_\inmeas := \int_\inspace \phi(x) \otimes \phi(x) \ts \dd \inmeas(x) $ is well-defined as a positive self-adjoint Hilbert--Schmidt operator on $ \inrkhs $~\citep{Baker1973,Fukumizu04,MFSS16}.
Here, the map
$ \phi(x) \otimes \phi(x) \colon \inrkhs \rightarrow \inrkhs $, given by $ f
\mapsto \phi(x) \innerprod{f}{\phi(x)}_\inrkhs = \phi(x) \ts f(x)$ for
all $ f \in \inrkhs $, is the rank-one \emph{tensor product
operator}.
Whenever $ \inmeas $ is a
probability measure, the kernel mean embedding admits the standard estimate $ \hat{\mu}_\inmeas := \nref^{-1} \sum_{i=1}^\nref \phi(x_i) $
for i.i.d.\ samples $ (x_i)_{i=1}^\nref \sim \inmeas $.
For the covariance operator, we obtain the empirical estimate
$ \ecinref := \nref^{-1} \sum_{i=1}^\nref \phi(x_i) \otimes \phi(x_i) $ with samples as given above. Both $ \hat{\mu}_\inmeas $
and $ \ecinref $ converge in probability at rate $ \mathcal{O}(\nref^{-1/2})$
in the RKHS and Hilbert--Schmidt norms, respectively~\citep{MFSS16}.
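As a concrete illustration, the standard estimates above can be formed directly from Gram matrices. The following sketch (our own, with a Gaussian kernel; all names and parameter values are illustrative choices) evaluates $\hat{\mu}_\inmeas$ pointwise and assembles the Gram matrix through which $\ecinref$ acts on the span of the sampled features:

```python
import numpy as np

def gauss_kernel(X, Z, sigma=0.25):
    """Gaussian kernel matrix k(x, z) = exp(-(x - z)^2 / (2 sigma^2)) for 1-d inputs."""
    return np.exp(-(X[:, None] - Z[None, :]) ** 2 / (2.0 * sigma**2))

def mean_embedding(samples, z, sigma=0.25):
    """Standard estimate mu_hat(z) = (1/N) sum_i k(x_i, z)."""
    return gauss_kernel(samples, np.atleast_1d(z), sigma).mean(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 0.5, size=500)  # i.i.d. sample from a probability measure N(0, 0.5^2)
K = gauss_kernel(X, X)              # Gram matrix; C_hat acts on span{phi(x_i)} via K / N
```

As expected, the embedding estimate takes larger values near the mode of the sampling distribution than in its tails, and the Gram matrix is symmetric positive semidefinite, mirroring the positivity and self-adjointness of $\ecinref$.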
\paragraph{Inverse problems.}
The general theory of inverse problems,
pseudoinverse operators and
regularization
has been well studied in the context of statistical learning in recent years~\citep{DeVitoEtAl2004,DeVitoEtAl2005,Caponnetto2007,Smale2007,DeVitoEtAl06}; we therefore
introduce these concepts only briefly.
In general, the compact operator $\cinref$ cannot be inverted on the whole
space $\inrkhs$.
However, it admits a \emph{pseudoinverse}
$\cinref^\dagger$, which is a (generally unbounded) operator with domain
$\range(\cinref) + \range(\cinref)^\perp \subseteq \inrkhs$.
Note that $\range(\cinref) + \range(\cinref)^\perp = \inrkhs$ if and only if
$\range(\cinref)$ is a closed subspace, which is equivalent to $\inrkhs$ being finite dimensional.
The minimum norm solution to the inverse problem
$\cinref u = f$ with known right-hand side $f \in \domain(\cinref^\dagger)$ is given by
$u^\dagger := \cinref^\dagger f$ and is unique, but solutions of larger norm can exist
in general.
In practice,
one can resort to the \emph{Tikhonov-regularized} solution
$u_\alpha := (\cinref + \alpha \idop[\inrkhs])^{-1} f$ (for a regularization
parameter $\alpha > 0$) to stabilize
the problem against perturbed right-hand sides $\tilde{f}$ and ensure that the
solution is still well-defined even if $\tilde{f} \notin \domain(\cinref^\dagger)$.
Note that as $\alpha \to 0$ we have $\norm{u^\dagger - u_\alpha}_\inrkhs \to 0$.
Convergence rates for Tikhonov regularization schemes
have been derived in numerous settings depending on the problem and are usually connected to
rate of decay of the eigenvalues of $\cinref$. We refer the reader to the standard literature
on inverse problems and regularization~\citep{TA77,EG96,TikEtAl95,EHN96} for details.
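A small finite-dimensional sketch (our own construction, purely for illustration) shows the convergence $\norm{u^\dagger - u_\alpha}_\inrkhs \to 0$ as $\alpha \to 0$: for a rank-deficient symmetric positive semidefinite operator and a right-hand side in its range, the Tikhonov solutions approach the minimum norm solution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-deficient symmetric PSD "covariance operator": eigenvalues 1, 1/2, ..., 1/8, 0, 0.
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)))
eigs = np.array([1.0 / (i + 1) for i in range(8)] + [0.0, 0.0])
C = Q @ np.diag(eigs) @ Q.T

u_true = Q @ np.concatenate([rng.normal(size=8), [0.0, 0.0]])
f = C @ u_true                      # right-hand side lies in range(C)

u_dagger = np.linalg.pinv(C) @ f    # minimum norm solution u^dagger = C^dagger f
errors = [
    np.linalg.norm(u_dagger - np.linalg.solve(C + alpha * np.eye(10), f))
    for alpha in (1e-1, 1e-3, 1e-5)
]
```

In the eigenbasis of $C$, the error of $u_\alpha$ in the $i$th component is $\frac{\alpha}{\lambda_i + \alpha}$ times the corresponding coefficient of $u^\dagger$, so the entries of `errors` decrease monotonically as $\alpha$ shrinks.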
\paragraph{Conditional mean embedding.}
The kernel mean embedding $\mu_{\rho}$ has been used
extensively as a representation of the measure $\rho$ \citep{MFSS16}.
We now extend this idea to conditional
distributions~\citep{SHSF09,Gruen12,Song2013,MFSS16}.
Note that \citet{SHSF09} formulate their results in terms
of (generally nonexistent) inverse operators under adequate regularity
assumptions.
We use pseudoinverses instead of inverses, which aligns
with the classical theory of inverse problems.
Assume we have a topological output space $\outspace$ endowed with the Borel $\sigma$-algebra and a
positive semidefinite kernel
$\outk \colon \outspace \times \outspace \rightarrow \R$ inducing a separable RKHS $\outrkhs$
with feature map $\psi(y) := \outk(y, \cdot)$.
All other assumptions we make for the space $\inspace$, its
RKHS, and measures on $\inspace$ apply likewise for the output space
$\outspace$ and associated objects.
We assume that random variables $X$ and $Y$
with sample spaces $\inspace$ and $\outspace$ follow the joint distribution
$\meas_{XY}$ with marginals $\meas_{X}, \meas_{Y}$ and induced
conditional distribution $\meas_{Y \mid X}$.
Let $\cov[\mathit{YX}] := \int_{\inspace \times \outspace} \psi(y) \otimes \phi(x) \ts \dd \meas_{XY}(x,y)$ be the induced cross-covariance operator
from $\inrkhs$ to $\outrkhs$ and $\cov[X]$ the covariance operator on $\inrkhs$.
Then the \emph{conditional mean operator} (CMO) is defined as $\mathcal{U}_{Y \mid X} = \cov[\mathit{YX}]\cov[X]^{\dagger} \colon \inrkhs \rightarrow \outrkhs $ and satisfies the equation $\me[\meas_y] = \mathcal{U}_{Y \mid X} \me[\meas]$ for some distribution $\meas$ on $\inspace$,
where $\meas_y(\cdot) = \int_{\inspace} \meas_{Y|X=x}(\cdot) \, \dd \meas(x)$~\citep{SHSF09,Song2013}.
In particular, if $\meas$ is the Dirac measure on $x' \in \inspace$,
this yields $\me[\meas_{Y|X=x'}] = \mathcal{U}_{Y \mid X} \ink(x',
\cdot)$.\footnote{The latter is how the CMO is usually
introduced, while $\me[\meas_y] = \mathcal{U}_{Y \mid X} \me[\meas]$ is
referred to as \emph{kernel sum rule} in the literature \citep{Fukumizu13:KBR}.}
Note that the CMO is in general not a globally defined bounded operator.
It is defined pointwise as $\me[\meas_y] = \mathcal{U}_{Y \mid X} \me[\meas] \in \outrkhs$ for
$ \me[\meas] \in \domain \mathcal{U}_{Y \mid X}$ under the condition
that $\mathbb{E}[g(Y) \mid X = \cdot \,] \in \inrkhs$ for all $g \in \outrkhs$. This requirement
is examined in~\cite[Appendix A.1]{Fukumizu04}.
In practical applications, the pseudoinverse $\cov[X]^{\dagger}$ is usually
replaced with its Tikhonov-regularized analogue, ensuring that $\mathcal{U}_{Y \mid X}$ is
globally defined and bounded.
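In this regularized empirical form, the CMO yields the weight representation $\hat{\mu}_{Y \mid X = x} = \sum_{i} w_i(x)\, \outk(y_i, \cdot)$ with $w(x) = (K_X + n\lambda I)^{-1} k_X(x)$, where $K_X$ is the Gram matrix of the inputs and $k_X(x)$ the kernel vector at $x$ \citep{SHSF09,Song2013}. A minimal sketch on toy data (Gaussian kernels; all variable names and parameter values are our own choices):

```python
import numpy as np

def gauss(X, Z, s):
    """Gaussian kernel matrix for 1-d inputs."""
    return np.exp(-(X[:, None] - Z[None, :]) ** 2 / (2 * s**2))

rng = np.random.default_rng(2)
n, lam, sx, sy = 500, 1e-3, 0.1, 0.1
x = rng.uniform(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=n)
Kx = gauss(x, x, sx)

def cme(x_star, y_grid):
    """Evaluate mu_hat_{Y|X=x*}(y) = sum_i w_i(x*) k_Y(y_i, y) on a grid of y values."""
    w = np.linalg.solve(Kx + n * lam * np.eye(n),
                        gauss(x, np.atleast_1d(x_star), sx)[:, 0])
    return gauss(y, y_grid, sy).T @ w

y_grid = np.linspace(-1.5, 1.5, 301)
embedding = cme(0.25, y_grid)            # conditional embedding at x* = 0.25
y_mode = y_grid[np.argmax(embedding)]    # concentrates near sin(2*pi*0.25) = 1
```

The embedding itself is an RKHS function, not yet a density; turning such conditional embeddings into evaluable densities is exactly the reconstruction problem addressed in the next section.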
\section{Density reconstruction from kernel embeddings}
\label{sec:Density reconstruction and conditional density operators}
Our strategy for developing the CDO is as follows. In this section, we derive a method to reconstruct densities from their mean embeddings. We then apply this methodology to \emph{conditional} mean embeddings and obtain the CDO as the composition of density reconstruction and the conditional mean operator.\\
Assume we are given the mean embedding $\mu_\mathbb{P}$ of a target probability distribution $\mathbb{P}$.
We now show how to reconstruct a Radon--Nikodym derivative $ \frac{\dd \mathbb{P}}{\dd \inmeas}$ with respect
to a chosen finite positive \emph{reference measure} $\inmeas$ on $\inspace$
that satisfies $\mathbb{P} \ll \inmeas$ and Assumptions~\ref{ass:rkhs_representative} and~\ref{ass:injectivity}.
\begin{assumption}[RKHS representative] \label{ass:rkhs_representative}
We assume that the $L_1(\inmeas)$
equivalence class of the
Radon--Nikodym derivative $ \frac{\dd \mathbb{P}}{\dd \inmeas}$ admits a representative
which is an element of $\inrkhs$. For simplicity, we will write $ \frac{\dd \mathbb{P}}{\dd \inmeas} \in \inrkhs$ for this representative.
\end{assumption}
We note that Assumption~\ref{ass:rkhs_representative} is not always satisfied in practice and
is essentially a
model assumption. However, the approximation qualities of RKHSs in terms of their ``size'' with respect to
other function spaces such as $C(\inspace)$ or $L_p(\inmeas)$, $p \in [1,\infty)$, are well studied:
for many kernels it can be shown that $\inrkhs$ is dense in these spaces~\citep{MXZ:Universal2006,StCh08,Sriperumbudur08injectivehilbert}.
\begin{assumption}[Injective covariance operator] \label{ass:injectivity}
The kernel covariance operator $\cinref$ exists
(i.e. $\int_\inspace \norm{\phi(x)}_{\inrkhs}^2 \ts \dd \inmeas(x) < \infty$) and is injective.
Note that, for example, when $\ink$ is continuous on $\inspace \times \inspace$ and
$\inmeas$ has full support on $\inspace$, the covariance operator is always injective~\citep{Fukumizu13:KBR}.
\end{assumption}
The theoretical background used in the derivation of the following results
has appeared in a similar form in \cite{Fukumizu13:KBR}. We apply
it in the context of density reconstruction and provide a formal mathematical setting
in terms of a statistical inverse problem that can be used elegantly in practice.
The following result characterizes the Radon--Nikodym derivative $ \frac{\dd \mathbb{P}}{\dd \inmeas} \in \inrkhs$
as the solution of an inverse problem.
\begin{proposition}[Radon--Nikodym derivatives]
Let Assumptions~\ref{ass:rkhs_representative} and~\ref{ass:injectivity} be satisfied.
Then the inverse problem
\begin{equation} \label{eq:main_result}
\cinref u = \mu_\mathbb{P}, \quad u \in \inrkhs,
\end{equation} has the unique solution
$u^{\dagger} := \cinref^\dagger \mu_\mathbb{P} = \frac{\dd \mathbb{P}}{\dd \inmeas} \in \inrkhs$.
\end{proposition}
\begin{proof}
We have
$
\cinref \tfrac{\dd \mathbb{P}}{\dd \inmeas}
= \int\limits_\inspace \phi(x) \ts
\tfrac{\dd \mathbb{P}}{\dd \inmeas} (x) \ts \dd \inmeas(x)
= \int\limits_\inspace \phi(x) \ts \dd \mathbb{P}(x)
= \mu_\mathbb{P}.
$ Uniqueness of the solution follows directly since $\cinref$ is injective.
\end{proof}
Densities in the classical sense are Radon--Nikodym derivatives with respect to Lebesgue measure. This immediately gives the following special case.
\begin{corollary}[Density reconstruction]
\label{cor:main_result} Let $\inspace \subseteq \R^d$ be compact and the kernel $\ink$ continuous.
Let $\inmeas$ be Lebesgue measure on $\inspace$ and $\mu_\mathbb{P}$ be the kernel mean embedding of a probability distribution $\mathbb{P}$ on $\inspace$. If $\mathbb{P}$ admits a density and Assumption~\ref{ass:rkhs_representative} is satisfied, then the density is given by $\cinref^\dagger \mu_\mathbb{P}$.
\end{corollary}
Whenever we are given
an analytical mean embedding $\mu_\mathbb{P}$ in the setting of Corollary~\ref{cor:main_result},
we can compute the unique solution $\cinref^\dagger \mu_\mathbb{P}$ and reconstruct the density of $\mathbb{P}$.
In practice, we are usually given $\mu_\mathbb{P}$ in terms of an empirical estimate $\hat{\mu}_\mathbb{P}$, for
example as an output of a mean embedding-based statistical model.
We will now address the consistency and statistical details
for the typical case that $\mu_\mathbb{P}$ is given in terms of its standard estimate $\hat{\mu}_\mathbb{P}$ and we can sample
from the reference measure $\inmeas$.
We emphasize that~\eqref{eq:main_result} can in theory be solved with any kind of numerical
scheme for integral equations in the classical setting of inverse problems.
\subsection{Consistency and convergence rate of the Tikhonov-regularized solution}
\label{sec:Tikhonov solution}
In practice, we cannot access $\cinref$ and $\mu_\mathbb{P}$ analytically.
The idea is now to estimate $\cinref$ from an i.i.d. random sample.
For the general case, this can be achieved by importance sampling.
For the special case that
$\inmeas$ is the Lebesgue measure and $\inmeas(\inspace) = 1$, we can simply sample from it
under the assumption that $\inspace$ is of convenient shape. We can then use the standard estimate for $\ecinref$
(see Section~\ref{sec:Preliminaries and assumptions}). Additionally, we will assume
for now that $\mu_\mathbb{P}$
is also given in terms of its standard estimate $\hat{\mu}_\mathbb{P}$.
Note that $\hat{\mu}_\mathbb{P}$ might instead be an estimate of
a conditional mean embedding (as we will consider later) or the output of another
model such as the kernel Bayes' rule \citep{FSG13}.
Instead of computing the analytical density reconstruction $u^{\dagger} = \cinref^{\dagger} \mu_\mathbb{P}$, we construct an empirical
estimate of $u^{\dagger}$ by defining the empirical Tikhonov-regularized solution
\begin{equation} \label{eq:empirical_solution}
\hat{u} := (\ecinref+\alpha\idop[\inrkhs])^{-1} \hat{\mu}_\mathbb{P}
\end{equation}
for a regularization parameter $\alpha > 0$.
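On a concrete one-dimensional toy problem (kernel, bandwidth, regularization, and sample sizes are our own choices), $\hat{u}$ can be evaluated pointwise: substituting $\hat{u} = \alpha^{-1}(\hat{\mu}_\mathbb{P} - \ecinref \hat{u})$ into itself at the $\inmeas$-samples yields the $M \times M$ linear system $(\alpha \mathrm{I} + K/M)\, u = \hat{\mu}_\mathbb{P}(x_{1:M})$, after which $\hat{u}$ is available everywhere.

```python
import numpy as np

rng = np.random.default_rng(1)
k = lambda A, B, s=0.2: np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * s ** 2))

X = rng.uniform(0, 1, 400)             # M i.i.d. samples from nu (uniform on [0, 1])
Xp = np.sqrt(rng.uniform(0, 1, 400))   # N i.i.d. samples from P with density 2x
alpha, M = 1e-2, len(X)

K = k(X, X)                            # Gram matrix on the nu-samples
m = k(X, Xp).mean(axis=1)              # empirical mean embedding at the nu-samples
u = np.linalg.solve(alpha * np.eye(M) + K / M, m)

def u_hat(y):
    """Tikhonov estimate at y; should roughly track the true density 2y."""
    y = np.atleast_1d(y)
    return (k(y, Xp).mean(axis=1) - k(y, X) @ u / M) / alpha

print(u_hat(np.array([0.25, 0.75])))
```

Only a single $M \times M$ solve is required, independently of how many evaluation points are queried afterwards.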
We examine this problem under the assumption that $\ecinref$
is a standard empirical estimate based on
$M$ i.i.d.\ $\inmeas$-samples and $\hat{\mu}_\mathbb{P}$ is estimated from $N$ i.i.d.\ $\mathbb{P}$-samples.
Next, we show that the reconstruction error
$\norm{u^{\dagger} - \hat{u}}_\inrkhs$ vanishes in probability as $M,N \to \infty$ for an appropriately
chosen positive regularization scheme $\alpha \to 0$ depending on sample sizes.
We define the regularized solution
$u_\alpha := (\cinref + \alpha \idop[\inrkhs])^{-1} \mu_\mathbb{P}$ and
decompose the total error:
\begin{equation} \label{eq:error_decomposition}
\norm{u^{\dagger} - \hat{u}}_\inrkhs \leq \norm{ u^{\dagger} - u_\alpha }_\inrkhs + \norm{ u_\alpha - \hat{u}}_\inrkhs.
\end{equation}
The first error term is deterministic and depends only on the analytical nature of the problem based on the decay of the eigenvalues of $\cinref$.
The next result is based on a Hilbert space version of Hoeffding's inequality~\citep{Pinelis92,Pinelis94}
and gives a general concentration bound for the estimation error term $ \norm{ u_\alpha - \hat{u}}_\inrkhs $.
\begin{proposition}[Finite sample bound of estimation error]
\label{prop:stochastic_error}
Let $\sup_x \sqrt{k(x,x)} = \sup_x \norm{\phi(x)}_\inrkhs = c < \infty$
and $\alpha > 0$ be a fixed regularization parameter.
Let $0 < a < 1/2$ and $0 < b < 1/2 $ be fixed constants.
If $\ecinref = \nref^{-1} \sum_{i=1}^\nref \phi(x_i) \otimes \phi(x_i) $ with $(x_i)_{i=1}^\nref \stackrel{\mathrm{i.i.d.}}{\sim} \inmeas$
and $\hat{\mu}_\mathbb{P} = \ndat^{-1} \sum_{j=1}^\ndat \phi(x'_j)$ with $(x'_j)_{j=1}^\ndat \stackrel{\mathrm{i.i.d.}}{\sim} \mathbb{P}$ and both
sets of samples are independent,
then we have
\begin{equation}
\begin{split}
&\Pr{\norm{u_\alpha - \hat{u} }_\inrkhs \leq \frac{\nref^{-2b}}{\alpha^2}(\norm{\mu_\mathbb{P}}_\inrkhs + \ndat^{-2a}) +
\frac{ \ndat^{-2a}}{\alpha}
} \\ &\geq
\left[ 1-2 \exp
\left(
- \frac{\ndat^{1-2a}}{8 c^2} \right)
\right]
%
\left[ 1-2 \exp
\left(
- \frac{\nref^{1-2b}}{8 c^4} \right)
\right].
\end{split}
\end{equation}
\end{proposition}
The proof can be found in the supplementary material.
We emphasize that the above error bound does not depend
on the dimensionality of the data.
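To illustrate, the probability on the right-hand side of the bound approaches one quickly in the sample sizes alone, for any fixed data dimension (the constants $a$, $b$, $c$ below are our own illustrative choices):

```python
import math

def bound_prob(N, M, a=0.25, b=0.25, c=1.0):
    # product of the two factors on the right-hand side of the proposition
    p1 = 1 - 2 * math.exp(-N ** (1 - 2 * a) / (8 * c ** 2))
    p2 = 1 - 2 * math.exp(-M ** (1 - 2 * b) / (8 * c ** 4))
    return p1 * p2

print(bound_prob(10_000, 10_000))  # already very close to 1
```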
By combining the convergence of the deterministic error and the convergence in probability given by
Proposition~\ref{prop:stochastic_error}, we can obtain a
regularization scheme which ensures that $\hat{u}$ is a consistent
estimate of $u^{\dagger}$.
\begin{corollary}[Consistency and regularization choice]\label{col:consistency}
Let $\alpha = \alpha(\nref,\ndat)$ be a regularization
scheme such that $\alpha(\nref,\ndat) \to 0$
as well as
\begin{equation} \label{eq:consistency-scheme}
\frac{\nref^{-2b}}{\alpha(\nref,\ndat)^2} \to 0 \quad \textrm{and} \quad
\frac{ \ndat^{-2a}}{\alpha(\nref,\ndat)} \to 0
\end{equation} as $\nref,\ndat \to \infty$.
Then the empirical solution $\hat{u}$ obtained from~\eqref{eq:empirical_solution}
regularized with the scheme $\alpha(\nref,\ndat)$ converges
in probability to the analytical solution
$u^{\dagger}$.
One such choice, given $c' \in (0, 1)$, is
$\widetilde{\alpha}(\nref,\ndat) = \max(\nref^{-b},\ndat^{-2a})^{c'}$.
\end{corollary}
Small values for $c'$ imply larger bias and smaller variance (tighter bounds on the stochastic error), while large values for $c'$ imply smaller bias and larger variance.
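As a small illustration of the scheme $\widetilde{\alpha}$ (with illustrative values $a = b = 1/4$ and $c' = 1/2$; any admissible choice works), both quantities in~\eqref{eq:consistency-scheme} indeed decay as the sample sizes grow:

```python
def alpha_tilde(M, N, a=0.25, b=0.25, cp=0.5):
    # the regularization choice max(M^-b, N^-2a)^{c'} from the corollary
    return max(M ** (-b), N ** (-2 * a)) ** cp

a, b = 0.25, 0.25
terms = [(M ** (-2 * b) / alpha_tilde(M, M) ** 2,   # first condition
          M ** (-2 * a) / alpha_tilde(M, M))        # second condition (N = M here)
         for M in (10 ** 2, 10 ** 4, 10 ** 6)]
print(terms)  # both entries shrink toward zero
```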
Note that Proposition~\ref{prop:stochastic_error} gives bounds only for the case that $ \hat{\mu}_\mathbb{P}, \ecinref $ are given in
terms of their standard empirical estimates.
\section{Conditional density operators}
\label{sec:cdo}
In this section, we use Corollary~\ref{cor:main_result} to
define the \emph{conditional density operator} (CDO), which
directly results in a conditional density for an output variable given an input variable or a distribution over the input variable. This is achieved by combining the density reconstruction method derived in the last section with conditional mean operators.\\
Assume in what follows that we have fixed a finite positive reference measure $\outmeas$ on $\outspace$, such that $\coutref$ is a well-defined, injective, and positive self-adjoint Hilbert--Schmidt operator on $\outrkhs$.
Moreover, densities on $\outspace$ are assumed to be Radon--Nikodym derivatives with respect to $\outmeas$ (we get densities in the usual sense if $\outmeas$ is Lebesgue measure).
The following result is a direct consequence of Corollary~\ref{cor:main_result}.
\begin{theorem}[Conditional density operator]
\label{th:PEO}
Assume $\meas_y(\cdot) = \int_{\inspace} \meas_{Y|X=x}(\cdot) \, \dd \meas(x)$ admits a density $p_y \in \outrkhs$ with respect to reference measure $\outmeas$, such that the assumptions
of Corollary~\ref{cor:main_result} are satisfied.
Additionally assume that the conditional mean operator $\mathcal{U}_{Y \mid X} = \cov[YX]\cov[X]^{\dagger}$ for $\meas_{Y|X}$
exists and $\mu_\mathbb{P} \in \domain(\mathcal{U}_{Y \mid X})$.
Then
\begin{equation*}
\mathcal{A}_{Y \mid X} \me[\meas] := \cov[\outmeas]^{\dagger}\me[\meas_y] = \cov[\outmeas]^{\dagger}\mathcal{U}_{Y \mid X}\me[\meas] = \cov[\outmeas]^{\dagger}\cov[YX]\cov[X]^{\dagger}\me[\meas] \in F
\end{equation*}
exists and satisfies $$ p_y {=} \mathcal{A}_{Y \mid X} \me[\meas].$$
If $\meas$ is the Dirac measure at $x'$, this results in the density of $Y$ given $X=x'$ $$p_{Y|X=x'} {=} \mathcal{A}_{Y \mid X} \ink(x', \cdot).$$
\end{theorem}
We call the operator $\mathcal{A}_{Y \mid X} =
\cov[\outmeas]^{\dagger}\cov[YX]\cov[X]^{\dagger}$
mapping from $\inrkhs$ to $\outrkhs $ the \emph{conditional density
operator} (CDO).
The CDO has several advantages over GPs, the mainstream kernel method for conditional density estimation \citep{Wi06}.
In particular, it allows for density estimation in arbitrary output dimensions, unlike standard GPs, which estimate a $1$d density \citep[see the literature on multi-output GPs for a remedy, e.g.][]{Al12,Bo05}.
Moreover, multiple modes in the output can be captured by the CDO.
Though this might be achieved with GP mixtures, the CDO allows for
more flexibility as it requires no parametric assumptions on the mixture components.
Heteroscedastic noise on the output is accounted for by standard CDOs, but nontrivial to include in GP models.
Interestingly, any output kernel that is also a probability density gives
rise to CDOs where the output density can be both
evaluated and sampled efficiently.
Also, CDOs allow uncertain inputs with any distribution, while closed form predictions for GPs are only possible when the input uncertainty is Gaussian.
Conditional densities estimated by a CDO are illustrated in Figure~\ref{fig:donut}; see Section~\ref{sec:donut} for a description of the data generating
process.
\begin{figure*}
\includegraphics[width=\textwidth]{images/Donut_and_est_wide}
\caption{Cross sections of a donut shaped density and estimates using a CDO.}
\label{fig:donut}
\end{figure*}
\subsection{Consistency of the conditional density operator}
We can use the results from Section~\ref{sec:Tikhonov solution}
to assess the consistency of the CDO.
The CDO is defined pointwise
when the assumptions of Theorem~\ref{th:PEO} are satisfied.
Analogously to the empirical inverse problem in Section~\ref{sec:Tikhonov solution},
we replace the pseudoinverses of both $\cov[X]$ and $ \cov[\outmeas]$
with their regularized inverses for the empirical version of the CDO.
From the (unbounded) analytical version
$\mathcal{A}_{Y \mid X} = \cov[\outmeas]^{\dagger}\cov[YX]\cov[X]^{\dagger}$, we obtain
$\widehat{\mathcal{A}}_{Y \mid X} = (\ecov[\outmeas] + \alpha' \idop[\outrkhs]) ^{-1}
\ecov[YX] (\ecov[X]+ \alpha \idop[\inrkhs])^{-1} $ which is a
globally defined bounded operator.
The proof of Proposition~\ref{prop:stochastic_error} in the supplementary material
can directly be modified to see that whenever $\norm{\eme[\meas_y] - \me[\meas_y]}_\outrkhs \rightarrow 0$
for a suitable regularization scheme $\alpha > 0$,
we obtain a consistent regularized empirical solution
of the CDO when $\alpha' > 0$ is chosen appropriately.
We will leave the statistical details to future work but want to emphasize that
the proof of Proposition~\ref{prop:stochastic_error} can also be used to obtain bounds
for the conditional mean embedding by simply performing an additional composition
with a cross-covariance operator. See~\cite{SHSF09,Fukumizu13:KBR,Fukumizu15:NBI} for
asymptotic consistency results of the conditional mean embedding and appropriate regularization schemes.
\subsection{Numerical representation of the conditional density operator}
Assume that we have an i.i.d. sample $(x_{\scriptscriptstyle i},
y_{\scriptscriptstyle i})_{i=1}^\ndat \sim \meas_{XY}$ such
that the $\meas_{XY}$-induced conditional distribution
$\meas_{Y \mid X}$ is the distribution of interest and another
i.i.d. sample $(z_{\scriptscriptstyle i})_{i=1}^\nref
\sim \outmeas$, where $\outmeas$ is the reference measure on $\outspace$ which we use to reconstruct the desired conditional density.
The density over $\outspace$ induced by fixing the input at $x' \in \inspace$ is approximated as
\begin{equation}
\label{eq:ePEO point}
\widehat{\mathcal{A}}_{Y \mid X} \ink(x', \cdot) \approx \sum_{i=1}^{\nref}\beta_i \outk(z_{\scriptscriptstyle i}, \cdot)
\end{equation}
with $\beta = \nref^{-2} \ts (\outgram[Z]+ \alpha' \id[\nref])^{-2} \outgram[ZY] (\ingram[X] + \ndat \alpha \id[\ndat])^{-1} [\ink(x_{\scriptscriptstyle 1},x'), \dots, \ink(x_{\scriptscriptstyle \ndat}, x')]^\trans \in \R^\nref$, where we use the kernel matrices $\ingram[X] = \left[\ink(x_i,x_j) \right]_{ij} \in \R^{\ndat \times \ndat}$,
$\outgram[Z] = \left[\outk(z_i,z_j) \right]_{ij} \in \R^{\nref \times \nref}$ as well as
$\outgram[ZY] = \left[\outk(z_i,y_j) \right]_{ij} \in \R^{\nref \times \ndat}$ and
the corresponding identity matrices $\id[\ndat] \in \R^{\ndat \times \ndat}, \id[\nref] \in \R^{\nref \times \nref}$.
If one is interested in the marginal distribution of $Y$ when integrating out $x' \sim \meas$, the $\ink(x_{\scriptscriptstyle j}, x')$ are replaced by $\me[\meas](x_{\scriptscriptstyle j})$ in the expression for $\beta$.
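A direct implementation of~\eqref{eq:ePEO point} on a toy regression problem requires only standard linear solves (all kernels, bandwidths, regularization constants, and the data-generating process below are our own choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
gauss = lambda A, B, s: np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * s ** 2))

# toy joint sample: Y = sin(2 pi X) + small Gaussian noise
N, M = 200, 150
X = rng.uniform(0, 1, N)
Y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(N)
Z = np.linspace(-1.5, 1.5, M)   # grid representing the uniform reference measure rho
alpha, alpha_p = 1e-3, 1e-1

G_X, G_Z, G_ZY = gauss(X, X, 0.1), gauss(Z, Z, 0.1), gauss(Z, Y, 0.1)

def beta(x_star):
    kx = gauss(X, np.array([x_star]), 0.1)[:, 0]
    v = np.linalg.solve(G_X + N * alpha * np.eye(N), kx)
    A = G_Z + alpha_p * np.eye(M)
    return np.linalg.solve(A, np.linalg.solve(A, G_ZY @ v)) / M ** 2

b = beta(0.25)            # coefficients of the conditional density of Y | X = 0.25
dens = G_Z @ b            # estimated density evaluated on the grid Z
print(Z[np.argmax(dens)]) # mode of the estimate, ideally near sin(pi/2) = 1
```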
The derivation of the representation in~\eqref{eq:ePEO point}
builds upon a similar derivation of the conditional mean embedding estimate
and can be found in the supplementary material.
Detailed convergence rates and error bounds for this empirical estimate are
beyond the scope of this paper.
%
\section{Related work}
\label{sec:Related work}
Finding the pre-image of a feature vector in the RKHS is a classical problem in kernel methods
\citep{Kwok03:PP,Bakir04:PI}.
In this work, our goal is to reconstruct a density $p$ from the kernel
mean embedding $\mu_{\meas}$ of some distribution $\meas$.
There exist two popular approaches in the literature for recovering information from $\mu_{\meas}$, namely distributional pre-image learning \citep{Song08:TDE,Kanagawa14:KPI} and
kernel herding \citep{Chen10:SKH}.
Given an empirical kernel mean $\hat{\mu}_{\meas}$, the idea
of the former is to pick a
family of densities $\mathcal{P}=\{p_{\theta}\,:\,\theta\in\Theta\}$ and then
find $\theta^* =
\arg\min_{\theta\in\Theta}\|\hat{\mu}_{\meas} -
\mu_{p_{\theta}}\|^2_H$ \citep{Song08:TDE}.
The drawback of this approach is the parametric
assumptions on the family of densities $\mathcal{P}$.
Moreover, it requires solving a constrained non-convex optimization problem. A related approach \citep{TonChaTehSej2019} suggests using conditional means as input to a neural density model.
On the other hand, our method provides an analytic solution for conditional densities which only requires that $\meas$ is absolutely
continuous with respect to the reference measure $\inmeas$.
Alternatively, kernel herding aims to greedily generate a representative
set of $ T $ pseudo-samples from $\meas$ in a deterministic fashion using the estimate
$\hat{\mu}_{\meas}$ \citep{Chen10:SKH}.
The advantage of herding is an integration error of order
$\mathcal{O}(T^{-1})$ under some assumptions.
Similarly, our method gives rise to a probability density from which
random samples can be easily generated.
Note that while our work also relates to the literature of
kernel-based density ratio estimation \citep{Kanamori12:SAK,QB13}, our goal is not to
estimate a density ratio.
Furthermore, unlike previous work, we provide a rigorous treatment of the error bounds of our estimates and good choices for regularization constants.
Lastly, the kernel mean embedding has recently been applied to
fit high-dimensional \emph{implicit} density models such as
generative adversarial networks (GANs)
\citep{dziugaite2015training,li2015generative, li2017mmd} and
autoencoders \citep{tolstikhin2018wasserstein}.
It would also be interesting to extend our results to this
area of research.
Classical methods for (conditional) density estimation \citep{BH01,HRL04} are known to suffer from slow
convergence in high dimensions, see, e.g.,
\citet[Chap. 1]{Tsybakov08:INE}.
Some methods propose estimators that are similar to the CDO, although not making use of RKHS arguments and not proving consistency \citep{BH01}.
An advantage of the CDO is that it is less prone to the curse of
dimensionality.
Concretely, the convergence rate of Proposition~\ref{prop:stochastic_error}
and regularizing scheme from Corollary~\ref{col:consistency}
do not depend on the problem dimension.
Nevertheless, it might
affect the deterministic error which could converge arbitrarily slowly,
see, e.g., \citet{Tolstikhin17:MEK}.
Neural density models can also scale gracefully with increasing dimensions, as demonstrated empirically especially in the image generation domain \citep{KD18,Di17}.
However, little theory exists to confirm this observation and understand under which conditions on the problem and the network architecture it applies.
Standard neural density models can easily be extended to
include conditioning on an input variable. However,
conditioning on a distribution over the input variable is
non-trivial, unlike in the CDO setting.
In the RKHS setting, infinite dimensional exponential families (IDEF) and their conditional extension, kernel conditional exponential families (KCEF) assume the log-likelihood of a (conditional) density to be an RKHS function
\citep{SFGHK17,AG18}.
Fitting such a model is solved by using an optimization approach, while CDOs allow closed form solutions.
Furthermore, CDOs allow for trivial normalization of the estimated densities unlike kernel exponential family approaches.
Sampling from IDEF and KCEF approximations requires MCMC techniques rather than ordinary Monte Carlo as with our approach.
Sampling is necessary to estimate predictive mean and variance in IDEF models, while closed form expressions exist for CDOs, see \ref{sec:mean variance closed form}.
\section{Experiments}
\label{sec:Experiments}
In this section, we report results on one toy and two real-world datasets, showing competitive performance of the CDO in conditional density estimation tasks in comparison to recent state-of-the-art approaches.
We use a computational trick for large datasets which is described, along with a trick for high-dimensional output spaces, in the supplementary material~\ref{sec:computational tricks}. For regularization of the CDO in the experiments that follow, we always use Corollary~\ref{col:consistency} with $c' = 0.99999$. Neural density models used as baseline methods were implemented in PyTorch and optimized using ADAM \citep{Ki14} with a learning rate tuned to achieve the best training log likelihood. Hidden layers contained $100$ ReLUs each. The optimal number of hidden layers differed and is stated per experiment.
\subsection{Gaussian donut}
\label{sec:donut}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{images/Donut_at_0}\\
\includegraphics[width=0.95\linewidth]{images/Donut_at_1}
\caption{Errors of conditional density estimation for the Gaussian donut in $L_1(\outmeas)$-norm.}
\label{fig:Donut norm}
\end{figure}
For this toy example, a unit circle in the $(x,y)$ plane is embedded into a $3$D ambient space and slightly rotated around the $y$ axis; we then pick $50$ equidistant points on this circle.
Each of the points is the mean of an isotropic Gaussian distribution, and each mean has equal probability, giving rise to a mixture that we call a Gaussian donut.
We draw $50$ samples from the isotropic Gaussian noise distribution per mean to form the training data for a CDO that estimates the density on $y,z$ coordinates given $x$.
The reference measure $\outmeas$ is taken to be the uniform distribution on a zero-centered square with side length $4$.
See Figure~\ref{fig:donut} for the ground truth density and CDO estimate at $x$ equal to $0$ and $1$, respectively.
We report numerical errors in density approximation in $L_1(\outmeas)$-norm, i.e., $\|\widehat{u} - \dens_{Y,Z\mid X=x}\|_{L_1}$, in Figure~\ref{fig:Donut norm}. The input and output kernels were Laplace and Gaussian, respectively, with lengthscales set by the median heuristic \citep{garreau2017large}.
The uniform reference measure is represented by a regular grid of $\nref = \lfloor\sqrt{\ndat}\rfloor^2$ points.
The procedure is repeated $10$ times for different random seeds.
One comparison method is a GP fitted to each output dimension independently, with a Gaussian kernel and lengthscale optimized for highest marginal likelihood using GPyTorch \citep{GPW18}. Furthermore, we use conditional versions of RealNVP and masked autoregressive flows (MAF) with $5$ hidden layers, see \cite{Di17,papamakarios2017masked}. The KCEF method \citep{AG18} could not be used here because it yields unnormalized density estimates and we could not obtain reliable estimates of the normalizing constant.
See Figure~\ref{fig:Donut norm} for a plot of the $L_1$ error, which shows that the CDO provides competitive conditional density estimates on this simple dataset.
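The data-generating process described above can be sketched as follows (the rotation angle and noise scale are not stated exactly in the text and are chosen here for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# 50 equidistant means on the unit circle in the (x, y) plane ...
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)

# ... embedded in 3D and slightly rotated around the y axis (angle assumed)
theta = 0.15
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
means = circle @ R.T

# 50 isotropic Gaussian samples per mean (noise scale assumed)
samples = np.concatenate([m + 0.1 * rng.standard_normal((50, 3)) for m in means])
X_train, YZ_train = samples[:, 0], samples[:, 1:]   # density of (y, z) given x
print(samples.shape)   # -> (2500, 3)
```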
\subsection{Rough terrain reconstruction}
Rough terrain reconstruction is used in robotics and navigation \citep{Ha10,Gi10}.
Given measurements of longitude, latitude, and altitude, the task is to estimate the altitude for unobserved coordinates on a map.
We reproduce an experiment from \cite{Er18}, considering around $23$ million non-uniformly sampled measurements of Mount
St. Helens, binned into a $120 \times 117$ grid.
We randomly chose $80\%$ of the data as training, the rest as test data.
We fit an exact Gaussian Process by optimizing the length scale of a Gaussian kernel with respect to marginal likelihood of the training data and compute the scaled mean absolute error (SMAE) for the test locations.
Furthermore, we fit neural conditional density models based on RealNVP and MAF \citep{Di17,papamakarios2017masked}, where $10$ hidden layers gave the best training log likelihood.
For the conditional density operator, we pick a Gaussian kernel for input and output domains.
The input length scale is chosen using the median heuristic \citep{garreau2017large}.
The output domain is chosen as an interval based on the minimum and maximum of the training output data, with a uniform reference measure represented by equidistant grid points,
the output length scale based on the distance between adjacent grid points.
See Table~\ref{tab:rough terrain} for a summary of the SMAEs reached by each method.
The result again suggests that our method is competitive with other
kernel-based learning algorithms and recent neural density models.
We conjecture that added flexibility is a reason for the better performance compared to a GP.
While the output distribution of the GP is a Gaussian, in the CDO used here it is a mixture of Gaussians.
A related possibility is that we use a homoscedastic likelihood in the GP, leading to a certain minimum amount of assumed noise, while the CDO does not do this.
\begin{table}
\caption{Test Set SMAEs rough terrain}
\label{tab:rough terrain}
\centering
\begin{tabular}{lc}
\toprule
{Estimator} & {SMAE}\\
\midrule
\textbf{CDO}& $0.0269\pm0.0006$\\
\textbf{GP}& $0.0358\pm0.0006$\\
\textbf{Cond. Real NVP} & $0.0373 \pm 0.0380$\\
\textbf{Cond. MAF} &$0.0309 \pm 0.0395$\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Traffic density prediction from time features}
\begin{figure*}
\includegraphics[width=\linewidth]{images/Traffic_cdo_and_ae}
\caption{Road occupancy prediction experiment. \emph{Left:} Histogram of test data for three days in black, test data mean and prediction. \emph{Right:} Boxplots of scaled absolute errors with respect to test data.}
\label{fig:traffic}
\end{figure*}
In this experiment, we predict the occupancy rate of different locations on freeways in the San Francisco bay area based on a given day of week and time of day.\!\footnote{Detailed description in \cite{Cu11}, data available at \href{https://archive.ics.uci.edu/ml/datasets/PEMS-SF}{https://archive.ics.uci.edu/ml/datasets/PEMS-SF}.}
The occupancy rate is encoded as a number between $0$ and $1$ for $963$ different locations.
The measurements are sampled every 10 minutes, resulting in $144$ measurements per day (i.e., times of day).
See Figure~\ref{fig:traffic} for example histograms at one particular location.\\
In the training dataset, each day of the week occurred $32$ times (discarding measurements to get a balanced dataset), resulting in $32 \times 144 \times 7 = 32\,256$ input-output pairs. In the test set, each day of the week occurred $20$ times.
The task is to obtain a predictive density for a location's occupancy given time of day and day of week as inputs. \\
We fit a conditional density operator using Gaussian kernels on the output and Laplacian kernels on
the input domain.
Laplacians are chosen because they result in smoother estimates,
while Gaussians showed more oscillations for the output density estimates.
Samples for the uniform reference measure on the output domain are taken to be a regular grid between the minimum and maximum occurring values.
Bandwidth for both kernels is chosen based on the median heuristic.
For comparison, we use both RealNVP and MAF deep neural networks \citep{Di17,papamakarios2017masked}, where $5$ hidden layers gave the best training log likelihood.
We estimate the expectation (w.r.t.\ model predictive distribution) of the absolute error when estimating test set mean and variance, i.e., scaled mean absolute error (SMAE), and its standard deviation. Mean and variance are chosen because closed form estimates of these exist under the CDO.
As this is not the case for the neural models, we draw $2000$ samples for
estimation.
Even though the dataset is rather large, the CDO can be fitted in under one minute on a modern laptop using a scheme outlined in~\ref{sec:Large datasets trick using factorization of the joint probability}. We could not adapt this scheme to KCEF \citep{AG18}, making it impossible to fit this alternative kernel conditional density model: its memory requirements could not be satisfied even on a large compute server. Errors are summarized in Table~\ref{tab:traffic} and plotted in Figure~\ref{fig:traffic}.
Clearly, our CDO outperforms the neural models. While we also fitted GPs using the GPyTorch package, the errors were huge because this problem necessitates heteroscedastic likelihood noise, which is unavailable in currently maintained GP packages.
\begin{table}
\caption{Test Set SMAEs Road Occupancy Data}
\label{tab:traffic}
\centering
\begin{tabular}{lcc}
\toprule
{ } & {mean} & {sd}\\
\midrule
\textbf{CDO}& $0.02 \pm 0.05$ & $0.36 \pm 1.21$\\
\textbf{Cond. Real NVP} & $0.32\pm 0.41$ &$1.19 \pm 1.65$\\
\textbf{Cond. MAF} &$0.52 \pm 0.81$& $4.50 \pm 4.40$\\
\bottomrule
\end{tabular}
\end{table}
\section{Conclusion}
\label{sec:Conclusion}
In this paper, we show that the reconstruction of densities from kernel
mean embeddings can be formulated as an inverse problem under some
regularity assumptions.
In particular, we draw connections to the estimation of Radon--Nikodym derivatives with respect to a Lebesgue reference measure, for which the solution is shown to be unique.
We prove that the popular Tikhonov approach to solving the inverse problem is consistent and allows for finite sample bounds on the estimation error independent of the
dimensionality of the data.
However, we want to point out that the proposed Tikhonov scheme is only one possible approach for finding a solution.
We focus on the conditional density operator as a straightforward application of the density reconstruction result.
The CDO is closely related to the conditional mean embedding, can model multivariate, multimodal conditional distributions and performs competitively in our experiments.\\
In future work, numerical routines for scaling the method up to even larger datasets will be of interest.
One way to do this might be conjugate gradient algorithms and making use of Toeplitz and Kronecker structure in the kernels, as recently done in fitting GPs \citep{GPW18,WN15}.
Theoretical avenues to take might be to find rigorously justified ways of choosing good kernels and kernel parameters.
\subsubsection*{Acknowledgments}
\unblindinfo{
We would like to thank Kashif Rasul for providing the conditional RealNVP implementation for the traffic dataset and Ilja Klebanov and Tim Sullivan for helpful discussions and pointing out relevant references.
Partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- MATH+: The Berlin Mathematics Research Center, EXC-2046/1 -- project ID: 390685689.}{WITHHELD FOR BLIND REVIEW}
\bibliographystyle{abbrvnat}
\section{}
\begin{tikzpicture}
\node[obs] (X0) {$X_0$} ; %
\node[obs,right=3cm of X0] (X1) {$X_1$} ; %
\node[obs,right=3cm of X1] (X2) {$X_2$} ; %
\node[right=3cm of X2] (dots0) {\dots} ; %
\node[latent,below=2cm of X0] (Y0) {$Y_0$} ; %
\node[latent,below=2cm of X1] (Y1) {$Y_1$} ; %
\node[latent,below=2cm of X2] (Y2) {$Y_2$} ; %
\node[right=3cm of Y2] (dots1) {\dots} ; %
\path
(Y0) edge[->] node [right = 0.03cm] {$\bP (X_0|Y_0)$} (X0)
(Y1) edge[->] node [right = 0.03cm] {$\bP (X_1|Y_1)$} (X1)
(Y2) edge[->] node [right = 0.03cm] {$\bP (X_2|Y_2)$} (X2)
(Y0) edge[->] node [below = 0.03cm] {$\bP (Y_1|Y_0)$} (Y1)
(Y1) edge[->] node [below = 0.03cm] {$\bP (Y_2|Y_1)$} (Y2)
(Y2) edge[->] node [below = 0.03cm] {$\bP (Y_3|Y_2)$} (dots1)
;
\end{tikzpicture}
\end{document}
| {
"timestamp": "2019-10-30T01:15:45",
"yymm": "1905",
"arxiv_id": "1905.11255",
"language": "en",
"url": "https://arxiv.org/abs/1905.11255",
"abstract": "We introduce a novel conditional density estimation model termed the conditional density operator (CDO). It naturally captures multivariate, multimodal output densities and shows performance that is competitive with recent neural conditional density models and Gaussian processes. The proposed model is based on a novel approach to the reconstruction of probability densities from their kernel mean embeddings by drawing connections to estimation of Radon-Nikodym derivatives in the reproducing kernel Hilbert space (RKHS). We prove finite sample bounds for the estimation error in a standard density reconstruction scenario, independent of problem dimensionality. Interestingly, when a kernel is used that is also a probability density, the CDO allows us to both evaluate and sample the output density efficiently. We demonstrate the versatility and performance of the proposed model on both synthetic and real-world data.",
"subjects": "Machine Learning (cs.LG); Statistics Theory (math.ST); Machine Learning (stat.ML)",
"title": "Kernel Conditional Density Operators",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9693241947446617,
"lm_q2_score": 0.7310585727705127,
"lm_q1q2_score": 0.7086327623619588
} |
https://arxiv.org/abs/1806.06403 | Geometric mean extension for data sets with zeros | There are numerous examples in different research fields where the use of the geometric mean is more appropriate than the arithmetic mean. However, the geometric mean has a serious limitation in comparison with the arithmetic mean. Means are used to summarize the information in a large set of values in a single number; yet, the geometric mean of a data set with at least one zero is always zero. As a result, the geometric mean does not capture any information about the non-zero values. The purpose of this short contribution is to review solutions proposed in the literature that enable the computation of the geometric mean of data sets containing zeros and to show that they do not fulfil the `recovery' or `monotonicity' conditions that we define. The standard geometric mean should be recovered from the modified geometric mean if the data set does not contain any zeros (recovery condition). Also, if the values of an ordered data set are greater one by one than the values of another data set then the modified geometric mean of the first data set must be greater than the modified geometric mean of the second data set (monotonicity condition). We then formulate a modified version of the geometric mean that can handle zeros while satisfying both desired conditions. | \section{Introduction}
Increasingly, research generates large amounts of information. Yet it is hard to work with large data sets directly, hence it is necessary to reduce their complexity whilst maintaining essential information. The most common way in which data sets are summarised is the use of the statistical quantities called means, e.g., the arithmetic mean or the geometric mean. Such means summarize all the information of the data set in a single value. In particular, the geometric mean is widely used in research fields such as biological sciences \cite{thomas_what_1990}, environmental sciences \cite{hirzel_modeling_2003} and economics \cite{curran_valuing_1994}.
One of the limitations of the geometric mean is that it is not useful for data sets that contain one or more zero values. By definition, the geometric mean of a set of numbers with at least one zero is always zero, hence, it does not capture any information about the non-zero values. Thus, it is necessary to formulate a modified version of the geometric mean that is applicable to sets with zeros. The purpose of this contribution is to review previously proposed solutions and to develop one that fulfils the desirable conditions of recovery and monotonicity that we define in Section \ref{s:conditions}.
\section{Standard geometric mean}\label{s:intro}
The geometric mean \cite{abramowitz_handbook_1972} of a sequence $X=\{x_{i}\}_{i=1}^{n}$, $x_{i}>0$, is defined as:
\begin{equation}
G(X)=\left(\prod_{i=1}^{n}x_{i} \right)^{1/n} \label{geometricmean}
\end{equation}
When $n$ is large, it is computationally more feasible to use the following equivalent expression of (\ref{geometricmean}):
\begin{equation}
G(X)=\exp \left({\frac{1}{n}\sum_{i=1}^{n} \log(x_{i})}\right)\label{geometricmean2}
\end{equation}
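As a minimal illustration (a Python sketch with our own function names), the product form (\ref{geometricmean}) and the log-sum form (\ref{geometricmean2}) agree, while the latter avoids overflow of the intermediate product:

```python
import math

def geometric_mean(xs):
    """Log-sum form: numerically safer than the direct product,
    whose intermediate value can overflow for large n."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def geometric_mean_product(xs):
    """Direct product form, the n-th root of the product."""
    return math.prod(xs) ** (1.0 / len(xs))

data = [1.0, 10.0, 100.0]
# Both forms give (1 * 10 * 100)^(1/3) = 10.
assert abs(geometric_mean(data) - 10.0) < 1e-9
assert abs(geometric_mean_product(data) - 10.0) < 1e-9
```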
There are several reasons why using the geometric mean can be preferable to using the arithmetic mean. One important case is when the values in a set range over several orders of magnitude (e.g., plasmid transfer rates range over 7 orders of magnitude \cite{baker_mathematical_2016}), since the arithmetic mean would be dominated by the largest values. This makes application of the arithmetic mean to such data sets meaningless. In particular, values from a log-normal distribution have that feature. This distribution is important since the product of positive random independent variables (with square-integrable density function) tends toward such a distribution as a consequence of the central limit theorem \cite{hazewinkel_encyclopaedia_1990}. While normal distributions arise from many small additive errors, log-normal distributions arise from multiplicative errors, which are common for growth processes or others with positive feedback \cite{limpert_log-normal_2001}. They are therefore common in biological, environmental and economical sciences, i.e., as growth of organisms, populations or assets is proportional to their current values, any extra increase by chance will multiply further increases.
\section{Desired conditions of the geometric mean extension}\label{s:conditions}
There are two conditions that any extension of the geometric mean $G$ (which we denote by $G_{0}$) should satisfy:
\begin{itemize}
\item
Recovery condition. The usual geometric mean should be recovered when there are no zero-values, i.e., the relative difference between the standard geometric mean and its extension for a data set without zero values should be small.
If $X$ is a data set of only positive values, $G_{0}(X) \simeq G(X)$.
\item
Monotonicity condition.
$G_{0}$ is monotone non-decreasing in the data set, i.e., if the values of a data set ($X_{1}$) are greater one by one than the values of another data set ($X_{2}$), then $G_{0}(X_{1}) \geq G_{0}(X_{2})$.
In particular, the modified geometric mean should never increase when adding zeros to a data set.
\end{itemize}
\section{Habib's proposed solution}
\label{s:habib}
Only two solutions to this problem have been proposed to date, and neither satisfies both of the aforementioned conditions. Habib \citep{habib_geometric_2012} proposes a geometric mean expression for probability distributions whose domains include zero and/or negative values and derives an expression to calculate the geometric mean of such a data set. In particular, if we have a data set $X=\{x_{1},x_{2},\cdots,x_{n},0,0,\cdots,0\}$ with $n$ positive values and $m$ zeros,
then the geometric mean of $X$ according to \citep{habib_geometric_2012} is:
\begin{equation}
G_{1} (X)=\frac{n}{n+m} \exp{\left(\frac{1}{n+m}\sum_{i=1}^{n}\log(x_{i})\right)} \label{eqgeommeanHabib}
\end{equation}
For data sets of small positive values, adding zeros can increase (\ref{eqgeommeanHabib}), i.e., $G_{1}(X_{+}) < G_{1}(X)$ is possible, which violates the Monotonicity condition. Therefore, it is not a desirable solution. We show an example of this with real data in Figure \ref{fig:habib}.
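The monotonicity failure can be reproduced numerically. In the following Python sketch (our naming, with a hypothetical data set of small positive values), appending a zero to the data set increases (\ref{eqgeommeanHabib}):

```python
import math

def habib_g1(positives, m):
    """Habib's modified geometric mean G_1 for a data set with
    n positive values and m zeros."""
    n = len(positives)
    return (n / (n + m)) * math.exp(sum(math.log(x) for x in positives) / (n + m))

pos = [0.01, 0.01]                             # hypothetical small positive values
assert abs(habib_g1(pos, 0) - 0.01) < 1e-12    # no zeros: ordinary geometric mean
assert habib_g1(pos, 1) > habib_g1(pos, 0)     # adding a zero *increases* G_1
```

For values below one, dividing the (negative) log-sum by $n+m$ pulls the exponential up faster than the prefactor $n/(n+m)$ shrinks, which is why adding zeros can raise the result.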
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{comparison.pdf}
\caption{Different modified geometric means as a function of the number of zeros added to a data set of 983 positive values. These data come from simulations of Robyn Wright (University of Warwick) and represent specific growth rates of bacteria of different ages, including zero values. The original data set contains 1000 values (983 positive and 17 zero). The figure illustrates how Habib's modified geometric mean (blue line) increases despite adding zeros, before declining after 300 zeros have been added. The red line represents the solution described in Section \ref{s:plus1}, which does not recover the standard geometric mean (violating the Recovery condition). The three black lines represent our solution (Section \ref{s:solution}), for different values of $\epsilon$ (solid line $\epsilon=10^{-5}$, dashed line $\epsilon=10^{-4}$, dash-dotted line $\epsilon=10^{-3}$).}
\label{fig:habib}
\end{figure}
\section{An often used solution is to add one to all data in a set}\label{s:plus1}
Another proposed solution to deal with a data set with zeros is to add one to each value in the data set, calculating the geometric mean of this shifted data set according to (\ref{geometricmean2}) and then subtracting one again:
\begin{equation}
\exp{ \left(\frac{1}{n}\sum_{i=1}^{n} \log(x_{i}+1)\right)}-1, \label{geometricmeanadd1}
\end{equation}
This is a frequently used workaround in a number of different areas such as epidemiology \cite{alexander_index_2005,calhoun_combined_2007,barnard_measurement_2014,emerson_effect_1999,naish_prevalence_2004}, human pharmacology \cite{petry_efficacy_2013,el_setouhy_randomized_2004}, veterinary pharmacology \cite{shoop_discovery_2014,tielemans_comparative_2010}, entomology \cite{williams_time_1935,williams_use_1937}, marine ecology \cite{kuhn_plastic_2012}, environmental technology \cite{bastos_giardia_2004} and sociology \cite{thelwall_goodreads:_2017}.
The problem with this workaround (\ref{geometricmeanadd1}) is the arbitrariness of adding 1. As a consequence, (\ref{geometricmeanadd1}) does not satisfy the Recovery condition.
\section{Our solution, its limitation, and how to address it}\label{s:solution}
Nevertheless, we can exploit the idea in Section \ref{s:plus1} to formulate a modified version of the geometric mean, based on adding an optimal value $\delta$ to all data in a set, chosen so that the relative difference between the standard geometric mean and the modified version for the subset of positive values is controlled.
Let $X=\{x_{1}, x_{2}, \cdots, x_{n}\}$, $x_{i} \geq 0 \ \forall i$ and $\exists j \ x_{j}=0$; let $X_{+}=\{ x \in X \mid x>0 \}$. We define $G_{\epsilon,X}$ as
\begin{equation}
G_{\epsilon,X}(X)=\exp{ \left(\frac{1}{n}\sum_{i=1}^{n} \log(x_{i}+\delta)\right)}-\delta \label{modifiedgeometricmean}
\end{equation}
\noindent where
\begin{equation}
\delta=\sup \left\{ \delta_{*} \in (0,\infty) \ \middle\rvert \ \left\lvert G_{\epsilon,X}(X_{+})-G(X_{+}) \right\rvert < \epsilon G(X_{+}) \right\}, \quad 0<\epsilon \ll 1 \label{deltacondition}
\end{equation}
$\epsilon$ denotes the maximum relative difference between the standard geometric mean and our modified version for the subset of positive values. If not all values of $X$ are the same, $\delta$ is well-defined because $G_{\epsilon,X}(X)$ is strictly increasing in $\delta$ as a consequence of the superadditivity of the geometric mean. In the trivial case where all values of $X$ are equal to $x$, we could directly define $G_{\epsilon,X}(X) = x$. This modified geometric mean fulfils the Recovery and Monotonicity
requirements.
Note that $G_{\epsilon,X}$ depends on the data set $X$. This is a problem when comparing data sets. In order to deal with this inconvenience, we propose the following procedure:
Let $\epsilon>0$ and let $X_{1}$, $X_{2}$, $\cdots$, $X_{n}$ be the data sets to compare. For each $X_{i}$ we calculate $\delta_{i}$ according to (\ref{deltacondition}). Let $\delta_{min}=\min\{\delta_{1},\delta_{2},\cdots,\delta_{n}\}$. As $\delta_{min} \leq \delta_{i}$ $\forall i$, and (\ref{modifiedgeometricmean}) is increasing in $\delta$, $\delta_{min}$ satisfies the condition expressed in (\ref{deltacondition}) for each data set. Using this $\delta_{min}$, we can calculate (\ref{modifiedgeometricmean}) for each $X_{i}$ in a unified way.
\section{Algorithm}
\label{s:alg}
\textit{Inputs:} $X$ (data set), $\epsilon$
\begin{tabbing}
\qquad \enspace Calculate $X_{+} \subset X$, the set of the positive values of $X$ \\
\qquad \enspace Calculate the geometric mean of $X_{+}$ \\
\qquad \enspace By the bisection method, calculate $\delta$ of (\ref{deltacondition})\\
\qquad \enspace Calculate $G_{\epsilon,X}(X)$ using $\delta$
\end{tabbing}
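A minimal Python sketch of this algorithm (our naming; the bisection tolerance is our choice, and the bracketing step assumes that not all positive values are equal, so that the shifted mean can exceed $G(X_{+})$ by the required margin):

```python
import math

def shifted_gm(xs, delta):
    """Shifted geometric mean: exp(mean(log(x + delta))) - delta."""
    return math.exp(sum(math.log(x + delta) for x in xs) / len(xs)) - delta

def modified_geometric_mean(xs, eps=1e-5, tol=1e-12):
    """Find delta by bisection so that the relative difference on the
    positive subset stays below eps, then apply the shifted mean to the
    full data set (zeros included). Assumes not all positives are equal."""
    pos = [x for x in xs if x > 0]
    g_pos = math.exp(sum(math.log(x) for x in pos) / len(pos))
    lo, hi = 0.0, 1.0
    # Bracket: shifted_gm is increasing in delta and tends to the
    # arithmetic mean of pos, which exceeds g_pos by AM-GM.
    while shifted_gm(pos, hi) - g_pos < eps * g_pos:
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if shifted_gm(pos, mid) - g_pos < eps * g_pos:
            lo = mid
        else:
            hi = mid
    return shifted_gm(xs, lo)

# Recovery: without zeros the result is within eps of the ordinary mean.
assert abs(modified_geometric_mean([1.0, 100.0]) - 10.0) < 1e-2
# Zeros lower the result but no longer collapse it to 0.
assert 0.0 < modified_geometric_mean([0.0, 1.0, 100.0]) < 10.0
```

To compare several data sets, the same routine can be run per set and the smallest resulting $\delta$ reused for all of them, as described in Section \ref{s:solution}.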
\section{Geometric standard deviation}
\label{s:gsd}
If the geometric mean of a sequence, $X=\{x_{i}\}_{i=1}^{n}$, $x_{i}>0$, is denoted as $G$, then the geometric standard deviation \cite{kirkwood_geometric_1979} is defined as:
\begin{equation}
\sigma_{gsd}=\exp\left({\sqrt{\frac{\sum_{i=1}^{n}\left(\log (x_{i}/G) \right)^{2}}{n}}}\right) \label{eqgsd}
\end{equation}
Note that unlike the standard deviation, the geometric standard deviation is a multiplicative factor. That is, instead of subtracting and adding the arithmetic standard deviation to the arithmetic mean to obtain the lower and upper bounds of the interval, the geometric mean is divided and multiplied by the geometric standard deviation to obtain the lower and upper bounds of the interval. As in the case of the geometric mean, the geometric standard deviation is not useful for data sets with zero values (Eq.~(\ref{eqgsd}) is not even defined in this case).
Nevertheless, we can, analogously to section \ref{s:solution}, formulate a modified version of (\ref{eqgsd}) in order to include data with zero values:
Let $X=\{x_{1}, x_{2}, \cdots, x_{n}\}$, $x_{i} \geq 0 \ \forall i$ and $\exists j \ x_{j}=0$. We define $\sigma_{gsd}^{\epsilon,X}$ as:
\begin{equation}
\sigma_{gsd}^{\epsilon,X}=\exp\left({\sqrt{\frac{\sum_{i=1}^{n}\left(\log ((x_{i}+\delta_{*})/G_{\epsilon,X}) \right)^{2}}{n}}}\right) \label{eqgsdmod}
\end{equation}
where $\epsilon, \delta_{*}$ and $G_{\epsilon,X}$ have the same meaning as in section \ref{s:solution}.
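Analogously, a Python sketch of the modified geometric standard deviation (our naming; $\delta$ and the modified mean $G_{\epsilon,X}$ are assumed to have been computed as in Section \ref{s:alg}):

```python
import math

def modified_gsd(xs, delta, g_mod):
    """Modified geometric standard deviation: multiplicative spread of
    the shifted values x + delta about the modified mean g_mod."""
    mean_sq = sum(math.log((x + delta) / g_mod) ** 2 for x in xs) / len(xs)
    return math.exp(math.sqrt(mean_sq))

# Sanity check with equal values: x + delta = 2.5 for every entry, the
# modified mean is exp(log 2.5) - 0.5 = 2.0, so the spread factor is 1.25.
assert abs(modified_gsd([2.0, 2.0, 2.0], 0.5, 2.0) - 1.25) < 1e-12
```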
As an example, we can use the same data as in Fig. \ref{fig:habib} to calculate how the modified geometric standard deviation varies depending on the number of zeros added (Fig. \ref{fig:stdmod}). Figure \ref{fig:stdmod} shows that our modified geometric standard deviation could take excessively large values, so our proposal is not ideal and we suggest that finding a better solution for the geometric standard deviation is an open problem.
\begin{figure}
\centerline{\includegraphics[width=.8\textwidth]{gsd.pdf}}
\caption{Modified geometric standard deviation as a function of the number of zeros added to a data set of 983 positive values (same data as in Fig.~\ref{fig:habib}). The modified geometric standard deviation increases strongly when zeros are added, starting from a value of $\approx 3$ without zeros, and eventually declines again.}
\label{fig:stdmod}
\end{figure}
\section{Discussion}
\label{s:discuss}
The geometric mean is preferable to the arithmetic mean when data are log-normally distributed or range over orders of magnitude; but the geometric mean has an important drawback when data sets contain zeros. Here, we propose a modified version of the geometric mean that can be used for data sets containing zeros. Previously proposed solutions have undesired properties that our solution avoids.
However, our proposed solution is not universal (it depends on the data set). Likewise, the solution for calculating the geometric standard deviation that we propose is not ideal, as it can lead to excessively large values. We hope that this contribution stimulates the search for a universal solution. Nonetheless, we have developed a procedure to compare different data sets using the modified geometric mean $G_{\epsilon,X}$. Our open source Matlab and Python code is available on GitHub (\emph{https://github.com/RobertoCM/geomMeanExt}).
\section*{Acknowledgement}
The authors thank Robyn Wright (University of Warwick) for posing the problem and JPIAMR/MRC for funding (Dynamics of Antimicrobial Resistance in the Urban Water Cycle in Europe (DARWIN), MR/P028 195/1).
\section*{References}
| {
"timestamp": "2019-04-05T02:19:48",
"yymm": "1806",
"arxiv_id": "1806.06403",
"language": "en",
"url": "https://arxiv.org/abs/1806.06403",
"abstract": "There are numerous examples in different research fields where the use of the geometric mean is more appropriate than the arithmetic mean. However, the geometric mean has a serious limitation in comparison with the arithmetic mean. Means are used to summarize the information in a large set of values in a single number; yet, the geometric mean of a data set with at least one zero is always zero. As a result, the geometric mean does not capture any information about the non-zero values. The purpose of this short contribution is to review solutions proposed in the literature that enable the computation of the geometric mean of data sets containing zeros and to show that they do not fulfil the `recovery' or `monotonicity' conditions that we define. The standard geometric mean should be recovered from the modified geometric mean if the data set does not contain any zeros (recovery condition). Also, if the values of an ordered data set are greater one by one than the values of another data set then the modified geometric mean of the first data set must be greater than the modified geometric mean of the second data set (monotonicity condition). We then formulate a modified version of the geometric mean that can handle zeros while satisfying both desired conditions.",
"subjects": "Applications (stat.AP)",
"title": "Geometric mean extension for data sets with zeros",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9693241965169939,
"lm_q2_score": 0.7310585669110202,
"lm_q1q2_score": 0.7086327579778897
} |
https://arxiv.org/abs/1910.09901 | Parallel Stochastic Optimization Framework for Large-Scale Non-Convex Stochastic Problems | In this paper, we consider the problem of stochastic optimization, where the objective function is in terms of the expectation of a (possibly non-convex) cost function that is parametrized by a random variable. While the convergence speed is critical for many emerging applications, most existing stochastic optimization methods suffer from slow convergence. Furthermore, the emerging technology of parallel computing has motivated an increasing demand for designing new stochastic optimization schemes that can handle parallel optimization for implementation in distributed systems. We propose a fast parallel stochastic optimization framework that can solve a large class of possibly non-convex stochastic optimization problems that may arise in applications with multi-agent systems. In the proposed method, each agent updates its control variable in parallel, by solving a convex quadratic subproblem independently. The convergence of the proposed method to the optimal solution for convex problems and to a stationary point for general non-convex problems is established. The proposed algorithm can be applied to solve a large class of optimization problems arising in important applications from various fields, such as machine learning and wireless networks. As a representative application of our proposed stochastic optimization framework, we focus on large-scale support vector machines and demonstrate how our algorithm can efficiently solve this problem, especially in modern applications with huge datasets. Using popular real-world datasets, we present experimental results to demonstrate the merits of our proposed framework by comparing its performance to the state-of-the-art in the literature. 
Numerical results show that the proposed method can significantly outperform the state-of-the-art methods in terms of the convergence speed while having the same or lower complexity and storage requirement. | \chapter{Parallel Stochastic Optimization Framework}
\documentclass[journal,draftclsnofoot,onecolumn,12pt,twoside]{IEEEtran}
\usepackage{graphicx}
\usepackage{amsmath, amsthm, amssymb}
\usepackage{lipsum}
\usepackage{cite}
\usepackage{subfigure}
\usepackage{rotating}
\usepackage{fancyhdr}
\usepackage{gensymb}
\usepackage{float}
\usepackage{multicol}
\usepackage{algorithmic}
\usepackage{breqn}
\usepackage[linesnumbered,ruled]{algorithm2e}
\usepackage{color}
\usepackage{bbm}
\newcommand\at[2]{\left.#1\right|_{#2}}
\DeclareMathOperator{\argmin}{arg\,min}
\DeclareMathOperator{\argmax}{arg\,max}
\newtheorem{lemma}{Lemma}
\newtheorem{theorem}{Theorem}
\newtheorem{corollary}{Corollary}
\newtheorem{remark}{Remark}
\newtheorem{definition}{Definition}
\newtheorem{fact}{Fact}
\newtheorem{example}{Example}
\newtheorem{construction}{Construction}
\newtheorem{assumption}{Assumption}
\newtheorem{proposition}{Proposition}
\normalsize
\hyphenation{op-tical net-works semi-conduc-tor}
\begin{document}
\title{Parallel Stochastic Optimization Framework for Large-Scale Non-Convex Stochastic Problems}
\author{Naeimeh~Omidvar,~\IEEEmembership{Member,~IEEE,}
An~Liu,~\IEEEmembership{Senior Member,~IEEE,}
Vincent~Lau,~\IEEEmembership{Fellow,~IEEE,}
Danny~H.~K.~Tsang,~\IEEEmembership{Fellow,~IEEE,}
and~Mohammad~Reza~Pakravan,~\IEEEmembership{Member,~IEEE}
}
\maketitle
\begin{abstract}
In this paper, we consider the problem of stochastic optimization, where the objective function is in terms of the expectation of a (possibly non-convex) cost function
that is parametrized by a random variable. While the convergence speed is critical for many emerging applications, most existing
stochastic optimization methods suffer from slow convergence.
Furthermore, the emerging technology of parallel computing has motivated
an increasing demand for designing new stochastic optimization schemes that can handle parallel optimization for implementation in distributed systems.
We propose a fast parallel stochastic optimization framework that can solve a large class of possibly non-convex stochastic optimization problems that may arise in applications with multi-agent systems. In the proposed method, each agent updates its control variable in parallel, by solving a convex quadratic subproblem independently.
The convergence of the proposed method to the optimal solution for convex problems and to a stationary point for general non-convex problems is established.
The proposed algorithm can be applied to solve a large class of optimization problems arising in important applications from various fields, such as machine learning and wireless networks.
As a representative application of our proposed stochastic optimization framework, we focus on large-scale support vector machines and demonstrate how our algorithm can efficiently solve this problem, especially in modern applications with huge datasets.
Using popular real-world datasets, we present experimental results to demonstrate the merits of our proposed framework by comparing its performance
to the state-of-the-art
in the literature. Numerical results show that the proposed method can significantly outperform the state-of-the-art methods in terms of the convergence speed while having the same or lower complexity and storage requirement.
\end{abstract}
\begin{IEEEkeywords}
Stochastic optimization, parallel optimization, distributed systems, non-convex, large-scale optimization, machine learning, support vector machines.
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
Iteratively minimizing a (possibly non-convex) stochastic objective function over a constraint set is a central task in a variety of fields and applications, such as signal processing, wireless communications, machine learning, social networks, economics, statistics and bioinformatics, to name just a few.
Such stochastic problems arise in two different types of formulations, as discussed in the following.
The first formulation type corresponds to the case where the objective function is in terms of the expectation of a cost function which is parametrised by a random (vector) variable, as shown in the following formulation:
\begin{equation}\label{eq: main stochastic optimisation_v01}
\displaystyle \min_{ \boldsymbol{x} \in \mathcal{X} } F \left( \boldsymbol{x} \right) = \mathbb{E}_{\boldsymbol{\zeta}} \left[ f \left(\boldsymbol{x}, \boldsymbol{\zeta} \right) \right] ,
\end{equation}
where $ \boldsymbol{x} $ is the optimization variable and $ \boldsymbol{\zeta} $ is the random variable involved in the problem. The expectation involved in the objective function may not have a closed-form expression due to the statistics of the random variables being unknown or the computational complexity of computing the exact expectation being excessively high.
Therefore, deterministic optimization methods cannot solve this optimization problem.
Such a problem formulation is encountered in fields such as wireless communications, signal processing, economics and bioinformatics. For instances of such optimization problems, see \cite{shuai2018stochastic, myTSPpaper, rezaei2019Journal-IoT, marti2013stochastic, movahednasab2019ICC, rezaei2018GLOBECOM, omidvar2015Globecom, bedi2019asynchronous, omidvar2015PIMRC, movahednasab2019TCOM, rezaei2018PIMRC, omidvar2016cross}.
Another category of problem formulations that utilize stochastic optimization corresponds to large-scale optimization in which the objective function is deterministic, but is given by the sum of a large number of sub-functions, as shown in the following formulation:
\begin{equation}\label{eq: main stochastic optimisation_v02}
\displaystyle \min_{ \boldsymbol{x} \in \mathcal{X} } F \left( \boldsymbol{x} \right) = \dfrac{1}{n} \sum_{i=1}^{n} f_i \left(\boldsymbol{x} \right),
\end{equation}
where $ \boldsymbol{x} $ is the optimization variable, $ n $ is the number of terms in the sum and each sub-function $ f_i \left(\boldsymbol{x} \right) = f \left(\boldsymbol{x}, \boldsymbol{y}_i \right), ~\forall i=1,\cdots,n $ is characterised by a data sample $ \boldsymbol{y}_i $.
Such
problems naturally arise in various applications in the machine learning area, such as classification, regression, pattern recognition, image processing, bio-informatics and social networks \cite{zhang2018energy, li2018learning, agarap2018breast, veitch2018empirical}. Moreover, these optimization problems are typically huge and need to be solved efficiently.
Note that the above problem formulation is naturally deterministic.
However, in large-scale problems encountered in many emerging applications dealing with big data, the number of data samples $ n $, or equivalently the number of sub-functions (and/or the size of the optimization variable $ \boldsymbol{x} $), is very large. Hence, deterministic optimization approaches (such as gradient descent \cite{boyd2009convex}) cannot be utilized to solve them, as the computational complexity of each iteration would be excessively high: calculating the gradient of the objective function at each iteration requires calculating the gradients of all of these sub-functions, which may not be practical.
As such, the extremely large size of datasets is a critical challenge for solving large-scale problems arising in emerging applications, which makes the classical optimization methods intractable. Consequently, there is an increasing need to develop new methods with tractable complexity to handle large-scale problems efficiently \cite{BigData2014convex}.
To address the aforementioned challenge,
stochastic optimization approaches are used. Under such approaches, at each iteration, instead of calculating the \textit{exact gradient} of the objective function, a \textit{stochastic gradient} will be calculated by randomly picking one sub-function (corresponding to one randomly chosen or arrived data sample) and calculating the gradient of that sub-function as a stochastic approximation of the true gradient of the objective function. As such, the computational complexity of gradient calculation is $ 1/n^{\mathrm{th}} $ that of deterministic gradient methods, and is independent of the number of data samples.
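For the finite-sum formulation, this can be made concrete with a scalar least-squares example (our illustrative choice, not a problem from this paper): the gradient of one randomly chosen sub-function costs $1/n$ of the full gradient, and averaging it over all indices recovers the full gradient exactly, i.e., it is unbiased.

```python
# Sub-functions f_i(w) = 0.5 * (w * x_i - y_i)^2, one per data sample.
data = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0)]

def full_gradient(w):
    """Exact gradient of F(w) = (1/n) * sum_i f_i(w): needs all n samples."""
    return sum((w * x - y) * x for x, y in data) / len(data)

def sub_gradient(w, i):
    """Gradient of the single sub-function f_i: 1/n of the cost of
    full_gradient, used as a stochastic gradient when i is random."""
    x, y = data[i]
    return (w * x - y) * x

# Unbiasedness: averaging over all indices reproduces the full gradient.
w = 0.5
avg = sum(sub_gradient(w, i) for i in range(len(data))) / len(data)
assert abs(avg - full_gradient(w)) < 1e-12
```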
Moreover, note that the intuition behind utilising stochastic optimization
methods instead of deterministic
approaches for
the deterministic problem formulation in \eqref{eq: main stochastic optimisation_v02} is that
the objective function in \eqref{eq: main stochastic optimisation_v02} can be viewed as the sample average approximation of the stochastic objective function
described in \eqref{eq: main stochastic optimisation_v01}, where each sample function $ f_i \left(\boldsymbol{x} \right) $ corresponds to a realisation of the random cost function $ f \left(\boldsymbol{x}, \boldsymbol{\zeta} \right) $ (i.e., under a fixed realisation $ \boldsymbol{\zeta}=\boldsymbol{y}_i $)
\cite{BigData2014convex}.
{\color{black}{Having encountered the above two forms of optimization problems (that require stochastic optimization methods) in different areas,}} researchers from various communities, including but not limited to the optimization and pure mathematics, neural networks, and machine learning fields, have been working on stochastic optimization algorithms. Specifically, they aim to propose new stochastic optimization algorithms that meet the requirements of emerging and future applications, as will be discussed in the following.
First of all, the main requirement for designing new stochastic optimization algorithms
is \textit{fast convergence}. This is because the new stochastic optimization problems encountered in emerging and future applications are mainly large-scale problems that need to be handled in a very short time; otherwise, their obtained solutions may become out-dated and hence, no longer valid at the time of convergence. Therefore, efficient algorithms that can converge very fast are essential.
Furthermore, in many new applications, there are
multiple agents that have control over different variables
and need to optimise the whole-system performance together in a distributed way.
These applications and scenarios deal with distributed systems in which the associated stochastic optimization problems need to be iteratively solved by a parallel stochastic optimization method.{\footnote{Such applications arise in various fields, for example, multi-agent resource allocation in wireless interference systems \cite{Daniel2016parallel}, peer-to-peer networks, ad hoc networks and cognitive radio systems in wireless networks, parallel-computing data centres in big data, and large-scale distributed systems in economics and statistics, to name just a few.}}
In addition, with the recent advances in multi-core parallel processing technology,
it is increasingly desirable to develop parallel algorithms
that allow various control variables to be simultaneously updated at their associated agents, at each iteration.
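As a toy illustration of such simultaneous updates (our example, not the algorithm proposed in this paper), two agents can update their own coordinates of a coupled quadratic objective in parallel, each solving a one-dimensional convex quadratic subproblem built at the same current iterate:

```python
def grad_F(x):
    """Gradient of the coupled quadratic F(x) = x0^2 + x1^2 + 0.5*x0*x1."""
    return [2.0 * x[0] + 0.5 * x[1], 2.0 * x[1] + 0.5 * x[0]]

x = [1.0, -2.0]
for _ in range(50):
    g = grad_F(x)
    # Jacobi-style parallel step: each agent i minimizes its own local
    # model g[i]*d + (3/2)*d^2 using the *same* current iterate x,
    # so both coordinates can be updated simultaneously.
    x = [x[i] - g[i] / 3.0 for i in range(2)]

assert abs(x[0]) < 1e-6 and abs(x[1]) < 1e-6  # joint minimizer is the origin
```

Here the iteration contracts because the quadratic coupling is weak (diagonally dominant); convergence conditions for the general scheme are exactly what frameworks such as the one proposed in this paper establish.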
The above requirements have not been fully addressed by the classical stochastic optimization algorithms. Therefore, new algorithms are needed that can efficiently solve large-scale stochastic optimization problems with fast convergence and support parallel optimization of the involved control variables. The existing algorithms that aim to meet such requirements can be categorised into three groups: stochastic gradient-based, stochastic majorization-minimization and stochastic parallel decomposition methods, as discussed in the following sections.
\subsection{Stochastic Gradient-Based Algorithms}
Stochastic gradient-based methods are variations of the well-known stochastic gradient descent (SGD) method
\cite{Shamir12, Shamir2013}. In the update equations of these methods, instead of the exact gradient of the objective function, which is not available for problems of the general form of \eqref{eq: main stochastic optimisation_v01} and is impractical to calculate for problems of the general form of \eqref{eq: main stochastic optimisation_v02}, a noisy estimate of the gradient is used. Such a noisy gradient estimate, referred to as a \textit{stochastic gradient},
is usually obtained by calculating the gradient of the observed sample function $ f \left(\boldsymbol{x}, \boldsymbol{\zeta} \right) $ for the observed realisation of the random variable $ \boldsymbol{\zeta} $ in the case of the problem formulation in \eqref{eq: main stochastic optimisation_v01}, or the gradient of the sub-function $ f_i \left(\boldsymbol{x} \right) $ for a randomly chosen index $ i $ in the case of the problem formulation in \eqref{eq: main stochastic optimisation_v02}. Since the negative of a stochastic gradient is used as the update direction at each iteration, these methods are called stochastic gradient-based.
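As a minimal illustration of this update rule, the following sketch runs projected SGD on a toy constrained problem. All names here (`projected_sgd`, the quadratic sample loss, the box constraint) are our illustrative assumptions, not part of the paper's formulation:

```python
import numpy as np

def projected_sgd(grad_sample, project, x0, steps=500, lr=0.05, seed=0):
    """Plain projected SGD: step against a stochastic gradient, then
    project back onto the feasible set after every update."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        zeta = rng.standard_normal(x.shape)          # one realisation of the random vector
        x = project(x - lr * grad_sample(x, zeta))   # stochastic gradient step + projection
    return x

# Toy instance: f(x, zeta) = 0.5 * ||x - (target + 0.1 * zeta)||^2, so a
# stochastic gradient is x - target - 0.1 * zeta; the feasible set is the
# box [0, 1]^2, whose Euclidean projection is a componentwise clip.
target = np.array([0.3, 0.8])
grad = lambda x, zeta: x - target - 0.1 * zeta
proj = lambda x: np.clip(x, 0.0, 1.0)
x_hat = projected_sgd(grad, proj, x0=np.zeros(2))
```

With a constant step-size the iterate only hovers in a noise ball around `target` rather than converging exactly, which is one concrete manifestation of the slow convergence of stochastic-gradient directions discussed next.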
Various numerical results show that using such an update direction leads to slow convergence. This is mainly because the stochastic gradient may be a loose approximation of the true gradient and converges to it only slowly.
To improve the slow convergence of the stochastic gradient-based update direction, acceleration techniques have been proposed \cite{polyak1992acceleration}, including
averaging over the iterates \cite{Shamir12}, $ \alpha$-suffix averaging \cite{Shamir12}, and polynomial-decay averaging \cite{Shamir2013}. However, although some of these methods have been proved to achieve the optimal convergence rate for a considered class of problems \cite{Shamir12}, their obtained rate is asymptotic, and numerical experiments on a large number of problems show that in practice, their improvement is not significant compared to the SGD method
\cite{Shamir2013}.
Furthermore, the convergence of the aforementioned works is proved mainly for the special case of strongly convex objective functions, and these methods may not work well for other convex functions or for the general class of non-convex problems.
The existing works that do consider non-convex problems are mainly designed for the unconstrained case \cite{bertsekas2000gradient,tsitsiklis1986distributed}. In fact, under some strict requirements on the step-sizes \cite{nemirovski2009robust}, \cite{Shamir12}, these works establish descent-based convergence of the algorithm in the unconstrained case. However, such an analysis may not remain valid in the presence of the projection step required by the constraints.
Finally, another popular technique to speed up convergence, especially in training algorithms for deep networks such as the ADAM algorithm \cite{ADAM_2014}, is mini-batch stochastic gradient estimation.
However, with the rapid growth of data and increasing model complexity, it still exhibits a slow convergence rate, while the per-iteration complexity of mini-batch algorithms grows linearly with the batch size \cite{accelerating_minibatch_SGD_2019}.
In this paper, we are interested in the general class of non-convex stochastic optimization problems and aim to propose a fast-converging stochastic optimization algorithm that can also handle constrained problems.
\subsection{Stochastic Majorization-Minimization}
To better support non-convex stochastic optimization problems, the stochastic majorization-minimization method is also widely used.
This method is a non-trivial extension of the majorization-minimization (MM) method for deterministic problems.
MM is an optimization method to minimise a \textit{possibly non-convex} function by iteratively minimising a convex upper-bound function that serves as a surrogate for the objective function \cite{lange2000optimization}. The intuition behind the MM method is that minimising the upper-bound surrogate functions over the iterations monotonically drives the objective function value down. Because of the simplicity of the idea behind MM, it has been popular for a wide range of problems in signal processing and machine learning applications, and there are many existing algorithms that use this idea, including the expectation-maximisation \cite{cappe2009line}, DC programming \cite{gasso2009recovering}
and proximal algorithms \cite{wright2009sparse,beck2009fast}.
However, extending MM approaches to the stochastic optimization case is not trivial, because in a stochastic optimization problem there is no closed-form expression for the objective function (due to the expectation involved), and it is therefore difficult to find the required convex upper-bound approximation (i.e., the surrogate function) for the objective function.
To address this issue, stochastic MM \cite{mairal2013stochastic}, a.k.a.\ stochastic successive upper-bound minimization (SSUM) \cite{razaviyayn2016stochastic}, has been proposed as a new class of algorithms for large-scale non-convex optimization problems. It extends the idea of MM to stochastic optimization problems in the following way.
Instead of majorizing the original objective function, stochastic MM finds a majorizing surrogate for the observed sample function at each iteration.
The current and previously found instance surrogate functions (a.k.a.\ sample surrogate functions) are then incrementally combined to form an approximate surrogate for the objective function, which is minimised to update the iterate.
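The incremental-combination step can be sketched for the special case of quadratic instance surrogates sharing a common curvature $L$, where the minimiser of the averaged surrogate has a closed form. The toy problem and all names below are our illustrative assumptions, not the algorithms of \cite{mairal2013stochastic} or \cite{razaviyayn2016stochastic}:

```python
import numpy as np

def stochastic_mm(grad_sample, x0, L=1.0, steps=400, seed=1):
    """Stochastic-MM-style sketch with quadratic instance surrogates
        fhat_k(x) = f(x_k, z_k) + g_k^T (x - x_k) + (L/2) ||x - x_k||^2,
    all sharing curvature L.  The running average of these surrogates is
    itself quadratic, and its minimiser is exactly the running average of
    the per-surrogate minimisers x_k - g_k / L, tracked below in m."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)          # minimiser of the averaged surrogate
    for k in range(1, steps + 1):
        zeta = rng.standard_normal(x.shape)
        g = grad_sample(x, zeta)
        rho = 1.0 / k             # diminishing averaging weight
        m = (1 - rho) * m + rho * (x - g / L)
        x = m                     # minimise the averaged surrogate
    return x

# Same toy loss as before: gradient of 0.5 * ||x - (target + 0.1 * zeta)||^2.
target = np.array([0.3, 0.8])
x_mm = stochastic_mm(lambda x, z: x - target - 0.1 * z, x0=np.zeros(2))
```

Because all instance surrogates share the curvature $L$, combining them only requires averaging the points $\boldsymbol{x}_k - \boldsymbol{g}_k / L$; for richer surrogate families this bookkeeping is precisely what drives up the per-iteration cost, as discussed below.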
Under some conditions, mainly on the instance surrogate functions, it has been shown that the approximate surrogate evolves into a majorizing surrogate for the expected cost in the objective of the original stochastic optimization problem \cite{mairal2013stochastic}. The major condition is that the instance surrogate should be an upper bound for the observed sample cost function at each iteration \cite{razaviyayn2016stochastic}.
In some related works this condition needs to hold only locally, while most existing works require the surrogate to be a global upper bound \cite{razaviyayn2016stochastic}.
Although the upper-bound requirement is fundamental for the convergence of these methods, it is restrictive in practice, mainly because finding a proper upper bound for the sample cost function at each iteration may itself be difficult.
Hence, although such an upper bound facilitates the minimization at each iteration, finding it generally increases the per-iteration computational complexity of the algorithm. Consequently, minimising the approximate surrogate at each iteration is practical only when the surrogates are simple enough to be easily optimized, e.g., when they can be parametrised with a small number of variables \cite{razaviyayn2016stochastic}. Otherwise, the complexity of stochastic MM may outweigh the simplicity of the idea behind it, making the method impractical.
In addition to this complexity issue, stochastic MM methods may not be implementable in parallel. Since, in general, it is not guaranteed that the problem of optimising the approximate surrogate at each iteration is decomposable with respect to the control variables of the different agents, a centralised implementation might be required. Consequently, this method may not be suitable for distributed systems and applications that require parallel implementations.
Motivated by these issues of stochastic MM, in this paper we propose a stochastic scheme with an approximate surrogate that is not only easy to obtain and minimise at each iteration, but is also easily decomposed in parallel.
Specifically, the instance surrogate function in our method does not have to be an upper bound; moreover, it can be calculated and optimized at each iteration with low complexity, as will be seen in the next section.
However, this brings new challenges that must be tackled to prove the convergence of the proposed method. Most importantly, since the instance surrogate in our algorithm is no longer an upper bound, the monotonic-decrease property of stochastic MM cannot be utilized in our case. It is therefore more challenging to show that the approximate surrogate eventually becomes an upper bound for the expected cost in the objective function. We show how we tackle these challenges in later sections.
\subsection{Parallel Stochastic Optimization}
As explained before, there is an increasing need for stochastic optimization algorithms that enable parallel optimization of the control variables by different agents in a distributed manner. Many gradient-based methods are parallelisable in nature \cite{Shamir12,Shamir2013, nesterov2013gradient, necoara2013efficient, tseng2009coordinate}.
However, as mentioned before, they suffer from slow convergence in practice \cite{razaviyayn2014parallel, facchinei2014flexible}.
There are only a few works on parallel stochastic optimization in the literature. The authors in \cite{Daniel2016parallel} have proposed a stochastic parallel optimization method that decomposes the original stochastic (non-convex) optimization problem into parallel deterministic convex sub-problems, which are then solved independently by different agents in a distributed fashion.
The method proposed here differs from that in \cite{Daniel2016parallel} in the following ways.
Firstly, unlike our method, the algorithm in \cite{Daniel2016parallel} requires a weighted averaging of the iterates, which slows down the convergence in practice.
Secondly, in our method the objective function is approximated with an incremental convex approximation that, for an arbitrary objective function, is easier to calculate and minimize, with lower computational complexity, than that in \cite{Daniel2016parallel}.
These differences are elaborated further in Section \ref{sec: comparison_with_works}.
\subsection{Contributions of This Paper}
In this paper, we address the aforementioned issues of the existing works and propose a fast-converging, low-complexity parallel stochastic optimization algorithm for general non-convex stochastic optimization problems.
The main contributions of this work can be summarised as follows.
\begin{itemize}
\item \textbf{A stochastic convex approximation method for general (possibly non-convex) stochastic optimization problems with guaranteed convergence:} We propose a stochastic convex approximation framework that can solve general stochastic optimization problems (i.e., without requiring the objective to be strongly convex or even convex) with low complexity. We analyze the convergence of the proposed framework for both convex and non-convex stochastic optimization problems, and prove its convergence to the optimal solution for convex problems and to a stationary point for general non-convex problems.
\item \textbf{A general framework for parallel stochastic optimization}:
We show that our proposed method can be applied for parallel decomposition of stochastic multi-agent optimization problems arising in distributed systems. Under our proposed method, the original (possibly non-convex) stochastic optimization problem is decomposed into parallel deterministic convex sub-problems and each sub-problem is then solved by the associated agent, in a distributed fashion. Such a parallel optimization approach can be highly beneficial for reducing computational complexity, especially in large-scale optimization problems.
\item \textbf{Applications to solving large-scale stochastic optimization problems with fast convergence:} We show by simulations that our proposed framework can efficiently solve large-scale stochastic optimization problems in the area of machine learning.
Comparisons with the state-of-the-art methods show that our method significantly outperforms them in terms of convergence speed, while maintaining low computational complexity and storage requirements.
\end{itemize}
The rest of the paper is organised as follows. Section \ref{sec: ProbFormulation} formulates the problem.
Section \ref{sec: ProposedMethod} introduces the proposed parallel stochastic optimization
framework and presents its convergence results for both the convex and non-convex cases.
In Section \ref{sec:SVM_Simulations}, we illustrate an important application example of the proposed framework in the area of machine learning, and show how the proposed method can efficiently solve the problem in this application with low complexity and high convergence speed.
Simulation results and comparisons with the state-of-the-art methods are presented in Section \ref{sec: SVM_sim}. Finally, Section
\ref{sec: SVM_conclusion}
concludes the paper.
\section{Problem Formulation}\label{sec: ProbFormulation}
Consider a multi-agent system composed of $ I $ users, each independently controlling a strategy vector $ \boldsymbol{x}_l \in \mathcal{X}_l, ~l=1,\cdots,I $, that aim to solve the following stochastic optimization problem jointly in a distributed manner:
\begin{equation}\label{eq: main stochastic optimisation}
\displaystyle \min_{ \boldsymbol{x} \in \mathcal{X} } F \left( \boldsymbol{x} \right) \triangleq \mathbb{E}_{\boldsymbol{\zeta}} \left[ f \left(\boldsymbol{x}, \boldsymbol{\zeta} \right) \right] ,
\end{equation}
where $ \boldsymbol{x} \triangleq \left( \boldsymbol{x}_l \right)_{l=1}^I $ is the joint strategy vector and $ \mathcal{X} \triangleq \mathcal{X}_1 \times \cdots \times \mathcal{X}_I $ is the joint strategy set of all users. Moreover, the objective function $ F \left( \boldsymbol{x} \right) $ is the expectation of a
sample cost function $ f \left(\boldsymbol{x}, \boldsymbol{\zeta} \right) : \mathcal{X} \times \Omega \rightarrow \mathbb{R} $ which depends on the joint strategy vector $ \boldsymbol{x} $ and a random vector $ \boldsymbol{\zeta} $. The random vector $ \boldsymbol{\zeta} $ is defined on a probability space denoted by $ \left( \Omega, \mathcal{F}, P \right) $, where the probability measure $ P $ is unknown.
This problem formulation is very general and includes many optimization problems as special cases. In particular, since the objective function is not assumed to be convex over the set $ \mathcal{X} $, the considered optimization problem can be non-convex in general.
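For instance, when $ \boldsymbol{\zeta} $ is drawn uniformly from a finite index set $ \{1,\ldots,N\} $ and $ f \left(\boldsymbol{x}, i \right) = f_i \left(\boldsymbol{x}\right) $, problem \eqref{eq: main stochastic optimisation} reduces to the finite-sum (empirical risk minimization) problem
\begin{equation*}
\min_{ \boldsymbol{x} \in \mathcal{X} } \; \frac{1}{N} \sum_{i=1}^{N} f_i \left(\boldsymbol{x}\right) ,
\end{equation*}
which is ubiquitous in machine learning applications.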
The following assumptions, which are classical in the stochastic optimization literature and are satisfied by a large class of problems \cite{nemirovski2009robust}, are made throughout this paper.
\begin{assumption}[Problem formulation structure]\label{assump: stoch opt formulation}
For the problem formulation in \eqref{eq: main stochastic optimisation}, we assume
\begin{itemize}
\item[a)] The feasible sets $ \mathcal{X}_l, ~ \forall l=1,\ldots,I $, are compact and convex;\footnote{This assumption guarantees that there exists a solution to the considered optimization problem.}
\item[b)] For any random vector realisation $ \boldsymbol{\zeta} $, the
sample cost function $ f \left(\boldsymbol{x}, \boldsymbol{\zeta} \right) $ is
continuously differentiable over $ \mathcal{X} $ and has a Lipschitz continuous gradient;
\item[c)] The objective function $ F \left( \boldsymbol{x} \right) $ has a Lipschitz continuous gradient with constant $ L_{\nabla F} < +\infty $.
\end{itemize}
\end{assumption}
We aim to design a distributed iterative algorithm with low complexity, fast convergence, and low storage requirements that solves the considered stochastic optimization problem when the distribution of the random variable $ \boldsymbol{\zeta} $ is unknown, or when it is practically impossible to accurately compute the expected value in \eqref{eq: main stochastic optimisation}, so that we only have access to observed samples (i.e., realisations) of the random variable $ \boldsymbol{\zeta} $.
\section{Proposed Parallel Stochastic Optimization Algorithm}\label{sec: ProposedMethod}
\subsection{Algorithm Description}
Note that the considered problem in \eqref{eq: main stochastic optimisation} is possibly non-convex. Moreover, due to the expectation involved, its objective function does not have a closed-form expression.
These two issues make finding the stationary point(s) of this problem very challenging.
In the following, we tackle these challenges and propose an iterative decomposition algorithm that solves the problem in a distributed manner: the original objective function is replaced with incremental, strongly convex surrogate functions, which the agents then update and optimise in parallel.
Note that the proposed method can also
support the mini-batch case. Therefore, for the sake of generality, we consider the
mini-batch stochastic gradient estimation, where the size of the mini-batch, parametrized by $ B \in \{ 1, 2, \ldots \} $, can be chosen to achieve a good trade-off between the per-iteration complexity and the convergence speed.
\begin{algorithm}
\begin{algorithmic}[1]
\STATE {\textbf{Input:} $ \boldsymbol{x}^0 \in \mathcal{X}
; $ \left\lbrace \omega_k \right\rbrace_{k\geq 1}
; $ \left\lbrace \alpha_k \right\rbrace_{k\geq 1}
; $ k=0 $; $ \boldsymbol{h}^0_l=\boldsymbol{0}, ~ \forall l=1,\ldots,I $; mini-batch size $ B \geq 1 $.}
\STATE If $ \boldsymbol{x}^k $ satisfies a suitable termination criterion: STOP. \label{new_iteration}
\STATE
Observe the
batch of independent random vector realisations $ \boldsymbol{Z}^k = \{ \boldsymbol{\zeta}^k_b \}_{b=1}^{B} $,
and hence, its
associated
sample function
$ f^k \left( \boldsymbol{x} \right) = \frac{1}{B} \sum_{b=1}^B f \left(\boldsymbol{x}, \boldsymbol{\zeta}^k_b \right) $.
\FOR {$l = 1$ to $I$}
\STATE Update
the vector
\begin{equation}\label{eq: h^k update}
\boldsymbol{h}_l^k = \left( 1-\omega_k \right) \boldsymbol{h}_l^{k-1} + \omega_k \nabla_l f^k \left(\boldsymbol{x}^{k-1} \right).
\end{equation}
%
%
\STATE Define the surrogate function
\begin{equation}\label{eq: hat_f_k update}
\hat{f}_l^k(\boldsymbol{x}_l) = \dfrac{1}{2 \alpha_k} \parallel \boldsymbol{x}_l - \boldsymbol{x}_l^{k-1} \parallel^2 + \langle \boldsymbol{h}^k_l , \boldsymbol{x}_l - \boldsymbol{x}_l^{k-1} \rangle.
\end{equation}
\STATE Compute
\begin{equation}\label{eq: x^k update}
\boldsymbol{x}_l^k = \argmin_{\boldsymbol{x}_l \in \mathcal{X}_l} \hat{f}_l^k(\boldsymbol{x}_l).
\end{equation}
\ENDFOR
\STATE $ k \rightarrow k+1$, and go to line \ref{new_iteration}.
\end{algorithmic}
\caption{Stochastic Parallel Decomposition Algorithm}
\label{alg: main}
\end{algorithm}
The proposed parallel decomposition method is described in Algorithm \ref{alg: main}. The iterative algorithm proceeds as follows: At each iteration $ k $, a batch of $ B $ random vectors
$ \boldsymbol{\zeta}^k_b , ~ \forall b=1,\ldots,B $
is realised, and each agent $ l = 1,\ldots, I $ computes the gradient of the associated sample function
$ f^k \left(\boldsymbol{x} \right) $ with respect to its control variable $ \boldsymbol{x}_l $
at the latest iterate, i.e., $ \nabla_l f^k \left( \boldsymbol{x}^{k-1} \right) $. Then,
combining this newly computed gradient with the previous ones, agent $ l $ incrementally updates a vector $ \boldsymbol{h}_l^k $, which is an estimate of the exact gradient of the original objective function with respect to $ \boldsymbol{x}_l $. It will be shown that this vector eventually converges to the true gradient. Using this gradient estimate and the latest iterate $ \boldsymbol{x}_l^{k-1} $, agent $ l $ then constructs
a quadratic deterministic surrogate function, as in \eqref{eq: hat_f_k update}.
Agent $ l $ then minimises the surrogate function to update its control variable $ \boldsymbol{x}^k_l $.
In this way, by solving
deterministic quadratic sub-problems in parallel, each agent minimises a convex sample-based approximation of the original non-convex stochastic function.
Note that the first term in the surrogate function \eqref{eq: hat_f_k update} is a proximal regularisation term and makes the surrogate function strongly convex with parameter $ \dfrac{1}{\alpha_k} $. The role of the second term is to incrementally estimate the unavailable exact gradient of the original objective function at each iteration (i.e., $ \nabla_{\boldsymbol{x}} F \left(\boldsymbol{x}^{k-1} \right) $), using the sample gradients collected over the iterations so far.
According to \eqref{eq: h^k update}, the direction $ \boldsymbol{h}^k $ is recursively updated based on the previous direction $ \boldsymbol{h}^{k-1} $ and the newly observed stochastic gradient $ \nabla f^k \left(\boldsymbol{x}^{k-1} \right) $.
Under proper choices of the sequences $ \left\lbrace \omega_k \right\rbrace_{k=1}^\infty $ and $ \left\lbrace \alpha_k \right\rbrace_{k=1}^\infty $, this incremental estimate of the exact gradient becomes increasingly accurate as $ k $ grows and eventually converges to the exact gradient (this will be shown in Lemma \ref{lem: h_k --> nabla f}).
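To make the per-agent update concrete: since the surrogate \eqref{eq: hat_f_k update} is a strongly convex quadratic, its minimizer over any convex $ \mathcal{X}_l $ is the Euclidean projection $ \Pi_{\mathcal{X}_l}\left( \boldsymbol{x}_l^{k-1} - \alpha_k \boldsymbol{h}_l^k \right) $. The following minimal Python sketch (ours, not the authors' code; the box constraint, the step-size exponents, and the toy quadratic objective are illustrative assumptions) runs the updates \eqref{eq: h^k update} and \eqref{eq: x^k update} for a single agent:

```python
import numpy as np

def surrogate_minimizer(x_prev, h, alpha, lo=-1.0, hi=1.0):
    """Minimize (1/(2*alpha))*||x - x_prev||^2 + <h, x - x_prev> over the box
    [lo, hi]^n; the minimizer is the projection of x_prev - alpha*h."""
    return np.clip(x_prev - alpha * h, lo, hi)

def run(grad_sample, x0, n_iters=5000, seed=0):
    """Single-agent instance of the h-update and x-update of Algorithm 1."""
    rng = np.random.default_rng(seed)
    x, h = x0.astype(float), np.zeros_like(x0, dtype=float)
    for k in range(1, n_iters + 1):
        omega, alpha = k ** -0.6, k ** -0.8     # omega_1 = 1; one admissible choice
        h = (1 - omega) * h + omega * grad_sample(x, rng)   # gradient tracking
        x = surrogate_minimizer(x, h, alpha)                # proximal step
    return x

# Toy objective F(x) = E[0.5*||x - c - noise||^2], whose minimizer is x* = c.
c = np.array([0.3, -0.5])
x_star = run(lambda x, rng: x - (c + 0.1 * rng.standard_normal(2)), np.zeros(2))
```

On this toy problem the iterates approach the minimizer $ c $; in the multi-agent case, each agent runs the same two update lines on its own block $ \boldsymbol{x}_l $ in parallel.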
\subsection{Convergence Analysis of the Proposed Method}
In the following, we present the main convergence results of the proposed Algorithm \ref{alg: main}. Prior to that, let us state the following assumptions on the noise of the stochastic gradients as well as the sequences $ \left\lbrace \omega_k \right\rbrace $ and $ \left\lbrace \alpha_k \right\rbrace $.
\begin{assumption}[Unbiased gradient estimation with bounded variance]\label{assump: noise}
For any iteration $ k \geq 0 $ and any $ b = 1,\ldots,B $, the following hold almost surely:
\begin{itemize}
\item[a)] $ \mathbb{E} \left[ \nabla F \left(\boldsymbol{x}^{k} \right) - \nabla f \left(\boldsymbol{x}^{k}, \boldsymbol{\zeta}^{k}_b \right) \left| \mathcal{F}_k \right. \right] = \boldsymbol{0} $,
\item[b)] $ \mathbb{E} \left[ \parallel \nabla F \left(\boldsymbol{x}^{k} \right) - \nabla f \left(\boldsymbol{x}^{k}, \boldsymbol{\zeta}^{k}_b \right) \parallel^2 \left| \mathcal{F}_k \right. \right] < \infty $,
\end{itemize}
where $ \mathcal{F}_k \triangleq \left\lbrace \boldsymbol{x}^{0}, \boldsymbol{Z}^{0}, \ldots, \boldsymbol{Z}^{k-1} \right\rbrace $ denotes the past history of the algorithm up to iteration $ k $.
\end{assumption}
Assumption \ref{assump: noise}-(a) states that the instantaneous gradient is an unbiased estimate of the exact gradient at each point, and Assumption \ref{assump: noise}-(b) states that the variance of this noisy gradient estimate is bounded. These assumptions are standard and very common in the literature on instantaneous gradient errors \cite{ram2009incremental,zhang2008impact}.
Moreover, it can easily be
verified that if the random variables $ \boldsymbol{\zeta}^{k}_b, ~\forall k \geq 0, ~ b = 1,\ldots,B $ are bounded and identically distributed, then these assumptions are automatically satisfied \cite{nemirovski2009robust}. Finally, Assumption \ref{assump: noise} implies that the gradient of the observed sample function $ f^k \left(\boldsymbol{x} \right) $ at the current iterate $ \boldsymbol{x}^k $ is an unbiased estimate, with finite variance, of the true gradient of the original objective function (note, however, that $ \boldsymbol{h}^k $ is not an unbiased estimate for finite $ k $).
\begin{assumption}[Step-size sequences constraints]\label{assump: step-sizes}
The sequences $ \left\lbrace \omega_k \right\rbrace $ and $ \left\lbrace \alpha_k \right\rbrace $
satisfy the following conditions:
\begin{itemize}
\item[a)] $ \omega_1=1 $, $ \sum_{k=1}^\infty \omega_k = \infty $ and $ \sum_{k=1}^\infty {\omega_k}^2 < \infty $,
\item[b)] $ \sum_{k=1}^\infty \alpha_k = \infty $, $ \sum_{k=1}^\infty {\alpha_k}^2 < \infty $ and $ \alpha_k/\omega_k \rightarrow 0 $.
\end{itemize}
\end{assumption}
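For concreteness, one admissible choice (our illustration; the paper does not prescribe specific sequences) is $ \omega_k = k^{-0.6} $ and $ \alpha_k = k^{-0.8} $: then $ \omega_1 = 1 $, both $ \sum_k \omega_k $ and $ \sum_k \alpha_k $ diverge, both $ \sum_k \omega_k^2 $ and $ \sum_k \alpha_k^2 $ converge, and $ \alpha_k/\omega_k = k^{-0.2} \rightarrow 0 $. A quick numerical check:

```python
# Admissible step-size sequences for the assumption above (illustrative choice):
# omega_k = k^{-0.6}, alpha_k = k^{-0.8}, hence alpha_k / omega_k = k^{-0.2} -> 0.
omega = lambda k: k ** -0.6
alpha = lambda k: k ** -0.8
ratios = [alpha(k) / omega(k) for k in (1, 10, 100, 1000, 10000)]
# ratios[0] == 1.0 and the sequence decays monotonically like k^{-0.2}
```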
The following theorem states a preliminary convergence result of the proposed Algorithm \ref{alg: main} for the general (possibly non-convex) stochastic optimization problems.
\begin{theorem}
\label{th: conv_nonconvex}
For the general (possibly non-convex) objective function $ F \left(.\right) $ and under Assumptions \ref{assump: stoch opt formulation}--\ref{assump: step-sizes}, there exists a
subsequence $ \left\lbrace \boldsymbol{x}^{k_j} \right\rbrace $ of the iterates generated by Algorithm \ref{alg: main} that converges to a stationary point of Problem \eqref{eq: main stochastic optimisation}.
\end{theorem}
The following theorem shows the convergence of the proposed algorithm to the optimal solution for the case of \textit{convex} stochastic optimization problems.
\begin{theorem}{\textbf{(Convex Case)}}\label{th: conv_convex}
For a convex objective function $ F \left(\cdot\right) $ and under Assumptions \ref{assump: stoch opt formulation}--\ref{assump: step-sizes}, every limit point of the sequence $ \left\lbrace \boldsymbol{x}^k \right\rbrace $ generated by Algorithm \ref{alg: main} (at least one limit point exists)
is an optimal solution of Problem \eqref{eq: main stochastic optimisation}, denoted by $ \boldsymbol{x}^\ast $, almost surely, i.e.,
\begin{equation}
\displaystyle\lim_{k \rightarrow \infty} F \left( \boldsymbol{x}^k \right) - F \left( \boldsymbol{x}^\ast \right) =0.
\end{equation}
\end{theorem}
The last theorem establishes the convergence of the whole sequence of iterates for the general case of \textit{non-convex} stochastic optimization problems.
\begin{theorem}{\textbf{(Non-convex case)}}\label{th: conv2_nonconvex}
For the general (possibly non-convex) objective function $ F \left(\cdot\right) $,
if there exists a subsequence of the iterates generated by Algorithm \ref{alg: main} that converges to a strictly local minimum,
then the whole sequence $ \left\lbrace \boldsymbol{x}^k \right\rbrace $ generated by Algorithm \ref{alg: main} converges to that local minimum, almost surely.
\end{theorem}
\subsection{Comparison with the Existing Methods} \label{sec: comparison_with_works}
It should be noted that, unlike in SGD, the stochastic gradient vector $ \boldsymbol{h}^k $ in our method is not an unbiased estimate of the exact gradient for finite $ k $. This adds non-trivial challenges to the convergence proof of the proposed algorithm, since the expectation of the update direction is no longer a descent direction.
To tackle this challenge,
we prove that this biased estimate asymptotically converges to the exact gradient with probability one.
Moreover, unlike the stochastic MM methods discussed before, the considered surrogate functions are not necessarily an upper bound for the observed sample function, but eventually converge to a global upper bound for the expectation of the sample functions.
In addition, the considered surrogate functions can be computed and optimized with low complexity for any optimization problem of the form in \eqref{eq: main stochastic optimisation}. These advantages address the complexity issues of
the stochastic MM methods discussed earlier.
Furthermore, the proposed method here
differs from that in \cite{Daniel2016parallel} in the following ways:
Firstly, the algorithm proposed in \cite{Daniel2016parallel} requires weighted averaging of the iterates, where the associated vanishing factor
is assumed to diminish much faster
than the vanishing factor involved in the incremental estimate of the exact
gradient of the objective function.
In fact, averaging over the iterates helps to average out the noise involved in the gradient estimation of the stochastic objective function, and hence, is used as a crucial step for proving the convergence of the method proposed in
\cite{Daniel2016parallel}.
However, in practice, such averaging over the iterates makes the convergence of the algorithm slower, as the weight for the approximate solution found at the current iteration converges to zero very fast.
Therefore, the effect of a new approximation point found by solving the approximate surrogate function at the current iteration would very quickly become negligible.
Such a step, which is fundamental for the convergence of the proposed scheme in \cite{Daniel2016parallel}, is no longer used in the proposed algorithm. Although this makes the convergence proof of our proposed method challenging, it contributes to the significantly faster convergence of the proposed scheme compared to the scheme in \cite{Daniel2016parallel}, as can be verified by the numerical results in Section \ref{sec: SVM_sim}.
Secondly, in \cite{Daniel2016parallel} the surrogate function at each agent is obtained from the original objective function by replacing the convex part of the expected value with its incremental sample function
and the non-convex part with a convex local estimate. In our proposed method, by contrast, the expectation of the whole objective function is approximated by an
incremental convex approximation, which can be easily calculated and optimized with low complexity at each iteration. This contributes to the lower complexity of the proposed scheme compared to the one in \cite{Daniel2016parallel}, for an arbitrary objective function in general.
\section{An Application Example of the Proposed Method in Solving Large-Scale Machine Learning Problems}
\label{sec:SVM_Simulations}
Optimization is believed to be one of the important pillars of machine learning \cite{bottou2018optimization}. This is mainly due to the nature of machine learning systems, where a set of parameters need to be optimized based on currently available data so that the learning system can make decisions for yet unseen data.
Nowadays, many challenging machine learning problems in a wide range of applications are relying on optimization methods, and designing new methods that are more widely applicable to modern machine learning applications is highly desirable.
%
One of the important problems that appears in many
machine learning applications with huge datasets is the training of large-scale support vector machines (SVMs). In this section, we demonstrate how our proposed optimization algorithm can efficiently solve this problem. We compare the performance of the proposed algorithm to the state-of-the-art methods in the literature and present experimental results to demonstrate the merits of our proposed framework.
SVMs are
one of the most prominent machine learning techniques for a wide range of
classification problems arising in applications such as cancer diagnosis in bioinformatics, image classification and face recognition in computer vision, and text categorisation in document processing \cite{text_categoration_SVM,Daniel2015}. The SVM classifier is obtained by solving the following
optimization problem{\footnote{Although the SVM problem formulation is widely used in the literature to evaluate optimization algorithms, it is not everywhere differentiable. Note, however, that in our proposed algorithm, as in the other works in the literature, differentiability is a sufficient condition for the convergence proof; moreover, in practice, the non-differentiable points are met with probability almost zero.}} \cite{pegasos}:
\begin{equation}\label{eq: SVM_prob_unconst}
\min_{\boldsymbol{w}} \dfrac{\lambda}{2} \parallel\boldsymbol{w} \parallel^2 + \dfrac{1}{m} \sum_{i=1}^{m} \max \{ 0, 1- y_i \langle \boldsymbol{x}_i , \boldsymbol{w} \rangle \} ,
\end{equation}
where $ \left\lbrace \left( \boldsymbol{x}_i, y_i \right) ,~ \forall i = 1,\ldots, m \right \rbrace $ is the set of training samples, and
$ \lambda $ is the regularisation parameter to control overfitting by maintaining a trade-off between increasing the size of the margin (larger $ \lambda $) and ensuring that the data lie on the correct side of the margin (smaller $ \lambda $).
Once this problem is solved, the obtained $ \boldsymbol{w} $ determines the SVM classifier $ \boldsymbol{x} \mapsto \operatorname{sgn} \left( \langle \boldsymbol{w}, \boldsymbol{x} \rangle \right) $, i.e., each future datum $ \boldsymbol{x} $ will be labelled by the sign of its inner product with the solution $ \boldsymbol{w} $.
Although SVM problem formulation is well understood, solving large-scale SVMs is still challenging.
For the large and high-dimensional datasets that we encounter in emerging applications, the optimization problem cast by SVMs
is huge. Consequently, solving such large-scale SVMs is mathematically complex and computationally expensive. Specifically, as the number of training data becomes very large, computing the exact gradient of the objective function becomes impractical.
Therefore, for training large-scale SVMs, off-the-shelf optimization algorithms
for general problems
quickly become intractable in their memory
requirements.
Moreover, for emerging applications that need to handle a huge amount of data
in a short time, the speed of convergence of the applied algorithm is another critical factor, which needs to be carefully considered.
Furthermore,
when the data contain a large number of attributes, extensive resources may be required to process the datasets and solve the associated problems. To address this issue especially in resource-limited multi-agent systems,
it is beneficial to decompose the attributes of the dataset into small groups, each associated with a computing agent, and then, process these smaller datasets by distributed agents in parallel.
The aforementioned requirements
give rise to an increasing need for an efficient and scalable method that can solve large-scale SVMs with low complexity and fast convergence.
In the sequel, we show
how our proposed algorithm can effectively address the aforementioned issues to solve large-scale SVMs
with low complexity and faster convergence than the state-of-the-art existing solutions.
First, note that in SVMs with a huge number of training data,
the optimization problem cast by SVMs (as in \eqref{eq: SVM_prob_unconst}) can be viewed as the sample average approximation (SAA) of the following stochastic optimization problem:
\begin{equation}\label{eq: stoch_svm_prob}
\min_{\boldsymbol{w}} f \left( \boldsymbol{w} \right) = \mathbb{E}_{ \left( \boldsymbol{x}, y \right) } \left[ \dfrac{\lambda}{2} \parallel\boldsymbol{w} \parallel^2 + \max \{ 0, 1- y \langle \boldsymbol{x} , \boldsymbol{w} \rangle \} \right],
\end{equation}
in which the randomness comes from random samples $ \left( \boldsymbol{x} , y \right) $. Considering the batch size $ B=1 $, the stochastic gradient of the above objective function is
\begin{align}\label{eq: stoch_grad_svm}
\hat{\boldsymbol{g}} \left(\boldsymbol{w} \right) = \lambda \boldsymbol{w} - y_i \boldsymbol{x}_i \boldsymbol{1} \left( y_i \langle \boldsymbol{x}_i , \boldsymbol{w} \rangle \leq 1 \right),
\end{align}
where $ \boldsymbol{1} (.) $ is the indicator function and $ \left( \boldsymbol{x}_i , y_i \right) $ is a randomly drawn training sample.
Note that since the training samples are drawn i.i.d., the gradient of the loss function with respect to any individual sample can be shown to be an unbiased estimate of the gradient of $ f $ \cite{Shamir2013}. Moreover, the variance of this estimate is finite,
i.e., $ \mathbb{E} \left[ \parallel \hat{\boldsymbol{g}} \left(\boldsymbol{w} \right) \parallel^2 \right] \leq G $, as shown in \cite{pegasos}.
For computing this stochastic gradient at each iteration, only one sample out of the $ m $ training samples is drawn uniformly at random. Accordingly, the cost of computing the gradient at each iteration is lowered to roughly $ 1/m $ of the cost of computing the exact gradient. Therefore, by adopting a stochastic approach and using the stochastic gradient in \eqref{eq: stoch_grad_svm} instead of the exact gradient of the SVM problem,
the per-iteration cost of computing the gradient becomes independent of the number of training data, which makes the approach highly scalable to large-scale SVMs.
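As a concrete illustration of \eqref{eq: stoch_grad_svm}, the following minimal Python sketch (ours, not the code used in Section \ref{sec: SVM_sim}; the Pegasos step size $ 1/(\lambda k) $ \cite{pegasos} and the synthetic data are illustrative assumptions) trains a linear classifier with this stochastic subgradient and batch size $ B=1 $:

```python
import numpy as np

def svm_sgd(X, y, lam=0.1, n_iters=2000, seed=0):
    """SGD on the regularised hinge loss using the stochastic subgradient
    g(w) = lam*w - y_i*x_i*1{y_i*<x_i, w> <= 1}, one sample per iteration."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for k in range(1, n_iters + 1):
        i = rng.integers(len(y))                 # draw one training sample
        g = lam * w
        if y[i] * X[i].dot(w) <= 1:              # margin violated: hinge active
            g = g - y[i] * X[i]
        w -= g / (lam * k)                       # Pegasos-style step 1/(lam*k)
    return w

# Toy separable data: the label is the sign of the first coordinate, and the
# points are pushed one unit away from the decision boundary.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
X[:, 0] += np.sign(X[:, 0])
y = np.sign(X[:, 0])
w = svm_sgd(X, y)
accuracy = np.mean(np.sign(X.dot(w)) == y)
```

Note that each iteration touches a single training sample, so the per-iteration cost is independent of $ m $, as discussed above.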
The state-of-the-art algorithms for solving SVMs through a stochastic approach can be classified into \textit{first-order methods}, such as Pegasos \cite{pegasos},
and \textit{second-order methods}, such as the Newton--Armijo method \cite{ssvm}. The existing first-order methods significantly decrease the computational cost per iteration,
but they converge very slowly, which is undesirable, especially for emerging applications with huge datasets. On the other hand, second-order methods suffer from high complexity, due to significantly expensive matrix computations and high storage requirements \cite{convex_big_data,Hessian}.
For example, the Newton--Armijo method \cite{ssvm} needs to solve a matrix equation, which involves matrix inversion, at each iteration. However, in the case of large-scale SVMs for big data classification, inverting a curvature matrix, or storing an iterative approximation of that inverse, is very expensive,
which makes this method impractical for large-scale SVMs.
In addition, this method is an offline method that needs all the training data in a full batch to calculate the exact gradient and Hessian matrix at each iteration. This causes high complexity and computational cost, and makes the method inapplicable to online settings.
In the next section, we show that our proposed algorithm can effectively solve large-scale SVMs significantly faster than the existing solutions, and with low complexity. Using real-world big datasets, we compare the proposed accelerated method with the aforementioned methods for solving SVMs.
\section{Simulation Results and Discussion}\label{sec: SVM_sim}
In this section, we empirically evaluate the proposed stochastic optimization algorithm and demonstrate its efficiency for solving large-scale SVMs.
We compare the performance of the proposed algorithm to three important state-of-the-art baselines in this area:
\begin{enumerate}
\item Pegasos algorithm \cite{pegasos}, which is
known to be one of the best state-of-the-art algorithms for solving large-scale SVMs,
\item The state-of-the-art parallel stochastic optimization algorithm \cite{Daniel2016parallel},
\item ADAM algorithm \cite{ADAM_2014}, one of the most popular and widely-used deep learning algorithms. ADAM is an adaptive learning rate optimization algorithm designed specifically for training deep neural networks. It has been recognized as one of the best optimization algorithms for deep learning, and its popularity is growing exponentially \cite{karparthy2017peek}.
\end{enumerate}
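For reference, one iteration of ADAM with the default hyper-parameters of \cite{ADAM_2014} can be sketched as follows (our own minimal sketch, not code from any of the compared implementations):

```python
import math

def adam_step(w, g, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update (Kingma & Ba, 2014) with the default hyper-parameters.
    w: iterate, g: stochastic gradient, (m, v): first/second moment estimates,
    t: iteration counter starting at 1."""
    m = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, g)]
    v = [b2 * vi + (1 - b2) * gi * gi for vi, gi in zip(v, g)]
    m_hat = [mi / (1 - b1 ** t) for mi in m]   # bias-corrected first moment
    v_hat = [vi / (1 - b2 ** t) for vi in v]   # bias-corrected second moment
    w = [wi - lr * mh / (math.sqrt(vh) + eps)
         for wi, mh, vh in zip(w, m_hat, v_hat)]
    return w, m, v
```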
We perform our simulations and comparison on two popular real-world large datasets with very different feature counts and sparsity. The COV1 dataset classifies the forest-cover areas in the Roosevelt National Forest of northern Colorado identified from cartographic variables into two classes \cite{COV1dataset2002}, and the RCV1 dataset classifies the CCAT and ECAT classes versus the GCAT and MCAT classes in the Reuters RCV1 collection \cite{RCV1dataset2004}.\footnote{The datasets are available online at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html.} Details of the datasets' characteristics, as well as the values of the SVM regularisation parameter $ \lambda $ used in the experiments, are provided in Table \ref{Datasets for SVM}. (For the regularisation parameter of each dataset, we adopt the typical value used in previous works \cite{Shamir12, Shamir2013, pegasos} and \cite{linearSVM}.)
Similar to \cite{Shamir2013}, the initial point of all the algorithms is set to $ \boldsymbol{w}_1=\boldsymbol{1} $.
We also tune the step-size parameters of the method in \cite{Daniel2016parallel} to achieve its best performance for the used datasets.
Moreover, for the Pegasos algorithm, we output the last weight vector rather than the average weight vector, as the former is found to perform better in practice \cite{pegasos}. Finally, for the hyper-parameters of the ADAM algorithm, we use the default values reported in \cite{ADAM_2014}, which are widely used and known to perform well in practice.
\begin{table}
\caption{The datasets characteristics and the values of the regularisation parameter $ \lambda $ used in the experiments.}\label{Datasets for SVM}
\begin{center}
\begin{tabular}{| l | *{4}{c}| c |}
\hline
Dataset & Training Size & Test Size & Features & Sparsity ($ \% $) & $ \lambda $ \\ \hline
COV1 & 522911 & 58101 & 55 & 22.22 & $ 10^{-6} $ \\
RCV1 & 677399 & 20242 & 47236 & 0.16 & $ 10^{-4} $ \\
\hline
\end{tabular}
\end{center}
\end{table}
Figs. \ref{SVM_objective_func_Datasets}-(a) and \ref{SVM_objective_func_Datasets}-(b) show the convergence results of
the proposed method and the baselines on each dataset. As can be seen in these figures, the proposed algorithm converges much faster than the baselines. At the early iterations, the convergence of all of these methods is rather fast. However, after a number of iterations, the convergence of the baselines slows down, while our method still maintains a good convergence speed. This is because the per-iteration estimate of the gradient of the objective function in those methods is not as accurate as in our proposed method. For the Pegasos method, the gradient estimate is based on only one data sample and its associated observed sample function. In our proposed method, by contrast, we utilize all the data samples observed up to that point and
average the gradients of the observed sample functions to estimate the true gradient of the original objective function. Moreover, the method in \cite{Daniel2016parallel} performs an extra averaging over the iterates,
which makes the solution change increasingly slowly as the iterations proceed.
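The gradient-averaging idea described above can be sketched as a running mean of the per-sample gradients observed so far (this sketch isolates only the averaging idea; it is not the paper's exact update rule, and the class name is ours):

```python
class GradientAverager:
    """Running average of the gradients of all sample functions observed so far."""
    def __init__(self, dim):
        self.avg = [0.0] * dim
        self.t = 0

    def update(self, g):
        self.t += 1
        # incremental mean: avg <- avg + (g - avg) / t
        self.avg = [a + (gi - a) / self.t for a, gi in zip(self.avg, g)]
        return self.avg
```

Averaging over many observed sample gradients reduces the variance of the estimate relative to a single-sample gradient, which is the intuition behind the faster convergence discussed above.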
\begin{figure*}[]
\begin{center}
\begin{minipage}[]{0.48\linewidth}
\centering
\subfigure[]{
\includegraphics[width=\textwidth]{pics/comp_objfunc_SVM_COV1_10000iter_17062017_21102019.pdf}
}
\end{minipage}%
\begin{minipage}[]{0.48\linewidth}
\centering
\subfigure[]{
\includegraphics[width=\textwidth]{pics/comp_objfunc_SVM_RCV1_10000iter_04072017_21102019.pdf}
}
\end{minipage}%
\caption{Comparison of the convergence speed of different methods, on the COV1 (left) and RCV1 (right) datasets.}
\label{SVM_objective_func_Datasets}
\end{center}
\end{figure*}
Figs. \ref{SVM_accuracy_Datasets_1} and \ref{SVM_accuracy_Datasets_2} show the classification accuracy of the resulting SVM model versus the number of samples visited, on the training data (Fig. \ref{SVM_accuracy_Datasets_1}) and the testing data (Fig. \ref{SVM_accuracy_Datasets_2}).
According to the figures, for the same number of iterations the proposed method obtains a more accurate SVM than the baselines on both the training and testing data.
\begin{figure*}[]
\begin{center}
\begin{minipage}[]{0.48\linewidth}
\centering
\subfigure[]{
\includegraphics[width=\textwidth]{pics/comp_train_SVM_COV1_10000iter_17062017_21102019.pdf}
}
\end{minipage}%
\begin{minipage}[]{0.48\linewidth}
\centering
\subfigure[]{
\includegraphics[width=\textwidth]{pics/comp_train_SVM_RCV1_10000iter_04072017_21102019.pdf}
}
\end{minipage}%
\end{center}
\caption{Classification accuracy of the SVM obtained by the different methods on the training data of the COV1 (left) and RCV1 (right) datasets.}
\label{SVM_accuracy_Datasets_1}
\end{figure*}
\begin{figure*}[]
\begin{center}
\begin{minipage}[]{0.48\linewidth}
\centering
\subfigure[]{
\includegraphics[width=\textwidth]{pics/comp_test_SVM_COV1_10000iter_17062017_21102019.pdf}
}
\end{minipage}%
\begin{minipage}[]{0.48\linewidth}
\centering
\subfigure[]{
\includegraphics[width=\textwidth]{pics/comp_test_SVM_RCV1_10000iter_04072017_21102019.pdf}
}
\end{minipage}%
\caption{Classification accuracy of the SVM obtained by the different methods on the testing data of the COV1 (left) and RCV1 (right) datasets.}
\label{SVM_accuracy_Datasets_2}
\end{center}
\end{figure*}
Finally, Table \ref{tab: SVM CPU time} compares the CPU time for running each algorithm for $ 10^4 $ iterations. As the table shows, the CPU time of the proposed method is lower than those of the baselines. Therefore, the per-iteration computational complexity of the proposed algorithm is lower than that of the considered baselines, which are themselves known to have low complexity.
\begin{table}
\caption{The CPU time (in seconds) for running $ 10^4 $ iterations of different methods.}\label{tab: SVM CPU time}
\begin{center}
\begin{tabular}{| l || c | c | c | c |}
\hline
Datasets & Proposed Method & Pegasos \cite{pegasos} & Yang et al. \cite{Daniel2016parallel} & ADAM \cite{ADAM_2014} \\ \hline \hline
COV1 & 1.033 & 1.198 & 1.117 & 4.577 \\ \hline
RCV1 & 73.559 & 74.855 & 74.322 & 84.857 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}\label{sec: SVM_conclusion}
In this paper, we proposed a novel fast parallel stochastic optimization framework that can solve a large class of possibly non-convex constrained stochastic optimization problems.
Under the proposed method, each user of a multi-agent system updates its control variable in parallel, by solving a successive convex approximation sub-problem, independently. The sub-problems have low complexity and are easy to obtain. The proposed algorithm can be applied to solve a large class of optimization problems arising in important applications from various fields, such as wireless networks and large-scale machine learning. Moreover, for convex
problems, we proved the convergence of the proposed algorithm to the optimal solution, and for general non-convex
problems, we proved the convergence of the proposed algorithm to a stationary point.
Furthermore, as a representative application of the proposed stochastic optimization framework in the context of machine learning, we elaborated on large-scale SVMs and demonstrated how the proposed algorithm can efficiently solve this problem, especially in modern applications with huge datasets.
We compared the performance of the proposed algorithm to
the state-of-the-art baselines. Numerical results on popular real-world datasets show that the proposed method significantly outperforms the state-of-the-art
methods in terms of convergence speed, while having the same or lower complexity and storage requirements.
% End of "Parallel Stochastic Optimization Framework for Large-Scale Non-Convex Stochastic Problems" (arXiv:1910.09901, cs.IT/math.OC, 2019-10-23).
\title{On the Theory and Algorithm for rigorous discretization in applications of Information Theory}
% https://arxiv.org/abs/1406.5104
\begin{abstract}
We identify fundamental issues with discretization when estimating information-theoretic quantities in the analysis of data. These difficulties are theoretical in nature and arise with discrete datasets, carrying significant implications for the corresponding claims and results. Here we describe the origins of the methodological problems, and provide a clear illustration of their impact with the example of biological network reconstruction. We propose an algorithm (shared information metric) that corrects for the biases, and the resulting improved performance of the algorithm demonstrates the need to take due consideration of this issue in different contexts.
\end{abstract}
\section{Results}
Mathematically, the Shannon entropy of a variable $X$, defined as $H(X) = \sum_{i} -p_i \log (p_i)$ \cite{Shan}, is a well-defined quantity when $X$ is discrete, taking distinct values $i$ with probabilities $p_i$. This is not the case when $X$ is sampled from a continuous probability distribution, where the equivalent definition would simply give infinity \cite{Lesne}. Nonetheless, there exists a generalization of the mutual information (MI) of two variables $X$ and $Y$ with joint probability distribution $P_{X,Y} (u,v)$ and corresponding marginals $P_X (u)$ and $P_Y (v)$ :
\begin{align}
I &= H(X) + H(Y) -H(X,Y) \nonumber \\
&= \sum_{X=u,Y=v} P_{X,Y} (u,v) \log \frac{P_{X,Y} (u,v)}{P_X(u) P_Y(v)}
\end{align}
from discrete to continuous joint probability distributions
\begin{equation}
I= \int du dv P_{X,Y} (u,v) \log{\frac{P_{X,Y} (u,v)}{P_X(u) P_Y(v)}}
\label{MI-C}
\end{equation}
which is well-behaved \cite{Lesne}. However, given discrete ordered pairs $(x_p,y_p)$ sampled from an unknown joint probability distribution over continuous variables, we have to approximate the true (continuous) distribution from them. At the level of probabilities, this is achieved by binning, i.e., discretizing the range of values of the variables. \\
While this is carried out straightforwardly, there are pitfalls in the simple generalization to mutual information. The key problem is that mutual information depends in a significant way on the discretization parameters, even when the number of points is very large. We show this with the following simple but general example. Consider ordered pairs $(x_v,y_v)$, $v=1,2,\cdots, N$, from a joint distribution with the only requirement that $x_u \neq x_v$ for $u \neq v$, and likewise for the $y$'s. If the entire range of $X$ and $Y$ is considered as a single segment for discretization, $\Delta^{(0)}_{X}$ and $\Delta^{(0)}_{Y}$, then the joint probability distribution $P_{X,Y} (\Delta^{(0)}_{X},\Delta^{(0)}_{Y})=1$, and hence the marginals are also unity for the corresponding $X$ and $Y$ ranges. It can be immediately seen that the mutual information is zero. For the other extreme, we choose the interval widths $\delta_X$ and $\delta_Y$ such that at most one point lies within each interval (for both variables). We can always do this by selecting $\delta_X < \displaystyle \min_{u \neq v, u,v =1,2,\cdots ,N} \{ |x_u -x_v|\}$ and $\delta_Y < \displaystyle \min_{u \neq v, u,v =1,2,\cdots ,N} \{ |y_u -y_v|\}$. In that case, the discretized joint probability in each rectangular cell is $1/N$ if the cell is occupied and 0 otherwise. In much the same way, the marginal probability in each interval is either $1/N$ or 0 for both $X$ and $Y$. Fig (\ref{full-MI}) shows how this would be done in the general case (note that the interval widths there are not uniform).
\begin{figure}
\centering
\includegraphics[scale=0.30]{figure1.pdf}
\caption{\it Grids in 2D space for a set of 10 points such that for any partition along the $X$-direction ($Y$-direction) there is exactly one point within it. This corresponds to a maximum possible mutual information of $\log 10$.}
\label{full-MI}
\end{figure}
We have then the mutual information:
\begin{equation}
I= \sum_{\alpha,\beta} P_{X,Y}(\alpha,\beta) \log \frac{P_{X,Y} (\alpha,\beta)}{P(\alpha) P(\beta)} =\sum_{\alpha,\beta} 1/N \log \frac{1/N}{(1/N)(1/N)} = \log N
\end{equation}
Thus, for a set of $N$ observations from a joint probability distribution, the mutual information can vary between 0 and $\log{N}$ depending on the size of the binning. What is most striking about this is that {\it this is true regardless of the true mutual information of the underlying joint probability distribution}. As the upper bound grows with the size $N$, having a larger sample number does not solve the fundamental problem of the inherent ambiguity of defining mutual information using discrete data points. \\
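The two extremes above are easy to reproduce numerically. The following minimal sketch (our own helper; the plug-in MI estimate is computed directly from discretized labels) shows that a single bin gives $I=0$, while one point per bin gives $I=\log N$, regardless of the underlying distribution:

```python
from collections import Counter
from math import log

def mutual_information(labels_x, labels_y):
    """Plug-in MI estimate from discretized (binned) labels."""
    n = len(labels_x)
    pxy = Counter(zip(labels_x, labels_y))
    px = Counter(labels_x)
    py = Counter(labels_y)
    return sum((c / n) * log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

xs = [0.31, 1.70, 2.05, 3.60, 4.44]   # N = 5 points, all coordinates distinct
ys = [9.20, 4.10, 7.70, 0.30, 5.50]

one_bin = [0] * len(xs)                       # whole range as a single segment
mi_min = mutual_information(one_bin, one_bin)         # -> 0

singletons = list(range(len(xs)))             # one point per interval
mi_max = mutual_information(singletons, singletons)   # -> log 5
```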
\subsection{Standard partitioning biases}
Recognizing this basic limitation, we now consider the implications of different binning choices for estimating mutual information \cite{Simoes}. We want to understand the underlying biases and how the mathematical properties of the estimate deviate from those of the original definition. \\
In general, mutual information increases as the number of partitions of the space increases. Fig (\ref{MI-V-Bin-1}) shows the variation of the estimated mutual information between two jointly Gaussian random variables with the number of bins (of fixed length) for different sample sizes $N$. The plotted values of the estimated MI are averages over 50 samples. In the supplement, we prove a general result stating that doubling the partition size increases the mutual information between two variables for an arbitrary set of samples. However, this by itself would not be an issue for reverse engineering networks, as long as the rankings of the estimated mutual information between variables are preserved with respect to their true values. \\
\begin{figure}
\begin{subfigure}{0.45 \textwidth}
\includegraphics[scale=0.32,type=pdf, ext=.pdf,read=.pdf]{figure2}
\caption{(Theoretical) I=0.22}
\end{subfigure}
\begin{subfigure}{0.45 \textwidth}
\includegraphics[scale=0.32,type=pdf, ext=.pdf,read=.pdf]{figure3}
\caption{(Theoretical) I=0.8}
\end{subfigure}
\caption{\it Variation of the estimated mutual information as a function of the number of bins (along each axis) for a pair of jointly Gaussian random variables.}
\label{MI-V-Bin-1}
\end{figure}
There are other biases and errors introduced by the choice of partitioning scheme that we consider here. Broadly, partitionings of the two-dimensional space of variables $X$ and $Y$ fall into three general categories: uniform number of bins, where the number of divisions ($b$) of both axes is identical; uniform width, where the width ($w$) of each partition is identical for both variables; and uniform frequency, where the number of points falling into each partition (for each variable) is fixed. We now turn to each of these methods.
\subsubsection{Uniform number of partitions}
\label{Uni-Num}
Here each axis of the space is split into an equal number of uniform segments. The advantage of this method is that it works well when the data are distributed reasonably evenly across the range of each axis, which happens when the density of points falls off sharply outside the confined region under consideration. \\
However, difficulties arise with probability densities that have a fat tail, where the spread is wide but the majority of points still lie within a smaller region. A binning of this sort is then finer for points in the periphery but coarser in the denser regions, and thus tends to underestimate the mutual information contributed by the latter areas.
We demonstrate this by estimating the mutual information for two samples, the first of which comes from the exponential family,
\begin{equation}
P_{X,Y} = \frac{1}{\Gamma (\theta) \lambda} x^{\theta} e^{-\frac{xy}{\lambda} -x}
\label{exp}
\end{equation} with $\theta =4, \lambda=5$, and the second a bivariate Gaussian $ \sim e^{-\frac{1}{2}(\boldsymbol{z} - \boldsymbol{\mu})^{T} \Sigma^{-1} (\boldsymbol{z} - \boldsymbol{\mu})}$, $\boldsymbol{z}=(X,Y)$, $\boldsymbol{\mu}=(\mu_x,\mu_y)$, where
$$\Sigma = \left( \begin{array}{cc} 5 &4 \\ 4 & 5 \end{array} \right) $$
and $(\mu_x,\mu_y) = (5,5)$. Each set consists of 200 points, and the barplots of the distributions (Fig.(\ref{Id-NP},\ref{NId-NP})) across a uniform $10 \times 10$ grid show that while the multivariate normal is spread quite smoothly across the entire surface, the exponential sample falls off for $X,Y >5$.
The mutual information can be theoretically computed in these two cases:
$$ I^{exp} (\theta, \lambda) = \lambda ( \psi (\theta +1) -\log \theta )$$
where $\psi$ is the digamma function and
$$I^{Gauss} (\Sigma) = \frac{1}{2} \log \frac{\sigma_{xx}^2 \sigma_{yy}^{2}}{\sigma_{xx}^2 \sigma_{yy}^2 -\sigma_{xy}^2}$$
which evaluate to $I^{exp} = 0.79$ and $I^{Gauss} = 0.51$ but the estimated mutual information from the above binning leads to $I^{exp}_e =0.21$ and $I^{Gauss}_e=0.59$. \\
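As a quick numerical check of the Gaussian formula above (a sanity check we add here, not from the paper):

```python
from math import log

def gauss_mi(sxx2, syy2, sxy):
    """I^Gauss for a bivariate normal with variances sxx2, syy2 and covariance sxy."""
    return 0.5 * log(sxx2 * syy2 / (sxx2 * syy2 - sxy ** 2))

print(round(gauss_mi(5, 5, 4), 2))     # 0.51, the value quoted for Sigma above
print(round(gauss_mi(5, 5, 4.5), 2))   # 0.83, as in the later figure captions
```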
The reason for the severe underestimation in the first case is precisely that the points cluster within a smaller region, where this type of binning is `too wide' and averages out the finer distinctions. In the multivariate normal case this difference is not as striking, and the estimate is far more accurate.
\begin{figure}
\begin{subfigure}{0.43 \textwidth}
\includegraphics[scale=0.5,type=png,ext=.png,read=.png]{figure4}
\caption{\it \small Sample from an exponential family Eq. (\ref{exp}) with $\theta=3$ and $\lambda=5$. Theoretical (estimated) mutual information is 0.79 (0.21) }
\label{Id-NP}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}{0.43 \textwidth}
\includegraphics[scale=0.5,type=png,ext=.png,read=.png]{figure5}
\caption{\it \small Normal distribution with $\sigma_{xx}^2 = \sigma_{yy}^2=5, \sigma_{xy} = 4$. Theoretical (estimated) mutual information is 0.51 (0.59) }
\label{NId-NP}
\end{subfigure}
\newline
\begin{subfigure}{0.43 \textwidth}
\includegraphics[scale=0.4,type=png,ext=.png,read=.png]{figure6}
\caption{\it Normal distribution with $\sigma_{xx}^2 =5, \sigma_{yy}^2=5, \sigma_{xy} = 4.5$. Theoretical (estimated) mutual information is 0.83 (0.79) }
\label{G1}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}{0.43 \textwidth}
\includegraphics[scale=0.4,type=png,ext=.png,read=.png]{figure7}
\caption{\it Normal distribution with $\sigma_{xx}^2 = 5, \sigma_{yy}^2=2.5, \sigma_{xy} = 3.18$. Theoretical (estimated) mutual information is 0.83 (0.67) }
\label{G2}
\end{subfigure}
\caption{\bf Mutual Information and meshing: In (a) and (b), the grid choice turns the space of points into a square with equal number of partitions along both, while for (c) and (d) the width of every partition is held fixed.}
\label{NP}
\end{figure}
\vspace{6mm}
\begin{figure}
\begin{subfigure}{0.43 \textwidth}
\includegraphics[scale=0.5,type=png,ext=.png,read=.png]{figure8}
\caption{ }
\end{subfigure}
\hspace{1cm}
\begin{subfigure}{0.43 \textwidth}
\includegraphics[scale=0.5,type=png,ext=.png,read=.png]{figure9}
\caption{ }
\end{subfigure}
\caption{(Left) {\it \small Mixed-Gaussian distribution.} (Right) {\it \small Equal-frequency partitioning skews the distribution. Estimated (numerical) mutual information is 0.19 (0.45).}}
\label{EF}
\end{figure}
\subsubsection{Uniform partition widths}
\label{Uni-Size}
The other standard approach to choosing a grid on the two dimensional space of points is to set the width of every partition on both axes to be a constant. As the spread of the two variables is in general different, the total number of partitions need not be the same. The idea behind this is that the width represents the distance over which points are considered to fall within the same discrete category. It may be set based on estimates of noise in the sample, or, where available, prior knowledge of the form of the joint distribution. \\
There is however a fundamental problem with this approach : setting the widths to be a constant destroys the scaling invariance of the theoretical definition of mutual information for continuous variables. In Eq. (\ref{MI-C}), the integral is invariant under the transformation $y \rightarrow cy, P_{X,Y} (x,y) \rightarrow c P_{X,Y} (x,cy), P_Y (y) \rightarrow c P_Y (cy)$ which corresponds to scaling the metric of the Y-axis by a constant. It must be noted that this is not an incidental property of the integral but one that is central to the usefulness of mutual information, i.e., mere scaling of the underlying space should not, and does not, alter the relative information between two variables. \\
Choosing binnings with equal widths implies that scaling the points along one direction of the space changes the number of partitions on its axis, which changes the estimated mutual information. The effect can be easily observed by considering samples from two multivariate normal distributions in two dimensions that differ only by a scaling of the y-axis by a factor of $1/\sqrt{2}$ (Fig. (\ref{G1}) and Fig. (\ref{G2})). By definition, scale invariance implies equal mutual information, but the requirement of equal widths (unity in this case) leads to fewer bins in the second case and a consequent underestimation of its value.
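The loss of scale invariance is easy to see in a deterministic example (our own sketch, not from the paper): with fixed-width bins, rescaling $Y$ by $1/\sqrt{2}$ changes the number of occupied bins and lowers the MI estimate, although the true mutual information is unchanged by scaling.

```python
from collections import Counter
from math import log, floor, sqrt

def mi_fixed_width(xs, ys, w=1.0):
    """Plug-in MI with equal-width bins of width w on both axes."""
    n = len(xs)
    bx = [floor(x / w) for x in xs]
    by = [floor(y / w) for y in ys]
    pxy, px, py = Counter(zip(bx, by)), Counter(bx), Counter(by)
    return sum((c / n) * log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

xs = [i / 10 for i in range(100)]
ys = list(xs)                        # Y = X, perfectly dependent
mi = mi_fixed_width(xs, ys)          # log 10 with unit-width bins
mi_scaled = mi_fixed_width(xs, [y / sqrt(2) for y in ys])
# mi_scaled < mi: the estimate changed although only the scale of Y did
```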
\subsubsection{Equal Frequency Partitioning}
In this case, the boundaries of the partitions for a given variable are chosen such that each division contains an equal number of points. The general advantage of this method is that, unlike the previous two, every row or column of the grid contains an equal number of points, eliminating the problem highlighted in Fig (\ref{Id-NP},\ref{NId-NP}). However, it skews joint distributions that have a strong association between the two variables toward a more uniform shape, causing an underestimation of the mutual information. We can see this with the mixed-Gaussian distribution with three modes (Fig. (\ref{EF})), where equal-frequency partitioning on a $10 \times 10$ grid flattens the central peak, spreading the probability over a wider region and decreasing the mutual information. \\
It is thus clear from the examples above that the standard types of partitioning are in general unsatisfactory for obtaining reasonable and consistent mutual information scores. It should be noted that although our examples consider discrete binning of the data, kernel-based estimation of mutual information \cite{KBE,ARACNE} has the same fundamental problem. While it is not as pronounced as with direct binning, the width of the kernel there is equivalent to the width of the bins here (equal-width partitioning). The same is true of estimators such as k-nearest neighbors \cite{Knn}, where the scores depend on the parameter corresponding to the number of neighbors.
\subsection{Adaptive Partitioning for Network Inference}
Data-driven reverse-engineering of genetic networks uses pairwise associations between genes to determine true interactions among them. Information theory is applied effectively to this task by estimating the mutual information between every pair of network node variables and reconstructing the network based on the relative rankings of the edge scores \cite{Chow,Reshef,Faith}. A distinct advantage is that mutual information detects all forms of association, while correlation-based measures perform well only for linear relations. \\
We propose a novel method of partitioning of space in the context of network reverse-engineering that aims to reduce the biases created by the standard techniques. This includes choosing grid sizes that take into consideration (a) overall spread of the data among all variables (b) spread of the values of the pair of node variables in question (c) dependence of numerical estimation of mutual information on this spread (d) appropriate normalization using the entropy of each variable. \\
Given a set $V$ of $N$ node variables and $M$ samples, we first choose a standard partition width $w_0 = w_{int}/K_{min}$, where $w_{int} = \text{median} \{ \sigma (X) \mid X \in V \}$, $\sigma (X)$ is the empirical standard deviation of the values of variable $X$, and $K_{min}$ is an integer parameter representing the minimum number of partitions. \\
For any two node variables $X$ and $Y$ whose mutual information we want to estimate, the numbers of bins $b_X$ and $b_Y$ are obtained using the following algorithm (see Fig. \ref{alg}). In addition to $K_{min}$, we have another integer parameter, $K_{max}$, that sets the maximum number of partitions. \\
\begin{figure}[t!]
\centering
\framebox{ \begin{minipage}{3in}
\begin{small}
\hfill \\
{\bf Step 1}: $b_X$ is first set to $[ \frac{\sigma(X)}{w_0} ]$ where $[.]$ refers to the nearest integer function. Likewise for $b_Y$. Assume $b_X \leq b_Y$. \\
{\bf Step 2}: If $b_Y > K_{max}$, then reset $b_X = [ b_X (K_{max}/b_Y) ]$ and $b_Y= K_{max}$. If the new $b_X < K_{min}$, reset $b_X=K_{min}$. \\
{\bf Step 3}: If $b_X < K_{min}$, then $b_Y = [ \max \{K_{min}, \min \{ K_{max}, K_{min} \sqrt{b_Y/b_X} \} \} ]$ and $b_X = K_{min}$. \\
{\bf Step 4}: Once the binning is fixed, we proceed to calculate the number of points falling within each rectangle, and the discretized form of mutual information between the two variables. \\
{\bf Step 5}: We normalize the mutual information by dividing by $\min \{ H(X), H(Y) \}$, where $H(X)$ and $H(Y)$ are the entropies of $X$ and $Y$ calculated using the same bin numbers $b_X$ and $b_Y$ respectively. We call the resulting quantity shared information metric (SIM). \\
\vspace{1cm}
\end{small}
\end{minipage}
} \\
\caption{\bf Adaptive Partitioning Algorithm}
\label{alg}
\end{figure}
\vspace{1cm}
Steps 1-3 may look complicated, but the idea is simple: we prefer to keep the bin widths constant as long as the bin counts lie between $K_{min}$ and $K_{max}$. This reflects the fact that the spread of a node variable is a measure of the strength of its interactions: if the values are nearly the same, the uniform-width approach would select fewer partitions. However, to ensure that the distribution is not skewed when the spread of one or both variables is too large (or too small), we correspondingly {\it rescale both variables} to force the bin counts into that range. Heuristically, the two limits $K_{min}$ and $K_{max}$ ensure that the relative values of mutual information are neither under- nor over-estimated. \\
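A minimal sketch of Steps 1-3 follows; the function signature, the use of standard deviations as inputs, and the rounding conventions are our own choices for illustration, not part of the published method.

```python
import math

def adaptive_bins(sigma_x, sigma_y, w0, k_min, k_max):
    """Sketch of Steps 1-3: choose bin counts from the spreads sigma_x and
    sigma_y, the standard width w0 = w_int / K_min, and the limits
    K_min <= bins <= K_max."""
    # Step 1: constant-width bin counts, rounded to the nearest integer.
    b_x = max(int(round(sigma_x / w0)), 1)
    b_y = max(int(round(sigma_y / w0)), 1)
    swapped = b_x > b_y
    if swapped:                      # maintain b_x <= b_y
        b_x, b_y = b_y, b_x
    if b_y > k_max:                  # Step 2: rescale both counts down
        b_x = max(int(round(b_x * (k_max / b_y))), k_min)
        b_y = k_max
    if b_x < k_min:                  # Step 3: rescale both counts up
        b_y = int(round(max(k_min, min(k_max, k_min * math.sqrt(b_y / b_x)))))
        b_x = k_min
    return (b_y, b_x) if swapped else (b_x, b_y)
```

For example, with $w_0=1$, $K_{min}=4$ and $K_{max}=10$, spreads of 20 and 30 are rescaled together to bin counts 7 and 10 rather than truncated independently.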
{\bf Normalization Measure}: \\
Step 5 is motivated by two considerations. First, a proper normalization is required when bin sizes differ across pairs. If, for example, a node variable has a small spread but happens to have a strong direct interaction with another node, the binning scheme would `overcorrect' for the small spread; the normalization compensates for this. Second, what is significant in reverse-engineering is not the absolute value of the mutual information but the MI relative to the information content of the two node variables. The inequality $I (X,Y) \leq \min \{ H(X), H(Y) \} $ captures the fact that this quantity represents the part of the information that is shared between the nodes. The fraction of information that is shared should serve as a better indicator of true interaction than the absolute value. \\
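Steps 4-5 amount to a plug-in estimate on the chosen grid; a minimal sketch (the histogram-based estimator shown is simply the discretized form of MI, with bin counts assumed already chosen):

```python
import numpy as np

def sim_score(x, y, b_x, b_y):
    """Shared information metric (sketch): discretized mutual information
    normalized by min(H(X), H(Y)), all computed on the same grid."""
    pxy, _, _ = np.histogram2d(x, y, bins=(b_x, b_y))
    pxy = pxy / pxy.sum()                       # joint cell frequencies
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginals

    def entropy(p):
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    nz = pxy > 0
    mi = float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))
    return mi / min(entropy(px), entropy(py))
```

Since $I(X,Y) \leq \min\{H(X), H(Y)\}$, the score lies in $[0,1]$, with $\mathrm{SIM}(X,X)=1$.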
\subsection{Evaluation on networks}
We assess the algorithm by studying its performance on {\it in-silico} networks against that of standard methods (see Section \ref{Meth} for details). The initial comparison is with estimation using a uniform bin size or a uniform bin number. Performance is evaluated by plotting the precision-recall curve (Fig. \ref{PR-UvNU}); see Section \ref{Meth} for its definition. It can be clearly seen that, at any given value of recall, the precision is significantly higher when using adaptive non-uniform discretization. We consistently found better results regardless of the size, topology, or dynamics of the network. \\
\begin{figure}
\begin{subfigure}{0.45 \textwidth}
\includegraphics[scale=0.3,type=pdf,ext=.pdf,read=.pdf]{figure10}
\caption{$K_{min}=4$ and $K_{max}=10$. $N$ = 100 and $E$= 160}
\end{subfigure}
\begin{subfigure}{0.45 \textwidth}
\includegraphics[scale=0.3,type=pdf,ext=.pdf,read=.pdf]{figure11}
\caption{$K_{min}$=2 and $K_{max}$ =5. $N$ = 100 and $E$= 120}
\end{subfigure}
\newline
\vspace{4mm}
\centering
\begin{subfigure}{0.5 \textwidth}
\includegraphics[scale=0.3,type=pdf,ext=.pdf,read=.pdf]{figure12}
\caption{Equal-width vs Adaptive Discretization: $N=100$ and $E=160$}
\label{EW-NU}
\end{subfigure}
\caption{\it Comparison of the performance of network edge determination using a uniform number of bins (10 and 5 in figures (a) and (b) respectively) or a uniform bin width (Fig. (c)) against the non-uniform discretization. $N$ and $E$ refer to the total number of nodes and true edges in the network.}
\label{PR-UvNU}
\end{figure}
\\
Our method performed better even against standard mutual-information based approaches such as kernel estimation and DPI (ARACNE) \cite{ARACNE} and the k-nearest neighbor algorithm (with DPI) \cite{Knn} (Fig. (\ref{KNN-ARACNE})).
\begin{figure}
\begin{subfigure}{0.45 \textwidth}
\includegraphics[scale=0.3,type=pdf,ext=.pdf,read=.pdf]{figure13}
\caption{\it $N=100$ and $E=120$}
\label{NU-KNN}
\end{subfigure}
\begin{subfigure}{0.45 \textwidth}
\includegraphics[scale=0.3,type=pdf,ext=.pdf,read=.pdf]{figure14}
\caption{\it $N=100$ and $E=185$}
\label{NU-ARACNE}
\end{subfigure}
\caption{\it Comparison of the performance of network edge determination using the k-nearest neighbor method (with DPI) (Fig. (a)) and ARACNE (Fig. (b)) against the non-uniform discretization with the shared information metric (SIM). $N$ and $E$ refer to the total number of nodes and true edges in the network.}
\label{KNN-ARACNE}
\end{figure}
\\
For biological data sets, we considered the gene regulatory network of {\it E. coli} and the corresponding expression data from the DREAM 5 challenge \cite{Marbach}. In this case, the set of transcription factors is known, and the aim is to determine their targets. Our binning choice should reflect this prior knowledge, and the algorithm was modified accordingly (described in the supplement). Of the 2000 interactions we evaluated, only 111 represented true regulatory interactions; our method clearly performed better than the standard method of discretization (Fig. \ref{EC-C1}).
\begin{figure}
\centering
\includegraphics[scale=0.3,type=pdf,ext=.pdf,read=.pdf]{figure15}
\caption{\it Comparison of the performance of identification of targets of transcription factors of the gene regulatory network of {\it E. coli}. Expression data and true interaction set obtained from the DREAM 5 challenge.}
\label{EC-C1}
\end{figure}
\section{Methods}
\label{Meth}
We constructed synthetic networks obeying a power-law degree distribution with the R package {\it igraph}. The data were generated by numerically integrating the Hill-kinetic equations corresponding to the network topology until a global steady state was reached. Each sample in the data set corresponds to the steady-state values of all variables; different samples correspond to steady states obtained with parameter sets drawn from a uniform distribution. \\
The precision-recall curve (PRC) is used to evaluate these networks, where precision and recall are given by:
$$ \text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall}=\frac{TP}{TP + FN}, $$
where TP = True Positives (the number of correct interactions identified), FP = the number of pairs incorrectly identified as interactions, and FN = the number of true interactions that were not identified. In general, there is a trade-off between precision and recall as we slide the threshold parameter.
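For a fixed threshold on the edge scores, these quantities can be computed as below (a sketch; sweeping the threshold and recomputing them traces out the PRC):

```python
def precision_recall(predicted_edges, true_edges):
    """Precision and recall for undirected network edges (sketch).
    Edges are stored as frozensets so that (a, b) and (b, a) coincide."""
    pred = {frozenset(e) for e in predicted_edges}
    true = {frozenset(e) for e in true_edges}
    tp = len(pred & true)   # TP: correct interactions identified
    fp = len(pred - true)   # FP: pairs incorrectly identified
    fn = len(true - pred)   # FN: true interactions missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```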
\section{Conclusions}
We have described and detailed significant problems with the definition and estimation of mutual information, a quantity that is central to information theory. We argue that this issue is of a theoretical nature, leading to an inherent arbitrariness in its estimation from a discrete set of points. Moreover, the errors introduced by the standard forms of partitioning the space are explicitly demonstrated with examples. \\
By formulating a novel adaptive partitioning algorithm for reverse engineering of networks, we further show the impact of these estimation biases in a real-world context where mutual information is applied extensively. The superior performance of the method over standard approaches on both {\it in-silico} and real biological networks is not only an advancement in that field, but also clearly demonstrates the necessity of carefully understanding the problems with estimating mutual information. \\
The applications of information theory to various fields have grown enormously in recent decades as the concepts have become central to areas such as physics, communications and signaling, inference theory, multiple biological sciences, pattern recognition and artificial intelligence \cite{Jaynes,Adami,Maas,Per,Tishby,Studholme}.
We believe that a thorough investigation of the fundamental issues involved in the subject would be immensely beneficial to all of their applications.
\begin{appendices}
\section{Proof of increase in mutual information}
We show that doubling the partition number along one of the directions leads to a non-negative change in the mutual information. Starting with a division of the 2D space into $L \times M$ blocks, we bisect each of the $LM$ cells along the $X$-axis to create a $2L \times M$ grid. The mutual information is
\begin{equation}
I = \sum_{i=1}^{L} \sum_{j=1}^{M} P_{X,Y} (i,j) \log \frac{P_{X,Y}(i,j)}{P_X(i) P_Y(j)}
\end{equation}
where $P_{X,Y}(i,j)$ is the fraction of data points lying in the rectangular cell $(i,j)$. After bisection, we have a new probability distribution $P^{2L}(i,j)$ (dropping the double-index subscript), where $i =1,2,\cdots, 2L$, such that $P^{2L} (2k-1,j) + P^{2L} (2k,j) = P(k,j)$ for $ k=1,2,\cdots, L$. We will show that for every original cell, its contribution to the MI is matched or exceeded by the sum of the contributions of its two halves, and thus obtain a stronger version of the required result.
To prove:
\begin{equation}
P^{2L} (2k-1,j) \log \frac{P^{2L}(2k-1,j)}{P_X^{2L} (2k-1) P_Y(j)} + P^{2L} (2k,j) \log \frac{P^{2L} (2k,j)}{P_X^{2L} (2k) P_Y (j)} \geq P (k,j) \log \frac{P (k,j)}{P_X (k) P_Y (j)}
\end{equation}
which reduces to,
\begin{equation}
P^{2L} (2k-1,j) \log \frac{P^{2L}(2k-1,j)}{P_X^{2L} (2k-1)} + P^{2L} (2k,j) \log \frac{P^{2L} (2k,j)}{P_X^{2L} (2k)} \geq P (k,j) \log \frac{P (k,j)}{P_X (k)}.
\end{equation}
Substituting $p_a = P^{2L} (2k-1,j), p_b = P^{2L} (2k,j),p_1= P_X^{2L} (2k-1), p_2 = P_X^{2L} (2k)$,
\begin{equation}
p_a \log \frac{p_a}{p_1} + p_b \log \frac{p_b}{p_2} \geq (p_a + p_b) \log \frac{p_a +p_b}{p_1 +p_2}
\end{equation}
A straightforward rearrangement of the terms leads to
\begin{align}
\frac{p_a}{p_a +p_b} \log \frac{p_a}{p_a + p_b} &+ \frac{p_b}{p_a +p_b} \log \frac{p_b}{p_a + p_b} - \frac{p_a}{p_a +p_b} \log \frac{p_1}{p_1 + p_2} \\ \notag &- \frac{p_b}{p_a +p_b} \log \frac{p_2}{p_1 + p_2} \geq 0
\end{align}
which is equivalent to demonstrating that
\begin{equation}
x \log \frac{x}{y} +(1-x) \log \frac{1-x}{1-y} \geq 0
\end{equation}
for $0\leq x,y \leq 1$. The left-hand side is nothing but the KL divergence between two distributions on a two-element set, with probabilities $\{ x,1-x \}$ and $ \{ y,1-y \}$, which is always non-negative.
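This monotonicity is easy to verify numerically (a sketch with synthetic data; the plug-in estimator below is the discretized MI used throughout):

```python
import numpy as np

def plugin_mi(counts):
    """Plug-in mutual information (in nats) of a 2D contingency table."""
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = x + rng.normal(size=2000)
coarse, xe, ye = np.histogram2d(x, y, bins=(4, 6))
# Bisect every cell along the x-axis: the 4 x 6 grid becomes 8 x 6.
fine_xe = np.sort(np.concatenate([xe, (xe[:-1] + xe[1:]) / 2]))
fine, _, _ = np.histogram2d(x, y, bins=(fine_xe, ye))
assert plugin_mi(fine) >= plugin_mi(coarse) - 1e-12
```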
\section{Modified Algorithm for determining targets}
We used a modification of the general algorithm for TF-target identification in the {\it E. coli} expression data set. As the set of transcription factors is known, the binning adjustments can be limited to the targets, which simplifies the technique significantly. As before, $w_0 = w_{int}/K_{min}$, where $w_{int} = \text{median} \{ \sigma (X) | X \in V \}$, and $K_{min}$ and $K_{max}$ represent the minimum and maximum numbers of bins. \\
{\bf Step 1}: Fix $b_X = (K_{min}+K_{max})/2$. \\
{\bf Step 2}: If $b_Y > K_{max}$, then reset $b_Y = K_{max}$. \\
{\bf Step 3}: If $b_Y < K_{min}$, then $b_Y = K_{min}$. \\
{\bf Step 4}: Once the binning is fixed, we proceed to calculate the number of points falling within each rectangle, and the discretized form of the mutual information between the two variables. \\
{\bf Step 5}: We normalize the mutual information by dividing by $\min \{ H(X), H(Y) \}$, where $H(X)$ and $H(Y)$ are the entropies of $X$ and $Y$ calculated using the bin numbers $b_X$ and $b_Y$ respectively.
\end{appendices}
| {
"timestamp": "2014-06-24T02:13:23",
"yymm": "1406",
"arxiv_id": "1406.5104",
"language": "en",
"url": "https://arxiv.org/abs/1406.5104",
"abstract": "We identify fundamental issues with discretization when estimating information-theoretic quantities in the analysis of data. These difficulties are theoretical in nature and arise with discrete datasets carrying significant implications for the corresponding claims and results. Here we describe the origins of the methodological problems, and provide a clear illustration of their impact with the example of biological network reconstruction. We propose an algorithm (shared information metric) that corrects for the biases and the resulting improved performance of the algorithm demonstrates the need to take due consideration of this issue in different contexts.",
"subjects": "Quantitative Methods (q-bio.QM)",
"title": "On the Theory and Algorithm for rigorous discretization in applications of Information Theory"
} |
https://arxiv.org/abs/1704.03892 | Approximating the Largest Root and Applications to Interlacing Families | We study the problem of approximating the largest root of a real-rooted polynomial of degree $n$ using its top $k$ coefficients and give nearly matching upper and lower bounds. We present algorithms with running time polynomial in $k$ that use the top $k$ coefficients to approximate the maximum root within a factor of $n^{1/k}$ and $1+O(\tfrac{\log n}{k})^2$ when $k\leq \log n$ and $k>\log n$ respectively. We also prove corresponding information-theoretic lower bounds of $n^{\Omega(1/k)}$ and $1+\Omega\left(\frac{\log \frac{2n}{k}}{k}\right)^2$, and show strong lower bounds for a noisy version of the problem in which one is given access to approximate coefficients. This problem has applications in the context of the method of interlacing families of polynomials, which was used for proving the existence of Ramanujan graphs of all degrees, the solution of the Kadison-Singer problem, and bounding the integrality gap of the asymmetric traveling salesman problem. All of these involve computing the maximum root of certain real-rooted polynomials for which the top few coefficients are accessible in subexponential time. Our results yield an algorithm with the running time of $2^{\tilde O(\sqrt[3]n)}$ for all of them. | \section{Introduction}
For a non-negative vector ${{\bm{\mu}}}=(\mu_1,\dots,\mu_n)\in\mathbb{R}_+^n$, let $\chi_{{{\bm{\mu}}}}$ denote the unique monic polynomial with roots $\mu_1,\dots,\mu_n$:
\[ \chi_{{{\bm{\mu}}}}(x):=\prod_{i=1}^n(x-\mu_i). \]
Suppose that we do not know ${{\bm{\mu}}}$, but rather know the top $k$ coefficients of $\chi_{{{\bm{\mu}}}}$ where $1\leq k< n$. In more concrete terms, suppose that
\[ \chi_{{{\bm{\mu}}}}(x)=x^n+c_1x^{n-1}+c_2x^{n-2}+\dots+c_n, \]
and we only know $c_1,\dots, c_k$. What do $c_1,\dots, c_k$ tell us about the roots and in particular $\max_{i}\mu_{i}$?
\begin{problem}
\label{prob:main}
Given the top $k$ coefficients of a real rooted polynomial of degree $n$, how well can you approximate its largest root?
\end{problem}
This problem may seem completely impossible if $k$ is significantly smaller than $n$.
For example, consider the two polynomials $x^n-a$ and $x^n-b$. The largest roots of these two polynomials can differ arbitrarily in absolute value, and even knowing the top $n-1$ coefficients we cannot approximate the absolute value of the largest root at all.
The key to approaching the above problem is to exploit real-rootedness. One approach is to construct a polynomial with the given coefficients and study its roots.
Unfortunately, even assuming that the roots of the original polynomial are real and well-separated, adding an exponentially small amount of noise to the (bottom) coefficients can lead to constant sized perturbations of the roots in the complex plane --- the most
famous example is Wilkinson's polynomial \cite{Wilkinson84}: $$(x-1)(x-2)\ldots
(x-20).$$
Instead, we use the given coefficients to compute a polynomial of the roots, e.g., the $k$-th moment of the roots, and we use that polynomial to estimate the largest root.
Our contributions towards \autoref{prob:main} are as follows.
\paragraph{Efficient Algorithm for Approximating the Largest Root.}
In \autoref{thm:positive}, we present an upper bound showing that we can use the top $k$ coefficients of a real rooted polynomial with nonnegative roots to efficiently obtain an $\alpha_{k,n}$ approximation of the largest root where
\[
\alpha_{k,n}=\begin{cases}
n^{1/k}&k\leq \log n,\\
1+O(\frac{\log n}{k})^2&k>\log n.
\end{cases}
\]
Moreover such an approximation can be done in $\poly(k)$ time. This implies that exact access to $O\left(\frac{\log n}{\sqrt{\epsilon}}\right)$
coefficients is sufficient for
$(1+\epsilon)$-approximation of the largest root.
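To make this concrete, here is a minimal sketch of a moment-based estimator in this spirit (our own illustrative code, not the exact algorithm of \autoref{thm:positive}): compute the power sums $p_1,\dots,p_k$ of the roots from $c_1,\dots,c_k$ via Newton's identities and return $p_k^{1/k}$. For nonnegative roots, $\mu_{\max}^k \le p_k \le n\,\mu_{\max}^k$, so this already gives an $n^{1/k}$-approximation.

```python
def power_sums(c):
    """Power sums p_1..p_k of the roots of x^n + c_1 x^{n-1} + ... given the
    top-k coefficients c = [c_1, ..., c_k], via Newton's identities:
        p_k = -k*c_k - sum_{i=1}^{k-1} c_i * p_{k-i}.
    Runs in O(k^2) arithmetic operations."""
    p = []
    for k, ck in enumerate(c, start=1):
        p.append(-k * ck - sum(c[i - 1] * p[k - i - 1] for i in range(1, k)))
    return p

def max_root_estimate(c):
    """Estimate the largest root from the top-k coefficients, assuming all
    roots are nonnegative; over-estimates by at most a factor n^{1/k}."""
    k = len(c)
    return power_sums(c)[-1] ** (1.0 / k)
```

For $(x-1)(x-2)(x-3)=x^3-6x^2+11x-6$, the top three coefficients give $p_3=36$ and the estimate $36^{1/3}\approx 3.30$, within the guaranteed factor $3^{1/3}\approx 1.44$ of the true largest root $3$.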
\paragraph{Nearly Matching Lower Bounds.}
The main nontrivial part of this work is our nearly matching information-theoretic lower bounds.
In \autoref{thm:negative}, we show that
when $k<n^{1-\epsilon}$, there is no algorithm with approximation factor better than $\alpha_{k,n}^{c}$, for some constant $c$.
Chebyshev polynomials are critical for this construction as well. We also use known constructions for large-girth graphs, and the proof of \cite{MSS12} for a variant of Bilu-Linial's conjecture \cite{BL06}.
Our bounds can be made slightly sharper assuming Erd\H{o}s's girth conjecture.
\subsection{Motivation and Applications}
For many important polynomials, it is easy to compute the top $k$ coefficients exactly, whereas it is provably hard to compute all of them. One example is the matching polynomial of a graph, whose coefficients encode the number of matchings of various sizes. For this polynomial, computing the constant term, i.e. the number of perfect matchings, is \#P-hard \cite{Valiant79}, whereas for small $k$, one can compute the number of matchings of size $k$ exactly, in time $n^{O(k)}$, by simply enumerating all possibilities. Roots of the matching polynomial, and in particular the largest root arise in a number of important applications \cite{HL72, MSS12, SS13}. So it is natural to ask how well the largest root can be approximated from the top few coefficients.
Another example of a polynomial whose top coefficients are easy to compute is the independence polynomial of a graph, which is real rooted for claw-free graphs \cite{CS07}, and whose roots have connections to the Lov\'{a}sz Local Lemma (see e.g. \cite{HSV16}).
\paragraph{Subexponential Time Algorithms for the Method of Interlacing Polynomials}
Our main motivation for this work is the method of interlacing families of
polynomials~\cite{MSS12, MSS13, AO15}, which has been an essential tool in the
development of several recent results including the construction of Ramanujan
graphs via lifts \cite{MSS12, MSS15, HPS15}, the solution of the Kadison-Singer
problem \cite{MSS13}, and improved integrality gaps for the asymmetric traveling
salesman problem \cite{AO14}. Unfortunately, all these results on the existence
of expanders, matrix pavings, and thin trees, have the drawback of being
nonconstructive, in the sense that they do not give polynomial time algorithms
for finding the desired objects (with the notable exception of \cite{Cohen16}).
As such, the situation is somewhat similar to that of the Lov\'{a}sz Local Lemma
(which is able to guarantee the existence of certain rare objects
nonconstructively), before algorithmic proofs of it were found \cite{Beck91,
MT10}.
In \autoref{section:apps}, we use \autoref{thm:positive} to give a $2^{\tilde
O(\sqrt[3]{m})}$ time algorithm for rounding an interlacing family of depth $m$,
improving on the previously known running time of $2^{O(m)}$. This leads to
algorithms of the same running time for all of the problems mentioned above.
\paragraph{Lower Bounds Given Approximate Coefficients}
In the context of efficient algorithms, one might imagine that computing a much larger number of coefficients {\em approximately} might provide a better estimate of the largest root. In particular, we consider the following noisy version of \autoref{prob:main}:
\begin{problem}\label{prob:approx}
Given real numbers $a_1,\ldots,a_k$, promised to be
$(1+\delta)$-approximations of the first $k$ coefficients of a
real-rooted polynomial $p$, how well can you approximate the largest
root of $p$?
\end{problem}
An important extension of our information theoretic lower bounds is that
\autoref{prob:main} is extremely sensitive to noise: in
\autoref{prop:approxlb} we prove that even knowing {\em all} but the $k$-th
coefficient exactly and knowing the $k$-th one up to a $1+1/2^k$ error is no
better than knowing only the first $k-1$ coefficients exactly. We do this by
exhibiting two polynomials which agree on all their coefficients except for the
$k$-th, in which they differ slightly, but nonetheless have very different
largest roots.
This example is relevant in the context of interlacing families, because the
polynomials in our lower bound have a common interlacing and are characteristic polynomials of $2$-lifts of a base
graph, which means they could actually arise in the
proofs of \cite{MSS12, MSS13}. To appreciate this more broadly, one can consider the following taxonomy of increasingly structured
polynomials:
\begin{align*}\textrm{complex polynomials} &\supset \textrm{real-rooted polynomials} \supset
\textrm{mixed characteristic polynomials}\\
&\supset \textrm{characteristic polynomials of lifts of graphs}.\end{align*}
Our example complements the standard numerical analysis wisdom (as in
Wilkinson's example) that complex polynomial roots are in general terribly
ill-conditioned as functions of their coefficients, and shows that this fact
remains true even in the structured setting of interlacing families.
\autoref{prop:approxlb} is relevant to the quest for efficient algorithms for interlacing families
for the following reason. All of the coefficients of the matching polynomial of
a bipartite graph can be approximated to $1+1/\poly(n)$ error in polynomial
time, for any fixed polynomial, using Markov Chain Monte Carlo techniques
\cite{JS89,JSV04,friedland}. One might imagine that an extension of these
techniques could be used to approximate the coefficients of the more general
expected characteristic polynomials that appear in applications of interlacing
families. In fact for some families of interlacing polynomials (namely, the
mixed characteristic polynomials of \cite{MSS13}) we can design
Markov chain Monte Carlo techniques to approximate the top half of the
coefficients within $1+1/\poly(n)$ error.
Our information theoretic lower bounds rule out this method as a way to
approximate the largest root, at least in full generality, since knowing all of
the coefficients of a real-rooted polynomial up to a $(1+1/\poly(n))$ error for any $\poly(n)$ is no
better than just knowing the first $\log n$ coefficients exactly, in the worst
case, even under the promise that the given polynomials have a common
interlacing. In other words, even an MCMC oracle that gives $1+1/\poly(n)$
approximation of all coefficients would not generically allow one to
round an interlacing family of depth greater than logarithmic in $n$, since the
error accumulated at each step would be $1/\mathrm{polylog}(n)$.
\paragraph{Connections to Poisson Binomial Distributions} Finally, there is a
probabilistic view of \autoref{prob:main}. Assume that $X=B(p_1)+\dots+B(p_n)$
is a sum of independent Bernoulli random variables, i.e. a Poisson binomial,
with parameters $p_1,\dots,p_n\in[0,1]$. Then \autoref{prob:main} becomes the
following: Given the first $k$ moments of $X$ how well can we approximate
$\max_i p_i$? In this view, our paper is related to \cite{CD15}, where it was
shown that any pair of such Poisson binomial random variables with the same
first $k$ moments have total variation distance at most $2^{-\Omega(k)}$.
However, the bound on the total variation distance does not directly imply a
bound on the maximum $p_i$.
\paragraph{Discussion}
Besides conducting a precise study of the dependence of the largest root of a
real-rooted polynomial on its coefficients, the results of this
paper shed light on what a truly efficient algorithm for interlacing families
might look like. On one hand, our running time of $2^{\tilde O(m^{1/3})}$ shows
that the problem is not ETH-hard, and its unusual running time suggests that a faster algorithm (for instance,
quasipolynomial) may exist. On the other hand, our lower bounds show
that the polynomials that arise in this method are in general hard to compute in a rather robust sense: namely, obtaining an inverse
polynomial error approximation of their largest roots requires knowing {\em many} coefficients {\em
exactly}. This implies that in order to obtain an efficient algorithm for
even approximately simulating the interlacing families proof technique, one will have to exploit finer
properties of the polynomials at hand, or
find a more ``global'' proof which is able to reason about the error in
a more sophisticated amortized manner, or perhaps track a more well-conditioned
quantity in place of the largest root, which can be computed using fewer
coefficients and which still satisfies an approximate interlacing property.
\section{Preliminaries} \label{preliminaries}
We let $[n]$ denote the set $\{1,\dots,n\}$. We use the notation
$\binom{[n]}{k}$ to denote the family of subsets $T\subseteq[n]$ with $|T|=k$.
We let $S_n$ denote the set of permutations on $[n]$, i.e. the set of bijections
$\sigma:[n]\to[n]$.
We use bold letters to denote vectors. For a vector ${{\bm{\mu}}}\in\mathbb{R}^n$, we denote its
coordinates by $\mu_1,\dots,\mu_n$. We let $\mu_{\max}$ and $\mu_{\min}$ denote
$\max_i \mu_i$ and $\min_i \mu_i$ respectively.
For a symmetric matrix $A$, we denote the vector of eigenvalues of $A$, i.e. the
roots of $\det(xI-A)$, by ${{\bm{\lambda}}}(A)$. Similarly we denote the largest and
smallest eigenvalues by $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$. We slightly
abuse notation, and for a polynomial $p$ we write ${{\bm{\lambda}}}(p)$ to denote the
vector of roots of $p$. We also write $\lambda_{\max}(p)$ to denote the largest
root of $p$.
For a graph $G=(V,E)$ we let $\deg_{\max}(G)$ denote the maximum degree of its
vertices and $\deg_{\avg}(G)$ denote the average degree of its vertices, i.e.
$2|E|/|V|$.
What follows are mostly standard facts; the proofs of
\autoref{fact:linear-transform}, \autoref{fact:cheb-large-x},
and \autoref{fact:avgdeg} are included in
\autoref{appendix:prelims} for completeness.
\paragraph{Facts from Linear Algebra}
For a matrix $A\in\mathbb{R}^{n\times n}$, the characteristic polynomial of $A$ is defined as
$ \det(xI-A)$.
Letting $\sigma_k(A)$ be the sum of all principal $k$-by-$k$ minors of $A$, we
have:
$$ \det(xI-A)=\sum_{k=0}^n (-1)^k \sigma_k(A)\, x^{n-k}.$$
There are several algorithms that compute $\det(xI-A)$ for a matrix $A\in \mathbb{R}^{n\times n}$ in time polynomial in $n$. By the above identity, any such algorithm efficiently yields $\sigma_k(A)$ for any $1\leq k\leq n$.
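As a small sanity check of this identity (using numpy's built-in characteristic-polynomial routine; the example matrix is arbitrary):

```python
import numpy as np

# The coefficient of x^{n-k} in det(xI - A) is (-1)^k * sigma_k(A);
# np.poly(A) returns these coefficients (highest degree first), so the
# sums of principal k-by-k minors are recovered by flipping signs.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
c = np.poly(A)                                   # here: [1, -5, 5]
sigma = [(-1) ** k * c[k] for k in range(len(c))]
assert np.isclose(sigma[1], np.trace(A))         # sigma_1 = tr(A)
assert np.isclose(sigma[2], np.linalg.det(A))    # sigma_n = det(A)
```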
The following proposition is proved in \cite{MSS13} using the Cauchy-Binet formula.
\begin{proposition}
\label{prop:sigmak}
Let ${{\bf{v}}}_1,\dots,{{\bf{v}}}_m\in \mathbb{R}^n$. Then,
$$\det\left(xI-\sum_{i=1}^m {{\bf{v}}}_i{{\bf{v}}}_i^T\right) = \sum_{k=0}^n (-1)^k x^{n-k}\sum_{S\in \binom{[m]}{k}} \sigma_k\left(\sum_{i\in S} {{\bf{v}}}_i{{\bf{v}}}_i^T\right).$$
\end{proposition}
\paragraph{Symmetric Polynomials}
We will make heavy use of the elementary symmetric polynomials, which relate the
roots of a polynomial to its coefficients.
\begin{definition}
Let $e_k\in \mathbb{R}[\mu_1,\dots,\mu_n]$ denote the $k$-th elementary symmetric polynomial defined as
\[ e_k({{\bm{\mu}}}):=\sum_{T\in \binom{[n]}{k}}\prod_{i\in T}\mu_i. \]
\end{definition}
\begin{fact}
\label{fact:ekck}
Consider the monic univariate polynomial $\chi(x)=x^n+c_1x^{n-1}+\dots+c_n$. Suppose that $\mu_1,\dots,\mu_n$ are the roots of $\chi$. Then for every $k\in[n]$,
\[ c_k=(-1)^k e_k(\mu_1,\dots,\mu_n). \]
\end{fact}
This means that knowing the top $k$ coefficients of a polynomial is equivalent to knowing the first $k$ elementary symmetric polynomials of the roots.
It also implies the following fact about how shifting and scaling affect the elementary symmetric polynomials.
\begin{fact}
\label{fact:linear-transform}
Let ${{\bm{\mu}}}, {{\bm{\nu}}}\in \mathbb{R}^n$ be such that $e_i({{\bm{\mu}}})=e_i({{\bm{\nu}}})$ for $i=1,\dots, k$. If $a,b\in\mathbb{R}$ then $ e_i(a{{\bm{\mu}}}+b)=e_i(a{{\bm{\nu}}}+b)$ for $i=1,\dots,k$.
\end{fact}
We will use the following relationship between the elementary symmetric
polynomials and the power sum polynomials.
\begin{theorem}[Newton's Identities]
\label{thm:newton}
For $1\leq k\leq n$, the polynomial
$p_k({{\bm{\mu}}}):=\sum_{i=1}^n \mu_i^k$
can be written as
$q_k(e_1({{\bm{\mu}}}),\dots,e_k({{\bm{\mu}}})),$
where $q_k\in\mathbb{R}[e_1,\dots, e_k]$. Furthermore, $q_k$ can be evaluated at any point in $\poly(k)$ time.
\end{theorem}
One of the immediate corollaries of the above is the following.
\begin{corollary}
\label{cor:compute}
Let $p(x)\in\mathbb{R}[x]$ be a univariate polynomial with $\deg p\leq k$. Then $\sum_{i=1}^n p(\mu_i)$ can be written as
\[ q(e_1({{\bm{\mu}}}),\dots,e_k({{\bm{\mu}}})), \]
where $q\in\mathbb{R}[e_1,\dots,e_k]$. Furthermore, $q$ can be evaluated at any point in $\poly(k)$ time.
\end{corollary}
\autoref{thm:newton} shows how $p_1,\dots,p_k$ can be computed from $e_1,\dots,e_k$. The reverse is also true. A second set of identities, also known as Newton's identities, imply the following.
\begin{theorem}[Newton's Identities]
For each $k\in[n]$, $e_k({{\bm{\mu}}})$ can be written as a polynomial of $p_1({{\bm{\mu}}}),\dots,p_k({{\bm{\mu}}})$ which can be computed in time $\poly(k)$.
\end{theorem}
A corollary of the above and \autoref{thm:newton} is the following.
\begin{corollary}
\label{cor:ekpk}
For two vectors ${{\bm{\mu}}},{{\bm{\nu}}}\in\mathbb{R}^n$, we have
\[ \left(\forall i\in[k]: e_i({{\bm{\mu}}})=e_i({{\bm{\nu}}})\right) \iff \left(\forall i\in[k]: p_i({{\bm{\mu}}})=p_i({{\bm{\nu}}})\right). \]
\end{corollary}
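The reverse direction can be sketched as follows (our own illustration; exact rational arithmetic avoids rounding in the division by $k$):

```python
from fractions import Fraction

def elementary_from_power(p):
    """Recover e_1..e_k from the power sums p_1..p_k via Newton's identity
        k * e_k = sum_{i=1}^{k} (-1)^{i-1} e_{k-i} p_i,
    in O(k^2) arithmetic operations."""
    e = [Fraction(1)]                 # e_0 = 1
    for k in range(1, len(p) + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * Fraction(p[i - 1])
                for i in range(1, k + 1))
        e.append(s / Fraction(k))
    return e[1:]
```

For roots $1,2,3$ the power sums are $(6,14,36)$ and the routine returns the elementary symmetric values $(6,11,6)$, matching $(x-1)(x-2)(x-3)$.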
\paragraph{Chebyshev Polynomials}
Chebyshev polynomials of the first kind, which we will simply call Chebyshev polynomials, are defined as follows.
\begin{definition}\label{def:chebyshevpolyn}
Let the polynomials $T_0,T_1,\dots\in\mathbb{R}[x]$ be recursively defined as
\begin{align*}
T_0(x)&:=1,\\
T_1(x)&:=x,\\
T_{n+1}(x)&:=2xT_n(x)-T_{n-1}(x).
\end{align*}
We will call $T_k$ the $k$-th Chebyshev polynomial.
\end{definition}
Notice that the coefficients of $T_k$ can be computed in $\poly(k)$ time, for example via the above recurrence. Chebyshev polynomials have many useful properties, some of which we mention below. For further information, see \cite{Szeg39}.
\begin{fact}
For $k\geq 0$ and $\theta\in\mathbb{R}$, we have
\begin{align*}
T_k(\cos(\theta))&=\cos(k\theta),\\
T_k(\cosh(\theta))&=\cosh(k\theta).
\end{align*}
\end{fact}
\begin{fact}
The $k$-th Chebyshev polynomial $T_k$ has degree $k$.
\end{fact}
\begin{fact}
\label{fact:cheb-small-x}
For any $x\in[-1,1]$, we have $T_k(x)\in[-1,1]$.
\end{fact}
\begin{fact}
\label{fact:cheb-large-x}
For any integer $k\geq 0$, $T_k(1+x)$ is monotonically increasing for $x\geq 0$. Furthermore for $x\geq 0$,
\[ T_k(1+x)\geq (1+\sqrt{2x})^k/2. \]
\end{fact}
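As an illustration of the $\poly(k)$ computation of the coefficients (a sketch, with a numerical check of the identity $T_k(\cos\theta)=\cos(k\theta)$):

```python
import math

def chebyshev_coeffs(k):
    """Coefficients of T_k, lowest degree first, from the recurrence
    T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x); O(k^2) integer operations."""
    if k == 0:
        return [1]
    prev, cur = [1], [0, 1]                       # T_0 and T_1
    for _ in range(k - 1):
        shifted = [0] + [2 * c for c in cur]      # 2x * T_n
        padded = prev + [0] * (len(shifted) - len(prev))
        prev, cur = cur, [a - b for a, b in zip(shifted, padded)]
    return cur

def eval_poly(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))
```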
In our approximate lower bound we will use the following connection between
Chebyshev polynomials and graphs, due to Godsil and Gutman \cite{GG81}.
\begin{fact}\label{fact:godsil} If $A_n$ is the adjacency matrix of a cycle on $n$ vertices, then
$$\det(2xI-A_n) = 2\left(T_n(x)-1\right).$$
\end{fact}
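A quick numerical sanity check of this connection: the eigenvalues of the $n$-cycle are $2\cos(2\pi j/n)$ for $j=0,\dots,n-1$, i.e. exactly the points $2x$ with $T_n(x)=1$ (since $T_n(\cos(2\pi j/n))=\cos(2\pi j)=1$).

```python
import numpy as np

n = 7
A = np.zeros((n, n))
for i in range(n):                      # adjacency matrix of the n-cycle
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
eigs = np.sort(np.linalg.eigvalsh(A))
targets = 2 * np.cos(2 * np.pi * np.arange(n) / n)   # 2x with T_n(x) = 1
assert np.allclose(eigs, np.sort(targets))
```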
\paragraph{Graphs with Large Girth}
In order to prove some of our impossibility results, we use the existence of extremal graphs with no small cycles.
\begin{definition}
For an undirected graph $G$, we denote the length of its shortest cycle by $\girth(G)$. If $G$ is a forest, then $\girth(G)=\infty$.
\end{definition}
The following conjecture by Erd\H{o}s characterizes extremal graphs with no small cycles.
\begin{conjecture}[Erd\H{o}s's girth conjecture \cite{Erd64}]
\label{conj:erdos}
For every integer $k\geq 1$ and sufficiently large $n$, there exist graphs $G$ on $n$ vertices with $\girth(G)>2k$ that have $\Omega(n^{1+1/k})$ edges, or in other words satisfy $\deg_{\avg}(G)=\Omega(n^{1/k})$.
\end{conjecture}
This conjecture has been proven for $k=1,2,3,5$ \cite{Wen91}. We will use the
following more general construction of graphs of somewhat lower girth.
\begin{theorem}[\cite{LU95}]
\label{thm:large-girth}
If $d$ is a prime power and $t\geq 3$ is odd, there is a $d$-regular bipartite graph $G$ on $2d^t$ vertices with $\girth(G)\geq t+5$.
\end{theorem}
\paragraph{Signed Adjacency Matrices} Our lower bounds will also utilize facts
about signings of graphs.
\begin{definition}
For a graph $G=([n],E)$, we define a signing to be any function $s:E\to \{-1,+1\}$. We define the signed adjacency matrix $A_s\in\mathbb{R}^{n\times n}$, associated with signing $s$, as follows
\[
A_s(u, v):=\begin{cases}
0&\{u, v\}\notin E,\\
s(\{u, v\})&\{u, v\} \in E.\\
\end{cases}
\]
\end{definition}
Note that by definition, $A_s$ is symmetric and has zeros on the diagonal. The following fact is immediate.
\begin{fact}
\label{fact:As-roots}
For a signed adjacency matrix $A_s$ of a graph $G$, the eigenvalues ${{\bm{\lambda}}}(A_s)$, i.e. the roots of
$ \chi(x):=\det(xI-A_s)$,
are real. If $G$ is bipartite, the eigenvalues are symmetric about the origin (counting multiplicities).
\end{fact}
Signed adjacency matrices were used in \cite{MSS12} to prove the existence of bipartite Ramanujan graphs of all degrees. We state one of the main results of \cite{MSS12} below.
\begin{theorem}[\cite{MSS12}]
For every graph $G$, there exists a signing $s$ such that \[\lambda_{\max}(A_s)\leq 2\sqrt{\deg_{\max}(G)-1}.\]
\end{theorem}
By \autoref{fact:As-roots}, we have the following immediate corollary.
\begin{corollary}
\label{cor:signing}
For every bipartite graph $G$, there exists a signing $s$ such that the eigenvalues of $A_s$ have absolute value at most $2\sqrt{\deg_{\max}(G)-1}$.
\end{corollary}
We note that trivially signing every edge with $+1$ is often far from achieving the above bound as witnessed by the following fact.
\begin{fact}
\label{fact:avgdeg}
Let $A$ be the adjacency matrix of a graph $G=([n],E)$ (i.e. the signed adjacency matrix where the sign of every edge is $+1$). Then the maximum eigenvalue of $A$ is at least $\deg_{\avg}(G)$.
\end{fact}
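This fact is easy to check numerically. The sketch below (pure Python, with a small hard-coded graph of our choosing) estimates $\lambda_{\max}$ by power iteration on a connected non-bipartite graph and compares it with the average degree:

```python
import math

# Triangle {0,1,2} with a pendant vertex 3: 4 vertices, 4 edges, deg_avg = 2.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4
A = [[0.0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = A[v][u] = 1.0

def lambda_max(A, iters=500):
    """Power iteration; converges to the Perron root for a connected
    non-bipartite graph, starting from a positive vector."""
    n = len(A)
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient at the final iterate.
    return sum(v[i] * A[i][j] * v[j] for i in range(n) for j in range(n))

deg_avg = 2.0 * len(edges) / n
```

Here $\lambda_{\max}\approx 2.17$ exceeds $\deg_{\avg}=2$, in line with \autoref{fact:avgdeg}.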
\section{Approximation of the Largest Root}
In this section we give an answer to \autoref{prob:main}. As witnessed by \autoref{fact:ekck}, knowing the top $k$ coefficients of the polynomial $\chi_{{{\bm{\mu}}}}$ is the same as knowing $e_1({{\bm{\mu}}}),\dots,e_k({{\bm{\mu}}})$. Therefore, without loss of generality and more conveniently, we state the results in terms of knowing $e_1({{\bm{\mu}}}),\dots,e_k({{\bm{\mu}}})$.
\begin{theorem}
\label{thm:positive}
There is an algorithm that receives $n$ and $e_1({{\bm{\mu}}}),\dots,e_k({{\bm{\mu}}})$ for some unknown ${{\bm{\mu}}}\in\mathbb{R}_+^n$ as input and outputs $\mu_{\max}^*$, an approximation of $\mu_{\max}$, with the guarantee that
\[ \mu_{\max}^*\leq \mu_{\max}\leq \alpha_{k,n}\cdot \mu_{\max}^*, \]
where the approximation factor $\alpha_{k,n}$ is
\[
\alpha_{k,n} = \begin{cases}
n^{1/k}&k\leq \log n,\\
1+O(\frac{\log n}{k})^2&k> \log n.
\end{cases}
\]
Furthermore the algorithm runs in time $\poly(k)$.
\end{theorem}
Note that the behavior of the approximation factor changes between the two regimes $k\ll \log n$ and $k\gg \log n$. When $k>\log n$, the expression $n^{1/k}$ is $1+\Theta(\frac{\log n}{k})$, which can be a much worse bound than $1+O(\frac{\log n}{k})^2$. When $k$ is near the threshold $\log n$, the two bounds $n^{1/k}$ and $1+\Theta(\frac{\log n}{k})^2$ are close to each other, both being of order $1+\Theta(1)$.

We complement this result by showing information-theoretic lower bounds.
\begin{theorem}
\label{thm:negative}
For every $1\leq k< n$, there are two vectors ${{\bm{\mu}}},{{\bm{\nu}}}\in\mathbb{R}_+^n$ such that $e_i({{\bm{\mu}}})=e_i({{\bm{\nu}}})$ for $i=1,\dots,k$, and
\[ \frac{\nu_{\max}}{\mu_{\max}}\geq\beta_{k,n}, \]
where
\[
\beta_{k,n}= \begin{cases}
n^{\Omega(1/k)}&k\leq \log n,\\
1+\Omega\left(\frac{\log \frac{2n}{k}}{k}\right)^2&k> \log n.
\end{cases}
\]
\end{theorem}
This shows that no algorithm can approximate $\mu_{\max}$ to a factor better than $\beta_{k,n}$ using only $e_1({{\bm{\mu}}}),\dots,e_k({{\bm{\mu}}})$. Note that for $k<n^{1-{\epsilon}}$ we have $\beta_{k,n}\geq\alpha_{k,n}^c$ for some constant $c>0$ bounded away from zero.
For constant $k$, it is possible to give a constant multiplicative bound assuming Erd\H{o}s's girth conjecture.
\begin{theorem}
\label{thm:cond-negative}
Assume that $k$ is fixed and Erd\H{o}s's girth conjecture (\autoref{conj:erdos}) is true for graphs of girth $>2k$. Then for large enough $n$ there are two vectors ${{\bm{\mu}}},{{\bm{\nu}}}\in\mathbb{R}_+^n$ such that $e_i({{\bm{\mu}}})=e_i({{\bm{\nu}}})$ for $i=1,\dots,k$ and
\[ \frac{\nu_{\max}}{\mu_{\max}}\geq \Omega(n^{1/k}). \]
\end{theorem}
\subsection{Proof of Theorem \ref{thm:positive}: An Algorithm for Approximating the Largest Root}
We consider two cases: if $k\leq \log n$ we return $(p_k({{\bm{\mu}}})/n)^{1/k}$ as the estimate for the maximum root.
It is not hard to see that in this case, $(p_k({{\bm{\mu}}})/n)^{1/k}$ gives an $n^{1/k}$ approximation of the maximum root (see \autoref{claim:small-k} below).
For $k>\log n$ we can still use $(p_k({{\bm{\mu}}})/n)^{1/k}$ to estimate the maximum root, but this only guarantees a $1+O(\frac{\log n}{k})$ approximation.
We show that using the machinery of Chebyshev polynomials we can obtain a better bound.
The pseudocode for the algorithm can be seen in \autoref{alg:main}.
\begin{algorithm}
\caption{Algorithm For Approximating the Maximum Root From Top Coefficients}
\label{alg:main}
\begin{algorithmic}
\item[{\bf Input:}] $n$ and $e_1({{\bm{\mu}}}),e_2({{\bm{\mu}}}),\dots,e_k({{\bm{\mu}}})$ for some ${{\bm{\mu}}}\in\mathbb{R}_+^n$.
\item[{\bf Output:}] $\mu_{\max}^*$, an approximation of $\mu_{\max}$.
\State
\If{$k\leq \log n$}
\State Compute $p_k({{\bm{\mu}}})=\sum_{i=1}^n\mu_i^k$ using Newton's identities (\autoref{thm:newton}).
\State \Return $(p_k({{\bm{\mu}}})/n)^{1/k}$.
\Else
\State $t\leftarrow e_1({{\bm{\mu}}})$.
\Loop
\State Compute $p({{\bm{\mu}}}):=\sum_{i=1}^n T_k(\frac{\mu_i}{t})$ using \autoref{cor:compute}.
\If{$p({{\bm{\mu}}})>n$}
\State \Return $\mu_{\max}^*\leftarrow t$.
\EndIf
\State $t\leftarrow \frac{t}{1+(\frac{20\log n}{k})^2}$.
\EndLoop
\EndIf
\end{algorithmic}
\end{algorithm}
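For concreteness, the following self-contained Python sketch implements \autoref{alg:main} end to end: it recovers the power sums from $e_1,\dots,e_k$ by Newton's identities and evaluates $\sum_i T_k(\mu_i/t)$ from them by expanding $T_k$ into monomials (this step stands in for \autoref{cor:compute}; the function names are ours):

```python
import math

def cheb_coeffs(k):
    # Coefficients of T_k, lowest degree first, from the defining recurrence.
    if k == 0:
        return [1.0]
    prev, cur = [1.0], [0.0, 1.0]
    for _ in range(k - 1):
        nxt = [0.0] + [2.0 * c for c in cur]
        for i, c in enumerate(prev):
            nxt[i] -= c
        prev, cur = cur, nxt
    return cur

def power_sums(e, k):
    # Newton's identities: p_j = sum_{i<j} (-1)^(i-1) e_i p_{j-i} + (-1)^(j-1) j e_j.
    p = [0.0] * (k + 1)
    for j in range(1, k + 1):
        p[j] = sum((-1) ** (i - 1) * e[i] * p[j - i] for i in range(1, j))
        p[j] += (-1) ** (j - 1) * j * e[j]
    return p

def approx_max_root(n, e, k):
    """e[1..k] are the elementary symmetric polynomials of some
    unknown mu in R_+^n with mu_max > 0 (e[0] is unused)."""
    p = power_sums(e, k)
    if k <= math.log(n):
        return (p[k] / n) ** (1.0 / k)
    c = cheb_coeffs(k)
    shrink = 1.0 + (20.0 * math.log(n) / k) ** 2
    t = e[1]  # e_1 >= mu_max since all mu_i are nonnegative
    while True:
        # sum_i T_k(mu_i / t), expanded in power sums (p_0 = n).
        val = c[0] * n + sum(c[j] * p[j] / t ** j for j in range(1, k + 1))
        if val > n:
            return t
        t /= shrink
```

For large $k$ the monomial expansion of $T_k$ has huge coefficients, so a floating-point implementation like this is only a sketch; at scale one would use exact rational arithmetic or a more stable evaluation scheme.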
We will prove the following claims to show the correctness of \autoref{alg:main}. Let us start with the case $k\leq \log n$.
\begin{claim}
\label{claim:small-k}
For any $k\geq 1$ we have
\[ \Big(\frac{p_k({{\bm{\mu}}})}{n}\Big)^{1/k} \leq \mu_{\max}\leq p_k({{\bm{\mu}}})^{1/k}. \]
\end{claim}
\begin{proof}
Observe that
\[ p_k({{\bm{\mu}}})/n = \sum_{i=1}^n\frac{\mu_i^k}{n} \leq \sum_{i=1}^n \frac{\mu_{\max}^k}{n} = \mu_{\max}^k \leq \sum_{i=1}^n\mu_i^k =p_k({{\bm{\mu}}}). \]
Taking the $k$-th root of all sides of the above proves the claim.
\end{proof}
The rest of the section handles the case where $k>\log n$.
Our first claim shows that as long as $t\geq \mu_{\max}$, the algorithm keeps decreasing $t$ by a multiplicative factor of $(1-\Omega(\log(n)/k)^2)$.
Since at the beginning we have $t=e_1({{\bm{\mu}}}) \geq \mu_{\max}$,
we will have $\mu_{\max}^*\leq \mu_{\max}$.
\begin{claim}
\label{claim:large-k-sound}
For any $t\geq \mu_{\max}$,
\[ \sum_{i=1}^n T_k(\frac{\mu_i}{t})\leq n. \]
\end{claim}
\begin{proof}
If $t\geq \mu_{\max}$, then $\mu_i/t\in [0,1]$ for every $i\in[n]$. By \autoref{fact:cheb-small-x}, we have
\[
\sum_{i=1}^n T_k(\frac{\mu_i}{t})\leq \sum_{i=1}^n 1=n.
\]
\end{proof}
To finish the proof of correctness it is enough to show that $\mu_{\max}\leq \mu_{\max}^* (1+O(\log n/k)^2)$.
This is done in the next claim. It shows that as soon as $t$ gets lower than $\mu_{\max}$, within one more iteration of the loop, the algorithm terminates.
\begin{claim}
\label{claim:large-k-correct}
For $k>\log n$ and $t>0$, if $\mu_{\max}>(1+(\frac{20\log n}{k})^2)t$, then
\[ \sum_{i=1}^n T_k(\frac{\mu_i}{t})>n. \]
\end{claim}
\begin{proof}
When $\mu_{\max}>(1+(\frac{20\log n}{k})^2)t$, by \autoref{fact:cheb-large-x} we have
\begin{align*}
T_k(\frac{\mu_{\max}}{t}) & \geq T_k\left(1+\Big(\frac{20\log n}{k}\Big)^2\right)\geq \frac12 \left(1+\sqrt{2}\cdot \frac{20\log n}{k}\right)^k\\
& \geq \frac12 \exp\left(\frac{3\log n}{k}\right)^k > 2n,
\end{align*}
where we used the inequality $1+\sqrt{800}x\geq e^{3x}$ for $x\in [0, 1]$.
Now we have
\[
\sum_{i=1}^n T_k(\frac{\mu_i}{t})=T_k(\frac{\mu_{\max}}{t})+\sum_{i\neq \argmax_j \mu_j}T_k(\frac{\mu_i}{t})\geq 2n-(n-1)>n,
\]
where we used \autoref{fact:cheb-small-x} and \autoref{fact:cheb-large-x} to conclude $T_k(\frac{\mu_i}{t})\geq -1$ for every $i$.
\end{proof}
The above claim also gives us a bound on the number of iterations in which the algorithm terminates. This is because we start the loop with $t= e_1({{\bm{\mu}}})\leq n\mu_{\max}$ and the loop terminates within one iteration as soon as $t<\mu_{\max}$. Therefore the number of iterations is at most
\[ 1+\frac{\log n}{\log \left(1+(\frac{20\log n}{k})^2\right)}=O\left(\log n\cdot (\frac{k}{\log n})^2\right)=O(k^2). \]
\subsection{Proofs of Theorems \ref{thm:negative} and \ref{thm:cond-negative}: Matching Lower Bounds}
The machinery of Chebyshev polynomials was used to prove \autoref{thm:positive}. We show that this machinery can also be used to prove a weaker version of \autoref{thm:negative}.
\begin{theorem}
\label{thm:weak}
For every $1\leq k<n$, there are ${{\bm{\mu}}},{{\bm{\nu}}}\in\mathbb{R}_+^n$ such that $e_i({{\bm{\mu}}})=e_i({{\bm{\nu}}})$ for $i=1,\dots,k$ and
\[ \frac{\nu_{\max}}{\mu_{\max}}\geq 1+\Omega(1/k^2). \]
\end{theorem}
\begin{proof}
First let us prove this for $k=n-1$. Let ${{\bm{\mu}}}$ be the set of roots of $T_n(x-1)+1$ and ${{\bm{\nu}}}$ the set of roots of $T_n(x-1)-1$. These two polynomials are the same except for the constant term. It follows that $e_i({{\bm{\mu}}})=e_i({{\bm{\nu}}})$ for $i=1,\dots,n-1$. We use the following lemma to prove that ${{\bm{\mu}}},{{\bm{\nu}}}\in\mathbb{R}_+^n$.
\begin{lemma}
\label{lem:cheb-roots}
For $\theta\in\mathbb{R}$, the roots of $T_n(x)-\cos(\theta)$, counting multiplicities, are $\cos(\frac{\theta+2\pi i}{n})$ for $i=0,\dots, n-1$.
\end{lemma}
\begin{proof}
We have
\[ T_n\left(\cos(\frac{\theta+2\pi i}{n})\right)=\cos(\theta+2\pi i)=\cos(\theta). \]
For almost all $\theta$ these roots are distinct and since $T_n$ has degree $n$, it follows that they are all of the roots. When some of these roots collide, we can perturb $\theta$ and use the fact that roots are continuous functions of the polynomial coefficients to prove the statement.
\end{proof}
Using the above lemma for $\theta=\pi$ and $\theta=0$, we get that $\mu_i=1+\cos(\frac{\pi+2\pi i}{n})$ and $\nu_i=1+\cos(\frac{2\pi i}{n})$. This proves that ${{\bm{\mu}}},{{\bm{\nu}}}\in\mathbb{R}_+^n$. Moreover we have
\[ \frac{\nu_{\max}}{\mu_{\max}}=\frac{1+1}{1+\cos(\pi/n)}=1+\Omega(1/n^2). \]
This finishes the proof for $k=n-1$.
Now let us prove the statement for general $k$. By applying the above proof for $n=k+1$, we get $\tilde{{\bm{\mu}}},\tilde{{\bm{\nu}}}\in \mathbb{R}_+^{k+1}$ such that $e_i(\tilde{{\bm{\mu}}})=e_i(\tilde{{\bm{\nu}}})$ for $i=1,\dots,k$ and
\[\frac{\tilde\nu_{\max}}{\tilde\mu_{\max}}\geq 1+\Omega(1/k^2). \]
Now construct ${{\bm{\mu}}},{{\bm{\nu}}}$ from $\tilde{{\bm{\mu}}},\tilde{{\bm{\nu}}}$ by appending zeros to make the total count $n$. It is not hard to see, using \autoref{cor:ekpk}, that $e_i({{\bm{\mu}}})=e_i({{\bm{\nu}}})$ for $i=1,\dots,k$. Moreover, $\mu_{\max}=\tilde\mu_{\max}$ and $\nu_{\max}=\tilde\nu_{\max}$. This finishes the proof.
\end{proof}
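The construction in this proof is easy to verify numerically. The sketch below (our own helper code) builds the two root multisets for $n=6$ from \autoref{lem:cheb-roots}, checks that all but the last elementary symmetric polynomial agree, and checks the gap between the maxima:

```python
import math

def elem_sym(roots):
    """e[j] = j-th elementary symmetric polynomial of the given roots."""
    e = [1.0] + [0.0] * len(roots)
    for r in roots:
        for j in range(len(roots), 0, -1):
            e[j] += r * e[j - 1]
    return e

n = 6
# Roots of T_n(x-1)+1 and T_n(x-1)-1, via the lemma with theta = pi and theta = 0.
mu = [1.0 + math.cos((math.pi + 2 * math.pi * i) / n) for i in range(n)]
nu = [1.0 + math.cos(2 * math.pi * i / n) for i in range(n)]
e_mu, e_nu = elem_sym(mu), elem_sym(nu)
```

The first $n-1$ elementary symmetric polynomials coincide, the $n$-th ones differ by $2/2^{n-1}$ (the constant terms of the two monic polynomials), and $\nu_{\max}/\mu_{\max}=2/(1+\cos(\pi/n))=1+\Omega(1/n^2)$.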
Note that the above lower bound is the same as the lower bound in \autoref{thm:negative} when $k=\Omega(n)$. However, to prove \autoref{thm:negative} and \autoref{thm:cond-negative} we need more tools. The crucial idea we use to get the stronger \autoref{thm:negative} and \autoref{thm:cond-negative} is the following observation about signed adjacency matrices for graphs of large girth.
\begin{lemma}
\label{lem:sign-not-matter}
Let $G=([n], E)$ be a graph and $D\in \mathbb{R}^{n\times n}$ an arbitrary diagonal matrix. If $\girth(G)>k$, then the top $k$ coefficients of the polynomial $\chi(x)=\det(xI-(D+A_s))$ are independent of the signing $s$. In other words, \[e_i({{\bm{\lambda}}}(D+A_{s_1}))=e_i({{\bm{\lambda}}}(D+A_{s_2})),\] for any two signings $s_1,s_2$ and $i=1,\dots,k$.
\end{lemma}
We will apply the above lemma, with $D=0$, to graphs of large girth constructed based on \autoref{conj:erdos} or \autoref{thm:large-girth}, in order to prove \autoref{thm:cond-negative} and \autoref{thm:negative} for the regime $k\leq \Theta(\log n)$. In order to prove \autoref{thm:cond-negative} for the regime $k\geq \Theta(\log n)$, we marry these constructions with Chebyshev polynomials. We will prove the above lemma at the end of this section, after proving \autoref{thm:negative} and \autoref{thm:cond-negative}.
First let us prove \autoref{thm:cond-negative}.
\begin{proofof}{\autoref{thm:cond-negative}}
We apply \autoref{lem:sign-not-matter} to the following graph
construction, the proof of which we defer to \autoref{appendix:constructions}.
\begin{claim}
\label{claim:erdos-max-deg}
Let $k$ be fixed and assume that \autoref{conj:erdos} is true for graphs of girth $>2k$. Then, for all sufficiently large $n$, there exist bipartite graphs $G=([n], E)$ with $\girth(G)>2k$, $\deg_{\max}(G)=O(n^{1/k})$, and $\deg_{\avg}(G)=\Omega(n^{1/k})$.
\end{claim}
Let $G$ be the graph from the above claim. Let $s_1$ be the trivial signing that assigns $+1$ to every edge, and $s_2$ the signing guaranteed by \autoref{cor:signing}. Now let ${{\bm{\nu}}}={{\bm{\lambda}}}(A_{s_1}^2)$, i.e.\ the squares of the eigenvalues of $A_{s_1}$, and ${{\bm{\mu}}}={{\bm{\lambda}}}(A_{s_2}^2)$, i.e.\ the squares of the eigenvalues of $A_{s_2}$. By \autoref{lem:sign-not-matter} and \autoref{cor:ekpk}, we have
\[ p_i({{\bm{\nu}}})=p_{2i}({{\bm{\lambda}}}(A_{s_1}))=p_{2i}({{\bm{\lambda}}}(A_{s_2}))=p_i({{\bm{\mu}}}), \]
for $i=1,\dots,k$. By another application of \autoref{cor:ekpk}, we have $e_i({{\bm{\mu}}})=e_i({{\bm{\nu}}})$ for $i=1,\dots,k$.
On the other hand, by \autoref{cor:signing}, we have
\[ \mu_{\max}=\lambda_{\max}(A_{s_2})^2\leq 4(\deg_{\max}(G)-1)=O(n^{1/k}), \]
and by \autoref{fact:avgdeg}, we have
\[ \nu_{\max}=\lambda_{\max}(A_{s_1})^2\geq \deg_{\avg}(G)^2=\Omega(n^{2/k}). \]
Therefore
\[ \frac{\nu_{\max}}{\mu_{\max}}=\frac{\Omega(n^{2/k})}{O(n^{1/k})}=\Omega(n^{1/k}). \]
\end{proofof}
Now let us prove \autoref{thm:negative} for $k\leq \Theta(\log n)$.
\begin{proofof}{\autoref{thm:negative} for $k\leq \Theta(\log n)$}
We apply \autoref{lem:sign-not-matter} to the following graph
construction, the proof of which we defer to \autoref{appendix:constructions}.
\begin{claim}
\label{claim:large-girth-const}
Let $d$ be a prime. For all sufficiently large $n$, there exist bipartite graphs $G=([n], E)$ with $\girth(G)=\Omega(\log n)$, $\deg_{\max}(G)\leq d$, and $\deg_{\avg}(G)\geq d/2$.
\end{claim}
We will fix $d$ to a specific prime later. Similar to the proof of \autoref{thm:cond-negative}, we let $s_1$ be the trivial signing with all $+1$s and $s_2$ be the signing guaranteed by \autoref{cor:signing}. Let $t=\Omega(\log n/k)$ be an even integer such that $tk<\girth(G)$. Such a $t$ exists when $k<c\log n$ for some constant $c$. Take ${{\bm{\nu}}}={{\bm{\lambda}}}(A_{s_1}^t)$ and ${{\bm{\mu}}}={{\bm{\lambda}}}(A_{s_2}^t)$. Then we have
\[ \mu_{\max}=\lambda_{\max}(A_{s_2})^t\leq (2\sqrt{d-1})^t, \]
and
\[ \nu_{\max}=\lambda_{\max}(A_{s_1})^t\geq (d/2)^t. \]
This means that
\[ \frac{\nu_{\max}}{\mu_{\max}}\geq \left(\frac{d}{4\sqrt{d-1}}\right)^t\geq e^{\Omega(\log n/k)}=n^{\Omega(1/k)},
\]
as long as $\frac{d}{4\sqrt{d-1}}>e$, which happens for sufficiently large $d$ (such as $d=127$).
It only remains to show that $e_i({{\bm{\mu}}})=e_i({{\bm{\nu}}})$ for $i=1,\dots,k$. For every $i\in [k]$ we have $t\cdot i< \girth(G)$, which by \autoref{lem:sign-not-matter} gives us
\[ p_{i}({{\bm{\nu}}})=p_{t\cdot i}({{\bm{\lambda}}}(A_{s_1}))=p_{t\cdot i}({{\bm{\lambda}}}(A_{s_2}))=p_{i}({{\bm{\mu}}}), \]
and this finishes the proof because of \autoref{cor:ekpk}.
\end{proofof}
The above method unfortunately does not seem to directly extend to the regime $k\geq \Theta(\log n)$, since for large $k$, getting $\girth(G)>k$ requires many vertices of degree at most $2$.\footnote{Unless all degrees are $2$ in which case we can actually reprove \autoref{thm:weak}; we omit the details here.} Instead we use the machinery of Chebyshev polynomials to boost our graph constructions.
\begin{proofof}{\autoref{thm:negative} for $k\geq \Theta(\log n)$}
Since \autoref{thm:weak} proves the same desired bound as \autoref{thm:negative} when $k=\Omega(n)$, we may without loss of generality assume that $n/k$ is at least a large enough constant. Let $m$ be the largest integer such that
\begin{equation}
\label{eq:m-choice}
c\cdot \frac{m}{\log m}\leq \frac{n}{k},
\end{equation}
where $c$ is a large constant that we will fix later. It is easy to see that
\begin{equation}
\label{eq:m-assymp}
m=\Theta\left(\frac{n}{k}\log(\frac{n}{k})\right).
\end{equation}
We have already proved \autoref{thm:negative} for the small $k$ regime. Using this proof (for $n=m$ and $k=\Theta(\log m)$), we can find $\tilde{{\bm{\mu}}},\tilde{{\bm{\nu}}}\in\mathbb{R}_+^m$ such that $e_i(\tilde{{\bm{\mu}}})=e_i(\tilde{{\bm{\nu}}})$ for $i=1,\dots,\Omega(\log m)$ and $\tilde\nu_{\max}/\tilde\mu_{\max}\geq m^{1/\Theta(\log m)}\geq 2$. Without loss of generality, by a simple scaling, we may assume that $\tilde\nu_{\max}=1$ and $\tilde\mu_{\max}\leq 1/2$.
Let $\chi_{\tilde{{\bm{\mu}}}}(x),\chi_{\tilde{{\bm{\nu}}}}(x)\in\mathbb{R}[x]$ be the unique monic polynomials whose roots are $\tilde{{\bm{\mu}}},\tilde{{\bm{\nu}}}$ respectively. By construction, the top $\Omega(\log m)$ coefficients of these polynomials are the same. We boost the number of equal coefficients by composing them with Chebyshev polynomials. Let $p(x):=\chi_{\tilde{{\bm{\mu}}}}(T_t(x))$ and $q(x):=\chi_{\tilde{{\bm{\nu}}}}(T_t(x))$ where $t=\lfloor n/m\rfloor$. Note that $\deg p=\deg q=tm\leq n$. We let ${{\bm{\mu}}},{{\bm{\nu}}}$ be the roots of $p(x),q(x)$ together with some additional zeros to make their counts $n$. First, we show that the top $k$ coefficients of $p,q$ are the same. Then we show that they are real rooted, i.e., ${{\bm{\mu}}},{{\bm{\nu}}}$ are real vectors. Finally, we lower bound $\nu_{\max}/\mu_{\max}$.
Note that $p(x),q(x)$ have degree $tm$. They are not monic, but their leading terms are the same. Besides the leading terms, we claim that they have the same top $\Omega(t\log m)$ coefficients. This follows from the fact that $T_t(x)$ is a degree $t$ polynomial. When expanding either $\chi_{\tilde{{\bm{\nu}}}}(T_t(x))$ or $\chi_{\tilde{{\bm{\mu}}}}(T_t(x))$, terms of degree $\leq m-\Omega(\log m)$ in $\chi_{\tilde{{\bm{\mu}}}},\chi_{\tilde{{\bm{\nu}}}}$ produce monomials of degree at most $tm-\Omega(t\log m)$, which means that the top $\Omega(t\log m)$ coefficients are the same. This shows that the first $\Omega(t\log m)$ elementary symmetric polynomials of ${{\bm{\mu}}},{{\bm{\nu}}}$ are the same. In particular, the first $k$ elementary symmetric polynomials are the same, since
\[ \Omega(t\log m)=\Omega(\frac{n\log m}{m})\geq \Omega(ck)\geq k,\]
where for the first inequality we used \eqref{eq:m-choice} and for the second inequality we assumed $c$ is large enough that it cancels the hidden constants in $\Omega$.
It is not obvious if ${{\bm{\mu}}}, {{\bm{\nu}}}$ are even real. This is where we crucially use the properties of the Chebyshev polynomial $T_t(x)$. Note that the roots of $\chi_{\tilde{{\bm{\mu}}}}(x)$ and $\chi_{\tilde{{\bm{\nu}}}}(x)$, i.e. $\tilde\mu_i$'s and $\tilde\nu_i$'s are all in $[0,1]\subseteq[-1,1]$. Therefore each one of them can be written as $\cos(\theta)$ for some $\theta\in\mathbb{R}$. By \autoref{lem:cheb-roots} the equation
\[ T_t(x)=\cos(\theta),\]
has $t$ real roots (counting multiplicities), and they are simply $x=\cos(\frac{\theta+2\pi i}{t})$ for $i=0,\dots,t-1$. So for each root of $\chi_{\tilde{{\bm{\mu}}}}(x)$ we have $t$ roots of $\chi_{\tilde{{\bm{\mu}}}}(T_t(x))$, all in $[-1,1]$. This means that all of the roots of $p(x)$ are real and in $[-1, 1]$. By a similar argument, all of the roots of $q(x)$ are real and in $[-1, 1]$.
The largest root of $q(x)$ is $1$, since
\[ q(1)=\chi_{\tilde{{\bm{\nu}}}}(T_t(1))=\chi_{\tilde{{\bm{\nu}}}}(1)=\chi_{\tilde{{\bm{\nu}}}}(\tilde\nu_{\max})=0. \]
On the other hand, the largest root of $p(x)$ is at most $\cos(\pi/3t)$, because for any $x\in(\cos(\pi/3t),1]$ there is $\theta\in[0,\pi/3t)$ such that $x=\cos(\theta)$ and this means
\[ p(x)=\chi_{\tilde{{\bm{\mu}}}}(T_t(\cos(\theta)))=\chi_{\tilde{{\bm{\mu}}}}(\cos(t\theta))\neq 0, \]
because $\cos(t\theta)>\cos(\pi/3)=1/2\geq \tilde\mu_{\max}$.
By the above arguments, ${{\bm{\mu}}},{{\bm{\nu}}}$ satisfy almost all of the desired properties, except that they could be negative. However we know that ${{\bm{\mu}}},{{\bm{\nu}}}\in[-1,1]^n$. So using \autoref{fact:linear-transform} we can easily make them nonnegative. We simply replace ${{\bm{\mu}}},{{\bm{\nu}}}$ by ${{\bm{\mu}}}+1,{{\bm{\nu}}}+1$. Then ${{\bm{\mu}}},{{\bm{\nu}}}\in[0,2]^{n}$ and $e_i({{\bm{\mu}}})=e_i({{\bm{\nu}}})$ for $i=1,\dots,k$. Finally, we have
\[ \frac{\nu_{\max}}{\mu_{\max}}\geq \frac{1+1}{1+\cos(\pi/3t)}=1+\Omega\left(\frac{1}{t^2}\right)=1+\Omega\left(\frac{m}{n}\right)^2=1+\Omega\left(\frac{\log\frac{2n}{k}}{k}\right)^2, \]
where we used \eqref{eq:m-assymp} for the last equality.
\end{proofof}
Having finished the proofs of \autoref{thm:negative}, \autoref{thm:cond-negative} we are ready to prove \autoref{lem:sign-not-matter}.
\begin{proofof}{\autoref{lem:sign-not-matter}}
By \autoref{cor:ekpk}, it is enough to prove that
\begin{equation*}
p_k({{\bm{\lambda}}}(D+A_{s_1}))=p_k({{\bm{\lambda}}}(D+A_{s_2})),
\end{equation*}
for $k<\girth(G)$. This is the same as proving
\begin{equation}
\label{eq:power-k}
\trace\left((D+A_{s_1})^k\right)=\trace\left((D+A_{s_2})^k\right).
\end{equation}
For a matrix $M\in\mathbb{R}^{n\times n}$ we have the following identity
\[ \trace(M^k)=\sum_{(v_1,\dots,v_k)\in [n]^k}M_{v_1,v_2}M_{v_2,v_3}\dots M_{v_{k-1},v_k}M_{v_k, v_{1}}.\]
For ease of notation let us identify $v_{k+1}$ with $v_1$. We apply the above formula to both sides of \eqref{eq:power-k}. The sequence $(v_1,\dots,v_k)$ can be interpreted as a sequence of vertices in the graph $G$. If for any $i$, $v_i\neq v_{i+1}$ and $\{v_i,v_{i+1}\}\notin E$, then the term inside the sum vanishes. Therefore we can restrict the sum to those terms $(v_1,\dots,v_k)$ where for each $i\in[k]$, either $v_i=v_{i+1}$ or $v_i$ and $v_{i+1}$ are connected in $G$. To borrow and abuse some notation from Markov chains, let us call such a sequence a lazy closed walk of length $k$. In order to prove \eqref{eq:power-k} it is enough to prove that for any such lazy closed walk we get the same term for both $s_1$ and $s_2$.
Let $(v_1,\dots,v_k)$ be one such lazy closed walk. Consider a particle that at time $i$ resides at $v_i$. In each step, the particle either does not move or moves to a neighboring vertex, and at time $k+1$ it returns to its starting position. For each step that the particle does not move we get one of the entries of $D$, corresponding to the current vertex, as a factor. This is clearly independent of the signing. When the particle moves however, we get the sign of the edge over which it moved as a factor. We will show that the particle must cross each edge an even number of times. Therefore the signs for each edge cancel each other and we get the same result for $s_1,s_2$.
Consider the induced subgraph on $v_1,\dots, v_k$. Because $k<\girth(G)$, this subgraph has no cycles; therefore it must be a tree. A (lazy) closed walk crosses any cut in a graph an even number of times. Each edge in this tree constitutes a cut. Therefore the lazy closed walk $(v_1,\dots,v_k)$ must cross each edge in the tree an even number of times.
\end{proofof}
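The lemma is simple to confirm on a small example. A sketch (pure Python; the $6$-cycle has girth $6$, and the two signings we pick differ on a single edge):

```python
# 6-cycle with diagonal D; signings s1 (all +1) and s2 (edge {0,1} flipped).
n = 6
D = [0.5, -1.0, 2.0, 0.0, 1.0, 3.0]

def signed_matrix(flip):
    M = [[0.0] * n for _ in range(n)]
    for v in range(n):
        M[v][v] = D[v]
    for v in range(n):
        u = (v + 1) % n
        sign = -1.0 if flip and {u, v} == {0, 1} else 1.0
        M[u][v] = M[v][u] = sign
    return M

def trace_pow(M, k):
    # trace(M^k) by repeated multiplication (fine for a 6x6 matrix).
    P = [row[:] for row in M]
    for _ in range(k - 1):
        P = [[sum(P[i][l] * M[l][j] for l in range(n)) for j in range(n)]
             for i in range(n)]
    return sum(P[i][i] for i in range(n))

M1, M2 = signed_matrix(False), signed_matrix(True)
```

The traces of $(D+A_{s_1})^k$ and $(D+A_{s_2})^k$ agree for $k=1,\dots,5<\girth(G)$, while for $k=6$ they differ by exactly $24$: the $12$ closed walks going once around the cycle pick up the flipped sign, and all other terms cross each edge an even number of times.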
\subsection{Lower Bound Given Approximate Coefficients}
The lower bounds proved in the previous section show that knowing a small number of the coefficients of a polynomial {\em exactly} is insufficient to obtain a good estimate of its largest root.
In this section we generalize the construction of \autoref{thm:negative} to provide a corresponding lower bound for \autoref{prob:approx}.
\begin{proposition}
\label{prop:approxlb}
For every integer $n>1$ and $1<k<n$ there are
degree $n$ polynomials $r(x),s(x)$ such that
\begin{enumerate}
\item All of the coefficients of $r$ and $s$ except for the $2k$-th are exactly
equal, and the $2k$-th coefficients are within a multiplicative factor
of $1+\frac{4}{2^{2k}}$.
\item The largest root of $s$ is at least $1+\Omega(1/k^2)$ times the largest root of $r$.
\item $r$ and $s$ have a common interlacing.
\item $r$ and $s$ are characteristic polynomials of graph Laplacians. Further,
these Laplacians correspond to $2$-lifts of a common base graph.
\end{enumerate}
\end{proposition}
\begin{proof}
Let
$$r(x):=2T_k^2(3/2-x)\qquad \text{and}\qquad s(x):=T_{2k}(3/2-x).$$
Since
$$T_{2k}=2T_k^2-1,$$
these polynomials differ only in their constant terms. Moreover, we have
$$r(0)=2T_k^2(3/2)\ge (2^{k-1})^2\qquad \text{and}\qquad s(0)\ge 2^{2k-1}/2$$
by \autoref{fact:cheb-large-x}. Thus, they agree on the first $2k-1$
coefficients, and differ by a multiplicative factor of at most
$$\frac{2^{2k-2}+1}{2^{2k-2}}=1+4/2^{2k}$$ in the $2k$-th coefficient,
establishing (1).
To see (2), observe that $r(x)$ has largest root $3/2+\cos(\pi/2k)$
whereas $s(x)$ has largest root $3/2+\cos(\pi/4k)$. Since the
difference of these numbers is
$$\cos(\pi/4k)-\cos(\pi/2k) =
\frac{\pi^2}{2}\left(\frac{1}{4k^2}-\frac{1}{16k^2}\right)+O(1/k^4)=\Theta(1/k^2),$$
we conclude that the ratio of the largest root of $s$ to the largest root
of $r$ is at least $1+\Omega(1/k^2)$.
To see (3), observe that the roots of $r(x)$ are
$$ 3/2-\cos\left(\frac{(2j+1)\pi}{2k}\right)\quad j=0,\ldots,k-1\quad\textrm{with multiplicity
$2$}$$
and the roots of $s(x)$ are
$$ 3/2-\cos\left(\frac{(2j+1)\pi}{4k}\right)\quad j=0,\ldots,2k-1\quad\textrm{with
multiplicity $1$},$$
whence $r$ and $s$ have a common interlacing according to
\autoref{defn:interlacing}.
For (4) we first apply \autoref{fact:godsil} to interpret both $r$ and
$s$ as characteristic polynomials of cycles.
Let $C_k$ denote a cycle of length $k$ and let $C_k\cup C_k$ denote a
union of two such cycles. Then we have:
$$ \det(2xI-A_{C_k\cup C_k}) = \det(2xI-A_{C_k})^2 = 4T_k(x)^2 =
2r(3/2-x),$$
and
$$\det(2xI-A_{C_{2k}}) = 2T_{2k}(x) = 2s(3/2-x),$$
whence
$$r(x) = \frac12 \det(3I-A_{C_k\cup C_k}-2xI)=\frac{(-2)^{2k}}{2}\det\left(xI-\left((3/2)I-(1/2)A_{C_k\cup C_k}\right)\right)$$
and
$$s(x)=\frac12 \det(3I-A_{C_{2k}}-2xI) =
\frac{(-2)^{2k}}{2}\det\left(xI-\left((3/2)I-(1/2)A_{C_{2k}}\right)\right),$$
which are characteristic polynomials of graph Laplacians of weighted
graphs with self loops. Note that both graphs are $2$-covers of $C_k$. Since multiplying by constants does not change
any of the properties we are interested in, we can ignore them.
Considering $\tilde{r}(x)=x^{n-2k}r(x)$ and $\tilde{s}(x)=x^{n-2k}s(x)$
yields examples of the desired degree $n$; note that multiplying by
$x^{n-2k}$ simply corresponds to adding isolated vertices to the
corresponding graphs.
\end{proof}
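The identity behind the construction, $T_{2k}=2T_k^2-1$, and the resulting fact that $r$ and $s$ differ only in their constant term, can be checked numerically (evaluation by the recurrence; the helper below is our own):

```python
def T(n, x):
    """Evaluate T_n(x) by the recurrence T_{j+1} = 2x T_j - T_{j-1}."""
    a, b = 1.0, float(x)
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2.0 * x * b - a
    return b

def r(x, k):
    return 2.0 * T(k, 1.5 - x) ** 2

def s(x, k):
    return T(2 * k, 1.5 - x)
```

Since $r-s\equiv 1$ pointwise, the two polynomials indeed share every coefficient except the constant one.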
\section{Applications to Interlacing Families}\label{section:apps}
In this section we use \autoref{thm:positive} to give a $2^{\tilde O(\sqrt[3]{m})}$ time algorithm for rounding an interlacing family of depth $m$. Let us start by defining an interlacing family.
\begin{definition}[Interlacing]\label{defn:interlacing}
We say that a real rooted polynomial $g(x) = \alpha_0 \prod_{i=1}^{n-1} (x-\alpha_i)$ interlaces a real rooted polynomial $f(x) = \beta_0 \prod_{i=1}^n (x-\beta_i)$ if
$$\beta_1 \leq \alpha_1 \leq \beta_2 \leq \alpha_2 \leq \dots \leq \alpha_{n-1} \leq \beta_n. $$
\end{definition}
We say that polynomials $f_1,\dots,f_k$ have a common interlacing if there is a polynomial $g$ such that $g$ interlaces all $f_i$.
The following key lemma is proved in \cite{MSS12}.
\begin{lemma}
\label{lem:interlacingroot}
Let $f_1,\dots,f_k$ be polynomials of the same degree that are real rooted and have positive leading coefficients.
If $f_1,\dots,f_k$ have a common interlacing, then there is an $i$
such that
$$ \lambda_{\max}(f_i) \leq \lambda_{\max}(f_1+\dots+f_k).$$
\end{lemma}
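A tiny worked instance of the lemma (toy polynomials of our choosing, not from the paper): $f_1=x(x-2)$ and $f_2=(x-1)(x-3)$ share the common interlacer $g=x-1$, and the largest root of $f_1+f_2$ upper bounds the smaller of the two largest roots.

```python
import math

def quad_max_root(a, b, c):
    """Larger root of a x^2 + b x + c (assumed real rooted, a > 0)."""
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

f1 = (1.0, -2.0, 0.0)   # roots 0, 2
f2 = (1.0, -4.0, 3.0)   # roots 1, 3
fsum = tuple(u + v for u, v in zip(f1, f2))  # 2x^2 - 6x + 3
```

Here $\lambda_{\max}(f_1+f_2)=\frac{3+\sqrt{3}}{2}\approx 2.37$, and indeed $\min_i\lambda_{\max}(f_i)=2\leq 2.37$, as the lemma guarantees.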
\begin{definition}[Interlacing Family]
\label{def:interlacingfamily}
Let $S_1,\dots,S_m$ be finite sets.
Let ${\cal F}\subseteq S_1\times S_2\times \dots \times S_m$ be nonempty. For any $(s_1,s_2,\dots,s_m)\in{\cal F}$, let $f_{s_1,\dots,s_m}(x)$ be a real rooted polynomial of degree $n$ with a positive leading coefficient.
For $(s_1,\dots,s_k)\in S_1\times \dots\times S_k$ with $k<m$, let
$$ {\cal F}_{s_1,\dots,s_k}:=\{(t_1,\dots,t_m)\in{\cal F}: s_i=t_i, \forall 1\leq i\leq k\}.$$
Note that ${\cal F}={\cal F}_{\emptyset}$.
Define
$$f_{s_1,\dots,s_k} = \sum_{t_1,\dots,t_m \in{\cal F}_{s_1,\ldots,s_k}} f_{t_1,\dots,t_m},$$
and
$$ f_{\emptyset} = \sum_{t_1,\dots,t_m\in{\cal F}} f_{t_1,\dots,t_m}.$$
We say polynomials $\{f_{s_1,\dots,s_m}\}_{(s_1,\dots,s_m)\in{\cal F}}$ form an {\em interlacing family} if for all $0\leq k<m$ and all
$(s_1,\dots,s_k)\in S_1\times \dots\times S_k$ the following holds: the polynomials
$f_{s_1,\dots,s_k,t}$ for $t\in S_{k+1}$ which are not identically zero
have a common interlacing.
\end{definition}
In the above definition we say $m$ is the depth of the interlacing family.
It follows by repeated applications of \autoref{lem:interlacingroot} that for any interlacing family $\{f_{s_1,\dots,s_m}\}_{s_1,\dots,s_m\in{\cal F}}$, there is a polynomial $f_{s_1,\dots,s_m}$ such that the largest root of $f_{s_1,\dots,s_m}$ is at most the largest root of $f_{\emptyset}$ {\cite[Thm 3.4]{MSS13}}.
For an $\alpha>1$, an $\alpha$-approximation {\em rounding} algorithm for an interlacing family $\{f_{s_1,\dots,s_m}\}_{s_1,\dots,s_m\in{\cal F}}$ is an algorithm that returns a polynomial $f_{s_1,\dots,s_m}$ such that
$$ \lambda_{\max}(f_{s_1,\dots,s_m}) \leq \alpha\lambda_{\max}(f_{\emptyset}).$$
Next, we design such a rounding algorithm, given an oracle that computes the first $k$ coefficients of the polynomials in an interlacing family.
\begin{theorem}\label{thm:rounding}
Let $S_1,\dots,S_m$ be finite sets and let $\{f_{s_1,\dots,s_m}\}_{(s_1,\dots,s_m)\in{\cal F}}$ be an interlacing family of degree $n$ polynomials. Suppose that we are given an oracle that for any $1\leq k\leq n$ and $s_1\in S_1,\dots,s_\ell\in S_\ell$ with $\ell <m$ returns the top $k$ coefficients of $f_{s_1,\dots,s_\ell}$ in time $T(k)$. Then, there is an algorithm that for any ${\epsilon}>0$ returns a polynomial $f_{s_1,\dots,s_m}$ such that the largest root of $f_{s_1,\dots,s_m}$ is at most $1+{\epsilon}$ times the largest root of $f_{\emptyset}$, in time
$T(O(\log(n)m^{1/3}/\sqrt{{\epsilon}}))\cdot (\max_i|S_i|)^{O(m^{1/3})}\cdot\poly(n)$.
\end{theorem}
\begin{proof}
Let $M=m^{1/3}$ and $k\asymp \frac1{\sqrt{{\epsilon}}} \log(n)M$. Then, by \autoref{thm:positive}, for any polynomial $f_{s_1,\dots,s_\ell}$ we can find a $1+\frac{{\epsilon}}{2M^2}$ approximation of the largest root of $f_{s_1,\dots,s_\ell}$ in time $T(k)\poly(n)$. We round the interlacing family in $M^2$ steps, and in each step we round $M$ of the coordinates. We make sure that each step only incurs a (multiplicative) approximation of $1+\frac{{\epsilon}}{2M^2}$, so that the cumulative approximation error is no more than
$$ \Bigl(1+\frac{{\epsilon}}{2M^2}\Bigr)^{M^2}\leq e^{{\epsilon}/2}\leq 1+{\epsilon}$$
(for ${\epsilon}\leq 1$, say) as desired.
Let us describe the algorithm inductively. Suppose we have selected $s_1,\dots,s_\ell$ for some $0\leq \ell< m$. We brute force over all choices $(t_{\ell+1},\dots,t_{\ell+M})\in S_{\ell+1}\times\dots\times S_{\ell+M}$ for which $f_{s_1,\dots,s_{\ell},t_{\ell+1},\dots,t_{\ell+M}}$ is not identically zero. Note that there are at most $(\max_i |S_i|)^{M}$ such choices. For each such polynomial $f_{s_1,\dots,s_\ell,t_{\ell+1},\dots,t_{\ell+M}}$ we compute $\mu^*_{s_1,\dots,s_\ell,t_{\ell+1},\dots,t_{\ell+M}}$, a $1+\frac{{\epsilon}}{2M^2}$ approximation of its largest root, using its top $k$ coefficients.
We let
$$ s_{\ell+1},\dots,s_{\ell+M}=\ \ \smashoperator{ \argmin_{t_{\ell+1},\dots,t_{\ell+M}}}\ \ \mu^*_{s_1,\dots,s_\ell,t_{\ell+1},\dots,t_{\ell+M}}.$$
It follows that the algorithm runs in time $T(k) \poly(n) (\max_i |S_i|)^{O(M)}$. Because we have an interlacing family, there is a polynomial $f_{s_1,\dots,s_\ell,t_{\ell+1},\dots,t_{\ell+M}}$ whose largest root is at most the largest root of $f_{s_1,\dots,s_\ell}$, in each step of the algorithm. Therefore,
$$ \lambda_{\max}(f_{s_1,\dots,s_{\ell+M}}) \leq (1+\frac{{\epsilon}}{2M^2})\lambda_{\max}(f_{s_1,\dots,s_\ell}).$$
Therefore, by induction,
$$ \lambda_{\max}(f_{s_1,\dots,s_m}) \leq (1+{\epsilon}) \lambda_{\max}(f_{\emptyset})$$
as desired.
\end{proof}
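The grouped rounding in the proof above can be sketched in Python. This is our own illustration, not the paper's code: the oracle combined with the root approximation of \autoref{thm:positive} is abstracted into a hypothetical callback \texttt{eval\_root}, which may return \texttt{None} on identically zero polynomials.

```python
import itertools
import math
import random

def round_family(m, sizes, eval_root):
    """Greedy grouped rounding over an abstract interlacing family.

    sizes[i] is |S_{i+1}|; eval_root(prefix) is assumed to return an
    approximation of the largest root of f_prefix, or None when f_prefix
    is identically zero.  Roughly M = m^{1/3} coordinates are fixed per
    step, so there are about M^2 steps in total.
    """
    M = max(1, round(m ** (1 / 3)))
    prefix = ()
    while len(prefix) < m:
        g = min(M, m - len(prefix))  # size of the next group
        ranges = [range(sizes[len(prefix) + i]) for i in range(g)]
        best_ext, best_val = None, math.inf
        # brute force over the at most (max_i |S_i|)^M extensions
        for ext in itertools.product(*ranges):
            val = eval_root(prefix + ext)
            if val is not None and val < best_val:
                best_ext, best_val = ext, val
        prefix = prefix + best_ext
    return prefix

# Toy "family": leaf values are random and each internal node takes the
# minimum over its children, so the interlacing-family property (some
# child does not exceed its parent) holds with equality.
random.seed(0)
m, sizes = 4, [2, 3, 2, 2]
leaves = {t: random.uniform(1, 2)
          for t in itertools.product(*[range(s) for s in sizes])}

def node_val(prefix):
    if len(prefix) == m:
        return leaves[prefix]
    return min(node_val(prefix + (i,)) for i in range(sizes[len(prefix)]))

sel = round_family(m, sizes, node_val)
assert node_val(sel) <= node_val(()) + 1e-12  # no loss with an exact oracle
```

With an exact \texttt{eval\_root} the greedy choice never increases the largest root, mirroring the induction in the proof; with a $1+\frac{{\epsilon}}{2M^2}$-approximate oracle each of the $M^2$ steps loses at most that factor.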
We remark that without the use of Chebyshev polynomials and
\autoref{thm:positive}, one can obtain the somewhat worse running time of $2^{\tilde
O(\sqrt{m})}$ by applying the same trick of rounding the vectors in groups
rather than one at a time.
To use the above theorem, we need to construct the aforementioned oracle for each application of the interlacing families.
Next, we construct such an oracle for several examples.
\subsection{Oracle for Kadison-Singer}
We start with the interlacing families corresponding to Weaver's problem, which is an equivalent formulation of the Kadison-Singer problem.
Marcus, Spielman, and Srivastava proved the following theorem.
\begin{theorem}[\cite{MSS13}]
Given vectors ${{\bf{v}}}_1,\dots,{{\bf{v}}}_m$ in isotropic position,
$$ \sum_{i=1}^m {{\bf{v}}}_i{{\bf{v}}}_i^T = I,$$
such that $\max_i \norm{{{\bf{v}}}_i}^2 \leq \delta$,
there is a partitioning $S_1,S_2$ of $[m]$ such that for $j\in\{1,2\}$,
$$ \norm{\sum_{i\in S_j}{{\bf{v}}}_i{{\bf{v}}}_i^T} \leq 1/2+O(\sqrt{\delta}).$$
\end{theorem}
We give an algorithm that finds the above partitioning and runs in time $2^{m^{1/3}/\delta}$. It follows from the proof of \cite{MSS13}
that it is enough to design a $(1+\delta)$-approximation rounding algorithm for the following interlacing family.
Let ${{\bf{r}}}_1,\dots,{{\bf{r}}}_m\in\mathbb{R}^n$ be independent random vectors, where for each $i$, $S_i$ is the support of ${{\bf{r}}}_i$. For any $({{\bf{r}}}_1,\dots,{{\bf{r}}}_m)\in S_1\times\dots\times S_m$ let
$$ f_{{{\bf{r}}}_1,\dots,{{\bf{r}}}_m}(x)=\det(xI-\sum_{i=1}^m {{\bf{r}}}_i{{\bf{r}}}_i^T).$$
Marcus et al.~\cite{MSS13} showed that $\{f_{{{\bf{r}}}_1,\dots,{{\bf{r}}}_m}\}_{{{\bf{r}}}_1,\dots,{{\bf{r}}}_m\in S_1\times \dots\times S_m}$ is an interlacing family. Next we design an algorithm that returns the first $k$ coefficients of any polynomial in this family in time $(m\cdot \max_i |S_i|)^k\poly(n)$.
\begin{theorem}\label{thm:KSoracle}
Let ${{\bf{r}}}_1,\dots,{{\bf{r}}}_m\in \mathbb{R}^n$ be independent random vectors with supports $S_1,\dots,S_m$. There is an algorithm that for any $({{\bf{r}}}_1,\dots,{{\bf{r}}}_\ell)\in S_1\times\dots\times S_\ell$ with $1\leq \ell \leq m$ and $1\leq k\leq n$ returns the top $k$ coefficients of $f_{{{\bf{r}}}_1,\dots,{{\bf{r}}}_\ell}$ in time $(m\cdot\max_i |S_i|)^k \poly(n)$.
\end{theorem}
\begin{proof}
Fix ${{\bf{r}}}_1,\dots,{{\bf{r}}}_\ell\in S_1\times \dots \times S_\ell$ for some $1\leq \ell \leq m$.
It is sufficient to show that for any $0\leq k\leq n$, we can compute the coefficient of $x^{n-k}$ of $f_{{{\bf{r}}}_1,\dots,{{\bf{r}}}_\ell}$ in time $(m\cdot \max_i |S_i|)^k \poly(n)$.
First, observe that
$$ f_{{{\bf{r}}}_1,\dots,{{\bf{r}}}_\ell}=\EE{{{\bf{r}}}_{\ell+1},\dots,{{\bf{r}}}_m}{\det\left(xI-\sum_{i=1}^m {{\bf{r}}}_i{{\bf{r}}}_i^T\right)}.$$
So, by \autoref{prop:sigmak}, the coefficient of $x^{n-k}$ in the above is equal to
$$ (-1)^k\sum_{T\in \binom{[m]}{k}} \EE{{{\bf{r}}}_{\ell+1},\dots,{{\bf{r}}}_m}{\sigma_k\left(\sum_{i\in T} {{\bf{r}}}_i{{\bf{r}}}_i^T\right)}.$$
Note that there are at most $m^k$ terms in the above summation. For any $T\in \binom{[m]}{k}$, we can exactly compute
$\EE{{{\bf{r}}}_{\ell+1},\dots,{{\bf{r}}}_m}{\sigma_k(\sum_{i\in T}{{\bf{r}}}_i{{\bf{r}}}_i^T)}$ in time
$(\max_i |S_i|)^k \poly(n)$: we brute force over all choices of the vectors $\{{{\bf{r}}}_i\}_{i\in T,\, i>\ell}$ from their supports and average $\sigma_k(\cdot)$ of the corresponding sums of rank-$1$ matrices.
\end{proof}
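The expansion used in the proof above, namely that the coefficient of $x^{n-k}$ in the expected characteristic polynomial splits into $\sigma_k$-terms over $k$-subsets, can be checked on a toy instance. The following Python snippet is our own illustration (the supports and vectors are arbitrary choices, taken uniform with $\ell=0$), using exact rational arithmetic.

```python
import itertools
from fractions import Fraction

def det(M):
    # cofactor expansion along the first row; fine for tiny matrices
    if not M:
        return Fraction(1)
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def sigma(k, M):
    # sigma_k(M): the sum of the k x k principal minors of M
    return sum(det([[M[i][j] for j in rows] for i in rows])
               for rows in itertools.combinations(range(len(M)), k))

def gram_sum(vectors, n):
    # sum_i v_i v_i^T as an n x n matrix of Fractions
    M = [[Fraction(0)] * n for _ in range(n)]
    for v in vectors:
        for i in range(n):
            for j in range(n):
                M[i][j] += v[i] * v[j]
    return M

# toy supports (uniform) for three independent random vectors in Q^2
F = Fraction
supports = [[(F(1), F(0)), (F(0), F(1))],
            [(F(1), F(1)), (F(2), F(0))],
            [(F(0), F(2)), (F(1), F(1))]]
n, m = 2, 3
assigns = list(itertools.product(*supports))
for k in (1, 2):
    # E sigma_k(sum of all rank-1 terms) ...
    direct = sum(sigma(k, gram_sum(a, n)) for a in assigns) / len(assigns)
    # ... equals the sum over k-subsets T of E sigma_k(sum over T)
    formula = sum(sum(sigma(k, gram_sum([a[i] for i in T], n))
                      for a in assigns)
                  for T in itertools.combinations(range(m), k)) / len(assigns)
    assert direct == formula
```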
It follows by \autoref{thm:rounding} and \autoref{thm:KSoracle} that for any given set of vectors ${{\bf{v}}}_1,\dots,{{\bf{v}}}_m\in \mathbb{R}^n$ in isotropic position with squared norms at most $\delta$, we can find a partition $S_1,S_2$ of $[m]$ such that $\norm{\sum_{i\in S_j} {{\bf{v}}}_i{{\bf{v}}}_i^T}\leq 1/2+O(\sqrt{\delta})$ for $j\in\{1,2\}$, in time $ n^{O(m^{1/3}\delta^{-1/4})} $.
\subsection{Oracle for ATSP}
Next, we construct an oracle for interlacing families related to the asymmetric traveling salesman problem.
We say a multivariate polynomial $p\in\mathbb{R}[z_1,\dots,z_m]$ is {\em real stable} if it has no roots in the product of open upper half-planes, i.e., $p({{\bf{z}}})\neq 0$ whenever $\text{Im}(z_i)>0$ for all $i$.
Let $\mu$ be a probability distribution on the subsets of $[m]$. The {\em generating polynomial} of $\mu$ is defined as
$$ g_\mu(z_1,\dots,z_m)=\sum_{S\subseteq [m]} \mu(S)\prod_{i\in S} z_i.$$
We say $\mu$ is a {\em strongly Rayleigh} probability distribution if $g_{\mu}({{\bf{z}}})$ is real stable. We say $\mu$ is homogeneous if all sets in the support of $\mu$ have the same size.
The following theorem is proved by the first and the second authors~\cite{AO15}.
\begin{theorem}[\cite{AO15}]
Let $\mu$ be a homogeneous strongly Rayleigh probability distribution on subsets of $[m]$ such that for each $i$, the marginal probability satisfies $\P{i}<{\epsilon}_1$. Let ${{\bf{v}}}_1,\dots,{{\bf{v}}}_m\in\mathbb{R}^n$ be in isotropic position such that for each $i$, $\norm{{{\bf{v}}}_i}^2\leq {\epsilon}_2$. Then, there is a set $S$ in the support of $\mu$ such that
$$ \norm{\sum_{i\in S}{{\bf{v}}}_i{{\bf{v}}}_i^T} \leq O({\epsilon}_1+{\epsilon}_2).$$
\end{theorem}
We give an algorithm that finds such a set $S$ as promised in the above theorem assuming that we have an oracle that for any ${{\bf{z}}}\in \mathbb{R}^m$ returns $g_{\mu}({{\bf{z}}})$.
It follows from the proof of \cite{AO15} that it is enough to design a $1+O({\epsilon}_1+{\epsilon}_2)$-approximation rounding algorithm for the following interlacing family.
For any $1\leq i\leq m$, let $S_i=\{{\bf 0},{{\bf{v}}}_i\}$. For any $S$ in the support of $\mu$, let ${{\bf{r}}}_i={{\bf{v}}}_i$ if $i\in S$ and ${{\bf{r}}}_i={\bf 0}$ otherwise, and add ${{\bf{r}}}_1,\dots,{{\bf{r}}}_m$ to ${\cal F}$. Then, define
$$ f_{{{\bf{r}}}_1,\dots,{{\bf{r}}}_m}=\mu(S)\cdot \det\left(xI-\sum_{i=1}^m {{\bf{r}}}_i{{\bf{r}}}_i^T\right).$$
It follows from \cite{AO15} that $\{f_{{{\bf{r}}}_1,\dots,{{\bf{r}}}_m}\}_{{{\bf{r}}}_1,\dots,{{\bf{r}}}_m\in {\cal F}}$ is an interlacing family.
Next, we design an algorithm that returns the top $k$ coefficients of any polynomial in this family in time $m^k\poly(n)$.
\begin{theorem}\label{thm:SRoracle}
Given a strongly Rayleigh distribution $\mu$ on subsets of $[m]$ and a set of vectors ${{\bf{v}}}_1,\dots,{{\bf{v}}}_m\in\mathbb{R}^n$, suppose that we are given an oracle that for any ${{\bf{z}}}\in\mathbb{R}^m$ returns $g_\mu({{\bf{z}}})$. There is an algorithm that for any ${{\bf{r}}}_1,\dots,{{\bf{r}}}_\ell\in S_1\times\dots\times S_\ell$ with $1\leq \ell\leq m$ and $1\leq k\leq n$ returns the top $k$ coefficients of $f_{{{\bf{r}}}_1,\dots,{{\bf{r}}}_\ell}$ in time $m^k\poly(n)$.
\end{theorem}
\begin{proof}
Fix ${{\bf{r}}}_1,\dots,{{\bf{r}}}_\ell\in S_1\times \dots\times S_\ell$ for some $1\leq \ell\leq m$.
First, note that if no choice of ${{\bf{r}}}_{\ell+1},\dots,{{\bf{r}}}_m$ extends ${{\bf{r}}}_1,\dots,{{\bf{r}}}_\ell$ to an element of ${\cal F}$, then $f_{{{\bf{r}}}_1,\dots,{{\bf{r}}}_\ell}$ is identically zero and there is nothing to prove. So assume that ${{\bf{r}}}_1,\dots,{{\bf{r}}}_m\in {\cal F}$ for some ${{\bf{r}}}_{\ell+1},\dots,{{\bf{r}}}_m$.
It is sufficient to show that for any $0\leq k\leq n$, we can compute the coefficient of $x^{n-k}$ of $f_{{{\bf{r}}}_1,\dots,{{\bf{r}}}_\ell}$ in time $m^k\poly(n)$.
Firstly, observe that since we are only summing the characteristic polynomials that are consistent with ${{\bf{r}}}_1,\dots,{{\bf{r}}}_\ell$, we can work with the conditional distribution
$$ \tilde{\mu}=\mu \,\big|\, \bigl\{i\in S \text{ for each } 1\leq i\leq \ell \text{ with } {{\bf{r}}}_i={{\bf{v}}}_i,\ \text{and } i\notin S \text{ for each } 1\leq i\leq \ell \text{ with } {{\bf{r}}}_i={\bf 0}\bigr\}.$$
Note that since we can efficiently compute $g_\mu(z_1,\dots,z_m)$, we can also compute $g_{\tilde{\mu}}(z_{\ell+1},\dots,z_m)$.
For any $i$ that is conditioned to be in $S$, we differentiate with respect to $z_i$, and for any $i$ that is conditioned to be out we set $z_i=0$. Also, note that instead of differentiating we can set $z_i=M$ for a very large number $M$ and then divide the resulting polynomial by $M$. We note that when $\mu$ is a determinantal distribution, which is the case in the applications to the asymmetric traveling salesman problem, this differentiation can be computed exactly and efficiently; in other cases, $M$ can be taken to be exponentially large, as we can tolerate an exponentially small error.
Now, we can write
$$ f_{{{\bf{r}}}_1,\dots,{{\bf{r}}}_\ell}= \EE{T\sim\tilde{\mu}}{\det\left(xI-\sum_{i\in T}{{\bf{v}}}_i{{\bf{v}}}_i^T\right)}.$$
So, by \autoref{prop:sigmak}, the coefficient of $x^{n-k}$ in the above is equal to
$$ (-1)^k \sum_{T\in \binom{[m]}{k}} \PP{\tilde{\mu}}{T}\cdot \sigma_k\left(\sum_{i\in T} {{\bf{v}}}_i{{\bf{v}}}_i^T\right). $$
To compute the above quantity it is enough to brute force over all sets $T\in \binom{[m]}{k}$. For any such $T$ we can compute $\sigma_{k}(\sum_{i\in T} {{\bf{v}}}_i{{\bf{v}}}_i^T)$ in time $\poly(n)$. In addition, we can efficiently compute $\PP{\tilde{\mu}}{T}$ using our oracle: it is enough to differentiate with respect to each $i\in T$ with $i>\ell$ and evaluate
$$ \prod_{i\in T: i>\ell} \frac{\partial}{\partial z_i} g_{\tilde{\mu}}(z_{\ell+1},\dots,z_m)\Big|_{z_{\ell+1}=\dots=z_m=1}.$$
Therefore, the algorithm runs in time $m^k\poly(n)$.
\end{proof}
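Since generating polynomials are multilinear, each derivative $\partial_{z_i}$ equals the finite difference $g|_{z_i=1}-g|_{z_i=0}$, an exact alternative to the large-$M$ substitution mentioned in the proof. The following Python sketch is our own illustration (the toy distribution is an arbitrary choice): it computes $\PP{\tilde{\mu}}{T}$ this way and checks it against direct enumeration.

```python
import itertools
import math

def g(mu, z):
    # generating polynomial of mu, evaluated at the point z
    return sum(p * math.prod(z[i] for i in S) for S, p in mu.items())

def prob_contains(mu, T, m):
    """Pr_{S ~ mu}[T subseteq S] via derivatives of the generating polynomial.

    g_mu is multilinear, so d/dz_i g = g|_{z_i=1} - g|_{z_i=0}; applying
    this for every i in T and setting the remaining variables to 1 expands,
    by inclusion-exclusion, into 2^|T| evaluations of g_mu.
    """
    total = 0.0
    for r in range(len(T) + 1):
        for U in itertools.combinations(sorted(T), r):
            z = [1.0] * m
            for i in U:
                z[i] = 0.0
            total += (-1) ** r * g(mu, z)
    return total

# toy homogeneous distribution on subsets of {0, 1, 2}
mu = {frozenset({0, 1}): 0.5, frozenset({1, 2}): 0.3, frozenset({0, 2}): 0.2}
assert math.isclose(prob_contains(mu, {1}, 3), 0.8)     # = mu({0,1}) + mu({1,2})
assert math.isclose(prob_contains(mu, {0, 1}, 3), 0.5)  # = mu({0,1})
```

In the determinantal case relevant to the asymmetric traveling salesman problem the same derivatives can be read off exactly, as noted in the proof above.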
It follows from \autoref{thm:rounding} and \autoref{thm:SRoracle} that for any homogeneous strongly Rayleigh distribution $\mu$ with marginal probabilities at most ${\epsilon}_1$, given an oracle that computes $g_\mu({{\bf{z}}})$ for any ${{\bf{z}}}\in\mathbb{R}^m$, and for any vectors ${{\bf{v}}}_1,\dots,{{\bf{v}}}_m$ in isotropic position with squared norms at most ${\epsilon}_2$, we can find a set $S$ in the support of $\mu$ such that
$\norm{\sum_{i\in S}{{\bf{v}}}_i{{\bf{v}}}_i^T}\leq O({\epsilon}_1+{\epsilon}_2)$ in time $n^{O(m^{1/3}({\epsilon}_1+{\epsilon}_2)^{-1/2})}$.
This is enough to obtain a $\polyloglog(m)$ approximation algorithm for the asymmetric traveling salesman problem on a graph with $m$ vertices that runs in time $2^{\tilde O(m^{1/3})}$ \cite{AO14}.
\bibliographystyle{alpha}
\section{Introduction}\label{sec:Introduction}
Let $a_0\in\{0,\dots,q-1\}$ and let
\[\mathcal{A}=\Bigl\{\sum_{i\ge 0}n_i q^i: n_i\in\{0,\dots,q-1\}\backslash\{a_0\}\Bigr\}\]
be the set of numbers which have no digit equal to $a_0$ when written in base $q$. For fixed $q$, the number of elements of $\mathcal{A}$ which are less than $x$ is $O(x^{1-\epsilon_q})$, where $\epsilon_q=\log{(q/(q-1))}/\log{q}>0$. In particular, $\mathcal{A}$ is a sparse subset of the natural numbers. A set being sparse in this way presents several analytic difficulties if one tries to answer arithmetic questions such as whether the set contains infinitely many primes. Typically we can only show that sparse sets contain infinitely many primes when the set in question possesses some additional multiplicative structure.
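As a concrete illustration (our own, not part of the paper), for $a_0\neq 0$ padding with leading zeros does not change membership in $\mathcal{A}$, so the count of elements below $q^k$ is exactly $(q-1)^k=(q^k)^{1-\epsilon_q}$; the base, digit and exponent below are arbitrary choices:

```python
import math

def avoids(n, q, a0):
    # True if no base-q digit of n equals a0 (n = 0 has the single digit 0)
    while True:
        if n % q == a0:
            return False
        n //= q
        if n == 0:
            return True

q, a0, k = 7, 3, 5
count = sum(avoids(n, q, a0) for n in range(q ** k))
# for a0 != 0 each of the k padded digit slots has q-1 admissible values
assert count == (q - 1) ** k

eps_q = math.log(q / (q - 1)) / math.log(q)
# (q^k)^(1-eps_q) = q^k * (q-1)^k / q^k = (q-1)^k, matching the O(x^{1-eps_q}) count
assert math.isclose((q ** k) ** (1 - eps_q), (q - 1) ** k)
```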
The set $\mathcal{A}$ has unusually nice structure in that its Fourier transform has a convenient explicit analytic description, and is often unusually small in size. There has been much previous work \cite{1,2,3,4,5,6,7} studying $\mathcal{A}$ and related sets by exploiting this Fourier structure. In particular the work of Dartyge and Mauduit \cite{Almost1,Almost2} shows the existence of infinitely many integers in $\mathcal{A}$ with at most $2$ prime factors, this result relying on the fact that $\mathcal{A}$ is well distributed in arithmetic progressions \cite{Almost1,Level1,Level2}. We also mention the related work of Mauduit-Rivat \cite{MauduitRivat} who showed the sum of digits of primes was well-distributed, and the work of Bourgain \cite{Bourgain} which showed the existence of primes in the sparse set created by prescribing a positive proportion of the digits.
We show that there are infinitely many primes in $\mathcal{A}$, and any polynomial $P$ satisfying suitable local conditions takes infinitely many values in $\mathcal{A}$ provided the base $q$ is sufficiently large (i.e. provided $\mathcal{A}$ is not too sparse). Our proof is based on the circle method, and in particular makes key use of the Fourier structure of $\mathcal{A}$, in the same spirit as the aforementioned works. Somewhat surprisingly, the Fourier structure is sufficient to deduce the existence of primes in $\mathcal{A}$ using only existing exponential sum estimates for the primes, and without having to investigate further bilinear sums.
\begin{thrm}\label{thrm:Prime}
Let $q>\num{2000000}$, $a_0\in\{0,\dots,q-1\}$ and $\mathcal{A}=\{\sum_{i\ge 0}n_iq^i: n_i\in\{0,\dots,q-1\}\backslash\{a_0\}\}$ be the set of numbers with no digit in base $q$ equal to $a_0$. Then for any constant $A>0$ we have
\[\sum_{n<q^k}\Lambda(n)\mathbf{1}_{\mathcal{A}}(n)=\kappa_q(a_0)(q-1)^k+O_A\Bigl(\frac{(q-1)^k}{(\log{q^k})^A}\Bigr),\]
where
\[\kappa_q(a_0)=\begin{cases}
\displaystyle\frac{q}{q-1}\,\qquad&\text{if $(a_0,q)\ne 1$,}\\
\displaystyle\frac{q(\phi(q)-1)}{(q-1)\phi(q)},\qquad&\text{if $(a_0,q)=1$.}
\end{cases}
\]
\end{thrm}
Thus there are infinitely many primes with no digit $a_0$ when written in base $q$. There is nothing special about the fact we sum up to a power of $q$; one could sum $n$ up to $x$ instead of $q^k$ and have $\sum_{n<x}\mathbf{1}_{\mathcal{A}}(n)$ instead of $(q-1)^k$ in the statement.
We have made no particular effort to optimize the lower bound on $q$; it is likely that it could be improved significantly. In particular, a more involved calculation shows that $q>\num{2500}$ is sufficient by the same method, whilst it appears that the method of bilinear sums, Harman's sieve and zero density estimates all have the potential to show the existence of primes missing digits when the base is noticeably smaller. One might conjecture that the result would remain true for all $q>2$.
As presented here the bound is ineffective due to the reliance on estimates for primes in arithmetic progressions. However, since these estimates are only used when the modulus is highly composite, in fact Siegel zeros do not play a role, and so the error terms could be replaced by effective ones of size $O((q-1)^k\exp(-c k^{1/2}))$ if desired.
An analysis of our method reveals that in fact one can choose digits $a_0,\dots,a_{k-1}\in\{0,\dots,q-1\}$, and we obtain the same statement for primes $p=\sum_{i=0}^{k-1}p_i q^i$ with $p_i\in\{0,\dots,q-1\}\backslash\{a_i\}$ uniformly over all such choices of $a_0,\dots,a_{k-1}$.
Our results hold for $q$ sufficiently large not only because we require $\mathcal{A}$ to be not too sparse, but also because we separately obtain superior $L^1$ control on the Fourier transform of $\mathcal{A}$ as $q$ grows. A similar feature was present in the earlier work \cite{MauduitPolys}.
\begin{thrm}\label{thrm:Poly}
Let $q>\exp(\exp(2r))$, and $P\in\mathbb{Z}[X]$ be a polynomial of degree $r$ with lead coefficient $a_r$. Then for any $A>0$ we have
\[\sum_{P(n)<q^k}\mathbf{1}_{\mathcal{A}}(P(n))=a_r^{1/r}\mathfrak{S}(P)\frac{q^{k/r}(q-1)^k}{q^k}+O_{P,A}\Bigl(\frac{q^{k/r}(q-1)^k}{q^k(\log{q^k})^A}\Bigr),\]
where
\[\mathfrak{S}(P)=\lim_{J\rightarrow \infty}\frac{\#\{(n,m):\,0\le n,m<q^J,\, m\in\mathcal{A},\, P(n)\equiv m\Mod{q^J}\}}{(q-1)^J}.\]
\end{thrm}
Given a polynomial $P$ it is a straightforward computation to determine whether $\mathfrak{S}(P)>0$, in which case it takes infinitely many values in $\mathcal{A}$, or whether $\mathfrak{S}(P)=0$, in which case it takes finitely many values in $\mathcal{A}$. (This is because $P(\mathbb{Z}_p)$ is a disjoint union of open balls and a finite set of points in the $p$-adic topology.) In particular, by Hensel lifting we see that Theorem \ref{thrm:Poly} shows that there are infinitely many $\ell^{th}$ powers in $\mathcal{A}$, provided that $q>\exp(\exp(2\ell))$.
Again we have made no particular effort to optimize the lower bound on $q$. It is clear that the statement must require $q$ to grow with $r$, since the main term $q^{k/r}(q-1)^k/q^k$ is only larger than $1$ if $q$ is large enough in terms of $r$. Presumably this bound would be improved if one used stronger bounds of Vinogradov type for the Weyl sums which appear rather than bounds based on Weyl-differencing for large $r$, and by less crude numerical bounds. We note that although the implied constant in the error term in the statement of the Theorem depends on the coefficients of $P$, the lower bound on $q$ depends only on the degree.
\begin{thrm}\label{thrm:ManyDigits}
Let $\epsilon>0$, $0<s<q^{1/5-\epsilon}$ and let $q$ be sufficiently large in terms of $\epsilon>0$. Let $b_1,\dots,b_s\in\{0,\dots,q-1\}$ be distinct and let $\mathcal{B}=\{\sum_{i=0}^{k-1}n_i q^i:n_i\in\{0,\dots,q-1\}\backslash\{b_1,\dots,b_s\}\}$ be the set of $k$-digit numbers in base $q$ with no digit in the set $\{b_1,\dots,b_s\}$. Then we have
\[\sum_{n<q^k}\Lambda(n)\mathbf{1}_{\mathcal{B}}(n)=\frac{q(\phi(q)-s')}{(q-1)\phi(q)}(q-s)^k+O_A\Bigl(\frac{(q-s)^k}{(\log{q^k})^A}\Bigr),\]
where $s'=\#\{1\le i\le s:(b_i,q)=1\}$.
Moreover, if $b_1,\dots,b_s$ are consecutive integers then the same result holds provided only that $q-s\ge q^{4/5+\epsilon}$ and $q$ is sufficiently large in terms of $\epsilon$.
\end{thrm}
In the case of $b_1,\dots,b_s$ consecutive with $q-s=q^{4/5+\epsilon}$ we see that Theorem \ref{thrm:ManyDigits} shows the existence of primes in a set containing $x^{4/5+\epsilon}$ elements less than $x$. The exponent $4/5$ is ultimately related to the $4/5$ exponent of Lemma \ref{lmm:PrimeSum} for an exponential sum over primes, and represents a limit of our basic method. As with Theorem \ref{thrm:Prime}, one would hope that utilizing Type I-II sums and Harman's sieve would extend this to sets of smaller density.
The conclusion of Theorem \ref{thrm:ManyDigits} holds in the case $q=10^8$ and $s=10$, so one can choose $\{b_1,\dots,b_{10}\}=\{0,11111111,22222222,\dots,99999999\}$. Thus there are infinitely many prime numbers with no string of 15 consecutive base 10 digits being the same. (Again, we expect 15 to be able to be reduced with slightly more effort.)
An analogous statement for the set $\mathcal{B}$ for polynomial values also holds, but in the more restrictive region $0<s<q^{1/(r2^r)-\epsilon}$ for arbitrary $b_1,\dots,b_s$, or $q-s\ge q^{1-1/(r2^{r})+\epsilon}$ for consecutive $b_1,\dots,b_s$.
\section{Notation}
We use $e(x)=e^{2\pi i x}$ as the complex exponential and $\|x\|=\inf_{n\in\mathbb{Z}}|x-n|$ to denote the distance to the nearest integer. We will use various expressions of the form $\min(A,\|\alpha\|^{-1})$, which are interpreted to take the value $A$ if $\|\alpha\|=0$. We use $n\sim N$ to abbreviate $n\in [N,2N)$. Any implied constants in asymptotic notation $\ll$ or $O(\cdot)$ are allowed to depend on the base $q$ and when dealing with polynomials as in Theorem \ref{thrm:Poly}, the polynomial $P$, but on no other quantity unless explicitly indicated by a subscript. Outside of Section \ref{sec:ExponentialSums} all quantities should be thought of as $k\rightarrow \infty$. In particular, $k$ will implicitly be assumed to be larger than any fixed constant.
\section{Outline}
We give an informal sketch of the overall outline of the proof, which is essentially an application of the Hardy-Littlewood circle method.
We let $\hat{F}_{X}$ be the Fourier transform (over $\mathbb{Z}$) of the set $\mathcal{A}$ restricted to $\{1,\dots,X\}$. Thus for $X=q^k$ we have
\[\hat{F}_{q^k}(\theta)=\sum_{n<q^k}\mathbf{1}_{\mathcal{A}}(n)e(n\theta)=\prod_{i=0}^{k-1}\Bigl(\sum_{0\le n_i\le q-1}\mathbf{1}_{\mathcal{A}}(n_i)e(n_iq^i\theta)\Bigr).\]
Here we have written $n=\sum_{i=0}^{k-1}n_iq^i$. It is this factorization of $\hat{F}_{q^k}$ and the fact that the sum over $n_i$ is almost a geometric series which allows us very good Fourier control over $\mathcal{A}$. By Fourier inversion on $\mathbb{Z}/q^k\mathbb{Z}$
\[\mathbf{1}_{\mathcal{A}}(n)=\frac{1}{q^k}\sum_{0\le a<q^k}\hat{F}_{q^k}\Bigl(\frac{a}{q^k}\Bigr)e\Bigl(\frac{-a n}{q^k}\Bigr).\]
Thus
\[\sum_{n\le q^k}\Lambda(n)\mathbf{1}_{\mathcal{A}}(n)=\frac{1}{q^k}\sum_{0\le a<q^k}\hat{F}_{q^k}\Bigl(\frac{a}{q^k}\Bigr)S_{\Lambda,q^k}\Bigl(\frac{-a}{q^k}\Bigr),\]
where
\[S_{\Lambda,q^k}(\theta)=\sum_{n\le q^k}\Lambda(n)e(n\theta).\]
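Both the product factorization of $\hat F_{q^k}$ and the inversion step can be verified numerically for a small base. The following Python check is our own illustration (the base, excluded digit and number of digits are arbitrary small choices, with the sum taken over $0\le n<q^k$):

```python
import cmath

def e(x):
    return cmath.exp(2j * cmath.pi * x)

q, k, a0 = 5, 2, 2
N = q ** k
A = [n for n in range(N)
     if all((n // q ** i) % q != a0 for i in range(k))]

def F_hat(theta):  # the Fourier transform, directly from the definition
    return sum(e(n * theta) for n in A)

for a in range(N):
    theta = a / N
    # product formula: one almost-geometric factor per digit
    prod = 1.0
    for i in range(k):
        prod *= sum(e(d * q ** i * theta) for d in range(q) if d != a0)
    assert abs(F_hat(theta) - prod) < 1e-8

# Fourier inversion on Z/q^k Z recovers the indicator of A
for n in range(N):
    inv = sum(F_hat(a / N) * e(-a * n / N) for a in range(N)) / N
    assert abs(inv - (1 if n in A else 0)) < 1e-8
```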
We split the contribution up depending on whether $a/q^k$ is close to a rational with small denominator or not. This distinguishes between those $a$ for which $S_{\Lambda,q^k}(a/q^k)$ is large or not. It turns out that $\hat{F}_{q^k}(a/q^k)$ is large if $a$ is `close' to a number with few non-zero base-$q$ digits, but these are somewhat rare and `spread out' except when $a/q^k$ is close to a rational with denominator a small power of $q$, and so this decomposition turns out to be adequate for describing $\hat{F}_{q^k}$ as well as $S_{\Lambda,q^k}$.
If $\mathcal{D}$ is the set of $a$ such that $a/q^k=\ell/d+\beta$ for some coprime integers $\ell,d$ and some $\beta\in\mathbb{R}$ with $d|\beta|$ of size $D$, we use an $L^\infty$-$L^1$ bound to show that their contribution is at most
\[\sup_{a\in\mathcal{D}}\Bigl|S_{\Lambda,q^k}\Bigl(\frac{a}{q^k}\Bigr)\Bigr| \sum_{a\in\mathcal{D}}\frac{1}{q^k}\Bigl|\hat{F}_{q^k}\Bigl(\frac{a}{q^k}\Bigr)\Bigr|.\]
One can save a small power of $D$ over the trivial bound on $S_{\Lambda,q^k}(a/q^k)$ for $a\in\mathcal{D}$. By using a large-sieve type argument (and the analytic description of $\hat{F}$) we show equidistribution for a truncated version $\hat{F}_{J}$ of $\hat{F}_{q^k}$:
\[\sum_{a/q^k=\ell/d+\beta}\Bigl|\hat{F}_{J}\Bigl(\frac{a}{q^k}\Bigr)\Bigr|\approx J\int_0^1|\hat{F}_{J}(\theta)|d\theta,\]
where $J=\#\mathcal{D}$. We then use the explicit analytic description of $\hat{F}_{q^k}$ to obtain a final bound which is unusually strong. In particular, we make important use of the averaging over the different $\beta$. This bound loses only a small power of $D$ over the size of the largest individual terms in the sum. Crucially this power decreases to $0$ as $q\rightarrow\infty$, whilst the power saving in $S_{\Lambda,q^k}$ was independent of $q$, and so we have an overall saving of a small power of $D$ if $q$ is sufficiently large. This saving shows that these `minor arc' contributions when $D$ is large are negligible.
Thus only those $a/q^k$ which are very close to a rational (i.e. $d|\beta|$ is small) make a noticeable contribution. In this case the problem simply reduces to estimating primes and elements of $\mathcal{A}$ separately in short intervals and arithmetic progressions. For primes this is well known, whilst for the set $\mathcal{A}$ this follows from a suitable $L^\infty$ bound on $\hat{F}$.
After writing this paper, the author discovered that very similar ideas appeared earlier in the literature, notably in \cite{MauduitRivat,Level1,Level2,Bourgain}. For simplicity we give an essentially self-contained proof, but emphasize to the reader that many of the lemmas appearing here are not new. It appears possible that (at least when the base $q$ is large) an argument similar to the one here might simplify or extend other arguments in the study of digit-related functions.
Much of the previous work on estimating correlations of primes with digit-related functions relied on exploiting a certain property of the Fourier transform, described in \cite{MauduitGeneral} as the `carry property', which often allowed one to simplify bilinear expressions so that the Fourier transform only depended on the lower-order digits. This feature is not present in our work.
\section{Exponential sums for primes and polynomials}\label{sec:ExponentialSums}
We first collect some results for exponential sums for primes and polynomials. The bounds here are well-known, but we give essentially complete proofs since they differ slightly from some standard references.
\begin{lmm}\label{lmm:Equidistribution}
Let $\alpha=a/d+\beta$ with $a,d$ coprime integers and $\beta\in\mathbb{R}$ satisfying $|\beta|<1/d^2$. Then we have
\[\sum_{n=1}^N\min\Bigl(M,\|\alpha n\|^{-1}\Bigr)\ll \Bigl(N+N M d|\beta|+\frac{1}{d|\beta|}+d\Bigr)\log{N}.\]
\end{lmm}
\begin{proof}
If $Nd|\beta|<1/2$ then we let $n=n_0+d n_1$ for non-negative integers $n_0,n_1$ with $n_0<d$ and $n_1<N/d$. If $n_0\ne 0$ then
\[\|\alpha n\|=\|n_0a/d+ \beta n\|\ge \|n_0a/d\|-\|\beta n\|\ge \|n_0a/d\|/2\]
since $\|\beta n\|\le N|\beta|<1/(2d)\le \|n_0a/d\|/2$, using $Nd|\beta|<1/2$. We let $b\in\{0,\dots,d-1\}$ be such that $b\equiv n_0a\Mod{d}$. Thus the terms with $n_0\ne 0$ contribute a total
\[\ll \sum_{n_1<N/d}\sum_{1\le b<\min(d,N)}\frac{d}{b}\ll \sum_{n_1<N/d}d\log{N}\ll (N+d)\log{N}.\]
The terms with $n_0=0$ contribute
\[\ll \sum_{1\le n_1<N/d}\min\Bigl(M,\|d n_1\beta\|^{-1}\Bigr)\ll \sum_{1\le n_1<N/d}\frac{1}{d n_1 |\beta|}\ll \frac{\log{N}}{d|\beta|}.\]
Here we have used the fact that since $n_0=0$ and we sum over $n\ge 1$ we must have $n_1\ge 1$.
We now consider the case $Nd|\beta|>1/2$. We let $n=n_0+d n_1+d\lfloor(d^2|\beta|)^{-1}\rfloor n_2$, with $0\le n_0<d$, $0\le n_1\le (d^2|\beta|)^{-1}$ and $0\le n_2\ll N d |\beta|$. Thus we obtain
\[\sum_{n=1}^N\min\Bigl(M, \|\alpha n\|^{-1}\Bigr)\ll\sum_{\substack{n_1\le 1/(d^2|\beta|)\\ n_2\ll Nd|\beta|}}\sum_{0\le n_0<d}\min\Bigl(M,\Bigl\|\theta+n_1d\beta+n_0(a/d+\beta)\Bigr\|^{-1}\Bigr)\]
where we have put $\theta=\beta d \lfloor(d^2|\beta|)^{-1}\rfloor n_2$ for convenience. The inner sum is of the form $\sum_i\min(M,\|\theta_i\|^{-1})$ for $d$ points $\theta_i$ which are $1/2d$ separated. Therefore the sum over $n_0$ is
\begin{align*}
&\ll d\log{d}+\sup_{0\le n_0<d}\min\Bigl(M,\|\theta+n_1d\beta+n_0(a/d+\beta)\|^{-1}\Bigr)\\
&\ll d\log{N}+\sup_{0\le n_0<d}\min\Bigl(M,\frac{d}{\|d\theta+(n_1+O(1))d^2\beta\|}\Bigr)
\end{align*}
since $\|t\|^{-1}\le d\|dt\|^{-1}$ for all $t$. The term $d\log{d}$, summed over the pairs $(n_1,n_2)$, contributes $\ll (N+1/(d|\beta|)+d)\log{N}$ to the total sum, which is acceptable. Thus we are left to bound
\[\sum_{n_2\ll Nd|\beta|}\sup_{\substack{\theta\in\mathbb{R}}}\sum_{n_1\le 1/(d^2|\beta|)}\min\Bigl(M,\frac{d}{\|\theta+(n_1+O(1))d^2\beta\|}\Bigr).\]
The inner sum is of the form ($O(1)$ copies of) $\sum_i\min(M,d\|\theta_i\|^{-1})$ for $O(1/(d^2|\beta|))$ points $\theta_i$ which are $d^2|\beta|$-separated $\mod{1}$. Therefore the inner sum is $\ll(M+1/(d|\beta|))\log{N}$, and this gives a bound
\[\ll Nd|\beta|\Bigl(M+\frac{1}{d|\beta|}\Bigr)\log{N}\ll \Bigl(MN d|\beta|+N\Bigr)\log{N}.\]
Putting these bounds together gives
\[\sum_{n=1}^N\min\Bigl(M,\|\alpha n\|^{-1}\Bigr)\ll \Bigl(N+NM d|\beta|+\frac{1}{d|\beta|}+d\Bigr)\log{N}.\qedhere\]
\end{proof}
\begin{lmm}\label{lmm:PrimeSum}
Let $\alpha=a/d+\beta$ with $(a,d)=1$ and $|\beta|<1/d^2$. Then
\[S_{\Lambda,x}(\alpha)=\sum_{n<x}\Lambda(n)e(n\alpha)\ll \Bigl(x^{4/5}+\frac{x^{1/2}}{|d\beta|^{1/2}}+x|d\beta|^{1/2}\Bigr)(\log{x})^4.\]
\end{lmm}
\begin{proof}
From \cite[(6), Page 142]{Davenport}, taking $f(n)=e(n\alpha)$ we have that for any choice of $U,V\ge 2$ with $UV\le x$
\begin{align*}
\sum_{n<x}\Lambda(n)e(n\alpha)&\ll U+(\log{x})\sum_{1\le t<UV}\sup_w\Bigl|\sum_{w<r\le x/t}e(rt\alpha)\Bigr|\\
&+x^{1/2}(\log{x})^3\sup_{\substack{U\le M\le x/V\\ V\le j\le x/M}}\Bigl(\sum_{V<k<x/M}\Bigl|\sum_{\substack{M<m\le 2M\\ m\le x/k\\ m\le x/j}}e(\alpha m (j-k))\Bigr|\Bigr)^{1/2}.
\end{align*}
The sum over $r$ is clearly $\ll \min(x/t,\|t\alpha\|^{-1})$ and the sum over $m$ is similarly $\ll \min(M,\|(j-k)\alpha\|^{-1})$. Putting $t$ and $j-k$ into dyadic intervals and applying Lemma \ref{lmm:Equidistribution} to the resulting sums (or the trivial bound when $j=k$) gives a bound
\[\ll \Bigl(UV+xd|\beta|+\frac{1}{d|\beta|}+d+\frac{x}{U^{1/2}}+\frac{x}{V^{1/2}}+x|d\beta|^{1/2}+\frac{x^{1/2}}{|d\beta|^{1/2}}+x^{1/2}d^{1/2}\Bigr)(\log{x})^4.\]
Choosing $U=V=x^{2/5}$ and simplifying the terms then gives the result.
\end{proof}
\begin{lmm}\label{lmm:PolySum}
Let $P\in\mathbb{Z}[X]$ be an integer polynomial of degree $r\ge 2$ with lead coefficient $a_r$. Let $\alpha\in\mathbb{R}$ be such that $a_rr!\alpha=a/d+\beta$ with $(a,d)=1$ and $|\beta|<1/d^2$. Then for any constant $\epsilon>0$ we have
\[S_{P,x}(\alpha)=\sum_{P(n)<x}e(\alpha P(n))\ll_{\epsilon} x(\log{x})\Bigl(\frac{1}{x}+\frac{1}{x^r d|\beta|}+d|\beta|+\frac{d}{x^{r}}\Bigr)^{1/2^{r}}.\]
\end{lmm}
\begin{proof}
If $\mathcal{I}$ is an interval contained in $[0,x]$ then
\begin{align*}
\Bigl|\sum_{n\in\mathcal{I}}e(\alpha P(n))\Bigr|^2=\sum_{|h|<x}\sum_{\substack{n\in\mathcal{I}\\ n+h\in\mathcal{I}}}e(\alpha(P(n+h)-P(n)))&\le \sum_{|h|<x}\Bigl|\sum_{n\in\mathcal{I}(h)}e(\alpha Q_h(n))\Bigr|
\end{align*}
where $\mathcal{I}(h)=\mathcal{I}\cap(\mathcal{I}-h)$ is an interval contained in $[0,x]$, and $Q_h(n)=P(n+h)-P(n)$ is a polynomial of degree $r-1$ with lead coefficient $a_r r h$. Applying this and Cauchy's inequality $r-1$ times gives
\begin{align*}
\Bigl|\sum_{n\in\mathcal{I}}e(\alpha P(n))\Bigr|^{2^{r-1}}&\le (2x)^{2^{r-1}-r}\sum_{|h_1|,\dots,|h_{r-1}|<x}\Bigl|\sum_{n\in\mathcal{I}(h_1,\dots,h_{r-1})}e(\alpha a_r r! h_1\dots h_{r-1}n )\Bigr|\\
&\ll x^{2^{r-1}-r}\sum_{H<x^{r-1}}\tau_{r-1}(H)\min(x,\|\alpha a_r r!H\|^{-1})
\end{align*}
where we have put $H=h_1\dots h_{r-1}$. We split the sum depending on whether $\tau_{r-1}(H)>B$ or not, for some quantity $B$ which we choose later. This shows that the sum over $H$ is of size
\begin{align*}
&\ll \sum_{\substack{H<x^{r-1}\\ \tau_{r-1}(H)\le B}}B\min(x,\|\alpha a_rr!H\|^{-1})+\sum_{\substack{H<x^{r-1}\\ \tau_{r-1}(H)>B}}x\frac{\tau_{r-1}(H)^2}{B}\\
&\ll B\Bigl(x^{r-1}+x^rd|\beta|+\frac{1}{d|\beta|}+d\Bigr)\log{x}+\frac{x^r(\log{x})^{(r-1)^2-1}}{B}
\end{align*}
by applying Lemma \ref{lmm:Equidistribution} to the first sum and the standard estimate $\sum_{H\le N}\tau_{r-1}(H)^2\ll N(\log{N})^{(r-1)^2-1}$ to the second. Writing this bound as $x^r B/Z+x^r(\log{x})^{(r-1)^2-1}/B$ and choosing $B=Z^{1/2}$ then gives the result, noting that $(\log{x})^{((r-1)^2-1)/2^{r-1}}\le\log{x}$.
\end{proof}
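The initial Weyl-differencing identity in the proof (expanding $|\sum_{n\in\mathcal{I}}e(\alpha P(n))|^2$ as a sum over shifts $h$ of the differenced sums over $\mathcal{I}(h)$) can be checked numerically. The sketch below is illustrative only: the polynomial, interval and $\alpha$ are arbitrary choices, not taken from the text.

```python
import cmath

def e(x):
    # e(x) = exp(2*pi*i*x), the standard additive character
    return cmath.exp(2j * cmath.pi * x)

alpha = 0.123456             # arbitrary phase
P = lambda n: 3 * n * n + n  # illustrative polynomial with r = 2, a_r = 3
I = range(0, 50)             # the interval \mathcal{I}

lhs = abs(sum(e(alpha * P(n)) for n in I)) ** 2

# Expanding the square: sum over shifts h of e(alpha*(P(n+h)-P(n)))
# over n with both n and n+h in I, i.e. n in I(h) = I ∩ (I - h).
rhs = sum(e(alpha * (P(n + h) - P(n)))
          for h in range(-49, 50)
          for n in I if (n + h) in I)

assert abs(lhs - rhs) < 1e-6
```

Here $Q_h(n)=P(n+h)-P(n)=6hn+3h^2+h$ indeed has degree $r-1=1$ with lead coefficient $a_r r h=6h$, as in the proof.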
\section{Fourier analysis}\label{sec:Fourier}
We now establish in turn several properties of the function $\hat{F}_{q^k}$, which are the key ingredient in our result.
\begin{lmm}[$L^1$ bound]\label{lmm:L1Bound}
There exists a constant $C_q\in [1/\log{q},1+3/\log{q}]$ such that
\[\sup_{\theta\in\mathbb{R}}\sum_{0\le a< q^k}\Bigl|\hat{F}_{q^k}\Bigl(\theta+\frac{a}{q^k}\Bigr)\Bigr|\ll (C_q q\log{q})^k.\]
\end{lmm}
\begin{proof}
We expand out the definition of $\hat{F}_{q^k}$, and let $n=\sum_{i=0}^{k-1}n_i q^i$ be the base-$q$ expansion of $n$.
\begin{align*}
\hat{F}_{q^k}(t)&=\sum_{n<q^k}\mathbf{1}_{\mathcal{A}}(n)e(t n)=\prod_{i=0}^{k-1}\Bigl(\sum_{n_i=0}^{q-1}\mathbf{1}_{\mathcal{A}}(n_i)e(n_i q^i t)\Bigr).
\end{align*}
The sum over $n_i$ is a sum over all values in $\{0,\dots,q-1\}\backslash\{a_0\}$, and so is bounded by
\begin{equation}
\Bigl|\frac{e(q^{i+1}t)-1}{e(q^it)-1}-e(a_0 q^i t)\Bigr|\le \min\Bigl(q,1+\frac{1}{2\|q^i t\|}\Bigr).\label{eq:LittleSum}
\end{equation}
For $t\in[0,1)$, we write $t=\sum_{i=1}^{k} t_i q^{-i}+\epsilon$ with $t_1,\dots,t_k\in\{0,\dots,q-1\}$ and $\epsilon\in[0,1/q^k)$. We see that $\|q^i t\|^{-1}=\|t_{i+1}/q+\epsilon_i\|^{-1}$ for some $\epsilon_i\in[0,1/q)$. In particular, $\|q^i t\|^{-1}\le\max(q/t_{i+1},q/(q-1-t_{i+1}))$ if $t_{i+1}\ne 0,q-1$. Thus we see that
\begin{align*}
\sup_{\theta\in\mathbb{R}}\sum_{0\le a <q^k}\Bigl|\hat{F}_{q^k}\Bigl(\theta+\frac{a}{q^k}\Bigr)\Bigr|&\ll \sum_{t_1,\dots,t_k<q}\prod_{i=1}^k\min\Bigl(q,1+\max\Bigl(\frac{q}{2t_i},\frac{q}{2(q-1-t_i)}\Bigr)\Bigr)\\
&\ll \prod_{i=1}^k \Bigl(3q+\sum_{1\le t_i\le (q-1)/2}\frac{q}{t_i}\Bigr)\\
&\ll (3q+q\log{q})^k.
\end{align*}
Here we used a small computation to verify $\sum_{1\le t\le (q-1)/2}t^{-1}\le \log{q}$ for all integers $q<20$, whilst for $q\ge 20>2/(\log{2}-\gamma)$ (where $\gamma$ is Euler's constant), we have $\sum_{1\le t\le (q-1)/2}t^{-1}\le \log{q}-\log{2}+\gamma+2/q\le \log{q}$. (This bound is only relevant to the final lower bound on $q$; for a qualitative statement a bound $O(\log{q})$ suffices.)
\end{proof}
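The digit-wise factorisation of $\hat{F}_{q^k}$ used at the start of the proof is easy to confirm for small parameters; in this sketch $q$, $k$, $a_0$ and the phase $t$ are arbitrary illustrative choices.

```python
import cmath

def e(x):
    # e(x) = exp(2*pi*i*x)
    return cmath.exp(2j * cmath.pi * x)

q, k, a0 = 7, 4, 3  # illustrative parameters
t = 0.3141          # arbitrary phase

def digits(n):
    # base-q digits n_0, ..., n_{k-1} of n (including leading zeros)
    return [(n // q**i) % q for i in range(k)]

# Definition: sum over n < q^k all of whose base-q digits avoid a0.
lhs = sum(e(t * n) for n in range(q**k) if a0 not in digits(n))

# Product over digit positions of the single-digit sums.
rhs = 1
for i in range(k):
    rhs *= sum(e(ni * q**i * t) for ni in range(q) if ni != a0)

assert abs(lhs - rhs) < 1e-8
# Trivial bound used repeatedly later: |F_hat| <= (q-1)^k.
assert abs(lhs) <= (q - 1) ** k + 1e-9
```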
\begin{lmm}[Large sieve estimate]\label{lmm:TypeI}
We have
\[\sup_{\theta\in\mathbb{R}}\sum_{d\sim D}\sum_{\substack{0<\ell<d\\ (\ell,d)=1}}\sup_{|\epsilon|<\frac{1}{10D^2}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\theta+\epsilon\Bigr)\Bigr|\ll (D^2+q^k)(C_q\log{q})^k.\]
Here $C_q$ is the constant described in Lemma \ref{lmm:L1Bound}.
\end{lmm}
\begin{proof}
We have that
\[\hat{F}_{q^k}(t)=\hat{F}_{q^k}(u)+\int_t^u\hat{F}_{q^k}'(v)dv.\]
Thus integrating over $u\in [t-\delta,t+\delta]$ we have
\[|\hat{F}_{q^k}(t)|\ll \frac{1}{\delta}\int_{t-\delta}^{t+\delta}|\hat{F}_{q^k}(u)|du+\int_{t-\delta}^{t+\delta}|\hat{F}_{q^k}'(u)|du.\]
We note that the fractions $\ell/d+\theta+\epsilon$ with $(\ell,d)=1$, $d<2D$ and $|\epsilon|<1/10D^2$ are separated from one another by $\gg 1/D^2$. Thus, taking $\delta\asymp 1/D^2$ above,
\[\sum_{d\sim D}\sum_{\substack{0<\ell<d\\ (\ell,d)=1}}\sup_{|\epsilon|<1/10D^2}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\theta+\epsilon\Bigr)\Bigr|\ll D^2\int_0^1|\hat{F}_{q^k}(u)|du+\int_{0}^{1}|\hat{F}_{q^k}'(u)|du.\]
We note that, writing $n=\sum_{i=0}^{k-1}n_i q^i$ we have
\begin{align*}
\hat{F}_{q^k}'(t)&=2\pi i\sum_{n\le q^k}n\mathbf{1}_{\mathcal{A}}(n)e(n t)\\
&=2\pi i \sum_{j=0}^{k-1}q^j\Bigl(\sum_{0\le n_j<q}n_j\mathbf{1}_{\mathcal{A}}(n_j)e(n_jq^{j}t)\Bigr)\prod_{i\ne j}\Bigl(\sum_{0\le n_i<q}\mathbf{1}_{\mathcal{A}}(n_i)e(n_iq^it)\Bigr).
\end{align*}
Thus, as in Lemma \ref{lmm:L1Bound}, we have
\[|\hat{F}_{q^k}'(t)|\ll \sum_{j=0}^{k-1}q^{j+1}\prod_{i\ne j}\min\Bigl(q,1+\frac{1}{2\|q^it\|}\Bigr)\ll q^k\prod_{i=0}^{k-1}\min\Bigl(q,1+\frac{1}{2\|q^it\|}\Bigr),\]
and we have the same bound for $|\hat{F}_{q^k}(t)|$ but without the $q^k$ factor. We let $t=\sum_{i=1}^k t_iq^{-i}+\epsilon$ for some $t_1,\dots,t_k\in\{0,\dots,q-1\}$ and $\epsilon\in[0,1/q^k)$. We see that, as in Lemma \ref{lmm:L1Bound}, we have
\begin{align*}
\int_0^1\prod_{i=0}^{k-1}\min\Bigl(q,1+\frac{1}{2\|q^it\|}\Bigr)dt&\ll \frac{1}{q^k}\sum_{t_1,\dots,t_k<q}\prod_{i=0}^{k-1}\Bigl(1+\min\Bigl(q,\frac{q}{2t_i},\frac{q}{2(q-1-t_i)}\Bigr)\Bigr)\\
&\ll (C_q\log{q})^k.
\end{align*}
Putting this all together then gives the result.
\end{proof}
\begin{lmm}[Hybrid estimate]\label{lmm:ExtendedTypeI}
Let $B,D\gg 1$. Then we have
\[\sum_{d\sim D}\sum_{\substack{\ell<d\\ (\ell,d)=1}}\sum_{\substack{|\eta|<B\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)\Bigr|\ll (q-1)^k(D^2B)^{\alpha_q}+D^2B(C_q\log{q})^k,\]
where $C_q$ is the constant described in Lemma \ref{lmm:L1Bound} and
\[\alpha_q=\frac{\log\Bigl(C_q\frac{q}{q-1}\log{q}\Bigr)}{\log{q}}.\]
\end{lmm}
\begin{proof}
The result follows immediately from Lemma \ref{lmm:L1Bound} if $B>q^k$, so we may assume $B<q^k$. For any integer $k_1\in [0,k]$ we have
\begin{align*}
\hat{F}_{q^k}(\alpha)&=\prod_{i=0}^{k-k_1-1}\Bigl(\sum_{n_i<q}\mathbf{1}_{\mathcal{A}}(n_i)e(n_iq^i\alpha)\Bigr)\prod_{i=k-k_1}^{k-1}\Bigl(\sum_{n_i<q}\mathbf{1}_{\mathcal{A}}(n_i)e(n_iq^i\alpha)\Bigr)\\
&=\hat{F}_{q^{k-k_1}}(\alpha)\hat{F}_{q^{k_1}}(q^{k-k_1}\alpha).
\end{align*}
Using this and the trivial bound $|\hat{F}_{q^j}(\theta)|\le (q-1)^j$, for $k_1+k_2\le k$ we have that
\[\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)\Bigr|\le (q-1)^{k-k_1-k_2}\Bigl|\hat{F}_{q^{k_1}}\Bigl(\frac{q^{k-k_1}\ell}{d}+\frac{\eta}{q^{k_1}}\Bigr)\Bigr|\sup_{|\epsilon|\le B/q^k}\Bigl|\hat{F}_{q^{k_2}}\Bigl(\frac{\ell}{d}+\epsilon\Bigr)\Bigr|.\]
Substituting this bound gives
\begin{align*}
\sum_{d\sim D}&\sum_{\substack{\ell<d\\ (\ell,d)=1}}\sum_{\substack{|\eta|<B\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)\Bigr|\ll (q-1)^{k-k_1-k_2}\\
&\times\sum_{d\sim D}\sum_{\substack{\ell<d\\ (\ell,d)=1}}\sup_{|\epsilon|<B/q^k}\Bigl|\hat{F}_{q^{k_2}}\Bigl(\frac{\ell}{d}+\epsilon\Bigr)\Bigr|\sum_{\substack{|\eta|<B\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^{k_1}}\Bigl(\frac{q^{k-k_1}\ell}{d}+\frac{\eta}{q^{k_1}}\Bigr)\Bigr|.
\end{align*}
We choose $k_1$ minimally such that $q^{k_1}>B$, and extend the inner sum to $|\eta|<q^{k_1}$. Applying Lemma \ref{lmm:L1Bound} to the inner sum, and then Lemma \ref{lmm:TypeI} to the sum over $d,\ell$ gives
\[\sum_{d\sim D}\sum_{\substack{\ell<d\\ (\ell,d)=1}}\sum_{\substack{|\eta|<B\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)\Bigr|\ll (q-1)^{k-k_1-k_2}q^{k_1}(q^{k_2}+D^2)(C_q\log{q})^{k_1+k_2}.\]
We choose $k_2=\min(k-k_1,\lfloor2\log{D}/\log{q}\rfloor)$. We see that
\begin{align*}
\Bigl(\frac{C_q q\log{q}}{q-1}\Bigr)^{k_1+k_2}&\ll (D^2B)^{\alpha_q},\\
D^2 q^{k_1}\Bigl(\frac{C_q\log{q}}{q-1}\Bigr)^{k_1+k_2}&\ll \frac{D^2 B}{(q-1)^k}(C_q\log{q})^k+(D^2B)^{\alpha_q}.
\end{align*}
Combining these bounds gives the result.
\end{proof}
\begin{lmm}[$L^\infty$ bound]\label{lmm:LInfBound}
Let $d<q^{k/3}$ be of the form $d=d_1d_2$ with $(d_1,q)=1$ and $d_1\ne 1$, and let $|\epsilon|<1/2q^{2k/3}$. Then for any integer $\ell$ coprime with $d$ we have
\[\Bigl| \hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\epsilon\Bigr)\Bigr|\ll (q-1)^{k}\exp\Bigl(-c_q\frac{k}{\log{d}}\Bigr)\]
for some constant $c_q>0$ depending only on $q$.
\end{lmm}
\begin{proof}
We have that
\[|e(n\theta)+e((n+1)\theta)|^2=2+2\cos(2\pi \theta)\le 4\exp(-2\|\theta\|^2).\]
This implies that
\[\Bigl|\sum_{n_i<q}\mathbf{1}_{\mathcal{A}}(n_i)e(n_i \theta)\Bigr|\le q-3+2\exp(-\|\theta\|^2)\le (q-1)\exp\Bigl(-\frac{\|\theta\|^2}{q}\Bigr).\]
We substitute this bound into the digit-wise product formula for $\hat{F}$, which gives, for any real $t$,
\begin{align*}
\Bigl|\hat{F}_{q^k}(t)\Bigr|&=\prod_{i=0}^{k-1}\Bigl|\sum_{n_i<q}\mathbf{1}_{\mathcal{A}}(n_i)e(n_iq^it)\Bigr|\\
&\le
(q-1)^{k}\exp\Bigl(-\frac{1}{q}\sum_{i=0}^{k-1}\|q^it\|^2\Bigr).
\end{align*}
If $\|q^it\|<1/2q$ then $\|q^{i+1}t\|=q\|q^i t\|$. If $t=\ell/d_1d_2$ with $d_1>1$, $(d_1,q)=1$ and $(\ell,d_1)=1$ then $\|q^it\|\ge 1/d$ for all $i$. Similarly, if $t=\ell/d_1d_2+\epsilon$ with $\ell,d_1,d_2$ as above $|\epsilon|<q^{-2k/3}/2$ and $d=d_1d_2<q^{k/3}$ then for $i<k/3$ we have $\|q^it\|\ge 1/d-q^i|\epsilon|\ge 1/2d$. Thus, for any interval $\mathcal{I}\subseteq[0,k/3]$ of length $\log{d}/\log{q}$, there must be some integer $i\in\mathcal{I}$ such that $\|q^i(\ell/d+\epsilon)\|>1/2q^2$. This implies that
\[\sum_{i=0}^k\Bigl\|q^i\Bigl(\frac{\ell}{d}+\epsilon\Bigr)\Bigr\|^2\ge \frac{1}{4q^4}\Bigl\lfloor\frac{k\log{q}}{3\log{d}}\Bigr\rfloor.\]
Substituting this into the bound for $\hat{F}$, and recalling we assume $d<q^{k/3}$ gives the result.
\end{proof}
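The elementary inequality $2+2\cos(2\pi\theta)\le 4\exp(-2\|\theta\|^2)$ opening the proof can be spot-checked on a grid (a numerical sanity check, not a proof):

```python
import math

def dist(x):
    # ||x||: distance from x to the nearest integer
    return abs(x - round(x))

for j in range(1000):
    theta = j / 1000.0
    lhs = 2 + 2 * math.cos(2 * math.pi * theta)  # = |e(n t) + e((n+1) t)|^2
    rhs = 4 * math.exp(-2 * dist(theta) ** 2)
    assert lhs <= rhs + 1e-12
```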
\section{Minor arcs}
We now use the exponential sum estimates from the previous sections to show that when $\alpha$ is `far' from a rational with small denominator, the quantities $\hat{F}_{q^k}(\alpha)S_{\Lambda,q^k}(-\alpha)$ and $\hat{F}_{q^k}(\alpha)S_{P,q^k}(-\alpha)$ are typically small in absolute value.
\begin{lmm}\label{lmm:PrimeMinor}
Let $1\ll B\ll q^k/D_0D$ and $1\ll D\ll D_0\ll q^{k/2}$. Then we have
\begin{align*}
\sum_{d\sim D}\sum_{\substack{0<\ell<d\\ (\ell,d)=1}}\sum_{\substack{|\eta|\sim B\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)S_{\Lambda,q^k}\Bigl(-\frac{\ell}{d}-\frac{\eta}{q^k}\Bigr)\Bigr|\\
\ll k^4(q-1)^kq^{k}\Bigl(\frac{1}{(DB)^{1/5-\alpha_q}}+\frac{q^{k\alpha_q}}{D_0^{1/2}}\Bigr).
\end{align*}
and
\begin{align*}
\sum_{d\sim D}\sum_{\substack{0<\ell<d\\ (\ell,d)=1}}\sum_{\substack{|\eta|\ll 1\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)S_{\Lambda,q^k}\Bigl(-\frac{\ell}{d}-\frac{\eta}{q^k}\Bigr)\Bigr|\\
\ll k^4(q-1)^kq^{k}\Bigl(\frac{1}{D^{1/5-\alpha_q}}+\frac{D_0^{1/2+2\alpha_q}}{q^{k/2}}\Bigr).
\end{align*}
Here $\alpha_q$ is the constant described in Lemma \ref{lmm:ExtendedTypeI}.
\end{lmm}
\begin{proof}
By Lemma \ref{lmm:ExtendedTypeI} we have that if $D^2B\ll q^k$ then
\[\sum_{d\sim D}\sum_{\substack{\ell<d\\ (\ell,d)=1}}\sum_{\substack{|\eta|<B\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)\Bigr|\ll (q-1)^k(D^2B)^{\alpha_q}.\]
By Lemma \ref{lmm:PrimeSum} we have
\[\sup_{\substack{d\sim D\\ (\ell,d)=1\\ |\eta|\sim B}}\Bigl|\sum_{n<q^k}\Lambda(n)e\Bigl(-n\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)\Bigr)\Bigr|\ll \Bigl(q^{4k/5}+\frac{q^{k}}{(DB)^{1/2}}+q^{k/2}(DB)^{1/2}\Bigr)(k\log{q})^4.\]
Putting these together gives
\begin{align*}
&\sum_{d\sim D}\sum_{\substack{0<\ell<d\\ (\ell,d)=1}}\sum_{\substack{|\eta|\sim B\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)S_{\Lambda,q^k}\Bigl(-\frac{\ell}{d}-\frac{\eta}{q^k}\Bigr)\Bigr|\\
&\ll k^4 q^k(q-1)^k\Bigl(\frac{(D^2B)^{\alpha_q}}{q^{k/5}}+\frac{(D^2B)^{\alpha_q}}{(DB)^{1/2}}+\frac{(DB)^{1/2}(D^2B)^{\alpha_q}}{q^{k/2}}\Bigr).
\end{align*}
Recalling that $D^2B<q^k$ and $DB<q^k/D_0$ by assumption, we see that this is
\[\ll k^4 q^k(q-1)^k \Bigl((D^2B)^{\alpha_q-1/5}+(D^2B)^{\alpha_q-1/4}+\frac{q^{k\alpha_q}}{D_0^{1/2}}\Bigr),\]
and the first term clearly dominates the second.
By partial summation, the bound of Lemma \ref{lmm:PrimeSum} for $S_{\Lambda,q^k}(\alpha)$ also applies to $S_{\Lambda,q^k}(\alpha+O(1/q^k))$. Thus in the case $|\eta|\ll 1$ we obtain the bound
\[\sup_{\substack{d\sim D\\ (\ell,d)=1\\ |\eta|\ll 1}}\Bigl|\sum_{n<q^k}\Lambda(n)e\Bigl(-n\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)\Bigr)\Bigr|\ll \Bigl(q^{4k/5}+\frac{q^{k}}{D^{1/2}}+q^{k/2}D^{1/2}\Bigr)(k\log{q})^4.\]
This gives
\begin{align*}
&\sum_{d\sim D}\sum_{\substack{0<\ell<d\\ (\ell,d)=1}}\sum_{\substack{|\eta|\ll 1\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)S_{\Lambda,q^k}\Bigl(-\frac{\ell}{d}-\frac{\eta}{q^k}\Bigr)\Bigr|\\
&\ll k^4 q^k(q-1)^k\Bigl(\frac{D^{2\alpha_q}}{q^{k/5}}+\frac{D^{2\alpha_q}}{D^{1/2}}+\frac{D^{1/2+2\alpha_q}}{q^{k/2}}\Bigr).
\end{align*}
Recalling that $1\ll D\ll D_0\ll q^{k/2}$ then gives the result.
\end{proof}
\begin{lmm}\label{lmm:PolyMinor}
Let $DB\ll q^k/D_0$ and $D\ll D_0\ll q^{k/2}$. Then we have
\begin{align*}
\sum_{d\sim D}\sum_{\substack{0<\ell<da_r r!\\ (\ell,d)=1}}\sum_{\substack{|\eta|\sim B\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d a_r r!}+\frac{\eta}{q^ka_r r!}\Bigr)S_{P,q^k}\Bigl(\frac{-\ell}{da_r r!}+\frac{-\eta}{q^k a_r r!}\Bigr)\Bigr|\\
\ll k (q-1)^k q^{k/r}\Bigl(\frac{1}{(DB)^{1/r2^r-\alpha_q}}+\frac{q^{k\alpha_q}}{D_0^{1/2^{r}}}\Bigr),
\end{align*}
and
\begin{align*}
\sum_{d\sim D}\sum_{\substack{0<\ell<da_r r!\\ (\ell,d)=1}}\sum_{\substack{|\eta|\ll 1\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d a_r r!}+\frac{\eta}{q^ka_r r!}\Bigr)S_{P,q^k}\Bigl(\frac{-\ell}{da_r r!}+\frac{-\eta}{q^k a_r r!}\Bigr)\Bigr|\\
\ll k (q-1)^k q^{k/r}\Bigl(\frac{1}{D^{1/r2^r-\alpha_q}}+\frac{D_0^{2\alpha_q+1/2^r}}{q^{k/2^r}}\Bigr).
\end{align*}
Here $\alpha_q$ is the constant described in Lemma \ref{lmm:ExtendedTypeI}.
\end{lmm}
\begin{proof}
By Lemma \ref{lmm:ExtendedTypeI} we have that if $D^2B\ll q^k$ then
\[\sum_{d\sim a_r r! D}\sum_{\substack{\ell<d\\ (\ell,d)=1}}\sum_{\substack{|\eta|<B\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)\Bigr|\ll (q-1)^k(D^2B)^{\alpha_q}.\]
By Lemma \ref{lmm:PolySum} we have
\[\sup_{\substack{d\sim D\\ (\ell,d)=1\\ |\eta|\sim B}}\Bigl|\sum_{P(n)<q^{k}}e\Bigl(\frac{-P(n)}{a_rr!}\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)\Bigr)\Bigr|\ll k q^{k/r}\Bigl(\frac{1}{q^{k/r2^{r}}}+\frac{1}{(DB)^{1/2^{r}}}+\frac{(DB)^{1/2^{r}}}{q^{k/2^{r}}}\Bigr).\]
Putting these together gives
\begin{align}
\sum_{d\sim D}\sum_{\substack{0<\ell<da_r r!\\ (\ell,d)=1}}&\sum_{\substack{|\eta|\sim B\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d a_r r!}+\frac{\eta}{q^ka_r r!}\Bigr)\sum_{P(n)<q^{k}}e\Bigl(P(n)\Bigl(\frac{-\ell}{da_r r!}+\frac{-\eta}{q^k a_r r!}\Bigr)\Bigr)\Bigr|\nonumber\\
&\ll k q^{k/r}(q-1)^k\Bigl(\frac{(D^2B)^{\alpha_q}}{q^{k/r2^{r}}}+\frac{(D^2B)^{\alpha_q}}{(DB)^{1/2^{r}}}+\frac{(DB)^{1/2^{r}}(D^2B)^{\alpha_q}}{q^{k/2^{r}}}\Bigr).\label{eq:PolyBound}
\end{align}
Recalling that $D^2B<q^k$ and $DB<q^k/D_0$ by assumption, we see that this is
\[\ll k q^{k/r}(q-1)^k \Bigl((D^2B)^{\alpha_q-1/r2^{r}}+(D^2B)^{\alpha_q-1/2^{r+1}}+\frac{q^{k\alpha_q}}{D_0^{1/2^{r}}}\Bigr),\]
and the first term clearly dominates the second. As in Lemma \ref{lmm:PrimeMinor}, in the case we instead sum over $|\eta|\ll 1$, we obtain the same bound as \eqref{eq:PolyBound} with $B$ replaced by 1, since by partial summation we obtain the bound of Lemma \ref{lmm:PolySum} for $S_{P,q^k}(\alpha)$ as $S_{P,q^k}(\alpha+O(1/q^k))$. This gives
\begin{align*}
\sum_{d\sim D}\sum_{\substack{0<\ell<da_r r!\\ (\ell,d)=1}}&\sum_{\substack{|\eta|\ll 1\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d a_r r!}+\frac{\eta}{q^ka_r r!}\Bigr)\sum_{P(n)<q^{k}}e\Bigl(P(n)\Bigl(\frac{-\ell}{da_r r!}+\frac{-\eta}{q^k a_r r!}\Bigr)\Bigr)\Bigr|\nonumber\\
&\ll k q^{k/r}(q-1)^k\Bigl(\frac{D^{2\alpha_q}}{q^{k/r2^{r}}}+\frac{D^{2\alpha_q}}{D^{1/2^{r}}}+\frac{D^{1/2^{r}+2\alpha_q}}{q^{k/2^{r}}}\Bigr).
\end{align*}
Recalling $1\ll D\ll D_0\ll q^{k/2}$ gives the result.
\end{proof}
\section{Major Arcs}
We now consider $\hat{F}_{q^k}(\alpha)S_{\Lambda,q^k}(-\alpha)$ and $\hat{F}_{q^k}(\alpha)S_{P,q^k}(-\alpha)$ when $\alpha$ is close to a rational with small denominator.
\begin{lmm}\label{lmm:MajorError}
Let $D$, $B\ll \exp(c_q^{1/2} k^{1/2}/3)$ where $c_q$ is the constant from Lemma \ref{lmm:LInfBound}. Then we have
\[\sum_{\substack{d<D\\ \exists p|d,p\nmid q}}\sum_{\substack{0<\ell<d\\ (\ell,d)=1}}\sum_{\substack{|\eta|\ll B\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)S_{\Lambda,q^k}\Bigl(\frac{-\ell}{d}+\frac{-\eta}{q^k}\Bigr)\Bigr|\ll \frac{q^k(q-1)^k}{\exp(c_q^{1/2}k^{1/2})},\]
and
\[\sum_{\substack{d<D\\ \exists p|d,p\nmid q}}\sum_{\substack{0<\ell<d\\ (\ell,d)=1}}\sum_{\substack{|\eta|\ll B\\ q^k\ell/d+\eta\in\mathbb{Z}}}\Bigl|\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{\eta}{q^k}\Bigr)S_{P,q^k}\Bigl(\frac{-\ell}{d}+\frac{-\eta}{q^k}\Bigr)\Bigr|\ll \frac{q^{k/r}(q-1)^k}{\exp(c_q^{1/2}k^{1/2})}.\]
\end{lmm}
\begin{proof}
This follows immediately from Lemma \ref{lmm:LInfBound}, using the trivial bound for the exponential sum involving primes or polynomials.
\end{proof}
\begin{lmm}\label{lmm:PrimeMajor}
Let $A>0$. Then for $D,B<(\log{q^k})^{A}$ and $D>q$ we have
\begin{align*}
\frac{1}{q^k}\sum_{\substack{d<D\\ p|d\Rightarrow p|q}}\sum_{\substack{0\le \ell<d\\ (\ell,d)=1}}\sum_{|b|<B}\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{b}{q^k}\Bigr)&S_{\Lambda,q^k}\Bigl(\frac{-\ell}{d}+\frac{-b}{q^k}\Bigr)\\
&=\kappa_q(a_0)(q-1)^k+O_A\Bigl(\frac{(q-1)^k}{(\log{q^k})^A}\Bigr),
\end{align*}
where
\[\kappa_q(a_0)=\begin{cases}
\displaystyle\frac{q}{q-1},\qquad&\text{if $(a_0,q)\ne 1$,}\\
\displaystyle\frac{q(\phi(q)-1)}{(q-1)\phi(q)},&\text{if $(a_0,q)=1$.}
\end{cases}
\]
\end{lmm}
\begin{proof}
If $b\ne 0$ then by the prime number theorem in arithmetic progressions in short intervals and partial summation we have
\[S_{\Lambda,q^k}\Bigl(\frac{-\ell}{d}+\frac{-b}{q^k}\Bigr)\ll_A \frac{q^k}{(\log{q^k})^{4A}}.\]
Thus the terms with $b\ne 0$ contribute
\begin{align*}
\ll \frac{(\log{q^k})^{3A}}{q^k}\sup_{0<a<q^k}\Bigl|\hat{F}_{q^k}\Bigl(\frac{a}{q^k}\Bigr)\Bigr|\frac{q^k}{(\log{q^k})^{4A}}\ll \frac{(q-1)^k}{(\log{q^k})^{A}}.
\end{align*}
Here we used the trivial bound that $|\hat{F}_{q^k}(\theta)|\le (q-1)^k$ for all $\theta$.
Using the prime number theorem in arithmetic progressions again, we see that
\[S_{\Lambda,q^k}\Bigl(\frac{-\ell}{d}\Bigr)=\frac{q^k}{\phi(d)}\sum_{\substack{0<c<d\\ (c,d)=1}}e\Bigl(\frac{-\ell c}{d}\Bigr)+O_A\Bigl(\frac{q^k}{(\log{q^k})^{4A}}\Bigr)=\frac{\mu(d)q^k}{\phi(d)}+O_A\Bigl(\frac{q^k}{(\log{q^k})^{4A}}\Bigr).\]
Thus we may restrict to $d|q$, since all other such $d$ are not square-free. Letting $\ell'/q=\ell/d$, we see that the terms with $b=0$ and $d|q$ contribute
\begin{align*}
\frac{1}{q^k}\sum_{0\le \ell'<q}\hat{F}_{q^k}\Bigl(\frac{\ell'}{q}\Bigr)&S_{\Lambda,q^k}\Bigl(\frac{-\ell'}{q}\Bigr)=\frac{1}{q^{k-1}}\sum_{\substack{n,m<q^k\\ n\equiv m\Mod{q}}}\Lambda(n)\mathbf{1}_{\mathcal{A}}(m)\\
&=\frac{q}{\phi(q)}\sum_{\substack{0<a<q\\ (a,q)=1}}\sum_{\substack{m<q^k\\ m\equiv a\Mod{q}}}\mathbf{1}_{\mathcal{A}}(m)+O_A\Bigl(\frac{q^k}{(\log{q^k})^{4A}}\Bigr).
\end{align*}
If $a\ne a_0$ then the sum over $m$ is $(q-1)^{k-1}$ since there are $(q-1)$ choices for each digit of $m$ apart from the final one, which must be $a$. If $a=a_0$ then the sum is empty. Thus
\begin{align*}
&\frac{q}{\phi(q)}\sum_{\substack{0<a<q\\ (a,q)=1}}\sum_{\substack{m<q^k\\ m\equiv a\Mod{q}}}\mathbf{1}_{\mathcal{A}}(m)=\begin{cases}
q(q-1)^{k-1},\qquad &\text{if $(a_0,q)\ne 1$,}\\
\displaystyle\frac{\phi(q)-1}{\phi(q)}q(q-1)^{k-1},&\text{if $(a_0,q)=1$.}
\end{cases}\qedhere
\end{align*}
\end{proof}
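The counting fact in the final step — among $m<q^k$ with no base-$q$ digit equal to $a_0$, exactly $(q-1)^{k-1}$ lie in each residue class $a\pmod q$ with $a\ne a_0$, and none with $a=a_0$ — can be verified exhaustively for small parameters (here $q=7$, $k=4$, $a_0=3$ are arbitrary illustrative choices):

```python
q, k, a0 = 7, 4, 3  # small illustrative parameters

def no_digit_a0(m):
    # True if none of the k base-q digits of m equals a0
    return all((m // q**i) % q != a0 for i in range(k))

for a in range(q):
    count = sum(1 for m in range(q**k) if no_digit_a0(m) and m % q == a)
    expected = 0 if a == a0 else (q - 1) ** (k - 1)
    assert count == expected
```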
\begin{lmm}\label{lmm:PolyMajor}
For $B,\,q^J<\exp(qk^{1/2})$ we have
\begin{align*}
\frac{1}{q^k}\sum_{\substack{d|q^J}}\sum_{\substack{0\le \ell<d\\ (\ell,d)=1}}\sum_{|b|<B}\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{b}{q^k}\Bigr)&S_{P,q^k}\Bigl(\frac{-\ell}{d}+\frac{-b}{q^k}\Bigr)\\
&=\mathfrak{S}_J\frac{a_r^{1/r}q^{k/r}(q-1)^k}{q^k}+O\Bigl(\frac{q^{k/r}(q-1)^k}{q^{k}q^{1/2r}}\Bigr),
\end{align*}
where
\[\mathfrak{S}_J=\frac{\#\{(n,m):\,0\le n,m<q^J,\, m\in\mathcal{A},\, P(n)\equiv m\Mod{q^J}\}}{(q-1)^J}.\]
\end{lmm}
\begin{proof}
For $y\ll x^{1-1/2r}$ we have
\begin{align*}
\#\{n:P(n)\in&[x,x+y],n\equiv n_0\Mod{d}\}\\
&=\#\{n:a_r n^r+O(x^{1-1/r})\in[x,x+y], n\equiv n_0\Mod{d}\}\\
&=\frac{y}{dra_r^{1/r} x^{1-1/r}}+O(1).
\end{align*}
Thus the values of $P(n)<q^k$ are well-distributed in arithmetic progressions modulo $d<q^J$ and in short intervals of length $\gg q^{k(1-1/2r)}$. Therefore by partial summation we have that for $b\ne 0$ and $d|q^J$
\[\sum_{P(n)\le q^{k}}e\Bigl(-P(n)\Bigl(\frac{\ell}{d}+\frac{b}{q^k}\Bigr)\Bigr)\ll q^{k/r-1/2r}.\]
In particular, using the trivial bound $|\hat{F}_{q^k}(\theta)|\le (q-1)^k$, we have
\begin{align*}
&\frac{1}{q^k}\sum_{\substack{d|q^J}}\sum_{\substack{0\le \ell<d\\ (\ell,d)=1}}\sum_{0<|b|\ll B}\hat{F}_{q^k}\Bigl(\frac{\ell}{d}+\frac{b}{q^k}\Bigr)\sum_{P(n)\le q^{k}}e\Bigl(-P(n)\Bigl(\frac{\ell}{d}+\frac{b}{q^k}\Bigr)\Bigr)\ll\frac{(q-1)^kq^{k/r}}{q^k q^{1/2r}}.
\end{align*}
Thus we may restrict our attention to $b=0$. Rewriting $\ell/d$ as $\ell'/q^J$, we see that these terms are equal to
\[\frac{1}{q^k}\sum_{0\le \ell'<q^J}\hat{F}_{q^k}\Bigl(\frac{\ell'}{q^J}\Bigr)\sum_{P(n)<q^k}e\Bigl(\frac{-P(n)\ell'}{q^J}\Bigr)=\frac{1}{q^{k-J}}\sum_{m<q^k}\mathbf{1}_{\mathcal{A}}(m)\sum_{\substack{P(n)<q^k\\ P(n)\equiv m\Mod{q^J}}}1.\]
Putting $n$, $m$ into residue classes $\Mod{q^J}$ then gives the result.
\end{proof}
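The singular series $\mathfrak{S}_J$ can be computed directly from its definition for small parameters. The sketch below uses arbitrary illustrative choices ($q=7$, $a_0=3$, $P(n)=n^2$), not values from the text; by the discussion in the next section one expects the values to stabilise as $J$ grows.

```python
q, a0 = 7, 3         # illustrative parameters
P = lambda n: n * n  # illustrative polynomial P(n) = n^2

def in_A(m, J):
    # m < q^J with no base-q digit equal to a0
    return all((m // q**i) % q != a0 for i in range(J))

def frak_S(J):
    # #{(n, m) : 0 <= n, m < q^J, m in A, P(n) ≡ m (mod q^J)} / (q-1)^J;
    # for each n the residue m = P(n) mod q^J is determined, so we just
    # test whether that residue lies in A.
    count = sum(1 for n in range(q**J) if in_A(P(n) % q**J, J))
    return count / (q - 1) ** J

values = [frak_S(J) for J in (1, 2, 3)]
```

For $J=1$ this gives $7/6$: every square modulo $7$ lies in $\{0,1,2,4\}$, so all seven residues $P(n)\bmod 7$ avoid the excluded digit $3$.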
\section{Proof of Theorems \ref{thrm:Prime} and \ref{thrm:Poly}}
\begin{proof}[Proof of Theorem \ref{thrm:Prime}]
By Fourier expansion we have
\begin{align*}
\sum_{n<q^k}\Lambda(n)\mathbf{1}_{\mathcal{A}}(n)&=\frac{1}{q^k}\sum_{0\le a<q^k}\hat{F}_{q^k}\Bigl(\frac{a}{q^k}\Bigr)S_{\Lambda,q^k}\Bigl(\frac{-a}{q^k}\Bigr).
\end{align*}
By Dirichlet's approximation theorem, for any choice of $D_0>0$ and any $0\le a<q^k$ there exist integers $\ell,d$ with $(\ell,d)=1$ and $d\le D_0$, and a real $|\beta|\le 1/dD_0$, such that
\[\frac{a}{q^k}=\frac{\ell}{d}+\beta.\]
We see that $q^k\ell/d+q^k\beta\in\mathbb{Z}$. We use Lemmas \ref{lmm:PrimeMajor} and \ref{lmm:MajorError} to estimate the contribution when $\max(d,q^k|\beta|)<(\log{q^k})^A$, and use Lemma \ref{lmm:PrimeMinor} for the remaining cases. This gives
\begin{align*}
&\frac{1}{q^k}\sum_{0\le a<q^k}\hat{F}_{q^k}\Bigl(\frac{a}{q^k}\Bigr)S_{\Lambda,q^k}\Bigl(\frac{-a}{q^k}\Bigr)=\kappa_q(a_0)(q-1)^k\\
&+O_A\Bigl( (q-1)^k\Bigl(\frac{1}{(\log{q^k})^A}+\frac{k^4}{(\log{q^k})^{A(1/5-\alpha_q)}}+\frac{k^5 q^{k\alpha_q}}{D_0^{1/2}}+\frac{k^5 D_0^{1/2+2\alpha_q}}{q^{k/2}}\Bigr)\Bigr).
\end{align*}
Choosing $D_0=q^{k/2}$ we see that the error term is $O_B((q-1)^k (\log{q^k})^{-B})$ provided $\alpha_q<1/5$ and $A$ is chosen such that $A>(B+5)/(1/5-\alpha_q)$. We recall from Lemmas \ref{lmm:ExtendedTypeI} and \ref{lmm:L1Bound} that
\[\alpha_q\le\frac{\log\Bigl(\frac{q}{q-1}\log{q}+\frac{3q}{q-1}\Bigr)}{\log{q}}.\]
This clearly tends to zero as $q\rightarrow \infty$. A calculation shows that $\alpha_q<0.198$ for $q>\num{2000000}$. This gives the result.
\end{proof}
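The closing numerical claim, that $\alpha_q<0.198$ once $q>2\,000\,000$, can be reproduced from the displayed upper bound (which uses $C_q\le 1+3/\log q$); a quick check:

```python
import math

def alpha_upper(q):
    # Upper bound from the displayed inequality:
    # alpha_q <= log( (q/(q-1)) log q + 3q/(q-1) ) / log q
    return math.log((q / (q - 1)) * (math.log(q) + 3)) / math.log(q)

assert alpha_upper(2000001) < 0.198
# The bound decreases towards 0 as q grows:
assert alpha_upper(10**12) < alpha_upper(10**8) < alpha_upper(2000001)
```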
\begin{proof}[Proof of Theorem \ref{thrm:Poly}]
The proof is essentially identical to that of Theorem \ref{thrm:Prime} above. We choose $D_0=q^{k/2}$, and split our summation according to $\ell,d,\beta$ such that
\[\frac{a}{a_r r! q^k}=\frac{\ell}{d}+\beta.\]
We use Lemma \ref{lmm:PolyMinor} in place of \ref{lmm:PrimeMinor} for $\max(d,|\beta|q^k)>q^J$ and Lemma \ref{lmm:PolyMajor} instead of \ref{lmm:PrimeMajor} along with Lemma \ref{lmm:MajorError} to deal with $\max(d,q^k|\beta|)<q^J$. For any choice of $J$ with $q^J<\exp(qk^{1/2})$ we obtain
\begin{align*}
&\frac{1}{q^k}\sum_{0\le a<q^k}\hat{F}_{q^k}\Bigl(\frac{a}{q^k}\Bigr)S_{P,q^k}\Bigl(\frac{-a}{q^k}\Bigr)=\mathfrak{S}_J\frac{a_r^{1/r}q^{k/r}(q-1)^k}{q^k}\\
&+O_A\Bigl(\frac{q^{k/r}(q-1)^k}{q^k}\Bigl(\frac{1}{\exp(c_q^{1/2}k^{1/2})}+\frac{k^4}{q^{J(1/r2^r-\alpha_q)}}+\frac{k q^{k\alpha_q}}{D_0^{1/2^r}}+\frac{D_0^{1/2^r+2\alpha_q}}{q^{k/2^r}}\Bigr)\Bigr).
\end{align*}
Since $D_0=q^{k/2}$, we see that provided $\alpha_q<2^{-r}/r$ the error term is small. In particular there is some quantity $\mathfrak{S}$ such that for any such choice of $J<c_q^{1/2}k^{1/2}$
\[\mathfrak{S}=\mathfrak{S}_J+O\Bigl(\frac{k^4}{q^{J(1/r2^r-\alpha_q)}}\Bigr).\]
Thus, if $\alpha_q<2^{-r}/r$ we see that $\mathfrak{S}_J$ converges to $\mathfrak{S}$ as $J\rightarrow \infty$ and that
\[\frac{1}{q^k}\sum_{0\le a<q^k}\hat{F}_{q^k}\Bigl(\frac{a}{q^k}\Bigr)S_{P,q^k}\Bigl(\frac{-a}{q^k}\Bigr)=\mathfrak{S}\frac{a_r^{1/r}q^{k/r}(q-1)^k}{q^k}+O\Bigl(\frac{q^{k/r}}{\exp(c_q^{1/2}k^{1/2})}\Bigr).\]
Since $\alpha_q\rightarrow 0$ as $q\rightarrow \infty$, we see that $\alpha_q<2^{-r}/r$ for $q>q_0(r)$. From the bound $C_q\le 1+3/\log{q}$, we see that this holds for $q\ge \exp(\exp(2r))$.
This completes the proof.
\end{proof}
\section{Modifications for Theorem \ref{thrm:ManyDigits}}
In this section we sketch the modifications required to establish Theorem \ref{thrm:ManyDigits}, leaving the precise details to the interested reader. The results of Section \ref{sec:ExponentialSums} remain unchanged. In Lemma \ref{lmm:L1Bound}, instead of equation \eqref{eq:LittleSum} we have
\[\Bigl|\frac{e(q^{i+1}t)-1}{e(q^it)-1}-\sum_{j=1}^s e(b_j q^it)\Bigr|\le \min\Bigl(q,s+\frac{1}{2\|q^i t\|}\Bigr).\]
If the $b_i$ are consecutive integers then this can be improved to $\min(2q,1/\|q^it\|)$. Thus we can instead take $C_q=C_{q,s}=1+(2+s)/\log{q}$ in general, or $C_{q,s}=2+2/\log{q}$ if the $b_i$ are consecutive. Lemma \ref{lmm:TypeI} remains unchanged, whilst in Lemma \ref{lmm:ExtendedTypeI} all occurrences of $q-1$ should be replaced by $q-s$. In particular, we have
\[\alpha_q=\alpha_{q,s}=\frac{\log\Bigl(C_{q,s}\frac{q}{q-s}\log{q}\Bigr)}{\log{q}}.\]
With these values of $\alpha_{q,s}$ and $C_{q,s}$ in place of $\alpha_q$ and $C_q$, the arguments and statements of Lemmas \ref{lmm:LInfBound}, \ref{lmm:PrimeMinor}, \ref{lmm:PolyMinor}, \ref{lmm:MajorError}, \ref{lmm:PrimeMajor}, \ref{lmm:PolyMajor} all go through as before, except that any occurrence of $q-1$ must be replaced by $q-s$. In Lemma \ref{lmm:MajorError} we made use of the fact that there were two consecutive digits which were not excluded; clearly this still holds in the cases considered by Theorem \ref{thrm:ManyDigits}.
The final proofs of Theorems \ref{thrm:Prime} and \ref{thrm:Poly} then work as before, provided that the constraints $\alpha_{q,s}<1/5$ or $\alpha_{q,s}<r^{-1}2^{-r}$ hold. If the $b_i$ are not necessarily consecutive then we take $C_{q,s}=1+(2+s)/\log{q}$ and see that if $q$ is sufficiently large in terms of $\epsilon$ and $s<q/2$ then
\[\alpha_{q,s}\le \frac{\log{s}}{\log{q}}+\epsilon.\]
In particular, if $s<q^{1/5-\epsilon}$ then $\alpha_{q,s}<1/5$, as required. A computation reveals that if $q=10^8$ and $s=10$ then $\alpha_{q,s}<1/5$, justifying the remark made after Theorem \ref{thrm:ManyDigits}.
If the $b_i$ are consecutive then we can take $C_{q,s}=2+2/\log{q}$, and see that for $q$ sufficiently large in terms of $\epsilon$ we have
\[\alpha_{q,s}\le \frac{\log{q/(q-s)}}{\log{q}}+\epsilon.\]
Thus $\alpha_{q,s}<1/5$ provided $q-s>q^{4/5+\epsilon}$.
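The numerical remarks about multiple excluded digits can be reproduced directly from the formula for $\alpha_{q,s}$. A sketch using the general (not-necessarily-consecutive) constant $C_{q,s}=1+(2+s)/\log q$:

```python
import math

def alpha_qs(q, s):
    # alpha_{q,s} = log( C_{q,s} * (q/(q-s)) * log q ) / log q
    # with C_{q,s} = 1 + (2+s)/log q for general excluded digits b_i.
    C = 1 + (2 + s) / math.log(q)
    return math.log(C * (q / (q - s)) * math.log(q)) / math.log(q)

# The remark after Theorem thrm:ManyDigits: q = 10^8 with s = 10
# excluded digits satisfies the constraint alpha_{q,s} < 1/5.
assert alpha_qs(10**8, 10) < 1/5
```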
\section{Acknowledgments}
The author thanks Ben Green for introducing him to this problem. The author is supported by a Clay research fellowship and a fellowship by examination of Magdalen College, Oxford.
\bibliographystyle{plain}
| {
"timestamp": "2015-10-28T01:04:49",
"yymm": "1510",
"arxiv_id": "1510.07711",
"language": "en",
"url": "https://arxiv.org/abs/1510.07711",
"abstract": "Let $q$ be a sufficiently large integer, and $a_0\\in\\{0,\\dots,q-1\\}$. We show there are infinitely many prime numbers which do not have the digit $a_0$ in their base $q$ expansion. Similar results are obtained for values of a polynomial (satisfying the necessary local conditions) and if multiple digits are excluded.Our proof is based on the Hardy-Littlewood circle method and Fourier analysis of the set of integers with no digit equal to $a_0$ in base $q$.",
"subjects": "Number Theory (math.NT)",
"title": "Primes and polynomials with restricted digits"
} |
https://arxiv.org/abs/2011.10130 | Binary Discrete Fourier Transform and its Inversion | A binary vector of length $N$ has elements that are either 0 or 1. We investigate the question of whether and how a binary vector of known length can be reconstructed from a limited set of its discrete Fourier transform (DFT) coefficients. A priori information that the vector is binary provides a powerful constraint. We prove that a binary vector is uniquely defined by its two complex DFT coefficients (zeroth, which gives the popcount, and first) if $N$ is prime. If $N$ has two prime factors, additional DFT coefficients must be included in the data set to guarantee uniqueness, and we find the number of required coefficients theoretically. One may need to know even more DFT coefficients to guarantee stability of inversion. However, our results indicate that stable inversion can be obtained when the number of known coefficients is about $1/3$ of the total. This entails the effect of super-resolution (the resolution limit is improved by the factor of $\sim 3$). | \section{Introduction}
\label{sec:intro}
Super-resolution in imaging and signal processing is a subject of significant current importance. Fundamentally, the resolution limit is related to experimental inaccessibility of the spatial Fourier harmonics of an object beyond some band limit~\cite{maznev_2017_1}. The resulting image is therefore a low-path filtered version of the object. The classical Abbe limit on the resolution of optical systems is of this nature~\cite[Sec.~13.1.2]{born_book_99}. There exist many more examples of imaging or signal reconstruction problems in which Fourier components of a signal beyond some band limit are lost, corrupted or inaccessible.
One of the most powerful approaches for achieving super-resolution, as well as for image denoising, is based on utilization of prior information~\cite{nasrollahi_2014_1, romano_2017_1}. The latter can be introduced explicitly as a probability density in the Bayesian framework~\cite{watzenig_2007_1}, or implicitly as a regularizing term in a cost function~\cite{engl_2005_1}. In addition to the classical Tikhonov regularization (ridge regression), the developed techniques include nonlinear interpolation~\cite{rajan_2001_1, jiji_2006_1}, Laplacian~\cite{lagendijk_1990_1, liu_2014_1}, total variation~\cite{rudin_1992_1, beck_2009_1}, sparsity-based methods~\cite{tropp_2007_1, blumensath_2012_1, blumensath_2013_1}, and many variants of the above.
Another promising regularization technique is based on the so-called compositional constraints or the $p$-species model. It has been used in a variety of settings including microscopy~\cite{deutsch_2015_1,pfitser_2000_1}, MRI~\cite{liang_2007_1}, diffuse optical tomography~\cite{corlu_2003_1,corlu_2005_1} and electromagnetic tomography~\cite{zhang-ting_2016_1}. The approach relies on the {\em a priori} knowledge that the sample consists of two or more known ``species'', that is, materials whose properties are known. The spatial distribution of the components is however unknown and must be found by solving an inverse problem. In the context of signal processing, the $p$-species model implies that a discrete signal can take only $p$ known values.
In this paper, we examine what is perhaps the simplest and at the same time the most fundamental mathematical question of the $p$-species model. Assume that there are only two species, i.e., the signal can take only two distinct values. How many Fourier coefficients does one need to know to reconstruct the signal precisely? More specifically, let the discrete Fourier transform (DFT) coefficients be labeled by $m$, where $m \in [-M,M]$ and $2M+1=N$ is the total number of the signal samples. We also assume that the signal can take two distinct known values, say, $a$ and $b$. Can we reconstruct the signal precisely if we know the DFT coefficients only within the band limit $m \in [-L,L]$, where $L < M$? What is the smallest value of $L$ for which reconstruction is still possible?
Below, we prove that the inverse problem of reconstructing a binary signal from the band-limited DFT data has a unique solution if $L=1$ and $N$ is prime. Moreover, we derive the minimum value of $L$ that guarantees uniqueness if $N$ has two prime factors. We also discuss stability and computational efficiency of inversion. Thus, in the case of a large prime $N$, $L=1$ is theoretically sufficient to guarantee uniqueness of the inverse solution. However, finding this solution numerically can be difficult or impractical due to high computational complexity and low noise tolerance. The difficulty can be mitigated by including additional DFT coefficients into the data set. As $L$ is increased, stability of the inverse problem is improved and computational complexity is reduced. By stability we mean here a lack of sensitivity of inverse solutions to small changes in the given DFT coefficients. Thus, if inversion is stable, we expect to recover the exact binary vector even if the DFT coefficients on input are imprecise or corrupted by noise. In Supplemental Material, we provide a computational package based on the inversion algorithms described below, and a set of examples in which forward data are rounded off to 8 and 4 significant figures. In all cases we have considered (with both prime and non-prime $N$), reconstruction becomes computationally efficient and stable when $L \gtrsim M/3$. In this case, the data can be further rounded off to 3 or even 2 significant figures without affecting the inverse solution. This corresponds to achieving a three-fold improvement of the resolution limit.
The related previous theoretical work includes investigation of discrete Fourier sampling and recovery with both random~\cite{tropp_2008_1} and deterministic~\cite{moitra_2015_1, bailey_2012_1} sampling. Ref.~\cite{tao_2003_1} introduced an uncertainty principle for prime order cyclic groups, which is related to our uniqueness results. Also relevant are the algebraic ideas that arise in error correcting for channel coding~\cite{ryan_2009_1, oggier_2005_1} and in the investigation of the vanishing sums of the roots of unity~\cite{mann_1965_1, conway_1976_1, lenstra_1978_1, lam_1996_1, lam_2000_1}.
On the inversion and algorithmic side, we are interested in reconstructing the signal precisely and not approximately, and the form of the signal is assumed to be general. In particular, the signal may be more complicated than one or two compact pulses. Under the circumstances, application of the level-set methods~\cite{aravkin_2018_1} is impractical. Also, binarity is related to but not identical to sparsity. A sparse vector has many of its elements equal to zero but the rest can take any (unknown) values. The binary vectors considered here can have about one half of their elements equal to 0 (in the most difficult case) but the rest have the same (known) value of 1. Therefore, we did not investigate sparsity-based algorithms. Further, the inverse problem considered by us can be rephrased as a linear integer programming problem of the form $A{\tt x} = \tilde{\upvarphi}$, where $\tilde{\upvarphi}$ (defined in Section~\ref{sec:inv} below) contains the known DFT coefficients, $A$ is the relevant Fourier matrix, and ${\tt x}$ is the unknown binary vector. As there is no further objective function (just the equality constraint), branch-and-bound techniques do not offer significant improvements, as there is no optimization quantity to bound. The combinatorial algorithm described below is, in fact, an efficient implementation of the branching method in which we sequentially search the subsets of binary vectors separated from an initial guess by a monotonically increasing distance. Without an objective function, bounding improvements must come from analyzing the feasibility of solutions within a branch. In some cases, one can apply cutting-plane methods to restrict the size of the search space~\cite{balas_2003_1}. As we have verified, a direct application of cutting planes to our problem has a marginal effect, but devising more sophisticated definitions of cutting planes is a natural area for improvement.
The problem we are considering is also closely related to the 0-1 knapsack problem and its reduction to the subset sum problem~\cite{martello_2000_1}, which is known to be NP-hard~\cite{karp_1972_1}. There are also many refined approaches for solving linear binary integer programming problems, including dynamic programming~\cite{koiliaris_2019_1, bringmann_2017_1} and probabilistic and approximate solutions~\cite{chan_2018_1, boutsidis_2009_1, howgrave_2010_1}. As a benchmark, we have solved the test inverse problems (included as examples in the computational package) using the standard linear integer programming techniques via MATLAB's mixed-integer linear programming function {\tt intlinprog}. This function was able to recover the three model vectors defined below, but required longer running time than the codes provided in the computational package.
The rest of this paper is organized as follows. In Section~\ref{sec:bin} we introduce the binary DFT, discuss its basic properties and show that it is sufficient to consider the vectors whose elements take only two known values, $0$ and $1$. In Section~\ref{sec:num}, we present several numerical examples that illustrate the theoretical possibility of reconstructing a binary vector from band-limited DFT data. In Section~\ref{sec:uni}, we prove two key uniqueness theorems. In Section~\ref{sec:sta} we discuss stability and in Section~\ref{sec:inv} two algorithms for numerical inversion are introduced and tested. Section~\ref{sec:disc} contains a discussion and conclusions. Supplemental Material contains the complete data set that can be used to reproduce all figures of this paper and a computational package with a detailed user guide and a set of examples. The package implements the inversion methods of Section~\ref{sec:inv}.
Below, the following notation is used: ``gcd'' denotes the greatest common divisor and the symbol ``mod'' denotes congruence of integers, that is, $n \equiv m \pmod k$ if $n-m$ is divisible by $k$.
\section{Binary DFT}
\label{sec:bin}
Let ${\tt v} = (v_1, v_2, \ldots, v_N)$ be a vector of length $N>1$. For simplicity, we assume that $N$ is odd. The DFT of ${\tt v}$ is given by
\begin{subequations}
\label{DFT}
\begin{align}
\label{DFT_for}
\tilde{v}_m = \sum_{n=1}^N v_n e^{\mathrm i \xi m n } \ ,
\end{align}
where
\begin{align}
\label{xi_def}
\xi = \frac{2\pi}{N} \ , \ \ -M \leq m \leq M \ , \ \ M=\frac{N-1}{2} \ .
\end{align}
Note that $\tilde{v}_m$ is defined for any integer $m$ and is periodic so that $\tilde{v}_{m+N} = \tilde{v}_m$. It is therefore sufficient to restrict $m$ to the interval $[-M, M]$. Assuming that we know all Fourier coefficients with indexes in this interval, we can reconstruct ${\tt v}$ by the inverse DFT according to
\begin{align}
\label{DFT_inv}
v_n = \frac{1}{N}\sum_{m=-M}^M \tilde{v}_m e^{- \mathrm i \xi n m } \ , \ \ 1\leq n \leq N \ .
\end{align}
\end{subequations}
We will refer to ${\tt v}$ and $\tilde{\tt v} = (\tilde{v}_{-M}, \tilde{v}_{-M+1}, \ldots, \tilde{v}_M)$ as the real-space and Fourier-space vectors, respectively.
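As a sanity check of the conventions \eqref{DFT_for} and \eqref{DFT_inv}, the following sketch (in Python with NumPy; the function names are ours and are not part of the computational package in Supplemental Material) implements both transforms and verifies that they invert each other:

```python
import numpy as np

def dft(v):
    """Forward DFT with the paper's convention: tilde_v_m = sum_{n=1}^N v_n e^{i*xi*m*n},
    xi = 2*pi/N, returned for m = -M, ..., M with M = (N-1)/2 (N odd)."""
    v = np.asarray(v, dtype=complex)
    N = len(v)
    M = (N - 1) // 2
    xi = 2 * np.pi / N
    n = np.arange(1, N + 1)
    m = np.arange(-M, M + 1)
    return np.exp(1j * xi * np.outer(m, n)) @ v

def idft(tv):
    """Inverse DFT: v_n = (1/N) sum_{m=-M}^M tilde_v_m e^{-i*xi*n*m}."""
    tv = np.asarray(tv, dtype=complex)
    N = len(tv)
    M = (N - 1) // 2
    xi = 2 * np.pi / N
    n = np.arange(1, N + 1)
    m = np.arange(-M, M + 1)
    return np.exp(-1j * xi * np.outer(n, m)) @ tv / N

v = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0])   # arbitrary test vector, N = 7
assert np.allclose(idft(dft(v)).real, v)
```

Note that the $m=0$ coefficient occupies index $M$ of the returned array, consistent with the ordering $m=-M,\ldots,M$.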
The questions we wish to address are the following. Assume that we know only some of the Fourier coefficients $\tilde{v}_m$, namely, those with the indexes bounded as $|m| \leq L \leq M$, and also that $v_n$ can take only two distinct, {\em a priori} known values, say, $a$ and $b$. What is the smallest value of $L$ for which we can reconstruct the whole vector ${\tt v}$ uniquely from the Fourier data? For which values of $L$ is the reconstruction numerically stable? Finally, we wish to develop a computational algorithm for the reconstruction.
We can simplify the problem by writing
\begin{align}
\label{yn_xn}
v_n = a + (b-a)x_n \ ,
\end{align}
where $x_n$ can take only two values, $0$ and $1$. We will say that the vector ${\tt x} = (x_1, x_2, \ldots, x_N)$ is {\em binary}. Substituting \eqref{yn_xn} into \eqref{DFT_for}, we obtain:
\begin{align}
\label{DFT_for_ab}
\tilde{v}_m = a N\delta_{m0} + (b-a) \tilde{x}_m \ ,
\end{align}
where $\delta_{ml}$ is the Kronecker delta-symbol and $\tilde{x}_m$ are defined in terms of $x_n$ analogously to the DFT convention \eqref{DFT_for}.
Equation \eqref{DFT_for_ab} can be inverted to yield the relation
\begin{align}
\label{txm_tym}
\tilde{x}_m = \frac{\tilde{v}_m - a N \delta_{m0}}{b-a} \ .
\end{align}
Therefore, if $\tilde{v}_m$ is known, then $\tilde{x}_m$ is also known. We thus see that the inverse problem of finding ${\tt v}$ from $\tilde{\tt v}$ is mapped onto the problem of finding a binary vector ${\tt x}$ from its DFT $\tilde{\tt x}$.
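The relations \eqref{DFT_for_ab} and \eqref{txm_tym} can be checked numerically; in the following sketch (Python/NumPy; the helper name is ours), the DFT of $v_n = a + (b-a)x_n$ is compared term-by-term with $aN\delta_{m0} + (b-a)\tilde{x}_m$:

```python
import numpy as np

def dft_m(v, m):
    """Single DFT coefficient under the paper's convention (1-based index n)."""
    N = len(v)
    n = np.arange(1, N + 1)
    return np.sum(np.asarray(v) * np.exp(2j * np.pi * m * n / N))

N, a, b = 9, -2.0, 5.0
x = np.array([1, 0, 0, 1, 1, 0, 0, 1, 0], dtype=float)   # an arbitrary binary vector
v = a + (b - a) * x                                       # v_n = a + (b - a) * x_n
for m in range(-(N - 1) // 2, (N - 1) // 2 + 1):
    lhs = dft_m(v, m)
    rhs = a * N * (m == 0) + (b - a) * dft_m(x, m)        # aN*delta_{m0} + (b-a)*tilde_x_m
    assert abs(lhs - rhs) < 1e-9
```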
Therefore, we are interested in reconstructing the full binary vector ${\tt x}$ from a limited set of coefficients $\tilde{x}_m$ with $|m| \leq L$. If $L=0$, the only information about ${\tt x}$ that is present in the data is the popcount (the total number of 1s in ${\tt x}$). Reconstructing ${\tt x}$ with only this information is obviously impossible. However, the popcount is an important constraint on the possible solutions. In what follows, we assume that the popcount
\begin{align}
\label{r_def}
r \equiv \tilde{x}_0 = \sum_{n=1}^N x_n
\end{align}
is known with a high degree of confidence, so that \eqref{r_def} can be viewed as a hard constraint on the possible inverse solutions.
We denote the set of all vectors ${\tt x}$ of length $N$ containing $r$ 1s by $\Omega(N,r)$. The size of this set is
\begin{align}
\label{S_Omega_def}
S[\Omega(N,r)] = \frac{N!}{r!(N-r)!} \ .
\end{align}
It is sufficient to consider $r$ in the range
\begin{align}
\label{r_range}
0 < r \leq M = \frac{N-1}{2}
\end{align}
because the sets $\Omega(N,r)$ and $\Omega(N,N-r)$ can be obtained from each other by the substitution $0 \leftrightarrow 1$. Therefore, the problems with $r=q$ and $r=N-q$ are mathematically identical. In what follows, we assume that $r$ is in the range \eqref{r_range}. Note that $r=0$ is a technically possible but trivial case since $r=0$ implies that all elements of ${\tt x}$ are 0s. Therefore, we exclude this possibility in \eqref{r_range}.
The Fourier-space distance between any two vectors ${\mathtt x},{\mathtt y} \in \Omega(N,r)$, band-limited to $L$, is defined as
\begin{align}
\label{chi_def}
\chi_p({\tt x}, {\tt y}; L) = \left[ \frac{1}{L}\sum_{m=1}^L \left\vert \tilde{x}_m - \tilde{y}_m \right\vert^p\right]^\frac{1}{p} \ , \ \ p\ge 1 \ .
\end{align}
Note that the term with $m=0$ is excluded from the summation. The real-space distance between ${\tt x}, {\tt y} \in \Omega(N,r)$ is defined as
\begin{align}
\label{d_def}
d({\tt x}, {\tt y}) = \frac{1}{2}\sum_{n=1}^N |x_n - y_n| \ .
\end{align}
If the distance between two vectors in $\Omega(N,r)$ is $d$, one can be obtained from the other by $d$ pair-wise switches of 0s and 1s. The possible values of $d$ are in the interval $0 \leq d \leq r$ assuming $r$ is in the range \eqref{r_range}. If $L=M$, the invertibility of DFT implies that the statements $\chi_p({\tt x}, {\tt y}; M) = 0$, $d({\tt x}, {\tt y}) = 0$ and ${\tt x} = {\tt y}$ are equivalent. However, if $L<M$, we do not generally know whether there exist pairs of distinct vectors ${\tt x} \neq {\tt y}$ for which $\chi_p({\tt x}, {\tt y}; L) = 0$.
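The two distances \eqref{chi_def} and \eqref{d_def} can be sketched as follows (Python/NumPy; the helper names are ours and not those of the computational package):

```python
import numpy as np

def dft_m(x, m):
    """Single DFT coefficient under the paper's convention (1-based index n)."""
    N = len(x)
    n = np.arange(1, N + 1)
    return np.sum(np.asarray(x) * np.exp(2j * np.pi * m * n / N))

def chi(x, y, L, p=2):
    """Band-limited Fourier-space distance chi_p(x, y; L); the m = 0 term is excluded."""
    s = sum(abs(dft_m(x, m) - dft_m(y, m)) ** p for m in range(1, L + 1))
    return (s / L) ** (1.0 / p)

def d(x, y):
    """Real-space distance: the number of pairwise 0/1 switches between x and y."""
    return int(np.sum(np.abs(np.asarray(x) - np.asarray(y)))) // 2

x = [1, 0, 0, 1, 0]          # both vectors are in Omega(5, 2)
y = [0, 1, 0, 1, 0]
assert d(x, y) == 1          # one pairwise switch of a 0 and a 1
assert chi(x, x, L=2) < 1e-12 and chi(x, y, L=2) > 0.1
```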
Clearly, if $r=1$, it is sufficient to know only one additional Fourier coefficient, say, $\tilde{x}_1$. The inverse problem is then reduced to finding the position $\nu$ where the single 1 is located. We can use the equation $\tilde{x}_1 = \exp(\mathrm i \xi \nu)$ to find $\nu$. If $\tilde{x}_1$ is in the range of the forward operator, then the above equation has a unique integer solution in the interval $[1,N]$. If $\tilde{x}_1$ is not in the range, then the equation has no integer solutions. One can still find the integer $\nu$ that minimizes the error $ |\tilde{x}_1 - \exp(\mathrm i \xi \nu)|$. The case $r=2$ is also easy to analyze. The problem becomes difficult when $r \sim N/2$ and $N\gg 1$ so that $S[\Omega(N,r)]$ is combinatorially large. The remainder of this paper is largely focused on this more difficult case.
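For $r=1$, the recovery just described amounts to reading $\nu$ off the phase of $\tilde{x}_1$; a minimal sketch (Python/NumPy; the function name is ours), which also handles out-of-range data by rounding to the nearest integer:

```python
import numpy as np

def recover_single_one(t1, N):
    """For a vector with a single 1 at position nu, tilde_x_1 = exp(i*xi*nu).
    Return the integer nu in [1, N] minimizing |t1 - exp(i*xi*nu)|."""
    xi = 2 * np.pi / N
    nu = int(round(np.angle(t1) / xi)) % N
    return nu if nu != 0 else N

N = 31
t1 = np.exp(2j * np.pi * 17 / N)
assert recover_single_one(t1, N) == 17
# Mildly perturbed (out-of-range) data still yield the correct position:
assert recover_single_one(t1 * np.exp(0.05j), N) == 17
```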
\section{Numerical examples}
\label{sec:num}
As a first step, we can investigate the problem numerically. To this end, we have considered the following two model vectors ${\tt x}_{\rm mod}$:
\begin{align*}
&(a) \ N=31, \ r=15, \ S[\Omega(N,r)] = 300,540,195 \\
&{\tt x}_{\tt mod} = (1 0 0 1 0 1 1 0 0 0 0 1 1 1 0 1 1 0 1 1 0 0 0 1 1 0 1 0 1 0 0)
\end{align*}
and
\begin{align*}
&(b) \ N=33, \ r=16, \ S[\Omega(N,r)] = 1,166,803,110 \\
&{\tt x}_{\tt mod} = (1 0 0 1 0 0 1 1 0 0 0 1 1 0 0 1 1 1 0 0 1 0 1 0 1 0 0 1 1 0 1 1 0)
\end{align*}
For these values of $N$, all elements of $\Omega(N,r)$ can be constructed explicitly on a computer.
\begin{figure}
\centering
\includegraphics[]{Fig_1}
\caption{\label{fig:1} Fourier-space distances $\chi_2({\tt x}, {\tt x}_{\tt mod}; L)$ between various ${\tt x} \in \Omega(N,r)$ and a model vector ${\tt x}_{\rm mod}$ with the same $N$ and $r$. The left and right columns correspond to the models (a) and (b), which are defined in the text. Different rows correspond to different values of $L$. All data points that fit the vertical scale of each plot are shown. Projections onto the horizontal axis are the sequential numbers of the data points and have no other significance. Due to the finite size of the dots that are used to represent the data points, it may appear that one sequential number (a projection onto the horizontal axis) corresponds to more than one dot; in fact, this is not so. Large blue dots mark exact zeros. Note that, for $L=1$, $\chi_p$ is independent of $p$.}
\end{figure}
In Fig.~\ref{fig:1}, we display the quantities $\chi_2({\tt x}, {\tt x}_{\tt mod}; L)$ (below some thresholds) for all ${\tt x} \in \Omega(N,r)$, various $L$, and the two model vectors ${\tt x}_{\tt mod}$ defined above. It can be seen that, for $N=31$ and all $L$ considered, there exists only one ${\tt x} \in \Omega(N,r)$ for which $\chi_2({\tt x}, {\tt x}_{\tt mod}; L) = 0$. This ${\tt x}$ is the true solution, that is, it is identical to ${\tt x}_{\tt mod}$. This result implies that the knowledge of just the first two Fourier coefficients $\tilde{x}_0=r$ and $\tilde{x}_1$ suffices to find the whole ${\tt x}_{\tt mod}$. This result is surprisingly strong but, as discussed below, not unexpected. One obvious observation is that $31$ is a prime number. We will show that uniqueness of inverse solutions with $L=1$ is a general property of all prime $N$.
In the case $N=33$ (not a prime), there are three distinct vectors ${\tt x}$ with $\chi_2({\tt x}, {\tt x}_{\tt mod};1) = 0$; only one of them is equal to ${\tt x}_{\rm mod}$. The other two are false solutions. However, uniqueness of the inverse solution is restored by selecting $L\ge 3$ ($L=2$ is still insufficient). In the next section, the theoretical reasons why $L=3$ provides the unique solution in this case will be given.
\section{Uniqueness of inverse solutions}
\label{sec:uni}
We now fix a few definitions and prove two key uniqueness theorems.
\begin{definition}
\label{df:1}
We say that two vectors ${\tt x}, {\tt y} \in \Omega(N,r)$ are $L$-distinguishable if $\chi_p({\tt x}, {\tt y}; L) > 0$, where $1 \leq L \leq M$, and $L$-indistinguishable otherwise. If this property holds for some $p$, it holds for all $p \ge 1$, including the formal limit $p=\infty$. To generalize the definition, we say that vectors of the same length $N$ but with different popcounts $r$ are $0$-distinguishable~\footnote{Such two vectors do not belong to the same set $\Omega(N,r)$.}. All vectors in $\Omega(N,r)$ have the same popcount and are therefore $0$-indistinguishable. If two vectors are $L$-distinguishable, they are also $L'$-distinguishable for any $L'>L$.
\end{definition}
\begin{definition}
\label{df:2}
We use the acronym IP($N,r,L$) to denote the inverse problem of reconstructing a generic binary vector ${\tt x}$ of known length $N$ and popcount $r$ from the set of its DFT coefficients $\tilde{x}_m$ with $1 \leq m \leq L$ (the coefficient $\tilde{x}_0=r$ with $m=0$ is already included in the data set). Since $\tilde{x}_{-m} = \tilde{x}_m^*$, coefficients with negative indexes do not provide additional information and are not included in the data set. In this paper, we consider only odd $N$. The possible values of $r$ in IP($N,r,L$) are defined by the inequality \eqref{r_range}.
\end{definition}
\begin{definition}
\label{df:3}
We say that IP($N,r,L$) is uniquely solvable if all pairs of distinct vectors ${\tt x}, {\tt y} \in \Omega(N,r)$ are $L$-distinguishable.
\end{definition}
\begin{definition}
\label{df:4}
We say that a binary vector ${\tt x} \in \Omega(N,r)$ contains a regular polygon if there exist integers $m$ and $k$, where $k>1$ divides $N$, such that $x_n=1$ for all $n \equiv m \pmod{N/k}$. There are exactly $k$ such indices $n$ in $[1,N]$; we call $k$ the order of the polygon and refer to such a polygon as a $k$-gon. Similarly, we say that ${\tt x}$ contains an empty regular polygon if $x_n=0$ for all $n \equiv m \pmod{N/k}$. We say that ${\tt x}$ contains a pair of regular polygons if ${\tt x}$ contains both a regular polygon and an empty regular polygon of the same order. Regular polygons are disjoint in ${\tt x}$ if they do not share any indices $n$.
\end{definition}
\begin{theorem}
\label{th:1}
IP($N,r,1$) is uniquely solvable for all $r$ in the interval \eqref{r_range} if $N$ is prime.
\end{theorem}
\begin{proof}
Theorem~\ref{th:1} is proved by showing that all vectors in $\Omega(N,r)$ are pairwise 1-distinguishable for $N$ prime. Consider two distinct vectors ${\tt x}, {\tt y} \in \Omega(N,r)$ and suppose that ${\tt x}$ and ${\tt y}$ are $1$-indistinguishable. Let ${\tt z} = {\tt x} - {\tt y}$ with $z_n$ ($n=1,2,\ldots N$) denoting the real-space components of ${\tt z}$ and $\tilde{z}_m$ denoting its DFT coefficients defined according to the convention \eqref{DFT_for}. Note that ${\tt z}$ is not a binary vector as its entries can take three possible values: $0$ and $\pm 1$. Therefore, ${\tt z} \notin \Omega(N,r)$. Since we have assumed that ${\tt x}$ and ${\tt y}$ are $1$-indistinguishable, the following equality must hold:
\begin{align}
\label{z1_thm1}
0 = \tilde{z}_1 = \sum_{n=1}^N z_n e^{\mathrm i \xi n} \ .
\end{align}
Since $N$ is prime, the $N-1$ exponential factors $e^{\mathrm i \xi n}$ with $1 \leq n \leq N-1$ (excluding the term $e^{\mathrm i \xi N}=1$) form the complete set of primitive $N$-th roots of unity~\cite{ireland_book_1990}. Therefore, the $N$-th cyclotomic polynomial~\cite{ireland_book_1990} has the form
\begin{subequations}
\begin{align}
\label{cyclotomic}
\Phi_N(x) = \prod_{n=1}^{N-1} (x - e^{\mathrm i\xi n}) = \sum_{n=0}^{N-1}x^n \ .
\end{align}
By Eisenstein's criterion (applied to the shifted polynomial $\Phi_N(x+1)$), this polynomial is irreducible over the rationals~\cite{ireland_book_1990}. We also introduce the polynomial~\footnote{Note that the coefficient indexes in \eqref{g_def} are correct. An alternative way to define $g(x)$ with the same properties is $g(x)= z_1 + z_2 x + \cdots + z_N x^{N-1}$.}
\begin{align}
\label{g_def}
g(x) = z_N + z_1 x + z_2 x^2 + \cdots + z_{N-1}x^{N-1} \ .
\end{align}
\end{subequations}
Assuming that \eqref{z1_thm1} holds, $g(x)$ has a root at $e^{\mathrm i\xi}$. Since $\Phi_N(x)$ is irreducible over the rationals, it is the minimal polynomial of $e^{\mathrm i\xi}$, and therefore $\Phi_N(x)$ divides $g(x)$. As $\deg g \leq N-1 = \deg \Phi_N$, the two polynomials can differ only by a constant factor, that is, $g(x) = C\,\Phi_N(x)$. This implies that $z_n = C$ for all $n$. Recall that $z_n$ can take only the values $0$ and $\pm 1$. If $C=\pm 1$, then ${\tt x}$ is a vector of all 1s and ${\tt y}$ is a vector of all 0s (or vice versa), which violates the assumption that they are both in $\Omega(N,r)$. Hence the only possibility is $C=0$, so that $z_n=0$ for all $n$, which is equivalent to ${\tt x}={\tt y}$ and contradicts the initial assumption that ${\tt x}$ and ${\tt y}$ are distinct.
\end{proof}
Theorem~\ref{th:1} implies that, at least theoretically, any binary vector ${\tt x}$ of a prime length $N$ can be uniquely recovered from its two Fourier coefficients $\tilde{x}_0 = r$ and $\tilde{x}_1$. It does not imply that this problem can always be solved in a numerically stable manner. Stability of inversion is discussed in Section~\ref{sec:sta} below.
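Theorem~\ref{th:1} can be confirmed exhaustively for small $N$. The following brute-force sketch (Python/NumPy; our own helper, feasible only when $S[\Omega(N,r)]$ is small) checks that $\tilde{x}_1$ separates all of $\Omega(N,r)$ when $N$ is prime, and fails to do so for a composite $N$:

```python
import numpy as np
from itertools import combinations

def t1(x):
    """First DFT coefficient tilde_x_1 under the paper's convention."""
    N = len(x)
    n = np.arange(1, N + 1)
    return np.sum(np.asarray(x) * np.exp(2j * np.pi * n / N))

def all_1_distinguishable(N, r, tol=1e-9):
    """True iff all vectors in Omega(N, r) have pairwise distinct tilde_x_1."""
    coeffs = []
    for ones in combinations(range(N), r):
        x = np.zeros(N)
        x[list(ones)] = 1
        coeffs.append(t1(x))
    coeffs = np.array(coeffs)
    for i in range(len(coeffs) - 1):
        if np.any(np.abs(coeffs[i + 1:] - coeffs[i]) < tol):
            return False
    return True

assert all_1_distinguishable(7, 3)        # N = 7 is prime: IP(7, 3, 1) is uniquely solvable
assert not all_1_distinguishable(9, 3)    # N = 9 = 3*3: pairs of 3-gons create false solutions
```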
When $N$ is not prime, we do not have such a strong statement of uniqueness. However, if $N$ has two prime factors, we can characterize the false solutions and derive the sufficient conditions of uniqueness assuming that some additional DFT coefficients are known. The following Lemma establishes the necessary and sufficient condition for two binary vectors in $\Omega(N,r)$ to be $1$-distinguishable.
\begin{lemma}
\label{lm:1}
Let $N = pq$ where $1 < p \leq q$ are primes (not necessarily distinct). Let ${\tt x} \in \Omega(N,r)$. Then ${\tt x}$ is $1$-indistinguishable from some other (different from ${\tt x}$) vector(s) in $\Omega(N,r)$ if and only if ${\tt x}$ contains at least one pair of $p$- or $q$-gons (a $p$-gon is a regular polygon with $p$ vertices) according to Definition~\ref{df:4}. Equivalently, ${\tt x}$ is $1$-distinguishable from all other vectors in $\Omega(N,r)$ if and only if it does not contain any pairs of $p$- or $q$-gons.
\end{lemma}
The proof of Lemma~\ref{lm:1} is given in Appendix~\ref{app:A}. It relies on the algebraic ideas that were used to study the vanishing sums of the roots of unity~\cite{mann_1965_1, conway_1976_1, lenstra_1978_1, lam_1996_1, lam_2000_1}. Geometrically, the proof states that the only way ${\tt x}$ and ${\tt y}$ in $\Omega(N,r)$ can be $1$-indistinguishable is if they agree element-wise except for the locations that make up an equivalent number of regular $p$- or $q$-gons. This is illustrated in Fig.~\ref{fig:2}.
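The polygon-swapping mechanism can be made concrete in a small example of our own with $N=15=3\cdot 5$ (not one of the cases shown in the figures): moving the 1s of a full $3$-gon onto an empty $3$-gon of the same order produces a distinct vector with identical DFT coefficients for $m=1,2$, while the coefficient $m=3$ tells the two apart, in agreement with Lemma~\ref{lm:2} below.

```python
import numpy as np

N = 15                                   # N = p*q with p = 3, q = 5
n = np.arange(1, N + 1)

def tm(v, m):
    """m-th DFT coefficient under the paper's convention."""
    return np.sum(v * np.exp(2j * np.pi * m * n / N))

x = np.zeros(N); x[[0, 5, 10]] = 1       # full 3-gon at positions 1, 6, 11 (spacing N/3 = 5)
y = np.zeros(N); y[[1, 6, 11]] = 1       # the same 1s moved to the empty 3-gon at 2, 7, 12

assert abs(tm(x, 1) - tm(y, 1)) < 1e-12  # 1-indistinguishable: each 3-gon sums to zero
assert abs(tm(x, 2) - tm(y, 2)) < 1e-12  # still indistinguishable at m = 2
assert abs(tm(x, 3) - tm(y, 3)) > 1.0    # 3-distinguishable: each 3-gon collapses to a point
```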
\begin{figure}
\centering
\includegraphics[]{Fig_2.pdf}
\caption{\label{fig:2} Geometrical illustration of Lemma~\ref{lm:1} for the model vector (b) with $N=33$. Plotted are the points $e^{\mathrm i\xi n}$ on the unit circle. The small red and large blue dots are obtained for the values of $n$ such that $x_n=1$ and $x_n=0$, respectively. There are two $3$-gons contained in the model vector (b), which are shown by thin red lines. There is also one empty $3$-gon shown by thick blue lines. This gives two ways in which a pair of polygons can be formed; each distinct pair results in a distinct false solution. The two false solutions shown in Fig.~\ref{fig:1} are obtained by switching 1s in the positions corresponding to the vertices of one of the red $3$-gons with 0s in the positions corresponding to the vertices of the blue (empty) $3$-gon.}
\end{figure}
\begin{figure*}
\centering
\includegraphics[]{Fig_3}
\caption{\label{fig:3} Geometrical illustration of Lemma~\ref{lm:2}. Let ${\tt x}$ be given by the model vector (b) ($N=33,r=16$) and let ${\tt y}$ be one of the two vectors that are distinct but $1$-indistinguishable from ${\tt x}$ (we have chosen for ${\tt y}$ the specific false solution obtained by switching 1s and 0s in the pair of $3$-gons shown in Fig.~\ref{fig:2} that are geometrically farther apart). Let, as in the proof of Theorem~\ref{th:1}, ${\tt z}={\tt x}-{\tt y}$. Plotted are the terms that enter the definition \eqref{DFT_for} of $\tilde{z}_1$ (a), $\tilde{z}_2$ (b), and $\tilde{z}_3$ (c). Thus, Panel (a) displays the terms $z_ne^{\mathrm i\xi n}$ for $1\leq n\leq 33$. The terms with such $n$ that $z_n=0$, $z_n=1$ and $z_n=-1$ are represented by small black dots, intermediate-size red dots, and large blue dots, respectively. Panel (b) displays the terms $z_ne^{\mathrm i \xi 2 n}$ according to the same color convention. The additional factor of $2$ in the exponent results in a permutation of the dots that are displayed in Panel (a); however, the $3$-gons are preserved. Note that the total number of dots in Panels (a) and (b) is the same and equal to $33$. It is clear that $\tilde{z}_1=\tilde{z}_2=0$ since regular $3$-gons sum to zero. In Panel (c), the terms $z_ne^{\mathrm i \xi 3 n}$ are shown. For $1\leq n \leq 33$, the above terms take only $11$ distinct values so that each displayed dot corresponds to a sum of three terms with different $n$. However, only dots of the same color (same value of $z_n$) can overlap in this example. It can be seen that each $3$-gon of Panel (a) collapses to a single point in Panel (c). Therefore, $\tilde{z}_3 \neq 0$ implying that ${\tt x}$ and ${\tt y}$ are $3$-distinguishable.}
\end{figure*}
While Lemma~\ref{lm:1} establishes the necessary and sufficient condition for two distinct vectors in $\Omega(N,r)$ to be $1$-indistinguishable (in the two prime factors case), the following Lemma provides a potential remedy to the non-uniqueness. Specifically, it tells us how many additional DFT coefficients must be included in the data set to guarantee uniqueness.
\begin{lemma}
\label{lm:2}
Let $N=pq$ with the same conditions on $p,q$ as in Lemma~\ref{lm:1}, and let ${\tt x} \in \Omega(N,r)$. Let $L$ be the smallest integer for which ${\tt x}$ contains a pair of $L$-gons, which are not subsets of any larger-order pair of regular polygons. Then ${\tt x}$ is $L$-distinguishable from all other vectors in $\Omega(N,r)$. Moreover, $L$ is the smallest value of $k$ for which ${\tt x}$ is $k$-distinguishable from all other vectors in $\Omega(N,r)$.
\end{lemma}
The proof of Lemma~\ref{lm:2} is given in Appendix~\ref{app:B}. The geometric concept behind this proof is illustrated in Fig.~\ref{fig:3}. Lemma~\ref{lm:2} implies that, if $p\neq q$, then all vectors in $\Omega(N,r)$ are $q$-distinguishable. For $N = p^2$, all vectors are $p$-distinguishable. Considering again the case $N=33$, $r=16$, the strongest general guarantee is that all vectors in $\Omega(33,16)$ are $11$-distinguishable. This implies that one must use $L \ge 11$ to guarantee uniqueness of IP($33,r,L$) for any $r \ge 11$. However, the model vector (b) does not contain any $11$-gons. Therefore, it can be recovered with $L=3$; using $L=11$ would be an overkill. We note that vectors that contain regular $11$-gons make up a small subset of $\Omega(33,16)$ (they are statistically rare). Therefore, using $L=3$ for $N=33,r=16$ entails a relatively small risk of running into a false solution. Moreover, Lemma~\ref{lm:2} tells us exactly what form these vectors, and the corresponding false solutions, take. The question of {\em statistically reliable} invertibility -- that is, accepting a small risk of finding a false solution -- is addressed more systematically in Section~\ref{sec:sta}.
In the absence of any {\it a priori} knowledge about the true solution apart from what is given in the data, we can state the following sufficient condition for uniqueness of IP($N,r,L$).
\begin{theorem}
\label{th:2}
Let $N=pq$ with the same conditions on $p,q$ as in Lemma~\ref{lm:1}, $r\leq M$, and
\begin{align*}
L_0 = \max \left\{ k = p^{\ell} q^{m} \ : \ \ell, m \in \{0,1\}, \ k \leq r \right\} \ .
\end{align*}
Then IP($N,r,L)$ is uniquely solvable for any $L \ge L_0$.
\end{theorem}
\begin{proof}
Theorem~\ref{th:2} is a direct consequence of Lemma~\ref{lm:2}. A vector ${\tt x} \in \Omega(N,r)$ must have at least $L$ 1s to contain an $L$-gon, which implies that $L \leq r$. Then, by definition, $L_0$ is the largest order of polygons that can be formed in ${\tt x}$. Hence, by Lemma~\ref{lm:2}, any ${\tt x} \in \Omega(N,r)$ is $L_0$-distinguishable from all other vectors in $\Omega(N,r)$. Therefore, IP$(N,r,L)$ is uniquely solvable for any $L \ge L_0$.
\end{proof}
To illustrate Theorem~\ref{th:2}, consider the case $N=143=11 \cdot 13$. If $r<11$, then we have $L_0=1$. If $11 \leq r < 13$, then $L_0=11$, and if $13 \leq r \leq M = 71$, then $L_0=13$. Thus, $L=13$ guarantees uniqueness of the inverse solution for any binary vector of length $N=143$.
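The bound $L_0$ of Theorem~\ref{th:2} is easy to tabulate; a sketch (Python; the function name is ours) reproducing the $N=143$ example above:

```python
def L0(p, q, r):
    """Sufficient band limit of Theorem 2 for N = p*q: the largest
    k = p^l * q^m with l, m in {0, 1} not exceeding r."""
    candidates = [p**l * q**m for l in (0, 1) for m in (0, 1)]
    return max(k for k in candidates if k <= r)

# N = 143 = 11 * 13, as discussed in the text:
assert L0(11, 13, 5) == 1     # r < 11
assert L0(11, 13, 12) == 11   # 11 <= r < 13
assert L0(11, 13, 71) == 13   # 13 <= r <= M = 71 (p*q = 143 exceeds r)
```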
\begin{remark}
\label{rm:1}
Even if IP($N,r,L$) is not uniquely solvable, some vectors in $\Omega(N,r)$ can be uniquely recovered from the knowledge of $\tilde{x}_m$ with $m=1,2,\ldots, L$. For example, the following vector
\begin{align*}
&(c) \ N=35, \ r=17, \ S[\Omega(N,r)] = 4,537,567,650 \\
&{\tt x} = (1 0 0 1 0 1 1 0 0 0 0 1 1 1 1 0 1 1 0 0 0 1 1 0 1 0 1 0 0 1 0 0 0 1 1)
\end{align*}
is uniquely recoverable with $L=1$ even though IP($35,17,1$) is not uniquely solvable. The reason is that ${\tt x}$ does not contain any pairs of 5- or 7-gons.
\end{remark}
\begin{remark}
\label{rm:2}
Similar but more complicated results hold for the case when $N$ has two prime divisors, i.e., $N=p^\alpha q^\beta$ and the integers $\alpha,\beta$ can be larger than $1$. Here the difficulty grows from the possibility of an intricate overlapping of different polygon pairs. The case when $N$ has three or more prime divisors is even more difficult to analyze due to the existence of the so-called asymmetrical minimal vanishing sums of $N$-th roots of unity~\cite{conway_1976_1,lam_2000_1}.
\end{remark}
\section{Stability of inversion}
\label{sec:sta}
Even when an inverse problem IP($N,r,L)$ is uniquely solvable, it is not clear whether finding the solution is a numerically stable procedure. Consider the data points in Fig.~\ref{fig:1} for $N=31$, $r=15$ and $L=1$. The large (blue) dot corresponds to the true solution ${\tt x}_{\tt mod}$ and it is the only vector in $\Omega(31,15)$ with $\chi_2({\tt x}, {\tt x}_{\tt mod};1) = 0$, so that the inverse solution is unique. However, the Fourier-space distance between the first runner-up to the true solution (let us call it ${\tt y}$) and the model is quite small: $\chi_2({\tt y}, {\tt x}_{\tt mod}; 1) \approx 0.0002$. On the other hand, the real-space distance between ${\tt y}$ and ${\tt x}_{\tt mod}$ is not small: $d({\tt y}, {\tt x}_{\rm mod}) = 10$. In other words, $10$ out of $15$ 1s in ${\tt y}$ are in the wrong places. This is an obvious sign of instability. A small change in the DFT data can result in a large change of the inverse solution.
In this section, we investigate the stability of IP($N,r,L$) numerically. To this end, we have selected some values of $N$ and generated sets of random vectors ${\tt x}_j \in \Omega(N,r)$ for $r=2,3,\ldots M$ and $j=1,2,\ldots J$, where $J$ was chosen to be sufficiently large to obtain statistically significant results. The vectors ${\tt x}_j$ were generated as follows. For a particular random realization, $r$ 1s were randomly placed into $N$ possible positions. Repetitions (identical random realizations) were allowed but occurred very rarely. The number of random realizations in a set, $J$, depended on $r$ and $N$ and varied from $10^4$ to $10^2$ in the most difficult cases such as $N=35$, $r=17$.
Then, for each ${\tt x}_j$, we have computed the Fourier-space distances~\footnote{In this section, we rely on the $L_2$ norm.} $\chi_2({\tt x}_j, {\tt y}; L)$ to {\em all} vectors ${\tt y} \in \Omega(N,r)$ and the minimum distance between ${\tt x}_j$ and any ${\tt y}$ that is not equal to ${\tt x}_j$. The latter quantity can be formally defined as
\begin{align}
\label{kappa_def}
\kappa({\tt x}_j;L) = \min_{{\tt y}, {\tt y}\neq {\tt x}_j} \chi_2({\tt x}_j, {\tt y}; L) \ , \ \ {\tt x}_j,{\tt y} \in \Omega(N,r) \ .
\end{align}
Note that $\kappa({\tt x}_j; L)$ and its averages defined below in \eqref{kappa_sigma_av} depend implicitly on $N$ and $r$. If $\kappa({\tt x}_j; L) = 0$, the vector ${\tt x}_j$ is not uniquely recoverable with the particular value of $L$. If $\kappa({\tt x}_j; L) > 0$ but is in some sense small, then ${\tt x}_j$ is uniquely recoverable with the given $L$ but the inverse solution is numerically unstable. Generally, as $\kappa({\tt x}_j; L)$ is increased, it becomes easier to recover ${\tt x}_j$, and the precision requirements on the DFT data become less stringent.
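The quantity \eqref{kappa_def} can be evaluated by brute force for small $N$; a sketch (Python/NumPy; our own helper, not the package implementation):

```python
import numpy as np
from itertools import combinations

def kappa(x, L):
    """Minimum band-limited chi_2 distance from x to any other vector in
    Omega(N, r); exhaustive search, feasible only for small N."""
    x = np.asarray(x, dtype=float)
    N, r = len(x), int(x.sum())
    n = np.arange(1, N + 1)

    def coeffs(v):
        return np.array([np.sum(v * np.exp(2j * np.pi * m * n / N))
                         for m in range(1, L + 1)])

    cx = coeffs(x)
    best = np.inf
    for ones in combinations(range(N), r):
        y = np.zeros(N)
        y[list(ones)] = 1
        if np.array_equal(y, x):
            continue
        best = min(best, np.sqrt(np.sum(np.abs(coeffs(y) - cx) ** 2) / L))
    return best

# N = 9 with a pair of 3-gons: a false solution exists, so kappa vanishes.
assert kappa([1, 0, 0, 1, 0, 0, 1, 0, 0], L=1) < 1e-9
# N = 7 (prime): kappa is strictly positive for every vector (Theorem 1).
assert kappa([1, 1, 0, 1, 0, 0, 0], L=1) > 1e-5
```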
\begin{figure*}
\centering
\includegraphics[]{Fig_4}
\caption{\label{fig:4} Averages $\langle \kappa(L) \rangle$ as functions of $r$ for $N=31,33,35$ and $L$ from $1$ to $5$. Error bars are shown at the level of one standard deviation, $\sigma(L)$, as defined in~\eqref{kappa_sigma_av}. The vertical axes in all plots are logarithmic. The trivial case $r=1$ is not shown.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[]{Fig_5}
\caption{\label{fig:5} Cumulative probabilities $P(\epsilon;1)$ for $\epsilon$ as labeled and $N=31$ (a), $N=33$ (b), and $N=35$ (c) as functions of $r$. The trivial case $r=1$ is not shown.}
\end{figure*}
We have also computed the average and the standard deviation of $\kappa({\tt x}_j;L)$ according to
\begin{subequations}
\label{kappa_sigma_av}
\begin{align}
\label{kappa_av}
& \langle \kappa(L) \rangle = \frac{1}{J} \sum_{j=1}^J \kappa({\tt x}_j; L) \ , \\
\label{kappa_2_av}
& \langle \kappa^2(L) \rangle = \frac{1}{J} \sum_{j=1}^J \kappa^2({\tt x}_j; L) \ , \\
\label{sigma}
& \sigma^2(L) = \langle \kappa^2(L) \rangle - \langle \kappa(L) \rangle^2 \ .
\end{align}
\end{subequations}
These quantities are illustrated in Fig.~\ref{fig:4} for $N=31,\ 33,\ 35$ and $L$ from $1$ to $5$. The data displayed in this figure convey how ``easy'' it is to reconstruct a vector from $\Omega(N,r)$. For example, consider the case $N=35$, $L=3$ and $r=15$. We have for these parameters $\langle \kappa \rangle\approx 0.1$ and $\sigma\approx 0.03$. This means that most vectors in $\Omega(35,15)$ (those within the $\pm 2\sigma$-interval of the statistical distribution) have a Fourier-space distance to the closest distinct neighbor between $0.04$ and $0.16$. Therefore, if we find a vector ${\tt y} \in \Omega(35,15)$ with $\chi_2({\tt x}, {\tt y}; 3) \leq 0.04$, it is likely to be the true solution. We thus conclude that the Fourier data should be specified with an absolute precision of $0.04$ or better.
Not all combinations of $N$, $L$ and $r$ allow such simple considerations. The differences $\langle \kappa\rangle - \sigma$ can be very small or even negative (this is technically possible). In such cases, the lower parts of the error bars in Fig.~\ref{fig:4} are outside of the plot frames. Relevant examples include $N=35$, $L=1$ for $r \ge 5$. In practical terms, the inverse problems with these combinations of $N$, $L$ and $r$ are hard to solve since it is likely that a vector in $\Omega(N,r)$ has a distinct neighbor that is {\em almost} $L$-indistinguishable from itself. To make the inverse problem better conditioned, one can either increase $L$ or increase the precision of the DFT data, assuming the inverse solution is unique.
So far, we have not characterized the statistical distribution of $\kappa({\tt x}_j;L)$. Therefore, the considerations based on the data of Fig.~\ref{fig:4} are qualitative. Computing the full statistical distribution of $\kappa({\tt x}_j;L)$ is a combinatorially hard problem. Instead, we have introduced the cumulative probabilities
\begin{align}
\label{P_def}
P(\epsilon; L) = \frac{1}{J} \sum_{j=1}^J \Theta\left( \epsilon - \kappa({\tt x}_j;L) \right) \ ,
\end{align}
where
\begin{align}
\label{Theta_def}
\Theta(x) = \left\{
\begin{array}{ll}
1 \ , & x \ge 0 \\
0 \ , & x < 0
\end{array}\right.
\end{align}
is the step function. Just like $\kappa({\tt x}_j;L)$, $P(\epsilon;L)$ depends implicitly on $N$ and $r$. It gives the (approximate) fraction of vectors in $\Omega(N,r)$ that have at least one distinct neighbor whose Fourier-space distance [as quantified by $\chi_2({\tt x},{\tt y};L)$] does not exceed $\epsilon$. In particular, $P(0;L)$ gives the fraction of vectors in $\Omega(N,r)$ that have at least one distinct but $L$-indistinguishable neighbor.
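Given pre-computed samples $\kappa({\tt x}_j;L)$, the empirical cumulative probability \eqref{P_def} reduces to a simple counting operation, as in this minimal sketch:

```python
import numpy as np

def cumulative_probability(kappas, eps):
    # Empirical P(eps; L) of Eq. (P_def): the fraction of sampled vectors
    # whose kappa does not exceed eps (Theta is the unit step function).
    return float(np.mean(np.asarray(kappas) <= eps))
```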
The quantities $P(\epsilon;1)$ are shown in Fig.~\ref{fig:5} for $N=31,\ 33,\ 35$, and $\epsilon=0,\ 10^{-4},\ 10^{-3}$. As expected, $P(0;1)=0$ for $N=31$. This means that, in agreement with Theorem~\ref{th:1}, all vectors in $\Omega(31,r)$ with $r=1,2,\ldots,M$ are $1$-distinguishable from each other.
Data for larger $L$ are not shown in Fig.~\ref{fig:5} but can be described as follows. In the case $N=31$, we have $P(10^{-3};L) = P(10^{-4};L) = P(0;L) = 0$ for all $r$ considered and $L>1$ (note that $P(\epsilon;L)$ is a non-decreasing function of $\epsilon$). Therefore, choosing $L=2$ already makes the inverse problem stable in this case.
In the case $N=33$, $P(10^{-3};L) = P(10^{-4};L) = P(0;L) = 0$ (for all $r$) when $L>2$, whereas choosing $L=2$ is still not sufficient for making the inverse problem stable. Note that, for $L<11$ and $r>11$, $P(\epsilon;L)$ are not exactly zero due to the possibility of forming regular $11$-gons (see Lemma~\ref{lm:1}). However, the values of $P$ are too small to be determined by the statistical approach used here. These cumulative probabilities can also be computed theoretically by counting all the elements ${\tt x}$ that allow regular $11$-gons.
Finally, in the case $N=35$, we can use Theorem~\ref{th:2} to determine whether there are false solutions. For $r<5$, there are no false solutions already with $L=1$. For $5 \leq r < 7$, false solutions are suppressed when $L \ge 5$. For $7\leq r \leq 17$, suppressing false solutions requires $L\ge 7$. However, even if $L<7$, false solutions are statistically rare. Moreover, when $L>1$, the false solutions appear to be the only source of numerical instability. Therefore, the inverse problem with $N=35$ and arbitrary $r$ can be solved in practice even with $L<7$. For example, choosing $L=3$ entails the probability of running into a false solution of the order of $0.01$ or smaller.
\section{Inversion}
\label{sec:inv}
Even if the solution to the inverse problem is unique and we know the DFT data with sufficient precision to guarantee stability, finding the solution is a nontrivial task. In particular, iterative methods that seek to optimize $\chi_p$ are unlikely to work. The reason for this is illustrated in Figs.~\ref{fig:6} and \ref{fig:7}. In Fig.~\ref{fig:6}, we plot the real-space distance $d({\tt x}, {\tt x}_{\tt mod})$ between various vectors ${\tt x} \in \Omega(N,r)$ and the model vector (a) vs the corresponding Fourier-space distance $\chi_2({\tt x}, {\tt x}_{\tt mod}; L)$. It can be seen that the two distances do not correlate well: it is possible to have a small $\chi_2$ with a large $d$, and a small $d$ with a large $\chi_2$. Any optimization technique that works directly with the real-space vectors ${\tt x} \in \Omega(N,r)$ and tries to reduce the error $\chi_2$ iteratively is therefore unlikely to solve the problem: the iterations will inevitably end up in one of the many local minima of $\chi_2$, which, in real space, are still very far from the true solution. The difficulty is not removed if we use $\chi_\infty$ instead of $\chi_2$, as is illustrated in Fig.~\ref{fig:7}.
\begin{figure}
\centering
\includegraphics[]{Fig_6}
\caption{\label{fig:6} Real-space distance $d({\tt x}, {\tt x}_{\rm mod})$ vs Fourier-space distance $\chi_2({\tt x}, {\tt x}_{\tt mod}; L)$ for the same data points as in Fig.~\ref{fig:1}. Here ${\tt x}_{\tt mod}$ is the model vector (a). The symbol $(\times 2)$ indicates that the data point corresponds to two distinct false solutions. Both false solutions have the same real-space distance to the model, $d=3$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[]{Fig_7}
\caption{\label{fig:7} Same as in Fig.~\ref{fig:6} but for $\chi_\infty$. Data for $L=1$ are not shown since, in this case, $\chi_2$ and $\chi_\infty$ coincide.}
\end{figure}
\begin{remark}
\label{rm:3}
If we could find a vector ${\tt y}$ such that $d({\tt y}, {\tt x}_{\rm mod}) = 1$,
we would be able to find, starting from this result, the true solution (assuming it is unique) in at most $r(N-r)$ deterministic steps. But as can be seen from the data of Figs.~\ref{fig:6} and \ref{fig:7}, small $\chi_p$ does not imply small $d$. It is almost as hard to find a vector ${\tt y}$ with the above property as it is to find ${\tt x}_{\rm mod}$ itself.
\end{remark}
Thus, we have encountered a somewhat paradoxical result. The inverse problem can be linear and have a unique and stable solution; yet, any method that iteratively updates ${\tt x} \in \Omega(N,r)$ in an attempt to minimize $\chi_p$ is not expected to work. The mathematical reason for this difficulty is that the forward DFT maps the discrete set $\Omega(N,r)$ into a continuous linear space of Fourier data; not every point in the data space is an image of an element of $\Omega(N,r)$. Iterative methods that work well for continuous maps do not work here.
However, we describe below two approaches to solving the inverse problem that work reasonably well. One is a combinatorial approach to the NP-hard problem, and the other is based on non-convex optimization of a continuous map; the complexity of the latter method is polynomial (if it converges). The combinatorial method does not seek a sequence of vectors ${\tt x}$ with a monotonically decreasing value of $\chi_p$. Rather, it tests all vectors in some real-space vicinity of an initial guess (defined below) and either finds one with $\chi_p$ below a pre-determined threshold or finds the vector with the smallest $\chi_p$ over all vectors in this vicinity. The optimization method does not work with $\chi_p$ directly but rather relaxes the assumption of binarity and defines a functional that quantifies how close a general vector is to being binary. The functional is non-convex but is a fourth-order polynomial; along any search direction, it can have no more than two minima. This, together with a stochastic-jump strategy, allows one to search efficiently for the deepest local minimum. The Supplemental Material contains a computational package that implements these two methods. The package is applicable to generic data consisting of several DFT coefficients of the unknown vector (to be supplied by the user) and includes detailed documentation and examples.
We start by describing the setup of the numerical inverse problem in more detail. Let $\tilde{\varphi}_m$, $1 \leq m \leq L$, be a set of DFT coefficients, which are given as input to the numerical inversion. This set can be expanded by using the relations $\tilde{\varphi}_0 = r$ and $\tilde{\varphi}_{-m} = \tilde{\varphi}_m^*$. For the algorithms described below, it is not important how $\tilde{\varphi}_m$ were generated. These can be exact DFT coefficients of some binary vector, or noisy approximations to such coefficients, or just an arbitrary set of complex numbers. In the numerical package that accompanies this paper, $\tilde{\varphi}_m$ with $1\leq m \leq L$ are supplied as numbers in an input file, and several examples of such ``forward data'' corresponding to some model binary vectors (with various degrees of precision) are provided for testing. We seek a binary vector ${\tt s}$ (the solution) such that the Fourier-space distance
\begin{align}
\label{stop_cond_1}
\chi_p({\tt s}, \upvarphi; L) \leq \epsilon \ ,
\end{align}
where $\epsilon$ is a pre-determined small constant, or, if \eqref{stop_cond_1} cannot be met, we seek the minimizer of $\chi_p({\tt x}, \upvarphi; L)$ over a sufficiently large set of vectors ${\tt x}$. These conditions are referred to below as the first and second stop conditions. We emphasize that ${\tt s}$ is a numerical solution; it may or may not be equal to the model vector ${\tt x}_{\rm mod}$ that was used to generate the data $\tilde{\varphi}_m$. It might also be that no model vector was used to generate $\tilde{\varphi}_m$ at all.
We also note the following three points. First, computation of $\chi_p({\tt x}, \upvarphi; L)$ according to \eqref{chi_def} does not require the knowledge of the full vector $\upvarphi$; the set of known DFT coefficients $\tilde{\varphi}_m$ with $|m| \leq L$ is sufficient. Second, if $\tilde{\varphi}_m$ do not correspond to some binary vector with given $N$ and $r$ precisely, then $\chi_p({\tt s}, \upvarphi; L)$ cannot be arbitrarily small, and the first stop condition cannot be met if the selected $\epsilon$ is too small. However, the second stop condition can still provide the correct solution. Under these circumstances, increasing $\epsilon$ can make the numerical inversion more efficient since the first stop condition, if achievable, is generally met much faster than the second condition. Finally, the achieved distance $\chi_p({\tt s}, \upvarphi; L)$ is an indicator of how reliable the obtained solution is. This quantity can be compared to the data similar to those shown in Fig.~\ref{fig:4} but computed for the specific values of $N$, $r$ and $L$ that were used in a reconstruction. If $\chi_p({\tt s}, \upvarphi; L) < \langle \kappa(L) \rangle - 2 \sigma(L)$, one can be reasonably confident that ${\tt s}$ is the true solution.
In both approaches described below, we start with an initial guess ${\tt g} = (g_1,\ldots,g_N)$, which is computed as the low-pass-filtered inverse DFT of the forward data, viz.,
\begin{align}
\label{guess}
g_n = \frac{1}{N}\sum_{m=-L}^L \tilde{\varphi}_m e^{-\mathrm i \xi n m} \ ,
\end{align}
where we have used all the available data points $\tilde{\varphi}_m$ including $\tilde{\varphi}_0 = r$ and $\tilde{\varphi}_{-m}=\tilde{\varphi}_m^*$.
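As a sketch, the initial guess \eqref{guess} can be computed from the $L$ supplied coefficients as follows. This illustration assumes $\xi = 2\pi/N$ and the indexing $n = 1,\ldots,N$ (both are assumptions, not specified here); the conjugate-symmetric terms $m$ and $-m$ are folded into a single real part.

```python
import numpy as np

def initial_guess(phi, N, r, L):
    # Band-limited inverse DFT of the data, Eq. (guess). phi is a dict with
    # phi[m] for m = 1..L; phi[0] = r and phi[-m] = conj(phi[m]) are implied.
    xi = 2.0 * np.pi / N
    n = np.arange(1, N + 1)
    g = np.full(N, r / N)                      # the m = 0 term
    for m in range(1, L + 1):
        # phi[m] e^{-i xi n m} plus its conjugate pair = 2 Re(phi[m] e^{-i xi n m})
        g += 2.0 * (phi[m] * np.exp(-1j * xi * n * m)).real / N
    return g
```

With $L = M$ (all coefficients known), this reproduces the binary vector exactly; with $L < M$, it is only a smooth, band-limited approximation.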
\subsection{Combinatorial algorithm}
\label{sec:inv.comb}
For relatively small values of $N$ (i.e., $\lesssim 60$), we can use the following approach. We start with the initial guess \eqref{guess} and round off the $r$ largest elements of ${\tt g}$ to $1$ and the rest to $0$. We refer to this procedure as ``roughening'' and write ${\tt b} = {\mathcal R}[{\tt g}]$, where ${\mathcal R}[\cdot]$ is the roughening operator. The result is a binary initial guess ${\tt b}$ with the correct length and popcount, so that ${\tt b} \in \Omega(N,r)$. However, ${\tt b}$ is not expected to be consistent with the data. To be sure, we always check whether ${\tt b}$ satisfies the first stop condition \eqref{stop_cond_1}. If this is not so, as is usually the case, we invoke a recursive procedure that builds all vectors ${\tt x}$ such that $d({\tt x}, {\tt b}) = 1$, then all vectors such that $d({\tt x}, {\tt b}) = 2$, etc. The algorithm stops when either a vector ${\tt x}$ satisfying \eqref{stop_cond_1} is found (first stop condition) or all vectors ${\tt x}$ such that $d({\tt x}, {\tt b}) \leq d_{\tt max}$, where $d_{\tt max}$ is the maximum depth of recursion, have been tested (second stop condition). If we select $d_{\tt max} = r$, then all vectors in $\Omega(N,r)$ are tested, but such an exhaustive search is rarely necessary. In the computational package accompanying this paper, the default value is $d_{\tt max} = \min(10,r)$, and it can be tuned by the user if necessary.
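The roughening operator ${\mathcal R}$ is straightforward; a minimal sketch:

```python
import numpy as np

def roughen(g, r):
    # Roughening operator R[g]: set the r largest entries of g to 1 and the
    # rest to 0, producing a binary vector in Omega(N, r).
    b = np.zeros(len(g))
    b[np.argsort(g)[-r:]] = 1.0
    return b
```

Ties between equal entries of ${\tt g}$ are broken arbitrarily by the sort order.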
We now present the pseudo-code for the combinatorial algorithm.
\begin{algorithmic}[1]
\STATE Compute ${\tt g}$ according to \eqref{guess} and ${\tt b}$ as ${\tt b} = {\mathcal R}[{\tt g}]$;
\FOR{$d=1$ \TO $d=d_{\tt max}$}
\STATE Initialize $\chi_{\rm min} \leftarrow 10^9$;
\FOR{$i=1$ \TO $r!/[d!(r-d)!]$}
\STATE{\label{Step:A} Select a new unique combination of $d$ 1s out of $r$ 1s in ${\tt b}$;}
\FOR{$j=1$ \TO $(N-r)!/[d!(N-r-d)!]$}
\STATE{\label{Step:B} Select a new unique combination of $d$ 0s out of $N-r$ 0s in ${\tt b}$;}
\STATE{Swap 1s selected in Line~\ref{Step:A} with 0s selected in Line~\ref{Step:B} and leave other 1s and 0s in ${\tt b}$ unchanged; assign the resulting values to ${\tt x}$;}
\STATE{Compute $\chi \equiv \chi_p({\tt x}, \upvarphi; L)$;}
\IF{$\chi \leq \epsilon$}
\STATE{Solution found using Stop Condition 1. Assign ${\tt s} \leftarrow {\tt x}$ and exit;}
\ELSE
\IF{$\chi < \chi_{\rm min}$}
\STATE{$\chi_{\rm min} \leftarrow \chi$;}
\STATE{${\tt x}_{\rm min} \leftarrow {\tt x}$;}
\ENDIF
\ENDIF
\ENDFOR
\ENDFOR
\ENDFOR
\STATE{Solution found using Stop Condition 2. Assign ${\tt s} \leftarrow {\tt x}_{\rm min}$;}
\PRINT{Achieved distance to data, $\chi_{\rm min}$;}
\PRINT{Recursion depth at which solution was found, $d_{\rm min}$;}
\end{algorithmic}
The two internal loops, which go over all unique $d$-combinations of the $r$ 1s and the $N-r$ 0s in ${\tt b}$, can be defined recursively. The problem here is that of generating all $d$-combinations of a set of $r$ or $N-r$ distinguishable (labeled) objects. To construct all $d$-combinations of 1s, we define the recursive procedure ${\rm next\_1}(k,l)$, where $k$ is the number of 1s already selected and $l$ is the sequential number of the previous 1 selected. Here we assume that all 1s in ${\tt b}$ are numbered sequentially from $1$ to $r$ (one can imagine a label attached to each 1). Similarly, 0s can be numbered sequentially from $1$ to $N-r$. The procedure is first invoked as ${\rm next\_1}(0,0)$. For generic $k$ and $l$, ${\rm next\_1}(k,l)$ goes over all available positions $i$ for the next, $(k+1)$-th, 1. These positions, $i$, are in the range $l < i \leq r-d + (k+1)$. For each $i$, the procedure includes the corresponding 1 into a new combination and then invokes itself as ${\rm next\_1}(k+1,i)$. Importantly, the new 1 in a combination is always selected ``to the right'' of the previous selection. Once a full $d$-combination of 1s is constructed, we have $k=d$, and ${\rm next\_1}(d,*)$ invokes another recursive procedure, ${\rm next\_0}(0,0)$, which builds all unique $d$-combinations of 0s in a similar manner. Once two unique $d$-combinations have been constructed, the 0s and 1s in ${\tt b}$ are swapped and $\chi_p$ is computed.
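In a high-level language, the same enumeration of depth-$d$ neighbors can be written more compactly with library routines in place of the recursive procedures; the following sketch (using {\tt itertools.combinations}) generates the same set of test vectors:

```python
import itertools
import numpy as np

def neighbors_at_depth(b, d):
    # Yield all x in Omega(N, r) at real-space distance d from b by swapping
    # every d-combination of 1s in b with every d-combination of 0s.
    ones = np.flatnonzero(b == 1)
    zeros = np.flatnonzero(b == 0)
    for i in itertools.combinations(ones, d):
        for j in itertools.combinations(zeros, d):
            x = b.copy()
            x[list(i)] = 0.0
            x[list(j)] = 1.0
            yield x
```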
The computational complexity of the described algorithm scales as $O(L C)$, where
\begin{align}
\label{C_def}
C = \sum_{d=1}^{d_{\tt max}} \frac{r!(N-r)!}{(d!)^2(r-d)!(N-r-d)!} \ .
\end{align}
Every term in this summation is the product of the number of unique $d$-combinations of $r$ 1s times the number of unique $d$-combinations of $N-r$ 0s, which are indicated in Lines 4 and 6 of the pseudo-code. This product is equal to the number of all unique vectors ${\tt x}$ tested at the recursion depth $d$. It is also equal to the number of all vectors in $\Omega(N,r)$ whose real-space distance to ${\tt b}$ is $d$. The factor of $L$ in $O(LC)$ accounts for the overhead of computing $\chi_p$ for every ${\tt x}$ tested. The complexity $C$ is illustrated in Fig.~\ref{fig:8} for several values of $N$ and $r$ as a function of $d_{\tt max}$. It can be seen that $C$ is much smaller than the complexity of exhaustive search [that is, testing all vectors in $\Omega(N,r)$] if $d_{\tt max}$ is significantly smaller than $r$. For example, for $N=61$, the complexity is still manageable and well below that of exhaustive search for $d_{\tt max} \lesssim 10$.
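In terms of binomial coefficients, \eqref{C_def} reads $C = \sum_{d=1}^{d_{\tt max}} \binom{r}{d}\binom{N-r}{d}$, which is easy to evaluate, e.g.:

```python
from math import comb

def complexity(N, r, d_max):
    # Number of vectors tested up to the recursion depth d_max, Eq. (C_def).
    return sum(comb(r, d) * comb(N - r, d) for d in range(1, d_max + 1))
```

By the Vandermonde identity, running the recursion to full depth $d_{\tt max}=r$ gives $C = \binom{N}{r} - 1$, i.e., the cost of exhaustive search.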
Still, the complexity \eqref{C_def} strongly depends on $d_{\tt max}$, and the choice of this parameter in practical computations is not trivial. Let us assume that the data $\tilde{\varphi}_m$ correspond to some binary vector ${\tt x}_{\tt mod}$ with good precision. That is, the stop condition $\chi_p({\tt x}, \upvarphi; L) \leq \epsilon$ is met for ${\tt x} = {\tt s} = {\tt x}_{\tt mod}$ and is not met for any other vector in $\Omega(N,r)$. Then, to guarantee finding the true solution, the recursion must run to $d_{\tt max} = d({\tt b}, {\tt x}_{\tt mod})$. The distance from the initial guess to the model is bounded from above by $r$ but is otherwise not known {\em a priori}. This underscores the importance of making the initial guess ${\tt b}$ as close to the true solution as possible. In Fig.~\ref{fig:9}, we plot the average values $\langle d({\tt b}, {\tt x}_{\tt mod}) \rangle$ and the corresponding standard deviations for $10^6$ random model vectors ${\tt x}_{\tt mod}$ of the length $N=61$ and different values of $L$ as functions of the popcount $r$. It can be seen that the typical values of $d({\tt b}, {\tt x}_{\tt mod})$ are significantly smaller than $r$ and, as expected, decrease with $L$. We emphasize that the results shown in Fig.~\ref{fig:9} pertain to random model vectors without any particular structure. The vectors in which 1s are grouped, i.e., those representing a single rectangular pulse or a few such pulses, tend to have smaller $d({\tt b}, {\tt x}_{\tt mod})$. Therefore, the data of Fig.~\ref{fig:9} or similar easily-computable data sets can be useful for estimating the values of $d_{\tt max}$ that are sufficient for obtaining the solution.
\begin{figure}
\centering
\includegraphics[]{Fig_8}
\caption{\label{fig:8} Computational complexity $C$ \eqref{C_def} as a function of the maximum recursion depth $d_{\tt max}$ for several values of $N$ and $r$, as labeled. Thin black lines indicate the computational complexity of the exhaustive search, i.e., testing all vectors in $\Omega(N,r)$. Panel (a) shows the dependence for several values of $N$ and the maximum value of $r$ that is allowed for each $N$. Panel (b) shows the dependence for $N=61$ and several allowed values of $r$ for this $N$. The blue lines for $N=61,r=31$ in the two panels are identical.}
\end{figure}
\begin{figure*}
\centering
\includegraphics[]{Fig_9}
\caption{\label{fig:9} Average distance between random models ${\tt x}_{\tt mod}$ and the corresponding (roughened) initial guess ${\tt b}$, $\langle d({\tt b}, {\tt x}_{\tt mod}) \rangle$, for $10^4$ random vectors ${\tt x}_{\tt mod}$ of length $N=61$ each, shown as functions of $r$ for different values of $L$. Error bars are drawn at the level of one standard deviation. }
\end{figure*}
The algorithm described in this subsection was able to find all three model vectors (a), (b) and (c) defined above with $L=1$ by using the stop condition $\chi_2 \leq \epsilon = 10^{-5}$ (for models (a) and (b), $\epsilon=10^{-4}$ is sufficient). We note that IP($33,16,1$) is not uniquely solvable, but the algorithm has found the true solution anyway. This occurred by chance; the search could have ended up with one of the two false solutions. Also, IP($35,17,1$) is not uniquely solvable in general but, as was mentioned in Remark~\ref{rm:1}, the particular model vector (c) is uniquely recoverable with $L=1$. In Fig.~\ref{fig:10}, we show the intermediate reconstruction steps for model (c) using $L=1$ and $L=5$. It can be seen that the band-limited initial guess ${\tt g}$ does not have a well-defined structure for either $L=1$ or $L=5$. The ``roughened'' binary initial guess ${\tt b}$ has the distance $d({\tt b}, {\tt x}_{\rm mod})=8$ for $L=1$ and $d({\tt b}, {\tt x}_{\rm mod}) = 4$ for $L=5$.
For $L=1$, the combinatorial inversion took 8, 40, and 209 seconds to recover models (a), (b), and (c), respectively. In contrast, MATLAB's {\tt intlinprog} function recovered these models in 60, 518, and 3006 seconds. See the User Guide of the computational package (in Supplemental Material) for additional benchmarks and examples.
\begin{figure}
\centering
\includegraphics[]{Fig_10}
\caption{\label{fig:10} Intermediate steps in the reconstruction of model (c) with $N=35$ and $r=17$. Initial guess ${\tt g}$ for $L=1$ (a) and $L=5$ (b); initial guess after roughening, ${\tt b}={\mathcal R}[{\tt g}]$, for $L=1$ (c) and $L=5$ (d); and the reconstruction ${\tt s}$ (e), which is the same for both values of $L$ and identical to the model.}
\end{figure}
\subsection{Non-convex optimization}
\label{sec:inv.opt}
When $N$ increases past $\sim 60$, the combinatorial algorithm of the previous subsection becomes impractical. We now describe an alternative approach that relies on the continuity of the conventional (unrestricted and unconstrained) DFT to define an iterative scheme that does not involve combinatorial complexity. Let ${\tt v} = (v_1, \ldots, v_N)$ be a general real-valued unconstrained vector of length $N$. We define the cost function $F[{\tt v}]$ that quantifies how close ${\tt v}$ is to a binary vector as
\begin{align}
\label{F_def}
F[{\tt v}] = \sum_{n=1}^N v_n^2 (v_n - 1)^2 \ .
\end{align}
We then seek to minimize $F[{\tt v}]$ while keeping ${\tt v}$ consistent with the data. To this end, we start with the band-limited initial guess ${\tt v} = {\tt g}$ [defined in \eqref{guess}] and update ${\tt v}$ iteratively according to
\begin{subequations}
\label{iter}
\begin{align}
\label{iter_step}
{\tt v} \longleftarrow {\tt v} + {\tt q} \ ,
\end{align}
where ${\tt q} = (q_1, \ldots, q_N)$ is of the form
\begin{align}
\label{q_def}
q_n = \sum_{m=L+1}^M \left[c_m \cos (\xi mn) + s_m \sin (\xi m n) \right] \ .
\end{align}
\end{subequations}
Here $c_m$ and $s_m$ are some real-valued coefficients, which must be determined at each iteration step independently. It can be seen that the vector ${\tt v}$ updated according to \eqref{iter} remains consistent with the data. To determine ${\tt q}$, we use the steepest descent approach. The derivatives of $F[{\tt v} + {\tt q}]$ with respect to the set of coefficients $c_m, s_m$ evaluated at ${\tt q}=0$ are given by
\begin{subequations}
\label{dir_cm}
\begin{align}
F_m^{(c)}[{\tt v}] \equiv \left. \frac{\partial F[{\tt v} + {\tt q}]}{\partial c_m} \right|_{{\tt q}=0} = 2\sum_{n=1}^N u_n \cos(\xi m n) \ , \\
F_m^{(s)}[{\tt v}] \equiv \left. \frac{\partial F[{\tt v} + {\tt q}]}{\partial s_m} \right|_{{\tt q}=0} = 2\sum_{n=1}^N u_n \sin(\xi m n) \ ,
\end{align}
\end{subequations}
where
\begin{align}
\label{u_def}
u_n = v_n(v_n - 1)( 2 v_n -1 ) \ .
\end{align}
Therefore, we have determined the gradient of $F[{\tt v}]$ with respect to the unknown coefficients $c_m$ and $s_m$. Denoting the gradient vector by ${\tt p} = (p_1,\ldots,p_N)$, we have for the components:
\begin{align}
\label{p_def}
p_n = \sum_{m=L+1}^M \left[ F_m^{(c)}[{\tt v}]\cos(\xi m n) + F_m^{(s)}[{\tt v}]\sin(\xi m n) \right] \ .
\end{align}
According to the general strategy of steepest descent, we select ${\tt q} = \alpha {\tt p}$ in the iteration step \eqref{iter_step}, where $\alpha$ is a scalar to be determined. By direct substitution, we find that
\begin{subequations}
\label{F_der_alpha}
\begin{align}
\label{f_alpha}
f(\alpha) \equiv \frac{1}{2}\frac{\partial F[{\tt v}+\alpha{\tt p}]}{\partial \alpha} = A_0 + A_1 \alpha + A_2 \alpha^2 + A_3 \alpha^3 \ ,
\end{align}
where
\begin{align}
& A_0 = \sum_{n=1}^N v_n(v_n - 1)(2v_n - 1) p_n = \sum_{n=1}^N u_n p_n \ , \\
& A_1 = \sum_{n=1}^N [2v_n (v_n - 1) + (2v_n - 1)^2] p_n^2 \ , \\
& A_2 = 3\sum_{n=1}^N (2v_n - 1) p_n^3 \ , \ \ A_3 = 2\sum_{n=1}^N p_n^4 \ .
\end{align}
\end{subequations}
The function $F[{\tt v}+\alpha{\tt p}]$ is a fourth-order polynomial in $\alpha$ with a positive leading coefficient. Consequently, it has either one minimum or two minima and one maximum. These special values of $\alpha$ can be determined by solving the cubic equation $f(\alpha)=0$. If the cubic has a single real root $\alpha_1$, we set $\alpha=\alpha_1$. If there are three real roots $\alpha_1 \leq \alpha_2 \leq \alpha_3$, then $F[{\tt v}+\alpha{\tt p}]$ has a maximum at $\alpha_2$ and two minima at $\alpha_1$ and $\alpha_3$. To avoid ``jumping'' over the maximum, we set $\alpha = \alpha_1$ if $\alpha_2>0$ and $\alpha=\alpha_3$ otherwise. This completely defines each iteration step and guarantees that $F[{\tt v}]$ decreases in each iteration until a local minimum of $F[{\tt v}]$ is reached. The condition for the local minimum is
\begin{align}
\label{loc_min_cond}
& \sum_{n=1}^N u_n \cos(\xi mn) = \sum_{n=1}^N u_n \sin(\xi mn) = 0 \\
& \nonumber \hspace*{5cm} {\rm for} \ L<m\leq M \ ,
\end{align}
where $u_n$ are defined in \eqref{u_def}. The steepest descent algorithm described here can find a local minimum satisfying the above condition efficiently and with high precision. Note that, in most iteration steps, the cubic has only one real root. The problem, however, is that $F[{\tt v}]$ has many local minima. It is unlikely that any given local minimum is close to the true solution.
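The root-selection rule described above can be sketched as follows ({\tt numpy.roots} stands in for an explicit cubic formula; the tolerance used to discard complex roots is an arbitrary choice of this illustration):

```python
import numpy as np

def step_length(A0, A1, A2, A3):
    # Solve the cubic f(alpha) = A0 + A1 a + A2 a^2 + A3 a^3 = 0, Eq. (f_alpha).
    # With three real roots a1 <= a2 <= a3, the middle one is a maximum of
    # F[v + alpha p]; to avoid jumping over it, take a1 if a2 > 0, else a3.
    roots = np.roots([A3, A2, A1, A0])
    real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    if len(real) < 3:
        return real[0]           # single real root (the usual case)
    a1, a2, a3 = real
    return a1 if a2 > 0 else a3
```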
To overcome the above difficulty, we can adopt the following stochastic approach. Once a local minimum $\bar{\tt v}_k$ is found ($k$ labels different local minima), we compute $\chi_p({\mathcal R}[\bar{\tt v}_k], \upvarphi; L)$
and check whether the first stop condition \eqref{stop_cond_1} has been satisfied. If so, we have found the solution ${\tt s} = {\mathcal R}[\bar{\tt v}_k]$. If not, we start from the deepest local minimum found so far, $\bar{\tt v}_{\tt min}$, and perturb it by adding a vector ${\tt q}_{\tt rand}$ of the form \eqref{q_def} with a given length and a random direction (in the space of coefficients $c_m$, $s_m$). In this way, we select a new initial guess for ${\tt v}$, from which we run the steepest descent algorithm again. Depending on the length and direction of ${\tt q}_{\tt rand}$, we will end up either in the same or in a different local minimum. To avoid being trapped in the same local minimum, we gradually increase the length of the random jumps.
The package also provides the option to define the first stop condition in terms of $F[{\tt v}]$ itself rather than in terms of $\chi_p$. Since this approach does not require computation of $\chi_p$, it is slightly faster; the default setting, however, is to use \eqref{stop_cond_1}. Finally, the algorithm stops after a certain number of local minima has been found and reports the deepest local minimum (second stop condition).
A simplified pseudo-code for the non-convex optimization algorithm is presented below. Some parts of the algorithm are stated only briefly and some additional logic, which is needed to avoid mistakes and improve precision and speed, is not shown to avoid excessive complexity. However, conceptually important steps of the algorithm are all shown.
\begin{algorithmic}[1]
\STATE{Define constants: small precision-related constant $\delta$, random jump step length (initial value $S=1$), number of iterations before $S$ is incremented $i_{\tt inc}$ (typical value $i_{\tt inc}=1000$), and the increment of $S$, $\Delta$. These and other constants are initialized from input parameter files.}
\STATE{Compute ${\tt g}$ according to \eqref{guess};}
\STATE{Initialize ${\tt v} \leftarrow {\tt g}$;}
\STATE{Initialize $\chi_{\rm min} \leftarrow 10^9$;}
\FOR{$i=1$ \TO $i_{\tt max}$}
\STATE{Compute $F_0 = F[{\tt v}]$;}
\STATE{Initialize ${\rm test \leftarrow True}$;}
\WHILE{{\rm test}}
\STATE{Compute steepest descent direction ${\tt p} = {\tt p}[{\tt v}]$ according to \eqref{dir_cm}-\eqref{p_def};}
\STATE{Compute the length of the steepest descent step $\alpha$ by solving the cubic \eqref{f_alpha} and choosing the distance to the closest minimum along the search direction;}
\STATE{Make the steepest descent step ${\tt v} \leftarrow {\tt v} + \alpha {\tt p}$;}
\STATE{Compute $F=F[{\tt v}]$;}
\STATE{Evaluate $t \leftarrow F_0/F - 1$;}
\IF{$t > \delta$}
\STATE{$F_0 \leftarrow F$;}
\ELSE
\STATE{${\rm test = False}$;}
\ENDIF
\ENDWHILE
\STATE{Check the local minimum condition \eqref{loc_min_cond}. If local minimum can not be reached, report error and exit;}
\STATE{Compute ${\tt x} = {\mathcal R}[{\tt v}]$;}
\STATE{Compute $\chi_p = \chi_p({\tt x}, \upvarphi; L)$;}
\IF{$\chi_p < \epsilon$}
\STATE{Stop condition 1. Solution has been found. Assign ${\tt s} \leftarrow {\tt x}$ and exit;}
\ENDIF
\IF{$\chi_p < \chi_{\tt min}$}
\STATE{$\chi_{\tt min} \leftarrow \chi_p$;}
\STATE{${\tt v}_{\tt min} \leftarrow {\tt v}$;}
\ENDIF
\IF{$\mod(i,i_{\tt inc})=0$}
\STATE{Increase the random jump step length as $S \leftarrow S + \Delta$;}
\ENDIF
\STATE{Construct a vector ${\tt q}$ of the form \eqref{q_def} of unit length and random direction in the space of $c_m,s_m$, $L<m\leq M$;}
\STATE{Make random jump ${\tt v} \leftarrow {\tt v}_{\tt min} + S {\tt q}$;}
\ENDFOR
\STATE{Solution found using Stop Condition 2. Assign ${\tt s} \leftarrow {\mathcal R}[{\tt v}_{\rm min}]$;}
\PRINT{Achieved distance to data, $\chi_{\rm min}$;}
\end{algorithmic}
Some variations of this algorithm, as implemented in the computational package, include the following. (i) If the local minimum in Line 20 cannot be verified with the required precision, the code will attempt to approach the minimum more closely by moving along several deterministic directions parallel to the axes or to multi-dimensional diagonals in the space of $c_m, s_m$. The code reports an error only if the local minimum cannot be verified after several such attempts. This error occurs (due to round-off errors) extremely rarely and can be fixed by changing the precision-related constants. Note that the functional $F[{\tt v}]$ has only local minima and maxima but no saddle points. (ii) The random jumps in Line 34 can originate not only from the deepest local minimum found so far (the default) but also from the initial guess, i.e., ${\tt v} \leftarrow {\tt g} + S {\tt q}$, or from the local minimum most recently found, ${\tt v} \leftarrow {\tt v} + S {\tt q}$. In the latter case, the search for the solution is a random walk over the local minima of $F[{\tt v}]$. In the first (default) case, the random walk is restricted so that the next stop always has a smaller $\chi_p$. (iii) The algorithm can use a different formulation of stop condition 1, which is based on the smallness of $F$ rather than $\chi_p$. (iv) Finally, note that the current implementation of the codes relies on the $L_2$ norm. This can be changed to the $L_1$ or $L_\infty$ norm as explained in the User Guide.
\begin{figure}
\centering
\includegraphics[]{Fig_11}
\caption{\label{fig:10_N=199} Reconstruction with $N=199$, $r=90$ and $L=29$. Initial guess ${\tt g}$ (a) and the reconstruction ${\tt s}$ (identical to the model ${\tt x}_{\tt mod}$) (b).}
\end{figure}
To illustrate the method, we have reconstructed a vector of length $N=199$ with $r=90$ using $L=29$ (so that the ratio $L/M=29/99 \approx 1/3$) and the stop condition $\chi_2 \leq \epsilon=10^{-5}$ (if the DFT data are rounded off to 4 significant figures, a solution is still found quickly with $\epsilon=0.002$). The initial guess ${\tt g}$ for this simulation is shown in Fig.~\ref{fig:10_N=199}(a) and the reconstruction (identical to the model) in Fig.~\ref{fig:10_N=199}(b). If we apply the roughening operation directly to the initial guess, ${\tt b} = {\mathcal R}[{\tt g}]$, we obtain $d({\tt b}, {\tt x}_{\tt mod}) = 8$. Although the roughened initial guess and the model are not too far apart in real space, finding the solution by the combinatorial method of Section~\ref{sec:inv.comb} would require $O(10^{24})$ operations, which is beyond the reach of any modern computer. The optimization method of this subsection, however, finds the solution in under one minute. It is difficult, though, to state a general condition for convergence to the true solution or to estimate the computational complexity of the method. We note that the method has several adjustable parameters, and changing them can help in situations when convergence appears to be slow. Additional details are provided in the User Guide of the computational package.
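The quantities appearing in this example can be mocked up numerically. The sketch below is illustrative only: `chi2`, `roughen`, and `hamming` are plausible stand-ins for the paper's $\chi_2$, the roughening operator ${\mathcal R}$, and the distance $d$ (whose exact definitions are given earlier in the text), and the random binary model is not the vector of Fig.~\ref{fig:10_N=199}.

```python
import numpy as np

def band_dft(x, L):
    # The known data: the first L+1 DFT coefficients X[0..L].
    return np.fft.fft(x)[: L + 1]

def chi2(v, data, L):
    # l2 misfit between a candidate's band-limited DFT and the data;
    # a stand-in for chi_2 (the normalization here is a guess).
    return float(np.linalg.norm(band_dft(v, L) - data)) / np.sqrt(L + 1)

def roughen(v):
    # Stand-in for the roughening operator R: threshold at 1/2.
    return (np.asarray(v) >= 0.5).astype(int)

def hamming(a, b):
    # Stand-in for the distance d: number of differing entries.
    return int(np.sum(np.asarray(a) != np.asarray(b)))

rng = np.random.default_rng(1)
N, L = 199, 29
x_mod = (rng.random(N) < 90 / N).astype(int)  # binary model, about r = 90 ones
data = band_dft(x_mod, L)
print(chi2(x_mod, data, L))  # 0.0: the model reproduces its own data
```

Any candidate whose band-limited DFT matches the data drives `chi2` to zero, which is exactly the stop condition used in the reconstruction above.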
\section{Discussion and conclusions}
\label{sec:disc}
We have shown that binary compositional constraints are powerful priors that enable a well-pronounced super-resolution effect. In particular, we have proved that the band-limited DFT of a binary vector of length $N$ is uniquely invertible from the knowledge of just two complex DFT coefficients -- the zeroth and the first -- if $N$ is prime. If $N$ has two prime factors, then Theorem~\ref{th:2} tells us how many additional DFT coefficients must be known to guarantee uniqueness. This result can be useful in the analysis of two-dimensional images.
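The uniqueness claim for prime $N$ can be checked by brute force for a small prime length. For $N=7$, the check below enumerates all $2^7$ binary vectors and verifies that no two of them share the pair $(X[0], X[1])$; this is a numerical illustration, not the paper's proof.

```python
import numpy as np
from itertools import product

def first_two_coeffs(x):
    # X[0] is the popcount; X[1] = sum_n x[n] e^{-2 pi i n / N}.
    N = len(x)
    w = np.exp(-2j * np.pi * np.arange(N) / N)
    return int(np.sum(x)), complex(np.dot(x, w))

N = 7  # prime
seen = set()
for bits in product((0, 1), repeat=N):
    X0, X1 = first_two_coeffs(np.array(bits))
    # Round to suppress floating-point noise; distinct values of X[1]
    # for prime N are separated by far more than 1e-9.
    key = (X0, round(X1.real, 9), round(X1.imag, 9))
    assert key not in seen, "two binary vectors share (X[0], X[1])"
    seen.add(key)
print(len(seen))  # 128 = 2**7: all pairs are distinct
```

For composite lengths the analogous check fails for the pair $(X[0], X[1])$ alone, which is why Theorem~\ref{th:2} requires additional coefficients.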
Although Theorems~\ref{th:1} and \ref{th:2} establish sufficient conditions for invertibility assuming only that the unknown vector is consistent with the data, many (but not all) binary vectors are uniquely recoverable even when the conditions of the theorems do not hold.
We have further investigated stability of inversion and conditions under which reconstructing a binary vector from a limited set of DFT coefficients is practically feasible. Preliminary numerical data indicate that stable reconstructions can be obtained when about $1/3$ of all DFT coefficients are known. This entails an approximately three-fold super-resolution effect.
Although binarity is a powerful constraint, devising practical reconstruction algorithms is a nontrivial task. We have provided and demonstrated two such algorithms in this paper. The first (combinatorial) algorithm is guaranteed to find a solution of the NP-hard problem if run to completion, but may become prohibitively expensive in computational time. Still, it involves far fewer operations than exhaustive search. This approach is applicable to vectors with $N \lesssim 100$. For longer vectors, we have developed an algorithm based on optimization of the non-convex cost function $F[\cdot]$ \eqref{F_def}. The local minima of this function are easily found by steepest descent. However, not every local minimum of a non-convex function is a global minimum. To overcome this difficulty, we have introduced random perturbations, which allow one to sample many local minima and eventually find one of sufficient depth, which then coincides with the global minimum. We note that convexization of the inverse problem, that is, finding a relevant convex cost function, proved to be difficult and is likely impossible.
Although we have obtained numerical reconstructions for vectors up to $N=199$ with about $1/3$ of the DFT coefficients considered known, it is clear that the developed algorithms are not optimal and leave considerable room for improvement. We hope that, with further refinements, the theoretical results of this paper can be used to obtain the super-resolution effect in two-dimensional black-and-white images. Extending the theoretical results and algorithm performance to two-dimensional binary images is closely related to discrete tomography~\cite{herman_2012_1}. This would allow for future applications in image processing, nondestructive testing~\cite{krimmel_2005_1}, X-ray crystallography~\cite{alpers_2006_1}, and medical imaging~\cite{herman_2003_1}. One key challenge is to adapt our inversion algorithms to the inversion of the two-dimensional DFT. While the $N=199$ inversion runs quickly in one dimension, further algorithm development is required to achieve fast super-resolved inversion of $199 \times 199$ images. Refinement of the cutting-planes approach (applicable to both the combinatorial and the optimization algorithms) appears to be a promising way forward.
| {
"timestamp": "2021-10-05T02:24:15",
"yymm": "2011",
"arxiv_id": "2011.10130",
"language": "en",
"url": "https://arxiv.org/abs/2011.10130",
"abstract": "A binary vector of length $N$ has elements that are either 0 or 1. We investigate the question of whether and how a binary vector of known length can be reconstructed from a limited set of its discrete Fourier transform (DFT) coefficients. A priori information that the vector is binary provides a powerful constraint. We prove that a binary vector is uniquely defined by its two complex DFT coefficients (zeroth, which gives the popcount, and first) if $N$ is prime. If $N$ has two prime factors, additional DFT coefficients must be included in the data set to guarantee uniqueness, and we find the number of required coefficients theoretically. One may need to know even more DFT coefficients to guarantee stability of inversion. However, our results indicate that stable inversion can be obtained when the number of known coefficients is about $1/3$ of the total. This entails the effect of super-resolution (the resolution limit is improved by the factor of $\\sim 3$).",
"subjects": "Numerical Analysis (math.NA)",
"title": "Binary Discrete Fourier Transform and its Inversion",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9904406023459218,
"lm_q2_score": 0.7154239957834733,
"lm_q1q2_score": 0.7085849733165095
} |
https://arxiv.org/abs/0912.3814 | A Family of Recompositions of the Penrose Aperiodic Protoset and Its Dynamic Properties | This paper describes a recomposition of the rhombic Penrose aperiodic protoset due to Robert Ammann. We show that the three prototiles that result from the recomposition form an aperiodic protoset in their own right without adjacency rules. An iteration process is defined on the space of Ammann tilings that produces a new Ammann tiling from an existing one, and it is shown that this process runs in parallel to Penrose deflation. Furthermore, by characterizing Ammann tilings based on their corresponding Penrose tilings and the location of the added vertex that defines the recomposition process, we show that this process proceeds to a limit for the local geometry. | \section{Introduction}
The Penrose aperiodic tiles have been well-studied by deBruijn \cite{deB}, Lunnon and Pleasants \cite{LP}, and others. See Senechal \cite{Senechal} for a survey. This paper describes a recomposition of the rhombic Penrose aperiodic protoset defined by Robert Ammann in Gr\"unbaum and Shepard \cite{GS}, showing that the three tiles that result from the construction form an aperiodic protoset in their own right without adjacency rules. An iteration process is defined on the space of Ammann tilings that runs in parallel to Penrose deflation, and it is shown that this process proceeds to a limit for the local geometry.
While there are a variety of tilings attributed to Roger Penrose, the kind relevant to this paper are those admitted by a protoset of two rhombic tiles, a thin and a thick, with dimensions dependent on the golden ratio $\phi=\frac{1+\sqrt{5}}{2}$. These two tiles can be used to tile the plane non-periodically (without translational symmetry) when assembled following specific adjacency rules (see Figure~\ref{3tilings}(a)).
An Ammann tiling (see Figure~\ref{3tilings}(b)) may be constructed from a Penrose tiling by rhombs by a process referred to as \textit{recomposition} \cite{GS}. In this process, a single vertex $Q$ is added within a thin Penrose rhomb, and edges are drawn between it and the three nearest Penrose vertices. The geometry of the newly constructed edges is then used to create two specific new vertices and five new edges inside the thick Penrose rhomb. These new vertices and edges are then copied into every Penrose rhomb in the tiling, and the original Penrose edges are deleted.
Although this construction was mentioned in passing in \cite{GS}, to the author's knowledge it has never been thoroughly studied in the literature. This paper shows that this recomposition process produces a tiling by three prototiles that form an aperiodic protoset in their own right, that is, every tiling admitted by this protoset is non-periodic. Furthermore, after defining Ammann tilings independently of the recomposition process, I show that each Ammann tiling has a unique corresponding Penrose tiling.
For Penrose tilings, there is a process called \textit {double composition} by which a new Penrose tiling may be constructed from a starting Penrose tiling. By deleting specific edges from the original Penrose tiling, this process produces a new Penrose tiling composed of tiles geometrically similar to the originals but scaled by the golden ratio.
With the correspondence between Penrose tilings and Ammann tilings established, we next describe an iteration process for Ammann tilings that resembles Penrose double composition. Although this iteration process adds edges as well as deleting some, we prove that it runs in parallel to Penrose deflation.
In analyzing the Ammann iteration process further, we address the questions: (1) Is the iterated tiling composed of tiles geometrically similar to those of the original Ammann tiling? (2) If not, is there a sense in which the new tiling is of the same type as the original Ammann tiling? (3) Can this process be carried out using the new tiling as the starting point? and (4) If so, does the sequence of tilings approach a limit?
Although we show that the Ammann iteration process does not yield a single limit tiling, it does yield a limit for the local geometry of the tilings (see Figure~\ref{3tilings}(c)). By identifying an Ammann tiling with two parameters, (a) its corresponding Penrose tiling and (b) the location of the point $Q$ within a thin rhomb, we show that while (a) does not approach a limit, (b) does. This result is summarized in the following theorem.
\begin{theorem}
The map $Q\mapsto Q'$ defined by Ammann iteration has a unique attractive fixed point along the edge of the thin Penrose rhomb which divides that edge according to the golden ratio.
\label{limit_thm}
\end{theorem}
\begin{figure}
(a)$\qquad\qquad$$\qquad\qquad$$\qquad\qquad$ (b)$\qquad\qquad$$\qquad\qquad$$\qquad\qquad$ (c)
\includegraphics[scale=.35]{1.jpg}
\includegraphics[scale=.35]{2.jpg}
\includegraphics[scale=.35]{3.jpg}
\caption{From left to right: a patch of a Penrose tiling, the corresponding patch of a generic Ammann tiling, a patch of the Ammann limit tiling.}\label{3tilings}
\end{figure}
In section 2, we recall the necessary terminology related to tilings. In section 3, we describe Penrose tilings, including the geometry of the Penrose rhombs, the local isomorphism theorem, the definition of Penrose deflation, and the construction of identifying sequences. Section 4 describes Ammann tilings, detailing the recomposition process that constructs an Ammann tiling from a Penrose tiling, and proving that the three tiles that result from the recomposition process form an aperiodic protoset. Section 5 describes the iteration process for Ammann tilings and shows the correspondence between Ammann iteration and Penrose deflation. In section 6, we examine the dynamics of the Ammann iteration process and prove Theorem~\ref{limit_thm}. Finally, in section 7 we discuss possible applications to quasicrystals.
The research for this paper was begun at the National Science Foundation sponsored Research Experience for Undergraduates at Canisius College, Summer 2008, under the guidance of Professors Terry Bisson and B.J. Kahng. The project was continued through Fall 2009 at the University of Notre Dame with Professor Arlo Caine. Research during Summer 2009 was partially funded with the help of Professor Frank Connolly through NSF Grant DMS-0601234. Many thanks to Professor Caine for the long hours he spent with me on this project and in particular for a suggestion that simplified the proof of Theorem \ref{limit}. Without his help this project would not have been possible. Also, thanks to Professor Jeffrey Diller for suggestions pertaining to the dynamics of the Ammann iteration process.
\section{Terminology}
A \textit{plane tiling} is a countable family $\mathcal{T}=\{T_{1},T_{2},...\}$ of closed subsets of the Euclidean plane, each homeomorphic to a closed circular disk, such that the union of the sets $T_{1}, T_{2},...$ (which are known as the \textit{tiles} of $\mathcal{T}$) is the whole plane, and the interiors of the sets $T_{i}$ are pairwise disjoint \cite{GS}.
We say that a set $\mathcal {S}$ of representatives of the congruence classes in $\mathcal{T}$ is a \textit{protoset} for $\mathcal{T}$, and each representative is a \textit{prototile}. If $\mathcal{S}$ is a protoset for $\mathcal{T}$, then we say that $\mathcal{S}$ \textit{admits }$\mathcal{T}$.
A \textit{patch} is a finite set of tiles whose union is simply connected.
A patch is called \textit{locally legal} if the tiles are assembled according to the relevant adjacency rules and is called \textit{globally legal} if it can be extended to an infinite tiling.
The \textit{(first) corona} of a tile $T_{i}$ is the set
$$\mathcal{C}(T_{i})=\{T_{j}\in \mathcal{T}: \exists\, x,y\in T_{i}\cap T_{j} \text{ such that } x\neq y\}.$$
The \textit{(first) corona atlas }is the set of all (first) coronas that occur in $\mathcal{T}$, and a \textit{reduced (first) corona atlas} of $\mathcal{T}$ is a subset of the corona atlas of $\mathcal{T}$ that covers $\mathcal{T}$.
Similarly, the \textit{(first) vertex star} of a vertex $v$ is the set $\mathcal{V}(v)=\{T_{j}\in\mathcal{T}\mid T_{j}\cap \{v\} \neq\emptyset\}$, and a \textit{(first) vertex star atlas} is the set of all (first) vertex stars that occur in $\mathcal{T}$.
Finally, a tiling $\mathcal{T}$ is \textit{non-periodic }if it does not have translational symmetry in more than one direction, and a set of prototiles $\mathcal{S}$ is \textit{aperiodic} if it admits only non-periodic tilings.
\section{Penrose Tilings}
\begin{figure}\center{\includegraphics[scale=.3]{4.jpg}\caption{(a) the Penrose rhombs properly assembled, (b) the Penrose rhombs improperly assembled, (c) the Penrose rhombs split into triangles. }\label{rhombs}}\end{figure}
There are several types of tilings known as Penrose tilings. The type relevant for this paper is built from the two rhombic prototiles shown in Figure~\ref{rhombs}(a).
The sides of the rhombs are all of length one and the angles measure $\alpha=\frac{\pi}{5}, \beta=\frac{4\pi}{5},\gamma=\frac{2\pi}{5},$ and $\delta=\frac{3\pi}{5}$. In order to guarantee a non-periodic tiling, the edge and angle markings must line up with each other as in Figure~\ref{rhombs}(a).
Illegal configurations, such as the one in Figure~\ref{rhombs}(b),
either produce a periodic tiling of the plane or prevent a tiling of the plane.
\begin{theorem}
The Penrose protoset, together with the adjacency rules, admits uncountably many non-congruent tilings of the plane, all of which are non-periodic (\cite{Senechal}, p189).
\end{theorem}
\begin{theorem}[Local Isomorphism] Every patch in a given Penrose tiling by rhombs occurs infinitely many times in every other Penrose tiling by rhombs, \textit{i.e.,} all Penrose tilings by rhombs are locally isomorphic (\cite{Senechal}, p175).
\end{theorem}
The rhombic prototiles may equivalently be thought of as pairs of triangles, as shown in Figure~\ref{rhombs}(c), the smaller with side lengths 1 and $\frac{1}{\phi}$, where $\phi$ is the golden ratio $\phi=\frac{1+\sqrt{5}}{2}=1.618\dots$, and the larger with side lengths 1 and $\phi+1$ (see Figure~\ref{rhombs}).
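The constants underlying this decomposition can be sanity-checked numerically; the snippet below verifies the standard golden-ratio identities behind the side lengths $1, \frac{1}{\phi}$ and $1, \phi+1$, and that the rhomb angles pair up to $\pi$ (a trivial check, not part of the argument).

```python
import math

phi = (1 + math.sqrt(5)) / 2  # the golden ratio, about 1.618
# Identities behind the triangle side lengths 1, 1/phi and 1, phi + 1:
assert abs(phi**2 - (phi + 1)) < 1e-12   # phi^2 = phi + 1
assert abs(1 / phi - (phi - 1)) < 1e-12  # 1/phi = phi - 1

# Adjacent angles of each Penrose rhomb sum to pi, as in any rhombus:
alpha, beta = math.pi / 5, 4 * math.pi / 5       # thin rhomb
gamma, delta = 2 * math.pi / 5, 3 * math.pi / 5  # thick rhomb
assert abs(alpha + beta - math.pi) < 1e-12
assert abs(gamma + delta - math.pi) < 1e-12
```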
\begin{figure}\center{\includegraphics[scale=.3]{5.jpg}}\caption{The Penrose ``kite and dart" tiles.}\label{k&d}\end{figure}
\begin{remark}
Another type of Penrose tiling is one whose prototiles are commonly referred to as a ``kite" and a ``dart." In this version, the acute triangle has side lengths measuring 1 and $\phi+1$ as in Figure~\ref{k&d}.
As in the case of rhombs, in order to produce a nonperiodic tiling the triangles of the kites and darts are assembled so that the marked vertices shown in Figure~\ref{k&d} coincide. We will be chiefly concerned with Penrose tilings by kites and darts only with regard to Penrose deflation.
\end{remark}
\begin{figure}\center{\includegraphics[scale=.25]{6.jpg}}\caption{The six locally legal placements of tiles adjacent to a small tile.}\label{def}\end{figure}
\begin{figure}\center{\includegraphics[scale=.5]{7.jpg}}\caption{Problematic configurations resulting from the locally legal placements of Figure \ref{def}.}\label{defbad}\end{figure}
\begin{proposition} Given a Penrose tiling by either rhombs or kites and darts, for each small triangle (half tile with divisions shown in Figures \ref{rhombs} and \ref{k&d}) there is exactly one large triangle adjacent to it such that the edge between them may be erased producing an even larger triangular tile similar to the original small triangle.
\end{proposition}
\begin{proof}
Figure \ref{def} shows the six locally legal configurations of tiles adjacent to a small triangle in the rhomb case. Of these, all are globally legal except for the one on the top right (see Figure~\ref{pvstars}). The others may be combined into the possible problematic configurations shown in Figure~\ref{defbad}. In the leftmost configuration in Figure~\ref{defbad}, there are two tiles that might be combined with the center tile, and in the other two configurations there are none. However, these configurations are not globally legal (see Figure~\ref{pvstars} and \cite{Senechal}, p177). The remaining globally legal configurations satisfy the proposition.
The kite and dart case is similarly easy to verify. \end{proof}
\begin{definition}[Penrose Composition]\label{pdef} Penrose composition is the process by which each small triangle in a Penrose tiling is amalgamated with an adjacent large triangle. The adjacency rules ensure that each small triangle has exactly one such large triangle adjacent to it. When each small triangle is amalgamated with a large triangle in this way, the resulting tile is geometrically similar to the original small tile, and the tiling produced is a Penrose tiling.
\end{definition}
We will distinguish between Penrose tilings by constructing an identifying index sequence.
\begin{algorithm} [Penrose Index Sequence]
Given a Penrose tiling by triangles, label the tiles $s$ or $l$ depending on whether they are small or large.
\begin{enumerate}
\item Pick an arbitrary point $P$ interior to a tile.
\item If $P$ lies in a small tile, record an $s$. If it lies in a large tile, record an $l$.
\item Perform the Penrose composition process \ref{pdef} on the tiling, thus eliminating all original small triangles from the tiling. A new Penrose tiling will result in which the original large tiles are the new small tiles.
\item Return to step 2.
\end{enumerate}
Notice that the tiling alternates between a tiling by rhombs and a tiling by kites and darts.
This process may be repeated indefinitely, producing an infinite index sequence for the tiling relative to $P$.
\end{algorithm}
To identify a tiling independent of the choice of $P$, we define an equivalence relation $\sim$ on the set $ X_{p} $ of index sequences of Penrose tilings. (Note that $ X_{p} $ is the set of all sequences of $s$'s and $l$'s in which an $s$ is always followed by an $l$.) Let
$$ \{x_{n}\} \sim \{y_{n}\} \Leftrightarrow \exists\, m \text{ such that } x_{n} = y_{n} \,\forall n \geq m$$
that is, two sequences are in the same equivalence class if they eventually coincide. This yields the quotient set $ X_{p} / \sim $ representing the set of all Penrose tilings.
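For sequences inspected over a finite horizon, the relation $\sim$ can be illustrated in code. The sketch below is a finite-horizon stand-in for the true relation (which quantifies over the infinite tails), and the helper names are hypothetical.

```python
def valid_prefix(seq):
    # In a Penrose index sequence over {'s','l'}, an 's' is always
    # followed by an 'l' (never two 's' in a row).
    return all(not (a == b == 's') for a, b in zip(seq, seq[1:]))

def eventually_equal(x, y, horizon):
    # Finite-horizon stand-in for ~ : do x and y agree from some
    # index m on, within the first `horizon` terms?
    assert len(x) >= horizon and len(y) >= horizon
    return any(x[m:horizon] == y[m:horizon] for m in range(horizon))

x = "slslllsl" + "ls" * 20   # two sequences with different beginnings
y = "llllslsl" + "ls" * 20   # but identical tails
z = "sl" * 24                # a tail that never matches x's
assert valid_prefix(x) and valid_prefix(y) and valid_prefix(z)
assert eventually_equal(x, y, 40)       # x ~ y: they eventually coincide
assert not eventually_equal(x, z, 40)   # x and z never synchronize
```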
\section{Ammann Tilings and Their Combinatorial and Geometric Properties}
\begin{figure}\center{\includegraphics[scale=.5]{8.jpg}}\caption{The recomposition of Penrose thick and thin rhombs into Ammann tiles.}\label{recomp}\end{figure}
\begin{figure}\center{\includegraphics[scale=.5]{9.jpg}}\caption{The Ammann tiles created from recomposition.}\label{abc}\end{figure}
Ammann tilings are derived from Penrose tilings by the process of \textit{recomposition} (\cite{GS}, p548).
For the following algorithm we assume the orientation shown in Figure~\ref{recomp}.
\begin{algorithm}[Recomposition]\label{recompa}
Ammann's construction creates an Ammann tiling from a Penrose tiling by rhombs.
\begin{enumerate}
\item Choose a point $Q$ within a single thin rhomb. Without loss of generality, we assume $Q$ is in the lower half of the rhomb.
\item Connect $Q$ to the three closest vertices of the rhomb.
\item Copy this construction into all thin rhombs.
\item Copy $\triangle ABQ$ into the lower left of each thick rhomb such that $\overrightarrow{AB}\mapsto\overrightarrow{EF}$ and a new point is created $Q\mapsto R$ to yield $\triangle EFR$ within the thick rhomb.
\item Copy $\triangle DAQ$ into the upper right of each thick rhomb such that $\overrightarrow{DA}\mapsto\overrightarrow{GH}$ and a new point is created $Q\mapsto S$ to yield $\triangle GHS$ within the thick rhomb.
\item Connect points $R$ and $S$ within every thick rhomb.
\item Copy $\triangle EFR$ and $\triangle GHS$ to all of the thick rhombs.
\item Erase the edges of the original Penrose tiling.
\end{enumerate}
\end{algorithm}
This construction uniquely determines a tiling once $Q$ is chosen.
In order to ensure a non-periodic tiling of the plane, $Q$ must be chosen so that no two of $|AQ|,\,|BQ|,\,|CQ|,\,|RS|$ are equal. If any two of them are equal, the three resulting Ammann prototiles may admit some periodic tilings of the plane.
\begin{figure}\center{\includegraphics[scale=.5]{10.jpg}}\caption{The five coronas of an Ammann $A$ tile.}\label{acor}
\end{figure}
\begin{theorem}
Given an Ammann tiling $\mathcal{T}$ constructed from a Penrose tiling $\mathcal P$, the five coronas illustrated in Figure~\ref{acor} are the only possible coronas of an $A$ tile of $\mathcal T$.
\end{theorem}
\begin{proof}
(Sketch) As can be seen by inspection of Figure~\ref{abc}, there is a bijective correspondence between thick rhombs in $\mathcal P$ and type $A$ tiles in $\mathcal T$. So, to determine the possible coronas of $A$, we consider the possible Penrose coronas of a thick rhomb. The possible coronas of a Penrose thick rhomb are determined by the Penrose vertex atlas, and by conducting the recomposition algorithm on the tiles of these Penrose coronas it can be seen that these five are the only possible coronas of an Ammann type A tile.
\end{proof}
\begin{theorem}
The set of Ammann coronas of $A$ is a reduced corona atlas.
\end{theorem}
\begin{proof}
As in the previous theorem, let $\mathcal P$ be the Penrose tiling corresponding to an Ammann tiling $\mathcal T$. Assume for contradiction that the set of coronas of $A$ does not cover $\mathcal T$. Then, there is at least one corona of a $B$ or $C$ tile that does not contain any $A$ tiles.
\begin{figure}\center{\includegraphics [scale=.5]{11.jpg}}\caption{A locally legal construction of a $B$ tile that is not globally legal.}\label{illegal}\end{figure}
By inspection of Figure~\ref{abc}, we can see that a thick rhomb is needed to create a $C$ tile, so every $C$ tile has an $A$ tile in its corona. On the other hand, the patch shown in Figure~\ref{illegal} shows that it is locally legal to assemble three thin rhombs with Ammann markings to form a $B$ tile. However, this configuration is not globally legal, as is immediately apparent when we try to put another tile between lines $m$ and $n$. So this configuration is impossible; hence the set of five coronas of type $A$ tiles covers $\mathcal T$ and is thus a reduced corona atlas.
\end{proof}
\begin{figure}\center{\includegraphics[scale=.5] {12.jpg}}\caption{}\label{abcangles}\end{figure}
\begin{proposition} \label{cangles} Let $\theta=\frac{\pi}{5}$.
Based on the labeling in Figure~\ref{abcangles}, the following angle relations hold for Ammann tiles.
\begin{enumerate}
\item $\varepsilon=\gamma=\nu=2\theta$
\item $\iota=\lambda=4\theta$
\item $\beta+\sigma=6\theta$
\item $\chi+\rho+\omega=10\theta$
\item $\delta+\tau+\eta=10\theta$
\item $\alpha+\kappa+\mu=10\theta$
\item $\alpha=\eta$
\item $\mu=\rho$.
\end{enumerate}
\end{proposition}
\begin{figure}\center{\includegraphics [scale=.5]{13.jpg}}\caption{}\label{recomp2}\end{figure}
\begin{proof}
Since the thick rhombs have angles of $\frac{2}{5}\pi$ and $\frac{3}{5}\pi$, and the thin rhombs have angles of $\frac{1}{5}\pi$ and $\frac{4}{5}\pi$, we see in Figure~\ref{recomp2} that
\begin{enumerate}
\item $a+p=h+j=\frac{2}{5}\pi=2\theta$
\item$ b+d=n+m=\frac{3}{5}\pi=3\theta$
\item $a+n=s=\frac{1}{5}\pi=\theta$
\item $b+v=t+j=\frac{4}{5}\pi=4\theta.$
\end{enumerate}
Furthermore, we can see in Figure~\ref{abcangles} that
\begin{align*}
&\alpha=c, \quad \beta=b+m, \quad \chi=l, \quad \delta=f, \quad \varepsilon=p+a=\gamma=a+s+n, \quad \eta=c, \quad \iota=b+v,\\
&\kappa=u, \quad \lambda=t+j, \quad \mu=k, \quad \nu=h+j, \quad \rho=k, \quad \sigma=d+n, \quad \tau=e, \quad \omega=g.
\end{align*}
It is easy to verify that these relations give us the desired result.
\end{proof}
\begin{figure}\center{\includegraphics[width=3in] {14.jpg}}\caption{}\label{abcedges}
\end{figure}
\begin{proposition}\label{cedges}
Referring to the labeling in Figure~\ref{abcedges}, the following edge congruences hold.
\begin{enumerate}
\item $a=c=e=f=g=n$
\item $b=h=i=o$
\item $j=k=l=m$
\item $d=p$
\end{enumerate}
\end{proposition}
\begin{proof}
The proposition is evident by inspection of Figures~\ref{abcedges} and \ref{abcangles}.
\end{proof}
\begin{theorem}\label{abcaperiodic}
Let $\mathcal P$ be a Penrose tiling and let $\mathcal T$ be a tiling obtained via the recomposition process in Algorithm \ref{recompa}. Let $\{A, B, C\}$ denote the protoset of $\mathcal T$ as labeled in Figure \ref{abc}. If $\mathcal T'$ is any other tiling admitted by $\{A, B, C\}$, then there exists a Penrose tiling $\mathcal P'$ such that $\mathcal T'$ is obtained from $\mathcal P'$ by recomposition.
\end{theorem}
\begin{proof}
Because the Ammann vertex atlas determines all Ammann tilings and all Ammann tilings derived from Penrose tilings are non-periodic by construction, it is sufficient to show that all Ammann vertex stars are derived from globally legal Penrose patches.
\begin{figure}\center{\includegraphics[scale=.5] {15.jpg}}\caption{The eight globally legal Penrose vertex stars (\cite{Senechal}, p177).}\label{pvstars}\end{figure}
\begin{figure}\center{\includegraphics[scale=.5] {16.jpg}}\caption{The eight Ammann vertex stars derived from the eight Penrose vertex stars in Figure~\ref{pvstars}.}\label{avs1}\end{figure}
From the Penrose vertex atlas shown in Figure~\ref{pvstars}, we get the eight Ammann vertex stars of $\mathcal T$ shown in Figure~\ref{avs1}. Since these were constructed from the Penrose vertex star atlas, they are globally legal.
\begin{figure}\center{\includegraphics[scale=.4] {17.jpg}}\caption{Six globally legal vertex stars obtained from the five coronas of a type $A$ tile.}\label{acoronas2}\end{figure}
The recomposition process added the three vertices $Q$, $R$, and $S$, so we now consider their possible vertex stars. The five coronas of $A$ give us six more globally legal vertex stars shown in Figure~\ref{acoronas2}. These were also constructed from the recomposition of a Penrose tiling, so they are globally legal as well. This gives us fourteen globally legal vertex stars of $\mathcal T$.
\begin{figure}\center{\includegraphics[scale=.5] {18}}\caption{}\label{abcnumbers}\end{figure}
\begin{figure}\center{\includegraphics [scale=.4]{19.jpg}}\caption{}\label{tenillegals}\end{figure}
\begin{figure}\center{\includegraphics [scale=.4]{20.jpg}}\caption{}\label{8,8}\end{figure}
Next, we use the angle and edge relations established in Propositions \ref{cangles} and \ref{cedges} to check whether there are any other locally legal Ammann vertex stars, and we find that there are only the ten shown in Figure~\ref{tenillegals}.
(This is where we use the condition that the segments constructed in Algorithm \ref{recompa} are of different lengths. If any were instead the same length, there would be more locally legal vertex stars than those shown here.) The central vertex of each vertex star in Figure~\ref{tenillegals} is labeled with the numbers of the vertices that meet there as per the numbering in Figure~\ref{abcnumbers}.
\begin{claim} Each of the ten vertex configurations in Figure~\ref{tenillegals} is not globally legal.\end{claim}
\begin{figure}\center{\includegraphics [scale=.5]{21.jpg}}\caption{No tile has two adjacent sides of length $i=h$ and angle between them of $10\theta-2\iota=2\theta$, so the empty triangle on the left can never be filled.}\label{9+7=!}\end{figure}
Notice that (7,13,9) and (7,11,9) are not globally legal because whenever vertices 7 and 9 meet, a space is created that cannot be filled (see Figure~\ref{9+7=!}).
Furthermore, when two instances of vertex 6 meet, vertices 7 and 11 also meet. But if vertices 7 and 11 meet, then vertex 9 meets there as well, because $\eta+\mu=10\theta-\kappa$. However, as stated above, (7,11,9) is not globally legal, so a vertex star where two instances of vertex 6 meet is not globally legal.
Therefore, (6,6,6,6,6), (6,6,6,6,5), (6,6,6,5,5), (6,6,5,5,5), (6,5,6,5,6), and (2,14,6,6) are not globally legal.
Now, we focus our attention on vertex stars (14,6,5,2) and (14,6,6,2). In both cases, only a $B$ tile can fit adjacent to the $C$ and $B$ tiles on the right side of the vertex stars (see Figure \ref{8,8}) because the angle between them is $10\theta-\nu-\rho=\kappa$ and the edge lengths are $m$ and $h$. This creates vertex (8,8) in which a tile would fit with a vertex angle of $2\theta$ between congruent edges of length $h=i=b=o$. Since no such tile exists, the vertex star (8,8) cannot be completed, so vertex stars (14,6,5,2) and (14,6,6,2) are not globally legal.
Finally, the left side of vertex star (14,5,5,2) contains vertex (3,4). Two edges of length $d$ meet there at an angle of $10\theta-\chi-\delta$. Since no tile fits in this space, the vertex star is not globally legal.
Therefore, each of the ten vertex stars in Figure \ref{tenillegals} is not globally legal, and the vertex star atlas of $\mathcal T$ contains only the aforementioned fourteen vertex stars, which are derived from $\mathcal P$. This proves the claim.
The Penrose vertex star atlas of eight vertex stars completely determines all Penrose tilings (\cite{Senechal}, p177). So, the set of fourteen vertex stars of $\mathcal T$ that are derived from $\mathcal P$ completely determines all tilings that can be derived from a Penrose tiling by the recomposition process in Algorithm \ref{recompa}. But the set of vertex stars of $\mathcal T$ is identical to the vertex star atlas of $\mathcal T'$. Therefore, an arbitrary Ammann tiling can be derived from a corresponding Penrose tiling.
\end{proof}
\begin{corollary}
The protoset $\{A,B,C\}$ is an aperiodic protoset.
\end{corollary}
\begin{proof}
Since Penrose tilings are non-periodic, by the previous theorem so too are $\mathcal T$ and $\mathcal T'$. Therefore, A, B, and C form an aperiodic protoset.
\end{proof}
\begin{remark}
The protoset $\{A,B,C\}$ has no adjacency rules, unlike the underlying Penrose rhombs. These results motivate the following definition.
\end{remark}
\begin{definition} \label{Adef} Using the edge labeling of Figure~\ref{abcedges} and the angle labeling of Figure~\ref{abcangles}, we define an Ammann tiling to be a tiling of the plane admitted by a protoset of three tiles, two pentagons and a hexagon, satisfying the edge relations
of Proposition \ref{cedges}
and the angle relations of Proposition \ref{cangles}
\end{definition}
\begin{remark}
The angle and edge conditions in the previous definition are equivalent to specifying the vertex star atlas of Ammann types.
\end{remark}
\begin{remark}
There are several types of tilings already referred to as Ammann tilings in the literature. Therefore, one should consider Definition~\ref{Adef} to be local to this paper.
\end{remark}
\section{Iterating Ammann Tilings}
\begin{figure}
\center{\includegraphics[scale=.5]{22.jpg}
\includegraphics[scale=.5]{22a.jpg}\caption{Algorithm \ref{ait} creates the three tiles of $\mathcal T'$ from the coronas of $A$ in $\mathcal T$. Note that if the tiles were rescaled by a factor of $\frac{1}{\phi}$ the new tiles would be similar in size to the originals.}\label{iter}}
\end{figure}
\begin{algorithm}[Ammann Iteration]\label{ait}
Let $\mathcal T$ be an Ammann tiling. By connecting vertices within the five coronas of $A$ as shown in Figure~\ref{iter} and then erasing the original edges, we create a new tiling of the plane. Examining all possible arrangements of the five coronas, we see that this process produces a tiling by the three prototiles shown in Figure~\ref{iter}. We label the new tiling $\mathcal T'$ and call the new tiles from corona 1 the type $A'$ tiles of $\mathcal T'$, the tiles from coronas 2 and 3 the type $B'$ tiles, and the tiles from coronas 4 and 5 the type $C'$ tiles. The angles and edges of the tiles of $\mathcal T'$ are labeled in reverse, \textit{e.g.}, if the tiles of $\mathcal T$ are labeled counter-clockwise, then those of $\mathcal T'$ are labeled clockwise.
\end{algorithm}
\begin{theorem}
Let $\mathcal T$ be an Ammann tiling.
The tiling $\mathcal T'$ obtained via Algorithm \ref{ait} is also an Ammann tiling.
\end{theorem}
\begin{proof}
By construction, as shown in Figure~\ref{iter}, the tiles in $\mathcal T'$ obey
\begin{enumerate} \item $a'=c'=e'=f'=g'=n'$\item $b'=h'=i'=o'$\item $k'=m'$.
\end{enumerate}
Since in $\mathcal T$, $a=e=g$, we have $j'=k'=m'=l'$. Also, it can easily be shown that only an $A$ tile would fit between the $C$ and $B$ tiles on the far right in corona 5 in Figure~\ref{135}. This yields the edge length equality $d'=p'$.
Therefore, we get the fourth relation in Proposition \ref{cedges} required of the edges in an Ammann protoset.
So, the algorithm preserves the edge congruence relations.
By examining Figure~\ref{iter}, it is clear that the angles of $\mathcal T$ are not congruent to the angles of $\mathcal T'$. However, we aim to show that the angle restrictions imposed by our algorithm for constructing $\mathcal T'$ are the same as the restrictions present in $\mathcal T$.
\begin{figure}\center{\includegraphics [scale=.5]{23.jpg}\caption{}\label{135}}
\end{figure}
From Figure \ref{135} we get immediately:
\begin{enumerate}
\item $ \nu'= \varepsilon =2\theta$,
\item $\varepsilon' = \gamma'=\nu=2\theta$,
\item $\iota' =\lambda=4\theta$,
\item $\lambda'=\varepsilon+\gamma=4\theta$,
\item $\mu'= \rho' $, and
\item $\alpha'=\eta'$.
\end{enumerate}
The circle in the upper left of corona 1 in Figure~\ref{135} shows that $\delta' +\eta' +\tau' =10\theta$.
The circle on the upper left of corona 3 shows that $\alpha'+\kappa'+\mu'=10\theta$.
The circle in the lower right of corona 5 shows that $\chi'+\rho' +\omega' =10\theta$.
Now we turn our attention to the lower right of corona 1. The larger of the two arcs marks angle $\beta'$ and the smaller marks angle $\sigma'$. Recall from the labeling in Figure~\ref{abcangles} that $\lambda=4\theta$ and $\nu=2\theta$.
From these we get $\beta' +\sigma'=\lambda+\nu=6\theta$.
This gives us angle restrictions
\begin{enumerate}
\item $\varepsilon'=\gamma'=\nu'=2\theta$ \item
$\iota'=\lambda'=4\theta $\item
$\beta'+\sigma'=6\theta$ \item
$\chi'+\rho'+\omega'=10\theta$ \item
$\delta'+\tau'+\eta'=10\theta $\item
$\alpha'+\kappa'+\mu'=10\theta $\item
$\alpha'=\eta' $\item
$\mu' =\rho'. $
\end{enumerate}
These are exactly the angle relations of Proposition \ref{cangles} required of an Ammann tiling. Therefore,
$\mathcal T'$ is an Ammann tiling in the sense of Definition \ref{Adef}.
\end{proof}
Recall that Penrose composition done twice on a Penrose tiling by rhombs produces another Penrose tiling by rhombs whose prototiles are similar to the originals but scaled by a factor of $\phi$. We define the notation
\begin{equation*}
\xymatrix {
& \mathcal P\ar[r]^{\text{Comp.$^2$}} &\mathcal P'}
\end{equation*}
for Penrose tilings $\mathcal P$ and $\mathcal P'$ by rhombs to refer to performing Penrose composition twice. Analogously, we define the notation
\begin{equation*}
\xymatrix {
& \mathcal T\ar[r]^{\text{Iter.}} &\mathcal T'}
\end{equation*}
for Ammann tilings $\mathcal T$ and $\mathcal T'$ to refer to Ammann iteration as defined in Algorithm \ref{ait}.
\begin{figure}\center{\includegraphics[scale=.5]{24.jpg}}\caption{The Ammann coronas with associated Penrose rhombs.}\label{corswithpen}
\end{figure}
\begin{figure}\center{\includegraphics[scale=.5]{25.jpg}}\caption{Divisions of Ammann tiles by Penrose rhombs of corresponding Penrose tiling $\mathcal P$.}\label{cutatiles}\end{figure}
\begin{theorem} \label{R=P'}
Given an Ammann tiling $\mathcal T$, let $\mathcal P$ be its underlying Penrose tiling. Let $\mathcal T'$ be the iterated Ammann tiling, let $\mathcal P'$ be the tiling produced by applying Penrose composition to $\mathcal P$ twice, and let $\mathcal R$ be the underlying Penrose tiling of $\mathcal T'$. Then $\mathcal P'=\mathcal R$.
\begin{equation*}
\xymatrix {\text{Penrose Tilings:}
& \mathcal P\ar[r]^{\text{Comp.$^2$}} &\mathcal P'\ar@(r,r)[dd]\\
\text{Ammann Tilings:}&\mathcal T\ar[u]\ar[r]^{\text{Iter.}} &\mathcal T'\ar[d]\\
\text{Penrose Tilings:}& ~&\mathcal R\ar@(r,r)[uu]
}
\end{equation*}
\end{theorem}
\begin{proof}
First, note that $\mathcal R$ is well defined by Theorem \ref{abcaperiodic}. (Although Ammann tilings were not specifically defined until after the proof of Theorem \ref{abcaperiodic}, the proof used only the edge and angle relations that were later used to define Ammann tilings, allowing us to use its result here.)
Next, consider the relationship between $\mathcal T$ and $\mathcal P$. The edges of $\mathcal P$ cut the tiles of $\mathcal T$ in exactly the same way for each type of Ammann tile, as shown in Figure~\ref{cutatiles} (each type $A$ tile of $\mathcal T$ is cut once by segment $\overline{2,5}$, $B$ tiles are cut twice by segments $\overline{8,6}$ and $\overline{10,6}$, and $C$ tiles are cut once by segment $\overline{12,14}$). Since $\mathcal T'$ is an Ammann tiling with corresponding Penrose tiling $\mathcal R$, $\mathcal R$ will cut the tiles of $\mathcal T'$ across the corresponding vertices. To show that $\mathcal P'=\mathcal R$, it is sufficient to show that $\mathcal P'$ makes exactly the same divisions in the tiles of $\mathcal T'$ as $\mathcal R$.
\begin{figure}\center{\includegraphics[scale=.5]{26.jpg}}\caption{The Ammann coronas with Penrose rhombs of $\mathcal P'$. }\label{T,P'}
\end{figure}
\begin{figure}\center{\includegraphics[scale=.5]{27.jpg}}\caption{The same coronas as Figure~\ref{T,P'} but with the tiles of $\mathcal T'$ drawn in. }\label{T,P'2}
\end{figure}
\begin{figure}\center{\includegraphics[scale=.5]{28.jpg}}\caption{The tiles of $\mathcal T'$ with the cuts made by $\mathcal P'$. }\label{T,P'3}
\end{figure}
Figure~\ref{T,P'} shows the five Ammann coronas of $\mathcal T$ in the context of all possible second coronas of the Ammann $A$ tiles. The Penrose rhombs of $\mathcal P'$ are superimposed. Iterating, we see in Figures~\ref{T,P'2} and \ref{T,P'3} that the tiles of $\mathcal T'$ are cut in the same way by $\mathcal P'$ as they are by $\mathcal R$. Therefore, $\mathcal P'=\mathcal R$.
\end{proof}
Since $\mathcal T'$ is an Ammann tiling, the iteration algorithm may be performed on $\mathcal T'$, and so on, yielding an infinite sequence of Ammann tilings corresponding to the infinite sequence of Penrose tilings created by the Penrose double composition process.
\section{Dynamics of the Ammann Iteration Process}
The Ammann iteration process does not preserve the exact shapes of the prototiles, making it difficult to compare Ammann tilings obtained through repeated iteration. On the other hand, the effect of double composition on Penrose tiles is easily described. Ammann iteration may thus be tracked by simultaneously tracking the change in the underlying Penrose tiling and the change in the relative location of $Q$ within the Penrose thin rhombs. Recall that the Penrose double composition process scales the prototiles by a factor of $\phi$. By rescaling both the Penrose and Ammann tilings by $\frac{1}{\phi}$ after each Ammann iteration (equivalently, each Penrose double composition), the dimensions of the tiles of the underlying Penrose tiling remain constant throughout, allowing us to quantitatively compare the location of $Q$ within the thin rhomb at each stage of repeated iteration.
We may thus study the dynamics of the iteration process on Ammann tilings by appealing to the local isomorphism theorem and tracking the movement of $Q$ in an Ammann tiling with respect to the (changing) underlying Penrose tiling.
Since each Ammann iteration reverses the direction of the labeling of the Ammann prototiles, the location of $Q_n$ will alternate between the lower right and lower left quadrants of the reference thin Penrose rhomb. In order to simplify our analysis, we let the sequence $\{Q_0, Q_1, Q_2, \dots, Q_n,\dots\}$ represent the movement of $Q$ under repeated iteration, but with the odd elements of the sequence reflected through the principal (long) axis of the reference thin rhomb. We locate $Q$ within a Penrose thin rhomb using polar coordinates based along one edge of the rhomb. Accordingly, we locate $Q_n$ by the parameters $(r_n, \theta_n)$.
\begin{figure}\center{\includegraphics[scale=.5]{29.jpg}\caption{}\label{findQ2}}\end{figure}
Since we will first examine what happens to $Q$ under one application of the iteration process, we denote $Q':=Q_1$. We locate $Q'$ in corona 3 illustrated in Figure~\ref{findQ2} and notice that in this case, the point $Q$ is identical to the point $Q'$, but the orientation of the reference triangle has changed.
\begin{figure}\center{\includegraphics[scale=.5]{30.jpg}\caption{The enlarged triangle from corona 3), Figure~\ref{findQ2}. Notice that $Q$ is located within the dashed triangle by $(r,\theta)$ and within the solid triangle by $(p,\theta')$.}\label{findQ}}\end{figure}
Using the law of cosines on the triangle in Figure~\ref{findQ}, we get:
$$p^2=r^2 +\phi^2-2r\phi\cos{\theta}.$$
Normalizing, so that $\frac{p}{\phi}=r'$, we arrive at the formula
\begin{equation}
r'=\frac{\sqrt{r^2+\phi^2-2r\phi\cos\theta}}{\phi}.
\label{r'}
\end{equation}
Again, using the law of cosines on the triangle in Figure~\ref{findQ}, we get
\begin{equation}
\begin{aligned}
r^2&=\phi^2+p^2-2p\phi\cos{\theta'}
\\& =\phi^2+(r')^2\phi^2-2\phi^2r'\cos{\theta'}.
\end{aligned}
\nonumber
\end{equation}
Rearranging we have
$$
\cos{ \theta'}={\left(\frac{\phi^2+(r')^2\phi^2-r^2}{2\phi^2r'}\right)},
$$
and substituting for $r'$ using (\ref{r'}), we have
$$
\cos{\theta'}= \frac{\phi^2+(\frac{\sqrt{r^2+\phi^2-2r\phi\cos\theta}}{\phi})^2\phi^2-r^2}{2\phi^2\frac{\sqrt{r^2+\phi^2-2r\phi\cos\theta}}{\phi}},
$$
which simplifies to
\begin{equation}
\cos \theta'=\frac{\phi-r\cos\theta}{\sqrt{r^2+\phi^2-2r\phi\cos\theta}}.
\end{equation}
Thus, the movement of $Q$ is described by the assignment $(r,\theta)\mapsto(r',\theta')$ where
\begin{equation}
(r',\theta')=\left(\frac{\sqrt{r^2+\phi^2-2r\phi\cos\theta}}{\phi}, \quad\arccos{\left( \frac{\phi-r\cos\theta}{\sqrt{r^2+\phi^2-2r\phi\cos\theta}} \right)} \right)\label{newQ}.
\end{equation}
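The closed form (\ref{newQ}) can be sanity-checked numerically. The sketch below (in Python; the coordinate placement — $Q$ at polar coordinates $(r,\theta)$ from one vertex of the reference triangle, with the new reference vertex at distance $\phi$ along the polar axis — is our reading of Figure~\ref{findQ}) compares the formula with direct coordinate geometry:

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def iterate_Q(r, theta):
    """One step of the map (r, theta) -> (r', theta') from the closed form."""
    p = math.sqrt(r**2 + PHI**2 - 2 * r * PHI * math.cos(theta))
    c = min(1.0, (PHI - r * math.cos(theta)) / p)  # guard against rounding
    return p / PHI, math.acos(c)

def iterate_Q_geometric(r, theta):
    """Same step by direct coordinate geometry: Q at polar (r, theta) from one
    vertex, the new reference vertex at distance PHI along the polar axis."""
    qx, qy = r * math.cos(theta), r * math.sin(theta)
    p = math.hypot(qx - PHI, qy)          # law of cosines, computed directly
    return p / PHI, math.atan2(qy, PHI - qx)

# the closed form agrees with the geometry on a grid of sample points
for r in (0.2, 0.5, 0.9):
    for theta in (0.05, 0.3, math.pi / 5):
        a, b = iterate_Q(r, theta), iterate_Q_geometric(r, theta)
        assert abs(a[0] - b[0]) < 1e-12 and abs(a[1] - b[1]) < 1e-12
```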
\begin{theorem}
The map $(r,\theta)\mapsto(r',\theta')$ in (\ref{newQ}) has a unique attractive fixed point at $(r,\theta)=(\frac{1}{\phi},0)$.\label{limit}
\end{theorem}
\begin{proof}
Changing to Cartesian coordinates,
$$
x'=r'\cos{\theta'}
=\left( \frac{\sqrt{x^2+y^2+\phi^2-2x\phi}}{\phi}\right)
{\left( \frac{\phi-x}{\sqrt{x^2+y^2+\phi^2-2x\phi}} \right)}
=1-\frac{x}{\phi}
$$
and
$$\begin{aligned}
y'&= r'\sin{\theta'}
\\&=\frac{\sqrt{x^2+y^2+\phi^2-2x\phi}}{\phi}\sin\arccos {\left( \frac{\phi-x}{\sqrt{x^2+y^2+\phi^2-2x\phi}} \right)}.
\end{aligned}$$
Using the identity $\sin\arccos {x}=\sqrt{1-x^2}$, this simplifies to
$$\begin{aligned}
y'&=\frac{\sqrt{x^2+y^2+\phi^2-2x\phi}}{\phi}\left(\sqrt{1-\frac{(\phi^2-x\phi)^2}{\phi^2(x^2+y^2+\phi^2-2x\phi)}}\right)
\\&=\frac{\sqrt{\phi^2(x^2+y^2+\phi^2-2x\phi)-(\phi^4-2x\phi^3+x^2\phi^2)}}{\phi^2}
\\&=\frac{\abs{y}}{\phi}.
\end{aligned}$$
Since $0\leq \theta \leq \frac{\pi}{5}$, we are only interested in nonnegative values of $y$, so we simply write
\begin{equation}
y'=\frac{y}{\phi}.
\end{equation}
One more change of variables transforms this affine map to a linear map. Let
$$ u=x-\frac{1}{\phi}\text{ and }v=y.$$
Then
\begin{equation}
u'=-\frac{u}{\phi}\text{ and } v'=\frac{v}{\phi}.
\label{map}
\end{equation}
This is a linear map which fixes $(u,v)=(0,0)$ and no other point. Going back to the previous coordinate system we have that the map fixes $(x,y)=(\frac{1}{\phi},0)$, which is equivalent to the point $(r,\theta)=(\frac{1}{\phi},0)$.
Furthermore, since $0<\frac{1}{\phi}<1$, the eigenvalues for (\ref{map}) have absolute value less than 1, so the fixed point is attractive.
\end{proof}
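The proof can be illustrated numerically. A short Python sketch (floating-point tolerances are the only assumption) checks that $(\frac{1}{\phi},0)$ is fixed, that the Cartesian form agrees with the polar form of the map (\ref{newQ}), and that iterates contract toward the fixed point at rate $\frac{1}{\phi}$ per step:

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def step(x, y):
    """Cartesian form of the map: x' = 1 - x/phi, y' = y/phi."""
    return 1 - x / PHI, y / PHI

# (1/phi, 0) is a fixed point: 1 - 1/phi^2 = 1/phi, since phi^2 = phi + 1
fx, fy = step(1 / PHI, 0.0)
assert abs(fx - 1 / PHI) < 1e-12 and fy == 0.0

# consistency with the polar form at a sample point
x0, y0 = 0.6, 0.25
r, t = math.hypot(x0, y0), math.atan2(y0, x0)
p = math.sqrt(r**2 + PHI**2 - 2 * r * PHI * math.cos(t))
rp, tp = p / PHI, math.acos((PHI - r * math.cos(t)) / p)
assert abs(rp * math.cos(tp) - (1 - x0 / PHI)) < 1e-12
assert abs(rp * math.sin(tp) - y0 / PHI) < 1e-12

# iterates from an arbitrary start contract toward the fixed point
x, y = 0.9, 0.3
for _ in range(60):
    x, y = step(x, y)
assert abs(x - 1 / PHI) < 1e-10 and abs(y) < 1e-10
```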
\begin{remark}
The result of Theorem \ref{R=P'} allows us to identify an Ammann tiling by its underlying Penrose tiling along with the location of the point $Q$ within the thin Penrose rhombs. While Theorem \ref{limit} proves that $Q$ approaches a limit as the iteration process is repeated, it does not prove that the entire tiling approaches a limit. If we identify a Penrose tiling with a sequence of 0s and 1s, then Penrose composition is equivalent to performing the shift map on that sequence. This map is chaotic, and except in a limited number of special cases, the underlying Penrose tiling does not approach a limit.
\end{remark}
\section{Diffraction Properties and Possible Relations to Quasicrystals}
\begin{figure}\center{\includegraphics[scale=.5]{31.jpg}\includegraphics[scale=1]{31a.jpg}\caption{Left: the vertices of a patch of a Penrose tiling by rhombs. Right: its diffraction pattern.}\label{p}}
\end{figure}
\begin{figure}\center{\includegraphics[scale=.5]{32.jpg}\includegraphics[scale=1]{32a.jpg}\caption{Left: the vertices of a patch of a generic Ammann tiling. Right: its diffraction pattern.}\label{ga}}
\end{figure}
\begin{figure}\center{\includegraphics[scale=.5]{33.jpg}\includegraphics[scale=1]{33a.jpg}\caption{Left: the vertices of a patch of an Ammann limit tiling. Right: its diffraction pattern.}\label{a}}
\end{figure}
It is well known that a three-dimensional version of the Penrose tiling by rhombs is used to model quasicrystals with five-fold symmetry, so it is natural to ask whether Ammann tilings are similarly useful.
The symmetry of a quasicrystal is identified by examining its Fraunhofer diffraction pattern [the far-range x-ray diffraction pattern]. This pattern can be simulated for tilings using the Fourier transform. We describe the set of vertices of the tiling [molecules of the quasicrystal] by the generalized function that has mass one at each point of the set and is zero everywhere else. The diffraction pattern of the vertex set of the tiling is then given by the intensity plot of the magnitude squared of the Fourier transform of this generalized function.
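At a single wavevector $k$, the intensity described above is $|\sum_j e^{-2\pi i\, k\cdot x_j}|^2$ over the vertex set $\{x_j\}$. A minimal Python sketch (the $10\times 10$ integer grid is an illustrative stand-in for a tiling's vertex set, not data from the figures) shows the characteristic Bragg peaks of a periodic point set:

```python
import cmath

def intensity(points, k):
    """Diffraction intensity |sum_j exp(-2*pi*i * k . x_j)|^2 at wavevector k."""
    s = sum(cmath.exp(-2j * cmath.pi * (k[0] * x + k[1] * y)) for x, y in points)
    return abs(s) ** 2

# toy vertex set: a 10x10 patch of the integer lattice
grid = [(x, y) for x in range(10) for y in range(10)]

# at a reciprocal-lattice vector every term equals 1: a Bragg peak of height N^2
assert abs(intensity(grid, (1, 0)) - 100**2) < 1e-6
# at k = (1/2, 0) the terms alternate in sign and cancel completely
assert intensity(grid, (0.5, 0)) < 1e-9
```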
Figures \ref{p}, \ref{ga}, and \ref{a} each show the vertices of a tiling represented by small circles (left) and the inverted image of that set's diffraction pattern (right). Although the generic Ammann tiling (Figure \ref{ga}) does not show very clear symmetry, the Ammann limit tiling (Figure \ref{a}) shows very pronounced peaks, which are even brighter than those of the corresponding Penrose tiling (Figure \ref{p}). This may indicate that the Ammann limit tiling also models a quasicrystal.
| {
"timestamp": "2009-12-18T22:36:06",
"yymm": "0912",
"arxiv_id": "0912.3814",
"language": "en",
"url": "https://arxiv.org/abs/0912.3814",
"abstract": "This paper describes a recomposition of the rhombic Penrose aperiodic protoset due to Robert Ammann. We show that the three prototiles that result from the recomposition form an aperiodic protoset in their own right without adjacency rules. An interation process is defined on the space of Ammann tilings that produces a new Ammann tiling from an existing one, and it is shown that this process runs in parallel to Penrose deflation. Furthermore, by characterizing Ammann tilings based on their corresponding Penrose tilings and the location of the added vertex that defines the recomposition process, we show that this process proceeds to a limit for the local geometry.",
"subjects": "Metric Geometry (math.MG)",
"title": "A Family of Recompositions of the Penrose Aperiodic Protoset and Its Dynamic Properties",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9904406014994152,
"lm_q2_score": 0.7154239957834733,
"lm_q1q2_score": 0.7085849727108983
} |
https://arxiv.org/abs/1412.5398 | Nowhere-zero 5-flows on cubic graphs with oddness 4 | Tutte's 5-Flow Conjecture from 1954 states that every bridgeless graph has a nowhere-zero 5-flow. In 2004, Kochol proved that the conjecture is equivalent to its restriction to cyclically 6-edge-connected cubic graphs. We prove that every cyclically 6-edge-connected cubic graph with oddness at most 4 has a nowhere-zero 5-flow. | \section[]{Introduction}
An integer nowhere-zero $k$-flow on a graph $G$ is an assignment of a direction and a value from $\{1, \dots, k-1\}$ to each edge of $G$ such that Kirchhoff's law is satisfied at every vertex of $G$. This is the most restrictive definition of a nowhere-zero $k$-flow. But it is equivalent to more flexible definitions, see e.g.~\cite{Seymour_95}.
A cubic graph $G$ is bipartite if and only if it has a nowhere-zero 3-flow, and $\chi'(G)=3$ if and only if $G$ has a nowhere-zero 4-flow. Seymour \cite{Seymour_81} proved that every bridgeless graph has a nowhere-zero 6-flow.
So far this is the best approximation to Tutte's famous 5-flow conjecture, which is equivalent to its restriction to cubic graphs.
\begin{conjecture} [\cite{Tutte_54}] \label{5FC}
Every bridgeless graph has a nowhere-zero 5-flow.
\end{conjecture}
Kochol \cite{Kochol_04} proved that a minimum counterexample to the 5-flow conjecture is a cyclically 6-edge-connected cubic graph. Hence, it suffices to prove Conjecture \ref{5FC} for these graphs.
A classical parameter to measure how far a cubic graph is from being 3-edge-colorable is its {\it oddness}. The oddness, denoted by $\omega(G)$, of a bridgeless cubic graph $G$ is the minimum number of odd circuits in a 2-factor of $G$. Since $G$ has an even number of vertices, $\omega(G)$ is necessarily even. Furthermore, $\omega(G)=0$ if and only if $G$ is $3$-edge-colorable.
Jaeger \cite{Jaeger_88} showed that cubic graphs with oddness at most 2 have a nowhere-zero 5-flow.
Furthermore, a consequence of the main result in \cite{Steffen_10} is that cyclically $7$-edge-connected cubic graphs with oddness at most $4$ have a nowhere-zero 5-flow.
The following is our main theorem; it strengthens both of the aforementioned results.
\begin{theorem} \label{oddness4_5flow}
Let $G$ be a cyclically 6-edge-connected cubic graph. If $\omega(G) \leq 4$, then $G$ has a nowhere-zero 5-flow.
\end{theorem}
\section[] {Balanced valuations and flow partitions} \label{gamma_2}
In this section, we recall the concept of flow partitions, which was introduced by the second author in \cite{Steffen_10}.
Let $G$ be a graph and $S \subseteq V(G)$. The set of edges with precisely one end in $S$ is
denoted by $\partial_G(S)$.
An {\em orientation} $D$ of $G$ is an assignment of a
direction to each edge. For $S \subseteq V(G)$,
$D^-(S)$ ($D^+(S)$) is the set of edges of $\partial_G(S)$ whose head
(tail) is incident to a vertex of $S$.
The oriented graph is denoted by $D(G)$,
$d_{D(G)}^-(v) = |D^-(\{v\})|$ and $d_{D(G)}^+(v) = |D^+(\{v\})|$ denote the {\em indegree}
and {\em outdegree} of vertex $v$ in $D(G)$, respectively. The
degree of a vertex $v$ in the undirected graph $G$ is $d_{D(G)}^+(v) + d_{D(G)}^-(v)$, and it is denoted by $d_G(v)$.
Let $k$ be a positive integer, and $\varphi$ a function from the edge set of the directed graph $D(G)$
into the set $\{0, 1, \dots, k-1\}$. For $S \subseteq V(G)$ let
$\delta \varphi (S) = \sum_{e \in D^+(S)}\varphi(e) - \sum_{e \in D^-(S)}\varphi(e)$.
The function $\varphi$ is a $k$-flow on $G$ if $\delta \varphi(S) = 0$ for every $S \subseteq V(G)$.
The {\em support} of $\varphi$ is
the set $\{e \in E(G) : \varphi(e) \not = 0\}$, and it is denoted by $supp(\varphi)$.
A $k$-flow $\varphi$ is a nowhere-zero $k$-flow if $supp(\varphi) = E(G)$.
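As a concrete instance of these definitions, the following Python sketch (the graph $K_4$, its orientation and the particular flow values are illustrative choices, not taken from the paper) verifies a nowhere-zero $4$-flow; note that $\delta\varphi(\{v\})=0$ at every vertex already implies $\delta\varphi(S)=0$ for every $S$, since the contributions of edges inside $S$ cancel:

```python
from collections import defaultdict

def is_nowhere_zero_k_flow(arcs, k):
    """arcs: list of (tail, head, value) describing phi on D(G).
    Checks that values lie in {1, ..., k-1} and that delta_phi({v}) = 0 at
    every vertex; summing over v in S then gives delta_phi(S) = 0 for all S."""
    excess = defaultdict(int)
    for tail, head, value in arcs:
        if not 1 <= value <= k - 1:
            return False          # zero or out-of-range value: not nowhere-zero
        excess[tail] += value     # the arc leaves its tail ...
        excess[head] -= value     # ... and enters its head
    return all(e == 0 for e in excess.values())

# a nowhere-zero 4-flow on K_4, built as the sum of two circuit flows
k4_flow = [(0, 1, 1), (2, 1, 1), (0, 2, 2), (1, 3, 2), (2, 3, 1), (3, 0, 3)]
assert is_nowhere_zero_k_flow(k4_flow, 4)
```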
We will use balanced valuations of graphs, which were introduced by
Bondy \cite{Bondy} and Jaeger \cite{Jaeger_75}. A {\em balanced valuation} of a graph
$G$ is a function $f$ from the vertex set $V(G)$ into the real numbers, such that
$| \sum_{v \in X} f(v) | \leq | \partial_G(X) |$ for all $X \subseteq V(G)$. Jaeger proved
the following fundamental theorem.
\begin{theorem} [\cite{Jaeger_75}] \label{Thm_Jaeger_75}
Let $G$ be a graph with orientation $D$ and $k\geq 3$. Then $G$
has a nowhere-zero $k$-flow if and only if there
is a balanced valuation $f$ of $G$ with
$ f(v) = \frac{k}{k-2}(2d_{D(G)}^+(v) - d_G(v))$, for all $v \in V(G).$
\end{theorem}
In particular, Theorem \ref{Thm_Jaeger_75} says that a cubic graph $G$ has a nowhere-zero $5$-flow
if and only if there is a balanced valuation of $G$ with values in $\{ \pm \frac{5}{3}\}$.
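This correspondence can be illustrated on a small example. The Python sketch below (an orientation of $K_4$ carrying a nowhere-zero $4$-flow is our illustrative choice; for $k=4$ the induced valuation takes values $\pm 2$ rather than $\pm\frac{5}{3}$) verifies the balanced-valuation condition of Theorem \ref{Thm_Jaeger_75} by enumerating all vertex subsets:

```python
from itertools import combinations

# orientation of K_4 carrying a nowhere-zero 4-flow (cubic, so d(v) = 3)
arcs = [(0, 1), (2, 1), (0, 2), (1, 3), (2, 3), (3, 0)]
V = [0, 1, 2, 3]
k = 4

outdeg = {v: sum(1 for t, h in arcs if t == v) for v in V}
f = {v: k / (k - 2) * (2 * outdeg[v] - 3) for v in V}  # induced valuation
assert set(f.values()) <= {-2.0, 2.0}

def boundary(X):
    """Number of edges of K_4 with exactly one end in X (orientation ignored)."""
    return sum(1 for t, h in arcs if (t in X) != (h in X))

# the valuation is balanced: |sum_{v in X} f(v)| <= |boundary(X)| for all X
for size in range(len(V) + 1):
    for X in combinations(V, size):
        assert abs(sum(f[v] for v in X)) <= boundary(set(X))
```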
Let $G$ be a bridgeless cubic graph, and ${\cal F}_2$ be a $2$-factor of $G$ with odd circuits $C_1,\ldots,C_{2t}$,
and even circuits $C_{2t+1},\ldots, C_{2t+l}$ ($t \geq 0$, $l \geq 0$), and let ${\cal F}_1$ be the complementary $1$-factor.
A {\em canonical} $4$-edge-coloring, denoted by $c$, of $G$ with respect to ${\cal F}_2$ colors the edges of ${\cal F}_1$ with color $1$, the edges of the even circuits of ${\cal F}_2$ with $2$ and $3$, alternately, and the edges of the odd circuits of ${\cal F}_2$ with colors $2$ and $3$ alternately, except for one edge, which is colored $0$. Then, there are precisely $2t$ vertices $z_1,\ldots,z_{2t}$ where color $2$ is missing (that is, no edge which is incident to $z_i$ has color $2$).
The subgraph which is induced by the edges of colors $1$ and $2$ is a union of even circuits and $t$ paths $P_i$ of odd length with $z_1,\ldots,z_{2t}$ as ends.
Without loss of generality we can assume that $P_i$ has ends $z_{2i-1}$ and $z_{2i}$, for $i \in \{1,\ldots,t\}$.
Let $M_G$ be the graph obtained from $G$ by adding two edges $f_i$ and $f'_i$ between $z_{2i-1}$ and $z_{2i}$ for $i \in \{1,\ldots,t\}$.
Extend the previous edge-coloring to a proper edge-coloring of $M_G$ by coloring $f'_i$ with color $2$ and $f_i$ with color $4$.
Let $C'_1,\ldots,C'_s$ be the circuits of the $2$-factor of $M_G$ induced by the edges of colors $1$ and $2$ ($s \geq t$).
In particular, $C'_i$ is the even circuit obtained by adding the edge $f'_i$ to the path $P_i$, for $i \in \{1,\ldots,t\}$.
Finally, for $i \in \{1,\ldots,t \}$ let $C''_i$ be the $2$-circuit induced by the edges $f_i$ and $f'_i$.
We construct a nowhere-zero $4$-flow on $M_G$ as follows:
\begin{itemize}
\item for $i \in \{1, \dots, 2t+l\}$ let $(D_i,\varphi_i)$ be a nowhere-zero flow on the directed circuit $C_i$ with $\varphi_i(e)=2$ for all $e \in E(C_i)$;
\item for $i \in \{1, \dots, s\}$ let $(D'_i,\varphi'_i)$ be a nowhere-zero flow on the directed circuit $C'_i$ with $\varphi'_i(e)=1$ for all $e \in E(C'_i)$;
\item for $i \in \{1, \dots, t\}$ let $(D''_i,\varphi''_i)$ be a nowhere-zero flow on the directed circuit $C''_i$ (choose $D''_i$ such that $f'_i$ receives the same direction as in $D'_i$) with $\varphi''_i(e)=1$ for all $e \in \{f_i,f'_i\}$.
\end{itemize}
Then,
$$(D,\varphi)= \sum_{i=1}^{2t+l} (D_i,\varphi_i) + \sum_{i=1}^{s} (D'_i,\varphi'_i) + \sum_{i=1}^{t} (D''_i,\varphi''_i) $$
is the desired nowhere-zero $4$-flow on $M_G$.
By Theorem \ref{Thm_Jaeger_75}, there is a balanced valuation $w(v)=2(2d_{D(M_G)}^+(v) - d_{M_G}(v))$ of $M_G$. It holds that $|2d_{D(M_G)}^+(v) - d_{M_G}(v)|=1$, and hence, $w(v) \in \{ \pm 2 \}$ for all vertices $v$.
The vertices of $M_G$, and therefore, of $G$ as well, are partitioned into two classes $A=\{v | w(v)=-2\}$ and $B=\{v | w(v)=2\}$.
We call the elements of $A$ ($B$) the white (black) vertices of $G$, respectively.
\begin{definition}
Let $G$ be a bridgeless cubic graph and ${\cal F}_2$ a 2-factor of $G$. A partition of $V(G)$ into two classes $A$ and $B$ constructed as above with a canonical $4$-edge-coloring $c$, the $4$-flow $(D,\varphi)$ on $M_G$ and the induced balanced valuation $w$ of $M_G$ is called a \textbf{flow partition} of $G$ w.r.t.~${\cal F}_2$. The partition is
denoted by $P_G(A,B) (=P_G(A,B,{\cal F}_2,c,(D,\varphi),w))$.
\end{definition}
\begin{lemma} \label{diffends_lemma} Let $G$ be a bridgeless cubic graph and
$P_G(A,B)$ be a flow partition of $V(G)$ which is induced by a canonical nowhere-zero 4-flow with respect to an edge-coloring $c$. Let $x$, $y$ be the two vertices of an edge $e$. If $e \in c^{-1}(1) \cup c^{-1}(2)$, then $x$ and $y$ belong to different classes, i.e. $x \in A$ if and only if $y \in B$.
\end{lemma}
From a flow partition $P_G(A,B) (=P_G(A,B,{\cal F}_2,c,(D,\varphi),w))$ we easily obtain a flow partition
$P_G(A',B') (=P_G(A',B',{\cal F}_2,c,(D',\varphi'),w'))$ such that the colors on the vertices of $P_i$ are switched.
Let $(D',\varphi')$ be the nowhere-zero $4$-flow on $M_G$ obtained by using the same $2$-factor ${\cal F}_2$, the same $4$-edge-coloring $c$ of $G$ and the same orientations for all circuits, but for one $i \in \{1, \dots ,t\}$ use the opposite orientation
of $C_i'$ and $C_i''$ with respect to the one selected in $(D,\varphi)$.
\begin{lemma}\label{switching_lemma} Let $G$ be a bridgeless cubic graph and
$P_G(A,B)$ be the flow partition which is induced by the nowhere-zero $4$-flow $(D,\varphi)$. If $P_G(A',B')$ is the flow partition induced by the nowhere-zero $4$-flow $(D',\varphi')$, then
$A \setminus V(P_i) = A' \setminus V(P_i)$,
$B \setminus V(P_i) = B' \setminus V(P_i)$, $A \cap V(P_i) = B' \cap V(P_i)$ and $B \cap V(P_i) = A' \cap V(P_i)$.
\end{lemma}
\section{Proof of Theorem \ref{oddness4_5flow}}
Suppose to the contrary that the statement is not true. Then there is a cyclically 6-edge-connected cubic graph $G$, which
has no nowhere-zero 5-flow. Let ${\cal F}_2$ be a 2-factor of $G$ with
precisely four odd circuits $C_1,\dots,C_4$. Let $c$ be a canonical 4-edge coloring of $G$ and
$z_1, z_2, z_3, z_4$ be the four vertices where color $2$ is missing. Let $Z=\{z_1$,$z_2$,$z_3,z_4\}$.
Note that in any flow partition which depends on ${\cal F}_2$ and $c$, the vertices
$z_1$ and $z_2$ (and $z_3$ and $z_4$ as well) belong to different color classes.
By Lemma \ref{switching_lemma} there are flow partitions $P_G(A,B)$ and $P_G(A',B')$ of $G$
such that $\{z_1, z_3\} \subseteq A$, and $\{z_1, z_4\} \subseteq A'$. Hence, $\{z_2, z_4\} \subseteq B$
and $\{z_2, z_3\} \subseteq B'$.
Let $w$ be the function with $w(v)=-\frac{5}{3}$ if $v \in A$ and $w(v)=\frac{5}{3}$ if $v \in B$, and
$w'$ be a function with $w'(v)=-\frac{5}{3}$ if $v \in A'$ and $w'(v)=\frac{5}{3}$ if $v \in B'$.
We will prove that $w$ or $w'$ is a balanced valuation of $G$, and therefore, $G$ has a nowhere-zero $5$-flow by Theorem \ref{Thm_Jaeger_75}. Hence, there is no counterexample and Theorem \ref{oddness4_5flow} is proved.
\subsection{$Z$-separating edge-cuts}
Since $G$ has no nowhere-zero 5-flow, $w$ and $w'$ are not balanced valuations of $G$.
Then there are $S\subseteq V(G)$, $S' \subseteq V(G)$ with
$|\sum_{v \in S} w(v)| > | \partial_G(S) |$, and $|\sum_{v \in S'} w'(v)| > | \partial_G(S') |.$
We will prove some properties of the edge-cuts $\partial_G(S)$ and $\partial_G(S')$. We deduce the results
for $S$ only. The results for $S'$ follow analogously.
If $S=V(G)$, then $|\sum_{v \in S} w(v)|= 0 = | \partial_G(S) |$.
Therefore, $S$ and $S'$ are proper subsets of $V(G)$.
If $S = \{v\}$, then $|\sum_{v \in S} w(v)|= \frac{5}{3} \leq 3 = |\partial_G(S)|$.
Since $G$ is cyclically $6$-edge-connected, it has no non-trivial $3$-edge-cut and no $2$-edge-cut. Hence,
we assume that $|\partial_G(S)| \geq 4$ in the following.
Let $k$ ($k'$) be the absolute value of the difference between the number of black and white vertices in $S$ ($S'$). Hence,
$\frac{5}{3} k > |\partial_G(S)|$, and $\frac{5}{3} k' > |\partial_G(S')|$.
For $i \in \{0,1,2,3\}$, let $c_i=|\partial_G(S) \cap c^{-1}(i)|$ and $c'_i=|\partial_G(S') \cap c^{-1}(i)|$.
\begin{claim} \label{c1}$|\partial_G(S)| \equiv k \pmod 2$ and $|\partial_G(S')| \equiv k' \pmod 2$.
\end{claim}
{\it Proof.} If $k$ is even, then $|S \cap A|$ and $ |S \cap B|$ have the same parity, and if $k$ is
odd, then they have different parities. Since $S$ is the disjoint union of $S \cap A$
and $S \cap B$ it follows that $k$ and $|S|$ have the same parity. Since $G$ is cubic it
follows that $|\partial_G(S)| \equiv k \pmod 2$. $\square$
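The handshake argument behind the last step — in a cubic graph $3|S| = 2e(S) + |\partial_G(S)|$, so $|\partial_G(S)|$ and $|S|$ have the same parity — is easy to confirm exhaustively on a small cubic graph; the Python sketch below uses the Petersen graph as an illustrative choice:

```python
from itertools import combinations

# Petersen graph: outer 5-cycle, inner pentagram, and five spokes (3-regular)
edges = ([(i, (i + 1) % 5) for i in range(5)]
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
         + [(i, 5 + i) for i in range(5)])

def boundary_size(S):
    """Number of edges with exactly one end in S."""
    return sum(1 for u, v in edges if (u in S) != (v in S))

# 3|S| = 2 * e(S) + |boundary(S)| forces the parities to agree, for every S
for size in range(11):
    for S in combinations(range(10), size):
        assert boundary_size(set(S)) % 2 == len(S) % 2
```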
Let $q_A$ ($q_B$) be the number of white (black) vertices of $S$ where color $2$ is missing. Let $q=|q_A - q_B|$.
Since $Z$ has two black and two white vertices, it follows that $q \leq 2$.
\begin{claim} \label{c2}$|S \cap Z|= 2 = q$, and $|S' \cap Z|=2=q'$.
\end{claim}
{\it Proof.}
Since $c^{-1}(1)$ is a $1$-factor of $G$, Lemma \ref{diffends_lemma} implies that $k = c_1$. Hence,
\begin{equation}\label{eqn_c1}
c_1 > \frac{3}{5} |\partial_G(S)|.
\end{equation}
Furthermore, Lemma \ref{diffends_lemma} implies that $k \leq c_2 + q$. Hence,
\begin{equation}\label{eqn_c2}
c_2+q > \frac{3}{5} |\partial_G(S)|.
\end{equation}
Suppose to the contrary that $|S \cap Z|\not=2$. Thus, $q \leq 1$ and $c_2+1 \geq c_1$.
Hence, $|\partial_G(S)| \geq c_1 + c_2 \geq 2k-1$, and therefore, $\frac{5}{3}k \leq |\partial_G(S)|$ if
$|\partial_G(S)| \geq 6$.
If $|\partial_G(S)|=4$, then $k \leq 2$, and if $|\partial_G(S)|=5$, then $k \leq 3$.
In both cases, it follows that $\frac{5}{3}k \leq |\partial_G(S)|$, a contradiction.
Thus, $|S \cap Z|=2$, and therefore, $q \in \{0,2\}$. If $q = 0$, then $|\partial_G(S)| \geq c_1+c_2 \geq 2k$, a contradiction.
Hence, $q = 2$. $\square$
\begin{claim} \label{c3} $|\partial_G(S)|=6$, $c_1=4$ and $c_2=2$, and $|\partial_G(S')|=6$, $c'_1=4$ and $c'_2=2$.
\end{claim}
{\it Proof.}
If $|\partial_G(S)| = 4$, then $k \geq 3$. Hence, $c_1 = 3$ and $c_2 =1$. The edge of $\partial_G(S) \cap c^{-1}(2)$
is contained in a circuit of ${\cal F}_2$ whose edges are not in $c^{-1}(1)$. Hence, $2 \geq c_1 = k$, a contradiction.
If $|\partial_G(S)| = 5$, then $k \geq 4$. We deduce a contradiction as in the case before.
Now suppose to the contrary that $|\partial_G(S)|> 6$.
Since $c_1 > \frac{3}{5} |\partial_G(S)|$, $c_2 > \frac{3}{5} |\partial_G(S)| - 2$, and $c_1 + c_2 \leq |\partial_G(S)|$, it follows
that $|\partial_G(S)| > \frac{6}{5} |\partial_G(S)| - 2 $. Therefore, $|\partial_G(S)|<10$.
If $|\partial_G(S)|=7$, then $c_1 \geq 5$ and $c_2 \geq 3$, a contradiction.
If $|\partial_G(S)|=8$, then $c_1 = 5$ and $c_2 = 3$, a contradiction to Claim \ref{c1} since $c_1 = k$.
If $|\partial_G(S)|=9$, then $c_1 \geq 6$ and $c_2 \geq 4$, a contradiction.
Hence, $|\partial_G(S)|=6$ and $c_1 \geq 4$ and $c_2 \geq 2$. That leaves the unique possibility $c_1=4$ and $c_2=2$. $\square$
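The case analysis above is a finite check: writing $B=|\partial_G(S)|$, the constraints are $c_1 > \frac{3}{5}B$ by (\ref{eqn_c1}), $c_2 > \frac{3}{5}B - 2$ by (\ref{eqn_c2}) with $q=2$, $c_1+c_2 \le B$, and $B \equiv c_1 \pmod 2$ by Claim \ref{c1} together with $k=c_1$. The following Python enumeration (a sketch for the interested reader, not part of the proof) confirms that $(B,c_1,c_2)=(6,4,2)$ is the only solution with $4 \le B \le 9$:

```python
# Enumerate B = |bd_G(S)| and the counts c1, c2 of edges of colors 1 and 2
# in bd_G(S), subject to the constraints derived in Claims 1-3:
#   c1 > (3/5) B,  c2 > (3/5) B - 2,  c1 + c2 <= B,  B == c1 (mod 2).
solutions = [
    (B, c1, c2)
    for B in range(4, 10)            # the proof shows 4 <= B <= 9
    for c1 in range(B + 1)
    for c2 in range(B + 1 - c1)      # enforces c1 + c2 <= B
    if 5 * c1 > 3 * B and 5 * c2 > 3 * B - 10 and (B - c1) % 2 == 0
]
print(solutions)  # prints [(6, 4, 2)]
```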
\begin{claim} \label{c4}
$G[S]$ and $G[S']$ are connected.
\end{claim}
{\it Proof.} If $G[S]$ is not connected, then there is $E \subseteq \partial_G(S)$ such that
$G-E$ has at least two components $K_1$ and $K_2$. Since $G$ does not have a 2-edge-cut or a non-trivial 3-edge-cut,
it follows that $|E|=3$ and one of $K_1$, $K_2$ is a single vertex. Hence, $\frac{5}{3}k \leq |\partial_G(S)|$, a contradiction. $\square$
\begin{definition} \label{D_bad} A 6-edge-cut $E$ of $G$ is \textbf{bad} with respect to a flow partition $P_G(A^*,B^*)$
if it satisfies the following two conditions:
\begin{enumerate}[i)]
\item $|E \cap c^{-1}(1)| = 4$ and $|E \cap c^{-1}(2)| = 2$,
\item $E$ partitions the vertices $z_1$, $z_2$, $z_3$ and $z_4$ into two sets $\{z_{i_1},z_{i_2}\}$, $\{z_{i_3},z_{i_4}\}$, which are in
different components of $G-E$ and $\{z_{i_1},z_{i_2}\} \subseteq A^*$ or $\{z_{i_1},z_{i_2}\} \subseteq B^*$.
\end{enumerate}
\end{definition}
Note that $\{z_{i_1},z_{i_2}\} \subseteq A^*$ if and only if $\{z_{i_3},z_{i_4}\} \subseteq B^*$. Further,
only condition $ii)$ depends on the flow partition; condition $i)$ depends on the canonical 4-edge-coloring of
$G$, which remains unchanged throughout the proof. From the previous results we deduce:
\begin{claim} \label{c5}
$\partial_G(S)$ is bad w.r.t.~$P_G(A,B)$ and $\partial_G(S')$ is bad w.r.t.~$P_G(A',B')$.
\end{claim}
Bad $6$-edge-cuts are the only obstacles in $G$ to having a nowhere-zero 5-flow.
In order to deduce the desired contradiction, we will show that
the two bad $6$-edge-cuts guaranteed by Claim \ref{c5} cannot both exist.
Recall that, $z_1$ and $z_3$ receive the same color in $P_G(A,B)$, and that $z_1$ and $z_4$ receive the same color in $P_G(A',B')$.
For $i \in \{2,3,4\}$, let ${\cal S}_i = \{V : V \subseteq V(G) \mbox{ and } \{z_1,z_i\} \subseteq V\}$ and let
${\cal E}_i = \{\partial_G(V) : V \in {\cal S}_i \}$
be the corresponding set of edge-cuts.
Since $z_1$ and $z_2$ have different colors in both $P_G(A,B)$ and $P_G(A',B')$, no edge-cut in ${\cal E}_2$
is bad with respect to either $P_G(A,B)$ or $P_G(A',B')$.
For $i \in \{3,4\}$, by Claim \ref{c5} there is a 6-edge-cut $E_i \in {\cal E}_i$ which is bad (with respect to $P_G(A,B)$ for $i=3$, and with respect to $P_G(A',B')$ for $i=4$).
By Claim \ref{c4}, $G-E_3$ consists of two components with vertex sets $X$ and $Y$, i.e.~$X \cup Y=V(G)$.
Analogously, $G-E_4$ consists of two components with vertex sets $X'$ and $Y'$.
Let $U_1=X \cap X'$, $U_2=Y \cap Y'$, $U_3=X \cap Y'$ and $U_4=Y \cap X'$.
Thus, $z_i \in U_i$ for $i \in \{1,\dots,4\}$, see Figure \ref{badcuts}.
\begin{claim}\label{small_component}
$|\partial_G(U_i)| \geq 5$. Moreover, $|\partial_G(U_i)|=5$ if and only if
$G[U_i]$ is a path with two edges, one of color $0$ and one of color $3$.
\end{claim}
{\it Proof.} If $G[U_i]$ has a circuit, then $|\partial_G(U_i)| \geq 6$ since $G$ is cyclically $6$-edge-connected.
If this is not the case, then $G[U_i]$ is a forest, say with $n$ vertices. Hence, $|\partial_G(U_i)| \geq n+2$.
Since $\partial_G(U_i) \subseteq E_3 \cup E_4$,
it follows that $\partial_G(U_i) \subseteq c^{-1}(1) \cup c^{-1}(2)$.
The two edges $z_ix_i$ and $z_iy_i$ incident to $z_i$ are colored $0$ and $3$, respectively.
Hence, $\{x_i, y_i\} \subseteq U_i$, $n \geq 3$, and $|\partial_G(U_i)| \geq 5$.
If $|\partial_G(U_i)| = 5$, then $|U_i|= 3$, and $G[U_i]$ is a path with two edges, one of color $0$ and one of color $3$.
$\square$
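The count behind the proof above is simple degree counting in a cubic graph: a forest on $n$ vertices with $c$ components has $n-c$ edges, so its boundary has $3n - 2(n-c) = n + 2c \ge n+2$ edges, with the path on $3$ vertices attaining the value $5$. A quick numerical sanity check of this identity (an illustration only):

```python
def forest_boundary(n, c):
    """Boundary edges of a forest on n vertices with c components that is an
    induced subgraph of a cubic graph: 3n - 2*(n - c) = n + 2c."""
    internal_edges = n - c
    return 3 * n - 2 * internal_edges

# every forest has c >= 1 components, so the boundary has at least n + 2 edges
bound_holds = all(
    forest_boundary(n, c) >= n + 2
    for n in range(1, 20)
    for c in range(1, n + 1)
)
```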
\begin{claim}\label{two_large_components}
$|\partial_G(U_i)| = 5$ for at most two of the four subsets $U_i$. Furthermore,
if there are $i,j$ such that $i \not = j$ and $|\partial_G(U_i)| = |\partial_G(U_j)| = 5$,
then $\{i,j\} \in \{\{1,2\}, \{3,4\}\}$.
\end{claim}
{\it Proof.} Since $E_3$ and $E_4$ are bad, each of them has exactly two edges of color $2$ and four edges of color $1$.
Hence, each of them intersects at most one circuit of ${\cal F}_2$. For each $i \in \{1, \dots ,4\}$, exactly one edge of $c^{-1}(0)$ lies in $G[U_i]$, and
hence, there are $j_1, j_2$ such that $j_1 \not= j_2$ and $U_{j_1}$, $U_{j_2}$ contain an odd circuit of ${\cal F}_2$.
Since $G$ is cyclically 6-edge-connected it follows that $|\partial_G(U_{j_1})| \geq 6$ and $|\partial_G(U_{j_2})| \geq 6$.
Let $i,j \in \{1, \dots, 4\}$ such that $i \not = j$ and $|\partial_G(U_i)| = |\partial_G(U_j)| = 5$.
By symmetry, it suffices to prove that $\{i,j\} \not= \{1,3\}$.
Suppose to the contrary that $\{i,j\} = \{1,3\}$. By Claim \ref{small_component}, $G[U_1]$ and $G[U_3]$ are paths of length two with edges colored $0$ and $3$. Further, $\partial_G(U_1)$ consists of three edges of color $1$ and two edges of color $2$, which belong to the
odd circuit $C_1$ of ${\cal F}_2$. Analogously, the two edges of color 2 of $\partial_G(U_3)$ belong to the odd circuit $C_3$ of ${\cal F}_2$.
Hence, both pairs of edges of color $2$ in $\partial_G(U_1)$ and $\partial_G(U_3)$ belong to $E_3$ and they are distinct, a contradiction since $E_3$ has only two edges of color $2$.
$\square$
For $i \not = j$ let $\partial_G(U_i,U_j)$ be the set of edges with one vertex in $U_i$ and the other one in $U_j$.
\begin{figure}
\centering
\includegraphics[width=8cm]{oddness4.eps}
\caption{$Z$-separating 6-edge cuts}\label{badcuts}
\end{figure}
\begin{claim}
The following relations hold:
\begin{itemize}
\item $|\partial_G(U_i,U_j)|=0$, for $\{i,j\} \in \{\{1,2\},\{3,4\}\}$.
\item $|\partial_G(U_i,U_j)|=3$, for $\{i,j\} \in \{\{1,3\}, \{1,4\}, \{2,3\}, \{2,4\}\}$.
\end{itemize}
\end{claim}
{\it Proof.} Recall that $|E_3|=|E_4|=6$. Hence, $|E_3 \cup E_4| \leq 12$. Due to Claim \ref{two_large_components}, we can assume
that $|\partial_G(U_1)| \geq 5$, $|\partial_G(U_2)| \geq 5$, $|\partial_G(U_3)|\geq 6$ and $|\partial_G(U_4)|\geq 6$. By adding up, we obtain $\sum_{i=1}^4 |\partial_G(U_i)| \geq 22$, where each edge of $E_3$ and $E_4$ is counted exactly twice. Hence, $|E_3 \cup E_4| \geq 11$.
If $|E_3 \cup E_4| = 11$, then exactly one edge, say $e$, belongs to $E_3 \cap E_4$.
If $e \in \partial_G(U_1,U_2)$, then $\partial_G(U_3)$ and $\partial_G(U_4)$ are disjoint sets of cardinality at least $6$, neither of which contains $e$. Hence,
$|E_3 \cup E_4| \geq 13 > 12$, a contradiction.
If $e \in \partial_G(U_3,U_4)$, then $\partial_G(U_1,U_4)$ or $\partial_G(U_2,U_3)$ has cardinality at most 2, say, without loss of generality,
$\partial_G(U_1,U_4)$. For the same reason, $\partial_G(U_1,U_3)$ or $\partial_G(U_2,U_4)$ has cardinality at most 2.
If $|\partial_G(U_1,U_3)| \leq 2$, then $|\partial_G(U_1)|\leq 4$, and if $|\partial_G(U_2,U_4)| \leq 2$, then $|\partial_G(U_4)|\leq 5$,
a contradiction (in both cases).
Hence, $|E_3 \cup E_4| = 12$, and therefore, $|\partial_G(U_i,U_j)|=0$ for $\{i,j\} \in \{\{1,2\}, \{3,4\}\}$.
Now,
$|\partial_G(U_i,U_j)|=3$, for $\{i,j\} \in \{\{1,3\}, \{1,4\}, \{2,3\}, \{2,4\}\}$ can be deduced easily. $\square$
Let $E'_3=E_3 \cap \partial_G(U_1)$, $E''_3=E_3 \cap \partial_G(U_2)$, and $E'_4=E_4 \cap \partial_G(U_1)$, $E''_4=E_4 \cap \partial_G(U_2)$, see Figure \ref{badcuts}.
Let $H = G[c^{-1}(1) \cup c^{-1}(2)]$. The components of $H$ are even circuits and the two paths $P_1$ and $P_2$,
where $P_1$ has the end vertices $z_1$, $z_2$, and $P_2$ has the end vertices $z_3$, $z_4$.
The paths $P_1$ and $P_2$ intersect both $E_3=E'_3 \cup E''_3$ and $E_4=E'_4 \cup E''_4$ in an odd number of edges, since both
$E_3$ and $E_4$ separate their end vertices.
By symmetry, we can assume that $|E(P_1) \cap E'_3|$ and $|E(P_1) \cap E''_4|$ are even, and hence, $|E(P_1) \cap E''_3|$ and $|E(P_1) \cap E'_4|$ are odd.
Furthermore, we assume that $|E(P_2) \cap E''_3|$ and $|E(P_2) \cap E''_4|$ are even, and hence, $|E(P_2) \cap E'_3|$ and $|E(P_2) \cap E'_4|$ are odd.
Note that every other possible choice leads to an analogous configuration.
The $6$-edge-cut $E'_3 \cup E'_4$ contains an odd number of edges of $E(P_1) \cup E(P_2)$. Since $E'_3 \cup E'_4 \subseteq E(H)$,
it follows that an odd number of edges of $E'_3 \cup E'_4$ are not in $E(P_1) \cup E(P_2)$, a contradiction, since
all other components of $H$ are circuits, and they intersect every edge-cut an even number of times.
Hence, at least one of $E_3$ and $E_4$ is not bad, contradicting our assumption that both of them are bad.
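The parity argument closing the proof rests on a standard fact: every circuit meets every edge-cut in an even number of edges. A toy Python verification on a $6$-circuit (an illustration only; the vertex sets $S$ are arbitrary):

```python
from itertools import combinations

# edges of the circuit C_6 on vertices 0..5
cycle_edges = [(i, (i + 1) % 6) for i in range(6)]

def crossings(edges, S):
    """Number of edges with exactly one endpoint in S, i.e. |edges meeting bd(S)|."""
    return sum((u in S) != (v in S) for u, v in edges)

# every edge-cut bd(S) meets the circuit in an even number of edges
all_even = all(
    crossings(cycle_edges, set(S)) % 2 == 0
    for r in range(7)
    for S in combinations(range(6), r)
)
```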
arXiv:1412.5398 (Combinatorics, math.CO), December 2014.
Title: Nowhere-zero 5-flows on cubic graphs with oddness 4.
https://arxiv.org/abs/1412.5398
Abstract: Tutte's 5-Flow Conjecture from 1954 states that every bridgeless graph has a nowhere-zero 5-flow. In 2004, Kochol proved that the conjecture is equivalent to its restriction on cyclically 6-edge-connected cubic graphs. We prove that every cyclically 6-edge-connected cubic graph with oddness at most 4 has a nowhere-zero 5-flow.
Recovery of the singularities of a potential from backscattering data in general dimension
https://arxiv.org/abs/1709.00748
Abstract: We prove that in dimension $n\ge 2$ the main singularities of a complex potential $q$ having a certain a priori regularity are contained in the Born approximation $q_{B}$ constructed from backscattering data. This is achieved using a new explicit formula for the multiple dispersion operators on the Fourier transform side. We also show that ${q-q_{B}}$ can be up to one derivative more regular than $q$ in the Sobolev scale. On the other hand, we construct counterexamples showing that in general it is not possible to have more than one derivative gain, sometimes even strictly less, depending on the a priori regularity of $q$.

\section{Introduction and main theorems}
The central problem in inverse scattering for the Schrödinger equation is to recover a potential $q(x)$, $x \in \RR^n$, from the scattering data, the so-called scattering amplitude $u_\infty$. The scattering amplitude measures the far field response of the Hamiltonian $H:=-\Delta + q$ to incident plane waves. In backscattering, as the name suggests, only the far field response in the direction opposite to the incoming wave is considered, or in other words, only the waves scattered back toward the source (the echoes).
The usual reconstruction procedure is to construct the Born approximation $q_B$ of the potential, a function on $\RR^n$ like $q$ itself, from the backscattering data contained in $u_\infty$. This is the linear approximation to the inverse problem, and it is widely used in applications.
From a mathematical point of view, an important question that is not completely answered is to establish how much information the Born approximation contains about the actual potential $q$. This problem can be approached in different ways. One is to look for uniqueness results, that is, to ask whether $q_B$ is enough to determine $q$ (this problem is still open, see the next section for references).
Motivated by the use of the Born approximation in applications, another approach is to ask how much and what kind of information about $q$ can be obtained just by looking at $q_B$, that is, in a very immediate way.
In this sense, in \cite{PSo} it was proposed that the Born approximation must contain the leading singularities of $q$. Since then, this approach has received a great amount of attention in different scattering problems. In the case of backscattering we mention, among others, \cite{GU,esw} for recovery of conormal singularities, \cite{OPS, Re} for recovery of singularities in $2$ dimensions, \cite{RV,RRe} in dimensions $2$ and $3$, and \cite{BM09,BM09quad} in odd dimension $n\ge3$.
The main objective of this work is to quantify as exactly as possible how much more regular than $q$ the difference $q-q_B$ can be in general, depending on the dimension $n$ and the a priori regularity of the potential $q$ measured in the Sobolev scale. The potential can be complex valued. We provide positive and negative results, which answer this question almost completely, except for potentials in a certain range of the Sobolev scale where there is still a gap between them (see Figure \ref{fig.dim2y3}).
To measure the regularity, we introduce the fractional derivative operator $\jp{D}^{\alpha}$, $\alpha\in \RR $ given by the Fourier symbol $\jp{\xi}^\alpha$ with $\jp{x} = (1+|x|^2)^{1/2}$, and the weighted Sobolev space $W_\delta^{\alpha,p}(\RR^n)$, $\delta \in \RR$,
$$W_\delta^{\alpha,p}(\RR^n) := \{ f\in \mathcal{S}'(\RR^n) : \norm{\jp{\cdot}^\delta \jp{D}^{\alpha} f}_{L^p(\RR^n)} <\infty\}. $$
We usually use the notation $L_\delta^p(\RR^n) := W^{0,p}_\delta(\RR^n)$ and $W^{\alpha,p}(\RR^n) := W^{\alpha,p}_0(\RR^n)$, also we say that $f\in W^{\alpha,p}_{loc}(\RR^n)$ if $\phi f\in W^{\alpha,p}(\RR^n)$ for every $\phi \in C^\infty_c(\RR^n)$.
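On a discrete level, $\jp{D}^{\alpha}$ is just multiplication by $\jp{\xi}^\alpha$ on the Fourier side. The following Python sketch (a periodic, finite-dimensional toy model, not used in the paper) applies the multiplier via a plain DFT; for $\alpha=0$ it is the identity, and for $\alpha>0$ it can only increase the $L^2$ norm since the multiplier is $\ge 1$:

```python
import cmath
import math

def bessel_potential(f, alpha, L=2 * math.pi):
    """Apply <D>^alpha, i.e. the Fourier multiplier (1 + |xi|^2)^(alpha/2),
    to samples of an L-periodic function, via a plain O(N^2) DFT."""
    N = len(f)
    F = [sum(f[j] * cmath.exp(-2j * math.pi * j * m / N) for j in range(N))
         for m in range(N)]
    for m in range(N):
        k = m if m <= N // 2 else m - N        # signed frequency index
        xi = 2 * math.pi * k / L
        F[m] *= (1 + xi * xi) ** (alpha / 2)
    return [sum(F[m] * cmath.exp(2j * math.pi * j * m / N) for m in range(N)) / N
            for j in range(N)]

norm = lambda v: math.sqrt(sum(abs(z) ** 2 for z in v))  # discrete L^2 norm
step = [1.0 if j < 8 else -1.0 for j in range(16)]       # a rough jump function
```

By Parseval's identity the norm inequality is exact; the discrete computation only illustrates it.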
As we shall see in the next section, the Born approximation $q_B$ is related to the potential through the Born series expansion,
\begin{equation*}
{q}_B \sim {q} + \sum_{j=2}^{\infty}{Q_{j}(q)},
\end{equation*}
where $Q_{j}(q)$ are certain multilinear operators describing the (multiple) dispersion of waves (we use the $\sim$ symbol to avoid claiming anything about convergence yet). We will call the $Q_{2}$ operator the double dispersion operator of backscattering. A key guiding principle is that in general $Q_j(q)$ is expected to be smoother as $j$ grows. We can introduce now the main theorems of this work.
\begin{theorem} \label{teo:main1}
Let $n\ge 2$ and $\beta \ge 0$. Assume that $q-q_B \in W_{loc}^{\alpha,2}(\RR^n)$ for every $q\in W^{\beta,2}(\RR^n)$ compactly supported, radial, and real. Then $\alpha$ necessarily satisfies,
\begin{equation} \label{eq:mainrange1}
\alpha \le \begin{cases}
\,\, 2\beta - (n-4)/2, & \text{if } m \le \beta < (n-2)/2,\\
\,\, \beta + 1, & \text{if } (n-2)/2 \le \beta<\infty ,
\end{cases}
\end{equation}
\begin{equation} \label{eq:m}
\text{where} \hspace{8mm} m = (n-4)/2 + 2/(n+1).
\end{equation}
\end{theorem}
\begin{theorem}[Recovery of singularities] \label{teo:main2}
Let $n\ge 2$ and $\beta\ge 0$. Assume that $q \in W^{\beta,2}(\RR^n)$ is compactly supported. Then $q-q_B \in W^{\alpha,2}(\RR^n)$, modulo a $C^\infty$ function, if the following condition also holds
\begin{equation} \label{eq:mainrange}
\alpha < \begin{cases}
\,\, 2\beta - (n-3)/2, & \text{if } (n-3)/2 <\beta < (n-1)/2,\\
\,\, \beta + 1, & \text{if } (n-1)/2 \le \beta<\infty .
\end{cases}
\end{equation}
\end{theorem}
See Figure \ref{fig.dim2y3} for a graphic representation of these results for $n=2$ and $n=4$.
{Theorem \ref{teo:main1}} is the first result giving upper bounds for the
maximum possible regularity that can be obtained from the Born
approximation in backscattering. As we shall see, condition
{(\ref{eq:mainrange1})} is a consequence of upper bounds for the
regularity of the $Q_{2}$ operator given by {Theorem \ref{teo:Q2count}}
below. The main reason we need $\beta \ge m$ and compact support is that
the convergence of the (high frequency) Born series in a Sobolev space
$W^{\alpha ,2}(\mathbb{R}^{n})$ is known only under these assumptions
(see {Proposition \ref{prop:convergence}} below). A remarkable consequence
of condition {(\ref{eq:mainrange1})} is that for $\beta <(n-2)/2$ and
$n>2$ it is not possible to reach the expected gain of one derivative
over the regularity of $q$ (see {Fig. \ref{fig.dim2y3}} for the cases
$n=2,4$). In fact, we reach the minimum value of $2/(n+1)$ in
{(\ref{eq:mainrange1})} for the upper bound of the derivative gain
$\alpha -\beta $ when $\beta =m$, which approaches $0$ as $n$ grows.
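The arithmetic behind the value $2/(n+1)$ can be checked directly from (\ref{eq:mainrange1}) and (\ref{eq:m}); a short Python sketch (an illustration only, not part of the argument):

```python
def m(n):
    # the threshold m from (eq:m)
    return (n - 4) / 2 + 2 / (n + 1)

def max_gain(n, beta):
    """Upper bound on the derivative gain alpha - beta allowed by (eq:mainrange1)."""
    if beta < (n - 2) / 2:
        return (2 * beta - (n - 4) / 2) - beta    # = beta - (n - 4) / 2
    return 1.0

# at beta = m the admissible gain is exactly 2/(n+1), which tends to 0 as n grows
gains_at_m = {n: max_gain(n, m(n)) for n in (4, 6, 10, 100)}
```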
{Theorem \ref{teo:main2}} is a consequence of new estimates of the
$Q_{j}$ operators for $n\ge 2$ and $j\ge 2$ given in {Theorem \ref{teo.Qj}} below. As far as we know, these are the first results of
recovery of singularities for every dimension $n$ in backscattering. We
remark that {Theorem \ref{teo:main1}} implies that a gain of one derivative
is the best possible result, and so the $1^{-}$ derivative gain in
{(\ref{eq:mainrange})} is optimal except for the limiting case
$\alpha = \beta +1$. In
\cite{RRe} it was shown that $q-q_{B}$ is in $W^{\alpha ,2}(
\mathbb{R}^{n})$ (modulo a $C^{\infty }$ function) with $n=2,3$ and
$\alpha <\beta +1/2$. Therefore, in dimension 2 we improve the previous
results for all $\beta \ge 0$ (see {Fig. \ref{fig.dim2y3}}), but in
dimension $3$ the result in \cite{RRe} is still the best result
for low a priori regularity $0\le \beta < 1/2$. Also, a similar result to {Theorem \ref{teo:main2}} has been obtained in \cite[Corollary 4.8]{BM09} in odd dimension $n\ge 3$ using a certain modified Born approximation.
On the other hand, {Theorems \ref{teo:main1} and \ref{teo:main2}} leave a gap of up to $1/2$ derivative when $\max (m,0) \le \beta <(n-1)/2$ between the
positive and negative results. A similar situation is found in the fixed
angle and full data scattering problems, where analogous results to
{Theorems \ref{teo:main1} and \ref{teo:main2}} have been proved in
\cite{fix} (see \cite{BFRV10} for the positive results in the case
of full data scattering). In backscattering, this gap has been partially
closed in dimension $3$ by the mentioned result in \cite{RRe} and
in dimension $2$ in \cite{BFRV13}, where a uniform $1^{-}$
derivative gain has been obtained using a weaker regularity scale than
the Sobolev scale $W^{\alpha ,2}$. We will make more observations about
this problem in the final remarks.
\begin{figure}
\centering
\begin{tikzpicture}[scale=1.5]
\draw[<->] (3,0) node[below]{$\beta$} -- (0,0) --
(0,2.3) node[left]{$\alpha-\beta$};
\draw [gray, very thin] (0.5,1) -- (0.5,0);
\draw[dashed, semithick,red, domain=0:0.5] plot (\x, {1});
\draw [semithick,blue](0,0.5) -- (0.5,1);
\draw [semithick,blue] (0.5,1) -- (3,1);
\draw [dash dot ] (0,0.5) -- (3,.5);
\node at (1,2) {$n=2$};
\node [left] at (0,1) { \tiny $1$ };
\node [left] at (0,0.5) { \tiny $\frac{1}{2}$ };
\node [below] at (0.5,0) { \tiny $\frac{1}{2}$ };
\end{tikzpicture}
\begin{tikzpicture}[scale=1.5]
\draw[<->] (3,0) node[below]{$\beta$} -- (0,0) --
(0,2.3) node[left]{$\alpha-\beta$};
\draw [gray, very thin] (1,1) -- (1,0);
\draw [gray,very thin] (1.5,1) -- (1.5,0);
\draw [gray,very thin] (0.4,0) -- (0.4,0.4);
\draw[dashed,red, semithick, domain=0:1.5] plot (\x, {min(1,\x)});
\draw [semithick,blue] (.5,0) -- (1.5,1);
\draw [semithick,blue] (1.5,1) -- (3,1);
\node at (1,2) {$n=4$};
\node [left] at (0,1) { \tiny $1$ };
\node [ below] at (0.57,0) { \tiny $\frac{1}{2}$ };
\node [below] at (1,0) { \tiny $1$ };
\node [below] at (1.5,0) { \tiny $\frac{3}{2}$ };
\node [below] at (0.33,0) { \tiny $m$ };
\end{tikzpicture}
\caption{The (red) dashed line represents the limitation on the regularity gain given in Theorem \ref{teo:main1} for $q-q_B$, and the solid (blue) line represents the positive results given in Theorem \ref{teo:main2}. When $n=2$, the dot dashed line represents the previously known positive results of \cite{RRe}.}
\label{fig.dim2y3}
\end{figure}
We introduce now the Sobolev estimates of the $Q_j$ operators. Consider a constant $C_0 \ge 1$ and let $0 \le \chi(\xi) \le 1$, $\xi\in \mathbb{R}^n$, be a smooth cut-off function such that
\begin{equation} \label{eq:cutoff1}
{\chi}(\xi)= 1 \;\; \text{if} \;\; |\xi|>2C_0 \;\; \text{and} \;\; \chi(\xi)=0 \;\; \text{if} \;\; |\xi|<C_0.
\end{equation}
We define the operator $\widetilde{Q}_{j}$ by the relation
\begin{equation}
\label{eq.cutoff}
\widehat{\widetilde{Q}_{j}(q)}(\xi ) := \chi (\xi )
\widehat{{Q}_{j}(q)}(\xi ),
\end{equation}
so that $Q_j(q)$ differs from $\widetilde Q_j(q)$ by a smooth function. The operators $Q_j(q)$ will be introduced in the following section, see (\ref{eq:Qjraw}).
\begin{theorem} \label{teo.Qj}
Let $n\ge 2$ and $j \ge 2$. Assume that $0 \le \beta < \infty$ and that the following condition holds
\begin{equation*}
\alpha < \begin{cases}
\,\, \beta + (j-1)(\beta - (n-3)/2), & \text{if } (n-3)/2 <\beta < (n-1)/2,\\
\,\, \beta + (j-1), & \text{if } (n-1)/2 \le \beta<\infty .
\end{cases}
\end{equation*}
Then for $q\in W_2^{\beta,2}(\RR^n)$ and $j=2$ we have the estimate
\begin{equation} \label{eq:estimateQ2}
\norm{\widetilde {Q}_2(q)}_{{W}^{\alpha,2}}\le C \norm{q}_{W^{\beta,2}_2}^2.
\end{equation}
Otherwise if $j\ge 3$ and $q\in W_4^{\beta,2}(\RR^n)$ we have that
\begin{equation} \label{eq:estimateQj}
\norm{\widetilde {Q}_j(q)}_{{W}^{\alpha,2}}\le C \norm{q}_{W_4^{\beta,2}}^j.
\end{equation}
\end{theorem}
To prove this result, we first obtain an explicit formula expressing the Fourier transform of ${\widetilde{Q}_{j}(q)}$ as certain principal value distributions acting on the radial parameters of an integral operator over the Ewald spheres (the spherical operator), see {Proposition \ref{prop:Qjstruct}} below. In the proof of {Theorem \ref{teo.Qj}} we use trace estimates to control the spherical
integrals, and a new method to reduce the estimate of the principal
value distributions to certain estimates of the spherical operators. One advantage of these techniques is that with the same effort we can prove estimates
for general dimension $n\ge 2$. In odd dimension, using very different
techniques, similar estimates for certain operators related to the $Q_{j}$ operators have been obtained in \cite[Theorem 1.1]{BM09quad} and
\cite[Theorem 1.2]{BM09}, for compactly
supported potentials. We mention that the estimate of the $
\widetilde{Q}_{3}$ operator for $n=3$ given in \cite{RRe} is still
the best estimate in the range $0 \le \beta <1/4$.
Finally, we give an upper bound for the regularity of the double dispersion operator, which constrains the amount of regularity of $q$ that one can expect to recover from the Born approximation, as stated in {Theorem \ref{teo:main1}}.
\begin{theorem} \label{teo:Q2count}
Let $0< \beta < \infty$ and assume that $ Q_2(q) \in {W}_{loc}^{\alpha,2}(\RR^n)$ for every potential ${q\in W^{\beta,2}(\RR^n)}$ radial, real and compactly supported. Then $\alpha$ necessarily satisfies
\begin{equation} \label{eq.remarkable}
\alpha \le \begin{cases}
\,\, 2\beta - (n-4)/2, & \text{if } 0 \le \beta < (n-2)/2,\\
\,\, \beta + 1, & \text{if } (n-2)/2 \le \beta<\infty .
\end{cases}
\end{equation}
\end{theorem}
The paper is structured as follows. In section \ref{sec:2} we introduce
with more detail the backscattering problem, and we show how to deduce
{Theorems \ref{teo:main1} and \ref{teo:main2}} respectively from {Theorems \ref{teo:Q2count} and \ref{teo.Qj}}. Section \ref{sec.PV} is dedicated to introducing the main result used for the estimate of the principal
value operators, and in section \ref{sec.Q2L2} we estimate the spherical
part $\widetilde{Q}_{2}(q)$. In section \ref{sec:Qj} we study the
general $\widetilde{Q}_{j}$ operators and we finish the proof of {Theorem \ref{teo.Qj}}. In section \ref{sec:implicitQj} we give the estimates
necessary to show the convergence of the Born series in Sobolev spaces,
and section \ref{sec:5} is devoted to proving {Theorem \ref{teo:Q2count}}.
\section{Convergence of the Born series in Sobolev spaces} \label{sec:2}
Let us introduce the backscattering inverse problem more rigorously (see, for example, \cite[chapter V]{eskin} for an introduction to Schrödinger scattering theory). We follow a similar exposition to that of \cite{fix}.
Consider a scattering solution $u_s(k,\theta,x)$, $k\in(0,\infty)$, $\theta\in \SP^{n-1}$, of the stationary Schrödinger equation satisfying
\begin{equation} \label{eq.int.1}
\begin{cases}
(-\Delta + q -k^2)u=0 \\
u(x) = e^{i k \theta \cdot x} + u_s(k,\theta,x) \\
\lim_{|x| \to \infty} (\frac{\partial u_s}{\partial r} - iku_s)(x) = o(|x|^{-(n-1)/2}),
\end{cases}
\end{equation}
where the last line is the outgoing Sommerfeld radiation condition (necessary for uniqueness). If $q$ is compactly supported, a solution $u_s$ of (\ref{eq.int.1}) has the following asymptotic behavior when $|x| \to \infty$
$$u_s(k,\theta,x) = C |x|^{-(n-1)/2}k^{(n-3)/2} e^{ik|x|} u_\infty(k,\theta,x/|x|) + o(|x|^{-(n-1)/2}) ,$$
for a certain function $u_\infty(k,\theta,\theta')$, $k\in (0,\infty)$, $\theta,\theta' \in \SP^{n-1}$. As mentioned in the introduction, $u_\infty$ is the so called scattering amplitude or far field pattern, and is given by the expression
\begin{equation} \label{eq.int.2}
u_\infty (k,\theta,\theta') = \int_{\RR^n} e^{-ik\theta'\cdot y} q(y) u(y) \, dy,
\end{equation}
where it is important to notice that $u$ also depends on $k$ and $\theta$ (for a proof of this fact when $q\in C^\infty_c(\RR^n)$ see for example \cite{notasR}).
Applying the outgoing resolvent of the Laplacian $R_k$ to the first line of (\ref{eq.int.1}), where
\begin{equation} \label{eq:resolvent1}
\widehat{R_k(f)}(\xi) = (-|\xi|^2+k^2 + i0)^{-1}\widehat{f}(\xi),
\end{equation}
we obtain the Lippmann-Schwinger integral equation
\begin{equation} \label{eq.int.3}
u_s=R_k(qe^{ik\theta \cdot (\cdot)}) + R_k(qu_s(k,\theta,\cdot)).
\end{equation}
The existence and uniqueness of scattering solutions of (\ref{eq.int.1}) follows from a priori estimates for the resolvent operator $R_k$ and the previous integral equation (\ref{eq.int.3}). In the case of real potentials, this can be shown with the help of Fredholm theory for $k >0$, see for example \cite{notasR}. Otherwise, since the norm of the operator $T(f)= R_k(qf)$ decays to zero as $k \to \infty$ in appropriate function spaces, we can also use a Neumann series expansion in (\ref{eq.int.3}) which will be convergent for $k >k_0$ (in general $k_0 \ge 0$ will depend on some a priori bound of $q$). For our purposes it is enough to consider $q \in L^r(\RR^n)$, $r>n/2$ and compactly supported. Notice that by the Sobolev embedding this is satisfied if $q\in W^{\beta,2}(\RR^n)$ with $\beta>(n-4)/2$. See \cite[p. 511]{BFRV10} for more details and references.
We can introduce now the inverse backscattering problem. If we insert (\ref{eq.int.3}) in (\ref{eq.int.2}), we can expand the Lippmann-Schwinger equation in a Neumann series, as we mentioned before. Then we obtain the Born series expansion relating the scattering amplitude and the Fourier transform of the potential:
\begin{align}
\nonumber u_\infty (k,\theta,\theta') = \widehat{q}(\xi) + \sum^l_{j=2}\int_{\RR^n} e^{-ik \theta'\cdot y} (qR_k)^{j-1}(q(\cdot)e^{ik\theta \cdot (\cdot)} )(y) \,dy\\
\label{eq.int.4} + \int_{\RR^n} e^{-ik \theta'\cdot y} (qR_k)^{l-1}(q(\cdot)u_s(k,\theta, \cdot) )(y) \,dy,
\end{align}
where $\xi = k(\theta'-\theta)$ and the last integral is the error term. Since we are considering complex potentials, $u_\infty (k,\theta,\theta')$ is not defined for $k \le k_0$, as we have seen. Therefore we also have to require $k>k_0$ in (\ref{eq.int.4}).
The problem of determining $q$ from the knowledge of the scattering amplitude is formally overdetermined in the sense that the data $u_\infty(k,\theta,\theta')$ is described by $2n-1$ variables, while the unknown potential $q(x)$ has only $n$. We avoid the overdetermination by reducing to the backscattering data, assuming only knowledge of $u_\infty(k,\theta,-\theta)$, for all $k > k_0$ and $\theta\in \SP^{n-1}$. For backscattering data the problem is formally well determined, and the Born approximation $q_{B}$ is defined by the identity,
\begin{equation} \label{eq:bornF}
\widehat{q_{B}}(\xi):= u_\infty(k,\theta,-\theta), \hspace{4mm} \text{where} \hspace{3mm} \xi= -2k\theta.
\end{equation}
Since $u_\infty(k,\theta,-\theta)$ is not defined for $k\le k_0$, from now on we consider that $q_B(x)$ is defined modulo a $C^\infty$ function.
By (\ref{eq:bornF}), the condition $k>k_0$ is equivalent to asking $|\xi|>2k_0$. Therefore, using the cut-off introduced before (\ref{eq.cutoff}) with $C_0>2k_0$, and assuming convergence of the series, we can write (\ref{eq.int.4}) as follows
\begin{equation} \label{eq.int.6}
\chi(\xi)\widehat{q}_B(\xi) = \chi(\xi)\widehat{q}(\xi) + \sum_{j=2}^{\infty}\widehat{ \widetilde Q_{j}(q)}(\xi) ,
\end{equation}
where $\widetilde Q$ was defined in (\ref{eq.cutoff}) and
\begin{equation} \label{eq:Qjraw}
\widehat{Q_{j}(q)}(\xi) =\int_{\RR^n} e^{ik \theta\cdot y} (qR_k)^{j-1}(q(\cdot)e^{ik\theta \cdot (\cdot)} )(y) \,dy,
\end{equation}
again with $\xi= -2k\theta$.
We examine now the question of the convergence in Sobolev spaces of the series (\ref{eq.int.6}), an essential step in the proof of Theorems \ref{teo:main1} and \ref{teo:main2}.
\begin{proposition} \label{prop:convergence}
Let $n\ge 2$, $j\ge 2$ and $\max(0,m) \le \beta<\infty$, where $m$ was defined in $(\ref{eq:m})$. If $q\in W^{\beta,2}(\RR^n)$ is compactly supported in $B_\rho$, the ball of radius $\rho$, then $\widetilde{Q}_j(q) \in W^{\alpha,2}(\RR^n)$
if $\alpha<\alpha_j$, with
\begin{equation} \label{eq:alfaj}
\alpha_j = \beta + (j-1) - \frac{n}{2} - \frac{(n-1)}{2} (j-2) \max{\left( 0,\frac{1}{2} - \frac{\beta}{n} \right )} .
\end{equation}
Moreover, for every $\alpha<\alpha_l$, $l\ge 2$ the series $\sum_{j=l}^{\infty} \widetilde Q_j(q),$
converges absolutely in $W^{\alpha,2}(\RR^n)$ provided we take ${C_0=C\norm{q}_{W^{\beta,2}}^{1/\varepsilon}}$ in $(\ref{eq:cutoff1})$ and $(\ref{eq.cutoff})$ for a large constant $C=C(n,\alpha,\beta,\rho)$ and a certain $\varepsilon =\varepsilon(n,\beta)>0$.
\end{proposition}
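The key feature of (\ref{eq:alfaj}) used below is that, for $\beta \ge \max(0,m)$, the slope of $\alpha_j$ in $j$ equals $1 - \frac{n-1}{2}\max\left(0,\frac12 - \frac{\beta}{n}\right) \ge \frac{2}{n+1} > 0$, so $\alpha_j \to \infty$ and, for any target $\alpha$, some $l$ with $\alpha_l > \alpha$ exists. A quick numerical check of this slope bound (a sketch, not part of the proof):

```python
def alpha_j(n, beta, j):
    """The regularity threshold alpha_j from (eq:alfaj)."""
    return (beta + (j - 1) - n / 2
            - ((n - 1) / 2) * (j - 2) * max(0.0, 0.5 - beta / n))

def m(n):
    return (n - 4) / 2 + 2 / (n + 1)   # the threshold m from (eq:m)

# slope of j -> alpha_j at the worst admissible regularity beta = max(0, m)
slopes = {n: alpha_j(n, max(0.0, m(n)), 3) - alpha_j(n, max(0.0, m(n)), 2)
          for n in (2, 3, 4, 6, 10)}
```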
This proposition improves the original result of \cite[Proposition 4.3]{RV} given for the range $m \le \beta \le n/2$ and later extended in \cite{RRe} for $\beta\ge n/2$. We have used certain properties of the fractional Laplacian $(-\Delta)^{s}$ to improve the value of $\alpha_j$ in dimension $n$ (see section \ref{sec:implicitQj}). It also improves the regularity gain given in \cite{RRe} for the $\widetilde Q_4$ operator with $n=3$. This would allow one to obtain the results on recovery of singularities in that paper without the very technical proof needed to estimate $\widetilde Q_4(q)$.
Using Proposition \ref{prop:convergence}, we can reduce the proof of Theorem \ref{teo:main2} to proving Theorem \ref{teo.Qj}.
\begin{proof}[Proof of Theorem $\ref{teo:main2}$]
Taking the inverse Fourier transform of (\ref{eq.int.6}), we can write {\it modulo a $C^\infty$ function}
\begin{equation} \label{eq:bornseries}
q(x) -q_B(x) = - \sum_{j=2}^\infty \widetilde{Q}_j(q)(x).
\end{equation}
Consider $\alpha$ and $\beta \ge 0$ satisfying condition (\ref{eq:mainrange}). Observe that by Theorem \ref{teo.Qj}, we have $\widetilde Q_j(q) \in W^{\alpha,2}(\RR^n)$, so we just need to study the convergence of the series. But, since the value of $\alpha_j$ in (\ref{eq:alfaj}) grows linearly with $j$, we can always find an integer $l$ such that $\alpha_l > \alpha$. As a consequence, if $q$ has compact support, by Proposition \ref{prop:convergence}, the series $\sum_{j=l}^\infty \widetilde{Q}_j(q)(x)$ converges in $W^{\alpha,2}(\RR^n)$.
\end{proof}
Similarly, the proof of Theorem \ref{teo:main1} can be reduced to Theorem \ref{teo:Q2count} and Proposition \ref{prop:convergence}.
\begin{proof}[Proof of Theorem $\ref{teo:main1}$]
Take $\alpha \ge 0$ and assume that we have that $q-q_{B} \in W^{
\alpha ,2}_{loc}(\mathbb{R}^{n})$ for every compactly supported, real
and radial potential $q\in W^{\beta ,2}(\mathbb{R}^{n})$. We are going
to prove that then necessarily $Q_{j}(q) \in W^{\alpha ,2}_{loc}(
\mathbb{R}^{n})$ for all $2 \le j < \infty$.
We denote by ${ q}_{B}(\lambda )$ the Born approximation of the
potential $q(\lambda ) = \lambda q$, where $\lambda \in (0,1)$. By the
multilinearity of the $\widetilde{Q}_{j}$ operators, the Born series
{(\ref{eq:bornseries})} for $q(\lambda )$ becomes
\begin{equation}
\label{eq:algebra}
{\lambda q}- { q}_{B}(\lambda ) = - \sum _{j=2}^{l-1} \lambda ^{j}{\widetilde{Q}
_{j}(q)} -\sum _{j=l}^{\infty } \lambda ^{j}{\widetilde{Q}_{j}(q)},
\end{equation}
modulo a $C^{\infty }$ function (which depends on $\lambda $).
By Proposition \ref{prop:convergence} we have that if $m\le \beta <
\infty $, we can choose $l$ in (\ref{eq:algebra}) such that
$\alpha < \alpha _{l}$. Then $\widetilde{Q}_j(q) \in W^{\alpha,2}(\mathbb{R}^n)$ for $l \le j <\infty$, and the series $\sum _{j=l}^{\infty } \lambda ^{j}
{\widetilde{Q}_{j}(q)} $ will converge absolutely in $W^{\alpha ,2}(
\mathbb{R}^{n})$.
Since by hypothesis we have that ${\lambda q}- { q}_{B}(\lambda )$ is
also a $W^{\alpha ,2}(\mathbb{R}^{n})$ function, by
(\ref{eq:algebra}) we have
\begin{equation*}
\sum _{j=2}^{l-1} \lambda ^{j} \widetilde{Q}_{j}(q) \in W_{loc}^{\alpha,2}(\mathbb{R}^n).
\end{equation*}
But, by choosing $\lambda_i \in (0,1)$ for every $2 \le i \le l-1$ such that $\det(\lambda_i^j) \neq 0$ (this is always possible, since it is a Vandermonde determinant), we obtain that, for all $2\le j \le l-1$, $\widetilde{Q}_j(q) \in W^{\alpha,2}_{loc}(\mathbb{R}^n)$.
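For clarity, the inversion behind this argument can be made explicit. Setting $g_i := \lambda_i q - q_B(\lambda_i) + \sum_{j=l}^\infty \lambda_i^{j}\widetilde{Q}_j(q)$, each $g_i$ belongs to $W^{\alpha,2}_{loc}(\mathbb{R}^n)$ modulo a $C^\infty$ function, and (\ref{eq:algebra}) becomes the linear system
\begin{equation*}
\sum_{j=2}^{l-1} \lambda_i^{j}\, \widetilde{Q}_{j}(q) = -g_i, \qquad 2 \le i \le l-1,
\end{equation*}
whose matrix $(\lambda_i^{j})_{i,j}$ has determinant $\prod_{i} \lambda_i^{2} \prod_{i<i'} (\lambda_{i'}-\lambda_i) \neq 0$ for distinct $\lambda_i \in (0,1)$; inverting it expresses each $\widetilde{Q}_j(q)$ as a finite linear combination of the $g_i$.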
To finish, notice that, by condition (\ref{eq.remarkable}) of
Theorem \ref{teo:Q2count}, we know that this implies that $\alpha $ must
be in the range given in (\ref{eq:mainrange1}).
\end{proof}
As we have mentioned in the introduction, the question of uniqueness of
the inverse scattering problem for backscattering data is still open.
In \cite{RU} it has been proved for $n=3$ that two potentials
differing in a finite number of spherical harmonics with radial
coefficients must be identical if they have the same backscattering
data. The question of uniqueness for small potentials was studied in
\cite{prosser4}. Generic uniqueness and uniqueness for small potentials
have been obtained in \cite{er3,st92} for dimensions 2 and 3
and in \cite{lagerg} for $n=3$. Similar results have been obtained
in odd dimension $n\ge 3$ in \cite{Uh01,BM09}, and in even dimension in \cite{Wa}. See \cite{RU} for references about the uniqueness problem and \cite{er3} for more results concerning the regularity of the backscattering map.
The recovery of singularities has been studied in other inverse
scattering problems. The case of full data has been studied in
\cite{PSo,PSe,PSS} (real potentials) and \cite{BFRV10,fix}
(complex potentials) and the case of fixed angle data in
\cite{ser} in dimension $2$, and \cite{R} in dimension
$n\ge 2$. The regularity gain has been improved recently in
\cite{fix}. Analogous problems have been formulated to
study the recovery of singularities of live loads in Navier elasticity, see
\cite{BFPRM1,BFPRM2}.
Before going to the next section, we want to highlight the following property of Sobolev norms that we will use frequently in this work.
\begin{remark}
\label{remark:Sob}
We have that $W^{\beta ,2}_{\delta }\subset W^{\beta ',2}_{\delta '}$
if $\beta \ge \beta '$ and $\delta \ge \delta '$. This follows from the
equivalence
\begin{equation*}
\lVert <\cdot >^{\delta }<D>^{\beta } f\rVert _{L^{2}(\mathbb{R}^{n})}
\sim \lVert <D>^{\beta } <\cdot >^{\delta }f\rVert _{L^{2}(\mathbb{R}
^{n})},
\end{equation*}
and the Plancherel theorem; see for example \cite[Definition 30.2.2]{HormanderIV}.
\end{remark}
\section{From the spherical integral to the P.V. integral} \label{sec.PV}
As we have explained in the introduction, the $Q_{j}$ operators that
appear in the Born series expansion of $q$ can be expressed in terms of certain spherical integrals, and principal value distributions acting on them. The usual strategy is to estimate the spherical part and then try to extend this estimate to the other terms. This is generally a very long and technical process that must be repeated case by case if the dimension or the value
of $j$ is changed (see \cite{RV,Re,RRe,BFRV13}). In this section we give a general method to reduce the estimate of the principal value distributions to the estimate of the spherical integrals.
First, we define the following distributions. For $f\in C^\infty_c((0,\infty))$ we put
\begin{equation} \label{eq:dandP}
d(f) = \int_0^\infty \delta(1-r)f(r) \, dr, \hspace{3mm} \text{and} \hspace{3mm}P(f) = \pv \int_0^\infty \frac{1}{1-r} f(r) \, dr,
\end{equation}
where $\delta$ denotes the Dirac delta distribution, as usual.
\begin{proposition} \label{prop:Q2struct}
Let $r\in (0,\infty)$ and consider the modified Ewald spheres defined by the equation
\begin{equation} \label{eq:ewald}
\Gamma_r(\eta):=\{\xi\in\RR^n: |\xi-\eta/2| = r|\eta|/2\},
\end{equation}
(see Figure $\ref{fig.ewald1}$ in section $\ref{sec:5}$ below). Then we have that
\begin{equation} \label{eq:Q2}
\widehat{Q_2(q)}(\eta) = (i \pi d + P) S_{r}(q)(\eta),
\end{equation}
where, if we denote by $\sigma_{r\eta}$ the Lebesgue measure of $\Gamma_r(\eta)$,
\begin{equation} \label{eq:Sr}
{S_r(q)}(\eta) := \frac{2}{|\eta|(1+r)} \int_{\Gamma_{r}(\eta)}\widehat{q}(\xi) \widehat{q}(\eta-\xi) \, \ds{r}(\xi).
\end{equation}
\end{proposition}
We omit the proof since it is just the case $j=2$ of Proposition \ref{prop:Qjstruct} below.
Motivated by the previous proposition, we introduce the following result to control the principal value term. It will also save a
great amount of work when studying the $Q_{j}$ operators with
$j>2$.
\begin{proposition} \label{prop:genPV}
Let $1 \le p <\infty$ and $\alpha \in \mathbb{R}$. Assume that there are $0<\delta<1$, $\tau \in \mathbb{R}$, $\gamma>0$ and $M>0$ such that the one-parameter family of $L^1_{loc}( \mathbb{R}^{n})$ functions $\{F_r\}_{r\in (0,\infty)}$ satisfies
\begin{enumerate}
\item For a.e.\ $\eta \in \mathbb{R}^{n}$ fixed, $\partial_r F_r(\eta)$ is a continuous function of $r$ for all $r\in (1-\delta,1+\delta)$, and in the same interval it satisfies the estimate
\begin{equation} \label{eq:pv:Fr1}
\lVert \partial_r F_r \rVert_{L_\tau^p} \le M .
\end{equation}
\item For every $r\in (0,\infty)$,
\begin{equation} \label{eq:pv:Fr2}
\lVert F_r\rVert_{L_\alpha^p} \le (1+r)^{-\gamma}M .
\end{equation}
\end{enumerate}
Then we have that
\begin{equation} \label{eq:pv:Fr3}
\lVert(i\pi d + P) F_r \rVert_{ L^p_{\alpha'}}\le C_2 M,
\end{equation}
for every $\alpha'<\alpha$ and $C_2=C_2(\delta,\alpha,\alpha',\tau,p,\gamma)$.
\end{proposition}
Notice that the value of $\tau $ in {(\ref{eq:pv:Fr1})} does not have any
influence on the value of $\alpha'$ in {(\ref{eq:pv:Fr3})}.
\begin{proof} By the definition of $d$ in {(\ref{eq:dandP})}, the bound $\lVert d \left( F_r \right) \rVert_{ L^p_{\alpha}}\le 2^{-\gamma}M$ follows directly by putting $r=1$ in {(\ref{eq:pv:Fr2})}. Therefore it remains to estimate the term
\begin{equation*}
{P(F_r)}(\eta) = \mathit{P.V.} \int_0^\infty \frac{1}{1-r} {F_r}(\eta) \,dr,
\end{equation*}
in $L^p_\alpha$. For some $s \ge 0$ that will be chosen later, set
\begin{equation} \label{eq:genPV.sing.1}
\delta_\eta := {\delta}{ <\eta>^{-s}}.
\end{equation}
Since for any $a>0$, $P.V. \int_{|1-r|<a} \frac{dr}{1-r} = 0$, we have that
\begin{align}
\nonumber P(F_r)(\eta) &= \int_{|1-r| \le \delta_\eta} \frac {F_r - F_1}{1-r}(\eta) \,dr +\int_{\delta_\eta <|1-r|<\delta} \frac{F_r(\eta)}{1-r}\,dr + \int_{\delta \le |1-r|} \frac{F_r(\eta)}{1-r}\,dr \\
\label{eq:noPV} &=: P_{A}(\eta) + P_{B}(\eta) + P_{C}(\eta),
\end{align}
where the $\mathit{P.V.}$ is no longer necessary, since we can cancel the singularity in the denominator thanks to the fact that $F_r(\eta)$ is $C^1$ in $(1-\delta,1+\delta)$ by the first condition. Applying Minkowski's integral inequality and estimate (\ref{eq:pv:Fr2}), we obtain that
\begin{equation}\label{eq:PV1:B}
\|P_{C} \|_{L^p_\alpha} \le \int_{ \delta<|1-r|} \frac{\|F_r\|_{L^p_\alpha} }{|1-r|} \, dr \le C(\delta,\gamma) M.
\end{equation}
By the fundamental theorem of calculus we have
\[ \frac{F_r(\eta)-F_1(\eta)}{1-r} = - \int^{1}_{0} \partial_u F_{u} (\eta) \big |_{u = u(t) } \, dt ,\]
where $u(t) = (r-1)t +1 $. Then the inequality
\[
<\eta>^{s/2} \, \le \delta^{1/2}|1-r|^{-1/2},
\]
which holds in the region $|1-r|\le\delta_\eta$ since there $<\eta>^{s} \le \delta\,|1-r|^{-1}$, yields
\begin{align*}
&\lVert P_A \rVert_{ L^p_{\alpha}} = \left( \int_{\mathbb{R}^n} <\eta>^{p\alpha}
\left |\int_{|1-r| < \delta_\eta} \int^{1}_{0} \partial_u F_{u}(\eta)\big |_{u=u(t)} \, dt \,dr\right|^p d\eta \right)^{1/p} \\
&\le\delta^{1/2} \left( \int_{\mathbb{R}^n} <\eta>^{p(\alpha -s/2)}
\left (\int_{|1-r| < \delta} |1-r|^{-1/2} \int^{1}_{0} \left | \partial_u F_{u}(\eta)\big |_{u=u(t)} \right | \, dt \,dr\right)^p d\eta \right)^{1/p} \\
&\le \delta^{1/2} \int_{|1-r| < \delta} \int^{1}_{0} |1-r|^{-1/2} \lVert \partial_u F_{u}\big |_{u=u(t)}\rVert_{L^p_{\alpha-s/2}} \, dt \,dr ,
\end{align*}
where to get the last line we have used Minkowski's inequality.
We have two cases. If in (\ref{eq:pv:Fr2}) and (\ref{eq:pv:Fr1}) we have $\alpha \le \tau$ we can choose $s = 0$, otherwise, if $\alpha>\tau$, we choose $s$ such that $\alpha-s/2 = \tau$.
In both cases by (\ref{eq:pv:Fr1}) we obtain
\begin{align}
\nonumber \lVert P_A \rVert_{ L^p_\alpha} &\le \delta^{1/2} \int_{|1-r| < \delta} \int^{1}_{0} |1-r|^{-1/2} \lVert \partial_u F_{u}\big|_{u=u(t)} \rVert_{L^p_{\tau}} \, dt \,dr \\
\label{eq:PV2:B} &\le 4\delta M.
\end{align}
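For completeness, the constant in (\ref{eq:PV2:B}) comes from the elementary computation
\begin{equation*}
\delta^{1/2} \int_{|1-r|<\delta} |1-r|^{-1/2}\, dr = \delta^{1/2}\cdot 2\int_0^{\delta} u^{-1/2}\, du = \delta^{1/2}\cdot 4\delta^{1/2} = 4\delta,
\end{equation*}
together with the uniform bound $\lVert \partial_u F_u \rVert_{L^p_\tau} \le M$ from (\ref{eq:pv:Fr1}).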
To finish we need to estimate $P_B$, which is non-zero only when $s>0$, that is, when $\alpha>\tau$.
We set $N(\eta) = - \log_2(\delta <\eta>^{-s})$ and consider the following dyadic decomposition,
\begin{align*}
P_{B}(\eta) :&= \int_{B } \frac{F_r(\eta)}{1-r} \,dr \\
&= \sum_{0 \le j<N(\eta)} \int_{\{2^{-(j+1)}<|1-r|<2^{-j}\}} \chi_{\{|1-r|<\delta_\eta \}}(r) \frac{F_r(\eta)}{1-r} \,dr .
\end{align*}
For $0 \le j < N(\eta)$ and $\eta$ fixed, the definition of $N(\eta)$ implies that ${2^j \le <\eta>^{s}}/\delta$, therefore
\begin{equation} \label{eq:diadic1:B}
|P_{B}(\eta)|\le \sum_{j=0}^\infty 2^{j+1} \chi_{(\delta 2^{j},\infty)}(<\eta>^{s})\int_{|1-r|<2^{-j}} |{F_r}(\eta)| \,dr .
\end{equation}
But observe that in the last line we have an expression of the kind
\[{P^{\lambda}}(\eta) := \chi_{(\delta{\lambda^{-1}},\infty)} (<\eta>^{s}) \int_{|1-r|\le \lambda} |{F_r}(\eta)| \, dr,\]
with $0<\lambda\le 1$. Computing its $L^p_{\alpha-\varepsilon}$ norm when $\varepsilon>0$ and applying Minkowski's integral inequality we obtain
\begin{equation} \label{eq:diadic2:B}
\lVert P^{\lambda}\rVert_{ L^p_{\alpha-\varepsilon}} \le {\lambda}^{\varepsilon/s} \int_{\{|1-r|\le {\lambda} \}} \lVert F_r \rVert_{ L^p_\alpha} \,dr \le {\lambda}^{1+\varepsilon/s} M,
\end{equation}
where we have used estimate (\ref{eq:pv:Fr2}), and that in the region where the characteristic function does not vanish we have that $<\eta>^{-\varepsilon} \le \delta^{-\varepsilon/s} \lambda^{\varepsilon/s}$. Hence, taking the $L^p_{\alpha'}$ norm of (\ref{eq:diadic1:B}) and applying estimate (\ref{eq:diadic2:B}) with $\varepsilon= \alpha-\alpha'$ yields
\begin{align}
\nonumber \lVert P_B \rVert_{ L^p_{\alpha'}} \le 2 \sum^{\infty}_{j=0} 2^{j} \lVert {P^{2^{-j}}} \rVert_{ L^p_{\alpha'}}
&\le 2\delta^{-\varepsilon/s} M \sum^{\infty}_{j=0} 2^{-j \varepsilon/s}\\
\label{eq:PV3:B}&\le C(\delta,\alpha,\alpha',\tau,p) M.
\end{align}
Observe that this is the first time we need the strict inequality $\alpha'<\alpha$ in the statement of the proposition. Therefore, since $P(F_r) = P_A+P_B+P_C$, we conclude the proof by putting together estimates (\ref{eq:PV1:B}), (\ref{eq:PV2:B}) and (\ref{eq:PV3:B}).
\end{proof}
In our case the family of functions $F_r$ is usually given by a multilinear operator acting on the potentials, as can be seen in \eqref{eq:Q2}, where $F_r= S_r(q)$.
\section{Sobolev estimates for the double dispersion operator} \label{sec.Q2L2}
In this section, we study in detail the spherical operator $S_{r}$ of
the double dispersion operator $\widetilde{Q}_{2}$ in order to prove
{Theorem \ref{teo.Qj}} for $j=2$. This section will also serve to illustrate
the approach used in Section \ref{sec:Qj} to obtain the main estimates
of the spherical operators related to the $\widetilde{Q}_{j}$ operators.
For notational convenience we define the operator
\begin{equation*}
{\widetilde{S}_{r}(q)}(\eta ) :=\chi (\eta ) {S_{r}(q)}(\eta ).
\end{equation*}
Then, multiplying both sides of equation {(\ref{eq:Q2})} by the smooth
cut-off $\chi (\eta )$ we get
\begin{equation}
\label{eq:Q2tilde}
\widehat{\widetilde{Q}_{2}(q)}(\eta ) = (i \pi d + P) \widetilde{S}
_{r}(q)(\eta ).
\end{equation}
Hence, the main idea to estimate the $\widetilde{Q}_{2}$ operator is to
apply {Proposition \ref{prop:genPV}} to the particular case $F_{r}= \widetilde{S}_{r}(q)$. We begin with the necessary estimates for
$ \widetilde{S}_{r}(q)$.
\begin{lemma} \label{lemma:SrQ2}
Let $n\ge 2$ and $q\in W_1^{\beta,2}(\RR^n)$ with $\beta\ge 0$. Then the estimate
\begin{equation*}
\norm{\widetilde S_r(q)}_{ L^2_\alpha}\le C{(1+r)^{-\gamma} } \norm{q}_{W_1^{\beta,2}}^2,
\end{equation*}
holds when
\begin{equation} \label{eq:range:S2}
\begin{cases}
\alpha \le \beta +(\beta- (n-3)/2), & \text{if } (n-3)/2 <\beta < (n-1)/2,\\
\alpha<\beta +1, & \text{if } (n-1)/2 \le \beta<\infty ,
\end{cases}
\end{equation}
for some real number $\gamma >0$ (possibly depending on $\beta$ and $\alpha$).
\end{lemma}
To simplify later computations we define the operator
$$ {\widetilde{K}_{r}(g_1,g_2)(\eta)} = \chi(\eta){K}_{r}(g_1,g_2)(\eta),$$
where
\begin{equation} \label{eq:K2}
{{K}_{r}(g_1,g_2)}(\eta) := \frac{1}{|\eta|}\int_{\Gamma_{r}(\eta)}|g_1(\xi)| |g_2(\eta-\xi)| \, \ds{r}(\xi).
\end{equation}
Then we have that
$$
\left| \widetilde{S}_r(q)(\eta) \right| \le \frac{2}{1+r}{\widetilde{K}_{r}(\widehat{q},\widehat{q})(\eta)},
$$
and therefore the proof of Lemma \ref{lemma:SrQ2} is an immediate consequence of the following lemma taking $\gamma =1- \lambda$.
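To see why this choice of $\gamma$ works, note that since $r \le 1+r$, the prefactor relating the two lemmas satisfies
\begin{equation*}
\frac{2}{1+r}\, r^{\lambda} \le 2\,(1+r)^{\lambda-1} = 2\,(1+r)^{-\gamma}, \qquad \gamma = 1-\lambda,
\end{equation*}
and $\gamma > 0$ precisely because $\lambda < 1$.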
\begin{lemma} \label{lemma.K.Q2}
Let $n\ge 2$ and $f_1,f_2 \in W_1^{\beta,2}(\RR^n)$ with $\beta\ge 0$. Then the estimate
\begin{equation} \label{eq.thm.sph.1}
\norm{\widetilde K_{r}(\widehat{f_1},\widehat{f_2})}_{ L^2_\alpha}\le C r^{\lambda} \norm{f_1}_{W_1^{\beta,2}} \norm{f_2}_{W_1^{\beta,2}},
\end{equation}
holds when condition $(\ref{eq:range:S2})$ is also satisfied, for some real number $0<\lambda<1$ (possibly depending on $\beta$ and $\alpha$).
\end{lemma}
In the proof we are going to use the following result.
\begin{lemma}[\cite{fix}, Lemma 3.3]\label{lemma.integrals}
Let $\SP_\rho\subset\RR^n$ be any sphere of radius $\rho$ and let $d\sigma_\rho$ be its Lebesgue measure.
Then for any $0< \lambda\le (n-1)/2$, we have that
\begin{equation*}
\int_{\SP_\rho} \frac{1}{|x-y|^{(n-1)-2\lambda}} \, d\sigma_\rho(y) \le C_\lambda \rho^{2\lambda},
\end{equation*}
for any $x\in\RR^n$, and for a constant $C_\lambda$ that only depends on $\lambda$.
\end{lemma}
This can be proved by direct computation (for a detailed proof see \cite[Appendix]{fix}).
\begin{proof}[Proof of Lemma $\ref{lemma.K.Q2}$]
Since by \eqref{eq:cutoff1}, $\chi(\eta) = 0$ for $|\eta|\le 1$, we have that $|\eta|^{-1} \le 2 \jp{\eta}^{-1}$ in the region where $\chi(\eta)$ does not vanish. Then
\begin{equation*}
\norm{\widetilde{K}_r({\widehat{f_1}}, \widehat{f_2})}_{L^2_\alpha}^2 \le C \int_{\RR^n} \jp{\eta}^{2\alpha-2} \left( \int_{\Gamma_{r}(\eta)} |\widehat{f_1}(\xi)| |\widehat{f_2}(\eta-\xi)| \,\ds{r}(\xi) \right)^2 d\eta.
\end{equation*}
Now, $\eta = (\eta-\xi) + \xi$, so if we choose any $0<c<1/2$, at least one of the conditions $|\xi|>c|\eta|$ and $|\eta-\xi|>c|\eta|$ must hold (otherwise we would have $|\eta| \le |\xi| + |\eta-\xi| \le 2c|\eta| < |\eta|$).
But observe now that the change of variables $\xi' = \eta -\xi$ leaves $\Gamma_r(\eta)$ and $\widetilde{K}_r({\widehat{f_1}}, \widehat{f_2})$ invariant, except that it interchanges the roles of $\widehat{f_1}$ and $\widehat{f_2}$. Therefore it is enough to study only the case $|\xi|>c|\eta|$, since the other then follows by applying this change of variables. We want to estimate
\begin{equation*}
I :=\int_{\RR^n} \jp{\eta}^{2\alpha-2} \left( \int_{\Gamma^+_{r}(\eta)} |\widehat{f_1}(\xi)| |\widehat{f_2}(\eta-\xi)| \,\ds{r}(\xi) \right)^2 d\eta,
\end{equation*}
$$ \text{where} \hspace{3mm} \Gamma_r^+(\eta) := \{\xi\in \Gamma_r(\eta):|\xi|>c|\eta|\}.$$
We introduce a real parameter $0<\lambda \le (n-1)/2$. Then by the Cauchy--Schwarz inequality we have
\begin{align}
\nonumber &I \le C \int_{\RR^n} \jp{\eta}^{2\alpha-2} \int_{\Gamma_{r}^+(\eta)} |\widehat{f_1}(\xi)|^2|\widehat{f_2}(\eta-\xi)|^2|\eta-\xi|^{n-1-2\lambda}\,\ds{r}(\xi) \times \dots\\
\nonumber &\hspace{60 mm} \dots \times \int_{\Gamma_{r}^+(\eta)}\frac{1}{|\eta-\xi|^{n-1-2\lambda}}\,\ds{r}(\xi)\, d\eta .
\end{align}
Since $\Gamma_r(\eta)$ has radius $r|\eta|/2$, using Lemma \ref{lemma.integrals} to bound the second integral we obtain
\begin{align}
\nonumber &I \le C r^{2\lambda}\int_{\RR^n} {\jp{\eta}^{2\alpha-2}}|\eta|^{2\lambda}\times \dots \\
& \nonumber \hspace{2cm}\dots \times \int_{\Gamma_{r}^+(\eta)} |\widehat{f_1}(\xi)|^2|\widehat{f_2}(\eta-\xi)|^2 \jp{\eta-\xi}^{n-1-2\lambda}\,\ds{r}(\xi)\, d\eta \\
\label{eq.Q2sph.1} &\le C r^{2\lambda}\int_{\RR^n} \int_{\Gamma_{r}(\eta)} |\widehat{f_1}(\xi)|^2{\jp{\xi}^{2\alpha-2+2\lambda}}|\widehat{f_2}(\eta-\xi)|^2 \jp{\eta-\xi}^{n-1-2\lambda}\,\ds{r}(\xi)\, d\eta,
\end{align}
using also that $\jp{\eta}^{2\alpha-2+2\lambda} \le C{\jp{\xi}^{2\alpha-2+2\lambda}}$, which follows from the fact that $|\xi| > c |\eta|$, if we impose the extra condition ${\alpha-1+\lambda}\ge 0$.
We are going to use the trace theorem to bound the second integral in {(\ref{eq.Q2sph.1})}. The fundamental point is that for spheres, the constant of the trace theorem can be taken to be $1$, independently of the radius of the sphere. See \cite[Proposition A.1]{fix} for an elementary proof of this fact. Then
\begin{align}
\nonumber &I \le \, Cr^{2\lambda} \int_{\RR^n} \int_{\RR^n} | \widehat{f_1}(\xi)|^2\jp{\xi}^{2\alpha-2+2\lambda}|\widehat{f_2}(\eta-\xi)|^2 \jp{\eta-\xi}^{(n-1)-2\lambda} d\xi\,d\eta \\
\nonumber & \, + Cr^{2\lambda}\int_{\RR^n} \int_{\RR^n} \left |\nabla \left( \widehat{f_1}(\xi)\jp{\xi}^{\alpha-1+\lambda}\right) \right |^2 |\widehat{f_2}(\eta-\xi)|^2 \jp{\eta-\xi}^{(n-1)-2\lambda} d\xi \,d\eta\\
\nonumber &\, + \, Cr^{2\lambda} \int_{\RR^n} \int_{\RR^n} | \widehat{f_1}(\xi)|^2\jp{\xi}^{2\alpha-2+2\lambda}\left |\nabla \left(\widehat{f_2}(\eta-\xi) \jp{\eta-\xi}^{(n-1)/2-\lambda}\right)\right|^2 d\xi\,d\eta.
\end{align}
Therefore changing the order of integration and using that by Plancherel theorem we have
\begin{equation*}
\int_{\RR^n} \left|\nabla (\widehat{f}(\xi)\jp{\xi}^{t} )\right|^2 \, d\xi \le C \norm{f}_{W^{t,2}_1}^2,
\end{equation*}
we obtain
$$I \le Cr^{2\lambda} \norm{f_1}_{W^{\alpha -1+\lambda,2}_1}^2\norm{f_2}_{W^{(n-1)/2-\lambda,2}_1}^2.$$
As we have explained before, in the case $|\eta-\xi|>c|\eta|$ we obtain the same estimate but interchanging the roles of $f_1$ and $f_2$. Putting both estimates together we get
\begin{align*}
&\norm{\widetilde{K}_r({\widehat{f_1},\widehat{f_2}})}_{L^2_\alpha} \\
&\le C r^{\lambda} \left( \norm{f_1}_{W^{\alpha -1+\lambda,2}_1}\norm{f_2}_{W^{(n-1)/2-\lambda,2}_1} + \norm{f_2}_{W^{\alpha -1+\lambda,2}_1}\norm{f_1}_{W^{(n-1)/2-\lambda,2}_1} \right).
\end{align*}
We also add the extra restriction $\lambda<1$; this is necessary to have a positive value of $\gamma$ in Lemma \ref{lemma:SrQ2}. Now, fix $\lambda$ such that
\begin{equation} \label{eq.par.Q2}
\beta = \alpha -1+\lambda,
\end{equation}
hence, the condition $\alpha -1+\lambda \ge 0$ used in the proof implies that we must have $\beta \ge 0$. As a consequence of (\ref{eq.par.Q2}), estimate (\ref{eq.thm.sph.1}) follows directly in the range $\beta \ge (n-1)/2$ (we are using Remark \ref{remark:Sob}). But, by the conditions imposed in the proof, we have to take into account the restrictions
\begin{equation}\label{restriccion_1}
\begin{cases}
0 < \lambda < 1 \\ 0 < \lambda \le \frac{n-1}{2}
\end{cases} \Longleftrightarrow \begin{cases}
\beta < \alpha < \beta +1 \\ \beta +1 - \frac{n-1}{2} \le \alpha < \beta +1.
\end{cases}
\end{equation}
We can discard the lower bounds for $\alpha$ using that $\norm{f}_{L^2_\alpha} \le \norm{f}_{L^2_{\alpha'}}$ always holds if $\alpha\le \alpha'$. Therefore only the restriction $ \alpha<\beta +1$ remains.
Otherwise, if $\beta$ is in the range $0 \le \beta < (n-1)/2 $, estimate (\ref{eq.thm.sph.1}) will follow if we add the extra condition
\begin{equation} \label{rest2}
(n-1)/2-\lambda \le \beta.
\end{equation}
Then, since $\lambda<1$, we must have $\beta>(n-3)/2 $ (the other conditions on $\lambda$ do not add new restrictions). Also, (\ref{eq.par.Q2}) and (\ref{rest2}) together imply that $\alpha \le 2\beta-(n-3)/2$, which is a stronger condition than $\alpha<\beta+1$ since $\beta<(n-1)/2$. Hence, we have obtained the ranges of the parameters given in the statement.
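Explicitly, combining $\lambda = \beta - \alpha + 1$ from (\ref{eq.par.Q2}) with (\ref{rest2}) gives
\begin{equation*}
\frac{n-1}{2} - (\beta - \alpha + 1) \le \beta
\;\Longleftrightarrow\;
\alpha \le 2\beta + 1 - \frac{n-1}{2} = 2\beta - \frac{n-3}{2},
\end{equation*}
which is the bound appearing in the first line of (\ref{eq:range:S2}).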
\end{proof}
\begin{lemma} \label{lemma:Kpoint}
Let $q \in \mathcal S'(\mathbb{R}^n)$ such that $\widehat{q}$ is smooth. Then, for every $\eta \neq 0$ fixed, $S_r(q)(\eta)$ is smooth in the $r$ variable. Moreover, we have the following pointwise inequality
\begin{equation} \label{eq:Kpoint}
\left | \partial_r S_{r}(q)(\eta) \right | \le C K_{r}(\widehat{q},\widehat{q})(\eta) + C |\eta|\sum_{i=1}^n K_{r}(\widehat{x_iq},\widehat{q})(\eta).
\end{equation}
\end{lemma}
In general, the constant $C$ in the estimate might depend on $\delta$, but this is harmless.
\begin{proof}
We centre the Ewald sphere in (\ref{eq:Sr}) at the origin with the change $\xi = \eta/2 + r|\eta/2|\theta$, where $\theta \in \SP^{n-1}$, to obtain
\begin{equation} \label{eq:derivative1}
S_{r}(q)(\eta) =\frac{r^{n-1}|\eta|^{n-2}}{2^{n-2}(1+r)} \int_{\SP^{n-1}}\widehat{q}\left(r\frac{|\eta|}{2}\theta + \frac{\eta}{2} \right) \widehat{q}\left(-r\frac{|\eta|}{2}\theta + \frac{\eta}{2}\right) \, d\sigma(\theta).
\end{equation}
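Here we have used that the change of variables scales the surface measure of $\Gamma_r(\eta)$ by its radius to the power $n-1$, that is,
\begin{equation*}
\int_{\Gamma_r(\eta)} g(\xi)\, d\sigma_{r\eta}(\xi) = \left(\frac{r|\eta|}{2}\right)^{n-1} \int_{\SP^{n-1}} g\left(\frac{\eta}{2} + \frac{r|\eta|}{2}\,\theta\right) d\sigma(\theta),
\end{equation*}
so the prefactor $\frac{2}{|\eta|(1+r)}$ in (\ref{eq:Sr}) becomes $\frac{2}{|\eta|(1+r)} \left(\frac{r|\eta|}{2}\right)^{n-1} = \frac{r^{n-1}|\eta|^{n-2}}{2^{n-2}(1+r)}$.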
Now we can compute derivatives in the $r$ variable. Consider $\eta$ fixed, then
\begin{align*}
\nonumber & \partial_r S_{r}(q)(\eta) = \\
\nonumber &= \frac{((n-1)r^{n-2}(1+r)-r^{n-1})|\eta|^{n-2}}{2^{n-2}(1+r)^2} \int_{\SP^{n-1}}\widehat{q}\left(r\frac{|\eta|}{2}\theta + \frac{\eta}{2} \right) \widehat{q}\left(-r\frac{|\eta|}{2}\theta + \frac{\eta}{2}\right) \, d\sigma(\theta) \\
\nonumber &+ \frac{r^{n-1}|\eta|^{n-1}}{2^{n-1}(1+r)} \int_{\SP^{n-1}}\theta \cdot\nabla \widehat{q}\left(r\frac{|\eta|}{2}\theta + \frac{\eta}{2} \right) \widehat{q}\left(-r\frac{|\eta|}{2}\theta + \frac{\eta}{2}\right) \, d\sigma(\theta) \\
&- \frac{r^{n-1}|\eta|^{n-1}}{2^{n-1}(1+r)} \int_{\SP^{n-1}} \widehat{q}\left(r\frac{|\eta|}{2}\theta + \frac{\eta}{2} \right)\theta \cdot \nabla \widehat{q}\left(-r\frac{|\eta|}{2}\theta + \frac{\eta}{2}\right) \, d\sigma(\theta).
\end{align*}
We have passed the derivative inside the integral since we are
integrating over a finite measure and $\widehat{q}$ is smooth. This implies
that $ S_{r}(q)(\eta )$ is smooth in the $r$ variable
for every $\eta \neq 0$. Also, the change of variables $\omega = -\theta $ shows that the
last two terms are identical. Hence, undoing the change to spherical
coordinates we get
\begin{align}
\nonumber \partial_r S_{r}(q)(\eta)=& \frac{(n-2)r+(n-1)}{r(1+r)^2}\frac{2}{|\eta|} \int_{\Gamma_r(\eta)}\widehat{q}(\xi) \widehat{q}(\eta-\xi) \, \ds{r}(\xi) \\
\label{eq:derivative2} +& \frac{2}{(1+r)} \int_{\Gamma_r(\eta)} \frac{(\xi-\eta/2)}{|\xi-\eta/2|} \cdot\nabla \widehat{q}(\xi) \widehat{q}(\eta-\xi) \, \ds{r}(\xi).
\end{align}
Therefore by (\ref{eq:K2}), if we fix some $0<\delta<1$, for $r\in (1-\delta,1+\delta)$ we obtain
\begin{equation*}
\left | \partial_r S_{r}(q)(\eta) \right | \le C K_{r}(\widehat{q},\widehat{q})(\eta) + C |\eta| K_{r}(|\nabla\widehat{q}|,\widehat{q})(\eta).
\end{equation*}
The estimate follows then using that
\begin{equation*}
K_{r}(|\nabla\widehat{q}|,\widehat{q})\le \sum_{i=1}^n K_{r}(\partial_i \widehat{q},\widehat{q})= C \sum_{i=1}^n K_{r}(\widehat{x_iq},\widehat{q}).
\end{equation*}
\end{proof}
From Lemmas \ref{lemma:SrQ2} and \ref{lemma:Kpoint} we get the following proposition.
\begin{proposition}
\label{prop:derSr}
Let $n\ge 2$ and fix some $0<\delta <1$. Then for every $r\in (1-
\delta ,1+\delta )$ and $q \in \mathcal{S}(\mathbb{R}^{n})$ we have
that
\begin{equation*}
\lVert \partial _{r} \widetilde{S}_{r}(q)\rVert _{ L^{2}_{\alpha -1}}
\le C \lVert q\rVert _{W^{\beta ,2}_{2}}^{2},
\end{equation*}
holds when $\alpha $ and $\beta \ge 0$ satisfy condition
${(\ref{eq:range:S2})}$.
\end{proposition}
Notice the appearance of the Sobolev space $W^{\beta,2}_2$ instead of $W^{\beta,2}_1$.
\begin{proof}
Multiplying (\ref{eq:Kpoint}) by $\chi(\eta)$ we get
\begin{equation*}
\norm{\partial_r \widetilde S_{r}(q)}_{L^2_{\alpha-1}}
\le C \norm{ \widetilde{K}_{r}(\widehat{q},\widehat{q}) }_{L_{\alpha-1}^2}+ C \sum_{i=1}^{n} \norm{\widetilde K_{r}(\widehat{x_iq},\widehat q)}_{ L^2_\alpha}.
\end{equation*}
Notice that we get the $L^2_{\alpha}$ norm in the last term due to the extra $|\eta|$ factor appearing in (\ref{eq:Kpoint}). Then, by Lemma \ref{lemma.K.Q2} we obtain the desired estimate using that
\begin{equation} \label{eq:pain}
\norm{\widetilde K_{r}(\widehat{x_iq},\widehat q)}_{ L^2_\alpha}\le C \norm{x_iq}_{W_1^{\beta,2}} \norm{q}_{W_1^{\beta,2}} \le C \norm{q}_{W_2^{\beta,2}}^2.
\end{equation}
The estimate $\norm{x_iq}_{W_1^{\beta,2}} \le C \norm{q}_{W_2^{\beta,2}}$ can be verified for integer $\beta$ and extended by interpolation to the general case.
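For instance, in the case $\beta = 0$ the claimed estimate reduces to
\begin{equation*}
\lVert x_i q \rVert_{L^2_1} = \lVert \jp{\cdot}\, x_i q \rVert_{L^2} \le \lVert \jp{\cdot}^2 q \rVert_{L^2} = \lVert q \rVert_{L^2_2},
\end{equation*}
since $\jp{x}\,|x_i| \le \jp{x}^2$; for higher integer $\beta$ one argues similarly after distributing derivatives with the Leibniz rule.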
\end{proof}
By Lemma \ref{lemma:SrQ2} and Proposition \ref{prop:derSr} we can apply Proposition \ref{prop:genPV} to estimate the $\widetilde{Q}_2$ operator, but we leave this for the next section.
\section{Sobolev estimates for the general \texorpdfstring{$\widetilde{Q}_j$}{Qj} operator } \label{sec:Qj}
In this section we prove Theorem \ref{teo.Qj}.
Let $\ell \ge 1$, and assume we have $\mb r \in (0,\infty)^{\ell}$, $\mb r = (r_1, \dots,r_{\ell})$ and $f\in C^{\infty}_c((0,\infty)^{\ell})$. We define the operators,
$$P_i,d_i : C^{\infty}_c((0,\infty)^{\ell}) \to C^{\infty}_c((0,\infty)^{\ell-1}),$$
following the notation introduced in (\ref{eq:dandP}),
\begin{align*}
d_i(f)(r_1,\dots,\widehat{r_i},\dots,r_\ell) &:= \int_0^\infty \delta(r_i-1)f(\mb r) \, dr_i, \\
P_i(f)(r_1,\dots,\widehat{r_i},\dots,r_\ell) &:= \pv \int_{0}^\infty \frac{1}{1-r_i} f(\mb r) \, dr_i,
\end{align*}
where $\widehat{r_i}$ indicates that this coordinate is deleted in the list. Hence, if $\ell=1$, $d_i(f)$ and $P_i(f)$ are just scalars. Also, if $\mb r \in (0,\infty)^\ell$ we define the manifold,
$$\Gamma_{\mb r}(\eta) = \Gamma_{ r_1}(\eta) \times \dots \times \Gamma_{ r_\ell}(\eta) ,$$
and we denote by $\sigma_{\mb r} $ its Lebesgue measure (product of the measures of the spheres $\Gamma_{r_i}(\eta)$),
$$d \sigma_{\mb r}(\xi_1,\dots,\xi_\ell) = d\sigma_{r_1\eta}(\xi_1) \times\dots\times \, d\sigma_{r_\ell\eta}(\xi_\ell).$$
\begin{proposition}[$Q_j(q)$ structure] \label{prop:Qjstruct}
Let $n\ge 2$ and $j\ge 2$. Then we have that
\begin{equation} \label{eq:Qj}
\widehat{Q_j(q)}(\eta) = \prod_{i=1}^{j-1} \left( i\pi d_i + P_i \right) S_{j,\mb r} (q)(\eta),
\end{equation}
where $\mb r = (r_1, \dots,r_{j-1})$, and
\begin{align}
\nonumber S_{j,\mb r}(q)&(\eta) := \left(\prod_{i=1}^{j-1} \frac{2}{1+r_i}\right) \times \dots
\\ \label{eq:Sj} &\frac{1}{|\eta|^{j-1}} \int_{\Gamma_{\mb r}(\eta)} \widehat{q}(\eta -\xi_1) \left(\prod_{i=1}^{j-2} \widehat{q}(\xi_i - \xi_{i+1}) \right) \widehat{q}(\xi_{j-1}) \,\, d\sigma_{\mb r}(\xi_1,\dots,\xi_{j-1}).
\end{align}
\end{proposition}
Proposition \ref{prop:Qjstruct} implies that the higher order operators $Q_j$ have a structure similar to that of the $Q_2$ operator (as usual, we adopt the convention $\prod_{i=k}^m = 1$ if $k>m$). In fact, when $j=2$, (\ref{eq:Qj}) is equivalent to equation (\ref{eq:Q2}), since with the new notation we have $ S_r =S_{2,\mb r}$ (in this case $\mb r = r$, since there is only one parameter).
\begin{proof}
Let $k\in(0,\infty)$; we are going to need the identity
\begin{equation} \label{eq:resAlberto}
R_k(f)(x) = i \frac{\pi}{2}k^{n-2} \int_{\SP^{n-1}} \widehat{f}(k\omega) e^{ik x \cdot \omega} \, d\sigma(\omega) + \pv \int_{\RR^n} e^{i x \cdot \zeta} \frac{\widehat{f}(\zeta)}{-|\zeta|^2 + k^2} \, d\zeta ,
\end{equation}
for the resolvent of the Laplacian. (It follows from computing explicitly the limit in (\ref{eq:resolvent1}) in the sense of distributions; see for example \cite{notasR} and \cite[pp. 209-236]{GS} for more details.)
We take spherical coordinates in the principal value integral, denoting by $t$ the radial variable, and use the change of variables $t=rk$ in the radial integral,
\begin{align*}
\pv \int_{\RR^n} e^{i x \cdot \zeta} \frac{\widehat{f}(\zeta)}{-|\zeta|^2 + k^2} \, d\zeta &= \pv\int_0^\infty \frac{1}{(k-t)(k+t)} \int_{\mathbb{S}^{n-1}}e^{ix \cdot t\omega}\widehat{f}(t\omega) \, t^{n-1}d \sigma(\omega)\, dt \\
= \pv \, \frac{1}{k}\int_0^\infty &\frac{1}{(1-r)(1+r)}\int_{\mathbb{S}^{n-1}}e^{ix \cdot rk\omega} \widehat{f}(rk \omega) \, (rk)^{n-1} d \sigma(\omega)\, dr \\
= \pv \frac{1}{k}\int_0^\infty &\frac{1}{(1-r)(1+r)}\int_{\Gamma_r(\eta)}e^{i(-\xi-k \theta) \cdot x}\widehat{f}(-\xi -k \theta) \,d \sigma_{r\eta}(\xi)\, dr,
\end{align*}
where to obtain the integral over the Ewald sphere in the last line we have used the change of variables $rk\omega= -\xi-k\theta$ in the spherical integral, and that $\Gamma_r(\eta) = \left\{ \xi \in \mathbb{R}^n: \; |\xi+k\theta|=rk \right\}$ if $\eta= -2k\theta$ (see (\ref{eq:ewald})).
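For the reader's convenience, the identification of this sphere can be checked directly: if $\eta = -2k\theta$ and $rk\omega = -\xi - k\theta$ with $\omega \in \SP^{n-1}$, then
\begin{equation*}
|\xi + k\theta| = rk \quad \Longleftrightarrow \quad \left|\xi - \frac{\eta}{2}\right| = r\,\frac{|\eta|}{2},
\end{equation*}
since $k\theta = -\eta/2$ and $k = |\eta|/2$, which is exactly the defining equation (\ref{eq:ewald}) of $\Gamma_r(\eta)$.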
Hence, using the analogous change of variables $k \omega=-\xi-k \theta$ in the first integral in (\ref{eq:resAlberto}), we finally obtain
\begin{align}
\nonumber R_k(f)(x)&= i \pi \frac{1}{|\eta|}\int_{\Gamma_1(\eta)} e^{i (-\xi-k\theta) \cdot x} \widehat{f}(-\xi -k\theta) \, d\sigma_\eta(\xi) \\
\nonumber &\hspace{20mm} + \pv \int_0^\infty \frac{2}{|\eta|(1+r)} \int_{ \Gamma_{r}(\eta)} e^{i (-\xi-k\theta) \cdot x} \, \widehat{f}(-\xi -k\theta) \, d \sigma_{r\eta}(\xi) \,dr \\
\label{eq:resfin} &= (i \pi d + P)\left( \frac{2}{|\eta|(1+r)} \int_{\Gamma_r(\eta)} e^{i (-\xi-k\theta) \cdot x} \, \widehat{f}(-\xi -k\theta) \, d \sigma_{r\eta}(\xi)\right).
\end{align}
We recall that by \eqref{eq:Qjraw}, we have
\begin{equation} \label{eq:Qjraw2}
\widehat{Q_{j}(q)}(-2k\theta) =\int_{\RR^n} e^{ik \theta\cdot y} (qR_k)^{j-1}(q(\cdot)e^{ik\theta \cdot (\cdot)} )(y) \,dy.
\end{equation}
Let $m \in \NN$, we define
\begin{equation} \label{eq:fm}
f_m(x) := R_k((q R_k)^{m-1}(q(\cdot) e^{ik\theta \cdot (\cdot)})) (x).
\end{equation}
We claim that
\begin{align}
\nonumber f_m(x) &=\left( \prod_{i=1}^{m} (i\pi d_i + P_i) \right) \left( \prod_{i=1}^{m} \frac{2}{(1+r_i)} \right) \frac{1}{|\eta|^{m}}\times \dots \\
\label{eq:fmex} &\int_{\Gamma_{r_m}(\eta)} \dots \int_{\Gamma_{r_1}(\eta)}e^{i (-\xi_m -k\theta) \cdot x} \,\widehat{q}(\eta -\xi_1) \left(\prod_{i=1}^{m-1} \widehat{q}(\xi_i - \xi_{i+1}) \right) d\sigma_{\mb r}(\xi_1,\dots,\xi_{m}).
\end{align}
We prove the claim by induction. The case $m=1$ follows directly from \eqref{eq:resfin} using that $\widehat{qe^{ik \theta \cdot (\cdot)}}(\xi)=
\widehat{q}(\xi-k \theta)$ and that $\eta=-2k\theta$.
We are going to prove (\ref{eq:fmex}) for $m+1$ assuming that it is true for $m$. On the one hand, by (\ref{eq:fm}) and (\ref{eq:resfin}) we have
\begin{align}\label{eq:parte}
\nonumber f_{m+1}(x) &= R_k(q f_{m} ) (x)=\left(i \pi d_{m+1}+P_{m+1} \right)\dots \\
\hspace{3mm} &\left( \frac{2}{(1+r_{m+1})|\eta|}\int_{\Gamma_{r_{m+1}}(\eta)}e^{i(-\xi_{m+1}-k \theta)\cdot x}(\widehat{qf_m})(-\xi_{m+1}-k\theta) \, d\sigma_{r_{m+1}\eta}(\xi_{m+1}) \right).
\end{align}
On the other hand, by (\ref{eq:fmex}), changing the order of integration we have
\begin{align}
\nonumber (\widehat{q f_m})(\zeta) &= \int_{\mathbb{R}^n}q(y)f_m(y)e^{-i\zeta\cdot y} \,dy \\
\nonumber &=\left( \prod_{i=1}^m (i \pi d_i+P_i ) \right) \left(\prod_{i=1}^m \frac{2}{1+r_i} \right)\frac{1}{|\eta|^m} \int_{\Gamma_{r_m}(\eta)} \dots \int_{\Gamma_{r_1}(\eta)} \,\widehat{q}(\eta -\xi_1) \times \dots \\
&\label{eq:alberto_2} \hspace{30mm} \left(\prod_{i=1}^{m-1} \widehat{q}(\xi_i - \xi_{i+1}) \right)\widehat{q}(k \theta+\zeta +\xi_m) \, d\sigma_{\mb r}(\xi_1,\dots,\xi_{m}) .
\end{align}
Thus, putting $\zeta =-\xi_{m+1}-k\theta$ in the previous equality and using (\ref{eq:parte}), we get
\begin{align*}
f_{m+1}(x) &=\left( \prod_{i=1}^{m+1} (i\pi d_i + P_i) \right) \left( \prod_{i=1}^{m+1} \frac{2}{(1+r_i)} \right) \frac{1}{|\eta|^{m+1}}\times \dots \\
&\int_{\Gamma_{r_{m+1}}(\eta)} \dots \int_{\Gamma_{r_1}(\eta)}e^{i (-\xi_{m+1} -k\theta) \cdot x} \,\widehat{q}(\eta -\xi_1) \left(\prod_{i=1}^{m} \widehat{q}(\xi_i - \xi_{i+1}) \right) d\sigma_{\mb r}(\xi_1,\dots,\xi_{m+1}),
\end{align*}
which proves the claim.
By \eqref{eq:Qjraw2} we have that $\widehat{Q_j(q)}(-2k \theta)=\widehat{qf_{j-1}}(-k\theta)$, and
hence, in order to obtain (\ref{eq:Qj}), it is enough to put $\zeta=-k\theta$ in \eqref{eq:alberto_2}:
\begin{align*}
\widehat{Q_j(q)}(-2k \theta) &=\left( \prod_{i=1}^{j-1}(i\pi d_i + P_i) \right) \left( \prod_{i=1}^{j-1} \frac{2}{(1+r_i)} \right) \frac{1}{|\eta|^{j-1}} \times \dots \\
&\int_{\Gamma_{r_{j-1}}(\eta)} \dots \int_{\Gamma_{r_1}(\eta)}\widehat{q}(\eta -\xi_1) \left(\prod_{i=1}^{j-2}\widehat{q}(\xi_i - \xi_{i+1})\right) \widehat{q}( \xi_{j-1}) \, d\sigma_{\mb r}(\xi_1,\dots,\xi_{j-1}).
\end{align*}
\end{proof}
We now introduce the spherical operators $\widetilde S_{j, \mb r}$,
$$\widetilde S_{j,\mb r}(q)(\eta) := \chi(\eta) S_{j,\mb r}(q)(\eta),$$
as we did for $j=2$. The following proposition generalizes the results of Lemma \ref{lemma:SrQ2} and Proposition \ref{prop:derSr} to $j\ge 3$. Its proof will be given later on.
\begin{proposition} \label{prop:Srmulti}
Let $q \in \mathcal S(\RR^n)$, $n\ge 2$, $j\ge 3$ and $0<\delta<1$. Consider all the multi-indices $\mb a = (a_1,\dots,a_{j-1})$ with $a_i$, $1\le i \le j-1$, either $0$ or $1$. Then the estimate
\begin{equation} \label{eq:Srmultiest}
\norm{ \partial_{\mb r}^{\mb a} \widetilde{S}_{j,\mb r} (q)}_{L^2_{\alpha-|\mb a|}} \le C \left( \prod_{i=1}^{j-1} \frac{1}{(1+r_i)^{\gamma}} \right) \norm{q}_{W^{\beta,2}_4}^j,
\end{equation}
holds for $\beta\ge0$, a certain $\gamma >0$ (possibly dependent on $\beta$), and some constant $C =C(n,j,\alpha,\beta)$, provided that the following two conditions also hold:
\begin{equation} \label{eq:forgotten}
r_i \in (1-\delta,1+\delta) \text{ if } a_i = 1, \quad \text{and} \quad r_i \in (0,\infty) \text{ if } a_i=0,
\end{equation}
and
\begin{equation} \label{eq:rangeSphej}
\begin{cases}
\alpha \le \beta + (j-1)(\beta - (n-3)/2), & \text{if } (n-3)/2 <\beta < (n-1)/2,\\
\alpha < \beta + (j-1), & \text{if } (n-1)/2 \le \beta<\infty .
\end{cases}
\end{equation}
\end{proposition}
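To illustrate the range (\ref{eq:rangeSphej}), consider for instance dimension $n=3$, so that $(n-3)/2=0$ and $(n-1)/2=1$; then the condition reads
\begin{equation*}
\alpha \le j\beta \quad \text{if } 0<\beta<1, \qquad \alpha < \beta + (j-1) \quad \text{if } \beta \ge 1.
\end{equation*}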
With this proposition we can finally prove Theorem \ref{teo.Qj}, with the help of the following density argument.
\begin{lemma} \label{lemma:density}
Assume that the operator $\widetilde{Q}_j$ satisfies an a priori estimate
\begin{equation} \label{eq:estref}
\norm{\widetilde{Q}_j(q)}_{W^{\alpha,2}} \le C \norm{q}_{W^{\beta,p}_\delta}^{j},
\end{equation}
for every $q\in C^\infty_c(\RR^n)$. Then there is a unique continuous extension $\widetilde{Q}_j : W^{\beta,p}_\delta(\RR^n) \longrightarrow W^{\alpha,2}(\RR^n)$ of the operator, and estimate $(\ref{eq:estref})$ holds also for $q\in W^{\beta,p}_\delta(\RR^n)$.
\end{lemma}
This lemma is just a trick to extend estimates for $\widetilde{Q}_j(q)$ without having to give an estimate for the multilinear operator $Q_j(f_1,\dots,f_j)$ (this operator is defined by putting $f_i$ instead of $q$ in (\ref{eq:Qjraw}) following the order of appearance of each $q$ in the formula). It is a direct consequence of the more general statement given in {Lemma \ref{lemma:densityMain}} below.
The key idea in the proof is to symmetrize $Q_{j}(f_{1},f_{2},\dots ,f_{j})$ and use a polarization identity for multilinear operators. (The advantage of having $f_{i} =q$ in most of the
estimates in this work is a question of notational simplicity,
but it is not an essential restriction in any of them.)
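For the reader's convenience, we recall the standard polarization identity behind this argument: if $A$ is a symmetric $j$-linear operator and $A(q) := A(q,\dots,q)$ denotes its diagonal restriction, then
\begin{equation*}
A(f_1,\dots,f_j) = \frac{1}{2^j \, j!} \sum_{\varepsilon \in \{-1,1\}^j} \varepsilon_1 \cdots \varepsilon_j \, A\Big( \sum_{i=1}^j \varepsilon_i f_i \Big),
\end{equation*}
so estimates for the diagonal restriction control the full multilinear operator.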
\begin{proof}[Proof of {Theorem \ref{teo.Qj}}]
We begin with the case $j=2$. By {Proposition \ref{prop:derSr}} and {Lemma \ref{lemma:SrQ2}} for each $q\in \mathcal S(\mathbb{R}^n)$ we can apply {Proposition \ref{prop:genPV}} with $F_r =\widetilde S_r(q)$, $p=2$, $\tau = \alpha-1$ and $M=C \lVert q\rVert_{W_2^{\beta,2}}^2$. Therefore by {(\ref{eq:Q2tilde})} this yields the estimate
\[ \lVert \widehat{\widetilde Q_2(q)}\rVert_{L^2_{\alpha'}} \le C \lVert q \rVert_{W_2^{\beta,2}}^2,\]
for $\alpha'<\alpha$ and $\alpha$ in the range {(\ref{eq:range:S2})}. Then by Plancherel's theorem we get the desired estimate for $\widetilde Q_2(q)$ in the Sobolev norm, and by {Lemma \ref{lemma:density}} we can extend these estimates by density to $q\in W_{2}^{\beta ,2}(\mathbb{R}^{n})$.
This is enough to prove estimate {(\ref{eq:estimateQ2})}.
Now, let's study the case $j\ge 3$. Consider $f\in \mathcal{S}$. We
introduce the following operators,
\begin{align}
\label{eq:T1}
T_{j,1}(r_{1},\dots ,r_{j-1})(f) :
&= \widetilde{S}_{j,\mathbf{r}}(f),
\\
\label{eq:Tkind}
T_{j,k}(r_{k}, \dots ,r_{j-1})(f) : &= (i\pi d_{k-1}
+P_{k-1})T_{j,k-1}(r_{k-1}, \dots ,r_{j-1})(f)
\\
\nonumber
&= \prod _{i=1}^{k-1} (i\pi d_{i} +P_{i})
\widetilde{S}_{j,\mathbf{r}}(f),
\end{align}
for $2 \le k \le j-1$, and
\begin{align}
\nonumber T_{j,j}(f) := (i\pi d_{j-1} +P_{j-1})T_{j,j-1}(r_{j-1})(f) &= \prod_{i=1}^{j-1} (i\pi d_i +P_i) \widetilde{S}_{j,\mathbf{r}}(f) \\
\label{eq:Tj} &= \widehat{\widetilde{Q}_j(f)}.
\end{align}
$T_{j,k}(r_{k}, \dots ,r_{j-1})(f)(x)$ is a well
defined function, smooth in the variables $r_{k},\dots ,r_{j-1}$ and
$x$ (see {Proposition \ref{prop:smooth}} in the appendix for more
details). As we are going to see, the proof can be reduced to proving
the following claim.
\medskip
\textbf{Claim.} \emph{Let $1\le k\le j$, and let $\mathbf{a} = (a
_{1},\dots ,a_{j-1})$ with $a_{i} =0$ if $1\le i \le k-1$, and
$a_{i} =0,1$ if $k \le i \le j-1$. Then the estimate
\begin{equation}
\label{eq:Tkest}
\lVert \partial _{\mathbf{r}}^{\mathbf{a}} T_{j,k}(r_{k},\dots ,r_{j-1})(f)
\rVert _{L^{2}_{\alpha '}} \le c_{k} \lVert f\rVert ^{j}_{W^{\beta ,2}
_{4}},
\end{equation}
holds for $\alpha ' < (\alpha -|\mathbf{a}|)$ if conditions
\textup{ {(\ref{eq:forgotten})}} and \textup{ {(\ref{eq:rangeSphej})}} are
satisfied, with a constant $c_k$ given by
\begin{equation}
\label{eq:ck}
c_{k} = C C_{2}^{k-1}\prod _{i=k}^{j-1} \frac{1}{(1+r_{i})^{\gamma }},
\end{equation}
where $C_{2}$ is the constant introduced in
{Proposition \ref{prop:genPV}}.}
\medskip
By {(\ref{eq:Tj})}, we have that for $q\in \mathcal S(\mathbb{R}^n)$, estimate {(\ref{eq:Tkest})} with $k=j$, $\mathbf{a}= 0$, and $f=q$ gives
\begin{equation*}
\lVert \widehat{\widetilde Q_j(q)} \rVert_{L^2_{\alpha'}} \le C \lVert q \rVert_{W_2^{\beta,2}}^j,
\end{equation*}
for every $\alpha'<\alpha$ and $\alpha$ in the range (\ref{eq:rangeSphej}).
Then, using Plancherel's theorem and {Lemma \ref{lemma:density}} to extend the resulting estimate to all $q \in W^{\beta,2}_4(\mathbb{R}^n)$, we obtain estimate (\ref{eq:estimateQj}). This is enough to conclude the proof of the theorem.
We now prove the claim by induction on $k$ (observe that $j$ is fixed in the claim). By {(\ref{eq:T1})}, the case $k=1$
of estimate {(\ref{eq:Tkest})} is equivalent to {Proposition \ref{prop:Srmulti}}. To prove that {(\ref{eq:Tkest})} holds true for
$2\le k \le j$, in each induction step we are going to use {Proposition \ref{prop:genPV}} and {(\ref{eq:Tkind})}.
Assume that the claim holds for a certain $k$, $1\le k < j$; we are going to prove it for $k+1$. Let $\mathbf{a}' = (a'_{1},\dots
,a'_{j-1})$ with $a'_{i} =0$ if $1\le i \le k$, and $a'_{i} =0,1$ if
$k+1\le i \le j-1$. We are going to apply {Proposition \ref{prop:genPV}} with
\begin{equation*}
F_{r_{k}}(x) := \partial _{\mathbf{r}}^{\mathbf{a}'} T_{j,k}(r_{k},
\dots ,r_{j-1})(f)(x).
\end{equation*}
By the induction hypothesis {(\ref{eq:Tkest})} with $\mathbf{a} =
\mathbf{a}'$, and {(\ref{eq:ck})} we have
\begin{equation}
\label{eq:Tk3ant}
\lVert F_{r_{k}}\rVert _{L^{2}_{\alpha '}} \le \frac{c_{k+1}}{(1+r
_{k})^{\gamma }} C_{2}^{-1} \lVert f\rVert ^{j}_{W^{\beta ,2}_{4}},
\end{equation}
for $\alpha ' < (\alpha -|\mathbf{a}'|)$ and $r_{k} \in (0,\infty )$.
Moreover, taking now $\mathbf{a}$ with $a_{i}=a'_{i}$ for $i \neq k$,
and $a_{k}=1$, we also get from {(\ref{eq:Tkest})} the estimate
\begin{equation}
\label{eq:Tk3}
\lVert \partial _{r_{k}} F_{r_{k}} \rVert _{L^{2}_{\alpha '-1}}
\le \frac{c_{k+1}}{(1+r_{k})^{\gamma } } C_{2}^{-1} \lVert f\rVert
^{j}_{W^{\beta ,2}_{4}},
\end{equation}
with $\alpha ' < (\alpha -|\mathbf{a}'|-1)$ and $r_{k} \in (1-\delta
,1+\delta )$. Then, for each $f\in \mathcal S(\mathbb{R}^n)$ we can apply {Proposition \ref{prop:genPV}} since condition (\ref{eq:pv:Fr1}) is given by (\ref{eq:Tk3}) and (\ref{eq:pv:Fr2}) by (\ref{eq:Tk3ant}) with $M = c_{k+1}C_2^{-1} \lVert f \rVert^{j}_{W^{\beta,2}_4}$.
Therefore, for $\alpha ' < (\alpha -|\mathbf{a}'|)$, we obtain that
\begin{align*}
&\lVert \partial _{\mathbf{r}}^{\mathbf{a}'} T_{j,k +1}(r_{k+1},\dots ,r
_{j-1})(f)\rVert _{L^{2}_{\alpha '}} =
\\
&\lVert (i\pi d_{k} + P_{k})\partial _{\mathbf{r}}^{\mathbf{a}'} T
_{j,k}(r_{k},\dots ,r_{j-1})(f)\rVert _{L^{2}_{\alpha '}} = \lVert (i
\pi d_{k} + P_{k}) F_{r_{k}} \rVert _{L^{2}_{\alpha '}} \le c_{k+1}
\lVert f\rVert ^{j}_{W^{\beta ,2}_{4}},
\end{align*}
where the first equality is true by {Proposition \ref{prop:smooth}} in the
appendix. This concludes the proof of the claim.
\end{proof}
We devote the remaining part of this section to the proof of Proposition \ref{prop:Srmulti}. We define the operator
\begin{align*}
& K_{j,\mb r}(g_1,\dots, g_j)(\eta) = K_{j,\mb r}(g_i)(\eta):= \\
& \frac{1}{|\eta|^{j-1}} \int_{\Gamma_{\mb r}(\eta)} |g_1(\eta -\xi_1)| \left(\prod_{i=1}^{j-2} |g_{i+1}(\xi_i - \xi_{i+1})| \right) |g_j(\xi_{j-1})| \,\, d\sigma_{\mb r}(\xi_1,\dots,\xi_{j-1}),
\end{align*}
and $\widetilde K_{j,\mb r}(g_1,\dots, g_j)(\eta) := \chi(\eta)K_{j,\mb r}(g_1,\dots, g_j)(\eta)$. Hence we have that
\begin{equation} \label{eq:KandS}
\left|\widetilde{S}_{j,\mb r}(q) (\eta)\right| \le \left(\prod_{i=1}^{j-1} \frac{2}{1+r_i}\right) \widetilde K_{j,\mb r}(\widehat{q},\dots,\widehat{q})(\eta) .
\end{equation}
The main tool to prove Proposition \ref{prop:Srmulti} is the following lemma.
\begin{lemma} \label{lemma:Kj}
Let $n\ge 2$ and $j\ge 3$, and consider $f_l \in W_2^{\beta,2}(\RR^n)$, $1 \le l \le j $ with $\beta \ge 0$. Then the estimate
\begin{equation} \label{eq:Kjmain}
\norm{\widetilde K_{j,\mb r}(\widehat{f_1},\dots,\widehat{f_j})}_{L^2_\alpha} \le C\left( \prod_{i=1}^{j-1} (1+r_i)^{\lambda} \right) \prod_{l=1}^{j} \norm{f_l}_{W_2^{\beta,2}},
\end{equation}
holds when $\alpha$ is in the range given in $(\ref{eq:rangeSphej})$ for some real number $0<\lambda<1$.
\end{lemma}
\begin{proof}
Since
$$\eta = (\eta-\xi_1) + \sum_{i=1}^{j-2}(\xi_i-\xi_{i+1}) + \xi_{j-1},$$
if we fix $c<1/j$, one of the conditions $|\eta-\xi_1|>c|\eta|$, $|\xi_i-\xi_{i+1}|>c|\eta|$ for some $1\le i\le j-2$, or $|\xi_{j-1}|>c|\eta|$ must hold. Hence, the sets
\begin{align*}
A^1_\textbf{r}(\eta) &= \left\{ (\xi_1, \dots , \xi_{j-1}) \in \Gamma_\textbf{r}(\eta): \; |\eta -\xi_1 |>c |\eta| \right\}, \\
A^i_\textbf{r}(\eta) &= \left\{ (\xi_1, \dots , \xi_{j-1}) \in \Gamma_\textbf{r}(\eta): \; |\xi_{i-1} -\xi_i |>c |\eta| \right\}, \;\; i=2, \dots , j-1, \\
A^j_\textbf{r}(\eta) &=\left\{ (\xi_1, \dots , \xi_{j-1}) \in \Gamma_\textbf{r}(\eta): \; |\xi_{j-1} |>c |\eta| \right\},
\end{align*}
satisfy $\Gamma_\textbf{r}(\eta)= \cup_{k=1}^jA^k_\textbf{r}(\eta)$. As a consequence
\begin{equation} \label{eq:allcases}
\|\widetilde{K}_{j,\textbf{r}}(\widehat{f_1}, \dots , \widehat{f_j})\|_{L^2_\alpha} \leq \sum_{k=1}^j \|\widetilde{K}^k_{j,\textbf{r}}(\widehat{f_1}, \dots , \widehat{f_j})\|_{L^2_\alpha} ,
\end{equation}
where $\widetilde{K}^k_{j,\textbf{r}}$ is defined as $\widetilde{K}_{j,\textbf{r}}$ but integrating over $A^k_\textbf{r}(\eta)$ instead of $\Gamma_{\bf r}(\eta)$.
We now fix a parameter $0<\lambda<(n-1)/2$ such that
\begin{equation} \label{eq:Kjbeta}
\beta := \alpha -(j-1)(1-\lambda).
\end{equation}
In the region where $\chi(\eta)$ does not vanish, $|\eta| \sim \jp{\eta}$, and hence
\begin{align}
\nonumber &\|\widetilde{K}^k_{j,\textbf{r}}(\widehat{f_1}, \dots , \widehat{f_j})\|_{L^2_\alpha}^2 \le \\
\label{eq:KjB} &\int_{\RR^n} \jp{\eta}^{2\beta}|\eta|^{-2(j-1)\lambda}
\left( \int_{A^k_\textbf{r}(\eta)} |\widehat{f_1}(\eta -\xi_1)| \left(\prod_{i=1}^{j-2}| \widehat{f}_{i+1}(\xi_i - \xi_{i+1})| \right) |\widehat{f_j}(\xi_{j-1})| \, d\sigma_{\mb r} \right)^2 d\eta,
\end{align}
where $d\sigma_{\mb r} = d\sigma_{\mb r}(\xi_1,\dots,\xi_{j-1})$.
The analysis is exactly the same for each $\widetilde{K}^k_{j,\textbf{r}}$, $k=1,\dots, j$, so we only show one case explicitly, for example $k=j$.
If $\beta \ge 0$ we can use that ${\jp{\eta}^\beta \le C \jp{\xi_{j-1}}^\beta}$ in $A^j_{\mb r}(\eta)$. Hence, multiplying and dividing by $|\eta-\xi_1|^{n-1-2\lambda} \prod_{i=1}^{j-2} |\xi_{i}-\xi_{i+1}|^{n-1-2\lambda}$ and applying the Cauchy--Schwarz inequality, we get the following pointwise estimate for the integrand of (\ref{eq:KjB}):
\begin{align}
\nonumber \jp{\eta}^{2\beta} &|\eta|^{-2(j-1)\lambda} \left( \int_{A^j_{\mb r}(\eta)} |\widehat{f_1}(\eta -\xi_1)| \left(\prod_{i=1}^{j-2}| \widehat{f}_{i+1}(\xi_i - \xi_{i+1})| \right) |\widehat{f_j}(\xi_{j-1})| \, d\sigma_{\mb r} \right)^2 \\
\nonumber \le |\eta|^{-2(j-1)\lambda} &\int_{\Gamma_{\mb r}(\eta)} |\widehat{f_1}(\eta -\xi_1)|^2|\eta-\xi_1|^{n-1-2\lambda} \left(\prod_{i=1}^{j-2}| \widehat{f}_{i+1}(\xi_i - \xi_{i+1})|^2|\xi_i - \xi_{i+1}|^{n-1-2\lambda} \right) \\
\label{eq:Kj1} \dots &\times |\widehat{f_j}(\xi_{j-1})|^2 \jp{\xi_{j-1}}^{2\beta} \, d\sigma_{\mb r} \int_{\Gamma_{\mb r}(\eta)} \frac{1 }{|\eta-\xi_1|^{n-1-2\lambda}} \prod_{i=1}^{j-2} \frac{1}{|\xi_{i}-\xi_{i+1}|^{n-1-2\lambda}} \, d\sigma_{\mb r} .
\end{align}
Now, by Lemma \ref{lemma.integrals} we have that
\begin{equation} \label{eq:iterInteg}
|\eta|^{-2(j-1)\lambda} \int_{\Gamma_{\mb r}(\eta)} \frac{ 1 }{|\eta-\xi_1|^{n-1-2\lambda}} \prod_{i=1}^{j-2} \frac{1}{|\xi_{i}-\xi_{i+1}|^{n-1-2\lambda}} \, d\sigma_{\mb r} \le C \prod_{i=1}^{j-1} r_i^{2\lambda},
\end{equation}
where $C$ is some constant independent of $\eta$ (to see this, always compute first the integral in the variable $\xi_{i}$ that only appears in one factor, in this case $\xi_{j-1}$). Hence, using this in (\ref{eq:Kj1}) and integrating in the $\eta$ variable we get the estimate
\begin{equation*}
\|\widetilde{K}^j_{j,\textbf{r}}(\widehat{f_1}, \dots , \widehat{f_j})\|_{L^2_\alpha}^2 \le C\prod_{i=1}^{j-1} r_i^{2\lambda} \int_{\RR^n} \int_{\Gamma_{\mb r}(\eta)} |F(\xi_1,\dots,\xi_{j-1},\eta)|^2 \, d\sigma_{\mb r} \,d\eta,
\end{equation*}
where
\begin{align*}
F(\xi_1,\dots,&\xi_{j-1},\eta) := \widehat{f_1}(\eta -\xi_1) \jp{\eta-\xi_1}^{(n-1)/2-\lambda} \times \dots \\
&\nonumber \left(\prod_{i=1}^{j-2} \widehat{f}_{i+1}(\xi_i - \xi_{i+1})\jp{\xi_i - \xi_{i+1}}^{(n-1)/2-\lambda} \right) \widehat{f_j}(\xi_{j-1}) \jp{\xi_{j-1}}^{\beta} .
\end{align*}
Therefore, as in Lemma \ref{lemma.K.Q2}, we apply the trace theorem to each one of the integrals over the spheres $\Gamma_{r_i}(\eta)$, to obtain
\begin{align}
\nonumber \|\widetilde{K}^j_{j,\textbf{r}}&(\widehat{f_1}, \dots , \widehat{f_j})\|_{L^2_\alpha}^2 \le C \left(\prod_{i=1}^{j-1} r_i^{2\lambda} \right) \times \dots \\
\label{eq:Kj3} &\hspace{-4mm}\sum_{0 \le |\alpha_1|,\dots, |\alpha_{j-1}| \le 1} \int_{\RR^n} \dots \int_{\RR^n} | \partial_{\xi_1}^{\alpha_1} \dots \partial_{\xi_{j-1}}^{\alpha_{j-1}} F(\xi_1,\dots,\xi_{j-1},\eta)|^2 \,d \xi_1 \dots \,d\xi_{j-1} \, d\eta,
\end{align}
where the $\alpha_i$ are multi-indices related to the corresponding $\RR^n$ variables $\xi_i$, see Lemma \ref{lemma.trazas2} in the appendix for a more detailed formulation. Now, using the Leibniz rule in (\ref{eq:Kj3}) we can put the derivative operators on the functions ${\widehat{f_i}\jp{\cdot}^{a}}$. In the worst case we are going to get terms of the kind
$$\partial_{\xi_i}^{\alpha_i} \partial_{\xi_{i+1}}^{\alpha_{i+1}} \left( \widehat{f}_{i+1}(\xi_{i} -\xi_{i+1})\jp{\xi_{i} -\xi_{i+1}}^{a} \right),$$
with at most two derivative operators having $|\alpha_i| = |\alpha_{i+1}|=1$. Therefore, if we integrate each summand first in $\eta$ and then in $\xi_1, \xi_{2}, \dots ,\xi_{j-1}$, we obtain
\begin{equation} \label{eq:Kj4}
\|\widetilde{K}^j_{j,\textbf{r}}(\widehat{f_1}, \dots , \widehat{f_j})\|_{L^2_\alpha}^2 \le C \left(\prod_{i=1}^{j-1} r_i^{2\lambda} \right) \norm{f_{j}}_{W_2^{\beta,2}}^2\prod_{l=1}^{j-1} \norm{f_l}_{W_2^{(n-1)/2 - \lambda ,2}}^2 .
\end{equation}
Putting together (\ref{eq:Kj4}) and the analogous estimates coming from the analysis of the other cases in (\ref{eq:allcases}), we obtain
\begin{align*}
\norm{\widetilde{K}_{j,\mb r}(\widehat{f_1},\dots,\widehat{f_j})}_{L^2_\alpha} \le C \left( \prod_{i=1}^{j-1} r_i^{\lambda} \right) \sum_{i=1}^{j} \, \, \norm{f_i}_{W_2^{\beta,2}} \prod_{\substack{1\le l \le j \\ l \neq i}} \norm{f_l}_{W_2^{(n-1)/2-\lambda,2}} .
\end{align*}
We reason as in Lemma \ref{lemma.K.Q2}. First we impose the extra condition $\lambda<1$.
As a consequence of Remark \ref{remark:Sob}, estimate (\ref{eq:Kjmain}) follows directly in the range $\beta \ge (n-1)/2$. The restrictions on $\lambda$, together with (\ref{eq:Kjbeta}), give the following restrictions on $\alpha$:
\begin{equation}\label{restriccion_2}
\begin{cases}
0 < \lambda < 1 \\ 0 < \lambda \le \frac{n-1}{2}
\end{cases} \Longleftrightarrow \begin{cases}
\beta < \alpha < \beta +(j-1) \\ \beta +(j-1)\left(1 - \frac{n-1}{2}\right) \le \alpha < \beta +(j-1).
\end{cases}
\end{equation}
We discard the lower bounds for $\alpha$ as in Lemma \ref{lemma.K.Q2}.
Otherwise, if $\beta$ is in the range $\beta < (n-1)/2$, estimate (\ref{eq:Kjmain}) holds if we add the extra condition $(n-1)/2-\lambda \le \beta$. Then, since $\lambda<1$, we must have $\beta>(n-3)/2$ as in Lemma \ref{lemma.K.Q2}. Also, (\ref{eq:Kjbeta}) together with $(n-1)/2-\lambda \le \beta$ imply that $\alpha \le \beta + (j-1)(\beta-(n-3)/2)$ which is a more restrictive condition than $\alpha<\beta+(j-1)$ since we have $\beta<(n-1)/2$. Hence, we have obtained the ranges of parameters given in the statement.
\end{proof}
\begin{proof}[Proof of Proposition $\ref{prop:Srmulti}$]
By (\ref{eq:KandS}), estimate (\ref{eq:Srmultiest}) follows directly if $\mb a = 0$. Therefore, we consider the case $\mb a \neq 0$.
Let $\mb r = (r_1,\dots,r_k)$, $1\le k < \infty$. We follow the same computations used to obtain (\ref{eq:derivative2}) from (\ref{eq:derivative1}). For a general function $F(\xi_1,\dots,\xi_k,\eta)$, $C^1$ in the first $k$ variables, we have
\begin{align}
\nonumber \partial_{r_i} &\left( \frac{1}{1+r_i} \int_{\Gamma_{\mb r}(\eta)} F(\xi_1,...,\xi_k,\eta) \,\, d\sigma_{\mb r}(\xi_1,\dots,\xi_k) \right) \\
\nonumber & \hspace{5mm} =\frac{(n-2)r_{i}+(n-1)}{r_{i}(1+r_{i})^2} \int_{\Gamma_{\mb r}(\eta)} F(\xi_1,...,\xi_k,\eta) \,\, d\sigma_{\mb r} \\
\label{eq:lastlemm} & \hspace{15mm} + |\eta|\frac{1}{1+r_{i}}\int_{\Gamma_{\mb r}(\eta)} \theta_i \cdot \nabla_{\xi_i} F(\xi_1,...,\xi_k,\eta) \,\, d\sigma_{\mb r},
\end{align}
where $\theta_i = \frac{\xi_i -\eta/2}{|\xi_i -\eta/2|}$ is a unit vector. Observe that the coefficients in front of the integrals are functions of $r_i$ which are bounded for $r_i \in (1-\delta,1+\delta)$, for any fixed $0<\delta<1$. Hence if we take a derivative $\partial^{\mb a}_{\mb r}$ with $\mb a= (a_1,\dots,a_k)$ and $a_i=0,1$, we have
\begin{align}
\nonumber &\left|\partial_{\mb r}^{\mb a} \left( \left( \prod_{i=1}^{j-1} \frac{1}{1+r_i} \right) \int_{\Gamma_{\mb r}(\eta)} F(\xi_1,...,\xi_k,\eta) \, d\sigma_{\mb r} \right)\right| \\
\label{eq:d1} &\le C \left( \prod_{i=1}^{j-1} \frac{1}{1+r_i} \right) \sum_{ \substack{0\le |\alpha_i| \le a_i, \\ 1 \le i \le k }} |\eta|^{|\alpha_1| + \dots + |\alpha_k| }\int_{\Gamma_{\mb r}(\eta)} \left| \partial_{\xi_1}^{\alpha_1} \dots \partial_{\xi_k}^{\alpha_k} F(\xi_1,...,\xi_k,\eta) \right| \, d\sigma_{\mb r},
\end{align}
where $\alpha_i$ are multi-indices associated to derivatives in $\RR^n$, and we have imposed $r_i \in (1-\delta , 1+\delta)$ if $a_i=1$ (to bound the coefficients dependent on $r_i$ as we did before). Notice that $|\alpha_1| + \dots + |\alpha_k|$ can take all the integer values from $0$ to $|\mb a|$. We are interested in computing $\partial_{\mb r}^{\mb a} S_{j,\mb r}$ so we put $k=j-1$ and
\begin{equation*}
F(\xi_1,...,\xi_{j-1},\eta) =\widehat{q}(\eta -\xi_1) \left(\prod_{i=1}^{j-2} \widehat{q}(\xi_i - \xi_{i+1}) \right) \widehat{q}(\xi_{j-1}).
\end{equation*}
In this case, each factor is going to be differentiated at most twice, since in the worst case $\widehat{q}$ is evaluated at the difference of two variables, $\xi_i$ and $\xi_{i+1}$. Therefore it suffices to give the following rough estimate:
\begin{equation*}
\int_{\Gamma_{\mb r}(\eta)} \left| \partial_{\xi_1}^{\alpha_1} \dots \partial_{\xi_{j-1}}^{\alpha_{j-1}} F(\xi_1,...,\xi_{j-1},\eta) \right| \, d\sigma_{\mb r} \le C \sum_{\substack{0 \le |\alpha_i'| \le 2 \\ 1\le i\le j-1}} K_{j,\mb r}(\partial^{\alpha_1'} \widehat{q},\partial^{\alpha'_2} \widehat{q}, \dots, \partial^{\alpha'_{j-1}} \widehat q) (\eta),
\end{equation*}
for some new multi-indices $\alpha_1',\dots ,\alpha_{j-1}'$. Hence, by (\ref{eq:Sj}), putting together (\ref{eq:d1}) and the previous estimate, we obtain
\begin{align*}
&\left| \partial_{\mb r}^{\mb a}{S}_{j,\mb r} (q)(\eta) \right| \\
&\le C \left( \prod_{i=1}^{j-1} \frac{1}{1+r_i} \right) \frac{1}{|\eta|^{j-1}} \left( 1+|\eta|+\dots+ |\eta|^{|\mb a|} \right) \sum_{\substack{0 \le |\alpha_i| \le 2 \\ 1\le i\le j-1}} K_{j,\mb r}(\widehat{ qx^{\alpha_1}},\widehat{ qx^{\alpha_2}}, \dots, \widehat{ qx^{\alpha_{j-1}}})(\eta).
\end{align*}
Then, since $|\alpha_i|\le 2$, multiplying the previous inequality by $\chi(\eta)$, estimate (\ref{eq:Srmultiest}) follows from Lemma \ref{lemma:Kj} using that $\norm{x^{\alpha_i} q}_{W_2^{\beta,2}} \le C \norm{q}_{W_4^{\beta,2}}$, which can be obtained by the same reasoning given after (\ref{eq:pain}).
\end{proof}
\section{Implicit estimates for the \texorpdfstring{$Q_j$}{Qj} operator} \label{sec:implicitQj}
In this section we prove Proposition \ref{prop:convergence}.
We follow the method developed for fixed angle scattering in \cite{R}, and for backscattering in \cite{RV} (case $\beta \le n/2$) and \cite{RRe} (extension to general $\max(0,m)<\beta<\infty$). It has also been adapted to the elasticity setting in \cite{BFPRM1} and \cite{BFPRM2}. As mentioned in the introduction, we have improved the regularity gain given in \cite{RV} and \cite{RRe}, and we have obtained directly the general case $m \le \beta<\infty$ by using the cancellation given by the fractional Laplacian $(-\Delta)^{s}$. As in the aforementioned works, we begin by giving some estimates of the resolvent of the Laplacian (see \cite{R,ReT} or \cite{notasR}). We define the conjugate resolvent operator
\begin{equation} \label{eq.Res.conj}
R_\theta(q)(x) := e^{-ik\theta \cdot x}R_k \left( e^{ik\theta\cdot(\cdot)} q(\cdot)\right )(x).
\end{equation}
\begin{lemma} \label{lemma.Res}
Let $s\ge 0$ and let $r$ and $t$ be such that $0\le 1/t-1/2 \le 1/(n+1)$ and $0\le 1/2 -1/r \le 1/(n+1)$. There exist $\delta$, $\delta'>0$ and $C$ (independent of $k$) such that
$$ \norm{R_{\theta}(q)}_{W^{s,r}_{-\delta}} \le C k^{-1 + (1/t-1/r)(n-1)/2 }\norm{q}_{W^{s,t}_{\delta'}} .$$
\end{lemma}
We also need a theorem of Zolesio on the product of functions in Sobolev spaces (a proof can be found in \cite{grin}, and for the compactly supported case in \cite[pp. 182-183]{ReT}).
\begin{lemma}[Zolesio] \label{lemma.zol}
Let $s_1,s_2,s \ge 0$, $s \le s_1$, $s \le s_2$, and let $r,t$ and $p$ be such that $t < \min(p,r)$ and
$$s_1+s_2-s \ge n \left( \frac{1}{p} + \frac{1}{r} - \frac{1}{t} \right).$$
Then
$$ \norm{q f}_{W^{s,t}} \le C \norm{q}_{W^{s_1,p}} \norm{f}_{W^{s_2,r}} .$$
Moreover, if $q$ is compactly supported and $\delta,\delta'\in \RR$, then
\begin{equation} \label{eq:zol}
\norm{q f}_{W^{s,t}_{-\delta}} \le C(\supp \, q,\delta,\delta') \norm{q}_{W^{s_1,p}} \norm{f}_{W^{s_2,r}_{\delta'}} .
\end{equation}
\end{lemma}
\begin{proof}[Proof of Proposition $\ref{prop:convergence}$]
For brevity, we will omit the dependency of the constants on the dimension $n$. Without loss of generality assume $q \in C^\infty_c(B_\rho)$, where $B_\rho$ denotes the ball of radius $\rho$. In terms of $R_\theta$, defined in (\ref{eq.Res.conj}), the expression of $Q_j$ given in (\ref{eq:Qjraw}) becomes
$$\widehat{Q_j(q)}(\xi) = \int_{\RR^n} e^{i2k\theta \cdot y} (q R_{\theta})^{j-1}(q)(y) \,dy,$$
with $\xi=-2k\theta$. In spherical coordinates we can write
\begin{equation*}
\norm{\widetilde Q_j(q)}^2_{W^{\alpha ,2}} \le \int_{C_0}^{\infty} k^{n-1+2\alpha}\int_{\SP^{n-1}}\left | \int_{\RR^n} e^{i2k\theta \cdot y} (q R_{\theta})^{j-1}(q)(y) \,dy \right|^2 \,d\sigma(\theta) \, dk.
\end{equation*}
Now, if $f$ is a $C_c^\infty(\RR^n)$ function and $\beta \ge 0$, the fractional Laplacian (see for example \cite[Section 3]{valdinoci}) can be defined by the identity
\begin{equation} \label{eq:fracfourier}
\mathcal{F} \left((-\Delta)^{\beta/2} \,f\right) (\xi) := |\xi|^\beta \, \widehat{f}(\xi),
\end{equation}
and we have that in the sense of distributions
$$(-\Delta)^{\beta/2} e^{i2k\theta \cdot x} = (2k)^{\beta} e^{i2k\theta \cdot x};$$
see \cite[Chapter 2]{silvestre} for a rigorous extension of the fractional Laplacian to distributions. Hence, applying this to the previous inequality, since $(q R_{\theta})^{j-1}(q)\in C^{\infty}_c(\RR^n)$ we obtain
\begin{align}
\nonumber \norm{\widetilde Q_j(q)}^2_{W^{\alpha,2}}& \ \\
\nonumber \le C(\beta) \int_{C_0}^{\infty} &k^{n-1+2\alpha-2\beta} \int_{\SP^{n-1}} \left | \int_{\RR^n} (-\Delta)^{\beta/2} (e^{i2k\theta \cdot y} )(q R_{\theta})^{j-1}(q)(y) \,dy \right|^2 \,d\sigma(\theta) \, dk \\
\nonumber = C(\beta) \int_{C_0}^{\infty} &k^{n-1+2\alpha-2\beta} \int_{\SP^{n-1}} \left | \int_{\RR^n} e^{i2k\theta \cdot y} (-\Delta)^{\beta/2} \left( (q R_{\theta})^{j-1}(q)\right) (y) \,dy \right|^2 \,d\sigma(\theta) \, dk \\
\label{eq.prpQj} &\le C(\beta) \int_{C_0}^{\infty} k^{n-1+2\alpha-2\beta} \int_{\SP^{n-1}} \norm{(-\Delta)^{\beta/2} \left( (q R_{\theta})^{j-1}(q)\right) }_{L^1(\RR^n)}^2 \, d\sigma(\theta) \, dk.
\end{align}
Applying Lemma \ref{lemma.LapFracL1} in the Appendix we have
\begin{align*}
\norm{(-\Delta)^{\beta/2} \left( (q R_{\theta})^{j-1}(q)\right) }_{L^1} &\le C(\beta)\norm{\jp{\cdot}^{\delta}q}_{W^{\beta,2}}\norm{\jp{\cdot}^{-\delta} R_\theta ((q R_{\theta})^{j-2}(q))}_{W^{\beta,2}}\\
&\le C(\beta,\rho)\norm{q}_{W^{\beta,2}}\norm{ R_\theta ((q R_{\theta})^{j-2}(q))}_{W_{-\delta}^{\beta,2}}
\end{align*}
where in the last step we have used that $q$ is compactly supported. Now, choose $\delta$ above as in Lemma \ref{lemma.Res}.
The idea to deal with the norm on the right-hand side is to iterate Lemmas \ref{lemma.Res} and \ref{lemma.zol} following the diagram
{\footnotesize{
\begin{equation*}
\minCDarrowwidth10mm
\begin{CD}
\mathrm{W}_{\delta'}^{\beta,t_{j-1}}
@>
\mathrm{R_\theta}
>>
\mathrm{W}_{-\delta}^{\beta,r_{j-1}}
@>
{\mathrm{\rm{q \cdot}}}
>>
\mathrm{W}_{\delta'}^{\beta,t_{j-2}}
\ldots
@>
q\cdot
>>
W_{\delta'}^{\beta,t_{1}}
@>
\mathrm{R_\theta}
>>
W_{-\delta}^{\beta,r_{1}}
\\
q
@>
{}
>>
R_\theta(q)
@>
{}
>>
qR_\theta(q)
\ldots
@>
{}
>>
(qR_\theta)^{j-2}(q)
@>
{}
>>
R_\theta(qR_\theta)^{j-2}(q)
\end{CD}
\end{equation*}
}}
where $r_1 = t_{j-1} = 2$, and $r_\ell$, $t_\ell$, $\ell= 1, \dots, j-2$, have to satisfy the conditions
\begin{align*}
&0\le \frac{1}{t_\ell} -\frac{1}{2} \le \frac{1}{n+1} \hspace{8mm}\text{and} \hspace{8mm} 0\le \frac{1}{2} -\frac{1}{r_{\ell+1}} \le \frac{1}{n+1},\\
&t_\ell < 2 \hspace{29.5mm} \text{and} \hspace{8mm} 0\le \frac{1}{2} +\frac{1}{r_{\ell+1}} - \frac{1}{t_\ell} \le \frac{\beta}{n}.
\end{align*}
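When $\beta \ge n/2$, for instance, one concrete admissible choice (given here only for illustration) is
\begin{equation*}
\frac{1}{t_\ell} = \frac{1}{2} + \frac{\varepsilon}{2}, \qquad \frac{1}{r_{\ell+1}} = \frac{1}{2} - \frac{\varepsilon}{2}, \qquad 0<\varepsilon \le \min\Big(\frac{1}{2},\frac{2}{n+1}\Big),
\end{equation*}
for which $1/t_\ell - 1/r_{\ell+1} = \varepsilon$ and $1/2 + 1/r_{\ell+1} - 1/t_\ell = 1/2 - \varepsilon \le \beta/n$.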
Hence we obtain
$$\norm{ R_\theta ((q R_{\theta})^{j-2}(q))}_{W_{-\delta}^{\beta,2}} \le C^j(\beta,\rho) k^{\gamma_j}\norm{q}^{j-1}_{W^{\beta,2}},$$
where (note that in (\ref{eq:zol}) the constant depends on the support $B_\rho$ of $q$)
\begin{align*}
\gamma_j &= -(j-1) + \frac{(n-1)}{2} \sum_{\ell=1}^{j-1} \left(\frac{1}{t_\ell}- \frac{1}{r_\ell} \right) \\
&= -(j-1) + \frac{(n-1)}{2} \sum_{\ell=1}^{j-2} \left(\frac{1}{t_\ell}- \frac{1}{r_{\ell+1}}\right).
\end{align*}
Now, for small $\varepsilon > 0$, when $\beta\ge m = (n-4)/2 + 2/(n+1)$ we can choose $r_\ell$ and $t_\ell$ satisfying all the previous conditions and
$$1/t_\ell - 1/r_{\ell+1} = \max (1/2-\beta/n,\varepsilon) ,$$
for all $1\le \ell \le j-2$, and so we obtain
$$\gamma_j = -(j-1) + \frac{(n-1)}{2}(j-2) \max (1/2-\beta/n,\varepsilon). $$
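Note that with this choice the power of $k$ in (\ref{eq.prpQj}) is integrable at infinity precisely when $n-1+2\alpha - 2\beta + 2\gamma_j < -1$, that is,
\begin{equation*}
\alpha < \beta - \frac{n}{2} - \gamma_j = \beta + (j-1) - \frac{n}{2} - \frac{n-1}{2}(j-2)\max\Big(\frac{1}{2} - \frac{\beta}{n},\, \varepsilon\Big),
\end{equation*}
so that, taking $\varepsilon$ small, the integral converges for every $\alpha < \alpha_j$ with $\alpha_j$ as defined below.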
Putting all the previous estimates together in (\ref{eq.prpQj}) we obtain
\begin{multline} \label{eq:Qjimplicit}
\norm{\widetilde Q_j(q)}^2_{W^{\alpha ,2}} \le C^{2j}(\beta,\rho)\, \norm{q}^{2j}_{W^{\beta ,2}}\int_{C_0}^\infty k^{n-1 + 2\alpha -2\beta + 2\gamma_j} \, dk \\
= C^{2j}(\beta,\rho) \frac{C_0^{-2(\alpha_j-\alpha)}}{\alpha_j-\alpha} \norm{q}_{ W^{\beta,2}}^{2j},
\end{multline}
with $\alpha<\alpha_j$ and $\alpha_j = \beta + (j-1) - \frac{n}{2} - \frac{(n-1)}{2} (j-2) \max{\left( 0,\frac{1}{2} - \frac{\beta}{n} \right )}$. By density, we can extend estimate (\ref{eq:Qjimplicit}) to $q\in W^{\beta,2}(\RR^n)$ compactly supported in $B_\rho$. This follows from Lemma \ref{lemma:density}, with minor changes to take into account the restriction on the support.
Hence we can now consider $q\in W^{\beta,2}(\RR^n)$ compactly supported in $B_\rho$. Choose some $\alpha>0$. For $\beta \ge m$, $\alpha_j$ grows linearly in $j$, so for any integer $l>0$ such that $\alpha_l > \alpha$ we have by the previous estimates that
$$\left \| \sum_{j=l}^{\infty}{\widetilde Q_{j}(q)} \right\|_{W^{\alpha,2}} \le \sum_{j=l}^{\infty} \norm{\widetilde Q_j(q)}_{W^{\alpha,2}}\le \sum_{j=l}^{\infty} C_0^{-(\alpha_j-\alpha)} C^j(\alpha,\beta,\rho) \norm{q}_{W^{\beta,2}}^{j}.$$
Using the linear growth of $\alpha_j$ we can choose some $\varepsilon(\beta) =\varepsilon>0$ such that for every $j\ge l$, $(\alpha_j-\alpha) \ge j \varepsilon$. Therefore we obtain that
$$ \left \| \sum_{j=l}^{\infty}{\widetilde Q_{j}(q)} \right\|_{W^{\alpha,2}} \le \sum_{j=l}^{\infty} C_0^{-\varepsilon j } C^j(\alpha,\beta,\rho)\norm{q}_{W^{\beta,2}}^{j},$$
and the right hand side converges taking $C_0 > \left( C(\alpha,\beta,\rho)\norm{q}_{W^{\beta,2}} \right)^{1/\varepsilon}$.
\end{proof}
\section{Some limitations on the regularity of the double dispersion operator} \label{sec:5}
In this section we use a certain family of compactly supported, radial and real functions to obtain the upper bounds on the maximum regularity of the $Q_2$ operator given by Theorem \ref{teo:Q2count}. This family of functions was constructed in \cite{fix} to illustrate an analogous phenomenon in the fixed angle and full data scattering problems.
See also \cite[p. 20]{BM10} for an explicit radial counterexample for the case $\beta =(1/2)^-$ and $n=3$.
\begin{lemma}[Proposition 5.3 of \cite{fix}] \label{lemma:gbeta}
For every $0<\beta<\infty$ there is a radial, real and compactly supported function $g_\beta$ such that $\widehat{g_\beta}$ is nonnegative, $\widehat{g_\beta}(0)>0$, and for some $c>0$ we have that
\begin{equation} \label{eq:gbetasymp}
\widehat{g_\beta}(\xi) \sim \, \jp{\xi}^{-n/2-\beta} \, \, \, \text{if} \, \, \, |\xi|>c.
\end{equation}
\end{lemma}
Notice that $g_\beta \in W^{\gamma,2}$ if and only if $\gamma<\beta$.
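Indeed, by Plancherel's theorem and the asymptotics (\ref{eq:gbetasymp}),
\begin{equation*}
\norm{g_\beta}_{W^{\gamma,2}}^2 \sim \int_{\RR^n} \jp{\xi}^{2\gamma} |\widehat{g_\beta}(\xi)|^2 \, d\xi \sim \int_{|\xi|>c} \jp{\xi}^{2\gamma - n - 2\beta} \, d\xi + O(1),
\end{equation*}
and the last integral is finite if and only if $2\gamma - n - 2\beta < -n$, that is, $\gamma < \beta$.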
The construction of these functions is not difficult: the idea is to define
\begin{equation} \label{eq:lasthope}
g_\beta(x) := (\phi *\phi)(x) G_\beta(x) ,
\end{equation}
where $\phi$ is any real and radial $C^\infty_c(\RR^n)$ function and the $G_\beta$ functions are, up to normalizing factors, kernels of Bessel potential operators. Indeed, if we have that
$$
\widehat{G_\beta}(\xi) := {\jp{\xi}^{-n/2-\beta}},
$$
then $G_\beta(x)$ is a smooth and exponentially decaying function outside the origin (see, for example, \cite[Chapter V]{stein}). Hence, multiplying $G_\beta$ as in (\ref{eq:lasthope}) by a $C^\infty_c(\RR^n)$ cutoff function, nonvanishing at the origin, we get the desired asymptotic behavior of $\widehat{g_\beta}(\xi)$. The choice of the cutoff $\phi *\phi$ guarantees the positivity of $\widehat{g_\beta}(\xi)$ (see \cite{fix} for more details).
The key idea behind the proof of Theorem \ref{teo:Q2count} is to study the asymptotic behavior of $|\widehat{ {Q}_2(g_\beta)}(\eta)|$ as $|\eta| \to \infty $. This is greatly simplified by the fact that we have the explicit formula (\ref{eq:Q2}). Now, $g_\beta$ has a real Fourier transform $\widehat{g_\beta}(\xi)$ by construction, so $\widehat{Q_2(g_\beta)}$ has a real part given by the principal value term in (\ref{eq:Q2}) and an imaginary part given by $\pi S_{r=1}(g_\beta)$. As there is no possible cancellation between the real and imaginary parts, we are going to study only the asymptotic behavior of the spherical integral, which has the advantage of having a positive integrand.
To simplify notation we put $S(q) := S_{r=1}(q)$ and $ \Gamma(\eta) := \Gamma_{r=1}(\eta)$. The main estimate is the following one.
\begin{lemma} \label{lemma.example.backs}
Let $\beta > -n/2$ and assume that $q_\beta \in \mathcal S'(\RR^n)$ satisfies the following conditions:
\begin{enumerate}[i)]
\item Its Fourier transform $\widehat{q_\beta}(\xi)$ is a real and non-negative function in all of $ \RR^n$.
\item There is a constant $c>0$ such that if $|\xi|>c$, then $\widehat{q_\beta}(\xi)\ge C\jp{\xi}^{-n/2-\beta}$.
\item $\widehat{q_\beta}(\xi)$ is continuous and satisfies $\widehat{q_\beta}(0) >0$.
\end{enumerate}
Then, if $|\eta|>4c$, there is a constant $C$, independent of $\eta$, such that
\begin{equation*}
S(q_\beta)(\eta) \ge C \max\left( \jp{\eta}^{-\beta-n/2 -1},\jp{\eta}^{-2\beta-2}\right ) .
\end{equation*}
\end{lemma}
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.8]
\draw[<->] (6,0) -- (0,0) --
(0,5.3);
\draw [ semithick] (2.12132,2.12132) circle [radius=1.5];
\draw [->][semithick,blue] (0,0) -- (2.12132,2.12132);
\draw [->][semithick,blue] (2.12132,2.12132) -- (4.24264,4.24264);
\draw [dashed,semithick] ([shift=(105:3)]2.12132,2.12132) arc (105:165:3);
\draw [semithick] ([shift=(165:3)]2.12132,2.12132) arc (165:285:3);
\draw [dashed,semithick] ([shift=(285:3)]2.12132,2.12132) arc (285:345:3);
\draw [semithick] ([shift=(-15:3)]2.12132,2.12132) arc (-15:105:3);
\node [above] at (2.2,3.7) { $\Gamma_r(\eta)$ };
\node [above] at (3.2,5) { $\Gamma(\eta)$ };
\node [below] at (2.2,2) { $\eta/2$ };
\node [below] at (4.2,4) { $\eta$ };
\node [above] at (-1,4.5) { $A(\eta)$ };
\end{tikzpicture}
\caption{The largest sphere is the Ewald sphere $\Gamma(\eta):= \Gamma_1(\eta)$, and the smaller one represents the Ewald sphere $\Gamma_r(\eta)$ for some $r<1$. The dashed region is the set $A(\eta)\subset \Gamma(\eta)$.}
\label{fig.ewald1}
\end{figure}
\begin{proof}
Since $\widehat{q_\beta}$ is non-negative, we have that
\begin{equation} \label{eq.example.3}
S(q_\beta)(\eta) \ge \frac{1}{|\eta|}\int_{A(\eta)}\widehat{q_\beta}(\xi)\widehat{q_\beta}(\eta-\xi) \,d\sigma_\eta(\xi),
\end{equation}
where, if we write $\eta = |\eta|\theta$ with $\theta$ a unitary vector, $A(\eta)\subset \Gamma(\eta)$ is defined as follows
$$A(\eta) := \{ \xi \in \Gamma(\eta): |(\xi-\eta/2)\cdot \theta|\le |\eta|/4 \}.$$
That is, $A(\eta)$ is a band around the equator orthogonal to $\eta$, of width proportional to $|\eta|$ (see figure \ref{fig.ewald1}). Observe that $\xi\in A(\eta)$ if and only if $\eta-\xi\in A(\eta)$, and that in this region $|\xi| \ge |\eta|/4$. Hence, if $|\eta|>4c$ (where $c$ is given in the statement) and $\xi\in A(\eta)$, then $|\xi|>c$ and $|\eta-\xi|>c$, so from (\ref{eq.example.3}) we get
\begin{align}
\nonumber S(q_\beta)(\eta) &\ge C\frac{1}{|\eta|}\int_{A(\eta)} \jp{\eta-\xi}^{-\beta-n/2}\jp{\xi}^{-\beta-n/2} \,d\sigma_\eta(\xi)\\
\label{anadida} &\ge C \jp{\eta}^{-2\beta-n } |\eta|^{n-2} > C \jp{\eta}^{-2\beta -2},
\end{align}
where to get the last line we have used that the measure of $A(\eta)$ is proportional to $|\eta|^{n-1}$, and that $|\xi| \le |\eta|$ and $|\eta-\xi| \le |\eta|$ always hold in $\Gamma(\eta)$.
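Explicitly, since $\sigma_\eta(A(\eta)) \ge C|\eta|^{n-1}$ and $|\eta|\asymp\jp{\eta}$ for $|\eta|>4c$, the last step amounts to
\[
\frac{1}{|\eta|}\,\sigma_\eta\bigl(A(\eta)\bigr)\,\jp{\eta}^{-2\beta-n} \ge C\,|\eta|^{n-2}\,\jp{\eta}^{-2\beta-n} \ge C\,\jp{\eta}^{-2\beta-2}\,.
\]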
Now, since $\widehat{q_\beta}$ is continuous and $\widehat{q_\beta}(0)>0$, we can take a ball $B_\varepsilon$ around the origin, of radius $0 < \varepsilon< c$, such that $\widehat{q_\beta}(\xi)$ is positive in its closure. Then, if $|\eta|> 2c$, $\xi \in B_\varepsilon\cap \Gamma(\eta)$ implies $|\eta-\xi| > c$, so
\begin{align}
\nonumber S(q_\beta)(\eta) &\ge \frac{1}{|\eta|}\int_{B_\varepsilon\cap \Gamma(\eta)}\widehat{q_\beta}(\xi)\widehat{q_\beta}(\eta-\xi) \,d\sigma_\eta(\xi)\\
\label{eq.example.4} &\ge C \frac{1}{|\eta|}\int_{B_\varepsilon\cap\Gamma(\eta)} \jp{\eta-\xi}^{-\beta-n/2} \,d\sigma_\eta(\xi) \ge C \jp{\eta}^{-\beta-n/2 -1} ,
\end{align}
using that $|\eta-\xi| \le |\eta|$ always holds, and that the measure $|B_\varepsilon\cap\Gamma(\eta)|$ is bounded from below by a positive constant independent of $\eta$ (this is because the region $B_\varepsilon\cap\Gamma(\eta)$ approaches a flat disc of radius $\varepsilon$ as $|\eta|$ grows). To finish, we just have to put together (\ref{anadida}) and (\ref{eq.example.4}).
\end{proof}
The proof of Theorem \ref{teo:Q2count} follows from the previous lemma and the following simple result.
\begin{lemma} \label{lemma.Wloc}
Let $f \in \mathcal S'(\RR^n)$ be such that $\widehat f$ is a non-negative measurable function. Assume also that for some $c>0$ and $\gamma \in \RR$ we have ${\widehat{f}(\eta)\ge C \jp{\eta}^{-n/2-\gamma}}$ whenever $|\eta|>c$. Then $f\notin W_{loc}^{\alpha,2}(\RR^n)$ if $\alpha \ge \gamma$.
\end{lemma}
\begin{proof}
We can always take a function $\psi \in C^\infty_c(\RR^n)$ such that $\widehat \psi(\xi) \ge 0$ in $\RR^n$ and $\widehat \psi(0)>0$ (for example, it is enough to choose $\psi = \phi*\phi$ with $\phi \in C^\infty_c(\RR^n)$ radial and real, as in the definition of $g_\beta$). Then we can take $0<\varepsilon<c/2$ small enough that $\widehat{\psi}(\xi)$ is bounded below by a positive constant in $B_\varepsilon$. Hence, if $ |\eta| \ge 2c$,
\begin{align*}
\widehat{\psi f}(\eta) &= \int_{\RR^n} \widehat \psi(\xi) \widehat{f}(\eta-\xi) \, d\xi \\
&\ge \int_{B_\varepsilon} \widehat \psi(\xi) \widehat{f}(\eta-\xi) \, d\xi \ge C\jp{\eta}^{-n/2-\gamma}.
\end{align*}
As a consequence we have that $\psi f\notin W^{\alpha,2}(\RR^n)$ for $\alpha\ge\gamma$, which implies that $f\notin W_{loc}^{\alpha,2}(\RR^n)$ by definition of the local Sobolev spaces.
\end{proof}
\begin{proof}[Proof of Theorem $\ref{teo:Q2count}$]
By Lemma \ref{lemma:gbeta} the function $g_\beta$ satisfies all the conditions necessary to apply Lemma \ref{lemma.example.backs}, so for $\eta$ large we have
\begin{equation}\label{eq.max1}
S(g_\beta)(\eta) \ge C \max\left( \jp{\eta}^{-\beta-n/2 -1} ,\jp{\eta}^{-2\beta-2 }\right ).
\end{equation}
By (\ref{eq:Q2}), we have that
\begin{equation}\label{eq:Qg} \widehat{Q_{2}(g_\beta)}(\eta) = P(S_r(g_\beta))(\eta) + i\pi S(g_\beta)(\eta),
\end{equation}
and $\widehat{g}_\beta$ is real, so $P(S_r(g_\beta))$ and $S(g_\beta)$ are real functions of $\eta$ also. This means that if we assume $Q_{2}(g_\beta) \in W_{loc}^{\alpha,2}(\RR^n)$, we must have $\mathcal F^{-1} (S(g_\beta)) \in W_{loc}^{\alpha,2}(\RR^n) $, since there are no possible cancellations between the real and imaginary parts in (\ref{eq:Qg}).
As a consequence of (\ref{eq.max1}), applying Lemma \ref{lemma.Wloc} with $f= \mathcal F^{-1} (S(g_\beta))$ we obtain that $\alpha$ must satisfy simultaneously $\alpha< \beta+1$ and $\alpha< 2\beta- (n-4)/2$.
Hence, we have shown that for every $0<\beta<\infty$ there is a radial, real and compactly supported function $g_\beta$ such that $g_\beta \in W^{\gamma,2}$ if and only if $\gamma<\beta$, but $ Q_2(g_\beta) \in W_{loc}^{\alpha,2}(\RR^n)$ only if $\alpha< \min(\beta + 1, 2\beta -(n-4)/2)$. This is enough to conclude the proof.
\end{proof}
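As a quick arithmetic cross-check of the exponents in the previous proof (an illustrative script, not part of the argument): the two caps $\alpha<\beta+1$ and $\alpha<2\beta-(n-4)/2$ coincide exactly at $\beta=(n-2)/2$, which is also where the two lower bounds of Lemma \ref{lemma.example.backs} exchange roles.

```python
# Sanity check of the exponent bookkeeping (illustrative only, not from the
# paper). The two decay rates for S(g_beta),
#   <eta>^(-beta - n/2 - 1)   and   <eta>^(-2*beta - 2),
# correspond, via Lemma "lemma.Wloc" (hat f >= <eta>^{-n/2-gamma} rules out
# W^{alpha,2}_loc for alpha >= gamma), to alpha < beta + 1 and
# alpha < 2*beta - (n - 4)/2 respectively.

def regularity_caps(beta: float, n: int) -> tuple[float, float]:
    """Upper bounds on alpha implied by the two decay rates."""
    cap_spherical = beta + 1.0                   # from <eta>^(-beta - n/2 - 1)
    cap_product = 2.0 * beta - (n - 4.0) / 2.0   # from <eta>^(-2*beta - 2)
    return cap_spherical, cap_product

for n in (2, 3, 4, 5):
    crossover = (n - 2) / 2.0
    a, b = regularity_caps(crossover, n)
    assert abs(a - b) < 1e-12          # the two caps coincide at beta = (n-2)/2
    a_lo, b_lo = regularity_caps(crossover - 0.1, n)
    assert b_lo < a_lo                 # below the crossover, the second cap binds
```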
\section{Further Remarks}
In the introduction we have seen that there is a gap between the negative and positive results on recovery of singularities given in Theorems \ref{teo:main1} and \ref{teo:main2}. This is a consequence of Theorems \ref{teo.Qj} and \ref{teo:Q2count}, where essentially the same gap appears in the results concerning the $Q_2$ operator. It occurs in the range ${(n-4)/2 < \beta< (n-2)/2}$ (see for example figure \ref{fig.dim2y3} for the case $n=4$). What happens in this range is not known, except for some partial results in dimensions 2 and 3. In \cite[Proposition 3.1]{RV} a $1/2^-$ derivative gain for $\beta < 1/2$ is obtained in dimension 3 using finer properties of the structure of the Ewald spheres. In this work, thanks to the trace theorem, we have not used any special properties of the spheres $\Gamma_r(\eta)$ in the estimates of the $Q_2$ operator. This suggests that there is room to improve the positive results in order to narrow this gap. Another possible strategy, which we have already mentioned, is to choose a weaker scale for measuring the regularity of the $Q_2(q)$ operator. This is the approach of \cite{BFRV13} in dimension 2, where it is shown that, if $\Lambda^\alpha(\RR^2)$ denotes the Hölder class and $q\in W^{\beta,2}(\RR^2)$, $\beta\ge 0$, then, modulo a $C^\infty$ function, $q-q_B \in \Lambda^\alpha(\RR^2)$ for every $\alpha<\beta$. This is a $1^-$ derivative gain in the sense of integrability. A similar result holds also in dimension $n\ge 3$; this will be the subject of a forthcoming work.
A similar problem is what happens in the limiting case $\alpha = \beta + 1$, when $\beta \ge (n-1)/2$. It is not difficult to show, by modifying slightly the proof of Lemma \ref{lemma.K.Q2}, that there is a gain of a whole derivative when $\beta > (n-1)/2$ for the spherical operator $S_r$. Unfortunately, the same cannot be said about the principal value operator, since in the estimate of the $P_{G,B}$ term in Proposition \ref{prop:genPV} it is necessary to sacrifice an $\varepsilon$ of the regularity of the spherical operator (hence the final estimate for $\alpha'<\alpha$). Also, since this term involves cancellations, it is difficult to determine whether this is a limitation of the techniques or whether it is possible to construct a counterexample.
\section*{Acknowledgments}
I am very grateful to my PhD advisors Alberto Ruiz and Juan Antonio Barceló for their invaluable advice and constant support during the development of this work.
The author was supported by Spanish government predoctoral grant BES-2015-074055 (project MTM2014-57769-C3-1-P).
% arXiv:1709.00748 (math.AP; math-ph) -- ``Recovery of the singularities of a potential from backscattering data in general dimension''
% Abstract: We prove that in dimension $n\ge 2$ the main singularities of a complex potential $q$ having a certain a priori regularity are contained in the Born approximation $q_{B}$ constructed from backscattering data. This is achieved using a new explicit formula for the multiple dispersion operators on the Fourier transform side. We also show that ${q-q_{B}}$ can be up to one derivative more regular than $q$ in the Sobolev scale. On the other hand, we construct counterexamples showing that in general it is not possible to have more than one derivative gain, sometimes even strictly less, depending on the a priori regularity of $q$.
% arXiv:1111.4866 -- ``A strong form of the Quantitative Isoperimetric inequality''
% Abstract: We give a refinement of the quantitative isoperimetric inequality. We prove that the isoperimetric gap controls not only the Fraenkel asymmetry but also the oscillation of the boundary.
\section{Introduction and statement of the results}
In recent years there has been a growing interest in the study of the stability of a large class of geometric and functional inequalities, such as the isoperimetric and the Sobolev inequalities. After some early work going back to the beginning of the last century, the first quantitative version of the isoperimetric inequality in any dimension was proved by Fuglede in \cite{F}. He showed that if $E$ is a {\it nearly spherical set}, i.e., a Lipschitz set with barycenter at the origin and the same volume as the unit ball $B_1$, such that
\begin{equation}\label{one}
\partial E=\{z(1+u(z)):\,\,z\in\partial B_1\}\,,
\end{equation}
with $\|u\|_{W^{1,\infty}}$ small, then
\begin{equation}\label{two}
\|u\|^2_{W^{1,2}(\partial B_1)}\leq C\left[P(E)-P(B_1)\right]\,.
\end{equation}
Here $P(\cdot)$ denotes the perimeter of a set. From this estimate he was able to deduce that the perimeter deficit $P(E)-P(B_1)$ controls also the Hausdorff distance between $E$ and $B_1$, whenever $E$ is nearly spherical or convex.
\par
However, the Hausdorff distance is too strong when dealing with general sets of finite perimeter, and one must replace it (see \cite{H}) by the so-called {\it Fraenkel asymmetry} index
$$
\alpha(E):=\min_{y\in\mathbb{R}^n}\Bigl\{\frac{|E\Delta B_r(y)|}{r^n}:\,\,|B_r|=|E|\Bigr\}\,.
$$
Then, the {\it quantitative isoperimetric} inequality states that there exists a constant $C=C(n)$ such that
\begin{equation}\label{three}
\alpha(E)^2\leq CD(E)\,,
\end{equation}
where $D(E)$ stands for the {\it isoperimetric deficit}
$$
D(E):=\frac{P(E)-P(B_r)}{r^{n-1}},\qquad \text{with $|B_r|=|E|\,.$}
$$
Note that in this inequality, first proved in \cite{FuMP} with symmetrization techniques, the exponent $2$ on the left hand side is optimal, i.e., it cannot be replaced by any smaller number. Later on Figalli, Maggi and Pratelli in \cite{FMP} extended \eqref{three} to the anisotropic perimeter via an optimal transportation argument, while a short proof in the case of the standard perimeter has been recently given in \cite{CL} with an argument based on the regularity theory of area almost minimizers.
\par
In this paper we prove a stronger form of the quantitative inequality \eqref{three}. The underlying idea is that the perimeter deficit should control not only the $L^1$ distance between $E$ and some optimal ball, that is the Fraenkel asymmetry, but also the oscillation of the boundary.
\par
Let us fix some notation. Given a ball $B_r(y)$, let us denote by $\pi_{y,r}$ the projection of $\mathbb{R}^n\setminus\{y\}$ onto the boundary $\partial B_r(y)$, that is
$$
\pi_{y,r}(x):=y+r\frac{x-y}{|x-y|}\qquad\text{for all $x\not=y$}
$$
and let us define the asymmetry index as
$$
A(E):=\min_{y\in\mathbb{R}^n}\biggl\{\frac{|E\Delta B_r(y)|}{r^n}+\biggl(\frac{1}{r^{n-1}}\int_{\partial^*E}|\nu_E(x)-\nu_{B_r(y)}(\pi_{y,r}(x))|^2\,d{\mathcal{H}}^{n-1}(x)\biggr)^{1/2}\!:\,\, |B_r|=|E|\biggr\}\,,
$$
where $\partial^*E$ is the reduced boundary of $E$ and $\nu_E$ is its generalized exterior normal. Then, our main result reads as follows.
\begin{theorem}
\label{mainthm}
Let $n\geq2$. There exists a constant $C(n)$ such that for every set $E\subset\mathbb{R}^n$ of finite perimeter
\begin{equation}\label{main}
A(E)^2\leq CD(E)\,.
\end{equation}
\end{theorem}
A few comments on this inequality are in order. First, let us observe that \eqref{main} is essentially equivalent to the estimate \eqref{two} for nearly spherical sets. In fact, if $|E|=|B_1|$ and $\partial E$ is as in \eqref{one}, then the normal vector at a point $x(z)=z(1+u(z))$ is given by
$$
\nu_E(x(z))=\frac{z(1+u(z))+\nabla_\tau u(z)}{\sqrt{(1+u)^2+|\nabla_\tau u|^2}}\,,
$$
where $\nabla_\tau u$ stands for the tangential gradient of $u$ on the unit sphere. Therefore, recalling that $\|u\|_{W^{1,\infty}}$ is small, one easily gets
\[
\begin{split}
A(E)^2 & \leq \biggl[|E\Delta B_1|+\biggl(\int_{\partial E}\Bigl|\nu_E(x)-\frac{x}{|x|}\Bigr|^2d{\mathcal{H}}^{n-1}\biggr)^{1/2}\biggr]^2 \\
& \leq C \biggl[\biggl(\int_{B_1}|u|\,dx\biggr)^2+\int_{\partial B_1}\biggl(1-\frac{1+u(z)}{\sqrt{(1+u)^2+|\nabla_\tau u|^2}}\biggr)d{\mathcal{H}}^{n-1}\biggr] \\
& \leq C \biggl[\int_{B_1}|u|^2\,dx+\int_{\partial B_1}|\nabla_\tau u|^2\,d{\mathcal{H}}^{n-1}\biggr]\,.
\end{split}
\]
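In the last step we used $\bigl(\int_{B_1}|u|\,dx\bigr)^2\le C\int_{B_1}|u|^2\,dx$, which follows from the Cauchy--Schwarz inequality, together with the elementary pointwise bound (valid since $\|u\|_{W^{1,\infty}}$ is small)
\[
1-\frac{1+u}{\sqrt{(1+u)^2+|\nabla_\tau u|^2}} = \frac{\sqrt{(1+u)^2+|\nabla_\tau u|^2}-(1+u)}{\sqrt{(1+u)^2+|\nabla_\tau u|^2}} \le C\,|\nabla_\tau u|^2\,,
\]
which in turn follows from $\sqrt{a^2+b^2}-a\le b^2/(2a)$ for $a>0$.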
Hence, \eqref{main} follows by combining this inequality with \eqref{two}.
\par
The next observation is that, since the second integral in the definition of $A(E)$ behaves like the $L^2$ distance between two gradients, it should control the symmetric difference $|E\Delta B_r(y)|$, as in a Poincar\'e type inequality. This is precisely the statement of the next result.
\begin{proposition}\label{vesaprop} There exists a constant $C(n)$ such that if $E$ is a set of finite perimeter, then
\begin{equation}\label{five}
A(E)+\sqrt{D(E)}\leq C\beta(E)\,,
\end{equation}
where
\begin{equation}\label{defbeta}
\beta(E):=\min_{y\in\mathbb{R}^n}\biggl\{\biggl(\frac{1}{2r^{n-1}}\int_{\partial^*E}|\nu_E(x)-\nu_{B_r(y)}(\pi_{y,r}(x))|^2\,d{\mathcal{H}}^{n-1}(x)\biggr)^{1/2}\!:\,\,|B_r|=|E|\biggr\}\,.
\end{equation}
\end{proposition}
In view of \eqref{five}, the proof of the strong quantitative estimate \eqref{main} reduces to proving that
\begin{equation}\label{six}
\beta(E)^2\leq CD(E)\,,
\end{equation}
for a suitable constant $C=C(n)$.
\par
In order to prove this inequality we follow the strategy introduced in \cite{CL} for proving the quantitative inequality stated in \eqref{three}, with some further simplifications due to \cite{AFM}, where a different isoperimetric problem is considered (see also \cite{FM} for a similar approach).
\par
The starting point is the above observation that Fuglede's result implies \eqref{six} for nearly spherical sets. Then, we argue by contradiction assuming that there exists a sequence of equibounded sets $E_k\subset B_{R_0}$, for some $R_0>0$, $|E_k|=|B_1|$, converging in $L^1$ to the unit ball and for which \eqref{six} does not hold. The idea is to replace this sequence by minimizers $F_k$ of the following penalized problems
$$
\min\bigl\{P(F)+\frac14|\beta(F)^2-\beta(E_k)^2|+\Lambda\bigl||F|-|B_1|\bigr|:\,\,F\subset B_{R_0}\bigr\}\,,
$$
where $\Lambda>n$ is a fixed constant. Then, we show that also $F_k$ converges in $L^1$ to the unit ball. Moreover, each $F_k$ is an area almost minimizer. Thus, a well known result of B. White (see \cite{W}) yields that the sets $F_k$ actually converge in $C^1$ to the unit ball, and in particular that for $k$ large they are all nearly spherical. This immediately gives a contradiction on observing that if \eqref{six} does not hold for $E_k$, the same is true also for $F_k$.
\par
We conclude with a final remark. In order to prove the area almost minimality of the sets $F_k$ we first have to show that they are area quasiminimizers. This is a much weaker notion than almost minimality (see definition \eqref{quasimin} below), but it is enough to ensure that the sets are uniformly porous (see \cite{DS} and \cite{KKLS}). This mild regularity property turns out to be an essential tool to pass from the $L^1$ convergence to the Hausdorff convergence of the sets.
\section{Preliminaries}
We denote by $B_r(x)$ a ball with radius $r$ centered at $x$ and write $B_r$ when the center is at the origin. We set $\omega_n:=|B_1|$. If $E$ is a measurable set in $\mathbb{R}^n$ we denote by $P(E)$ its perimeter and by $\partial^*E$ its reduced boundary. The generalized outer normal will be denoted by $\nu_E$. For the precise definition of these quantities and their main properties we refer to \cite{AFP}.
A key tool in the proof of Theorem \ref{mainthm} is the result by Fuglede \cite{F}. As observed in the Introduction, it implies \eqref{main} for Lipschitz sets which are close to the unit ball in $W^{1, \infty}$.
\begin{theorem}[Fuglede]
\label{Fuglede}
Suppose that $E\subset\mathbb{R}^n$ has its barycenter at the origin, $|E|= \omega_n$, and that
\[
\partial E= \{ ( 1 + u(z) ) \, z: \,\, z \in \partial B_1\}
\]
for $u \in W^{1, \infty}(\partial B_1)$. There exist $c>0$ and $\varepsilon_0 >0$ such that if $ || u||_{W^{1, \infty}(\partial B_1)} \leq \varepsilon_0$, then
\[
D(E) \geq c \, || u||_{W^{1, 2}(\partial B_1)}^2\,.
\]
Moreover,
\begin{equation}\label{fuglede1}
\beta(E)^2\leq A(E)^2\leq C_0D(E)\,,
\end{equation}
for some positive constant $C_0$ depending only on $n$.
\end{theorem}
Another key ingredient in our proof is the regularity of area almost minimizers. To this aim, we recall that a set $F$ is an area {\it $( \Lambda, r_0)$-almost minimizer} if for every $G$, such that $G \Delta F \Subset B_r(x)$ with $r \leq r_0$, it holds
\[
P(F) \leq P(G) + \Lambda r^n.
\]
The next result is contained in \cite{W}.
\begin{theorem}[B. White]
\label{areamin}
Suppose that $F_k$ is a sequence of area $(\Lambda, r_0)$-almost minimizers such that
\[
\sup_k \, P(F_k) < \infty \quad \text{and} \quad \chi_{F_k} \to \chi_{ B_1} \quad \text{in} \,\,L^1.
\]
Then, for $k$ large, each $F_k$ is of class $C^{1, \frac{1}{2}}$ and
\[
\partial F_k = \{ (1+u_k(z))\,z \mid z \in \partial B_1 \}\,,
\]
with $u_k \to 0$ in $ C^{1, \alpha}(\partial B_1)$ for every $ \alpha \in (0, \frac{1}{2})$.
\end{theorem}
We will also use the theory of the so called area $(K, r_0)$-quasiminimizers. We say that a set $F$ is an area {\it $(K, r_0)$-quasiminimizer} if for every $G$, such that $ G \Delta F \Subset B_{r}(x)$ with $r \leq r_0$, the following inequality holds
\begin{equation}\label{quasimin}
P(F; B_{r}(x) ) \leq K \, P(G; B_{r}(x)) \,.
\end{equation}
Here $P(G; B_{r}(x)) $ stands for the perimeter of $G$ in $B_{r}(x)$.
The regularity of $(K, r_0)$-quasiminimizers is very weak. Nevertheless, we have the following result by David and Semmes \cite{DS}; see also Kinnunen, Korte, Lorent and Shanmugalingam \cite{KKLS}, where the result below is proved in a general metric space.
\begin{theorem}[David \& Semmes]
\label{david}
Suppose that $F$ is an area $(K, r_0)$-quasiminimizer. Then, up to modifying $F$ in a set of measure zero, the topological boundary of $F$ coincides with the reduced boundary, i.e.,
$\partial F = \partial^* F$.
\par
Moreover $F$ and $\mathbb{R}^n \setminus F$ are locally porous, i.e., there exist $R> 0 $ and $C>1$ such that for any $0 < r< R$ and every $x \in \partial F$ there are points $y,z \in B_r(x)$ for which
\[
B_{r/C}(y) \subset F \qquad \text{and} \qquad B_{r /C}(z) \subset \mathbb{R}^n \setminus F.
\]
\label{david.semmes}
\end{theorem}
\section{Proof of the theorem}
In this section we give the proof of Theorem~\ref{mainthm}. Since the quantities $A(E)$ and $D(E)$ in \eqref{main} are scale invariant, we shall assume from now on and without loss of generality that $|E|=\omega_n$. Moreover, in view of Proposition~\ref{vesaprop}, whose proof will be given at the end of this section, we will only need to prove the estimate \eqref{six}.
\par
Thus, we begin by giving a closer look to the oscillation term $\beta(E)$ defined in \eqref{defbeta}. Observe that by the divergence theorem we immediately have
\[
\begin{split}
\frac12\int_{\partial^*E}|\nu_E(x)-\nu_{B_r(y)}(\pi_{y,r}(x))|^2\,d{\mathcal{H}}^{n-1}(x) & =\int_{\partial^*E}\Bigl(1-\nu_E(x)\cdot\frac{x-y}{|x-y|}\Bigr)\,d{\mathcal{H}}^{n-1}(x) \\
& = P(E)-\int_E\frac{n-1}{|x-y|}\,dx\,.
\end{split}
\]
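Here the first equality holds because $\nu_E$ and $\nu_{B_r(y)}(\pi_{y,r}(x))=(x-y)/|x-y|$ are unit vectors, so that
\[
\Bigl|\nu_E-\frac{x-y}{|x-y|}\Bigr|^2 = 2-2\,\nu_E\cdot\frac{x-y}{|x-y|}\,,
\]
while the second one is the divergence theorem applied to the field $(x-y)/|x-y|$, whose divergence at $x\neq y$ equals $(n-1)/|x-y|$.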
Therefore, we may write
\begin{equation}
\label{remember}
\beta (E)^2 = P(E) -(n-1)\gamma(E)\,,
\end{equation}
where we have set
\begin{equation}
\label{potent}
\gamma (E) := \max_{y\in\mathbb{R}^n} \int_E \frac{1}{|x-y|} \, dx\,.
\end{equation}
We say that a set $E$ is \emph{centered at $y$} if
\[
\beta(E)^2 =\int_{\partial^* E} \left( 1- \nu_{E} \cdot \frac{x-y}{|x-y|} \right) \, d {\mathcal{H}}^{n-1}(x).
\]
Notice that in general a center of a set is not unique.
The following simple lemma shows that the centers of sets which are close to the unit ball in $L^1$ are close to the origin.
\begin{lemma}
\label{centerpoint}
For every $\varepsilon>0$ there exists $\delta>0$ such that if $F \subset B_{R_0}$ and $|F \Delta B_1| < \delta$, then $|y_F| < \varepsilon$ for every center $y_F$ of $F$.
\end{lemma}
\begin{proof}
We argue by contradiction and assume that there exist $F_k \subset B_{R_0}$ such that $F_k \to B_1$ in $L^1$ and $y_{F_k} \to y_0$ with $|y_0| \geq \varepsilon$, for some $\varepsilon >0$. Then we would have
\[
\int_{F_k} \, \frac{1}{|x|} \, dx \leq \int_{F_k} \, \frac{1}{|x - y_{F_k}|} \, dx.
\]
Letting $k \to \infty$, by the dominated convergence theorem the left hand side converges to $\int_{B_1} \, \frac{1}{|x|} \, dx$, while the right hand side converges to $\int_{B_1} \, \frac{1}{|x - y_0|} \, dx$. Thus we have
\[
\int_{B_1} \, \frac{1}{|x|} \, dx \leq \int_{B_1} \, \frac{1}{|x - y_0|} \, dx.
\]
By the divergence theorem we conclude that
\[
\int_{\partial B_1} 1 \, d{\mathcal{H}}^{n-1} \leq \int_{\partial B_1} \, x \cdot \frac{x- y_0}{|x - y_0|} \, d{\mathcal{H}}^{n-1}
\]
and this inequality may only hold if $y_0=0$, thus leading to a contradiction.
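Indeed, since $|x|=1$ on $\partial B_1$, by the Cauchy--Schwarz inequality
\[
x\cdot\frac{x-y_0}{|x-y_0|}\le 1 \qquad\text{for every }x\in\partial B_1\,,
\]
with equality only when $x-y_0$ is a positive multiple of $x$; for $y_0\neq0$ the inequality is strict on a set of positive ${\mathcal{H}}^{n-1}$-measure.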
\end{proof}
The next lemma states that in order to prove \eqref{six} we may always assume that our set $E$ is contained in a sufficiently large ball $B_{R_0}$. The proof follows closely the one given in \cite[Lemma~5.1]{FMP} and we only indicate the few changes needed in our case.
\begin{lemma}
\label{large.ball}
There exist $R_0 =R_0(n)>1$ and $C =C(n)$ such that for every set $E$, with $|E| = \omega_n$, we may find $E' \subset B_{R_0}$ such that $|E'| = \omega_n$ and
\begin{equation}\label{largeball1}
\beta(E)^2 \leq \beta(E')^2 + CD(E), \qquad D(E') \leq C D(E)\,.
\end{equation}
\end{lemma}
\begin{proof}
Let us assume that $D(E) \leq \frac{1}{4}(2^{1 / n} -1) $. Otherwise, observing that $\beta(E)^2\leq P(E)= P(B_1)+D(E)$, \eqref{largeball1} (and in turn \eqref{six}) follows at once taking $E'=B_1$ and a sufficiently large constant $C(n)$.
\par
Moreover, up to rotation, we may also assume
without loss of generality that
\[
{\mathcal{H}}^{n-1 }(\{ x \mid \nu_E(x) = \pm e_i \}) = 0
\]
for any $i = 1, \dots, n$.
Arguing exactly as in \cite{FMP}, we may find $\tau_1, \tau_2$ with $0 < \tau_2 -\tau_1 < \rho_0$, for some $\rho_0$ depending only on $n$, such that the set $\tilde{E} = E \cap \{x \mid \tau_1 < x_1 < \tau_2 \}$ satisfies
\begin{equation}
\label{Etilde}
|\tilde{E}| \geq |B_1| \left( 1- 2 \, \frac{D(E)}{2^{1 / n} -1} \right) \qquad \text{and} \qquad P(\tilde{E}) \leq P(E).
\end{equation}
The latter inequality follows simply from the fact that we cut $E$ by a hyperplane.
The first inequality in \eqref{Etilde} and the isoperimetric inequality yield
\[
P(\tilde{E}) \geq n\omega_n^{1/n} |\tilde{E}|^{\frac{n-1}{n}} \geq n\omega_n^{1/n} |B_1|^{\frac{n-1}{n}} \left( 1- 2 \frac{D(E)}{2^{1 / n} -1} \right)^{\frac{n-1}{n}} \geq P(B_1) \left( 1- C \, D(E) \right).
\]
From this inequality, using \eqref{remember} and \eqref{potent} and denoting by $y_{\tilde E}$ the center of $\tilde E$, we get
\begin{equation}
\label{asym-estim}
\begin{split}
\beta(E)^2 -\beta(\tilde{E})^2 &\leq ( P(E) - P(\tilde{E}) ) + \int_{\tilde{E}} \frac{n-1}{|x-y_{\tilde E}|} \, dx - \int_{E} \frac{n-1}{|x-y_{\tilde E}|} \, dx \\
&\leq D(E) + P(B_1)-P({\tilde E})
\leq C_1 D(E).
\end{split}
\end{equation}
Set now
\[
\lambda = \left( \frac{|B_1|}{|\tilde{E}|} \right)^{1/n} \qquad \text{and} \qquad E' = \lambda \tilde{E}.
\]
From the first inequality in \eqref{Etilde} we get that $1 \leq \lambda \leq 1 + C_2 D(E)$, while the second inequality yields
\[
P(E') = \lambda^{n-1} P(\tilde{E}) \leq (1 + C_3 D(E)) P(E)
\]
and the second inequality in \eqref{largeball1} follows. On the other hand from \eqref{asym-estim} we get
\[
\beta(E')^2 = \lambda^{n-1} \beta(\tilde{E})^2 \geq \beta(\tilde{E})^2 \geq \beta(E)^2 - C_1 D(E)\,,
\]
that is the first inequality in \eqref{largeball1}.
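Here we used that $\gamma$ scales as $\gamma(\lambda \tilde E)=\lambda^{n-1}\gamma(\tilde E)$: with the change of variables $x=\lambda z$,
\[
\int_{\lambda\tilde E}\frac{dx}{|x-\lambda y|} = \lambda^{n-1}\int_{\tilde E}\frac{dz}{|z-y|}\,,
\]
and taking the maximum over $y$ in \eqref{potent} gives the claim; combined with \eqref{remember} and $P(\lambda\tilde E)=\lambda^{n-1}P(\tilde E)$, this yields $\beta(\lambda\tilde E)^2=\lambda^{n-1}\beta(\tilde E)^2$.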
The proof is completed by repeating the same argument for all coordinate axes.
\end{proof}
We will also need the following corollary of the isoperimetric inequality.
\begin{lemma}
\label{penal.ball}
Suppose that $R_0>1$ and $\Lambda >n$. Then, up to a translation, the unit ball $B_1$ is the unique minimizer of
\[
P(F) + \Lambda \big| |F| - \omega_n \big|
\]
among all sets contained in $B_{R_0}$.
\end{lemma}
\begin{proof}
Suppose that $E$ is a minimizer of the functional above. Then we have
\[
P(E) \leq P(E)+ \Lambda \big| |E| - \omega_n \big| \leq P(B_1) = n \omega_n.
\]
Thus the isoperimetric inequality implies that $|E| \leq \omega_n$. Therefore, by the minimality of $E$ and the isoperimetric inequality again, we have
\[
\begin{split}
0 &\geq P(E) + \Lambda \big| |E| - \omega_n \big|- P(B_1) \\
&\geq n \omega_n^{1/n} \, |E|^{\frac{n-1}{n}} + \Lambda ( \omega_n - |E|) - n \omega_n
\geq (\Lambda - n) (\omega_n - |E|) \,.
\end{split}
\]
Hence, $E$ is a ball of radius one.
\end{proof}
The following lower semicontinuity lemma will be used in the proof of Theorem~\ref{mainthm}. It deals with the functional
\begin{equation}
\label{functional}
{\mathcal{F}}(E) = P(E) + \Lambda \big| |E| - \omega_n \big| + \frac{1}{4}| \beta (E)^2 - \varepsilon|\,.
\end{equation}
\begin{lemma}
\label{lowersemi}
The functional \eqref{functional} is lower semicontinuous with respect to the $L^1$-convergence in $B_{R_0}$.
\end{lemma}
\begin{proof}Let us first prove that the functional $\gamma$ defined in \eqref{potent} is continuous with respect to $L^1$ convergence in $B_{R_0}$, that is
\begin{equation}
\label{potent.cont}
\lim_{k\to \infty} \gamma (E_k) = \gamma(E)\,,
\end{equation}
whenever $E_k,E\subset B_{R_0}$ and $E_k \to E$ in $L^1$. To this aim, suppose that the sets $E_k$ and $E$ are centered at $y_k$ and at $y_0$, respectively. From the definition of $\gamma$ we obtain
\[
\int_{E_k} \frac{1}{|x-y_0|} \, dx \leq \gamma(E_k)
\]
and therefore $\liminf_{k\to \infty} \gamma (E_k) \geq \gamma(E)\,.$ On the other hand, choose $r_k$ such that $|B_{r_k}| = |E_k \setminus E|$ and use the divergence theorem to obtain
\[
\begin{split}
\gamma(E_k) &\leq \int_{E} \frac{1}{|x-y_k|} \, dx + \int_{ E_k \setminus E} \frac{1}{|x-y_k|} \, dx \leq \gamma(E) + \int_{B_{r_k}(y_k)} \frac{1}{|x-y_k|} \, dx\\&= \gamma(E) + \frac{1}{n-1} P(B_{r_k}) .
\end{split}
\]
Therefore $\limsup_{k\to \infty} \gamma (E_k) \leq \gamma(E)$ and \eqref{potent.cont} follows.
To show the lower semicontinuity of ${\mathcal{F}}$, let us consider $E_k,E\subset B_{R_0}$, with $E_k \to E$ in $L^1$. Without loss of generality, we may assume that
\[
\liminf_{k\to \infty} {\mathcal{F}}(E_k) = \lim_{k\to \infty} {\mathcal{F}}(E_k)<\infty\,.
\]
Passing possibly to a subsequence we may also assume that $\lim_{k\to \infty} P(E_k) = \alpha$. By the lower semicontinuity of the perimeter we have
\[
\alpha \geq P(E)\,.
\]
Then, recalling \eqref{remember}, we get
\[
\begin{split}
\lim_{k\to \infty} {\mathcal{F}}(E_k) &= \alpha + \Lambda \big| |E| - \omega_n \big| + \frac{1}{4}| \alpha - (n-1) \gamma(E) - \varepsilon| \\
&\geq {\mathcal{F}}(E) + (\alpha - P(E)) - \frac{1}{4}|\alpha - P(E)| \geq {\mathcal{F}}(E)\,,
\end{split}
\]
thus concluding the proof.
\end{proof}
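The computation $\int_{B_{r_k}(y_k)}|x-y_k|^{-1}\,dx = P(B_{r_k})/(n-1)$ used in the proof above follows from polar coordinates, since $\int_{B_r}|x|^{-1}\,dx=\int_0^r n\omega_n s^{n-2}\,ds=n\omega_n r^{n-1}/(n-1)$. A short numerical check of this identity (an illustrative sketch; the helper names are ours, not from the paper):

```python
import math

def omega(n: int) -> float:
    """Volume omega_n of the unit ball in R^n."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def perimeter_ball(n: int, r: float) -> float:
    """P(B_r) = n * omega_n * r^(n-1)."""
    return n * omega(n) * r ** (n - 1)

def integral_inv_norm(n: int, r: float, steps: int = 20_000) -> float:
    """Midpoint rule for int_{B_r} |x|^{-1} dx = int_0^r n*omega_n*s^(n-2) ds."""
    h = r / steps
    return sum(n * omega(n) * ((i + 0.5) * h) ** (n - 2) * h for i in range(steps))

# Verify int_{B_r} |x|^{-1} dx = P(B_r) / (n - 1) in several dimensions.
for n in (2, 3, 5):
    for r in (0.5, 1.0, 2.0):
        lhs = integral_inv_norm(n, r)
        rhs = perimeter_ball(n, r) / (n - 1)
        assert abs(lhs - rhs) < 1e-6 * max(1.0, rhs)
```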
We are now ready to prove the main result.
\begin{proof}[\textbf{Proof of Theorem \ref{mainthm}}]
Let $c_0>0 $ be a constant which will be chosen at the end of the proof. Thanks to Lemma~ \ref{large.ball} and Proposition~\ref{vesaprop} it is sufficient to prove that there exists $\delta_0>0$ such that, if $D(E) \leq \delta_0$ then
$$
D(E)\geq c_0 \beta(E)^2
$$
for all $ E \subset B_{R_0}, |E| = \omega_n.$
We argue by contradiction and assume that there exists a sequence of sets $E_k \subset B_{R_0}$ such that $|E_k| = \omega_n$, $P (E_k) \to P(B_1)$ and
\begin{equation}
\label{contradict}
P(E_k) < P(B_1) + c_0 \, \beta (E_k)^2.
\end{equation}
By compactness we have that, up to a subsequence, $ E_k \to E_{\infty}$ in $L^1$ and by the lower semicontinuity of the perimeter we immediately conclude that $ E_{\infty}$ is a ball of radius one. Set $\varepsilon_k := \beta (E_k)$.
In the proof of Lemma \ref{lowersemi} it was shown that the functional $\gamma$ defined in \eqref{potent} is continuous with respect to $L^1$ convergence. Therefore, since $E_k$ converges in $L^1$ to a ball of radius one and $P (E_k) \to P(B_1)$, we have that
\[
\varepsilon_k^2 = P(E_k) -(n-1)\, \gamma(E_k) \to 0\,.
\]
As in \cite{AFM} we replace each set $E_k$ by a minimizer $F_k$ of the following problem
\begin{equation}
\label{contr-funct}
\min \, \{ P(F) + \Lambda \big| |F| - \omega_n \big| + \frac{1}{4}| \beta (F)^2 - \varepsilon_k^2|: \,\, F \subset B_{R_0}\}
\end{equation}
for some fixed $\Lambda > n$. By Lemma~\ref{lowersemi} we know that the functional above is lower semicontinuous with respect to $L^1$-convergence of sets and therefore a minimizer exists.
\noindent\textbf{Step 1:} \, Up to a subsequence, we may assume that $F_k \to F_{\infty}$ in $L^1$. Since $F_k$ minimizes \eqref{contr-funct} we have from \eqref{contradict} and Lemma \ref{penal.ball} that
\[
\begin{split}
P(F_k) + \Lambda \big| |F_k| - \omega_n \big|+ \frac{1}{4}| \beta (F_k)^2 - \varepsilon_k^2| &\leq P(E_k) < P(B_1) + c_0 \, \varepsilon_k^2 \\
&\leq P(F_k) + \Lambda \big| |F_k| - \omega_n \big| + c_0 \, \varepsilon_k^2\,.
\end{split}
\]
Hence $| \beta (F_k)^2 - \varepsilon_k^2| \leq 4 c_0 \, \varepsilon_k^2$, which implies $\beta (F_k) \to 0$ and
\begin{equation}
\label{good.obs}
\varepsilon_k^2 \leq \frac{1}{1- 4c_0} \beta (F_k)^2\,.
\end{equation}
Therefore $F_{\infty}$ is a minimizer of the problem
\[
\min \{ P(F) + \Lambda \big| |F| - \omega_n \big|: \,\, F \subset B_{R_0}\}\,.
\]
Thus by Lemma~\ref{large.ball} we conclude that $F_{\infty}$ is a ball $B_1(x_0)$ for some $x_0$.
\noindent\textbf{Step 2:} \,We claim that for any $\varepsilon>0$, $B_{1- \varepsilon}(x_0) \subset F_k \subset B_{1+\varepsilon}(x_0) $ for $k$ large enough.
To this aim we show that the sets $F_k$ are area $(K, r_0)$-quasiminimizers and use Theorem~\ref{david.semmes}. Let $G\subset\mathbb{R}^n$ be such that $G \Delta F_k \Subset B_r(x)$, $r \leq r_0$.
\textit{Case 1:} Suppose that $B_r(x) \subset B_{R_0}$. By the minimality of $F_k$ we obtain
\begin{equation}
\label{ach}
P (F_k) \leq P(G ) + \frac{1}{4}| \beta(F_k)^2 - \beta(G)^2| + \Lambda \big| |F_k| - |G| \big|\,.
\end{equation}
Assume that $\beta(F_k)\geq\beta(G)$ (otherwise the argument is similar) and denote by $y_G$ the center of $G$. Then we get
\[
\begin{split}
&| \beta(F_k)^2 - \beta(G)^2| \leq \int_{\partial^*F_k}\!\left( 1- \nu_{F_k} \cdot \frac{z-y_G}{|z-y_G|} \right) d {\mathcal{H}}^{n-1}(z)-\int_{\partial^* G}\! \left( 1- \nu_{G} \cdot \frac{z-y_G}{|z-y_G|} \right) d {\mathcal{H}}^{n-1}(z) \\
&\qquad\quad= \int_{\partial^* F_k\cap B_r(x)}\! \left( 1- \nu_{F_k} \cdot \frac{z-y_G}{|z-y_G|} \right) d {\mathcal{H}}^{n-1}(z)-\int_{\partial^* G\cap B_r(x)}\! \left( 1- \nu_{G} \cdot \frac{z-y_G}{|z-y_G|} \right) d {\mathcal{H}}^{n-1}(z) \\
&\qquad\quad\leq 2\bigl[P(F_k ; B_r(x)) + P(G ; B_r(x)) ]\,,
\end{split}
\]
where $P(E; B_r(x))$ stands for the perimeter of $E$ in $B_r(x)$. Therefore, from \eqref{ach} we get
$$
P(F_k ; B_r(x)) \leq 3P(G ; B_r(x))+2\Lambda \big| |F_k| - |G| \big| \,.
$$
From the above inequality the $(K, r_0)$-quasiminimality immediately follows by observing that
\[
|F_k \Delta G| \leq \omega_n^{1/n} \, r \, |F_k \Delta G|^{\frac{n-1}{n}} \leq C(n)\, r \, [ P(F_k ; B_r(x)) + P(G ; B_r(x)) ]
\]
and choosing $r_0 $ sufficiently small.
\textit{Case 2:} If $ |B_r(x) \setminus B_{R_0} | >0$, we may write
\[
\begin{split}
&P(F_k ; B_r(x)) - P(G ; B_r(x)) \\
&= P(F_k ; B_r(x)) - P(G \cap B_{R_0} ; B_r(x)) + P(G \cap B_{R_0} ; B_r(x)) - P(G ; B_r(x)) \\
&= P(F_k; B_r(x)) - P(G \cap B_{R_0} ; B_r(x)) + P (B_{R_0}) - P(G \cup B_{R_0}) \\
&\leq P(F_k; B_r(x)) - P(G \cap B_{R_0} ; B_r(x)).
\end{split}
\]
From Case 1 we have that this term is less than $(K-1) P(G \cap B_{R_0}; B_r(x))$, which in turn is smaller than $(K-1) P(G; B_r(x))$. Hence, all the $F_k$ are $(K, r_0)$-quasiminimizers with uniform constants $K$ and $r_0$.
The claim then follows from the theory of $(K, r_0)$-quasiminimizers and the fact that $F_k \to B_1(x_0)$ in $L^1$. Indeed, arguing by contradiction, assume that there exists $0<\varepsilon_0<2r_0$ such that for infinitely many $k$ one can find $x_k \in \partial F_k $ for which
\[
x_k \notin B_{1+ \varepsilon_0}(x_0) \setminus B_{1- \varepsilon_0}(x_0)\,.
\]
Let us assume that $x_k \in B_{1- \varepsilon_0}(x_0)$ for infinitely many $k$ (otherwise, the argument is similar). From Theorem \ref{david} it follows that there exist $y_k \in B_{\frac{\varepsilon_0}{2}}(x_k)$ such that $B_{\frac{\varepsilon_0}{2C}}(y_k) \subset B_1(x_0) \setminus F_k$. This implies
\[
|B_1(x_0) \setminus F_k| \geq |B_{\frac{\varepsilon_0}{2C}}| >0,
\]
which contradicts the fact that $F_k \to B_1(x_0)$ in $L^1$, thus proving the claim.
\noindent\textbf{Step 3:}
Let us now translate $F_k$, for $k$ large, so that the resulting sets, still denoted by $F_k$, are contained in $B_{R_0}$, have their barycenters at the origin and converge to $B_1$.
We are going to use Theorem~\ref{areamin} to show that $F_k$ are $C^{1,1/2}$ and converge to $B_1$ in $C^{1, \alpha}$ for all $\alpha<1/2$. To this aim, fix a small $\varepsilon>0$. From Step 2 we have that for $k$ large
\begin{equation}\label{incl}
B_{1-\varepsilon}\subset F_k\subset B_{1+\varepsilon}\,.
\end{equation}
We want to show that when $k$ is large $F_k$ is a $(\Lambda',r_0)$-almost minimizer for some constants $\Lambda',r_0$ to be chosen independently of $k$.
To this aim,
fix a set $G\subset\mathbb{R}^n$ such that $G \Delta F_k \Subset B_r(y) $, with $r<r_0$.
If $B_r(y) \subset B_{1- \varepsilon}$, from \eqref{incl} it follows that $G \Delta F_k \Subset F_k$ for $k$ large enough. This immediately yields $P(F_k) \leq P(G)$.
If $B_r(y) \not \subset B_{1- \varepsilon}$, choosing $r_0$ and $\varepsilon$ sufficiently small we have that
\begin{equation}\label{empty}
B_r(y)\cap B_{1/2}=\emptyset\,.
\end{equation}
Denote by $y_{F_k}$ and $y_G$ the centers of $F_k$ and $G$, respectively. If $\varepsilon$ is sufficiently small, from \eqref{incl} and Lemma \ref{centerpoint} we have that for $k$ large
\begin{equation}
\label{not-far}
|y_{F_k} | \leq \frac14 \quad \text{and} \quad |y_G | \leq \frac14\,.
\end{equation}
By the minimality of $F_k$ we have
\[
P(F_k) \leq P(G) + \frac{1}{4}| P(F_k) - P(G)| + \Lambda \big| |F_k| - |G| \big| + \frac{n-1}{4} |\gamma(F_k) - \gamma(G) |,
\]
which immediately implies
\begin{equation}
\label{compare}
P(F_k) \leq P(G) + 2 \Lambda | F_k \Delta G|+ (n-1)|\gamma(F_k) - \gamma(G) |.
\end{equation}
We may estimate the last term simply by
\[
\gamma(F_k) - \gamma(G) \leq \int_{F_k} \frac{1}{|x-y_{F_k}|} \, dx -\int_{G} \frac{1}{|x-y_{F_k}|} \, dx \leq \int_{F_k \Delta G} \frac{1}{|x-y_{F_k}|} \, dx
\]
and
\[
\gamma(G) - \gamma(F_k) \leq \int_{G} \frac{1}{|x-y_G|} \, dx -\int_{F_k} \frac{1}{|x-y_G|} \, dx \leq \int_{F_k \Delta G} \frac{1}{|x-y_G|} \, dx.
\]
Therefore, recalling \eqref{empty} and \eqref{not-far}, we have
$$
|\gamma(F_k) - \gamma(G) | \leq 4|F_k \Delta G|\,.
$$
From this estimate and inequality \eqref{compare} we may then conclude that
\[
P(F_k) \leq P(G) + ( 2\Lambda + 4(n-1)) \, |F_k \Delta G| \leq P(G) + \Lambda' \, r^n.
\]
Hence, the sets $F_k$ are $(\Lambda', r_0)$-almost minimizers with uniform constants $\Lambda'$ and $r_0$.
Thus, Theorem \ref{areamin} yields that the $F_k$ are of class $C^{1, 1/2}$ and that, for $k$ large,
\begin{equation}
\label{C^1-conver}
\partial F_k = \{ (1 + u_k(z))z:\,\, z \in \partial B_1\}
\end{equation}
for some $u_k\in C^{1,1/2}(\partial B_1)$ such that $u_k \to 0$ in $C^{1}(\partial B_1)$.
\smallskip
\noindent\textbf{Step 4:}
By the minimality of $F_k$, \eqref{contradict} and \eqref{good.obs} we have
\begin{equation}
\label{almost.there}
P(F_k) + \Lambda \big| |F_k| - \omega_n \big| \leq P(E_k) < P(B_1) + c_0 \varepsilon_k ^2\leq P(B_1) + \frac{c_0}{1 -4c_0} \beta(F_k)^2\,.
\end{equation}
We are almost in a position to use Theorem \ref{Fuglede} to obtain a contradiction. We only need to rescale $F_k$ so that the volume constraint is satisfied. Thus, set $F_k' := \lambda_k F_k$, where $\lambda_k$ is such that $\lambda_k ^n|F_k| = \omega_n $. Then $\lambda_k \to 1 $, and the sets $F_k' $ also converge to $B_1$ in $C^1$ and have their barycenters at the origin. Therefore, since $\Lambda>n$, $P(F_k)\to n\omega_n$ and $|F_k|\to\omega_n$, we have that for $k$ sufficiently large
\begin{equation}
\label{scaling}
| P(F_k') - P(F_k) | = |\lambda_k^{n-1} - 1| \, P(F_k) \leq \Lambda \, |\lambda_k^n - 1 | \, |F_k|= \Lambda \, \big| |F_k'| - |F_k| \big|.
\end{equation}
Then \eqref{almost.there} and \eqref{scaling} yield
\[
\begin{split}
P(F_k') &\leq P(F_k) + \Lambda \big| |F_k| - \omega_n \big| < P(B_1) + \frac{c_0}{1 -4c_0} \beta(F_k)^2 \\
&= P(B_1) + \frac{c_0 \, \lambda_k^{1-n}}{1 -4c_0} \beta(F_k')^2\,,
\end{split}
\]
which contradicts \eqref{fuglede1} if $2c_0/(1-4c_0)<1/C_0$ and $k$ is large.
\end{proof}
We conclude by proving that the oscillation index $\beta(E)$ defined in \eqref{defbeta} controls the total asymmetry $A(E)$.
\begin{proof}[\textbf{Proof of Proposition~\ref{vesaprop}}]
Let $E$ be a set of finite perimeter such that $|E| = \omega_n$ and assume that
$E$ is centered at the origin, i.e.,
\[
\beta(E)^2 = \int_{\partial^* E} \left( 1- \nu_{E} \cdot \frac{x}{|x|} \right) \, d {\mathcal{H}}^{n-1}.
\]
By the divergence theorem we may write
\[
\begin{split}
\int_{\partial^* E} \nu_{E} \cdot \frac{x}{|x|} \, d {\mathcal{H}}^{n-1} - P(B_1) &= \int_{ E} \frac{n-1}{|x|} \, dx - \int_{ B_1} \frac{n-1}{|x|} \, dx\\
&= \int_{ E \setminus B_1} \frac{n-1}{|x|} \, dx - \int_{ B_1 \setminus E} \frac{n-1}{|x|} \, dx\,.
\end{split}
\]
This yields the equality
\begin{equation}
\label{asym.equlity}
\beta(E)^2 = D(E) - \int_{ E \setminus B_1} \frac{n-1}{|x|} \, dx + \int_{ B_1 \setminus E} \frac{n-1}{|x|} \, dx.
\end{equation}
Let us estimate the last two terms in \eqref{asym.equlity}. Since $|E| = |B_1|$ we have
\begin{equation}
\label{definition.a}
|E \setminus B_1| = |B_1 \setminus E| =:a.
\end{equation}
Denote by $A(R,1) = B_R \setminus B_1$ and $A(1,r) = B_1 \setminus B_r$ two annuli such that $|A(R,1)| = |A(1, r)| =a$, where $a$ is defined in \eqref{definition.a}. In other words
\[
R = \left( 1 + \frac{a}{\omega_n}\right)^{1/n} \qquad \text{and} \qquad r = \left( 1 - \frac{a}{\omega_n}\right)^{1/n}.
\]
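The relation between $a$ and the radii $R,r$ is elementary and can be sanity-checked numerically; the following sketch (ours, with illustrative values of $n$ and $a$, not taken from the text) verifies $|A(R,1)| = |A(1,r)| = a$:

```python
import math

def omega(n):
    # volume of the unit ball in R^n: pi^(n/2) / Gamma(n/2 + 1)
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def R_outer(a, n):
    # radius R with |B_R \ B_1| = omega_n (R^n - 1) = a
    return (1.0 + a / omega(n)) ** (1.0 / n)

def r_inner(a, n):
    # radius r with |B_1 \ B_r| = omega_n (1 - r^n) = a
    return (1.0 - a / omega(n)) ** (1.0 / n)
```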
By construction $|A(R,1)| = |E \setminus B_1|$. Hence, we have that
\[
\int_{ E \setminus B_1} \frac{n-1}{|x|} \, dx \leq \int_{ A(R,1)} \frac{n-1}{|x|} \, dx\,,
\]
since the weight $\frac{1}{|x|}$ gets smaller the further the set is from the unit sphere. Similarly, we have
\[
\int_{ B_1 \setminus E} \frac{n-1}{|x|} \, dx \geq \int_{A(1, r)} \frac{n-1}{|x|} \, dx.
\]
Therefore we may estimate \eqref{asym.equlity} by
\begin{equation}
\label{long.calculation}
\begin{split}
\beta(E)^2 &\geq D(E) - \int_{ A(R,1)} \frac{n-1}{|x|} \, dx + \int_{ A(1, r)} \frac{n-1}{|x|} \, dx \\
&= D(E)- n\bigl[\omega_n (R^{n-1} -1) - \omega_n(1 - r^{n-1})\bigr] \\
&= D(E) + n\omega_n \left(2 - \left( 1 + \frac{a}{\omega_n}\right)^{\frac{n-1}{n}} - \left( 1 - \frac{a}{\omega_n}\right)^{\frac{n-1}{n}} \right).
\end{split}
\end{equation}
The function $f(t) = (1 + t)^{\frac{n-1}{n}}$ is uniformly concave in $[-1,1]$, i.e.,
\[
\frac{1}{2}\left(f(t) + f(s) \right) \leq f \left(\frac{t}{2} + \frac{s}{2} \right) - c_n |t-s|^2
\]
for $c_n = - \frac{1}{8} \left( \sup_{t \in (-1,1)} f''(t)\right) = \left( \frac{1}{8n} \cdot \frac{n-1}{n} \right) 2^{\frac{-n-1}{n}} >0$. Therefore, recalling \eqref{definition.a}, we may estimate \eqref{long.calculation} by
\[
\beta(E)^2 \geq D(E) + \frac{8nc_n}{\omega_n} \, a^2 = D(E) + {\tilde c}_n \, (|E \setminus B_1| + |B_1 \setminus E|)^2.
\]
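The uniform concavity of $f$ can also be checked numerically. The following sketch (ours, not part of the proof) verifies the two-point inequality on a grid with the strong-concavity constant $m/8$, where $m=\inf_{(-1,1)}(-f'')=\frac{n-1}{n^2}\,2^{-(n+1)/n}$ comes from the standard Taylor argument:

```python
import numpy as np

def f(t, n):
    # f(t) = (1 + t)^((n-1)/n)
    return (1.0 + t) ** ((n - 1.0) / n)

def strong_concavity_constant(n):
    # m = inf_{(-1,1)} (-f'') = (n-1)/n^2 * 2^(-(n+1)/n);
    # the two-point inequality then holds with c = m/8 (Taylor expansion
    # of f around the midpoint of t and s)
    m = (n - 1.0) / n**2 * 2.0 ** (-(n + 1.0) / n)
    return m / 8.0

def concavity_holds(n, grid):
    c = strong_concavity_constant(n)
    for t in grid:
        for s in grid:
            lhs = 0.5 * (f(t, n) + f(s, n))
            rhs = f(0.5 * (t + s), n) - c * (t - s) ** 2
            if lhs > rhs + 1e-12:
                return False
    return True
```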
Since $|E \setminus B_1| + |B_1 \setminus E| = |E \Delta B_1| $, we get
\[
\begin{split}
(1+ {\tilde c}_n) \beta(E)^2 &\geq D(E) +{\tilde c}_n \, \left( \int_{\partial^* E} \left( 1- \nu_{E} \cdot \frac{x}{|x|} \right) \, d {\mathcal{H}}^{n-1} + |E \Delta B_1| ^2 \right) \\
&\geq D(E) + c \, A(E)^2\,.
\end{split}
\]
Hence, the assertion follows.
\end{proof}
\section*{Acknowledgment}
\noindent This research was supported by the 2008 ERC Advanced Grant 226234 ``Analytic Techniques for
Geometric and Functional Inequalities''.
\def\refa#1{(\ref{#1})}
\def\R_+^{1+3}{\mathbb{R}_+^{1+3}}
\def\Linfx#1{{L^\infty_{#1}}}
\def\Linftx#1{{L^\infty_{1,#1}}}
\def\Linftau#1#2{{L^\infty_{#1,#2}}}
\def\<{\langle}
\def\>{\rangle}
\def\norm#1{\<#1\>}
\def\nnorm#1{(1+|#1|)}
\def\w#1{\widetilde{#1}}
\begin{document}
\title{Linear and nonlinear tails I: \\general results and perturbation theory}
\author{Nikodem Szpak}
\affiliation{Max-Planck-Institut f{\"u}r
Gravitationsphysik, Albert-Einstein-Institut, Golm, Germany}
\date{\today}
\begin{abstract}
For nonlinear wave equations with a potential term we prove pointwise space-time decay estimates and develop a perturbation theory for small initial data. We show that the perturbation series has a positive convergence radius by a method which reduces the wave equation to an algebraic one.
We demonstrate that already first and second perturbation orders, satisfying linear equations, can provide precise information about the decay of the full solution to the nonlinear wave equation.
In a forthcoming publication (part II) we address the issue of optimal decay estimates and precise asymptotics under spherical symmetry where the perturbation equations can be solved almost exactly.
\end{abstract}
\maketitle
\section{Introduction}
It is a well known fact that the presence of a long-range potential term (power-law decay at spatial infinity) in the wave equation violates the Huygens principle and gives rise to a late-time tail in the solution with a power-law decay (both powers are related) \cite{Strauss-T, NS-WaveDecay}. It is not so well known that nonlinear terms like $u^p$ cause the same effect.
We study equations where both these effects are present and give pointwise decay estimates on the solutions. Further, we develop a perturbation theory for these equations and by its means argue that the presented estimates give optimal decay rates at late times. A rigorous proof of this fact will appear in a forthcoming publication \cite{NS-PB_Tails} (part II).
We consider linear and nonlinear wave equations with a potential term of the general form
\begin{equation} \label{wave-eq}
\Box u + V u = F(u)
\end{equation}
in 3 spatial dimensions, i.e. $u:(t,x)\in \mathbb{R}_+\times \mathbb{R}^3\equiv\R_+^{1+3}\rightarrow\mathbb{R}$, and solve the initial value problem with
\begin{equation} \label{init-data}
u(0,x)=f(x),\qquad \partial_t u(0,x)=g(x).
\end{equation}
First, we construct an iteration scheme and show its convergence in a weighted space-time ${L^\infty}$-norm, which reproduces the decay estimate from \cite{Strauss-T, NS-WaveDecay}
\begin{equation*}
|u(t,x)| \leq \frac{C}{(1+t+|x|)(1+|t-|x||)^{q-1}} \qquad \forall (t,x)\in\R_+^{1+3}
\end{equation*}
with $q:=\min(m-1,k,p-1)$ provided the potential $V$ and the initial data $f,g$ satisfy pointwise bounds
\begin{equation*}
|V(x)| \leq \frac{V_0}{\nnorm{x}^k},\quad k>2
\end{equation*}
\begin{equation*}
|f(x)| \leq \frac{f_0}{\nnorm{x}^{m-1}}, \quad
|\nabla f(x)| \leq \frac{f_1}{\nnorm{x}^m}, \quad
|g(x)| \leq \frac{g_0}{\nnorm{x}^m},\quad m>3.
\end{equation*}
with small $V_0, f_0, f_1, g_0$ and the nonlinearity is analytic and satisfies, for $p>1+\sqrt{2}$,
\begin{equation*}
|F(u)|\leq F_1 |u|^p,\quad |F(u)-F(v)|\leq F_2 |u-v| \max(|u|,|v|)^{p-1}\quad \text{for } |u|, |v|<1.
\end{equation*}
Next, we construct a perturbation series representing the solution $u$ and prove its convergence (with a finite convergence radius) in the same weighted space-time ${L^\infty}$-norm. This implies pointwise convergence in $\R_+^{1+3}$, which allows us to control the decay at every perturbation order and to obtain an estimate on the remainder of the perturbation series at any order. Finally, if we can show that at some perturbation level our decay estimate is optimal, i.e.\ we know the true asymptotics for late times (which is not very difficult because the perturbation equations are linear), then we immediately know the asymptotics of $u$. It is the same as that of the given perturbation order, because all higher terms in the perturbation series, summed up, are too small to modify the asymptotics. The issue of optimal decay estimates and precise asymptotics, compared with numerical results, will be addressed in a forthcoming publication \cite{NS-PB_Tails} focused on spherical symmetry, where the perturbation equations can be solved almost exactly.
The proof of convergence of the perturbation series is essential for justifying the perturbation scheme as a rigorous approximation and being able to provide exact decay rates. We show it by relating the (inverted) wave equation
\begin{equation*}
u = \Box^{-1} F(u) - \Box^{-1}(Vu) + \varepsilon I(f,g)
\end{equation*}
(where $\varepsilon I(f,g)$ stands for the initial-data contribution to the solution of the free wave equation $\Box u=0$) to an algebraic equation of a similar form
\begin{equation*}
W = C \w{F}(W) + \delta W + \varepsilon D,
\end{equation*}
($\w{F}$ is obtained from $F$ by a transformation of its Taylor series), which arises from a comparison of the perturbation schemes for both problems. We make the interesting observation that the nonlinear wave equation has a solution $u(\varepsilon)$ analytic in $\varepsilon$, and hence representable by a convergent series in $\varepsilon$, if the same holds for the solution $W(\varepsilon)$ of the corresponding algebraic equation. The latter, however, is always true when $F(u)$ is analytic at $u=0$, which we assume.
Regarding regularity, we can stay on the safe side and consider only classical solutions, i.e.\ assume $(f,g)\in\mathcal{C}^3(\mathbb{R}^3)\times\mathcal{C}^2(\mathbb{R}^3)$, $V\in\mathcal{C}^2(\mathbb{R}^3)$ and $F\in\mathcal{C}^2(\mathbb{R})$, and obtain $u\in\mathcal{C}^2(\R_+^{1+3})$. However, all results remain true also for weak solutions, where $(f,g)\in\mathcal{C}^1(\mathbb{R}^3)\times\mathcal{C}^0(\mathbb{R}^3)$, $V\in\mathcal{C}^0(\mathbb{R}^3)$ and $F\in\mathcal{C}^0(\mathbb{R})$, and we have $u\in\mathcal{C}^0(\R_+^{1+3})$, because Lemmas \ref{Lem:init-data}--\ref{Lem:decay}, which constitute the main ``engine'' of all estimates, preserve continuity (see \cite{NS-WaveDecay} for a detailed discussion of weak solutions).
This paper is organized as follows. It has three main sections, addressing the linear wave equation with a potential, the nonlinear wave equation without a potential, and the nonlinear wave equation with a potential, respectively. The idea is to develop tools for the simplest, linear problem and then generalize them to the nonlinear situation.
Every section has subsections presenting an iterative and a perturbative approach to the construction of solutions, together with a discussion of the optimal decay rates. The Appendix collects some lemmas used in the proofs, cited from other works.
\subsection*{Notation}
With the symbol $\<x\>:=1+|x|$ we define spatial and space-time weighted-$L^\infty$ norms
\begin{equation*}
\| f \|_\Linfx{m} := \| \norm{x}^m f(x) \|_{L^\infty(\mathbb{R}^3)}
\end{equation*}
\begin{equation*}
\| u \|_\Linftau{q}{p} := \| \norm{t+|x|}^q \norm{t-|x|}^{p-q} u(t,x)\|_{L^\infty(\mathbb{R}_+^{1+3})}
\end{equation*}
of which we will most frequently use
\begin{equation*}
\| u \|_\Linftau{1}{p} := \| \norm{t+|x|} \norm{t-|x|}^{p-1} u(t,x)\|_{L^\infty(\mathbb{R}_+^{1+3})}.
\end{equation*}
Its finiteness guarantees that $u$ decays like $1/t$ on the lightcone $t\sim |x|$ and like $1/t^p$ for fixed $x$, as well as like $1/|x|^p$ for fixed $t$.
Note that functions with compact support in $\mathbb{R}^3$ belong to all spaces $\Linfx{m}$ with any $m>0$.
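The decay regimes encoded in this norm can be read off from the corresponding pointwise bound; the following small sketch (ours, with sample values, not part of the text) evaluates the bound $C\,\<t+|x|\>^{-1}\<t-|x|\>^{-(p-1)}$ in the two extreme regimes:

```python
def weighted_bound(C, p, t, x):
    # pointwise bound C / (<t+|x|> <t-|x|>^(p-1)) with <y> := 1 + |y|
    return C / ((1.0 + t + abs(x)) * (1.0 + abs(t - abs(x))) ** (p - 1))
```

On the lightcone $t=|x|$ this reduces to $C/(1+2t)$, while for $x=0$ it gives $C/(1+t)^p$.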
We introduce the following notation for solutions of the wave equations. Let $I_V$ be a linear map from the space of initial data to the space of solutions of the wave equation \refa{wave-eq}-\refa{init-data} with $F(u)=0$, so that $u=I_V(f,g)$. For wave equations with a source term and null initial data
\begin{equation*}
\Box u + Vu= F,\qquad u(0,x)=0,\qquad \partial_t u(0,x)=0,
\end{equation*}
we denote the solutions by $u=L_V(F)$, where $L_V$ is a linear map from the space of source functions to the space of solutions of the above problem. Note that, due to linearity, the solution $u$ of a wave equation with source $F$ and non-vanishing initial data $f,g$ is the sum of these two contributions
\begin{equation*}
u=L_V(F)+I_V(f,g).
\end{equation*}
Observe that if we put the potential term on the r.h.s. we obtain
\begin{equation*}
\Box u = -V u + F
\end{equation*}
which, treated as a wave equation without potential (on the l.h.s.), is formally solved by
\begin{equation*}
u=-L_0(Vu)+L_0(F)+I_0(f,g).
\end{equation*}
Here the solution $u$ appears on both sides, which seems to make the formula useless, but it will allow us to formulate various iteration schemes, e.g.
\begin{equation*}
u_{n+1}=-L_0(Vu_n)+L_0(F(u_n))+I_0(f,g)
\end{equation*}
for which we will prove convergence in suitable $\Linftx{q}$ norms.
Finally, we define constants which arise from estimates proved in \cite{NS-WaveDecay} and improved in \cite{NS-DecayLemma}:
\begin{equation*}
C_m:= \max \left(\frac{9}{2(m-2)}, 5\right),
\end{equation*}
\begin{equation*}
C_{p,q}:=2+\frac{8}{p-1}+\frac{2}{q-1}.
\end{equation*}
The latter will be referred to as a bound on the allowed strength $V_0$ of the potential. Our purpose is to emphasize that this bound, although not optimal, is finite and not arbitrarily small, which is crucial when a potential with a given value of $V_0$ is studied (as, e.g., in the Regge--Wheeler equation describing waves on the Schwarzschild geometry).
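For orientation, these constants are easy to evaluate numerically; the sample exponents below ($k=3$, $m=4$) are illustrative assumptions of ours, not values from the text:

```python
def C_m(m):
    # C_m = max(9 / (2 (m - 2)), 5), defined for m > 3
    return max(9.0 / (2.0 * (m - 2.0)), 5.0)

def C_pq(p, q):
    # C_{p,q} = 2 + 8 / (p - 1) + 2 / (q - 1), defined for p, q > 1
    return 2.0 + 8.0 / (p - 1.0) + 2.0 / (q - 1.0)

# e.g. k = 3, m = 4 gives p = min(m - 1, k) = 3, and the contraction
# condition lambda < 1 / C_{p,k} becomes lambda < 1 / C_pq(3, 3) = 1 / 7
threshold = 1.0 / C_pq(3, 3)
```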
\section{Linear case with potential}
First, we consider a linear wave equation
\begin{equation} \label{V:wave-eq}
\Box u + \lambda V(x) u = 0
\end{equation}
where $\lambda>0$ is a small parameter, bounded by some finite constant $C_V>0$ (which will be defined later). We first show that a standard iteration scheme converges for all $\lambda<C_V$ to a solution in $\Linftx{p}$, i.e. there exists a constant $C$ such that
\begin{equation*}
|u(t,x)| \leq \frac{C}{\norm{t+|x|}\norm{t-|x|}^{p-1}} \qquad \forall (t,x)\in\mathbb{R}_+\times\mathbb{R}^3
\end{equation*}
with some $p>2$ provided the potential $V$ and the initial data $f,\nabla f,g$ are (at least) continuous and satisfy pointwise bounds
\begin{equation} \label{V-bound}
|V(x)| \leq \frac{1}{\norm{x}^k},\quad k>2
\end{equation}
and
\begin{equation} \label{fg-bound}
|f(x)| \leq \frac{f_0}{\norm{x}^{m-1}}, \qquad
|\nabla f(x)| \leq \frac{f_1}{\norm{x}^m}, \qquad
|g(x)| \leq \frac{g_0}{\norm{x}^m},\qquad m>3.
\end{equation}
Then, we show that a perturbation scheme based on expansion in powers of $\lambda$ is, due to linearity, equivalent to the iteration scheme and the perturbation series has convergence radius $C_V$.
Next, we show that the lowest order $u_0$ has, in general, a different decay estimate than all higher orders, starting from $u_1$. Finally, we prove that either $u_0$ or $u_1$ gives precise information about the decay rate of the full solution $u$.
\subsection{Iteration}
We define an iteration by
\begin{equation*}
u_{-1} := 0
\end{equation*}
\begin{equation*}
u_n:= I_0(f,g) - \lambda L_0(V u_{n-1}),\qquad n=0,1,2,...
\end{equation*}
Then we have the following
\begin{Theorem} \label{Th:V}
With $f,g$ and $V$ as above for any $m>3$ and $k>2$ the sequence $u_n$ converges (in norm) in $\Linftx{p}$ for $p=\min(k,m-1)$ provided $\lambda<C_{p,k}^{-1}$. The limit $u:=\lim_{n\rightarrow\infty} u_n$ satisfies
\begin{equation*}
|u(t,x)| \leq \frac{C}{\norm{t+|x|}\norm{t-|x|}^{p-1}},\qquad \forall (t,x)\in\R_+^{1+3}
\end{equation*}
with some positive constant $C$ depending only on $f_0, f_1, g_0, \lambda$ and $k,m$.
\end{Theorem}
This theorem was first proved for classical solutions in \cite{Strauss-T}, later generalized to weak solutions in \cite{NS-WaveDecay}, and stated there in a more detailed form, which will be important here. We cite the essential part of the proof, because some of the presented estimates will be used later.
\begin{proof}
For $g,\nabla f\in\Linfx{m}$ and $f\in\Linfx{m-1}$ with $m>3$, from lemma \ref{Lem:init-data}, we get $u_0=I_0(f,g)\in\Linftx{m-1}$. Next, observe that if $u_n\in\Linftx{p}$ with some $p>1$ then
\begin{equation*}
\|\<x\>^k V u_n\|_\Linftx{p} \leq \|\<x\>^k V\|_{L^\infty} \|u_n\|_\Linftx{p} = \|u_n\|_\Linftx{p}<\infty
\end{equation*}
and from lemma \ref{Lem:source} with $F\equiv V u_n$ we get $L_0(V u_n)\in\Linftx{p}$ when $p\leq k$. Because $\Linftx{p_1}\subset\Linftx{p_2}$ when $p_1\geq p_2$, we get $u_{n+1}\in\Linftx{p}$ with $p\leq \min(m-1,k)$. By induction we obtain $u_n\in\Linftx{p}$ for every $n=0,1,2,...$ with the optimal value $p:=\min(m-1,k)$. Then, we have
\begin{equation*}
\begin{split}
\|u_{n+1}-u_n\|_\Linftx{p} &= \lambda \|L_0(-V(u_n-u_{n-1}))\|_\Linftx{p} \leq
\lambda C_{p,k} \|\<x\>^k V (u_n-u_{n-1})\|_\Linftx{p} \\ &\leq
\lambda C_{p,k} \|\<x\>^k V\|_{L^\infty} \|u_n-u_{n-1}\|_\Linftx{p} \leq
\lambda C_{p,k} \|u_n-u_{n-1}\|_\Linftx{p}
\end{split}
\end{equation*}
again making use of lemma \ref{Lem:source} with $\norm{x}^k F\equiv -\norm{x}^k V (u_n-u_{n-1}) \in \Linftx{p}$.
For $\delta:=\lambda\, C_{p,k}<1$ the iteration is a contraction in the normed space $\Linftx{p}$. A simple argument shows that the sequence $u_n$ is Cauchy. We have
\begin{equation} \label{V:u_n-u_(n-1)}
\|u_{n+1}-u_n\|_\Linftx{p} \leq \delta^{n+1} \|u_0-u_{-1}\|_\Linftx{p} =
\delta^{n+1} \|I_0(f,g)\|_\Linftx{p}
\end{equation}
and for $n'>n$
\begin{equation} \label{V:n-m}
\begin{split}
\|u_{n'}-u_n\|_\Linftx{p} &\leq \sum_{j=0}^{n'-n-1} \|u_{n+j+1}-u_{n+j}\|_\Linftx{p}
\leq \sum_{j=0}^{n'-n-1} \delta^{j+n+1} \|I_0(f,g)\|_\Linftx{p}\\
&\leq \frac{\delta^{n+1}}{1-\delta} \|I_0(f,g)\|_\Linftx{p}.
\end{split}
\end{equation}
This expression can be made arbitrarily small (smaller than any $\epsilon>0$) for all $n',n>M(\epsilon)$. Hence, $u_n$ is a Cauchy sequence in $\Linftx{p}$, which is a Banach space, so $u_n$ has a limit $u\in\Linftx{p}$ satisfying
\begin{equation} \label{V:u-sol}
u=I_0(f,g)-\lambda L_0(Vu).
\end{equation}
This equation is equivalent to the wave equation \refa{V:wave-eq} with the initial data \refa{fg-bound}. Finally, we find the $\Linftx{p}$-norm of $u$
\begin{equation*}
\|u\|_\Linftx{p}\leq \|I_0(f,g)\|_\Linftx{p}+\lambda \|L_0(Vu)\|_\Linftx{p}
\leq C_m(f_0+f_1+g_0)+\lambda\, C_{p,k} \|u\|_\Linftx{p},
\end{equation*}
thus
\begin{equation*}
\|u\|_\Linftx{p}\leq \frac{C_m(f_0+f_1+g_0)}{1-\lambda\, C_{p,k}} \equiv C.
\end{equation*}
\end{proof}
\subsection{Perturbation series}
Now, we define a perturbation series by
\begin{equation*}
u = \sum_{n=0}^\infty \lambda^n v_n
\end{equation*}
and insert into the wave eq. \refa{V:wave-eq}. It leads to the following perturbation scheme
\begin{alignat}{4} \label{pert-V-0}
\Box v_0 &= 0,&\qquad (v_0,\dot{v}_0)(0)&=(f,g)&\qquad&\rightarrow&\qquad v_0&=I_0(f,g) \\ \label{pert-V-n}
\Box v_{n+1} &= -V v_n,&\qquad (v_{n+1},\dot{v}_{n+1})(0)&=(0,0)&\qquad&\rightarrow&\qquad v_{n+1} &= -L_0(V v_n)
\end{alignat}
Due to linearity of \refa{V:wave-eq} it turns out that the partial sums
\begin{equation*}
\sum_{k=0}^n \lambda^k v_k = u_n
\end{equation*}
give the elements $u_n$ obtained above by the iteration technique, so both methods (if they work) are equivalent. Theorem \ref{Th:V} implies convergence in $\Linftx{p}$ for $\lambda<C_{p,k}^{-1}$ with $k>2$, $m>3$ and $p:=\min(k,m-1)$.
From \refa{V:u_n-u_(n-1)} in the proof of Theorem \ref{Th:V} it follows that
\begin{equation*}
\| v_n\|_\Linftx{p} = \frac{\|u_{n}-u_{n-1}\|_\Linftx{p}}{\lambda^n} \leq (C_{p,k})^{n} \|I_0(f,g)\|_\Linftx{p},
\end{equation*}
hence $v_n \in \Linftx{p}$ for all $n\geq 0$. Observe, however, that in the case $m-1>k=p$ we have at the lowest order $v_0=u_0$ a better decay estimate, namely $v_0\in\Linftx{m-1}$ (see the first line of the proof). The reason that $v_0$ decays faster is that its decay comes only from the initial data and is not influenced by the potential. At all higher orders, the $v_n$ ($n=1,2,\ldots$) contain the contribution from scattering on the potential and are only in $\Linftx{k}$. Since $u\in\Linftx{p}=\Linftx{k}$, we expect that all $u_n$ starting from $u_1\in\Linftx{k}$ predict the qualitatively correct asymptotic behaviour of $u$, while the lowest order $u_0\in\Linftx{m-1}$ fails to do so. This becomes especially evident for initial data with compact support, for which $u_0\in\Linftx{q}$ with arbitrarily large $q$, but $u_1, u_2, \ldots\in\Linftx{k}\ni u$.
Knowing that the perturbation series converges for some $\lambda$, we can estimate the error of the $n$-th perturbation order relative to the exact solution by estimating the sum of all higher-order terms. For the convergent sequence $u_n$ we use the relation \refa{V:n-m}, which holds also in the limit $n'\rightarrow\infty$, $u_{n'}\rightarrow u$, and gives
\begin{equation*}
\|u-u_n\|_\Linftx{p} \leq \frac{\delta^{n+1}}{1-\delta} \|I_0(f,g)\|_\Linftx{p}.
\end{equation*}
It provides a pointwise bound on the error
\begin{equation} \label{pointwise-error-bound}
|u(t,x)-u_n(t,x)| \leq \frac{(C_{p,k}\lambda)^{n+1}}{1-C_{p,k}\lambda}\cdot
\frac{C_m\cdot(f_0+f_1+g_0)}{\norm{t+|x|}\norm{t-|x|}^{p-1}} \qquad \forall (t,x)\in\R_+^{1+3}.
\end{equation}
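The geometric decay of this remainder bound in the perturbation order $n$ can be illustrated numerically; the values of $C_{p,k}$ and $\lambda$ below are sample assumptions of ours:

```python
def error_prefactor(C_pk, lam, n):
    # (C_{p,k} lambda)^(n+1) / (1 - C_{p,k} lambda); this equals the tail
    # sum_{j >= n+1} (C_{p,k} lambda)^j of the geometric series
    d = C_pk * lam
    assert 0.0 <= d < 1.0
    return d ** (n + 1) / (1.0 - d)
```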
\subsection{Optimal decay estimate}
In this section we sketch how, under some conditions, the optimal decay estimate and the precise asymptotic behaviour of the solution $u$ can be deduced from the behaviour of low perturbation orders.
This will be studied in more detail in a forthcoming publication \cite{NS-PB_Tails} dealing with spherical symmetry where the lowest perturbation orders can be calculated almost explicitly.
Consider first the case $m-1>k=p$, i.e. when the rate of decay of $u$ is dominated by scattering on the potential (and not by decay of the initial data). We have $u_0=v_0\in\Linftx{m-1}$ and $v_n\in\Linftx{p}$ for $n\geq 1$.
Below, we show that if the asymptotic behaviour of $v_1$ is as provided by its estimate (i.e.\ the exponent $p$ in the norm $\Linftx{p}$ is optimal), then Theorem \ref{Th:V} gives an optimal estimate for $u\in\Linftx{p}$ with the same decay rate $p$.
Here, we consider only the asymptotics in the direction of timelike infinity (the case of spatial infinity can be treated similarly). Assume we are able to show (by some explicit calculation, as in \cite{NS-PB_Tails}) that $v_1(t,x) = -L_0(V v_0) \cong c_1(x) t^{-p}\neq 0$ for $t\gg 1$, where $c_1(x)$ is independent of $\lambda$. The approximation sign means that for every small $\eta>0$ and every $x\in\mathbb{R}^3$ there is a $T_0(x,\eta)>0$ such that for all $t>T_0(x,\eta)$ the relative error is small, i.e.
\begin{equation} \label{def0-approx}
\left|v_1(t,x) - \frac{c_1(x)}{t^p}\right|\leq \eta \frac{|c_1(x)|}{t^p}.
\end{equation}
From \refa{pointwise-error-bound} with $u_1=v_0+\lambda v_1$ we have
\begin{equation*}
|u(t,x)-v_0(t,x)-\lambda v_1(t,x)| \leq \frac{(C_{p,k}\lambda)^{2}}{1-C_{p,k}\lambda}\cdot
\frac{C_m\cdot(f_0+f_1+g_0)}{\norm{t+|x|}\norm{t-|x|}^{p-1}}=:\Delta_1(t,x)
\end{equation*}
for all $(t,x)\in\R_+^{1+3}$. A simple inequality\footnote{It follows immediately from Bernoulli's inequality.}
\begin{equation} \label{ineq-Bernoulli}
\frac{1}{(1-\zeta)^\sigma} \leq \frac{1}{1-\sigma\zeta} = 1+\frac{\sigma\zeta}{1-\sigma\zeta} \leq 2,\qquad
\forall\, 0\leq \zeta \leq 1/(2\sigma),\quad \sigma>1
\end{equation}
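Indeed, by Bernoulli's inequality $(1-\zeta)^\sigma\geq 1-\sigma\zeta$ (valid for $\sigma\geq 1$ and $\zeta\leq 1$), for $0\leq\zeta\leq 1/(2\sigma)$ we have $1-\sigma\zeta\geq 1/2>0$, hence
\begin{equation*}
\frac{1}{(1-\zeta)^\sigma} \leq \frac{1}{1-\sigma\zeta} \leq 2.
\end{equation*}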
implies
\begin{equation*}
\frac{1}{\norm{t-|x|}^q} = \frac{1}{(1+t)^q \left(1-\frac{|x|}{1+t}\right)^q} \leq \frac{2}{(1+t)^q}
\end{equation*}
for $\zeta:=|x|/(1+t)\leq 1/(2q)$, which holds for all $t\geq 2q |x|$. The error term can then be estimated as
\begin{equation*}
\Delta_1(t,x)
\leq 2\,(C_{p,k} \lambda)^2 \frac{2\, C_m\cdot(f_0+f_1+g_0)}{(1+t)^p} \equiv
\widetilde{C} \frac{\lambda^2}{(1+t)^p},
\end{equation*}
where we have used \refa{ineq-Bernoulli} twice, for $t\geq 2(p-1) |x|$ and $\lambda \leq 1/(2\,C_{p,k})$.
Further,
\begin{equation*}
\widetilde{C} \frac{\lambda^2}{(1+t)^p}
\leq \eta \lambda \frac{|c_1(x)|}{t^p}
\end{equation*}
provided $\lambda$ is small enough, namely $\lambda\leq \Lambda_\eta(x):=\eta |c_1(x)|/\widetilde{C}$.
Again from \refa{ineq-Bernoulli} we get
\begin{equation*}
|v_0(t,x)|\leq \frac{c_0}{\norm{t+|x|} \norm{t-|x|}^{m-2}} \leq
\frac{2 c_0}{(1+t)^{m-1}}
\end{equation*}
for all $t \geq 2(m-2)|x|$. Then,
\begin{equation*}
|v_0(t,x)|\leq \frac{2 c_0}{(1+t)^{m-1}}
\leq \eta \lambda \frac{|c_1(x)|}{t^{p}}
\end{equation*}
provided $t>T_1(x,\eta,\lambda)$ is big enough, such that $t^{m-1-p} \geq 2 c_0/(\eta\, \lambda\, |c_1(x)|)$.
Finally, we arrive at the statement that for every small $\eta>0$ and every $x\in\mathbb{R}^3$, for sufficiently small $\lambda\leq \min[\Lambda_\eta(x), 1/(2\,C_{p,k})]$, and for sufficiently big $t>\max[T_0(x,\eta),T_1(x,\eta,\lambda),2(m-2)|x|]$ we have
\begin{equation*}
\begin{split}
\left|u(t,x)-\lambda\frac{c_1(x)}{t^p}\right| &\leq
\left|u(t,x)-v_0(t,x)-\lambda v_1(t,x)\right| + |v_0(t,x)| + \lambda \left|v_1(t,x)-\frac{c_1(x)}{t^p}\right|\\
&\leq 3 \eta\lambda \frac{|c_1(x)|}{t^p},
\end{split}
\end{equation*}
that is, for $p=k$,
\begin{equation*}
u(t,x)\cong \lambda \frac{c_1(x)}{t^k}.
\end{equation*}
That gives precise information about the time-decay of $u(t,x)$ and shows that the estimate in theorem \ref{Th:V} is optimal (for $t\gg |x|$).
In the case $p=m-1\leq k$ the decay rate of $u$ is determined by the decay of the (long-range) initial data, and all $v_n\in\Linftx{p}$. Analogously, if we can show that $v_0(t,x) \cong c_0(x) t^{-p} \neq 0$ for $t\gg 1$, then we can bound all higher perturbation orders, for sufficiently small $\lambda$ and big $t$, by the same expression multiplied by an arbitrarily small $\eta$. To this aim we use again \refa{pointwise-error-bound}, now with $n=0$,
\begin{equation*}
|u(t,x)-v_0(t,x)| \leq \frac{C_{p,k}\lambda}{1-C_{p,k}\lambda}\cdot
\frac{C_m\cdot(f_0+f_1+g_0)}{\norm{t+|x|}\norm{t-|x|}^{p-1}}=:\Delta_0(t,x) \quad \forall (t,x)\in\R_+^{1+3},
\end{equation*}
and bound $\Delta_0(t,x)$ by $\eta\, |c_0(x)| t^{-p}$ as above. This leads to
\begin{equation*}
\left|u(t,x)-\frac{c_0(x)}{t^p}\right| \leq
\left|u(t,x)-v_0(t,x)\right| + \left|v_0(t,x)-\frac{c_0(x)}{t^p}\right|
\leq 2 \eta \frac{|c_0(x)|}{t^p},
\end{equation*}
which for $p=m-1$ gives
\begin{equation*}
u(t,x)\cong \frac{c_0(x)}{t^{m-1}}.
\end{equation*}
That again gives precise information about the time-decay of $u(t,x)$ and shows that the estimate in theorem \ref{Th:V} is optimal (for $t\gg |x|$).
\section{Nonlinear case without the potential term}
Now, we consider a nonlinear wave equation of the form
\begin{equation} \label{Fu:wave-eq}
\Box u = F(u)
\end{equation}
subject to initial data $(f,g)$ satisfying \refa{fg-bound} with $f_0, f_1, g_0 < \varepsilon$. The nonlinear term obeys $|F(u)|\leq F_1 |u|^p$ for $|u|<1$ and $|F(u)-F(v)|\leq F_2 |u-v| \max(|u|,|v|)^{p-1}$. The second condition is satisfied, e.g., for $F(u)=u^p$ with $F_2 = p$, or for $F\in\mathcal{C}^1$ such that $|F'(u)|\leq F_2 |u|^{p-1}$ for $|u|<1$.
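For instance, for $F(u)=u^p$ with integer $p$ the second condition follows from the factorization of a difference of $p$-th powers:
\begin{equation*}
|u^p-v^p| = |u-v|\,\Big|\sum_{j=0}^{p-1} u^j v^{p-1-j}\Big|
\leq p\,|u-v|\,\max(|u|,|v|)^{p-1}.
\end{equation*}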
\subsection{Iteration}
We define an iteration scheme
\begin{equation*}
u_{0} := 0,
\end{equation*}
\begin{equation} \label{Fu-iter}
u_{n+1} := I_0(f,g) + L_0(F(u_n)), \qquad n\geq 0.
\end{equation}
For it we have the following
\begin{Theorem} \label{Th:Fu}
With $f,g$ and $F(u)$ as above, for any $m>3$, $p> 1+\sqrt{2}$, and sufficiently small $\varepsilon$, the sequence $u_n$ converges (in norm) in $\Linftx{q}$, $q=\min(p-1,m-1)$, to the solution $u$ of equation \refa{Fu:wave-eq}. The limit $u:=\lim_{n\rightarrow\infty} u_n$ satisfies
\begin{equation*}
|u(t,x)| \leq \frac{C}{\norm{t+|x|}\norm{t-|x|}^{q-1}},\qquad \forall (t,x)\in\R_+^{1+3}
\end{equation*}
with some positive constant $C$ depending only on $p,m$ and $\varepsilon$.
\end{Theorem}
\begin{proof}
For $g,\nabla f\in\Linfx{m}$ and $f\in\Linfx{m-1}$ with $m>3$, lemma \ref{Lem:init-data} gives $u_1=I_0(f,g)\in\Linftx{m-1}$. Next, if $u_n\in\Linftx{q}$ for some $q>1$ then, since $L_0$ is a positive operator\footnote{In fact $L_0=\Box^{-1}$ is a measure on $\R_+^{1+3}$ and therefore has a positive kernel. Then, $L_0(F)\geq 0$ if $F\geq 0$.}, we have $|L_0(F(u_n))|\leq F_1 L_0(|u_n|^p)$ and from lemma \ref{Lem:power-p} we get
\begin{equation*}
\|L_0(F(u_n))\|_\Linftx{q} \leq F_1 \|L_0(|u_n|^p)\|_\Linftx{q} \leq F_1 C \|u_n\|_\Linftx{q}^p,
\end{equation*}
and hence $L_0(F(u_n))\in\Linftx{q}$ when $q\leq p-1$. Then, $u_{n+1}\in\Linftx{q}$ for $q:=\min(m-1,p-1)$, since $\Linftx{m-1}\subset\Linftx{q}$. Hence, by induction we obtain $u_n\in\Linftx{q}$ for every $n=0,1,2,...$ and
\begin{equation*}
\|u_{1}\|_\Linftx{q}\leq \|I_0(f,g)\|_\Linftx{q} \leq C_m (f_0+ f_1+g_0) \leq 3 C_m \varepsilon
\end{equation*}
\begin{equation*}
\|u_{n+1}\|_\Linftx{q}\leq \|I_0(f,g)\|_\Linftx{q} + \|L_0(F(u_n))\|_\Linftx{q}
\leq 3 C_m \varepsilon + F_1 C \|u_n\|_\Linftx{q}^p.
\end{equation*}
Choose $\varepsilon>0$ such that $F_1 C (6 C_m)^p \varepsilon^{p-1} < 3 C_m\cdot \min(1,2F_1/F_2)$. Then,
\begin{equation*}
\|u_{1}\|_\Linftx{q}\leq 6 C_m \varepsilon
\end{equation*}
\begin{equation*}
\|u_{n}\|_\Linftx{q}\leq 6 C_m \varepsilon \quad \Rightarrow \quad
\|u_{n+1}\|_\Linftx{q}\leq 3 C_m \varepsilon + F_1 C (6 C_m \varepsilon)^p
\leq 6 C_m \varepsilon,
\end{equation*}
hence $\|u_{n}\|_\Linftx{q}\leq 6C_m\varepsilon$ for all $n\geq 1$. Next, we show convergence of the sequence $u_n$ by demonstrating that it is Cauchy:
\begin{equation*}
\begin{split}
\|u_{n+1}-u_n\|_\Linftx{q} &= \|L_0(F(u_n)-F(u_{n-1}))\|_\Linftx{q}\\
&\leq F_2 \|L_0(|u_n-u_{n-1}|\max(|u_n|,|u_{n-1}|)^{p-1})\|_\Linftx{q}\\
&\leq F_2 C \||u_n-u_{n-1}|\max(|u_n|,|u_{n-1}|)^{p-1}\|_\Linftx{q}\\
&\leq F_2 C (6C_m\varepsilon)^{p-1}\|u_n-u_{n-1}\|_\Linftx{q}
= \delta \|u_n-u_{n-1}\|_\Linftx{q},
\end{split}
\end{equation*}
with $\delta:=F_2 C (6C_m\varepsilon)^{p-1}<1$, hence the iteration is a contraction in the normed space $\Linftx{q}$ and $u_n$ is a Cauchy sequence, because
\begin{equation} \label{Fu:u_n-u_(n-1)}
\|u_{n+1}-u_n\|_\Linftx{q} \leq \delta^{n} \|u_1-u_{0}\|_\Linftx{q} =
\delta^{n} \|I_0(f,g)\|_\Linftx{q} \leq \delta^{n}\, 3 C_m \varepsilon
\end{equation}
and for any $n'>n$
\begin{equation} \label{Fu:u_n-u_n}
\|u_{n'}-u_n\|_\Linftx{q} \leq \sum_{j=0}^{n'-n-1} \|u_{n+j+1}-u_{n+j}\|_\Linftx{q}
\leq \sum_{j=0}^{n'-n-1} \delta^{j+n}\, 3C_m \varepsilon
\leq \frac{\delta^{n}}{1-\delta}\, 3C_m \varepsilon.
\end{equation}
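Note that $\delta<1$ is indeed guaranteed by the above choice of $\varepsilon$: from $F_1 C (6 C_m)^p \varepsilon^{p-1} < 3 C_m\cdot 2F_1/F_2 = 6 C_m F_1/F_2$ we get
\begin{equation*}
\delta = F_2\, C\, (6C_m\varepsilon)^{p-1}
= \frac{F_2}{6 C_m F_1}\cdot F_1 C (6 C_m)^{p}\varepsilon^{p-1}
< \frac{F_2}{6 C_m F_1}\cdot \frac{6 C_m F_1}{F_2} = 1.
\end{equation*}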
Since $\Linftx{q}$ is Banach, $u_n$ has a limit $u\in\Linftx{q}$ satisfying
\begin{equation} \label{Fu:u-sol}
u=I_0(f,g)+L_0(F(u))
\end{equation}
and solving the wave equation \refa{Fu:wave-eq} with the initial data \refa{fg-bound}. Its $\Linftx{q}$-norm satisfies
\begin{equation} \label{u-norm-Fu}
\|u\|_\Linftx{q}\leq 6C_m\varepsilon.
\end{equation}
\end{proof}
From \refa{Fu:u_n-u_n}, in the limit $n'\rightarrow\infty$, we obtain the error bound
\begin{equation} \label{Fu:u-u_n}
\|u-u_n\|_\Linftx{q} \leq \frac{\delta^{n}}{1-\delta} \,3C_m\varepsilon \leq C \varepsilon^{(p-1)n+1}
\end{equation}
for small $\varepsilon$.
\subsection{Perturbation series}
In order to be able to construct a well-defined perturbation scheme to all orders we have to assume additionally that $F(u)$ is analytic at $u=0$,
its Taylor series starts at power $p\geq 3$ and has convergence radius $R_F>0$. Then, for small initial data
\begin{equation} \label{Fu-pert-initdata}
(u,\dot{u})(0) = (\varepsilon f,\varepsilon g)
\end{equation}
we introduce a perturbation series for representing the solution of \refa{Fu:wave-eq}
\begin{equation} \label{Fu-pert-ser}
u = \sum_{n=1}^\infty \varepsilon^n v_n.
\end{equation}
After inserting it into \refa{Fu:wave-eq} and collecting terms according to powers of $\varepsilon$ we obtain the following perturbation scheme
\begin{alignat}{4} \label{Fu-perteq1}
\Box v_1 &= 0,&\qquad (v_1,\dot{v}_1)(0)&=(f,g)&\quad&\rightarrow&\quad
v_1&=I_0(f,g) \\ \label{Fu-perteq}
\Box v_{n+1} &= F_n(v_1,...,v_n),&\qquad (v_{n+1},\dot{v}_{n+1})(0)&=(0,0)&\quad&\rightarrow&\quad
v_{n+1} &= L_0(F_n(v_1,...,v_n)),
\end{alignat}
for $n\geq 1$, where $F_n$ result from collecting the nonlinear terms with the same powers of $\varepsilon$
\begin{equation} \label{Fu-Fn}
F_n(v_1,...,v_n)=
\sum_k
a^n_k v_1^{\alpha^{n,1}_k} \cdots v_n^{\alpha^{n,n}_k},
\end{equation}
where $\alpha^{n,m}_k\in\mathbb{N}$ satisfy $\sum_{m=1}^n m\alpha^{n,m}_k = n+1$ and $\sum_{m=1}^n \alpha^{n,m}_k\geq p$ for every $n,k$.
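As a concrete illustration (for $F(u)=u^3$, i.e. $p=3$), inserting $u=\varepsilon v_1+\varepsilon^2 v_2+\varepsilon^3 v_3+\ldots$ into $F$ gives
\begin{equation*}
F_2(v_1,v_2)=v_1^3,\qquad
F_3(v_1,v_2,v_3)=3v_1^2 v_2,\qquad
F_4(v_1,\ldots,v_4)=3v_1^2 v_3 + 3 v_1 v_2^2,
\end{equation*}
and, e.g., for the term $3v_1 v_2^2$ in $F_4$ one has $\alpha^{4,1}_k=1$, $\alpha^{4,2}_k=2$, so that $\sum_m m\alpha^{4,m}_k=1+4=5=n+1$ and $\sum_m \alpha^{4,m}_k=3\geq p$.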
We call this expansion a ``zero background'' case because the zero-order term $v_0$ is absent. If a $v_0$ term were present in the series above (i.e. if the summation started at $n=0$), we would have an additional equation $\Box v_0 = F(v_0)$, which is truly nonlinear (in contrast to the above system of linear wave equations with source terms). Its solution $v_0$ represents a ``background'' around which the perturbations $v_n$ are calculated.
Below we show that the perturbation series converges to the solution $u$ of the nonlinear wave equation \refa{Fu:wave-eq} and has a positive convergence radius.
\begin{Theorem} \label{Th:Fu-pert}
With $f,g$ and $F(u)$ as above for any $m>3$, $p> 1+\sqrt{2}$ and sufficiently small $\varepsilon$ the series defined in \refa{Fu-pert-ser}-\refa{Fu-perteq} converges (in norm) in $\Linftx{q}$ for $q=\min(p-1,m-1)$ to the solution of the equation \refa{Fu:wave-eq} with initial data \refa{Fu-pert-initdata}.
\end{Theorem}
\begin{proof}
For $g,\nabla f\in\Linfx{m}$ and $f\in\Linfx{m-1}$ with $m>3$ from lemma \ref{Lem:init-data} we get $v_1= I_0(f,g)\in\Linftx{m-1}$ with
\begin{equation*}
\|v_1\|_\Linftx{m-1} \leq C_m(f_0+f_1+g_0) =: D < \infty.
\end{equation*}
Next, we prove by induction $\Linftx{q}$ bounds for all $n\geq 1$ with some $q>1$. Assume that for a given $n\geq 1$ we have $v_m\in\Linftx{q}$ for all $m\leq n$. Then, using \refa{Fu-Fn}, we get
\begin{equation*}
\|v_{n+1}\|_\Linftx{q} = \|L_0(F_n(v_1,...,v_n))\|_\Linftx{q}
\leq \sum_k |a^n_k|\cdot \|L_0(|v_1^{\alpha^{n,1}_k} \cdots v_n^{\alpha^{n,n}_k}|)\|_\Linftx{q}.
\end{equation*}
Observe that $|v_1^{\alpha^{n,1}_k} \cdots v_n^{\alpha^{n,n}_k}|=\left(\sqrt[p]{|v_1^{\alpha^{n,1}_k} \cdots v_n^{\alpha^{n,n}_k}|}\right)^p$ and $\sqrt[p]{|v_1^{\alpha^{n,1}_k} \cdots v_n^{\alpha^{n,n}_k}|}\in\Linftx{q}$, because of the following estimate, valid for $b_1+\ldots+b_n=B\geq 1$ and $b_m\geq 0$, $m=1,...,n$:
\begin{equation*}
\begin{split}
&\|w_1^{b_1} \cdots w_n^{b_n}\|_\Linftx{q}
= \|\norm{t+|x|}\norm{t-|x|}^{q-1} w_1^{b_1} \cdots w_n^{b_n}\|_{L^\infty} \\
\leq& \|\norm{t+|x|}^{b_1/B}\norm{t-|x|}^{(q-1)b_1/B} w_1^{b_1}\|_{L^\infty} \cdots
\|\norm{t+|x|}^{b_n/B}\norm{t-|x|}^{(q-1)b_n/B} w_n^{b_n}\|_{L^\infty} \\
=& \|\norm{t+|x|}^{1/B}\norm{t-|x|}^{(q-1)/B} w_1\|_{L^\infty}^{b_1} \cdots
\|\norm{t+|x|}^{1/B}\norm{t-|x|}^{(q-1)/B} w_n\|_{L^\infty}^{b_n} \\
\leq& \|\norm{t+|x|}\norm{t-|x|}^{(q-1)} w_1\|_{L^\infty}^{b_1} \cdots
\|\norm{t+|x|}\norm{t-|x|}^{(q-1)} w_n\|_{L^\infty}^{b_n} \\
=& \|w_1\|_\Linftx{q}^{b_1} \cdots \|w_n\|_\Linftx{q}^{b_n},
\end{split}
\end{equation*}
which, used for $b_m:=\alpha^{n,m}_k/p$ with
$B:=\alpha^{n,1}_k/p+\ldots+\alpha^{n,n}_k/p\geq 1$, gives
\begin{equation*}
\begin{split}
\left\|\sqrt[p]{|v_1^{\alpha^{n,1}_k} \cdots v_n^{\alpha^{n,n}_k}|}\right\|_\Linftx{q} &=
\left\||v_1|^{\alpha^{n,1}_k/p} \cdots |v_n|^{\alpha^{n,n}_k/p}\right\|_\Linftx{q} \leq
\|v_1\|_\Linftx{q}^{\alpha^{n,1}_k/p} \cdots \|v_n\|_\Linftx{q}^{\alpha^{n,n}_k/p}
< \infty.
\end{split}
\end{equation*}
Then, for $q\leq p-1$ we can use lemma \ref{Lem:power-p} with $u:=\sqrt[p]{|v_1^{\alpha^{n,1}_k} \cdots v_n^{\alpha^{n,n}_k}|}$ to obtain
\begin{equation*}
\begin{split}
\|v_{n+1}\|_\Linftx{q} &
\leq C\sum_k |a^n_k|\cdot \left\|\sqrt[p]{|v_1^{\alpha^{n,1}_k} \cdots v_n^{\alpha^{n,n}_k}|}\right\|_\Linftx{q}^p
= C\sum_k |a^n_k|\cdot \left\|v_1^{\alpha^{n,1}_k} \cdots v_n^{\alpha^{n,n}_k}\right\|_\Linftx{q} \\
&\leq C \sum_k |a^n_k|\cdot \|v_1\|_\Linftx{q}^{\alpha^{n,1}_k} \cdots \|v_n\|_\Linftx{q}^{\alpha^{n,n}_k}.
\end{split}
\end{equation*}
Unfortunately, we were not able to find an estimate for $\sum_k |a^n_k|$ good enough to prove geometric growth of $\|v_{n}\|_\Linftx{q}$ and thus to guarantee convergence of the series \refa{Fu-pert-ser}. If one tries, e.g., assuming $\|v_m\|_\Linftx{q}\leq D^m$ for all $m\leq n$, then
\begin{equation*}
\begin{split}
\|v_{n+1}\|_\Linftx{q}
&\leq C \sum_k |a^n_k|\cdot \|v_1\|_\Linftx{q}^{\alpha^{n,1}_k} \cdots \|v_n\|_\Linftx{q}^{\alpha^{n,n}_k}
\leq C \sum_k |a^n_k|\cdot D^{(1\alpha^{n,1}_k+\ldots+n\alpha^{n,n}_k)} \\
&= C D^{n+1} \sum_k |a^n_k|.
\end{split}
\end{equation*}
The best estimate we were able to find is $\sum_k |a^n_k|\leq \widetilde{C} n^p$ (imposing further assumptions on $F(u)$), which does not allow us to close the induction argument. Therefore, we take a different route and use a trick relating the wave equation to an algebraic one.
To this end, we need to relate the coefficients of the power series for $F(u)$
\begin{equation*}
F(u)=\sum_{n=p}^\infty b_n u^n,
\end{equation*}
which converges for $|u|<R_F$, to the expansion coefficients $a^n_k$ which result from a formal insertion of the series $u=\sum_{k=1}^\infty \varepsilon^k v_k$ into $F(u)$
\begin{equation} \label{Fu-trick-F(u)-ser}
F\left(\sum_{k=1}^\infty \varepsilon^k v_k\right) \equiv
\sum_{n=p-1}^\infty \varepsilon^{n+1} F_n(v_1,...,v_n) \equiv
\sum_{n=p-1}^\infty \varepsilon^{n+1} \sum_k a^n_k\; v_1^{\alpha^{n,1}_k} \cdots v_n^{\alpha^{n,n}_k}.
\end{equation}
By some manipulation of sums
we obtain
\begin{equation*}
a^n_k = b_{\alpha^{n,1}_k+\ldots+\alpha^{n,n}_k}
\begin{pmatrix} \alpha^{n,1}_k+\ldots+\alpha^{n,n}_k \\ \alpha^{n,1}_k,\ldots,\alpha^{n,n}_k
\end{pmatrix},
\end{equation*}
where the symbol in delimiters represents the multinomial coefficient. Since there is an analogous relation between the absolute values of the coefficients
\begin{equation*}
|a^n_k| = |b_{\alpha^{n,1}_k+\ldots+\alpha^{n,n}_k}|
\begin{pmatrix} \alpha^{n,1}_k+\ldots+\alpha^{n,n}_k \\ \alpha^{n,1}_k,\ldots,\alpha^{n,n}_k
\end{pmatrix}
\end{equation*}
we observe that the series \refa{Fu-trick-F(u)-ser} with $a^n_k$ replaced by $|a^n_k|$ gives rise to a new function $\w{F}$
\begin{equation*}
\sum_{n=p-1}^\infty \varepsilon^{n+1} \sum_k |a^n_k|\; v_1^{\alpha^{n,1}_k} \cdots v_n^{\alpha^{n,n}_k}
= \w{F}\left(\sum_{k=1}^\infty \varepsilon^k v_k\right)
\end{equation*}
such that
\begin{equation} \label{series-F_}
\w{F}(u)=\sum_{n=p}^\infty |b_n| u^n.
\end{equation}
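As a sanity check of these coefficient relations, take again $F(u)=u^3$ ($b_3=1$, all other $b_n=0$): the term $v_1^2 v_2$ belongs to $F_3$ and carries the coefficient
\begin{equation*}
a^3_k = b_3 \begin{pmatrix} 2+1 \\ 2,1 \end{pmatrix} = 3,
\end{equation*}
in agreement with the direct expansion $(\varepsilon v_1+\varepsilon^2 v_2+\ldots)^3 = \varepsilon^3 v_1^3 + 3\varepsilon^4 v_1^2 v_2+\ldots$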
$\w{F}(u)$ is also analytic at $u=0$ and its convergence radius is the same as that of $F(u)$, i.e. $R_{\w{F}}=R_F$, as follows from the standard theory of analytic functions.
Now, instead of the system of estimates
\begin{align}
\|v_1\|_\Linftx{q} &\leq D, \\
\|v_{n+1}\|_\Linftx{q} &\leq C \sum_k |a^n_k|\cdot \|v_1\|_\Linftx{q}^{\alpha^{n,1}_k} \cdots \|v_n\|_\Linftx{q}^{\alpha^{n,n}_k}
\end{align}
with $q:=\min(m-1,p-1)$, we consider a system of equations
\begin{align}
w_1 &= D, \label{eq-w1}\\
w_{n+1} & = C \sum_k |a^n_k|\cdot w_1^{\alpha^{n,1}_k} \cdots w_n^{\alpha^{n,n}_k} \label{eq-w_n}
\end{align}
and it is easy to see (e.g. by induction) that $\|v_n\|_\Linftx{q}\leq w_n$ for all $n\geq 1$. Now comes the trick. Using the above relations we can find that this system is equivalent to
\begin{equation} \label{algebraic-Fu-series}
\sum_{n=1}^\infty \varepsilon^n w_n = C \w{F}\left(\sum_{n=1}^\infty \varepsilon^n w_n\right) + D \varepsilon.
\end{equation}
Introducing $W=\sum_{n=1}^\infty \varepsilon^n w_n$ we can write
\begin{equation} \label{algebraic-Fu}
{W = C \w{F}(W) + D \varepsilon}.
\end{equation}
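To see the mechanism in the simplest case, suppose (purely for illustration) that $\w{F}(W)=W^3$. Then \refa{algebraic-Fu} reads $W=CW^3+D\varepsilon$, and its perturbative solution
\begin{equation*}
W(\varepsilon) = D\varepsilon + C D^3 \varepsilon^3 + 3 C^2 D^5 \varepsilon^5 + \ldots
\end{equation*}
majorizes the norms, $\|v_n\|_\Linftx{q}\leq w_n$, term by term and converges for small $|\varepsilon|$, since $G(W)=(W-CW^3)/D$ is invertible near $W=0$.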
Since $\w{F}(W)$ is analytic at $W=0$, so is $G(W)$
\begin{equation*}
G(W):=\frac{W-C\w{F}(W)}{D} = \varepsilon
\end{equation*}
and so is its inverse $G^{-1}(\varepsilon)$ at $\varepsilon=0$, because $G'(0)=1/D>0$ (see, e.g., the (real) analytic inverse function theorem in \cite{Hille}); here $G'(0)=1/D$ follows from the fact that the Taylor series for $\w{F}$ starts (as does that for $F$) at power at least $p>2$.
Then $G^{-1}(\varepsilon)$ has a Taylor series with a positive convergence radius $R_{G^{-1}}>0$.
The solution $W(\varepsilon)$ of \refa{algebraic-Fu} can be then represented by a convergent series for $|\varepsilon|<R_{G^{-1}}$
\begin{equation} \label{ser-W-eps}
W(\varepsilon)=G^{-1}(\varepsilon) = \sum_{n=1}^\infty \varepsilon^n w_n.
\end{equation}
In order to guarantee that this series can act as a good argument of $\w{F}$ we choose
a possibly smaller radius $\w{R}\leq R_{G^{-1}}$ such that $|W(\varepsilon)|<R_F$ for all $|\varepsilon|<\w{R}$.
Then $\w{F}(W(\varepsilon))$ can be represented by a convergent series \refa{series-F_} in $W(\varepsilon)$. Finally, this allows us to insert this series into \refa{algebraic-Fu} and obtain first \refa{algebraic-Fu-series} and then the system \refa{eq-w1}-\refa{eq-w_n}.
Essential for the trick is that the series in \refa{ser-W-eps} converges for all $|\varepsilon|<\w{R}$. Now, since $\|v_n\|_\Linftx{q}\leq w_n$ for all $n\geq 1$, the comparison test shows that the series $\sum_{n=1}^\infty \varepsilon^n \|v_{n}\|_\Linftx{q}$ converges as well for all $|\varepsilon|<\w{R}$.
Thus, the series \refa{Fu-pert-ser} converges in norm in $\Linftx{q}$ for all $|\varepsilon|<\w{R}$ to some $\w{u}\in\Linftx{q}$ which satisfies
\begin{equation*}
\begin{split}
\widetilde{u} &:= \sum_{n=1}^\infty \varepsilon^{n} v_{n}
= \varepsilon I_0(f,g) + \sum_{n=1}^\infty \varepsilon^{n+1} L_0(F_n(v_1,...,v_n)) \\
&= \varepsilon I_0(f,g) + L_0\left(F\left(\sum_{n=1}^\infty \varepsilon^{n} v_n\right)\right) \\
&= I_0(\varepsilon f,\varepsilon g) + L_0(F(\widetilde{u}))
\end{split}
\end{equation*}
which is equivalent to the wave equation \refa{Fu:wave-eq} with initial data \refa{Fu-pert-initdata}. Uniqueness of solutions follows easily from theorem \ref{Th:Fu}.
An important consequence of the convergence of $\sum_{n=1}^\infty \varepsilon^n \|v_{n}\|_\Linftx{q}$ is that there exist constants $0<M<\infty$ and $\w{R}^{-1}<\rho<\infty$ such that $\|v_{n}\|_\Linftx{q}<M \rho^n$ for every $n\geq 1$.
\end{proof}
Since the introduction of the auxiliary parameter $\varepsilon$ in the series expansion is only a way to generate a system of linear equations equivalent to the original nonlinear equation, we can now remove the parameter $\varepsilon$ and replace the condition on the initial data by requiring $f_0, f_1, g_0 < \w{R}$. If we solve the system \refa{Fu-perteq1}-\refa{Fu-perteq}, then we obtain a solution of the nonlinear wave equation \refa{Fu:wave-eq} by summing the convergent series $\sum_{n=1}^\infty v_n = u$.
\subsection{Optimal decay estimate}
In the nonlinear case, the iteration sequence $u_n$ is different from the perturbation sequence $\widetilde{u}_n:=\sum_{m=1}^n v_m$, therefore the question whether information about the decay rate of $u$ can be read off from the low-order terms must be studied separately for both cases. On the one hand, in the iterative scheme the form of the source terms $F(u_n)$ is much simpler than that in the perturbative scheme, $F_n(v_1,...,v_n)$. On the other hand, in practice, it is much easier to calculate the $v_n$'s than the $u_n$'s.
Below, we address both situations.
As for the linear equation, there will be two cases depending on whether $m$ is smaller or bigger than $p$. In the first case, the initial data dominate the late-time decay rate of $u$; in the second case the power $p$ of the nonlinearity, through nonlinear scattering, determines the decay rate of $u$.
\subsubsection{Iteration}
In analogy to the linear case, based on decay information for some low-order term in the iteration sequence and on its error bound, we find the exact decay rate of $u$. From the error bound \refa{Fu:u-u_n} it follows for large $t$ that
\begin{equation*}
|u(t,x)-u_n(t,x)|\leq \frac{C\varepsilon^{(p-1)n+1}}{\norm{t+|x|}\norm{t-|x|}^{q-1}}
\leq \frac{c_n(x)\varepsilon^{(p-1)n+1}}{(1+t)^q}
\end{equation*}
where $q:=\min(p-1,m-1)$.
If we are able to show that some $u_n(t,x)\cong \varepsilon d_n(x) t^{-q}\neq 0$ for large $t$ (the asymptotic approximation is to be understood in the following sense:
\begin{equation} \label{def-approx}
\exists_{\eta_0>0} \forall_{0<\eta<\eta_0} \exists_{T<\infty} \forall_{t>T}
\left| u_n(t,x) - \frac{\varepsilon d_n(x)}{t^q}\right| < \eta \frac{\varepsilon |d_n(x)|}{t^q},
\end{equation}
i.e. the relative error $\eta$ becomes arbitrarily small for sufficiently big $t$, cf. \refa{def0-approx}), then already $u_n$ shows the correct decay rate, identical with that of $u$, because, choosing $\eta:=\varepsilon^{(p-1)n}$, we get
\begin{equation*}
\left|u(t,x)-\varepsilon \frac{d_n(x)}{t^q}\right| \leq
\left|u(t,x)-u_n(t,x)\right| + \left|u_n(t,x)-\varepsilon \frac{d_n(x)}{t^q}\right| \leq
\eta \varepsilon \frac{c_n(x)}{t^q} + \eta \varepsilon \frac{|d_n(x)|}{t^q}
\end{equation*}
for sufficiently small $\varepsilon$. Hence the decay rate of $u$ at late times is exactly $t^{-q}$.
In the case $m>p=q+1$, we have $u_1=I_0(f,g)\in\Linftx{m-1}$ and
\begin{equation*}
|u_1(t,x)|\leq \frac{C_m\cdot(f_0+f_1+g_0)}{\norm{t+|x|}\norm{t-|x|}^{m-2}} \leq
\frac{2 C_m\cdot(f_0+f_1+g_0)}{(1+t)^{m-1}}
\end{equation*}
for $t\geq 2(m-2)|x|$,
hence it decays faster than $u$, and it cannot be shown that $u_1(t,x)\cong \varepsilon d_1(x) t^{-q}$. It is expected, however, that $u_2\cong \varepsilon d_2(x) t^{-q}$, which means that already $u_2$ would have the same rate of decay as $u$ (see \cite{NS-PB_Tails} for such results in spherical symmetry).
In the case $m \leq p$, we have $q:=m-1$. Then it should be possible to show $u_1(t,x)\cong c_1(x) t^{-(m-1)}$, which means that already $u_1$ would have the same rate of decay as $u$.
\subsubsection{Perturbation series}
The perturbation scheme \refa{Fu-perteq1}-\refa{Fu-perteq} can be written as
\begin{align}
v_1&=I_0(f,g) \\
v_2&=v_3=...=v_{p-1}=0\\
v_{p} &= L_0(F_{p-1}(v_1,...,v_{p-1})) = b_p L_0(v_1^p)\\
v_{n+1} &= L_0(F_n(v_1,...,v_n)),\qquad n\geq p.
\end{align}
Assume we are in the more interesting case $m>p$, where the tail results from the nonlinear scattering. Then, $v_1=I_0(f,g)\in\Linftx{m-1}$ and $v_n\in\Linftx{p-1}$ for $n\geq 2$. If we can show that $b_p L_0((I_0(f,g))^p)\cong d_p(x) t^{-(p-1)}$, then already $v_p$ has the correct decay rate, identical with that of $u$. To prove it, we need to show that $\varepsilon I_0(f,g)$ and $\varepsilon^{n+1} L_0(F_n(v_1,...,v_n))$ for $n\geq p$ are small relative to $\varepsilon^p d_p(x) t^{-(p-1)}$.
Again, for $v_1=I_0(f,g)\in\Linftx{m-1}$ the situation is obvious, $\varepsilon |v_1| = \varepsilon |I_0(f,g)(t,x)|\leq c_1(x)\varepsilon (1+t)^{-(m-1)}$ and it is much smaller than $\varepsilon^p d_p(x) t^{-(p-1)}$ for sufficiently large $t$.
From the convergence proof for the perturbation series we know that there exist constants $M,\rho>0$ such that $\|v_{n}\|_\Linftx{p-1}\leq M \rho^n$ for all $n\geq 1$. Hence, we can estimate the remainder of the perturbation series
\begin{equation*}
\left\|\sum_{m=p+1}^\infty \varepsilon^m v_m\right\|_\Linftx{p-1}
\leq M \sum_{m=p+1}^\infty \varepsilon^m \rho^m
\leq \frac{M \varepsilon^{p+1} \rho^{p+1}}{1-\varepsilon\rho} \leq C \varepsilon^{p+1},
\end{equation*}
for sufficiently small $\varepsilon<1/\rho$. It means that
\begin{equation*}
\left|\sum_{m=p+1}^\infty \varepsilon^m v_m(t,x)\right| \leq \frac{C \varepsilon^{p+1}}{\norm{t+|x|}\norm{t-|x|}^{p-2}}
\leq \frac{c(x) \varepsilon^{p+1}}{(1+t)^{p-1}}
\end{equation*}
for big $t$ (and fixed $x$). Then
\begin{equation*}
\left| u(t,x) - \varepsilon^p v_p(t,x)\right| \leq
|\varepsilon v_1(t,x)| + \left|\sum_{m=p+1}^\infty \varepsilon^m v_m\right| \leq
\frac{c_1(x) \varepsilon}{t^{m-1}} + \frac{c(x) \varepsilon^{p+1}}{t^{p-1}}
\end{equation*}
hence with $v_p\cong d_p(x) t^{-(p-1)}$ and the relative error $\eta:=\varepsilon$ (cf. \refa{def-approx} for the definition of ``$\cong$'')
\begin{equation*}
\left|u(t,x) - \frac{d_p(x) \varepsilon^p}{t^{p-1}}\right| \leq
\frac{c_1(x) \varepsilon}{t^{m-1}} + \frac{[c(x) + |d_p(x)|] \varepsilon^{p+1}}{t^{p-1}}.
\end{equation*}
For small $\varepsilon$ and big $t$, such that $t^{m-p} \geq \frac{c_1(x)}{c(x)} \varepsilon^{-p}$, we obtain
\begin{equation*}
u(t,x) \cong \frac{d_p(x) \varepsilon^p}{t^{p-1}}.
\end{equation*}
Thus $v_p$ dominates the perturbation series for large times and small $\varepsilon$, and it has the same decay rate as the full solution $u$ of the nonlinear wave equation.
\section{Nonlinear case with the potential term}
Finally, let us consider a nonlinear wave equation with a potential
\begin{equation} \label{Fu-V:wave-eq}
\Box u + V u = F(u)
\end{equation}
subject to initial data $(f,g)$ satisfying \refa{fg-bound} with $f_0, f_1, g_0 < \varepsilon$. The nonlinear term $F(u)$ is as in the previous section.
\subsection{Iteration}
\subsubsection{Perturbative treatment of $V$}
As in the previous sections, we define an iteration
\begin{equation*}
u_{0} := 0
\end{equation*}
\begin{equation*}
u_{n+1} := I_0(f,g) - \lambda L_0(V u_n) + L_0(F(u_n)),\qquad n\geq 0
\end{equation*}
We have the following
\begin{Theorem} \label{Th:V-Fu}
With $f,g$, $V$ and $F(u)$ as above, for any $m>3$, $k>2$, $p> 1+\sqrt{2}$, $\lambda<C_{q,k}^{-1}$, and sufficiently small $\varepsilon$, the sequence $u_n$ converges (in norm) in $\Linftx{q}$ for $q=\min(p-1,k,m-1)$ to the solution $u$ of the equation \refa{Fu-V:wave-eq}. The limit $u:=\lim_{n\rightarrow\infty} u_n$ satisfies
\begin{equation*}
|u(t,x)| \leq \frac{C}{\norm{t+|x|}\norm{t-|x|}^{q-1}},\qquad \forall (t,x)\in\R_+^{1+3}
\end{equation*}
with some positive constant $C$ depending only on $p,k,m,\lambda$ and $\varepsilon$.
\end{Theorem}
The proof combines the proofs of theorems \ref{Th:V} and \ref{Th:Fu}, therefore we concentrate only on the points that differ.
\begin{proof}
For $g,\nabla f\in\Linfx{m}$ and $f\in\Linfx{m-1}$ with $m>3$, lemma \ref{Lem:init-data} gives $u_1=I_0(f,g)\in\Linftx{m-1}$. Next, for $\delta:=\lambda C_{q,k}<1$ fix some $M>3/(1-\delta)>0$. If $u_n\in\Linftx{q}$ with $\|u_n\|_\Linftx{q}\leq M C_m \varepsilon$ for some $n\geq 1$ and $q>1$, then from lemmas \ref{Lem:init-data}-\ref{Lem:power-p} we get
\begin{equation*}
\begin{split}
\|u_{n+1}\|_\Linftx{q} &\leq \|I_0(f,g)\|_\Linftx{q} + \lambda \|L_0(V u_n)\|_\Linftx{q} +
\|L_0(F(u_n))\|_\Linftx{q} \\
&\leq C_m (f_0+ f_1+g_0) + \lambda C_{q,k} \|u_n\|_\Linftx{q} + F_1 C \|u_n\|_\Linftx{q}^p \\
&\leq 3 C_m \varepsilon + \delta M C_m \varepsilon + F_1 C (M C_m)^p \varepsilon^p < \infty
\end{split}
\end{equation*}
and hence $u_{n+1}\in\Linftx{q}$ if $q:=\min(m-1, k, p-1)$. By induction we obtain $u_n\in\Linftx{q}$ for every $n\geq 1$. For $\varepsilon>0$ such that $F_1 C (M C_m)^p \varepsilon^{p-1} \leq \min[(M(1-\delta)-3) C_m, M \delta(1-\delta) C_m F_1/F_2]$ we have
\begin{equation*}
\|u_{1}\|_\Linftx{q}\leq 3 C_m \varepsilon \leq M C_m \varepsilon
\end{equation*}
\begin{equation*}
\|u_{n+1}\|_\Linftx{q}\leq 3 C_m \varepsilon + \delta M C_m \varepsilon + (M(1-\delta)-3) C_m \varepsilon
= M C_m \varepsilon,
\end{equation*}
hence $\|u_{n}\|_\Linftx{q}\leq M C_m\varepsilon$ for all $n\geq 1$. Analogously like in the previous proofs, we arrive at
\begin{equation*}
\begin{split}
\|u_{n+1}-u_n\|_\Linftx{q} &\leq \|L_0(V(u_n-u_{n-1}))\|_\Linftx{q} +
\|L_0(F(u_n)-F(u_{n-1}))\|_\Linftx{q}\\
& \leq \lambda C_{q,k} \|u_n-u_{n-1}\|_\Linftx{q} + F_2 C (M C_m\varepsilon)^{p-1}\|u_n-u_{n-1}\|_\Linftx{q} \\
& \leq \delta \|u_n-u_{n-1}\|_\Linftx{q} + \delta(1-\delta) \|u_n-u_{n-1}\|_\Linftx{q}
= \delta' \|u_n-u_{n-1}\|_\Linftx{q},
\end{split}
\end{equation*}
where $\delta':=2\delta-\delta^2<1$. It follows that $u_n$ is a Cauchy sequence (see the above proofs) in the Banach space $\Linftx{q}$ and hence $u_n$ has a limit $u\in\Linftx{q}$ satisfying
\begin{equation} \label{Fu-V:u-sol}
u=I_0(f,g)-\lambda L_0(Vu)+L_0(F(u))
\end{equation}
and solving the wave equation \refa{Fu-V:wave-eq} with the initial data \refa{fg-bound}. Its $\Linftx{q}$-norm satisfies
\begin{equation} \label{Fu-V:u-norm}
\|u\|_\Linftx{q}\leq M C_m\varepsilon
\end{equation}
with some (finite) constant $M>3/(1-\delta)>0$.
\end{proof}
Moreover, by considerations analogous to those in the proof of theorem \ref{Th:Fu}, we find for $n'>n$
\begin{equation*}
\|u_{n'}-u_n\|_\Linftx{q} \leq \frac{\delta'^n}{1-\delta'} 3 C_m \varepsilon,
\end{equation*}
and in the limit $n'\rightarrow\infty$
\begin{equation} \label{Fu-V:u-u_n}
\|u-u_n\|_\Linftx{q} \leq \frac{\delta'^n}{1-\delta'} 3 C_m \varepsilon.
\end{equation}
\subsubsection{Non-perturbative treatment of $V$}
Building on the above results we can also define an alternative iteration scheme
\begin{equation*}
u_{0} := 0
\end{equation*}
\begin{equation*}
u_{n+1} := I_V(f,g) + L_V(F(u_n)),\qquad n\geq 0
\end{equation*}
which is based on inversion of the operator $\Box+\lambda V$. According to the discussion in the introduction, it is equivalent to
\begin{equation*}
u_{n+1} = I_0(f,g) + L_0(F(u_n)) - \lambda L_0(V u_{n+1}),\qquad n\geq 0.
\end{equation*}
It converges under the same conditions as in theorem \ref{Th:V-Fu}. The only difference in the proof is that now we have
\begin{equation*}
\begin{split}
\|u_{n+1}\|_\Linftx{q} &\leq \|I_0(f,g)\|_\Linftx{q} + \|L_0(F(u_n))\|_\Linftx{q}
+ \lambda \|L_0(V u_{n+1})\|_\Linftx{q}\\
&\leq C_m (f_0+ f_1+g_0) + F_1 C \|u_n\|_\Linftx{q}^p + \lambda C_{q,k} \|u_{n+1}\|_\Linftx{q}\\
&\leq (1-\delta) M C_m \varepsilon + \delta \|u_{n+1}\|_\Linftx{q}
\end{split}
\end{equation*}
which gives
\begin{equation*}
\|u_{n+1}\|_\Linftx{q} \leq \frac{(1-\delta) M C_m \varepsilon}{1-\delta} = M C_m \varepsilon.
\end{equation*}
\subsection{Perturbation series}
Defining a perturbation scheme for the nonlinear wave equation with potential \refa{Fu-V:wave-eq},
\begin{equation} \label{Fu-V:pert-ser}
u = \sum_{n=1}^\infty \varepsilon^n v_n
\end{equation}
one encounters the problem of two scales, introduced by the parameters $\lambda$, measuring the strength of the potential, and $\varepsilon$, measuring the strength of the initial data. Therefore, we propose two ways of looking at the problem: in the first, we treat the potential non-perturbatively; in the second, we assign to $\lambda$ a scale of some power of $\varepsilon$.
\subsubsection{Non-perturbative treatment of $V$ ($\lambda \sim \varepsilon^0$)}
In this perturbation scheme we invert the operator $\Box+\lambda V$, thus treating $V$ in a non-perturbative way. For the sequence $v_n$ defined by
\begin{align}
v_1 &:=I_V(f,g) = I_0(f,g) - L_0(Vv_1) \label{V-Fu-1perteq1} \\
v_{n+1} &:= L_V(F_n(v_1,...,v_n)) = L_0(F_n(v_1,...,v_n)) - L_0(V v_{n+1}),\qquad
n\geq 1 \label{V-Fu-1perteq}
\end{align}
we have the following
\begin{Theorem} \label{Th:V-Fu-pert1}
With $f,g, V$ and $F(u)$ as above for any $m>3$, $k>2$, $p> 1+\sqrt{2}$, $\lambda<C_{q,k}^{-1}$ and sufficiently small $\varepsilon$ the series defined in \refa{Fu-V:pert-ser}-\refa{V-Fu-1perteq} converges (in norm) in $\Linftx{q}$ for $q=\min(p-1,k,m-1)$ to the solution of the equation \refa{Fu-V:wave-eq} with initial data \refa{Fu-pert-initdata}.
\end{Theorem}
\begin{proof}
The proof is essentially identical to that of theorem \ref{Th:Fu-pert}, with the following differences. For $q:=\min(m-1,k,p-1)$ we obtain
\begin{equation*}
\|v_1\|_\Linftx{m-1} \leq D + \delta \|v_1\|_\Linftx{m-1},
\end{equation*}
where $\delta:=\lambda C_{q,k}$ and hence
\begin{equation*}
\|v_1\|_\Linftx{q} \leq \|v_1\|_\Linftx{m-1} \leq \frac{D}{1-\delta} < \infty.
\end{equation*}
The same modification applies to all the remaining inequalities,
\begin{equation*}
\|v_{n+1}\|_\Linftx{q} \leq C \sum_k |a^n_k|\cdot \|v_1\|_\Linftx{q}^{\alpha^{n,1}_k} \cdots \|v_n\|_\Linftx{q}^{\alpha^{n,n}_k} + \delta \|v_{n+1}\|_\Linftx{q}
\end{equation*}
which leads to
\begin{equation*}
\|v_{n+1}\|_\Linftx{q} \leq \frac{C}{1-\delta} \sum_k |a^n_k|\cdot \|v_1\|_\Linftx{q}^{\alpha^{n,1}_k} \cdots \|v_n\|_\Linftx{q}^{\alpha^{n,n}_k}.
\end{equation*}
Repeating the trick used in the proof of theorem \ref{Th:Fu-pert}, we can relate this problem to the algebraic equation, which now becomes
\begin{equation} \label{algebraic-Fu-V}
{W = \frac{C}{1-\delta}\; \w{F}(W) + \frac{D}{1-\delta}\; \varepsilon}.
\end{equation}
Since $G(W)$, given by
\begin{equation*}
G(W):=\frac{(1-\delta) W-C\,\w{F}(W)}{D} = \varepsilon
\end{equation*}
is again analytic and satisfies $G'(0)=(1-\delta)/D>0$, the inverse function $W(\varepsilon)=G^{-1}(\varepsilon)$ is analytic as well. Repeating the reasoning, we arrive at the conclusion that $\sum_{n=1}^\infty \varepsilon^n \|v_n\|_\Linftx{q}$ has a positive radius of convergence.
It follows that the series \refa{Fu-V:pert-ser} converges in norm in $\Linftx{q}$ for all $\varepsilon<\w{R}$ to the solution of \refa{Fu-V:wave-eq} with initial data \refa{Fu-pert-initdata}.
Uniqueness follows easily from theorem \ref{Th:V-Fu}.
\end{proof}
We can, again, remove the auxiliary parameter $\varepsilon$ and replace the condition on the initial data by $f_0, f_1, g_0 < \w{R}$. Then, the series $\sum_{n=1}^\infty v_n$ defined by \refa{V-Fu-1perteq1}-\refa{V-Fu-1perteq} converges to the solution of the nonlinear wave equation \refa{Fu-V:wave-eq}.
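The reduction to the algebraic equation \refa{algebraic-Fu-V} also gives a practical way to compute the majorizing coefficients: expanding $W(\varepsilon)$ in powers of $\varepsilon$, the coefficient of $\varepsilon^n$ bounds $\|v_n\|_\Linftx{q}$. A minimal numerical sketch in Python (the choice $\w{F}(W)=W^p$ with integer $p=3$ and the sample constants $C$, $D$, $\delta$ are illustrative assumptions, not values from the text):

```python
def series_mul(a, b, N):
    """Product of two truncated power series (coefficient lists), to order N."""
    c = [0.0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(min(len(b), N + 1 - i)):
                c[i + j] += ai * b[j]
    return c

def series_pow(a, p, N):
    """p-th power of a truncated power series."""
    r = [1.0] + [0.0] * N
    for _ in range(p):
        r = series_mul(r, a, N)
    return r

def majorant_series(C, D, delta, p=3, N=9):
    """Taylor coefficients of W(eps) solving W = C/(1-d) W^p + D/(1-d) eps.

    Fixed-point iteration on truncated series: since W^p raises the
    eps-valuation, every sweep stabilizes further orders, so N sweeps
    determine the coefficients 0..N exactly.
    """
    c1, d1 = C / (1.0 - delta), D / (1.0 - delta)
    W = [0.0] * (N + 1)
    for _ in range(N):
        W = [c1 * x for x in series_pow(W, p, N)]
        W[1] += d1
    return W

# Illustrative constants; by hand: w_1 = D/(1-delta), w_3 = C/(1-delta)*w_1^3.
W = majorant_series(C=0.5, D=1.0, delta=0.2)
```

Once the coefficients $w_n$ are available, the radius of convergence of $\sum_n w_n\varepsilon^n$ can be estimated by a ratio test; this is the quantity playing the role of $\w{R}$ above.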
\subsubsection{Perturbative treatment of $V$ ($\lambda \sim \varepsilon^a$)}
\def\widetilde{\lambda}{\widetilde{\lambda}}
If we assume that the small scale of the potential's strength $\lambda$ is related to the small scale of the initial data, say $\lambda = \varepsilon^a \widetilde{\lambda}$ with $a\in\mathbb{N}_+$, then the power series Ansatz
\begin{equation} \label{Fu-V:pert-ser'}
u = \sum_{n=1}^\infty \varepsilon^n v_n
\end{equation}
inserted into the wave equation \refa{Fu-V:wave-eq} gives
\begin{align}
v_{-n} &:= 0,\qquad n\geq 0 \label{V-Fu-2perteq-} \\
v_1 &:=I_0(f,g) \label{V-Fu-2perteq1} \\
v_{n+1} &:= -\widetilde{\lambda} L_0(V v_{n+1-a}) + L_0(F_n(v_1,...,v_n)) \label{V-Fu-2perteq},\qquad n\geq 1.
\end{align}
This system is much better suited to numerical techniques, because the equation for $v_{n+1}$ is explicit, in contrast to the previous scheme, whose equations for $v_{n+1}$ are implicit (i.e. $v_{n+1}$ appears on both sides). Moreover, if we choose $a:=p-1$, then the lowest nontrivial order $v_p$ (all lower orders vanish: $v_n=0$ for $1<n<p$) contains contributions from both $V$ and $F$ and can be used as a good approximation to $u$ (assuming the series converges), as will be discussed in the next section.
In this case we also have a convergence result:
\begin{Theorem} \label{Th:V-Fu-pert2}
With $f,g,V$ and $F(u)$ as above, for any $m>3$, $k>2$, $p> 1+\sqrt{2}$, $\lambda<C_{q,k}^{-1}$ and sufficiently small $\varepsilon$, the series defined in \refa{Fu-V:pert-ser'}-\refa{V-Fu-2perteq} converges (in norm) in $\Linftx{q}$, with $q=\min(p-1,k,m-1)$, to the solution of the equation \refa{Fu-V:wave-eq} with initial data \refa{Fu-pert-initdata}.
\end{Theorem}
\begin{proof}
The proof is again analogous to that of theorems \ref{Th:Fu-pert} and \ref{Th:V-Fu-pert1} with the following differences. We have for $q:=\min(m-1,k,p-1)$
\begin{equation*}
\|v_1\|_\Linftx{q} \leq \|v_1\|_\Linftx{m-1} \leq C_m(f_0+f_1+g_0) =: D < \infty
\end{equation*}
and
\begin{equation*}
\begin{split}
\|v_{n+1}\|_\Linftx{q}
&\leq \|L_0(F_n(v_1,...,v_n))\|_\Linftx{q} + \widetilde{\lambda} \|L_0(V v_{n+1-a})\|_\Linftx{q} \\
&\leq C \sum_k |a^n_k|\cdot \|v_1\|_\Linftx{q}^{\alpha^{n,1}_k} \cdots \|v_n\|_\Linftx{q}^{\alpha^{n,n}_k} + \w{\delta} \|v_{n+1-a}\|_\Linftx{q}.
\end{split}
\end{equation*}
Here $\w{\delta}:=\widetilde{\lambda} C_{q,k}$. The corresponding algebraic equation now becomes
\begin{equation} \label{algebraic-Fu-Va}
{W = C\; \w{F}(W) + \w{\delta} \varepsilon^a W + D \varepsilon}.
\end{equation}
It cannot be rewritten, like before, as $G(W)=\varepsilon$, but it can be written as
\begin{equation*}
G(W,\varepsilon):=W - C\; \w{F}(W) - \w{\delta} \varepsilon^a W - D \varepsilon = 0.
\end{equation*}
Since $a$ is a positive integer, $G(W,\varepsilon)$ is analytic in both variables in a neighbourhood of $(0,0)$, and $G(0,0)=0$. Moreover, $\partial G(W,\varepsilon)/\partial W|_{W=0} = 1-\w{\delta} \varepsilon^a = 1 - \lambda C_{q,k}>0$.
Then, by the (real) analytic implicit function theorem (see e.g. \cite{Hille}), there exists a unique function $W(\varepsilon)$ such that $G(W(\varepsilon),\varepsilon)=0$, and this $W(\varepsilon)$ has a Taylor series representation with a positive radius of convergence.
Repeating the reasoning of the previous proofs, we arrive at the conclusion that $\sum_{n=1}^\infty \varepsilon^n \|v_n\|_\Linftx{q}$ has a positive radius of convergence $\w{R}>0$.
It follows that the series \refa{Fu-V:pert-ser'}-\refa{V-Fu-2perteq} converges in norm in $\Linftx{q}$ for all $0<\varepsilon<\w{R}$ to the solution of \refa{Fu-V:wave-eq} with initial data \refa{Fu-pert-initdata}.
Uniqueness follows again from theorem \ref{Th:V-Fu}.
\end{proof}
\subsection{Optimal decay estimate}
\subsubsection{Iteration with perturbative treatment of $V$}
From \refa{Fu-V:u-u_n} we have
\begin{equation*}
|u(t,x)-u_n(t,x)| \leq \frac{\delta^n (2-\delta)^n}{(1-\delta)^2}
\cdot \frac{3C_m\varepsilon}{\norm{t+|x|}\norm{t-|x|}^{q-1}}
\leq \frac{c_n(x)\lambda^n\varepsilon}{(1+t)^q}
\end{equation*}
for sufficiently small $\varepsilon$ and $\lambda$ (such that $\lambda C_{q,k} = \delta<\delta_0<1$).
If we are able to show that some $u_n(t,x)\cong \varepsilon d_n(x) t^{-q}\neq 0$ for large $t$ (the asymptotic approximation ``$\cong$'' is to be understood in the sense defined in \refa{def-approx}, with relative error $\eta$), then already $u_n$ shows the correct decay rate, identical to that of $u$, because then, choosing $\eta:=\lambda^n$, we get
\begin{equation*}
\left|u(t,x) - \frac{d_n(x)\varepsilon}{t^q}\right| \leq
\frac{[d_n(x)+c_n(x)]\lambda^n\varepsilon}{t^q}.
\end{equation*}
For small $\lambda$ it follows that
\begin{equation*}
|u(t,x)| \cong \frac{d_n(x)\varepsilon}{t^q}.
\end{equation*}
Again, in the case when $m>p$, we have $u_1=I_0(f,g)\in\Linftx{m-1}$ and
\begin{equation*}
|u_1(t,x)|\leq \frac{C_m\cdot(f_0+f_1+g_0)}{\norm{t+|x|}\norm{t-|x|}^{m-2}} \leq
\frac{2 C_m\cdot(f_0+f_1+g_0)}{(1+t)^{m-1}}
\end{equation*}
for $t\geq 2(m-2)|x|$,
hence it decays faster than $u$, and it cannot be shown that $u_1(t,x)\cong \varepsilon d_1(x) t^{-q}$. It is expected, however, that $u_2\cong \varepsilon d_2(x) t^{-q}$ holds, which means that already $u_2$ would have the same decay rate as $u$ (see \cite{NS-PB_Tails} for such results in spherical symmetry).
In the case when $m \leq p$, we have $q:=m-1$. Then it should be possible to show $u_1(t,x)\cong d_1(x) t^{-(m-1)}$, which means that already $u_1$ would have the same decay rate as $u$.
\subsubsection{Perturbation series with perturbative treatment of $V$}
Consider the system \refa{V-Fu-2perteq-}-\refa{V-Fu-2perteq} and choose the constant $a:=p-1$, so that $v_2=\dots=v_{p-1}=0$ and at order $v_p$ both effects, the nonlinear and the linear (potential) scattering, appear simultaneously:
\begin{align}
v_{-n} &:= 0,\qquad n\geq 0\\
v_1&=I_0(f,g) \label{Fu-V:pert-v1}\\
v_2&=v_3=...=v_{p-1}=0\\
v_{p} &= -\widetilde{\lambda} L_0(V v_1) + L_0(F_{p-1}(v_1,...,v_{p-1})) = -\widetilde{\lambda} L_0(V v_1) +a_0 L_0((v_1)^p) \label{Fu-V:pert-vp}\\
v_{n+1} &= -\widetilde{\lambda} L_0(V v_{n-p+2}) +L_0(F_n(v_1,...,v_n)),\qquad n\geq p.
\end{align}
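The vanishing of the orders $v_2,\dots,v_{p-1}$ and the mixing of both effects at order $p$ can be checked on the scalar majorant version of \refa{algebraic-Fu-Va} with $a=p-1$: expanding $W = C\,\w F(W) + \w{\delta}\,\varepsilon^{p-1} W + D\varepsilon$ in powers of $\varepsilon$ gives, for $\w F(W)=W^p$, $w_2=\dots=w_{p-1}=0$ and $w_p = C D^p + \w{\delta} D$, i.e. the nonlinear and the potential contribution enter together at order $p$. A minimal Python sketch (integer $p=3$ and the numerical constants are illustrative assumptions):

```python
def series_mul(a, b, N):
    """Product of two truncated power series (coefficient lists), to order N."""
    c = [0.0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(min(len(b), N + 1 - i)):
                c[i + j] += ai * b[j]
    return c

def series_pow(a, p, N):
    """p-th power of a truncated power series."""
    r = [1.0] + [0.0] * N
    for _ in range(p):
        r = series_mul(r, a, N)
    return r

def scheme_series(C, dl, D, p=3, N=8):
    """Formal solution of W = C W^p + dl eps^(p-1) W + D eps (case a = p-1)."""
    a = p - 1
    W = [0.0] * (N + 1)
    for _ in range(N):
        new = [C * x for x in series_pow(W, p, N)]
        for n in range(N + 1 - a):
            new[n + a] += dl * W[n]        # the eps^(p-1) * W shift term
        new[1] += D
        W = new
    return W

W = scheme_series(C=0.5, dl=0.3, D=1.0)
# W[2] = W[4] = 0, while W[3] = C*D**3 + dl*D: both effects enter at order p.
```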
Consider only the more interesting case $m-1>\min(p-1,k)=:q$. If we can show that $v_p\cong d_p(x) t^{-q}$, then already $v_p$ has the correct decay rate, identical to that of $u$. To prove it, we can repeat the reasoning from the section where we treated the nonlinear wave equation without the potential term, because the only fact we use there is that the perturbation series $\sum_{n=1} \varepsilon^n v_n$ has a positive radius of convergence, and this is guaranteed here by theorem \ref{Th:V-Fu-pert2}. Analogously, we obtain
\begin{equation*}
u(t,x) \cong \frac{d_p(x) \varepsilon^p}{t^q}
\end{equation*}
for all $t>T$ with sufficiently large $T=T(\varepsilon)$, so $v_p$ dominates the perturbation series for large times and small $\varepsilon$ and has the same decay rate as the full solution $u$ of the nonlinear wave equation.
This is the simplest setting for applications. Here, we only need to solve (approximately) two linear wave equations, \refa{Fu-V:pert-v1} and \refa{Fu-V:pert-vp}, in order to determine the decay rate of solutions of \refa{Fu-V:wave-eq}. This is the starting point of \cite{NS-PB_Tails}, where we solve these two equations under spherical symmetry.
\appendix
\section{Some useful estimates}
We cite the first two lemmas from \cite{NS-WaveDecay}.
\begin{Lemma} \label{Lem:init-data}
Let the data $(f,g)\in\Linfx{m-1}\times\Linfx{m}$ with $m>3$ satisfy
\begin{equation*}
f_0:=\|f\|_\Linfx{m-1} < \infty,\qquad
f_1:=\|\nabla f\|_\Linfx{m} < \infty,\qquad
g_0:=\|g\|_\Linfx{m} < \infty.
\end{equation*}
Then there exists a unique weak solution $v(t,x)=I_0(f,g)$ of the free wave equation
\begin{equation*}
\Box v = 0,\qquad v(0,x)=f(x),\qquad \partial_t v(0,x)=g(x)
\end{equation*}
which satisfies
\begin{equation*}
\|v\|_\Linftx{m-1} \leq C(f,g) := C_m\cdot (g_0+f_1+f_0).
\end{equation*}
\end{Lemma}
\begin{Lemma} \label{Lem:source}
Let the source $F$ satisfy for some $q>2$ and $1<p\leq q$
\begin{equation*}
F_0:=\|\norm{x}^q F\|_\Linftx{p} < \infty.
\end{equation*}
Then there exists a weak solution $v(t,x)=L_0(F)$ of the free wave equation with source
\begin{equation*}
\Box v = F,
\end{equation*}
and null initial data $v(0,x)=0$, $\partial_t v(0,x)=0$. Moreover, it satisfies
\begin{equation*}
\|v\|_\Linftx{p} \leq C_{p,q} F_0.
\end{equation*}
\end{Lemma}
The next lemma we cite after Asakura \cite[Cor. 2.4 and Eq. 2.33]{Asakura}, restated in our notation.
\begin{Lemma} \label{Lem:power-p}
Let $u\in\mathcal{C}^2(\R_+^{1+3})\cap\Linftx{q}$ for some $q>1$. Then for any $p>1+\sqrt{2}$
\begin{equation*}
\|L_0(|u|^p)\|_\Linftx{q} \leq C \|u\|_\Linftx{q}^p
\end{equation*}
with some $C>0$ provided $q\leq p-1$.
\end{Lemma}
Note that this lemma is a consequence of lemma \ref{Lem:source}, but only for $p>3$; for $1+\sqrt{2}<p\leq 3$ it requires a more general proof. It can also be deduced easily, even for weak solutions $u\in\mathcal{C}^0(\R_+^{1+3})$, from the following more general estimate \cite{NS-DecayLemma}.
\begin{Lemma} \label{Lem:decay}
If
\begin{equation*}
|F(t,x)|\leq \frac{A}{\norm{t+|x|}^p\norm{t-|x|}^q}
\end{equation*}
with $p>2$, $q>1$ then
\begin{equation*}
|L_0(F)(t,x)|\leq \frac{C}{\norm{t+|x|}\norm{t-|x|}^{p-2}}
\end{equation*}
with some positive constant $C$.
\end{Lemma}
| {
    "arxiv_id": "0710.1782",
    "url": "https://arxiv.org/abs/0710.1782",
    "title": "Linear and nonlinear tails I: general results and perturbation theory",
    "subjects": "Mathematical Physics (math-ph); Analysis of PDEs (math.AP)",
    "language": "en",
    "timestamp": "2009-03-04T12:58:55",
    "abstract": "For nonlinear wave equations with a potential term we prove pointwise space-time decay estimates and develop a perturbation theory for small initial data. We show that the perturbation series has a positive convergence radius by a method which reduces the wave equation to an algebraic one. We demonstrate that already first and second perturbation orders, satisfying linear equations, can provide precise information about the decay of the full solution to the nonlinear wave equation. In a forthcoming publication (part II) we address the issue of optimal decay estimates and precise asymptotics under spherical symmetry where the perturbation equations can be solved almost exactly."
} |
https://arxiv.org/abs/1901.08957 | Minimizing lattice structures for Morse potential energy in two and three dimensions | We investigate the local and global optimality of the triangular, square, simple cubic, face-centred-cubic (FCC), body-centred-cubic (BCC) lattices and the hexagonal-close-packing (HCP) structure for a potential energy per point generated by a Morse potential with parameters $(\alpha,r_0)$. The optimality of the triangular lattice is proved in dimension 2 in an interval of densities. Furthermore, a complete numerical investigation is performed for the minimization of the energy with respect to the density. In dimension 3, the local optimality of the simple cubic, FCC and BCC lattices is numerically studied. We also show that the square, triangular, cubic, FCC and BCC lattices are the only Bravais lattices in dimensions 2 and 3 being critical points of a large class of lattice energies (including the one studied in this paper) in some open intervals of densities, as we observe for the Lennard-Jones and the Morse potential lattice energies. Finally, we state a conjecture about the global minimizer for the 3d energy that exhibits a surprising transition from BCC, FCC to HCP as $\alpha$ increases. Moreover, we compare the values of $\alpha$ found experimentally for metals and rare-gas crystals with the expected lattice ground-state structure given by our numerical investigation/conjecture. Only in a few cases does the known ground-state crystal structure match the minimizer we find for the expected value of $\alpha$. Our conclusion is that the pairwise interaction model with Morse potential and fixed $\alpha$ is not adapted to describe metals and rare-gas crystals if we want to take into consideration that the lattice structure we find in nature is the ground-state of the associated potential energy. |
\section{Introduction and main results}
A fundamental question in Mathematical Physics that has been actively investigated recently is the following ``Crystal Problem'' (also called ``The crystallization conjecture'', see e.g. \cite{RadinLowT,BlancLewin-2015}): Why are solids crystalline? Answering this question in a rigorous mathematical way is known to be extremely challenging, even when the interactions between atoms or molecules are assumed to be a sum of pairwise potentials. Whereas the one-dimensional version of this problem is well understood \cite{Ventevogel1,VN2, VN3, Rad1,Crystbinary1d}, only a few results have been proved in dimensions $2$ and $3$, for models consisting of short-range interactions \cite{Rad2,Stef1,Stef2,Luca:2016aa,Friedrich:2018aa}, perturbations of the hard-sphere potential \cite{Crystal,ELi,TheilFlatley} and oscillating functions \cite{Suto1}.
\medskip
In 1929, Morse \cite{Morse} solved the three-dimensional Schr\"odinger equation with potential
$$
V_M(|x|):=e^{-2\alpha(|x|-r_0)}-2e^{-\alpha(|x|-r_0)},\quad x\in \mathbb R^d,
$$
known now as the ``Morse potential'', where $|\cdot|$ is the Euclidean norm in $\mathbb R^d$. This is an attractive-repulsive potential (see Figure \ref{fig:Morseplots} for a plot), i.e. a decreasing-increasing potential having one well. The parameters $r_0$ and $\alpha$ respectively represent the minimizer of $V_M$ and the hardness of the interaction (as $\alpha$ increases, $V_M$ approaches a hard-core potential, see Figure \ref{fig:Morseplots}). This potential is known to be a canonical model for social aggregation -- e.g. swarming and flocking -- as explained in \cite[Sect. 4]{PrimerSwarm} (see also \cite{Selfpropelled,CompactGlobMinMorse} and references therein). Furthermore, it has been shown (see e.g. \cite{RareGasHorton}) that $V_M$ provides a description of the vibrational properties of rare-gas crystals which is better than the one given by the quantum harmonic oscillator (see also \cite{Raff1990,AlimietalMorse,ParsonMorse,BarkerMorse}). Moreover, interactions in cubic metals are also well described by the Morse potential, as explained in \cite{GirifalcoWeizer,LincolnetalMorse,SharmaKachhavaMorse,MilsteinBCC,PamuketalMorse,HungetalMorse} and \cite[p. 22]{KuboNagamiya}. The values of the parameters $(\alpha,r_0)$ can then be computed from experimental data for many metals and rare-gas crystals.
\begin{figure}[!h]
\centering
\includegraphics[width=7cm]{Morseplots.png}
\caption{Plots of $V_M$ for $r_0=1$ and $\alpha\in \{1,3,6,100\}$.}
\label{fig:Morseplots}
\end{figure}
\medskip
Crystallization problems for the Morse potential have not received much attention. Ventevogel and Nijboer \cite[p. 276]{VN2} pointed out that, even in dimension 1, any general crystallization result is difficult to reach for $V_M$ (they claim to have the solution among one-dimensional periodic configurations with 2 and 3 points in each period). However, in a recent paper, Bandegi and Shirokoff \cite[Sect. 6.1]{Bandegi:2015aa} give numerical evidence for the global optimality of the equidistant configuration for some values of the density and of the parameters of the Morse potential, using convex relaxation arguments. In higher dimensions, no such result exists and only local stability properties have been proved in \cite{MilsteinMorse1970}, using Born's stability criteria for crystals, following the methods developed in \cite{born1940,misra1940}.
\medskip
Instead of investigating the pure Crystal Problem involving $V_M$ -- i.e. minimizing the interaction energy among all the possible point configurations -- we choose to study the minimization of the energy per point among periodic lattices where the points are interacting via a Morse potential. This point of view has been taken in several previous works in Number Theory \cite{Rankin,Eno2,Ennola,Cassels,Diananda,Mont,SarStromb, Coulangeon:kx,Coulangeon:2010uq,Gruber}, optimal point configurations problems \cite{CohnKumar} and Mathematical Physics \cite{NonnenVoros,AftBN,CheOshita,Sandier_Serfaty,BeterminPetrache,BetKnupfdiffuse}. It is a natural first step for keeping or rejecting periodic structures which could be good candidates for the Crystal Problem associated to the interaction potential. We have already studied the Lennard-Jones type potential case in \cite{Betermin:2014fy,BetTheta15,Beterminlocal3d,Beterloc,OptinonCM} and our goal is to give the same kind of quantitative results for the Morse potential.
\medskip
Let us first define our spaces of periodic lattices. Let $\mathcal{L}_d^\circ(V)$ be the space of Bravais lattices, i.e. lattices of the form $L=\bigoplus_{i=1}^d \mathbb Z u_i$ where $\{u_i\}_i$ is a basis of $\mathbb R^d$, of area (in dimension $2$, where we will denote it by $A$) or volume $V>0$ (i.e. the volume $|\det \{u_i\}_i|$ of their unit cell), and let $\mathcal{L}_d$ be the space of all $d$-dimensional Bravais lattices. We also write $\mathcal{P}_d$ for the space of all $d$-dimensional periodic configurations, i.e. all the possible finite unions of Bravais lattices. We hence have, for any volume $V$, $\mathcal{L}_d^{\circ}(V)\subset \mathcal{L}_d\subset \mathcal{P}_d$. Furthermore, in dimensions $2$ and $3$, as explained for instance in \cite{Terras_1988}, any Bravais lattice $L\in \mathcal{L}_d$ can be parametrized by a vector $(x,y,A)$ or $(u,v,x,y,z,V)$, where $(x,y)\in \mathbb R^2$ (resp. $(u,v,x,y,z)\in \mathbb R^5$) belongs to a fundamental domain containing only one copy of each lattice (this is basically due to the reduction of quadratic forms). This parametrization will be used in Sections \ref{sec-numeric2d} and \ref{sec-numeric3d}. In particular, for any $E:\mathcal{L}_d\to \mathbb R$ of class $C^2$, the gradient and the Hessian of $E$ at $L\in \mathcal{L}_d$ will be denoted by $\nabla_L E[L]$ and $D^2 E[L]$, respectively. For more details, see \cite{Beterloc,Beterminlocal3d}. Furthermore, the same differentiation with respect to the structure can be done for periodic configurations (see e.g. \cite{Coulangeon:2010uq} for details).
\medskip
We now define the energy we want to focus on. Writing
$$
V_M(|x|)=e^{\alpha r_0}\left(e^{\alpha r_0}e^{-2\alpha |x|} -2e^{-\alpha |x|} \right)=:e^{\alpha r_0} f(|x|^2),\quad f(r):=e^{\alpha r_0} e^{-2\alpha \sqrt{r}}-2e^{-\alpha \sqrt{r}},
$$
the goal of this paper is to investigate rigorously and numerically the energy per point defined by
\begin{equation}
E_{\alpha,r_0}[L]:=\sum_{p\in L\backslash\{0\}} f(|p|^2)=e^{\alpha r_0} \sum_{p\in L\backslash\{0\}} e^{-2\alpha |p|}-2\sum_{p\in L\backslash\{0\}} e^{-\alpha |p|},
\end{equation}
among Bravais lattices $L\in \mathcal{L}_d$ (or among periodic configurations). This energy, as in the case of Lennard-Jones type potentials and of differences of Yukawa potentials (see \cite[Sec. 5]{BetTheta15}), can be seen as a difference of competing interactions with completely monotone potentials (i.e. the functions are positive and the signs of their derivatives alternate). It has been shown in \cite[Sec. 3.1]{BetTheta15} that $L\mapsto \sum_{p\in L} e^{-\beta |p|}$ is minimized by the triangular lattice, and that the square lattice $\mathbb Z^2$ is a saddle point of the energy, in $\mathcal{L}_2^\circ(A)$ for every fixed $A$ and every $\beta>0$, following the lattice theta function properties proved in \cite{Mont}. Thus, studying the minimization on $\mathcal{L}_d^\circ(V)$ of a difference of such lattice energies must exhibit, as in the Lennard-Jones case, many other minimizing lattices as $V$ varies, at least in dimension 2 (see \cite{Beterloc}).
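Since the terms of $E_{\alpha,r_0}$ decay exponentially, the energy per point can be evaluated by direct summation with a modest cutoff. A minimal Python sketch for $\alpha=6$, $r_0=1$ (the cutoff $|m|,|n|\le 15$ and the sample areas are illustrative choices; the $p=0$ term is dropped, which is an additive constant and does not affect comparisons between lattices):

```python
import math

def morse_energy(u1, u2, alpha=6.0, r0=1.0, N=15):
    """Direct lattice sum of e^{alpha r0} e^{-2 alpha |p|} - 2 e^{-alpha |p|}
    over p = m*u1 + n*u2 with |m|, |n| <= N and p != 0."""
    E = 0.0
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            d = math.hypot(m * u1[0] + n * u2[0], m * u1[1] + n * u2[1])
            E += math.exp(alpha * (r0 - 2.0 * d)) - 2.0 * math.exp(-alpha * d)
    return E

def triangular(A):
    """Basis of the triangular lattice Lambda_A of area A."""
    c = math.sqrt(2.0 * A / math.sqrt(3.0))
    return (c, 0.0), (0.5 * c, 0.5 * math.sqrt(3.0) * c)

def square(A):
    """Basis of the square lattice sqrt(A) Z^2."""
    s = math.sqrt(A)
    return (s, 0.0), (0.0, s)

# Triangular wins at small area; at a larger area the ordering flips.
E_tri_low, E_sq_low = morse_energy(*triangular(1.0)), morse_energy(*square(1.0))
E_tri_mid, E_sq_mid = morse_energy(*triangular(1.2)), morse_energy(*square(1.2))
```

The flip of the ordering between $A=1.0$ and $A=1.2$ is consistent with the transition with respect to the area discussed below.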
\medskip
Using a method we have developed in \cite{Betermin:2014fy,BetTheta15}, we find an interval of areas $A$ such that the triangular lattice $\Lambda_A$ (see Figure \ref{fig-lattice2d}) defined by
\begin{equation}
\Lambda_A:=\sqrt{\frac{2A}{\sqrt{3}}}\left[\mathbb Z\left(1,0 \right)\oplus \mathbb Z\left( \frac{1}{2},\frac{\sqrt{3}}{2} \right) \right]
\end{equation}
is the unique minimizer, up to rotation, of $E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$, when $\alpha$ is not too small. Furthermore, the non-optimality of the triangular lattice at low density (i.e. large area) can be shown as a direct consequence of \cite[Thm 1.5]{OptinonCM}, which is itself a consequence of Montgomery Theorem \cite[Thm 1]{Mont} and the functional equation for the lattice theta function \cite[Eq. (2)]{Mont} defined by
\begin{equation}\label{def-thetafunction}
\theta_L(\alpha):=\sum_{p\in L} e^{-\pi \alpha |p|^2}.
\end{equation}
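Montgomery's theorem \cite[Thm 1]{Mont} states that, at fixed density, the triangular lattice minimizes $\theta_L(\alpha)$ for every $\alpha>0$. This is easy to confirm by direct summation, since the Gaussian terms decay very rapidly. A minimal Python sketch (unit-density lattices; the cutoff $N=30$ and the sample values of $\alpha$ are illustrative choices):

```python
import math

def theta(u1, u2, alpha, N=30):
    """theta_L(alpha) = sum_{p in L} exp(-pi alpha |p|^2), origin included."""
    s = 0.0
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            x = m * u1[0] + n * u2[0]
            y = m * u1[1] + n * u2[1]
            s += math.exp(-math.pi * alpha * (x * x + y * y))
    return s

c = math.sqrt(2.0 / math.sqrt(3.0))            # unit-density triangular basis
tri = ((c, 0.0), (0.5 * c, 0.5 * math.sqrt(3.0) * c))
sq = ((1.0, 0.0), (0.0, 1.0))

# theta_triangular(alpha) < theta_square(alpha) for each tested alpha.
vals = {a: (theta(*tri, a), theta(*sq, a)) for a in (0.5, 1.0, 2.0)}
```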
\begin{figure}[!h]
\centering
\includegraphics[width=6cm]{2dlattices.png}
\caption{Triangular lattice $\Lambda_A$ and square lattice $\sqrt{A}\mathbb Z^2$. The area of their unit cells is $A$.} \label{fig-lattice2d}
\end{figure}
More precisely, we prove the following result.
\begin{thm}[Sufficient conditions for the optimality of $\Lambda_A$]\label{thm-main}
Let $\alpha>0$. If $A$ satisfies one of the following conditions:
\begin{enumerate}
\item[(C1)] $\frac{8\pi}{\alpha^2}\leq A<\frac{\pi r_0}{\alpha}$ and $e^{\alpha r_0} e^{-\frac{\alpha^2 A}{\pi}}-1 \geq e^{-\frac{\alpha^2 A}{4\pi}}$,
\item[(C2)] $A<\min \left\{ \frac{\pi r_0}{\alpha}, \frac{8\pi}{\alpha^2} \right\}$ and $\displaystyle e^{\alpha r_0} e^{-\frac{\alpha^2 A}{\pi}}-1\geq \frac{64\pi^2}{e^2 A^2\alpha^4}$,
\end{enumerate}
then the triangular lattice $\Lambda_A$ is the unique minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$, up to rotation.
\medskip
Furthermore, for any $\alpha,r_0\in (0,+\infty)$ there exists $A_1$ such that for any $A>A_1$, $\Lambda_A$ is not a minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$.
\end{thm}
\begin{remark}
If $\alpha=6$ and $r_0=1$, then $\min \left\{ \frac{\pi r_0}{\alpha}, \frac{8\pi}{\alpha^2} \right\}=\frac{\pi}{6}\approx 0.524$, $(C2)$ is satisfied and, solving numerically $e^{\alpha r_0} e^{-\frac{\alpha^2 A}{\pi}}-1\geq \frac{64\pi^2}{e^2 A^2\alpha^4}$, we find $0.0139\leq A\leq 0.5034$, which is therefore an interval of areas on which the triangular lattice is the minimizer of $E_{6,1}$ in $\mathcal{L}_2^\circ(A)$. Other numerical values are given in Table \ref{table-C1C2}. \\
In particular, we notice that this method does not apply for $\alpha<3.078$ (approximately), where neither of the conditions is satisfied. This is due to the lower bound \eqref{Eq:LBuA} used in the proof. Numerical investigations show that area bounds exist for smaller values of $\alpha$, but we have chosen conditions $(C1)$ and $(C2)$ because they are simple and tractable, and we are mostly interested in the case $\alpha=6$, $r_0=1$, which is comparable to the classical Lennard-Jones potential case \eqref{def-LJ} (see Figure \ref{fig-LJMorse}). Furthermore, in most cases (for instance in \cite{GirifalcoWeizer,LincolnetalMorse,SharmaKachhavaMorse,PamuketalMorse,HungetalMorse,Raff1990,AlimietalMorse,ParsonMorse,BarkerMorse}), the values of $\alpha$ (rescaled so that $r_0=1$) computed from experimental data are larger than $3.078$.
\end{remark}
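The numerical bounds quoted in the remark can be reproduced by a simple bisection on the equality case of the inequality in $(C2)$. A minimal Python sketch for $\alpha=6$, $r_0=1$ (the bracketing intervals are illustrative choices; note that both roots lie below $\pi/6\approx 0.524$, so the first part of $(C2)$ is automatic):

```python
import math

def margin(A, alpha=6.0, r0=1.0):
    """LHS minus RHS of the inequality in condition (C2)."""
    lhs = math.exp(alpha * r0 - alpha ** 2 * A / math.pi) - 1.0
    rhs = 64.0 * math.pi ** 2 / (math.e ** 2 * A ** 2 * alpha ** 4)
    return lhs - rhs

def bisect(f, a, b, tol=1e-12):
    """Plain bisection; assumes f changes sign on [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(m) * fa > 0.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

# margin < 0 as A -> 0+ (the right-hand side blows up), margin > 0 in
# between, and margin < 0 again for large A: two roots bracket the interval.
A_lo = bisect(margin, 1e-3, 0.1)
A_hi = bisect(margin, 0.3, 0.52)
```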
Since both conditions $(C1)$ and $(C2)$ are satisfied for large values of $\alpha$, we then derive the following consequence of Theorem \ref{thm-main} that gives an explicit interval of areas where the triangular lattice is minimal.
\begin{corollary}[Interval of areas for the optimality of the triangular lattice when $\alpha$ is large]\label{thm-triang}
Let $A_0=-\frac{4\pi}{\alpha^2}\log X_0$ where $X_0$ is the unique solution of $e^{\alpha r_0}X^4-X-1=0$ on $[e^{-\alpha r_0/4},e^{-2}]$.
If $\alpha> \frac{8+\log 2}{r_0}$ and $A$ satisfies
$$
\frac{8\pi}{\alpha^2 \sqrt{e^{\alpha r_0-8}-1}}\leq A\leq A_0<\frac{\pi r_0}{\alpha},
$$
then $\Lambda_A$ is the unique minimizer of $E_{\alpha,r_0}$, up to rotation, in $\mathcal{L}_2^\circ(A)$.\\
\end{corollary}
\begin{remark}
We notice that, by assumption, $A_0\geq \frac{8\pi}{\alpha^2}$, and therefore the optimality of the triangular lattice at very high density is not proved. We also remark that it is possible to reach any small value of $A$ by increasing $\alpha$ (or $r_0$) in such a way that $\Lambda_A$ is minimal in $\mathcal{L}_2^\circ(A)$ for $E_{\alpha,r_0}$.
\end{remark}
This result ensures the possibility of obtaining a triangular lattice as a ground state at a certain density (and in a neighborhood of it) using a Morse potential. However, it also shows the limits of our methods, as discussed in Section \ref{sec-limit}. In particular, we cannot get the optimality of the triangular lattice in the high-density limit $A\to 0$, as was the case for the Lennard-Jones type potentials in \cite{BetTheta15} and as we have numerically checked for $V_M$. A modification of the Morse potential is then proposed in Section \ref{sec-limit} in order to fill this gap. Furthermore, we also remark that the whole method developed in \cite{BetTheta15} cannot be applied to the Morse potential in order to show the global optimality, in $\mathcal{L}_2$, of a triangular lattice for $E_{\alpha,r_0}$. Indeed, it is straightforward to prove that any global minimizer of $E_{\alpha,r_0}$ on $\mathcal{L}_2$ has an area smaller than $r_0^2$, and it is not possible -- using our method -- to show that $\Lambda_A$ is the unique minimizer of $E_{\alpha,r_0}$ for every $A\in (0,r_0^2]$, whatever $\alpha$ and $r_0$ are.
\medskip
We have performed a numerical investigation of the minimizers of $E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$ as $A$ varies, as was done for the classical Lennard-Jones potential
\begin{equation}\label{def-LJ}
V_{LJ}(r)=\frac{1}{r^{12}}-\frac{2}{r^6}
\end{equation}
in \cite{Beterloc}. Contrary to the latter, the lack of homogeneity of $V_M$ makes the systematic analysis of the local extrema of $E_{\alpha,r_0}$ very difficult, and we cannot find explicit analytic bounds for the local optimality of $\Lambda_A$ and of the square lattice $\sqrt{A}\mathbb Z^2$ depicted in Figure \ref{fig-lattice2d}. We only propose, in Lemma \ref{lem:asympttriangular}, an asymptotic result for the local minimality of $\Lambda_A$ for small values of $A$. The details of our numerical study can be found in Section \ref{sec-numeric2d}, based in particular on the minimization among rectangular lattices $\sqrt{A} L_y$ (i.e. the unit cell of $L_y$ is a rectangle with sides of length $y^{\pm 1/2}$) and rhombic lattices $\sqrt{A}L_\theta$ (i.e. the unit cell of $L_\theta$ is a rhombus with smallest angle $\theta$), where $L_y,L_\theta\in \mathcal{L}_2^\circ(1)$ are respectively defined by
\begin{align}
&L_y:=\mathbb Z\left(\frac{1}{\sqrt{y}},0 \right)\oplus \mathbb Z \left(0,\sqrt{y} \right)\quad y\geq 1;\label{rectangular}\\
& L_\theta=\mathbb Z u_\theta \oplus \mathbb Z v_\theta, \quad |u_\theta|=|v_\theta|, \quad (\widehat{u_\theta,v_\theta})=\theta, \quad \frac{\pi}{3}\leq \theta\leq \frac{\pi}{2}. \label{rhombic}
\end{align}
The two most important observations are the following:
\begin{enumerate}
\item Global minimizer: for all $\alpha,r_0$, the global minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2$ seems to be a triangular lattice, as also seems to be the case for Lennard-Jones type potentials (see \cite{BetTheta15,Beterloc,OptinonCM}).
\item Confirmation of our conjecture: the minimizer's transition with respect to $A$ follows the same law as the one we have observed for the classical Lennard-Jones potential in \cite{Beterloc}. This supports a conjecture we have stated in \cite[Sec. 5.4]{Beterloc}, where the same phenomenon is expected for any difference of completely monotone functions having only one well. More precisely, as $A$ increases, the evolution of the minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$ is depicted in Table \ref{table-conj}, where the case $\alpha=6$, $r_0=1$ has been chosen and compared to the classical Lennard-Jones case (in such a way that $f''(1)=V_{LJ}''(1)$ and both have their absolute minimum at $r=1$). We observe a triangular-rhombic-square-rectangular transition for the minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$ as $A$ increases, which seems to remain true for other values of $(\alpha,r_0)$ (like $\alpha=r_0=3$ for instance). As $A\to +\infty$, the minimizer is rectangular and becomes thinner and thinner. More details are given in Section \ref{subsec-Conj}. This behaviour of the minimizer is also similar to the one appearing in the two-component Bose-Einstein condensates described by Ho and Mueller in \cite[Fig. 1 and 2]{Mueller:2002aa}. The same phenomenon is also naturally expected in other physical and biological models involving infinite lattices and competing interactions.
\end{enumerate}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|}
\hline
\textbf{Area $A$} & \textbf{Minimizer in $\mathcal{L}_2^\circ(A)$}\\
\hline
(M): $A< 1.1560011044$ & Triangular\\
(LJ): $A< 1.138$ & \\
\hline
(M): $1.1560011044<A<1.1560011045$ & Rhombic\\
(LJ): $1.138<A<1.143$ & \\
\hline
(M): $1.1560011045<A<1.291$ & Square\\
(LJ): $1.143<A<1.268$ & \\
\hline
(M): $A>1.291$ & Rectangular\\
(LJ): $A>1.268$ & (thinner and thinner as $A\to +\infty$)\\
\hline
\end{tabular}
\caption{Shape of the minimizer with respect to the area $A$ (numerical values) for the Morse potential with $\alpha=6$, $r_0=1$ and for the Lennard-Jones potential \eqref{def-LJ}, whose values are taken from \cite{Beterloc}.}
\label{table-conj}
\end{center}
\end{table}
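The qualitative transition in Table \ref{table-conj} can be probed by a crude scan over the rhombic and rectangular families \eqref{rectangular}-\eqref{rhombic}. A minimal Python sketch for $\alpha=6$, $r_0=1$ (the grids and sample areas are illustrative; this only checks the ordering of the families, not the precise transition values):

```python
import math

def energy(u1, u2, alpha=6.0, r0=1.0, N=15):
    """Morse energy per point, direct sum over |m|,|n| <= N, origin excluded."""
    E = 0.0
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m or n:
                d = math.hypot(m * u1[0] + n * u2[0], m * u1[1] + n * u2[1])
                E += math.exp(alpha * (r0 - 2.0 * d)) - 2.0 * math.exp(-alpha * d)
    return E

def rhombic(A, theta):      # theta = pi/3: triangular; theta = pi/2: square
    l = math.sqrt(A / math.sin(theta))
    return (l, 0.0), (l * math.cos(theta), l * math.sin(theta))

def rectangular(A, y):      # y = 1 would reproduce the square lattice
    return (math.sqrt(A / y), 0.0), (0.0, math.sqrt(A * y))

def best(A):
    """Lowest-energy candidate over coarse rhombic/rectangular grids."""
    cands = [(energy(*rhombic(A, math.pi / 3 + k * math.pi / 360)), 'rhombic',
              math.pi / 3 + k * math.pi / 360) for k in range(61)]
    cands += [(energy(*rectangular(A, 1.0 + 0.05 * k)), 'rectangular',
               1.0 + 0.05 * k) for k in range(1, 41)]
    return min(cands)

E1, fam1, par1 = best(1.0)   # expected: rhombic with theta = pi/3 (triangular)
E2, fam2, par2 = best(1.2)   # expected: rhombic with theta = pi/2 (square)
E3, fam3, par3 = best(1.4)   # expected: rectangular with y > 1
```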
In particular, we notice that the triangular and square lattices are the only ones that remain critical points of $E_f$ in $\mathcal{L}_2^\circ(A)$ for $A$ in some open intervals. We call these lattices `volume-stationary', and we have also observed this phenomenon in \cite{Beterloc} for the classical Lennard-Jones potential. The next result, based on Gruber's result \cite[Cor 2]{Gruber}, shows that this property has a certain universality.
\begin{thm}[Volume-stationary lattices in dimension $2$]\label{prop:stat2d}
Let $d=2$ and $f:(0,+\infty)\to \mathbb R$ be a nonzero function such that
\begin{enumerate}
\item as $r\to +\infty$, we have $f(r)=O(r^{-d/2-\eta})$ for some $\eta>0$,
\item for any $r>0$, it holds $\displaystyle f(r)=\int_0^{+\infty} e^{-rt} d\mu_f(t)$ for some Borel measure $\mu_f$ on $(0,+\infty)$.
\end{enumerate}
For any $L\in \mathcal{L}_d$, we define
$$
E_f[L]:=\sum_{p\in L\backslash \{0\}} f(|p|^2),
$$
and let $L_0\in \mathcal{L}_2^\circ(1)$. There exists an open interval $I$ such that $\nabla_{L_0} E_f[\sqrt{A} L_0]=0$ for all $A\in I$ if and only if $L_0\in \{\mathbb Z^2,\Lambda_1\}$, in which case one can take $I=(0,+\infty)$.
\end{thm}
\begin{remark}
The same result is stated in dimension $d=3$ in Theorem \ref{prop:stat3d}, and a more general result (see Theorem \ref{thm:eutactic}) will be proved for $d$-dimensional lattice energies in Section \ref{sec:stat}: all the layers of such lattices $L_0$ are strongly eutactic in the sense of Definition \ref{def:eutactic}. Note that the Morse potential $f$ satisfies the assumption of the theorem because $\mu_f$ is defined by \eqref{eq:laplacetransformMorse} (as well as any Lennard-Jones type potential), thus the result is actually true for $E_{\alpha,r_0}$ (and also for the Lennard-Jones energy studied in \cite{Beterloc}).
\end{remark}
In dimension 3, the problem is obviously more difficult. The space $\mathcal{L}_3^\circ(V)$ of Bravais lattices with fixed volume $V$ is five-dimensional, and only local optimality results have been proved for the usual interaction potentials \cite{Ennola,BeterminPetrache,Beterminlocal3d}, even for completely monotone potentials (see e.g. \cite{BetSoftTheta} for a review in the soft lattice theta function case). The only global minimality result was proved by Sarnak and Str\"ombergsson in \cite{SarStromb} for the height of the three-dimensional torus (i.e. the derivative of the Epstein zeta function at the origin), using a computer-assisted proof. The exact formulas for the partial derivatives of $E_{\alpha,r_0}$ are known (see e.g. \cite{Beterminlocal3d}), but their systematic analysis is again very difficult, due to the lack of homogeneity of $f$, which contains exponential terms. The four three-dimensional structures of unit density that are important for us in this context are the simple cubic lattice $\mathbb Z^3\in \mathcal{L}^\circ_3(1)$, the Face-Centred-Cubic (FCC) lattice $\mathsf{D}_3\in \mathcal{L}^\circ_3(1)$, the Body-Centred-Cubic (BCC) lattice $\mathsf{D}_3^*\in \mathcal{L}^\circ_3(1)$ and the Hexagonal-Close-Packing (HCP) structure $\mathop{\rm hcp}\nolimits\in \mathcal{P}_3\backslash \mathcal{L}_3$ depicted in Figure \ref{fig-latticethreedimensions} and defined by
\begin{align}
&\mathbb Z^3=\mathbb Z(1,0,0)\oplus \mathbb Z(0,1,0)\oplus \mathbb Z(0,0,1);\label{cubic}\\
&\mathsf{D}_3:=2^{-\frac{1}{3}}\left[\mathbb Z(1,0,1)\oplus \mathbb Z(0,1,1)\oplus \mathbb Z(1,1,0) \right];\label{FCC}\\
&\mathsf{D}_3^*:=2^{\frac{1}{3}}\left[\mathbb Z(1,0,0)\oplus \mathbb Z(0,1,0)\oplus \mathbb Z\left(\frac{1}{2},\frac{1}{2},\frac{1}{2} \right) \right]; \label{BCC}\\
&\mathop{\rm hcp}\nolimits:= L\cup \left( L+\left(\frac{1}{2},\frac{1}{\sqrt{12}},\sqrt{\frac{2}{3}} \right) \right),\quad L:=\mathbb Z(1,0,0)\oplus \mathbb Z\left(\frac{1}{2},\frac{\sqrt{3}}{2},0 \right)\oplus \mathbb Z\left( 0,0,\sqrt{\frac{8}{3}} \right).\label{HCP}
\end{align}
\begin{figure}[!h]
\centering
\includegraphics[width=3cm]{3dlatticeZ3.png} \quad \includegraphics[width=3cm]{3dlatticeFCC.png}\quad \includegraphics[width=3cm]{3dlatticeBCC.png}\quad \includegraphics[width=4cm]{hcp.png}
\caption{Three-dimensional periodic lattices. The cubic lattice $\mathbb Z^3$, the FCC lattice $\mathsf{D}_3$, the BCC lattice $\mathsf{D}_3^*$ and the HCP structure $\mathop{\rm hcp}\nolimits$.}\label{fig-latticethreedimensions}
\end{figure}
Using the formula proved in \cite{Beterminlocal3d}, we have numerically studied the local optimality of the cubic lattices $\mathbb Z^3$, $\mathsf{D}_3$ and $\mathsf{D}_3^*$ for $E_f$ in $\mathcal{L}_3^\circ(V)$, when $V$ varies. The results, presented in Table \ref{table-3dcubic}, are similar to those found for the classical Lennard-Jones potential in \cite{Beterminlocal3d}. In particular, we also observe that these cubic lattices are the only ones that remain critical points of $E_f$ in some open intervals of volumes $V$. It turns out that this phenomenon, like the one observed in dimension $2$ for $\mathbb Z^2$ and $\Lambda_1$ and proved in Theorem \ref{prop:stat2d}, is also universal, as shown in the next result, again based on Gruber's result \cite[Cor 2]{Gruber} and generalized in Section \ref{sec:stat}.
\begin{thm}[Volume-stationary lattices in dimension $3$]\label{prop:stat3d}
Let $f$ and $E_f$ be defined as in Theorem \ref{prop:stat2d} with $d=3$. A Bravais lattice $L_0\in \mathcal{L}_3^\circ(1)$ satisfies $\nabla_{L_0} E_f[V^{1/3} L_0]=0$ for all $V\in I$, where $I$ is an open interval, if and only if $L_0\in \{\mathbb Z^3,\mathsf{D}_3,\mathsf{D}_3^*\}$, in which case one can take $I=(0,+\infty)$.
\end{thm}
\medskip
According to \cite{ModifMorse}, it is important to notice that the global minimizer of the three-dimensional Morse lattice energy in $\mathcal{P}_3$ should be an HCP structure for large parameter $\alpha$, which does not belong to the class of Bravais lattices we are interested in. This also holds for the Lennard-Jones potential with large exponents (see e.g. \cite{ModifMorse,OptinonCM}). Even though $\mathcal{P}_3\ni \mathop{\rm hcp}\nolimits\not \in \mathcal{L}_3$, using formula \eqref{HCP} we have computed the numerical values of $E_{\alpha,r_0}[\lambda \mathop{\rm hcp}\nolimits]$.
\medskip
Surprisingly, we numerically found the following transition of minimizers. Defining
$$
H_\alpha:=\min_\lambda E_{\alpha,1}[\lambda \mathop{\rm hcp}\nolimits],\quad B_\alpha:=\min_\lambda E_{\alpha,1}[\lambda \mathsf{D}_3^*],\quad F_\alpha:=\min_\lambda E_{\alpha,1}[\lambda \mathsf{D}_3],$$
we obtain:
\begin{enumerate}
\item If $\alpha\leq \alpha_0$, where $\alpha_0\in (3.05,3.06)$, then $\displaystyle B_\alpha <F_\alpha <H_\alpha$;
\item if $\alpha_0<\alpha<\alpha_1$, where $\alpha_1\in (3.54,3.55)$, then $ \displaystyle F_\alpha <H_\alpha<B_\alpha $;
\item if $\alpha\geq \alpha_1$, then $\displaystyle H_\alpha <F_\alpha <B_\alpha$.
\end{enumerate}
The coexistence of multiple stable structures for the Morse, Modified Morse and Lennard-Jones potentials has been numerically studied in \cite[Fig. 5]{ModifMorse}. We then propose the following new conjecture for the Morse potential lattice energy, which will be heuristically justified (for the BCC/FCC transition) in Section \ref{sec-conj3d}, based on the Sarnak-Str\"ombergsson conjecture \cite[Eq. (43)]{SarStromb} for the lattice theta functions defined by \eqref{def-thetafunction}.
\begin{Conjecture}[Global minimizer of Morse potential lattice energy]\label{ConjMorse3d} Let $r_0>0$. Then there exist $\alpha_0<\alpha_1$ such that
\begin{enumerate}
\item If $\alpha<\alpha_0$, then the global minimizer of $E_{\alpha,r_0}$ in $\mathcal{P}_3$ is a BCC lattice.
\item If $\alpha>\alpha_1$, then the global minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_3$ (resp. in $\mathcal{P}_3$) is a FCC lattice (resp. a HCP structure).
\item If $\alpha_0<\alpha<\alpha_1$, then the unique minimizer of $E_{\alpha,r_0}$ in $\mathcal{P}_3$ is a FCC lattice.
\end{enumerate}
\end{Conjecture}
This conjecture is clearly more difficult to prove than the Sarnak-Str\"ombergsson conjectures for lattice theta functions and Epstein zeta functions described in \cite[Eq. (43)-(44)]{SarStromb}, in particular for the minimization problem on $\mathcal{P}_3$.
\medskip
We have compared this conjecture to the values of $\alpha$ found from experimental quantities for metals in \cite{GirifalcoWeizer,LincolnetalMorse,SharmaKachhavaMorse,PamuketalMorse,HungetalMorse}. We observe that the real ground states of metals match the expected ground state given by our conjecture, according to the experimental values of $\alpha$, in only a few cases for both FCC and BCC structures. Details are given in Section \ref{sec-experiment}, but it clearly appears that the pure central-force model with a two-body Morse potential is not sufficiently accurate to describe metals if we want their FCC or BCC structures to be not only local minimizers, but global minimizers. The same holds for rare-gas crystals when we compare the expected ground-state structure with the values of the parameters found in \cite{Raff1990,AlimietalMorse,ParsonMorse,BarkerMorse}.
\medskip
Another interesting fact is the similarity between the two- and three-dimensional cases, respectively in the square/cubic and triangular/FCC cases. This justifies the relevance of studying one (two-dimensional) layer instead of the whole crystal, even though the dimension-reduction techniques described in \cite{BeterminPetrache}, which follow from the multiplicative property of the exponential function, cannot be applied here. The same was observed in the Lennard-Jones case in \cite{Beterloc,Beterminlocal3d}, and we expect this property to hold for many other repulsive-attractive potentials. It is by itself a good motivation to study two-dimensional lattice energies.
\medskip
\textbf{Plan of the paper.} We start in Section \ref{sec-triangoptimal} by proving Theorem \ref{thm-main} and Corollary \ref{thm-triang}, then explaining the limits of our method based on Montgomery's result, and finally proving the local optimality of the triangular lattice at high density. We then prove a generalization of Theorems \ref{prop:stat2d} and \ref{prop:stat3d} in Section \ref{sec:stat}, about Bravais lattices that are critical points of energies of type $E_f$ in $\mathcal{L}_d^\circ(V)$ for all $V$ in an open interval. The numerical investigation of the minimizers of $E_{\alpha,r_0}$ is explained in Section \ref{sec-numeric2d}, where the local optimality of the triangular and square lattices is studied and our conjecture for competitive completely monotone functions is checked. In Section \ref{sec-numeric3d}, the three-dimensional minimization problem is numerically studied and heuristically justified. We also compare the experimental values of $\alpha$ with our conjecture, explaining why the Morse potential is not a good candidate for describing metals or rare-gas crystals in the central-force setting.
\section{Optimality of the triangular lattice in $\mathcal{L}_2^\circ(A)$: rigorous results}\label{sec-triangoptimal}
\subsection{Proof of Theorem \ref{thm-main}}
The goal is to study the minimization of $L\mapsto E_{\alpha,r_0}[L]$ in $\mathcal{L}^\circ_2(A)$ for fixed $A$. We want to use the method described in \cite{Betermin:2014fy,BetTheta15}, based on Montgomery's theorem \cite[Thm. 1]{Mont} about the optimality of the triangular lattice for the lattice theta function $\theta_L$ defined by \eqref{def-thetafunction}. For that, we need the inverse Laplace transform of $f(r)=e^{\alpha r_0} e^{-2\alpha \sqrt{r}}-2e^{-\alpha \sqrt{r}}$, which is given by
\begin{equation}\label{eq:laplacetransformMorse}
\mu_f(y):=\mathcal{L}^{-1}[f](y)=\frac{\alpha y^{-3/2}}{\sqrt{\pi}}\left( e^{\alpha r_0} e^{-\alpha^2/y}- e^{-\alpha^2/4y}\right).
\end{equation}
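The expression \eqref{eq:laplacetransformMorse} can be checked numerically by evaluating $\int_0^{+\infty} e^{-rt}\mu_f(t)\,dt$ by quadrature and comparing the result with $f(r)$; a sketch for $\alpha=6$, $r_0=1$ (the truncation bounds of the integral are ad hoc choices):

```python
import math

ALPHA, R0 = 6.0, 1.0

def f(r):
    # f(r) = e^{alpha r0} e^{-2 alpha sqrt r} - 2 e^{-alpha sqrt r}
    return math.exp(ALPHA * R0) * math.exp(-2 * ALPHA * math.sqrt(r)) \
           - 2 * math.exp(-ALPHA * math.sqrt(r))

def mu(t):
    # claimed inverse Laplace transform, eq. (eq:laplacetransformMorse)
    return (ALPHA / (math.sqrt(math.pi) * t ** 1.5)) \
           * (math.exp(ALPHA * R0) * math.exp(-ALPHA ** 2 / t)
              - math.exp(-ALPHA ** 2 / (4 * t)))

def laplace_mu(r, a=1e-4, b=120.0, n=60001):
    # composite Simpson rule for int_a^b e^{-rt} mu(t) dt; the integrand is
    # negligible outside [a, b] because of the factors e^{-alpha^2/(4t)} and e^{-rt}
    h = (b - a) / (n - 1)
    s = 0.0
    for i in range(n):
        t = a + i * h
        w = 1 if i in (0, n - 1) else (4 if i % 2 else 2)
        s += w * math.exp(-r * t) * mu(t)
    return s * h / 3

print(laplace_mu(1.0), f(1.0))   # both close to -e^{-6}
```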
Therefore, we now use \cite[Thm 1.1]{BetTheta15} in order to get two sufficient conditions for the optimality of $\Lambda_A$ in $\mathcal{L}_2^\circ(A)$ and, studying the sign of $\mu_f$ and using \cite[Thm 1.5]{OptinonCM}, we get the non-optimality of the triangular lattice in $\mathcal{L}_2^\circ(A)$ for large $A$.
\begin{proof}[Proof of Theorem \ref{thm-main}]
We recall that in \cite[Thm 1.1]{BetTheta15} we proved the following integral representation, which we directly apply to $f$, for any $L\in \mathcal{L}_2^\circ(A)$ (note the slight difference in the definition of $\theta_L$ compared to \cite{BetTheta15}, and the fact that the term $p=0$ has to be removed from the sum),
$$
E_{\alpha,r_0}[L]-f(0)=\frac{\pi}{A}\int_1^{+\infty}\left(\theta_L\left( \frac{y}{A} \right)-1 \right)g_A(y)dy + C_A, \quad g_A(y):=y^{-1}\mu_f\left( \frac{\pi}{yA} \right)+\mu_f\left( \frac{\pi y}{A} \right) ,
$$
where $C_A$ is a constant independent of $L$, $\mu_f$ is the inverse Laplace transform of $f$ and $\theta_L$ is defined by \eqref{def-thetafunction}. In our case, for any $A>0$ and any $y\geq 1$,
\begin{equation}\label{Eq:uA}
g_A(y)=\frac{\alpha A}{\pi^2 y^{3/2}}\left[ e^{\alpha r_0} e^{-\frac{\alpha^2 yA}{\pi}}y^2 - e^{-\frac{\alpha^2 y A}{4\pi}}y^2 + e^{\alpha r_0} e^{-\frac{\alpha^2 A}{\pi y}} -e^{-\frac{\alpha^2 A}{4\pi y}}\right]=:\frac{\alpha A}{\pi^2 y^{3/2}} u_A(y).
\end{equation}
Therefore, if $g_A(y)\geq 0$ for almost every $y\geq 1$, then $\Lambda_A$ is the unique minimizer of $ E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$. We now show that $(C1)$ and $(C2)$ are both sufficient conditions for the positivity of $g_A$ on $[1,+\infty)$. Let us write $C=e^{\alpha r_0}$ and $\beta=\frac{\alpha^2 A}{\pi}$. We therefore have, for any $y\geq 1$,
\begin{equation}\label{Eq:LBuA}
u_A(y)=Cy^2 e^{-\beta y}-y^2 e^{-\frac{\beta}{4}y} +Ce^{-\frac{\beta}{y}}-e^{-\frac{\beta}{4y}}\geq -y^2 e^{-\frac{\beta}{4}y}+Ce^{-\beta}-1=:h(y).
\end{equation}
We now assume that $Ce^{-\beta}-1>0$, i.e. $A<\frac{\pi r_0}{\alpha}$, otherwise $h$ is clearly negative. We remark that
$$
h(y)\geq 0 \iff g(y):=2\log y -\frac{\beta}{4}y-\log(Ce^{-\beta}-1)\leq 0.
$$
We compute $g'(y)=\frac{2}{y}-\frac{\beta}{4}$, and therefore $g$ is increasing on $[0,8/\beta]$ and decreasing on $[8/\beta,+\infty)$. We thus have two cases:
\begin{enumerate}
\item[(C1)] If $\beta\geq 8$, then $\max_{y\geq 1} g(y)=g(1)=-\frac{\beta}{4}-\log(Ce^{-\beta}-1)$. Therefore, $g(y)\leq 0$ for all $y\geq 1$ if and only if $-\frac{\beta}{4}-\log(Ce^{-\beta}-1)\leq 0$. We have thus found that $(C1)$ is a sufficient condition for $g_A$ to be positive on $[1,+\infty)$.
\item[(C2)] If $\beta<8$, then $\max_{y\geq 1} g(y)=g(8/\beta)=-2\log \beta -\log(Ce^{-\beta}-1)+2\log 8 -2$. Therefore, $g(y)\leq 0$ for all $y\geq 1$ if and only if $-2\log \beta -\log(Ce^{-\beta}-1)+2\log 8 -2\leq 0$. We have thus found that $(C2)$ is a second sufficient condition for $g_A$ to be positive on $[1,+\infty)$.
\end{enumerate}
For the second part of the theorem, it is straightforward to show that $\mu_f(y)\geq 0$ if and only if $y\geq \frac{3\alpha}{4 r_0}$. Therefore, $\mu_f<0$ in a neighbourhood of the origin and, by \cite[Thm 1.5.(1)]{OptinonCM}, $\Lambda_A$ cannot be a minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$ for sufficiently large $A$.
\end{proof}
\begin{remark}[Numerical values]\label{rem-numtri}
We can now compute the corresponding area bounds for the optimality of the triangular lattice. We fix $r_0=1$ and compute these bounds for $\alpha \in \{1,2,\ldots,10\}$ in Table \ref{table-C1C2}.
\begin{table}
\begin{center}
\begin{tabular}[H]{|c|c|c|}
\hline
\textbf{$\alpha$} & \textbf{Values of $A$ such that $\Lambda_A$ is minimal for $E_{\alpha,1}$} & \textbf{Condition(s) satisfied} \\
\hline
1 & $\emptyset$ & $\emptyset$\\
\hline
2 & $\emptyset$ & $\emptyset$\\
\hline
3 & $\emptyset$ & $\emptyset$\\
\hline
4 & $[0.1034,0.6782]$ & $(C2)$\\
\hline
5 & $[0.0351,0.5862]$ & $(C2)$\\
\hline
6 & $[0.0139,0.5034]$ & $(C2)$\\
\hline
7 & $[0.0060,0.4378]$ & $(C2)$\\
\hline
8 & $[0.0028,0.3862]$ & $(C2)$\\
\hline
9 & $[0.0013,0.3450]$ & $(C1)$,$(C2)$\\
\hline
10 & $[0.0007,0.3116]$ & $(C1)$,$(C2)$\\
\hline
\end{tabular}
\caption{For $r_0=1$, some values of $A$ such that $\Lambda_A$ is minimal according to Theorem \ref{thm-main}. Condition $(C2)$ actually begins to have a solution for (approximately) $\alpha \geq 3.078$.}\label{table-C1C2}
\end{center}
\end{table}
\end{remark}
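The scan behind Table \ref{table-C1C2} can be reproduced by testing, for each $A$, the two sufficient conditions obtained in the proof of Theorem \ref{thm-main}; a sketch (the grid resolution is an ad hoc choice):

```python
import math

def triangular_ok(A, alpha, r0=1.0):
    # sufficient conditions (C1)/(C2) from the proof of Theorem thm-main
    C = math.exp(alpha * r0)
    beta = alpha ** 2 * A / math.pi
    if C * math.exp(-beta) <= 1.0:        # we need C e^{-beta} - 1 > 0
        return False
    lg = math.log(C * math.exp(-beta) - 1.0)
    if beta >= 8:                          # (C1): max of g on [1, oo) is at y = 1
        return -beta / 4 - lg <= 0
    return -2 * math.log(beta) - lg + 2 * math.log(8) - 2 <= 0  # (C2): max at y = 8/beta

# integer values of alpha for which some A in (0, 1) works
ok_alphas = [a for a in range(1, 11)
             if any(triangular_ok(k / 1000, a) for k in range(1, 1000))]
print(ok_alphas)
```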
\subsection{Proof of Corollary \ref{thm-triang}}
We now give sufficient conditions for $(C1)$ and $(C2)$ to be satisfied and we then prove Corollary \ref{thm-triang}.
\begin{lemma}[Condition for which $(C1)$ is satisfied]\label{lem-C1}
Let $A_0=-\frac{4\pi}{\alpha^2}\log X_0$ where $X_0$ is the solution of $e^{\alpha r_0}X^4-X-1=0$ on $[e^{-\alpha r_0/4},e^{-2}]$. If $\alpha\geq \frac{8+\log(1+e^{-2})}{r_0}$ and $A$ is such that $\frac{8\pi}{\alpha^2}\leq A\leq A_0<\frac{\pi r_0}{\alpha}$, then $(C1)$ is satisfied.
\end{lemma}
\begin{proof}
Let $X:=e^{-\frac{\alpha^2 A}{4\pi}}\leq 1$. We notice that $\frac{8\pi}{\alpha^2}\leq A<\frac{\pi r_0}{\alpha}$ if and only if
\begin{equation}\label{condX}
e^{-\frac{\alpha r_0}{4}}<X\leq e^{-2}.
\end{equation}
We now define $P(X):=e^{\alpha r_0}X^4-X-1$ and we want to find $A$ such that \eqref{condX} is satisfied and $P(X)\geq 0$. Since $P'(X)=4e^{\alpha r_0}-1$, then $P$ is decreasing on $[0,4^{-1/3}e^{-\alpha r_0/3})$ and increasing on $(4^{-1/3}e^{-\alpha r_0/3},1]$. Since we assume $\frac{8\pi}{\alpha^2}<\frac{\pi r_0}{\alpha}$, therefore $\alpha>\frac{8}{r_0}$ and we get $\frac{e^{-\frac{\alpha r_0}{3}}}{4^{\frac{1}{3}}}\leq e^{-\frac{\alpha r_0}{4}}<e^{-2}$. It then follows that $P$ is then increasing on $\left[ e^{-\frac{\alpha r_0}{4}},e^{-2}\right]$. We compute $P\left(e^{-\frac{\alpha r_0}{4}} \right)=-e^{-\frac{\alpha r_0}{4}}<0$ and $P(e^{-2})=e^{\alpha r_0-8}-e^{-2}-1$ which is positive if and only if $\alpha\geq \frac{8+\log(1+e^{-2})}{r_0}$ and the proof is then completed.
\end{proof}
\begin{remark}
It is actually possible to write an exact (complicated) formula for $A_0$ involving $r_0$ and $\alpha$, but we do not need such expression for our purpose here.
\end{remark}
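Although we do not need the exact expression, $A_0$ is easy to evaluate numerically by bisection, since $P$ is increasing on the relevant interval; for $r_0=1$ this reproduces, up to rounding, the upper endpoints $0.3450$ and $0.3116$ of the rows $\alpha=9,10$ of Table \ref{table-C1C2}:

```python
import math

def A0(alpha, r0=1.0):
    # bisection for X0, the root of P(X) = e^{alpha r0} X^4 - X - 1 on
    # [e^{-alpha r0/4}, e^{-2}] (where P is increasing, see Lemma lem-C1),
    # then A0 = -(4 pi / alpha^2) log X0
    P = lambda X: math.exp(alpha * r0) * X ** 4 - X - 1.0
    lo, hi = math.exp(-alpha * r0 / 4), math.exp(-2)
    for _ in range(200):
        mid = (lo + hi) / 2
        if P(mid) < 0:
            lo = mid
        else:
            hi = mid
    return -(4 * math.pi / alpha ** 2) * math.log(lo)

print(round(A0(9.0), 4), round(A0(10.0), 4))
```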
\begin{lemma}[Sufficient conditions for which $(C2)$ is satisfied]\label{lem-C2}
If
$$
\alpha>\frac{8+\log 2}{r_0} \quad \textnormal{and}\quad \frac{8\pi}{\alpha^2 \sqrt{e^{\alpha r_0-8}-1}}\leq A< \frac{8\pi}{\alpha^2}<\frac{\pi r_0}{\alpha},
$$
then $(C2)$ is satisfied.
\end{lemma}
\begin{proof}
Let $F(A):= A^2(e^{\alpha r_0}e^{-\frac{\alpha^2 A}{\pi}}-1)-\frac{64\pi^2}{e^2 \alpha^4}$, then, since $A<\frac{8\pi}{\alpha^2}$,
\begin{align*}
F(A)&\geq A^2\left(e^{\alpha r_0}e^{-\frac{\alpha^2}{\pi}\frac{8\pi}{\alpha^2}}-1 \right)-\frac{64\pi^2}{e^2 \alpha^4}=A^2\left(e^{\alpha r_0-8} -1 \right)-\frac{64\pi^2}{e^2 \alpha^4}
\end{align*}
which is nonnegative if $A\geq \frac{8\pi}{\alpha^2 \sqrt{e^{\alpha r_0-8}-1}}$.
\end{proof}
Therefore, combining Lemmas \ref{lem-C1} and \ref{lem-C2}, Corollary \ref{thm-triang} is proved.
\begin{remark}\label{rmk:globaltriimp}
For any fixed $\alpha$, as $r_0\to +\infty$, $A_0\to \frac{\pi r_0}{\alpha}<r_0^2$. Therefore, it is impossible to prove that $\Lambda_A$ is the unique minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$ for all $A\in (0,r_0^2]$. We notice that it is straightforward (see e.g. \cite[Step 3 p. 3252]{BetTheta15}) to show that the global minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2$ must have an area smaller than $r_0^2$. Therefore, it is not possible to conclude, for any value of $r_0$, that the global minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2$ is a triangular lattice.
\end{remark}
\subsection{Limits of our method}\label{sec-limit}
As we already explained in \cite[Sec. 4.3]{BetTheta15}, our method based on Montgomery's theorem \cite[Thm 1]{Mont} is not optimal. Even though it was quite successful for Lennard-Jones-type potentials and some differences of Yukawa potentials in \cite[Thm 1.2]{BetTheta15}, it turns out that, contrary to these examples:
\begin{enumerate}
\item We cannot prove the minimality of the triangular lattice at high density for $E_{\alpha,r_0}$ (i.e. for arbitrarily small $A$).
\item We cannot conclude that the global minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2$ is a triangular lattice, even for a single value of $(\alpha,r_0)$.
\end{enumerate}
In this section, we show why the high density minimality cannot be reached with our method and we propose a modification of the Morse potential for getting this optimality by using this method.
\medskip
The next result shows that, for all $\alpha>0$ and $r_0>0$, we can find a large value $y_0\geq 1$ such that $g_A(y_0)$ is negative for small enough $A$. We recall that $g_A(y)=\frac{\alpha A}{\pi^2 y^{3/2}}u_A(y)$ (see \eqref{Eq:uA}).
\begin{lemma}[Negativity of $u_A$ for small $A$]
For any $\alpha>0$ and $r_0>0$, there exists $\lambda$ such that
$$
\lim_{A\to 0} u_A\left( \frac{\lambda}{A}\right)=-\infty.
$$
\end{lemma}
\begin{proof}
We easily compute that
$$
\lim_{A\to 0} u_A\left( \frac{\lambda}{A}\right)=\lim_{A\to 0} \frac{\lambda^2}{A^2}\left( e^{\alpha r_0} e^{-\frac{\alpha^2 \lambda}{\pi}}-e^{-\frac{\alpha^2 \lambda}{4\pi}}\right) +e^{\alpha r_0}-1.
$$
Furthermore, we have that
$$
e^{\alpha r_0} e^{-\frac{\alpha^2 \lambda}{\pi}}-e^{-\frac{\alpha^2 \lambda}{4\pi}}<0 \iff \alpha>\frac{4\pi r_0}{3\lambda}.
$$
Therefore, for any $\alpha$ and $r_0$, there exists $\lambda$ such that $\alpha>\frac{4\pi r_0}{3\lambda}$, and the result is proved.
\end{proof}
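This divergence is easy to observe numerically; a sketch with $\alpha=6$, $r_0=1$ and $\lambda=1$ (so that $\alpha>4\pi r_0/(3\lambda)$):

```python
import math

ALPHA, R0, LAM = 6.0, 1.0, 1.0   # LAM chosen so that ALPHA > 4*pi*R0/(3*LAM)

def u(A, y):
    # u_A(y) from eq. (Eq:uA), with C = e^{alpha r0} and beta = alpha^2 A / pi
    C, beta = math.exp(ALPHA * R0), ALPHA ** 2 * A / math.pi
    return C * y * y * math.exp(-beta * y) - y * y * math.exp(-beta * y / 4) \
         + C * math.exp(-beta / y) - math.exp(-beta / (4 * y))

vals = [u(A, LAM / A) for A in (0.1, 0.01, 0.001)]
print(vals)   # u_A(lambda/A) becomes more and more negative as A -> 0
```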
It is also possible to modify the Morse potential by adding an inverse power law to its expression in order to get the optimality of the triangular lattice at high density.
\begin{prop}[Modification of $f$ and optimality of $\Lambda_A$ for small $A$] Let $p>5/2$ and $\beta>0$. We define the following modification of the Morse potential (here with $r_0=1$):
$$
\tilde{f}(r^2)=f(r^2)+\frac{\beta}{r^{2p}}=e^{\alpha} e^{-2\alpha r}-2e^{-\alpha r}+\frac{\beta}{r^{2p}}.
$$
For any $\beta>0$, $\alpha>0$ and $p>5/2$, if $0<A\leq \min\left\{\frac{\pi}{\alpha}, \left( \frac{\beta \pi^{p+1}}{\alpha \Gamma(p)} \right)^{\frac{1}{p}}\right\}$, then $\Lambda_A$ is the unique minimizer in $\mathcal{L}_2^\circ(A)$, up to rotation, of
$$
L\mapsto E_{\tilde{f}}[L]=\sum_{q\in L\backslash \{0\}} \tilde{f}(|q|^2).
$$
\end{prop}
\begin{remark}
The potential $r\mapsto \tilde{f}(r^2)$ can also be seen as a small (exponential) perturbation of the opposite of a Buckingham-type potential $V_B(r)=2e^{-\alpha r} -\beta r^{-2p}$ (see \cite[Sec. 7.2]{BetTheta15}), originally proposed by Buckingham for modelling interactions in rare gases (see \cite[p. 276]{Buck}).
\end{remark}
\begin{proof}
It is again a straightforward application of \cite[Thm 1.1]{BetTheta15}, using the estimate found in the proof of Corollary \ref{thm-triang}. More precisely, we have, for any Bravais lattice $L$ of area $A$ (note again the slight difference in the definition of $\theta_L$ compared to \cite{BetTheta15}),
$$
E_{\tilde{f}}[L]=\frac{\pi}{A}\int_1^{+\infty}\left(\theta_L\left( \frac{y}{A} \right)-1 \right) \tilde{g}_A(y)dy + C_A, \quad \tilde{g}_A(y):=y^{-1}\mu_{\tilde{f}}\left( \frac{\pi}{yA} \right)+\mu_{\tilde{f}}\left( \frac{\pi y}{A} \right).
$$
It is easy to compute $\tilde{g}_A(y)=\frac{\alpha A}{\pi^2 y^{p-1/2}}\left(u_A(y) y^{p-5/2} +\frac{\beta \pi^{p+1}}{\alpha \Gamma(p)}A^{-p}+\frac{\beta \pi^{p+1}}{\alpha \Gamma(p)}A^{-p}y^{2p-2} \right)$ and to show that, assuming that $A<\frac{\pi}{\alpha}$,
\begin{align*}
\tilde{g}_A(y) &\geq \frac{\alpha A}{\pi^2 y^{p-1/2}}\left( -y^{p-1/2}+\left(e^{\alpha}e^{-\frac{\alpha^2 A}{\pi}}-1 \right)y^{p-5/2}+ \frac{\beta \pi^{p+1}}{\alpha \Gamma(p)}A^{-p}y^{2p-2} + \frac{\beta \pi^{p+1}}{\alpha \Gamma(p)}A^{-p}\right)\\
&=:\frac{\alpha A}{\pi^2 y^{p-1/2}}P_A(y).
\end{align*}
Using Cauchy's upper bound for the largest root of $P_A$ (see \cite[Sec. 2.4]{BetTheta15}), we find that $y\geq \left( \frac{\Gamma(p) \alpha A^p}{\beta \pi^{p+1}} \right)^{\frac{1}{p-3/2}}\Rightarrow P_A(y)\geq 0\Rightarrow \tilde{g}_A(y)\geq 0$. Since we have $\left( \frac{\Gamma(p) \alpha A^p}{\beta \pi^{p+1}} \right)^{\frac{1}{p-3/2}}\leq 1$ by assumption, $\tilde{g}_A(y)\geq 0$ if $y\geq 1$ and the proof is completed.
\end{proof}
\begin{example}
By choosing $p$ very large and $\beta$ very small, we get a reasonable approximation of $V_M$ (near and beyond its minimum) for which we can prove the optimality of the triangular lattice at high density. As an example, we have plotted $V_M$ for $r_0=1$ and $\alpha=6$, as well as $r\mapsto V_M(r)+r^{-12}=e^6\tilde{f}(r^2)$ with $p=100$ and $\beta=(10 e)^{-6}$, in Figure \ref{fig:modifmorse}. Applying the previous proposition, we get the optimality of $\Lambda_A$ for $E_{\tilde{f}}$ in $\mathcal{L}_2^\circ(A)$ for any $0<A<0.07056$, and Theorem \ref{thm-main} gives the optimality of the triangular lattice for $E_{6,1}$ when $0.0139\leq A\leq 0.5034$.
\end{example}
\begin{figure}[!h]
\centering
\includegraphics[width=7cm]{ModifMorse.png}
\caption{Plot of $V_M$ for $\alpha=6,r_0=1$ (dashed line) and $r\mapsto e^6\tilde{f}(r^2)$ with $\beta=(10e)^{-6}$, $p=100$.}
\label{fig:modifmorse}
\end{figure}
\begin{remark} Our method seems to be effective for proving the optimality of $\Lambda_A$ for arbitrarily small $A$ when the interaction potential diverges at $0$ and is equivalent there to a completely monotone function (like an inverse power law), as we already remarked in \cite{Betermin:2014fy,BetTheta15}. We also notice that $\tilde{f}$ is not a function with only one well; hence it is again impossible to apply our method developed in \cite{Betermin:2014fy,BetTheta15} to show that the global minimizer of $E_{\tilde{f}}$ is a triangular lattice (see Remark \ref{rmk:globaltriimp}).
\end{remark}
\subsection{Local minimality of $\Lambda_A$ for small $A$}
Using lattice symmetries, it is straightforward to show that, for any $\alpha,r_0,A\in (0,+\infty)$, $\Lambda_A$ and $\sqrt{A}\mathbb Z^2$ are critical points of $E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$ (see e.g. \cite[Prop 3.2 and 3.4]{Beterloc}). We also recall the following result showing that the Hessian of our energy at $\Lambda_A$ has a very simple diagonal form.
\begin{lemma}[\cite{Beterloc}]\label{derivtriangular}
The Hessian of $E_{\alpha,r_0}$ at $\Lambda_A$ is $D^2 E_{\alpha,r_0}[\Lambda_A]=T_{\alpha,r_0}(A) I_2$ where
\begin{align*}
T_{\alpha,r_0}(A):=\frac{4A}{\sqrt{3}}\sum_{m,n}n^2 f'\left(\frac{2A}{\sqrt{3}}[m^2+mn+n^2] \right)+\frac{4A^2}{3}\sum_{m,n}n^4f''\left(\frac{2A}{\sqrt{3}}[m^2+mn+n^2] \right).
\end{align*}
\end{lemma}
Therefore, substituting $f$ by its expression, writing $Q(m,n):=m^2+mn+n^2$ and $\ell_A:=\sqrt{\frac{2A}{\sqrt{3}}}$, we obtain
\begin{align*}
T_{\alpha,r_0}(A)=&A\alpha^2 \sum_{m,n} \frac{n^4 e^{-\alpha \ell_A\sqrt{Q(m,n)}}}{Q(m,n)}\left\{\frac{2}{\sqrt{3}}e^{\alpha r_0}e^{-\alpha \ell_A \sqrt{Q(m,n)}}-1 \right\}\\
&\quad +\frac{\sqrt{A}\alpha}{\sqrt{2}3^{1/4}}\sum_{m,n} \frac{n^2 e^{-\alpha \ell_A\sqrt{Q(m,n)}}}{\sqrt{Q(m,n)}}\left( 4-\frac{n^2}{Q(m,n)}\right)\left\{1-e^{\alpha r_0} e^{-\alpha \ell_A \sqrt{Q(m,n)}} \right\}.
\end{align*}
Thus, it is possible to show that $\Lambda_A$ is a local minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$ for sufficiently small values of $A$. This is a good indication of the optimality of the triangular lattice at high density, which cannot be obtained with our method from \cite{Betermin:2014fy,BetTheta15}, as recalled in the previous section.
\begin{lemma}[Local optimality of the triangular lattice at high density]\label{lem:asympttriangular}
There exists $A_0$ such that for any $0<A<A_0$, $\Lambda_A$ is a local minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$.
\end{lemma}
\begin{proof}
We first recall that $\Lambda_A$ is a critical point in $\mathcal{L}_2^\circ(A)$. Furthermore, we can write $T_{\alpha,r_0}(A)=\sqrt{A}\alpha\left(E_1(\alpha,A)+ \sqrt{A}\alpha E_2(\alpha,A) \right)$, thus the sign of $T_{\alpha,r_0}(A)$ as $A\to 0$ is given by
$$
E_1(\alpha,A):=\sum_{m,n} \frac{n^4 e^{-\alpha \ell_A\sqrt{Q(m,n)}}}{Q(m,n)}\left\{\frac{2}{\sqrt{3}}e^{\alpha r_0}e^{-\alpha \ell_A \sqrt{Q(m,n)}}-1 \right\}>0
$$
for $A$ small enough, because $\frac{2}{\sqrt{3}}e^{\alpha r_0}-1>0$ and $\frac{n^4 e^{-\alpha \ell_A\sqrt{Q(m,n)}}}{Q(m,n)}$ decays exponentially fast as $m^2+mn+n^2$ increases.
\end{proof}
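The sign of $T_{\alpha,r_0}(A)$ can also be evaluated numerically from Lemma \ref{derivtriangular}; a sketch for $\alpha=6$, $r_0=1$ (the truncation $N$ of the lattice sum is an ad hoc choice), consistent with the local stability of $\Lambda_A$ at small $A$ and its loss at large $A$:

```python
import math

ALPHA, R0 = 6.0, 1.0
C = math.exp(ALPHA * R0)

def fp(r):
    # f'(r), with f(r) = C e^{-2 alpha sqrt r} - 2 e^{-alpha sqrt r}
    s = math.sqrt(r)
    return (ALPHA / s) * (math.exp(-ALPHA * s) - C * math.exp(-2 * ALPHA * s))

def fpp(r):
    # f''(r), from d/dr e^{-a sqrt r} = -(a / (2 sqrt r)) e^{-a sqrt r}
    s = math.sqrt(r)
    e1, e2 = math.exp(-ALPHA * s), math.exp(-2 * ALPHA * s)
    return C * e2 * (ALPHA ** 2 / r + ALPHA / (2 * r ** 1.5)) \
         - e1 * (ALPHA ** 2 / (2 * r) + ALPHA / (2 * r ** 1.5))

def T(A, N=30):
    # T_{alpha,r0}(A) of Lemma derivtriangular, truncated to |m|, |n| <= N
    q = 2 * A / math.sqrt(3)
    s1 = s2 = 0.0
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            r = q * (m * m + m * n + n * n)
            s1 += n * n * fp(r)
            s2 += n ** 4 * fpp(r)
    return (4 * A / math.sqrt(3)) * s1 + (4 * A * A / 3) * s2

print(T(0.1) > 0, T(2.0) > 0)
```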
It turns out that a systematic analysis of the sign of $T_{\alpha,r_0}(A)$, as well as of all the Hessians in dimensions $2$ and $3$, with respect to the area $A$ (or the volume) is a difficult task. Therefore, we have performed in Sections \ref{sec-numeric2d} and \ref{sec-numeric3d} many numerical investigations showing the nature of the main Bravais lattices we are interested in.
\section{A general result about volume-stationary lattices - Proof of Theorems \ref{prop:stat2d} and \ref{prop:stat3d}}\label{sec:stat}
In this section, we show a general result about `volume-stationary lattices', i.e. lattices being critical points of $E_f$ defined by
$$
E_f[L]:=\sum_{p\in L\backslash \{0\}} f(|p|^2)
$$
on $\mathcal{L}_d^\circ(V)$ for all volumes $V$ in an open interval, for a large class of interaction potentials $f:(0,+\infty)\to \mathbb R$ that are integrable at infinity and are Laplace transforms of Borel measures on $(0,+\infty)$. It is important to notice that all the classical interaction potentials used in molecular simulations belong to this class of functions. Our result is based on Gruber's results \cite[Cor 1 and Cor 2]{Gruber}, where all the lattices that are critical points of the Epstein zeta function, defined by
$$
\zeta_L(s):=\sum_{p\in L \backslash \{0\}} \frac{1}{|p|^s}
$$
for all $s>d$, are characterized. It turns out that all the layers of such lattices are strongly eutactic in the sense of the following definition.
\begin{defi}[Strongly eutactic layer]\label{def:eutactic}
Let $L\in \mathcal{L}_d^\circ(1)$. We say that a layer $M=\{p\in L ; |p|=\lambda\}$ of $L$, for some $\lambda>0$, is strongly eutactic if $\sharp M=2k$ for some $k\in\mathbb N$ and, for any $x\in \mathbb R^d$,
$$
\sum_{p\in M} \frac{(p\cdot x)^2}{|p|^2}=\frac{2k}{d}|x|^2.
$$
\end{defi}
\begin{remark}
After a suitable renormalization, $M$ is also called a spherical $2$-design (see \cite{BachocVenkov}).
\end{remark}
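Definition \ref{def:eutactic} is easy to test numerically, since the identity is quadratic in $x$; a sketch (the random test vectors and the tolerance are ad hoc choices):

```python
import math
import random

def strongly_eutactic(layer, d=2, tol=1e-9):
    # Definition def:eutactic: sum_{p in M} (p.x)^2/|p|^2 = (#M/d)|x|^2 for all x.
    # The identity is quadratic in x, so checking a few random x is a fair sketch.
    random.seed(0)
    for _ in range(20):
        x = [random.uniform(-1, 1) for _ in range(d)]
        lhs = sum(sum(pi * xi for pi, xi in zip(p, x)) ** 2 / sum(pi * pi for pi in p)
                  for p in layer)
        rhs = len(layer) / d * sum(xi * xi for xi in x)
        if abs(lhs - rhs) > tol:
            return False
    return True

square = [(1, 0), (-1, 0), (0, 1), (0, -1)]           # first layer of Z^2
tri = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]
skew = [(1, 0), (-1, 0), (1, 1), (-1, -1)]            # unbalanced directions: fails
print(strongly_eutactic(square), strongly_eutactic(tri), strongly_eutactic(skew))
```

Note that each summand $(p\cdot x)^2/|p|^2$ only depends on the direction of $p$, so the triangular layer is tested here up to scaling.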
We then show the following result describing the only Bravais lattices in dimensions $2$ and $3$ that can stay stationary for $E_f$ under any small perturbation of the density. This result confirms our numerical observations performed in this paper as well as in \cite{Beterloc,Beterminlocal3d} for the classical Lennard-Jones potential.
\begin{thm}[Volume-stationary lattices for $E_f$]\label{thm:eutactic}
Let $d\geq 2$ and $f:(0,+\infty)\to \mathbb R$ be a nonzero function such that
\begin{enumerate}
\item as $r\to +\infty$, we have $f(r)=O(r^{-d/2-\eta})$ for some $\eta>0$,
\item for any $r>0$, it holds $\displaystyle f(r)=\int_0^{+\infty} e^{-rt} d\mu_f(t)$ for some Borel measure $\mu_f$ on $(0,+\infty)$.
\end{enumerate}
Let $L_0\in \mathcal{L}_d^\circ(1)$. There exists an open interval $I$ such that $\nabla_{L_0} E_f[V^{1/d} L_0]=0$ for all $V\in I$ if and only if all the layers of $L_0$ are strongly eutactic, in which case $I=(0,+\infty)$.
\medskip
In particular, $L_0\in \{\mathbb Z^2,\Lambda_1\}$ in dimension $2$ and $L_0\in \{\mathbb Z^3,\mathsf{D}_3,\mathsf{D}_3^*\}$ in dimension $3$.
\end{thm}
\begin{remark}
As we will see in the proof, it also means that $L_0$ is a critical point of $\theta_L(\alpha)$ for almost all $\alpha>0$, where the lattice theta function is defined by \eqref{def-thetafunction}. Furthermore, in higher dimensions, as explained in \cite[Cor 2]{Gruber}, there are only finitely many such lattices.
\end{remark}
\begin{proof}
Let $L_0\in \mathcal{L}_d^\circ(1)$. We assume that $\nabla_{L_0} E_f[V^{1/d}L_0]=0$ for any $V\in I$, where $I\subset \mathbb R$ is an open interval. Using Fubini's theorem and the definition of $f$, we easily show that, for any $L\in \mathcal{L}_d^\circ(1)$, where $\theta_L$ is defined by \eqref{def-thetafunction},
\begin{align*}
E_f[V^{1/d} L]&=\sum_{p\in L\backslash \{0\}} f\left(V^{2/d} |p|^2\right)=\int_0^{+\infty} \left[ \theta_L\left( \frac{V^{2/d}t}{\pi} \right)-1 \right]d\mu_f(t).
\end{align*}
We therefore get, by Lebesgue's dominated convergence Theorem,
$$
\nabla_{L_0} E_f[V^{1/d}L_0]=\int_0^{+\infty} \nabla_{L_0} \theta_{L_0}\left(\frac{V^{2/d}t}{\pi} \right)d\mu_f(t).
$$
By analyticity of $\alpha\mapsto \theta_L(\alpha)$ for any $L$, all the components of $V^{2/d}\mapsto \nabla_{L_0} E_f[V^{1/d}L_0]$ are also analytic functions on $(0,+\infty)$. Thus each component either has a discrete zero set or vanishes identically; since each component vanishes on the open interval $I$, it vanishes identically, so that $\nabla_{L_0} E_f[V^{1/d}L_0]=0$ for all $V>0$, proving that $I=(0,+\infty)$. It follows that $\nabla_{L_0} \theta_{L_0}\left(\frac{t}{\pi} \right)=0$ for almost every $t>0$, i.e. $L_0$ is a critical point of $L\mapsto \theta_L(\alpha)$ for almost every $\alpha>0$. Since $\zeta_L(s)=E_{f_s}[L]$ for $f_s(r)=\frac{1}{r^{s/2}}$, which belongs to the class of functions defined in the statement of our theorem, it follows that $L_0$ is a critical point of the Epstein zeta function $L\mapsto \zeta_L(s)$ for all $s>d$. By \cite[Cor 1]{Gruber}, the only such lattices have all their layers strongly eutactic, and the result is proved because all these lattices are critical points of $E_f$ for every fixed volume, as proved in \cite[Thm 4.4]{CoulSchurm2018}. Furthermore, the strongly eutactic lattices in dimensions $2$ and $3$ are recalled in \cite[Cor 2]{Gruber}: they are the square, triangular, simple cubic, FCC and BCC lattices.
\end{proof}
\section{Numerical investigations in dimension 2}\label{sec-numeric2d}
In this part, as in the next one treating the three-dimensional minimization problem, we want to understand how the minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$ changes as $A$ varies, as well as the nature of its global minimizer in $\mathcal{L}_2$. We also want to compare our numerical observations with those we obtained in \cite{Beterloc} for the classical Lennard-Jones potential $V_{LJ}(r)=\frac{1}{r^{12}}-\frac{2}{r^6}$. We therefore have to choose the values of $\alpha$ and $r_0$ appropriately. We have decided that the parameters should satisfy $V_{LJ}(1)=f(1)=\min_r f(r)=-1$ and $V_{LJ}''(1)=f''(1)$, i.e. $r_0=1$ and $\alpha=6$. Figure \ref{fig-LJMorse} shows the graphs of both potentials. They are obviously very similar in a small neighbourhood of $r=1$, but $f$ decays to $0$ faster as $r$ increases and equals $e^{6}-2\approx 401.4$ at $r=0$.\\
\begin{figure}[!h]
\centering
\includegraphics[width=7cm]{LJMorse.png}
\caption{Plot of $V_M$ for $\alpha=6,r_0=1$ (dashed line) and $V_{LJ}(r)=r^{-12}-2r^{-6}$.}
\label{fig-LJMorse}
\end{figure}
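The parameter matching above can be checked directly. The following sketch, assuming the standard Morse form $V_M(r)=e^{-2\alpha(r-r_0)}-2e^{-\alpha(r-r_0)}$, verifies that both potentials take the value $-1$ at $r=1$ and share the same curvature $V''(1)=2\alpha^2=72$ there:

```python
import math

ALPHA, R0 = 6.0, 1.0

def v_morse(r, alpha=ALPHA, r0=R0):
    """Standard Morse potential; minimum value -1 attained at r = r0."""
    return math.exp(-2 * alpha * (r - r0)) - 2 * math.exp(-alpha * (r - r0))

def v_lj(r):
    """Classical Lennard-Jones potential, normalized so that V(1) = -1."""
    return r ** -12 - 2 * r ** -6

def second_derivative(f, r, h=1e-5):
    """Central finite-difference approximation of f''(r)."""
    return (f(r + h) - 2 * f(r) + f(r - h)) / h ** 2

# Both potentials equal -1 at the common minimum r = 1 ...
print(v_morse(1.0), v_lj(1.0))  # -1.0 -1.0
# ... and share the same curvature there: 2*alpha^2 = 72 = 156 - 84.
print(second_derivative(v_morse, 1.0), second_derivative(v_lj, 1.0))
```

The two second derivatives agree with $2\alpha^2=72$ up to finite-difference error.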
As we will see in this section, we have also performed the same investigation for many values of $\alpha$ and $r_0$, in particular for $\alpha=3$ and $r_0=3$ (see Section \ref{subsec-Conj}). It turns out that the transition between the minimizers of $E_{\alpha,r_0}$ with respect to the area $A$ is clearer in the latter case.
\subsection{Local optimality of the triangular and square lattices}
The numerical study of the sign of the Hessian can easily be done for the triangular lattice $\Lambda_A$ and the square lattice $\sqrt{A}\mathbb Z^2$. We already know that these lattices are critical points of $E_{\alpha,r_0}$ for all $A>0$ and that, in the triangular case, the Hessian is a multiple of the identity (see Lemma \ref{derivtriangular}). For the square lattice, it turns out that, again for symmetry reasons (see \cite[Cor. 3.8]{Beterloc}), the Hessian is also diagonal, so we again only have to study the sign of its eigenvalues. Our results are summarized in Table \ref{table:localtrisquare}.
\medskip
\begin{table}
\begin{center}
\begin{tabular}[H]{|c|c|c|c|c|}
\hline
\textbf{Lattice} & \textbf{$\Lambda_A$ (M)} & \textbf{$\Lambda_A$ (LJ)} & \textbf{$\sqrt{A}\mathbb Z^2$ (M)} & \textbf{$\sqrt{A}\mathbb Z^2$ (LJ)}\\
\hline
\textbf{Local minimum} & $A<1.175$ & $A<1.152$ & $1.155<A<1.285$ & $1.143<A<1.268$\\
\hline
\textbf{Local maximum} & $A>1.175$ & $A>1.152$ & $\emptyset$ & $\emptyset$\\
\hline
\textbf{Saddle point} & $\emptyset$ & $\emptyset$ & $A\not\in (1.155,1.285)$ & $A\not \in (1.143,1.268)$\\
\hline
\end{tabular}
\caption{Local optimality of $\Lambda_A$ and $\sqrt{A}\mathbb Z^2$ for Morse potential (M) with $r_0=1$, $\alpha=6$, and Lennard-Jones (LJ) potential for which the values are taken from \cite{Beterloc}}\label{table:localtrisquare}
\end{center}
\end{table}
\subsection{Confirmation of our conjecture in dimension 2}\label{subsec-Conj}
As in \cite{Beterloc}, we have investigated the minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$ as the area $A$ varies. Our study is again based on the investigation of this minimizer among the rhombic and rectangular lattices defined by \eqref{rectangular}-\eqref{rhombic}. Our results are summarized in Table \ref{table-conj} and again compared with those found for the classical Lennard-Jones potential in \cite{Beterloc}. As in the Lennard-Jones case, we observe a transition triangular-rhombic-square-rectangular as $A$ increases. Presumably due to the exponential decay of the Morse potential, all the transitions appear earlier than in the Lennard-Jones case. Furthermore, we observe a transition from a triangular lattice to a rhombic lattice with an angle slightly larger than $60^\circ$ around $A\approx 1.1560011044$; the minimizer then becomes, extremely quickly and continuously, a square lattice as $A$ increases past $A\approx 1.1560011045$. This transition with a discontinuous jump is better observed in the case $\alpha=3$, $r_0=3$, where it appears at $A\approx 9.285$, the jump being from a triangular lattice to a rhombic lattice with angle $\theta\approx 72.19^\circ$ (see Figure \ref{fig-rhombic}), which continuously becomes a square lattice at $A\approx 9.4$. Moreover, in agreement with Theorem \ref{prop:stat2d}, we observe that the triangular and square lattices are the only ones that are minimizers on an open interval of areas, and that the minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_2^\circ(A)$ becomes an arbitrarily thin rectangular lattice as $A$ goes to $+\infty$ (see Figure \ref{fig-rect}). Thus, for the Morse potential, the global behaviour of the minimizer of $E_{\alpha,r_0}$ as $A$ varies seems to be qualitatively exactly the same as for the Lennard-Jones potential, confirming the conjecture of \cite[Sec. 5.4]{Beterloc} already recalled in the introduction of this paper.
\medskip
Furthermore, we observe that, for any $A\leq 1$, the triangular lattice seems to be the minimizer of $E_{6,1}$ in $\mathcal{L}_2^\circ(A)$. Since the minimum of $f$ is achieved at $r=r_0=1$, it is easy to prove (see e.g. \cite[Step 3 p. 3252]{BetTheta15}) that the area $A_0$ of the global minimizer of $E_{6,1}$ in $\mathcal{L}_2$ satisfies $A_0\leq 1$. Therefore, it seems numerically clear that this global minimizer is a triangular lattice, as we expect for the classical Lennard-Jones energy and as we have already proved for some Lennard-Jones-type potentials with small parameters in \cite[Thm 1.2]{BetTheta15}. This result was conjectured in \cite[Sec. 5.4]{Beterloc} to hold for any difference $f=g-h$ of completely monotone potentials such that $f$ has a single well.
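As a sanity check on these observations, one can evaluate the truncated Morse lattice sum directly. The sketch below assumes the energy per point is the plain pair sum $\sum_{p\in L\setminus\{0\}} V_M(|p|)$, a positive multiple of $E_{6,1}$ with the same minimizers; at $A=1$ it indeed ranks the triangular lattice below the square one:

```python
import math

def lattice_energy(b1, b2, alpha=6.0, r0=1.0, n=40):
    """Truncated Morse pair energy per point: sum over nonzero p = i*b1 + j*b2
    of exp(-2*alpha*(|p| - r0)) - 2*exp(-alpha*(|p| - r0))."""
    e = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            if (i, j) != (0, 0):
                r = math.hypot(i * b1[0] + j * b2[0], i * b1[1] + j * b2[1])
                e += math.exp(-2 * alpha * (r - r0)) - 2 * math.exp(-alpha * (r - r0))
    return e

def triangular(A):
    """Basis of the triangular lattice with unit-cell area A."""
    a = math.sqrt(2 * A / math.sqrt(3))
    return (a, 0.0), (a / 2, a * math.sqrt(3) / 2)

def square(A):
    """Basis of the square lattice with unit-cell area A."""
    s = math.sqrt(A)
    return (s, 0.0), (0.0, s)

e_tri = lattice_energy(*triangular(1.0))
e_sq = lattice_energy(*square(1.0))
print(e_tri, e_sq)  # the triangular lattice has the lower energy at A = 1
```

The exponential decay of the potential makes the truncation at $|i|,|j|\leq 40$ far more than sufficient here.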
\begin{figure}
\centering
\includegraphics[width=8cm]{FigureA9dot28Rhomb.png} \quad \includegraphics[width=8cm]{FigureA9dot285Rhomb.png}
\caption{For $\alpha=3$, $r_0=3$, plot of $\theta\mapsto E_{3,3}[L_\theta]$, where $L_\theta$ is a rhombic lattice with angle $\theta$ (see \eqref{rhombic}), on $[\pi/3,\pi/2]$. We observe a transition of the minimizer from a triangular lattice when $A=9.28$ (left) to a rhombic lattice with an angle $\theta \approx 72.19^\circ$ when $A=9.285$ (right).}
\label{fig-rhombic}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8cm]{FigA10Rect.png} \quad \includegraphics[width=8cm]{FigA50Rect.png}
\caption{The minimizer of $E_{6,1}$ becomes thinner and thinner as $A\to +\infty$. We have plotted $y\mapsto E_{6,1}[\sqrt{A}L_y]$, where $L_y$, $y\geq 1$, is a rectangular lattice (see \eqref{rectangular}), for $A=10$ (left) and $A=50$ (right). We have computed the minimum in many other cases, and the minimizer $y_A$ of $y\mapsto E_{6,1}[\sqrt{A}L_y]$ seems to satisfy $y_A\approx A$, showing that the minimizing rectangle becomes thinner and thinner as $A$ increases.}
\label{fig-rect}
\end{figure}
\begin{remark}[The large $\alpha$ case in any dimension]
We observe that the Morse potential $V_M$, for fixed $r_0=1$, converges to $-\delta(x-1)$ as $\alpha\to +\infty$ (see Figure \ref{fig:Morseplots}). An argument similar to the one we gave for Lennard-Jones-type potentials in \cite[Thm 1.13]{OptinonCM} shows the global optimality, for sufficiently large $\alpha$, of a lattice achieving the kissing number with balls of radius $1/2$. In particular, in dimension $2$ (resp. $3$), the global minimizer of $E_{\alpha,r_0}$ in $\mathcal{L}_d$ for large $\alpha$ is a triangular (resp. FCC) lattice.
\end{remark}
\section{Numerical investigations in dimension 3}\label{sec-numeric3d}
We now give the results of our investigation of the minimization of $E_{\alpha,r_0}$ in $\mathcal{L}_3$, together with some comparisons of the Morse energy of the BCC and FCC lattices and the HCP structure.
\subsection{Local optimality of the cubic lattices}
As in dimension 2, for symmetry reasons, it is straightforward to prove that all the cubic lattices are critical points of the Morse energy, i.e. for any $\alpha,r_0,V\in (0,+\infty)$, the lattices $V^{\frac{1}{3}}\mathbb Z^3$, $V^{\frac{1}{3}}\mathsf{D}_3$ and $V^{\frac{1}{3}}\mathsf{D}_3^*$ are critical points of $E_{\alpha,r_0}$ in $\mathcal{L}_3^\circ(V)$ (see \cite[Prop 3.2]{Beterminlocal3d}).
\medskip
We have first investigated the local optimality of these cubic lattices, as in \cite{Beterminlocal3d}, for $E_{\alpha,r_0}$ with respect to the volume $V$ of their unit cells, using the formulas proved in \cite[Sec. 4]{Beterminlocal3d}. Our results are summarized in Table \ref{table-3dcubic}. They are quite similar to what we obtained in the classical Lennard-Jones case, except for the BCC lattice $V^{\frac{1}{3}}\mathsf{D}_3^*$, which behaves differently from the FCC lattice. This is basically due to the lack of homogeneity of the Morse potential and also to the minimizer transition for the exponential sum $\theta_{\sqrt{L}}$ described in Section \ref{sec-conj3d}. We also notice that these cubic lattices are the only ones that are critical points of $E_{\alpha,r_0}$ in $\mathcal{L}_3^\circ(V)$ for $V$ in some open intervals of volumes (see Theorem \ref{prop:stat3d}).
\begin{table}
\begin{center}
\begin{tabular}[H]{|c|c|c|c|}
\hline
\textbf{Lattice} & \textbf{$V^{\frac{1}{3}}\mathsf{D}_3$} & \textbf{$V^{\frac{1}{3}}\mathsf{D}_3^*$} & \textbf{$V^{\frac{1}{3}}\mathbb Z^3$} \\
\hline
\textbf{Local min} & (M): $V<1.125$ & (M): $V<0.33$ & (M): $1.215<V<1.425$\\
& (LJ): $V<1.091$ & (LJ): $V< 1.091$& (LJ):$1.2<V<1.344$ \\
\hline
\textbf{Local max} &(M): $V>1.375$ & (M): $V>1.215$ & (M): $\emptyset$\\
& (LJ): $V>1.313$ & (LJ): $V>1.313$ &(LJ): $\emptyset$\\
\hline
\textbf{Saddle point} & (M): $1.125<V<1.375$ & (M): $0.33<V<1.215$ & (M): $V\not \in (1.215,1.425)$\\
& (LJ): $1.091<V<1.313$ & (LJ): $1.091<V<1.313$ & (LJ): $V\not\in (1.2,1.344)$ \\
\hline
\end{tabular}
\caption{Values of $V$ such that the cubic lattices $V^{\frac{1}{3}}\mathsf{D}_3, V^{\frac{1}{3}}\mathsf{D}_3^*$ and $V^{\frac{1}{3}}\mathbb Z^3$ are local optimizers for $E_{6,1}$ (notation: (M)) and $E_{V_{LJ}}$ (notation: (LJ)) for which the values are taken from \cite{Beterminlocal3d}.}\label{table-3dcubic}
\end{center}
\end{table}
\begin{remark}\label{rem-alpha3locmin}
We also notice that, for $\alpha=3$ and $r_0=1$, $V^{\frac{1}{3}}\mathsf{D}_3^*$ is a local minimum of $E_{3,1}$ in $\mathcal{L}_3^\circ(V)$ if and only if $V\in (0,V_0]$ where $V_0\approx 1.085$.
\end{remark}
\subsection{Comparison of possible global minimizers}
We first give a result explaining the behaviour of $E_{\alpha,r_0}$ along the dilations of a given lattice $L$.
\begin{lemma}\label{lem-dilationenergy}
For any fixed lattice $L\in \mathcal{L}_3$ and any $\alpha,r_0\in (0,+\infty)$, the function $f_L(\lambda):=E_{\alpha,r_0}[\lambda L]$ is decreasing on $(0,\lambda_0)$ and increasing on $(\lambda_0,+\infty)$ for some $\lambda_0$ depending on $\alpha,r_0,L$.
\end{lemma}
\begin{proof}
We have
$$
f_L'(\lambda)=2\alpha\left( \sum_{p\in L} |p| e^{-\alpha \lambda |p|} -e^{\alpha r_0}\sum_{p\in L} |p| e^{-2\alpha \lambda |p|}\right)
$$
and
$$
f_L'(\lambda)=0 \iff e^{\alpha r_0}=\frac{g_L(\lambda)}{g_L(2\lambda)},\quad g_L(\lambda):=\sum_{p\in L} |p| e^{-\alpha \lambda |p|}> 0.
$$
It is now clear, by comparison of exponential growth rates, that, for any fixed $\alpha,r_0>0$, the map $\lambda\mapsto \frac{g_L(\lambda)}{g_L(2\lambda)}$ is strictly increasing and takes all values in $(0,+\infty)$, the two bounds corresponding to the limits as $\lambda\to 0$ and $\lambda\to +\infty$. Therefore, there exists a unique $\lambda_0$ such that $f_L'(\lambda_0)=0$, and it is easy to see that $f_L'(\lambda)<0$ as $\lambda\to 0$ and $f_L'(\lambda)>0$ as $\lambda\to +\infty$. It follows that $f_L$ is decreasing on $(0,\lambda_0)$ and increasing on $(\lambda_0,+\infty)$.
\end{proof}
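A quick numerical illustration of the lemma, here for the square lattice $\mathbb Z^2$ with the energy taken as the plain Morse pair sum (a positive multiple of $E_{\alpha,r_0}$): sampling $\lambda\mapsto f_L(\lambda)$ on a grid, the increments change sign exactly once, from decreasing to increasing:

```python
import math

def f_scaled(lam, alpha=6.0, r0=1.0, n=20):
    """Truncated Morse pair energy per point of the dilated square lattice lam * Z^2."""
    e = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            if (i, j) != (0, 0):
                r = lam * math.hypot(i, j)
                e += math.exp(-2 * alpha * (r - r0)) - 2 * math.exp(-alpha * (r - r0))
    return e

lams = [0.5 + 0.01 * k for k in range(200)]              # lambda in [0.5, 2.49]
vals = [f_scaled(l) for l in lams]
incr = [vals[k + 1] > vals[k] for k in range(len(vals) - 1)]
# Count switches between "decreasing" and "increasing" along the grid:
switches = sum(incr[k] != incr[k + 1] for k in range(len(incr) - 1))
print(switches)  # 1: the energy decreases to a single minimum, then increases
```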
Therefore, by comparing the energies of $\lambda L$ over all $\lambda>0$ for different lattices $L$, we can numerically identify a good candidate for the global minimization of $E_{\alpha,r_0}$ among these structures. Fixing $r_0=1$, we have investigated the energy of $\lambda \mathsf{D}_3$, $\lambda \mathsf{D}_3^*$ and $\lambda \mathop{\rm hcp}\nolimits$ for $\lambda>0$ and different values of $\alpha$. We define $H_\alpha:=\min_\lambda E_{\alpha,1}[\lambda \mathop{\rm hcp}\nolimits]$, $B_\alpha:=\min_\lambda E_{\alpha,1}[\lambda \mathsf{D}_3^*]$ and $F_\alpha:=\min_\lambda E_{\alpha,1}[\lambda \mathsf{D}_3]$. Using Lemma \ref{lem-dilationenergy} to make sure that all minima are localized, we have observed the following:
\begin{enumerate}
\item For any $\alpha\in \{3+0.01k : k\in \{0,1,...,5\}\}$, we have $B_\alpha<F_\alpha<H_\alpha$;
\item For any $\alpha \in \{ 3+0.01k : k\in \{6,...,40\}\}$, we have $F_\alpha<H_\alpha<B_\alpha$;
\item For any $\alpha\in \{3.5,4,5,6,7,8,9,10\}$, we have $H_\alpha<F_\alpha<B_\alpha$.
\end{enumerate}
These results support our Conjecture \ref{ConjMorse3d} and the BCC/FCC phase transition is heuristically explained in the next section.
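These orderings can be probed with a crude direct computation. The sketch below covers the Bravais lattices only (so HCP is omitted) and takes the energy as the plain Morse pair sum, a positive multiple of $E_{\alpha,1}$; minimizing over a grid of dilations $\lambda$, it recovers $F_{10}<B_{10}$, i.e. FCC beats BCC at $\alpha=10$, in line with observation (3):

```python
import math

def distances(basis, n=6):
    """|p| over all nonzero integer combinations i*b1 + j*b2 + k*b3, |i|,|j|,|k| <= n."""
    out = []
    rng = range(-n, n + 1)
    for i in rng:
        for j in rng:
            for k in rng:
                if (i, j, k) != (0, 0, 0):
                    x = i * basis[0][0] + j * basis[1][0] + k * basis[2][0]
                    y = i * basis[0][1] + j * basis[1][1] + k * basis[2][1]
                    z = i * basis[0][2] + j * basis[1][2] + k * basis[2][2]
                    out.append(math.sqrt(x * x + y * y + z * z))
    return out

def energy(dists, lam, alpha=10.0, r0=1.0):
    """Truncated Morse pair energy per point of the dilated lattice lam * L."""
    return sum(math.exp(-2 * alpha * (lam * d - r0))
               - 2 * math.exp(-alpha * (lam * d - r0)) for d in dists)

a_fcc = 4.0 ** (1 / 3)  # conventional cube edge: volume per point a^3 / 4 = 1
fcc = [(0.0, a_fcc / 2, a_fcc / 2), (a_fcc / 2, 0.0, a_fcc / 2), (a_fcc / 2, a_fcc / 2, 0.0)]
a_bcc = 2.0 ** (1 / 3)  # volume per point a^3 / 2 = 1
bcc = [(a_bcc, 0.0, 0.0), (0.0, a_bcc, 0.0), (a_bcc / 2, a_bcc / 2, a_bcc / 2)]

d_fcc, d_bcc = distances(fcc), distances(bcc)
lams = [0.80 + 0.002 * k for k in range(201)]    # dilation scan on [0.80, 1.20]
F10 = min(energy(d_fcc, l) for l in lams)        # best FCC energy over dilations
B10 = min(energy(d_bcc, l) for l in lams)        # best BCC energy over dilations
print(F10 < B10)  # True: FCC beats BCC at alpha = 10, as in observation (3)
```

At $\alpha=10$ the margin between the two lattices is large, so the coarse truncation and dilation grid used here are sufficient.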
\begin{remark}[Local minimality of the probable global BCC minimizer]
It is also important to notice how close the minimal energy values are. For example, in the $\alpha=3$ case, we have $|B_3- F_3|<5\times 10^{-4}$. Furthermore, by Remark \ref{rem-alpha3locmin} and the fact that a global minimizer of any $E_{\alpha,1}$ must have a volume smaller than $1$, we know that $V_m^{1/3}\mathsf{D}_3^*$ -- where $V_m$ is the volume minimizing $V\mapsto E_{3,1}[V^{1/3}\mathsf{D}_3^*]$ -- is a local minimum of $E_{3,1}$ on $\mathcal{L}_3$, i.e. the expected global minimizer of $E_{3,1}$ is a local minimizer. The same can be shown in all the previously stated cases.
\end{remark}
\subsection{Heuristic arguments supporting Conjecture \ref{ConjMorse3d} based on duality relation}\label{sec-conj3d}
Let us define, for any $\alpha>0$ and any Bravais lattice $L\in \mathcal{L}_3$,
$$
F_\alpha[L]:=\sum_{p\in L} e^{- \alpha |p|}.
$$
Using the Laplace transform representation of $r\mapsto e^{-\alpha \sqrt{r}}$, the Jacobi Transformation Formula for the lattice theta function (see e.g. \cite[Prop. 1.12]{BeterminKnuepfer-preprint}) and the change of variable $t=\frac{u\alpha^2}{4}$, we obtain for any $L\in \mathcal{L}_3^\circ(1)$ and any $\alpha>0$, by Fubini's theorem,
\begin{align*}
F_\alpha[L] &=\sum_{p\in L} \int_0^{+\infty} e^{-t|p|^2}\frac{\alpha e^{-\frac{\alpha^2}{4t}}}{2\sqrt{\pi} t^{\frac{3}{2}}}dt=\frac{\alpha}{2 \sqrt{\pi}}\int_0^{+\infty} \theta_L\left( \frac{t}{\pi} \right) \frac{e^{-\frac{\alpha^2}{4t}}}{t^{\frac{3}{2}}}dt=\frac{\alpha \pi}{2}\int_0^{+\infty} \theta_{L^*}\left(\frac{\pi}{t} \right)\frac{e^{-\frac{\alpha^2}{4t}}}{t^{\frac{3}{2}}}dt\\
&=\frac{8\pi}{\alpha^3}\int_0^{+\infty} \theta_{L^*}\left(\frac{4\pi}{u\alpha^2} \right) e^{-\frac{1}{u}}u^{-3}du.
\end{align*}
Since $u\mapsto e^{ -\frac{1}{u}}u^{-3}$ decays very rapidly and vanishes at $u=0$ -- i.e. for any $\varepsilon>0$, $\{u\in \mathbb R_+: e^{ -\frac{1}{u}}u^{-3}>\varepsilon\}$ is contained in a connected compact set -- for large $\alpha$ the minimizer of $L\mapsto F_\alpha[L]$ in $\mathcal{L}_3^\circ(1)$ tends to be the minimizer of $L\mapsto \theta_{L^*}(4\pi/(u\alpha^2))$ in $\mathcal{L}_3^\circ(1)$, where $u$ lies in a compact set. The Sarnak-Str\"ombergsson conjecture \cite[Eq. (43)]{SarStromb} predicts that the minimizer of $L\mapsto \theta_L(\beta)$ on $\mathcal{L}_3^\circ(1)$ is $\mathsf{D}_3^*$ for $\beta<1$. Therefore, the minimizer of $L\mapsto F_\alpha[L]$ in $\mathcal{L}_3^\circ(1)$ for large values of $\alpha$ is expected to be $\mathsf{D}_3$, and, by duality, the minimizer for small values of $\alpha$ is expected to be $\mathsf{D}_3^*$. This duality relation was observed by Torquato and Stillinger in \cite[p. 4]{Torquatoduality}. We numerically find that $F_\alpha[\mathsf{D}_3^*]<F_\alpha[\mathsf{D}_3] \iff \alpha<3.86$ (see Figure \ref{fig-thetaroot}).
\begin{figure}
\centering
\includegraphics[width=8cm]{FigThetaroot1.png} \quad \includegraphics[width=8cm]{FigThetaroot2.png}
\caption{Graph of $\alpha \mapsto F_\alpha[\mathsf{D}_3]-F_\alpha[\mathsf{D}_3^*]$ on $[0.5,5]$ (left) and $[1.5,5]$ (right). We observe that we have equality if and only if $\alpha\approx 3.86$.}
\label{fig-thetaroot}
\end{figure}
In particular, the asymptotic minimality of the theta function as $\alpha\to 0$ and $\alpha\to +\infty$ is already known (see e.g. \cite{BeterminPetrache}). We then have the following rigorous result:
\begin{lemma}[Asymptotic minimizer of $F_\alpha$]
As $\alpha\to 0$ (resp. $\alpha\to +\infty$), $\mathsf{D}_3^*$ (resp. $\mathsf{D}_3$) is the unique asymptotic minimizer of $F_\alpha$ in $\mathcal{L}_3^\circ(1)$, i.e. for any $L\in \mathcal{L}_3^\circ(1)$, there exists $\alpha_L$ such that for any $0<\alpha<\alpha_L$, $F_\alpha[L]>F_\alpha[\mathsf{D}_3^*]$ (resp. there exists $\tilde{\alpha}_L$ such that for any $\alpha>\tilde{\alpha}_L$, $F_\alpha[L]>F_\alpha[\mathsf{D}_3]$).
\end{lemma}
Now, writing $E_{\alpha,r_0}[L]=e^{\alpha r_0}F_{2\alpha}[L]-2F_\alpha[L]$, it is expected that the BCC lattice (resp. FCC lattice) is the unique minimizer at high density if $\alpha$ is small enough (resp. large enough). Furthermore, as in the Lennard-Jones-type case, the shape of the minimizer at high density is a good candidate for the global minimization problem in $\mathcal{L}_3$. Explaining why the HCP structure is optimal in $\mathcal{P}_3$ for large $\alpha$ is much more difficult. It likely follows from the fact that $\theta_{\mathsf{D}_3}(\alpha)<\theta_{\mathop{\rm hcp}\nolimits}(\alpha)$ for any $\alpha>0$, as explained in \cite[Ex. 2.6]{BeterminPetrache}, and from the attractive-repulsive form of the potential combined with the integral representation of $F_\alpha[L]$ stated above. That is why we expect Conjecture \ref{ConjMorse3d} to be true.
\begin{remark}[Duality relation]
More precisely, since the Fourier transform of $\mathbb R^3\ni x\mapsto e^{-|x|}$ is $y\mapsto C(1+y^2)^{-2}$, the Poisson summation formula gives, for any three-dimensional Bravais lattice,
$$
E_{\alpha,r_0}[L]=C\alpha^3 \frac{1}{|L|}\sum_{q\in L^*} \left\{\frac{4 e^{\alpha r_0}}{(4\alpha^2 + |q|^2)^2}-\frac{1}{(\alpha^2 + |q|^2)^2} \right\},
$$
where $C$ is a constant. Therefore, minimizing $E_{\alpha,r_0}$ is equivalent to minimizing
$$
\widehat{E}_{\alpha,r_0}[L]:=\sum_{q\in L^*} \left\{\frac{4 e^{\alpha r_0}}{(4\alpha^2 + |q|^2)^2}-\frac{1}{(\alpha^2 + |q|^2)^2} \right\}.
$$
We then observe that, as $\alpha\to 0$, $\widehat{E}_{\alpha,r_0}[L]\sim \sum_{q\in L^*} \frac{3}{|q|^4}$, which is expected -- by the Sarnak-Str\"ombergsson conjecture for the Epstein zeta function \cite[Eq. (44)]{SarStromb} -- to be minimized in $\mathcal{L}_3^\circ(V)$, for any $V$, by $L^*=\mathsf{D}_3$, i.e. $L=\mathsf{D}_3^*$. This fact also supports the first part of Conjecture \ref{ConjMorse3d}.
\end{remark}
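The stated $\alpha\to 0$ behaviour of the dual-side summand is easy to confirm numerically: for fixed $q\neq 0$, the term $\frac{4 e^{\alpha r_0}}{(4\alpha^2+|q|^2)^2}-\frac{1}{(\alpha^2+|q|^2)^2}$ tends to $\frac{3}{|q|^4}$. A minimal check, where $q2$ denotes $|q|^2$:

```python
import math

def summand(q2, alpha, r0=1.0):
    """One term of E-hat_{alpha,r0}, written as a function of q2 = |q|^2."""
    return 4 * math.exp(alpha * r0) / (4 * alpha ** 2 + q2) ** 2 \
        - 1 / (alpha ** 2 + q2) ** 2

q2 = 4.0  # a sample squared dual-vector length
for alpha in (1e-1, 1e-2, 1e-3):
    print(alpha, summand(q2, alpha), 3 / q2 ** 2)  # converges to 3 / |q|^4
```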
\subsection{Comparison of our conjecture with some experimental values of $\alpha$ and $r_0$}\label{sec-experiment}
We now want to compare our Conjecture \ref{ConjMorse3d} with the values of $\alpha$ obtained empirically for metals in \cite{GirifalcoWeizer,LincolnetalMorse,SharmaKachhavaMorse,PamuketalMorse,HungetalMorse} and for rare-gas crystals in \cite{Raff1990,AlimietalMorse,ParsonMorse,BarkerMorse}. In order to compare these values, we need a result that we have already proved in \cite[Thm 1.11]{OptinonCM} in the case of Lennard-Jones-type potentials. We recall that the shape of a lattice is its equivalence class modulo dilation in the fundamental domain of $\mathcal{L}_d^\circ(1)$, in which only one copy of each lattice exists (see \cite[Sec. 1.1]{OptinonCM}).
\begin{lemma}\label{lem:scalingr0}
For any $\alpha>0$ and $r_0>0$, it holds
$$
\displaystyle \mathop{\rm argmin}\nolimits_L E_{\alpha,r_0}[L]=\frac{\mathop{\rm argmin}\nolimits_L E_{\alpha r_0,1}[L]}{r_0}.
$$
In particular, the shape of the global minimizer is the same for both energies, i.e. it is independent of $r_0$.
\end{lemma}
\begin{proof}
It is a straightforward consequence of the following equality
$$
E_{\alpha,r_0}[L]=e^{\alpha r_0}\sum_{p\in L}e^{-2\alpha r_0 |p/r_0|}-2\sum_{p\in L} e^{-\alpha r_0 |p/r_0|}=E_{\alpha r_0,1}[L/r_0].
$$
\end{proof}
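The identity in the proof can be sanity-checked numerically on a truncated square-lattice sum, using the convention $E_{\alpha,r_0}[L]=e^{\alpha r_0}\sum_{p} e^{-2\alpha|p|}-2\sum_{p} e^{-\alpha|p|}$ over nonzero lattice vectors:

```python
import math

def E(alpha, r0, scale, n=25):
    """Truncated E_{alpha,r0}[scale * Z^2] in the convention of the proof."""
    s1 = s2 = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            if (i, j) != (0, 0):
                r = scale * math.hypot(i, j)
                s1 += math.exp(-2 * alpha * r)
                s2 += math.exp(-alpha * r)
    return math.exp(alpha * r0) * s1 - 2 * s2

alpha, r0, scale = 4.0, 1.5, 1.3
lhs = E(alpha, r0, scale)              # E_{alpha, r0}[L]
rhs = E(alpha * r0, 1.0, scale / r0)   # E_{alpha r0, 1}[L / r0]
print(abs(lhs - rhs))  # ~ 0, up to floating-point rounding
```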
In \cite{GirifalcoWeizer,LincolnetalMorse,SharmaKachhavaMorse,PamuketalMorse,HungetalMorse}, different values of $\alpha,r_0$ have been computed for metals from different experimental data. The structure of each metal is taken to be its (known) ground state. More precisely, let us review the results obtained in these papers:
\begin{enumerate}
\item The energy of vaporization, the lattice constant and the compressibility at zero temperature were used to compute the parameters in \cite{GirifalcoWeizer}. The equation of state and the elastic constants computed with these parameters agreed reasonably with experiment for FCC and BCC metals, and all local stability conditions were satisfied. However, the agreement is more accurate for the FCC metals.
\item In \cite{LincolnetalMorse}, the lattice parameter, bulk modulus and cohesive energy were used to compute the parameters, cutting off the range of the potential after 176 neighbours (for the FCC structure) and 168 neighbours (for the BCC structure). These parameters were used to compute the pressure derivatives of the second-order elastic constants, which match the experimental values fairly well; this is not the case for the third-order elastic constants.
\item In \cite{PamuketalMorse}, crystalline-state physical properties at arbitrary temperature were used to compute the Morse parameters, and the second-order elastic constants were shown to match the experimental values. The parameter values agree reasonably with those computed in \cite{GirifalcoWeizer} for the FCC structures, but less so for the BCC ones such as K and Na. The authors then remarked that ``even though for metals the additive form of the total potential in terms of pair interactions is not a very good approximation, for the sake of simplicity, pair potentials are widely used in calculating various properties of metallic systems" (e.g. for Monte-Carlo-type calculations).
\item The correlation between molecular properties and the crystal state was investigated in \cite{SharmaKachhavaMorse} to compute the parameters, assuming that metals behave qualitatively identically in the two states. In particular, this hypothesis implies the invariance, between the two states, of the fundamental potential parameter $\alpha$, where
\begin{equation}\label{def-alphaexp}
\alpha=2\pi w_e\sqrt{\frac{\mu}{2D_e}},
\end{equation}
where $w_e$ is the classical frequency of small vibrations of a diatomic molecule, $\mu$ is the reduced mass of the molecule and $D_e$ its dissociation energy. They then computed the values of the cohesive energy, thermal expansion, Gr\"uneisen parameter and elastic constants. Satisfactory agreement is obtained for the elastic constants of Cu and Pb at zero temperature, but not for the thermal expansion and the Gr\"uneisen parameter of the same elements, nor for the cohesive energy. However, computing $\alpha$ from the cohesive energy instead, the authors found a good match with the other experimental quantities.
\item In \cite{HungetalMorse}, the parameters were computed for BCC crystals using the volume per atom and the atomic number of each elementary cell, as well as the energy of sublimation, the compressibility and the lattice constant. These parameter values show good agreement with the anharmonic interatomic effective potential and the local force constant in X-ray absorption fine structure for Fe, W and Mo.
\end{enumerate}
A first observation is that, in all the cited works, the parameter values are fitted to local quantities (e.g. local deformations of the solid in its ground state) that are measured experimentally on the real, already given, ground-state structure. In this paper, we are interested in the values of the parameters $\alpha,r_0$ (by Lemma \ref{lem:scalingr0}, it suffices to consider pairs with $r_0=1$) such that the FCC lattice, the BCC lattice or the HCP structure is the ground state of the energy per point. Since none of the metals considered has an HCP structure, we should necessarily have $\alpha r_0<3.5$. Among the parameter values computed in the previously cited papers, this only happens for:
\begin{enumerate}
\item K, Na, Cs and Rb in \cite{GirifalcoWeizer}, but the real crystal structure does not match the expected ground state.
\item K and Na in \cite{LincolnetalMorse}, whose BCC structures match the real ground state, the value of $\alpha r_0$ being $3.05643$ for K (we have numerically checked the optimality of the BCC lattice in this case) and $2.95443$ for Na.
\item Li, Na and K in \cite{PamuketalMorse}, but the expected ground-state structure only matches the real one for Li (BCC), for which $\alpha r_0=2.9751$.
\item Li and Cu in \cite{SharmaKachhavaMorse}, but the expected ground-state structure only matches the real one for Cu (FCC), for which $\alpha r_0=3.08757$.
\end{enumerate}
Furthermore, it never happens for the values given in \cite{HungetalMorse}. Therefore, we can conclude that the central-force model for metals agrees with the minimizing lattice of $E_{\alpha r_0,1}$ only for a few BCC structures (Na, K, Li) and for only one FCC element (Cu). It is interesting to remark that the metal atoms for which the structures match are not the heaviest ones, as one could have expected; thus the approximation of atomic interactions in metals by a sum of Morse two-body potentials seems unsatisfactory when the parameters $\alpha,r_0$ are chosen according to the real physical properties of the solid.
\medskip
Furthermore, the same comparison can be made with the rare-gas crystal models with Morse potential proposed in \cite{Raff1990,AlimietalMorse,ParsonMorse,BarkerMorse}. For all these parameter values we have $\alpha r_0>3.5$, so the FCC structure, which is expected to be the ground state of all rare-gas crystals, does not match the HCP structure theoretically predicted by our conjecture. This central-force model with Morse pairwise interaction is thus again not appropriate for describing the real structure of rare-gas crystals as a ground state.
\medskip
\noindent \textbf{Acknowledgement:} I acknowledge support
from VILLUM FONDEN via the QMATH Centre of Excellence (grant No. 10059). I am also grateful to Xavier Blanc and Jan Philip Solovej for their feedback on the first version of this paper.
{\small
\bibliographystyle{plain}
| {
"timestamp": "2019-02-05T02:19:24",
"yymm": "1901",
"arxiv_id": "1901.08957",
"language": "en",
"url": "https://arxiv.org/abs/1901.08957",
"abstract": "We investigate the local and global optimality of the triangular, square, simple cubic, face-centred-cubic (FCC), body-centred-cubic (BCC) lattices and the hexagonal-close-packing (HCP) structure for a potential energy per point generated by a Morse potential with parameters $(\\alpha,r_0)$. The optimality of the triangular lattice is proved in dimension 2 in an interval of densities. Furthermore, a complete numerical investigation is performed for the minimization of the energy with respect to the density. In dimension 3, the local optimality of the simple cubic, FCC and BCC lattices is numerically studied. We also show that the square, triangular, cubic, FCC and BCC lattices are the only Bravais lattices in dimensions 2 and 3 being critical points of a large class of lattice energies (including the one studied in this paper) in some open intervals of densities, as we observe for the Lennard-Jones and the Morse potential lattice energies. Finally, we state a conjecture about the global minimizer for the 3d energy that exhibits a surprising transition from BCC, FCC to HCP as $\\alpha$ increases. Moreover, we compare the values of $\\alpha$ found experimentally for metals and rare-gas crystals with the expected lattice ground-state structure given by our numerical investigation/conjecture. Only in a few cases does the known ground-state crystal structure match the minimizer we find for the expected value of $\\alpha$. Our conclusion is that the pairwise interaction model with Morse potential and fixed $\\alpha$ is not adapted to describe metals and rare-gas crystals if we want to take into consideration that the lattice structure we find in nature is the ground-state of the associated potential energy.",
"subjects": "Mathematical Physics (math-ph); Optimization and Control (math.OC)",
"title": "Minimizing lattice structures for Morse potential energy in two and three dimensions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9817357205793903,
"lm_q2_score": 0.7217432182679957,
"lm_q1q2_score": 0.7085610984596189
} |
https://arxiv.org/abs/2210.03124 | Learning Transfer Operators by Kernel Density Estimation | Inference of transfer operators from data is often formulated as a classical problem that hinges on the Ulam method. The conventional description, known as the Ulam-Galerkin method, involves projecting onto basis functions represented as characteristic functions supported over a fine grid of rectangles. From this perspective, the Ulam-Galerkin approach can be interpreted as density estimation using the histogram method. In this study, we recast the problem within the framework of statistical density estimation. This alternative perspective allows for an explicit and rigorous analysis of bias and variance, thereby facilitating a discussion on the mean square error. Through comprehensive examples utilizing the logistic map and a Markov map, we demonstrate the validity and effectiveness of this approach in estimating the eigenvectors of the Frobenius-Perron operator. We compare the performance of Histogram Density Estimation(HDE) and Kernel Density Estimation(KDE) methods and find that KDE generally outperforms HDE in terms of accuracy. However, it is important to note that KDE exhibits limitations around boundary points and jumps. Based on our research findings, we suggest the possibility of incorporating other density estimation methods into this field and propose future investigations into the application of KDE-based estimation for high-dimensional maps. These findings provide valuable insights for researchers and practitioners working on estimating the Frobenius-Perron operator and highlight the potential of density estimation techniques in this area of study.Keywords: Transfer Operators; Frobenius-Perron operator; probability density estimation; Ulam-Galerkin method; Kernel Density Estimation; Histogram Density Estimation. |
\section{Introduction}
Transfer operators play a vital role in the global analysis of dynamical systems. Abundant data from dynamical systems have made these operators popular in data-driven analysis of complex systems. Hence, numerically estimating transfer operators from data is key to successful global analysis. The Frobenius-Perron operator is one such popular operator used for the global analysis of dynamical systems, and the Ulam method \cite{ulam1960collection} is the most popular method to estimate it. However, as we point out here, there are tremendous opportunities to recast this problem as one of density estimation, as it would be stated in the statistics literature, specifically noting that there is a well-matured analysis of variance and bias that allows for a discussion of mean squared error. This language has been overlooked in the dynamical systems community.
Therefore, we introduce here the probability density estimation viewpoint on estimating the Frobenius-Perron operator, and with it the rich analysis and methods already developed in other mathematical communities, notably kernel density estimation, which, as it turns out, is provably more efficient than the histogram method underlying the standard Ulam method.
A Frobenius-Perron operator evolves the density of ensembles of initial conditions of a dynamical system forward in time. This statement can also be re-interpreted in a Bayesian framework. In these terms, we have essentially a problem of density estimation, for the conditional probability density that is generally described as the Frobenius-Perron operator. The classical Ulam method is essentially a histogram method for estimation of this conditional density function by simple nonparametric means. Many, including one of the authors of this work, have described the approach as a projection onto basis functions as characteristic functions, and in these terms, we described it as Ulam-Galerkin's method \cite{bollt2013applied}, which covers many of the analyses of convergence since the original conjecture \cite{ CHIU1992291, Boyarsky1997LawsOC, froyland2001,GUDER1997525}.
However, it is generally understood that histograms, while easy to describe, are a primitive variant amongst the approaches available for nonparametric density estimation. Tukey observed \cite{tukey1961curves,tukey1981graphical} that the traditional histogram appears blocky, and that it is difficult to balance smoothing, bandwidth, bias, and variance. Even in two dimensions, ``blocky'' sampling variability, and details such as simply choosing an appropriate orientation of the grid, become problematic. There are, however, more suitable methods reviewed here, especially kernel density estimation, which has many nice smoothing, analytic, and convergence properties. Additionally, density estimation is shifted from a question of density in space to an expectation over points. Note also that k-nearest neighbor (kNN) methods share many of these advantages, but kernel methods allow for better tuning of smoothing parameters and good convergence statistics.
It is argued in \cite{dehnad_1987} that the case for kernel density estimation over a simple histogram method becomes stronger in more than one dimension, due to difficulties not only in histogram box size (bandwidth) but now also in orientation and origin location, which generally lead to a blocky appearance that makes the joint and conditional probabilities harder to interpret. Tukey asserted \cite{tukey1981graphical}, ``...it is difficult to do well with bins in as few as two dimensions. Clearly, bins are for the birds!''
In this article, we show how to use probability density estimation methods to approximate the Frobenius-Perron operator, and argue that a KDE-based method gives a better approximation of the transfer operator. This analysis will demonstrate improvements in the accuracy of KDE-based estimation compared to the currently popular Ulam-Galerkin method. Understanding the Frobenius-Perron operator as integration against a conditional distribution kernel is key to this demonstration. We show that connection in Sec.~(\ref{Sec:BayInter}), then discuss density estimation theory to demonstrate the theoretical advantages of KDE over the histogram-based method. Finally, we numerically demonstrate the better accuracy of the KDE-based method using the chaotic logistic map example, which we show matches well the rigorous analysis reviewed here for variance and bias.
\section{Frobenius-Perron and the Classical Ulam-Galerkin Method for Estimation}
First we briefly review a standard discussion of Frobenius-Perron operators for deterministic and then random maps; flows are covered in as much as the maps discussed can be taken as derived from a flow by a Poincar\'e or stroboscopic mapping.
Assuming a map,
\begin{eqnarray}\label{map}
f:X&\rightarrow& X, \nonumber \\
x &\mapsto& f(x),
\end{eqnarray}
the forward $\mbox{orbit}(x)=\{x,f(x),f^2(x),...\}$ from an initial condition $x$ is a central object of dynamical systems. However, if we consider an ensemble of many initial conditions distributed by $\rho\in L^1(X)$, and we assume $f$ is a nonsingular transformation, measurable relative to $(X,{\cal B},\mu)$ on a Borel sigma-algebra of measurable sets $ {\cal B } \subset X$, then there follows the Frobenius-Perron operator that describes the orbit of ensembles, following our notation from \cite{bollt2013applied}, and comparable to \cite{lasota1982exact}. The linear map, ${\cal P}_f:L^1(X)\rightarrow L^1(X)$, follows the discrete continuity equation,
\begin{equation}
\int_{f(B)} \rho_{n+1} d\mu=\int_{B} \rho_n d\mu, \mbox{ for any } B\in {\cal B}.
\end{equation}
For differentiable maps, this simplifies to,
\begin{equation}
P_f[\rho](x)=\sum_{y:x=f(y)} \frac{\rho(y)}{|Df(y)|},
\end{equation}
where if $f(y)$ is a single-variate function, then $|Df(y)|=|f'(y)|$ is the absolute value of the derivative, or it is the determinant of the Jacobian (matrix) derivative if multi-variate.
The following equivalent form is relevant for our purposes here,
\begin{equation}\label{delta}
{\cal P}_f[\rho](x)=\int_X \delta(x-f(y))\rho(y) dy,
\end{equation}
in terms of the delta function. Also, we have specialized to Lebesgue measure on $X$ from this point forward.
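As an illustrative aside (not part of the original development), the preimage-sum form of ${\cal P}_f$ can be checked numerically for the full logistic map $f(x)=4x(1-x)$, whose known invariant density $\rho(x)=1/(\pi\sqrt{x(1-x)})$ should be a fixed point of the operator. The following Python sketch assumes only that form and the two preimages $(1\pm\sqrt{1-x})/2$:

```python
import math

def logistic(x):
    return 4.0 * x * (1.0 - x)

def rho_inv(x):
    # Known invariant density of the logistic map on (0, 1).
    return 1.0 / (math.pi * math.sqrt(x * (1.0 - x)))

def fp_operator(rho, x):
    # P_f[rho](x) = sum over preimages y of x of rho(y) / |f'(y)|, with the
    # two preimages of the full logistic map given by (1 +/- sqrt(1 - x)) / 2.
    s = math.sqrt(1.0 - x)
    preimages = [(1.0 - s) / 2.0, (1.0 + s) / 2.0]
    return sum(rho(y) / abs(4.0 - 8.0 * y) for y in preimages)

# Sanity checks: the preimages really map to x, and the invariant density
# is (numerically) a fixed point of the operator.
for x in [0.1, 0.3, 0.5, 0.7, 0.9]:
    s = math.sqrt(1.0 - x)
    for y in [(1.0 - s) / 2.0, (1.0 + s) / 2.0]:
        assert abs(logistic(y) - x) < 1e-12
    assert abs(fp_operator(rho_inv, x) - rho_inv(x)) < 1e-12
```

The fixed-point property holds essentially to machine precision here, reflecting the closed-form cancellation of the $1/|f'|$ factors against the invariant density.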
The Ulam-Galerkin method is a way to estimate the action of the Frobenius-Perron operator, given a (fine) finite topological cover of $X$ by (usually rectangles, or boxes, or triangles, or other simple spatial elements) ${\cal B}=\{B_i\}_{i=1}^K$, $K>0$. The estimator,
\begin{equation}\label{est1}
P_{i,j}=\frac{m(B_i\cap f^{-1}(B_j))}{m(B_i)},
\end{equation}
is stated in terms of Lebesgue measure, $m(B)=\int_B dx$. In fact, a simple estimate of this $K\times K$ matrix $P$ follows if a large collection of input-output pairs $(x_n,x_{n+1})$ is available, as examples of $x\in B_i\cap f^{-1}(B_j)$, perhaps derived from a long orbit that samples the space. Note that $f^{-1}$ denotes the pre-image of $f$, which may well not be one-to-one.
\begin{equation}\label{est2}
P_{i,j}\sim \frac{\# x_n\in B_i, x_{n+1}\in B_j}{\# x_n\in B_i}.
\end{equation}
Notice that the ``$\cap$'' notation for intersection of sets is coincident with the ``,'' notation for ``and'' which denotes that both events occur. This is useful for the Bayesian reinterpretation in the next section. Under the above construction, it can easily be seen that $P$ is a stochastic matrix, which therefore has a leading eigenvalue of $1$ and, if simple, a dominant eigenvector which describes the steady state of the corresponding Markov chain.
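To make the counting estimator concrete, here is a minimal Python sketch (an illustration, not the code used for the figures in this paper; the bin count and orbit length are arbitrary choices) that builds the matrix of Eq.~(\ref{est2}) from a logistic-map orbit and confirms it is row-stochastic on visited bins:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 40, 10_000          # bins on [0, 1] and orbit length (illustrative)

# Generate a logistic-map orbit; the initial condition is arbitrary.
x = np.empty(N)
x[0] = rng.uniform(0.1, 0.9)
for n in range(N - 1):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])

# Count-based Ulam estimate in the spirit of Eq. (est2):
# P[i, j] ~ #(x_n in B_i and x_{n+1} in B_j) / #(x_n in B_i).
idx = np.minimum((x * K).astype(int), K - 1)   # bin index of each iterate
P = np.zeros((K, K))
for i, j in zip(idx[:-1], idx[1:]):
    P[i, j] += 1.0
row_sums = P.sum(axis=1, keepdims=True)
P = np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

# Each visited bin's row is a probability distribution over next bins.
occupied = row_sums[:, 0] > 0
assert np.allclose(P[occupied].sum(axis=1), 1.0)
```

The row-stochastic property follows by construction, since every counted transition out of a bin lands in some bin.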
The original Ulam conjecture \cite{ulam1960collection} asserted that, in the limit of a refining partition $\{B_i\}$, the dominant eigenvector of the discrete-state Markov chain converges to the invariant density of the original dynamical system. This conjecture was first proved by Li \cite{LI1976177} under the hypothesis of bounded variation for one-dimensional maps, providing weak convergence.
An estimate such as Eqs.~(\ref{est1})-(\ref{est2}) has previously been called an Ulam-Galerkin estimate~\cite{bollt2013applied,ma2013relatively}, a description which we made to intentionally separate the concept of the limit of long-time iteration, as one does when considering ergodic averages, from short-time considerations; Eqs.~(\ref{est1})-(\ref{est2}) can simply be taken as an estimate of the action of the map on ensemble densities between two time frames, or perhaps to be iterated a few times. The term Galerkin refers to projection of the action of the operator onto a basis of characteristic functions $\{\xi_{B_i}(x)\}$, supported over the grid elements, $\{ B_i \}$,
\begin{equation}
\xi_{B_i}(x)=
\begin{cases}1, & \mbox{if } x \in B_i \\ 0, & \mbox{otherwise} \end{cases}.
\end{equation}
Then the Ulam-Galerkin estimate formally describes a projection, $R:L^2(X) \rightarrow \Delta_K$, for a finite linear subspace $ \Delta_K \subset L^2(X)$, that is spanned by the collection of characteristic functions over the grid elements. Notice, for this description, this is in terms of $L^2(X)$, in order that an inner product structure makes sense, and then
\begin{equation}\label{est3}
P_{i,j}=\frac{(\xi_{B_i},\xi_{f^{-1}(B_j)})}{\|\xi_{B_i}\|^2}=\frac{\int_X \xi_{B_i}(x)\,\xi_{f^{-1}(B_j)}(x)\, dx }{\int_X \xi_{B_i}(x)^2\, dx} .
\end{equation}
When considering this finite-rank transition operator over finite times, we need only worry about estimating the transitions through the basis functions, as discussed in \cite{bollt2013applied, bollt2002manifold}. Infinite-time questions are clearly more nuanced, which is why the Ulam conjecture remained a conjecture for almost twenty years. Our Bayesian discussion will likewise avoid them.
In the more general case of a random dynamical system, Eq.~(\ref{map}) is recast,
\begin{eqnarray}\label{map2}
x_{n+1}=f(x_n)+s_n,
\end{eqnarray}
which describes a deterministic part $f$ together with a stochastic ``kick'' $s$ which we assume is independently and identically distributed, $s_n\sim \nu$. Consequently, the kernel integral form of the Frobenius-Perron transfer operator becomes,
\begin{equation}\label{delta2}
{\cal P}_f[\rho](x)=\int_X \nu(x-f(y))\rho(y) dy,
\end{equation}
which we see is closely related to the zero-noise case of Eq.~(\ref{delta}) where the kernel in that case is a delta-function.
For discussion in the next section, we will specialize further to the truncated normal distribution, $s\sim t{\cal N}(0,\sigma)$, to maintain perturbations within the bounded domain by avoiding unbounded tails,
\begin{equation}\label{tnormal}
\nu(x;\mu,\sigma,a,b)=\frac{1}{\sigma} \frac{\phi(\frac{x-\mu}{\sigma}) }{\Phi(\frac{b-\mu}{\sigma})-\Phi(\frac{a-\mu}{\sigma})}, \mbox{ where, } \phi(z)=\frac{1}{\sqrt{2 \pi}} e^{-\frac{z^2}{2}}, \Phi(z)=\frac{1+\mathrm{erf}(\frac{z}{\sqrt{2}})}{2},
\end{equation}
and we choose, $a=0, b=1, x-\mu=f(y) $.
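The truncated normal density is straightforward to implement directly; the following Python sketch (parameter values are illustrative, e.g. $\mu=0.5$ standing in for some $f(y)$) evaluates Eq.~(\ref{tnormal}) and checks that it integrates to one over $[a,b]$:

```python
import math

def truncnorm_pdf(x, mu, sigma=0.02, a=0.0, b=1.0):
    # Truncated normal density of Eq. (tnormal): the normal pdf on [a, b],
    # renormalized by the mass the untruncated normal assigns to [a, b].
    if x < a or x > b:
        return 0.0
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / math.sqrt(2.0 * math.pi)
    Z = Phi((b - mu) / sigma) - Phi((a - mu) / sigma)
    return phi / (sigma * Z)

# Check unit mass over [a, b] = [0, 1] with a trapezoid rule, taking the
# (illustrative) center mu = 0.5.
n = 20_000
h = 1.0 / n
interior = sum(truncnorm_pdf(k * h, 0.5) for k in range(1, n))
mass = h * (interior + 0.5 * (truncnorm_pdf(0.0, 0.5) + truncnorm_pdf(1.0, 0.5)))
assert abs(mass - 1.0) < 1e-4
```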
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4\textwidth]{ fh0-1000-40inv}
\includegraphics[width=0.4\textwidth]{ fh0-1000-40ns2e-10}
\caption{ Data consisting of $N=1,000$ samples of $(x_n,x_{n+1})$ pairs sampled from: (Left) The logistic map, $x_{n+1}=f(x_n)=4 x_n (1-x_n)$ along an orbit, following an initial transient so that the sample distribution closely approximates the invariant distribution, $p_X(x)=\frac{1}{\pi\sqrt{x(1-x)}}$. The joint distribution is a delta function, $p_{X'X}(x',x)=\delta(x'-f(x))$. (Right) A noisy logistic map $x_{n+1}=f(x_n)=4 x_n (1-x_n)+s_n$, where $s_n$ is chosen from an i.i.d. truncated normal distribution of standard deviation $\sigma=0.02$. The ``blur'' of points roughly describes the joint distribution, $p_{X'X}(x',x)=\nu(x'-f(x))$, where $\nu$ is the truncated normal distribution, Eq.~(\ref{tnormal}). See resulting kernels in Fig.~\ref{figs}.}
\label{fig1}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=.3\textwidth]{ figs.025}
\includegraphics[width=.3\textwidth]{ figs.05}
\includegraphics[width=.3\textwidth]{ figs.1}
\caption{Kernel of the Frobenius-Perron operator, in the case of a truncated normal distribution sampling, Eq.~(\ref{tnormal}), with standard deviations $\sigma=0.025$, $\sigma=0.05$ and $\sigma=0.1$. Notice the ``bumps'' that appear as what would otherwise seem mostly like a normal distribution with tails on both sides of the peaks becomes clamped, reflecting that the bounded domain variant must have this feature to remain a probability distribution, $\int p(x)\, dx=1$ on the stated domain. See sample data in Fig.~\ref{fig1}. }
\label{figs}
\end{figure}
In \cite{bollt2013applied}, we interpreted the random sampling associated with Eq.~(\ref{est2}) as a Monte-Carlo integration estimate involved with projection onto basis functions, Eq.~(\ref{est3}).
Now in the next sections, we will encode this same expression as a histogram-based density estimator within a Bayesian interpretation of the transfer operator. This will open the door to a different kind of error analysis, as well as to other estimators. Sample data for the logistic map, and for a logistic map perturbed by truncated normal noise, are shown in Figs.~\ref{fig1} and \ref{figs}. Ulam-Galerkin estimates of the stochastic matrices Eq.~(\ref{est3}) are shown in Figs.~\ref{fig2}-\ref{fig4}, with further interpretation as histogram estimators of the corresponding Bayesian quantities described in the next section.
\section{Bayesian Interpretation of the Transfer Operator}\label{Sec:BayInter}
The Frobenius-Perron operator has a Bayesian interpretation as follows. Here we will write $x'=f(x)$ so that $(x',x)$ is an output-input pair of $f$, which we take as samples of random variables $X'$ and $X$. A statement of conditional and joint densities leads to an interpretation of the Frobenius-Perron operator as a Bayes update. Reviewing, the joint density $p_{X'X}(x',x)$ of random variables $X'$ and $X$ marginalizes to,
\begin{equation}
p_{X'}(x')=\sum_{x:x'=f(x)} p_{X'X}(x',x)=\sum_{x:x'=f(x)} \frac{p_X(x)}{|Df(x)|},
\end{equation}
in terms of the summation over all pre-images of $x'$. Notice that the middle term is written as a marginalization across $x$ of all those $x$ that lead to $x'$. This Frobenius-Perron operator, as usual, maps densities of ensembles under the action of the map $f$. Comparing to the defining statement of a conditional density in terms of a joint density,
\begin{equation}
p_{X'X}(x',x)=p_{X'|X}(x'|x)p_X(x).
\end{equation}
We reinterpret, in the noiseless case,
\begin{equation}
p_{X'|X}(x'|x)=\frac{1}{|Df(x)|} \delta(x'-f(x)).
\end{equation}
In the language of Bayesian uncertainty propagation, $p_{X'|X}(x'|x)$ describes a likelihood function, interpreting future states $x'$ as data and past states $x$ as parameters, by the standard Bayes phrasing,
\begin{equation}
p(\Theta|\mbox{data})\propto p(\mbox{data}|\Theta)\times p(\Theta),
\end{equation}
for parameter $\Theta$, or simply by standard names of the terms,
\begin{equation}
\mbox{posterior}\propto\mbox{likelihood}\times \mbox{prior}.
\end{equation}
In these terms, comparing to Eq.~(\ref{est1}), $P_{i,j}$ can be interpreted as a matrix of likelihood functions
\begin{equation}\label{est4}
P_{i,j}=P(x'\in B_j|x\in B_i) = \frac{P(x\in B_i, x'\in B_j)}{P(x\in B_i)} = \frac{m(B_i\cap f^{-1}(B_j))}{m(B_i)}.
\end{equation}
Furthermore, the standard Ulam estimator, Eq.~(\ref{est2}), can be taken as a histogram method to estimate, the joint and marginal probabilities, $p_{X'X}$ and $p_X$ by occupancy counts in the related boxes, $B_i$ and $B_j$ with,
\begin{eqnarray}\label{UG}
P(x\in B_i, x'\in B_j)&\sim& \# x_n\in B_i, x_{n+1}\in B_j, \mbox{ and, } \nonumber \\ P(x\in B_i)&\sim& \# x_n\in B_i.
\end{eqnarray}
The conditional then follows by division, yielding the estimator of the matrix $P_{i,j}$ describing the likelihood function. In these terms, we are positioned to describe the statistical error of expressions such as Eq.~(\ref{est2}) for the matrix estimator $P_{i,j}$ of the Frobenius-Perron operator, by the theory of density estimators for $p_{X'X}(x',x)$ and $p_X(x)$ respectively. First, in the next section, we will discuss this histogram estimator, and then in following sections, we will consider other estimators, notably the kernel density estimator.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.32\textwidth]{ fh2-1000-40inv}
\includegraphics[width=0.32\textwidth]{ fh1-1000-40inv}
\includegraphics[width=0.32\textwidth]{ fh3-1000-40inv}
\includegraphics[width=0.32\textwidth]{ fh1-10000-40inv}
\includegraphics[width=0.32\textwidth]{ fh2-10000-40inv}
\includegraphics[width=0.32\textwidth]{ fh3-10000-40inv}
\caption{ Histogram estimates of (Top Row) the marginal, joint and conditional distributions $p_X(x),$ $ p_{X'X}(x',x), $ $ p_{X'|X}(x'|x) $ of the $N=1,000$ sample orbit illustrated in Fig.~\ref{fig1}, using $K =40$ bins and $40 \times 40$ cells. The rightmost estimate, $p_{X'|X}(x'|x)$ by Eqs.~(\ref{est1}), (\ref{est2}), (\ref{est3}), (\ref{est4}), (\ref{UG}), is therefore an Ulam-Galerkin estimate of the Frobenius-Perron operator that can be understood as a transition matrix. (Bottom Row) A longer orbit of $N=10,000$ iterates allows a better (smoother, with less variability) estimate of the true distributions. Compare to the true distribution shown in Fig.~\ref{figs}, sampled by the truncated normal $t{\cal N}$.}
\label{fig2}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.32\textwidth]{ fh2-1000-10u}
\includegraphics[width=0.32\textwidth]{ fh1-1000-10u}
\includegraphics[width=0.32\textwidth]{ fh3-1000-10u}
\caption{ Histogram distribution estimates, as in Fig.~\ref{fig2}, but sampling $N=1,000$ points of $x$ not from an orbit, but i.i.d.\ from a uniform distribution: histogram estimates of the marginal, joint and conditional distributions $p_X(x),$ $ p_{X'X}(x',x),$ $ p_{X'|X}(x'|x) $, here more coarsely (wider bandwidth for less variability but more bias (smoothing)), with $K=10$ and $10 \times 10$ cells respectively. Compare to the true distribution shown in Fig.~\ref{figs}, sampled by the truncated normal $t{\cal N}$.}
\label{fig4}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.32\textwidth]{ fh2-1000-40ns2e-10}
\includegraphics[width=0.32\textwidth]{ fh1-1000-40ns2e-2}
\includegraphics[width=0.32\textwidth]{ fh3-1000-40ns2e-2}
\caption{ Histogram distribution estimates, as in Fig.~\ref{fig4}, of $p_X(x),$ $ p_{X'X}(x',x),$ $ p_{X'|X}(x'|x) $, but for the random logistic map orbit data perturbed by truncated normal noise, $\sigma=0.02$, the orbit data shown in Fig.~\ref{fig1}. These plots are similar in variability vs.\ bias (smoothing) character to the no-noise scenario of Fig.~\ref{fig4}(Top Row), even with the same smoothing, despite the differences of the true underlying distribution, as the estimator cannot distinguish these properties. However, it is interesting that the marginal distribution estimating the invariant distribution is clearly different. Compare to the true distribution shown in Fig.~\ref{figs}, sampled by the truncated normal $t{\cal N}$. }
\label{fig5}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=1.\textwidth]{ fig4Kernels1}
\caption{ The top row shows the KDE estimate of a given data set; the top-right panel compares the estimate across kernel bandwidths. The bottom row demonstrates the density estimate for the same data set by the histogram method; the bottom-left and bottom-right panels compare the estimate across bin sizes. }
\label{fig6}
\end{figure}
\section{Theory of Density Estimation }\label{Sec:TheoryDen}
As we have argued in the previous section, the problem of forming an Ulam-Galerkin estimator of the Frobenius-Perron operator is equivalent to the Bayes computation of the conditional density $ p_{X'|X}(x'|x) $, derived from histogram estimators of the joint and marginal densities, $ p_{X'X}(x',x)$ and $p_X(x)$, respectively. Therefore in this section, we review classical density estimation theory from statistics, found in many excellent textbooks, such as \cite{silverman_1999,scott_2015}. First we review some details of histogram estimators, considering the central issues of bias, variance, and choice of bandwidth. Then in the subsequent subsection, we re-cast the problem as one of kernel density estimation (KDE), for which those same three issues, bias, variance, and bandwidth, suggest some advantages for KDE.
Throughout, we consider a general random variable $X$ distributed by $p_X(x)$, but for simplicity of presentation in this section, we assume the unit interval, $x\in [0,1]$, as the support of $p_X(x)$. Likewise, assume $(x',x) \in [0,1]\times [0,1]$, so that $p_{X'X}(x',x)$ has support in the unit square. The standard theory of density estimation also assumes a smooth density function, $|p_X'(x)|\leq C_1$ and $|Dp_{X'X}|\leq C_2$, for constants of uniform bound, $C_1, C_2\geq 0$. Note however, that this is already a problem for perhaps the most popular example in pedagogical study: the invariant density of the logistic map, $p_X(x)=\frac{1}{\pi\sqrt{x(1-x)}}$, is unbounded and has unbounded derivative, $p'_X(x)=\frac{2x-1}{2\pi \sqrt{x(1-x)}^3}$. Nonetheless, many have made a practice of presenting the invariant distribution as estimated from sample orbits, Fig.~\ref{fig2}(Left).
The key issue in any estimator is accuracy, versus the amount of data available.
Generally, we want an analysis of the mean square error (MSE) of the estimator, which requires both bias and variance, since $MSE=bias^2+Var$.
\subsection{Theory of Density Estimation for Histograms}
Here we review density estimation, closely following \cite{scott_2015}, first in one dimension, and then the multivariate scenario.
Considering a unit interval, $[0,1]$, it may be divided into $K$ cells (bins),
\begin{equation}
{\cal B}=\{B_i\}_{i=1}^K, \quad B_i=[\frac{i-1}{K},\frac{i}{K}), \quad i=1,2,\ldots,K,
\end{equation}
which is a uniform topological partition, meaning the interiors are mutually disjoint and the union of the closures covers the interval.
Similarly, a multivariate histogram is a topological partition into bins, usually rectangles (but other shapes, especially tessellations are not uncommon). Otherwise, continuing with the discussion of the single variate estimator, given a sample
$\{x_n\}_{n=1}^N$, suppose that $x\in B_i$. Then,
\begin{equation}\label{histest}
\overline{p_{N,K}}(x)=\frac{\# x_n \in B_i}{N} \times \frac{1}{m(B_i)}=\frac{K}{N}\sum_{j=1}^N \xi_{B_i}(x_j).
\end{equation}
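As a sketch of Eq.~(\ref{histest}) (illustrative only, and using a uniform i.i.d. sample rather than orbit data), the piecewise-constant histogram estimate integrates to one by construction:

```python
import numpy as np

def hist_density(sample, K):
    # Histogram estimator of Eq. (histest): bin counts scaled by K / N so
    # that the piecewise-constant estimate integrates to one on [0, 1].
    idx = np.minimum((np.asarray(sample) * K).astype(int), K - 1)
    return K * np.bincount(idx, minlength=K) / len(sample)

rng = np.random.default_rng(1)
sample = rng.uniform(size=5000)        # illustrative i.i.d. uniform sample
p_bar = hist_density(sample, K=25)

# Each bin has width 1/K, so the total mass sum(p_bar) / K equals one.
assert abs(p_bar.sum() / 25 - 1.0) < 1e-9
assert np.all(p_bar >= 0.0)
```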
The key part of density estimation is the analysis of the bias of the estimator \cite{scott_2015}. Continuing with $\overline{p_{N,K}}(x) $ for a point $x\in B_i$, which need not be assumed to be one of the data points $x_j$, consider the probability that a sample lands in $B_i$, $P(x_j\in B_i)$,
\begin{equation}
{\mathbb E}(\overline{p_{N,K}}(x))=K\, P(x_j\in B_i)=K\int_{\frac{i-1}{K}}^{\frac{i}{K}} p_X(s)\,ds=p_X(\tilde{x}), \mbox{ for some } \tilde{x}\in B_i,
\end{equation}
by the mean value theorem and the fundamental theorem of calculus. Therefore, the bias of the estimator is,
\begin{equation}
bias_{hist}[\overline{p_{N,K}}(x)]={\mathbb E}(\overline{p_{N,K}}(x)-p_X(x))=p_X(\tilde{x}) -p_X(x)\leq |p_X'(\hat{x})| |\tilde{x}-x| \leq \frac{C_1}{K}.
\end{equation}
The first inequality follows again by the mean value theorem, this time for a value $\hat{x}\in (x,\tilde{x})$ (or perhaps the opposite order), and the fact that $\tilde{x},\hat{x}\in B_i$. Bias is a question of balancing the derivative bound $|p'_X|\leq C_1$ against a good choice of the number of bins, $K$.
The variance can be computed with Eq.~(\ref{histest}),
\begin{eqnarray}
Var_{hist}(\overline{p_{N,K}}(x))&=& Var\Big(\frac{K}{N}\sum_{j=1}^N \xi_{B_i}(x_j)\Big) \nonumber \\
&=& \frac{K^2 P(x_j\in B_i)(1-P(x_j \in B_i))}{N}=
\frac{K^2\big( \frac{p_X(\tilde{x})}{K} \big) \big(1- \frac{p_X(\tilde{x})}{K} \big)}
{N} \nonumber \\
&=& \frac{K p_X(\tilde{x}) - p_X^2(\tilde{x})}{N}.
\end{eqnarray}
Variance is a question of balancing the number of bins $K$ versus the data count $N$, but relative to the unknown density $p_X$.
Therefore the mean square error of the density estimate $\overline{p_{N,K}}(x)$ at an arbitrary point $x\in [0,1]$ follows,
\begin{equation}\label{msehist}
MSE_{hist}(\overline{p_{N,K}}(x))=bias_{hist}^2(\overline{p_{N,K}}(x))+Var_{hist}(\overline{p_{N,K}}(x))\leq \frac{C_1^2}{K^2}+
\frac{K p_X(\hat{x}) + p_X^2(\hat{x})}{N}.
\end{equation}
To interpret: when a fixed data set (size $N$) is given, from an unknown distribution $p_X$, we can only choose $K$, and this choice is called bandwidth selection. From Eq.~(\ref{msehist}), large $K$ (more bins) yields decreased bias (the first term), but the variance (the second term) will tend to be large. This demonstrates the balancing struggle between bias and variance in choosing the number of bins, the bandwidth. Figs.~\ref{fig2}-\ref{fig5} demonstrate this bandwidth selection balancing act. Again, we reiterate that formally the analysis requires a finite bound $C_1\geq 0$, whereas the derivative of the invariant density of the logistic map is unbounded on $[0,1]$; still, many such estimates exist in the literature (\cite{Peter1995, bollt2000controlling, nie_coca_2018}, etc.), including by ourselves, a practice we now call a ``typical sin.'' An argument that it may not be fatal for practical problems is the fact that real-world dynamical systems always carry noise, which has the effect of smoothing (e.g., noise sampled from a smooth distribution serves as a mollifier that can ``blur'' even a singular distribution into a $C^\infty$ distribution), or rather of producing invariant densities that are smooth after all. See Fig.~\ref{fig2} for histogram density estimates of the Fig.~\ref{fig1}(Right) randomly perturbed logistic map data.
Note that the analysis of MSE in the theory of multivariate histogram estimators is similar in methodology, for which we refer to \cite{scott_2015}. The important point at this stage of the paper is that the famous Ulam-Galerkin estimation of the transfer operator by formula Eq.~(\ref{est2}) amounts to density estimation of the marginal and joint distributions $p_X(x)$ and $p_{X'X}(x',x)$, leading to estimation of the conditional distribution $p_{X'|X}(x'|x)$. This said, we can now contrast that this discussion is not a description of the Ulam problem (versus the Ulam-Galerkin estimation), since the Ulam problem states that these estimates are stochastic matrices, each with a dominant eigenvector describing the invariant state of the corresponding Markov chain, which converges weakly to the invariant distribution of the original dynamical system; conditions for when this is in fact true were given as a theorem, under a hypothesis of bounded total variation, first in \cite{LI1976177}.
Now we turn to other, perhaps more favorable density estimators of the transfer operator, notably kernel density estimation.
\subsection{Theory of Kernel Density Estimation}\label{sec:KDE_theory}
Another major category of data-driven nonparametric density estimators is the kernel density estimator, or KDE. It is a data-driven estimator based on mixing simpler densities. These are defined in terms of a kernel function, ${\cal K}$, which is itself a real density function. Stating for single-variate data, ${\cal K}:{\mathbb R}\rightarrow {\mathbb R}^+$, and such that: 1) ${\cal K}(x)$ is symmetric, 2) $\int_{\mathbb R} {\cal K}(x) dx=1$, 3) $\lim_{x\rightarrow \pm\infty}{\cal K}(x)=0.$ These are sufficient to guarantee that the KDE estimator, built out of convex sums of the kernel sampled at data points, is itself a density,
\begin{equation}
\overline{p_{N,\delta}} (x)=\frac{1}{\delta N} \sum_{i=1}^N {\cal K}(\frac{x_i-x}{\delta}),
\end{equation}
where $\delta>0$ is the bandwidth that controls the range or extent of influence of a given data point $x_i$, and is a primary parameter choice, just as the bin size was for the histogram method. There are several favorite kernels, but we mention especially the Gaussian kernel, ${\cal K}(x)\propto \exp(-x^2/2)$, and the Epanechnikov kernel, ${\cal K}(x)\propto (1-x^2)$ on $|x|\leq 1$ (and zero otherwise).
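A minimal single-variate Gaussian-kernel sketch of this estimator (with an illustrative tiny sample, not the experimental code of this paper) shows that the estimate is itself a density, nonnegative and of unit mass:

```python
import math

def kde(sample, x, delta):
    # Gaussian-kernel estimate: (1 / (delta N)) * sum_i K((x_i - x) / delta).
    norm = 1.0 / math.sqrt(2.0 * math.pi)
    total = sum(norm * math.exp(-0.5 * ((xi - x) / delta) ** 2) for xi in sample)
    return total / (delta * len(sample))

sample = [0.1, 0.2, 0.25, 0.5, 0.52, 0.8]   # tiny illustrative data set
delta = 0.1

# A convex mixture of densities: nonnegative, and of unit mass (checked by
# a trapezoid rule on a grid wide enough to capture the Gaussian tails).
lo, hi, n = -1.0, 2.0, 3000
h = (hi - lo) / n
vals = [kde(sample, lo + k * h, delta) for k in range(n + 1)]
mass = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
assert all(v >= 0.0 for v in vals)
assert abs(mass - 1.0) < 1e-4
```

Unlike the histogram, this estimate is smooth in $x$ and centered at the data rather than at fixed grid positions.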
Similarly, for multivariate data, $\overline{p_{N,\Sigma}} (x)=\frac{1}{ N} \sum_{i=1}^N {\cal K}_\Sigma(x_i-x) $, using the common compact ``scaled'' kernel notation, ${\cal K}_\Sigma(z)=|\Sigma|^{-1/2}{\cal K}(\Sigma^{-1/2} z)$, which for the most commonly used Gaussian kernel reads ${\cal K}(z)=(2\pi)^{-d/2} \exp(-z^Tz/2).$ The matrix $\Sigma$ serves the role of a variance-covariance matrix in the case of a Gaussian centered at $x_i$.
A crucial difference is that whereas histogram bins are centered on fixed spatial positions which the data then occupy, a kernel density estimator is centered only where there is data. This can be a real savings when considering sparsely sampled data from a distribution with relatively small support, especially in higher dimensions, where the curse of dimensionality prohibits covering the space with boxes, many of which may be empty of data wherever the density is zero.
To analyze MSE, we must again state the bias and variance of the estimator.
\begin{eqnarray}
bias_{kde}(\overline{p_{N,\delta}}(x))&=&{\mathbb E}(\frac{1}{\delta N} \sum_{i=1}^N {\cal K}(\frac{x_i-x}{\delta})- p_X(x)) \nonumber \\
&=&\frac{1}{\delta}\int {\cal K}(\frac{y-x}{\delta})p_X(y)dy-p_X(x)=
\int {\cal K}(z)p_X(x+\delta z)dz - p_X(x).
\end{eqnarray}
By substituting a Taylor series, $p_X(x+\delta z)=p_X(x)+\delta z p_X'(x) +\frac{1}{2} \delta^2 z^2 p_X''(x) + o(\delta^2) $, it follows \cite{scott_2015} that,
\begin{equation}\label{biaskde}
bias_{kde}(\overline{p_{N,\delta}}(x))=\frac{c}{2} \delta^2 p_X''(x) +o(\delta^2),
\end{equation}
where $c=\int z^2 {\mathcal K}(z) dz$ is the second moment of the kernel.
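The leading-order bias formula can be checked in closed form when both the kernel and the target are Gaussian, since then ${\mathbb E}(\overline{p_{N,\delta}}(x))$ is the target convolved with ${\cal N}(0,\delta^2)$. The following sketch (with illustrative values of $x$ and $\delta$, and a standard normal target standing in for $p_X$) compares the exact bias with $\frac{c}{2}\delta^2 p_X''(x)$:

```python
import math

def normal_pdf(x, var=1.0):
    return math.exp(-0.5 * x * x / var) / math.sqrt(2.0 * math.pi * var)

delta, x = 0.05, 0.5
# For a Gaussian kernel, E[p_bar(x)] is the target convolved with N(0, delta^2);
# for a standard normal target, that convolution is N(0, 1 + delta^2).
exact_bias = normal_pdf(x, var=1.0 + delta**2) - normal_pdf(x)

# Leading-order prediction (c/2) delta^2 p''(x), with c = 1 for the Gaussian
# kernel and p''(x) = (x^2 - 1) p(x) for the standard normal density.
approx_bias = 0.5 * delta**2 * (x * x - 1.0) * normal_pdf(x)

# The two agree to within the o(delta^2) correction.
assert abs(exact_bias - approx_bias) < 0.05 * abs(approx_bias)
```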
Analysis of variance follows similarly.
\begin{equation}\label{variancekde}
Var_{kde}(\overline{p_{N,\delta}}(x)) = Var(\frac{1}{\delta N} \sum_{i=1}^N {\cal K}(\frac{x_i-x}{\delta}))
\leq \frac{1}{\delta^2 N}{\mathbb E}( {\mathcal K}^2(\frac{x_i-x}{\delta}))
\end{equation}
\begin{equation}
= \frac{1}{\delta N} \int {\cal K}^2(y) (p_X(x)+\delta y p_X'(x) + o(\delta)) dy=
\frac{1}{\delta N} p_X(x)\int {\cal K}^2 (y) dy + o(\frac{1}{\delta N})
= \frac{1}{\delta N} p_X(x)\, d + o(\frac{1}{\delta N}),
\end{equation}
with $d=\int {\cal K}^2(y) dy$.
So follows the MSE, combining Eqs.~(\ref{biaskde}) and (\ref{variancekde}).
\begin{equation}\label{Eq:MSE_KDE}
MSE_{kde}(x)=\frac{c^2}{4}\delta^4 |p_X''(x)|^2 + \frac{d}{\delta N} p_X(x) + o(\delta^4)+o((\delta N)^{-1}).
\end{equation}
Or,
\begin{equation}
MSE= O(\delta^4)+O((\delta N)^{-1})
\end{equation} summarizes how the MSE scales with the bandwidth $\delta$. In the role of bandwidth choice, we see bias dominating for larger $\delta>0$, proportionally to $p_X''(x)$ (the curvature), and variance dominating for smaller $\delta$.
\subsection{Optimal MSE}\label{sec:opt}
The choice of bandwidth tailored to a given data set size is the key question in using a given nonparametric estimator. Both the histogram and the KDE bounds contain constants unknown to us, depending on $p_X(x)$ or its derivatives; not knowing $p_X$, these are inaccessible, and all we can control is bandwidth and data set size. For histograms,
\begin{equation}
MSE_{hist}(\overline{p_{N,K}}(x)) \sim {\mathcal O}(\frac{1}{K^2})+{\mathcal O} (\frac{K}{N}),
\end{equation}
but for kernel density estimation,
\begin{equation}
MSE_{kde}(\overline{p_{N,\delta}}(x)) \sim {\mathcal O}(\delta^4) + {\mathcal O}(\frac{1}{\delta N}),
\end{equation}
each balances large bias when the bandwidth is too large against large variance when the bandwidth times the data set size is too small, but at different rates. The asymptotic mean square error can be shown to be optimal when,
\begin{equation}\label{bandwidth1}
\delta_{opt;KDE}=\frac{C}{N^{1/5}},
\end{equation}
where $C$ is a constant related to the unknown density function, $C=\left(\frac{p_X(x)\, d}{c^2|p_X''(x)|^2}\right)^{1/5}$, obtained by minimizing the leading terms of Eq.~(\ref{Eq:MSE_KDE}) in $\delta$.
Similarly, for histograms, an optimal bandwidth selection is described by,
\begin{equation}
K_{opt;hist}=(\frac{N C_1^2}{p(\tilde{x})})^{\frac{1}{3}},
\end{equation}
(note that the bandwidth for a histogram is taken to be $1/K$). So we see asymptotically cubic versus quintic scaling, and the KDE may be better when best used (at optimal bandwidth); but in practice that also depends on the constants, one of which depends largely on $p_X$ and the other also on $p_X''$. The most relevant quantity when choosing a method is that for a KDE, the MSE at the optimal bandwidth is \cite{scott_2015},
\begin{equation}
MSE_{\delta_{opt;KDE}}(\overline{p_{\delta, N}}(x))={\cal O}(\frac{1}{N^{\frac{4}{5}}}).
\end{equation}
In practice, however, since we do not know $p_X$, all we can do for bandwidth selection is to inspect the scaling, as we will do in the results, Sec.~\ref{sec:rest}. Beyond 1-dimensional density estimation, multivariate KDE has a slower bandwidth rate,
\begin{equation}
\delta_{opt;KDE}=\frac{C}{N^{\frac{1}{4+D}}},
\end{equation}
in $D\geq 1$ dimensions. For example, the density estimation problem associated with $p_X(x)$ is $D=1$ for a transfer operator of the logistic map, but the joint density $p_{X'X}(x',x)$ is $D=2$ for the same.
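As a consistency check of the $N^{-1/5}$ rule (with illustrative, assumed values for $p_X(x)$ and $|p_X''(x)|$, since these are unknown in practice), one can verify numerically that minimizing the leading terms of Eq.~(\ref{Eq:MSE_KDE}) in $\delta$ reproduces the stated scaling:

```python
import math

# Gaussian-kernel constants: c = second moment = 1, d = int K^2 = 1 / (2 sqrt(pi)).
c = 1.0
d = 1.0 / (2.0 * math.sqrt(math.pi))
N = 10_000
p, p2 = 0.4, 0.3        # illustrative stand-ins for p_X(x) and |p_X''(x)|

def asymptotic_mse(delta):
    # Leading terms of Eq. (Eq:MSE_KDE): squared bias plus variance.
    return (c**2 / 4.0) * delta**4 * p2**2 + (d / (delta * N)) * p

# Minimizing in delta gives delta_opt = (d p / (c^2 p''^2 N))^(1/5),
# i.e., the C / N^(1/5) scaling of Eq. (bandwidth1).
delta_opt = (d * p / (c**2 * p2**2 * N)) ** 0.2

# The formula's optimum should beat nearby bandwidths.
assert asymptotic_mse(delta_opt) < asymptotic_mse(2.0 * delta_opt)
assert asymptotic_mse(delta_opt) < asymptotic_mse(0.5 * delta_opt)
```

Substituting $\delta_{opt}$ back into the MSE expression yields the ${\cal O}(N^{-4/5})$ rate quoted above.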
\section{Results and Discussion}\label{sec:rest}
In this section, we focus on estimating the Frobenius-Perron operator using the previously discussed density estimation methods. In other words, we calculate the $P$ matrix in Eq.~(\ref{est4}) by density estimation. We use the logistic map example to demonstrate the results. In this demonstration, $N=10^6$ uniformly distributed initial conditions were evolved under the logistic map $x'=4x(1-x)$ for a relatively long time, which was used to approximate the invariant density $\rho(x)=\frac{1}{\pi\sqrt{x(1-x)}}$. Our goal is to estimate the Frobenius-Perron operator by evaluating the conditional probability density function $p(x'|x)=\frac{p(x,x')}{p(x)}$. For this calculation, we estimated $p(x)$ and the joint probability density $p(x,x')$ from the data using density estimation methods. In this section, we analyze the estimation of $p(x)$ by these methods in detail and compare it to the theoretical discussion in Section~\ref{Sec:TheoryDen}. We then demonstrate the estimation of the $P$ matrix of the Frobenius-Perron operator through the discretized probability density function $p(x'|x)$.
\subsection{Histogram Estimation vs Kernel Density Estimation for logistic map example}
Histogram estimation is based on the number of samples ($N$) and the number of bins ($K$) (see Section~\ref{Sec:TheoryDen} for details). Here, we demonstrate the effect of the bin size. The estimate of the invariant density $\rho(x)$ by the histogram method is denoted by $\overline{\rho_{N,K}}$. Figure~\ref{fig:histEstK} shows how the estimate changes with the parameter $K$. Since the true density is known, the mean squared error (MSE) and the upper bound (UB) of the MSE can be calculated for this example by Eq.~(\ref{msehist}). Note that the quantities
\begin{align*}
MSE &=(\rho(x)-\overline{\rho_{N,K}}(x))^2 \\
UB & = C_1^2 \frac{1}{K^2}+ \frac{\rho(\hat{x}) }{N} K+ \frac{\rho^2(\hat{x})}{N}
\end{align*}
are analogous to Eq.~(\ref{msehist}), with the constants evaluated from the true density function $\rho(x)$. Furthermore, by analyzing the UB and the MSE, we can get an idea of the optimal bin size $K_{opt;hist}$ (see Figure~\ref{fig:histMSE_UB} and Section~\ref{sec:opt}).
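The computation behind such MSE curves can be sketched as follows (a minimal, self-contained version; the bin count and sample size are illustrative, and we exploit that $\rho$ is the arcsine, i.e.\ $\mathrm{Beta}(1/2,1/2)$, density so it can be sampled directly):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 10**6, 1000
# The invariant density rho(x) = 1/(pi*sqrt(x(1-x))) is the Beta(1/2, 1/2)
# (arcsine) density, so we sample it directly instead of iterating the map.
samples = rng.beta(0.5, 0.5, size=N)

# Histogram density estimate on K equal bins of width 1/K.
counts, _ = np.histogram(samples, bins=K, range=(0.0, 1.0))
rho_hat = counts * K / N  # counts / (N * bin_width)

# Empirical MSE against the true density on interior grid points.
x = np.linspace(0.01, 0.99, 100)
rho_true = 1.0 / (np.pi * np.sqrt(x * (1.0 - x)))
mse = np.mean((rho_true - rho_hat[np.minimum((x * K).astype(int), K - 1)]) ** 2)
```
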
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6
\textwidth]{ Hist_Logis}
\caption{ The histogram-based density estimate $\overline{\rho_{N,K}}$ of the invariant density $\rho$ of the logistic map. This figure demonstrates the effect of the number of bins $K$ on the estimate $\overline{\rho_{N,K}}$ with sample size $N=10^6$.}
\label{fig:histEstK}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45
\textwidth]{ Hist_Error}
\includegraphics[width=0.45
\textwidth]{ UB_Hist}
\caption{The left figure shows the MSE of the estimated density function on $100$ equally spaced grid points in the interval $[0.01,0.99]$, and the behavior of the MSE as the bin size $K$ varies. The right figure analyzes the UB: it shows how the UB changes with the bin size for the logistic map example; the optimal MSE is achieved around the bin size $K_{opt}=2503$. }
\label{fig:histMSE_UB}
\end{figure}
As discussed in Section~\ref{sec:KDE_theory}, KDE is based on the number of data points ($N$) and the kernel bandwidth ($\delta$). In this section, we numerically demonstrate the effect of the bandwidth. Notice that a change in the bandwidth results in a change in the MSE (see Figure~\ref{Fig:KDE_logi}). The upper bound (UB) of the MSE for kernel density estimation is given in Eq.~(\ref{Eq:MSE_KDE}), and the following results are calculated from it. The error analysis and the optimal MSE are demonstrated in Figure~\ref{fig:KDEMSE_UB}. Note that the optimal MSE is achieved at a bandwidth of approximately $\delta_{opt;KDE}=0.0011$.
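A minimal Gaussian-kernel version of such an estimator reads as follows (an illustrative sketch only; in practice one would typically rely on a library routine such as \texttt{scipy.stats.gaussian\_kde}):

```python
import numpy as np

def gaussian_kde_at(x_eval, samples, delta):
    """Gaussian-kernel density estimate with bandwidth delta at points x_eval."""
    z = (np.asarray(x_eval, dtype=float)[:, None] - samples[None, :]) / delta
    return np.exp(-0.5 * z * z).mean(axis=1) / (delta * np.sqrt(2.0 * np.pi))

# Sanity check on a standard normal: the true density at 0 is 1/sqrt(2*pi).
rng = np.random.default_rng(2)
est = gaussian_kde_at([0.0], rng.standard_normal(50_000), delta=0.05)
```
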
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6
\textwidth]{ KDE_logi}
\caption{ The KDE of the invariant density $\rho$ of the logistic map. This figure shows the effect of the kernel bandwidth $\delta$ on the KDE with sample size $N=10^6$.}
\label{Fig:KDE_logi}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45
\textwidth]{ KDE_Error}
\includegraphics[width=0.45
\textwidth]{ UB_KDE}
\caption{The left figure shows the MSE of the estimated density function on $100$ equally spaced grid points in the interval $[0.01,0.99]$, and the behavior of the MSE as the bandwidth $\delta$ varies. The right figure analyzes the UB: it shows how the UB changes with $\delta$ for the logistic map example; the optimal bandwidth is identified as approximately $\delta_{opt}=0.0011$. }
\label{fig:KDEMSE_UB}
\end{figure}
Due to the unboundedness of the density function, both estimation methods have higher estimation errors near the endpoints of the interval $[0,1]$. In general, KDE has issues when estimating a probability density with finite support. However, the overall estimation error is much lower for KDE than for the histogram method. Figure~\ref{Fig:HistVsKDE} demonstrates that the MSE for KDE is lower than the MSE for the histogram method when each uses its optimal parameter value.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45
\textwidth]{ MSE_Opt}
\caption{Comparison of the MSE of KDE and histogram methods with their optimal parameter values.}
\label{Fig:HistVsKDE}
\end{figure}
\subsection{Estimating the Frobenius-Perron operator}
Now, we numerically investigate the theoretical discussion presented in Section~\ref{Sec:BayInter}. We have shown that the popular classical Ulam-Galerkin method is a histogram estimation of $p(x'|x)$. Hence, we argued that estimating the Frobenius-Perron operator through the conditional probability density $p(x'|x)$ can be extended to any density estimation method. In this article, we presented KDE as an alternative to the histogram method.
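For concreteness, the histogram (Ulam-Galerkin) estimate of $p(x'\mid x)$ can be sketched as follows (a minimal version; the bin count, sample size, and burn-in length are illustrative choices, not the exact parameters used for the figures below):

```python
import numpy as np

def ulam_matrix(x, x_next, K):
    """Row-stochastic matrix P with P[i, j] ~ p(x' in bin j | x in bin i),
    built from paired samples (x, x') by bivariate histogram counting."""
    edges = np.linspace(0.0, 1.0, K + 1)
    i = np.clip(np.digitize(x, edges) - 1, 0, K - 1)
    j = np.clip(np.digitize(x_next, edges) - 1, 0, K - 1)
    counts = np.zeros((K, K))
    np.add.at(counts, (i, j), 1.0)
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Ensemble data from the logistic map x' = 4x(1-x).
rng = np.random.default_rng(3)
x = rng.random(200_000)
for _ in range(30):            # burn-in toward the invariant density
    x = 4.0 * x * (1.0 - x)
P = ulam_matrix(x, 4.0 * x * (1.0 - x), K=100)
```

Each row of $P$ is a discretized conditional density, so the rows sum to one.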
The finite-dimensional estimate of the Frobenius-Perron operator can be represented by a matrix $P$. A comparison of the $P$ matrices estimated by the two methods can be found in Figure~\ref{fig:Pmatrix}. Furthermore, the left eigenvector corresponding to the eigenvalue $1$ of $P$ can be used to estimate the invariant density of the map. Figure~\ref{fig:lestEigs} shows this eigenvector for the $P$ matrix calculated by the histogram method and by the KDE.
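The left eigenvector at eigenvalue $1$ can be extracted, for instance, by power iteration (an illustrative sketch; the toy $2\times 2$ chain below, with known stationary law $(5/6, 1/6)$, stands in for the large $P$ matrix):

```python
import numpy as np

def stationary_left_eigvec(P, iters=5000):
    """Power iteration v <- vP for the left eigenvector of a row-stochastic
    matrix P at eigenvalue 1, i.e. the discretized invariant density."""
    v = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        v = v @ P
        v /= v.sum()
    return v

# Two-state Markov chain whose stationary distribution is (5/6, 1/6).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = stationary_left_eigvec(P)
```
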
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45
\textwidth]{ MatrixP_Hist100}
\includegraphics[width=0.45
\textwidth]{ MatrixP_KDE}
\caption{The matrix $P$ which estimates the Frobenius-Perron operator on a finite domain. Left: the $P$ matrix calculated by the histogram method; right: the $P$ matrix calculated by the KDE. }
\label{fig:Pmatrix}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45
\textwidth]{ Hist_K100EigVec}
\includegraphics[width=0.45
\textwidth]{ KDEEigVec}
\caption{The left eigenvector corresponding to the eigenvalue $1$ of the matrix $P$, which can be used to estimate the invariant density of the logistic map. Left: the eigenvector of the $P$ matrix calculated by the histogram method. Right: the eigenvector of the $P$ matrix calculated by the KDE.}
\label{fig:lestEigs}
\end{figure}
\section{Conclusion}
The main contribution of this paper is a probability density viewpoint on estimating the Frobenius–Perron operator, which enables us to bring the rich existing formalism of statistical density estimation to bear on finding an efficient estimator. The theory suggests that kernel density estimation is more efficient than the histogram method underlying the standard Ulam method. Accordingly, this paper develops a kernel density estimation method for the transition probability that defines the Frobenius–Perron operator, from empirical time-series ensemble data of a dynamical system. To date, the literature has mostly used the Ulam method for estimating the transfer operator; this study offers a more accurate estimation based on KDE. Our Bayesian interpretation of the Frobenius–Perron operator identifies the operator in terms of a conditional probability density, which allows us to bring density estimation theory into play. We showed at the beginning of this article how the Ulam-Galerkin method can be interpreted as a histogram density estimation method. Theory and numerical results suggest that KDE yields a better approximation of probability densities, and hence that KDE may be used for a finite approximation of the Frobenius–Perron operator. Finally, we showed that the KDE-based approximation is a better estimator of the operator than the histogram-based Ulam-Galerkin method.
As a result of this research, we propose the possibility of introducing other density estimation methods into this field. It would be fruitful to pursue further research on KDE-based estimation of the Frobenius–Perron operator for high-dimensional maps.
\newpage
\nonumsection{Acknowledgments} \noindent Erik Bollt gratefully acknowledges funding from the Army Research Office (ARO), the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF) and National Institutes of Health (NIH) CRNS program, and the Office of Naval Research (ONR) during the period of this work.
\bibliographystyle{ws-ijbc}
\typeout{}
| {
"timestamp": "2022-10-10T02:00:10",
"yymm": "2210",
"arxiv_id": "2210.03124",
"language": "en",
"url": "https://arxiv.org/abs/2210.03124",
"abstract": "Inference of transfer operators from data is often formulated as a classical problem that hinges on the Ulam method. The conventional description, known as the Ulam-Galerkin method, involves projecting onto basis functions represented as characteristic functions supported over a fine grid of rectangles. From this perspective, the Ulam-Galerkin approach can be interpreted as density estimation using the histogram method. In this study, we recast the problem within the framework of statistical density estimation. This alternative perspective allows for an explicit and rigorous analysis of bias and variance, thereby facilitating a discussion on the mean square error. Through comprehensive examples utilizing the logistic map and a Markov map, we demonstrate the validity and effectiveness of this approach in estimating the eigenvectors of the Frobenius-Perron operator. We compare the performance of Histogram Density Estimation(HDE) and Kernel Density Estimation(KDE) methods and find that KDE generally outperforms HDE in terms of accuracy. However, it is important to note that KDE exhibits limitations around boundary points and jumps. Based on our research findings, we suggest the possibility of incorporating other density estimation methods into this field and propose future investigations into the application of KDE-based estimation for high-dimensional maps. These findings provide valuable insights for researchers and practitioners working on estimating the Frobenius-Perron operator and highlight the potential of density estimation techniques in this area of study.Keywords: Transfer Operators; Frobenius-Perron operator; probability density estimation; Ulam-Galerkin method; Kernel Density Estimation; Histogram Density Estimation.",
"subjects": "Machine Learning (cs.LG); Information Theory (cs.IT); Dynamical Systems (math.DS); Statistics Theory (math.ST); Computation (stat.CO)",
"title": "Learning Transfer Operators by Kernel Density Estimation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9817357205793903,
"lm_q2_score": 0.7217432182679957,
"lm_q1q2_score": 0.7085610984596189
} |
https://arxiv.org/abs/1507.07765 | Percolation and isoperimetry on roughly transitive graphs | In this paper we study percolation on a roughly transitive graph G with polynomial growth and isoperimetric dimension larger than one. For these graphs we are able to prove that p_c < 1, or in other words, that there exists a percolation phase. The main results of the article work for both dependent and independent percolation processes, since they are based on a quite robust renormalization technique. When G is transitive, the fact that p_c < 1 was already known before. But even in that case our proof yields some new results and it is entirely probabilistic, not involving the use of Gromov's theorem on groups of polynomial growth. We finish the paper giving some examples of dependent percolation for which our results apply. | \section{Introduction}
Since its introduction by Broadbent and Hammersley in \cite{PSP:2048852}, the model of independent percolation has received major attention from the physical and mathematical communities.
From the perspective of applications, it has the potential to model several different systems, from the flow of fluids in porous media, to the transmission of information on networks or diseases on populations.
On the theoretical side, this model has been source of challenging questions, and has given rise to beautiful theories.
For a mathematical background of the model on $\mathbb{Z}^d$, see \cite{Gri99} and \cite{bollobas2006percolation} and the references therein.
Besides the classical independent model on $\mathbb{Z}^d$, this study has been generalized by both considering the model on more general graphs, see for instance \cite{LP11}, \cite{BS96}, \cite{HJ06} and \cite{zbMATH05636419}, or by adding dependence to the percolation configuration, see \cite{zbMATH01018381}, \cite{zbMATH00846204}, \cite{Szn09} and \cite{TW10b} for some examples of such works.
In this article, we study vertex percolation on roughly transitive graphs, with or without dependence, showing the existence of a phase transition for the process as we vary the density of open vertices.
Another important contribution of this work is to help develop multi-scale renormalization on roughly transitive graphs of polynomial growth.
Renormalization is a powerful tool, which has been used to analyze several stochastic processes.
However this technique has limitations that often restrict its use to the lattice $\mathbb{Z}^d$.
\subsection{Graphs under consideration}
\nc{c:rough_trans}
In this paper we consider both dependent and independent percolation on roughly transitive graphs.
To define this concept precisely, we need to first introduce the notion of rough isometries.
Given graphs $G$, $G'$ and a constant $\uc{c:rough_trans} \geq 1$, a map $\phi:G \to G'$ is said to be a $\uc{c:rough_trans}$-rough isometry if for any~$x, y \in G$ we have
\begin{equation}
\label{e:rough_iso}
\frac{1}{\uc{c:rough_trans}} \; d\big( x, y \big) - 1 < d(\phi(x), \phi(y)) \leq \uc{c:rough_trans} \; d\big( x, y \big)
\end{equation}
and for any $y \in G'$, there exists some $x \in G$ such that
\begin{equation}
\label{e:rough_surj}
d(\phi(x), y) \leq \uc{c:rough_trans}.
\end{equation}
We say that a given graph $G$ is $\uc{c:rough_trans}$-roughly transitive if for any $x, y \in G$ there exists a $\uc{c:rough_trans}$-rough isometry $\phi$ satisfying $\phi(x) = y$.
\begin{Remark}
There are other (equivalent) definitions of rough isometry, see e.g., Definition~3.7 of \cite{W00}.
In this work, it is convenient to use \eqref{e:rough_iso} together with \eqref{e:rough_surj} as used for example in \cite{elek}.
\end{Remark}
\bigskip
In \cite{BS96}, Benjamini and Schramm suggested a connection between the existence of a phase transition for independent percolation on a given graph and its isoperimetric dimension.
\begin{Definition}
We say that $ G=(V,E)$ satisfies the isoperimetric inequality $\mathcal{I}(c_i, d_i)$ if
\begin{equation}\label{eq:isoperimetric}
\textnormal{for any finite set }A\subseteq V, \textnormal{ we have }|\partial A|\geq c_i |A|^{\frac{d_i-1}{d_i}}.
\end{equation}
\end{Definition}
For example, it is not difficult to see that $\mathbb{Z}^d$ satisfies the $\mathcal{I}(c, d)$ for some $c > 0$, see Theorem 6.37 of \cite{LP11}, p. 210.
In \cite[Question~2]{BS96}, Benjamini and Schramm asked the following:
\begin{Question}\label{question:BS96}
Is it true that if $G$ satisfies $\mathcal{I}(c_i, d_i)$ for some $d_i > 1$ then $p_c(G) < 1$?
\end{Question}
See the precise definition of $p_c(G)$ in \eqref{e:p_c} below.
In this article we give a positive answer to the above question in the case of roughly transitive graphs of polynomial growth.
Isoperimetric conditions and independent percolation have been studied in various works.
In \cite{BS96}, the authors proved that $p_c(G) < 1$ when $G$ has \emph{infinite isoperimetric dimension} (meaning that \eqref{eq:isoperimetric} holds with $(d_i - 1)/d_i$ replaced by $1$).
In \cite{zbMATH05229215}, Kozma showed that $p_c(G) < 1$ when $G$ is a \emph{planar} graph with isoperimetric dimension strictly larger than one, polynomial growth and no accumulation points.
In \cite{2014arXiv1409.5923T}, a stronger version of \eqref{eq:isoperimetric} called \emph{local isoperimetric inequality} was shown to imply $p_c(G) < 1$ for graphs with polynomial growth.
Some arguments in this paper are very similar in spirit to those of \cite{2014arXiv1409.5923T}, the main novelty being that we can replace the stronger \emph{local isoperimetric inequality} of \cite{2014arXiv1409.5923T} by the classical \eqref{eq:isoperimetric} in the case of roughly transitive graphs.
In this paper we deal with graphs with polynomial growth, as specified in the following.
\begin{Definition}
Given constants $c_u, d_u > 0$, we say that $G$ has \emph{polynomial growth} (denoted by $\mathcal{V}(c_u, d_u)$) if for every $r \geq 1$ and $x \in V$
\begin{equation}
\label{eq:volume_upper_bound}
|B(x,r)| \leq c_u r^{d_u}.
\end{equation}
\end{Definition}
\subsection{Main result}
The first result we present here is the following.
\begin{Theorem}
\label{thm:p_c_Bernoulli}
If $G$ is a roughly transitive graph of polynomial growth satisfying \eqref{eq:isoperimetric}, then $p_c(G) < 1$.
This gives a positive answer to Question~\ref{question:BS96} in this special case.
\end{Theorem}
\begin{Remark}
Let us note that whenever $p_c(G) < 1$ and $G$ has bounded degree, then the graph $G$ also undergoes a non-trivial phase transition for the Ising model, the Widom-Rowlinson model and the beach model.
This follows from Theorems~1.1 and 1.2 of \cite{MR1765172}.
\end{Remark}
The above result is a consequence of our Theorem~\ref{thm:p_c<1_dependent} below, which applies to both dependent and independent percolation processes.
Roughly speaking, Theorem~\ref{thm:p_c<1_dependent} states that, if the dependencies decay fast enough with the distance, then the percolation undergoes a non-trivial phase transition.
To be more precise, we need to define what we mean by ``decay of dependence''.
Let $\P$ denote any probability measure on the state space $\Omega:=\{0,1\}^V$, endowed with the $\sigma$-algebra generated by the canonical projections $Y_x:\Omega \to \{0,1\}$, defined by $Y_x(\omega):=\omega(x)$, for $x \in V$.
\begin{Definition}\label{def:decoupling}
We shall say that $\P$ satisfies the \emph{decoupling inequality} $ \mathcal{D}(\alpha,c_\alpha)$ (where $\alpha>0$ is a fixed parameter) if for any $x \in V$, $r\geq 1$ and any two events $ \mathcal{G}$ and $\mathcal{G}'$ such that
\[
\mathcal{G}\in \sigma\bigl ( Y_z, z\in B(x,r)\bigr ) \qquad \text{and}\qquad \mathcal{G}'\in \sigma\bigl ( Y_w, w\notin B(x,2r)\bigr ),
\]
we have
\[
\P(\mathcal{G} \cap \mathcal{G}')\leq \bigl (\P (\mathcal{G}) + c_\alpha r^{-\alpha}\bigr ) \P(\mathcal{G}').
\]
\end{Definition}
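Before proceeding, let us record the trivial sanity check that an independent (Bernoulli product) law satisfies this condition: if $\P$ is a product measure, then $\mathcal{G}$ and $\mathcal{G}'$ above depend on disjoint sets of vertices, hence
\[
\P(\mathcal{G} \cap \mathcal{G}') = \P(\mathcal{G})\, \P(\mathcal{G}') \leq \bigl(\P(\mathcal{G}) + c_\alpha r^{-\alpha}\bigr) \P(\mathcal{G}'),
\]
so $\mathcal{D}(\alpha, c_\alpha)$ holds for every $\alpha > 0$ and every $c_\alpha \geq 0$; in particular, the dependent results below cover independent percolation.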
We are now in position to state the following.
\begin{Theorem}\label{thm:p_c<1_dependent}
Let $G$ be a $\uc{c:rough_trans}$-roughly transitive graph satisfying $\mathcal{V}(c_u, d_u)$ and $\mathcal{I}(c_i, d_i)$ with $d_i > 1$, and assume that the law $\P$ satisfies $\mathcal{D}(\alpha, c_\alpha)$ with $\alpha > \alpha_{\ast}$ (see Remark~\ref{remark:alpha_star} for the definition of $\alpha_{\ast}$).
Then there exists a $p_* < 1$, depending only on $\uc{c:rough_trans}, c_i, d_i, c_u$ and $d_u$, such that if $\inf_{x \in V} \mathbb{P}[Y_x = 1] > p_*$, then $G$ contains almost surely a unique open infinite cluster.
Moreover, for any fixed value $\theta > 0$, if the marginal probabilities $\mathbb{P}[Y_x = 1]$ are large enough,
\begin{equation}
\label{e:decay_second}
\lim_{v \to \infty}v^\theta \P[v < |\mathcal{C}_o| < \infty] = 0,
\end{equation}
where $\mathcal{C}_o$ denotes the open connected component containing the origin.
\end{Theorem}
We also prove a theorem establishing the existence of a non-trivial sub-critical phase.
This result is simpler to prove but helps to establish a more complete picture of phase transition for dependent percolation on $G$.
\begin{Theorem}\label{thm:p_c>0_dependent}
Let $G$ be a graph satisfying $\mathcal{V}(c_u, d_u)$.
Moreover, let $\P$ be a probability measure that satisfies $\mathcal{D}(\alpha, c_\alpha)$ with $\alpha > \alpha_{\ast \ast}$, where $\alpha_{\ast \ast} > 0$ is defined in Remark~\ref{remark:alpha_double_star}.
Then there exists a $p_{**} > 0$, depending only on $c_u$ and $d_u$, such that if $\sup_{x \in V} \mathbb{P}[Y_x = 1] < p_{**}$, then the graph contains almost surely no infinite open cluster.
Moreover, for any fixed $\theta > 0$, if the marginal probabilities $\mathbb{P}[Y_x = 1]$ are small enough,
\[
\lim_{r \to \infty} r^\theta \P[\diam(\mathcal{C}_x) > r] = 0,
\]
where $\mathcal{C}_x$ denotes the open connected component containing a fixed site $x \in V$.
\end{Theorem}
\begin{Remark}
\begin{enumerate}[\quad a)]
\item Note that Theorem~\ref{thm:p_c>0_dependent} does not require $G$ to be roughly transitive.
\item Moreover, this theorem does not follow from a simple path counting argument because of the dependence present in the law $\mathbb{P}$.
\item Given the above results, a natural question would be whether the condition $\mathcal{D}(\alpha, c_\alpha)$ on the decay of dependence of $\mathbb{P}$ could be weakened.
Of course, the parameters $\alpha_{\ast}$ and $\alpha_{\ast \ast}$ that appear above are not supposed to be sharp. However, let us observe that if the exponent $\alpha$ governing the decay of dependence of the law $\mathbb{P}$ is small enough, then there are counterexamples showing that Theorem~\ref{thm:p_c<1_dependent} does not hold, see Subsection~\ref{ss:elipses}.
\end{enumerate}
\end{Remark}
\subsection{Transitive graphs}
We can specialize our main results to the special case of transitive graphs of polynomial growth.
It is important to observe that hypothesis \eqref{eq:isoperimetric} is not necessary in this case, since it can be deduced, for instance, from \cite{CPC:1771424}.
This yields another consequence of our main result, which was already known before.
\begin{Corollary}\label{thm:p_c<1}
Let $G=(V,E)$ be a transitive graph satisfying $\mathcal{V}(c_u, d_u)$ with growth faster than linear. Then $p_c(G) < 1$.
\end{Corollary}
Although the above result was already known, as we discuss in detail in the next subsection, it is worth mentioning that our proof does not make use of Gromov's theorem on groups of polynomial growth, relying instead on probabilistic tools only.
\paragraph{Previously known results}
Percolation on transitive graphs has been intensively studied in the last decades specially for the independent case.
Let us now mention some of the works that more closely relate to the current article.
In \cite{lyons_1995}, Russell Lyons proved that for independent percolation, $p_c(G) < 1$ if $G$ is a group of exponential growth (see also \cite[Chapter 7]{LP11}).
The case of Cayley graphs of finitely presented groups with one end has been dealt with in \cite{zbMATH01224777} also in the independent case.
A similar question has also been considered on the Grigorchuk group, an example of group with intermediate growth (see \cite{percolation_Grigorchuk}).
In Corollary~3.2 of \cite{zbMATH06138623} it has been proved that $p_c(G) < 1$ for transitive graphs $G$ satisfying another isoperimetric inequality, see (2.4) and Definition~2.3 of \cite{zbMATH06138623}.
If $G$ is a transitive amenable graph, it was proved in \cite{BK89} that if for some $p$ there exists an infinite open cluster, then it is almost surely unique, see also Theorem~2.4 of \cite{HJ06}.
\vspace{4mm}
The most important relation between previously known results and our work comes at the intersection with Corollary~\ref{thm:p_c<1}, since transitive graphs can be associated with a group of automorphisms, benefiting therefore from important results on group theory.
More precisely, if $G$ is a transitive graph of polynomial growth, then $G$ is quasi-isometric to a Cayley graph of a nilpotent group (see \cite{trofimov}, \cite{losert}, Theorem~4 of \cite{Sabidussi64} or \cite[Theorem 2]{note_authomorphisms}).
This yields two different proofs of Corollary~\ref{thm:p_c<1}.
Let $G$ be the Cayley graph of a nilpotent group with super-linear growth.
Then
\begin{enumerate} [\quad a)]
\item We can use Theorem~7.19 of \cite{LP11} to conclude that there exists a subset of $G$ which is quasi-isometric to $\mathbb{Z}^2$, therefore $p_c(G) < 1$ as desired.
This argument has the advantage that it allows for duality arguments that can work even for dependent percolation.
\item Alternatively, we observe that $G$ is finitely presented (see Exercise~4.3 of \cite{pete_book}) and use Theorem~9 of \cite{zbMATH01224777} to conclude that the number of cut-sets of size $n$ separating a fixed vertex from infinity is at most $c^n$.
Then a simple Peierls-type argument can show that $p_c < 1$.
The added benefit of this approach is that it gives an exponential bound on the probability \eqref{e:decay_second} for Bernoulli percolation on transitive graphs.
\end{enumerate}
\begin{Remark}
\label{r:advantages}
In light of the above, let us emphasize some advantages of our approach.
\begin{enumerate} [\quad a)]
\item For the case of transitive graphs, our proof does not make use of Gromov's Theorem on groups of polynomial growth.
\item To the best of our knowledge, the bound in \eqref{e:decay_second} does not seem to follow from the above arguments in the case of dependent percolation on transitive graphs.
\item Uniqueness of the infinite cluster obtained in Theorem~\ref{thm:p_c<1_dependent} does not depend on the translation invariance of the law $\mathbb{P}$ as is the case with the argument in \cite{BK89}.
\item Note that being roughly isometric to each other defines an equivalence relation over the set of graphs.
However, it is important to notice that the distortion constant $\uc{c:rough_trans}$ worsens as we compose rough isometries.
Therefore, for a given roughly transitive graph there is not necessarily an analogue of the group of automorphisms that is fundamental in the case of transitive graphs.
\item We strongly believe that the techniques we develop here could be easily extended in order to work for weaker notions of transitivity, for example by weakening the notion of rough isometries.
We however kept the current presentation in order to avoid an overly complicated exposition.
\end{enumerate}
\end{Remark}
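To make item d) of the above remark quantitative, here is an illustrative computation (ours, not taken from the body of the paper): if $\phi:G\to G'$ is a $c$-rough isometry and $\psi:G'\to G''$ is a $c'$-rough isometry, then for all $x,y\in G$,
\[
d\bigl(\psi(\phi(x)), \psi(\phi(y))\bigr) \leq c c'\, d(x,y)
\qquad \text{and} \qquad
d\bigl(\psi(\phi(x)), \psi(\phi(y))\bigr) > \frac{d(x,y)}{c c'} - \frac{1}{c'} - 1,
\]
and one can check that $\psi \circ \phi$ is a $C$-rough isometry with $C = c c'(1+c')$. The additive error in \eqref{e:rough_iso} thus forces the multiplicative constant to degrade faster than $cc'$ under repeated composition.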
We thank Yuval Peres, G\'abor Pete, Russell Lyons and Itai Benjamini for bringing some of the above results to our attention.
\subsection{Idea of the proofs}
The proofs of Theorems~\ref{thm:p_c<1} and \ref{thm:p_c<1_dependent} follow a renormalization scheme which allows us to bound the probability of certain ``bad events'' as the scale size grows.
In this section we will focus on the case of Theorem~\ref{thm:p_c<1_dependent} which is the more elaborate one.
For any $x\in V(G)$ and $L > 0$ set
\begin{equation*}
\S(x, L) =
\begin{array}{c}
\text{``there exist two large connected sets in $B(x, 3L)$,}\\
\text{which cannot be joined by an open path in $B(x, 3L^2)$''}.
\end{array}
\end{equation*}
This will play the role of the ``bad event'' in the proof of Theorem~\ref{thm:p_c<1_dependent}, see \eqref{e:separation_event} for a precise definition.
The main advantage of the above event is that it plays two complementary roles.
First, the events $\S(x,L)$ are hierarchical (see the \nameref{lemma:joao}~\ref{lemma:joao}), therefore it is possible to bound their probabilities using inductive arguments coming from a multi-scale renormalization procedure.
Secondly, these events are rich enough that, once we show that $\mathbb{P}[\S(x, L)]$ decays fast as $L$ goes to infinity, we can derive the existence of a unique open infinite connected component, as desired (see Lemma~\ref{lemma:lego}).
For the inductive part of the argument, we need to introduce a rapidly growing sequence $(L_k)_{k \geq 1}$ of scales, see \eqref{eq:inductive_def_L_k}.
As we mentioned above, our objective is to show that for large enough values of the percolation parameter $p$, the probabilities $p_k = \mathbb{P}[\S(o, L_k)]$ of observing a separation event at scale $k$ go to zero fast as $k$ goes to infinity.
\vspace{4mm}
The proof of our main results can then be described through three steps:
\begin{enumerate}[\quad a)]
\item We first show that $\S(o, L_{k+1})$ implies the occurrence of $\S(y_i, L_k)$ for several points $y_i \in B(o, 2L_{k+1}^2)$, see the \nameref{lemma:joao}~\ref{lemma:joao}.
Note that the event $\S(y_i, L_k)$ takes place in the smaller scale $L_k$.
\item Derive from the above a recursive inequality between $p_{k+1}$ and $p_k$, to show that if $p$ is close enough to $1$, then $p_k$ goes to zero fast as $k$ goes to infinity, see Section~\ref{s:reduction}.
\item Finally, in Lemma~\ref{lemma:lego} we show that a fast decay of $p_k$ implies our main result.
\end{enumerate}
Although all of the above steps are essential in establishing Theorems~\ref{thm:p_c<1} and \ref{thm:p_c<1_dependent}, we note that items $b)$ and $c)$ follow in the same spirit as what has been done in \cite{2014arXiv1409.5923T}.
For the sake of completeness we also include their proofs in the current paper.
However it is step $a)$ that contains the main novelty of the current work, see the \nameref{lemma:joao}~\ref{lemma:joao}.
It is this lemma that allows us to weaken the \emph{local isoperimetric inequality} of \cite{2014arXiv1409.5923T} to the canonical definition \eqref{eq:isoperimetric} for roughly transitive graphs of polynomial growth.
\subsection{Sketch of the proof of the \nameref{lemma:joao}}
The main new ingredient of this paper is the \nameref{lemma:joao} proved in Section~\ref{s:proof_joao}.
Setting up a renormalization scheme on a graph that is not $\mathbb{Z}^d$ requires a good understanding of the geometry of the graph in question and it is during the proof of \nameref{lemma:joao} that this difficulty is revealed.
For this proof we make strong use of the isoperimetric inequality and rough transitivity of $G$.
The proof of the \nameref{lemma:joao} follows three main steps.
Recall that we are assuming the occurrence of $\S(o, L_{k+1})$, which provides us with two large sets $A^0$, $A^1 \subseteq B(x, L_{k+1})$ which cannot be connected by an open path.
Our aim is to show the existence of such separation events in various balls of size $L_k$ inside $B(o, 2L_{k+1}^2)$.
\begin{enumerate}[\quad i)]
\item The first step of the proof is to reduce the task of finding separation events $\S(y_i, L_k)$ to that of connecting $A^0$ with $A^1$ through several paths.
This is the content of Lemma~\ref{lemma:new_lemma_3.2}.
\item Therefore, we can assume by contradiction that there exist two sets $A^0$ and $A^1$ which cannot be connected by several paths as above.
However, the isoperimetric inequality \eqref{eq:isoperimetric} guarantees the existence of several disjoint paths (not necessarily open) connecting $A^0$ to distance $L_{k+1}^2$ (similarly for $A^1$), see Lemma~\ref{lemma:N''}.
\item Roughly speaking, in the last step we use the existence of $A^0$ and $A^1$ above in order to embed a binary tree into $G$, which would contradict the polynomial growth of this graph.
We start with the ball $B(o, 3L_{k+1})$ (where the sets $A^0$ and $A^1$ reside) and two paths from the previous step as a building block.
They will respectively represent the root $\varnothing$ of the binary tree and the edges connecting $\varnothing$ to its descendants.
Finally we use the rough transitivity of $G$ to replicate this pattern.
Arguing recursively, we obtain the desired embedding, which leads to a contradiction with the polynomial growth of $G$.
\end{enumerate}
Steps $i)$ and $iii)$ are illustrated in Figures~\ref{f:six_balls} and \ref{f:crab_party} respectively.
\bigskip
This paper is organized as follows. In Section~\ref{s:notation} we introduce some preliminary notation and prove an auxiliary result, followed by Section~\ref{s:proof_pc>0_dep}, where we show Theorem~\ref{thm:p_c>0_dependent}.
In Section~\ref{s:reduction} we define the separation events $S(x, L)$ and state two fundamental intermediate results (Lemmas~\ref{lemma:joao} and \ref{lemma:lego}).
Then, assuming their validity, we prove Theorems~\ref{thm:p_c<1} and \ref{thm:p_c<1_dependent}; this corresponds to \textit{Step b)} in the outline of the proof of our main results.
Section~\ref{s:proof_joao} is devoted to proving the \nameref{lemma:joao} and is split into three subsections.
Each of these subsections corresponds to one step in the above sketch.
Finally we show Lemma~\ref{lemma:lego} in Section~\ref{s:lego}, and we conclude with Section \ref{s:examples}, which is devoted to giving some examples of dependent percolation processes for which our results apply.
\subsection*{Acknowledgments}
We are grateful to Yuval Peres, G\'abor Pete, Russell Lyons and Itai Benjamini for bringing to our attention some fundamental references and suggestions.
Thanks also to Mikhail Belolipetski for the fruitful discussions.
A.T. is grateful to CNPq for its financial contribution to this work through the grants 306348/2012-8 and 478577/2012-5.
This work began during a visit of E.C.\ to IMPA, which she thanks for its support and hospitality.
\section{Notation and auxiliary results}
\label{s:notation}
In this section we introduce some notation and prove some auxiliary results that will be useful throughout the paper.
\subsection{Notation}
For every finite set $A\subset V$ we denote by $|A|$ its cardinality, and by $\partial A$ its edge boundary:
\[
\partial A := \big\{ \{x,y\}\in E \ : \ x\in A \textnormal{ and }y\notin A \big\}.
\]
Analogously, its internal vertex boundary is denoted by
\[
\partial_i A := \big\{ x \in A \ : \text{ there exists $y \in V \setminus A$ such that $\{x, y\} \in E$} \big\}.
\]
For any two vertices $x,y\in V$ we will denote by $d(x,y)$ the \emph{graph distance} between $x$ and $y$, i.e., the minimum number of edges contained in a path that goes from $x$ to $y$.
Analogously, for any two sets $A,B\subset V$ we set
\[
d(A,B):= \min \{ d(a,b) \ : \ a\in A, b\in B\}.
\]
By $B(x, R)$ we denote the ball centered at $x$ with radius $R \geq 0$ in the graph distance, and we define the growth function
\begin{equation}
\bar{v}_G(r) = \sup_{x \in V} |B(x, r)|,
\end{equation}
where we may omit the subscript in $\bar{v}_G$ if it is clear from the context.
\begin{Remark}\label{remark:note_v}
Note that by \eqref{eq:volume_upper_bound} we have $\bar{v}_G(r)\leq c_u r^{d_u}$.
\end{Remark}
Independent percolation (sometimes called Bernoulli percolation) can be described as follows.
To each vertex $x \in V$ we associate an independent coin toss with success probability $p \in [0,1]$; in case of success we say that the vertex is \emph{open}, otherwise we call it \emph{closed}.
This gives rise to a random sub-graph $\mathbb{G}_p$ of $G$, induced by the set of open vertices.
One of the most interesting features of this model is that for several graphs it presents a phase transition at a critical value $p_c \in (0,1)$.
To make the above statement more precise, we define the critical value $p_c = p_c(G)$ as follows
\begin{equation}
\label{e:p_c}
p_c := \sup \{p\in [0,1] \ : \ \P [\textnormal{there exists an infinite cluster on $\mathbb{G}_p$}] = 0\}.
\end{equation}
It follows that, for $p < p_c$, the induced sub-graph contains almost surely only finite connected components, while for $p > p_c$ it contains almost surely an infinite cluster.
See \cite{Gri99} for a proof that $p_c \in (0, 1)$ for the case $V = \mathbb{Z}^d$, $d \geq 2$, endowed with edges connecting nearest-neighbor vertices.
\subsection{Some remarks about rough isometries}
The results presented here follow the exposition of \cite{elek}, to which the reader is referred for more details.
Suppose that $\phi: G \to G'$ is a $\uc{c:rough_trans}$-rough isometry.
Then for any set $A \subseteq G$ we have
\begin{equation}
\label{e:large_image}
|\phi(A)| \geq \frac{|A|}{\bar{v}_G(\uc{c:rough_trans})}.
\end{equation}
In fact, if $d(x, y) \geq \uc{c:rough_trans}$, then $\phi(x) \neq \phi(y)$ by \eqref{e:rough_iso}.
This implies that at most $\bar{v}_G(\uc{c:rough_trans})$ many points can share the same image under $\phi$ in $G'$.
Another interesting property of rough isometries is that they are almost invertible, in the following sense.
\begin{display}
\label{e:rough_inverse}
Given a $\uc{c:rough_trans}$-rough isometry $\phi:G \to G'$, there is a
$4\uc{c:rough_trans}^2 $-rough isometry $\psi:G' \to G$ such that $d(x', \phi \circ \psi(x')) \leq \uc{c:rough_trans}$ for any $x' \in G'$.
\end{display}
Indeed, let us define $\psi(x')$ as the point $x \in V$ such that $d(x', \phi(x))$ is minimized (choosing arbitrarily in case of ties).
First of all, observe by \eqref{e:rough_surj} that $d(x', \phi \circ \psi(x')) \leq \uc{c:rough_trans}$.
We now show that $\psi$ is a $4 \uc{c:rough_trans}^2$-rough isometry and for this fix $x', y' \in G'$.
We can assume that $x' \neq y'$ (the other case is trivial), then one estimates
\[
\begin{split}
\frac{1}{4 \uc{c:rough_trans}^2} \; d\big( x', y' \big) - 1 & \leq \frac{1}{4 \uc{c:rough_trans}^2} \Big( d\big ( x', \phi \circ \psi(x')\big )+d\big ( y', \phi \circ \psi(y')\big )+ d\big( \phi \circ \psi(x'), \phi \circ \psi(y') \big) \Big) - 1\\
&
\stackrel{\eqref{e:rough_surj}}{\leq} \frac{1}{4 \uc{c:rough_trans}^2} \Big( d\big( \phi \circ \psi(x'), \phi \circ \psi(y') \big) + 2 \uc{c:rough_trans} \Big) - 1 \\
& \overset{\eqref{e:rough_iso}}< \frac{1}{4\uc{c:rough_trans}^2} d\big( \psi(x'), \psi(y') \big)+\frac{1}{\uc{c:rough_trans}}-1\\
& \stackrel{\uc{c:rough_trans} \geq 1}{\leq} d\big( \psi(x'), \psi(y') \big) \overset{\eqref{e:rough_iso}}\leq \uc{c:rough_trans} d\big( \phi \circ \psi(x'), \phi \circ \psi(y') \big) + \uc{c:rough_trans}\\
& \leq \uc{c:rough_trans} \Big ( d\big (x',\phi \circ \psi(x')\big )+d\big ( \phi \circ \psi(y'),y'\big ) +d\big( x', y' \big) \Big ) + \uc{c:rough_trans} \\%[3mm]
& \stackrel{\eqref{e:rough_surj}}{\leq} \uc{c:rough_trans} d\big( x', y' \big) + 2 \uc{c:rough_trans}^2 + \uc{c:rough_trans} \overset{\uc{c:rough_trans} \geq 1}\leq 4 \uc{c:rough_trans}^2 d(x', y').
\end{split}
\]
Also, if $x'$ belongs to the image of $\phi$, then $d(\phi(\psi(x')), x') = 0$, so that $d(\psi(\phi(x)), x) \leq \uc{c:rough_trans}$, and consequently \eqref{e:rough_surj} also holds for $\psi$.
This concludes the proof of \eqref{e:rough_inverse}.
\begin{Remark}
It would be tempting to say that every roughly transitive graph is roughly isomorphic to a transitive one.
This is however not the case, as shown in \cite[Proposition~2]{elek}.
We would like also to recall Open Question~2.3 of \cite{zbMATH06243708}: ``Is there an infinite $\uc{c:rough_trans}$-roughly transitive graph, which is not roughly-isometric to a homogeneous space, where a homogeneous space is a space with a transitive isometry group?''
On the other hand, recall from Remark~\ref{r:advantages} e) that the techniques presented here are believed to work beyond the case of roughly transitive graphs.
\end{Remark}
\subsection{Paving}
For the next lemma, we need also to introduce a lower bound on the volume growth of balls on $G$.
\begin{Definition}
Given constants $c_l, d_l > 0$, we say that $G$ satisfies $\mathcal{L}(c_l, d_l)$ if for every $r \geq 1$ and $x \in V$
\begin{equation}
\label{eq:volume_lower_bound}
|B(x,r)| \geq c_l r^{d_l}.
\end{equation}
\end{Definition}
Note that every infinite connected graph satisfies the above bound with $d_l = 1$, and we do not need more than this for our proofs.
However, if one knew in advance that the above condition holds for some $d_l > 1$, the final results would be improved through a smaller $\alpha_{\ast}$ or $\alpha_{\ast \ast}$, see \eqref{e:alpha_star}.
Proposition~\ref{lemma:paving} below allows us to cover a large ball of radius $r^2$ with smaller balls of radius $s$.
This can be thought of as a replacement for paving arguments for renormalization procedures on the lattice $\mathbb{Z}^d$.
\nc{c:paving}
\begin{Proposition}
\label{lemma:paving}
If $G = (V, E)$ satisfies the volume growth estimates $\mathcal{V}(c_u, d_u)$ and $\mathcal{L}(c_l, d_l)$, then there is a constant $\uc{c:paving} = \uc{c:paving}(c_l, d_l, c_u, d_u)$ such that
\begin{display}
\label{eq:paving}
for every $r \geq 1$ and $ s\in [ 2 , 2r^2]$, for every $x\in V$, there exists $K \subseteq B(x, 2r^2)$, \\
such that $B(x, r^2) \subseteq B(K, s)$ and $|K| \leq \uc{c:paving}\frac{r^{2d_u}}{s^{d_l}}$.
\end{display}
\end{Proposition}
\begin{proof}
Fix $s$ in the range given in the hypothesis and take the set $K \subseteq B(x, 2r^2)$ to be an arbitrary \emph{maximal} set satisfying
\begin{display}
\label{e:mutually_far}
$d(y, y') > s$ for every pair of distinct $y, y' \in K$.
\end{display}
Since $K$ is maximal, it is also an $s$-net of $B(x, 2r^2)$, or in other words $B(x, 2r^2) \subseteq B(K, s)$.
By \eqref{e:mutually_far}, all the balls $\{B\bigl (y, s/2 \bigr )\}_{y\in K}$ are disjoint.
Therefore, by the lower bound $\mathcal{L}(c_l, d_l)$ we obtain
\[
\bigl |B(K, s/2 )\bigr |=\sum_{y\in K} \bigl |B(y, s/2)\bigr | \geq |K| c_l \, \frac{ s^{d_l}}{2^{d_l} }.
\]
On the other hand, $|B(K, s/2 )|\leq |B(x, 2r^2+s)|\leq c_u (2r^2+s)^{d_u}.$
By putting together these two facts, we obtain that there is a positive constant $\uc{c:paving}=\uc{c:paving}(c_l,d_l,c_u,d_u)$ such that
\[
|K|\leq \uc{c:paving} \frac{r^{2d_u}}{s^{d_l}}.
\]
The above argument implies that there exists a set $K\subseteq B(x, 2r^2)$ such that the statement holds (for $r$ large enough).
By possibly increasing the constant $\uc{c:paving}$, we can assure that the statement holds for all $r \geq 1$.
\end{proof}
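As an aside, the greedy construction of the set $K$ in the proof above is completely elementary and can be illustrated by the following sketch (a toy illustration on a finite grid graph, not part of the paper; all function names are ours):

```python
from collections import deque

def ball(adj, x, r):
    """All vertices within graph distance r of x, via breadth-first search."""
    dist = {x: 0}
    queue = deque([x])
    while queue:
        v = queue.popleft()
        if dist[v] < r:
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
    return set(dist)

def greedy_net(adj, s):
    """Greedily build a maximal set K whose points are at pairwise distance > s.
    Maximality forces B(K, s) to cover the whole vertex set."""
    K, covered = [], set()
    for v in adj:                       # arbitrary scan order
        if v not in covered:
            K.append(v)
            covered |= ball(adj, v, s)  # forbid vertices at distance <= s from v
    return K

# Toy graph: the 12 x 12 grid with nearest-neighbour edges.
n, s = 12, 3
adj = {(i, j): [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < n and 0 <= j + dj < n]
       for i in range(n) for j in range(n)}
K = greedy_net(adj, s)
```

By construction, every vertex lies within distance $s$ of some point of $K$, while distinct points of $K$ are at distance greater than $s$, mirroring the two properties used in the proof.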
\subsection{Decoupling several events}
Our next statement is a consequence of the decoupling inequality from Definition~\ref{def:decoupling}.
\begin{Proposition}\label{claim:lemma4.2}
Suppose that $\P$ satisfies the decoupling inequality $\mathcal{D}(\alpha,c_\alpha)$ for some $\alpha>0$.
Now fix any value of $r\geq 1$, an integer $J'\geq 2$ and distinct points $y_1, y_2, \ldots, y_{J'} \in V$ such that
\[
\min_{1 \leq i < j \leq J'} d(y_i, y_j)\geq 3r.
\]
Then for any set of decreasing events $\mathcal{G}_1, \ldots, \mathcal{G}_{J'} $ such that $ \mathcal{G}_i\in \sigma (Y_z, z \in B(y_i, r))$ we have
\begin{equation}
\label{e:decouple_various}
\P\bigl ( \mathcal{G}_1\cap \ldots \cap \mathcal{G}_{J'}\bigr ) \leq \bigl ( \P(\mathcal{G}_1)+c_\alpha r^{-\alpha}\bigr )\dots \bigl ( \P(\mathcal{G}_{J'})+c_\alpha r^{-\alpha}\bigr ).
\end{equation}
\end{Proposition}
\begin{proof}
The proof is immediate from Definition \ref{def:decoupling}.
In fact, setting $\mathcal{G}' = \mathcal{G}_1 \cap \dots \cap \mathcal{G}_{J' - 1}$, the decoupling inequality yields
\begin{equation}
\P\bigl ( \mathcal{G}' \cap \mathcal{G}_{J'}\bigr ) \overset{\mathcal{D}(\alpha, c_\alpha)}\leq \left (\P ( \mathcal{G}_{J'})+c_\alpha r^{-\alpha}\right ) \P\bigl ( \mathcal{G}' \bigr ).
\end{equation}
By iterating this calculation, we obtain the statement.
\end{proof}
\section{Proof of Theorem~\ref{thm:p_c>0_dependent}}\label{s:proof_pc>0_dep}
This proof is inspired by previous renormalization procedures that were developed for $\mathbb{Z}^d$, see for instance \cite{Szn09}.
Here we adapt them to work on more general classes of graphs.
Although Theorem~\ref{thm:p_c>0_dependent} is not the central result of the current article, we present its proof first for two reasons.
First, it serves as a warm-up for the proof of Theorem~\ref{thm:p_c<1_dependent}; second, it contains some lemmas that will be useful later in the text.
Let us first define what we call the crossing event
\begin{equation}
\label{e:crossing_events}
\mathcal{T}(x, L) =
\bigg[
\begin{array}{c}
\text{there is an open path from $B(x,3L)$ to $\partial B(x, 3L^2)$}
\end{array}
\bigg].
\end{equation}
Our main argument shows the decay of the probabilities of $\mathcal{T}(x, L)$ following a renormalization scheme.
This procedure relates the probabilities of the above events at different scales, which we now introduce.
Given some $\gamma \geq 2$, we set
\begin{equation} \label{eq:inductive_def_L_k}
L_0:= 10 000, \textnormal{ and } L_{k+1} = L_k^\gamma, \ \textnormal{ for all } k \geq 0.
\end{equation}
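For concreteness, iterating the recursion gives the closed form
\[
L_k = L_0^{\gamma^k}, \qquad k \geq 0,
\]
so that, e.g., for $\gamma = 2$ one has $L_0 = 10^4$, $L_1 = 10^{8}$, $L_2 = 10^{16}$ and in general $L_k = 10^{4 \cdot 2^k}$; the scales grow doubly exponentially in $k$.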
\begin{Remark}
We have not yet chosen $\gamma$ because it will assume different values for the proofs of Theorems~\ref{thm:p_c<1_dependent} and \ref{thm:p_c>0_dependent}, see Remarks~\ref{remark:alpha_double_star} and \ref{remark:alpha_star} below.
\end{Remark}
In the next definition we introduce the concept of a \emph{cascading} family of events.
Intuitively speaking, it means that if some event occurs at a given scale $L_{k+1}$, then it must also occur several times at the previous scale $L_k$, in well-separated regions.
\nc{c:k_def_joao}
\begin{Definition}
\label{d:joao}
We say that a family of events $\big( \mathcal{E}(x, L_k) \big)_{x \in V, k \geq 1}$ is \emph{cascading} if for any $J \geq 1$ there exists $\uc{c:k_def_joao} = \uc{c:k_def_joao}(G, J, \gamma)$ for which the following holds.
Fix any $x \in V$, $k \geq \uc{c:k_def_joao}$ and a set $K \subseteq B(x, 2L_{k+1}^2)$ such that $B(K, L_k)$ covers $B(x, L_{k+1}^2)$.
Then, if the event $\mathcal{E}(x,L_{k+1})$ occurs, there exists a sequence
\begin{equation}
y_1, y_2, \ldots ,y_J \in K \text{ with $d(y_j, y_l) \geq 9L_k^2$ for all $j \neq l$}
\end{equation}
and such that $\mathcal{E}(y_j, L_k)$ occurs for all $j \leq J$.
\end{Definition}
The importance of the above definition is that it allows us to relate the probabilities of events $\mathcal{E}$ at different scales using recursive inequalities together with the decoupling provided by $\mathcal{D}(\alpha, c_\alpha)$.
\nc{c:cross_cas}
\begin{Lemma}
\label{l:cross_cas}
The family of events $\{\mathcal{T}(x, L)\}$ defined in \eqref{e:crossing_events} is cascading in the sense of Definition~\ref{d:joao}.
\end{Lemma}
\begin{proof}
We first fix $J \geq 1$ and let $\uc{c:cross_cas} \geq 1$ be such that for all $k\geq \uc{c:cross_cas}$ we have
\begin{equation}
\label{e:choose_c_cross_cas}
3L_{k+1} + 30J L_k^2 \leq L_{k+1}^2,
\end{equation}
which can be done by our choice of scales in \eqref{eq:inductive_def_L_k}.
To prove that the events $\mathcal{T}(x, L)$ are cascading, let us pick $k \geq \uc{c:cross_cas}$, $x \in V$ and assume that $\mathcal{T}(x, L_{k+1})$ occurs, that is
\begin{display}
there exists an open path $\sigma$ from $B(x, 3L_{k+1})$ to $\partial B(x, 3L_{k+1}^2)$.
\end{display}
Let us consider the concentric spheres $S_j = \partial B(x, 3L_{k+1} + (30j)L_k^2)$, for $j = 1, \dots, J$.
Note that all these spheres are contained in $B(x, L_{k+1}^2)$ by \eqref{e:choose_c_cross_cas}.
We now let $x_j$ be the first point of intersection of the path $\sigma$ with $S_j$.
Given the set $K$ as in Definition~\ref{d:joao} (or more precisely, such that $B(K, L_k)$ covers $B(x, L_{k+1}^2)$) we can pick $y_j \in K$ ($j \leq J$) such that $x_j \in B(y_j, L_k)$.
We see that the distance between two distinct $y_j$'s is at least
\begin{equation}
d(y_j, y_{j'}) \geq d(x_j, x_{j'}) - 2L_k \geq 30L_k^2 - 2L_k \geq 9L_k^2
\end{equation}
as required in Definition~\ref{d:joao}.
To finish the proof, observe that the open path $\sigma$ that guarantees the occurrence of $\mathcal{T}(x, L_{k+1})$ can be split into pieces that witness the occurrence of $\mathcal{T}(y_j, L_k)$ for $j \leq J$.
The piece corresponding to $j$ can be obtained, for instance, by following $\sigma$ from the first time it touches $x_j$ until it first exits $B(y_j, 3L_k^2)$.
This finishes the proof of the lemma.
\end{proof}
\begin{Remark}
In the next section we will turn to the proof of Theorem~\ref{thm:p_c<1} and for this we define another family of events (denoted by $\mathcal{S}(x,L)$) and prove a result which is analogous to Lemma~\ref{l:cross_cas}, namely the \nameref{lemma:joao}.
However, the proof that the events $\mathcal{S}(x, L)$ are cascading will be more involved.
It is important to observe that some definitions and arguments in this section were written in a form that can be reused in the proof of Theorem~\ref{thm:p_c<1_dependent}, rather than being optimized for brevity.
\end{Remark}
The importance of the definition of cascading events will become clear in the following bootstrapping result.
Given a scale sequence $(L_k)_{k \geq 0}$ as in \eqref{eq:inductive_def_L_k} and a family of events $\mathcal{E}(x, L_k)$, let
\begin{equation}
p_k^{\mathcal{E}} = \sup_{x \in V} \mathbb{P}(\mathcal{E}(x, L_k)).
\end{equation}
\nc{c:cascade_decays}
\nc{c:cascade_decays2}
\begin{Lemma}
\label{l:cascade_decays}
Suppose that $G$ satisfies $\mathcal{V}(c_u, d_u)$ and $\mathcal{L}(c_l, d_l)$, and that $\mathbb{P}$ satisfies the decoupling inequality $\mathcal{D}(\alpha, c_\alpha)$ for $\alpha > 2 \gamma d_u - d_l$.
Moreover, let $\mathcal{E}(x, L_k)$ be a family of events which is cascading in the sense of Definition~\ref{d:joao}. Then for any $\beta > 0$ there exists a constant $\uc{c:cascade_decays} = \uc{c:cascade_decays}(\beta, c_l, d_l, c_u, d_u) \geq 1$ such that
\begin{display}
\label{e:induct_cascade}
if for some $k_o \geq \uc{c:cascade_decays}$ we have $p^{\mathcal{E}}_{k_o} \leq L_{k_o}^{-\beta}$ then $p^{\mathcal{E}}_{k} \leq L_{k}^{-\beta}$ for all $k \geq k_o$.
\end{display}
\end{Lemma}
\begin{proof}
Given $\beta > 0$, let us pick some $\beta' > \max\{\alpha, \beta\}$ and an integer $J'$ such that
\[
J'\geq \max \left \{ 2, \frac{\gamma \beta'}{\alpha -(2 \gamma d_u - d_l)}\right \},
\]
which is possible since $\alpha > 2 \gamma d_u - d_l$.
By choosing $k$ large enough we can apply Proposition~\ref{lemma:paving} and set $s:=L_k$ and $r:=L_{k+1}$, which gives us a set $K\subseteq B(o, 2L_{k+1}^2)$ such that
\[
|K|\leq \uc{c:paving}L_k^{2\gamma d_u-d_l} \qquad \text{and} \qquad B(o, L_{k+1}^2)\subseteq B(K,L_k).
\]
Our purpose is to bound the probabilities $p^{\mathcal{E}}_k$ by induction.
Indeed, since the events $\mathcal{E}(x, L_k)$ are cascading, we have
\[
\begin{split}
p^{\mathcal{E}}_{k+1} & \leq \P \big[\exists \, y_1, \ldots , y_{J'}\in K\text{ at mutual distance at least }9L_k^2,\text{ s.t.\ }\mathcal{E}(y_i,L_k)\text{ occurs }\forall i\leq J' \big]\\
& \stackrel{\text{Prop.~}\ref{claim:lemma4.2}}{\leq} \left ( \uc{c:paving}L_k^{2\gamma d_u-d_l} \right )^{J'}\left ( p^{\mathcal{E}}_k+c_\alpha L_k^{-\alpha}\right )^{J'}.
\end{split}
\]
Assume, as in \eqref{e:induct_cascade}, that for some $k_o$ large enough we have $p^{\mathcal{E}}_{k_o} \leq L_{k_o}^{-\beta'}$; we need to show that this condition holds for all $k \geq k_o$.
Indeed, by using the bound
\[
( p^{\mathcal{E}}_k+c_\alpha L_k^{-\alpha} ) \leq (c_\alpha +1)L_k^{-\min\{\alpha,\beta'\}},
\]
together with our assumption $ p^{\mathcal{E}}_{k_o}\leq L_{k_o}^{-\beta'}$, we obtain:
\[
\frac{p^{\mathcal{E}}_{k_o+1}}{L_{k_o+1}^{-\beta'}}\leq \uc{c:paving}^{J'}(c_\alpha+1 )^{J'}L_{k_o}^{J'(2\gamma d_u-d_l)-J'\min\{\alpha,\beta'\}+\gamma \beta'}
\stackrel{\beta'\geq \alpha}{\leq} \uc{c:paving}^{J'}(c_\alpha+1 )^{J'}L_{k_o}^{-J'(\alpha -(2\gamma d_u-d_l))+\gamma \beta'}.
\]
By our choice of $J'$, the right-hand side is smaller than $1$ for all $k_o$ large enough, which proves \eqref{e:induct_cascade}.
\end{proof}
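To see the bootstrapping mechanism at work numerically, one can iterate the recursive bound from the proof in $\log_{10}$ scale. The parameter values below are toy choices of ours (not taken from the paper): they satisfy $\alpha > 2\gamma d_u - d_l$, and $J$ is taken strictly above the threshold $\max\{2, \gamma\beta'/(\alpha - (2\gamma d_u - d_l))\} = 8$ so that the prefactors are absorbed.

```python
import math

def log10_add(a, b):
    """Numerically stable log10(10**a + 10**b)."""
    m = max(a, b)
    return m + math.log10(10 ** (a - m) + 10 ** (b - m))

def iterate_bound(steps=10, gamma=2, d_u=2, d_l=1, alpha=10.0,
                  beta_p=12.0, J=9, c_pav=1.0, c_alpha=1.0):
    """Iterate, in log10 scale, the recursion
        p_{k+1} <= (c_pav * L_k**(2*gamma*d_u - d_l))**J
                   * (p_k + c_alpha * L_k**(-alpha))**J,
    starting at the boundary of the hypothesis, p_{k_o} = L_{k_o}**(-beta'),
    with L_{k_o} = 10**4.  Returns pairs (log10 p_k, log10 L_k**(-beta'))."""
    lam = 4.0               # log10 of L_{k_o} = 10 000
    q = -beta_p * lam       # log10 of p_{k_o}
    pairs = []
    for _ in range(steps):
        s = log10_add(q, math.log10(c_alpha) - alpha * lam)
        q = J * (math.log10(c_pav) + (2 * gamma * d_u - d_l) * lam) + J * s
        lam *= gamma        # log10 L_{k+1} = gamma * log10 L_k
        pairs.append((q, -beta_p * lam))
    return pairs
```

Along the iteration the bound $p_k \leq L_k^{-\beta'}$ is preserved at every scale, as the lemma predicts.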
\begin{Remark}\label{remark:alpha_double_star}
Recall that in Theorem~\ref{thm:p_c>0_dependent} we have used the value $\alpha_{\ast \ast}$ without giving its precise value.
We can now introduce
\begin{equation}
\label{e:alpha_double_star}
\alpha_{\ast\ast} := 4 d_u -d_l.
\end{equation}
\end{Remark}
\begin{proof}[Proof of Theorem~\ref{thm:p_c>0_dependent}]
We now fix $\gamma = 2$ and let the scale sequence $(L_k)_{k \geq 0}$ be defined as in \eqref{eq:inductive_def_L_k}.
Observe also that for $\alpha > \alpha_{\ast \ast}$ as in \eqref{e:alpha_double_star}, we have $\alpha > 2 \gamma d_u - d_l$ as required in Lemma~\ref{l:cascade_decays}.
Therefore, we are in a position to apply Lemma~\ref{l:cascade_decays} for some arbitrarily chosen $\beta > 2 \theta$.
In order to show that $p^{\mathcal{T}}_k \leq L_k^{-\beta}$ for large enough $k$, it suffices to show that $p^{\mathcal{T}}_{k_o} \leq L_{k_o}^{-\beta}$ for some $k_o \geq \uc{c:cascade_decays}$.
But by a simple union bound,
\begin{equation}
p^{\mathcal{T}}_{k_o} = \sup_{x \in V} \mathbb{P}(\mathcal{T}(x, L_{k_o})) \leq \sup_{x \in V} \mathbb{P} \big[ Y_z = 1 \text{ for some } z \in B(x, 3L_{k_o}) \big] \leq c_{d_u} L_{k_o}^{d_u} \sup_{x \in V} \mathbb{P}[Y_x = 1].
\end{equation}
Therefore, as soon as
\begin{equation}
\sup_{x \in V} \mathbb{P}[Y_x = 1] \leq \frac{1}{c_{d_u}} L_{k_o}^{- d_u - \beta},
\end{equation}
we have $p^{\mathcal{T}}_{k_o} \leq L_{k_o}^{-\beta}$ as desired, and therefore $p^{\mathcal{T}}_k \leq L_k^{-\beta}$ for all $k \geq k_o$.
To finish, given a large enough $r \geq 1$, take $\bar{k}$ such that $L_{\bar{k}} \leq r < L_{\bar{k}+1}$.
Then,
\begin{equation}
r^\theta \P[\diam(\mathcal{C}_o) > r] \leq L_{\bar{k} + 1}^\theta p^{\mathcal{T}}_{\bar{k}} \leq L_{\bar{k}}^{2 \theta - \beta}.
\end{equation}
The proof now follows from the fact that $\beta > 2 \theta$.
\end{proof}
\section{Proof of Theorem~\ref{thm:p_c<1_dependent}}
\label{s:reduction}
The proof of Theorem~\ref{thm:p_c<1_dependent} follows the same lines as the previous section.
We are going to define a family of events $\mathcal{S}(x, L)$ and then show that they are cascading in the sense of Definition~\ref{d:joao}.
This task will however be much more involved than in the previous section.
We now define what we call a \emph{separation event}.
This will play the role of a ``bad'' event whose probability we intend to bound from above.
Roughly speaking, the separation event says that inside a big ball one can find two large and separated clusters (which are not necessarily open).
Denoting by $\diam(Y)$ the diameter of the set $Y$, for every $x\in V$ and $L\in \mathbb{R}_+$, the separation event $\S(x,L)$ is defined as follows:
\begin{equation}
\label{e:separation_event}
\S(x,L):=
\bigg[
\begin{array}{c}
\text{there are $A^0,A^1\subseteq B(x,3L)$ with $d(A^0, A^1) > 1, \diam(A^i) \geq L/100$,}\\
\text{s.t. there is no open path in $B(x,3L^2)$ connecting $A^0$ with $A^1$}.
\end{array}
\bigg].
\end{equation}
See Figure~\ref{f:six_balls} for an illustration of the above event.
Recall $d_i$ and $d_u$ from \eqref{eq:isoperimetric} and \eqref{eq:volume_upper_bound} respectively, and consider a fixed constant $\gamma \geq 2$ such that
\begin{equation}\label{eq:condition_gamma}
\gamma \left ( \frac{d_i-1}{d_i}\right )>2d_u,
\end{equation}
and as above we set
\begin{equation}\label{eq:scales_new}
L_0:= 10 000, \textnormal{ and } L_{k+1}=L_k^\gamma, \ \textnormal{ for all } k\geq 0.
\end{equation}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.4]
\begin{scope}
\clip (6,6) circle (8);
\foreach \x in {1,...,16}
{ \foreach \y in {1,...,16}
\draw[color=gray,fill=gray] (1 * \x + 0.4 * rand - 3, 1 * \y + 0.4 * rand - 3) circle (0.01);
}
\end{scope}
\draw[fill, color=white] (.6,10.3) rectangle (1.5,11.3);
\node at (1.1,10.9) {$K$};
\draw[fill, color=white] (6.5,3.1) rectangle (7.5,2.1);
\foreach \x in {1,...,3} { \draw[color=gray] (6, 6) circle (\x); }
\foreach \x in {6,8,10} {\draw[color=gray] (6, 6) circle (\x);}
\foreach \x in {0, 1, 2, 3}
{ \draw[dashed] (5.8 - 0.2 * \x, 5.2 - 0.3 * \x) .. controls (5.7 - 1.4 * \x, 3 - 2 * \x) and
(8.7 + 2 * \x, 3 - 2 * \x) .. (7.6 + 0.2 * \x, 5.4 - 0.1 * \x);
}
\draw[color=gray] (7,-1) circle (.5);
\draw[fill, color=gray] (7,-1) circle (.01);
\draw[fill, color=white] (7.1,-.7) rectangle (9.8,.2);
\node[below right] at (6.9,0.3) {\tiny $\mathcal{S}(y,L_k)$};
\draw (5.2,4.4) .. controls (0, -2) and (6,-6) .. (10,-2) .. controls (14,2) and (10, 5) .. (8.5,5.3);
\draw[fill] (10,-2) circle (0.05);
\draw[fill=gray] plot [smooth cycle] coordinates {(6,5.4) (5.5,5) (5,4.2) (5.4,4.3) (5.7, 4.7)};
\draw[fill=gray] plot [smooth cycle] coordinates {(7.5,5.4) (7.8,5) (8.7,5.2) (8.7,5.5) (7.7, 5.5)};
\node[left] at (6.4,5.8) {$A^0$};
\node[left] at (8.2,6) {$A^1$};
\end{tikzpicture}
\caption{The six balls $B(x,L_{k+1}), \dots, B(x,3L_{k+1})$ and $B(x,L_{k+1}^2), \dots, B(x,3L_{k+1}^2)$.
The sets $A^0$ and $A^1$ from the definition of $\S(x,L_{k+1})$ are pictured, together with a solid path connecting them.
According to the definition of $\S(x,L_{k+1})$, this solid path must pass through a closed vertex.
The gray dots in the picture represent the set $K$ from Proposition~\ref{lemma:paving}.
We also indicate the occurrence of the event $\mathcal{S}(y,L_k)$ as in Lemma~\ref{lemma:new_lemma_3.2}.
}
\label{f:six_balls}
\end{figure}
By $p_k$ we denote the largest probability of observing a separation event at scale $k$, i.e., we set
\begin{equation}
\label{eq:pk}
p_k := \sup_{x \in V}\mathbb{P}[\S(x, L_k)].
\end{equation}
In the above definition we take the supremum over $x \in V$, as we are not necessarily assuming that $G$ is transitive or that $\mathbb{P}$ is translation invariant.
A fundamental step in the proof of Theorem~\ref{thm:p_c<1_dependent} is to show that for values of $p$ close enough to one, the probabilities $p_k$ decay to zero very fast as $k$ increases.
In this section we assume that $G$ satisfies the extra Condition~\ref{cond:joao} below and prove Theorem~\ref{thm:p_c<1_dependent}.
This condition will later be proved to hold true for roughly transitive graphs satisfying $\mathcal{V}(c_u, d_u)$ and $\mathcal{I}(c_i, d_i)$ with $d_i > 1$, see the \nameref{lemma:joao}~\ref{lemma:joao}.
Roughly speaking, Condition~\ref{cond:joao} states that if $\S(o,L_{k+1})$ occurs at some scale $k+1$, then we can find various separation events at the smaller scale $k$.
\nc{c:k_joao}
\begin{Condition}
\label{cond:joao}
We say that a given graph satisfies Condition~\ref{cond:joao} for some $\gamma \geq 2$ and $L_k$ as in \eqref{eq:scales_new}, if the collection of events $\big( \S(x, L_k) \big)_{x \in V, k \geq 1}$ is cascading in the sense of Definition~\ref{d:joao}.
\end{Condition}
Before proceeding, let us briefly recall how a statement similar to the above was derived in \cite{2014arXiv1409.5923T} and the main challenges that we face in our context.
In that paper, a stronger hypothesis on the underlying graph was assumed, namely that $G$ verifies certain \emph{local isoperimetric inequalities}.
In the current work, we only make use of the standard isoperimetric inequality \eqref{eq:isoperimetric}, together with the hypothesis that $G$ is roughly transitive and has polynomial growth (see also Remark~2.3 (c) in \cite{2014arXiv1409.5923T}).
In particular, the next lemma will guarantee that Condition~\ref{cond:joao} is implied by $\mathcal{V}(c_u, d_u)$ and $\mathcal{I}(c_i, d_i)$, with $d_i > 1$.
This will be an important novelty of this work and we will postpone its proof to Section~\ref{s:proof_joao}.
\begin{Lemma}[Cascading Lemma]
\label{lemma:joao}
Let $G$ be $\uc{c:rough_trans}$-roughly transitive, satisfying the conditions $\mathcal{V}(c_u, d_u)$ and \eqref{eq:isoperimetric} with $d_i > 1$, and let $\gamma$ be as in \eqref{eq:condition_gamma}. Then $G$ satisfies Condition~\ref{cond:joao}.
Moreover, the constant $\uc{c:k_def_joao}$ appearing in Definition~\ref{d:joao} depends only on $\uc{c:rough_trans}, J, c_i, d_i, c_u$ and $d_u$.
\end{Lemma}
We will now give a proof of Theorem~\ref{thm:p_c<1_dependent}, assuming the validity of the \nameref{lemma:joao} above, which will be proved in Section~\ref{s:proof_joao}.
\begin{Remark}\label{remark:alpha_star}
In Theorem~\ref{thm:p_c<1_dependent}, we assumed that $\alpha > \alpha_{\ast}$, where $\alpha_{\ast}$ had not yet been defined.
We can now introduce
\begin{equation}
\label{e:alpha_star}
\alpha_{\ast} := 2 \left ( \frac{2 d_u d_i}{d_i-1}\right ) d_u -d_l.
\end{equation}
Note that for $\alpha > \alpha_{\ast}$
\begin{display}
we can find $\gamma$ as in \eqref{eq:condition_gamma} and such that $\alpha > 2 \gamma d_u - d_l$ as in Lemma~\ref{l:cascade_decays}.
\end{display}
\end{Remark}
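For completeness, the displayed claim can be verified directly. Set $\gamma_0 := \frac{2 d_u d_i}{d_i - 1}$, so that $\alpha_{\ast} = 2\gamma_0 d_u - d_l$. If $\alpha > \alpha_{\ast}$, then $\gamma_0 < \frac{\alpha + d_l}{2 d_u}$, and we may pick any
\[
\gamma \in \left( \gamma_0, \, \frac{\alpha + d_l}{2 d_u} \right).
\]
The inequality $\gamma > \gamma_0$ is equivalent to $\gamma \left( \frac{d_i - 1}{d_i} \right) > 2 d_u$, which is \eqref{eq:condition_gamma}, while $\gamma < \frac{\alpha + d_l}{2 d_u}$ is equivalent to $\alpha > 2 \gamma d_u - d_l$, as required in Lemma~\ref{l:cascade_decays}. Moreover, $\gamma_0 > 2 d_u \geq 2$ (since $d_i > 1$, and $d_u \geq 1$ for any infinite connected graph), so $\gamma \geq 2$ holds automatically.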
\bigskip
Recall the definition of $p_k$ from \eqref{eq:pk}.
We first show the decay of $p_k$ for large enough $p$ in the following lemma.
\nc{c:k_pk_decay}
\begin{Lemma}
\label{lemma:pk_decay}
Suppose that $G$ is a roughly transitive graph satisfying $\mathcal{V}(c_u, d_u)$, $\mathcal{L}(c_l, d_l)$ and $\mathcal{I}(c_i, d_i)$ with $d_i > 1$, that $\gamma$ satisfies \eqref{eq:condition_gamma}, and that $\mathbb{P}$ fulfills $\mathcal{D}(\alpha, c_\alpha)$.
Then, given any $\beta > 0$, there exists $p_* = p_*(\beta, \gamma, c_i, d_i, c_l, d_l, c_u, d_u) < 1$ such that for any $p > p_*$ we have
\begin{equation}
p_k \leq L_k^{-\beta}, \text{ for every $k \geq 1$}.
\end{equation}
\end{Lemma}
\begin{proof}
Since $\gamma$ satisfies \eqref{eq:condition_gamma}, we can use the \nameref{lemma:joao} to conclude that the events $\mathcal{S}(x, L)$ are cascading.
Hence Lemma~\ref{l:cascade_decays} implies that if for some large value $k_0$ we have $p_{k_0} \leq L_{k_0}^{-\beta}$, then this relation holds for all $k \geq k_0$.
We now observe that, as the percolation parameter $p$ converges to one, the probability of $\S(o, L_k)$ (for any fixed $k \geq \uc{c:k_pk_decay}$) converges to zero, since balls will then likely be completely open.
This implies that, if $\inf_{x \in V} \mathbb{P}[Y_x = 1] \geq p_*$ as in the statement of the lemma, we will have $p_k \leq L_k^{-\beta}$ for all $k \geq \uc{c:k_pk_decay}$.
By possibly increasing $p_*$ we can make sure that the above holds for all $k \geq 1$.
\end{proof}
The statement of Theorem~\ref{thm:p_c<1_dependent} now follows from Lemma~\ref{lemma:pk_decay} above and the following lemma, whose proof is deferred to Section \ref{s:lego}.
\begin{Lemma}
\label{lemma:lego}
Fix an arbitrary value $\theta > 0$, $\gamma \geq 2$ satisfying \eqref{eq:condition_gamma} and $\beta > \gamma(2 + \theta)$.
Then, if $p_k \leq L_k^{-\beta}$ for all $k$ large enough,
\begin{equation}
\mathbb{P} \big[ \text{there is a unique infinite connected open cluster $\mathcal{C}_\infty$} \big] = 1
\end{equation}
and moreover, for every $L$ large enough we have
\begin{equation}
\sup_{x \in V} \P[L < |\mathcal{C}_x| < \infty] \leq L^{-\theta},
\end{equation}
where $\mathcal{C}_x$ stands for the open connected component containing $x$.
\end{Lemma}
Note the similarity between the above result and Lemma~4.1 of \cite{2014arXiv1409.5923T}.
It is worth mentioning that despite this similarity, a new proof of the above lemma is required since the definitions of $\S(o, L_k)$ and consequently of $p_k$ are different.
\section{Proof of the \nameref{lemma:joao}}
\label{s:proof_joao}
As we mentioned above, the most innovative step in proving Theorem~\ref{thm:p_c<1} was the intermediate \nameref{lemma:joao}, which we now prove.
The argument is split into three main steps that can be informally described as follows.
\emph{Step 1.} Suppose we have two sets $A^0, A^1$ which are separated as in the definition of $ \S(o,L_{k+1})$.
We first show that paths connecting $A^0$ to $A^1$ necessarily cross a separation event at the smaller scale $L_k$.
This is explained in Subsection~\ref{ss:path}.
\emph{Step 2.} Therefore our task is now reduced to showing that there are several paths connecting these two sets inside $ B(o, 3L_{k+1}^2)$.
This is not an immediate consequence of the isoperimetric inequality \eqref{eq:isoperimetric}.
However, this inequality shows that there must be several disjoint paths connecting $A^0$ to $\partial_i B(o,3L_{k+1}^2)$ (same for $A^1$), see Subsection~\ref{ss:disjoint_paths}.
\emph{Step 3.} Finally, we will show that indeed there exist several paths connecting $A^0$ to $A^1$ and this is done by contradiction.
More precisely, assuming that there are only few paths connecting these sets, we have a type of ``local bottleneck'' in our graph.
This, together with rough transitivity, will allow us to replicate this local bottleneck in different parts of the graph, where the copies act as branching points for paths in the graph.
Therefore, we are able (under this contradiction assumption) to embed a chunk of a binary tree inside $G$, see Figure~\ref{f:crab_party}.
This will contradict the polynomial growth that we assumed in the first place, concluding the proof of the \nameref{lemma:joao}.
This final argument can be found in Subsection~\ref{ss:embed}.
\subsection{Using paths to find separation events}
\label{ss:path}
The first step in the proof of the \nameref{lemma:joao} is to reduce the task of finding separation events at the finer scale $k$ to simply finding paths between the separated sets $A^0$ and $A^1$ at scale $L_{k+1}$.
\nc{c:Lk_small}
First, we need to choose a large enough constant $\uc{c:Lk_small} > 0$ such that, for $k \geq \uc{c:Lk_small}$ we have
\begin{equation}
L_k \leq \frac{L_{k+1}}{2000},
\end{equation}
which is legitimate, given the inductive definition of $L_k$ in \eqref{eq:inductive_def_L_k}.
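Indeed, assuming as in \eqref{eq:inductive_def_L_k} that the scales satisfy $L_{k+1} = L_k^{\gamma}$ with $\gamma \geq 2$, we have
\[
\frac{L_{k+1}}{L_k} = L_k^{\gamma - 1} \geq L_k \xrightarrow[k \to \infty]{} \infty,
\]
so that $L_{k+1} \geq 2000\, L_k$ for every $k$ large enough.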
The next lemma helps us obtain separation events from paths connecting $A^0$ to $A^1$.
\begin{Lemma}\label{lemma:new_lemma_3.2}
For some $x \in V$ and $k\geq \uc{c:Lk_small}$, consider a set $K \subseteq B(x, 2L_{k+1}^2)$ such that $B(K, L_k) $ covers $ B(x, L_{k+1}^2)$ and a pair of sets $A^0, A^1$ as in the definition of the event $ \S(x, L_{k+1})$.
In other words, assume that
\begin{itemize}
\item[(a)] $A^0$ and $A^1$ are connected and contained in $ B(x, 3 L_{k+1})$
\item[(b)] their diameters are at least $ L_{k+1}/100$ and
\item[(c)] no open path inside $ B(x, 3L_{k+1}^2)$ connects $A^0$ and $A^1$.
\end{itemize}
Then, for every path $\sigma$ in $ B(x, L_{k+1}^2)$ connecting $A^0$ to $A^1$ there exists $y \in K$ such that
\begin{itemize}
\item[(i)] $\sigma$ intersects $ B(y, L_k)$ and
\item[(ii)] the event $\S(y,L_k)$ holds.
\end{itemize}
See also Figure~\ref{f:six_balls}.
\end{Lemma}
\begin{proof}
The proof of this lemma essentially follows the steps of the proof of Lemma~3.2 in \cite{2014arXiv1409.5923T}.
Therefore, we will not repeat the entire argument here.
Instead, we just indicate what substitutions should be done to make that proof match exactly the context of the present article.
First, replace each occurrence of $B(y, j L_k/6)$, for $j = 1, 2, 3$, by $B(y, j L_k)$.
Then replace the balls $B(y, j L_k/6)$, for $j = 4, 5$ and $6$ with $B(y, (j-3) L_k^2)$.
\end{proof}
The above lemma will allow us to reduce Condition~\ref{cond:joao} to the following simpler condition, which only concerns the geometry of $G$, not the realization of the percolation process.
\nc{c:cond_2}
\begin{Condition}
\label{cond:avoid_balls}
We say that a graph $G$ satisfies Condition~\ref{cond:avoid_balls} if for any $J \geq 1$ there exists a constant $\uc{c:cond_2} = \uc{c:cond_2}(G, J, \gamma)$ for which the following holds.
Given $x \in V$, a scale $k \geq \uc{c:cond_2}$, connected sets $A^0, A^1 \subseteq B(x, 3 L_{k+1})$ with diameters at least $L_{k+1}/100$ and any collection $y_1, \dots, y_{J-1} \in B(x, 2 L_{k+1}^2)$, there exists a path $\sigma$ contained in $B(x, L_{k + 1}^2)$, connecting $A^0$ with $A^1$ while avoiding the union of balls $\bigcup_{j \leq J-1} B(y_j, 12 L_k^2)$.
\end{Condition}
\begin{Lemma}
\label{lemma:joao_2}
Condition~\ref{cond:avoid_balls} implies Condition~\ref{cond:joao}.
\end{Lemma}
The proof of this lemma will be a consequence of Lemma~\ref{lemma:new_lemma_3.2}.
\begin{proof}
In order to establish Condition~\ref{cond:joao}, we first fix $x \in V$, a set $K \subseteq B(x, 2L_{k+1}^2)$ such that $B(K, L_k)$ covers $B(x, L_{k+1}^2)$ and assume that the event $\S(x, L_{k+1})$ holds.
We now need to show that the events $\S(y_j, L_k)$ occur for several points $y_1, \dots, y_J$, which will be done by induction on $j = 1, \dots, J$.
The occurrence of $\S(x, L_{k+1})$ implies the existence of sets $A^0$ and $A^1$ in $B(x, 3L_{k+1})$ as in \eqref{e:separation_event}.
To start the induction, we use the fact that $B(x, 3L_{k+1})$ is connected to obtain a path between $A^0$ and $A^1$ and employing Lemma~\ref{lemma:new_lemma_3.2} we obtain the point $y_1 \in K$ satisfying $\S(y_1, L_k)$.
Then, supposing that we have already found the sequence $y_1, \dots, y_{J'} \in K$ for $J' < J$ as above, we use Condition~\ref{cond:avoid_balls} to obtain a path from $A^0$ to $A^1$ that avoids $\cup_{j \leq J'} B(y_j, 12 L_k^2)$.
Therefore we can use Lemma~\ref{lemma:new_lemma_3.2} again in order to obtain a new vertex $y_{J' + 1} \in K$ at distance at least $9L_k^2$ from all the previous $y_1, \dots, y_{J'}$ and for which $\mathcal{S}(y_{J' + 1}, L_k)$ holds.
We can now continue inductively until we get Condition~\ref{cond:joao}.
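Let us justify the distance claim in the last step with a short computation: the path provided by Condition~\ref{cond:avoid_balls} avoids $B(y_j, 12 L_k^2)$ for every $j \leq J'$, while it intersects $B(y_{J'+1}, L_k)$, say at a point $p$. Hence, for every $j \leq J'$,
\[
d(y_{J'+1}, y_j) \geq d(p, y_j) - d(p, y_{J'+1}) \geq 12 L_k^2 - L_k \geq 9 L_k^2.
\]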
\end{proof}
\subsection{Finding disjoint paths}\label{ss:disjoint_paths}
The next lemma uses the Max-Flow-Min-Cut Theorem to show that we can find several disjoint paths connecting a large set $A \subset B(x, 3 L_{k+1})$ to the (internal) boundary of the ball $B(x, 2 L_{k+1}^2)$.
By possibly trimming some of these paths, we are able to find one that avoids several balls in the previous scale.
This lemma carries some similarities with Condition~\ref{cond:avoid_balls}; however, the path that one obtains does not connect $A^0$ to $A^1$, but rather connects $A^0$ to a distant set.
This difference lies at the heart of the distinction between the isoperimetric condition \eqref{eq:isoperimetric} and the \emph{local isoperimetric inequality} of \cite{2014arXiv1409.5923T}.
\nc{c:N''_large}
\begin{Lemma} \label{lemma:N''}
Suppose that a given graph $G = (V, E)$ satisfies \eqref{eq:isoperimetric} and let $v \geq 1$ be fixed.
Then there is a constant $\uc{c:N''_large} := \uc{c:N''_large}(J, \gamma, v, c_i, d_i, c_u, d_u)$ such that the following holds.
For any $x \in V$, any $k \geq \uc{c:N''_large}$, any collection $z_1, \dots, z_n \in B(x, L_{k+1}^2)$ with $n \leq J \log^2(L_k)$ and any set $A \subseteq B(x, 3 L_{k+1})$ with $|A| \geq L_{k+1}/(100 v)$, we can find a path from $A$ to $\partial_i B(x, 2 L_{k+1}^2)$ that does not touch the union $\bigcup_{i\leq n}B(z_i, 20 \uc{c:rough_trans} L_k^2)$.
\end{Lemma}
For every finite \emph{connected} set $A$ define
\begin{equation}\label{eq:def_N}
N(A) := c_i |A|^{\frac{d_i-1}{d_i}},
\end{equation}
where $c_i$ and $d_i$ are the constants appearing in \eqref{eq:isoperimetric}.
Note the resemblance with \eqref{eq:isoperimetric}.
\begin{proof}
We start by showing that when $k$ is large enough, there are at least $N(A)$ disjoint paths connecting $A$ to $\partial_i B(x, 2 L_{k+1}^2)$.
In fact, suppose for contradiction that this is not the case.
Then, by the Max-Flow Min-Cut Theorem, there exists a set of edges $C_A$ inside the ball $B(x, 2 L_{k+1}^2)$ which disconnects $A$ from $\partial_i B(x, 2 L_{k+1}^2)$ and such that $|C_A| < N(A)$.
Then we have
\[
|C_A | < N(A) = c_i|A|^{\frac{d_i - 1}{d_i}}.
\]
But then the set $\tilde A \supseteq A$ of points that can be reached from $A$ without using edges in $C_A$ has to satisfy
\[
|\partial \tilde A| = |C_A | < c_i|A|^{\frac{d_i - 1}{d_i}} \leq c_i |\tilde A|^{\frac{d_i - 1}{d_i}},
\]
contradicting condition \eqref{eq:isoperimetric} and hence proving the first step.
Now we use this fact in order to find a path that satisfies the statement of the lemma.
In fact, using the above we have
\begin{equation}
\begin{split}
\# \{\textnormal{disjoint paths from } A \textnormal{ to } \partial_i B(x, 2 L_{k+1}^2)\}
& \geq N(A) = c_i|A|^{\frac{d_i - 1}{d_i}}
\geq c_i\left (\frac{L_{k+1}}{100 v}\right )^{\frac{d_i - 1}{d_i}}
\geq c L_k^{\gamma\frac{d_i - 1}{d_i}}\\
& \overset{\eqref{eq:condition_gamma},\, k \text{ large}}{>} 3J\log^2(L_k)\, c_u(20 \uc{c:rough_trans} L_k)^{2d_u}
\stackrel{\eqref{eq:volume_upper_bound}}{\geq } 3 J \log^2(L_k)\, |B(x,20 \uc{c:rough_trans} L_k^2)|.
\end{split}
\end{equation}
This bound shows that if we remove all those paths connecting $A$ to $\partial_i B(x, 2 L_{k+1}^2)$ which happen to intersect $\bigcup_{i\leq n}B(z_i, 20 \uc{c:rough_trans} L_k^2)$, we are still left with several paths.
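To make the final counting step fully explicit, here is a sketch (assuming, as we may, that $\uc{c:rough_trans} \geq 1$): since the paths are disjoint, each vertex lies on at most one of them, so each ball $B(z_i, 20 \uc{c:rough_trans} L_k^2)$ intersects at most $|B(z_i, 20 \uc{c:rough_trans} L_k^2)| \leq c_u (20 \uc{c:rough_trans} L_k^2)^{d_u}$ of the paths, by \eqref{eq:volume_upper_bound}. Hence the number of discarded paths is at most
\[
n\, c_u \big(20 \uc{c:rough_trans} L_k^2\big)^{d_u} \leq J \log^2(L_k)\, c_u \big(20 \uc{c:rough_trans} L_k\big)^{2 d_u},
\]
which is less than a third of the total number of disjoint paths, so at least $2J\log^2(L_k)\, c_u (20 \uc{c:rough_trans} L_k)^{2d_u}$ of them survive.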
\end{proof}
\subsection{Embedding a tree into \texorpdfstring{$G$}{G}}
\label{ss:embed}
In this section we will assume that Condition~\ref{cond:avoid_balls} fails, since we have already proved Theorem~\ref{thm:p_c<1} assuming Condition~\ref{cond:joao}, which follows from Condition~\ref{cond:avoid_balls}.
Negating Condition~\ref{cond:avoid_balls} is equivalent to saying that there exist
\begin{gather}
\label{e:bad_J}
\text{some number $J \geq 1$,}\\
\text{a sequence of points $x_l \in V$,}\\
\text{a diverging sequence $k_l \to \infty$ of scales,}\\
\text{sets $A^0_l, A^1_l \subseteq B(x_l, 3L_{k_l + 1})$ satisfying $d(A^0_l, A^1_l) > 1$, $\diam(A^i_l) \geq L_{k_l + 1}/100$}\\
\label{eq:evil_ys}
\text{and for each $l \geq 1$ a collection $y^l_1, \dots, y^l_{J-1} \in B(x_l, 2 L_{k_l+1}^2)$}
\end{gather}
such that
\begin{display}
\label{e:Hl_splits_As}
every path in $B(x_l, L_{k_l+1}^2)$ connecting $A^0_l$ to $A^1_l$\\
touches the set $\smash{\textstyle \bigcup\limits_{j \leq J-1}} B(y^l_j, 12L_{k_l}^2)$.
\end{display}
We are now in position to start embedding a binary tree inside $G$, which will ultimately lead to a contradiction on the polynomial growth that we assumed on $G$.
The nodes of this tree will simply be vertices of $G$, however two adjacent vertices in the tree will not be mapped to neighbors in $G$.
Instead, they will be mapped into reasonably far apart points as we describe in detail soon.
The nodes of our binary tree are indexed by the set $\Gamma$ of finite words in the alphabet $\{0,1\}$.
For every such word $\omega \in \Gamma$, we denote by $|\omega|$ its length and by $\omega' \omega$ the word obtained by appending $\omega$ to the right of $\omega'$.
In this case, we say that $\omega'$ is a prefix of $\omega' \omega$.
This prefix is said to be proper if $\omega$ is non-empty.
We define the bad set
\begin{equation}
H_l = \textstyle \bigcup\limits_{j \leq J-1} B(y_j^l, 20 \uc{c:rough_trans} L_{k_l}^2).
\end{equation}
Note that the balls used to define $H_l$ have radius $20 \uc{c:rough_trans} L_{k_l}^2$, which is larger than the ones appearing in \eqref{e:Hl_splits_As}.
This difference will be important later once we start playing with rough isometries, since we want paths avoiding $H_l$ to be mapped to paths that do not touch $\cup_{j \leq J-1} B(y_j^l, 12 L_{k_l}^2)$.
\begin{Remark}
In the next lemma, given some $l \geq 1$ and any word $\omega \in \Gamma$ such that $|\omega| \leq \log^2(L_{k_l})$, we will construct a $\uc{c:rough_trans}$-rough isometry $\phi^l_\omega$ of $G$.
Given such a map, we can define
\begin{equation}
\label{e:build_tree}
\begin{array}{c}
x_l(\omega) := \phi^l_\omega(x_l),\\
A^i_l(\omega) := \phi^l_\omega(A^i_l), \text{ $i = 0, 1$},\\
B_l(\omega) := B\bigl (x_l(\omega), 3L_{k_l + 1}\bigr ) \text{ and}\\
H_l(\omega) := \phi^l_\omega(H_l).
\end{array}
\end{equation}
Note that $\phi^l_\varnothing$ will be the identity map on $G$.
Therefore, we can think of $x_l$, $A^0_l$ and $A^1_l$ as $x_l(\varnothing)$, $A^0_l(\varnothing)$ and $A^1_l(\varnothing)$ respectively.
In the same way, we have that $y_j^{l} = y_j^{l}(\varnothing)$ for all $j$ and $l$ as above.
\end{Remark}
The next lemma constructs an embedding of a binary tree into $G$ satisfying a list of requirements.
Later we will use this together with \eqref{e:Hl_splits_As} to show that all leaves of the constructed tree have to be distinct, contradicting the polynomial growth of the graph $G$, see Lemma~\ref{lemma:loops}.
\nc{c:build_tree}
\begin{Lemma}\label{lemma:induction}
Let $G$ be a $\uc{c:rough_trans}$-roughly transitive graph satisfying \eqref{eq:isoperimetric} and assume the existence of a collection $(J, (x_l), (k_l), (A^0_l), (A^1_l), y_j^l)_{l \geq 1}$ as in \eqref{e:bad_J}--\eqref{eq:evil_ys}.
Then, there exists $\uc{c:build_tree} = \uc{c:build_tree}(J, \gamma, \uc{c:rough_trans}, c_i, d_i, c_u, d_u)$ such that for $k_l \geq \uc{c:build_tree}$ and for each $\omega$ such that $1 \leq |\omega| \leq \log^2(L_{k_l})$, we can construct
\begin{enumerate}
\item a $\uc{c:rough_trans}$-rough isometry $\phi^l_\omega$ and
\item a path $\gamma_\omega$,
\end{enumerate}
in such a way that the following holds.
For $\omega$ such that $|\omega| \leq \log^2(L_{k_l}) - 1$,
\begin{gather}
\label{e:xs_not_too_far}
\text{if $\omega = \omega' i$, for $i = 0, 1$, then $\gamma_{\omega} \subseteq B(x_l(\omega'), L_{k_l + 1}^{3/2})$,}\\
\label{e:B_H_disj}
\text{for any $\omega'$ proper prefix of $\omega$, $B_l(\omega)$ is disjoint from $H_l(\omega')$,}\\
\label{e:gamma_connects}
\text{if $\omega = \omega'i$, with $i = 0,1$, then $\gamma_\omega$ connects $A^i_l(\omega')$ to $x_l(\omega)$ and}\\
\label{e:gamma_H_disj}
\text{if $\omega'$ is a proper prefix of $\omega$, then the path $\gamma_{\omega}$ is disjoint from $H_l(\omega')$}.
\end{gather}
See Figure~\ref{f:crab_party} for an illustration of the above.
\end{Lemma}
\begin{proof}
We first choose the constant $\uc{c:build_tree} \geq \uc{c:N''_large}$ in such a way that for $k_l \geq \uc{c:build_tree}$,
\begin{gather}
\label{e:def_c_tree}
\log^2(L_{k_l}) > J\\
\label{e:few_defects}
2 (J - 1)(3L_{k_l + 1} + 20 \uc{c:rough_trans} L_{k_l}^2) \log^2 L_{k_l} < L_{k_l + 1}^{3/2} - 3L_{k_l + 1}
\end{gather}
which can be done by our choice of the scales $L_k$ in \eqref{eq:condition_gamma}.
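To see that such a choice of $\uc{c:build_tree}$ is possible, here is a sketch, assuming as in \eqref{eq:inductive_def_L_k} that $L_{k+1} = L_k^{\gamma}$ with $\gamma \geq 2$: the left-hand side of \eqref{e:few_defects} is of order at most $L_{k_l}^{\max\{\gamma, 2\}} \log^2 L_{k_l}$, while its right-hand side grows as $L_{k_l}^{3\gamma/2}$, and
\[
\frac{3\gamma}{2} > \max\{\gamma, 2\} \qquad \text{for every } \gamma \geq 2,
\]
so \eqref{e:few_defects} holds for all $k_l$ large enough; \eqref{e:def_c_tree} is immediate since $L_{k_l} \to \infty$.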
Since we are assuming \eqref{eq:isoperimetric} and that $\uc{c:build_tree} \geq \uc{c:N''_large}$, the conclusion of Lemma~\ref{lemma:N''} is at our disposal (at each scale $k_l\geq \uc{c:build_tree}$).
\begin{figure}
\centering
\begin{tikzpicture}[scale=.9]
\draw[decorate, rounded corners=1pt, decoration={random steps,segment length=6pt,amplitude=2pt}] (0 + .4, 6 - .4) -- (2, 3);
\draw[decorate, rounded corners=1pt, decoration={random steps,segment length=6pt,amplitude=2pt}] (0 - .4, 6 - .4) -- (-2, 3);
\draw[decorate, rounded corners=1pt, decoration={random steps,segment length=6pt,amplitude=2pt}] (2 + .4, 3 - .4) -- (5, 0);
\draw[decorate, rounded corners=1pt, decoration={random steps,segment length=6pt,amplitude=2pt}] (2 - .4, 3 - .4) -- (1, -.5);
\draw[decorate, rounded corners=1pt, decoration={random steps,segment length=6pt,amplitude=2pt}] (-2 + .4, 3 - .4) -- (-1, -1);
\draw[decorate, rounded corners=1pt, decoration={random steps,segment length=6pt,amplitude=2pt}] (-2 - .4, 3 - .4) -- (-5, 0);
\draw[decorate, rounded corners=1pt, decoration={random steps,segment length=6pt,amplitude=2pt}] (1 + .4, -.5 - .4) -- (2, -3);
\draw[decorate, rounded corners=1pt, decoration={random steps,segment length=6pt,amplitude=2pt}] (1 - .4, -.5 - .4) -- (0, -3);
\draw[decorate, rounded corners=1pt, decoration={random steps,segment length=6pt,amplitude=2pt}] (-1 + .4, -1 - .4) -- (-.5, -3);
\draw[decorate, rounded corners=1pt, decoration={random steps,segment length=6pt,amplitude=2pt}] (-1 - .4, -1 - .4) -- (-2, -3);
\draw[decorate, rounded corners=1pt, decoration={random steps,segment length=6pt,amplitude=2pt}] (-5 + .4, - .4) -- (-4, -3);
\draw[decorate, rounded corners=1pt, decoration={random steps,segment length=6pt,amplitude=2pt}] (-5 - .4, - .4) -- (-6, -3);
\draw[decorate, rounded corners=1pt, decoration={random steps,segment length=6pt,amplitude=2pt}] (5 + .4, - .4) -- (6, -3);
\draw[decorate, rounded corners=1pt, decoration={random steps,segment length=6pt,amplitude=2pt}] (5 - .4, - .4) -- (4, -3);
\crab{ 0}{ 6}{ \varnothing}
\crab{ 2}{ 3}{ 1}
\crab{-2}{ 3}{ 0}
\crab{ 1}{-.5}{01}
\crab{-1}{ -1}{10}
\crab{-5}{ 0}{00}
\crab{ 5}{ 0}{11}
\end{tikzpicture}
\caption{An illustration of the rough isometries $\phi^l_\omega$ and the paths $\gamma_\omega$ defined in Lemma~\ref{lemma:induction}.
Note that the points $x_l(\omega)$ and the sets $A^i_l(\omega)$ are images under $\phi^l_\omega$.
The small gray circles correspond to the sets $H_l(\omega)$.}
\label{f:crab_party}
\end{figure}
In order to construct the maps $\phi^l_\omega$, we follow an induction argument on the length of the word $\omega$.
The only word of length zero is $\varnothing$ and we have already defined $\phi_\varnothing^l$ as the identity map.
Assume that for $n < \log^2(L_{k_l})-1$ we have already constructed the maps $\big(\phi^l_\omega\big)_{|\omega| \leq n}$ and paths $(\gamma_\omega)_{1 \leq |\omega| \leq n}$, satisfying \eqref{e:B_H_disj}--\eqref{e:gamma_H_disj}.
Then, given any word $\omega$ with $|\omega| = n$, our task now is to define $\phi^l_{\omega 0}$ and $\phi^l_{\omega 1}$ with help of Lemma~\ref{lemma:N''}.
To apply Lemma~\ref{lemma:N''}, we need to choose the points $z_1, \dots, z_n$ to be avoided, which, roughly speaking, will correspond to the points in the set $\{y^l_j(\omega') : \text{$\omega'$ prefix of $\omega$ and $j \leq J-1$}\}$.
More precisely, we denote by $\omega_k$ the unique prefix of $\omega$ with $|\omega_k| = k$ and set
\begin{gather}
z_{k(J-1) + j} = y^l_j (\omega_k), \text{ with $k = 0, \dots, |\omega|$ and $j = 1, \dots, J - 1$.}
\end{gather}
Recall that $|\omega| \leq \log^2(L_{k_l})$, so that the number of $z_i$'s is no larger than $2 J \log^2(L_{k_l})$.
Using \eqref{e:large_image}, we conclude that
\[
|A_l^0(\omega)| \geq \frac{|A_l^0|}{c_u (\uc{c:rough_trans})^{d_u}} \geq \frac{L_{k_l + 1}}{100 v},
\]
where $v = c_u (\uc{c:rough_trans})^{d_u}$ (cf.\ Remark \ref{remark:note_v}).
The same is true for $A_l^1(\omega)$.
We are now in position to apply Lemma~\ref{lemma:N''}, which provides us with paths $\gamma_{\omega 0}$ from $A^0_l(\omega)$ to $\partial_i B(x_l(\omega), 2L_{k_l + 1}^2)$ and $\gamma_{\omega 1}$ from $A^1_l(\omega)$ to $\partial_i B(x_l(\omega), 2L_{k_l + 1}^2)$ that satisfy \eqref{e:gamma_H_disj}.
More precisely, these paths are such that
\begin{display}
\label{e:sneak}
$\gamma_{\omega 0}$ and $\gamma_{\omega 1}$ do not touch the union of the balls $B(z_i, 12L_{k_l}^2)$, for any $i$.
\end{display}
These paths will give rise to the two children of $\omega$ ($\omega 0$ and $\omega 1$).
Recall that these paths go quite far, reaching distance $2L_{k_l + 1}^2$ from $x_l(\omega)$, however we are going to truncate these paths earlier in such a way that \eqref{e:xs_not_too_far} holds and moreover
\begin{display}
\label{e:stop_far}
the end points of the paths $\gamma_{\omega 0}$ and $\gamma_{\omega 1}$ lie at distance at least $3 L_{k_l + 1}$ from $H_l(\omega')$ for any $\omega'$ prefix of $\omega$.
\end{display}
Before proving the above, let us briefly see why this would finish the proof of the lemma.
We call these end-points $x_l(\omega 0)$ and $x_l(\omega 1)$ respectively and using the rough transitivity of the graph $G$ we can find two $\uc{c:rough_trans}$-rough isometries, satisfying $\phi^l_{\omega 0} (x_l(\varnothing)) = x_l(\omega 0)$ and $\phi^l_{\omega 1}(x_l(\varnothing)) = x_l(\omega 1)$.
We can now define $A_l^i$, $B_l$ and $H_l$ as in \eqref{e:build_tree}, obtaining another layer of the tree.
The fact that these satisfy \eqref{e:B_H_disj}--\eqref{e:gamma_H_disj} is a consequence of their construction, \eqref{e:sneak} and \eqref{e:stop_far}.
We still need to prove that we can stop the paths $\gamma_{\omega 0}$ and $\gamma_{\omega 1}$ in such a way that they satisfy \eqref{e:xs_not_too_far} and \eqref{e:stop_far}.
First observe that a point $x$ being at distance at least $3L_{k_l + 1}$ from the sets $H_l(\omega')$ (for $\omega'$ prefix of $\omega$) is equivalent to $x$ being at distance at least $3L_{k_l + 1} + 20 \uc{c:rough_trans} L_{k_l}^2$ from the collection of points $K = \{y^l_j(\omega'); \text{ $\omega'$ prefix of $\omega$ and $j \leq J - 1$} \}$.
We first stop these paths as soon as they reach distance $L_{k_l + 1}^{3/2}$ from $x_l(\omega)$ (recall that they reach $\partial_i B(x_l(\omega), 2L_{k_l + 1}^2)$); therefore $\gamma_{\omega 0}$ and $\gamma_{\omega 1}$ will automatically satisfy \eqref{e:xs_not_too_far}.
Even after this truncation, the ranges of these paths still have diameter at least $L_{k_l + 1}^{3/2} - 3L_{k_l + 1}$.
Therefore, by \eqref{e:few_defects} they cannot be covered by $(J - 1) \log^2 L_{k_l}$ balls of radius $3L_{k_l + 1} + 20 \uc{c:rough_trans} L_{k_l}^2$.
This proves that we can stop the paths $\gamma_{\omega 0}$ and $\gamma_{\omega 1}$ in a way that their endpoints satisfy \eqref{e:stop_far}, finishing the proof of the lemma.
\end{proof}
In order to conclude the proof of the \nameref{lemma:joao} we will show that under the current assumptions all the points $(x_l(\omega))_{|\omega| = \lfloor \log^2(L_{k_l}) \rfloor}$ are distinct, contradicting the polynomial growth that we have assumed on the graph $G$.
\nc{c:loops}
\begin{Lemma}\label{lemma:loops}
There exists a constant $\uc{c:loops} = \uc{c:loops}(\gamma, \uc{c:rough_trans})$ such that for all $k_l \geq \uc{c:loops}$ we have the following.
Let $n_l = \lfloor \log^2(L_{k_l}) \rfloor$ and fix the construction of the $\uc{c:rough_trans}$-rough isometries $\phi^l_\omega$, for $|\omega| \leq n_l$ as in Lemma~\ref{lemma:induction}.
Then for every pair of distinct words $\omega$, $\omega'$ such that $|\omega| = |\omega'| = n_l$, the points $x_l(\omega)$ and $x_l(\omega')$ are distinct.
\end{Lemma}
\begin{proof}
Suppose, for contradiction, that there are two distinct words $\omega$ and $\omega'$, both of length $n_l$, for which
\begin{equation}
\label{eq:non-empty_intersection}
x_l(\omega) = x_l(\omega')
\end{equation}
and let $\hat \omega$ be their closest common ancestor (in other words, $\hat{\omega}$ is the longest common prefix of $\omega$ and $\omega'$).
We first fix $\uc{c:loops}$ large enough so that for $k \geq \uc{c:loops}$, one has
\begin{equation}
\label{e:large_c_loops}
8 \uc{c:rough_trans} L_{k_l}^2 > 4 \uc{c:rough_trans}^2 (4 \uc{c:rough_trans}^2 + \uc{c:rough_trans} + 1).
\end{equation}
This specific choice will become clear later.
Our aim is to build a path between $A^0_l$ and $A^1_l$, which is contained in $B(x_l, L_{k_l + 1}^2)$ and avoids the set $\cup_{j \leq J-1} B(y_j^l, 12 L_{k_l}^2)$.
This will lead to a contradiction to \eqref{e:Hl_splits_As}, which we have obtained from negating Condition~\ref{cond:avoid_balls}.
As a first step, we will construct a path $\sigma$ such that
\begin{display}
\label{e:make_handle}
$\sigma$ is contained in $B(x_l, n_l L_{k_l + 1}^{3/2})$, connects $A^0_l(\hat{\omega})$ to $A^1_l(\hat{\omega})$\\
and avoids the set $H_l(\hat{\omega})$.
\end{display}
Then we will use the rough inverse of $\phi^l_{\hat{\omega}}$ to map $\sigma$ to the desired path.
Before building $\sigma$, we start by constructing a path from $A^0_l(\hat{\omega})$ to $x_l(\omega)$.
In order to do this, we first write $\omega_0, \dots, \omega_n$ to be the sequence of prefixes of $\omega$, obtained by setting $\omega_0 = \hat{\omega}$ and adding one letter at a time until $\omega_n = \omega$.
We start by observing that $A^0_l(\hat{\omega})$ can be connected to $x_l(\omega_1)$ by the path $\gamma_{\omega_1}$, which avoids $H_l(\hat{\omega})$ by \eqref{e:gamma_connects} and \eqref{e:gamma_H_disj}.
Supposing by induction that we have already reached $x_l(\omega_j)$ for some $j < n$ by a path that avoids $H_l(\hat{\omega})$, we are now going to extend this path until $x_l(\omega_{j + 1})$.
We know by \eqref{e:gamma_connects} and \eqref{e:gamma_H_disj} that if $\omega_{j+1} = \omega_j i$ ($i = 0,1$), then the path $\gamma_{\omega_{j+1}}$ connects $A^i_l(\omega_j)$ to $x_l(\omega_{j+1})$ while avoiding $H_l(\hat{\omega})$, therefore this is a good candidate for the extension we need.
The obstacle to performing this extension comes from the fact that this path does not necessarily start at $x_l(\omega_j)$; in fact its starting point $\gamma_{\omega_{j+1}}(0)$ is somewhere in $A^i_l(\omega_j) \subseteq B(x_l(\omega_j), 3 L_{k_l + 1})$, see Figure~\ref{f:crab_party}.
But using the fact that this ball is connected and disjoint from $H_l(\hat{\omega})$ (by \eqref{e:B_H_disj}), we can connect $x_l(\omega_j)$ to $\gamma_{\omega_{j+1}}(0)$ and finally to $x_l(\omega_{j+1})$.
Proceeding with this induction, we can construct the required path from $A^0_l(\hat{\omega})$ to $x_l(\omega)$ which avoids $H_l(\hat{\omega})$.
We can also build a similar path from $A^1_l(\hat{\omega})$ to $x_l(\omega)$ and by concatenating these two paths we have proved \eqref{e:make_handle}.
We now use the path $\sigma$ obtained in \eqref{e:make_handle} to derive a contradiction to \eqref{e:Hl_splits_As}, finishing the proof of the lemma.
For this, pick a $4 \uc{c:rough_trans}^2$-rough isometry $\psi$ which is the rough inverse of $\phi^l_{\hat{\omega}}$ as in \eqref{e:rough_inverse}.
We now consider the image of the path $\sigma$ under the map $\psi$, obtaining a sequence of vertices $x_1, \dots, x_M$, for some suitable $M \geq 1$.
This sequence does not necessarily constitute a path, however, by \eqref{e:rough_iso} we have
\begin{display}
\label{e:hopping_walk}
$d(x_m, x_{m+1}) \leq 4 \uc{c:rough_trans}^2$ for every $m = 1, \dots, M-1$.
\end{display}
Recall that the path $\sigma$ connects $A^0_l(\hat{\omega})$ to $A^1_l(\hat{\omega})$, which are the images of $A^0_l$ and $A^1_l$ under $\phi^l_{\hat{\omega}}$.
Therefore, the point $x_1$ (which is the image of the first point of $\sigma$) is within distance at most $4 \uc{c:rough_trans}^2$ from $A^0_l$ (and similarly for $x_M$ and $A^1_l$).
So we can add points $x_0 \in A^0_l$ and $x_{M+1} \in A^1_l$ to the sequence, without violating \eqref{e:hopping_walk}.
We now use \eqref{e:hopping_walk} and the above property of $x_0$ and $x_{M + 1}$ to turn the sequence $(x_m)_{m=0}^{M+1}$ into a path by connecting $x_m$ to $x_{m+1}$, one by one, while using no more than $4 \uc{c:rough_trans}^2$ intermediate points to join each pair.
This gives rise to a path $\sigma'$ for which we need to verify:
\begin{enumerate} [\quad a)]
\item $\sigma'$ connects $A^0_l$ to $A^1_l$,
\item $\sigma'$ is contained in $B(x_l, \uc{c:rough_trans} + \uc{c:rough_trans} n_l L^{3/2}_{k_l + 1}) \subseteq B(x_l, L_{k_l + 1}^2)$,
\item $\sigma'$ does not intersect the set $\cup_{j \leq J-1} B(y_j^l, 12 L_{k_l}^2)$.
\end{enumerate}
In fact, a) is a consequence of the construction of the path.
The statement b) follows since $\psi$ is a $4\uc{c:rough_trans}^2$-rough isometry.
Finally, to show c), we fix $y_j^l$ and $x \in B(y_j^l, 12 L_{k_l}^2)$ and, observing that
\begin{equation}
\label{e:phi_hat_omega_x_close}
\phi_{\hat{\omega}}^l (x) \in B(y_j^l(\hat \omega), 12 \uc{c:rough_trans} L_{k_l}^2)
\end{equation}
we estimate
\begin{equation*}
\begin{split}
d(\sigma', x) & \geq d \big( \sigma', \psi(\phi^l_{\hat \omega}(x)) \big) - d\big(\psi(\phi^l_{\hat \omega}(x)), x \big) \geq d \big( \psi(\sigma), \psi(\phi^l_{\hat \omega}(x)) \big) - 4 \uc{c:rough_trans}^2 - \uc{c:rough_trans}\\
& \geq \frac{1}{4 \uc{c:rough_trans}^2} d(\sigma, \phi^l_{\hat \omega}(x)) - 1 - 4 \uc{c:rough_trans}^2 - \uc{c:rough_trans} \overset{\eqref{e:phi_hat_omega_x_close}}\geq \frac{1}{4 \uc{c:rough_trans}^2} (20 - 12) \uc{c:rough_trans} L_{k_l}^2 - 4 \uc{c:rough_trans}^2 - \uc{c:rough_trans} - 1 \overset{\eqref{e:large_c_loops}}> 0.
\end{split}
\end{equation*}
This finishes the proof that $\sigma'$ indeed contradicts \eqref{e:Hl_splits_As}, yielding the lemma.
\end{proof}
It is now very easy to finish the proof of the \nameref{lemma:joao}.
\begin{proof}[Proof of the \nameref{lemma:joao}]
Supposing that $G$ does not satisfy Condition~\ref{cond:joao}, we know by Lemma~\ref{lemma:joao_2} that it does not satisfy Condition~\ref{cond:avoid_balls} either.
This provides us with a sequence $(J, (x_l), (k_l), (A^0_l), (A^1_l), (y^l_j))$ satisfying \eqref{e:bad_J}--\eqref{eq:evil_ys}.
Employing Lemma~\ref{lemma:induction}, we can construct for each $l \geq 1$ the rough isometries $\phi^l_{\omega}$, for $|\omega| \leq n_l := \lfloor \log^2(L_{k_l}) \rfloor$, satisfying \eqref{e:xs_not_too_far}--\eqref{e:gamma_H_disj}.
Lemma~\ref{lemma:loops} now shows that the points $(x_l(\omega))_{|\omega| = n_l}$ obtained in the above construction are distinct.
However, there are $2^{n_l}$ such points and by \eqref{e:xs_not_too_far} they are all contained in the ball $B(x_l, n_l L^{3/2}_{k_l + 1})$.
This contradicts the polynomial growth of $G$ assumed in \eqref{eq:volume_upper_bound}, finishing the proof of the \nameref{lemma:joao}.
\end{proof}
\section{Proof of Lemma \ref{lemma:lego}}
\label{s:lego}
To conclude the proof of Theorem~\ref{thm:p_c<1} we still need to show that Lemma~\ref{lemma:lego} holds.
The main ideas of the proof are taken from \cite[Lemma 4.1]{2014arXiv1409.5923T}; we reproduce them here for the sake of clarity.
We split the proof into several auxiliary results in order to make it clearer.
Choose $\beta >\max\{2\gamma d_u-d_l,\gamma(2+\chi)\}$ for some arbitrary value $\chi>0$ and suppose that $G$ is a roughly transitive graph satisfying $\mathcal{V}(c_u, d_u)$ and $\mathcal{I}(c_i, d_i)$ with $d_i > 1$.
Then we are in a position to apply Lemma~\ref{lemma:pk_decay}, obtaining that $ p_k\leq L_k^{-\beta}$ for all $k\geq 1$.
Start by fixing a path $\sigma:\mathbb{N} \to V$ that satisfies the following property:
\[
d(\sigma(i),\sigma(j))=|i-j|, \textnormal{ for all }i,j\in \mathbb{N}.
\]
Here $d(x,y)$ denotes the graph distance between the vertices $x$ and $y$, so that $\sigma$ is a geodesic ray.
The existence of such paths will not be discussed here, but the interested reader is referred to \cite{Watkins1986341}.
(More precisely, this result holds whenever the graph $G$ is infinite, locally finite, simple and connected.)
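For completeness, we sketch the standard compactness argument behind this fact; this is only an outline, the details being in the cited reference. Fix $o \in V$ and, for each $n \geq 1$, a geodesic $\sigma_n$ from $o$ to some vertex at distance $n$ from $o$; such vertices exist because $G$ is infinite, connected and locally finite. Since $o$ has finitely many neighbours, infinitely many of the $\sigma_n$ agree on their first step; among these, infinitely many agree on their second step, and so on (K\"onig's lemma). The limiting path $\sigma$ satisfies
\[
d(\sigma(i),\sigma(j)) = |i-j| \quad \textnormal{ for all } i,j\in \mathbb{N},
\]
since every initial segment of $\sigma$ is an initial segment of some geodesic $\sigma_n$.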
Now, given $\sigma$, define the following collection of points:
\[
x_{k,i}:=\sigma (i L_k^2/10), \textnormal{ for }k\geq 1 \textnormal{ and }i=0,\ldots, L_{k+1}^2/L_k^2,
\]
and, for some fixed $k_0\geq 1$ define the following event:
\[
\mathcal{G}_0:= \bigcap_{k\geq k_0}\bigcap_{i=1}^{L_{k+1}^2/L_k^2}\S(x_{k,i},L_k)^c.
\]
The next claim shows that $\mathcal{G}_0$ occurs with high probability.
\begin{Claim}\label{claim:(4.14)}
The event $\mathcal{G}_0$ occurs with probability bounded from below by $ 1-c L_{k_0}^{-\gamma \chi}$, where $c>0$ is a constant, $\gamma$ is as in \eqref{eq:condition_gamma} and $\chi>0$ is the arbitrary value fixed at the beginning of this section.
\end{Claim}
\begin{proof}
We show that $\P(\mathcal{G}_0^c)\leq c L_{k_0}^{-\gamma \chi}$.
In fact,
\[
\begin{split}
\P(\mathcal{G}_0^c)
& = \P \Bigl ( \bigcup_{k\geq k_0}\bigcup_{i=1}^{L_{k+1}^2/L_k^2}\S(x_{k,i},L_k)\Bigr ) \leq \sum_{k\geq k_0}\sum_{i=1}^{L_{k+1}^2/L_k^2}p_k \leq \sum_{k\geq k_0}L_k^{2\gamma-2}L_k^{-\beta}\\
& \stackrel{\beta>\gamma(2+\chi)}{\leq} \sum_{k\geq k_0}L_k^{-\chi \gamma} = L_{k_0}^{-\chi \gamma} \sum_{k\geq 0}(L_0^{-\chi \gamma} )^{\gamma^{k_0}(\gamma^k-1)}.
\end{split}
\]
Now, since $\gamma^{k_0}>1$ and $L_0^{-\chi \gamma}<1$, the sum $\sum_{k\geq 0}(L_0^{-\chi \gamma} )^{\gamma^{k_0}(\gamma^k-1)}$ converges, leading to the claim.
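For instance, since $\gamma^k - 1 \geq k$ for every $k \geq 0$ when $\gamma \geq 2$, and since $L_0 > 1$, the series is dominated by a geometric one:
\[
\sum_{k\geq 0}\big(L_0^{-\chi \gamma} \big)^{\gamma^{k_0}(\gamma^k-1)} \leq \sum_{k\geq 0}\big(L_0^{-\chi \gamma\, \gamma^{k_0}}\big)^{k} = \frac{1}{1 - L_0^{-\chi \gamma\, \gamma^{k_0}}} < \infty.
\]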
\end{proof}
The next auxiliary result shows that, conditionally on $\mathcal{G}_0$, we can find several \emph{open} paths which can be concatenated, revealing an infinite (open) connected component.
\begin{Lemma}\label{lemma:(4.16)}
Condition on $\mathcal{G}_0$ being realized.
Then for all $k\geq k_0$ there is an infinite connected component that intersects the ball $B(x_{k,0}, L_k^2/100) $.
\end{Lemma}
\begin{proof}
Fix $k\geq k_0$; the argument proceeds inductively on $k$.
For all $i=0,\ldots,L_{k+1}^2/L_k^2-1$, we use the fact that, by hypothesis (that is, conditioning on $\mathcal{G}_0$), the event $ \S (x_{k,i},L_k)^c$ is realized.
Hence, we can find an \emph{open} path $\sigma_{k,i}\subseteq B(x_{k,i},3L_{k+1}^2)$ that connects the ball $B(x_{k,i}, L_k^2/100) $ to $B(x_{k,i+1}, L_k^2/100) $.
The next step consists in joining all such \emph{open} paths $ \sigma_{k,i}$'s into one (open) connected component.
To this end, we first estimate the diameters of these paths:
\[
\textnormal{diam}(\sigma_{k,i})\geq d \bigl ( B(x_{k,i}, L_k^2/100), B(x_{k,i+1}, L_k^2/100) \bigr )\geq d(x_{k,i},x_{k,i+1})-2 L_k^2/100\geq L_k^2/50.
\]
The last inequality follows from our choice of $\sigma$; in fact, $d(x_{k,i},x_{k,i+1})\geq L_k^2/10$.
This bound implies that, before exiting the ball $ B(x_{k,i}, L_k^2)$, the path $\sigma_{k,i}$ has diameter at least $L_k^2/100$.
At this point, since we are under the assumption that $\mathcal{G}_0$ holds, we can again find \emph{open} paths $\gamma_{k,i}$ that join $\sigma_{k,i}$ with $\sigma_{k,i+1}$ (for all $i=0,\ldots,L_{k+1}^2/L_k^2-2$) and that are contained inside the ball $B(x_{k,i},3L_{k+1}^2)$.
Our next step is to join the $\sigma_{k,i}$'s and the $\gamma_{k,i}$'s in order to obtain longer open paths.
Note that the paths $\gamma_{k,i}$ are necessary to avoid any issue coming from the fact that the balls $B(x_{k,i}, L_k^2/100)$ are not necessarily open.
In order to join such open paths, we define a new \emph{open} path $\sigma_k$ that goes through $\sigma_{k,i}$ and $\gamma_{k,i}$, for \emph{all} values of $i=0,\ldots, L_{k+1}^2/L_k^2-1$.
Now, by construction, we have
\[
\textnormal{diam}(\sigma_k)\geq d(x_{k,0},x_{k,L_{k+1}^2/L_k^2-1})-L_k^2/100\geq \left ( \frac{L_{k+1}^2}{L_k^2}-1\right )\left ( \frac{L_k^2}{10}\right )- \frac{L_k^2}{100}\geq \frac{L_{k+1}^2}{100}.
\]
Now observe that since $\mathcal{G}_0$ is realized, for all $k\geq k_0$ the paths $\sigma_k$ and $\sigma_{k+1}$ must lie in the same (open) connected component.
In fact, since we are assuming $\S(x_{k,0},L_{k+1})^c$, before $\sigma_k$ and $\sigma_{k+1}$ can find ``a way out'' from the ball $ B(x_{k,0}, L_{k+1}^2)$, they will have already gained a diameter of at least $L_{k+1}^2/100$.
Proceeding inductively on $k\geq k_0$ we get the statement.
\end{proof}
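The elementary arithmetic behind the two diameter bounds in the proof above can be double-checked with representative scales; the values of $\gamma$ and $L_k$ below are hypothetical, and the bounds hold as soon as $L_{k+1}^2 \geq (11/9)\,L_k^2$.

```python
# Arithmetic behind the diameter bounds (parameters illustrative; the
# renormalization uses L_{k+1} = L_k^gamma with gamma > 1 and L_k large).
gamma = 1.5
Lk = 100.0
Lk1 = Lk ** gamma   # L_{k+1}

# diam(sigma_{k,i}) >= d(x_{k,i}, x_{k,i+1}) - 2 L_k^2/100 >= L_k^2/50
assert Lk**2 / 10 - 2 * Lk**2 / 100 >= Lk**2 / 50

# diam(sigma_k) >= (L_{k+1}^2/L_k^2 - 1)(L_k^2/10) - L_k^2/100 >= L_{k+1}^2/100
lower = (Lk1**2 / Lk**2 - 1) * (Lk**2 / 10) - Lk**2 / 100
assert lower >= Lk1**2 / 100
```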
The next result gives a sufficient condition that will imply Lemma~\ref{lemma:lego}.
\begin{Claim}\label{claim:(4.13)}
Assuming that $\mathcal{G}_0$ is realized, there exists a unique infinite (open) connected component $\mathcal{C}_\infty$.
Moreover, denoting by $\mathcal{C}_x$ the connected component containing any vertex $x\in V$, either $\mathcal{C}_x\equiv \mathcal{C}_\infty$, or $\textnormal{diam}(\mathcal{C}_x)\leq L_{k_0}^2$.
\end{Claim}
\begin{proof}
First of all, observe that the infinite cluster has to be unique due to $\mathcal{G}_0$, since the existence of two or more infinite components would imply that $\S(x_{k,0},L_k)$ holds for all but finitely many $k$'s.
Furthermore, the fact that either $\mathcal{C}_x\equiv \mathcal{C}_\infty$ or $\textnormal{diam}(\mathcal{C}_x)\leq L_{k_0}^2$ is a consequence of the following observation.
If $\textnormal{diam}(\mathcal{C}_x)> L_{k_0}^2$, but $\mathcal{C}_x\neq \mathcal{C}_\infty$, we would find two disjoint components intersecting the ball $ B(x_{k,0}, L_k^2/100)$, for some $k\geq k_0$.
But this fact would contradict Lemma~\ref{lemma:(4.16)}, hence the statement is proven.
\end{proof}
Finally we have everything in place to prove Lemma~\ref{lemma:lego}.
\begin{proof}[Proof of Lemma~\ref{lemma:lego}]
By putting together Claims~\ref{claim:(4.14)} and \ref{claim:(4.13)} we obtain the first half of the Lemma.
More precisely, we obtain that if $p_\star <p<1$ (with $p_\star$ found in Lemma~\ref{lemma:pk_decay}), and hence $ p_k\leq L_k^{-\beta}$ for all $k\geq 1$, then there is a unique infinite cluster $\mathcal{C}_\infty$ almost surely.
Regarding the second part, we observe that for every value $k_0$ large enough we have
\[
\P \bigl (L_{k_0}^2<\textnormal{diam}(\mathcal{C}_x) <\infty \bigr ) \ \stackrel{\textnormal{Claims~\ref{claim:(4.14)},~\ref{claim:(4.13)}}}{\leq } \ cL_{k_0}^{-\gamma \chi}\ \stackrel{\eqref{eq:condition_gamma}}{<} \ cL_{k_0}^{-\chi}.
\]
This concludes the proof of Lemma~\ref{lemma:lego}.
\end{proof}
\section{Examples}\label{s:examples}
This section is devoted to giving some examples of dependent percolation processes for which our results apply.
These examples include loop soups, germ-grain models and divide and color percolation.
\subsection{Loop soups}
The model of loop soups was informally introduced by Symanzik in \cite{Sym69} and was rigorously defined in \cite{LW04} in the context of Brownian loops.
The model has been intensively studied, see for example \cite{zbMATH06093904} and \cite{zbMATH06340288}, displaying some very interesting percolation features, see \cite{2014arXiv1403.5687C}.
To properly define this model, we start by introducing a space of closed loops on $G$
\begin{equation*}
W = \Big\{(x_0, \dots, x_{k-1}) \in V^k; k \geq 1, \text{$x_0 = x_{k-1}$ and $\{x_i, x_{i-1}\} \in E$ for all $i < k$} \Big\}.
\end{equation*}
We now fix a parameter $\kappa > 0$ and endow the countable space $W$ with the measure
\begin{equation}
\label{e:loop_measure}
\mu(w) = \frac{1}{k} \Big( \frac{1}{\Delta(1 + \kappa)} \Big)^k,
\end{equation}
where $k$ gives the length of the loop $w$ and $\Delta$ is the degree of any given vertex in $G$.
We define an equivalence relation on $W$, identifying two loops (and denoting this by $w \sim w'$) if they have the same length $k$ and there exists $j \geq 1$ such that $w(i) = w'(i + j)$ for all $i$, where the index addition is taken in $\mathbb{Z}/(k \mathbb{Z})$.
Given the equivalence relation $\sim$, we define the space of unmarked loops $W^*$ as $W / \sim$ and define the push forward $\mu^*$ of $\mu$ under the canonical projection from $W$ to $W^*$.
The process we are interested in is a Poisson point process $\omega^\beta$ on $W^*$ with intensity $\beta \mu^*$, where $\beta > 0$ is a parameter controlling the number of loops that enter the picture.
We will be interested in both the occupied and vacant set left by the loop soup, or more precisely:
$\mathcal{L}^\beta = \cup_{w \in \text{supp}(\omega^\beta)} \text{Range}(w)$ and $\mathcal{V}^\beta = V \setminus \mathcal{L}^\beta$.
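As a quick sanity check on the definition \eqref{e:loop_measure}, one can verify numerically that the total $\mu$-mass of loops through a fixed vertex is finite and that the mass carried by long loops decays exponentially. The sketch below uses the crude estimate that there are at most $\Delta^k$ loops of length $k$ through a given vertex (an assumption of this sketch, not a statement from the text), which reduces the computation to the series $\sum_k x^k/k$ with $x = (1+\kappa)^{-1}$.

```python
import math

kappa = 0.5
x = 1.0 / (1.0 + kappa)   # Delta^k loops, each of mass (1/k)(Delta(1+kappa))^{-k}

# Total mu-mass of loops through a fixed vertex is at most
# sum_k (1/k) x^k = -log(1 - x), finite for every kappa > 0.
total = sum(x**k / k for k in range(1, 2000))
assert abs(total - (-math.log(1.0 - x))) < 1e-9

# Loops connecting B(x, r) to B(x, 3r/2)^c have length at least of order r,
# and their mass decays exponentially -- the source of exp{-c(kappa) r} below.
r = 50
tail = sum(x**k / k for k in range(r, 4000))
assert tail <= x**r / (r * (1.0 - x))
```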
Let us state a decoupling inequality for this model, inspired by (2.15) of \cite{Szn09}.
\begin{Lemma}
Fix $\beta, \kappa > 0$.
Then, for $r \geq 1$, $J \geq 2$, points $x_1, x_2, \ldots,x_{J} \in V$ satisfying
\[
\min_{1\leq i < j\leq J}d(x_i,x_j)\geq 3r
\]
and for events $\mathcal{G}_1, \ldots, \mathcal{G}_{J} $ such that $ \mathcal{G}_i \in \sigma (Y_z, z\in B(x_i, r))$ we have
\begin{equation}
\mathbb{P}(\mathcal{G}_1 \cap \dots \cap \mathcal{G}_J) - \mathbb{P}(\mathcal{G}_1) \cdots \mathbb{P}(\mathcal{G}_J) \leq 2 \beta J \exp\{- c(\kappa) r\}.
\end{equation}
\end{Lemma}
\begin{proof}
Let us first define the sets
\begin{equation}
W_i = \big\{ w \in W; \Range(w) \subseteq B(x_i, 3r/2) \big\}.
\end{equation}
We denote by $\omega^\beta_i$ the Poisson point process $\omega^\beta$ restricted to $W_i$, for $i = 1, \dots, J$.
Note that the $\omega^\beta_i$'s are independent, since their supporting sets $W_i$ are disjoint.
Writing $\mathcal{G}'_i$ for the event $\mathcal{G}_i$ evaluated for the trimmed point process $\omega^\beta_i$, we can estimate
\begin{equation}
\begin{split}
\big| \mathbb{P}(\mathcal{G}_1 & \cap \dots \cap \mathcal{G}_J) - \mathbb{P}(\mathcal{G}_1) \cdots \mathbb{P}(\mathcal{G}_J) \big|\\
& \leq \big| \mathbb{P}(\mathcal{G}_1 \cap \dots \cap \mathcal{G}_J) - \mathbb{P}(\mathcal{G}'_1 \cap \dots \cap \mathcal{G}'_J) \big| + \big| \mathbb{P}(\mathcal{G}_1) \cdots \mathbb{P}(\mathcal{G}_J) - \mathbb{P}(\mathcal{G}'_1) \cdots \mathbb{P}(\mathcal{G}'_J) \big|\\
& \leq 2 J \smash{\sup_i} \mathbb{P}[\mathcal{G}_i \Delta \mathcal{G}'_i] \leq 2 J \sup_i \mathbb{P}\Big[
\begin{array}{c}
\text{there is $w \in \text{supp}(\omega^\beta)$ intersecting}\\
\text{both $B(x_i, r)$ and $B(x_i, 3r/2)^c$}.
\end{array}\Big]
\end{split}
\end{equation}
In order to bound the last term in the above equation we make use of the definition of the intensity measure in \eqref{e:loop_measure}: any loop intersecting both $B(x_i, r)$ and $B(x_i, 3r/2)^c$ must have length at least $r/2$, and the mass that \eqref{e:loop_measure} assigns to such loops decays exponentially in $r$, finishing the proof of the lemma.
\end{proof}
We are now in position to state the first application of our main result.
\begin{Theorem}
Let $G$ be a $\uc{c:rough_trans}$-roughly transitive graph satisfying $\mathcal{V}(c_u, d_u)$ and $\mathcal{I}(c_i, d_i)$ for some $d_i > 1$, fix $\kappa > 0$ and define the Poisson point process on $G$ as above.
Then,
\begin{enumerate}[\quad a)]
\item For $\beta > 0$ small enough, almost surely the set $\mathcal{L}^\beta$ contains no infinite connected component, while $\mathcal{V}^\beta$ contains a unique one.
\item On the other hand, if $\beta > 0$ is large enough, almost surely there exists a unique infinite cluster in $\mathcal{L}^\beta$, but $\mathcal{V}^\beta$ is composed solely of finite components.
\end{enumerate}
\end{Theorem}
This result allows us to define two critical values corresponding to the appearance of infinite clusters in $\mathcal{L}^\beta$ and $\mathcal{V}^\beta$.
\begin{Remark}
Note that the above arguments apply beyond loop soups; in fact, they should work for any \emph{germ-grain} model.
These models are defined as a decorated Poisson point process, where each point gets associated with a random object to be inserted in the graph.
Provided the random objects have sufficiently light tails (for instance, exponentially bounded), the above proof should work equally well for such models.
\end{Remark}
\subsection{Divide and color}
The divide and color model was introduced by H\"{a}ggstr\"{o}m in \cite{haggstrom_coloring}, and it is a process that is governed by two parameters ($p, q \in [0, 1]$) and evolves in two steps.
In this section we will follow the description in \cite{BBT13}, to which the reader is referred for more details and further results.
\begin{enumerate}
\item Firstly we perform a Bernoulli percolation on the edges of $G$, i.e. each edge of the given graph is retained with probability $p$, independently of each other.
This partitions the vertices of $G$ into clusters, corresponding to the connected components induced by open edges.
\item Secondly we color the resulting connected components either \emph{black} or \emph{white} with probability $q$ or $1-q$ respectively, independently for distinct components.
All vertices of a component take the same color, which induces dependence in this site percolation model.
\end{enumerate}
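The two-step construction above is straightforward to simulate. The following sketch (on a finite grid, with a union-find structure for the clusters; all concrete choices here are illustrative, not from the paper) makes the dependence mechanism concrete: colours are attached to clusters, not to individual vertices.

```python
import random

def divide_and_color(n, p, q, seed=0):
    """Two-step sketch on the n x n grid (illustrative parameters).

    Step 1: retain each nearest-neighbour edge with probability p;
    Step 2: colour each resulting cluster black (1) with probability q."""
    rng = random.Random(seed)
    parent = {(i, j): (i, j) for i in range(n) for j in range(n)}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    # Step 1: Bernoulli(p) bond percolation, merging clusters via union-find.
    for i in range(n):
        for j in range(n):
            for (a, b) in ((i + 1, j), (i, j + 1)):
                if a < n and b < n and rng.random() < p:
                    parent[find((i, j))] = find((a, b))

    # Step 2: one i.i.d. Bernoulli(q) colour per cluster.
    colour_of_root = {}
    site = {}
    for v in parent:
        r = find(v)
        if r not in colour_of_root:
            colour_of_root[r] = 1 if rng.random() < q else 0
        site[v] = colour_of_root[r]
    return site

sites = divide_and_color(20, 0.3, 0.5)
# Vertices in the same cluster share a colour by construction; here we only
# check that the output is a full {0,1}-colouring of the grid.
assert len(sites) == 400 and set(sites.values()) <= {0, 1}
```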
The rest of this section is devoted to proving that the decoupling condition \eqref{e:decouple_various} holds true for this model under some conditions on the parameter $p$.
In order to do so, we need to introduce some further notation.
We start by defining a Bernoulli percolation, associating with each edge $e$ a random variable $\eta(e) \in \{0,1\}$ that takes the value $1$ with probability $p$ and the value $0$ with probability $1 - p$.
Given such an assignment, the vertices of $G$ split into clusters, and we associate a random variable $\xi(\mathcal{C}) \in \{0,1\}$ with each \emph{connected component} $\mathcal{C}$ determined above.
The variables $\xi(\mathcal{C})$ are i.i.d.\ and satisfy $\P(\xi = 1) = q$ (black) and $\P(\xi = 0) = 1 - q$ (white).
Finally, we re-open all edges (essentially forgetting the variables $\eta(e)$) and we ask ourselves whether there exists an infinite cluster of black sites in the above coloring.
Let $\mu_{p,q}$ denote the measure governing the site-percolation process as described above.
Then \cite[Proposition 2.5]{haggstrom_coloring} assures that for any graph $G$ and any $p \in [0, 1]$ there exists a critical value $q_\star^G(p)\in [0,1]$ such that
\[
\mu_{p,q} (\text{there exists an infinite black }q\text{-cluster})
= \left \{
\begin{array}{ll}
=0 & \text{ if }q < q_\star^G(p),\\
>0 & \text{ if }q > q_\star^G(p).
\end{array}
\right .
\]
It is clear that if $p > p_c(G)$, or in other words if the first stage of the process can lead to an infinite cluster, then $q_\star^G(p) = 0$, since for every positive $q$ there is a chance that the cluster containing the origin is infinite and is painted black.
Therefore, one can focus on the subcritical and critical phases $p \leq p_c(G)$.
In the subcritical phase, there is a strong belief that the size of a typical cluster should have exponential tails.
To make this more precise, let us define the critical value for ``strong subcriticality'':
\begin{equation}
p_* = \sup\{p \in [0,1]; \text{ for some $\theta > 0$ and all $n \geq 1$, } \mathbb{P}_p[\diam(\mathcal{C}_o) \geq n] \leq \exp\{-\theta n\}\}.
\end{equation}
It is clear that $p_* \leq p_c$, and it is commonly believed that $p_c = p_*$ for a large variety of graphs.
This equality has been proved for the $d$-dimensional lattice in \cite{aizenman1987} and \cite{zbMATH03996823} and later extended to transitive graphs in \cite{AV08} and \cite{DCopinTassion15}.
Another important observation is that for any graph $G$ with degrees bounded by $\Delta$, we have $p_* \geq 1/\Delta$, as one can easily prove by a path-counting argument.
Intuitively speaking, once $p < p_*$, then the clusters are small and the dependence of the \emph{divide and color} model should be short-ranged.
This is made precise in the following proposition.
\begin{Proposition}
Fix a graph $G$ and let $p < p_*(G)$.
Then, for any $\alpha > 0$ there exists a constant $c_\alpha = c_\alpha(p, \alpha, J)$ for which the condition \eqref{e:decouple_various} holds for the divide and color model on $G$ for any $q \in [0,1]$.
As a consequence, if $G$ is roughly transitive, has polynomial growth and isoperimetric dimension larger than one, then $q_\star^G(p) < 1$.
\end{Proposition}
\begin{proof}
Given $x_1, \dots, x_{J} \in V$ and $r \geq 1$, we are going to construct a simple decoupling of what happens in the various regions $B(x_j, r)$.
For this, we define $J + 1$ independent percolation measures on $G$.
More precisely, let $(\eta_j(e))_{e \in E}$ be independent Bernoulli variables (all of them i.i.d. with parameter $p$), for $j = 0, 1, \dots, J$.
We also define a mixed configuration $\eta_\textnormal{mix}$ which is given by
\begin{equation}
\eta_\textnormal{mix}(e) =
\begin{cases}
\eta_j(e), \qquad & \text{if $e \in B(x_j, 3r/2)$,}\\
\eta_0(e), & \text{otherwise.}
\end{cases}
\end{equation}
We now use the above configurations to construct $J + 1$ instances of the \emph{divide and color} model, which will be denoted by $(Y^j_x)_{x \in V}$, $j = 1, \dots, J$ and $j =$ mix.
Each of them uses the clusters determined by its respective edge configuration ($\eta_j$ or $\eta_\textnormal{mix}$) defined above.
Moreover, we add the restriction that if a given cluster of $\eta_j$ is contained in $B(x_j, 3r/2)$ (in which case it coincides with that of $\eta_\textnormal{mix}$), then both $Y^j$ and $Y^\textnormal{mix}$ will assign the same color to this cluster during the coloring stage.
Note that $Y^\textnormal{mix}$ has the correct law of the model, and we are now in position to prove that it satisfies \eqref{e:decouple_various}.
For this, fix events $\mathcal{G}_1, \dots, \mathcal{G}_J$ as in Proposition~\ref{claim:lemma4.2} and estimate
\begin{equation}
\begin{split}
\mathbb{P} \Big(\mathcal{G}_1(Y^\textnormal{mix}) & \cap \dots \cap \mathcal{G}_J(Y^\textnormal{mix}) \Big) - \mathbb{P} \big( \mathcal{G}_1(Y^1)\big) \cdots \mathbb{P}\big(\mathcal{G}_J(Y^J)\big)\\
& \leq \mathbb{P} \big( Y^\textnormal{mix}_x \neq Y^j_x, \text{ for some $j \leq J$, $x \in B(x_j, r)$} \big)\\
& \leq \mathbb{P} \big( \text{for some $j$, an open path in $\eta_j$ connects $B(x_j, r)$ to $B(x_j, 3r/2)$} \big)\\
& \overset{p < p_*}\leq J \exp\{-\theta r\}.
\end{split}
\end{equation}
This finishes the proof of the proposition by properly choosing the constant $c_\alpha$.
\end{proof}
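The path-counting argument behind the bound $p_* \geq 1/\Delta$ mentioned above can be sketched numerically as follows; the parameters are hypothetical, and the estimate is the crude union bound over at most $\Delta^n$ paths.

```python
import math

# Sketch of the path-counting argument (illustrative parameters): a graph
# with degrees bounded by Delta has at most Delta^n self-avoiding paths of
# length n from a fixed vertex o, and each is fully open with probability p^n.
# A union bound gives
#   P[diam(C_o) >= n] <= (Delta * p)^n = exp(-theta * n),
# with theta = -log(Delta * p) > 0 whenever p < 1/Delta.
Delta, p, n = 4, 0.2, 30
assert p < 1 / Delta

theta = -math.log(Delta * p)
bound = (Delta * p) ** n
assert theta > 0
assert bound <= math.exp(-theta * n) * (1 + 1e-12)
```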
\subsection{Slow decay of dependence}
\label{ss:elipses}
Let us briefly comment on the decay of correlation that we have assumed on the law $\mathbb{P}$.
It has been proved in \cite[Theorem~1.1]{BLPS97} that if $G$ is an amenable Cayley graph, then for any $p < 1$ there exists some invariant percolation law $\bar{\mathbb{P}}$ on $G$ such that $\bar{\mathbb{P}}[Y_o = 1] > p$ but the set $\{x; Y_x = 1\}$ does not percolate.
In contrast with this statement, Theorem~\ref{thm:p_c<1_dependent} states the existence of an absolute constant $p^*$ above which every percolation law satisfying $\mathcal{D}(\alpha, c_\alpha)$ admits a unique infinite open cluster.
This distinction is clearly a consequence of the quantitative decay of correlations that we have assumed through $\mathcal{D}(\alpha, c_\alpha)$.
A natural question at this point is about the sharpness of Theorem~\ref{thm:p_c<1_dependent}.
For instance, is it true that Theorem~\ref{thm:p_c<1_dependent} still holds true if we replace the polynomial decay assumption by some slower one?
To shed some light on this question, let us mention an example from \cite{TW10b}.
It consists of a family of dependent percolation measures $(\mathbb{P}^u)_{u > 0}$ that satisfy a polynomial decay of correlations.
However, the exponent $\alpha$ appearing in the decay is not sufficiently high, so that for all $u > 0$, there is $\mathbb{P}^u$-a.s. no percolation for $\{x; Y_x = 1\}$, despite the fact that $\mathbb{P}^u [Y_o = 1]$ converges to one as $u$ tends to zero.
More precisely, in \cite{TW10b} the authors define a Poisson process on $\mathbb{R}^d$ which determines a set of lines passing through the space.
The intensity of this process is given by a non-trivial Haar measure on the space of lines, which is invariant under translations and rotations and unique up to scaling.
Having defined this process of lines, one removes from $\mathbb{R}^d$ the cylinders of radius one and axis centered in these lines.
The resulting set is called $\mathcal{V}$.
By varying the intensity of the Poisson process, a phase transition in the percolation of $\mathcal{V}$ occurs for all $d \geq 3$, see \cite[Theorems 4.1 and 5.1]{TW10b} and \cite{HST12}.
In our setting we look at the intersection of $\mathcal{V}$ and $\mathbb{R}^2$, where $\mathcal{V}\subset \mathbb{R}^3$.
In this case, the cylinders intersected with the plane consist of ellipses with random major axis size.
In Proposition~5.6 of \cite{TW10b}, they show that, for every intensity $u > 0$ of the Poisson process,
there is no infinite component in $\mathcal{V} \cap \mathbb{R}^2$.
On the other hand, the model satisfies a condition very similar to $\mathcal{D}(\alpha, c_\alpha)$ with $\alpha = 2$, see Lemma~3.3 of \cite{TW10b}.
\bibliographystyle{amsalpha}
| {
"timestamp": "2016-02-17T02:09:07",
"yymm": "1507",
"arxiv_id": "1507.07765",
"language": "en",
"url": "https://arxiv.org/abs/1507.07765",
"abstract": "In this paper we study percolation on a roughly transitive graph G with polynomial growth and isoperimetric dimension larger than one. For these graphs we are able to prove that p_c < 1, or in other words, that there exists a percolation phase. The main results of the article work for both dependent and independent percolation processes, since they are based on a quite robust renormalization technique. When G is transitive, the fact that p_c < 1 was already known before. But even in that case our proof yields some new results and it is entirely probabilistic, not involving the use of Gromov's theorem on groups of polynomial growth. We finish the paper giving some examples of dependent percolation for which our results apply.",
"subjects": "Probability (math.PR)",
"title": "Percolation and isoperimetry on roughly transitive graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9817357195106375,
"lm_q2_score": 0.7217432182679957,
"lm_q1q2_score": 0.7085610976882539
} |
https://arxiv.org/abs/1912.04901 | Closed graphs and open maps | We offer a new perspective on the closed graph theorem and the open mapping theorem for separated barrelled spaces and fully complete spaces. | \section*{Introduction}
\medbreak
The `classical' closed graph theorem asserts that a linear function between two Banach spaces is continuous if its graph is closed; the `classical' open mapping theorem asserts that a surjective continuous linear function between Banach spaces is open. Although these two theorems are very different in form, they are traditionally regarded as being equivalent, since either is an essentially elementary consequence of the other. In [1] these two theorems are generalized to a context in which they become equivalent in a very precise sense, and the two classical theorems become special cases of one underlying (or overarching) theorem.
\medbreak
The `classical' theorems themselves have been generalized considerably. Let $X$ be a separated barrelled space and $Y$ a fully complete space: a general version of the closed graph theorem asserts that a linear function from $X$ to $Y$ is continuous if its graph is closed; a general version of the open mapping theorem asserts that a surjective continuous linear function from $Y$ to $X$ is open. For these general versions, see [2] Chapter VI; see also [3] Chapter 12-5 and Chapter 12-4. Our aim in the present paper is to realize these two general versions as special cases of one `master theorem'.
\medbreak
As preparation, it is convenient to set out here some notation pertaining to relations. Let $X$ and $Y$ be sets; a relation from $X$ to $Y$ is a subset $\Gamma \subseteq X \times Y$ of their product. When $A \subseteq X$ and $B \subseteq Y$ we write
$$A \, \Gamma = \{y : (\exists a \in A) \, (a, y) \in \Gamma \} \subseteq Y$$
and
$$ \Gamma B= \{x : (\exists b \in B) \, (x, b) \in \Gamma \} \subseteq X.$$
In particular, $X \, \Gamma = {\rm Ran} \, \Gamma$ is the range of $\Gamma$ and $\Gamma Y = {\rm Dom} \, \Gamma$ is the domain of $\Gamma.$ As a special case, $\Gamma$ might be the graph ${\rm gra} \, \phi = \{ (x, \phi (x)) : x \in X \} \subseteq X \times Y$ of a function $\phi : X \to Y$; in this case, $A \, \Gamma$ is the direct image $\phi \, (A)$ and $\Gamma B$ is the inverse image $\phi^{-1} \, (B)$.
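A finite toy example (not part of the paper's setting, where $X$ and $Y$ carry linear and topological structure) may help fix the notation: for the graph of a function, the two images $A\,\Gamma$ and $\Gamma B$ reduce to the direct and inverse images.

```python
# Finite toy illustration: when Gamma is the graph of a function phi,
# A Gamma is the direct image phi(A) and Gamma B is the inverse image
# phi^{-1}(B).
X = {0, 1, 2, 3}
Y = {0, 1, 2}
phi = {0: 0, 1: 1, 2: 1, 3: 2}
Gamma = {(x, phi[x]) for x in X}           # gra(phi) as a relation in X x Y

def forward(A, Gamma):
    """A Gamma = {y : there is a in A with (a, y) in Gamma}."""
    return {y for (a, y) in Gamma if a in A}

def backward(Gamma, B):
    """Gamma B = {x : there is b in B with (x, b) in Gamma}."""
    return {x for (x, b) in Gamma if b in B}

A, B = {1, 2}, {1, 2}
assert forward(A, Gamma) == {phi[a] for a in A}              # phi(A)
assert backward(Gamma, B) == {x for x in X if phi[x] in B}   # phi^{-1}(B)
assert forward(X, Gamma) == Y    # X Gamma = Ran(Gamma)
assert backward(Gamma, Y) == X   # Gamma Y = Dom(Gamma)
```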
\medbreak
\section*{Theorem}
\medbreak
Let $X$ be a separated (or Hausdorff) barrelled space and $Y$ a fully complete (or Ptak) space.
\medbreak
\noindent
{\bf Theorem.}
{\it Let $\Gamma \subseteq X \times Y$ be a closed linear relation with ${\rm Dom}\, \Gamma = X$. If $B \subseteq Y$ is open then $\Gamma \, B \subseteq X$ is open}.
\medbreak
This theorem has the closed graph theorem (CGT) and the open mapping theorem (OMT) as immediate corollaries.
\medbreak
\noindent
{\bf Closed graph theorem}. {\it If $\phi : X \to Y$ is a linear function with closed graph, then $\phi$ is continuous.}
\begin{proof}
Apply the Theorem when $\Gamma$ is the graph of $\phi$. The condition ${\rm Dom}\, \Gamma = X$ holds by definition of function. Note that here, if $B \subseteq Y$ then $\Gamma \, B = \phi^{-1} \, (B)$.
\end{proof}
\medbreak
\noindent
{\bf Open mapping theorem}. {\it If $\psi : Y \to X$ is a surjective linear function with closed graph, then $\psi$ is open.}
\begin{proof}
Apply the Theorem when $\Gamma$ is the transpose of the graph of $\psi$. The condition ${\rm Dom}\, \Gamma = X$ holds by definition of surjectivity. Note that here, if $B \subseteq Y$ then $\Gamma \, B = \psi \, (B)$.
\end{proof}
\medbreak
These versions of CGT and OMT appear in Chapter 12 of [3]: CGT as Theorem 12-5-7 and OMT as Theorem 12-4-9. They also appear in Chapter VI of [2]: CGT as Theorem 6 and OMT as Theorem 7; there, Theorem 6 stems from Proposition 10(i) and Theorem 7 stems from Proposition 10(ii); also, Proposition 10(ii) depends on Proposition 10(i). In practical terms, our proof of the main Theorem simplifies the nature and the function of this Proposition 10. The theoretical value of our perspective is perhaps greater than the practical: the viewpoint of the present paper is that both CGT and OMT are simply special cases of one `master theorem'.
\medbreak
\section*{Proof}
\medbreak
Here we prove the main Theorem, preparing the way with a Lemma and a Proposition.
\medbreak
We begin quite generally. Let $X$ and $Y$ be vector spaces and let $\Gamma$ be a linear subspace of the product $X \times Y$ such that ${\rm Dom}\,\Gamma = X$. The embedding $i_Y : Y \to X \times Y : y \mapsto (0, y)$ being a linear map, the inverse image $Y_{\Gamma} = i_Y^{-1} (\Gamma)$ is a linear subspace of $Y$. Let $x \in X$: as $X = {\rm Dom} \, \Gamma$ there exists $y \in Y$ such that $(x, y) \in \Gamma$; if also $y' \in Y$ and $(x, y') \in \Gamma$ then $(0, y' - y) = (x, y') - (x, y) \in \Gamma$ so that $y' - y \in Y_{\Gamma}$. Accordingly, a linear function
$$\gamma : X \to Y/Y_{\Gamma}$$
is well-defined by the rule that if $x \in X$ then
$$\gamma (x) = y + Y_{\Gamma}$$
for any choice of $y \in Y$ such that $(x, y) \in \Gamma$. It is straightforward to check that if $B \subseteq Y$ then
$$\Gamma \, B = \gamma^{-1} (B + Y_{\Gamma}).$$
\medbreak
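The identity $\Gamma \, B = \gamma^{-1} (B + Y_{\Gamma})$ can be checked exhaustively in a toy discrete setting; the sketch below works over the vector space $\mathbb{F}_2^2$ with one particular linear relation, a choice made purely for illustration.

```python
from itertools import combinations, product

# Toy check over GF(2): with X = Y = F_2^2 and the linear relation
# Gamma = {(x, y) : y_1 = x_1}, which satisfies Dom(Gamma) = X, the identity
# Gamma B = gamma^{-1}(B + Y_Gamma) holds for every subset B of Y.
F2sq = list(product((0, 1), repeat=2))
Gamma = {(x, y) for x in F2sq for y in F2sq if y[0] == x[0]}

Y_Gamma = {y for y in F2sq if ((0, 0), y) in Gamma}   # i_Y^{-1}(Gamma)

def coset(y):
    # the coset y + Y_Gamma, represented as a frozenset of vectors
    return frozenset(tuple((a + b) % 2 for a, b in zip(y, w)) for w in Y_Gamma)

def gamma(x):
    # gamma(x) = y + Y_Gamma for any y with (x, y) in Gamma (well defined)
    y = next(y for y in F2sq if (x, y) in Gamma)
    return coset(y)

for r in range(len(F2sq) + 1):
    for B in map(set, combinations(F2sq, r)):
        lhs = {x for x in F2sq if any((x, b) in Gamma for b in B)}   # Gamma B
        rhs = {x for x in F2sq if gamma(x) in {coset(b) for b in B}}
        assert lhs == rhs
```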
Now, let $X$ and $Y$ be locally convex topological vector spaces.
\medbreak
\noindent
{\bf Lemma.} {\it If $\, \Gamma$ is closed then the graph of $\,\gamma$ is closed.}
\begin{proof}
Let $(x_0, y_0 + Y_{\Gamma}) \in X \times Y/\,Y_{\Gamma}$ be a point outside the graph ${\rm gra} \, \gamma$ of $\gamma$; thus, $(x_0, y_0) \in X \times Y$ is a point outside $\Gamma$. Choose a continuous linear functional $\ell \in (X \times Y)'$ such that $\ell |_{\Gamma} = 0$ but $\ell (x_0, y_0) \neq 0$; see [3] Corollary 7-2-12. Let $q : X \times Y \to X \times Y/\, Y_{\Gamma}$ be the (open) quotient map and let $W = (X \times Y) \setminus {\rm Ker} \, \ell \subseteq X \times Y$ be the open set on which $\ell$ is nonzero. The proof ends with the claim that the open neighbourhood $q(W) \subseteq X \times Y/\, Y_{\Gamma}$ of $(x_0, y_0 + Y_{\Gamma})$ is disjoint from ${\rm gra} \, \gamma$. To justify this claim, let $(x, y + Y_{\Gamma}) \in q(W)$: there exists $(x', y') \in W$ such that $(x', y') - (x, y) \in {\rm Ker}\,q = 0 \times Y_{\Gamma} \subseteq \Gamma \subseteq {\rm Ker}\,\ell$ whence $\ell (x, y) = \ell (x', y') \neq 0$ and therefore $(x, y) \notin \Gamma$; it follows that $(x, y + Y_{\Gamma}) \notin {\rm gra} \, \gamma$.
\end{proof}
\medbreak
Next, assume that $X$ and $Y$ are separated locally convex topological vector spaces, and that the linear subspace $\Gamma \subseteq X \times Y$ is closed. Assume further that $Y$ is fully complete. Denote by $\mathcal{N}(X)$ the set of (not necessarily open) neighbourhoods of zero in $X$.
\medbreak
\noindent
{\bf Proposition.} {\it If
$$B \in \mathcal{N}(Y) \Rightarrow \overline{\Gamma \, B} \in \mathcal{N}(X)$$
then
$$B \subseteq Y \; {\rm open} \; \Rightarrow \; \Gamma \, B \subseteq X \; {\rm open}.$$}
\begin{proof}
As the subspace $Y_{\Gamma} = i_Y^{-1} (\Gamma) \subseteq Y$ is closed and $Y$ is fully complete, the quotient $Y/\,Y_{\Gamma}$ is also fully complete: see the Corollary to Proposition 9 in Chapter VI of [2]; see also Theorem 12-4-5 in [3]. Each $\beta \in \mathcal{N}(Y/\,Y_{\Gamma})$ has the form $\beta = B + Y_{\Gamma}$ for some $B \in \mathcal{N}(Y)$ and the identity recorded prior to the Lemma implies that $\overline{\gamma^{-1} (\beta)} = \overline{\Gamma \, B}$; thus $\beta \in \mathcal{N}(Y/\,Y_{\Gamma}) \Rightarrow \overline{\gamma^{-1} (\beta)} \in \mathcal{N}(X)$ and so $\gamma: X \to Y/\,Y_{\Gamma}$ is nearly continuous in the sense of [2]. With the Lemma, all the pieces are now in place for an application of Proposition 10(i) in Chapter VI of [2]: the linear map $\gamma$ is continuous, whence if $B \subseteq Y$ is open then so is $\Gamma \, B = \gamma^{-1} (B + Y_{\Gamma}) \subseteq X$.
\end{proof}
\medbreak
Thus, the two parts of Proposition 10 in Chapter VI of [2] become one when lifted from the context of linear functions to the context of linear relations: they are two special cases of one `master proposition'.
\medbreak
Finally, we prove our main Theorem, which we restate for convenience. Recall the context: $X$ is a separated barrelled space and $Y$ is a fully complete space.
\medbreak
\noindent
{\bf Theorem.}
{\it Let $\Gamma \subseteq X \times Y$ be a closed linear relation with ${\rm Dom}\, \Gamma = X$. If $B \subseteq Y$ is open then $\Gamma \, B \subseteq X$ is open}.
\begin{proof}
Let $B \in \mathcal{N}(Y)$ be absolutely convex; as a neighbourhood of zero, $B$ is then absolutely convex and absorbing: as $\Gamma \subseteq X \times Y$ is linear and has the whole of $X$ for its domain, it follows that $\Gamma \, B \subseteq X$ is absolutely convex and absorbing, whence its closure $\overline{\Gamma \, B} \subseteq X$ is a barrel; as $X$ is a barrelled space, it follows that $ \overline{\Gamma \, B} \in \mathcal{N}(X)$. As $\mathcal{N}(Y)$ has a base consisting of absolutely convex neighbourhoods of zero, an application of the Proposition concludes the proof of the Theorem.
\end{proof}
\medbreak
\bigbreak
\begin{center}
{\small R}{\footnotesize EFERENCES}
\end{center}
\medbreak
[1] R.S. Monahan and P.L. Robinson, {\it The closed graph theorem is the open mapping theorem}, arXiv 1912.02626 (2019).
\medbreak
[2] A.P. Robertson and W.J. Robertson, {\it Topological Vector Spaces}, Cambridge University Press, Second Edition (1973).
\medbreak
[3] A. Wilansky, {\it Modern Methods in Topological Vector Spaces}, McGraw-Hill (1978); Dover Publications (2013).
\medbreak
\end{document}
| {
"timestamp": "2019-12-12T02:00:21",
"yymm": "1912",
"arxiv_id": "1912.04901",
"language": "en",
"url": "https://arxiv.org/abs/1912.04901",
"abstract": "We offer a new perspective on the closed graph theorem and the open mapping theorem for separated barrelled spaces and fully complete spaces.",
"subjects": "Functional Analysis (math.FA)",
"title": "Closed graphs and open maps",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9817357259231532,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.70856109644056
} |
https://arxiv.org/abs/1610.06760 | Generalized Zalcman conjecture for some classes of analytic functions | For functions $f(z)= z+ a_2 z^2 + a_3 z^3 + \cdots$ in various subclasses of normalized analytic functions, we consider the problem of estimating the generalized Zalcman coefficient functional $\phi(f,n,m;\lambda):=|\lambda a_n a_m -a_{n+m-1}|$. For all real parameters $\lambda$ and $ \beta<1$, we provide the sharp upper bound of $\phi(f,n,m;\lambda)$ for functions $f$ satisfying $\operatorname{Re}f'(z) > \beta$ and hence settles the open problem of estimating $\phi(f,n,m;\lambda)$ recently proposed by Agrawal and Sahoo [S. Agrawal and S. k. Sahoo, On coefficient functionals associated with the Zalcman conjecture, arXiv preprint, 2016]. It is worth mentioning that the sharp estimations of $\phi(f,n,m;\lambda)$ follow for starlike and convex functions of order $\alpha$ $(\alpha <1)$ when $\lambda \leq 0$. Moreover, for certain positive $\lambda$, the sharp estimation of $\phi(f,n,m;\lambda)$ is given when $f$ is a typically real function or a univalent function with real coefficients or is in some subclasses of close-to-convex functions. | \section{Introduction and Preliminaries}
Let $\mathcal{A}$ be the class of all normalized analytic functions of the form $f(z)= z+ a_2 z^2 + a_3 z^3 + \cdots$ defined on the open unit disc $\mathbb{D}$. The subclass of $\mathcal{A}$ consisting of univalent functions is denoted by $\mathcal{S}$. Let $\mathcal{S}_{\mathbb{R}}$ be the class of all functions in $\mathcal{S}$ with real coefficients. For $ \alpha <1$, we denote by $\mathcal{S}^*(\alpha)$ and $\mathcal{K}(\alpha)$, the classes of functions $f \in \mathcal{A}$ satisfying $\RE \big(zf'(z)/f(z)\big) > \alpha$ and $\RE \big( 1+zf''(z)/f'(z)\big) > \alpha$ respectively. For $0 \leq \alpha <1$, these classes are subclasses of $\mathcal{S}$ and were first introduced by Robertson \cite{MR1503286} in 1936. Later, for all $\alpha <1$, these classes were considered in \cite{MR2118626, MR0338337}. The classes $\mathcal{S}^*:=\mathcal{S}^*(0)$ and $\mathcal{K}:= \mathcal{K}(0)$ represent the classes of starlike and convex functions respectively. We denote the closed convex hulls of $\mathcal{S}^*(\alpha)$ and $\mathcal{K}(\alpha)$ by $H\mathcal{S}^*(\alpha)$ and $H\mathcal{K}(\alpha)$ respectively. The class of typically real functions, denoted by $T$, consists of all functions in $\mathcal{A}$ which have real values on the real axis and non-real values elsewhere. Denote by $\mathcal{P}$, the class of all analytic functions $p(z)= 1+ c_1 z+c_2 z^2 + \cdots$ defined on $\mathbb{D}$ such that $\RE p(z)>0$. The class $\mathcal{P}_{\mathbb{R}}$ consists of all functions in $\mathcal{P}$ with real coefficients.
In 1916, Bieberbach conjectured the inequality $|a_n| \leq n$ for $f \in \mathcal{S}$. Since then, several attempts were made to prove the Bieberbach conjecture, which was finally proved by de Branges in 1985. In 1960, as an approach to the Bieberbach conjecture, Lawrence Zalcman conjectured that $|a_n^2-a_{2n-1}| \leq (n-1)^2$ $(n \geq 2)$ for $f \in \mathcal{S}$. This led to several works related to the Zalcman conjecture and its generalized version $ |\lambda a_n^2-a_{2n-1}|\leq \lambda n^2-2n+1$ $(\lambda \geq 0)$ for various subclasses of $\mathcal{S}$ \cite{ MR964850, MR3284304, efraimidis2014generalized, MR3542050, MR824446, MR3467599}, but the Zalcman conjecture itself remained open for many years for the class $\mathcal{S}$. Recently, Krushkal \cite{krushkal2014short} proved the conjecture for the class $\mathcal{S}$ by using the complex geometry of universal Teichm\"{u}ller spaces.
In 1999, Ma \cite{MR1694809} proposed a generalized Zalcman conjecture for $f \in \mathcal{S}$, namely $$|a_n a_m-a_{n+m-1}| \leq (n-1)(m-1),$$ which is still an open problem in general, although he proved it for the classes $\mathcal{S}^*$ and $\mathcal{S}_{\mathbb{R}}$. For $\lambda \in \mathbb{R}$, let $\phi(f,n,m;\lambda):= |\lambda a_n a_m-a_{n+m-1}|$ denote the generalized Zalcman coefficient functional over $\mathcal{A}$. For $\beta<1$, the class $\mathcal{C}(\beta)$ of close-to-convex functions of order $\beta$ consists of $f \in \mathcal{A}$ such that $\RE \big(z f'(z)/ \big( e^{i\theta} g(z)\big)\big) > \beta$ for some $g \in \mathcal{S}^*$ and $\theta \in \mathbb{R}$. For $0 \leq \beta < 1$, the class $\mathcal{C}(\beta)$ is a subclass of $\mathcal{S}$ and was considered in \cite{MR0160890} in a more general form. The class of close-to-convex functions is denoted by $\mathcal{C}:=\mathcal{C}(0)$; for details, see \cite{MR704184}. Let $\mathcal{F}_1(\beta)$ and $\mathcal{F}_2(\beta)$ be the subclasses of $\mathcal{C}(\beta)$ $(\beta<1)$ corresponding to $\theta=0$ and the starlike functions $g(z)=z/(1-z)$ and $g(z)=z/(1-z^2)$ respectively. For $\beta <1$, let $\mathcal{R}(\beta)$ denote the class of functions $f \in \mathcal{A}$ satisfying $\RE f'(z) > \beta$. For $0 \leq \beta <1$, $\mathcal{R}(\beta)$ is a subclass of $\mathcal{S}$ and was first introduced in \cite{MR0338338}. Here, we are interested in $\mathcal{R}(\beta)$ for all values of $\beta$ $(\beta <1)$. Recently, for some positive values of $\lambda$ and $0 \leq \beta <1$, Agrawal and Sahoo \cite{agrawal2016coefficient} gave the sharp estimation of $\phi(f,n,m;\lambda)$ for the classes $\mathcal{R}(\beta)$ and $H\mathcal{K}$.
In this paper, for all real values of $\lambda$, we give the sharp estimation of $\phi(f,n,m;\lambda)$ for $f \in \mathcal{R}(\beta)$ $(\beta<1)$. Also, for $f \in \mathcal{S}^*(\alpha)$ and $f \in \mathcal{K}(\alpha)$ $(\alpha <1)$, estimations of $\phi(f,n,m;\lambda)$ are given for all real values of $\lambda$; these are sharp when $\lambda \leq 0$ or when $\lambda$ takes certain positive values. Moreover, for certain positive values of $\lambda$, the sharp estimations of $\phi(f,n,m;\lambda)$ are provided for the classes $T$, $\mathcal{S}_{\mathbb{R}}$, $\mathcal{F}_1(\beta)$ and $\mathcal{F}_2(\beta)$ $(\beta <1)$.
We prove our results either by applying the well-known estimation of $|\lambda c_n c_m - c_{n+m}|$ for $p(z)=1+\sum_{n=1}^{\infty} c_n z^n \in \mathcal{P}$ or by applying a characterization of functions in the class $\mathcal{P}$ and of typically real functions in terms of a positive semi-definite Hermitian form, see \cite{MR760980, MR3348983}. Earlier, such a characterization of functions with positive real part in terms of a positive semi-definite Hermitian form \cite{MR760980} was used in \cite{MR1387562,MR2055766,MR3348983}. It should be pointed out that in the literature, for various subclasses of $\mathcal{S}$ which are invariant under rotations, the estimation of $\phi(f,n,n;\lambda)$ is usually obtained by using the fact that the expression $\phi(f,n,n;\lambda)$ is invariant under rotations together with an application of the Cauchy--Schwarz inequality, which requires $\lambda$ to be non-negative. However, we are able to give the sharp estimation of $\phi(f,n,m;\lambda)$ for various subclasses of $\mathcal{A}$ when $\lambda \leq 0$. Moreover, for certain positive $\lambda$, our technique gives the estimation of $\phi(f,n,m;\lambda)$ when $f$ is in some subclasses of $\mathcal{A}$ which are not necessarily invariant under rotations. We need the following lemmas to prove our results.
\begin{lemma}\cite[Lemma 2.3,\ p.\ 507]{MR3348983} \label{p4lem1}If $p(z)= 1 + \sum_{k=1}^{\infty} c_k z^k \in \mathcal{P}$, then for all $n, m \in \mathbb{N}$, $$|\mu c_n c_m - c_{n+m}| \leq \begin{cases} 2, &\text{$ 0 \leq \mu \leq 1$;}\\
2|2 \mu -1|, &\text{elsewhere.}
\end{cases} $$
The result is sharp.
\end{lemma}
\begin{lemma}\cite[Theorem 4(b), p.\ 678]{MR760980} \label{p4lem2}
A function $p(z)= 1 + \sum_{k=1}^{\infty} c_k z^k \in \mathcal{P}$ if and only if
\begin{align}
\sum_{j=0}^{\infty} \left\lbrace \left|2 z_j + \sum_{k=1}^{\infty} c_k z_{k+j} \right|^2 - \left|\sum_{k=0}^{\infty} c_{k+1} z_{k+j} \right|^2 \right\rbrace &\geq 0 \notag
\end{align}
for every sequence $\{z_k\}$ of complex numbers which satisfy $\limsup_{k \to \infty} |z_k|^{1/k} < 1$.
\end{lemma}
\begin{lemma}\cite[Theorem 4(f), p.\ 678]{MR760980}\label{p4lem2.1}
A function $f(z)= z + \sum_{k=2}^{\infty} a_k z^k \in T$ if and only if
\begin{align}
\sum_{j=0}^{\infty} \left\lbrace \left|2 z_j + \sum_{k=1}^{\infty}(a_{k+1}- a_{k-1}) z_{k+j} \right|^2 - \left|\sum_{k=0}^{\infty} (a_{k+2}-a_k) z_{k+j} \right|^2 \right\rbrace &\geq 0 \notag
\end{align}
for every sequence $\{z_k\}$ of complex numbers which satisfy $\limsup_{k \to \infty} |z_k|^{1/k} < 1$.
\end{lemma}
\begin{lemma}\label{p4lem3}
Let $\nu(t)$ be a probability measure on $[0,2\pi]$. Then for all $n,m \in \mathbb{N}$,
\begin{align*}
\left|\lambda \int_{0}^{2 \pi} e^{i n t} \, d \nu(t) \int_{0}^{2 \pi} e^{i m t} \, d \nu(t) - \int_{0}^{2 \pi} e^{i (n+m) t} \, d \nu(t)\right| \leq \begin{cases} 1, &\text{$ 0 \leq \lambda \leq 2$;}\\
| \lambda -1|, &\text{elsewhere.}
\end{cases}
\end{align*}
\end{lemma}
\begin{proof}
The function $p(z)= 1+ \sum_{n=1}^{\infty} c_n z^n $ given by the Herglotz representation formula \cite[Corollary 3.6, p.\ 30]{MR768747},
$$p(z)= \int_0^{2\pi} \dfrac{1+ e^{i t}z}{1- e^{i t}z} \, d \nu(t)$$ is clearly in $\mathcal{P}$. On comparing the coefficients on both sides in the above equation, we obtain
$$c_n=2\int_0^{2\pi} e^{i n t} \, d \nu(t) \quad (n \geq 1).$$
An application of Lemma \ref{p4lem1} to the function $p$ gives
\begin{align*}
\left|2 \mu \int_{0}^{2 \pi} e^{i n t} \, d \nu(t) \int_{0}^{2 \pi} e^{i m t} \, d \nu(t) - \int_{0}^{2 \pi} e^{i (n+m) t} \, d \nu(t)\right| \leq \begin{cases} 1, &\text{$ 0 \leq \mu \leq 1$;}\\
| 2\mu -1|, &\text{elsewhere.}
\end{cases}
\end{align*} On substituting $\lambda=2 \mu$, the desired estimates follow.
\end{proof}
For $\lambda=2$, the above lemma is proved in \cite[Lemma 2.1, p.\ 330]{MR1694809}.
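Lemma \ref{p4lem3} can be sanity-checked numerically on discrete probability measures, for which the trigonometric moments reduce to finite sums. The following Python sketch (the random atoms, weights, and parameter ranges are illustrative choices, not part of the proof) verifies that the stated bound is never violated:

```python
import cmath
import math
import random

def moment(atoms, weights, k):
    # k-th trigonometric moment: integral of e^{ikt} d nu(t) for a discrete nu
    return sum(w * cmath.exp(1j * k * t) for t, w in zip(atoms, weights))

def bound(lam):
    # right-hand side of the lemma
    return 1.0 if 0 <= lam <= 2 else abs(lam - 1)

random.seed(0)
ok = True
for _ in range(2000):
    atoms = [random.uniform(0, 2 * math.pi) for _ in range(4)]
    raw = [random.random() for _ in range(4)]
    total = sum(raw)
    weights = [r / total for r in raw]          # a probability measure on [0, 2*pi]
    n, m = random.randint(1, 5), random.randint(1, 5)
    lam = random.uniform(-3, 4)
    lhs = abs(lam * moment(atoms, weights, n) * moment(atoms, weights, m)
              - moment(atoms, weights, n + m))
    ok = ok and lhs <= bound(lam) + 1e-12
print(ok)
```

A small tolerance is added because the bound is attained (e.g.\ by point masses), so floating-point round-off can otherwise produce spurious failures.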
\section{Generalized Zalcman conjecture for $\mathcal{S}^*(\alpha)$ and $\mathcal{K}(\alpha)$}
For $ \alpha <1$, define a function $f_1: \mathbb{D} \to \mathbb{C}$ by
\begin{align}
f_1(z):= \dfrac{z}{(1- z)^{2(1-\alpha)}}= z+ \sum_{n=2}^{\infty} A_n z^n, \label{p4eqllll}
\end{align}
where
\begin{align}
A_n= \dfrac{1}{(n-1)!}\prod_{j=0}^{n-2} \big(2(1-\alpha)+j\big). \label{p4eq1}
\end{align}
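The product formula \eqref{p4eq1} can be checked against the Taylor expansion of $f_1$ with a short SymPy computation; the value $\alpha=1/3$ below is an arbitrary illustrative choice:

```python
import sympy as sp

z = sp.symbols('z')
alpha = sp.Rational(1, 3)                     # any alpha < 1 works; illustrative
f1 = z / (1 - z) ** (2 * (1 - alpha))
ser = sp.series(f1, z, 0, 8).removeO()

def A(k):
    # product formula for A_k
    return sp.prod([2 * (1 - alpha) + j for j in range(k - 1)]) / sp.factorial(k - 1)

# compare the Taylor coefficients of f_1 with the closed-form product
match = all(sp.simplify(ser.coeff(z, k) - A(k)) == 0 for k in range(2, 8))
print(match)
```

For $\alpha = 0$ this reduces to $A_n = n$, the coefficients of the Koebe function.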
It is known that $f_1$ and its rotations serve as extremal functions for the coefficient bounds of functions in the class $\mathcal{S}^*(\alpha)$ \cite[Theorem 5.6, p.\ 324]{MR2118626}. Therefore, they are natural candidates for extremal functions for the upper bound of the generalized Zalcman coefficient functional $\phi(f,n,m;\lambda)$ when $f \in \mathcal{S}^*(\alpha)$. The following theorem shows that this is indeed the case, at least when $\lambda \geq 2 A_{n+m-1}/(A_n A_m)$ or $\lambda \leq 0$.
\begin{theorem}\label{p4thm1}
If $f(z)= z + \sum_{n=2}^{\infty}a_n z^n \in H\mathcal{S}^*(\alpha)$ $(\alpha <1)$, then for all $n,m=2, 3, \ldots$,
\begin{align*}
\left|\lambda a_n a_m - a_{n+m-1}\right| \leq \begin{cases}A_{n+m-1}, &\text{$ 0 \leq \lambda \leq \dfrac{2 A_{n+m-1}}{A_n A_m}$;}\\
| \lambda A_n A_m -A_{n+m-1}|, &\text{elsewhere,}
\end{cases}
\end{align*}
where $A_n$ is given by \eqref{p4eq1}.
The second inequality is sharp for the function $f_1$ and its rotations where $f_1$ is given by the equation \eqref{p4eqllll}.
\end{theorem}
\begin{proof}
Since $f \in H\mathcal{S}^*(\alpha)$ $(\alpha <1)$, there exists a probability measure $\nu(t)$ on $[0,2\pi]$ \cite[Theorem 3, p.\ 417 ]{MR0338337} such that $$f(z)=\int_{0}^{2 \pi} \dfrac{z}{(1- e^{it}z)^{2(1-\alpha)}} \, d \nu(t).$$On comparing the coefficients on both sides, we obtain
$$a_n= A_n \int_0^{2\pi} e^{i (n-1) t} \, d \nu(t) \quad (n \geq 2),$$ where $A_n$ is given by the equation \eqref{p4eq1}. This implies
\begin{align*}
&\left|\lambda a_n a_m - a_{n+m-1}\right| \\ &{}= A_{n+m-1}\left|\lambda\dfrac{A_n A_m}{A_{n+m-1}} \int_{0}^{2 \pi} e^{i (n-1) t} \, d \nu(t) \int_{0}^{2 \pi} e^{i (m-1) t} \, d \nu(t) - \int_{0}^{2 \pi} e^{i (n+m-2) t} \, d \nu(t)\right|.
\end{align*}
An application of Lemma \ref{p4lem3} to the above equation yields
\begin{align*}
&\left|\lambda a_n a_m - a_{n+m-1}\right| \leq \begin{cases}A_{n+m-1}, &\text{$ 0 \leq \lambda \leq \dfrac{2 A_{n+m-1}}{A_n A_m}$;}\\
| \lambda A_n A_m -A_{n+m-1}|, &\text{elsewhere.}
\end{cases} \qedhere
\end{align*}
\end{proof}
For $m=n$, we have the following sharp result.
\begin{corollary}
If $f(z)= z + \sum_{n=2}^{\infty}a_n z^n \in H\mathcal{S}^*(\alpha)$ $(\alpha<1)$, then for all $n=2, 3, \ldots$,
\begin{align*}
\left|\lambda a_n^2 - a_{2n-1}\right| \leq \begin{cases}A_{2n-1}, &\text{$ 0 \leq \lambda \leq \dfrac{2 A_{2n-1}}{A_n^2}$;}\\
| \lambda A_n^2 -A_{2n-1}|, &\text{elsewhere,}
\end{cases}
\end{align*}
where $A_n$ is given by \eqref{p4eq1}. The second inequality is sharp for the function $f_1$, given by the equation \eqref{p4eqllll}, and its rotations whereas the first inequality is sharp for the function of the form
\begin{align}
f(z)= \sum_{k=1}^{2(n-1)} m_k g_k(z), \label{p4eq111.1}
\end{align}
where $0 \leq m_k \leq 1$, $\sum_{k=1}^{n-1} m_{2k}=\sum_{k=1}^{n-1} m_{2k-1}=1/2$, $g_k(z)= e^{-i \theta_k} f_1(e^{i \theta_k}z)$ and $\theta_k= (2k+1)\pi/(2n-2)$.
\end{corollary}
For $\alpha=0$ and $\lambda \geq 0$, the above corollary reduces to the inequalities mentioned in \cite[p.\ 474]{MR824446}. By a well-known result of Alexander, a function $f \in \mathcal{A}$ is in $\mathcal{K}$ if and only if $z f'(z) \in \mathcal{S}^*$. This implies that for $ \alpha <1$, $f \in H\mathcal{K}(\alpha)$ if and only if $z f'(z) \in H\mathcal{S}^*(\alpha)$ and therefore, we have the following deduction from Theorem \ref{p4thm1}.
\begin{corollary}\label{p4cor1}
If $f(z)= z + \sum_{n=2}^{\infty}a_n z^n \in H\mathcal{K}(\alpha)$ $(\alpha<1)$, then for all $n,m=2, 3, \ldots$,
\begin{align*}
\left|\lambda a_n a_m - a_{n+m-1}\right| \leq \begin{cases}\dfrac{A_{n+m-1}}{n+m-1}, &\text{$ 0 \leq \lambda \leq \dfrac{2 n m A_{n+m-1}}{(n+m-1)A_n A_m}$;}\\
\left| \lambda \dfrac{ A_n A_m}{n m } - \dfrac{A_{n+m-1}}{n+m-1}\right|, &\text{elsewhere,}
\end{cases}
\end{align*}
where $A_n$ is given by the equation \eqref{p4eq1}. The second inequality is sharp for the function $f_2$ and its rotations, where
\begin{align}
f_2(z)= \begin{cases}
\dfrac{(1-z)^{-(1-2 \alpha)}-1}{1-2\alpha}, &\text{ $\alpha \neq 1/2$;}\\
-\log{(1-z)}, &\text{ $\alpha = 1/2$.}
\end{cases} \label{p4eq111.2}
\end{align}
\end{corollary}
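By Alexander's correspondence, $zf_2'(z)=f_1(z)$, so the coefficients of $f_2$ should equal $A_n/n$. A SymPy sketch checking both branches of \eqref{p4eq111.2} follows; the sample values of $\alpha$ are illustrative:

```python
import sympy as sp

z = sp.symbols('z')

def A(k, alpha):
    # product formula for A_k at the given alpha
    return sp.prod([2 * (1 - alpha) + j for j in range(k - 1)]) / sp.factorial(k - 1)

def f2(alpha):
    # extremal function for the convex case, both branches
    if alpha == sp.Rational(1, 2):
        return -sp.log(1 - z)
    return ((1 - z) ** (-(1 - 2 * alpha)) - 1) / (1 - 2 * alpha)

ok = True
for alpha in (sp.Integer(0), sp.Rational(1, 2), sp.Rational(-1, 2)):
    ser = sp.series(f2(alpha), z, 0, 7).removeO()
    # coefficient of z^k in f_2 should be A_k / k
    ok = ok and all(sp.simplify(ser.coeff(z, k) - A(k, alpha) / k) == 0
                    for k in range(2, 7))
print(ok)
```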
For $\alpha=0$ and $\lambda \geq 2$, the above corollary reduces to \cite[Theorem 2.1, p.\ 3]{agrawal2016coefficient}.
For $m=n$, we have the following sharp result which has been proved in \cite{MR3542050} by maximizing the real-valued functional $\RE(\lambda a_n^2-a_{2n-1})$ for the case $\lambda \geq 0$.
\begin{corollary}\label{p4corr1}
If $f(z)= z + \sum_{n=2}^{\infty}a_n z^n \in H\mathcal{K}(\alpha)$ $(\alpha<1)$, then for all $n=2, 3, \ldots$,
\begin{align*}
\left|\lambda a_n^2 - a_{2n-1}\right| \leq \begin{cases}\dfrac{A_{2n-1}}{2n-1}, &\text{$ 0 \leq \lambda \leq \dfrac{2 n^2 A_{2n-1}}{(2n-1)A_n^2 }$;}\\
\left| \lambda \dfrac{ A_n^2 }{n^2} - \dfrac{A_{2n-1}}{2n-1}\right|, &\text{elsewhere,}
\end{cases}
\end{align*}
where $A_n$ is given by the equation \eqref{p4eq1}. The second inequality is sharp for the function $f_2$, given by \eqref{p4eq111.2}, and its rotations whereas the first inequality is sharp for the function given by the equation \eqref{p4eq111.1} with $g_k(z)= e^{-i \theta_k} f_2(e^{i \theta_k} z)$.
\end{corollary}
If $\lambda \geq 0$, the above corollary reduces to \cite[Theorem 3.3]{MR3467599} and \cite[Theorem 4]{MR3542050} for $\alpha=-1/2$ and $\alpha=1/2$ respectively. Also, for $\alpha=0$ and $0 \leq \lambda \leq 2$, the above corollary was proved in \cite[Theorem 3, p.\ 3]{efraimidis2014generalized}.
\section{Generalized Zalcman conjecture for the class $\mathcal{R}(\beta)$ and for typically real functions}
For $\lambda \geq nm/((1-\beta)(n+m-1))$ and $0 \leq \beta <1$, the second inequality of the following theorem was recently proved by Agrawal and Sahoo \cite{agrawal2016coefficient}, who proposed the case $ 0 < \lambda < nm/((1-\beta)(n+m-1))$ as an open problem. This problem is now settled by the following theorem, which makes use of the Hermitian form for functions in the class $\mathcal{P}$.
\begin{theorem}\label{p4thm2}
If $f(z)=z+\sum_{n=2}^{\infty}a_n z^n \in \mathcal{R}(\beta)$ $(\beta <1)$, then for all $n,m=2, 3, \ldots$,
\begin{align*}
\left|\lambda a_n a_m - a_{n+m-1}\right| \leq \begin{cases}\dfrac{2(1-\beta)}{n+m-1}, &\text{$ 0 \leq \lambda \leq \dfrac{n m }{(1-\beta)(n+m-1)}$;}\\
\left|\dfrac{4 \lambda (1-\beta)^2}{n m } - \dfrac{2(1-\beta)}{n+m-1}\right|, &\text{elsewhere.}
\end{cases}
\end{align*}
The result is sharp.
\end{theorem}
\begin{proof}
Since $f \in \mathcal{R}(\beta)$, $\big(f'(z)-\beta\big)/(1-\beta)= 1+ \sum_{n=1}^{\infty} (n+1) a_{n+1}/(1-\beta) z^n \in \mathcal{P}$ which gives
\begin{align}
|a_n| \leq \dfrac{2(1-\beta)}{n}\quad (n \geq 2). \label{p4eqll}
\end{align}
Clearly, the bounds are sharp for the function $f_0: \mathbb{D} \to \mathbb{C}$ defined by
\begin{align}
f_0(z)=(1-\beta)\int_0^z \dfrac{1+t}{1-t} \, dt + \beta z. \label{p4eqn}
\end{align}
For fixed $n,m =2,3,\ldots$, choose the sequence $\{z_k\}$ of complex numbers by $z_{n-2}= \lambda (1-\beta)a_m$, $ z_{n+m-3}=-n(1-\beta)/(n+m-1)$, $z_k = 0$ for all $ k \neq n-2, n+m-3$. An application of Lemma \ref{p4lem2} to the function $(f'-\beta)/(1-\beta) \in \mathcal{P}$ gives
\begin{align*}
&n^2\left|\lambda a_n a_m - a_{n+m-1}\right|^2 \\
&\leq \left|\left(2 \lambda (1-\beta) - \dfrac{mn}{n+m-1} \right) a_m \right|^2 - \left|\dfrac{mn}{n+m-1} a_m \right|^2 + \dfrac{4n^2 (1-\beta)^2}{(n+m-1)^2}\\
&= 4\lambda(1-\beta)\left(\lambda (1-\beta)-\dfrac{mn}{n+m-1}\right)|a_m|^2 + \dfrac{4n^2 (1-\beta)^2}{(n+m-1)^2}.
\end{align*}
By using the bounds given by \eqref{p4eqll} in the above inequality, we have
\begin{align*}
\left|\lambda a_n a_m - a_{n+m-1}\right|^2
&\leq
\begin{cases} \dfrac{4 (1-\beta)^2}{(n+m-1)^2}, &\text{$0 \leq \lambda \leq \dfrac{n m }{(1-\beta)(n+m-1)} $;}\\
\left( \dfrac{4 \lambda(1-\beta)^2}{n m } - \dfrac{2(1-\beta)}{n+m-1}\right)^2, &\text{elsewhere.}
\end{cases}
\end{align*}
For $0 \leq \lambda \leq n m/\big((1-\beta)(n+m-1)\big)$, the inequality is sharp for the function $f(z)= (1-\beta)\int_0^z(1+t^{n+m-2})/(1-t^{n+m-2}) \, dt + \beta z$.
For $\lambda\leq 0$ or $\lambda \geq n m/\big((1-\beta)(n+m-1)\big)$, the inequality is sharp for the function $f_0$ given by the equation \eqref{p4eqn}.
\end{proof}
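The two extremal functions in the proof above can be checked symbolically. The following SymPy sketch, with the illustrative choices $\beta=1/4$, $n=3$, $m=4$, and one sample $\lambda$ in each regime, confirms that both bounds of Theorem \ref{p4thm2} are attained:

```python
import sympy as sp

z, t = sp.symbols('z t')
beta, n, m = sp.Rational(1, 4), 3, 4
q = n + m - 2

def coeffs(integrand, upto):
    # expand the integrand, integrate term by term, read off Taylor coefficients
    poly = sp.series(integrand, t, 0, upto).removeO()
    f = sp.expand((1 - beta) * sp.integrate(poly, (t, 0, z)) + beta * z)
    return {k: f.coeff(z, k) for k in range(1, upto + 1)}

a = coeffs((1 + t) / (1 - t), n + m - 1)        # coefficients of f_0
b = coeffs((1 + t**q) / (1 - t**q), n + m - 1)  # coefficients of the other extremal

lam1 = sp.Integer(5)                            # lies in the "elsewhere" regime here
lhs1 = sp.Abs(lam1 * a[n] * a[m] - a[n + m - 1])
rhs1 = sp.Abs(4 * lam1 * (1 - beta)**2 / (n * m) - 2 * (1 - beta) / (n + m - 1))

lam2 = sp.Rational(1, 2)                        # lies in the first regime here
lhs2 = sp.Abs(lam2 * b[n] * b[m] - b[n + m - 1])
rhs2 = 2 * (1 - beta) / (n + m - 1)

print(sp.simplify(lhs1 - rhs1) == 0 and sp.simplify(lhs2 - rhs2) == 0)
```

Note that for the second extremal function $a_n=a_m=0$, so the functional reduces to $|a_{n+m-1}|=2(1-\beta)/(n+m-1)$ regardless of $\lambda$.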
For $\beta=0$ and $0 < \lambda \leq 4/3$, the above theorem was proved in \cite{efraimidis2014generalized} by maximizing the real-valued functional $\RE (\lambda a_n^2 - a_{2n-1})$ over $\mathcal{R}(\beta)$. Also, we have the following simple result.
\begin{corollary}
If the analytic function $f(z)=z+\sum_{n=2}^{\infty}a_n z^n$ satisfies $\RE{\left(f(z)/z\right)} > \beta$ $(\beta <1)$ in $\mathbb{D}$, then for all $n,m=2, 3, \ldots$,
\begin{align*}
\left|\lambda a_n a_m - a_{n+m-1}\right| \leq \begin{cases} 2(1-\beta), &\text{$ 0 \leq \lambda \leq \dfrac{1}{(1-\beta)}$;}\\
2(1-\beta)\left|2\lambda (1-\beta) - 1\right|, &\text{elsewhere.}
\end{cases}
\end{align*}
The result is sharp.
\end{corollary}
The following theorem generalizes \cite[Theorem 3.1, p.\ 335]{MR1694809}, which was proved for $\lambda=1$ by induction on $n$ and $m$. Although it can be proved by induction on $n$ and $m$, here we obtain it as an application of the Hermitian form for typically real functions.
\begin{theorem}\label{p4thm3}
If $f(z)=z+\sum_{n=2}^{\infty}a_n z^n \in T$ and $\lambda \geq 1$, then
\begin{itemize}
\item[$(i)$] if $n=2$ and $m$ is even, the upper bound of $|\lambda a_n a_m - a_{n+m-1}|$ is
\begin{itemize}
\item[$(a)$] $3+(2 \lambda-1)(m-2)$ for $1 \leq \lambda \leq 3/2$,
\item[$(b)$] $2 \lambda m-m-1$ for $\lambda \geq 3/2$;
\end{itemize}
\item[$(ii)$] if $m=2$ and $n$ is even, the upper bound of $|\lambda a_n a_m - a_{n+m-1}|$ is
\begin{itemize}
\item[$(a)$] $3+(2 \lambda-1)(n-2)$ for $1 \leq \lambda \leq 3/2$,
\item[$(b)$] $ 2 \lambda n-n-1$ for $\lambda \geq 3/2$;
\end{itemize}
\item[$(iii)$] in the other cases, we have
\begin{align*}
|\lambda a_n a_m - a_{n+m-1}| \leq \lambda m n -n -m +1.
\end{align*}
\end{itemize}
The bounds given by (i)(b), (ii)(b) and (iii) are sharp whereas the bounds in (i)(a) and (ii)(a) are sharp for $\lambda=1$ or the case when $n=2$ and $m=2$.
\end{theorem}
\begin{proof}
For fixed $n, m=2,3,\ldots$, choose the sequence $\{z_k\}$ of real numbers by $z_{n-2}= \lambda a_m$, $ z_{n+m-3}=-1$, $z_k = 0$ for all $ k \neq n-2, n+m-3$. Since $f \in T$, $|a_n| \leq n$ $(n \geq 2)$. So, by applying Lemma \ref{p4lem2.1} to the function $f \in T$, we have
\begin{align}
\left|(\lambda a_n a_m - a_{n+m-1})-(\lambda a_{n-2} a_m - a_{n+m-3})\right|^2
&\leq |(2 \lambda- 1)a_m + a_{m-2}|^2 -|a_m -a_{m-2}|^2 + 4 \notag \\
&= 4 \lambda(\lambda-1) a_m^2 + 4 \lambda a_m a_{m-2} +4 \notag\\
&\leq 4(\lambda m -1)^2. \label{p4eq1.1}
\end{align}
Since $f \in T$, we have $f(z)=\big(z/(1-z^2)\big)p(z)$ for some $p(z)= 1 + \sum_{n=1}^{\infty} c_n z^n \in \mathcal{P}_{\mathbb{R}}$. This gives
\begin{align}
a_{2k}= c_1 + c_3 + \cdots + c_{2k-1} \quad \text{and}\quad
a_{2k+1}= 1 + c_2 + c_4 + \cdots + c_{2k}. \label{p4eq2}
\end{align}
By \cite[Theorem 1, p.\ 468]{MR824446}, we have $\lambda a_2^2 - a_3 \leq 4 \lambda-3$. Clearly, $\lambda a_2^2 - a_3 \geq -a_3 \geq -3$. Also, we observe that $1 \leq \lambda \leq 3/2$ is equivalent to $1 \leq 4 \lambda -3 \leq 3$. Therefore, we have
\begin{align}
| \lambda a_2^2 - a_3| \leq
\begin{cases}
3, &\text{for $1 \leq \lambda \leq 3/2,$}\\
4 \lambda-3, &\text{for $\lambda \geq 3/2.$} \label{p4eq3.10}
\end{cases}
\end{align}
The first inequality in \eqref{p4eq3.10} is sharp for the function $f(z)=z(1+z^2)/(1-z^2)^2$ and the second inequality holds for the Koebe function $k(z)=z/(1-z)^2$.
If $n=2$ and $m=2k$ $(k \geq 2)$, then
\begin{align}
|\lambda a_2 a_{m} - a_{m+1}|&= |\lambda a_2 a_{2k} - a_{2k+1}|\notag\\
&= |\lambda c_1 (c_1 + c_3 + \cdots + c_{2k-1}) - (1 + c_2 + c_4 + \cdots + c_{2k})|\notag\\
&= |(\lambda c_1^2 -c_2-1) + (\lambda c_1 c_3 -c_4) + \cdots + (\lambda c_1 c_{2k-1} -c_{2k})|\notag \\
&= |(\lambda a_2^2 -a_3) + (\lambda c_1 c_3 -c_4) + \cdots + (\lambda c_1 c_{2k-1} -c_{2k})|.\label{p4eq3.11}
\end{align}
An application of Lemma \ref{p4lem1} and the inequality \eqref{p4eq3.10} in the equation \eqref{p4eq3.11} gives
\begin{align}
|\lambda a_2 a_{m} - a_{m+1}| \leq
\begin{cases}
3+(2 \lambda-1)(m-2), &\text{for $1 \leq \lambda \leq 3/2;$}\\
2 \lambda m-m -1, &\text{for $\lambda \geq 3/2.$} \notag
\end{cases}
\end{align}
This proves $(i)$.
When $m=2$ and $n$ is even, the desired bounds in $(ii)$ follow by interchanging the roles of $n$ and $m$ in the equation \eqref{p4eq3.11} and in the above inequality. For $\lambda=1$, the sharpness in $(i)(a)$ and $(ii)(a)$ follows for the function $f(z)=z(1+z^2)/(1-z^2)^2$. It remains to prove the inequality in case $(iii)$.
Since $\lambda a_4^2 - a_7 \leq 16 \lambda-7$ \cite[Theorem 1, p.\ 468]{MR824446} and clearly $\lambda a_4^2 - a_7 \geq -7 \geq -9 \geq -(16 \lambda-7)$, we have
\begin{align}
|\lambda a_4^2 - a_7| \leq 16 \lambda-7. \label{p4eq3.12}
\end{align}
For $n=4 $ and $m=2k$ $(k \geq 3)$, by proceeding as in the equation \eqref{p4eq3.11}, we have
\begin{align}
&|\lambda a_4 a_{m} - a_{m+3}| \notag \\
&= |\lambda a_4 a_{2k} - a_{2k+3}|\notag\\
&= |\lambda (c_1+ c_3) (c_1 + c_3 + \cdots + c_{2k-1}) - (1 + c_2 + c_4 + \cdots + c_{2(k+1)})|\notag\\
&= |(\lambda a_4^2 -a_7)+ \lambda c_1 (c_5 + \cdots + c_{2k-1}) + (\lambda c_3 c_5 -c_8) + \cdots + (\lambda c_3 c_{2k-1} -c_{2k+2})|.\notag
\end{align}
An application of Lemma \ref{p4lem1} and the inequality \eqref{p4eq3.12} in the above equation gives
\begin{align}
|\lambda a_4 a_{m} - a_{m+3}| &\leq 4 \lambda m -m -3. \label{p4eq3.01}
\end{align}
Therefore, if $n$, $m$ are even and $n >4$, $m>2$, then
\begin{align}
|\lambda a_n a_{m} - a_{n+m-1}| &\leq |(\lambda a_n a_m - a_{n+m-1})-(\lambda a_{n-2} a_m - a_{n+m-3})| \notag \\
&\quad{}+ |(\lambda a_{n-2} a_m - a_{n+m-3})-(\lambda a_{n-4} a_m - a_{n+m-5})| + \cdots \notag \\
&\quad{}+ |(\lambda a_{6} a_m - a_{m+5})-(\lambda a_{4} a_m - a_{m+3})| + |\lambda a_{4} a_m - a_{m+3}|.\label{p4eq3.011}
\end{align}
In view of \eqref{p4eq1.1}, \eqref{p4eq3.12} and \eqref{p4eq3.01}, we have
\begin{align*}
|\lambda a_n a_{m} - a_{n+m-1}| \leq (\lambda m -1)(n-4) + 4 m\lambda- m -3 = \lambda m n -m -n +1.
\end{align*}
Next, we consider the case when $n$ is even and $m$ is odd. If $n=2$ and $m=2k+1$ $(k \geq 1)$, then by proceeding similarly as in the equation \eqref{p4eq3.11} and applying Lemma \ref{p4lem1}, we obtain
\begin{align}
|\lambda a_2 a_{m} - a_{m+1}|&= |\lambda a_2 a_{2k+1} - a_{2k+2}|\notag\\
&= |\lambda c_1 (1 + c_2 + c_4 + \cdots + c_{2k}) - (c_1 + c_3 + \cdots + c_{2k+1})|\notag\\
&\leq m(2 \lambda -1 )-1.\label{p4eq4}
\end{align}
If $n=2k$ $(k > 1)$ and $m$ is odd, then by proceeding as in the inequality \eqref{p4eq3.011} and applying \eqref{p4eq1.1} and \eqref{p4eq4}, we have
\begin{align*}
|\lambda a_n a_{m} - a_{n+m-1}| \leq 2(\lambda m -1)(k-1) + m(2 \lambda-1)-1 = \lambda m n -m -n +1.
\end{align*}
Finally, we consider the case when $n$ is odd. In this case, we have
\begin{align}
|\lambda a_n a_{m} - a_{n+m-1}| &\leq |(\lambda a_n a_m - a_{n+m-1})-(\lambda a_{n-2} a_m - a_{n+m-3})| \notag \\
&\quad{}+ |(\lambda a_{n-2} a_m - a_{n+m-3})-(\lambda a_{n-4} a_m - a_{n+m-5})| + \cdots \notag \\
&\quad{}+ |(\lambda a_{3} a_m - a_{m+2})-(\lambda a_{1} a_m - a_{m})| + |\lambda a_{1} a_m - a_{m}| \quad (a_1=1).\notag
\end{align}
Using inequality \eqref{p4eq1.1} and the bound of $|a_m|$ in the above inequality, we obtain
\begin{align*}
|\lambda a_n a_{m} - a_{n+m-1}| \leq \lambda m n- m -n +1.
\end{align*}
The sharpness in the cases $(i)(b)$, $(ii)(b)$ and $(iii)$ follows for the Koebe function $k(z)=z/(1-z)^2$.
\end{proof}
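The sharpness claims in Theorem \ref{p4thm3} can be probed symbolically: the Koebe function has $a_k=k$, and $z(1+z^2)/(1-z^2)^2$ has $a_2=0$, $a_3=3$. The following SymPy sketch (the chosen values of $\lambda$, $n$, $m$ are illustrative) verifies that the stated bounds are attained:

```python
import sympy as sp

z = sp.symbols('z')
koebe = z / (1 - z)**2
h = z * (1 + z**2) / (1 - z**2)**2           # extremal for (i)(a) when n = m = 2

def coeff(f, k):
    # k-th Taylor coefficient of f at the origin
    return sp.series(f, z, 0, k + 1).removeO().coeff(z, k)

lam = sp.Rational(7, 4)                      # any lambda >= 3/2
checks = []
# case (iii): the Koebe function attains lambda*m*n - n - m + 1
for n, m in [(3, 3), (3, 5), (5, 4)]:
    val = sp.Abs(lam * coeff(koebe, n) * coeff(koebe, m) - coeff(koebe, n + m - 1))
    checks.append(sp.simplify(val - (lam * m * n - n - m + 1)) == 0)
# case (i)(b): n = 2, m even; the Koebe function attains 2*lam*m - m - 1
for m in (2, 4, 6):
    val = sp.Abs(lam * coeff(koebe, 2) * coeff(koebe, m) - coeff(koebe, m + 1))
    checks.append(sp.simplify(val - (2 * lam * m - m - 1)) == 0)
# z(1+z^2)/(1-z^2)^2 has a_2 = 0 and a_3 = 3, so |lam*a_2^2 - a_3| = 3
checks.append(coeff(h, 2) == 0 and coeff(h, 3) == 3)
print(all(checks))
```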
For $\lambda=1$, the following result is given in \cite[Theorem 3.2, p.\ 338]{MR1694809}.
\begin{corollary}
If $f(z)=z+\sum_{n=2}^{\infty}a_n z^n \in \mathcal{S}_{\mathbb{R}}$ and $\lambda \geq 1$, then for $n,m=2,3,\ldots$, $$|\lambda a_n a_m - a_{n+m-1}| \leq \lambda m n -n -m +1.$$ The result is sharp.
\end{corollary}
\begin{proof}
Since $\mathcal{S}_{\mathbb{R}} \subset \mathcal{S}$, by using \cite[Theorem 2, p.\ 35]{MR704183}, we have
\begin{align}
|a_2^2 - a_{3}| \leq 1. \label{p4eqs}
\end{align}
Also, $\mathcal{S}_{\mathbb{R}} \subset T$, therefore for $\lambda \geq 1$, by \cite[Theorem 1, p.\ 468]{MR824446}, we have $\lambda a_2^2-a_3 \leq 4 \lambda -3$. For $1 \leq \lambda \leq 3/2$, an application of the inequality \eqref{p4eqs} gives $\lambda a_2^2-a_3 \geq a_2^2-a_3 \geq -1 \geq -(4 \lambda -3)$. Thus, in view of the inequality \eqref{p4eq3.10}, we must have the sharp inequality
\begin{align}
|\lambda a_2^2 - a_3| \leq 4 \lambda -3 \label{p4eql}
\end{align}
where the sharpness follows for the Koebe function $k(z)=z/(1-z)^2$.
For even $m>2$, an application of \eqref{p4eql} and Lemma \ref{p4lem1} in the equation \eqref{p4eq3.11} gives
$$ |\lambda a_2 a_{m} - a_{m+1}| \leq 2 m\lambda-m -1.$$ When $m=2$ and $n>2$ is even, the desired estimate follows by interchanging the roles of $m$ and $n$ in the above inequality. The other cases follow immediately from Theorem \ref{p4thm3}. The result is sharp for the Koebe function. \qedhere
\end{proof}
\section{Generalized Zalcman conjecture for some subclasses of close-to-convex functions}
Recall that the classes $\mathcal{F}_1(\beta)$ and $\mathcal{F}_2(\beta)$ $(\beta<1)$ are defined as follows:
\begin{align*}
\mathcal{F}_1(\beta)&:=\{f \in \mathcal{A}: \RE\big((1-z)f'(z)\big)> \beta\} \intertext{and}
\mathcal{F}_2(\beta)&:=\{f \in \mathcal{A}: \RE\big((1-z^2)f'(z)\big)> \beta\}.
\end{align*}
For $0 \leq \beta <1$, the classes $\mathcal{F}_1(\beta)$ and $\mathcal{F}_2(\beta)$ are subclasses of $\mathcal{C}$, the class of close-to-convex functions.
Define the functions $f_{1,\beta}: \mathbb{D} \to \mathbb{C}$ and $f_{2,\beta}: \mathbb{D} \to \mathbb{C}$, in $\mathcal{F}_1(\beta)$ and $\mathcal{F}_2(\beta)$ respectively, by
\begin{align}
f_{1,\beta}(z)&= \dfrac{2(1-\beta)z}{1-z}+ (1-2 \beta) \log{(1-z)} \label{p4eq4.102}
\intertext{and}
f_{2,\beta}(z)&= \dfrac{z(1-\beta ) }{1-z^2}+\dfrac{\beta }{2} \log{\left(\dfrac{1+z}{1-z}\right)}.\notag
\end{align}
Recently, for certain positive values of $\lambda $, the sharp estimation of $\phi(f,n,n;\lambda)$ over $\mathcal{C}$ is given in \cite{li2016generalized} by using the fact that $\mathcal{C}$ and $\phi(f,n,n;\lambda)$ are invariant under rotations.
Note that the classes $\mathcal{F}_1(\beta)$ and $ \mathcal{F}_2(\beta)$ are not necessarily invariant under rotations. For instance, $\mathcal{F}_1(0)$ and $ \mathcal{F}_2(0)$ are not invariant under rotations since $\RE \Big((1-z)\big(-if_{1,0}(iz)\big)'\Big)=-2$ at $z=1/2-i/2$ and $(1-z^2)\big(-if_{2,0}(iz)\big)'=(1-z^2)^2/(1+z^2)^2$ maps $\mathbb{D}$ to the whole complex plane except the negative real axis.
In this section, for certain positive values of $\lambda$, we give the sharp estimation of the generalized Zalcman coefficient functional $\phi(f,n,m;\lambda)$ when $f \in \mathcal{F}_1(\beta)$ or $f \in \mathcal{F}_2(\beta)$.
\begin{theorem}\label{p4thm4}
If $\mu \geq \max{\{ nm/\big((n+m-1)(1- \beta)\big), nm/(n+m-1)\}}$ and $f(z)=z+\sum_{n=2}^{\infty}a_n z^n \in \mathcal{F}_1(\beta)$ $(\beta <1)$, then for all $n,m=2, 3, \ldots$,
\begin{align*}
&\left|\mu a_n a_m - a_{n+m-1}\right|\leq \mu B_n B_m -B_{n+m-1},
\end{align*}
where
\begin{align}
B_n=\dfrac{ 1+ 2 (n-1)(1-\beta)}{n} \quad (n \geq 2). \label{p4eq4.10}
\end{align}
The inequality is sharp.
\end{theorem}
\begin{proof}
Let $g(z):=(1-z) f'(z)$. Since $f \in \mathcal{F}_1(\beta)$, we have $$\dfrac{g(z)-\beta}{1-\beta}= 1 + \sum_{n=1}^{\infty} c_n z^n \in \mathcal{P},$$ which gives
\begin{align}
c_n&= \dfrac{(n+1) a_{n+1} - n a_n}{1-\beta} \quad (n \geq 1)\notag \intertext{and}
a_n&= \dfrac{1 + (1-\beta)(c_1 + c_2 + \cdots+ c_{n-1})}{n} \quad (n \geq 2). \label{p4eq4.11}
\end{align}
Since $|c_n| \leq 2$ $(n \geq 1)$, the equation \eqref{p4eq4.11} gives
\begin{align}
|a_n| \leq B_n, \label{p4eq4.12}
\end{align} where $B_n$ is given by the equation \eqref{p4eq4.10}.
For fixed $n, m =2,3,\ldots$ and $\lambda \in \mathbb{R}$, choose the sequence $\{z_k\}$ of complex numbers by $z_{n-2}= \lambda (1-\beta)a_m$, $ z_{n+m-3}=-(1-\beta)$, $z_k = 0$ for all $ k \neq n-2, n+m-3$. Then Lemma \ref{p4lem2} yields
\begin{align}
&\left|\big(\lambda n a_n a_m -(n+m-1) a_{n+m-1}\big)-\big(\lambda (n-1) a_{n-1} a_m -(n+m-2) a_{n+m-2}\big)\right|^2 \notag \\
&\leq |2 \lambda (1- \beta)a_m - m a_m +(m-1) a_{m-1}|^2 -|m a_m - (m-1) a_{m-1}|^2 + 4 (1- \beta)^2 \notag \\
&= 4 \lambda(1-\beta)\big(\lambda (1-\beta)-m\big) |a_m|^2 + 4(m-1) \lambda (1-\beta) \RE{ a_m \overline{a_{m-1}}} + 4(1-\beta)^2.\notag
\end{align}
If $\lambda \geq \max{\{m/(1-\beta),m\}}$, then by using equation \eqref{p4eq4.12} in the above inequality, we obtain
\begin{align}
&|\big(\lambda n a_n a_m -(n+m-1) a_{n+m-1}\big)-\big(\lambda (n-1) a_{n-1} a_m -(n+m-2) a_{n+m-2}\big)|^2 \notag \\
&\leq 4(1-\beta)^2 \left(\lambda B_m -1\right)^2.
\label{p4eq4.1}
\end{align}
For $\lambda \geq \max{\{m/(1-\beta),m\}}$, consider
\begin{align}
&|\lambda n a_n a_{m} - (n+m-1)a_{n+m-1}| \notag \\ &\leq |\big(\lambda n a_n a_m - (n+m-1) a_{n+m-1}\big)-\big(\lambda(n-1) a_{n-1} a_m - (n+m-2) a_{n+m-2}\big)|+ \cdots \notag \\
&\quad{}+ |\big( 2\lambda a_{2} a_m - (m+1) a_{m+1}\big)-\big(\lambda a_{1} a_m - m a_{m}\big)| + |\lambda a_{1} a_m - m a_{m}| \quad (a_1=1). \notag
\end{align}
By applying the inequality \eqref{p4eq4.1} and the bounds given by \eqref{p4eq4.12} in the above inequality, we have
\begin{align}
(n+m-1)\left|\dfrac{\lambda n}{n+m-1} a_n a_{m} - a_{n+m-1}\right|
\leq 2(1-\beta)\left(\lambda B_m-1\right)(n-1) +(\lambda-m)B_m. \notag
\end{align}
On substituting $ \mu=\lambda n /(n+m-1)$ in the above inequality and simplifying, we obtain
\begin{align}
\left|\mu a_n a_{m} - a_{n+m-1}\right|
\leq \mu B_n B_m - B_{n+m-1} \notag
\end{align}
where $\mu \geq \max{\{ nm/\big((n+m-1)(1- \beta)\big), nm/(n+m-1)\}}$ and $B_n$ is given by \eqref{p4eq4.10}. The result is sharp for the function $f_{1,\beta}$ given by \eqref{p4eq4.102}.
\end{proof}
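As a symbolic sanity check of Theorem \ref{p4thm4}, the coefficients of $f_{1,\beta}$ should equal $B_n$, and the bound should be attained for admissible $\mu$. A SymPy sketch with the illustrative choices $\beta=1/3$, $n=3$, $m=4$, $\mu=4$:

```python
import sympy as sp

z = sp.symbols('z')
beta = sp.Rational(1, 3)
f1b = 2 * (1 - beta) * z / (1 - z) + (1 - 2 * beta) * sp.log(1 - z)

def B(k):
    # closed form for the coefficient bound B_k
    return (1 + 2 * (k - 1) * (1 - beta)) / sp.Integer(k)

ser = sp.series(f1b, z, 0, 9).removeO()
coeff_ok = all(sp.simplify(ser.coeff(z, k) - B(k)) == 0 for k in range(2, 9))

n, m = 3, 4
mu = sp.Integer(4)   # exceeds max(nm/((n+m-1)(1-beta)), nm/(n+m-1)) = 3 here
attained = sp.Abs(mu * ser.coeff(z, n) * ser.coeff(z, m) - ser.coeff(z, n + m - 1))
bound = mu * B(n) * B(m) - B(n + m - 1)
print(coeff_ok and sp.simplify(attained - bound) == 0)
```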
For $\beta=0$ and $m=n$, we have the following.
\begin{corollary}
If $f(z)=z+\sum_{n=2}^{\infty}a_n z^n \in \mathcal{F}_1(0)$ and $\mu \geq n^2/(2n-1)$, then $$|\mu a_n^2 - a_{2n-1}| \leq \dfrac{\mu (2n-1)^2}{n^2}+\dfrac{3-4n}{2n-1}.$$ The result is sharp.
\end{corollary}
\begin{theorem}\label{p4thm5}
If $\mu \geq \max{\{ nm/\big((n+m-1)(1- \beta)\big), nm/(n+m-1)\}}$ and $f(z)=z+\sum_{n=2}^{\infty}a_n z^n \in \mathcal{F}_2(\beta)$ $(\beta <1)$, then for all $n,m=2, 3, \ldots$ except when both $n$ and $m$ are even,
\begin{align*}
&\left|\mu a_n a_m - a_{n+m-1}\right|\leq \mu C_n C_m -C_{n+m-1}
\end{align*}
where, for $n \geq 2$,
\begin{align}
C_n= \begin{cases}
\dfrac{1+(n-1)(1-\beta)}{n}, &\text{if $n$ is odd;}\\
1-\beta, &\text{if $n$ is even.} \label{p4eq4.13}
\end{cases}
\end{align}
The result is sharp.
\end{theorem}
\begin{proof}
Let $g(z):=(1-z^2) f'(z)$. Since $f \in \mathcal{F}_2(\beta)$, we have $(g(z)-\beta)/(1-\beta)= p(z)$ for some $p(z)= 1 + \sum_{n=1}^{\infty} c_n z^n \in \mathcal{P}.$
This gives
\begin{align}
c_n&= \dfrac{(n+1) a_{n+1} - (n-1) a_{n-1}}{1-\beta}, \notag \\
a_{2k}&= \dfrac{(1-\beta)(c_1 + c_3 + \cdots + c_{2k-1})}{2k}\label{p4eq4.01}\intertext{and}
a_{2k+1}&= \dfrac{1 + (1-\beta)( c_2 + c_4 + \cdots + c_{2k})}{2 k +1}. \label{p4eq5}
\end{align}
Since $|c_n| \leq 2$ $(n \geq 1)$, the equations \eqref{p4eq4.01} and \eqref{p4eq5} give
\begin{align}
|a_n| \leq C_n \label{p4eq4.14}
\end{align}
for all $n \geq 2,$ where $C_n$ is given by the equation \eqref{p4eq4.13}.
Define a function $f_3: \mathbb{D} \to \mathbb{C}$ by
\begin{align}
f_3(z)&=\dfrac{z(1-\beta)}{1-z}+ \dfrac{\beta}{2}\log{\dfrac{1+z}{1-z}}\notag \\
&= z+ C_2 z^2 + C_3 z^3 + C_4 z^4 + C_5 z^5 + \cdots. \label{p4eq4.15}
\end{align}
Clearly, the bounds given in \eqref{p4eq4.14} are sharp for the function $f_3$.
For fixed $n, m =2,3, \ldots$ and $\lambda \in \mathbb{R}$, choose the sequence $\{z_k\}$ of complex numbers by $z_{n-2}= \lambda (1-\beta)a_m$, $ z_{n+m-3}=-(1-\beta)$, $z_k = 0$ for all $ k \neq n-2, n+m-3$. Then Lemma \ref{p4lem2} yields
\begin{align}
&\left|\big(\lambda n a_n a_m -(n+m-1) a_{n+m-1}\big)-\big(\lambda (n-2) a_{n-2} a_m -(n+m-3) a_{n+m-3}\big)\right|^2 \notag \\
&\leq |2 \lambda (1- \beta)a_m - m a_m +(m-2) a_{m-2}|^2 -|m a_m - (m-2) a_{m-2}|^2 + 4 (1- \beta)^2 \notag \\
&= 4 \lambda(1-\beta)\big(\lambda (1-\beta)-m\big) |a_m|^2 + 4(m-2) \lambda (1-\beta) \RE{ a_m \overline{a_{m-2}}} + 4(1-\beta)^2.\notag
\end{align}
If $\lambda \geq \max{\{m/(1-\beta),m\}}$, then applying the bounds from \eqref{p4eq4.14} to the previous inequality gives
\begin{align}
&\left|\big(\lambda n a_n a_m -(n+m-1) a_{n+m-1}\big)-\big(\lambda (n-2) a_{n-2} a_m -(n+m-3) a_{n+m-3}\big)\right| \notag \\
&\leq 2(1-\beta)\left( \lambda C_m-1\right). \label{p4eq5.1}
\end{align}
If $n=2$ and $m=2k+1$ $(k \geq 1)$, then
\begin{align}
|2 \lambda a_2 a_{m} - (m+1)a_{m+1}| &= |2 \lambda a_2 a_{2k+1} - (2k+2) a_{2k+2}|\notag\\
&= \left|\dfrac{\lambda (1-\beta)}{2k+1} c_1 \left(1 + (1-\beta)\sum_{j=1}^{k} c_{2j}\right) - (1-\beta) \sum_{j=1}^{k+1}c_{2j-1} \right|\notag \\
&= \left|\left(\dfrac{\lambda}{m}-1\right) (1-\beta)c_1 + (1-\beta)\sum_{j=1}^{k} \left(\dfrac{\lambda(1-\beta)}{m}c_1 c_{2j} -c_{2 j+1}\right) \right|.\notag
\end{align}
For $\lambda \geq \max{\{m/(1-\beta),m\}}$, applying Lemma \ref{p4lem1} to the above identity gives
\begin{align}
|2\lambda a_2 a_{m} - (m+1)a_{m+1}| &\leq (1-\beta)\big(2 \lambda C_m -(1+m) \big)\label{p4eq5.03}
\end{align}
where $C_m$ is given by the equation \eqref{p4eq4.13}.
If $ n>2$ is even and $m$ is odd, then
\begin{align}
&|\lambda n a_n a_{m} -(n+m-1) a_{n+m-1}| \notag \\
&\leq |\big(\lambda n a_n a_m - (n+m-1)a_{n+m-1}\big)-\big(\lambda (n-2) a_{n-2} a_m - (n+m-3) a_{n+m-3}\big)| + \cdots \notag \\
&\quad{} + |\big(4 \lambda a_{4} a_m -(m+3) a_{m+3}\big)-\big(2 \lambda a_{2} a_m - (m+1) a_{m+1}\big)| + |2 \lambda a_{2} a_m - (m+1)a_{m+1}|.\notag
\end{align}
For $\lambda \geq \max{\{m/(1-\beta),m\}}$, in view of \eqref{p4eq5.1} and \eqref{p4eq5.03}, we have
\begin{align*}
(n+m-1)\left|\dfrac{\lambda n }{n+m-1}a_n a_{m} - a_{n+m-1}\right| \leq (1-\beta)( \lambda n C_m -n-m+1).
\end{align*}
On substituting $ \mu=\lambda n /(n+m-1)$ in the above inequality and simplifying, we obtain
\begin{align}
\left|\mu a_n a_{m} - a_{n+m-1}\right| \leq (1-\beta)(\mu C_m-1) \notag
\end{align}
where $\mu \geq \max{\{ nm/\big((n+m-1)(1- \beta)\big), nm/(n+m-1)\}}.$ Since $n$ is even and $m$ is odd here, $C_n = C_{n+m-1} = 1-\beta$, so the right-hand side equals $\mu C_n C_m - C_{n+m-1}$, as asserted.
Next, we consider the case when $n$ is odd. In this case, we have
\begin{align}
&|\lambda n a_n a_{m} -(n+m-1) a_{n+m-1}| \notag \\
&\leq |\big(\lambda n a_n a_m - (n+m-1)a_{n+m-1}\big)-\big(\lambda (n-2) a_{n-2} a_m - (n+m-3) a_{n+m-3}\big)| + \cdots \notag \\
&\quad{} + |\big(3 \lambda a_{3} a_m -(m+2) a_{m+2}\big)-\big( \lambda a_{1} a_m - m a_{m}\big)| + |\lambda a_{1} a_m - m a_{m}|\quad (a_1=1).\notag
\end{align}
For $\lambda \geq \max{\{m/(1-\beta),m\}}$, applying \eqref{p4eq5.1} together with the bound $|a_m| \leq C_m$ from \eqref{p4eq4.14} to the above inequality gives
\begin{align*}
(n+m-1)\left|\dfrac{\lambda n}{n+m-1} a_n a_{m} - a_{n+m-1}\right|
\leq (n-1)(1-\beta)\left(\lambda C_m-1\right)+(\lambda -m)C_m
\end{align*}
where $C_m$ is given by the equation \eqref{p4eq4.13}.
Substituting $\mu=\lambda n/(n+m-1)$ in the previous inequality and simplifying, using $nC_n = 1+(n-1)(1-\beta)$ and $(n+m-1)C_{n+m-1} = (n-1)(1-\beta)+ m C_m$ (the latter holds for either parity of $m$, since $n$ is odd), we obtain
\begin{align*}
\left|\mu a_n a_{m} - a_{n+m-1}\right|
\leq \mu C_n C_m -C_{n+m-1},
\end{align*}
where $\mu \geq \max{\{ nm/\big((n+m-1)(1- \beta)\big), nm/(n+m-1)\}}$.
The sharpness follows for the function $f_3$ given by the equation \eqref{p4eq4.15}.
\end{proof}
For $\beta=0$ and $m=n=2k+1$ $(k \geq 1)$, we have the following.
\begin{corollary}
If $f(z)=z+\sum_{n=2}^{\infty}a_n z^n \in \mathcal{F}_2(0)$ and $\mu \geq (2k+1)^2/(4k+1)$, then $$|\mu a_{2 k+1}^2 - a_{4k+1}| \leq \mu-1 \quad (k \geq 1). $$
The result is sharp.
\end{corollary}
{
  "timestamp": "2016-11-10T02:02:29",
  "yymm": "1610",
  "arxiv_id": "1610.06760",
  "language": "en",
  "url": "https://arxiv.org/abs/1610.06760",
  "abstract": "For functions $f(z)= z+ a_2 z^2 + a_3 z^3 + \\cdots$ in various subclasses of normalized analytic functions, we consider the problem of estimating the generalized Zalcman coefficient functional $\\phi(f,n,m;\\lambda):=|\\lambda a_n a_m -a_{n+m-1}|$. For all real parameters $\\lambda$ and $ \\beta<1$, we provide the sharp upper bound of $\\phi(f,n,m;\\lambda)$ for functions $f$ satisfying $\\operatorname{Re}f'(z) > \\beta$ and hence settles the open problem of estimating $\\phi(f,n,m;\\lambda)$ recently proposed by Agrawal and Sahoo [S. Agrawal and S. k. Sahoo, On coefficient functionals associated with the Zalcman conjecture, arXiv preprint, 2016]. It is worth mentioning that the sharp estimations of $\\phi(f,n,m;\\lambda)$ follow for starlike and convex functions of order $\\alpha$ $(\\alpha <1)$ when $\\lambda \\leq 0$. Moreover, for certain positive $\\lambda$, the sharp estimation of $\\phi(f,n,m;\\lambda)$ is given when $f$ is a typically real function or a univalent function with real coefficients or is in some subclasses of close-to-convex functions.",
  "subjects": "Complex Variables (math.CV)",
  "title": "Generalized Zalcman conjecture for some classes of analytic functions"
}
https://arxiv.org/abs/2204.14182

On non-counital Frobenius algebras

Abstract: A Frobenius algebra is a finite-dimensional algebra $A$ which comes equipped with a coassociative, counital comultiplication map $\Delta$ that is an $A$-bimodule map. Here, we examine comultiplication maps for generalizations of Frobenius algebras: finite-dimensional self-injective (quasi-Frobenius) algebras. We show that large classes of such algebras, including finite-dimensional weak Hopf algebras, come equipped with a nonzero map $\Delta$ as above that is not necessarily counital. We also conjecture that this comultiplicative structure holds for self-injective algebras in general.

\section{Introduction} \label{sec:intro}
All algebraic structures in this work are over a field $\Bbbk$ of characteristic 0. Moreover, all algebras are finite-dimensional as $\Bbbk$-vector spaces, and are unital and associative.
This work is motivated by the various characterizations of Frobenius algebras in the literature. These structures were introduced by Frobenius in 1903 in terms of {\it paratrophic matrices} \cite{Frobenius}, and revamped in the 1930s by Brauer-Nesbitt and Nakayama in terms of their representation theory and certain forms that they admit \cite{BN, Nak}; see also \cite[Chapter~6]{Lam}. Starting in the 1990s, the following characterization of a Frobenius algebra became prevalent due to its connection to 2-dimensional Topological Quantum Field Theories (TQFTs).
\begin{definitiontheorem} \label{defthm:Frob} \cite{Quinn, Abrams}
Let $A$ be an algebra with multiplication map $m: A\otimes A\to A$. Then $A$ is {\it Frobenius} if and only if it comes equipped with a $\Bbbk$-linear, coassociative, counital comultiplication map $\Delta: A \to A \otimes A$ that is an $A$-bimodule map,~i.e.,
\begin{itemize}
\item $(\Delta \otimes \textnormal{id}) \Delta = (\textnormal{id} \otimes \Delta) \Delta$ \quad (for coassociativity);\smallskip
\item $\exists$ $\Bbbk$-linear map $\varepsilon: A \to \Bbbk$ so that
$(\varepsilon \otimes \textnormal{id}) \Delta = \textnormal{id} = (\textnormal{id} \otimes \varepsilon) \Delta$ \quad (for counitality).
\end{itemize}
Moreover, the left and right $A$-actions on $A$ are given by multiplication, and the left (resp., right) $A$-action on $A \otimes A$ is given by left (resp., right) multiplication in the first (resp., second) factor; so the $A$-bimodule map condition is equivalent to
\begin{equation} \label{eq:Delta-m}
(\textnormal{id} \otimes m)(\Delta \otimes \textnormal{id}) = \Delta m = (m \otimes \textnormal{id})(\textnormal{id} \otimes \Delta).
\end{equation}
\end{definitiontheorem}
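As a concrete illustration of Definition-Theorem~\ref{defthm:Frob}, consider the $2$-dimensional algebra $\Bbbk[x]/(x^2)$ with $\Delta(a+bx)=a(1\otimes x + x\otimes 1)+b\,x\otimes x$ and $\varepsilon(a+bx)=b$. This example is our own addition, not taken from the text; the Python sketch below verifies counitality and the bimodule condition \eqref{eq:Delta-m} on random elements, encoding $a+bx$ as the pair $(a,b)$ and an element of $A\otimes A$ as a $2\times 2$ coefficient grid on the basis $x^i\otimes x^j$.

```python
# Sketch (assumed example): A = k[x]/(x^2) is Frobenius with
# Delta(1) = 1⊗x + x⊗1, Delta(x) = x⊗x, and counit eps(a + b x) = b.
import random

def mul(u, v):                       # multiplication in A, using x^2 = 0
    return (u[0]*v[0], u[0]*v[1] + u[1]*v[0])

def tmul(S, T):                      # multiplication in A ⊗ A, factorwise
    R = [[0.0]*2 for _ in range(2)]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                for l in range(2):
                    if i + k < 2 and j + l < 2:   # x^2 = 0 in each factor
                        R[i+k][j+l] += S[i][j]*T[k][l]
    return R

def Delta(u):                        # Delta(a + b x) = a(1⊗x + x⊗1) + b x⊗x
    a, b = u
    return [[0.0, a], [a, b]]

def eps_left(T):                     # (eps ⊗ id)(T), with eps(a + b x) = b
    return (T[1][0], T[1][1])

for _ in range(100):
    y = (random.random(), random.random())
    z = (random.random(), random.random())
    # counitality: (eps ⊗ id) Delta = id
    assert all(abs(p - q) < 1e-9 for p, q in zip(eps_left(Delta(y)), y))
    # bimodule property \eqref{eq:Delta-m}: Delta(yz) = Delta(y)(1⊗z) = (y⊗1)Delta(z)
    one_z = [[z[0], z[1]], [0.0, 0.0]]   # 1 ⊗ z
    y_one = [[y[0], 0.0], [y[1], 0.0]]   # y ⊗ 1
    lhs = Delta(mul(y, z))
    assert all(abs(lhs[i][j] - tmul(Delta(y), one_z)[i][j]) < 1e-9
               for i in range(2) for j in range(2))
    assert all(abs(lhs[i][j] - tmul(y_one, Delta(z))[i][j]) < 1e-9
               for i in range(2) for j in range(2))
```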
We examine a similar comultiplicative structure for generalizations of Frobenius algebras: self-injective ({\it quasi-Frobenius}) algebras. We show that large subclasses of self-injective algebras $A$ come equipped with a comultiplication map $\Delta$ that makes $A$ {\it non-counital Frobenius}; that is, $\Delta$ is coassociative and satisfies \eqref{eq:Delta-m}, but is not necessarily counital.
\subsection{Self-injective endomorphism algebras} To proceed, recall an algebra $A$ is called {\it self-injective} if the $A$-modules ${}_A A$ and $A_A$ are both injective. Moreover, an algebra $B$ is {\it basic} if $B_B$ is a direct sum of pairwise non-isomorphic indecomposable (projective) modules.
In 2006, Skowroński and Yamagata obtained a characterization of self-injective algebras in terms of endomorphism algebras over basic algebras \cite{SYpaper}, given as follows.
\smallskip
Take $B$ a basic algebra, and let
$B = P_0 \oplus \cdots \oplus P_{n-1}$ be a decomposition of $B$ into a direct sum of indecomposable right $B$-modules. Here, $n$ is the number of primitive orthogonal idempotents $e_i$ of $B$ for which $1_B\hspace{-.01in} = \hspace{-.01in}\sum_{i=0}^{n-1} e_i$. For $m_0, \dots, m_{n-1} \hspace{-.015in} \in \hspace{-.02in} \mathbb{Z}_{>0}$, consider the notation:
\begin{equation} \label{eq:B(m_i)}
B(m_0, \dots, m_{n-1}) := \textnormal{End}_B(M_0 \oplus \cdots \oplus M_{n-1}), \quad \text{ for } M_i: = P_i^{\oplus m_i}.
\end{equation}
\begin{theorem} \cite[Theorems~1.3 and 2.1]{SYpaper} \cite[Theorems~IV.6.1,~IV.6.2]{SYbook}
\label{thm:SY}
An algebra $A$ is self-injective if and only if $A \cong B(m_0, \dots, m_{n-1})$ for $B$ a basic, self-injective algebra. Further, $A$ is Frobenius if and only if $m_i = m_{\nu(i)}$ for all $i$, where $\nu$ is the Nakayama permutation of $B$. \qed
\end{theorem}
Towards our goal, let us consider an important class of basic algebras $B$ depending on a certain directed graph, or a {\it quiver}, $Q$. Here, we read paths of $Q$ from left-to-right.
\begin{notation} \label{not:Bnl} Take $n \in \mathbb{Z}_{>0}$ and $1 \leq \ell \leq n-1$. Let $Q_{(n)}$ be an $n$-cycle quiver with vertex set $Q_0 = \{0, 1, \dots, n-1\}$ and arrow set $Q_1=\{\alpha_i: i \to i+1\}_{i=0, \dots, n-1}$. Take $R$ to be the arrow ideal of the path algebra $\Bbbk Q$, and let $\mathcal{I}_\ell$ be the admissible ideal $R^\ell$ of $\Bbbk Q$. Form the {\it bound quiver algebra},
$$B_{n,\ell}:=\Bbbk Q_{(n)} / \mathcal{I}_\ell.$$
\end{notation}
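The multiplication in $B_{n,\ell}$ is easy to model on the path basis: the product of the length-$j$ path starting at vertex $i$ with the length-$b$ path starting at vertex $a$ is the length-$(j+b)$ path starting at $i$ when $a \equiv i+j \pmod{n}$ and $j+b \leq \ell-1$, and zero otherwise. The following Python sketch is our own minimal model (the values of $n$ and $\ell$ are arbitrary choices) checking that this rule is associative:

```python
# Minimal model of B_{n,l}: a basis path is a pair (i, k), the path of
# length k starting at vertex i (k = 0 is the trivial path e_i);
# all paths of length >= l are zero.
n, l = 5, 3   # any n and 1 <= l <= n-1

def pmul(p, q):
    """Product of basis paths p = (i, j), q = (a, b); (i, j+b) or None (= 0)."""
    (i, j), (a, b) = p, q
    if a % n == (i + j) % n and j + b <= l - 1:
        return (i % n, j + b)
    return None

basis = [(i, k) for i in range(n) for k in range(l)]
assert len(basis) == n * l                 # dim B_{n,l} = n*l

# associativity on all triples of basis paths
for p in basis:
    for q in basis:
        for r in basis:
            pq, qr = pmul(p, q), pmul(q, r)
            left = pmul(pq, r) if pq else None
            right = pmul(p, qr) if qr else None
            assert left == right
```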
In fact, all basic, self-injective, {\it connected} {\it Nakayama} (or {\it generalized uniserial}) algebras are of the form $B_{n,\ell}$ \cite[Theorem~IV.6.15]{SYbook}. Therefore, we set the following terminology.
\begin{definition} \label{def:NSY}
Take $A:=B(m_0, \dots, m_{n-1})$, a self-injective algebra from Theorem~\ref{thm:SY}. We refer to $A$ as a {\it Nakayama-Skowro\'{n}ski-Yamagata (NSY) algebra} if $B = B_{n,\ell}$.
\end{definition}
This brings us to our first main result.
\begin{theorem}[Theorem~\ref{thm:main}] \label{thm:main-intro} The NSY algebras $A :=B(m_0, \dots, m_{n-1})$ are non-counital Frobenius via an explicitly defined comultiplication map $\Delta: A \to A\otimes A$. In particular, $\Delta$ is counital precisely when $A$ is Frobenius (e.g., when $m_i = m_{\nu(i)}$ for all $i$, as in Theorem~\ref{thm:SY}).
\end{theorem}
We establish Theorem~\ref{thm:main-intro} in Section~\ref{sec:mainresult}. We then provide several examples of comultiplicative structures of NSY algebras in Section~\ref{sec:examples}, such as for the Frobenius algebras $B_{n,\ell}(1,\dots, 1)$, and for Nakayama's 9-dimensional non-Frobenius self-injective algebra $B_{2,2}(2,1)$ \cite[page~624]{Nak}; we also compare the 28-dimensional Frobenius algebra $B_{4,3}(1,2,1,2)$ to the 27-dimensional algebra $B_{4,3}(1,1,2,2)$, whose comultiplication is not counital.
\subsection{Finite-dimensional weak Hopf algebras} Next, we turn our attention to another important class of self-injective algebras: finite-dimensional weak Hopf algebras. A {\it weak Hopf algebra} is an algebra $H$ that is equipped with a $\Bbbk$-linear coassociative comultiplication map $\Delta_{\textnormal{wk}}$, which is counital and satisfies certain compatibility axioms different from \eqref{eq:Delta-m}; see Definition~\ref{def:weak}. Such algebras generalize Hopf algebras and have gained prominence due to extensive work of Böhm, Nill, and Szlachányi in the late 1990s (see, e.g., \cite{BNS}). One of their main results is the following.
\begin{theorem} \label{thm:BNS-intro} \cite[Theorems~3.11 and~3.16]{BNS}
Finite-dimensional weak Hopf algebras $H$ are self-injective. Further, $H$ is Frobenius if and only if $H$ has a non-degenerate left integral (see Definition~\ref{def:integral}). \qed
\end{theorem}
See \cite{IK} and \cite[Section~9.2]{HH} for a study of finite-dimensional weak Hopf algebras that are not counital Frobenius.
Now we fulfill our goal for finite-dimensional weak Hopf algebras as described below.
\begin{theorem}[Theorem~\ref{thm:main-weak}] \label{thm:main-weak-intro} Finite-dimensional weak Hopf algebras $H$ are non-counital Frobenius via an explicitly defined comultiplication map $\Delta: H \to H\otimes H$. In particular, $\Delta$ is counital precisely when $H$ is Frobenius (e.g., when $H$ has a non-degenerate left integral).
\end{theorem}
Background material on weak Hopf algebras and the proof of Theorem~\ref{thm:main-weak-intro} can be found in Section~\ref{sec:weak}. We also provide many examples of this result in Section~\ref{sec:weak-ex}, especially for {\it groupoid algebras}, and for {\it quantum transformation groupoids}.
\subsection{Further directions} Our two results above, Theorems~\ref{thm:main-intro} and~\ref{thm:main-weak-intro}, prompt the following conjecture; progress toward it is illustrated in Diagram~1 below.
\begin{conjecture} \label{main-conj}
Self-injective algebras are non-counital Frobenius. In particular, any class of self-injective algebras admits a comultiplication map $\Delta$ yielding a non-counital Frobenius structure, and $\Delta$ is counital precisely for the Frobenius members of the class.
\end{conjecture}
\vspace{.1in}
\begin{center}
\includegraphics[scale=.15]{Diagram-1-v7.PNG}\\
\vspace{.15in}
{\small Diagram 1: Conjecture~\ref{main-conj} holds for NSY-algebras [Theorem~\ref{thm:main-intro}]\\ and for finite-dimensional weak Hopf algebras [Theorem~\ref{thm:main-weak-intro}].}
\end{center}
\vspace{.1in}
In fact, the conclusion of Conjecture~\ref{main-conj} has been achieved for another generalization of self-injective algebras: gendo-Frobenius algebras, introduced recently by Yırtıci. See \cite[Theorem~4.3]{Yir}.
Here, an algebra is said to be {\it Morita} if it is isomorphic to the endomorphism algebra of a finite-dimensional, faithful (right) module $M$ over a self-injective algebra $B$ \cite{KerYam}. Further, an algebra is called {\it gendo-Frobenius} if it is Morita under the conditions that $B$ is Frobenius and that $M \cong M_{\nu_B}$ as right $B$-modules, where $\nu_B$ is the {\it Nakayama automorphism} of $B$.
With this observation, we end with a few directions for further research.
\begin{question}
Does the conclusion of Conjecture~\ref{main-conj} hold for Morita algebras?
\end{question}
\begin{question}
What are the intersections between the classes of algebras above for which Conjecture~\ref{main-conj} holds? For instance, what are examples of {\it NSY weak Hopf algebras}?
\end{question}
\begin{question}
What are the physical uses of non-counital Frobenius algebras, say, akin to the use of (counital) Frobenius algebras in 2-dimensional TQFTs?
\end{question}
\section{NSY algebras are non-counital Frobenius} \label{sec:mainresult}
In this section, we establish an explicit non-counital Frobenius structure for the NSY algebras [Definition~\ref{def:NSY}], thereby proving Conjecture~\ref{main-conj} for a large class of self-injective algebras. Notation and preliminary results are provided in Section~\ref{sec:prelim}; the basis of the NSY algebras is discussed in Section~\ref{sec:basis} (see Proposition~\ref{prop:Bmi-basis}); and the unital, multiplicative structure of the NSY algebras is provided in Section~\ref{sec:algebra} (see Proposition~\ref{prop:Bmi-alg}). The main result on the non-counital Frobenius structure of the NSY algebras is then presented in Section~\ref{sec:Frobenius}, incorporating the results above (see Theorem~\ref{thm:main}).
\subsection{Preliminaries} \label{sec:prelim} To proceed, recall the notation in \eqref{eq:B(m_i)} and Notation~\ref{not:Bnl} for the NSY algebras $B_{n,\ell}(m_0, \dots, m_{n-1})$ in Definition~\ref{def:NSY}.
\begin{notation} \label{not:X}
Let $e_i$ denote the trivial path at vertex $i$ of $Q_0$. Consider the decomposition of $B_{n, \ell}$ into a direct sum of indecomposable right $B_{n, \ell}$-modules, $\bigoplus_{i=0}^{n-1} P_i$, for $P_i := e_i B_{n,\ell}$.
\begin{itemize}
\item Let the basis of $B_{n,\ell}$ be denoted by
$$\alpha_{i,k}:= \alpha_i \; \alpha_{i+1} \; \cdots \; \alpha_{i+k-1} \quad (\text{a path of length } k), \quad \text{ for }
0 \leq i \leq n-1, \; 0 \leq k \leq \ell-1.$$
Here, $\alpha_{i,0} = e_i$. In particular, the basis of $P_i$ is given by $\{\alpha_{i,k}\}_{0 \leq k \leq \ell-1}$.
\medskip
\item Let $P_i^{r_i}$ be the $r_i$-th copy of $P_i$, for $0 \leq r_i \leq m_i - 1$, and denote its basis by $\{\alpha^{r_i}_{i,k}\}_{0 \leq k \leq \ell-1}$. Moreover, $P_i^{r_i}$ is a right $B_{n,\ell}$-module via
\begin{equation} \label{eq:P-act}
\alpha^{r_i}_{i,k} \cdot \beta = (\alpha_{i,k} \cdot \beta)^{r_i}, \quad \text{for} \quad \beta \in B_{n,\ell}.
\end{equation}
\smallskip
\item Consider the maps
\begin{equation} \label{eq:Xij}
X_{i,j}^{r_i, s_{i+j}}: P_{i+j}^{s_{i+j}} \longrightarrow P_i^{r_i}, \quad \alpha^{s_{i+j}}_{i+j,k} \mapsto (\alpha_{i,j} \cdot \alpha_{i+j,k})^{r_i} = \alpha_{i, j+k}^{r_i}.
\end{equation}
That is, $X_{i,j}^{r_i, s_{i+j}}$ is pre-composition by the path $\alpha_{i,j}$ (in the appropriate copy of $P_i$).
\medskip
\item Extend $X_{i,j}^{r_i, s_{i+j}}$ to an endomorphism of $\bigoplus_{i,r_i} P_i^{r_i}$ by setting
$$X_{i,j}^{r_i,s_{i+j}}(P_a^{s_a}) = 0, \quad \text{for \;$a \neq i+j$ \;or\; $s_a \neq s_{i+j}$.}$$
\medskip
\item Throughout, indices are as follows: $n,\; \ell,\; m_0, \dots, m_{n-1}$ are fixed; $\; i, \; a$ are taken modulo $n$; and $0 \leq j,k,b \leq \ell-1, \; \; 0 \leq r_i \leq m_i -1, \; \; 0 \leq s_{i+j} \leq m_{i+j} -1.$
\end{itemize}
\end{notation}
Next, we compute the Nakayama permutation \cite[page~377]{SYbook} \cite[page~136]{SYpaper} of the NSY algebras, as this is needed to determine when such algebras are Frobenius [Theorem~\ref{thm:SY}]. We refer to \cite[Section~I.3]{ASSbook} for background on radicals, tops, and socles of modules of finite-dimensional algebras.
\begin{proposition} \label{prop:Nak-perm}
The Nakayama permutation $\nu$ of $B_{n, \ell}$ is the permutation of $\{0, 1, \dots, n-1\}$ given by
$\nu(i) = i + \ell -1 \text{ modulo $n$}.$
\end{proposition}
\begin{proof}
Recall Notation~\ref{not:X}; in particular, the indecomposable right $B_{n, \ell}$-modules are of the form $P_i = e_i B_{n,\ell} = \bigoplus_{k=0}^{\ell-1} \Bbbk \alpha_{i,k}$, with $\alpha_{i,0} = e_i$. By the definition on \cite[page 136]{SYpaper}, we need to compute the permutation $\nu$ of $\{0, 1, \dots, n-1\}$ such that
\[
\text{soc}(e_i B_{n,\ell}) \; \cong \; \text{top}(e_{\nu(i)} B_{n,\ell}).
\]
Now rad($P_i$)=$\bigoplus_{k=1}^{\ell-1} \Bbbk \alpha_{i,k}$. So, we obtain that top($P_i$) = $P_i/\text{rad}(P_i)$ = $\Bbbk e_i$. On the other hand, soc($P_i$) = $\Bbbk \alpha_{i, \ell-1} \cong \Bbbk e_{i+\ell -1}$ as right $B_{n,\ell}$-modules. Here, $\Bbbk e_{i+\ell -1}$ is the right $B_{n, \ell}$-module with action $ e_{i+\ell -1} \cdot \alpha_{j,k} =\delta_{i+\ell-1, j} \;\delta_{k,0}\; e_{i + \ell -1}$. So, $\nu(i) = i + \ell -1$ modulo $n$.
\end{proof}
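In the same path model, the proof can be replayed mechanically: the socle of $P_i$ consists of the basis paths annihilated by every arrow, and the endpoint vertex of the socle generator recovers $\nu(i)$. The Python sketch below is our own cross-check, with illustrative values of $n$ and $\ell$:

```python
# Cross-check of the Nakayama permutation of B_{n,l}: soc(P_i) is spanned by
# the paths (i, k) killed by every arrow, and its endpoint vertex is nu(i).
n, l = 7, 4

def arrow_action(path, a):
    """Right action of the arrow alpha_a on the basis path (i, k), or None (= 0)."""
    i, k = path
    if a % n == (i + k) % n and k + 1 <= l - 1:
        return (i, k + 1)
    return None

nu = []
for i in range(n):
    socle = [(i, k) for k in range(l)
             if all(arrow_action((i, k), a) is None for a in range(n))]
    assert socle == [(i, l - 1)]          # soc(P_i) is one-dimensional
    endpoint = (i + (l - 1)) % n          # soc(P_i) ≅ top(P_endpoint) = k e_endpoint
    nu.append(endpoint)

assert sorted(nu) == list(range(n))       # nu is a permutation
assert nu == [(i + l - 1) % n for i in range(n)]
```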
\subsection{Basis} \label{sec:basis}
Next, we show that the maps $\{X_{i,j}^{r_i,s_{i+j}}\}$ from Notation~\ref{not:X} form a basis of a given NSY algebra $B_{n,\ell}(m_0,\dots,m_{n-1})$. Consider the following preliminary results.
\begin{lemma}
We have that $X_{i,j}^{r_i,s_{i+j}}\in B_{n,\ell}(m_0, \dots, m_{n-1}) $.
\end{lemma}
\begin{proof}
We need to show that $X_{i,j}^{r_i, s_{i+j}}$ is a map of right $B_{n,\ell}$-modules. Take $\beta \in B_{n,\ell}$, and take $\theta \in P_a^{s_a}$ for some $0 \leq a \leq n-1$ and $0 \leq s_a \leq m_a -1$. Then, $\theta = \gamma^{s_a}$, for some path $\gamma$ that starts at vertex $a$. Now we see that $X_{i,j}^{r_i, s_{i+j}}$ is a right $B_{n,\ell}$-module map as follows:
\[
{\small
\begin{array}{rll}
\medskip
X_{i,j}^{r_i, s_{i+j}}(\theta \cdot \beta)
&\; = \; X_{i,j}^{r_i, s_{i+j}}(\gamma^{s_a} \cdot \beta)
&\; \overset{\textnormal{\eqref{eq:P-act}}}{=} \; X_{i,j}^{r_i, s_{i+j}}(\gamma \cdot \beta)^{s_a}\\
\medskip
&\overset{\textnormal{\eqref{eq:Xij}}}{=} \; \delta_{i+j,a}\; \delta_{s_{i+j},s_a}\; (\alpha_{i,j} \cdot \gamma \cdot \beta)^{r_i}
&\; \overset{\textnormal{\eqref{eq:P-act}}}{=} \; \delta_{i+j,a}\; \delta_{s_{i+j},s_a}\; [(\alpha_{i,j} \cdot \gamma)^{r_i} \cdot \beta]\\
\medskip
&\; = \; [\delta_{i+j,a}\; \delta_{s_{i+j},s_a}\; (\alpha_{i,j} \cdot \gamma)^{r_i}] \cdot \beta
&\; = X_{i,j}^{r_i, s_{i+j}}(\gamma^{s_a}) \cdot \beta \\
&\; = \; X_{i,j}^{r_i, s_{i+j}}(\theta) \cdot \beta.
\end{array}
}
\]
\vspace{-.18in}
\end{proof}
\begin{lemma}\label{lem:Bmi-linind}
The elements $X_{i,j}^{r_i,s_{i+j}}$ are linearly independent.
\end{lemma}
\begin{proof}
Suppose not; then there exist scalars
$ c_{i,j}^{r_i,s_{i+j}} \in \Bbbk$ not all zero such that
\begin{equation}\label{eq:li1}
\sum\limits_{i=0}^{n-1} \; \sum\limits_{j=0}^{\ell-1}\; \sum\limits_{r_i=0}^{m_i-1}\; \sum\limits_{s_{i+j}=0}^{m_{i+j}-1} c_{i,j}^{r_i,s_{i+j}} X_{i,j}^{r_i,s_{i+j}} = 0.
\end{equation}
Since the $ c_{i,j}^{r_i,s_{i+j}}$ are not all zero, there exist indices $a$, $b$, $r_a$, $s_{a+b}$ such that $ c_{a,b}^{r_a,s_{a+b}}\neq 0$. Now, evaluate~\eqref{eq:li1} on the element $\alpha_{a+b,0}^{s_{a+b}}$ to get that
{\small
\begin{align*}
& \sum\limits_{i=0}^{n-1}\; \sum\limits_{j=0}^{\ell-1}\; \sum\limits_{r_i=0}^{m_i-1}\; \sum\limits_{s_{i+j}=0}^{m_{i+j}-1} c_{i,j}^{r_i,s_{i+j}} \; X_{i,j}^{r_i,s_{i+j}} (\alpha_{a+b,0}^{s_{a+b}})\\
& =
\sum\limits_{i=0}^{n-1}\; \sum\limits_{j=0}^{\ell-1}\; \sum\limits_{r_i=0}^{m_i-1}\; \sum\limits_{s_{i+j}=0}^{m_{i+j}-1} c_{i,j}^{r_i,s_{i+j}} \; \delta_{i+j,a+b}\; \delta_{s_{i+j},s_{a+b}} \alpha_{i,j}^{r_i} \\
&= \sum\limits_{i=0}^{n-1} \; \sum\limits_{j=0}^{\ell-1} \; \sum\limits_{r_i=0}^{m_i-1} c_{i,j}^{r_i,s_{a+b}} \; \delta_{i+j,a+b} \; \alpha_{i,j}^{r_i} \\
& = 0.
\end{align*}
}
\noindent Since $\{ \alpha_{i,j}^{r_i} \}$ are linearly independent elements of $\bigoplus_{i,r_i} P_i^{r_i}$, we get that $c_{i,j}^{r_i, s_{a+b}} = 0$ for all $i$, $j$, $r_i$. In particular, for $i = a$ and $j=b$, we get that $c_{a,b}^{r_a,s_{a+b}}=0$. This is a contradiction to our assumption. Thus, the claim follows.
\end{proof}
\begin{lemma}\label{lem:Bmi-maps}
Every nonzero right $B_{n,\ell}$-module map $\phi$ from $P_a^{r_a}$ to $P_i^{r_i}$ is a nonzero scalar multiple of the map given by $\phi(\alpha_{a,k}^{r_a}) = (\alpha_{i,j}\cdot \alpha_{a,k})^{r_i}$ for some $j$ such that $i+j\equiv a\textnormal{ mod }n$.
\end{lemma}
\begin{proof}
Since $\phi$ is a right $B_{n,\ell}$-module map, we have that
\[ 0 \neq \phi(\alpha_{a,0}^{r_a}) \overset{\textnormal{\eqref{eq:P-act}}}{=} \phi( \alpha_{a,0}^{r_a}\cdot \alpha_{a,0} ) = \phi(\alpha_{a,0}^{r_a})\cdot \alpha_{a,0}. \]
Therefore, $\phi(\alpha_{a,0}^{r_a})$ ends at vertex $a$. Also, $\phi(\alpha_{a,0}^{r_a})\in P_i^{r_i}$ implies that $\phi(\alpha_{a,0}^{r_a})$ starts at vertex~$i$. Since $\ell \leq n-1$, the only basis path of $P_i$ with these endpoints is $\alpha_{i,j}$ for the unique $j$ with $i+j\equiv a \textnormal{ mod }n$; hence $\phi(\alpha_{a,0}^{r_a}) = c\,\alpha_{i,j}^{r_i}$ for some nonzero $c \in \Bbbk$, and the claim holds.
\end{proof}
\begin{lemma} \label{lem:Bmi-dim}
We have that $\dim_\Bbbk \left(B_{n,\ell}(m_0, \dots, m_{n-1}) \right) = \sum_{i=0}^{n-1} \sum_{j=0}^{\ell -1} m_i m_{i+j}.$
\end{lemma}
\begin{proof}
Recall that
{\small
\begin{align*}
B_{n,\ell}(m_0,\ldots, m_{n-1}) & = \textnormal{Hom}_{B_{n,\ell}}\left(\bigoplus\limits_{a=0}^{n-1}\; \bigoplus\limits_{r_a=0}^{m_a-1} P_a^{r_a}, \; \; \bigoplus\limits_{i=0}^{n-1} \; \bigoplus\limits_{r_i=0}^{m_i-1} P_i^{r_i} \right) \\
& \cong \bigoplus\limits_{a,i=0}^{n-1} \; \bigoplus\limits_{r_a=0}^{m_a-1} \; \bigoplus\limits_{r_i=0}^{m_i-1} \textnormal{Hom}_{B_{n,\ell}}(P_a^{r_a},P_i^{r_i}).
\end{align*}
}
\noindent By Lemma~\ref{lem:Bmi-maps}, dim$_\Bbbk(\textnormal{Hom}_{B_{n,\ell}}(P_a^{r_a},P_i^{r_i})) = \#\{ 0\leq j\leq \ell-1 ~:~ i+j\equiv a \textnormal{ mod }n \}$.
Therefore,
{\small
\begin{align*}
\textnormal{dim}_\Bbbk(B_{n,\ell}(m_0,\ldots, m_{n-1})) & = \sum\limits_{a=0}^{n-1} \; \sum\limits_{i=0}^{n-1} \; \sum\limits_{r_a=0}^{m_a-1}\; \sum\limits_{r_i=0}^{m_i-1} \#\{ 0\leq j\leq \ell-1 ~:~ i+j\equiv a \textnormal{ mod }n \} \\
& = \sum\limits_{a=0}^{n-1}\; \sum\limits_{i=0}^{n-1} m_a m_i \; \#\{ 0\leq j\leq \ell-1 ~:~ i+j\equiv a \textnormal{ mod }n \} \\
& = \sum\limits_{i=0}^{n-1} m_i \; \sum\limits_{a=0}^{n-1} m_a \; \# \{ 0\leq j\leq \ell-1 ~:~ i+j\equiv a \textnormal{ mod }n \} \\
& = \sum_{i=0}^{n-1} m_i \; \sum_{j=0}^{\ell -1} m_{i+j}.
\end{align*}
}
\vspace{-.15in}
\end{proof}
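The count above is easy to cross-check by brute force. The Python sketch below is our own, using the multiplicities of the 28-dimensional example $B_{4,3}(1,2,1,2)$ from the introduction; it enumerates the index set of the maps $X_{i,j}^{r_i,s_{i+j}}$ and compares it with the dimension formula:

```python
# Cross-check of the dimension formula: the number of maps X_{i,j}^{r_i,s_{i+j}}
# equals sum_i sum_{j=0}^{l-1} m_i * m_{i+j}.
n, l = 4, 3
m = [1, 2, 1, 2]   # multiplicities m_0, ..., m_{n-1}

basis_maps = [(i, j, r, s)
              for i in range(n) for j in range(l)
              for r in range(m[i]) for s in range(m[(i + j) % n])]

formula = sum(m[i] * m[(i + j) % n] for i in range(n) for j in range(l))
assert len(basis_maps) == formula
# B_{4,3}(1,2,1,2) is the 28-dimensional algebra mentioned in the introduction
assert formula == 28
```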
\begin{proposition} \label{prop:Bmi-basis}
The morphisms $\{X_{i,j}^{r_i, s_{i+j}}\}_{i,j,r_i,s_{i+j}}$ form a $\Bbbk$-basis of $B_{n,\ell}(m_0, \dots, m_{n-1})$.
\end{proposition}
\begin{proof}
The number of elements in the linearly independent set $\{X_{i,j}^{r_i, s_{i+j}}\}_{i,j,r_i,s_{i+j}}$ from Lemma~\ref{lem:Bmi-linind} is $\sum_{i=0}^{n-1} \sum_{j=0}^{\ell -1} m_i m_{i+j}$, the same as dim$_\Bbbk (B_{n,\ell}(m_0,\ldots,m_{n-1}))$ by Lemma~\ref{lem:Bmi-dim}. So, the result follows.
\end{proof}
\subsection{Algebra structure} \label{sec:algebra}
Next, we examine the algebraic structure of the NSY algebras with the basis $\{X_{i,j}^{r_i, s_{i+j}}\}_{i,j,r_i,s_{i+j}}$ from Proposition~\ref{prop:Bmi-basis}.
\begin{proposition} \label{prop:Bmi-alg}
Take $A:=B_{n,\ell}(m_0, \dots, m_{n-1})$, an NSY algebra. Then, the formulas
\[
\begin{array}{c}
\medskip
X_{i,j}^{r_i, s_{i+j}} \cdot X_{a,b}^{r_a, s_{a+b}} = \begin{cases}
\delta_{a,i+j} \; \delta_{r_a, s_{i+j}}\; X_{i,j+b}^{r_i, s_{a+b}}, &\text{for $j +b < \ell$},\\
0, &\text{else};
\end{cases}
\medskip \\
1_A = \sum_ {i=0}^{n-1} \sum_{r_i=0}^{m_{i}-1} X_{i,0}^{r_i,r_i},
\end{array}
\]
give $A$ the structure of an associative, unital algebra.
\end{proposition}
\begin{proof}
First, we show that the multiplication is well-defined. Given the two basis elements $X_{i,j}^{r_i, s_{i+j}}$ and $X_{a,b}^{r_a, s_{a+b}}$ as above, their composition is defined to be:
\medskip
$$X_{i,j}^{r_i, s_{i+j}} \circ X_{a,b}^{r_a, s_{a+b}} = \begin{cases}
P_{a+b}^{s_{a+b}} \longrightarrow P_a^{r_a} = P_{i+j}^{s_{i+j}} \longrightarrow P_i^{r_i}, & a = i+j ,\; r_a = s_{i+j} , \; j+b < \ell \\
0, & \text{otherwise}.
\end{cases} $$
If we consider a basis element $\alpha_{a+b, k}^{s_{a+b}}$ of $P_{a+b}^{s_{a+b}}$, then when $a = i+j , \; r_a = s_{i+j} ,\; j+b < \ell$, this map is defined by:
\[
{\small
\begin{array}{rll}
\medskip
X_{i,j}^{r_i, s_{i+j}} \circ X_{a,b}^{r_a, s_{a+b}} \left(\alpha_{a+b, k}^{s_{a+b}} \right) &\; = \; X_{i,j}^{r_i, s_{i+j}} \left(\left(\alpha_{a,b} \cdot \alpha_{a+b, k} \right)^{r_a} \right)
&\; = \; \left( \alpha_{i,j} \cdot \alpha_{a,b} \cdot \alpha_{a+b,k} \right)^{r_i} \\ \medskip
&\; = \; \left( \alpha_{i,j} \cdot \alpha_{i+j,b} \cdot \alpha_{a+b,k} \right)^{r_i}
&\; = \; \left( \alpha_{i,j+b} \cdot \alpha_{a+b,k} \right)^{r_i} \\
&\; = \; X_{i,j+b}^{r_i, s_{a+b}} \left( \alpha_{a+b,k}^{s_{a+b}} \right).
\end{array}
}
\]
Here, we use~\eqref{eq:P-act} throughout. Hence, this multiplication is well-defined.
Next, we will see that the multiplication is associative via the following computation:
{\small
\begin{align*}
\medskip
&\left(X_{i,j}^{r_i, s_{i+j}} \cdot X_{a,b}^{r_a, s_{a+b}} \right) \cdot X_{c,d}^{r_c,s_{c+d}} \\
&\qquad =
\begin{cases}
\delta_{a,i+j} \; \delta_{r_a, s_{i+j}}\; X_{i,j+b}^{r_i, s_{a+b}} \cdot X_{c,d}^{r_c,s_{c+d}}, &\text{for $j +b < \ell$}\\
0, &\text{else}
\end{cases} \bigskip \\
&\qquad =
\begin{cases}
\delta_{a,i+j} \; \delta_{r_a, s_{i+j}}\; \delta_{c,i+j+b} \; \delta_{r_c,s_{i+j+b}}\; X_{i,j+b+d}^{r_i, s_{c+d}}, &\text{for $j+b+d < \ell$},\\
0, & \text{else}
\end{cases} \bigskip \\
&\qquad =
\begin{cases}
\delta_{c,a+b} \; \delta_{r_c,s_{a+b}} \; \delta_{a,i+j} \; \delta_{r_a, s_{i+j}}\; X_{i,j+b+d}^{r_i, s_{c+d}}, &\text{for $j+b+d < \ell$},\\
0, & \text{else}
\end{cases} \bigskip \\
&\qquad =
\begin{cases}
\delta_{c,a+b} \; \delta_{r_c,s_{a+b}}\; X_{i,j}^{r_i, s_{i+j}} \cdot X_{a,b+d}^{r_a, s_{c+d}}, &\text{for $b+d < \ell$},\\
0, & \text{else}
\end{cases} \bigskip\\
&\qquad = X_{i,j}^{r_i, s_{i+j}} \cdot \left( X_{a,b}^{r_a, s_{a+b}} \cdot X_{c,d}^{r_c,s_{c+d}} \right).
\end{align*}
}
We now establish the unitality axiom. Given the unit $1_A$ as defined above, we have the computations below:
\[
{\small
\begin{array}{rll}
\medskip
1_A \cdot X_{a,b}^{r_a, s_{a+b}} &= \left( \sum_ {i=0}^{n-1} \sum_{r_i=0}^{m_{i}-1} X_{i,0}^{r_i,r_i} \right) \cdot X_{a,b}^{r_a, s_{a+b}} & = \sum_ {i=0}^{n-1} \sum_{r_i=0}^{m_{i}-1} \left( X_{i,0}^{r_i,r_i} \cdot X_{a,b}^{r_a, s_{a+b}} \right) \\ \medskip
& = \sum_ {i=0}^{n-1} \sum_{r_i=0}^{m_{i}-1} \left(\delta_{a,i} \; \delta_{r_a,r_i} \; X_{i,b}^{r_i, s_{a+b}} \right) & = \sum_ {i=0}^{n-1} \left(\delta_{a,i} \; X_{i,b}^{r_a, s_{a+b}} \right) \\
\bigskip
& = X_{a,b}^{r_a, s_{a+b}}, \\ \medskip
X_{a,b}^{r_a, s_{a+b}} \cdot 1_A &\; = \; X_{a,b}^{r_a, s_{a+b}} \cdot \left( \sum_ {i=0}^{n-1} \sum_{r_i=0}^{m_{i}-1} X_{i,0}^{r_i,r_i} \right)
&= \sum_ {i=0}^{n-1} \sum_{r_i=0}^{m_{i}-1} \left(X_{a,b}^{r_a, s_{a+b}} \cdot X_{i,0}^{r_i,r_i}
\right) \\ \medskip
&= \sum_ {i=0}^{n-1} \sum_{r_i=0}^{m_{i}-1} \delta_{i,a+b}\; \delta_{r_i, s_{a+b}} X_{a,b}^{r_a,r_i} & = \sum_{r=0}^{m_{a+b}-1} \delta_{r, s_{a+b}} X_{a,b}^{r_a,r} \\
& = X_{a,b}^{r_a,s_{a+b}}.
\end{array}
}
\]
Therefore, the result holds.
\end{proof}
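The structure constants of Proposition~\ref{prop:Bmi-alg} can also be verified exhaustively on a small example. In the Python sketch below (our own; the parameters $n=3$, $\ell=2$, $(m_0,m_1,m_2)=(2,1,1)$ are illustrative choices), a basis element $X_{i,j}^{r,s}$ is encoded as the tuple $(i,j,r,s)$, and associativity and unitality are checked on all basis elements:

```python
# Exhaustive check of the structure constants of an NSY algebra:
# X_{i,j}^{r,s} * X_{a,b}^{ra,sab} = delta_{a,i+j} delta_{ra,s} X_{i,j+b}^{r,sab}
# when j + b < l, and 0 otherwise; 1_A = sum of the X_{i,0}^{r,r}.
n, l = 3, 2
m = [2, 1, 1]

basis = [(i, j, r, s)
         for i in range(n) for j in range(l)
         for r in range(m[i]) for s in range(m[(i + j) % n])]

def mul(X, Y):
    """Product of basis elements; a basis element or None (= zero)."""
    (i, j, r, s), (a, b, ra, sab) = X, Y
    if a == (i + j) % n and ra == s and j + b < l:
        return (i, j + b, r, sab)
    return None

# associativity on all triples of basis elements
for X in basis:
    for Y in basis:
        for Z in basis:
            XY, YZ = mul(X, Y), mul(Y, Z)
            left = mul(XY, Z) if XY else None
            right = mul(X, YZ) if YZ else None
            assert left == right

# unitality: 1_A = sum_{i,r} X_{i,0}^{r,r} acts as identity on every basis element
unit = [(i, 0, r, r) for i in range(n) for r in range(m[i])]
for X in basis:
    left = [mul(E, X) for E in unit if mul(E, X)]
    right = [mul(X, E) for E in unit if mul(X, E)]
    assert left == [X] and right == [X]
```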
\subsection{Non-counital Frobenius structure} \label{sec:Frobenius}
Here, we establish a non-counital Frobenius structure for the NSY algebras, building on the algebra structure presented in Proposition~\ref{prop:Bmi-alg}. Consider the next preliminary result.
\begin{lemma} \label{lem:coassoc}
Let $A$ be an algebra with unit $1_A$, fix an element $\sum_i a_i \otimes b_i \in A \otimes A$, and define a $\Bbbk$-linear map
$\Delta: A \to A \otimes A$ by $\Delta(x) := \sum_i a_i \otimes b_i x$; in particular, $\Delta(1_A) = \textstyle \sum_i a_i \otimes b_i.$
If
\begin{equation}
\label{eq:Delta1}
\textstyle \sum_i a_i \otimes b_i x = \sum_i x a_i \otimes b_i, \quad \quad \text{ for all } x \in A,
\end{equation}
then $(A,m,u,\Delta)$ is a non-counital Frobenius algebra.
\end{lemma}
\begin{proof}
It suffices to show that $\Delta$ is coassociative and satisfies \eqref{eq:Delta-m}. Now $\Delta$ is coassociative as, for all $y \in A$, we get that
\begin{align*}
\textstyle (\Delta \otimes \textnormal{id})\Delta(y)
& \; \; = \; \; \textstyle \sum_i \Delta(a_i) \otimes b_i y
\; \; = \; \; \textstyle \sum_{i,j} a_j \otimes b_j a_i \otimes b_i y \; \; \overset{\textnormal{\eqref{eq:Delta1}, $x \hspace{-.03in}=\hspace{-.03in}y$}}{=} \; \; \sum_{i,j} a_j \otimes b_j y a_i \otimes b_i\\
&\overset{\textnormal{\eqref{eq:Delta1}, $x\hspace{-.03in}=\hspace{-.03in}b_j y$}}{=} \; \; \textstyle \sum_{j,i} a_j \otimes a_i \otimes b_i b_j y \; \; = \; \; \sum_j a_j \otimes \Delta(b_j y) \; \; = \; \; (\textnormal{id} \otimes \Delta)\Delta(y).
\end{align*}
Moreover, \eqref{eq:Delta-m} holds as for all $y,z \in A$, we get that
\begin{align*}
(\textnormal{id} \otimes m)(\Delta \otimes \textnormal{id})(y \otimes z) = (\textnormal{id} \otimes m)(\textstyle \sum_i a_i \otimes b_i y \otimes z) = \textstyle \sum_i a_i \otimes b_i y z = \Delta m (y \otimes z)\\
(m \otimes \textnormal{id})(\textnormal{id} \otimes \Delta)(y \otimes z) = \textstyle \sum_i y a_i \otimes b_i z
\overset{\textnormal{\eqref{eq:Delta1}, $x\hspace{-.03in}=\hspace{-.03in}y$}}{=} \textstyle \sum_i a_i \otimes b_i y z = \Delta m (y \otimes z).
\end{align*}
Thus, $(A,m,u,\Delta)$ is a non-counital Frobenius algebra.
\end{proof}
Here, $\Delta(1_A)$ is a {\it Casimir element} of $A$.
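For a concrete instance of Lemma~\ref{lem:coassoc}, one can take $A = M_2(\Bbbk)$ with the Casimir element $\Delta(1_A) = \sum_{i,j} E_{ij}\otimes E_{ji}$ on the matrix units $E_{ij}$; this example is our own addition, not from the text. The Python sketch below verifies the hypothesis \eqref{eq:Delta1} directly:

```python
# Check of \eqref{eq:Delta1} for A = M_2(k) with Casimir element
# Delta(1_A) = sum_{i,j} E_ij ⊗ E_ji:  sum a_i ⊗ b_i x = sum x a_i ⊗ b_i.
N = 2

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):                      # Kronecker product, modeling ⊗
    s = len(B)
    return [[A[i // s][j // s] * B[i % s][j % s]
             for j in range(len(A) * s)] for i in range(len(A) * s)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

# matrix units E[i][j], with a single 1 in row i, column j
E = [[[[1 if (r, c) == (i, j) else 0 for c in range(N)] for r in range(N)]
      for j in range(N)] for i in range(N)]
pairs = [(E[i][j], E[j][i]) for i in range(N) for j in range(N)]  # (a_i, b_i)

x = [[1, 2], [3, 5]]                          # an arbitrary element of M_2
zero = [[0] * (N * N) for _ in range(N * N)]
lhs, rhs = zero, zero
for a, b in pairs:
    lhs = madd(lhs, kron(a, mmul(b, x)))      # sum a_i ⊗ (b_i x)
    rhs = madd(rhs, kron(mmul(x, a), b))      # sum (x a_i) ⊗ b_i
assert lhs == rhs                             # \eqref{eq:Delta1} holds
```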
This brings us to our first main result.
\begin{theorem} \label{thm:main}
Consider the NSY algebra $A:=B_{n,\ell}(m_0, \dots, m_{n-1})$, with $\Bbbk$-basis $\{X_{i,j}^{r_i, s_{i+j}}\}$ given in Proposition~\ref{prop:Bmi-basis} and algebra structure given in Proposition~\ref{prop:Bmi-alg}. Then, the following statements hold.
\begin{enumerate} [font=\normalfont]
\item $A$ is non-counital Frobenius with
\begin{align*}
\Delta(X_{i,j}^{r_i, s_{i+j}})
&= \displaystyle \sum_{k=0}^{\ell - 1 - j} \; \sum_{t_{i+j+k}=0}^{m_{i+j+k}-1} \; \sum_{t_{i+j+k - \ell +1}=0}^{m_{i+j+k-\ell +1}-1}\Big( 1 - \delta_{m_{i+j+k},m_{i+j+k-\ell+1}}(1 - \delta_{t_{i+j+k}, t_{i+j+k-\ell+1}})\Big)\\ \noalign{\vskip-8pt}
& \displaystyle \hspace{2in} \cdot \; X_{i,j+k}^{r_i, t_{i+j+k}}\; \otimes\; X_{i+j+k-\ell+1, \ell - 1-k}^{t_{i+j+k-\ell+1}, s_{i+j}}.
\end{align*}
\item $(A, \Delta)$ is Frobenius if and only if $m_i = m_{i - \ell +1}$ for all $i=0, \dots, n-1$; in which case, $$\varepsilon(X_{i,j}^{r_i, s_{i+j}}) = \delta_{j, \ell-1} \; \delta_{r_i, s_{i+j}} \; 1_\Bbbk$$ is the counit of $\Delta$.
\end{enumerate}
\end{theorem}
\begin{proof}
(a) First, note that by Proposition~\ref{prop:Bmi-alg},
\[
{\small
\begin{array}{rl}
\Delta(1_A) &= \displaystyle \sum_ {i=0}^{n-1} \; \sum_{r_i=0}^{m_{i}-1} \Delta(X_{i,0}^{r_i,r_i}) \medskip \\
&= \displaystyle \sum_ {i=0}^{n-1} \; \sum_{r_i=0}^{m_{i}-1} \; \sum_{k=0}^{\ell - 1} \; \sum_{t_{i+k}=0}^{m_{i+k}-1} \; \sum_{t_{i+k - \ell +1}=0}^{m_{i+k-\ell +1}-1}\Big( 1 - \delta_{m_{i+k},m_{i+k-\ell+1}}(1 - \delta_{t_{i+k}, t_{i+k-\ell+1}})\Big)\\
\noalign{\vskip-5pt} & \displaystyle \hspace{2.2in} \cdot \; X_{i,k}^{r_i, t_{i+k}}\; \otimes\; X_{i+k-\ell+1, \ell - 1-k}^{t_{i+k-\ell+1}, r_{i}}.
\end{array}
}
\]
It suffices to show that \eqref{eq:Delta1} holds for the basis of $A$ to conclude that $A$ is non-counital Frobenius [Lemma~\ref{lem:coassoc}]. To proceed, take the basis element $X_{a,b}^{r_a, s_{a+b}}$ and consider the following computations:
{\small
\begin{align*}
& (X_{a,b}^{r_a, s_{a+b}} \otimes 1_A)\Delta(1_A) \bigskip \\ & = \displaystyle \sum_ {i=0}^{n-1} \; \sum_{r_i=0}^{m_{i}-1} \; \sum_{k=0}^{\ell - 1} \; \sum_{t_{i+k}=0}^{m_{i+k}-1} \; \sum_{t_{i+k - \ell +1}=0}^{m_{i+k-\ell +1}-1}\Big( 1 - \delta_{m_{i+k},m_{i+k-\ell+1}}(1 - \delta_{t_{i+k}, t_{i+k-\ell+1}})\Big)\\
\noalign{\vskip-8pt} & \displaystyle \hspace{2.2in} \cdot \;X_{a,b}^{r_a, s_{a+b}} \; X_{i,k}^{r_i, t_{i+k}}\; \otimes\; X_{i+k-\ell+1, \ell - 1-k}^{t_{i+k-\ell+1}, r_{i}} \bigskip \\
& = \displaystyle \sum_ {i=0}^{n-1} \; \sum_{r_i=0}^{m_{i}-1} \; \sum_{k=0}^{\ell - 1-b} \; \sum_{t_{i+k}=0}^{m_{i+k}-1} \; \sum_{t_{i+k - \ell +1}=0}^{m_{i+k-\ell +1}-1}\Big( 1 - \delta_{m_{i+k},m_{i+k-\ell+1}}(1 - \delta_{t_{i+k}, t_{i+k-\ell+1}})\Big)\\
\noalign{\vskip-8pt} & \displaystyle \hspace{2.2in} \cdot \;\delta_{i,a+b} \; \delta_{r_i, s_{a+b}}\; X_{a,b+k}^{r_a, t_{i+k}} \; \otimes\; X_{i+k-\ell+1, \ell - 1-k}^{t_{i+k-\ell+1}, r_{i}} \bigskip \\
& = \displaystyle \sum_{k=0}^{\ell - 1-b} \; \sum_{t_{a+b+k}=0}^{m_{a+b+k}-1} \; \sum_{t_{a+b+k - \ell +1}=0}^{m_{a+b+k-\ell +1}-1}\Big( 1 - \delta_{m_{a+b+k},m_{a+b+k-\ell+1}}(1 - \delta_{t_{a+b+k}, t_{a+b+k-\ell+1}})\Big)\\
\noalign{\vskip-8pt} & \displaystyle \hspace{2in} \cdot \; X_{a,b+k}^{r_a, t_{a+b+k}} \; \otimes\; X_{a+b+k-\ell+1, \ell - 1-k}^{t_{a+b+k-\ell+1}, s_{a+b}}.
\end{align*}
}
and
{\small
\begin{align*}
&\Delta(1_A) (1_A \otimes X_{a,b}^{r_a, s_{a+b}}) \bigskip \\ & = \displaystyle \sum_ {i=0}^{n-1} \; \sum_{r_i=0}^{m_{i}-1} \; \sum_{k=0}^{\ell - 1} \; \sum_{t_{i+k}=0}^{m_{i+k}-1} \; \sum_{t_{i+k - \ell +1}=0}^{m_{i+k-\ell +1}-1}\Big( 1 - \delta_{m_{i+k},m_{i+k-\ell+1}}(1 - \delta_{t_{i+k}, t_{i+k-\ell+1}})\Big)\\
\noalign{\vskip-8pt} & \displaystyle \hspace{2.2in} \cdot \; X_{i,k}^{r_i, t_{i+k}}\; \otimes\; X_{i+k-\ell+1, \ell - 1-k}^{t_{i+k-\ell+1}, r_{i}} \; X_{a,b}^{r_a, s_{a+b}} \bigskip\\
& = \displaystyle \sum_ {i=0}^{n-1} \; \sum_{r_i=0}^{m_{i}-1} \; \sum_{k=b}^{\ell - 1} \; \sum_{t_{i+k}=0}^{m_{i+k}-1} \; \sum_{t_{i+k - \ell +1}=0}^{m_{i+k-\ell +1}-1}\Big( 1 - \delta_{m_{i+k},m_{i+k-\ell+1}}(1 - \delta_{t_{i+k}, t_{i+k-\ell+1}})\Big)\\
\noalign{\vskip-8pt} & \displaystyle \hspace{2.2in} \cdot \; X_{i,k}^{r_i, t_{i+k}}\; \otimes\; \delta_{a,i}\; \delta_{r_a,r_i}\; X_{i+k-\ell+1, \ell - 1-k+b}^{t_{i+k-\ell+1}, s_{a+b}}\bigskip\\
& = \displaystyle \sum_{k=b}^{\ell - 1} \; \sum_{t_{a+k}=0}^{m_{a+k}-1} \; \sum_{t_{a+k - \ell +1}=0}^{m_{a+k-\ell +1}-1}\Big( 1 - \delta_{m_{a+k},m_{a+k-\ell+1}}(1 - \delta_{t_{a+k}, t_{a+k-\ell+1}})\Big)\\
\noalign{\vskip-8pt} & \displaystyle \hspace{1.8in} \cdot \; X_{a,k}^{r_a, t_{a+k}}\; \otimes\; X_{a+k-\ell+1, \ell - 1-k+b}^{t_{a+k-\ell+1}, s_{a+b}}\bigskip\\
& = \displaystyle \sum_{k=0}^{\ell - 1-b} \; \sum_{t_{a+b+k}=0}^{m_{a+b+k}-1} \; \sum_{t_{a+b+k - \ell +1}=0}^{m_{a+b+k-\ell +1}-1}\Big( 1 - \delta_{m_{a+b+k},m_{a+b+k-\ell+1}}(1 - \delta_{t_{a+b+k}, t_{a+b+k-\ell+1}})\Big)\\
\noalign{\vskip-8pt} & \displaystyle \hspace{1.8in} \cdot \; X_{a,b+k}^{r_a, t_{a+b+k}}\; \otimes\; X_{a+b+k-\ell+1, \ell - 1-k}^{t_{a+b+k-\ell+1}, s_{a+b}}.
\end{align*}
}
So, $ (X_{a,b}^{r_a, s_{a+b}} \otimes 1_A) \Delta(1_A) = \Delta(1_A) (1_A \otimes X_{a,b}^{r_a, s_{a+b}})$ as desired.
\bigskip
(b) The first statement follows from the following equivalences:
{\small
\[
\text{$A$ is Frobenius } \overset{\textnormal{Thm. \ref{thm:SY}}}{\Longleftrightarrow} m_i=m_{\nu(i)}\; \forall\; i\; \overset{\textnormal{Prop. \ref{prop:Nak-perm}}}{\iff} m_i=m_{i+\ell-1} \; \forall \; i\; \overset{i\rightarrow i-\ell+1}{\iff} m_i=m_{i-\ell+1} \; \forall \; i. \]
}
\noindent When $A$ is Frobenius, that is, when $m_i = m_{i-\ell+1}$ for $i=0, \dots, n-1$, we get that
{\small
\begin{align*}
\Delta(X_{i,j}^{r_i, s_{i+j}})
&= \displaystyle \sum_{k=0}^{\ell - 1 - j} \; \sum_{t_{i+j+k}=0}^{m_{i+j+k}-1} \; \sum_{t_{i+j+k - \ell +1}=0}^{m_{i+j+k-\ell +1}-1}\Big( 1 - \delta_{m_{i+j+k},m_{i+j+k-\ell+1}}(1 - \delta_{t_{i+j+k}, t_{i+j+k-\ell+1}})\Big)\\
\noalign{\vskip-8pt} & \displaystyle \hspace{2in} \cdot \; X_{i,j+k}^{r_i, t_{i+j+k}}\; \otimes\; X_{i+j+k-\ell+1, \ell - 1-k}^{t_{i+j+k-\ell+1}, s_{i+j}} \bigskip \\
&= \displaystyle \sum_{k=0}^{\ell - 1 - j} \; \sum_{t_{i+j+k}=0}^{m_{i+j+k}-1} \; \sum_{t_{i+j+k - \ell +1}=0}^{m_{i+j+k}-1}\delta_{t_{i+j+k}, t_{i+j+k-\ell+1}} \; X_{i,j+k}^{r_i, t_{i+j+k}}\; \otimes\; X_{i+j+k-\ell+1, \ell - 1-k}^{t_{i+j+k-\ell+1}, s_{i+j}}\bigskip
\\
&= \displaystyle \sum_{k=0}^{\ell - 1 - j} \; \sum_{t_{i+j+k}=0}^{m_{i+j+k}-1} \; X_{i,j+k}^{r_i, t_{i+j+k}}\; \otimes\; X_{i+j+k-\ell+1, \ell - 1-k}^{t_{i+j+k}, s_{i+j}}.
\end{align*}
}
Now in the Frobenius case, consider the following computations:
{\small
\begin{align*}
(\varepsilon \otimes \textnormal{id}) \Delta(X_{i,j}^{r_i, s_{i+j}})
&= \displaystyle \sum_{k=0}^{\ell - 1 - j} \; \sum_{t_{i+j+k}=0}^{m_{i+j+k}-1} \; \varepsilon(X_{i,j+k}^{r_i, t_{i+j+k}})\; \otimes\; X_{i+j+k-\ell+1, \ell - 1-k}^{t_{i+j+k}, s_{i+j}} \bigskip\\
&= \displaystyle \sum_{k=0}^{\ell - 1 - j} \; \sum_{t_{i+j+k}=0}^{m_{i+j+k}-1} \delta_{j+k, \ell-1} \; \delta_{r_i, t_{i+j+k}}\; X_{i+j+k-\ell+1, \ell - 1-k}^{t_{i+j+k}, s_{i+j}} \bigskip\\
&= X_{i+j+(\ell-1-j)-\ell+1, \ell - 1-(\ell-1-j)}^{r_i, s_{i+j}} \\
\noalign{\vskip5pt} &= X_{i, j}^{r_i, s_{i+j}}, \\
(\textnormal{id} \otimes \varepsilon) \Delta(X_{i,j}^{r_i, s_{i+j}})
&= \displaystyle \sum_{k=0}^{\ell - 1 - j} \; \sum_{t_{i+j+k}=0}^{m_{i+j+k}-1} \; X_{i,j+k}^{r_i, t_{i+j+k}}\; \otimes\; \varepsilon(X_{i+j+k-\ell+1, \ell - 1-k}^{t_{i+j+k}, s_{i+j}}) \bigskip\\
&= \displaystyle \sum_{k=0}^{\ell - 1 - j} \; \sum_{t_{i+j+k}=0}^{m_{i+j+k}-1} \delta_{\ell-1-k, \ell-1} \; \delta_{t_{i+j+k},s_{i+j}}\; X_{i,j+k}^{r_i, t_{i+j+k}}\bigskip\\
&= X_{i,j}^{r_i, s_{i+j}}.
\end{align*}
}
\vspace{-.1in}
\noindent Therefore, $\Delta$ is counital via $\varepsilon$ in the Frobenius case, as claimed.
\vspace{-.2in}
\end{proof}
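Theorem~\ref{thm:main}(a) can also be verified mechanically on small examples. The Python sketch below is an illustrative aside: the encoding of the basis element $X_{i,j}^{r_i, s_{i+j}}$ as a tuple $(i,j,r,s)$ and the helper names are ours, and the multiplication rule $X_{a,b}^{r,s}\, X_{i,k}^{r',t} = \delta_{i,\,a+b}\, \delta_{r',\,s}\, X_{a,\,b+k}^{r,t}$ (indices taken mod $n$, with the product zero when $b+k > \ell-1$) is read off from the computations in the proof above rather than restated from Proposition~\ref{prop:Bmi-alg}. The code checks that $(x \otimes 1_A)\Delta(1_A) = \Delta(1_A)(1_A \otimes x) = \Delta(x)$ for every basis element $x$.

```python
from itertools import product

def basis(n, l, m):
    """Basis X_{i,j}^{r,s} of B_{n,l}(m_0,...,m_{n-1}), encoded as tuples (i,j,r,s)."""
    return [(i, j, r, s)
            for i in range(n) for j in range(l)
            for r in range(m[i]) for s in range(m[(i + j) % n])]

def mul(x, y, n, l):
    # X_{a,b}^{r,s} X_{i,k}^{r',t} = [i = a+b mod n][r' = s][b+k <= l-1] X_{a,b+k}^{r,t}
    (a, b, r, s), (i, k, r2, t) = x, y
    if i == (a + b) % n and r2 == s and b + k <= l - 1:
        return (a, b + k, r, t)
    return None

def delta(x, n, l, m):
    """Comultiplication of Theorem main(a), as a dict {(left, right): coefficient}."""
    i, j, r, s = x
    out = {}
    for k in range(l - j):
        A, B = (i + j + k) % n, (i + j + k - l + 1) % n
        for t, t2 in product(range(m[A]), range(m[B])):
            if m[A] == m[B] and t != t2:   # the Kronecker-delta factor
                continue
            out[((i, j + k, r, t), (B, l - 1 - k, t2, s))] = 1
    return out

def act(x, T, side, n, l):
    """(x (x) 1).T for side='left', and T.(1 (x) x) for side='right'."""
    out = {}
    for (u, v), c in T.items():
        w = mul(x, u, n, l) if side == 'left' else mul(v, x, n, l)
        if w is not None:
            key = (w, v) if side == 'left' else (u, w)
            out[key] = out.get(key, 0) + c
    return out

def check_delta1(n, l, m):
    """Verify (x (x) 1)Delta(1) = Delta(1)(1 (x) x) = Delta(x) for all basis x."""
    D1 = {}
    for i in range(n):
        for r in range(m[i]):             # 1_A = sum_i sum_r X_{i,0}^{r,r}
            for key, c in delta((i, 0, r, r), n, l, m).items():
                D1[key] = D1.get(key, 0) + c
    return all(act(x, D1, 'left', n, l) == act(x, D1, 'right', n, l)
               == delta(x, n, l, m) for x in basis(n, l, m))

# the four algebras appearing in the examples section below:
assert all(check_delta1(n, l, m) for (n, l, m) in
           [(2, 2, (1, 1)), (2, 2, (2, 1)), (4, 3, (1, 2, 1, 2)), (4, 3, (1, 1, 2, 2))])
```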
\section{Examples for NSY algebras} \label{sec:examples}
In this section, we illustrate the results of the previous section for various examples of the (self-injective) NSY algebras, $B_{n,\ell}(m_0, \dots,m_{n-1})$ [Definition~\ref{def:NSY}]. First, by Proposition~\ref{prop:Nak-perm} and Theorem~\ref{thm:SY}, we have the following statement:
\begin{equation} \label{eq:FrobSY}
\text{$B_{n,\ell}(m_0, \dots,m_{n-1})$ is Frobenius \quad $\iff$ \quad $m_i = m_{i+\ell-1}$ for all $i$.}
\end{equation}
In Section~\ref{sec:FrobEx}, we analyze the Frobenius NSY algebras $B_{n,\ell}(1, \dots, 1)$. In Section~\ref{sec:Nakayama}, we study Nakayama's example of a non-Frobenius, self-injective algebra, the NSY algebra $B_{2,2}(2,1)$ \cite[page~624]{Nak}. We end by comparing the Frobenius algebra $B_{4,3}(1,2,1,2)$ with the non-Frobenius, self-injective algebra $B_{4,3}(1,1,2,2)$ in Section~\ref{sec:highDim}.
\subsection{Frobenius examples} \label{sec:FrobEx}
\subsubsection{The NSY algebra $B_{2,2}(1,1)$} We begin with a discussion of the example $B_{2,2}(1,1)$. By Lemma~\ref{lem:Bmi-dim} and Proposition~\ref{prop:Bmi-basis},
$B_{2,2}(1,1)$ is a 4-dimensional algebra with $\Bbbk$-basis:
$$\{ X_{0,0}^{0,0}, \; \; X_{0,1}^{0, 0}, \; \; X_{1,0}^{0,0},\; \; X_{1,1}^{0, 0} \}.$$
\noindent By Proposition~\ref{prop:Bmi-alg}, the multiplication of this algebra is given by the following table:
\medskip
{\small
\begin{center}
\renewcommand\arraystretch{1.3}
\setlength\doublerulesep{0pt}
\begin{tabular}{r||*{4}{c|}}
$*$ & $X_{0,0}^{0,0}$ & $X_{0,1}^{0, 0}$ & $X_{1,0}^{0,0}$ & $X_{1,1}^{0, 0}$ \\
\hline\hline
$X_{0,0}^{0,0}$ & $X_{0,0}^{0,0}$ & $X_{0,1}^{0, 0}$ & $0$ & $0$ \\
\hline
$X_{0,1}^{0, 0}$ & $0$ & $0$ & $X_{0,1}^{0, 0}$ & $0$ \\
\hline
$X_{1,0}^{0,0}$ & $0$ & $0$ & $X_{1,0}^{0,0}$ & $X_{1,1}^{0,0}$ \\
\hline
$X_{1,1}^{0, 0}$ & $X_{1,1}^{0,0}$ & $0$ & $0$ & $0$ \\
\hline
\end{tabular}
\end{center}
}
\bigskip
\noindent Moreover, the unit of $B_{2,2}(1,1)$ is $X_{0,0}^{0,0} + X_{1,0}^{0,0}$.
Now, by Theorem~\ref{thm:main}(a), we compute the comultiplication formula as follows:
{\small
\begin{align*}
\Delta(X_{0,0}^{0,0}) &= \sum_{k=0}^{1 } \; \sum_{t_k=0}^{m_{k}-1} \; \sum_{t_{k - 1}=0}^{m_{k-1}-1}\Big( 1 - \delta_{m_{k},m_{k-1}}(1 - \delta_{t_{k}, t_{k-1}})\Big) \cdot \; X_{0,k}^{0, t_{k}}\; \otimes\; X_{k-1, 1-k}^{t_{k-1}, 0} \\
&= \sum_{k=0}^{1 } \; \sum_{t_k=0}^{0} \; \sum_{t_{k - 1}=0}^{0} \Big( 1 - \delta_{1,1}(1 - \delta_{t_{k}, t_{k-1}})\Big) \cdot \; X_{0,k}^{0, t_{k}}\; \otimes\; X_{k-1, 1-k}^{t_{k-1}, 0} \\
&= \sum_{k=0}^{1 } X_{0,k}^{0, 0} \; \otimes\; X_{k-1, 1-k}^{0, 0} \\
&= X_{0,0}^{0,0} \otimes X_{1,1}^{0,0} + X_{0,1}^{0,0} \otimes X_{0,0}^{0,0} \;\;;\\
\\
\Delta(X_{0,1}^{0,0}) &= \sum_{k=0}^{0} \; \sum_{t_{k+1}=0}^{m_{1+k}-1} \; \sum_{t_k=0}^{m_{k}-1}\Big( 1 - \delta_{m_{1+k},m_{k}}(1 - \delta_{t_{1+k}, t_{k}})\Big) \cdot \; X_{0,1+k}^{0, t_{1+k}}\; \otimes\; X_{k, 1-k}^{t_{k}, 0} \\
&=\sum_{k=0}^{0} \; \sum_{t_{1+k}=0}^{0} \; \sum_{t_{k}=0}^{0} \Big( 1 - \delta_{1,1} \left(1 - \delta_{t_{1+k}, t_{k}} \right) \Big) \cdot \; X_{0,k+1}^{0, t_{1+k}}\; \otimes\; X_{k, 1-k}^{t_{k}, 0} \\
&= X_{0,1}^{0,0} \otimes X_{0,1}^{0,0} \;\;; \\
\\
\Delta(X_{1,0}^{0, 0})
&= \sum_{k=0}^{1} \; \sum_{t_{1+k}=0}^{m_{1+k}-1} \; \sum_{t_k=0}^{m_{k}-1}\Big( 1 - \delta_{m_{k+1},m_{k}}(1 - \delta_{t_{1+k}, t_{k}})\Big) \cdot \; X_{1,k}^{0, t_{1+k}}\; \otimes\; X_{k, 1-k}^{t_{k}, 0} \\
&= \sum_{k=0}^{1} \Big( 1 - (1 - \delta_{0,0})\Big) \cdot \; X_{1,k}^{0, 0}\; \otimes\; X_{k, 1-k}^{0, 0} \\
&= X_{1,0}^{0, 0} \otimes X_{0, 1}^{0, 0} + X_{1,1}^{0, 0} \otimes X_{1, 0}^{0, 0} \;\;;\\
\\
\Delta( X_{1,1}^{0,0}) &= \sum_{k=0}^{0} \; \sum_{t_{2+k}=0}^{m_{2+k}-1} \; \sum_{t_{k+1}=0}^{m_{k+1}-1}\Big( 1 - \delta_{m_{2+k},m_{k+1}}(1 - \delta_{t_{2+k}, t_{k+1}})\Big)\cdot \; X_{1,1+k}^{0, t_{2+k}}\; \otimes\; X_{k+1, 1-k}^{t_{k+1}, 0}\\
&= \sum_{k=0}^{0} \; \sum_{t_{2+k}=0}^{0} \; \sum_{t_{k+1}=0}^{0}\Big( 1 - \delta_{1,1}(1 - \delta_{t_{2+k}, t_{k+1}})\Big)\cdot \; X_{1,1+k}^{0, t_{2+k}}\; \otimes\; X_{k+1, 1-k}^{t_{k+1}, 0} \\
&= \sum_{k=0}^{0} \Big( 1 - (1 - \delta_{0, 0})\Big)\cdot \; X_{1,1+k}^{0, 0}\; \otimes\; X_{k+1, 1-k}^{0, 0} \\
&= X_{1,1}^{0, 0}\; \otimes\; X_{1, 1}^{0, 0}.
\end{align*}
}
\medskip
\noindent This gives $B_{2,2}(1,1)$ the structure of a non-counital Frobenius algebra. By~\eqref{eq:FrobSY} and Theorem~\ref{thm:main}(b), the counit for this algebra exists and is given by
$$ \varepsilon(X_{0,0}^{0, 0}) \;= \; \varepsilon(X_{1,0}^{0, 0})\; =\; 0_\Bbbk, \quad\quad \varepsilon(X_{0,1}^{0, 0})\; = \; \varepsilon(X_{1,1}^{0, 0})\; =\; 1_\Bbbk.$$
\subsubsection{The NSY algebras $B_{n,\ell}(1,\dots,1)$} We can generalize the work in the previous subsection by considering the NSY algebras $B_{n,\ell}(1,\dots,1)$. By Lemma~\ref{lem:Bmi-dim} and Proposition~\ref{prop:Bmi-basis}, $B_{n,\ell}(1,\dots,1)$ is an $n \ell$-dimensional algebra with basis given by $\{X_{i,j}^{0,0}\}_{i,j}$. The general comultiplication formula for this case simplifies to:
{\small
\begin{align*}
\Delta(X_{i,j}^{0, 0})
&= \displaystyle \sum_{k=0}^{\ell - 1 - j} \; \sum_{t_{i+j+k}=0}^{m_{i+j+k}-1} \; \sum_{t_{i+j+k - \ell +1}=0}^{m_{i+j+k-\ell +1}-1}\Big( 1 - \delta_{m_{i+j+k},m_{i+j+k-\ell+1}}(1 - \delta_{t_{i+j+k}, t_{i+j+k-\ell+1}})\Big)\\
\noalign{\vskip-8pt} & \displaystyle \hspace{2in} \cdot \; X_{i,j+k}^{0, t_{i+j+k}}\; \otimes\; X_{i+j+k-\ell+1, \ell - 1-k}^{t_{i+j+k-\ell+1}, 0} \\
&= \displaystyle \sum_{k=0}^{\ell - 1 - j} \; \sum_{t_{i+j+k}=0}^{0} \; \sum_{t_{i+j+k - \ell +1}=0}^{0}\Big( 1 - \delta_{0,0}(1 - \delta_{t_{i+j+k}, t_{i+j+k-\ell+1}})\Big)\\
\noalign{\vskip-8pt} & \displaystyle \hspace{2in} \cdot \; X_{i,j+k}^{0, t_{i+j+k}}\; \otimes\; X_{i+j+k-\ell+1, \ell - 1-k}^{t_{i+j+k-\ell+1}, 0} \\
&= \displaystyle \sum_{k=0}^{\ell - 1 - j} \; X_{i,j+k}^{0, 0}\; \otimes\; X_{i+j+k-\ell+1, \ell - 1-k}^{0, 0}.
\end{align*}
}
\noindent Moreover, the counit formula simplifies to $\varepsilon(X_{i,j}^{0, 0}) \;= \delta_{j, \ell-1} \; 1_\Bbbk.$
\smallskip
The reader may wish to compare these results with work of Wang-Zhang in \cite{WangZhang}.
\smallskip
\subsection{Nakayama's non-Frobenius self-injective algebra}
\label{sec:Nakayama}
Next, we consider Nakayama's example of a finite-dimensional self-injective algebra that is not Frobenius \cite[page~624]{Nak}. This algebra is 9-dimensional (see~Lemma~\ref{lem:Bmi-dim}) and is isomorphic to the NSY algebra $B_{2,2}(2,1)$; see \cite[Section~3]{SYpaper} for a proof. By Proposition~\ref{prop:Bmi-basis}, the $\Bbbk$-basis of
$B_{2,2}(2,1)$ is
$$\{ X_{0,0}^{0,0}, \; \; X_{0,0}^{0,1}, \; \; X_{0,0}^{1,0}, \; \; X_{0,0}^{1,1}, \; \; X_{0,1}^{0,0}, \; \; X_{0,1}^{1,0}, \; \; X_{1,0}^{0,0}, \; \; X_{1,1}^{0,0}, \; \; X_{1,1}^{0,1}\}.$$
Via the multiplication table below, these basis elements correspond, respectively, to the basis elements of Nakayama's algebra in \cite{Nak}:
$$\{ \alpha_{11}, \; \; \alpha_{12}, \; \; \alpha_{21}, \; \; \alpha_{22}, \; \; \gamma_1, \; \; \gamma_2, \; \; \beta, \; \; \delta_1, \; \; \delta_2\}.$$
\noindent Now by Proposition~\ref{prop:Bmi-alg}, the multiplication of $B_{2,2}(2,1)$ is given by the following table:
\medskip
{\small
\begin{center}
\renewcommand\arraystretch{1.3}
\setlength\doublerulesep{0pt}
\begin{tabular}{r||*{9}{c|}}
$*$ & $X_{0,0}^{0,0}$ & $X_{0,0}^{0, 1}$ & $X_{0,0}^{1,0}$ & $X_{0,0}^{1, 1}$ & $X_{0,1}^{0, 0}$ & $X_{0,1}^{1, 0}$ & $X_{1,0}^{0, 0}$ & $X_{1,1}^{0, 0}$ & $X_{1,1}^{0, 1}$ \\
\hline\hline
$X_{0,0}^{0,0}$ & $X_{0,0}^{0,0}$ & $X_{0,0}^{0,1}$ & $0$ & $0$ & $X_{0,1}^{0,0}$ & $0$ & $0$ & $0$ & $0$ \\
\hline
$X_{0,0}^{0,1}$ & $0$ & $0$ & $X_{0,0}^{0,0}$ & $X_{0,0}^{0,1}$ & $0$ & $X_{0,1}^{0,0}$ & $0$ & $0$ & $0$ \\
\hline
$X_{0,0}^{1,0}$ & $X_{0,0}^{1,0}$ & $X_{0,0}^{1,1}$ & $0$ & $0$ & $X_{0,1}^{1,0}$ & $0$ & $0$ & $0$ & $0$ \\
\hline
$X_{0,0}^{1,1}$ & $0$ & $0$ & $X_{0,0}^{1,0}$ & $X_{0,0}^{1,1}$ & $0$ & $X_{0,1}^{1,0}$ & $0$ & $0$ & $0$ \\
\hline
$X_{0,1}^{0,0}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $X_{0,1}^{0,0}$ & $0$ & $0$ \\
\hline
$X_{0,1}^{1,0}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $X_{0,1}^{1,0}$ & $0$ & $0$ \\
\hline
$X_{1,0}^{0,0}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $X_{1,0}^{0,0}$ & $X_{1,1}^{0,0}$ & $X_{1,1}^{0,1}$ \\
\hline
$X_{1,1}^{0,0}$ & $X_{1,1}^{0,0}$ & $X_{1,1}^{0,1}$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
\hline
$X_{1,1}^{0,1}$ & $0$ & $0$ & $X_{1,1}^{0,0}$ & $X_{1,1}^{0,1}$ & $0$ & $0$ & $0$ & $0$ & $0$ \\
\hline
\end{tabular}
\end{center}
}
\medskip
\noindent Moreover, the unit of $B_{2,2}(2,1)$ is $X_{0,0}^{0,0} + X_{0,0}^{1,1} + X_{1,0}^{0,0}$.
By Theorem~\ref{thm:main}(a), we compute the comultiplication for $B_{2,2}(2,1)$; since $m_0 = 2 \neq 1 = m_1$, the factor $\delta_{m_{i+j+k},\, m_{i+j+k-1}}$ always vanishes, so the formula reduces to
{\small $$\Delta(X_{i,j}^{r_i, s_{i+j}})
= \displaystyle \sum_{k=0}^{1 - j} \; \sum_{t_{i+j+k}=0}^{m_{i+j+k}-1} \; \sum_{t_{i+j+k - 1}=0}^{m_{i+j+k-1}-1} \; X_{i,j+k}^{r_i, t_{i+j+k}}\; \otimes\; X_{i+j+k-1, 1-k}^{t_{i+j+k-1}, s_{i+j}}.$$}
So we get the output below:
{\small
\begin{align*}
\Delta(X_{0,0}^{0,0}) &= \textstyle \sum_{k=0}^{1} \sum_{t_{k}=0}^{m_{k}-1} \sum_{t_{k - 1}=0}^{m_{k-1}-1} X_{0,k}^{0, t_{k}} \otimes X_{k-1, 1-k}^{t_{k-1}, 0} \\
&= X_{0,0}^{0, 0} \otimes X_{1, 1}^{0, 0}
+ X_{0,0}^{0, 1} \otimes X_{1, 1}^{0, 0}
+ X_{0,1}^{0, 0} \otimes X_{0, 0}^{0, 0}
+ X_{0,1}^{0, 0} \otimes X_{0, 0}^{1, 0} ;\\
\noalign{\vskip5pt} \Delta(X_{0,0}^{0,1}) &= X_{0,0}^{0, 0} \otimes X_{1, 1}^{0, 1}
+ X_{0,0}^{0, 1} \otimes X_{1, 1}^{0, 1}
+ X_{0,1}^{0, 0} \otimes X_{0, 0}^{0, 1}
+ X_{0,1}^{0, 0} \otimes X_{0, 0}^{1, 1} ;\\
\noalign{\vskip5pt}
\Delta(X_{0,0}^{1,0}) &= X_{0,0}^{1, 0} \otimes X_{1, 1}^{0, 0}
+ X_{0,0}^{1, 1} \otimes X_{1, 1}^{0, 0}
+ X_{0,1}^{1, 0} \otimes X_{0, 0}^{0, 0}
+ X_{0,1}^{1, 0} \otimes X_{0, 0}^{1, 0} ;\\
\noalign{\vskip5pt}
\Delta(X_{0,0}^{1,1}) &= X_{0,0}^{1, 0} \otimes X_{1, 1}^{0, 1}
+ X_{0,0}^{1, 1} \otimes X_{1, 1}^{0, 1}
+ X_{0,1}^{1, 0} \otimes X_{0, 0}^{0, 1}
+ X_{0,1}^{1, 0} \otimes X_{0, 0}^{1, 1} ;\\
\noalign{\vskip5pt}
\Delta(X_{0,1}^{0,0}) &= \textstyle \sum_{k=0}^{0} \sum_{t_{k+1}=0}^{m_{k+1}-1} \sum_{t_{k}=0}^{m_{k}-1} X_{0,k+1}^{0, t_{k+1}} \otimes X_{k, 1-k}^{t_{k}, 0}\\
&= X_{0,1}^{0,0} \otimes X_{0, 1}^{0, 0} + X_{0,1}^{0,0} \otimes X_{0, 1}^{1, 0} ;\\
\noalign{\vskip5pt}
\Delta(X_{0,1}^{1,0}) &= X_{0,1}^{1,0} \otimes X_{0, 1}^{0, 0} + X_{0,1}^{1,0} \otimes X_{0, 1}^{1, 0} ;\\
\noalign{\vskip5pt}
\Delta(X_{1,0}^{0,0}) &= \textstyle \sum_{k=0}^{1} \sum_{t_{k+1}=0}^{m_{k+1}-1} \sum_{t_{k}=0}^{m_{k}-1} X_{1,k}^{0, t_{k+1}} \otimes X_{k, 1-k}^{t_{k}, 0}\\
&=X_{1,0}^{0,0} \otimes X_{0,1}^{0, 0}
+ X_{1,0}^{0,0} \otimes X_{0, 1}^{1, 0}
+ X_{1,1}^{0,0} \otimes X_{1, 0}^{0, 0}
+ X_{1,1}^{0,1} \otimes X_{1, 0}^{0, 0} ;\\
\noalign{\vskip5pt}
\Delta(X_{1,1}^{0,0}) &= \textstyle \sum_{k=0}^{0} \sum_{t_{k}=0}^{m_{k}-1} \sum_{t_{k+1}=0}^{m_{k+1}-1} X_{1,k+1}^{0, t_{k}} \otimes X_{k+1, 1-k}^{t_{k+1}, 0}\\
&=X_{1,1}^{0, 0} \otimes X_{1, 1}^{0, 0}+X_{1,1}^{0, 1} \otimes X_{1,1}^{0, 0} ;\\
\noalign{\vskip5pt}
\Delta(X_{1,1}^{0,1}) &=X_{1,1}^{0, 0} \otimes X_{1, 1}^{0, 1} + X_{1,1}^{0, 1} \otimes X_{1,1}^{0, 1}.
\end{align*}
}
Lastly, we see that the comultiplication is not counital using $\varepsilon$ in Theorem~\ref{thm:main}(b):
{\small
\begin{align*}
(\varepsilon \otimes \textnormal{id}) \Delta(X_{0,0}^{0, 0}) &= \varepsilon(X_{0,0}^{0, 0}) \; X_{1, 1}^{0, 0}
+ \varepsilon(X_{0,0}^{0, 1}) \; X_{1, 1}^{0, 0}
+ \varepsilon(X_{0,1}^{0, 0}) \; X_{0, 0}^{0, 0}
+ \varepsilon(X_{0,1}^{0, 0}) \; X_{0, 0}^{1, 0} = X_{0, 0}^{0, 0} + X_{0, 0}^{1, 0},\\
(\textnormal{id} \otimes \varepsilon) \Delta(X_{0,0}^{0, 0}) &= X_{0,0}^{0, 0} \; \varepsilon(X_{1, 1}^{0, 0})
+ X_{0,0}^{0, 1} \; \varepsilon(X_{1, 1}^{0, 0})
+ X_{0,1}^{0, 0} \; \varepsilon(X_{0, 0}^{0, 0})
+ X_{0,1}^{0, 0} \; \varepsilon(X_{0, 0}^{1, 0}) = X_{0, 0}^{0, 0} + X_{0, 0}^{0, 1}.
\end{align*}
}
\smallskip
\subsection{Higher-dimensional examples}\label{sec:highDim}
In this section we discuss two examples of NSY algebras, of dimensions 28 and 27, in Sections~\ref{sec:1212} and~\ref{sec:1122}, respectively, and provide examples of the comultiplication formula for these cases.
\subsubsection{The NSY algebra $B_{4,3}(1,2,1,2)$}
\label{sec:1212}
By Lemma~\ref{lem:Bmi-dim} and Proposition~\ref{prop:Bmi-basis}, we get that
{\small
\begin{align*}
\textnormal{dim}(B_{4,3}(1,2,1,2)) &=\sum_{i=0}^3\sum_{j=0}^2 m_{i} m_{i+j}\\
&= 1(1+2+1) + 2(2+1+2) + 1(1+2+1) + 2(2+1+2) = 28.
\end{align*}
}
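As an aside (not part of the paper's development), dimension counts of this kind are easy to cross-check mechanically via the formula of Lemma~\ref{lem:Bmi-dim}, $\dim B_{n,\ell}(m_0, \dots, m_{n-1}) = \sum_{i=0}^{n-1} \sum_{j=0}^{\ell-1} m_i m_{i+j}$, with indices taken mod $n$:

```python
def dim_B(n, l, m):
    # dim B_{n,l}(m_0,...,m_{n-1}) = sum_{i,j} m_i * m_{(i+j) mod n}
    return sum(m[i] * m[(i + j) % n] for i in range(n) for j in range(l))

assert dim_B(4, 3, (1, 2, 1, 2)) == 28
assert dim_B(4, 3, (1, 1, 2, 2)) == 27
assert dim_B(2, 2, (2, 1)) == 9    # Nakayama's example above
assert dim_B(2, 2, (1, 1)) == 4
```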
\noindent Since $m_i=m_{i+\ell-1}$ for all $i$, $B_{4,3}(1,2,1,2)$ is a Frobenius algebra by~\eqref{eq:FrobSY}. Thus, using Theorem~\ref{thm:main}(a), the comultiplication formula simplifies to
{\small
\begin{align*}
\Delta(X_{i,j}^{r_i, s_{i+j}}) &= \sum_{k=0}^{\ell - 1 - j} \; \sum_{t_{i+j+k}=0}^{m_{i+j+k}-1} \; X_{i,j+k}^{r_i, t_{i+j+k}}\; \otimes\; X_{i+j+k-\ell+1, \ell - 1-k}^{t_{i+j+k}, s_{i+j}} \\
& = \sum_{k=0}^{2 - j} \; \sum_{t_{i+j+k}=0}^{m_{i+j+k}-1} \; X_{i,j+k}^{r_i, t_{i+j+k}}\; \otimes\; X_{i+j+k-2, 2-k}^{t_{i+j+k}, s_{i+j}}\; .
\end{align*}
}
\noindent
Below we provide the comultiplication formula for two basis elements $X_{0,0}^{0,0}$ and $X_{2,1}^{0,1}$.
{\small
\begin{align*}
\Delta(X_{0,0}^{0,0}) &= \sum_{k=0}^2 \sum_{t_k=0}^{m_k -1} X_{0,k}^{0,t_k} \otimes X_{k-2,2-k}^{t_k,0} \\
&= X_{0,0}^{0,0}\otimes X_{2,2}^{0,0} + X_{0,1}^{0,0}\otimes X_{3,1}^{0,0} + X_{0,1}^{0,1}\otimes X_{3,1}^{1,0} + X_{0,2}^{0,0}\otimes X_{0,0}^{0,0} \;\;; \\
\Delta(X_{2,1}^{0,1}) & = \sum_{k=0}^{1} \sum_{t_{3+k}=0}^{m_{3+k}-1} X_{2,1+k}^{0,t_{3+k}} \otimes X_{k+1,2-k}^{t_{3+k},1} \\
& = X_{2,1}^{0,0}\otimes X_{1,2}^{0,1} + X_{2,1}^{0,1}\otimes X_{1,2}^{1,1} + X_{2,2}^{0,0}\otimes X_{2,1}^{0,1} \;\;.
\end{align*}
}
\noindent Next, we show that the comultiplication is counital for these basis elements using Theorem~\ref{thm:main}(b):
{\small
\begin{align*}
(\textnormal{id}\otimes \varepsilon) \Delta (X_{0,0}^{0,0}) & = X_{0,0}^{0,0} \; \varepsilon(X_{2,2}^{0,0}) + X_{0,1}^{0,0} \; \varepsilon(X_{3,1}^{0,0}) + X_{0,1}^{0,1} \; \varepsilon(X_{3,1}^{1,0}) + X_{0,2}^{0,0} \; \varepsilon(X_{0,0}^{0,0}) = X^{0,0}_{0,0} ; \\
(\varepsilon \otimes \textnormal{id})\Delta(X_{0,0}^{0,0}) &= \varepsilon( X_{0,0}^{0,0}) \; X_{2,2}^{0,0} + \varepsilon(X_{0,1}^{0,0}) \; X_{3,1}^{0,0} + \varepsilon(X_{0,1}^{0,1}) \; X_{3,1}^{1,0} + \varepsilon(X_{0,2}^{0,0}) \; X_{0,0}^{0,0} = X^{0,0}_{0,0} ;\\
\noalign{\vskip8pt} (\textnormal{id}\otimes \varepsilon)\Delta(X_{2,1}^{0,1}) &= X_{2,1}^{0,0} \; \varepsilon(X_{1,2}^{0,1}) + X_{2,1}^{0,1} \; \varepsilon(X_{1,2}^{1,1}) + X_{2,2}^{0,0} \; \varepsilon(X_{2,1}^{0,1}) = X_{2,1}^{0,1} ;\\
(\varepsilon\otimes \textnormal{id})\Delta(X_{2,1}^{0,1}) &= \varepsilon(X_{2,1}^{0,0}) \; X_{1,2}^{0,1} + \varepsilon(X_{2,1}^{0,1}) \; X_{1,2}^{1,1} + \varepsilon(X_{2,2}^{0,0}) \; X_{2,1}^{0,1} = X_{2,1}^{0,1}.
\end{align*}
}
We will compare this Frobenius algebra with the non-counital Frobenius algebra, \linebreak $B_{4,3}(1,1,2,2)$, in the next section.
\subsubsection{The NSY algebra $B_{4,3}(1,1,2,2)$}
\label{sec:1122}
By Lemma~\ref{lem:Bmi-dim} and Proposition~\ref{prop:Bmi-basis}, we get that
{\small
\begin{align*}
\textnormal{dim}(B_{4,3}(1,1,2,2)) & = \sum_{i=0}^3\sum_{j=0}^2 m_{i} m_{i+j}\\
&= 1(1+1+2)+1(1+2+2)+2(2+2+1)+2(2+1+1)= 27.
\end{align*}
}
\noindent Since $1 = m_0 \neq m_{0+\ell-1}=m_{2}=2$, $B_{4,3}(1,1,2,2)$ is not a Frobenius algebra by~\eqref{eq:FrobSY}. Below we provide an example showing that the comultiplication $\Delta$ is not counital (via $\varepsilon$ in Theorem~\ref{thm:main}(b)) in this case. By Theorem~\ref{thm:main}(a), we have that
{\small
\begin{align*}
\Delta(X_{2,1}^{0,1}) = \sum_{k=0}^{1} \; \sum_{t_{k+3}=0}^{m_{k+3}-1} \; \sum_{t_{k+1}=0}^{m_{k+1}-1}\Big( 1 - \delta_{m_{k+3},m_{k+1}}(1 - \delta_{t_{k+3}, t_{k+1}})\Big) \; X_{2,1+k}^{0, t_{3+k}}\; \otimes\; X_{k+1, 2-k}^{t_{k+1}, 1}.
\end{align*}
}
Since $m_{k+3}\neq m_{k+1}$ for all $k$, we get that
{\small
\begin{align*}
\Delta(X_{2,1}^{0,1}) & = \sum_{k=0}^{1} \; \sum_{t_{k+3}=0}^{m_{k+3}-1} \; \sum_{t_{k+1}=0}^{m_{k+1}-1} X_{2,1+k}^{0, t_{3+k}}\; \otimes\; X_{k+1, 2-k}^{t_{k+1}, 1} \\
& = X_{2,1}^{0,0}\otimes X_{1,2}^{0,1} + X_{2,1}^{0,1}\otimes X_{1,2}^{0,1} + X_{2,2}^{0,0}\otimes X_{2,1}^{0,1} + X_{2,2}^{0,0}\otimes X_{2,1}^{1,1}.
\end{align*}
}
\noindent Compare this with the comultiplication for $B_{4,3}(1,2,1,2)$ in the previous section. We can now see that $\Delta$ is not counital because
{\small
\begin{align*}
(\varepsilon \otimes \textnormal{id})\Delta(X_{2,1}^{0,1}) &= \varepsilon(X_{2,1}^{0,0}) \; X_{1,2}^{0,1} + \varepsilon(X_{2,1}^{0,1}) \; X_{1,2}^{0,1} + \varepsilon(X_{2,2}^{0,0}) \; X_{2,1}^{0,1} + \varepsilon(X_{2,2}^{0,0}) \; X_{2,1}^{1,1} =X_{2,1}^{0,1}+ X_{2,1}^{1,1}, \\ (\textnormal{id}\otimes\varepsilon)\Delta(X_{2,1}^{0,1}) &= X_{2,1}^{0,0} \; \varepsilon(X_{1,2}^{0,1}) + X_{2,1}^{0,1} \; \varepsilon(X_{1,2}^{0,1}) + X_{2,2}^{0,0} \; \varepsilon(X_{2,1}^{0,1}) + X_{2,2}^{0,0} \; \varepsilon(X_{2,1}^{1,1})
= 0.
\end{align*}
}
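The counit comparison between $B_{4,3}(1,2,1,2)$ and $B_{4,3}(1,1,2,2)$ can also be mechanized. The Python sketch below is an illustrative aside (the tuple encoding $(i,j,r,s)$ of $X_{i,j}^{r_i,s_{i+j}}$ and the helper names are ours); it implements the comultiplication of Theorem~\ref{thm:main}(a) and the map $\varepsilon$ of Theorem~\ref{thm:main}(b), and confirms that $\varepsilon$ is a counit in the Frobenius case but not in the non-Frobenius one.

```python
from itertools import product

def basis(n, l, m):
    # basis elements X_{i,j}^{r,s} of B_{n,l}(m_0,...,m_{n-1}), as tuples
    return [(i, j, r, s) for i in range(n) for j in range(l)
            for r in range(m[i]) for s in range(m[(i + j) % n])]

def delta(x, n, l, m):
    # comultiplication formula of Theorem main(a); indices taken mod n
    i, j, r, s = x
    out = {}
    for k in range(l - j):
        A, B = (i + j + k) % n, (i + j + k - l + 1) % n
        for t, t2 in product(range(m[A]), range(m[B])):
            if m[A] == m[B] and t != t2:   # the Kronecker-delta factor
                continue
            out[((i, j + k, r, t), (B, l - 1 - k, t2, s))] = 1
    return out

def eps(x, l):
    # eps(X_{i,j}^{r,s}) = delta_{j, l-1} delta_{r, s}  (Theorem main(b))
    i, j, r, s = x
    return 1 if j == l - 1 and r == s else 0

def counital(n, l, m):
    # does (eps (x) id)Delta = id = (id (x) eps)Delta hold on every basis element?
    for x in basis(n, l, m):
        left, right = {}, {}
        for (u, v), c in delta(x, n, l, m).items():
            if eps(u, l):
                left[v] = left.get(v, 0) + c
            if eps(v, l):
                right[u] = right.get(u, 0) + c
        if left != {x: 1} or right != {x: 1}:
            return False
    return True

assert counital(4, 3, (1, 2, 1, 2))        # Frobenius: eps is a counit
assert not counital(4, 3, (1, 1, 2, 2))    # not Frobenius: eps fails
```

Running `counital` on the two smaller examples reproduces Section~\ref{sec:FrobEx} and Section~\ref{sec:Nakayama} as well: it returns `True` for $B_{2,2}(1,1)$ and `False` for Nakayama's $B_{2,2}(2,1)$.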
\section{Finite dimensional weak Hopf algebras are non-counital Frobenius}
\label{sec:weak}
In this section, we establish an explicit non-counital Frobenius structure for finite-dimensional (f.d.) weak Hopf algebras [Definition~\ref{def:weak}], thereby proving Conjecture~\ref{main-conj} for another large class of self-injective algebras. Background material is provided in Section~\ref{sec:weak-backgr}, and the main result is presented in Section~\ref{sec:weak-main}.
\subsection{Background on weak Hopf algebras} \label{sec:weak-backgr}
The following material is from \cite{BNS}.
\begin{definition} \label{def:wba}
A \textit{weak bialgebra} over $\Bbbk$ is a quintuple $(H,m,u,\Delta_{\textnormal{wk}}, \varepsilon_{\textnormal{wk}})$ such that
\begin{enumerate}[label=(\roman*)]
\item $(H,m,u)$ is a $\Bbbk$-algebra,
\smallskip
\item $(H, \;\Delta_{\textnormal{wk}},\; \varepsilon_{\textnormal{wk}})$ is a $\Bbbk$-coalgebra,\smallskip
\item \label{def:wba3} $\Delta_{\textnormal{wk}}(ab)=\Delta_{\textnormal{wk}}(a)\;\Delta_{\textnormal{wk}}(b)$ for all $a,b \in H$,\smallskip
\item \label{def:wba4} $\varepsilon_{\textnormal{wk}}(abc)=\varepsilon_{\textnormal{wk}}(ab_1)\;\varepsilon_{\textnormal{wk}}(b_2c)=\varepsilon_{\textnormal{wk}}(ab_2)\;\varepsilon_{\textnormal{wk}}(b_1c)$ for all $a,b,c \in H$, \smallskip
\item \label{def:wba5} $\Delta_{\textnormal{wk}}^2(1_H)=(\Delta_{\textnormal{wk}}(1_H) \otimes 1_H)(1_H \otimes \Delta_{\textnormal{wk}}(1_H))=(1_H \otimes \Delta_{\textnormal{wk}}(1_H))(\Delta_{\textnormal{wk}}(1_H) \otimes 1_H)$.\smallskip
\end{enumerate}
Here, we use the sumless Sweedler notation, for $h \in H$: $$\Delta_{\textnormal{wk}}(h):= h_1 \otimes h_2.$$
\end{definition}
\begin{definition}[$\varepsilon_s$, $\varepsilon_t$, $H_s$, $H_t$] \label{def:eps}
Let $(H, m, u, \Delta_{\textnormal{wk}}, \varepsilon_{\textnormal{wk}})$ be a weak bialgebra. We define the {\it source and target counital maps}, respectively as follows:
\[
\begin{array}{ll}
\varepsilon_s: H \to H, & x \mapsto 1_1\;\varepsilon_{\textnormal{wk}}(x1_2) \\
\varepsilon_t: H \to H, & x \mapsto \varepsilon_{\textnormal{wk}}(1_1x)\;1_2.
\end{array}
\]
We denote the images of these maps as $H_s:=\varepsilon_s(H)$ and $H_t:=\varepsilon_t(H)$, which are subalgebras of $H$: the \emph{source counital subalgebra} and the \emph{target counital subalgebra} of $H$, respectively.
\end{definition}
\begin{definition} \label{def:weak}
A \textit{weak Hopf algebra} is a sextuple $(H,m,u,\Delta_\textnormal{wk},\varepsilon_\textnormal{wk}, S)$, where the quintuple $(H,m,u,\Delta_\textnormal{wk},\varepsilon_\textnormal{wk})$ is a weak bialgebra and $S: H \to H$ is a $\Bbbk$-linear map called the \textit{antipode} that satisfies the following properties for all $h\in H$:
$$S(h_1)h_2=\varepsilon_s(h), \quad \quad
h_1S(h_2)=\varepsilon_t(h), \quad \quad
S(h_1)h_2S(h_3)=S(h).$$
\end{definition}
It follows that $S$ is anti-multiplicative with respect to $m$, and anti-comultiplicative with respect to $\Delta_\textnormal{wk}$.
\begin{definitionproposition}\cite[page~5]{BNS} \label{def:Hopf}
Take a weak Hopf algebra $H$. Then the following conditions are equivalent:
\begin{enumerate}
\item $\Delta_\textnormal{wk}(1_H)=1_H\otimes 1_H$;\smallskip
\item $\varepsilon_\textnormal{wk}(xy)=\varepsilon_\textnormal{wk}(x)\;\varepsilon_\textnormal{wk}(y)$ for all $x,y\in H$;\smallskip
\item $S(x_1)x_2=\varepsilon_\textnormal{wk}(x)\;1_H$ for all $x \in H$; and \smallskip
\item $x_1S(x_2)=\varepsilon_\textnormal{wk}(x)\;1_H$ for all $x \in H$.\smallskip
\end{enumerate}
In this case, $H$ is a {\it Hopf algebra}. \qed
\end{definitionproposition}
\begin{hypothesis}
Recall we assume that all algebras in this work are finite-dimensional, and we will continue to assume this for weak Hopf algebras.
\end{hypothesis}
\begin{remark}
Here, $H^*$ will be the usual $\Bbbk$-linear dual of $H$, which admits the structure of a weak Hopf algebra \cite[page~5]{BNS}.
\end{remark}
Now we consider an important set of elements whose existence will determine when a finite-dimensional weak Hopf algebra is Frobenius.
\pagebreak
\begin{definition} \label{def:integral}
Let $H$ be a weak Hopf algebra.
\begin{enumerate}
\item An element $\Lambda$ in $H$ is called a {\it left} (resp., {\it right}) {\it integral} if $h \Lambda = \varepsilon_t(h) \Lambda$ (resp., $\Lambda h= \Lambda\varepsilon_s(h)$) for all $h \in H$.
\item Let $I^L(H)$ (resp., $I^R(H)$) denote the space of left (resp., right) integrals of $H$. \smallskip
\item A left/right integral $\Lambda \in H$ is called {\it non-degenerate} if the linear map
\begin{equation} \label{eq:psi-bij}
\Psi_{\Lambda}: H^* \to H, \; \phi \mapsto \Lambda_1 \phi(\Lambda_2)
\end{equation}
is a bijection.
\end{enumerate}
\end{definition}
\begin{remark} \label{rem:nondeg}
\begin{enumerate}
\item Note that the map $\Psi_\Lambda$ above is bijective if and only if the map
$$\Phi_\Lambda : H^* \xrightarrow{(\Psi_\Lambda)^*} H^{**} \xrightarrow{\sim} H \xrightarrow{S} H, \; \; \phi \mapsto \phi(\Lambda_1)S(\Lambda_2)$$
is bijective, as the antipode of a finite-dimensional weak Hopf algebra is bijective \cite[Theorem~2.10]{BNS}. Moreover, this occurs if and only if the composition below is bijective:
$$\Phi'_\Lambda: H^*\xrightarrow{(\Phi_\Lambda)^*} H^{**} \xrightarrow{\sim} H, \; \; \phi \mapsto \Lambda_1 \phi(S(\Lambda_2)).$$
\item Note that $(\Psi_\Lambda)^*$ is a left $H$-module map using the left regular $H$-actions on $H$, $H^*$:
\[
\begin{array}{rl}
h\cdot(\Psi_\Lambda)^*(\phi) &= h \cdot (\phi(\Lambda_1) \Lambda_2) = \phi(\Lambda_1)(h \Lambda_2) \smallskip\\
&\overset{\textnormal{$(\ast)$}}{=} \phi(S(h)\Lambda_1)\Lambda_2 = (h \cdot \phi)(\Lambda_1)\Lambda_2 = (\Psi_\Lambda)^*(h \cdot \phi),
\end{array}
\]
where $(\ast)$ holds by \cite[Lemma 3.2(b)]{BNS}. So, $\Psi_\Lambda$ is bijective if and only if there is a unique solution to the equation $\Psi_\Lambda(\lambda) = 1_H$, which then holds if and only if there is a unique solution to the equation $\Phi'_\Lambda(\lambda) = 1_H$.
\end{enumerate}
\end{remark}
\begin{theorem} \cite[Corollary~3.10, Theorems~3.11,~3.16,~3.18]{BNS} \label{thm:BNS} Let $H$ be a weak Hopf algebra. Then the following statements hold.
\begin{enumerate} [font=\upshape]
\item $H$ is self-injective. \smallskip
\item Non-zero left (and right) integrals of $H$ exist. \smallskip
\item $H$ is Frobenius if and only if $H$ has a non-degenerate left integral. \smallskip
\item If $H$ has a non-degenerate left integral $\Lambda$ so that $\Psi_\Lambda(\lambda) = 1_H$ for some $\lambda \in H^*$, then $\lambda$ is a non-degenerate left integral of $H^*$. \qed
\end{enumerate}
\end{theorem}
\subsection{Main result} \label{sec:weak-main} Now we present our main result on the non-counital Frobenius structure of finite-dimensional weak Hopf algebras.
\begin{theorem} \label{thm:main-weak} Let $H$ be a weak Hopf algebra. Then the following statements hold.
\begin{enumerate}[font=\upshape]
\item $H$ is non-counital Frobenius with comultiplication map $\Delta$. \smallskip
\item $\Delta$ is counital if and only if $H$ is Frobenius (equivalently, if and only if $H$ has a non-degenerate left integral). In this case, the counit is a non-degenerate left integral of~$H^*$.
\end{enumerate}
\end{theorem}
\begin{proof}
(a) By Theorem~\ref{thm:BNS}(b), there exists a non-zero left integral $\Lambda$ of $H$. Moreover, by \cite[Lemma~3.2(a,b)]{BNS}, we have that $\Lambda_1 \otimes x \Lambda_2 = S(x) \Lambda_1 \otimes \Lambda_2$ for all $x \in H$. Apply $\textnormal{id} \otimes S$ to get that $\Lambda_1 \otimes S(\Lambda_2)S(x) = S(x) \Lambda_1 \otimes S(\Lambda_2)$. Since $S$ is bijective \cite[Theorem~2.10]{BNS}, for all $h \in H$ we have that
\begin{equation} \label{eq:Lambda-S}
\Lambda_1 \otimes S(\Lambda_2)h \;=\; h \Lambda_1 \otimes S(\Lambda_2).
\end{equation}
Now, setting
\begin{equation} \label{eq:Delta-weak}
\Delta(h) := \Lambda_1 \otimes S(\Lambda_2)h,
\end{equation}
for all $h \in H$, this part of the theorem holds by Lemma~\ref{lem:coassoc}.
\smallskip
(b) Take the comultiplication map $\Delta = \Delta_\Lambda$, for $\Lambda \in I^L(H)$, as in \eqref{eq:Delta-weak}. For an element $\lambda \in H^*$, define \begin{equation} \label{eq:ep-weak}
\varepsilon = \varepsilon_\lambda: H \to \Bbbk, \quad h \mapsto \lambda(h).
\end{equation}
Now, $\Delta$ is counital via $\varepsilon$ if and only if
\begin{align}
(\varepsilon \otimes \textnormal{id})\Delta(h) \; = \; \lambda(\Lambda_1)\;S(\Lambda_2) \; h \; = \; h \;\;\; \forall h\; \in H, \label{eq:ep-wk1} \\
(\textnormal{id} \otimes \varepsilon)\Delta(h) \; \overset{\textnormal{\eqref{eq:Lambda-S}}}{=} \; h\;\Lambda_1 \; \lambda(S(\Lambda_2)) \; =\; h \;\;\; \forall h\; \in H .\label{eq:ep-wk2}
\end{align}
Recall Remark~\ref{rem:nondeg}(a) for the definitions of the maps $\Phi_{\Lambda}, \Phi'_{\Lambda}: H^* \to H$. Then,
$$\eqref{eq:ep-wk1} \; \text{holds} \; \iff \Phi_{\Lambda}(\lambda) = 1_H
\quad \quad \quad \text{and} \quad \quad \quad
\eqref{eq:ep-wk2} \; \text{holds} \; \iff \Phi'_{\Lambda}(\lambda) = 1_H.$$
Therefore, $\varepsilon_\lambda$ is a counit for $\Delta_\Lambda$ if and only if there is a unique solution $\lambda$ to the equations $\Phi_\Lambda(\lambda) =1_H$ and $\Phi'_\Lambda(\lambda) =1_H$.
But if $\Phi'_\Lambda(\lambda) =1_H$, then $\Phi_\Lambda(\lambda) = 1_H$. Indeed, for all $h \in H$:
\begin{align*}
\lambda\Big(\Phi_\Lambda(\lambda)\;h\Big) &= \lambda\Big(\lambda(\Lambda_1)S(\Lambda_2)h\Big) = \lambda(\Lambda_1)\lambda\Big(S(\Lambda_2)h\Big) = \lambda\Big(\Lambda_1 \lambda\big(S(\Lambda_2)h\big)\Big)\\ &\overset{\textnormal{\eqref{eq:Lambda-S}}}{=} \lambda\Big(h\Lambda_1 \lambda\big(S(\Lambda_2)\big)\Big)
= \lambda\Big(h\;\Phi'_\Lambda(\lambda)\Big) = \lambda\Big(1_H \; h\Big).
\end{align*}
Considering the right regular action $\triangleleft$ of $H$ on $H^*$, we then get that $\lambda \triangleleft \Phi_\Lambda(\lambda) = \lambda \triangleleft 1_H$. Thus, $\Phi_\Lambda(\lambda) = 1_H$ since the action $\triangleleft$ is faithful. Now by Remark~\ref{rem:nondeg}(b), $\varepsilon_\lambda$ is a counit if and only if the left integral $\Lambda$ is non-degenerate. The last statement follows from Remark~\ref{rem:nondeg}(b) and Theorem~\ref{thm:BNS}(d).
\end{proof}
\section{Examples for finite dimensional weak Hopf algebras }\label{sec:weak-ex}
In this part, we provide examples of the main result of Section~\ref{sec:weak}, Theorem~\ref{thm:main-weak}, on the non-counital Frobenius condition for finite-dimensional weak Hopf algebras. In Section~\ref{sec:groupoid}, we illustrate how groupoid algebras are (counital) Frobenius. Moreover, in Section~\ref{sec:QTG}, we show that certain weak Hopf algebras, called quantum transformation groupoids, are (counital) Frobenius. In both cases, we construct an explicit non-degenerate left integral of the weak Hopf algebra $H$ under investigation, and derive formulas for the comultiplication and counit that make $H$ (counital) Frobenius.
\subsection{Groupoid algebras} \label{sec:groupoid}
Take $\mathcal{G}$ to be a {\it finite groupoid}, that is, a category with finitely many objects $\mathcal{G}_0$, and finitely many morphisms $\mathcal{G}_1$ which are all isomorphisms. For $g \in \mathcal{G}_1$, let $s(g)$ and $t(g)$ denote the source and target of $g$, respectively.
\begin{definition} Given a finite groupoid $\mathcal{G}$, a {\it groupoid algebra} $\Bbbk \mathcal{G}$ is a finite-dimensional weak Hopf algebra, which is spanned by $g \in \mathcal{G}_1$ as a $\Bbbk$-vector space, with product $g h$ being the composition $g\circ h$ if $g$ and $h$ are composable and $0$ otherwise, and with unit $\sum_{X \in \mathcal{G}_0} \textnormal{id}_X$. Moreover, for $g \in \mathcal{G}_1$, we have that $\Delta_{\textnormal{wk}}(g)=g\otimes g$, $\varepsilon_{\textnormal{wk}}(g)=1_\Bbbk$, and $S(g)=g^{-1}$.
\end{definition}
Now consider the next result.
\begin{proposition}
The groupoid algebra $\Bbbk \mathcal{G}$ is Frobenius.
\end{proposition}
\begin{proof}
We will show that $\Bbbk \mathcal{G}$ has a non-degenerate left integral. From \cite[Example~3.1.2]{nikshych2002finite}, recall that
$I^L(\Bbbk \mathcal{G}) = $ span$\{\textstyle \sum_{h \in \mathcal{G}_1 : t(h) = X} h\}_{X \in \mathcal{G}_0}.$ Consider $$\Lambda = \textstyle \sum_{h\in \mathcal{G}_1}h \; \; \in I^L(\Bbbk \mathcal{G}),$$
and the linear map
\begin{center}
$\lambda:\Bbbk \mathcal{G} \rightarrow \Bbbk$, \;\; where $\lambda(g)=1_\Bbbk $ if $g=\textnormal{id}_X$ for some $X \in \mathcal{G}_0$, \; \; and $\lambda(g) = 0$ otherwise.
\end{center}
Then, observe that $\lambda (\Lambda_1) \Lambda_2=1_{\Bbbk \mathcal{G}}$.
Hence, $\Lambda$ is non-degenerate, and $\lambda$ is a non-degenerate integral of $(\Bbbk \mathcal{G})^*$ by Theorem~\ref{thm:BNS}(d). Now by Theorem~\ref{thm:BNS}(c), $\Bbbk \mathcal{G}$ is Frobenius.
Moreover, we can use the non-degenerate integrals above, along with Theorem~\ref{thm:main-weak}, to see that $\Bbbk \mathcal{G}$ is Frobenius via Definition-Theorem~\ref{defthm:Frob}. Here, the comultiplication $\Delta = \Delta_\Lambda$ is given by \eqref{eq:Delta-weak}:
$$\Delta(g)= \textstyle \sum_{h\in \mathcal{G}_1} h\otimes (h^{-1} g)$$ for $g \in \mathcal{G}_1$,
and with counit $\varepsilon = \lambda$ as in \eqref{eq:ep-weak}.
\end{proof}
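To illustrate the proof above (an illustration of ours, not taken from the paper), one can verify the counit axioms for the Frobenius structure $\Delta(g)= \sum_{h} h\otimes (h^{-1} g)$, $\varepsilon = \lambda$ on a tiny example: the pair groupoid on two objects, whose four morphisms we encode as (target, source) pairs. All helper names below are hypothetical.

```python
OBJECTS = (0, 1)
# A morphism is a pair (target, source); id_X corresponds to (X, X).
MORPHISMS = [(t, s) for t in OBJECTS for s in OBJECTS]

def mult(g, h):
    """g * h is the composite when s(g) = t(h), and 0 (here: None) otherwise."""
    return (g[0], h[1]) if g[1] == h[0] else None

def inv(g):
    return (g[1], g[0])

def lam(g):
    """The integral lambda of (kG)^*: 1 on identities, 0 elsewhere."""
    return 1 if g[0] == g[1] else 0

def delta(g):
    """Frobenius comultiplication: sum over h of h (x) (h^{-1} g), dropping zero terms."""
    return [(h, mult(inv(h), g)) for h in MORPHISMS if mult(inv(h), g)]

# Counit axioms: (lam (x) id)Delta(g) = g = (id (x) lam)Delta(g).
for g in MORPHISMS:
    assert [b for (a, b) in delta(g) if lam(a)] == [g]
    assert [a for (a, b) in delta(g) if lam(b)] == [g]
print("counit axioms verified for all", len(MORPHISMS), "morphisms")
```

The same check goes through verbatim for any finite groupoid, since the only term of $\Delta(g)$ with $\lambda(h) \neq 0$ is $h = \textnormal{id}_{t(g)}$.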
\subsection{Quantum transformation groupoids}
\label{sec:QTG} Here, we discuss the Frobenius condition for certain weak Hopf algebras, called quantum transformation groupoids, which are constructed using the data of a Hopf algebra $L$ and a strongly separable module algebra $B$ over $L$.
\begin{notation} Consider the following notation.
\begin{itemize}
\item Take $L$ to be a finite-dimensional Hopf algebra over $\Bbbk$, with comultiplication $\Delta_L$, counit $\varepsilon_L$, and antipode $S_L$ [Definition-Proposition~\ref{def:Hopf}].
\smallskip
\item Let $B$ be a strongly separable algebra over $\Bbbk$, which implies that it comes equipped with an element $e^1\otimes e^2 \in B \otimes B$ satisfying
\begin{align}
be^1\otimes e^2 & = e^1\otimes e^2b \quad \quad \forall b \in B, \label{eq:idempotent1} \\
e^1e^2 & = 1_B, \label{eq:idempotent2} \\
e^1\otimes e^2 & = e^2\otimes e^1. \label{eq:idempotent3}
\end{align}
Here, $e^1 \otimes e^2$ is called a {\it symmetric separability idempotent}.
\smallskip
\item Furthermore, there exists a non-degenerate trace form $\omega: B\rightarrow \Bbbk$ defined by
\begin{align}
\omega(e^1)e^2 = 1_B = e^1\omega(e^2) \label{eq:trace}.
\end{align}
\smallskip
\item We further impose that $B$ is a {\it right $L$-module algebra} via $\triangleleft$, that is, we have a map $\triangleleft: B\otimes L\rightarrow B$ satisfying
\begin{align}
(b\triangleleft \ell)\triangleleft \ell' = b\triangleleft (\ell \ell'), & \hspace{1cm}
b\triangleleft 1_L=b, \label{eq:QTGaction1}\\
(bb')\triangleleft \ell = (b\triangleleft \ell_1)(b'\triangleleft \ell_2), & \hspace{1cm}
1_B\triangleleft \ell=\varepsilon_L(\ell)1_B
\label{eq:QTGaction2}
\end{align}
for all $b \in B$ and $\ell, \ell' \in L$.
\smallskip
\item Moreover, we assume that the separability idempotent satisfies the identity below:
\begin{align}\label{eq:idempotentAction}
(e^1\triangleleft \ell) \otimes e^2 = e^1\otimes (e^2\triangleleft S_L(\ell))
\end{align}
for all $\ell \in L$.
\end{itemize}
\end{notation}
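For concreteness (an illustration of ours, not taken from the paper), the matrix algebra $B = M_n(\mathbb{R})$ is strongly separable, with symmetric separability idempotent $e = \frac{1}{n}\sum_{i,j} E_{ij} \otimes E_{ji}$ and trace form $\omega = n \cdot \mathrm{tr}$. The sketch below checks the separability axioms numerically, encoding $e^1 \otimes e^2$ as the 4-tensor $T[a,b,c,d] = \frac{1}{n}\sum_{i,j} (E_{ij})[a,b]\,(E_{ji})[c,d]$.

```python
import numpy as np

n = 3
# Build T[a,b,c,d] = (1/n) * sum_{i,j} (E_ij)[a,b] (E_ji)[c,d].
T = np.zeros((n, n, n, n))
for i in range(n):
    for j in range(n):
        Eij = np.zeros((n, n))
        Eij[i, j] = 1.0
        T += np.einsum('ab,cd->abcd', Eij, Eij.T) / n

rng = np.random.default_rng(0)
b = rng.standard_normal((n, n))

# b e^1 (x) e^2 = e^1 (x) e^2 b  (the centrality axiom)
assert np.allclose(np.einsum('ae,ebcd->abcd', b, T),
                   np.einsum('abce,ed->abcd', T, b))
# e^1 e^2 = 1_B
assert np.allclose(np.einsum('abbd->ad', T), np.eye(n))
# e^1 (x) e^2 = e^2 (x) e^1  (symmetry)
assert np.allclose(T, np.einsum('abcd->cdab', T))
# omega(e^1) e^2 = 1_B = e^1 omega(e^2), with omega = n * tr
assert np.allclose(n * np.einsum('aacd->cd', T), np.eye(n))
assert np.allclose(n * np.einsum('abcc->ab', T), np.eye(n))
print("separability axioms verified for M_n with n =", n)
```

With the trivial Hopf algebra $L = \Bbbk$ acting trivially, the remaining compatibility axiom for the action holds automatically.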
\begin{definition}
Recall the notation above. A {\it quantum transformation groupoid} is a weak Hopf algebra over $\Bbbk$, which as a $\Bbbk$-vector space is $$H:= H(L,B)=B^{op}\otimes L\otimes B,$$ with the following structure maps:
\begin{align}
\text{multiplication: } & (a\otimes \ell \otimes b)(a'\otimes \ell'\otimes b') = (a'\triangleleft S_L(\ell_1))a \otimes \ell_2 \ell'_1 \otimes (b\triangleleft \ell'_2)b'; \label{eq:QTGmult}
\\
\text{unit: } & 1_B \otimes 1_L \otimes 1_B; \label{eq:QTGunit}
\\
\text{comultiplication: } & \Delta_{\text{wk}}(a\otimes \ell\otimes b) = (a\otimes \ell_1 \otimes e^1) \otimes ((e^2 \triangleleft S_L(\ell_2)) \otimes \ell_3 \otimes b);
\label{eq:QTGcomult}
\\
\text{counit: } & \varepsilon_{\text{wk}}(a\otimes \ell\otimes b) = \omega(a(b \triangleleft S_L^{-1}(\ell))); \label{eq:QTGcounit}
\\
\text{antipode: } & S_{\text{wk}}(a\otimes \ell\otimes b) = b\otimes S_L(\ell)\otimes a. \label{eq:QTGantipode}
\end{align}
\end{definition}
We refer the reader to \cite[Section~7]{WWW} and references therein for more details about the structure of the weak Hopf algebras $H(L,B)$.
\smallskip
Next, recall some facts about integrals of finite-dimensional Hopf algebras.
\begin{definition} \cite[Definition~10.1.1]{radford} Recall the notation above.
A {\it left} (resp., {\it right}) {\it integral} of $L$ is an element $\Lambda \in L$ such that $\ell \Lambda = \varepsilon_L(\ell) \Lambda$ (resp., $\Lambda \ell = \varepsilon_L(\ell) \Lambda$) for all $\ell \in L$.
\end{definition}
The notion above is consistent with Definition~\ref{def:integral} via Definition-Proposition~\ref{def:Hopf}. Moreover, we have the following facts.
\begin{proposition} \cite[Proposition 10.1.3(b), Section~10.2]{radford} \label{prop:L-integ}
Recall the notation above.
\begin{enumerate} [font=\upshape]
\item Non-zero left (or right) integrals of a finite-dimensional Hopf algebra $L$ exist and are always non-degenerate (thus, finite-dimensional Hopf algebras are always Frobenius).
\smallskip
\item If $\Lambda$ is a right integral of $L$, then
\begin{align} \label{eq:rightintegL}
\ell S(\Lambda_1) \otimes \Lambda_2 = S(\Lambda_1) \otimes \Lambda_2 \ell,
\end{align}
for all $\ell \in L$.
\end{enumerate}
\vspace{-.2in}
\qed
\end{proposition}
\begin{proposition} \label{prop:QTG-nondeg}
Recall the notation above. If $\Lambda$ is a right integral for the finite-dimensional Hopf algebra $L$, then
\begin{equation} \label{eq:barLambda} \bar{\Lambda}:=(e^1\triangleleft \Lambda_1) \otimes S_L(\Lambda_2) \otimes e^2
\end{equation}
is a non-degenerate left integral of the quantum transformation groupoid $H(L,B)$.
\end{proposition}
\begin{proof}
First, we show that if $\Lambda$ is a right integral of $L$, then the element
$$(e^1 \triangleleft S_L(\Lambda_1)) \otimes \Lambda_2 \otimes e^2 $$
is a right integral of $H(L,B)$. We compute:
{\small
\begin{align*}
((e^1 \triangleleft S_L(\Lambda_1)) \otimes \Lambda_2 \otimes e^2)(a' \otimes \ell' \otimes b')
&\overset{\textnormal{(\ref{eq:QTGmult})}}{=} (a'\triangleleft S_L(\Lambda_2)) (e^1\triangleleft S_L(\Lambda_1)) \otimes \Lambda_3 \ell'_1 \otimes (e^2\triangleleft \ell'_2)b'
\medskip \\
& \overset{\textnormal{(*)}}{=} (a'\triangleleft S_L(\Lambda_1)_1) (e^{1}\triangleleft S_L(\Lambda_1)_2) \otimes \Lambda_2 \ell'_1 \otimes (e^2\triangleleft \ell'_2)b'
\medskip \\
& \overset{\textnormal{(\ref{eq:QTGaction2})}}{=} (a'e^1 \triangleleft S_L(\Lambda_1)) \otimes \Lambda_2 \ell'_1 \otimes (e^2\triangleleft \ell'_2)b'
\medskip \\
& \overset{\textnormal{(\ref{eq:idempotent1})}}{=} (e^1 \triangleleft S_L(\Lambda_1)) \otimes \Lambda_2 \ell'_1 \otimes ((e^2 a')\triangleleft \ell'_2)b'
\medskip \\
& \overset{\textnormal{(\ref{eq:QTGaction2})}}{=} (e^1 \triangleleft S_L(\Lambda_1)) \otimes \Lambda_2 \ell'_1 \otimes (e^2\triangleleft \ell'_2) (a'\triangleleft \ell'_3) b'
\medskip \\
& \overset{\textnormal{\eqref{eq:rightintegL}}}{=} (e^1 \triangleleft \ell'_1 S_L(\Lambda_1)) \otimes \Lambda_2 \otimes (e^2\triangleleft \ell'_2) (a'\triangleleft \ell'_3) b'
\medskip \\
& \overset{\textnormal{(\ref{eq:QTGaction1})}}{=} ((e^1 \triangleleft \ell'_1)\triangleleft S_L(\Lambda_1)) \otimes \Lambda_2 \otimes (e^2\triangleleft \ell'_2) (a'\triangleleft \ell'_3) b'
\medskip \\
& \overset{\textnormal{(\ref{eq:idempotentAction})}}{=} (e^1 \triangleleft S_L(\Lambda_1)) \otimes \Lambda_2 \otimes ((e^2 \triangleleft S_L(\ell'_1) ) \triangleleft \ell'_2) (a'\triangleleft \ell'_3) b'
\medskip \\
& \overset{ \textnormal{(\ref{eq:QTGaction1})}}{=} (e^1 \triangleleft S_L(\Lambda_1)) \otimes \Lambda_2 \otimes (e^2 \triangleleft (S_L(\ell'_1) \ell'_2)) (a'\triangleleft \ell'_3) b'
\medskip \\
& \overset{\textnormal{(**)}}{=} (e^1 \triangleleft S_L(\Lambda_1)) \otimes \Lambda_2 \otimes (e^2 \triangleleft \varepsilon_L(\ell'_1)1_L) (a'\triangleleft \ell'_2) b'
\medskip \\
& \overset{\textnormal{(\ref{eq:QTGaction1}),(\%)}}{=} (e^1 \triangleleft S_L(\Lambda_1)) \otimes \Lambda_2 \otimes e^2 (a'\triangleleft \ell') b'
\medskip \\
& \overset{\textnormal{(\ref{eq:QTGmult})}}{=} ((e^1 \triangleleft S_L(\Lambda_1)) \otimes \Lambda_2 \otimes e^2) (1_B \otimes 1_L \otimes (a'\triangleleft \ell')b')
\medskip \\
&\overset{\textnormal{($\star$)}}{=} ((e^1 \triangleleft S_L(\Lambda_1)) \otimes \Lambda_2 \otimes e^2)\; \varepsilon_s(a' \otimes \ell' \otimes b')
\end{align*}
}
\noindent for $a',b'\in B$ and $\ell' \in L$. Here, $(*)$ is anti-comultiplicativity of the antipode, $(**)$ is the antipode axiom, $(\%)$ is the counit axiom, and $(\star)$ follows from \cite[Lemma~7.9]{WWW}.
\smallskip
Next, by \cite[Lemma~2.9]{BNS}, we have that
\begin{align*}
\bar{\Lambda}:= \;S_{\text{wk}}((e^1 \triangleleft S_L(\Lambda_1)) \otimes \Lambda_2 \otimes e^2) &= e^2 \otimes S_L(\Lambda_2) \otimes (e^1 \triangleleft S_L(\Lambda_1)) \medskip \\
&\overset{\textnormal{\eqref{eq:idempotent3}}}{=} e^1 \otimes S_L(\Lambda_2) \otimes (e^2 \triangleleft S_L(\Lambda_1)) \medskip \\
&\overset{\textnormal{\eqref{eq:idempotentAction}}}{=} (e^1 \triangleleft \Lambda_1) \otimes S_L(\Lambda_2) \otimes e^2
\end{align*}
is a left integral of $H(L,B)$.
\smallskip
Finally, we will verify the non-degeneracy condition for the left integral $\bar{\Lambda}$ of $H(L,B)$. By Remark~\ref{rem:nondeg}(b), it suffices to show that there exists an element $\bar{\lambda} \in H^*$ such that \begin{align} \label{eq:Phi-bar-lam}
\Psi_{\bar{\Lambda}}(\bar{\lambda}) \; = \;\bar{\Lambda}_1 \; \bar{\lambda}(\bar{\Lambda}_2)\; = \;1_H.
\end{align}
Since $S_L(\Lambda)$ is a left integral of $L$, we can choose an element $\lambda$ of $L^*$ so that
\begin{align} \label{eq:lam-S-1L}
S_L(\Lambda)_1\;\lambda(S_L(\Lambda)_2) = \lambda(S_L(\Lambda_1)) \;S_L(\Lambda_2) = 1_L
\end{align}
by Proposition~\ref{prop:L-integ}(a) and Remark~\ref{rem:nondeg}(b). We claim that
\begin{align}\label{eq:barlambda}
\bar{\lambda} = \omega \otimes \lambda \otimes \omega
\end{align}
is the desired element that makes \eqref{eq:Phi-bar-lam} hold. We take $e'^1 \otimes e'^2$ to be a copy of $e^1 \otimes e^2$ in the computations below:
{\small
\begin{align*}
\bar{\Lambda}_1 \; \bar{\lambda}(\bar{\Lambda}_2)
&\overset{\textnormal{(\ref{eq:QTGcomult})}}{=}
[ (e^1\triangleleft\Lambda_1) \otimes S_L(\Lambda_2)_1 \otimes e'^1 ] \; \; \bar{\lambda}[(e'^2\triangleleft S_L(S_L(\Lambda_2)_2)) \otimes S_L(\Lambda_2)_3 \otimes e^2]
\medskip \\
&\overset{\textnormal{(\ref{eq:idempotentAction})}}{=}
[(e^1\triangleleft \Lambda_1) \otimes S_L(\Lambda_2)_1 \otimes (e'^1 \triangleleft S_L(\Lambda_2)_2) ] \; \; \bar{\lambda}[e'^2 \otimes S_L(\Lambda_2)_3 \otimes e^2]
\medskip \\
&
\overset{\textnormal{(\ref{eq:barlambda})}}{=}
[ (e^1\triangleleft \Lambda_1) \otimes S_L(\Lambda_2)_1 \otimes (e'^1 \triangleleft S_L(\Lambda_2)_2) ] \; \; \omega(e'^2) \; \lambda(S_L(\Lambda_2)_3) \; \omega(e^2 )
\medskip \\
&
=
\lambda(S_L(\Lambda_2)_3)\;\; [ ((e^1 \;\omega(e^2)) \triangleleft \Lambda_1) \otimes S_L(\Lambda_2)_1 \otimes ((e'^1 \; \omega(e'^2)) \triangleleft S_L(\Lambda_2)_2) ]
\medskip \\
&
\overset{\textnormal{(\ref{eq:trace})}}{=}
\lambda(S_L(\Lambda_2)_3)\;\; [(1_B \triangleleft \Lambda_1) \otimes S_L(\Lambda_2)_1 \otimes (1_B \triangleleft S_L(\Lambda_2)_2)]
\medskip \\
&
\overset{\textnormal{\eqref{eq:QTGaction2}}}{=}
\lambda(S_L(\Lambda_2)_3)\;\; [ \varepsilon_L(\Lambda_1)1_B \otimes S_L(\Lambda_2)_1 \otimes \varepsilon_L(S_L(\Lambda_2)_2)1_B ]
\medskip \\
&
\overset{\textnormal{(*)}}{=}
\lambda(S_L(\Lambda_2))\;\; [ \varepsilon_L(\Lambda_1) 1_B \otimes S_L(\Lambda_4) \otimes \varepsilon_L( S_L(\Lambda_3)) 1_B ]
\medskip \\
&
\overset{\textnormal{(\%)}}{=}
\lambda(S_L(\Lambda_1)) \; [ 1_B \otimes S_L(\Lambda_3) \otimes \varepsilon_L( S_L(\Lambda_2))1_B ]
\medskip \\
&
\overset{\textnormal{$\varepsilon_L S_L \hspace{-.03in}=\hspace{-.03in} \varepsilon_L$}}{=}
\lambda(S_L(\Lambda_1)) \; [ 1_B \otimes S_L(\Lambda_3) \otimes \varepsilon_L(\Lambda_2)1_B ]
\medskip \\
&
\overset{\textnormal{(\%)}}{=}
\; 1_B \otimes \lambda(S_L(\Lambda_1)) S_L(\Lambda_2) \otimes 1_B
\medskip \\
&
\overset{\textnormal{\eqref{eq:lam-S-1L}}}{=}
1_B \otimes 1_L \otimes 1_B \medskip \\
&\overset{\textnormal{\eqref{eq:QTGunit}}}{=}
1_{H(L,B)}.
\end{align*}
}
\noindent Here, $(*)$ is anti-comultiplicativity of the antipode, and (\%) is the counit axiom.
Therefore, $\bar{\Lambda}$ is a non-degenerate left integral of $H(L,B)$ as claimed.
\end{proof}
\begin{corollary}
The quantum transformation groupoid $H(L,B)$ is a Frobenius algebra via the maps $\Delta_{\bar{\Lambda}},\; \varepsilon_{\bar{\lambda}}$ defined as follows:
\begin{align*}
\Delta_{\bar{\Lambda}}(a\otimes \ell \otimes b)
& =
[(e^1\triangleleft \Lambda_1 S_L(\ell_1)) a \otimes
\ell_2 S_L(\Lambda_4) \otimes (be'^1 \triangleleft S_L(\Lambda_3)) ]
\otimes
[e^2 \otimes S^2_L(\Lambda_2) \otimes e'^2], \medskip\\
\varepsilon_{\bar{\lambda}}(a\otimes \ell \otimes b) & = \omega(a)\; \lambda(\ell)\; \omega(b),
\end{align*}
for $a,b \in B$ and $\ell \in L$.
Here, $\Lambda$ is a right integral of $L$, and $e'^1 \otimes e'^2$ is a copy of the separability idempotent $e^1 \otimes e^2$ of $B$. Moreover, $\lambda$ is a choice of element of $L^*$ such that~\eqref{eq:lam-S-1L} holds; in fact, $\lambda$ is a non-degenerate left integral of $L^*$.
\end{corollary}
\begin{proof}
The fact that $H:=H(L,B)$ is Frobenius follows from Proposition~\ref{prop:QTG-nondeg} and Theorem~\ref{thm:BNS}(c). The formulas for the comultiplication and counit maps for the Frobenius structure of $H$ then follow from the formulas for the non-degenerate integrals $\bar{\Lambda}$ and $\bar{\lambda}$ of $H$ and of $H^*$, resp., in \eqref{eq:barLambda} and \eqref{eq:barlambda}. Namely, by \eqref{eq:QTGcomult} and \eqref{eq:idempotentAction}, we have that
\begin{equation} \label{eq:Delta-bar}
\Delta_{\text{wk}}(\bar{\Lambda}) = [(e^1\triangleleft \Lambda_1) \otimes S_L(\Lambda_4) \otimes (e'^1 \triangleleft S_L(\Lambda_3)) ] \; \otimes \; [e'^2 \otimes S_L(\Lambda_2) \otimes e^2].
\end{equation}
Then, to get $\Delta$ and $\varepsilon$, one needs to apply the formulas \eqref{eq:Delta-weak} and \eqref{eq:ep-weak}, resp., in the proof of Theorem~\ref{thm:main-weak}. In particular, we have that:
{\small
\begin{align*}
&\Delta_{\bar{\Lambda}}(a\otimes \ell \otimes b)
=
[(a\otimes \ell \otimes b)\; \bar{\Lambda}_1] \otimes S_{\text{wk}}(\bar{\Lambda}_2)
\\ \medskip
& \overset{\textnormal{\eqref{eq:Delta-bar},\eqref{eq:QTGmult}}}{=}
[(e^1\triangleleft \Lambda_1 S_L(\ell_1)) a \otimes
\ell_2 S_L(\Lambda_5) \otimes (b \triangleleft S_L(\Lambda_4))(e'^1 \triangleleft S_L(\Lambda_3)) ]
\otimes
S_{\text{wk}}[e'^2 \otimes S_L(\Lambda_2) \otimes e^2] \\ \medskip
& \overset{\textnormal{\eqref{eq:QTGaction2}}}{=}
[(e^1\triangleleft \Lambda_1 S_L(\ell_1)) a \otimes
\ell_2 S_L(\Lambda_4) \otimes (be'^1 \triangleleft S_L(\Lambda_3)) ]
\otimes
S_{\text{wk}}[e'^2 \otimes S_L(\Lambda_2) \otimes e^2]
\\ \medskip
& \overset{\textnormal{\eqref{eq:QTGantipode}}}{=}
[(e^1\triangleleft \Lambda_1 S_L(\ell_1)) a \otimes
\ell_2 S_L(\Lambda_4) \otimes (be'^1 \triangleleft S_L(\Lambda_3)) ]
\otimes
[e^2 \otimes S^2_L(\Lambda_2) \otimes e'^2].
\end{align*}}
\noindent Finally, the last statement holds by Theorem~\ref{thm:BNS}(d).
\end{proof}
\begin{example}
If we take $L= \Bbbk$, then $\Lambda = \lambda = 1_\Bbbk$, and we have the following structure formulas for the Frobenius weak Hopf algebra $H:=H(\Bbbk,B) = B^{op} \otimes B$:
\begin{itemize}
\item algebra: $m((a \otimes b)\otimes (a' \otimes b')):=(a \otimes b)(a' \otimes b') = a'a \otimes bb'$, \quad $1_H = 1_B \otimes 1_B$;\smallskip
\item weak Hopf: $\Delta_{\text{wk}}(a \otimes b) = (a \otimes e^1) \otimes (e^2 \otimes b)$, \; $\varepsilon_{\text{wk}}(a \otimes b) = \omega(ab)$, \; $S_{\text{wk}}(a \otimes b) = b \otimes a$;\smallskip
\item Frobenius: $\Delta(a \otimes b) = (e^1 a \otimes be'^1) \otimes (e^2 \otimes e'^2)$, \; $\varepsilon(a \otimes b) = \omega(a) \omega(b)$;
\end{itemize}
for $a,b \in B$. Indeed, let us check that $(H,\Delta, \varepsilon)$ is a coassociative, counital coalgebra:
{\small
\begin{align*}
(\Delta \otimes \textnormal{id})\Delta(a \otimes b) &= \Delta(e^1 a \otimes be'^1) \otimes (e^2 \otimes e'^2)\\
&= (e''^1 e^1 a \otimes b e'^1 e'''^1) \otimes (e''^2 \otimes e'''^2) \otimes (e^2 \otimes e'^2)
\\
&= (e^1 e''^1 a \otimes b e'''^1 e'^1) \otimes (e^2 \otimes e'^2) \otimes (e''^2 \otimes e'''^2)\\
& \overset{\textnormal{\eqref{eq:idempotent1},\eqref{eq:idempotent3}}}{=}
(e^1 a \otimes be'^1) \otimes (e''^1 e^2 \otimes e'^2 e'''^1) \otimes (e''^2 \otimes e'''^2)\\
&=(e^1 a \otimes be'^1) \otimes \Delta(e^2 \otimes e'^2) \quad = (\textnormal{id} \otimes \Delta)\Delta(a \otimes b);
\end{align*}
}
\vspace{-.2in}
{\small
\begin{align*}
(\varepsilon \otimes \textnormal{id})\Delta(a \otimes b)
&= \omega(e^1 a) \;\omega(be'^1)\;(e^2 \otimes e'^2) \overset{\textnormal{\eqref{eq:idempotent1},\eqref{eq:idempotent3}}}{=} \omega(e^1) \;\omega(e'^1)\;(a e^2 \otimes e'^2b) \overset{\textnormal{\eqref{eq:trace}}}{=} a\otimes b;\\
(\textnormal{id} \otimes \varepsilon)\Delta(a \otimes b)
&= (e^1 a \otimes be'^1) \; \omega(e^2) \; \omega(e'^2) \overset{\textnormal{\eqref{eq:trace}}}{=} a\otimes b.
\end{align*}
}
\noindent Lastly, \eqref{eq:Delta-m} holds by the following computations:
{\small
\begin{align*}
\Delta((a \otimes b)(a'\otimes b')) &= (e^1 a'a \otimes bb'e'^1) \otimes (e^2 \otimes e'^2) =:(\star);\\
(m \otimes \textnormal{id})(\textnormal{id} \otimes \Delta)((a \otimes b) \otimes (a' \otimes b')) &= (a \otimes b) (e^1 a' \otimes b'e'^1) \otimes (e^2 \otimes e'^2) = (\star);
\\
(\textnormal{id} \otimes m)(\Delta \otimes \textnormal{id})((a \otimes b) \otimes (a' \otimes b')) &= (e^1 a \otimes be'^1) \otimes (e^2 \otimes e'^2)(a' \otimes b') \overset{\textnormal{\eqref{eq:idempotent1}}}{=} (\star).
\end{align*}
}
\end{example}
% Source: arXiv:2204.14182, "On non-counital Frobenius algebras" (math.QA, math.RA).
% Source: arXiv:1310.5493.

\title{Brooks' theorem on powers of graphs}

\begin{abstract}
We prove that for $k\geq 3$, the bound given by Brooks' theorem on the chromatic number of $k$-th powers of graphs of maximum degree $\Delta \geq 3$ can be lowered by 1, even in the case of online list coloring.
\end{abstract}

\section{Introduction}
\label{sec:intro}
A graph $G=(V,E)$ is \textit{$k$-colorable} if there is a way to color each vertex with an element of $\{1,\dots,k\}$ so that no two adjacent vertices receive the same color. A generalization of $k$-colorability is \textit{list $k$-colorability} (or \textit{$k$-choosability}), introduced independently by Vizing~\cite{v76} and Erd\H{o}s et al.~\cite{ert80}. The graph $G$ is $k$-choosable if for every assignment of $k$ colors to each vertex in $V$, there is a way to color each vertex with an element of its assigned $k$ colors so that no two adjacent vertices have the same color.
Let $\Delta \geq 3$. Unless specified otherwise, the graphs considered here are simple, connected and their maximum degree is $\Delta$. Jointly with the assumption that $\Delta \geq 3$, this means for example that none of the graphs we consider is a cycle. We recall the following seminal Brooks-like theorem on choosability.
\begin{theorem}\cite{ert80}\label{th:brooks}
Except for cliques, every graph is $\Delta$-choosable.
\end{theorem}
The square $G^2$ of a graph $G=(V,E)$ is the graph obtained from $G$ by adding all edges between vertices that have a common neighbor. Note that $\Delta(G^2)\leq \Delta^2$, so Theorem~\ref{th:brooks} implies that if $G^2$ is not a clique on $\Delta^2+1$ vertices, then $G^2$ is $\Delta^2$-choosable. In the case $\Delta=3$, Theorem~\ref{th:brooks} thus ensures that the square of any subcubic graph is $9$-choosable unless it is a clique. Cranston and Kim~\cite{ck08} improved this bound and conjectured that an analogous improvement holds for every $\Delta$.
\begin{theorem}\cite{ck08}\label{th:cranstonkim}
Except for the Petersen graph, the square of any subcubic graph is $8$-choosable.
\end{theorem}
\emph{Moore graphs} are graphs on $\Delta^2+1$ vertices whose square is a clique~\cite{ms05}. The Petersen graph is the unique Moore graph with $\Delta=3$.
\begin{conjecture}\cite{ck08}\label{conj:cranstonkim}
Except for Moore graphs, the square of any graph is $(\Delta^2-1)$-choosable.
\end{conjecture}
The \emph{distance} between two vertices $u$ and $v$ in $G$ is the length of a shortest path between them. A generalization of the square of a graph is the $k^{th}$ power of a graph, for $k \in \mathbb{N}^*$. The \emph{$k^{th}$ power} of $G$ is obtained from $G$ by adding all edges between vertices at distance at most $k$. We denote by $D(k,\Delta)$ the greatest possible maximum degree of the $k^{th}$ power of a graph of maximum degree $\Delta$. It corresponds to the maximum degree of the $k^{th}$ power of a deep enough $\Delta$-regular tree, and more precisely: \[ D(k,\Delta)=\Delta \times \sum_{i=1}^{k} (\Delta-1)^{i-1}=\Delta \times \frac{(\Delta-1)^k-1}{\Delta-2} \]
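As a quick sanity check (ours, not part of the paper), the closed form for $D(k,\Delta)$ can be compared with the defining sum, which counts the vertices at distance $1,\dots,k$ from the root of a $\Delta$-regular tree; the helper names are hypothetical.

```python
def D_closed(k, delta):
    # Closed form, valid for delta >= 3 (the paper's standing assumption).
    return delta * ((delta - 1) ** k - 1) // (delta - 2)

def D_sum(k, delta):
    # Defining sum: delta * sum_{i=1}^{k} (delta-1)^(i-1).
    return delta * sum((delta - 1) ** (i - 1) for i in range(1, k + 1))

assert all(D_closed(k, d) == D_sum(k, d) for k in range(1, 8) for d in range(3, 8))
print(D_closed(2, 3), D_closed(3, 3))  # 9 21
```

In particular $D(2,3)=9$ and $D(3,3)=21$, the two values used in the paper.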
Note that $D(2,\Delta)=\Delta^2$, hence the following generalizes Conjecture~\ref{conj:cranstonkim}:
\begin{conjecture}\cite{mf13}\label{conj:gen}
For any $k \in \mathbb{N}^*$, except for Moore graphs when $k=2$, the $k^{th}$ power of any graph is $(D(k,\Delta)-1)$-choosable.
\end{conjecture}
In other words, the conjecture states that Theorem~\ref{th:brooks}, which would only yield the result for $D(k,\Delta)$, can be strengthened in the case of powers of graphs. The reason why $k=2$ is a special case is that there is no such thing as Moore graphs for higher powers (see Lemma~\ref{lem:diam} in Section~\ref{subsec:proof}).
We prove Conjecture~\ref{conj:gen} for $k \geq 3$.
\begin{theorem}\label{th:main}
For $k\geq 3$, the $k^{th}$ power of any graph is $(D(k,\Delta)-1)$-choosable.
\end{theorem}
Independently, Conjecture~\ref{conj:gen} when $k=2$ (i.e. Conjecture~\ref{conj:cranstonkim}) has been proved recently by Cranston and Rabern~\cite{cr13}.
A generalization of list coloring, namely \emph{online list coloring}, was recently introduced independently by Schauz~\cite{s09} and Zhu~\cite{z09}. The graph $G$ is \emph{$k$-paintable} if, for every assignment of $k$ colors to each vertex, and for every order on the colors, there is a strategy to color the graph even though the set of vertices containing the $i^{th}$ color in their lists is only revealed at step $i$: at that point one must decide on the spot which of these vertices are colored with $i$ and which are not (once colored, a vertex cannot be uncolored). Clearly, online list coloring is stronger than list coloring. There exist graphs which are $k$-choosable but not $k$-paintable~\cite{s09}, though we do not know of any $k$-choosable graph which is not $(k+1)$-paintable~\cite{clmptw13}. Brooks' theorem is also true in the case of online list coloring~\cite{hks10}.
We actually prove a stronger version of Theorem~\ref{th:main}, as follows.
\begin{theorem}\label{th:paint}
For $k\geq 3$, the $k^{th}$ power of any graph is $(D(k,\Delta)-1)$-paintable.
\end{theorem}
Similarly, Cranston and Rabern proved the case $k=2$ in the more general setting of online list coloring~\cite{cr13}.
We wonder whether the following stronger generalization of Conjecture~\ref{conj:cranstonkim} could be true:
\begin{conjecture}
For any $k \in \mathbb{N}^*$, except for a finite number of graphs, the $k^{th}$ power of any graph is $(D(k,\Delta)+1-k)$-choosable.
\end{conjecture}
\section{Proof of Theorem~\ref{th:main}}\label{sec:proof}
Let $k \geq 3$. Let $G$ be a graph. Let $M=D(k,\Delta)$. Note that $M \geq 21$ as $\Delta \geq 3$.
We will need the following lemma, which is essentially an easy adaptation of existing results (see Section~\ref{subsec:proof} for a proof).
\begin{lemma}\label{lem:glob}
If $G$ satisfies any of the following:
\begin{enumerate}
\item $G$ contains a vertex of degree smaller than $\Delta$.
\item $G$ contains a cycle shorter than $2k$.
\item $G$ contains two intersecting cycles of length $2k$.
\item diam$(G)\leq k$.
\end{enumerate}
Then $G^k$ is $(M-1)$-paintable.
\end{lemma}
Thus we can assume from now on that $G$ is $\Delta$-regular, has girth at least $2k$ and diameter at least $k+1$, and that the cycles of length $2k$ in $G$ are pairwise disjoint.
\begin{lemma}\label{lem:x1y1}
The graph $G$ contains two vertices $x_1$ and $y_1$ at distance $k+1$ from each other, with two neighbors $x_2,y_2$ (respectively) at distance at least $k+1$ from each other.
\end{lemma}
\begin{proof}
Since diam$(G) \geq k+1$, $G$ contains two vertices $x_1$ and $y_1$ at distance $k+1$ from each other. Let us prove that $x_1$ has a neighbor $x_2$ and $y_1$ a neighbor $y_2$ such that $x_2$ and $y_2$ are at distance at least $k+1$ from each other. Assume for contradiction that each of the $\Delta$ neighbors of $x_1$ is at distance at most $k$ from each of the $\Delta$ neighbors of $y_1$. Let $z$ be a neighbor of $x_1$. Only $\Delta-1$ neighbors of $z$ can be part of a path of length at most $k$ containing a neighbor of $y_1$, as $x_1$ is itself at distance at least $k$ from all the neighbors of $y_1$. Therefore, by the pigeonhole principle, there is a neighbor $z'$ of $z$ that belongs to two paths of length at most $k-1$ to two different neighbors of $y_1$. This yields a cycle $C$ of length at most $2k$ containing $y_1$. The cycle $C$ is actually of length $2k$ and contains $z'$, as $z'$ is the endpoint of two different paths of length at most $k$ to $y_1$ and there is no cycle of length less than $2k$ by Lemma~\ref{lem:glob}. Consequently, $y_1$ and $z'$ are diametrically opposite on $C$. Let $w$ be another neighbor of $x_1$. By the same argument, a neighbor $w'$ of $w$ belongs to a cycle $C'$ of length $2k$ that contains $y_1$, and $w'$ is diametrically opposite to $y_1$ in $C'$. Then $C$ and $C'$ intersect on $y_1$, which by Lemma~\ref{lem:glob} implies that $C$ and $C'$ are actually the same cycle. Thus $w'$ and $z'$ are actually the same vertex. Now, $(w',w,x_1,z)$ is a cycle of length $4$, a contradiction to Lemma~\ref{lem:glob} and the fact that $k \geq 3$.
\end{proof}
We will describe an algorithm to online list color $G$. Let $L$ be a list assignment of $M-1$ colors to each vertex. Since we are in the case of online list coloring, the colors will be revealed one after another (at step $1$, we learn which vertices contain color $1$ in their list, and have to decide on the spot which will be colored in it, and so on).
At any step of the algorithm, the number of \emph{constraints} of a vertex $v$ is the number of colors in $L(v)$ that appear on vertices at distance at most $k$ from $v$. Similarly, the number of constraints \emph{implied} on a vertex $v$ by a set $S$ is the number of colors in $L(v)$ that appear on vertices of $S$. Note that the number of constraints on a vertex $v$ is bounded by its degree in $G^k$, and that this upper bound is lowered by $1$ if two neighbors of $v$ in $G^k$ have the same color or if a neighbor of $v$ in $G^k$ either is not colored or its color does not belong to $L(v)$.
We consider four vertices $x_1, x_2, y_1$ and $y_2$ obtained from Lemma~\ref{lem:x1y1}. Let $P$ be a path of length $k+1$ between $x_1$ and $y_1$. Note that by definition of $x_2$ and $y_2$, at most one of them is on $P$. Let $v$ be a vertex of $P$ at distance at least two on $P$ from both $x_1$ and $y_1$ (such a vertex exists since $P$ has length at least $4$), and let $w$ be a neighbor of $v$ on $P$ distinct from $x_2$ and $y_2$. Observe that $v$ is at distance at most $k$ from all of $x_1,x_2,y_1,y_2$, and $w$ is at distance at most $k$ from $x_1$ and $y_1$.
Our goal is to set an order on the vertices of $G$ such that by appropriately deciding at each step whether to color or not each vertex in that order, every vertex that is considered for coloring has at most $M-2$ constraints.
The order we choose is $x_1,x_2,y_1,y_2$, followed by all other vertices by decreasing distance to $\{v,w\}$ (the distance to a set is the minimum of the distance to each element of the set). Ties are broken arbitrarily. The order ends with $w$ and then $v$. Let us now describe more precisely the coloring algorithm. For each new color $i$:
\begin{itemize}
\item Treat the vertices $x_1, x_2, y_1$ and $y_2$ (in a way described a little bit further).
\item Consider all the remaining vertices, one after the other according to the chosen order. When considering a vertex $u$, color it with $i$ if $i \in L(u)$ and no neighbor of $u$ in $G^k$ is colored with $i$.
\end{itemize}
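The greedy step in the second bullet can be sketched as follows; this is a minimal illustration, assuming the adjacency of $G^k$ has been precomputed (`power_adj`), with the order, lists and color names all hypothetical.

```python
def color_step(power_adj, order, lists, coloring, i):
    """One step of the online algorithm: color i is revealed, and we
    greedily assign it along the chosen order to every vertex u with
    i in L(u) and no G^k-neighbor already colored with i."""
    for u in order:
        if u in coloring:
            continue  # u already received an earlier color
        if i in lists[u] and all(coloring.get(w) != i for w in power_adj[u]):
            coloring[u] = i
    return coloring
```

Running successive steps with colors $1, 2, \dots$ reproduces the behavior described above: a vertex skipped at one step either lacked the color in its list or was blocked by an earlier vertex in the order.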
The heart of the algorithm consists in making the right decision for $\{x_1,x_2,y_1,y_2\}$ at each step, so that $v$ and $w$ each have at most $M-2$ constraints when it comes to coloring them (note that $x_1,x_2,y_1$ and $y_2$ are all at distance at most $k$ from $v$ and $w$). Let us first prove that all the other vertices are colored at the end of the algorithm.
\begin{obs}\label{obs:constraints}
Let $u$ be an uncolored vertex (distinct from $x_1,x_2,y_1,y_2$). Let $r(u)$ be the number of neighbors of $u$ in $G^k$ which appear after $u$ in the order. The number of constraints for $u$ is at most $M-r(u)$.
\end{obs}
\begin{proof}
Let $y$ be a neighbor of $u$ in $G^k$ which appears after $u$. If $y$ is not colored, then $y$ implies no constraint on $u$. Assume that $y$ is colored with color $i$. Since $u$ is uncolored, our attempt to color $u$ with $i$ failed. So either color $i$ does not appear in $L(u)$, in which case $i$ implies no constraint on $u$; or another neighbor $y'$ of $u$, which appears before $u$ in the order, was colored with $i$, and then $\{y,y'\}$ implies only one constraint on $u$.
\end{proof}
Let us first prove that every vertex $u \notin \{x_1,x_2, y_1,y_2,v,w\}$ is colored at the end of the coloring algorithm, whatever choices were made for $x_1,x_2, y_1,y_2$. It suffices to show that $u$ has at most $M-2$ constraints, since $|L(u)|=M-1$:
\begin{itemize}
\item If $u$ is at distance at most $k$ from both $v$ and $w$, then both $v$ and $w$ are adjacent to $u$ in $G^k$. Since they are after $u$ in the order, the result holds by Observation~\ref{obs:constraints}.
\item If $u$ is at distance at least $k+1$ from $v$ or $w$, let $P$ be a shortest path from $u$ to $\{v,w\}$. Assume w.l.o.g. that $P$ is a shortest path from $u$ to $v$.
Let $z_1$, $z_2$ and $z_3$ be the three vertices following $u$ on $P$. These vertices exist since $v$ and $w$ are adjacent, hence $d(u,\{v,w\}) \geq k \geq 3$.\\
If $\{z_1,z_2,z_3\} \cap \{x_1,x_2, y_1,y_2\}$ has size at most one, then at least two of $\{z_1,z_2,z_3\}$ are after $u$ in the order, hence the result by Observation~\ref{obs:constraints}. \\
Otherwise, at least two of $\{z_1,z_2,z_3\}$ are in $\{x_1,x_2, y_1,y_2\}$. Since $d(x_1,y_1)\geq k+1$, if $x_1 \in \{z_1,z_2,z_3\}$ then neither $y_1$ nor $y_2$ is in this set. The same holds for $x_2$. We may assume w.l.o.g. that the intersection is exactly $\{x_1,x_2\}$. Let $w_1$ be another neighbor of $z_2$. Note that $w_1$ is neither $y_1$ nor $y_2$. Moreover $d(w_1,v)<d(u,v)$ since $P$ is a shortest path. So $w_1$ appears after $u$ in the order and $d(w_1,u) \leq k$. Two vertices at distance at most three from $u$ are thus after $u$ in the order, so $u$ has at most $M-2$ constraints.
\end{itemize}
Now, let us argue that there is a coloring of $\{x_1,x_2,y_1,y_2\}$ that ensures that $v$ and $w$ will be colored.
In standard vertex coloring, we set $x_1$ and $y_1$ to color $1$, and $x_2$ and $y_2$ to color $2$: then vertices $v$ and $w$ each have at most $M-2$ colors appearing on their neighborhood in $G^k$. So they each have at most $M-2$ constraints and then both $v$ and $w$ are colored at some step of the algorithm.
Since we are considering online list coloring, the procedure is slightly more complicated, though the idea remains the same. We want to make sure that the coloring of $\{x_1,y_1\}$ ensures that $v$ and $w$ each have one fewer constraint, and that the coloring of $\{x_2,y_2\}$ ensures that $v$ has one fewer constraint. Thus, when we consider $w$, it has one fewer constraint thanks to $\{x_1,y_1\}$ and one fewer thanks to $v$ (since $w$ is before $v$ in the order), and $v$ has two fewer constraints thanks to $\{x_1,y_1,x_2,y_2\}$.
We proceed as follows. We denote by $NO(v)$ (resp. $NO(w)$) the number of elements of $\{x_1,y_1\}$ that are colored, minus the number of constraints implied on $v$ (resp. $w$) by elements of this set. For example, if $x_1$ and $y_1$ are colored the same, then $NO(v)=1$. The value $NO$ roughly denotes the number of colored vertices in $\{x_1,y_1\}$ which do not create a constraint. For simplicity, we consider $L(u)$ to be empty once $u$ is colored.
At the beginning of each step $c$, we check the following:
\begin{enumerate}[(i)]
\item\label{1:same} If $c$ belongs to $L(x_1) \cap L(y_1)$, then color both $x_1$ and $y_1$ in $c$.
\item\label{1:v} If $c$ belongs to $L(x_1)$ or $L(y_1)$ but not to $L(v)$, and $NO(v)=0$, then color $x_1$ or $y_1$ in $c$.
\item\label{1:w} If $c$ belongs to $L(x_1)$ or $L(y_1)$ but not to $L(w)$, and $NO(w)=0$, then color $x_1$ or $y_1$ in $c$.
\item\label{1:end} If $c$ belongs to $L(x_1)$ or $L(y_1)$ and $M-2$ colors of the corresponding list have already been revealed, then color the corresponding vertex in $c$.\newline
\item\label{2:same} If $c$ belongs to $L(x_2)\cap L(y_2)$, then color both $x_2$ and $y_2$ in $c$.
\item\label{2:v} If $c$ belongs to $L(x_2)$ or $L(y_2)$ but not to $L(v)$, then color $x_2$ or $y_2$ in $c$.
\item\label{2:end} If $c$ belongs to $L(x_2)$ or $L(y_2)$ and at least $M-4$ colors of the corresponding list have already been revealed, then color the corresponding vertex in $c$.
\end{enumerate}
It remains to prove that this yields a coloring of $\{x_1,x_2,y_1,y_2\}$ such that $v$ and $w$ can be colored.
Let us first justify that $x_1$ and $y_1$ are colored in the desired way (i.e., for both $v$ and $w$, the set $\{x_1,y_1\}$ implies at most one constraint). If $x_1$ and $y_1$ are colored the same, the goal is reached. If the lists $L(x_1)$ and $L(y_1)$ have no color in common, then since $|L(x_1) \cup L(y_1)|>|L(v)|$, at least one of them can be colored by~(\ref{1:v}) or~(\ref{1:w}). Then at least one of $x_1$ and $y_1$ is colored with some $c \not\in L(v) \cap L(w)$; assume w.l.o.g. that $x_1$ is colored that way (and is the first if both $x_1$ and $y_1$ are). If $c \not\in L(v)\cup L(w)$, the vertex $y_1$ is never colored by~(\ref{1:v}) or~(\ref{1:w}) (since $NO(v)=NO(w)=1$), but it is colored by~(\ref{1:end}). If $c \in L(v) \cup L(w)$, assume w.l.o.g. that $c \in L(v) \setminus L(w)$. Then, since $y_1$ was not colored before,~(\ref{1:v}) did not apply, which means that every color that belonged to $L(y_1)$ also belonged to $L(v)$. But $c \in L(v) \setminus L(y_1)$ (otherwise $x_1$ and $y_1$ would be colored the same by~(\ref{1:same})). Thus there remain more colors available for $y_1$ than for $v$, and~(\ref{1:v}) eventually applies (recall that~(\ref{1:end}) does not apply while more than one color remains available for $y_1$).
Now, let us justify that $x_2$ and $y_2$ are colored in the desired way, i.e. if the two are colored then the set $\{x_2,y_2\}$ implies at most one constraint on $v$.
If~(\ref{2:same}) or~(\ref{2:v}) applies, the goal is reached. Let us now prove two things: that one of~(\ref{2:same}) and~(\ref{2:v}) always applies, and that both $x_2$ and $y_2$ are colored.\\
Assume that neither (\ref{2:same}) nor (\ref{2:v}) applies to $x_2$ or $y_2$. Then~(\ref{2:end}) eventually applies, as at most two colors (the colors of $x_1$ and $y_1$) may fail to trigger (\ref{2:same})--(\ref{2:end}), and we color $x_2$ (resp.\ $y_2$) as soon as it has at most $2$ colors yet to be revealed (i.e., if $x_2$ was not colored, then it still had at least $3$ colors yet to be revealed). Thus $x_2$ and $y_2$ are both colored at the end.\\
Assume that both $x_2$ and $y_2$ are colored at (\ref{2:end}). Then (\ref{2:same}) never applied, which implies $|L(x_2) \cap L(y_2)| \leq 2$. Consequently, $|L(x_2) \cup L(y_2)| \geq 2(M-1)-2$. Since $M\geq 21$ and $|L(v)|=M-1$, it follows that $L(x_2)\cup L(y_2)$ contains at least $18$ colors that do not belong to $L(v)$. This contradicts the fact that~(\ref{2:v}) never applied.
\subsection{Proof of Lemma~\ref{lem:glob}}\label{subsec:proof}
In this section, we prove the different items of Lemma~\ref{lem:glob}. All the proofs rely on distance-based coloring algorithms, just like the proof of Theorem~\ref{th:main}. However, here we do not have to treat any vertices differently: it suffices to choose an appropriate order. The resulting proofs are thus much simpler.
\begin{lemma}\label{lem:regular}
If $G$ contains a vertex of degree $\leq \Delta-1$, then $G^k$ is $(M-1)$-paintable.
\end{lemma}
\begin{proof}
Assume $G$ contains a vertex $v$ with $d(v) \leq \Delta - 1$. Since $G$ is connected, the distance to $v$ is well-defined. Order the vertices by decreasing distance to $v$. At each step of the algorithm, color as many vertices as possible, following this order. Every vertex $x$ at distance at least two from $v$ has at least two neighbors in $G^k$ which are not constraints (indeed, the vertices on a shortest path from $x$ to $v$ are considered after $x$ in the order). For every neighbor $w$ of $v$, since $k \geq 2$, the degree of $w$ in $G^k$ is at most $M-1$. Moreover, $v$ is considered after $w$ in the order, so $w$ can be colored. Since $k\geq 2$, $\Delta \geq 3$ and $d(v)\leq \Delta-1$, the degree of $v$ in $G^k$ is at most $M-\Delta < M-1$, so $v$ can be colored.
\end{proof}
Thus we assume from now on that $G$ is $\Delta$-regular.
\begin{lemma}\label{lem:shortcycle}
If $g(G) < 2k$, then $G^k$ is $(M-1)$-paintable.
\end{lemma}
\begin{proof}
Assume $G$ contains a cycle $C$ of length at most $2k-1$. Let $v$ and $w$ be two adjacent vertices on $C$. Since $C$ has length at most $2k-1$, the degrees of $v$ and $w$ in $G^k$ are less than $M-1$. Then, at each color step, we color as many vertices as possible, by decreasing distance to $\{v,w\}$ and ending with $v$ and $w$.
\end{proof}
Thus we assume from now on that $g(G)\geq 2k$.
\begin{lemma}\label{lem:shortcycles}
If $G$ contains two intersecting cycles of length $2k$, then $G^k$ is $(M-1)$-paintable.
\end{lemma}
\begin{proof}
Assume $G$ contains a vertex $v$ belonging to two cycles of length $2k$. Let $w$ be a neighbor of $v$ on one cycle of length $2k$. Vertex $v$ has degree at most $M-2$ in $G^k$, and $w$ at most $M-1$. Then, at each color step, we color as many vertices as possible, by decreasing distance to $\{v,w\}$ and ending with $w$ and then $v$.
\end{proof}
Thus we assume from now on that the cycles of length $2k$ in $G$ are disjoint.
\begin{lemma}\label{lem:diam}
If diam$(G) \leq k$ then $G^k$ is $(M-1)$-paintable.
\end{lemma}
\begin{proof}
Assume diam$(G) \leq k$. Then $G$ contains at most $M(k,\Delta)+1$ vertices, and $G^k$ is a clique. By~\cite{ms05}, the graph $G$ contains at most $M(k,\Delta)-1$ vertices, hence the result.
\end{proof}
\section{Acknowledgements}\label{sect:ack}
The authors would like to thank Daniel Cranston for suggesting the generalization to paintability.
\bibliographystyle{plain}
| {
"timestamp": "2013-10-22T02:11:57",
"yymm": "1310",
"arxiv_id": "1310.5493",
"language": "en",
"url": "https://arxiv.org/abs/1310.5493",
"abstract": "We prove that for $k\\geq 3$, the bound given by Brooks' theorem on the chromatic number of $k$-th powers of graphs of maximum degree $\\Delta \\geq 3$ can be lowered by 1, even in the case of online list coloring.",
"subjects": "Discrete Mathematics (cs.DM); Combinatorics (math.CO)",
"title": "Brooks' theorem on powers of graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9817357205793902,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7085610925837353
} |
https://arxiv.org/abs/0803.0045 | Direct limits of infinite-dimensional Lie groups | Many infinite-dimensional Lie groups of interest can be expressed as a union of an ascending sequence of (finite- or infinite-dimensional) Lie groups. In this survey article, we compile general results concerning such ascending unions, describe the main classes of examples, and explain what the general theory tells us about these. In particular, we discuss: (1) Direct limit properties of ascending unions of Lie groups in the relevant categories; (2) Regularity in Milnor's sense; (3) Homotopy groups of direct limit groups and of Lie groups containing a dense union of Lie groups; (4) Subgroups of direct limit groups; (5) Constructions of Lie group structures on ascending unions of Lie groups. | \section{\,Introduction}
Many infinite-dimensional
Lie groups $G$ can be expressed as the union
$G=\bigcup_{n\in {\mathbb N}}\,G_n$
of a sequence $G_1\subseteq G_2\subseteq \cdots$
of (finite- or infinite-dimensional) Lie groups,
such that the inclusion maps $j_n\colon G_n\to G$
and $j_{m,n}\colon G_n\to G_m$ (for $n\leq m$)
are smooth homomorphisms.
Typically, the steps $G_n$
are Lie groups of a simpler type,
and one hopes (and often succeeds)
to deduce results concerning~$G$
from information available
for the Lie groups~$G_n$.\\[2.5mm]
The goals of this article are twofold:
\begin{itemize}
\item
To survey general results
on ascending unions of Lie groups
and their properties;\vspace{0.5mm}
\item
To collect concrete classes of examples
and explain how the general theory
specializes in these cases.
\end{itemize}
One
typical class of examples
is given by the groups $\Diff_c(M)$\index{diffeomorphism group}
of smooth diffeomorphisms
$\phi\colon M\to M$
of $\sigma$-compact, finite-dimensional smooth manifolds~$M$
which are compactly supported
in the sense that the set
$\{x\in M\colon \phi(x)\not= x\}$
has compact
closure. The group operation is
composition of diffeomorphisms.
It is known that $\Diff_c(M)$
is a Lie group (see \cite{Mic} or \cite{DIF}).
Furthermore,
\[
\Diff_c(M)\;=\;\bigcup_{n\in {\mathbb N}} \, \Diff_{K_n}(M)\vspace{-2mm}
\]
for each exhaustion
$K_1\subseteq K_2\subseteq\cdots$
of~$M$ by compact sets
(with $K_n$ in the interior of $K_{n+1}$),
where $\Diff_{K_n}(M)$ is the Lie group
of smooth diffeomorphisms of~$M$ supported in~$K_n$.
The manifold structure of $\Diff_c(M)$
is modelled on the space
${\mathcal V}_c(M)$ of compactly supported
smooth vector fields,
which is an LF-space with a
complicated topology.
By contrast, $\Diff_{K_n}(M)$
is modelled on the space
${\mathcal V}_{K_n}(M)$ of smooth vector fields
supported in~$K_n$,
which is a Fr\'{e}chet space.
Many specific tools of infinite-dimensional
calculus can be applied to ${\mathcal V}_{K_n}(M)$,
e.g.\ to clarify differentiability
questions for functions on this space.
In other typical cases,
each $G_n$ is finite-dimensional
(a particularly well-understood situation)
or modelled on a Banach space,
whence again special tools
are available to deal with the Lie groups
$G_n$ (but not \emph{a priori} for~$G$).
Besides diffeomorphism groups,
we shall also discuss the following
major classes of examples
(described in more detail in Section~\ref{secuninf}):\vspace{.8mm}
\begin{itemize}
\item
The ``test function groups''\index{test function group}
$C^\infty_c(M,H)=\bigcup_{n\in {\mathbb N}}\,C_{K_n}^\infty(M,H)$
of compactly supported Lie group-valued
smooth mappings on a $\sigma$-compact
smooth manifold $M=\bigcup_{n\in {\mathbb N}}K_n$;\vspace{.5mm}
\item
Weak direct products $\prod_{n\in {\mathbb N}}^* H_n:=\bigcup_{n\in {\mathbb N}}
\prod_{k=1}^n H_k$\index{weak direct product}
of Lie groups $H_n$;\vspace{.5mm}
\item
Unions $A^\times=\bigcup_{n\in {\mathbb N}}A_n^\times$
of unit groups
of Banach algebras $A_1\subseteq A_2\subseteq\cdots$;\vspace{.5mm}
\item
The groups $\Germ(K,H)$ of germs\label{group of germs of analytic maps}
of analytic mappings on open neighbourhoods\index{Sobolev--Lie group}
of a compact subset $K$ of a metrizable
complex locally convex space,
with values in a complex Banach--Lie group~$H$.
\item
The group
$H^{\downarrow s}(K,F)
=\bigcup_{t>s}H^t(K,F)=
\bigcup_{n\in {\mathbb N}}H^{s+\frac{1}{n}}(K,F)$,
where $K$ is a compact smooth manifold,
$s\geq \dim(K)/2$,
$F$ a finite-dimensional Lie group, and
$H^t(K,F)\subseteq C(K,F)$ the integral subgroup
whose Lie algebra is the
Sobolev space
$H^t(K,\Lie(F))$
of functions with values in the Lie algebra
$\Lie(F)$ of~$F$.\vspace{.8mm}
\end{itemize}
For $s=\dim(K)/2$, the Lie group $H^{\downarrow s}(K,F)$
is particularly interesting,
because a Hilbert--Lie group
$H^s(K,F)$ is not available in this case.
In some situations,
$H^{\downarrow s}(K,F)$ may serve as a substitute
for the missing group.
We shall also discuss the group $\GermDiff(K,X)$\index{group of germs
of diffeomorphisms}
of germs of analytic diffeo\-morphisms $\gamma$
around a compact set $K$ in a finite-dimensional
complex vector space~$X$,\vspace{-.3mm}
such that $\gamma|_K=\id_K$.
This group is not considered as a union of groups, but
as a union
of Banach \emph{manifolds} $M_1\subseteq M_2\subseteq\cdots$.
Among others, we shall discuss
the following topics
in our general setting
(and for the preceding examples):\vspace{.8mm}
\begin{itemize}
\item
Direct limit properties
of ascending unions;\vspace{.5mm}
\item
Homotopy groups
of ascending unions;\vspace{.5mm}
\item
When ascending unions
are regular Lie groups in
Milnor's sense;\vspace{.5mm}
\item
Questions concerning subgroups of ascending unions.\vspace{.8mm}
\end{itemize}
We now describe the main problems
and questions
in more detail, together
with some essential concepts.
As a rule, references to the literature,
answers (and partial
answers) will only be given later,
in the actual article.
\subsection{Direct limit properties of ascending unions}\label{dlpropannounce}
Consider a Lie group~$G$ which is an
ascending union $G=\bigcup_{n\in {\mathbb N}}\, G_n$
of Lie groups, and a map $f\colon G\to X$.
It is natural to ask:\index{direct limit properties}
\begin{description}
\item[(a)]
If~$X$ is a smooth manifold (modelled on a locally convex
space) and $f|_{G_n}$ is smooth
for each $n\in {\mathbb N}$, does it follow that $f$
is smooth?
\item[(b)]
If~$X$ is a topological space and $f|_{G_n}$ is continuous
for each $n\in {\mathbb N}$, does it follow that $f$
is continuous?
\item[(c)]
If~$X$ is a Lie group, $f$ is a homomorphism
of groups and $f|_{G_n}$ is smooth
for each $n\in {\mathbb N}$, does it follow that $f$
is smooth?
\item[(d)]
If~$X$ is a topological group,
$f$ is a homomorphism and $f|_{G_n}$ is continuous
for each $n\in {\mathbb N}$, does it follow that $f$
is continuous?
\end{description}
As we shall see, (a) and (b) are frequently not true
(unless compactness can be brought into play),
while (c) and (d) hold for our typical examples.
The preceding questions can be re-cast
in category-theoretic terms:
They amount to asking if $G$ is the direct limit
${\displaystyle\lim_{\longrightarrow}}\,G_n$\vspace{-.6mm}
in the categories of smooth manifolds,
topological spaces, Lie groups,
resp.,
topological groups
(see~\ref{explainnow}).
The relevant concepts from
category theory
will be recalled in Section~\ref{secprel}.
Questions (b) and (d) can be asked
just
as well if $G$ and each $G_n$ merely is a topological
group, and each inclusion map is a continuous
homomorphism.
Essential progress concerning
direct limits of topological groups
and their\linebreak
relations to direct limits
of topological spaces
was achieved in the last ten years,
notably by N. Tatsuuma, E. Hirai, T. Hirai
and N. Shimomura (see \cite{TSH} and \cite{HSTH})
as well as A. Yamasaki~\cite{Yam}.
In Section~\ref{secgp},
we recall the most relevant results.
\subsection{Existence of direct limit charts --
an essential hypothesis}\label{susecdlcha}
Meaningful results concerning the topics
raised above can only be expected
under additional hypotheses.
For instance,
our general setting includes
the situation
where each $G_n$ is discrete
but $G$ is not
(as we only assume that the inclusion maps
$G_n\to G$ are smooth).
In this situation,
algebraic properties
of the groups $G_n$
(like simplicity or perfectness)
pass to~$G$,
but we cannot expect to gain
information concerning the topological or
differentiable structure of~$G$
from information on
the groups~$G_n$.
A very mild additional hypothesis
is the existence
of a \emph{direct limit chart}.\index{direct limit chart}
Roughly speaking, this is
a chart of $G$ juxtaposed from charts of the Lie groups~$G_n$.
The formal definition
reads as follows (cf.\ \cite[Definition 2.1]{COM}):\\[2.7mm]
{\bf Definition.} \,A Lie group $G=\bigcup_{n\in {\mathbb N}}G_n$ is said to
admit a \emph{weak direct limit chart}\index{weak direct limit chart}
if there exists $n_0\in {\mathbb N}$,
charts $\phi_n\colon U_n\to V_n$
from open identity neighbourhoods
$U_n\subseteq G_n$ onto open $0$-neighbourhoods
$V_n\subseteq \Lie(G_n)$ in the tangent space
$\Lie(G_n):=T_{\bf 1}(G_n)$ at~${\bf 1}$ for $n\geq n_0$
and a chart $\phi\colon U\to V$
from an open\linebreak
identity neighbourhood
$U\subseteq G$ onto an open $0$-neighbourhood
$V\subseteq \Lie(G)$,
such that
\begin{description}
\item[(a)]
$U=\bigcup_{n\geq n_0}U_n$
and $U_n\subseteq U_{n+1}$ for each integer $n\geq n_0$; and
\item[(b)]
$\phi_{n+1}|_{U_n}=\Lie(j_{n+1,n})\circ \phi_n$
and
$\phi|_{U_n}=\Lie(j_n)\circ \phi_n$
for each $n\geq n_0$.
\end{description}
If, furthermore,
$\Lie(G)={\displaystyle\lim_{\longrightarrow}}\,\Lie(G_n)$\vspace{-.5mm}
as a locally convex space,\footnote{Here, we use the bonding maps
$\Lie(j_{m,n})\colon \Lie(G_n)\to\Lie(G_m)$
and the limit maps $\Lie(j_n)\colon \Lie(G_n)\to\Lie(G)$.}
then $G=\bigcup_{n\in{\mathbb N}}G_n$ is said to
admit a \emph{direct limit chart}.
Note that (b) implies that the linear maps
$\Lie(j_n)$ and $\Lie(j_{n+1,n})$
are injective on some $0$-neighbourhood and thus
injective. Hence, identifying
$\Lie(G_n)$ with its image
under $\Lie(j_n)$ in $\Lie(G)$,
we can re-write (b) as
\begin{description}
\item[(b)${}'$]
$\phi|_{U_n}=\phi_n$
and $\phi_{n+1}|_{U_n}=\phi_n$,
for each $n\geq n_0$.
\end{description}
Furthermore, we now simply have
$V=\bigcup_{n\geq n_0}V_n$.
To assume the existence of a direct
limit chart is
a natural requirement,
which is satisfied by all of our main
examples.
It provides a link
between the topologies (resp., manifold structures)
on $G$ and the Lie groups~$G_n$,
and will be encountered in connection with
most of the topics from above.\vspace{-1mm}
\subsection{Homotopy groups of ascending unions
of Lie groups}\label{subsechom}
Given a Lie group $G=\bigcup_{n\in{\mathbb N}}G_n$,
it is natural to ask if
its $k$-th homotopy group
can be calculated in terms of\index{homotopy group}
the homotopy groups $\pi_k(G_n)$ in the form\vspace{-.7mm}
\begin{equation}\label{dlhomot}
\pi_k(G)\;=\; {\displaystyle\lim_{\longrightarrow}}\, \pi_k(G_n)\,,\vspace{-2mm}
\end{equation}
for each $k\in {\mathbb N}_0$.
This is quite obvious if
$G=\bigcup_{n\in {\mathbb N}}G_n$ is \emph{compactly regular}\index{compactly regular}
in the sense that each compact subset~$K$ of~$G$\index{compact regularity}
is a compact subset of some~$G_n$
(see \cite[Proposition~3.3]{HOM};
cf.\ \cite[Remark~3.9]{FUN}
and \cite[Lemma~A.7]{NeO}
for special cases, as well as works
on stable homotopy theory and $K$-theory).
There is another, non-trivial condition:
\emph{If $G=\bigcup_{n\in{\mathbb N}}G_n$
admits a weak\linebreak
direct limit chart,
then} (\ref{dlhomot}) \emph{holds} \cite[Theorem~1.2]{HOM}.
A variant of this condition
even applies if $\bigcup_{n\in {\mathbb N}}G_n$
is merely dense in~$G$ (see Theorem~1.13 in \cite{HOM}).
Moreover, ascending unions
can be replaced with directed unions
over uncountable
families, and Lie groups with manifolds
(see Section~\ref{sechomotop}).
These results are based on approximation arguments.
Analogous results for open subsets of locally
convex spaces are classical~\cite{Pa2}.
We mention that knowledge of $\pi_0(G)=G/G_0$,
the fundamental group $\pi_1(G)$
and $\pi_2(G)$ is essential for the extension theory
of~$G$.
It is needed to understand the Lie group
extensions ${\bf 1} \to A\to \widehat{G}\to G\to {\bf 1}$
of~$G$ with abelian kernel,
by recent results of K.-H. Neeb
(see \cite{NeC}, \cite{NeA}, and \cite{NeN}).\vspace{-1mm}
\subsection{Regularity in Milnor's sense}\label{ssecreg}
Roughly speaking, a Lie group $G$
(modelled on a locally convex space)
is called a regular Lie group
if all differential equations on $G$
which are of relevance
for Lie theory can be solved,
and their solutions depend
smoothly on parameters.
To make this more precise,
given $g,h \in G$ and $v\in T_h(G)$
let us write $g\cdot v:=(T_h\lambda_g)(v)\in T_{gh}(G)$,
where $\lambda_g\colon G\to G$,
$x\mapsto gx$ denotes left translation by~$g$.\\[2.5mm]
{\bf Definition.} A Lie group~$G$
modelled on a locally convex space
is called a \emph{regular Lie group} (in Milnor's
sense) if for each smooth curve\index{regular Lie group}
$\gamma\colon [0,1]\to \Lie(G)$,
there exists a (necessarily unique)
smooth curve $\eta= \eta_\gamma\colon [0,1]\to G$
(a so-called ``product integral'')\index{product integral}
which solves the initial value problem
\begin{eqnarray}
\eta(0) & = & {\bf 1} \label{inval1}\\
\eta'(t) & = & \eta(t)\cdot \gamma(t)\;\;\mbox{for all $t\in [0,1]$}\label{inval2}
\end{eqnarray}
(with ${\bf 1} \in G$ the identity element),
and the ``evolution map''\index{evolution map}
\begin{equation}\label{basicevol}
\evol\colon C^\infty([0,1],\Lie(G))\to G\,,\quad
\evol(\gamma):=\eta_\gamma(1)
\end{equation}
is smooth (see \cite{Mr84}, \cite{GaN}
and \cite{NeS}).
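In the simplest commutative case $G=({\mathbb R}_{>0},\cdot)$ with $\Lie(G)={\mathbb R}$, the product integral is $\eta(t)=\exp\big(\int_0^t\gamma(s)\,ds\big)$, so the evolution map can be approximated numerically by an Euler scheme. The sketch below is purely illustrative (it is not part of the survey, and the step count is an arbitrary choice):

```python
import math

def evol(gamma, steps=100000):
    """Approximate the product integral eta(1) for G = (R_{>0}, *):
    solve eta'(t) = eta(t) * gamma(t), eta(0) = 1 by the Euler
    update eta <- eta * (1 + gamma(t) * dt)."""
    eta, dt = 1.0, 1.0 / steps
    for n in range(steps):
        eta *= 1.0 + gamma(n * dt) * dt
    return eta
```

For the constant curve $\gamma\equiv 1$ this returns approximately $e=\exp\big(\int_0^1 1\,dt\big)$, matching the closed-form solution in the abelian case.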
Regularity is a useful property,
which provides a link between $G$
and its Lie algebra.
In particular, regularity
ensures the existence of a smooth exponential
map $\exp_G\colon \Lie(G)\to G$, i.e.,\index{exponential map}
a smooth map such that, for each $v\in \Lie(G)$,\index{exponential function}
\[
\gamma_v\colon {\mathbb R}\to G\,,\quad t\mapsto \exp_G(tv)
\]
is a homomorphism of groups with initial
velocity $\gamma_v'(0)=v$ (cf.\ \cite{Mr84}).
The modelling space $E$ of a regular Lie group
is necessarily \emph{Mackey\linebreak
complete}
in the sense that the Riemann integral\index{Mackey complete}
$\int_0^1 \gamma(t)\,dt$ exists in~$E$
for each smooth curve $\gamma\colon {\mathbb R}\to E$
(cf.\ Lemma~A.5\,(1) and p.\,4 in \cite{NaW}).
Lie groups modelled
on non-Mackey complete locally convex spaces
need not even have an exponential
map. For example, this pathology occurs
for the group $G=A^\times$
of invertible elements
in the normed algebra $A\subseteq C[0,1]$ of all
restrictions to $[0,1]$
of rational functions without poles in $[0,1]$
(with the supremum norm).
Because~$A^\times$ is an open subset of~$A$,
it is a Lie group,
and it is not hard to see
that a smooth homomorphism
$\gamma_v\colon {\mathbb R}\to A^\times$
with $\gamma_v'(0)=v$
exists for $v\in A=\Lie(A^\times)$
only if $v$ is a constant function
\cite[Proposition~6.1]{ALG}.
Further information concerning
Mackey completeness can be found in \cite{KaM}.
At the time of writing, it is unknown
whether non-regular Lie groups modelled
on Mackey complete
locally convex spaces exist.
However, there is no general method of proof;
for each individual class of Lie groups,
very specific arguments are required to
verify regularity.
\noindent
It is natural to look for
conditions ensuring that a union $G=\bigcup_{n\in {\mathbb N}}G_n$
is a regular Lie group if so is each~$G_n$.
Already the
case of finite-dimensional Lie groups $G_n$ is
not easy~\cite{FUN}. In Section~\ref{secregu},
we preview work in progress concerning
the general case. We also describe a construction
which might lead to non-regular Lie groups
(Proposition~\ref{pathrg}).
The potential counterexamples are weak direct
products of suitable regular Lie groups.
\subsection{Subgroups of direct limit groups}
It is natural to try to use
information concerning the subgroups of Lie groups~$G_n$
to deduce results
concerning the subgroups of a Lie group
$G=\bigcup_{n\in {\mathbb N}}G_n$.
Aiming at a typical
example, let us recall that a topological group~$G$
is said to \emph{have no small subgroups}\index{small subgroups}
if there exists an identity neighbourhood
$U\subseteq G$ containing no subgroup
of~$G$ except for the trivial subgroup.
Although finite-dimensional
(and Banach-) Lie groups do not have
small subgroups, already for Fr\'{e}chet--Lie groups
the situation changes:
The additive group
of the Fr\'{e}chet space ${\mathbb R}^{\mathbb N}$
has small subgroups.
In fact, every
$0$-neighbourhood contains $]{-r},r[^n\times {\mathbb R}^{\{n+1,n+2,\ldots\}}$
for some $n\in {\mathbb N}$ and $r>0$.
It therefore contains the non-trivial
subgroup $\{0\}\times {\mathbb R}^{\{n+1,n+2,\ldots\}}$.
It is natural to ask whether
a Lie group $G=\bigcup_{n\in {\mathbb N}}G_n$
does not have small subgroups
if none of the Lie groups $G_n$
has small subgroups.
In Section~\ref{secsub}, we describe the available
answers to this question,
and various other results
concerning subgroups of direct limit groups.
\subsection{Constructions of Lie group structures
on ascending unions}
So far, we assumed that $G$ is already equipped
with a Lie group structure.
Sometimes, only an ascending
sequence $G_1\subseteq G_2\subseteq \cdots$ of Lie groups
is given such that all inclusion maps $G_n\to G_{n+1}$
are smooth homomorphisms.
It is then natural to ask
whether the union
$G=\bigcup_{n\in {\mathbb N}}G_n$
can be given a Lie group structure making each
inclusion map $G_n\to G$ a smooth homomorphism.\footnote{Or even making $G$
the direct limit ${\displaystyle\lim_{\longrightarrow}}\,G_n$\vspace{-.7mm} in the category
of Lie groups.}
We shall also discuss this
complementary problem (in Section~\ref{secconstr}).
If each $G_n$ is finite-dimensional,
then a Lie group structure on $G$ is always
available.
\subsection{Properties of locally convex direct limits}\label{subseclcx}
To enable an understanding of direct limits
of Lie groups, an understanding of various
properties of locally convex direct
limits is essential,
i.e., of direct limits
in the category of locally convex spaces.\index{locally convex direct limit}
For instance, we shall see that if a Lie group
$G=\bigcup_{n\in {\mathbb N}}G_n$ admits a direct limit chart,
then $G={\displaystyle\lim_{\longrightarrow}}\,G_n$ as a topological space
if and only if $\Lie(G)={\displaystyle\lim_{\longrightarrow}}\,\Lie(G_n)$\vspace{-.7mm}
as a topological space.
The latter property
is frequently easier to prove (or refute)
than the first.
Also compact regularity of~$G$
(as in~\ref{subsechom} above)
can be checked on the level of
the modelling spaces (see Lemma~\ref{thsreg}).
Another property is useful:
Consider a locally convex space~$E$
which is a union $\bigcup_{n\in {\mathbb N}}E_n$
of locally convex spaces, such that
all inclusion maps are continuous
linear maps. We say that $E$ is \emph{regular}
(or \emph{boundedly regular}, for added clarity)\index{boundedly regular}
if every bounded subset of~$E$
is a bounded subset of some~$E_n$.
If one wants
to prove that a Lie group $\bigcup_nG_n$ is regular
in Milnor's sense,
then it helps a lot if
one knows that $\Lie(G)=\bigcup_n\Lie(G_n)$
is compactly or boundedly regular
(see Section~\ref{secregu}).
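As a basic example of this regularity property (included here for illustration; it is a standard fact, not specific to this survey), consider the space $E:={\mathbb R}^\infty=\bigcup_{n\in {\mathbb N}}{\mathbb R}^n$ of finitely supported real sequences, equipped with the locally convex direct limit topology. This space is boundedly regular: every bounded subset $B\subseteq E$ is contained, and bounded, in some~${\mathbb R}^n$. In fact, if $B$ met infinitely many of the complements $E\setminus {\mathbb R}^n$, one could construct a continuous seminorm on~$E$ (weighting suitable coordinates) which is unbounded on~$B$.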
\subsection{Further comments, and some
historical remarks}\label{introhis}
The most typical examples of direct limits of finite-dimensional
Lie groups are unions
of classical groups like
$\GL_\infty({\mathbb C})=\bigcup_{n\in{\mathbb N}}\GL_n({\mathbb C})$
and its subgroups
$\GL_\infty({\mathbb R})=\bigcup_{n\in{\mathbb N}}\GL_n({\mathbb R})$,
$\gO_\infty({\mathbb R})=\bigcup_{n\in {\mathbb N}}\gO_n({\mathbb R})$
and $\U_\infty({\mathbb C})=\bigcup_{n\in {\mathbb N}}\U_n({\mathbb C})$,
where $A\in \GL_n({\mathbb C})$ is identified with
the block matrix
\[
\left(
\begin{array}{cc}
A & \; 0\\
0 & \;1
\end{array}
\right)
\]
in $\GL_{n+1}({\mathbb C})$.
Thus $\GL_\infty({\mathbb C})$ is the group
of invertible matrices of countable size,
which differ from the identity matrix
only at finitely many entries.
Groups of this form (and related ascending unions
of homogeneous spaces)
have been
considered for a long time in (stable) homotopy theory
and $K$-theory.
Furthermore, results concerning
their representation theory
can be traced back to the 1970s
(more details are given below).
However, only the group structure or topology
was relevant for these studies.
Initially, no attempt was made to consider them
as Lie groups.
The Lie group structure
on $\GL_\infty({\mathbb C})$ was first described in~\cite{Mr82},
and the Lie group structures on
$\U_\infty({\mathbb C})$ and $\gO_\infty({\mathbb R})$
were mentioned there (cf.\ also page 1053
in the survey article \cite{Mr84}).
The first systematic discussion of direct limits
of finite-dimensional Lie groups was given
in~\cite{NRW1} and \cite{NRW2}. Notably,
a Lie group structure on $G={\displaystyle\lim_{\longrightarrow}}\,G_n$\vspace{-.4mm}
was constructed there
under technical\linebreak
conditions
which ensure,
in particular, that
\[
{\displaystyle\lim_{\longrightarrow}}\,\exp_{G_n}\colon {\displaystyle\lim_{\longrightarrow}}\,\Lie(G_n)\to {\displaystyle\lim_{\longrightarrow}}\,G_n,\;\;\;
x\mapsto \exp_{G_n}(x)\;\;
\mbox{for all $\,n\in {\mathbb N}$, $\, x\in \Lie(G_n)$}\vspace{-.6mm}
\]
is a local homeomorphism at~$0$
(see Section~\ref{secconstr}
for the sketch of a more~general construction
from~\cite{FUN}).
Moreover,
situations were described in \cite{NRW2}
where ascending unions of Lie groups (or the corresponding
Lie algebras) can be completed with
respect to some coarser topology.
Also the Lie
group $C^\infty_c(M,H)=\bigcup_{n\in{\mathbb N}}C^\infty_{K_n}(M,H)$
is briefly discussed in \cite{NRW2}
(for finite-dimensional~$H$),
and in~\cite{AH93}.
Test function groups with values
in possibly infinite-dimensional Lie groups
were treated in~\cite{GCX}.
\mbox{Compare} \cite{ACM89} for an early discussion
of gauge groups and automorphism groups
of principal bundles over non-compact
manifolds in the inequivalent
``Convenient Setting''
of infinite-dimensional calculus
(cf.\ also \cite{KaM}).
\noindent
The first construction
of the Lie group structure
on $\Diff_c(M)$ was given in~\cite{Mic},
as part of a discussion
of manifold structures on
spaces of mappings\linebreak
between
non-compact manifolds.
Groups of germs of complex analytic\linebreak
diffeomorphisms of~${\mathbb C}^n$
around~$0$ were studied in~\cite{Pis}.
The real analytic
analogue was discussed in \cite{Lei},
groups of germs of more general diffeomorphisms
in~\cite{KaR}.
Further recent works will be described later.
It should be stressed that the
current article focusses on
direct limit groups as such, i.e., on their structure and properties.
Representation theory
and harmonic analysis on such groups
are outside its scope.
For completeness, we mention that
the study of (irreducible) unitary representations
of ascending unions of finite
groups started in the 1960s
(see \cite{Th64a} and \cite{Th64b}),
notably for the symmetric group $S_\infty:={\displaystyle\lim_{\longrightarrow}}\,S_n$.\vspace{-.4mm}
Representations of direct limits
of finite-dimensional
Lie groups were first investigated in the 1970s
(see \cite{Voi} for representations of $\U_\infty({\mathbb C})$,
\cite{KS77} for representations of $\U_\infty({\mathbb C})$ and $\SO_\infty({\mathbb R})$).
Ol'shanski\u{\i} \cite{Ol83}
studied representations for
infinite-dimensional
spherical pairs
like $(\GL_\infty({\mathbb R}),\SO_\infty({\mathbb R}))$,
$(\GL_\infty({\mathbb C}),\U_\infty({\mathbb C}))$ and
$(\U_\infty({\mathbb C}),\SO_\infty({\mathbb R}))$.
The representation theory
of direct limits of both finite groups
and finite-dimensional Lie groups
remains an active area of research.
Representations\linebreak
of
$\gO(\infty,\infty)$, $\U(\infty,\infty)$
and $\Sp(\infty,\infty)$
were studied in \cite{Dv02}
using infinite-dimensional adaptations
of Howe duality.
Novel results concerning the\linebreak
representation theory
of $\U_\infty({\mathbb C})$ and $S_\infty$
were obtained in~\cite{Ol03}
and~\cite{KOV}, respectively.
Versions of the Bott--Borel--Weil theorem
for direct limits of finite-dimensional
Lie groups were established in
\cite{NRW4} and \cite{DPW}
(in a more algebraic setting).
J.\,A. Wolf also investigated
principal series\linebreak
representations of suitable
direct limit groups~\cite{Wo05},
as well as the regular\linebreak
representation
on some direct limits of compact
symmetric spaces~\cite{Wo08}.
The paper~\cite{AaK}
discusses
representations of an infinite-dimensional
subgroup of unipotent matrices in $\GL_\infty({\mathbb R})$.
Finally, a version of Bochner's theorem for infinite-dimensional
spherical pairs was obtained in~\cite{Ra07}.
There also is a body of literature
devoted to irreducible representations
of diffeomorphism groups of (compact or) non-compact manifolds,
as well as quasi-invariant measures
and harmonic analysis thereon
(see, e.g.,
\cite{VGG75},
\cite{Ki81},
\cite{Hi93},
\cite{Sh01}, and \cite{Sh05}).
Representations of
$C^\infty_c(M,H)$ were studied
by R.\,S. Ismagilov~\cite{Ism}
and in~\cite{AH93}.
Such groups, $\Diff_c(M)$,
and semi\-direct products thereof
arise naturally in mathematical physics~\cite{Gol}.
Direct limits of finite-dimensional Lie groups
are also encountered
as dense subgroups of some interesting Banach--Lie groups
(like the group $\U_2({\mathcal H})$
of unitary operators on a complex
Hilbert space~${\mathcal H}$ which differ
from $\id_{\mathcal H}$ by a Hilbert--Schmidt operator)
and other groups of operators.
This frequently enables the calculation
of the homotopy groups of such groups
(see \cite{Pa1}, also \cite{NeH}),
exploiting that the homotopy groups of many direct limits
of classical groups (like $\U_\infty({\mathbb C})$)
can be determined using Bott periodicity.
Dense unions of finite-dimensional Lie groups
are also useful in representation theory
(see \cite{Ne98} and \cite{NeO}).
\noindent
We mention a more specialized result:
For very particular classes
of direct limits~$G$ of finite-dimensional
Lie groups, a classification is possible
which uses the
homotopy groups of~$G$ (notably $\pi_1(G)$
and $\pi_3(G)$); see \cite{Ku06}.
In contrast to direct (or inductive) limits,
the dual notion
of an \emph{inverse} (or \emph{projective}) \emph{limit}
of Lie groups was used much earlier
in infinite-dimensional Lie theory.
Omori's theory of ILB-Lie groups
(which are \emph{i}nverse \emph{l}imits
of \emph{B}anach manifolds) gave a strong impetus
to the development of the area in the late 1960s
and early 1970s (see \cite{Om97} and the references therein).
Many important examples of infinite-dimensional
Lie groups could be discussed in this approach,
e.g.\ the group $C^\infty(K,H)=\bigcap_{k\in {\mathbb N}_0}C^k(K,H)$
of smooth maps on a compact manifold~$K$
with values in a finite-dimensional Lie group~$H$,
and the group $\Diff(K)=
\bigcap_{k\in{\mathbb N}}\Diff^k(K)$
of $C^\infty$-diffeomorphisms of a compact\linebreak
manifold.
The passage from compact to non-compact
manifolds naturally leads to the consideration of
direct limits of compactly supported objects.
\section{\,Preliminaries, terminology and basic facts}\label{secprel}
{\bf General conventions.}
We write ${\mathbb N}:=\{1,2,\ldots\}$,
and ${\mathbb N}_0:={\mathbb N}\cup\{0\}$.
As usual,
${\mathbb R}$ and ${\mathbb C}$ denote
the fields of real and complex numbers, respectively.
If $(E,\|.\|)$ is a normed space,
$x\in E$ and $r>0$, we write $B^E_r(x):=\{y\in E\colon
\|y-x\|<r\}$.
Topological spaces, topological groups
and locally convex topological vector spaces
are not assumed Hausdorff. However,
manifolds are assumed Hausdorff,
and whenever a locally convex space
serves as the domain or range of a differentiable
map, or as the modelling space
of a Lie group or manifold, it is tacitly
assumed Hausdorff. Moreover, all compact
and all locally compact topological spaces are assumed Hausdorff.
We allow non-Hausdorff topologies
because direct limits are much easier
to describe if the Hausdorff property is omitted
(further explanations will be given at the end of this
section).\\[4mm]
{\bf Infinite-dimensional calculus.}
We are working in the setting of Keller's $C^k_c$-theory~\cite{Kel},
in a topological formulation that avoids the use
of convergence structures
(as in \cite{Mic}, \cite{Mr84},
\cite{RES}, \cite{NeS}, and \cite{GaN}).
For more information on
analytic maps, see, e.g., \cite{RES},
\cite{GaN} and (for ${\mathbb K}={\mathbb C}$)~\cite{BaS}.
\begin{numba}
{\rm Let ${\mathbb K}\in \{{\mathbb R},{\mathbb C}\}$,
$r\in {\mathbb N}\cup \{\infty\}$,
$E$ and $F$ be locally convex ${\mathbb K}$-vector spaces
and $f\colon U\to F$ be a map
on an open set $U\subseteq E$.
If $f$ is continuous, we say that $f$ is $C^0$.
We call $f$ a \emph{$C^r_{\mathbb K}$-map}
if $f$ is continuous, the iterated real
(resp., complex) directional
derivatives
\[
d^kf(x,y_1,\ldots, y_k)\; :=\;
(D_{y_k}\cdots D_{y_1}f)(x)
\]
exist for all $k\in {\mathbb N}$ such that $k\leq r$,
$x\in U$ and $y_1,\ldots, y_k\in E$,
and the maps $d^kf\colon U\times E^k\to F$ so obtained
are continuous.
If ${\mathbb K}$ is understood, we write $C^r$ instead
of $C^r_{\mathbb K}$.
If $f$ is $C^\infty$,
we also say that $f$ is \emph{smooth}.
If ${\mathbb K}={\mathbb R}$,
we say that $f$ is \emph{real analytic}\index{real analytic map}
(or $C^\omega_{\mathbb R}$)
if $f$ extends to a $C^\infty_{\mathbb C}$-map
$\widetilde{f}\colon \widetilde{U}\to F_{\mathbb C}$
on an open neighbourhood $\widetilde{U}$ of~$U$ in the complexification
$E_{\mathbb C}$ of~$E$.}
\end{numba}
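A standard first example for this notion of differentiability (which we include for concreteness; it is a well-known fact, not specific to the present text) is given by bilinear maps: every continuous ${\mathbb K}$-bilinear map $\beta\colon E_1\times E_2\to F$ between locally convex ${\mathbb K}$-vector spaces is $C^\infty_{\mathbb K}$. Indeed, since
\[
\beta(x_1+ty_1,x_2+ty_2)\;=\;\beta(x_1,x_2)+t\big(\beta(y_1,x_2)+\beta(x_1,y_2)\big)+t^2\beta(y_1,y_2),
\]
the first directional derivative is $d\beta((x_1,x_2),(y_1,y_2))=\beta(y_1,x_2)+\beta(x_1,y_2)$, the second iterated derivative is again built from values of~$\beta$, and all iterated derivatives of order $\geq 3$ vanish; all of the resulting maps are continuous because~$\beta$ is.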
\begin{numba}
We mention that a map $f\colon E\supseteq U\to F$ is $C^\infty_{\mathbb C}$
if and only if it is \emph{complex analytic},\index{complex analytic map}
i.e., $f$ is continuous and for each $x\in U$,
there exists a $0$-neighbourhood $Y\subseteq E$ with $x+Y\subseteq U$
and continuous homogeneous polynomials $p_n\colon E\to F$
of degree~$n$
such that\vspace{-2mm}
\[
(\forall y\in Y)\;\;\; f(x+y)\, = \sum_{n=0}^\infty \, p_n(y)\,.\vspace{-2mm}
\]
Complex analytic maps are also called \emph{${\mathbb C}$-analytic}
or $C^\omega_{\mathbb C}$.
\end{numba}
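As an illustration of the preceding characterization (a standard example, added here for concreteness): if $A$ is a complex Banach algebra, then the exponential map
\[
\exp\colon A\to A,\quad \exp(x)\;=\;\sum_{n=0}^\infty \frac{x^n}{n!}
\]
is complex analytic. At $x=0$, one can take $p_n(y):=\frac{1}{n!}\,y^n$, which is a continuous homogeneous polynomial of degree~$n$; the series converges for all $y\in A$ because $\big\|\frac{1}{n!}y^n\big\|\leq \frac{1}{n!}\|y\|^n$, and analogous expansions exist around every other point of~$A$.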
\begin{numba}
It is known that compositions of composable
$C^r_{\mathbb K}$-maps are $C^r_{\mathbb K}$, for each $r\in{\mathbb N}_0\cup\{\infty,\omega\}$.
Thus a
\emph{$C^r_{\mathbb K}$-manifold} $M$ modelled on
a locally convex ${\mathbb K}$-vector space~$E$ can be defined
in the usual way, as a Hausdorff
topological space, together with a maximal
set of homeomorphisms
from open subsets of~$M$ to open subsets of~$E$,
such that the domains cover~$M$ and the transition maps
are $C^r_{\mathbb K}$.
Given $r\in \{\infty,\omega\}$,
a \emph{$C^r_{\mathbb K}$-Lie group} is a group~$G$,
equipped with a structure of $C^r_{\mathbb K}$-manifold
modelled on a locally convex space,
such that the group multiplication and group inversion
are $C^r_{\mathbb K}$-maps. Unless the contrary is stated,
we consider $C^\infty_{\mathbb K}$-Lie groups.
Throughout the following,
the words ``manifold'' and ``Lie group''
will refer to manifolds and Lie groups
modelled on locally convex spaces.
We shall write $T_xM$ for the tangent
space of a manifold $M$ at $x\in M$
and $\Lie(G):=T_{\bf 1}(G)$ for the (topological) Lie algebra
of a Lie group~$G$. Given a $C^1_{\mathbb K}$-map $f\colon M\to N$
between $C^1_{\mathbb K}$-manifolds,
we write $T_xf\colon T_xM\to T_{f(x)}N$ for the tangent map
at $x\in M$. Given a smooth homomorphism $f\colon G\to H$,
we let $\Lie(f):=T_{\bf 1}(f)\colon \Lie(G)\to \Lie(H)$.
\end{numba}
{\bf Direct limits.} \,We recall terminology
and basic facts concerning direct limits.
\begin{numba} (General definitions).
Let $(I,\leq)$ be a directed set,
i.e., $I$ is a non-empty set and $\leq$ a partial order on~$I$
such that any two elements have an upper bound.
Recall that a \emph{direct system}\index{direct system}
(indexed by $(I,\leq)$)
in a category~${\mathbb A}$ is a pair
${\mathcal S}:=((X_i)_{i\in I},
(\phi_{ji})_{j\geq i})$, where each $X_i$ is an object of~${\mathbb A}$
and $\phi_{ji}\colon X_i\to X_j$
a morphism such that $\phi_{ii}=\id_{X_i}$
and $\phi_{kj}\circ\phi_{ji}=\phi_{ki}$,
for all elements $k\geq j \geq i$ in~$I$.
A \emph{cone over ${\mathcal S}$}\index{cone}
is a pair $(X,(\phi_i)_{i\in I})$,
where $X$ is an object of~${\mathbb A}$ and each $\phi_i\colon X_i\to X$
a morphism such that $\phi_j\circ \phi_{ji}=\phi_i$
whenever $j\geq i$. A cone $(X,(\phi_i)_{i\in I})$
is a \emph{direct limit of ${\mathcal S}$}\index{direct limit}
(and we
write $(X,(\phi_i)_{i\in I})={\displaystyle\lim_{\longrightarrow}}\,{\mathcal S}$\vspace{-.3mm}
or $X={\displaystyle\lim_{\longrightarrow}}\, X_i$),\vspace{-.3mm}
if for every cone $(Y,(\psi_i)_{i\in I})$ over ${\mathcal S}$,
there
exists a unique morphism $\psi\colon X\to Y$ such that $\psi\circ \phi_i
=\psi_i$
for all $i\in I$.
If ${{\mathcal T}} = ((Y_i)_{i\in I},(\psi_{ji})_{j\geq i})$ is another
direct system over the same index set, $(Y,(\psi_i)_{i\in I})$
a cone over ${\mathcal T}$, and $(\eta_i)_{i\in I}$ a family of morphisms
$\eta_i \colon X_i\to Y_i$ which is
\emph{compatible}
with the direct systems
in the sense that $\psi_{ji}\circ \eta_i=\eta_j\circ\phi_{ji}$
for all $j\geq i$, then $(Y,(\psi_i\circ\eta_i)_{i\in I})$
is a cone over ${\mathcal S}$. We write ${\displaystyle\lim_{\longrightarrow}}\,\eta_i$\vspace{-.9mm}
for the induced morphism $\psi\colon X\to Y$, determined by
$\psi\circ \phi_i=\psi_i\circ \eta_i$.
\end{numba}
Direct limits in the categories of sets,
groups and topological spaces are\linebreak
particularly easy
to understand, and we discuss them now.
Direct limits of\linebreak
topological groups
(which are a more difficult topic)
and direct limits of locally convex spaces
will be discussed afterwards in Sections~\ref{secgp}
and~\ref{seclcx}, respectively.
We concentrate on direct sequences (viz.,
the case $I={\mathbb N}$)
and actually on ascending sequences, to avoid technical
complications. This is the more justified because
(except for some counterexamples) hardly anything is known about
direct limits of direct systems of Lie groups
which do not admit a cofinal subsequence.
\begin{numba}\label{dlset} (Ascending unions of sets).
If $X_1\subseteq X_2\subseteq\cdots$
is an ascending sequence of sets,
let $\phi_{m,n}\colon X_n\to X_m$
be the inclusion map for $m,n\in {\mathbb N}$ with $m\geq n$.
Then ${\mathcal S}:=((X_n)_{n\in {\mathbb N}},(\phi_{m,n})_{m\geq n})$
is a direct system in the category $\mbox{${\mathbb S}{\mathbb E}{\mathbb T}$}$
of sets and maps.
Define $X:=\bigcup_{n\in {\mathbb N}}X_n$
and let $\phi_n\colon X_n\to X$
be the inclusion map.
Then $(X,(\phi_n)_{n\in {\mathbb N}})$ is a cone over~${\mathcal S}$
in $\mbox{${\mathbb S}{\mathbb E}{\mathbb T}$}$.
A sequence
of maps $\psi_n\colon X_n\to Y$
to a set~$Y$ gives rise to a cone $(Y,(\psi_n)_{n\in {\mathbb N}})$
if and only if
\[
\psi_m|_{X_n}\;=\; \psi_n\quad\mbox{for all $m,n\in {\mathbb N}$ with $m\geq n$.}
\]
Then $\psi\colon X\to Y$, $\psi(x):=\psi_n(x)$ if $n\in {\mathbb N}$
and $x\in X_n$, is a well-defined map,
and is uniquely determined by the requirement
that $\psi\circ \phi_n=\psi|_{X_n}=\psi_n$ for each $n\in{\mathbb N}$.
Thus $(X,(\phi_n)_{n\in {\mathbb N}})={\displaystyle\lim_{\longrightarrow}}\,{\mathcal S}$ in $\mbox{${\mathbb S}{\mathbb E}{\mathbb T}$}$.
\end{numba}
\begin{numba}\label{pardlgp}
(Direct limits of groups).
If each $X_n$ is a group in the situation of \ref{dlset}
and each $\phi_{m,n}$ a homomorphism,
then ${\mathcal S}$ is a direct system in the category~$\mathbb{G}$
of groups and homomorphisms.
If $x,y\in X$, there exists $n\in {\mathbb N}$
such that $x,y\in X_n$. We define the product of
$x$ and~$y$ in~$X$ as their product in~$X_n$,
i.e., $x\cdot y =\phi_n(x)\cdot \phi_n(y):=\phi_n(x\cdot y)$.
Since each $\phi_{m,n}$ is a homomorphism, $x\cdot y$
is independent of the choice of~$n$,
and it is clear that the product so defined makes~$X$
a group and each $\phi_n\colon X_n\to X$ a homomorphism.
If $(Y,(\psi_n)_{n\in{\mathbb N}})$ is a cone over~${\mathcal S}$ in~$\mathbb{G}$,
let $\psi\colon X\to Y$ be the unique map
such that $\psi\circ \phi_n=\psi_n$ for each $n\in {\mathbb N}$,
as in~\ref{dlset}.
Given $x,y\in X$, say $x,y\in X_n$, we then have
$\psi(xy)=\psi(\phi_n(xy))=\psi_n(xy)=\psi_n(x)\psi_n(y)=\psi(x)\psi(y)$,
whence~$\psi$ is a\linebreak
homomorphism.
Thus $(X,(\phi_n)_{n\in {\mathbb N}})={\displaystyle\lim_{\longrightarrow}}\,{\mathcal S}$
in~$\mathbb{G}$.\\[1mm]
If ${\mathcal S}=((X_n)_{n\in {\mathbb N}},(\phi_{m,n}))$
is a direct sequence of groups
(with $\phi_{m,n}\colon X_n\to X_m$
not necessarily injective), then $K_n:=\bigcup_{m\geq n}\ker(\phi_{m,n})$
is a normal subgroup of $X_n$. Consider the quotient groups
$G_n:=X_n/K_n$, the canonical quotient maps
$q_n\colon X_n\to G_n$
and the homomorphisms $\psi_{m,n}\colon G_n\to G_m$
determined by $\psi_{m,n}\circ q_n=q_m\circ \phi_{m,n}$.
Then each $\psi_{m,n}$ is injective
and it is clear that the direct limit $(G,(\psi_n)_{n\in {\mathbb N}})$
of the ``injective quotient system''
$((G_n)_{n\in{\mathbb N}},(\psi_{m,n}))$
yields a direct limit $(G,(\psi_n\circ q_n)_{n\in {\mathbb N}})$
of~${\mathcal S}$ (cf.\ \cite[\S3]{NRW1}).
\end{numba}
\begin{numba}\label{DLtop}
(Direct limits of topological spaces).
If each $X_n$ is a topological space
in the situation of~\ref{dlset}
and each $\phi_{m,n}\colon X_n\to X_m$
a continuous map, we equip $X=\bigcup_{n\in {\mathbb N}}X_n$
with the finest topology ${\mathcal O}_{\DL}$ making each inclusion map
$\phi_n\colon X_n\to X$ continuous
(the so-called
\emph{direct limit topology}).\index{direct limit topology}
Thus $U\subseteq X$ is open (resp., closed)
if and only if $\phi_n^{-1}(U)=U\cap X_n$ is open (resp., closed)
in~$X_n$ for each $n\in {\mathbb N}$.
Then $(X,(\phi_n)_{n\in {\mathbb N}})={\displaystyle\lim_{\longrightarrow}}\,{\mathcal S}$\vspace{-.7mm} in
the category ${\mathbb T}{\mathbb O}{\mathbb P}$ of topological spaces and continuous maps.
To see this, let
$(Y,(\psi_n)_{n\in {\mathbb N}})$ be a cone over~${\mathcal S}$ in~${\mathbb T}{\mathbb O}{\mathbb P}$.
Let $\psi\colon X\to Y$ be the unique map with
$\psi\circ \phi_n=\psi_n$ for each~$n$.
If $U\subseteq Y$ is open, then
$\psi^{-1}(U)\cap X_n=(\psi|_{X_n})^{-1}(U)=(\psi_n)^{-1}(U)$
is open in~$X_n$, for each~$n$.
Hence $\psi^{-1}(U)$ is open in~$X$
and thus $\psi$ is continuous.\\[2.5mm]
The direct system~${\mathcal S}$ is called
\emph{strict}\index{strict direct system}
if each $\phi_n$ is a topological embedding
(i.e., $X_{n+1}$ induces the topology of~$X_n$).
Then also the inclusion map $\phi_n\colon X_n\to X$ is a
topological embedding for each~$n$ \cite[Lemma~A.5]{NRW2}.
It is also known that~$X$ has the separation property $T_1$
if each $X_n$ is $T_1$ (see, e.g., \cite[Lemma~1.7\,(a)]{FUN}).
And in the case of a direct sequence
$X_1\subseteq X_2\subseteq\cdots$ of\linebreak
locally compact
spaces~$X_n$, the direct limit topology on $\bigcup_{n\in {\mathbb N}}X_n$
is Hausdorff (as observed in
\cite[Lemma~1.7\,(c)]{FUN},
the strictness hypotheses
in \cite[Proposition~4.1\,(ii)]{Han}
and \cite[Lemma~3.1]{DIR}
are unnecessary).
\end{numba}
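For instance (a standard example, included for concreteness), identify ${\mathbb R}^n$ with ${\mathbb R}^n\times\{0\}\subseteq {\mathbb R}^{n+1}$ and give the union ${\mathbb R}^\infty:=\bigcup_{n\in {\mathbb N}}{\mathbb R}^n$ of finitely supported real sequences the direct limit topology. Then a set $U\subseteq {\mathbb R}^\infty$ is open if and only if $U\cap {\mathbb R}^n$ is open in~${\mathbb R}^n$ for each~$n$, so that
\[
{\mathbb R}^\infty\;=\;{\displaystyle\lim_{\longrightarrow}}\,{\mathbb R}^n\quad\mbox{in ${\mathbb T}{\mathbb O}{\mathbb P}$,}
\]
and this topology is Hausdorff because each ${\mathbb R}^n$ is locally compact.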
{\bf First remarks on ascending unions of Lie groups and direct limits.}
Consider an ascending sequence $G_1\subseteq G_2\subseteq\cdots$
of $C^\infty_{\mathbb K}$-Lie groups,
such that the inclusion maps $j_{m,n}\colon G_n\to G_m$
are $C^\infty$-homomorphisms for all $m,n\in {\mathbb N}$ with $m\geq n$.
Then ${\mathcal S}:=((G_n)_{n\in {\mathbb N}},(j_{m,n})_{m\geq n})$
is a direct system in the category ${\mathbb L}{\mathbb I}{\mathbb E}_{\mathbb K}$
of $C^\infty_{\mathbb K}$-Lie groups
and $C^\infty_{\mathbb K}$-homomorphisms.
One would not expect
that ${\mathcal S}$ always
has a direct limit in the category
of $C^\infty_{\mathbb K}$-Lie groups
(although no counterexamples
are known at the time of writing).
What is more,
there is no general
construction principle
for a Lie group structure on $G:=\bigcup_{n\in {\mathbb N}}G_n$
such that all inclusion maps $j_n\colon G_n\to G$
are $C^\infty_{\mathbb K}$-homomorphisms
(unless restrictive
conditions are imposed,
as in Section~\ref{secconstr}).
\begin{numba}\label{explainnow}
However,
in many concrete cases
we are given
such a Lie group structure on $G:=\bigcup_{n\in {\mathbb N}}G_n$.
Then $(G,(j_n)_{n\in {\mathbb N}})$ is a cone over~${\mathcal S}$
in ${\mathbb L}{\mathbb I}{\mathbb E}_{\mathbb K}$,
and it is natural to ask if $G={\displaystyle\lim_{\longrightarrow}}\, G_n$\vspace{-.7mm}
as a Lie group. A sequence
of $C^\infty_{\mathbb K}$-homomorphisms $f_n\colon G_n\to H$
to a $C^\infty_{\mathbb K}$-Lie group~$H$
is a cone over ${\mathcal S}$ if and only if
\[
f_m|_{G_n}\;=\; f_n\quad\mbox{for all $m,n\in {\mathbb N}$ with $m\geq n$.}
\]
Then $f\colon G\to H$, $f(x):=f_n(x)$ if $n\in {\mathbb N}$
and $x\in G_n$, is a well-defined
homo\-mor\-phism. This map
is uniquely determined by the requirement
that $f\circ j_n=f|_{G_n}=f_n$ for each $n\in{\mathbb N}$.
Therefore, $(G,(j_n)_{n\in {\mathbb N}})={\displaystyle\lim_{\longrightarrow}}\,{\mathcal S}$
holds in ${\mathbb L}{\mathbb I}{\mathbb E}_{\mathbb K}$
if and only if each $f$ of the preceding form
is~$C^\infty_{\mathbb K}$. A similar argument applies if~$H$
is a topological group, smooth manifold or topological space.\\[2.5mm]
Thus questions (a)--(d)
posed in \ref{dlpropannounce}
amount to asking if
$G={\displaystyle\lim_{\longrightarrow}}\, G_n$\vspace{-.5mm} holds\,\ldots\vspace{-1mm}
\begin{description}[(a)$'$]
\item[(a)$'$]
in the category of smooth manifolds
(modelled on locally convex spaces)
and smooth maps between them?
\item[(b)$'$]
in the category of topological spaces
and continuous maps?
\item[(c)$'$]
in the category ${\mathbb L}{\mathbb I}{\mathbb E}_{\mathbb K}$ of Lie groups?
\item[(d)$'$]
in the category of topological groups
and continuous homomorphisms?
\end{description}
\end{numba}
{\bf The Hausdorff property.}
We allow non-Hausdorff topologies
because direct limits are much easier to
describe if the Hausdorff property is omitted.
In fact, we have already seen
that it is always possible to
topologize a union $X=\bigcup_{n\in {\mathbb N}}X_n$
of topological spaces
in such a way that it becomes
the direct limit ${\displaystyle\lim_{\longrightarrow}}\,X_n$\vspace{-.7mm}
in the category of topological spaces
(see \ref{DLtop}),
and likewise
a union of topological groups
(resp., locally convex spaces)
can always be made the direct limit
in the category of topological groups
resp., locally convex spaces
(see Sections~\ref{secgp}
and~\ref{seclcx}). A mere union $X=\bigcup_{n\in {\mathbb N}}X_n$
is a very concrete object, and easy to work with.
By contrast, if each $X_n$ is Hausdorff,
then the direct limit ${\displaystyle\lim_{\longrightarrow}}\,X_n$\vspace{-.7mm}
in the category of Hausdorff
topological spaces (resp., Hausdorff topological
groups, resp., Hausdorff locally convex spaces)
can only be realized as a
quotient of $X=\bigcup_{n\in {\mathbb N}}X_n$ in general,
and is a much more elusive object in this case.
Luckily, in all situations we are interested in,
$X$ from above injects\linebreak
continuously into a Lie group
and thus $X$ is Hausdorff. Then automatically~$X$
also is the direct limit in the category
of Hausdorff topological spaces (resp., Hausdorff topological
groups, resp., Hausdorff locally convex spaces).
\section{\,Direct limits of topological groups}\label{secgp}
As an intermediate step towards the study of Lie groups,
let us consider a sequence
$G_1\subseteq G_2\subseteq\cdots$ of topological groups,
such that all inclusion maps $G_n\to G_{n+1}$
are continuous homomorphisms.
We make $G=\bigcup_{n\in {\mathbb N}}G_n$
the direct limit group (as in \ref{pardlgp})
and give it the finest group
topology ${\mathcal O}_{\DLG}$ making each inclusion map
$G_n\to G$ continuous. Then $G={\displaystyle\lim_{\longrightarrow}}\,G_n$\vspace{-.7mm}
in the category of (not necessarily Hausdorff)
topological groups. Moreover,
if each $G_n$ is Hausdorff,
then the factor group of~$G$ modulo
the closure $\overline{\{{\bf 1}\}}\subseteq G$ is the direct limit
in the category of Hausdorff topological groups.\footnote{If $G$ is Hausdorff,
then no passage to the quotient
is necessary.}
Unfortunately, the preceding description of the
topology ${\mathcal O}_{\DLG}$ on the\linebreak
direct limit topological group is not at all concrete.
Various questions are natural (and also
relevant for our studies of Lie groups):
Does ${\mathcal O}_{\DLG}$ coincide with the direct
limit topology ${\mathcal O}_{\DL}$ (as in \ref{DLtop})?
\,Can ${\mathcal O}_{\DLG}$ be described more explicitly?
Given a group topology on $G=\bigcup_{n\in {\mathbb N}}G_n$,
how can we prove that it agrees with ${\mathcal O}_{\DLG}$?
We now give some answers to the first and last
question. An answer to the second question,
namely the description of ${\mathcal O}_{\DLG}$
as a so-called ``bamboo-shoot'' topology,
can be found in~\cite{TSH} and \cite{HSTH}
(under suitable\linebreak
hypotheses).\\[4mm]
{\bf Comparison of {\boldmath${\mathcal O}_{\DL}$} and {\boldmath${\mathcal O}_{\DLG}$}.}
It is clear from the definition that the\linebreak
direct limit topology ${\mathcal O}_{\DL}$ is finer
than ${\mathcal O}_{\DLG}$.
Moreover, ${\mathcal O}_{\DL}$ may be properly finer
than ${\mathcal O}_{\DLG}$,
as emphasized by Tatsuuma et al.\ \cite{TSH}.\footnote{In part of the
older literature, there was some confusion concerning
this point.}
To understand this difficulty, let $\eta_n\colon G_n\to G_n$, $x\mapsto x^{-1}$
and $\eta\colon G\to G$ be the inversion maps
and $\mu_n\colon G_n\times G_n\to G_n$, $(x,y)\mapsto xy$
as well as $\mu\colon G\times G\to G$
be the respective group multiplications.
Then
\[
\eta\;=\; {\displaystyle\lim_{\longrightarrow}}\,\eta_n\colon \big({\displaystyle\lim_{\longrightarrow}}\, G_n, {\mathcal O}_{\DL}\big)\to \big({\displaystyle\lim_{\longrightarrow}}\, G_n,{\mathcal O}_{\DL}\big)
\]
is always continuous.
However, it may happen that $\mu$ is discontinuous
(with respect to the product topology on $G\times G$),
in which case $(G,{\mathcal O}_{\DL})$ is not a topological
group and hence ${\mathcal O}_{\DL}\not={\mathcal O}_{\DLG}$.
We recall a simple example for this pathology
from~\cite{TSH}:
\begin{example}\label{badDL1}
Let $G_n:={\mathbb Q}\times {\mathbb R}^{n-1}$
with the addition and topology induced\linebreak
by ${\mathbb R}^n$.
Identifying ${\mathbb R}^{n-1}$ with the vector subspace ${\mathbb R}^{n-1}\! \times \{0\}$
of ${\mathbb R}^n$, we
obtain a strict direct sequence
$G_1\subseteq G_2\subseteq\cdots$
of metrizable topological groups.
It can be shown by direct calculation
that the direct limit topology
${\mathcal O}_{\DL}$
does not make the group multiplication
on $G:=\bigcup_{n\in {\mathbb N}}G_n$ continuous (see \cite[Example~1.2]{TSH}).
\end{example}
To understand the difficulties concerning the
group multiplication (in\linebreak
contrast to the group inversion)
on $G=\bigcup_{n\in {\mathbb N}}G_n$, note that we always
have a continuous map
\[
{\displaystyle\lim_{\longrightarrow}}\,\mu_n\colon \big({\displaystyle\lim_{\longrightarrow}}\, (G_n\times G_n), {\mathcal O}_{\DL}\big)\to \big({\displaystyle\lim_{\longrightarrow}}\, G_n,{\mathcal O}_{\DL}\big)\,.
\]
Thus $\mu$ is continuous as a map
from $(G\times G, {\mathcal O}_{\DL})$ to $(G,{\mathcal O}_{\DL})$,
i.e., it becomes continuous if,
instead of the product
topology, the topology ${\mathcal O}_{\DL}$ is used on $G\times G$
which makes it the direct limit topological space
${\displaystyle\lim_{\longrightarrow}}\,(G_n\times G_n)$.\vspace{-.7mm}
This topology is finer than the product topology
and, in general, properly finer.
If the direct limit topology
on $G\times G$ happens to coincide
with the product topology,
then $(G,{\mathcal O}_{\DL})$
is a topological group and thus ${\mathcal O}_{\DL}={\mathcal O}_{\DLG}$
(cf.\ \cite{HSTH} and \cite[\S3]{DIR}).
The following proposition describes a
situation where the two topologies
coincide. We
recall that a topological
space $X$ is said to be a
\emph{$k_\omega$-space}\index{$k_\omega$-space}
if it is the direct limit topological space
of an ascending sequence
$K_1\subseteq K_2\subseteq\cdots$ of compact topological
spaces (see, e.g., \cite{GGH} and the references
therein).\footnote{These spaces can also be characterized as
the hemicompact $k$-spaces.}
Such spaces are always Hausdorff (see \ref{DLtop}).
For example, every $\sigma$-compact, locally compact
space is a $k_\omega$-space.
A topological space~$X$ is
called \emph{locally~$k_\omega$}\index{locally $k_\omega$ space}
if every point $x\in X$ has an open neighbourhood in~$X$
which is a $k_\omega$-space
in the induced topology \cite[Definition~4.1]{GGH}.
E.g., every locally compact topological
space is locally~$k_\omega$.
The topological space underlying
a topological group~$G$ is locally~$k_\omega$
if and only if~$G$ has an open subgroup
which is a $k_\omega$-space \cite[Proposition~5.3]{GGH}.
See \cite[Proposition~4.7]{GGH}
for the following fact.
The special case where each $X_n$ and $Y_n$
is locally compact was first proved in \cite[Theorem~4.1]{HSTH}
(cf.\ also \cite[Proposition~3.3]{DIR} for the strict case):
\begin{proposition}
Let
$X_1\subseteq X_2\subseteq \cdots$
and $Y_1\subseteq Y_2\subseteq \cdots$
be topological spaces
with continuous inclusion maps $X_n\to X_{n+1}$
and $Y_n\to Y_{n+1}$.
If each $X_n$ and each $Y_n$
is locally $k_\omega$,
then
\[
{\displaystyle\lim_{\longrightarrow}}\, (X_n\times Y_n)\;=\; \big({\displaystyle\lim_{\longrightarrow}}\,X_n\big)\times\big({\displaystyle\lim_{\longrightarrow}}\,Y_n\big)
\]
as a topological space.
\end{proposition}
Using that direct limits of ascending sequences
of locally $k_\omega$-spaces are\linebreak
locally~$k_\omega$
by \cite[Proposition~4.5]{GGH}
(and thus Hausdorff),
the preceding discussion
immediately entails the following conclusion
from~\cite{GGH}
(cf.\ \cite[Theorem~2.7]{TSH}
for locally compact $G_n$,
as well as \cite[Corollary~3.4]{DIR}
(in the case of a strict direct system)).
\begin{corollary}
Consider a sequence
$G_1\subseteq G_2\subseteq\cdots$
of topological groups
such that each inclusion map $G_n\to G_{n+1}$
is a continuous homomorphism.
If the topological space underlying $G_n$
is locally $k_\omega$ for each $n\in {\mathbb N}$
$($for example, if each $G_n$ is locally compact$)$,
then the direct limit topology is Hausdorff
and makes $G=\bigcup_{n\in {\mathbb N}}G_n$
the direct limit topological group.
\end{corollary}
\begin{numba}
Given topological groups
$G_1\subseteq G_2\subseteq \cdots$
such that all inclusion maps $G_n\to G_{n+1}$
are continuous homomorphisms,
consider the conditions:
\begin{description}[(D)]
\item[(a)]
$G_n$ is an open subgroup of $G_{n+1}$
(with the induced topology)
for all sufficiently large $n$.
\item[(b)]
For each sufficiently large~$n$,
the topological group $G_n$ has an identity neighbourhood $U$
whose closure in $G_m$ is compact
for some $m\geq n$.
\end{description}
Then ${\mathcal O}_{\DL}={\mathcal O}_{\DLG}$ holds
if (a) or (b) is satisfied~\cite[Theorems~2 and 3]{Yam}.
By a most remarkable theorem of Yamasaki\index{Yamasaki's Theorem}
\cite[Theorem~4]{Yam},
the validity of (a) or (b)
is also \emph{necessary} in order that ${\mathcal O}_{\DL}={\mathcal O}_{\DLG}$,
provided that each $G_n$ is metrizable
and the inclusion maps $G_n\to G_{n+1}$
are topological embeddings.
\end{numba}
{\bf Criteria ensuring that a given group topology
coincides with {\boldmath${\mathcal O}_{\DLG}$}.}
Frequently, a given
topological group~$G$
is a union $G=\bigcup_{n\in {\mathbb N}}G_n$
of topological groups,
such that all inclusion maps $G_n\to G_{n+1}$
and $G_n\to G$ are continuous homomorphisms.
In many cases,
a criterion from~\cite{COM}
helps to see that
the given topology on~$G$ coincides
with ${\mathcal O}_{\DLG}$ (cf.\ \cite[Proposition~11.8]{COM}).\\[2.5mm]
The criterion uses the weak direct product
$\prod_{n\in {\mathbb N}}^*G_n$\index{weak direct product} as a tool.
The latter can be formed for
any sequence $(G_n)_{n\in {\mathbb N}}$ of topological
groups. It is defined as the subgroup
of all $(g_n)_{n\in {\mathbb N}}\in \prod_{n\in {\mathbb N}}G_n$
such that $g_n={\bf 1}$ for all but finitely many~$n$.
The weak direct product is a topological
group; a basis for its topology (the so-called
``box topology'')\index{box topology}
is given by sets of the form
$\prod_{n\in {\mathbb N}}U_n\cap \prod^*_{n\in{\mathbb N}}G_n$
(the ``boxes''),\index{box}
where $U_n\subseteq G_n$ is open for each $n$
and ${\bf 1}\in U_n$ for almost all~$n$.\\[2.5mm]
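For instance, if $G_n:=({\mathbb R},+)$ for each $n\in {\mathbb N}$,
then $\prod_{n\in {\mathbb N}}^*G_n={\mathbb R}^{({\mathbb N})}$
is the group of finitely supported real sequences,
and the boxes are the sets
\[
{\textstyle \prod_{n\in {\mathbb N}}}\,]{-\varepsilon_n},\varepsilon_n[\;\cap\;{\mathbb R}^{({\mathbb N})}
\quad\mbox{with arbitrary $\varepsilon_n>0$;}
\]
as the $\varepsilon_n$ may shrink arbitrarily fast,
the box topology is strictly finer than the topology induced by
the direct product $\prod_{n\in {\mathbb N}}{\mathbb R}$.\\[2.5mm]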
Returning to the case where $G_1\subseteq G_2\subseteq\cdots$
and $G=\bigcup_{n\in {\mathbb N}}G_n$, we can consider the ``product
map''\index{product map}
\[
\pi\colon
{\textstyle \prod_{n\in {\mathbb N}}^*} \, G_n \to G\,,
\quad (g_n)_{n\in {\mathbb N}} \mapsto g_1g_2\cdots g_N\,,
\]
where $N\in {\mathbb N}$ is so large that $g_n={\bf 1}$ for all $n>N$
(the value of the product is independent of the choice of such~$N$).
\begin{proposition}\label{propcriter}
If the product map $\pi\colon
\prod_{n\in {\mathbb N}}^* G_n \to G$ is open at~${\bf 1}$,
then the given topology on~$G$ coincides with ${\mathcal O}_{\DLG}$
and thus $G={\displaystyle\lim_{\longrightarrow}}\, G_n$\vspace{-.3mm}
as a topological group.
The openness of $\pi$ at~${\bf 1}$ is guaranteed
if there exists a map $\sigma\colon \Omega \to \prod^*_{n\in {\mathbb N}}G_n$
on an identity neighbourhood $\Omega\subseteq G$
such that $\pi\circ\sigma=\id_\Omega$,
$\sigma({\bf 1})={\bf 1}$ and $\sigma$ is continuous at~${\bf 1}$.
\end{proposition}
\begin{remark}
Such a section~$\sigma$ to $\pi$ might be called
a \emph{fragmentation map},\index{fragmentation map}
in analogy to concepts in the theory
of diffeomorphism groups (cf.\ \cite[\S2.1]{Ban}).
\end{remark}
\begin{example}
It can be shown that the Lie groups
$\Diff_c(M)=\bigcup_{n\in {\mathbb N}}\Diff_{K_n}(M)$
and $C^r_c(M,H)=\bigcup_{n\in {\mathbb N}}C^r_{K_n}(M,H)$
(as defined in the introduction)
always admit fragmentation maps
(even smooth ones);
cf.\ \cite[Lemmas~5.5 and 7.7]{COM}.
Hence
$\Diff_c(M)={\displaystyle\lim_{\longrightarrow}}\,\Diff_{K_n}(M)$
and $C^r_c(M,H)={\displaystyle\lim_{\longrightarrow}}\,C^r_{K_n}(M,H)$\vspace{-.7mm}
as topological groups.
\end{example}
\section{\,Non-linear mappings on locally convex
direct limits}\label{seclcx}
Consider a sequence $E_1\subseteq E_2\subseteq \cdots$
of locally convex spaces,
such that each inclusion map $E_n\to E_{n+1}$
is continuous and linear.
Then there is a finest locally convex vector topology ${\mathcal O}_{\lcx}$
on $E:=\bigcup_{n\in {\mathbb N}}E_n$
making each inclusion map $E_n\to E$
continuous,
called the \emph{locally convex direct limit
topology}.\footnote{This
topology can be described in various ways.
We mention: (1) A convex set $U\subseteq E$ is
open if and only if $U\cap E_n$ is open in~$E_n$,
for each~$n\in {\mathbb N}$.
(2) A seminorm $q\colon E\to [0,\infty[$ is
continuous if and only if $q|_{E_n}$ is
continuous, for each $n\in {\mathbb N}$.}\index{locally convex direct limit topology}
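For instance, for $E_n:={\mathbb R}^n$ the locally convex direct limit
${\mathbb R}^{({\mathbb N})}=\bigcup_{n\in {\mathbb N}}{\mathbb R}^n$
of finitely supported real sequences carries the finest
locally convex vector topology:
by description~(2) in the footnote, every seminorm on ${\mathbb R}^{({\mathbb N})}$
is continuous, its restriction to the finite-dimensional space ${\mathbb R}^n$
being automatically continuous.
Thus, e.g.,
\[
q(x)\;:=\;\sum_{n\in {\mathbb N}} c_n|x_n|
\]
is a continuous seminorm on ${\mathbb R}^{({\mathbb N})}$
for each sequence $(c_n)_{n\in {\mathbb N}}$ of positive reals
(the sum being finite for each~$x$).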
\begin{numba}\label{baslcx}
Some basic properties
of locally convex direct limits
are frequently used:
\begin{description}[(D)]
\item[(a)]
If the direct sequence $E_1\subseteq E_2\subseteq \cdots$
is strict, then
$(E,{\mathcal O}_{\lcx})$ induces the given topology on~$E_n$,
for each $n\in {\mathbb N}$
(see Proposition~9\,(i) in
\cite[Chapter~II, \S4, no.\,6]{BTV}).
\item[(b)]
If the direct sequence $E_1\subseteq E_2\subseteq \cdots$
is strict and each $E_n$ is Hausdorff,
then also $(E,{\mathcal O}_{\lcx})$ is Hausdorff
(see Proposition~9\,(i) in
\cite[Chapter~II, \S4, no.\,6]{BTV}).
\item[(c)]
If the direct sequence $E_1 \subseteq E_2\subseteq \cdots$
is strict and each $E_n$ complete,
then the locally convex direct limit $E=\bigcup_{n\in {\mathbb N}}E_n$
is boundedly regular
(cf.\ Proposition~6
in \cite[Chapter~III, \S1, no.\,4]{BTV})
and hence also compactly regular, in view of~(a).
\item[(d)]
If the direct sequence $E_1\subseteq E_2\subseteq \cdots$ is strict and each $E_n$
complete, then also the locally
convex direct limit $E$ is complete
(see Proposition~9\,(iii) in
\cite[Chapter~II, \S4, no.\,6]{BTV}).
\item[(e)]
If also $F_1\subseteq F_2\subseteq\cdots$ is an ascending sequence
of locally convex spaces, with locally convex
direct limit
$F=\bigcup_{n\in {\mathbb N}}F_n$,
then the locally convex\linebreak
direct
limit topology on $\bigcup_{n\in {\mathbb N}}(E_n\times F_n)$
and the product topology on $E\times F$ coincide
\cite[Theorem~3.4]{HSTH}
(because finite direct products
coincide with finite direct sums
in the category of locally convex spaces).
\end{description}
\end{numba}
The reader may find \cite{Fl80}
and \cite{Bie} convenient points of entry to
the research literature on locally convex direct limits.
We mention that
few general results ensuring the
Hausdorff property for
locally convex direct limits
$E={\displaystyle\lim_{\longrightarrow}}\, E_n$\vspace{-.7mm}
are known
(besides \ref{baslcx}\,(b) just encountered
and Proposition~\ref{propslv} below).
Some Hausdorff criteria for
direct limits of
Banach spaces (and normed spaces) can be found in \cite{Fl79}
(see also \cite[p.\,214]{Fl80}).
In concrete examples, a very simple argument
frequently works:
If one can find an injective continuous linear map
from $E$ to some Hausdorff locally convex space,
then~$E$ is Hausdorff.
However, the fact remains that
non-Hausdorff locally convex
direct limits do exist:
See~\cite{Mk63} for examples
where each $E_n$ is a Banach space;
\cite[p.\,207]{Fl80} for a simple example (due to \mbox{L.~Waelbroeck)}
with each $E_n$ a normed space;
and \cite[p.\,227, Corollary~2]{Fl80} for an example where
each $E_n$ is a nuclear Fr\'{e}chet space (cf.\ also \cite{Sm69}).
It is well known that ${\mathcal O}_{\lcx}$ and ${\mathcal O}_{\DLG}$
coincide on $E=\bigcup_{n\in {\mathbb N}}E_n$,
because on $\bigoplus_{n\in {\mathbb N}}E_n=\bigcup_{n\in {\mathbb N}}
\prod_{k=1}^nE_k$,
both ${\mathcal O}_{\lcx}$ and ${\mathcal O}_{\DLG}$
coincide with the box topology
and $\bigcup_{n\in {\mathbb N}}E_n$
(with either topology) can be considered
as a quotient of the direct sum
(see \cite[Lemma~2.7]{COM}; cf.\ \cite[Proposition~3.1]{HSTH}
for a different argument; cf.\ also
\cite{Ko69} and \cite[Chapter~II, Exercise 14 to \S4]{BTV}).
It is also known that ${\mathcal O}_{\lcx}$ need
not coincide with ${\mathcal O}_{\DL}$ (see \cite{Sr59}
or\linebreak
Exercise 16\,(a) to \S4 in \cite[Chapter~II]{BTV};
cf.\ also \cite[p.\,506]{Dud}).
E.g., ${\mathcal O}_{\lcx}$ is properly coarser than
${\mathcal O}_{\DL}$ if each $E_n$ is an infinite-dimensional
Fr\'{e}chet space and $E_n$ is a proper vector subspace of $E_{n+1}$
with the induced topology, for each $n\in {\mathbb N}$
(\cite[Proposition~4.26\,(ii)]{KaM};
cf.\ Yamasaki's Theorem recalled in
Section~\ref{secgp}).
The following concrete
example shows that not even
smoothness or analyticity
of $f|_{E_n}$
ensures that a map
$f\colon E\to F$
on a locally convex direct limit $E=\bigcup_{n\in {\mathbb N}}E_n$
is continuous (let alone smooth or analytic).
\begin{example}\label{extenso}
Consider the map
\[
g\colon C^\infty_c({\mathbb R},{\mathbb C})\to C^\infty_c({\mathbb R}\times {\mathbb R},{\mathbb C})\,,\quad
g(\gamma)\,:=\, \gamma\otimes \gamma
\]
between spaces of compactly supported
smooth functions,
where $(\gamma\otimes \gamma)(x,y)$
$:=\gamma(x)\gamma(y)$
for $x,y\in {\mathbb R}$. It can be shown that
$g$ is discontinuous, although
$g|_{C^\infty_{[{-n},n]}({\mathbb R},{\mathbb C})}\colon
C^\infty_{[{-n},n]}({\mathbb R},{\mathbb C})\to C^\infty_c({\mathbb R}\times {\mathbb R},{\mathbb C})$
is a continuous homogeneous polynomial
(and hence complex analytic), for each $n\in {\mathbb N}$
(see Remark~7.9 in \cite{COM},
based on \cite[Theorem~2.4]{HSTH}).
\end{example}
\begin{remark}\label{whybad}
Consider the locally convex direct limit
$E=\bigcup_{n\in {\mathbb N}}E_n$
of Hausdorff locally convex spaces $E_1\subseteq E_2\subseteq \cdots$
over ${\mathbb K}\in \{{\mathbb R},{\mathbb C}\}$.
Let
$U_1\subseteq U_2\subseteq \cdots$ be an ascending sequence
of open sets $U_n\subseteq E_n$,
and $U:=\bigcup_{n\in {\mathbb N}}U_n$.
Let $r\in {\mathbb N}\cup \{\infty\}$,
$F$ be a Hausdorff locally convex space
and $f\colon U\to F$ be a map such that
$f|_{U_n}\colon E_n\supseteq U_n\to F$ is $C^r_{\mathbb K}$
for each $n\in {\mathbb N}$.
Assume that $E$ is Hausdorff and $U\subseteq E$ is open.\footnote{E.g.,
we might start with an open set $U\subseteq E$ and set
$U_n:=U\cap E_n$.}
Then the iterated directional
derivatives
\[
d^kf(x,y_1,\ldots,y_k)\;=\;
(D_{y_k}\cdots D_{y_1}f)(x)
\]
exist for all $k\in {\mathbb N}$ with $k\leq r$
and all $x\in U$ and $y_1,\ldots, y_k\in E$,
because $x\in U_n$ and $y_1,\ldots, y_k\in E_n$
for some $n\in {\mathbb N}$ and then
$(D_{y_k}\cdots D_{y_1}f)(x)=d^k(f|_{U_n})(x,y_1,\ldots, y_k)$.
Hence only continuity
of the maps $d^kf$,
which satisfy
\begin{equation}\label{givesCk}
d^kf|_{U_n\times (E_n)^k}\;=\; d^k(f|_{U_n})\quad\mbox{for all $n\in {\mathbb N}$,}
\end{equation}
may be missing for some~$k$,
and may prevent $f$ from being a $C^r_{\mathbb K}$-map.
\end{remark}
We mention that locally convex direct limits
of ascending sequences of Banach spaces
(resp., Fr\'{e}chet spaces) are called
(LB)-spaces (resp., (LF)-spaces).
If the sequence is strict,
we speak of LB-spaces (resp., LF-spaces).\footnote{These
conventions are local.
The meanings of `LF' and `(LF)'
vary in the literature.}\index{(LB)-space}\index{LB-space}
\index{(LF)-space}\index{LF-space}
A locally convex space~$E$ is called
a \emph{Silva space}\index{Silva space}
if it is the locally convex direct limit
of an ascending sequence $E_1\subseteq E_2\subseteq\cdots$
of Banach spaces, such that all inclusion maps
$E_n\to E_{n+1}$ are compact operators
(cf.\ \cite{Si55} and \cite{Fl71}).\footnote{A locally
convex space is a Silva space
if and only if it is isomorphic to the dual
of a Fr\'{e}chet-Schwartz space~\cite{Fl71};
therefore Silva spaces are also called
(DFS)-spaces.}
Silva spaces are
very well-behaved
direct limits. We recall from~\cite{Fl71}:
\begin{proposition}\label{propslv}
If $E=\bigcup_{n\in {\mathbb N}}E_n$ is a Silva space, then
the following hold:
\begin{description}[(D)]
\item[\rm(a)]
$E$ is Hausdorff and complete;
\item[\rm(b)]
$E=\bigcup_{n\in {\mathbb N}}E_n$ is boundedly regular
and hence also compactly regular;\,\footnote{Using that
the inclusion maps $E_n\to E_{n+1}$ are
compact operators.}
\item[\rm(c)]
The locally convex direct limit topology on~$E$
coincides with the direct limit topology ${\mathcal O}_{\DL}$;
\item[\rm(d)]
If also $F=\bigcup_{n\in {\mathbb N}}F_n$ is a Silva space,
with $F_n\to F_{n+1}$ compact,
then $E\times F=\bigcup_{n\in {\mathbb N}}(E_n\times F_n)$
is a Silva space.\footnote{The inclusions
$E_n\times F_n\to E_{n+1}\times F_{n+1}$ are
compact operators, and \ref{baslcx}\,(e) holds.}
\end{description}
\end{proposition}
Some interesting
infinite-dimensional Lie groups are modelled
on Silva spaces, e.g.\ the group
$\Diff^\omega(K)$ of real analytic diffeomorphisms
of a compact real analytic manifold~$K$ (see \cite{Les};
cf.\ \cite[Theorem~43.4]{KaM}).
More examples will be encountered below.\\[3mm]
{\bf Mappings on Silva spaces or unions of
{\boldmath $k_\omega$}-spaces.}
In good cases,
the pathology described in Remark~\ref{whybad}
cannot occur (see \cite[Lemma~9.7]{COM}
and \cite[Proposition~8.12]{GGH}):
\begin{proposition}\label{silvkom}
Consider the locally convex direct limit
$E=\bigcup_{n\in {\mathbb N}}E_n$
of Hausdorff locally convex spaces $E_1\subseteq E_2\subseteq \cdots$
over ${\mathbb K}\in \{{\mathbb R},{\mathbb C}\}$.
Let\linebreak
$U_1\subseteq U_2\subseteq \cdots$ be an ascending sequence
of open sets $U_n\subseteq E_n$,
and $U:=\bigcup_{n\in {\mathbb N}}U_n$.
Let $r\in {\mathbb N}_0\cup \{\infty\}$,
$F$ be a Hausdorff locally convex space
and $f\colon U\to F$ be a map such that
$f|_{U_n}$ is $C^r_{\mathbb K}$
for each $n\in {\mathbb N}$.
Assume that
\begin{description}[(D)]
\item[\rm(a)]
Each $E_n$ is a $k_\omega$-space; or:
\item[\rm(b)]
$E_n$ is a Banach space
and the inclusion map $E_n\to E_{n+1}$
a compact operator, for each $n\in {\mathbb N}$
$($in which case $E$ is a Silva space$)$.
\end{description}
Then $E$ is Hausdorff
and the locally convex direct
limit topology on~$E$ coincides with ${\mathcal O}_{\DL}$.
Moreover, $U$ is open in~$E$
and $f\colon U\to F$ is $C^r_{\mathbb K}$.
\end{proposition}
In the Silva case, the hypotheses
of Proposition~\ref{silvkom}
can be relaxed
(cf.\ \cite[Proposition~2.8]{Ls85}).
Real analyticity is more elusive.
E.g., there exists a real-valued
map~$f$ on the Silva space ${\mathbb R}^{({\mathbb N})}:={\displaystyle\lim_{\longrightarrow}}\,{\mathbb R}^n$\vspace{-.5mm}
which is not real analytic
although $f|_{{\mathbb R}^n}$ is real analytic
for each $n\in {\mathbb N}$ (cf.\ \cite[Example~10.8]{KaM}).\\[3mm]
{\bf Complex analytic maps on (LB)-spaces.}
A very useful result
from \cite{Dah}
frequently makes it easier
to check complex analyticity
beyond Silva spaces.
\begin{theorem}[Dahmen's Theorem]\label{thmdah}
Let $E_1\subseteq E_2\subseteq\cdots$\index{Dahmen's Theorem}
be an ascending sequence of normed spaces
$(E_n,\|.\|_n)$ over~${\mathbb C}$ such that, for each $n\in {\mathbb N}$,
the inclusion map
$E_n\to E_{n+1}$ is continuous and complex linear,
of operator norm at most~$1$.
Let $r\in \;]0,\infty[$,
$U_n:=\{x\in E_n\colon \|x\|_n<r\}$
for $n\in {\mathbb N}$,
and $F$ be a complex
locally convex space.
Assume that the locally convex direct
limit $E=\bigcup_{n\in {\mathbb N}}E_n$ is Hausdorff.
Then $U:=\bigcup_{n\in {\mathbb N}}U_n$
is open in~$E$ and if
\[
f\colon U \to F
\]
is a map
such that $f|_{U_n}\colon E_n\supseteq U_n\to F$
is complex analytic and bounded
for each $n\in {\mathbb N}$,
then $f$ is complex analytic.
\end{theorem}
{\bf Mappings between direct sums.} If $(E_n)_{n\in {\mathbb N}}$
is a sequence of locally\linebreak
convex spaces,
we equip
$\bigoplus_{n\in {\mathbb N}} E_n$
with the box topology (as introduced before
Proposition~\ref{propcriter}).
See \cite[Proposition~7.1]{MEA} for the following result.
\begin{proposition}\label{dsummp}
Let $(E_n)_{n\in {\mathbb N}}$ and $(F_n)_{n\in {\mathbb N}}$
be sequences of Hausdorff\linebreak
locally convex spaces,
$r\in {\mathbb N}_0\cup\{\infty\}$,
$U_n\subseteq E_n$ be open
and $f_n\colon U_n\to F_n$ be~$C^r$.
Assume that $0\in U_n$
and $f_n(0)=0$ for all but finitely many
$n\in {\mathbb N}$.
Then $\bigoplus_{n\in {\mathbb N}}U_n:=(\bigoplus_{n\in {\mathbb N}} E_n)\cap \prod_{n\in{\mathbb N}}U_n$
is open in $\bigoplus_{n\in{\mathbb N}}E_n$ and the
map
$\oplus_{n\in {\mathbb N}}f_n\colon \bigoplus_{n\in {\mathbb N}}U_n\to
\bigoplus_{n\in {\mathbb N}}F_n$,
$(x_n)_{n\in {\mathbb N}}\mapsto (f_n(x_n))_{n\in {\mathbb N}}$
is $C^r$.\vspace{1mm}
\end{proposition}
{\bf Non-linear maps between spaces of test functions.}
Let \mbox{$r,s \in {\mathbb N}_0\cup\{\infty\}$,}
$M$ be a $\sigma$-compact, finite-dimensional $C^r$-manifold,
$N$ be a $\sigma$-compact,
finite-dimensional $C^s$-manifold,
$E$, $F$ be Hausdorff locally convex spaces,
$\Omega \subseteq C^r_c(M,E)$ be open
and $f\colon \Omega\to C^s_c(N,F)$ be a map.
We say that $f$ is \emph{almost local}\index{almost local map}
if there exist locally finite covers
$(U_n)_{n\in{\mathbb N}}$
and $(V_n)_{n\in {\mathbb N}}$
of~$M$ (resp., $N$)
by relatively compact, open sets
$U_n\subseteq M$ (resp., $V_n\subseteq N$)
such that $f(\gamma)|_{V_n}$ only depends
on $\gamma|_{U_n}$, i.e.,
\[
(\forall n\in {\mathbb N})\;(\forall \gamma,\eta \in \Omega)\, \quad
\gamma|_{U_n}\,=\,\eta|_{U_n}\;\,\Rightarrow \,\;
f(\gamma)|_{V_n}\,=\, f(\eta)|_{V_n}\,.
\]
E.g., $f$ is almost local
if $M=N$ and $f$ is \emph{local}\index{local map}
in the sense that $f(\gamma)(x)$ only depends
on the germ of~$\gamma$ at $x\in M$.
As shown in \cite{DIF} (see also \cite[Theorem~10.4]{ZOO}),
almost locality
prevents pathologies as
in Example~\ref{extenso}.
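E.g., for each smooth map $h\colon E\to F$ with $h(0)=0$,
the superposition operator
\[
C^r_c(M,E)\to C^r_c(M,F)\,,\quad \gamma\mapsto h\circ \gamma
\]
is local in this sense, as $(h\circ\gamma)(x)$ only depends
on $\gamma(x)$;
the condition $h(0)=0$ ensures that the support of $h\circ\gamma$
is contained in the (compact) support of~$\gamma$.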
\begin{proposition}\label{almloc}
Let $r,s,t\in {\mathbb N}_0\cup\{\infty\}$
and $f\colon C^r_c(M,E)\supseteq \Omega\to C^s_c(N,F)$
be an almost local map.
Assume that the restriction of~$f$ to
$\Omega\cap C^r_K(M,E)$
is $C^t$, for each compact set $K\subseteq M$.
Then $f$ is $C^t$.
\end{proposition}
An analogous result is available
for mappings between open subsets
of spaces of compactly supported sections
in vector bundles.
Almost local maps between
subsets of the space of compactly supported
smooth vector fields
occur in the construction of
the Lie group structure on $\Diff_c(M)$ (see \cite{DIF};
cf.\ \cite{DFR} and \cite{ZOO}).\\[2.5mm]
The \emph{proof of Proposition}~\ref{almloc}
exploits that the map
\[
\sigma \colon C^s(N,F)\to \bigoplus_{n\in {\mathbb N}} C^s(V_n,F)\,,\quad
\gamma\mapsto(\gamma|_{V_n})_{n\in {\mathbb N}}
\]
is a linear topological embedding with closed
image \cite[Proposition~8.13]{ZOO},
for each locally finite cover $(V_n)_{n\in {\mathbb N}}$
of~$N$ by relatively compact, open sets~$V_n$.
It hence suffices to show
that $\sigma \circ f$ is~$C^t$.
Let us assume that $\Omega=C^r_c(M,E)$ for simplicity.
There is a locally finite cover
$(\widetilde{U}_n)_{n\in {\mathbb N}}$ of~$M$ by relatively compact, open sets
such that $\widetilde{U}_n$ contains the closure of~$U_n$.
Let $h_n\colon \widetilde{U}_n \to {\mathbb R}$
be a compactly supported
smooth map such that $h_n|_{U_n}=1$.
Then the following map is $C^t$:
\[
f_n\colon C^r(\widetilde{U}_n,E)\to C^s(V_n,F)\, ,\quad
f_n(\gamma)\; :=\; f(h_n\cdot \gamma)|_{V_n}\,.
\]
Set $\rho \colon C^r_c(M,E)\to \bigoplus_{n\in{\mathbb N}}
C^r(\widetilde{U}_n,E)$, $\gamma\mapsto (\gamma|_{\widetilde{U}_n})_n$.
Then $\sigma\circ f=(\oplus_{n\in {\mathbb N}}f_n)\circ \rho$,
where $\oplus_{n\in {\mathbb N}}f_n$ is $C^t$ by Proposition~\ref{dsummp}.
Hence $\sigma\circ f$ and thus $f$ is~$C^t$.\smartqed\qed
\section{\,Lie group structures on directed unions
of Lie groups}\label{secconstr}
In some situations, it is possible
to construct Lie group structures
on ascending unions of Lie groups.\\[2.5mm]
{\bf Unions of finite-dimensional Lie groups.}
In the case of finite-dimensional manifolds,
an Extension Lemma
for Charts is available \cite[Lemma~2.1]{FUN}:
\emph{If $M$ and $N$ are finite-dimensional
$C^\infty$-manifolds such that $M\subseteq N$
and the inclusion map $M\to N$ is an
immersion, then each chart $\phi\colon U\to V$
of~$M$ which is defined on a relatively compact,
(smoothly) contractible subset $U\subseteq M$
extends to a chart of~$N$ on a domain
with analogous properties.}
Now consider a sequence $G_1\subseteq G_2\subseteq \cdots$
of finite-dimensional Lie groups
such that the inclusion maps
are smooth homomorphisms. Let $x\in G:=\bigcup_{n\in {\mathbb N}}G_n$,
say $x\in G_{n_0}$. We then pick a chart $\phi_{n_0}$
of $G_{n_0}$ around~$x$ whose domain
is relatively compact and contractible,
and use the extension lemma to obtain charts
$\phi_n$ of~$G_n$ for $n>n_0$
which are defined on relatively compact,
contractible
open sets, and such that $\phi_n$ extends $\phi_{n-1}$.
One then easily verifies (using Proposition~\ref{silvkom})
that the homeomorphisms $\phi:={\displaystyle\lim_{\longrightarrow}}\, \phi_n$\vspace{-.7mm}
so obtained define a $C^\infty$-atlas
on $G$ (equipped with the direct limit topology),
which makes the latter a Lie group modelled
on ${\displaystyle\lim_{\longrightarrow}}\,\Lie(G_n)$\vspace{-.7mm}
(see \cite{FUN}).\footnote{Cf.\
\cite{NRW1}, \cite[Theorem~47.9]{KaM}
and \cite{DIR} for earlier,
less general results.}
By construction, $G=\bigcup_{n\in{\mathbb N}}G_n$
admits direct limit charts.
Moreover, it is clear from the construction
that
$G={\displaystyle\lim_{\longrightarrow}}\,G_n$\vspace{-.7mm}
as a topological space and as a topological
group. Using Proposition~\ref{silvkom},
one easily infers that
$G={\displaystyle\lim_{\longrightarrow}}\,G_n$\vspace{-.7mm}
also as a smooth manifold
and as a Lie group (see \cite[Theorem~4.3]{FUN}).
\begin{remark}
The preceding construction applies
just as well to ascending unions
of finite-dimensional smooth manifolds~$M_n$,
such that all inclusion maps
are immersions.\footnote{Compare \cite{Han}
for $\bigcup_{n\in {\mathbb N}}M_n$ as a topological
manifold.}
This enables
$G/H$ to be turned into the direct limit $C^\infty$-manifold
${\displaystyle\lim_{\longrightarrow}}\,G_n/(H\cap G_n)$,\vspace{-.7mm}
for each closed subgroup $H\subseteq G$
(see \cite[Proposition~7.5]{FUN}).
Then the quotient map $G\to G/H$ makes $G$
a principal $H$-bundle over $G/H$,
using a suitable extension lemma
for sections in nested principal bundles
\cite[Lemma~6.1]{FUN}.
We mention that an equivariant version
of the above extension lemma (namely \cite[Lemma~1.13]{Wk07})
can be used to turn
the gauge group $\Gau(P)$
into a Lie group,
for each smooth principal bundle
$P\to K$
over a compact smooth manifold~$K$
whose structure group is a direct limit $G={\displaystyle\lim_{\longrightarrow}}\,G_n$\vspace{-.7mm}
of finite-dimensional Lie groups (see
\cite[Lemma~1.14\,(e) and Theorem~1.11]{Wk07}).
\end{remark}
\begin{remark}\label{pathodl}
Direct limits $G={\displaystyle\lim_{\longrightarrow}}\,G_n$\vspace{-.7mm}
of finite-dimensional Lie groups
are\linebreak
regular Lie groups in Milnor's sense \cite[Theorem~8.1]{FUN},
but they can be quite pathological in other ways.
E.g., the exponential map
$\exp_G={\displaystyle\lim_{\longrightarrow}}\,\exp_{G_n}$\vspace{-.7mm}
need not be injective on any $0$-neighbourhood,
and
the exponential image
need not be an identity neighbourhood
in~$G$. Both pathologies occur
for
\[
G\; :=\; {\mathbb C}^{({\mathbb N})} \mbox{$\times\!$\rule{.15 mm}{1.83 mm}}\; {\mathbb R}
\; =\; {\displaystyle\lim_{\longrightarrow}}\,{\mathbb C}^n\mbox{$\times\!$\rule{.15 mm}{1.83 mm}}\;{\mathbb R}\,,\vspace{-1mm}
\]
where $t\in {\mathbb R}$ acts on ${\mathbb C}^{({\mathbb N})}$
via $t.(z_k)_{k\in {\mathbb N}}:=(e^{ikt}z_k)_{k\in {\mathbb N}}$.
This can be checked
quite easily, using that
the exponential map of~$G$ is given explicitly by
$\exp_G((z_k)_{k\in {\mathbb N}},t)=\big(
\big(\frac{e^{ikt}-1}{ikt} z_k\big)_{k\in {\mathbb N}},t\big)$
(see \cite[Example~5.5]{DIR}).
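Indeed, for $t=\frac{2\pi}{k}$ the factor $\frac{e^{ikt}-1}{ikt}$
in the $k$-th component vanishes, whence
\[
\exp_G\big((0,\ldots,0,z_k,0,\ldots),{\textstyle\frac{2\pi}{k}}\big)
\;=\;\big(0,{\textstyle\frac{2\pi}{k}}\big)
\quad\mbox{for all $z_k\in {\mathbb C}$,}
\]
so $\exp_G$ fails to be injective on every $0$-neighbourhood
(let $k\to\infty$).
Likewise, no element $\big((w_j)_{j\in {\mathbb N}},\frac{2\pi}{k}\big)$
with $w_k\not=0$ lies in the image of $\exp_G$;
since every identity neighbourhood of~$G$ contains such elements
(with $k$ large and $w_k$ small),
the image of $\exp_G$ is not an identity neighbourhood.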
\end{remark}
The preceding general construction
implies that every countably-dimensional
locally finite Lie algebra ${\mathfrak g}$
(i.e., each union ${\mathfrak g}=\bigcup_{n\in {\mathbb N}}{\mathfrak g}_n$
of finite-dimensional Lie algebras ${\mathfrak g}_1\subseteq
{\mathfrak g}_2\subseteq\cdots$),\index{locally finite Lie algebra}
when endowed with the finest locally convex vector
topology, arises as the Lie algebra
of some regular Lie group.\footnote{One chooses
a simply
connected Lie group $G_n$ with Lie algebra~${\mathfrak g}_n$
and forms the direct limit group $G={\displaystyle\lim_{\longrightarrow}}\,G_n$\vspace{-.7mm}
(see \cite[Theorem~5.1]{FUN} for the details).}
Such locally finite Lie algebras
have been much studied in recent years, e.g.
by\linebreak
Yu.\ Bahturin,
A.\,A. Baranov, G. Benkart, I. Dimitrov, K.-H. Neeb, I. Penkov, H. Strade,
N. Stumme, A.\,E. Zalesski\u{\i}, and others
(see \cite{BBZ04}, \cite{BB4}, \cite{DP04},
\cite{Ne00}, \cite{NS01}, \cite{PS03}, \cite{St99}
and the references therein).\\[3mm]
{\bf Unions of Banach--Lie groups.}
These are Lie groups under
additional\linebreak
hypotheses (which, e.g., exclude the pathologies described
in Remark~\ref{pathodl}).
\begin{theorem}\label{dahm2}
Let $G_1\subseteq G_2\subseteq\cdots$
be Banach--Lie groups
over ${\mathbb K}\in \{{\mathbb R},{\mathbb C}\}$,
such that
all inclusion maps $\lambda_n \colon G_n\to G_{n+1}$
are $C^\infty_{\mathbb K}$-homomorphisms.
Set $G:=\bigcup_{n\in {\mathbb N}}G_n$.
Assume that {\rm(a)}--{\rm(c)}
are satisfied:
\begin{description}[(D)]
\item[\rm(a)]
For each $n\in {\mathbb N}$, there exists a norm $\|.\|_n$
on $\Lie(G_n)$ defining its topology,
such that $\|[x,y]\|_n \leq\|x\|_n\|y\|_n$
for all $x,y\in \Lie(G_n)$
and the continuous linear map
$\Lie(\lambda_n)\colon \Lie(G_n)\to \Lie(G_{n+1})$
has operator norm at most~$1$.
\item[\rm(b)]
The locally convex direct limit topology on
${\mathfrak g}:=\bigcup_{n\in {\mathbb N}}\Lie(G_n)$ is Hausdorff.
\item[\rm(c)]
$\exp_G:={\displaystyle\lim_{\longrightarrow}}\, \exp_{G_n}\colon {\mathfrak g}\to G$\vspace{-.7mm}
is injective on some $0$-neighbourhood.
\end{description}
Then there exists a ${\mathbb K}$-analytic Lie group
structure on~$G$ which makes $\exp_G$
a ${\mathbb K}$-analytic local diffeomorphism at~$0$.
If, furthermore, ${\mathfrak g}=\bigcup_{n\in {\mathbb N}}\Lie(G_n)$ is
compactly regular, then $G$
is a regular Lie group in Milnor's sense.
\end{theorem}
\emph{Sketch of proof.}
The Lie group structure is constructed in \cite{Dah},
along the following lines:
Applying Dahmen's Theorem~\ref{thmdah}
(to the complexification ${\mathfrak g}_{\mathbb C}$, if ${\mathbb K}={\mathbb R}$),
one finds that the Baker--Campbell--Hausdorff (BCH-)
series converges to a ${\mathbb K}$-analytic
map
$\bigcup_{n\in {\mathbb N}}B^{\Lie(G_n)}_r(0)\times B^{\Lie(G_n)}_r(0)\to {\mathfrak g}$
for some $r>0$.
Because $\exp_G$ is locally injective,
it induces an isomorphism~$\phi$ of local
groups from some $0$-neighbourhood
$U\subseteq \bigcup_{n\in {\mathbb N}}B^{\Lie(G_n)}_r(0)$
onto some subset~$V$ of~$G$.
We give $V$ the $C^\omega_{\mathbb K}$-manifold
structure making $\phi$ a $C^\omega_{\mathbb K}$-diffeomorphism.
Now standard arguments can be used to make~$G$
a Lie group with~$V$ as an open
submanifold.
The proof of regularity will be sketched
in Section~\ref{secregu}.\vspace{2mm}\smartqed\qed
\noindent
The author does not know whether the Lie groups $G$
in Theorem~\ref{dahm2}
are\linebreak
always the direct limit ${\displaystyle\lim_{\longrightarrow}}\, G_n$\vspace{-.7mm}
in the category of Lie groups
(unless additional hypotheses are satisfied).\\[3mm]
{\bf Another construction principle.}
There is another
construction principle for a Lie group
structure on a union $G=\bigcup_{n\in {\mathbb N}}G_n$
of Lie groups (or a group which is a union
$G=\bigcup_{n\in {\mathbb N}}M_n$ of manifolds),
which produces
Lie groups modelled on Silva spaces
or ascending unions of $k_\omega$-spaces.
A direct limit Lie group structure
can be constructed on~$G$ if (1) there
are compatible charts $\phi_n$
of the Lie groups $G_n$ (resp., the manifolds~$M_n$)
around each point in~$G$;
and (2) suitable
hypotheses are satisfied
which ensure that the transition maps
between charts of the form ${\displaystyle\lim_{\longrightarrow}}\,\phi_n$\vspace{-.7mm}
are $C^\infty_{\mathbb K}$,
because they are mappings of the form discussed
in Proposition~\ref{silvkom} (see \cite[Lemma~14.5]{COM}).
\section{\,Examples of directed unions of
Lie groups}\label{secuninf}
The main examples
of ascending unions of infinite-dimensional
Lie groups were already briefly described
in the introduction.
We now provide more\linebreak
details.
Notably, we discuss
the existence of direct limit charts,
and compact regularity.
As already mentioned,
the latter gives information
on the homotopy groups (see (\ref{dlhomot}))
and can help to verify regularity
in Milnor's sense (see Theorem~\ref{dahm2}
and Section~\ref{secregu}).
A special case of
\cite[Corollary~3.6]{HOM} is useful.
\begin{lemma}\label{thsreg}
If the Lie group
$G=\bigcup_{n\in {\mathbb N}}G_n$
admits a weak direct limit chart,
then $G=\bigcup_{n\in {\mathbb N}}G_n$
is compactly regular if and only if
$\Lie(G)=\bigcup_{n\in {\mathbb N}}\Lie(G_n)$
is compactly regular.
\end{lemma}
In the case of
an (LF)-space $E=\bigcup_{n\in{\mathbb N}} E_n$,
there is a quite concrete characterization of compact regularity
purely in terms of properties of the steps~$E_n$
(see \cite[Theorem~6.4 and its corollary]{Wen}):\index{Wengenroth's Theorem}
\begin{theorem}\label{wengen}
Let $E_1\subseteq E_2\subseteq \cdots$ be Fr\'{e}chet
spaces, with continuous linear inclusion maps.
Give $E=\bigcup_{n\in {\mathbb N}}E_n$
the locally convex direct limit topology.
Then $E=\bigcup_{n\in {\mathbb N}}E_n$
is compactly regular
if and only if
for each $n\in {\mathbb N}$, there exists $m\geq n$
such that for all $k\geq m$,
there is a $0$-neighbourhood $U$ in $E_n$
on which $E_k$ and $E_m$ induce the
same topology.
In this case, $E$ is also boundedly regular and complete.
\end{theorem}
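For instance, if the sequence $E_1\subseteq E_2\subseteq\cdots$
is strict, then the criterion is satisfied with $m:=n$ and $U:=E_n$
(on which all $E_k$ with $k\geq n$ induce the given topology),
recovering the compact regularity of strict (LF)-spaces
stated in \ref{baslcx}\,(c).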
We mention that a Hausdorff (LF)-space
is boundedly regular if and only if it is
Mackey complete \cite[1.4\,(f), p.\,209]{Fl80}.\\[3mm]
{\bf Groups of compactly supported
diffeomorphisms.}
The Lie group\linebreak
$\Diff_c(M)=\bigcup_{n\in {\mathbb N}}\Diff_{K_n}(M)$
(discussed in the introduction)
admits a direct limit chart (cf.\ \cite[\S5.1]{COM}).
Moreover, the LF-space ${\mathcal V}_c(M)=\bigcup_{n\in {\mathbb N}}{\mathcal V}_{K_n}(M)$
is compactly regular (see \ref{baslcx}\,(c))
and hence also $\Diff_c(M)$
(by Lemma~\ref{thsreg}).
To avoid exceptional cases
in our later discussions
of direct limit properties,
we assume henceforth that $M$ is non-compact
and of positive dimension.\\[2.5mm]
{\bf Test function groups.}
Let $M$ and an exhaustion $K_1\subseteq K_2\subseteq\cdots$ of~$M$
be as in the definition of $\Diff_c(M)$,
$H$ be a Lie group modelled
on a locally convex space,
and $r\in {\mathbb N}_0\cup\{\infty\}$.
We consider the ``test function group''
$C^r_c(M,H)$ of
$C^r$-maps $\gamma\colon M\to H$
such that the closure of $\{x\in M\colon \gamma(x)\not={\bf 1}\}$
(the support of $\gamma$) is compact.
Let $C^r_{K_n}(M,H)$
be the subgroup of functions supported in~$K_n$.
Then $C^r_{K_n}(M,H)$ is a Lie group modelled
on $C^r_{K_n}(M,\Lie(H))$, and
$C^r_c(M,H)$ is a Lie group
modelled on the locally convex direct
limit $C^r_c(M,\Lie(H))={\displaystyle\lim_{\longrightarrow}}\,C^r_{K_n}(M,\Lie(H))$\vspace{-.7mm}
(\cite{GCX}; cf.\ \ref{introhis}
for special cases). Also,
\[
C^r_c(M,H)\;=\; {\textstyle \bigcup_{n\in {\mathbb N}}}\, C^r_{K_n}(M,H)
\]
admits a direct limit chart (cf.\ \cite[\S7.1]{COM}).
Furthermore, $C^r_c(M,\Lie(H))\!=$ $\bigcup_n\, C^r_{K_n}(M,\Lie(H))$
is compactly regular as a consequence
of \ref{baslcx}\,(c).
We now assume
that $H$ is non-discrete
and $M$ non-compact,
of positive dimension.\\[2.5mm]
{\bf Weak direct products of Lie groups.}
Given a sequence
$(H_n)_{n\in {\mathbb N}}$ of Lie groups,
its weak direct product
$G:=\prod_{n\in {\mathbb N}}^*H_n$
(as introduced before Proposition~\ref{propcriter})
has a natural Lie group structure~\cite[\S7]{MEA},
modelled on the locally convex direct
sum $\bigoplus_{n\in{\mathbb N}}\Lie(H_n)$.
Then $G=\bigcup_{n\in {\mathbb N}}G_n$, identifying the partial
product $G_n:=\prod_{k=1}^nH_k$ with a subgroup
of~$G$. By construction, $G=\bigcup_{n\in {\mathbb N}}G_n$
has a direct limit chart. Furthermore,
$\Lie(G)=\bigoplus_{n\in {\mathbb N}}\Lie(H_n)
={\displaystyle\lim_{\longrightarrow}}\, \Lie(G_n)$\vspace{-.5mm}
is compactly regular,
as locally convex direct sums are
boundedly regular \cite[Ch.\,3, \S1, no.\,4, Proposition~5]{BTV}
and induce the given topology on each finite
partial product
(cf.\ Propositions~7 or 8\,(i)
in \cite[Ch.\,2, \S4, no.\,5]{BTV}).
To avoid exceptional cases,
we assume henceforth
that each $H_n$ is non-discrete.\\[2.5mm]
{\bf Unit groups of
unions of Banach algebras.}
Let
$A_1\subseteq A_2\subseteq\cdots$ be
unital complex Banach algebras
(such that all inclusion maps
are continuous homomorphisms of unital algebras).
Give
$A:=\bigcup_{n\in {\mathbb N}}A_n$
the locally convex direct limit topology.
Then $A^\times$ is open in~$A$
and if $A$ is Hausdorff (which we assume now),
then
$A^\times$ is a complex Lie group
\cite[Proposition~12.1]{COM}.
Moreover, $A^\times=\bigcup_{n\in {\mathbb N}}A_n^\times$,
and the identity map $\id_{A^\times}$ is a direct
limit chart (cf.\ \cite{DaW} and \cite{Eda}
for related results).
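We recall the standard Neumann series argument behind the openness of unit groups (included here only as a reminder): if $a\in A_n$ with $\|{\bf 1}-a\|<1$, then
\[
a^{-1}\;=\;\sum_{k=0}^\infty ({\bf 1}-a)^k
\]
converges absolutely in the Banach algebra~$A_n$, whence the open unit ball around~${\bf 1}$ consists of invertible elements; translating by units, $A_n^\times$ is open in~$A_n$ and inversion is complex analytic on it.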
If each inclusion map $A_n\to A_{n+1}$
is a topological embedding, or if each inclusion map is
a compact
operator, then $A=\bigcup_{n\in {\mathbb N}}A_n$
is compactly regular, and hence so is
$A^\times=\bigcup_{n\in {\mathbb N}}A_n^\times$.
However, for particular
choices of the steps, $A=\bigcup_{n\in {\mathbb N}}A_n$
is not compactly regular
(see \cite[Example~7.8]{HOM},
based on \cite[Remark~1.5]{BMS}).\\[2.5mm]
{\bf Lie groups of germs of analytic mappings.}
Let $H$ be a
complex Banach-Lie group,
$\|.\|$ be a norm on $\Lie(H)$ defining its topology,
$X$ be a complex
metrizable locally convex space
and $K\subseteq X$ be a non-empty compact set.
Let $W_1\supseteq W_2\supseteq \cdots$
be a fundamental sequence of open neighbourhoods of~$K$
in~$X$ such that each connected
component of $W_n$ meets~$K$.
Then the set
$\Germ(K,H)$
of germs around~$K$ of $H$-valued complex analytic functions
on open neighbourhoods of~$K$ can be made a Lie group
modelled on the locally convex direct limit
\[
\Germ(K,\Lie(H))\; =\; {\displaystyle\lim_{\longrightarrow}}\, \Hol_b(W_n,\Lie(H))
\]
of the Banach spaces
${\mathfrak g}_n:=\Hol_b(W_n,\Lie(H))$ of bounded $\Lie(H)$-valued
complex analytic functions on~$W_n$,
equipped with the supremum norm (see~\cite{HOL}).
The group operation arises from
pointwise multiplication of representatives of germs.
The identity component $\Germ(K,H)_0$
is the union
\[
\Germ(K,H)_0\;=\; \bigcup_{n\in {\mathbb N}}G_n
\]
of the Banach--Lie
groups $G_n:=\langle [\exp_H\circ\, \gamma]\colon \gamma\in {\mathfrak g}_n\rangle$,
and $\Germ(K,H)_0=\bigcup_{n\in{\mathbb N}}G_n$
admits a direct limit chart~\cite[\S10.4]{COM}.
Theorem~\ref{wengen}
implies that $\Germ(K,\Lie(H))=\bigcup_{n\in {\mathbb N}}{\mathfrak g}_n$
is compactly regular
(see \cite{REG}),
and thus $\Germ(K,H)_0=\bigcup_{n\in {\mathbb N}}G_n$
is compactly regular (see already
\cite[Theorems~21.15 and 21.23]{Ch85}
for the bounded regularity and completeness
of $\Germ(K,\Lie(H))$ if $X$ is a normed
space; cf.\ \cite{Mj79}).
In the most relevant case
where $X$ and~$H$ are finite-dimensional,
we can choose $W_{n+1}$ relatively compact in~$W_n$.
Then the restriction maps
$\Hol_b(W_n,\Lie(H))\to \Hol_b(W_{n+1},\Lie(H))$
are compact
operators~\cite[\S10.5]{COM}
and thus $\Germ(K,\Lie(H))$
is a Silva space.\\[2.5mm]
{\bf Lie groups of germs of analytic diffeomorphisms.}
If $X$ is a complex
Banach space
and $K\subseteq X$ a non-empty compact set,
let $\GermDiff(K,X)$
be the set of germs around~$K$ of
${\mathbb C}$-analytic diffeo\-morphisms
$\gamma\colon U\to V$
between open neighbourhoods $U$ and $V$ of~$K$
(which may depend on $\gamma$),
such that $\gamma|_K=\id_K$.
Then $\GermDiff(K,X)$
is a Lie group
modelled on the locally convex direct limit
\[
\Germ(K,X)_K\; :=\; {\displaystyle\lim_{\longrightarrow}}\, \Hol_b(W_n,X)_K\,,
\]
where $W_n$ and $\Hol_b(W_n,X)$
are as in the last example
and $\Hol_b(W_n,X)_K:=\{\zeta \in \Hol_b(W_n,X)\colon \zeta|_K=0\}$
(see \cite[\S15]{COM} for the case
$\dim(X)<\infty$, and \cite{Dah} for the general
result).
The group operation arises from
composition of representatives of germs.
Now the set $M_n$
of all elements of $\GermDiff(K,X)$
having a representative in $\Hol_b(W_n,X)_K$
is a Banach manifold, and
\[
\GermDiff(K,X)\;=\; \bigcup_{n\in {\mathbb N}}M_n
\]
has a direct limit chart (see \cite{Dah}; cf.\
\cite[Lemma~14.5 and \S15]{COM}).\linebreak
$\GermDiff(K,X)\!=\!\bigcup_n \!M_n$
is compactly regular by Theorem~\ref{wengen}
and Lemma~\ref{thsreg} (see~\cite{Dah});
if $X$ is finite-dimensional,
then $\Germ(K,X)_K$ is a Silva space.\\[2.5mm]
{\bf Unions of Lie groups modelled on Sobolev spaces.}
The Lie groups
$H^{\downarrow s}(K,F)
=\bigcup_{n\in {\mathbb N}}H^{s+\frac{1}{n}}(K,F)$
(as in the introduction)
are studied in the work in progress~\cite{REG}.
By construction, they
admit a direct limit chart,
and they are modelled
on the Silva space
$H^{\downarrow s}(K,\Lie(F))
=\bigcup_{n\in {\mathbb N}}H^{s+\frac{1}{n}}(K,\Lie(F))$
(and hence compactly regular).
We mention that the Lie group structure
on $H^{\downarrow s}(K,F)$
can be obtained via Theorem~\ref{dahm2};
therefore
$H^{\downarrow s}(K,F)$ is a\linebreak
regular
Lie group in Milnor's sense.
Compare \cite{Pi08} (in this volume) for\linebreak
analysis
and probability theory
on variants of the Lie groups $H^s(K,F)$
(with $s>\dim(K)/2$), and limit processes
as $s\downarrow \dim(K)/2$.
\section{\,Direct limit properties of ascending unions}\label{secdlprop}
We now discuss the direct limit properties
of ascending unions of infinite-dimensional
Lie groups in the categories
of Lie groups, topological groups,
smooth manifolds and topological spaces.\vfill\pagebreak
\noindent
{\bf Tools to prove or disprove direct
limit properties.}
Such tools were
provided in~\cite{COM}.
Recall that a real locally convex space~$E$
is said to be \emph{smoothly regular}\index{smoothly regular space}
(or: to \emph{admit smooth bump
functions})\index{smooth bump function}\index{bump function}
if the topology on~$E$
is initial with respect to $C^\infty(E,{\mathbb R})$.
\begin{remark}
If $U\subseteq E$
is a $0$-neighbourhood and the topology is initial with respect to
$C^\infty(E,{\mathbb R})$, then $\bigcap_{j=1}^n
f_j^{-1}(]{-\varepsilon},\varepsilon[) \subseteq U$
for suitable $\varepsilon>0$
and $f_1,\ldots, f_n\in C^\infty(E,{\mathbb R})$
such that $f_1(0)=\cdots=f_n(0)=0$.
Then $f^{-1}(]{-\delta},\delta[)\subseteq U$
with $f:=f_1^2+\cdots+ f_n^2$ and $\delta:=\varepsilon^2$.
Let $g\colon {\mathbb R}\to {\mathbb R}$
be a smooth function such that $g({\mathbb R})\subseteq [0,1]$,
$g(0)=1$
and $g(x)=0$ if $|x|\geq\delta/2$.
Then $h:=g\circ f\colon E\to {\mathbb R}$
is a smooth function such that $h(0)=1$
and $\Supp(h)\subseteq U$
(a ``smooth bump function''
supported in~$U$). This explains
the terminology.
\end{remark}
\begin{example}
Every Hilbert space~$H$
admits smooth bump functions
(because $H\to {\mathbb R}$, $x\mapsto \|x\|^2$
is smooth).
As a consequence,
every locally convex space
which admits a linear topological embedding
into a direct product of Hilbert spaces
(for example, every nuclear locally convex space)
admits smooth bump functions
(cf.\ also \cite[Chapter~III]{KaM}).
\end{example}
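For instance, on a Hilbert space~$H$ a smooth bump function can be written down explicitly (a standard construction, recorded here only for illustration): choose $g\in C^\infty({\mathbb R},{\mathbb R})$ with $g(t)=1$ for $t\leq \frac{1}{4}$ and $g(t)=0$ for $t\geq 1$, and set
\[
h\colon H\to {\mathbb R}\,,\quad h(x):=g(\|x\|^2)\,.
\]
Then $h$ is smooth (being a composition of smooth maps), $h=1$ on the ball of radius~$\frac{1}{2}$ around~$0$, and $\Supp(h)$ is contained in the closed unit ball.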
\begin{proposition}\label{tools2}
Consider a Lie group $G=\bigcup_{n\in {\mathbb N}}G_n$,
where $G_1\subseteq G_2\subseteq\cdots$
are Lie groups and all inclusion maps
$G_n\to G_{n+1}$ and $G_n\to G$
are smooth homomorphisms.
Assume that $G=\bigcup_{n\in {\mathbb N}}G_n$
admits a direct limit chart.
Then the following hold:
\begin{description}[(D)]
\item[\rm(a)]
If $G={\displaystyle\lim_{\longrightarrow}}\,G_n$
as a topological group,
then $G={\displaystyle\lim_{\longrightarrow}}\,G_n$\vspace{-.7mm}
as a Lie group.
\item[\rm(b)]
$G={\displaystyle\lim_{\longrightarrow}}\,G_n$
as a topological space
if and only if $\Lie(G)={\displaystyle\lim_{\longrightarrow}}\,\Lie(G_n)$\vspace{-.7mm}
as a topological space.
\item[\rm(c)]
If $\Lie(G)$ admits smooth bump functions,
then $G={\displaystyle\lim_{\longrightarrow}}\,G_n$\vspace{-.7mm}
as a $C^\infty_{\mathbb R}$-manifold
if and only if $\Lie(G)={\displaystyle\lim_{\longrightarrow}}\,\Lie(G_n)$\vspace{-.7mm}
as a $C^\infty_{\mathbb R}$-manifold.\vspace{1mm}
\end{description}
\end{proposition}
{\bf Direct limit properties
of the main examples.}
Using Proposition~\ref{tools2},
Proposition~\ref{propcriter}
(to recognize direct
limits of topological groups)
and a\linebreak
counterpart of
Proposition~\ref{silvkom}
for analogous ascending unions
of manifolds~\cite[Proposition~9.8]{COM},
one obtains the following information
concerning the direct limit properties
of the examples from
Section~\ref{secuninf} (see \cite{COM};
the
properties of $H^{\downarrow s}(K,F)$
follow from \cite[Proposition~9.8]{COM}).
The
entries
in the following table
indicate whether $G={\displaystyle\lim_{\longrightarrow}}\,G_n$\vspace{-.7mm}
holds in the category shown on the left,
for the Lie group described at the top.
The abbreviation ``dep''
is used if the answer depends on
special properties
of the group(s) involved.
We abbreviate
``category'' by ``cat,''
``group'' by ``gp,''
``space'' by ``sp,''
``topological'' by ``top'',\index{direct limit property}
and ``smooth manifold'' by ``mfd.''\vfill\pagebreak
\noindent\hspace*{-1.7mm}
{\footnotesize
\begin{tabular}{||c||c|c|c|c|c|c|c||}\hline\hline
cat$\backslash$gp & $\text{Diff}_c(M)$
& $C^\infty_c(\hspace*{-.2mm}M\hspace*{-.2mm},\hspace*{-.5mm}H)$
& $\prod^*_n \hspace*{-.7mm}H_n$ &
$A^\times$ & $\Germ(K\hspace*{-.2mm},\hspace*{-.4mm}H)_0 $ &
$\GermDiff(\hspace*{-.2mm}K,\hspace*{-.4mm}X)$
& $H^{\downarrow s}(K,\hspace*{-.4mm}F)$ \\ \hline\hline
Lie gps& yes & yes & yes & yes & yes & ---\, & yes \\ \hline
top gps & yes & yes & yes & yes & yes & ---\, & yes \\ \hline
mfds & no & no & dep* & dep**& \,yes$^\dag$
& \,yes$^{\dag\dag}$& yes \\ \hline
top sps & no & no & dep* & dep** & \,yes$^{\dag}$ & \,yes$^{\dag\dag}$
& yes \\ \hline\hline
\end{tabular}}\\[4.6mm]
{\footnotesize
\hspace*{1mm}* \hspace*{.5mm}``yes'' if each $H_n$ is finite-dimensional
or modelled on a $k_\omega$-space;
``no'' if each~$H_n$\linebreak
\hspace*{5mm}is modelled
on an infinite-dimensional Fr\'{e}chet space
(which we assume nuclear\linebreak
\hspace*{5mm}when dealing with the category
of smooth manifolds). Other cases unclear.\\[1mm]
** \hspace*{-.8mm}``yes''\hspace*{-.3mm} if each $A_n$ is finite-dimensional
or each inclusion map $\lambda_n\colon A_n\to A_{n+1}$ a\linebreak
\hspace*{5mm}compact operator;
``no'' (when dealing with the category of topological
spaces), if\linebreak
\hspace*{5mm}$A_n$ is infinite-dimensional,
$A_n\subset A_{n+1}$ and $\lambda_n$ a topological
embedding for\linebreak
\hspace*{5mm}each~$n$. Other cases unclear.\\[1mm]
\hspace*{.8mm}$\dag$ \hspace*{.5mm}``yes'' if $X$ and $H$
are finite-dimensional;
general case unknown.\\[1mm]
${\dag\dag}$ \hspace*{.2mm}``yes'' if $X$ is finite-dimensional;
general case unknown.}
\section{\,Regularity in Milnor's sense}\label{secregu}
Experience shows that
if one tries to prove regularity in Milnor's
sense for a Lie group
$G=\bigcup_{n\in {\mathbb N}}G_n$,
then regularity of the Lie groups~$G_n$
does not suffice to carry out the desired
arguments. But strengthened
regularity properties
increase the chances for success.
\begin{definition}
{\rm Given $k\in {\mathbb N}_0$, we say that
a Lie group~$G$ is \emph{$C^k$-regular}
if it is a regular Lie group
in Milnor's sense and
\[
\evol_G\colon C^\infty([0,1],\Lie(G))\to G
\]
is smooth with respect to the
$C^k$-topology on $C^\infty([0,1],\Lie(G))$
(induced by $C^k([0,1],\Lie(G))$).
If each $\gamma\in C^k([0,1],\Lie(G))$
has a product integral $\eta_\gamma$
and the map
\[
\evol_G\colon C^k([0,1],\Lie(G))\to G\,,\quad
\gamma\mapsto \eta_\gamma(1)
\]
is smooth,
then we say that the Lie group~$G$
is \emph{strongly
$C^k$-regular}.}\index{$C^k$-regular Lie group}\index{strongly
$C^k$-regular Lie group}
\end{definition}
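As a simple illustration (a well-known observation, included here only for orientation): if $G=(E,+)$ is the additive group of an integral complete locally convex space~$E$, then $\Lie(G)=E$ and the product integral of $\gamma\in C([0,1],E)$ is $\eta_\gamma(t)=\int_0^t\gamma(s)\,ds$. Hence
\[
\evol_G\colon C([0,1],E)\to E\,,\quad \gamma\mapsto \int_0^1\gamma(s)\,ds
\]
is continuous linear and therefore smooth, so that $(E,+)$ is strongly $C^0$-regular.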
E.g., every Banach--Lie group
is strongly $C^0$-regular~\cite{GaN}.
Although much of the following
remains valid for $C^k$-regular Lie groups,
we shall presume
strong $C^k$-regularity,
as this simplifies the presentation.
We also suppress possible variants involving
bounded regularity instead
of compact regularity.
All results presented in this section
are taken from~\cite{REG}.
In the regularity proofs for our main classes
of direct limit groups, we always use an isomorphism
$C^k([0,1],{\displaystyle\lim_{\longrightarrow}}\,E_n)\cong {\displaystyle\lim_{\longrightarrow}}\,C^k([0,1],E_n)$\vspace{-.7mm}
at a pivotal point.
Let us begin with the elementary
case of locally convex direct sums.
\begin{lemma}\label{dsanddl}
If $(E_n)_{n\in {\mathbb N}}$ is a sequence
of Hausdorff locally convex spaces,
then
$C^k([0,1],\bigoplus_{n\in {\mathbb N}}E_n)=\bigoplus_{n\in {\mathbb N}}C^k([0,1],E_n)$,
for all $k\in {\mathbb N}_0$.
\end{lemma}
\emph{Sketch of proof.}
The locally convex direct sum $\bigoplus_{n\in {\mathbb N}}E_n
=\bigcup_{n\in {\mathbb N}} (E_1\times\cdots\times E_n)$
is compactly regular, because it is boundedly regular
by \cite[Chapter~3, \S1, no.\,4, Proposition~5]{BTV}
and induces the given topology on each finite partial
product (cf.\ Propositions~7 or~8\,(i)
in \cite[Chapter~2, \S4, no.\,5]{BTV}).
Therefore\linebreak
$C^k([0,1],\bigoplus_{n\in {\mathbb N}}E_n)$
and $\bigoplus_{n\in {\mathbb N}}C^k([0,1],E_n)$
coincide as sets.
Comparing\linebreak
$0$-neighbourhoods,
we see that both vector topologies coincide
(using that boxes are typical $0$-neighbourhoods in a countable
direct sum).\smartqed\qed
\begin{remark}
Although
$C^\infty([0,1],{\mathbb R}^{({\mathbb N})})=\bigcup_{n\in {\mathbb N}}C^\infty([0,1],{\mathbb R}^n)$
as a set,
the topology on the left hand side
is properly coarser than the locally convex
direct limit topology ${\mathcal O}_{\lcx}$ on the right hand side,
because
\[
\Big\{\gamma=(\gamma_n)_{n\in {\mathbb N}}\in C^\infty([0,1],{\mathbb R}^{({\mathbb N})})\colon
(\forall n\in {\mathbb N}) \; {\textstyle \frac{d^n\gamma_n}{dx^n}}(0)\in \;]{-1,1}[\,\Big\}\;
\in\; {\mathcal O}_{\lcx}
\]
is not a $0$-neighbourhood
in $C^\infty([0,1],{\mathbb R}^{({\mathbb N})})
={\displaystyle\lim_{\longleftarrow}}_{m\in {\mathbb N}_0}\, C^m([0,1],{\mathbb R}^{({\mathbb N})})$.\vspace{-.7mm}
Thus Lemma~\ref{dsanddl}
becomes false for $k=\infty$,
explaining the need for
$C^k$-regularity with
finite~$k$.
\end{remark}
{\bf Weak direct products of Lie groups.}
If $k\in {\mathbb N}_0$ and $(H_n)_{n\in {\mathbb N}}$ is a sequence
of strongly $C^k$-regular Lie groups,
then $\prod_{n\in{\mathbb N}}^*H_n$ is strongly $C^k$-regular
(and hence regular) since its evolution map
can be obtained as the composition\vspace{-1mm}
\[
C^k\big([0,1],{\textstyle \bigoplus_n \Lie(H_n)}\big)\,
\stackrel{\cong}{\longrightarrow}\,
{\textstyle \bigoplus_n C^k([0,1], \Lie(H_n))}
\, \stackrel{\oplus_{n\in {\mathbb N}}\evol_{H_n}}{\longrightarrow}\,
{\textstyle \prod_n^*}\, H_n
\]
(cf.\ Proposition~\ref{dsummp} for the definition
and smoothness of $\oplus_n \evol_{H_n}$).\\[2.5mm]
{\bf Test function groups.}
Given a $\sigma$-compact, finite-dimensional
smooth\linebreak
manifold~$M$ and a $C^k$-regular
Lie group~$H$, pick
a locally finite family $(M_n)_{n\in {\mathbb N}}$ of compact
submanifolds with boundary of~$M$,
the interiors of which cover~$M$.
Then standard arguments (based
on suitable exponential laws for function spaces)
show that $H_n:=C^r(M_n,H)$ is $C^k$-regular,
for each $n\in {\mathbb N}$.
The map $\sigma\colon C^r_c(M,\Lie(H))\to \bigoplus_n C^r(M_n,\Lie(H))$,
$\gamma\mapsto (\gamma|_{M_n})_{n\in{\mathbb N}}$
is continuous linear and hence also
the map $\tau:= C^k([0,1],\sigma)$ from
$C^k([0,1],C^r_c(M,\Lie(H)))$ to
$C^k([0,1],\bigoplus_n C^r(M_n,\Lie(H)))
\cong \bigoplus_n C^k([0,1], C^r(M_n,\Lie(H)))$.
Furthermore,
\[
\rho \colon G:=C^r_c(M,H)\to {\textstyle \prod^*_{n\in {\mathbb N}}}C^r(M_n,H)\,,
\quad \gamma\mapsto (\gamma|_{M_n})_{n\in {\mathbb N}}
\]
is an isomorphism of Lie groups onto a
closed Lie subgroup (and embedded submanifold)
of the weak direct product $P:={\textstyle \prod^*_{n\in {\mathbb N}}}C^r(M_n,H)$.
Using point evaluations, one finds that
the composition
\[
\evol_P\circ \, \tau \, =\, \oplus_n \evol_{H_n}\circ \, \tau
\colon C^k([0,1],C^r_c(M,\Lie(H)))\to {\textstyle \prod^*_{n\in {\mathbb N}}}C^r(M_n,H)
\]
(which is smooth by the
preceding example)
takes its image in the image of $\rho$.
Then $f:=\rho^{-1}\circ \evol_P\circ \, \tau\colon C^k([0,1],C^r_c(M,\Lie(H)))
\to C^r_c(M,H)$ is a smooth map,
and one verifies using point evaluations
that $f=\evol_G$.\\[2.5mm]
A similar (but more complicated)
argument shows that $\Diff_c(M)$ is regular.\\[2.5mm]
{\bf Ascending unions of Banach--Lie groups.}
As a preliminary,
observe that regularity
in Milnor's sense (and strong $C^k$-regularity)
can be defined just as well for \emph{local}
Lie groups~$G$; in this case, one requires
that a smooth\linebreak
evolution $\evol_G$
exists on some open $0$-neighbourhood in $C^\infty([0,1],\Lie(G))$
(resp., $C^k([0,1],\Lie(G))$). In the case
of global Lie groups,
the local notions of regularity are equivalent
to the corresponding global ones (see \cite{REG}
and \cite{GaN}; cf.\ \cite[lemma on p.\,409]{KaM}).
See \cite{Sme} for
the next theorem
(and \cite{Muj} or \cite[Theorem~I.7.2]{Sm83}
for a variant beyond compact regularity).
\begin{theorem}\label{Muji}
Consider a Hausdorff locally convex space~$E$
which is the\linebreak
locally convex direct limit
of Hausdorff locally convex spaces
$E_1\subseteq E_2\subseteq \cdots$. If $E=\bigcup_{n\in {\mathbb N}}E_n$
is compactly regular,
then the natural continuous linear map
\[
{\displaystyle\lim_{\longrightarrow}}\, C([0,1],E_n)\to C([0,1],E)
\]
is an isomorphism of topological vector spaces.
\end{theorem}
\begin{remark}\label{bonet}
If $E$ is a locally convex space
which is \emph{integral complete},\footnote{That is,
every continuous curve $\gamma\colon [0,1]\to E$
has a Riemann integral in~$E\,$~\cite{LS00}.}
then
\begin{equation}\label{neednba}
C^k([0,1],E)\cong E^k\times C([0,1],E)
\end{equation}
naturally via
$\gamma\mapsto (\gamma(0),\ldots, \gamma^{(k-1)}(0),\gamma^{(k)})$,
for each $k\in {\mathbb N}$.
If $E={\displaystyle\lim_{\longrightarrow}}\,E_n$\vspace{-.7mm}
is a locally convex direct limit
of integral complete locally convex spaces
and $E=\bigcup_{n\in{\mathbb N}}E_n$ is compactly regular,
then also~$E$ is integral complete.
Moreover,~(\ref{neednba}),
\ref{baslcx}\,(e)
and Theorem~\ref{Muji} imply
that
\begin{equation}\label{needaga}
{\displaystyle\lim_{\longrightarrow}}\,C^k([0,1],E_n)\, \cong \, C^k([0,1],{\displaystyle\lim_{\longrightarrow}}\,E_n)\,.\vspace{-.7mm}
\end{equation}
Alternatively,
(\ref{needaga}) follows from Theorem~\ref{Muji}
because $C^k([0,1],E)\cong C([0,1],E)$
naturally if $E$ is integral complete~\cite{REG},
as follows from (\ref{neednba})
and the fact that $C([0,1],E)\cong E^m\times C([0,1], E)$
naturally for each $m\in {\mathbb N}$,
by a suitable (elementary) variant of Miljutin's Theorem~\cite{Miu}
provided in~\cite{REG}.
\end{remark}
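To make the isomorphism~(\ref{neednba}) concrete (a standard application of Taylor's formula with integral remainder): the inverse map takes $(v_0,\ldots,v_{k-1},\eta)\in E^k\times C([0,1],E)$ to the curve $\gamma\in C^k([0,1],E)$ given by
\[
\gamma(t)\;=\;\sum_{j=0}^{k-1}\frac{t^j}{j!}\,v_j\;+\;\int_0^t \frac{(t-s)^{k-1}}{(k-1)!}\,\eta(s)\,ds\,.
\]
Integral completeness of~$E$ guarantees that the Riemann integrals exist, and one checks that $\gamma^{(j)}(0)=v_j$ for $j<k$ and $\gamma^{(k)}=\eta$.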
\begin{numba}\label{nowprove}
Assume that
${\mathfrak g}=\bigcup_{n\in {\mathbb N}}\Lie(G_n)$ is compactly regular
in Theorem~\ref{dahm2}.\linebreak
Following~\cite{REG},
we now explain, in the essential case where ${\mathbb K}={\mathbb C}$,
that $G$ is regular in Milnor's sense.\footnote{The real case
follows easily via complexification
on the level of local groups.}
Let $r>0$ be as in the earlier parts
of the proof and $U_n:=B^{\Lie(G_n)}_r(0)$.
Because the BCH-series
has the same shape for each $n\in{\mathbb N}$,
one finds $s>0$
such that an evolution
$\evol_{U_n}$ exists as a map from
$C([0,1], B^{\Lie(G_n)}_s(0))$ to $U_n$,
for each $n\in{\mathbb N}$. Since~$U_n$ is bounded,
Theorem~\ref{thmdah} shows that
$\evol_U:={\displaystyle\lim_{\longrightarrow}}\,\evol_{U_n}\colon \bigcup_{n\in {\mathbb N}} C([0,1], B^{\Lie(G_n)}_s(0))
\to U:=\bigcup_{n\in {\mathbb N}}U_n$ is ${\mathbb C}$-analytic
and hence also $\exp_G\circ \evol_U$,
which is a local group version of the
evolution map for~$G$. Hence $G$ is regular and in fact
strongly $C^0$-regular.\smartqed\qed
\end{numba}
Using Theorem~\ref{dahm2},
one readily deduces that
$\Germ(K,H)$ and $H^{\downarrow s}(K,F)$
are strongly $C^0$-regular,
and also $A^\times=\bigcup_{n\in {\mathbb N}}A_n^\times$
if $A=\bigcup_{n\in {\mathbb N}}A_n$ is
compactly regular (see \cite{REG}).
The proof of compact regularity
for $\GermDiff(K,X)$ is more involved,
but eventually also boils down to
Theorem~\ref{thmdah}
(see \cite{Dah}).\\[3mm]
{\bf An idea which might lead to non-regular Lie groups.}
An observation from~\cite{REG} might be a source
of Lie groups which are not regular in Milnor's
sense
although they are modelled
on Mackey complete locally convex spaces:\index{non-regular Lie group}
\begin{proposition}\label{pathrg}
Suppose that, for each $n\in {\mathbb N}$,
there exists a Lie group~$H_n$
modelled on a Mackey complete
locally convex space
which is regular but
not $C^n$-regular
because
$\evol_{H_n}\colon C^\infty([0,1],\Lie(H_n))\to H_n$
is discontinuous with respect to the
$C^n$-topology.
Then $G:=\prod_{n\in {\mathbb N}}^*H_n$ is a Lie group
modelled on the Mackey complete
locally convex space $\Lie(G)=\bigoplus_{n\in {\mathbb N}}\Lie(H_n)$.
It has
an evolution $\evol_G\colon C^\infty([0,1],\Lie(G))\to G$,
but $\evol_G$ fails to be continuous
and thus $G$ is not a regular Lie group
in Milnor's sense.
\end{proposition}
\section{\,Homotopy groups of ascending unions
of Lie groups}\label{sechomotop}
We have seen that all
main examples of ascending unions $G=\bigcup_{n\in {\mathbb N}}G_n$
of Lie groups admit a direct limit chart,
and thus
\begin{equation}\label{again}
\pi_k(G)\;=\; {\displaystyle\lim_{\longrightarrow}}\, \pi_k(G_n)\vspace{-.7mm}\quad\mbox{for all $\,k \in {\mathbb N}_0$}
\end{equation}
(see \ref{subsechom}).
Alternatively, many (but not all)
of them are compactly regular.
In this case, (\ref{again})
holds by an elementary argument,
but one has to pay the price
that the proof of compact regularity may require
specialized functional-analytic tools
(like Wengenroth's theorem
recalled above).
It is an interesting feature that
the approach via (weak) direct limit charts
even extends to Lie groups~$G$
in which an ascending union $\bigcup_{n\in {\mathbb N}}G_n$
is merely \emph{dense} (and to similar,
more general situations). Weak direct
limit charts (as defined in \ref{susecdlcha})
have to be replaced by certain ``well-filled
charts'' then. The precise setting
will be described now.
Besides smooth manifolds,
it applies to topological manifolds
and more general
topological spaces (like
manifolds with boundary or corners).
Given a subset~$A$ of a real vector space~$V$,
let us write $\conv_2(A):=\{tx+(1-t)y\colon x,y\in A, t\in [0,1]\}$.
\begin{definition}
{\rm Let $M$ be a topological space
and $(M_\alpha)_{\alpha\in A}$ be a directed family of
topological spaces such that
$M_\infty:=\bigcup_{\alpha\in A}M_\alpha$
is dense in~$M$ and
all inclusion maps $M_\alpha\to M$ and
$M_\alpha\to M_\beta$ (for $\alpha\leq\beta$)
are continuous.
We say that a homeomorphism
$\phi\colon U\to V\subseteq E$
from an open subset
$U \subseteq M$ onto an arbitrary subset~$V$
of a topological vector space~$E$
is a \emph{well-filled chart} of~$M$\index{well-filled chart}
if there exist
$\alpha_0\in A$
and homeomorphisms
$\phi_\alpha\colon U_\alpha\to V_\alpha\subseteq E_\alpha$
from open subsets $U_\alpha\subseteq M_\alpha$
onto subsets $V_\alpha$ of certain topological
vector spaces $E_\alpha$ for $\alpha\geq\alpha_0$
such that the following conditions
are satisfied:
\begin{description}[(D)]
\item[(a)]
$E_\alpha\subseteq E$,
$E_\alpha\subseteq E_\beta$
if $\alpha\leq \beta$
and the inclusion maps $E_\alpha\to E$
and $E_\alpha\to E_\beta$ are continuous
and linear.
\item[(b)]
For all $\alpha\geq\alpha_0$, we have $U_\alpha\subseteq U$
and $\phi|_{U_\alpha}=\phi_\alpha$.
\item[(c)]
For all $\beta\geq \alpha\geq\alpha_0$, we have
$U_\alpha\subseteq U_\beta$
and $\phi_\beta|_{U_\alpha}=\phi_\alpha$.
\item[(d)]
$U_\infty:=\bigcup_{\alpha\geq\alpha_0}U_\alpha=U\cap M_\infty$.
\item[(e)]
There exists a non-empty (relatively) open set $V^{(2)}\subseteq V$
such that $\conv_2(V^{(2)})\subseteq V$
and $\conv_2(V_\infty^{(2)})\subseteq V_\infty$,
where $V_\infty:=\bigcup_{\alpha\geq\alpha_0}V_\alpha$
and $V_\infty^{(2)}:=V^{(2)}\cap V_\infty$.
\item[(f)]
For each $\alpha\geq\alpha_0$ and compact set
$K\subseteq V_\alpha^{(2)}:=V^{(2)}\cap V_\alpha$,
there exists $\beta\geq\alpha$ such that
$\conv_2(K)\subseteq V_\beta$.
\end{description}
Then
$U^{(2)}:=\phi^{-1}(V^{(2)})$ is an open
subset of~$U$, called a \emph{core} of~$\phi$.
If cores of well-filled charts
cover~$M$, then $M$ is said to \emph{admit
well-filled charts}.}
\end{definition}
On a first reading, the
reader may find the
notion of a well-filled chart
somewhat elusive.
Special cases of particular interest
(which are more concrete and
easier to understand)
are described in \cite[Examples~1.11 and 1.12]{HOM}.
See \cite[Theorem~1.13]{HOM}
for the following result.\index{homotopy group}
\begin{theorem}\label{wellfilled}
Let $M$ be a Hausdorff topological
space containing a
directed union $M_\infty:=\bigcup_{\alpha\in A}M_\alpha$
of Hausdorff topological spaces $M_\alpha$
as a dense subset, such that all inclusion maps
$M_\alpha\to M_\beta$ $($for $\alpha\leq \beta)$
and $M_\alpha\to M$ are continuous.
If $M$ admits well-filled
charts, then
\[
\pi_k(M,p)\;=\;{\displaystyle\lim_{\longrightarrow}}\, \pi_k(M_\alpha,p)\quad\mbox{for all $\,k\in {\mathbb N}_0$
and $p\in M_\infty$.}
\]
\end{theorem}
For a typical application, let $H$ be a Lie group,
$m\in {\mathbb N}$, ${\mathcal S}({\mathbb R}^m,\Lie(H))$ be the Schwartz space
of rapidly decreasing $\Lie(H)$-valued
smooth functions on~${\mathbb R}^m$,
and ${\mathcal S}({\mathbb R}^m,H)$ be the corresponding Lie group,
as in \cite{BCR} (for special~$H$) and \cite{Wa2}.
Then $C^\infty_c({\mathbb R}^m,H)=\bigcup_{n\in {\mathbb N}}C^\infty_{[{-n},n]^m}({\mathbb R}^m,H)$
is dense in ${\mathcal S}({\mathbb R}^m,H)$, and ${\mathcal S}({\mathbb R}^m,H)$
admits well-filled charts~\cite[Example~8.4]{HOM}.
Using Theorem~\ref{wellfilled}
and approximation results from~\cite{NeG},
it is then easy to see that
\begin{eqnarray*}
\pi_k({\mathcal S}({\mathbb R}^m,H)) \, &\cong & \, {\displaystyle\lim_{\longrightarrow}}\, \pi_k(C^\infty_{[{-n},n]^m}({\mathbb R}^m,H))\;\cong \;
\pi_k(C^\infty_c({\mathbb R}^m,H))\\
\,& \cong &\,
\pi_k(C_0({\mathbb R}^m,H))\;\cong\; \pi_k(C({\mathbb S}_m,H)_*)\;\cong\;
\pi_{k+m}(H)
\end{eqnarray*}
(see \cite[Remark~8.6]{HOM}).
This had been conjectured in
\cite{BCR} and had been open since 1981.
\section{Subgroups of ascending unions and related topics}\label{secsub}
We now discuss various results concerning subgroups
of ascending unions of Lie groups (notably for
direct limits
of finite-dimensional Lie groups).\\[3mm]
{\bf Non-existence of small subgroups.}
It is an open problem whether infinite-dimensional Lie groups
may contain small torsion subgroups~\cite[p.\,293]{NeS}.
For direct limits of finite-dimensional Lie groups,
the pathology could be ruled out by proving that they do not
contain small subgroups \cite[Theorem~A]{SMA}:\index{small subgroup}
\begin{theorem}
If $G_1\subseteq G_2\subseteq \cdots$
is a direct sequence of finite-dimensional Lie groups,
then the Lie group
$G={\displaystyle\lim_{\longrightarrow}}\,G_n$\vspace{-.7mm}
does not have small subgroups.
\end{theorem}
\emph{Idea of proof.}
Given a compact identity neighbourhood
$C_1\subseteq G_1$ which does not contain non-trivial
subgroups of~$G_1$,
there exists a compact identity neighbourhood
$C_2\subseteq G_2$ with $C_1$ in its interior
relative to $G_2$, which does not contain
non-trivial subgroups of $G_2$ (see \cite[Lemma~2.1]{SMA}).
Proceeding in this way, we find
a sequence $(C_n)_{n\in {\mathbb N}}$ of
compact identity neighbourhoods
$C_n\subseteq G_n$ not containing non-trivial subgroups,
such that $C_n\subseteq C_{n+1}^0$ for each~$n$.
Then $C:=\bigcup_{n\in {\mathbb N}}C_n$
is an identity neighbourhood in~$G$
and we may hope that~$C$ does not contain non-trivial
subgroups of~$G$. Unfortunately, this
is not true in general, as the example
${\mathbb R}^{({\mathbb N})}=\bigcup_{n\in {\mathbb N}}{\mathbb R}^n=\bigcup_{n\in{\mathbb N}}C_n$
with $C_n:=[{-n},n]^n$ shows.
However, if the sets $C_n$
are chosen carefully (which requires much work),
then indeed $C$ will
not contain non-trivial
subgroups~\cite{SMA}.\vspace{2mm}\smartqed\qed
\noindent
We mention that an analogous result is available for certain
ascending unions
of infinite-dimensional Lie groups
$G_1\subseteq G_2\subseteq \cdots$ (see
\cite[Theorem~B]{SMA}).
To enable compactness arguments,
each $G_n$ has to be locally $k_\omega$
or each $G_n$ a Banach--Lie group
and the tangent map
$\Lie(\lambda_n)\colon \Lie(G_n)\to \Lie(G_{n+1})$
of the inclusion map $\lambda_n\colon
G_n\to G_{n+1}$ a compact operator.\footnote{Further technical
hypotheses need to be imposed,
which we suppress here.}\\[3mm]
{\bf Initial Lie subgroups.}
If $G$ is a Lie group and $H\subseteq G$ a subgroup, then
$H$ is called an \emph{initial Lie subgroup}\footnote{Some readers
may prefer to omit the second condition,
or allow $M$ to be a manifold with $C^k$-boundary, with corners
or (more generally) a $C^k$-manifold with rough
boundary (as introduced in \cite{GaN}).\index{initial Lie subgroup}
The following results carry over to these
varied situations (see \cite{OPE}).}
if it admits a Lie group structure
making the inclusion map $\iota \colon H\to G$
a smooth map,
such that $\Lie(\iota)$ is injective
and mappings from $C^k$-manifolds~$M$ to~$H$
are $C^k$ if and only if they are $C^k$ as
mappings to~$G$,
for each $k\in {\mathbb N}\cup\{\infty\}$.\\[2.5mm]
Answering an open problem from \cite{NeS}
in the negative, it was shown in~\cite{OPE}
that subgroups of
infinite-dimensional Lie groups
need not be initial Lie subgroups.
In fact, one can take $G={\mathbb R}^{\mathbb N}$ (with the product topology)
and $H=\ell^\infty$ (see \cite[Theorem~1.3]{OPE}).
For direct limits
of finite-dimensional Lie groups,
$G=\bigcup_{n\in {\mathbb N}}G_n$,
it was already shown in~\cite{FUN}
that every subgroup~$H\subseteq G$
admits a natural Lie group structure.
By \cite[Theorem~2.1]{OPE},
this Lie group structure makes~$H$
an initial Lie subgroup of~$G$
and thus the preceding pathology
does not occur for such
direct limit Lie groups~$G$.\vfill\pagebreak
\noindent
{\bf Continuous one-parameter groups and the
topology on {\boldmath$\Lie(G)$}.}
If $G=\bigcup_{n\in {\mathbb N}}G_n$
is a direct limit of finite-dimensional Lie groups,
then every continuous homomorphism
$({\mathbb R},+)\to G$ (i.e., each continuous one-parameter subgroup)
is a continuous homomorphism to some
$G_n$ (by compact regularity) and hence smooth.
It easily follows from this that
the natural map
\[
\theta\colon \Lie(G)\to \Hom_{\cts}({\mathbb R},G)\,,\quad
x\mapsto (t\mapsto \exp_G(tx))
\]
is a bijection onto the set $\Hom_{\cts}({\mathbb R},G)$
of continuous one-parameter subgroups of~$G$.
It was asked in \cite[Problem~VII.2]{NeS}
whether~$G$ is a \emph{topological group
with Lie algebra}\index{topological group
with Lie algebra} in the sense of \cite[Definition~2.11]{HaM}.
This holds if
$\theta$
is a homeomorphism onto
$\Hom_{\cts}({\mathbb R},G)$, equipped with the compact-open
topology
(which is not obvious because $\exp_G$ need
not be a local homeomorphism at~$0$).
As shown in \cite[Theorem~3.4]{OPE},
the latter property is always satisfied.
Thus $\Lie(G)$ is determined by the topological group
structure of~$G$. E.g., this implies
that every continuous homomorphism
from a locally exponential Lie group to~$G$
is smooth \cite[Proposition~3.7]{OPE}
(where a Lie group
is called \emph{locally exponential}\index{locally exponential Lie group}
if it has an exponential function
and the latter is a local
diffeomorphism at~$0$).\index{automatic smoothness}
It is an
open problem
whether
continuous homomorphisms between arbitrary Lie groups
are automatically smooth.
\begin{acknowledgement}
The author thanks K.-D. Bierstedt
and S.\,A. Wegner (Paderborn) for discussions
related to regularity properties
of (LF)-spaces, and J. Bonet (Valencia)
for comments which entered into Remark~\ref{bonet}.
K.-H. Neeb (Darmstadt) contributed useful comments
on an earlier version of the article.
The research was supported
by the German Research Foundation (DFG),
projects GL 357/5-1 and GL 357/7-1.
\end{acknowledgement}
% https://arxiv.org/abs/2205.13656
% Finite difference schemes for the parabolic $p$-Laplace equation
\begin{abstract}
We propose a new finite difference scheme for the degenerate parabolic equation
\[
\partial_t u - \mbox{div}(|\nabla u|^{p-2}\nabla u) = f, \quad p\geq 2.
\]
Under the assumption that the data is H\"older continuous, we establish the convergence of the explicit-in-time scheme for the Cauchy problem provided a suitable stability type CFL-condition. An important advantage of our approach is that the CFL-condition makes use of the regularity provided by the scheme to reduce the computational cost. In particular, for Lipschitz data, the CFL-condition is of the same order as for the heat equation and independent of $p$.
\end{abstract}
\section{Introduction} \label{sec:intro}
Recently, a new monotone finite difference discretization of the $p$-Laplacian was introduced by the authors in \cite{dTLi22}. It is based on the mean value property presented in \cite{dTLi20, BS18}. The aim of this paper is to propose an explicit-in-time finite difference numerical scheme for the following Cauchy problem
\begin{equation}\label{eq:ParabProb}
\begin{cases}
\partial_t u(x,t)-\ensuremath{\Delta_p} u(x,t) =f(x), & (x,t)\in \ensuremath{\mathbb{R}}^d\times(0,T),\\
u(x,0)=u_0(x), & x \in \ensuremath{\mathbb{R}}^d,
\end{cases}
\end{equation}
and study its convergence. Here, $p\geq 2$ and $\Delta_p$ is the $p$-Laplace operator,
\[
\ensuremath{\Delta_p} \psi=\mbox{div}(|\nabla \psi|^{p-2}\nabla \psi).
\]
The main result is the pointwise convergence of our scheme given H\"older continuous data ($f$ and $u_0$) and a stability type CFL-condition. See Theorem \ref{theo:main} for the precise statement and \eqref{as:CFL} for the CFL-condition. One of the advantages of our approach is that the CFL-condition makes use of the regularity provided by the scheme. As a consequence, for Lipschitz continuous data, the CFL-condition is of the same order as the one for the heat equation. In general, the order of the CFL-condition depends on $p$ and on the regularity of the data.
\subsection{Related literature} Equation \eqref{eq:ParabProb} has attracted much attention in the last decades. We refer to \cite{Db} and \cite{DGV} for the theory for weak solutions of this equation and to \cite{JLM} for the relation between viscosity solutions and weak solutions. To the best of our knowledge, the best regularity results known are $C^{1,\alpha}-$regularity in space for some $\alpha>0$ (see \cite[Chapter IX]{Db}) and $C^{0,1/2}-$regularity in time (see \cite[Theorem 2.3]{Bo}).
The literature regarding finite difference schemes for parabolic problems involving the $p$-Laplacian is quite scarce. One reason for that is naturally that, since the $p$-Laplacian is in divergence form, it is very well suited for methods based on finite elements, see for instance \cite{BL94,J00,DCR07,AGW04,FedPP-L} for related results.
In the stationary setting, there has been some development of finite difference methods over the past 20 years. Section 1.1 in \cite{ObermanpLap} provides an accurate overview of such results; here we only mention a few. In \cite{CoLeMa17,dTMP18,FFGS13,ObermanpLap}, finite difference schemes for the $p$-Laplace equation based on the mean value formula for the \emph{normalized} $p$-Laplacian (cf. \cite{MPR12a}) are considered. Since the corresponding parabolic equation for the normalized $p$-Laplacian is completely different in nature (see \cite{JuKa06,Ker11}), these methods do not seem well suited for the parabolic equation considered in this paper. In \cite{dTLi22}, the authors of the present paper studied a monotone finite difference discretization of the $p$-Laplacian based on the mean value property presented in \cite{dTLi20, BS18}. We also seize the opportunity to mention \cite{Obe05}, where difference schemes for degenerate elliptic and parabolic equations (but not for equation \eqref{eq:ParabProb}) are discussed.
It is noteworthy that, in dimension $d=1$, the spatial derivative of a solution of \eqref{eq:ParabProb} is a solution of the Porous Medium Equation (PME). See \cite{Vaz06, Vaz07} for a general presentation of the PME, and \cite{IaSaVa08} for a proof of this fact. Finite difference schemes for the PME are well known, see \cite{DiHo84,EmSi12,Mon16,dTEnJa18,dTEnJa19}.
\section{Assumptions and main results}
In this section, we introduce a general form of finite difference discretizations of $\ensuremath{\Delta_p}$ and the associated numerical scheme for \eqref{eq:ParabProb}. This is followed by our assumptions, the notion of solutions for \eqref{eq:ParabProb} and the formulation of our main result.
\subsection{Discretization and scheme}
In order to treat \eqref{eq:ParabProb}, we consider a general discretization of $\Delta_p$ of the form
\begin{equation}\label{eq:GenDisc}
D^h_p\psi(x)=\sum_{y_\beta\in \mathcal{G}_h} J_p(\psi(x+y_\beta)-\psi(x)) \omega_{\beta},
\end{equation}
where
$$J_p(\xi)=|\xi|^{p-2}\xi,\quad \xi \in \ensuremath{\mathbb{R}},\quad \mathcal{G}_h:=h\ensuremath{\mathbb{Z}}^d=\{y_\beta:= h \beta \, : \, \beta \in \ensuremath{\mathbb{Z}}^d\}$$
and $\omega_\beta$ are certain weights $\omega_\beta=\omega_\beta(h)$ satisfying $\omega_\beta=\omega_{-\beta}\geq0$.
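In dimension $d=1$, a minimal instance of this discretization can be sketched in code. The choices below ($r=h$ and two-point weights $\omega_{\pm 1}=h^{-p}$) are illustrative assumptions for the sketch, not the discretizations analyzed in Section \ref{sec:discretizations}:

```python
import numpy as np

def J_p(xi, p):
    # J_p(xi) = |xi|^{p-2} xi; for p >= 2 this is continuous, with J_p(0) = 0.
    return np.abs(xi) ** (p - 2) * xi

def D_h_p(psi, h, p):
    # One-dimensional instance of
    #   D^h_p psi(x) = sum_beta J_p(psi(x + y_beta) - psi(x)) * omega_beta
    # with the illustrative weights omega_{+1} = omega_{-1} = h^{-p} (so r = h).
    # A Taylor expansion gives J_p(h u' + ...) + J_p(-h u' + ...) ~ (p-1) h^p |u'|^{p-2} u'',
    # which is the one-dimensional p-Laplacian after division by h^p.
    out = np.zeros_like(psi, dtype=float)
    out[1:-1] = (J_p(psi[2:] - psi[1:-1], p) + J_p(psi[:-2] - psi[1:-1], p)) / h**p
    return out
```

For $p=2$ this reduces to the standard three-point Laplacian; for affine data it vanishes identically, reflecting $J_p(\xi)+J_p(-\xi)=0$.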
We also need to introduce a time discretization. We will employ an explicit and uniform-in-time discretization. Let $N\in \ensuremath{\mathbb{N}}$ and consider a discretization parameter $\tau>0$ given by
$\tau =T/N$. Consider also the sequence of times $\{t_j\}_{j=0}^{N}$ defined by $t_0=0$ and $t_j=t_{j-1}+ \tau= j\tau$. The time grid, $\mathcal{T}_\tau$, is given by
\[
\mathcal{T}_{\tau}= \bigcup_{j=0}^N \{t_j\}.
\]
Then, our general form of an explicit finite difference scheme of \eqref{eq:ParabProb} is given by
\begin{equation}\label{eq:numsch}
\begin{cases}
U^j_\alpha= U_\alpha^{j-1} +\tau\left(D_p^h U_{\alpha}^{j-1}+f_\alpha\right), & \alpha \in \ensuremath{\mathbb{Z}}^d,\, j=1,\ldots,N,\\
U^0_\alpha=(u_0)_\alpha & \alpha \in \ensuremath{\mathbb{Z}}^d,
\end{cases}
\end{equation}
where $f_\alpha:=f(x_\alpha)$, $(u_0)_\alpha=u_0(x_\alpha)$ and $D_p^h$ is given by \eqref{eq:GenDisc}.
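A self-contained sketch of the explicit iteration \eqref{eq:numsch} in dimension $d=1$ follows; the two-point weights $\omega_{\pm1}=h^{-p}$ and the truncation of the grid (the discretization is simply set to zero at the two boundary nodes) are illustrative assumptions, not part of the scheme as analyzed:

```python
import numpy as np

def solve_explicit(u0, f, h, tau, n_steps, p):
    # U^j_alpha = U^{j-1}_alpha + tau * (D^h_p U^{j-1}_alpha + f_alpha), U^0 = u0,
    # in dimension d = 1 with the illustrative weights omega_{+-1} = h^{-p}.
    # u0 and f are samples of the data on a uniform grid of mesh size h.
    def J_p(xi):
        return np.abs(xi) ** (p - 2) * xi
    U = u0.astype(float).copy()
    for _ in range(n_steps):
        D = np.zeros_like(U)  # grid truncation: D^h_p is set to 0 at the end nodes
        D[1:-1] = (J_p(U[2:] - U[1:-1]) + J_p(U[:-2] - U[1:-1])) / h**p
        U = U + tau * (D + f)
    return U
```

For stability, $\tau$ must obey the restriction \eqref{as:CFL}.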
\subsection{Assumptions}
In order to ensure convergence of the scheme \eqref{eq:numsch}, we impose the following hypotheses on the data and the discretization parameters. This entails a regularity assumption on the data, some assumptions on the discretization and a nonlinear CFL-condition on the parameters, as is customary for explicit schemes.
\medskip
\noindent\textbf{Hypothesis on the data. } We assume that
\begin{equation}\label{as:u0f}\tag{$\textup{A}_{u_0,f}$}
\textup{$u_0, f:\ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}$ are bounded and globally H\"older continuous with exponent $a\in(0,1]$.
}
\end{equation}
More precisely,
\[
|u_0(x)-u_0(y)|\leq L_{u_0}|x-y|^a \quad \textup{and} \quad |f(x)-f(y)|\leq L_{f}|x-y|^a, \qquad \textup{for all} \quad x,y\in \ensuremath{\mathbb{R}}^d,
\]
for some constants $L_{u_0},L_{f}\geq0$. Sometimes we will write
$\Lambda_{u_0}(\delta):=L_{u_0}\delta^a$ and $\Lambda_{f}(\delta):=L_{f}\delta^a
$
to simplify the presentation.
\medskip
\noindent\textbf{Hypothesis on the spatial discretization.}
For the discretization, we assume the following type of monotonicity and boundedness:
\begin{equation}\label{as:disc}\tag{$\textup{A}_\omega$}
\textup{$\omega_\beta=\omega_{-\beta}\geq0$, $\omega_\beta=0$ for $y_\beta \not\in B_r$ for some $r>0$, and $\sum_{y_\beta\in \mathcal{G}_h}\omega_\beta\leq M r^{-p}$.}
\end{equation}
Here $M=M(p,d)>0$.
In addition, we assume the following consistency for the discretization:
\begin{equation}\label{as:cons}\tag{$\textup{A}_{c}$}
\textup{For $\psi \in C^2_b(\ensuremath{\mathbb{R}}^d\times[0,T])$, we have that $D^h_p\psi=\ensuremath{\Delta_p} \psi+o_h(1)$ as $h\to0^+$ uniformly in $(x,t)$.}
\end{equation}
Examples of discretizations satisfying these properties can be found in Section \ref{sec:discretizations}.
\medskip
\noindent\textbf{Hypothesis on the discretization parameters.}
We assume the following stability condition on the numerical parameters:
\begin{equation}\label{as:CFL}\tag{$\textup{CFL}$}
h=o_r(1) \quad \textup{and} \quad \tau\leq C r^{2+(1-a)(p-2)}
\end{equation}
with
\[
C=\min\left\{1, \frac{1}{M (p-1)\left(L_{u_0}+TL_f +3\tilde{K}+1\right)^{p-2}}\right\}
\]
and $\tilde{K}$ a constant given in \eqref{eq:ctefam}, depending on $p$, the modulus of continuity in time of the discretized solution and some universal constants coming from a mollifier.
\begin{remark} For Lipschitz data $u_0$ and $f$, the condition \eqref{as:CFL} reads $\tau \leq{Cr^2}$ for a certain constant $C=C(u_0,f,d,p,T)>0$. We note that, regardless of the constant $C$, the relation between $\tau$ and $r$ is always quadratic (as in the linear case $p=2$) and independent of $p$. It is important to mention that this is computationally very relevant, especially if we want to deal with problems related to large $p$.
\end{remark}
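The order of \eqref{as:CFL} in $r$ is easy to tabulate. In the sketch below the constant $C$ is a placeholder set to $1$ (the actual constant involves $M$, $p$, $L_{u_0}$, $L_f$, $T$ and $\tilde K$); only the exponent is meaningful:

```python
def cfl_time_step(r, p, a, C=1.0):
    # tau <= C * r^{2 + (1-a)(p-2)}  (condition (CFL)).
    # C = 1.0 is a placeholder for the constant of the text.
    assert p >= 2 and 0 < a <= 1 and r > 0
    return C * r ** (2 + (1 - a) * (p - 2))
```

For Lipschitz data ($a=1$) the exponent is $2$ for every $p$, as noted in the remark above; for $a<1$ the admissible step shrinks as $p$ grows.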
\subsection{Main result} We now state our main result regarding the convergence of the scheme. Several other properties of the scheme are also obtained, but we will state them later.
\begin{theorem}\label{theo:main}
Let $p\in[2,\infty)$ and assume \eqref{as:u0f} and \eqref{as:disc}. Then
for every $h,\tau>0$, there exists a unique solution $U\in \ell^\infty(\mathcal{G}_h\times \mathcal{T}_\tau)$ of \eqref{eq:numsch}.
If in addition, \eqref{as:CFL} and \eqref{as:cons} hold, then
\[
\max_{(x_\alpha,t_j)\in \mathcal{G}_h \times \mathcal{T}_\tau}|U_\alpha^j- u(x_\alpha,t_j)|\to 0 \quad \textup{as} \quad h\to0^+,
\]
where $u$ is the unique viscosity solution of \eqref{eq:ParabProb}.
\end{theorem}
\subsection{Viscosity solutions} Throughout the paper, we will use the notion of viscosity solutions. For completeness, we define the concept of viscosity solutions of \eqref{eq:ParabProb}, adopting the definition in \cite{JLM}.
\begin{definition}
Assume \eqref{as:u0f}. We say that a bounded lower (\textup{resp.} upper) semicontinuous function $u$ in $\ensuremath{\mathbb{R}}^d\times[0,T]$ is a \textup{viscosity supersolution} (\textup{resp.} \textup{subsolution}) of \eqref{eq:ParabProb} if
\begin{enumerate}[(a)]
\item $u(x,0)\geq u_0(x)$ (resp. $u(x,0)\leq u_0(x)$);
\item whenever $(x_0,t_0)\in \ensuremath{\mathbb{R}}^d\times (0,T)$ and $\varphi\in C^2_b(B_R(x_0)\times(t_0-R,t_0+R))$ for some $R>0$ are such that $\varphi(x_0,t_0)=u(x_0,t_0)$ and $\varphi(x,t)< u(x,t)$ (\text{resp.} $\varphi(x,t)> u(x,t)$) for $(x,t) \in B_R(x_0)\times(t_0-R,t_0) $, then we have
\[
\varphi_t(x_0,t_0)-\ensuremath{\Delta_p} \varphi(x_0,t_0)\geq f(x_0) \quad (\text{resp.} \quad \varphi_t(x_0,t_0)-\ensuremath{\Delta_p} \varphi(x_0,t_0)\leq f(x_0) ).
\]
\end{enumerate}
A \textup{viscosity solution} of \eqref{eq:ParabProb} is a bounded continuous function $u$ being both a viscosity supersolution and a viscosity subsolution of \eqref{eq:ParabProb}.
\end{definition}
\begin{remark} We remark that it is not necessary to require strict inequality in the definition above. It is enough to require $\varphi(x,t)\leq u(x,t)$ (\text{resp.} $\varphi(x,t)\geq u(x,t)$) for $(x,t) \in B_R(x_0)\times(t_0-R,t_0) $.
\end{remark}
We also state a necessary uniqueness result that will ensure convergence of the scheme. Without such a result, we would only be able to establish convergence up to a subsequence. The theorem below is a consequence of the fact that viscosity solutions are weak solutions (see Corollary 4.7 in \cite{JLM}) and that bounded weak solutions are unique (see Theorem 6.1 in \cite{Db}).
\begin{theorem}\label{teo:main2} Assume \eqref{as:u0f}. Then there is a unique solution of \eqref{eq:ParabProb}.
\end{theorem}
\section{Properties of the numerical scheme}
In this section we will study properties of the numerical scheme \eqref{eq:numsch}. More precisely, we establish existence and uniqueness for the numerical solution, stability in maximum norm, as well as conservation of the modulus of continuity of the data.
\subsection{Existence and uniqueness}
We have the following existence and uniqueness result for the numerical scheme.
\begin{proposition}
Assume \eqref{as:u0f}, \eqref{as:disc}, $p\geq2$ and $r,h,\tau>0$. Then there exists a unique solution $U\in \ell^\infty(\mathcal{G}_h \times \mathcal{T}_{\tau})$ of the scheme \eqref{eq:numsch}.
\end{proposition}
\begin{proof}
First we note that, for a function $\psi\in \ell^\infty(\mathcal{G}_h)$, we have that
\[
|D^h_p\psi(x_\alpha)|\leq \sum_{y_\beta\in \mathcal{G}_h} |J_p(\psi(x_\alpha+y_\beta)-\psi(x_\alpha))| \omega_{\beta} \leq (2\|\psi\|_{\ell^\infty(\mathcal{G}_h)})^{p-1}\sum_{y_\beta\in \mathcal{G}_h}\omega_{\beta}<+\infty.
\]
Then, for each $\alpha\in \ensuremath{\mathbb{Z}}^d$, $U^j_\alpha$ is defined recursively using the values of $U^{j-1}_\beta$ for $\beta\in \ensuremath{\mathbb{Z}}^d$, and we have that
\[
\sup_{y_\alpha\in \mathcal{G}_h}|U^j_\alpha|\leq \sup_{y_\alpha\in \mathcal{G}_h}|U^{j-1}_\alpha| + \tau \left(\left(2\sup_{y_\alpha\in \mathcal{G}_h}|U^{j-1}_\alpha|\right)^{p-1} \sum_{y_\beta\in \mathcal{G}_h}\omega_{\beta} + \sup_{y_\alpha\in \mathcal{G}_h}|f_\alpha|\right).
\]
The conclusion follows since
\[
\sup_{y_\alpha\in \mathcal{G}_h}|f_\alpha|\leq \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} \quad \textup{and} \quad \sup_{y_\alpha\in \mathcal{G}_h}|U^0_\alpha|=\sup_{y_\alpha\in \mathcal{G}_h}|u_0(y_\alpha)|\leq \|u_0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}.
\]
\end{proof}
\subsection{Stability and preservation of the modulus of continuity in space}
First we will prove that the scheme preserves the regularity of the data.
\begin{proposition}\label{prop:regusch} Assume \eqref{as:u0f}, \eqref{as:disc}, $p\geq2$, $r, h, \tau >0$ and \eqref{as:CFL}. Let $U$ be the solution of \eqref{eq:numsch}. For every $j=0,\ldots,N$, we have
\[
|U^j_\alpha-U^j_\gamma| \leq \Lambda_{u_0}(|x_\alpha-x_\gamma|) + t_j \Lambda_f(|x_\alpha-x_\gamma|), \quad \textup{for all} \quad x_\alpha,x_\gamma\in \mathcal{G}_h.
\]
\end{proposition}
\begin{remark}
In particular, if both $u_0$ and $f$ are Lipschitz functions with constants $L_{u_0}$ and $L_f$ respectively, the above result reads,
\[
|U^j_\alpha-U^j_\gamma| \leq (L_{u_0}+t_j L_f)|x_\alpha-x_\gamma|.
\]
\end{remark}
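The Lipschitz case of Proposition \ref{prop:regusch} can be observed numerically in a toy setting. The sketch below uses illustrative assumptions (dimension $d=1$, periodic data, two-point weights $\omega_{\pm1}=h^{-p}$, $f=0$) and monitors the discrete Lipschitz seminorm $\max_\alpha|U^j_{\alpha+1}-U^j_\alpha|/h$, which should not increase once $\tau$ obeys a CFL-type restriction:

```python
import numpy as np

def step_periodic(U, h, tau, p):
    # One explicit step of the scheme on a periodic grid (f = 0), with the
    # illustrative one-dimensional weights omega_{+-1} = h^{-p}.
    J_p = lambda xi: np.abs(xi) ** (p - 2) * xi
    D = (J_p(np.roll(U, -1) - U) + J_p(np.roll(U, 1) - U)) / h**p
    return U + tau * D

def lip_seminorm(U, h):
    # Discrete Lipschitz seminorm max_alpha |U_{alpha+1} - U_alpha| / h.
    return np.max(np.abs(np.roll(U, -1) - U)) / h
```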
\begin{proof}[Proof of Proposition \ref{prop:regusch}]
By assumption \eqref{as:u0f}, for any given $x_\alpha,x_\gamma\in \mathcal{G}_h$, we have that
\[
|U^0_\alpha-U^0_\gamma|= |u_0(x_\alpha)-u_0(x_\gamma)|\leq \Lambda_{u_0}(|x_\alpha-x_\gamma|).
\]
Assume by induction that
\[
|U^{j}_\alpha-U^{j}_\gamma|\leq \Lambda_{u_0}(|x_\alpha-x_\gamma|) + t_j \Lambda_f(|x_\alpha-x_\gamma|).
\]
Using the scheme at $x_\alpha$ and $x_\gamma$ we get
\[
U^{j+1}_\alpha-U^{j+1}_\gamma= U^{j}_\alpha-U^{j}_\gamma+\tau \sum_{y_\beta\in \mathcal{G}_h} \left(J_p( U^{j}_{\alpha+\beta}-U^{j}_{\alpha}) - J_p( U^{j}_{\gamma+\beta}-U^{j}_{\gamma})\right) \omega_\beta + \tau (f_\alpha-f_\gamma).
\]
Now, since $p\geq2$, we have, by Taylor expansion, that
\[
J_p( U^{j}_{\alpha+\beta}-U^{j}_{\alpha}) - J_p( U^{j}_{\gamma+\beta}-U^{j}_{\gamma})=(p-1)|\eta_\beta|^{p-2} \left((U^{j}_{\alpha+\beta}-U^{j}_{\gamma+\beta})-(U^{j}_{\alpha}-U^{j}_{\gamma})\right),
\]
for some $\eta_\beta\in \ensuremath{\mathbb{R}}$ between $(U^{j}_{\alpha+\beta}-U^{j}_{\alpha})$ and $(U^{j}_{\gamma+\beta}-U^{j}_{\gamma})$. Thus,
\begin{equation}\label{eq:pres1}
\begin{split}
U^{j+1}_\alpha-U^{j+1}_\gamma=& (U^{j}_\alpha-U^{j}_\gamma)\left(1-\tau (p-1) \sum_{y_\beta\in \mathcal{G}_h} |\eta_\beta|^{p-2} \omega_\beta \right)\\
&+\tau (p-1) \sum_{y_\beta\in \mathcal{G}_h} |\eta_\beta|^{p-2} (U^{j}_{\alpha+\beta}-U^{j}_{\gamma+\beta}) \omega_\beta + \tau (f_\alpha-f_\gamma).
\end{split}
\end{equation}
Now observe that, by the induction assumption, we have
\[
|\eta_\beta|\leq \sup_{y_\alpha \in \mathcal{G}_h} \{|U^{j}_{\alpha+\beta}-U^{j}_{\alpha}|\}\leq \sup_{y_\alpha \in \mathcal{G}_h} \{\Lambda_{u_0}(|x_{\alpha+\beta}-x_\alpha|) + t_j \Lambda_f(|x_{\alpha+\beta}-x_\alpha|)\}= \Lambda_{u_0}(| x_{\beta}|) + t_j \Lambda_f( |x_{\beta}|).
\]
By \eqref{as:disc}, we have $\omega_\beta=0$ for $y_\beta \not\in B_r$, and we deduce that
\[
\sum_{y_\beta\in \mathcal{G}_h} |\eta_\beta|^{p-2} \omega_\beta \leq \left(\Lambda_{u_0}(r) + t_j \Lambda_f(r)\right)^{p-2}\sum_{y_\beta\in \mathcal{G}_h}\omega_{\beta}\leq \frac{(L_{u_0}+t_j L_f)^{p-2}M}{ r^{2+(1-a)(p-2)}}.
\]
Thus, by \eqref{as:CFL}, we get
\[
\tau (p-1) \sum_{y_\beta\in \mathcal{G}_h} |\eta_\beta|^{p-2} \omega_\beta\leq 1.
\]
Using the above estimate and the induction hypothesis in \eqref{eq:pres1}, we get that
\begin{equation*}
\begin{split}
|U^{j+1}_\alpha-U^{j+1}_\gamma|\leq& |U^{j}_\alpha-U^{j}_\gamma|\left(1-\tau (p-1) \sum_{y_\beta\in \mathcal{G}_h} |\eta_\beta|^{p-2} \omega_\beta \right)\\
&+\tau (p-1) \sum_{y_\beta\in \mathcal{G}_h} |\eta_\beta|^{p-2} |U^{j}_{\alpha+\beta}-U^{j}_{\gamma+\beta}| \omega_\beta + \tau |f_\alpha-f_\gamma|\\
\leq& \left(\Lambda_{u_0}(|x_\alpha-x_\gamma|) + t_j \Lambda_f(|x_\alpha-x_\gamma|)\right) \left(1-\tau (p-1) \sum_{y_\beta\in \mathcal{G}_h} |\eta_\beta|^{p-2} \omega_\beta \right)\\
&+\tau (p-1) \sum_{y_\beta\in \mathcal{G}_h} |\eta_\beta|^{p-2} \left(\Lambda_{u_0}(|x_{\alpha+\beta}-x_{\gamma+\beta}|) + t_j \Lambda_f(|x_{\alpha+\beta}-x_{\gamma+\beta}|)\right) \omega_\beta \\
&+ \tau \Lambda_f (|x_\alpha-x_\gamma|)\\
\leq& \left(\Lambda_{u_0}(|x_\alpha-x_\gamma|) + t_j \Lambda_f(|x_\alpha-x_\gamma|)\right) \left(1-\tau (p-1) \sum_{y_\beta\in \mathcal{G}_h} |\eta_\beta|^{p-2} \omega_\beta \right)\\
&+\tau (p-1) \left(\Lambda_{u_0}(|x_\alpha-x_\gamma|) + t_j \Lambda_f(|x_\alpha-x_\gamma|)\right) \sum_{y_\beta\in \mathcal{G}_h} |\eta_\beta|^{p-2} \omega_\beta \\
&+ \tau \Lambda_f (|x_\alpha-x_\gamma|)\\
=& \Lambda_{u_0}(|x_\alpha-x_\gamma|) + (t_j+\tau) \Lambda_f(|x_\alpha-x_\gamma|),
\end{split}
\end{equation*}
which concludes the proof.
\end{proof}
We are now ready to state and prove the stability result: solutions with bounded data remain bounded (uniformly in the discretization parameters) for all times.
\begin{proposition}\label{prop:stab}
Under the assumptions of Proposition \ref{prop:regusch}, we have that
\[
\sup_{y_\alpha\in \mathcal{G}_h} |U^j_\alpha|\leq \|u_0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} + t_j \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}, \quad \textup{for all} \quad j=0,\ldots, N.
\]
\end{proposition}
\begin{proof}
By assumption \eqref{as:u0f}, we have that
\[
\sup_{y_\alpha\in \mathcal{G}_h}|U^0_\alpha|\leq \sup_{y_\alpha\in \mathcal{G}_h}|u_0(x_\alpha)| \leq \|u_0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}.
\]
Assume by induction that
\[
\sup_{y_\alpha\in \mathcal{G}_h} |U^j_\alpha|\leq \|u_0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} + t_j \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}.
\]
Direct computations lead to
\[
\begin{split}
U^{j+1}_\alpha&=U^{j}_\alpha + \tau \sum_{y_\beta \in \mathcal{G}_h} |U^{j}_{\alpha+\beta}-U^{j}_\alpha|^{p-2}(U^{j}_{\alpha+\beta}-U^{j}_\alpha)\omega_\beta+ \tau f_\alpha\\
&=U^{j}_\alpha\left(1-\tau \sum_{y_\beta \in \mathcal{G}_h} |U^{j}_{\alpha+\beta}-U^{j}_\alpha|^{p-2} \omega_\beta \right) + \tau \sum_{y_\beta \in \mathcal{G}_h} |U^{j}_{\alpha+\beta}-U^{j}_\alpha|^{p-2}U^{j}_{\alpha+\beta}\omega_\beta+ \tau f_\alpha.
\end{split}
\]
By Proposition \ref{prop:regusch} we have that
\[
|U^{j}_{\alpha+\beta}-U^{j}_\alpha|^{p-2}\leq (\Lambda_{u_0}(|y_\beta|) + t_j \Lambda_f(|y_\beta|))^{p-2},
\]
which together with assumptions \eqref{as:disc} and \eqref{as:CFL} imply that
\[
\tau \sum_{y_\beta \in \mathcal{G}_h} |U^{j}_{\alpha+\beta}-U^{j}_\alpha|^{p-2} \omega_\beta \leq \tau (\Lambda_{u_0}(r) + t_j \Lambda_f(r))^{p-2}\sum_{y_\beta \in \mathcal{G}_h} \omega_\beta \leq \frac{1}{p-1}\leq 1.
\]
Direct computations plus the induction hypothesis allow us to conclude that
\[
\begin{split}
|U^{j+1}_\alpha|\leq&\sup_{y_\alpha\in \mathcal{G}_h} |U^{j}_\alpha|\left(1-\tau \sum_{y_\beta \in \mathcal{G}_h} |U^{j}_{\alpha+\beta}-U^{j}_\alpha|^{p-2} \omega_\beta \right) \\
&+ \tau \sup_{y_\alpha\in \mathcal{G}_h} |U^{j}_\alpha| \sum_{y_\beta \in \mathcal{G}_h} |U^{j}_{\alpha+\beta}-U^{j}_\alpha|^{p-2}\omega_\beta+ \tau \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}\\
=&\sup_{y_\alpha\in \mathcal{G}_h} |U^{j}_\alpha| + \tau \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}\\
=& \|u_0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} + (t_j +\tau) \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)},
\end{split}
\]
which concludes the proof.
\end{proof}
\subsection{Time equicontinuity for a discrete in time scheme}
Now we extend the scheme from $\mathcal{G}_h$ to $\ensuremath{\mathbb{R}}^d$ by considering $U:\ensuremath{\mathbb{R}}^d\times \mathcal{T}_{\tau}\to \ensuremath{\mathbb{R}}$ defined by
\begin{equation}\label{eq:numsch_ext}
\begin{cases}
U^j(x)= U^{j-1}(x) + \tau \left(D_p^h U^{j-1}(x)+f(x)\right), & x \in \ensuremath{\mathbb{R}}^d,\, j=1,\ldots,N,\\
U^0(x)=u_0(x) & x \in \ensuremath{\mathbb{R}}^d.
\end{cases}
\end{equation}
\begin{remark}
Clearly, if we restrict the solution of \eqref{eq:numsch_ext} to $\mathcal{G}_h$, we recover the solution of \eqref{eq:numsch}.
\end{remark}
\begin{proposition}[Continuous dependence on the data]\label{prop:contdep}
Assume \eqref{as:u0f}, \eqref{as:disc}, $p\geq2$, $r, h,\tau >0$ and \eqref{as:CFL}. Let $U,\widetilde{U}$ be the solutions of \eqref{eq:numsch_ext} corresponding to the data $u_0, \widetilde{u}_0$ and $f,\widetilde{f}$, respectively. For every $j=0,\ldots,N$, we have
\[
\|U^j-\widetilde{U}^j\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} \leq \|u_0- \widetilde{u}_0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} + t_j \|f- \widetilde{f}\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}. \]
\end{proposition}
\begin{proof}
By assumption \eqref{as:u0f}, we have that
\[
\|U^0-\widetilde{U}^0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}= \|u_0-\widetilde{u}_0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}.
\]
Assume by induction that
\[
\|U^j-\widetilde{U}^j\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}\leq \|u_0-\widetilde{u}_0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} + t_j \|f-\widetilde{f}\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}.
\]
Similar computations as the ones in the proof of Proposition \ref{prop:regusch} yield
\begin{equation}\label{eq:pres2}
\begin{split}
U^{j+1}(x)-&\widetilde{U}^{j+1}(x)= (U^{j}(x)-\widetilde{U}^{j}(x))\left(1-\tau (p-1) \sum_{y_\beta\in \mathcal{G}_h} |\eta_\beta|^{p-2} \omega_\beta \right)\\
&+\tau (p-1) \sum_{y_\beta\in \mathcal{G}_h} |\eta_\beta|^{p-2} (U^{j}(x+y_\beta)-\widetilde{U}^{j}(x+y_\beta)) \omega_\beta + \tau (f(x)-\widetilde{f}(x)),
\end{split}
\end{equation}
where $\eta_\beta\in \ensuremath{\mathbb{R}}$ is some number between $(U^{j}(x+y_\beta)-U^{j}(x))$ and $(\widetilde{U}^{j}(x+y_\beta)-\widetilde{U}^{j}(x))$. From here, the proof follows as in the proof of Proposition \ref{prop:regusch}.
\end{proof}
\begin{proposition}[Equicontinuity in time]\label{prop:equitime}
Assume \eqref{as:u0f}, \eqref{as:disc}, $p\geq2$, $r,h,\tau >0$ and \eqref{as:CFL}. Let $U$ be the solution of \eqref{eq:numsch_ext}. Then
\[
\begin{split}
\|U^{j+k}-U^{j}\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} &\leq \widetilde{K} (t_k)^{\frac{a}{2+(1-a)(p-2)}} + \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}t_k=:\overline{\Lambda}_{u_0,f}(t_k),
\end{split}
\]
with
\begin{equation}\label{eq:ctefam}
\tilde{K} = 4^{\frac{1+(1-a)(p-1)}{2+(1-a)(p-2)}} L_{u_0}^{\frac{p}{2+(1-a)(p-2)}}((p-1) K_1^{p-2}K_2 M)^\frac{a}{2+(1-a)(p-2)},
\end{equation}
where $M$ comes from assumption \eqref{as:disc}, and $K_1$ and $K_2$ are constants given in Section \ref{sec:ctes} (depending on a certain choice of mollifiers).
\end{proposition}
\begin{remark}
Actually, a close inspection of the proof reveals that for $u_0\in C^2_b(\ensuremath{\mathbb{R}}^d)$ we can get
\[
\|U^{j+k}-U^{j}\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} \lesssim t_k.
\]
\end{remark}
\begin{proof}
Consider a mollification of the initial data $u_{0,\delta}=u_0* \rho_\delta$ where $\rho_\delta(x)$ is a standard mollifier (as defined in Appendix \ref{sec:moll}). Let $(U_\delta)^j$ be the corresponding solution of \eqref{eq:numsch_ext} with $u_{0,\delta}$ as initial data. Then,
\[
\|(U_\delta)^1-(U_\delta)^0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} \leq \tau\|D_p^hu_{0,\delta}\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} + \tau \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}.
\]
Define $\widetilde{U}^j_\delta:= U^{j+1}_\delta$ for all $j=0,\ldots,N-1$. Clearly, $\widetilde{U}^j_\delta$ is the unique solution of \eqref{eq:numsch_ext} with initial data $\widetilde{U}^0_\delta=U^{1}_\delta$ and right-hand side $f$. By Proposition \ref{prop:contdep},
\[
\begin{split}
\|U^{j+1}_\delta-U^j_\delta\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} &=\|\widetilde{U}^{j}_\delta-U^j_\delta\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} \leq \|\widetilde{U}^{0}_\delta-U^0_\delta\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}=\|U^1_\delta-U^0_\delta\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} \\
&\leq \tau\|D_p^hu_{0,\delta}\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} + \tau \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}.
\end{split}
\]
A repeated use of the triangle inequality yields
\begin{equation}\label{eq:triangle}
\begin{split}
\|U^{j+k}_\delta-U^j_\delta\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}&\leq \sum_{i=0}^{k-1} \|U^{j+i+1}_\delta-U^{j+i}_\delta\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}\leq(k\tau) \|D_p^hu_{0,\delta}\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} + (k\tau) \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}.
\end{split}
\end{equation}
The symmetry of the weights $\omega_\beta$ together with Lemma \ref{lem:pineq1} implies
\begin{equation}\label{eq:Dp}
\begin{split}
|D_p^hu_{0,\delta}(x)|&=\frac{1}{2} \left|\sum_{y_\beta\in \mathcal{G}_h} \left(J_p(u_{0,\delta}(x+y_\beta)-u_{0,\delta}(x))-J_p(u_{0,\delta}(x)-u_{0,\delta}(x-y_\beta))\right) \omega_{\beta}\right|\\
&\leq \frac{p-1}{2} \sum_{y_\beta\in \mathcal{G}_h}
\max\{|u_{0,\delta}(x+y_\beta)-u_{0,\delta}(x)|,|u_{0,\delta}(x)-u_{0,\delta}(x-y_\beta)|\}^{p-2} \times\\
&
\hspace{5cm}\times\left|u_{0,\delta}(x+y_\beta)+u_{0,\delta}(x-y_\beta)-2u_{0,\delta}(x) \right|\omega_\beta.
\end{split}
\end{equation}
Now note that, by the $a$-H\"older regularity of $u_0$ given by assumption \eqref{as:u0f}, Lemma \ref{lem:a1} and Lemma \ref{lem:a2} imply
\begin{equation}\label{eq:mollest}
|u_{0,\delta}(x\pm y_\beta)-u_{0,\delta}(x)|\leq K_1L_{u_0}\delta^{a-1}|y_\beta|, \quad |u_{0,\delta}(x+y_\beta)+u_{0,\delta}(x-y_\beta)-2u_{0,\delta}(x) |\leq K_2 L_{u_0} \delta^{a-2}|y_\beta|^2,
\end{equation}
where $K_1$ and $K_2$ depend only on the mollifier $\rho$. Now note that, by \eqref{as:disc}, we have
\begin{equation}\label{eq:weightest}
\sum_{y_\beta\in \mathcal{G}_h} |y_\beta|^p\omega_\beta \leq M.
\end{equation}
Combining \eqref{eq:triangle}, \eqref{eq:Dp}, \eqref{eq:mollest} and \eqref{eq:weightest}, we obtain
\[
\begin{split}
\|U^{j+k}_\delta-U^j_\delta\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}&\leq \frac{p-1}{2} t_k (K_1L_{u_0}\delta^{a-1})^{p-2} K_2 L_{u_0} \delta^{a-2}\sum_{y_\beta\in \mathcal{G}_h} |y_\beta|^p\omega_\beta+ t_k \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}\\
&\leq \widehat{K} \delta^{(a-1)(p-2)+(a-2)}t_k+ \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}t_k,
\end{split}
\]
with $\widehat{K}=\frac{p-1}{2} K_1^{p-2}K_2 L_{u_0}^{p-1}M$. Using the triangle inequality, the above estimate and applying Proposition \ref{prop:contdep} several times we obtain
\[
\begin{split}
\|U^{j+k}-U^{j}\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}&\leq \|U^{j+k}-U^{j+k}_\delta\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}+\|U^{j+k}_\delta-U^{j}_\delta\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}+\|U^{j}-U^{j}_\delta\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}\\
&\leq 2\|u_0-u_{0,\delta}\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}+\widehat{K} \delta^{(a-1)(p-2)+(a-2)}t_k+ \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}t_k\\
&\leq 2 L_{u_0}\delta^a +\widehat{K} \delta^{(a-1)(p-2)+(a-2)}t_k+ \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}t_k.
\end{split}
\]
By choosing $\delta=(\frac{\widehat{K}}{2L_{u_0}}t_k)^{\frac{1}{2+(1-a)(p-2)}}$ in the above estimate, we get the desired result
\[
\|U^{j+k}-U^{j}\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} \leq \tilde{K} (t_k)^{\frac{a}{2+(1-a)(p-2)}} + \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}t_k,
\]
with
\[
\begin{split}
\tilde{K}&=4L_{u_0} \left(\frac{\widehat{K}}{2L_{u_0}} \right)^\frac{a}{2+(1-a)(p-2)}\\
&=4^{1-\frac{a}{2+(1-a)(p-2)}}L_{u_0}((p-1) K_1^{p-2}K_2 L_{u_0}^{p-2}M)^\frac{a}{2+(1-a)(p-2)}\\
&= 4^{\frac{2+(1-a)(p-2) -a}{2+(1-a)(p-2)}} L_{u_0}^{\frac{p}{2+(1-a)(p-2)}}((p-1) K_1^{p-2}K_2 M)^\frac{a}{2+(1-a)(p-2)}.
\end{split}
\]
\end{proof}
\subsection{Equiboundedness and equicontinuity estimates for a scheme in $\ensuremath{\mathbb{R}}^d\times[0,T]$}
We now need to extend the numerical scheme in time in a continuous way. This is done by piecewise-linear interpolation in time, i.e.,
\begin{equation}\label{eq:interp}
U(x,t):= \frac{t_{j+1}-t}{\tau} U^j(x)+ \frac{t-t_j}{\tau}U^{j+1}(x) \quad \textup{if} \quad t\in [t_j,t_{j+1}] \quad \textup{for some} \quad j=0,\ldots,N,
\end{equation}
where $U^j$ is the solution of \eqref{eq:numsch_ext}.
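A sketch of this interpolation (with the time slices $U^j$ stored as the rows of an array, an illustrative data layout):

```python
import numpy as np

def interp_in_time(U_slices, tau, t):
    # Piecewise-linear interpolation in time:
    # U(x,t) = ((t_{j+1}-t)/tau) U^j(x) + ((t-t_j)/tau) U^{j+1}(x), t in [t_j, t_{j+1}],
    # where U_slices[j] holds the grid values of U^j and t_j = j * tau.
    j = min(int(t / tau), len(U_slices) - 2)  # clamp so that j+1 is a valid slice
    lam = (t - j * tau) / tau
    return (1.0 - lam) * U_slices[j] + lam * U_slices[j + 1]
```

By construction $U(\cdot,t_j)=U^j$, and for $t\in[t_j,t_{j+1}]$ the value is a convex combination of the two neighbouring slices.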
\begin{remark}\label{rem:schemenotimegrid}
It is standard to check that, for all $t\in[t_j,t_{j+1}]$, we have that the original scheme is preserved also outside the grid points, i.e.,
\begin{equation}
U(x,t)=U(x,t_j)+(t-t_j) D_p^h U(x,t_j)+ (t-t_j) f(x).
\end{equation}
\end{remark}
We have the following result.
\begin{proposition}[Stability and equicontinuity]\label{prop:stabcont}
Assume \eqref{as:u0f}, \eqref{as:disc}, $p\geq2$, $r,h,\tau>0$ and \eqref{as:CFL}. Let $U$ be the solution of \eqref{eq:interp}. Then
\begin{enumerate}[(a)]
\item \emph{(Equiboundedness)} $\|U\|_{L^\infty(\ensuremath{\mathbb{R}}^d\times[0,T])} \leq \|u_0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} +T \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} $,
\item \emph{(Equicontinuity)} For any $x,z\in \ensuremath{\mathbb{R}}^d$ and $t,\tilde{t}\in [0,T]$ we have that
\[
|U(x,t)-U(z,\tilde{t})| \leq \Lambda_{u_0}(|x-z|) +T \Lambda_{f}(|x-z|) + 3 \overline{\Lambda}_{u_0,f}(|\tilde{t}-t|).
\]
\end{enumerate}
\end{proposition}
\begin{proof}
Equiboundedness follows easily from a continuous in space version of Proposition \ref{prop:stab}, since
\[
\begin{split}
|U(x,t)|&\leq \frac{t_{j+1}-t}{\tau} \sup_{x\in \ensuremath{\mathbb{R}}^d} |U^j(x)|+ \frac{t-t_j}{\tau} \sup_{x\in \ensuremath{\mathbb{R}}^d}|U^{j+1}(x)|\\
& \leq \frac{t_{j+1}-t}{\tau} \left( \|u_0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} + T \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}\right)+ \frac{t-t_j}{\tau} \left( \|u_0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} + T \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}\right)\\
&\leq \|u_0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} +T \|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}.
\end{split}
\]
Equicontinuity in space follows from the translation invariance of the scheme and Proposition \ref{prop:contdep}:
\[
|U(x+y,t)-U(x,t)|\leq \|u_0(\cdot+y)-u_0\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} +T \|f(\cdot+y)-f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)}.
\]
To prove equicontinuity in time, we first consider $t,\tilde{t} \in [t_j,t_{j+1}]$ for some $j=0,\ldots,N-1$. In this case we have
\[
\begin{split}
U(x,t)-U(x,\tilde{t})&=\left(\frac{t_{j+1}-t}{\tau} U^j(x)+ \frac{t-t_j}{\tau}U^{j+1}(x)\right)-\left(\frac{t_{j+1}-\tilde{t}}{\tau} U^j(x)+ \frac{\tilde{t}-t_j}{\tau}U^{j+1}(x)\right)\\
&= \frac{t-\tilde{t}}{\tau} \left(U^{j+1}(x)- U^j(x)\right).
\end{split}
\]
Then, from Proposition \ref{prop:equitime}, we get
\[
|U(x,t)-U(x,\tilde{t})|\leq |t-\tilde{t}| \frac{ \overline{\Lambda}_{u_0,f}(\tau) }{\tau}.
\]
Note that the function $g(\tau)=\frac{ \overline{\Lambda}_{u_0,f}(\tau)}{\tau}$
is decreasing. Thus, since $|t-\tilde{t}|\leq \tau$, we have $g(\tau)\leq g(|t-\tilde{t}|)$. It follows that
\[
\begin{split}
|U(x,t)-U(x,\tilde{t})|&\leq \overline{\Lambda}_{u_0,f}(|t-\tilde{t}|).
\end{split}
\]
Now consider $t\in [t_j,t_{j+1})$ and $\tilde{t}\in [t_{j+k},t_{j+k+1})$ for $k\geq1$. By the triangle inequality, the previous step and Proposition \ref{prop:equitime}, we get
\[
\begin{split}
|U(x,t)-U(x,\tilde{t})|&\leq |U(x,t)-U(x,t_{j+1})|+|U(x,t_{j+k})-U(x,\tilde{t})|+|U(x,t_{j+1})-U(x,t_{j+k})|\\
&\leq \overline{\Lambda}_{u_0,f}(|t_{j+1}-t|) + \overline{\Lambda}_{u_0,f}(|\tilde{t}-t_{j+k}|) +\overline{\Lambda}_{u_0,f}(|t_{j+k}-t_{j+1}|) .
\end{split}
\]
Since $t\leq t_{j+1} \leq \tilde{t}$ and $t\leq t_{j+k} \leq \tilde{t}$, the above estimate yields
\[
|U(x,t)-U(x,\tilde{t})|\leq 3 \overline{\Lambda}_{u_0,f}(|\tilde{t}-t|) .
\]
Finally, we conclude space-time equicontinuity by combining the above estimates:
\[
\begin{split}
|U(x,t)-U(z,\tilde{t})|&\leq |U(x,t)-U(z,t)|+|U(z,t)-U(z,\tilde{t})|\\
&\leq \Lambda_{u_0}(|x-z|) +T \Lambda_{f}(|x-z|) + 3\overline{\Lambda}_{u_0,f}(|\tilde{t}-t|).\qedhere
\end{split}
\]
\end{proof}
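The equiboundedness estimate (a) can also be observed numerically. The following sketch is illustrative and not from the paper's code: it runs the explicit scheme with the 1D finite difference discretization of Section \ref{sec:discretizations}, $p=3$, $f=0$, a periodic grid, and a CFL-type choice $\tau\sim h^2$, and checks that the maximum norm does not grow.

```python
import math

# Numerical check (illustrative) of equiboundedness (item (a)) for the
# explicit scheme U^{j+1} = U^j + tau * D_p^h U^j with f = 0, periodic 1D grid.

def J(s, p):
    return abs(s) ** (p - 2) * s          # J_p(s) = |s|^{p-2} s

def step(U, h, tau, p):
    n = len(U)
    return [U[i] + (tau / h**p) * (J(U[(i + 1) % n] - U[i], p)
                                   + J(U[(i - 1) % n] - U[i], p))
            for i in range(n)]

n, p = 64, 3
h = 2 * math.pi / n
tau = h**2 / 8                            # CFL-type restriction tau ~ h^2
U = [math.sin(i * h) for i in range(n)]
M0 = max(abs(v) for v in U)               # = ||u_0||_inf
for _ in range(100):
    U = step(U, h, tau, p)
assert max(abs(v) for v in U) <= M0 + 1e-12   # ||U||_inf <= ||u_0||_inf
```

Under this CFL choice the scheme is monotone, so the computed iterates stay between the extrema of the initial data, which is exactly the bound of item (a) when $f=0$.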
By the Arzel\`a-Ascoli theorem, we obtain as a corollary that, up to a subsequence, the numerical solution converges locally uniformly to a limit.
\begin{corollary}\label{coro:compactness}
Assume the hypotheses of Proposition \ref{prop:stabcont}. Let $\{U_h\}_{h>0}$ be a sequence of solutions of \eqref{eq:interp}. Then, there exists a subsequence $\{U_{h_l}\}_{l=1}^\infty$ and a function $v\in C_b(\ensuremath{\mathbb{R}}^d\times[0,T])$ such that
\[
U_{h_l}\to v \quad \textup{as} \quad l\to\infty \quad \textup{locally uniformly in $\ensuremath{\mathbb{R}}^d\times[0,T]$}.
\]
\end{corollary}
\section{Convergence of the numerical scheme}
From Corollary \ref{coro:compactness}, we have that the sequence of numerical solutions has a subsequence converging locally uniformly to some function $v$. We will now show that $v$ is a viscosity solution of \eqref{eq:ParabProb}.
\begin{theorem}\label{thm:convergence}
Let the assumptions of Corollary \ref{coro:compactness} hold. Then $v$ is a viscosity solution of \eqref{eq:ParabProb}.
\end{theorem}
\begin{proof} For notational simplicity, we drop the subindex $l$ and consider
\[
U_{h}\to v \quad \textup{as} \quad h\to0 \quad \textup{locally uniformly in $\ensuremath{\mathbb{R}}^d\times[0,T]$}.
\]
First of all, by the local uniform convergence,
\[
v(x,0)=\lim_{h\to0} U_h(x,0)=u_0(x),
\]
locally uniformly. We will now show that $v$ is a viscosity supersolution. The proof that $v$ is a viscosity subsolution is similar.
Now let $\varphi$ be a suitable test function for $v$ at $(x^*,t^*)\in \ensuremath{\mathbb{R}}^d\times (0,T)$. We may assume that $\varphi$ satisfies
\begin{enumerate}[(i)]
\item $\varphi(x^*,t^*)=v(x^*,t^*)$,
\item $v(x,t)>\displaystyle \varphi(x,t)$ for all $(x,t)\in B_R(x^*)\times(t^*-R,t^*]\setminus \{(x^*,t^*)\}$.
\end{enumerate}
The local uniform convergence ensures (see Section 10.1.1 in \cite{Eva98}) that there exists a sequence $\{(x^h,t^h)\}_{h>0}$ such that
\begin{enumerate}[(i)]
\item $\varphi(x^h,t^h)-U_h(x^h,t^h)=\sup_{(x,t)\in B_R(x^h)\times(t^h-R,t^h]}\{\varphi(x,t)-U_h(x,t)\}=:M_h $,
\item $\varphi(x^h,t^h)-U_h(x^h,t^h)\geq \displaystyle \varphi(x,t)-U_h(x,t)$ for all $(x,t)\in B_R(x^h)\times(t^h-R,t^h]\setminus\{(x^h,t^h)\}$
\end{enumerate}
and
\[
(x^h,t^h)\to (x^*,t^*) \quad \textup{as} \quad h\to0.
\]
Now consider $t_j\in \mathcal{T}_{\tau}$ such that $t^h\in[t_j,t_{j+1}]$ (note that the index $j$ might depend on $h$, but this fact plays no role in the proof). By Remark \ref{rem:schemenotimegrid},
\[
U_h(x^h,t^h)= U_h(x^h,t_j)+ (t^h-t_j) \sum_{y_\beta \in \mathcal{G}_h} J_p(U_h(x^h+y_\beta,t_j)-U_h(x^h,t_j))\omega_\beta + (t^h-t_j) f(x^h).
\]
Define $\widetilde{U}_h=U_h+M_h$. It is clear that
\[
\widetilde{U}_h(x^h,t^h)= \widetilde{U}_h(x^h,t_j)+ (t^h-t_j) \sum_{y_\beta \in \mathcal{G}_h} J_p(\widetilde{U}_h(x^h+y_\beta,t_j)-\widetilde{U}_h(x^h,t_j))\omega_\beta+(t^h-t_j) f(x^h).
\]
Clearly, $\widetilde{U}_h(x^h,t^h)=\varphi(x^h,t^h)$ and $\widetilde{U}_h\geq\varphi$, which implies that
\begin{equation}\label{eq:aux1}
\varphi(x^h,t^h)= \widetilde{U}_h(x^h,t_j)+ (t^h-t_j) \sum_{y_\beta \in \mathcal{G}_h} J_p(\widetilde{U}_h(x^h+y_\beta,t_j)-\widetilde{U}_h(x^h,t_j))\omega_\beta+(t^h-t_j) f(x^h).
\end{equation}
Now consider the function $g:\ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}$ given by
\[
g(\xi)=\xi+(t^h-t_j) \sum_{y_\beta \in \mathcal{G}_h} J_p(\widetilde{U}_h(x^h+y_\beta,t_j)-\xi)\omega_\beta
\]
and note that
\[
g'(\xi)=1-(t^h-t_j) (p-1) \sum_{y_\beta \in \mathcal{G}_h} |\widetilde{U}_h(x^h+y_\beta,t_j)-\xi|^{p-2}\omega_\beta.
\]
We now check that $g'(\xi)\geq0$ for any $\xi \in [\varphi(x^h,t_j), \widetilde{U}_h(x^h,t_j)]$. Indeed,
\[
\begin{split}
|\widetilde{U}_h(&x^h+y_\beta,t_j)-\xi| \leq |\widetilde{U}_h(x^h+y_\beta,t_j)-\widetilde{U}_h(x^h,t_j)|+|\widetilde{U}_h(x^h,t_j)-\xi|\\
&\leq |U_h(x^h+y_\beta,t_j)-U_h(x^h,t_j)|+|\widetilde{U}_h(x^h,t_j)-\varphi(x^h,t_j)|\\
&\leq |U_h(x^h+y_\beta,t_j)-U_h(x^h,t_j)|+|U_h(x^h,t_j)-U_h(x^h,t^h)|+|\varphi(x^h,t^h)-\varphi(x^h,t_j)|\\
&\leq \Lambda_{u_0}(|y_\beta|) +T \Lambda_{f}(|y_\beta|) + 3 \overline{\Lambda}_{u_0,f}(|t^h-t_j|)+|t^h-t_j| \|\partial_t \varphi\|_{L^\infty(B_R(x^h)\times[t^h-R, t^h])}\\
&\leq \Lambda_{u_0}(|y_\beta|) +T \Lambda_{f}(|y_\beta|) + 3 \overline{\Lambda}_{u_0,f}(\tau)+\tau \|\partial_t \varphi\|_{L^\infty(B_R(x^h)\times[t^h-R, t^h])},
\end{split}
\]
where we have used that $\widetilde{U}_h(x^h,t^h)=\varphi(x^h,t^h)$, Proposition \ref{prop:stabcont} and the fact that $|t^h-t_j|\leq \tau$. By \eqref{as:CFL}, and taking $\tau$ small enough, we have
\[
\begin{split}
3 \overline{\Lambda}_{u_0,f}(\tau)&+\tau \|\partial_t \varphi\|_{L^\infty(B_R(x^h)\times[t^h-R, t^h])}\\
& \leq 3\widetilde{K} \tau^{\frac{a}{2+(1-a)(p-2)}} + \left(3\|f\|_{L^\infty(\ensuremath{\mathbb{R}}^d)} + \|\partial_t \varphi\|_{L^\infty(B_R(x^h)\times[t^h-R, t^h])}\right)\tau\\
& \leq (3\widetilde{K}+1)\tau^{\frac{a}{2+(1-a)(p-2)}}\\
& \leq (3\widetilde{K}+1) r^a.
\end{split}
\]
Thus,
\[
\begin{split}
g'(\xi) &\geq 1-(t^h-t_j) (p-1) \sum_{y_\beta \in \mathcal{G}_h} | \Lambda_{u_0}(|y_\beta|) +T \Lambda_{f}(|y_\beta|) + (3\widetilde{K}+1) r^a|^{p-2}\omega_\beta\\
&\geq 1-\tau(p-1) ( L_{u_0}+T L_f + 3\widetilde{K}+1)^{p-2}r^{a(p-2)}\sum_{y_\beta \in \mathcal{G}_h}\omega_\beta \\
&\geq 1-\tau \frac{M(p-1) ( L_{u_0} +T L_{f} + 3\widetilde{K}+1)^{p-2}}{ r^{2+(1-a)(p-2)}}\\
&\geq 0,
\end{split}
\]
where we have used \eqref{as:disc}, and where the last inequality is due to the CFL-condition \eqref{as:CFL}.
We can use this fact in \eqref{eq:aux1} to get
\[
\begin{split}
\varphi(x^h,t^h)&= \widetilde{U}_h(x^h,t_j)+ (t^h-t_j) \sum_{y_\beta \in \mathcal{G}_h} J_p(\widetilde{U}_h(x^h+y_\beta,t_j)-\widetilde{U}_h(x^h,t_j))\omega_\beta+(t^h-t_j) f(x^h)\\
&\geq \varphi(x^h,t_j)+ (t^h-t_j) \sum_{y_\beta \in \mathcal{G}_h} J_p(\widetilde{U}_h(x^h+y_\beta,t_j)-\varphi(x^h,t_j))\omega_\beta+(t^h-t_j) f(x^h)\\
&\geq \varphi(x^h,t_j)+ (t^h-t_j) \sum_{y_\beta \in \mathcal{G}_h} J_p(\varphi(x^h+y_\beta,t_j)-\varphi(x^h,t_j))\omega_\beta+(t^h-t_j) f(x^h).
\end{split}
\]
Consistency \eqref{as:cons} yields
\[
\partial_t\varphi(x^h,t^h) + o_\tau(1)\geq \ensuremath{\Delta_p} \varphi(x^h,t_j)+ o_h(1)+f(x^h).
\]
Passing to the limit as $h,\tau\to0$, we get the desired result by the regularity of $\varphi$ and the fact that $t^h,t_j\to t^*$ and $x^h\to x^*$ as $h\to0$.
\end{proof}
We are now ready to prove convergence of the scheme.
\begin{proof}[Proof of Theorem \ref{theo:main}]
By Corollary \ref{coro:compactness} and Theorem \ref{thm:convergence}, we know that, up to a subsequence, the sequence $U_h$ converges to a viscosity solution of \eqref{eq:ParabProb}. Moreover, since viscosity solutions are unique (cf. Theorem \ref{teo:main2}), the whole sequence converges to the same limit.
\end{proof}
\section{Discretizations}\label{sec:discretizations}
In this section, we present two examples of discretizations and verify that the assumptions \eqref{as:cons} and \eqref{as:disc} are satisfied. Moreover, we also give the precise form of the corresponding CFL-condition.
\subsection{Discretization in dimension $d=1$} We consider the following finite difference discretization of $\ensuremath{\Delta_p}$ in dimension $d=1$:
\[
D_p^h \phi(x) = \frac{J_p(\phi(x+h)-\phi(x))+J_p(\phi(x-h)-\phi(x))}{h^p}.
\]
A proof of consistency \eqref{as:cons} can be found in Theorem 2.1 in \cite{dTLi20}. Assumption \eqref{as:disc} is trivially true for $r=h$ since
\[
\omega_1=\omega_{-1}= \frac{1}{h^p} \quad \textup{and} \quad \omega_\beta=0 \quad \textup{otherwise},
\]
so that
\[
\sum_{y_\beta\in \mathcal{G}_h} \omega_\beta=\frac{2}{h^p}.
\]
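A minimal implementation sketch of $D_p^h$ in dimension one (names illustrative, not from the paper's code): for $p=2$ it reduces to the classical second difference, which is exact on quadratics, and for every $p$ it vanishes on affine functions since $J_p$ is odd.

```python
def J(s, p):
    return abs(s) ** (p - 2) * s          # J_p(s) = |s|^{p-2} s

def D_p_h(phi, x, h, p):
    # 1D finite difference discretization of Delta_p at the point x
    return (J(phi(x + h) - phi(x), p) + J(phi(x - h) - phi(x), p)) / h**p

# For p = 2 this is the classical second difference, exact on quadratics:
assert abs(D_p_h(lambda x: x * x, x=0.3, h=0.1, p=2) - 2.0) < 1e-9
# For every p it vanishes on affine functions (J_p is odd):
assert D_p_h(lambda x: x, x=0.0, h=0.1, p=3) == 0.0
```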
\subsection{Discretization in dimension $d>1$}\label{subsec:disc2}
The following discretization was introduced in \cite{dTLi22}:
\[
D_p^h\phi(x)= \frac{h^d}{\mathcal{D}_{d,p}\, \omega_d\, r^{p+d}} \sum_{y_\beta\in B_r} J_p(\phi(x+y_\beta)-\phi(x)),
\]
where $\omega_d$ denotes the measure of the unit ball in $\ensuremath{\mathbb{R}}^d$, the relation between $r$ and $h$ is given by
\begin{equation}\label{eq:hr}
h=\begin{cases}
o(r^\frac{p}{p-1}), & \quad \textup{if} \quad p \in (2,3],\\
o(r^{\frac{3}{2}}),& \quad \textup{if} \quad p\in (3,\infty),\\
\end{cases}
\end{equation}
and $\mathcal{D}_{d,p}= \frac{d}{2(d+p)}\fint_{\partial B_1}|y_1|^p\,\mathrm{d} \sigma (y)$. When $p\in \ensuremath{\mathbb{N}}$, a more explicit value of this constant is given in \cite{dTLi22}. In general, the explicit value is given by
\[
\mathcal{D}_{d,p}
=
\frac{d}{4\sqrt\pi}\cdot\frac{p-1}{d+p}\cdot\frac{\Gamma(\frac{d}{2})\Gamma(\frac{p-1}{2})}{\Gamma(\frac{d+p}{2})}.
\]
A proof of consistency \eqref{as:cons} can be found in Theorem 1.1 in \cite{dTLi22}. Assumption \eqref{as:disc} holds for $h=o(r^\alpha)$ for some $\alpha>0$ according to \eqref{eq:hr} since
\[
\omega_\beta=\omega_{-\beta }= \frac{h^d}{\mathcal{D}_{d,p}\, \omega_d\, r^{p+d}} \quad \textup{if} \quad |h\beta|<r \quad \textup{and} \quad \omega_\beta=0 \quad \textup{otherwise}.
\]
To check the summability bound in \eqref{as:disc} we rely on the following estimate given in the proof of Theorem 1.1 in \cite{dTLi22}:
\[
\sum_{y_\beta\in B_r}h^d \leq |B_{r+\sqrt{d}h}|.
\]
In particular, taking for example $h\leq r/\sqrt{d}$, we have
\[
\sum_{y_\beta\in B_r} \omega_\beta\leq \frac{1}{\mathcal{D}_{d,p}r^{p}}\frac{|B_{r+\sqrt{d}h}|}{|B_{r}|}\leq \frac{2^d}{\mathcal{D}_{d,p}r^{p}}.
\]
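The last bound can be checked numerically. The sketch below is illustrative: it takes $d=2$, sample values of $r$ and $h\leq r/\sqrt{d}$, computes $\mathcal{D}_{d,p}$ from the Gamma-function formula above, counts the grid points in $B_r$, and verifies $\sum\omega_\beta\leq 2^d/(\mathcal{D}_{d,p}r^p)$.

```python
import math

# Numerical check (illustrative) of the weight-sum bound for d = 2: count the
# grid points y_beta = h*beta with |y_beta| < r and sum the weights.
d, p = 2, 3.0
r, h = 0.5, 0.05                           # h <= r / sqrt(d)
D_dp = (d / (4 * math.sqrt(math.pi))) * ((p - 1) / (d + p)) \
       * math.gamma(d / 2) * math.gamma((p - 1) / 2) / math.gamma((d + p) / 2)
omega_d = math.pi                          # measure of the unit ball in R^2

m = int(r / h) + 1
count = sum(1 for i in range(-m, m + 1) for j in range(-m, m + 1)
            if h * math.hypot(i, j) < r)
weight_sum = count * h**d / (D_dp * omega_d * r**(p + d))
assert weight_sum <= 2**d / (D_dp * r**p)
```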
\section{Numerical experiments}
We perform numerical tests comparing the numerical solution with the explicit Barenblatt solution of \eqref{eq:ParabProb}. For $p>2$ this is given by
\[
B(x,t)=K t^{-\alpha}\left(1- \left(\frac{|x|}{t^\beta}\right)^{\frac{p}{p-1}}\right)^{\frac{p-1}{p-2}}_+,
\]
where the constants are
\[
\alpha=\frac{d}{d(p-2)+p}, \quad \beta=\frac{1}{d(p-2)+p}, \quad \textup{and} \quad K=\left(\frac{p-2}{p}\beta^{\frac{1}{p-1}}\right)^{{\frac{p-1}{p-2}}}.
\]
\subsection{Simulations in dimension $d=1$}
We consider the initial condition
\[
u_0(x)=K\left(1-|x|^{\frac{p}{p-1}}\right)^{\frac{p-1}{p-2}}_+
\]
and $f=0$. The corresponding solution of problem \eqref{eq:ParabProb} is given by (see \cite{KaVa88})
\[
B(x,t)=K (t+1)^{-\alpha}\left(1- \left(\frac{|x|}{(t+1)^\beta}\right)^{\frac{p}{p-1}}\right)^{\frac{p-1}{p-2}}_+ .
\]
Let us now comment on the CFL-condition \eqref{as:CFL}. Clearly, $u_0$ is a Lipschitz function, and we can estimate its Lipschitz constant as follows:
\[
L_{u_0}=\sup_{x\in [-1,1]} \left|\frac{du_0}{dx}(x)\right|=\sup_{r\in[0,1]}\left\{K \frac{p}{p-2}\left(1-r^{\frac{p}{p-1}}\right)^{\frac{1}{p-2}}r^{\frac{1}{p-1}}\right\}\leq K \frac{p}{p-2}.
\]
Thus, for all $p>2$, the CFL condition \eqref{as:CFL} can be taken as $\tau\sim h^2$ (since $f=0$ in this case).
For completeness, we find the value of $K$ in dimension $d=1$. Note that
\[
K=\left(\frac{p-2}{p}\frac{1}{(2(p-1))^{\frac{1}{p-1}}}\right)^{{\frac{p-1}{p-2}}}=\left(\frac{p-2}{p}\right)^{{\frac{p-1}{p-2}}}\frac{1}{(2(p-1))^{\frac{1}{p-2}}},
\]
so that
$
L_{u_0}\leq \left(\frac{p-2}{2p(p-1)}\right)^{{\frac{1}{p-2}}}.
$
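The simplification of $K$ for $d=1$ and the resulting bound on $L_{u_0}$ can be verified numerically; a minimal sketch (the loop over $p$ mirrors the values used in the experiments):

```python
# Numerical check (illustrative) that for d = 1 the Barenblatt constant
#   K = ((p-2)/p * beta^{1/(p-1)})^{(p-1)/(p-2)}
# simplifies to ((p-2)/p)^{(p-1)/(p-2)} * (2(p-1))^{-1/(p-2)}, and that
# K p/(p-2) equals ((p-2)/(2p(p-1)))^{1/(p-2)}.
for p in (3.0, 4.0, 10.0, 100.0):
    d = 1
    beta = 1.0 / (d * (p - 2) + p)
    K = ((p - 2) / p * beta ** (1 / (p - 1))) ** ((p - 1) / (p - 2))
    K_simplified = ((p - 2) / p) ** ((p - 1) / (p - 2)) \
                   * (2 * (p - 1)) ** (-1 / (p - 2))
    assert abs(K - K_simplified) < 1e-12
    bound = ((p - 2) / (2 * p * (p - 1))) ** (1 / (p - 2))
    assert abs(K * p / (p - 2) - bound) < 1e-12
```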
In Figure \ref{fig:errors222}, we show the numerical errors obtained. As can be seen there, the errors seem to behave like $O(h^{p/(p-1)})$.
\begin{figure}[h!]
\includegraphics[width=0.9\textwidth]{Errors_dim1}
\caption{Errors in dimension $d=1$ for $p=3,4,10,100$. }
\label{fig:errors222}
\end{figure}
\subsection{\large Clique number}
Zaker \cite{Zak06} showed that for a graph $G$, $\Gamma(G)=2$ if and
only if $G$ is a complete bipartite graph (see also page 351 in
\cite{Cha}). Zaker and Soltani \cite{Zak15} showed that for any integer
$k\geq 2$, the smallest triangle-free graph of Grundy number $k$ has
$2k-2$ vertices. Let $B_k$ be the graph obtained from $K_{k-1,k-1}$
by deleting a matching of cardinality $k-2$, see $B_k$ for an
illustration in Fig. 1. The authors showed that $\Gamma(B_k)=k$.
\begin{center}
\scalebox{0.3}{\includegraphics{figure1.eps}}\\
\vspace{0.5cm} Fig. 1. The graph $B_k$ for $k\geq 2$
\end{center}
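The value $\Gamma(B_k)=k$ can be verified by brute force for small $k$, using the standard fact that the Grundy number is the maximum, over all vertex orderings, of the number of colors used by first-fit greedy coloring. The following sketch (illustrative, not from the paper) does this for $k=2,3,4$.

```python
from itertools import permutations

# Brute-force verification of Gamma(B_k) = k, where B_k is K_{k-1,k-1}
# minus a matching of cardinality k-2.

def grundy_number(vertices, adj):
    best = 0
    for order in permutations(vertices):
        color = {}
        for v in order:
            used = {color[u] for u in adj[v] if u in color}
            c = 1
            while c in used:
                c += 1
            color[v] = c
        best = max(best, max(color.values()))
    return best

def B(k):
    # Delete the matching edges a_i b_i for i = 0, ..., k-3.
    adj = {('a', i): set() for i in range(k - 1)}
    adj.update({('b', i): set() for i in range(k - 1)})
    for i in range(k - 1):
        for j in range(k - 1):
            if not (i == j and i < k - 2):
                adj[('a', i)].add(('b', j))
                adj[('b', j)].add(('a', i))
    return list(adj), adj

for k in (2, 3, 4):
    verts, adj = B(k)
    assert grundy_number(verts, adj) == k
```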
One may reformulate the above result of Zaker and Soltani as follows:
$\Gamma(G)\leq \frac{n+2} {2}$ for any triangle-free graph $G$ of
order $n$.
\begin{theorem}
\noindent (i) For a graph $G$ of order $n\geq 1$, $\Gamma(G)\leq\frac{n+\omega(G)}{2}.$\\
\noindent (ii) Let $k$ and $n$ be any two integers such that $k+n$ is even and $k\leq n$. Then there exists a graph $G_{k,n}$ on $n$ vertices such that $\omega(G_{k,n})=k$ and $\Gamma(G_{k,n})=\frac{n+k}{2}$.\\
\noindent (iii) In particular, if $G$ is a connected triangle-free graph of order $n\geq 2$, then
$\Gamma(G)=\frac{n+2}{2}$ if and only if $G\cong B_{\frac{n+2}{2}}$.
\end{theorem}
\begin{proof} We prove part (i) of the assertion by induction on $\Gamma(G)$.
Let $k=\Gamma(G)$. If $k=1$, then $G=\overline{K_n}$. The result is
trivially true. Next we assume that $k\geq 2$.
Let $V_{1}, V_{2}, \ldots, V_{k}$ be the color classes
of a Grundy coloring of $G$. Set $H=G\setminus V_{1}$. Then
$\Gamma(H)=k-1$. By the induction hypothesis,
$\Gamma(H)\leq\frac{n-|V_{1}|+\omega(H)}{2}$. Hence,
$$\Gamma(G)=\Gamma(H)+1\leq \frac{n-|V_{1}|+\omega(H)}{2}+1=\frac{n+\omega(H)+2-|V_{1}|}{2}.$$
We consider two cases. If $|V_{1}|\geq 2$, then
$$\Gamma(G)\leq \frac{n+\omega(H)+2-|V_{1}|}{2}\leq \frac{n+\omega(G)+2-2}{2}=\frac{n+\omega(G)}{2}.$$
Now assume that $|V_{1}|=1$. Since $V_1$ is a maximal independent
set of $G$, every vertex in $H$ is adjacent to the
vertex in $V_1$, and thus $\omega(H)=\omega(G)-1$. So,
$$\Gamma(G)\leq\frac{n+\omega(G)+1-|V_{1}|}{2}=\frac{n+\omega(G)}{2}.$$
To prove part (ii), we construct $G_{k,n}$ as follows. First
consider a complete graph on $k$ vertices and partition its vertex
set into two subsets $A$ and $B$ such that $||A|-|B||\leq 1$.
Let $t=\frac {n-k} 2$, which is an integer since $k+n$ is even.
Let also $H_t$ be the graph obtained from $K_{t,t}$ by deleting a
perfect matching of size $t$. It is easily observed that
$\Gamma(H_t)=t$, where any Grundy coloring with $t$ colors consists
of $t$ color classes $C_1, \ldots, C_t$ such that for each $i$,
$|C_i|=2$. Let $C_i=\{a_i, b_i\}$.
Now for each $i\in \{1, \ldots, t\}$, join all vertices of $A$ to $a_i$ and join all vertices of $B$ to $b_i$. We denote the resulting graph by $G_{k,n}$. We note by our construction that $\omega(G_{k,n})=k$ and $\Gamma(G_{k,n})\geq k+t$. Also, clearly $\Delta(G_{k,n})=t+k-1$. Since $\Gamma(G)\leq \Delta(G)+1$ for every graph $G$, we conclude that $\Gamma(G_{k,n}) = k+t = (n+k)/2$. This completes the proof of part (ii).\\
Now we show part (iii) of the statement. It is straightforward
to check that $\Gamma(B_k)=k$. Next, we assume that $G$ is a connected
triangle-free graph of order $n\geq 2$ with
$\Gamma(G)=\frac{n+2}{2}$. Let $k=\frac{n+2}{2}$. To show $G\cong
B_k$, let $V_1, \ldots, V_k$ be a Grundy coloring of $G$.
\vspace{3mm}\noindent{\bf Claim 1.} (a) $|V_k|=1$; (b)
$|V_{k-1}|=1$; (c) $|V_i|=2$ for each $i\leq k-2$.
Since $G$ is triangle-free, there are at most two color classes with
cardinality 1 among $V_1, \ldots, V_k$. Since
$$2k-2=|V_1|+\cdots+|V_k|\geq 1+1+2+\cdots+2=2(k-1),$$
there are exactly two color classes with cardinality 1, and all
others have cardinality two. Let $u$ and $v$ be the two vertices lying
in the color classes of cardinality 1. Observe that $u$
and $v$ are adjacent.
We show (a) by contradiction. Suppose that $|V_k|=2$, and let
$V_k=\{u_k, v_k\}$. Since $u$ is adjacent to $v$ and
both $u_k$ and $v_k$ are adjacent to $u$ and $v$, we have a
contradiction with the fact that $G$ is triangle-free. This shows
$|V_k|=1$.
To complete the proof of the claim, it suffices to show (b). Toward
a contradiction, suppose $|V_{k-1}|=2$, and let $V_{k-1}=\{u_{k-1},
v_{k-1}\}$. By (a), and since there are exactly two singleton classes, we have $|V_i|=1$ for some integer $i<k-1$. Without
loss of generality, let $V_i=\{u\}$ and $V_k=\{v\}$. Since
$u_{k-1}u\in E(G)$, $v_{k-1}u\in E(G)$, and at least one of
$u_{k-1}$ and $v_{k-1}$ is adjacent to $v$, it follows that there
must be a triangle in $G$, a contradiction.
So, the proof of the claim is completed.
\vspace{3mm} Note that $uv\in E(G)$. Let $V_i=\{u_i, v_i\}$ for each
$i\in \{1, \ldots, k-2\}$. Since $G$ is triangle-free, exactly one
of $u_i$ and $v_i$ is adjacent to $u$ and the other one is adjacent
to $v$. Without loss of generality, let $u_iv\in E(G)$ and $v_iu\in
E(G)$ for each $i$. Since $G$ is triangle-free, both $\{u_1, \ldots,
u_{k-2}, u\}$ and $\{v_1, \ldots, v_{k-2}, v\}$ are independent sets
of $G$, implying that $G$ is a bipartite graph.
To complete the proof for $G\cong B_k$, it remains to show that
$u_iv_j\in E(G)$ for any $i$ and $j$ with $i\neq j$. Without loss of
generality, let $i<j$. The Grundy vertex $v_j$ is adjacent to at least one vertex of $V_i=\{u_i,v_i\}$; since $v_iv_j\notin E(G)$, we get $u_iv_j\in E(G)$.
So, the proof is completed.
\end{proof}
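Part (i) can be sanity-checked by brute force on small graphs. The sketch below (illustrative, not from the paper) verifies $2\,\Gamma(G)\leq n+\omega(G)$ on random graphs with $n=6$ vertices.

```python
import random
from itertools import permutations, combinations

# Brute-force check of 2 * Gamma(G) <= n + omega(G) on small random graphs.
random.seed(0)

def grundy_number(n, adj):
    best = 0
    for order in permutations(range(n)):
        color = {}
        for v in order:
            used = {color[u] for u in adj[v] if u in color}
            c = 1
            while c in used:
                c += 1
            color[v] = c
        best = max(best, max(color.values()))
    return best

def clique_number(n, adj):
    best = 1
    for size in range(2, n + 1):
        for C in combinations(range(n), size):
            if all(v in adj[u] for u, v in combinations(C, 2)):
                best = size
    return best

n = 6
for _ in range(10):
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if random.random() < 0.5:
            adj[u].add(v)
            adj[v].add(u)
    assert 2 * grundy_number(n, adj) <= n + clique_number(n, adj)
```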
Since for any graph $G$ of order $n$,
$\chi(\overline{G})\omega(G)=\chi(\overline{G})\alpha(\overline{G})\geq
n$, by Theorem 2.6, the following result is immediate.
\begin{corollary} (Zaker \cite{Zak07}) For any graph $G$ of order $n$, $\Gamma(G)\leq
\frac{\chi(\overline{G})+1} 2 \omega(G)$.
\end{corollary}
\begin{corollary} (Zaker \cite{Zak05}) Let $G$ be the complement of a bipartite graph. Then $\Gamma(G)\leq\frac{3\omega(G)}{2}$.
\end{corollary}
\begin{proof} Let $n$ be the order of $G$ and $(X, Y)$ be the bipartition of $V(\overline{G})$. Since $X$
and $Y$ are cliques of $G$, $\max\{|X|, |Y|\}\leq \omega(G)$. By
Theorem 2.6,
$$\Gamma(G)\leq\frac {n+\omega(G)} 2= \frac {|X|+|Y|+\omega(G)} 2 \leq \frac{3\omega(G)}{2}.$$
\end{proof}
The following result is immediate from inequality (2) and
Theorem 2.6.
\begin{corollary} For any graph $G$ of order $n$,
$\Gamma(G)\leq\frac{n+ \chi(G)}{2}\leq\frac{n+ col(G)}{2}$.
\end{corollary}
Chang and Hsu \cite{Chang} proved that $\Gamma(G)\leq \log_{\frac
{col(G)} {col(G)-1}}n +2$ for a nonempty graph $G$ of order $n$.
Note that this bound is not comparable to that given in the above
corollary.
\section{\large Nordhaus-Gaddum type inequality}
In 1956, Nordhaus and Gaddum \cite{Nord} proved that for any graph
$G$ of order $n$,
$$\chi(G)+\chi(\overline{G})\leq n+1.$$
Since then, relations of a similar type have been proposed for many
other graph invariants, in several hundred papers, see the survey
paper of Aouchiche and Hansen \cite{Aou}. In 1982 Cockayne and
Thomason \cite{Coc} proved that
$$\Gamma(G)+\Gamma(\overline{G})\leq \lfloor\frac {5n+2} 4\rfloor$$
for a graph $G$ of order $n\geq 10$, and this is sharp. In 2008
F\"{u}redi et al. \cite{Fur} rediscovered the above theorem. Harary
and Hedetniemi \cite{Har} established that
$\psi(G)+\chi(\overline{G})\leq n+1$ for any graph $G$ of order $n$
extending the Nordhaus-Gaddum theorem.
Next, we give a theorem, which is stronger than the Nordhaus-Gaddum
theorem, but weaker than Harary and Hedetniemi's theorem. Our proof
turns out to be much simpler than that of Harary and Hedetniemi's
theorem.
It is well known that $\chi(G-S)\geq \chi(G)-|S|$ for a set
$S\subseteq V(G)$ of a graph $G$. The following result assures that
a stronger assertion holds when $S$ is a maximal clique of a graph
$G$.
\begin{lemma} Let $G$ be a graph of order at least two which is not a complete graph.
For a maximal clique $S$ of $G$, $\chi(G-S)\geq\chi(G)-|S|+1$.
\end{lemma}
\begin{proof} Let $V_1, V_2, \ldots, V_k$ be the color classes of a $k$-coloring of
$G-S$, where $k=\chi(G-S)$. Since $S$ is a maximal clique of $G$,
for each vertex $v\in V(G)\setminus S$, there exists a vertex $v'\in S$
which is not adjacent to $v$. Hence, assigning to each vertex of $V_k$ the
color of such a non-neighbor in $S$, we see that $G[S\cup V_k]$ is $s$-colorable,
where $s=|S|$. Let $U_1, \ldots, U_s$ be the color classes of an
$s$-coloring of $G[S\cup V_k]$. Thus, we can obtain a
$(k+s-1)$-coloring of $G$ with the color classes $V_1, V_2, \ldots,
V_{k-1}, U_1, \ldots, U_s$. So,
$$\chi(G)\leq k+s-1=\chi(G-S)+|S|-1.$$
\end{proof}
\begin{theorem} For a graph $G$ of order $n$, $\Gamma(G)+\chi(\overline{G})\leq
n+1$, and this is sharp.
\end{theorem}
\begin{proof} We proceed by induction on $\Gamma(G)$. If
$\Gamma(G)=1$, then $G=\overline{K_n}$. The result trivially
holds, because $\Gamma(G)=1$ and $\chi(\overline{G})=n$. The result
is also clearly true when $G=K_n$.
Now assume that $G$ is not a complete graph and $\Gamma(G)\geq 2$.
Let $V_{1},V_{2},\ldots,V_{k}$ be a Grundy coloring of $G$. Set
$H=G\setminus V_{1}$. Then $\Gamma(H)=\Gamma(G)-1$. By the induction
hypothesis,
$$\Gamma(H)+\chi(\overline{H})\leq n-|V_{1}|+1.$$
Since $V_{1}$ is a maximal independent set of $G$, it is a maximal
clique of $\overline{G}$. By Lemma 3.1, we have
$$\chi(\overline{H})\geq \chi(\overline{G})-|V_1|+1. $$
Therefore
\begin{eqnarray*}
\Gamma(G)+\chi(\overline{G})
&\leq&\ (\Gamma(H)+1)+(
\chi(\overline{H})+|V_1|-1)\\
&=&\ (\Gamma(H)+\chi(\overline{H}))+|V_1| \\
&\leq&\ n-|V_1|+1+|V_1| \\
&=&\ n+1.
\end{eqnarray*}
To see the sharpness of the bound, let us consider $G_{n,k}$, which
is the graph obtained by joining each vertex of $K_k$ to all
vertices of $\overline{K_{n-k}}$, where $1\leq k\leq n-1$. It can be
checked that $\Gamma(G_{n,k})=k+1$ and
$\chi(\overline{G_{n,k}})=n-k$. So
$\Gamma(G_{n,k})+\chi(\overline{G_{n,k}})=n+1$.
The proof is completed.
\end{proof}
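Theorem 3.2 can be verified exhaustively on very small graphs. The following sketch (illustrative, not from the paper) checks $\Gamma(G)+\chi(\overline{G})\leq n+1$ for all graphs on $n=4$ vertices.

```python
from itertools import permutations, combinations, product

# Exhaustive brute-force check of Gamma(G) + chi(complement of G) <= n + 1.

def grundy_number(n, adj):
    best = 0
    for order in permutations(range(n)):
        color = {}
        for v in order:
            used = {color[u] for u in adj[v] if u in color}
            c = 1
            while c in used:
                c += 1
            color[v] = c
        best = max(best, max(color.values()))
    return best

def chromatic_number(n, adj):
    for k in range(1, n + 1):
        for col in product(range(k), repeat=n):
            if all(col[u] != col[v] for u in range(n) for v in adj[u] if u < v):
                return k
    return n

def complement(n, adj):
    return {u: {v for v in range(n) if v != u and v not in adj[u]}
            for u in range(n)}

n = 4
edges = list(combinations(range(n), 2))
for mask in range(1 << len(edges)):
    adj = {v: set() for v in range(n)}
    for b, (u, v) in enumerate(edges):
        if mask >> b & 1:
            adj[u].add(v)
            adj[v].add(u)
    assert grundy_number(n, adj) + chromatic_number(n, complement(n, adj)) <= n + 1
```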
Finck \cite{Finck} characterized all graphs $G$ of order $n$ such
that $\chi(G)+\chi(\overline{G})=n+1$. It is an interesting problem
to characterize all graphs $G$ attaining the bound in Theorem 3.2.
Since $\alpha(G)=\omega(\overline{G})\leq \chi(\overline{G})$ for
any graph $G$, the following corollary is a direct consequence of
Theorem 3.2.
\begin{corollary}(Effantin and Kheddouci \cite{Eff}) For a graph $G$ of order $n$,
$$\Gamma(G)+\alpha(G) \leq n+1.$$
\end{corollary}
\section{\large Perfectness}
Let $\mathcal{H}$ be a family of graphs. A graph $G$ is called {\it
$\mathcal{H}$-free} if no induced subgraph of $G$ is isomorphic to
any $H\in \mathcal{H}$. In particular, we simply write $H$-free
instead of $\{H\}$-free if $\mathcal{H}=\{H\}$. A graph $G$ is
called {\it perfect}, if $\chi(H)=\omega(H)$ for each induced
subgraph $H$ of $G$. It is well known that every $P_4$-free graph is
perfect.
A {\it chordal} graph is a simple graph which contains no induced
cycle of length four or more. Berge \cite{Ber} showed that every
chordal graph is perfect. A {\it simplicial} vertex of a graph is
vertex whose neighbors induce a clique.
\begin{theorem} (Dirac \cite{Dir}) Every chordal graph has a
simplicial vertex.
\end{theorem}
\begin{corollary} If $G$ is a chordal graph, then
$\delta(G)\leq \omega(G)-1$.
\end{corollary}
\begin{proof}
Let $v$ be a simplicial vertex of $G$, which exists by Theorem 4.1. Then $N(v)$ induces a clique,
and thus $\delta(G)\leq d(v)\leq \omega(G)-1$.
\end{proof}
Markossian et al. \cite{Mar} remarked that for a chordal graph $G$,
$col(H)=\omega(H)$ for any induced subgraph $H$ of $G$. Indeed, its
converse is also true. For convenience, we give the proof here.
\begin{theorem} A graph $G$ is chordal if and only if
$col(H)=\omega(H)$ for any induced subgraph $H$ of $G$.
\end{theorem}
\begin{proof} The sufficiency is immediate from the fact that for any cycle $C_k$ with
$k\geq 4$, $col(C_k)=3\neq 2=\omega(C_k)$.
Since every induced subgraph of a chordal graph is still a chordal
graph, to prove the necessity of the theorem, it suffices to show
that $col(G)=\omega(G)$. Recall that $col(G)=deg(G)+1$ and
$deg(G)=\max\{\delta(H): H\subseteq G\}$. Observe that
$$\max\{\delta(H): H\subseteq G\}=\max\{\delta(H): H
\text{ is an induced subgraph of}\ G\}.$$
Since $\omega(G)-1\leq\max\{\delta(H): H\subseteq G\}$, and since
$\delta(H)\leq \omega(H)-1\leq \omega(G)-1$ for any induced subgraph $H$ of $G$
by Corollary 4.2, we obtain
$\max\{\delta(H): H \text{ is an induced subgraph of}\ G\}\leq
\omega(G)-1$. Hence $deg(G)=\max\{\delta(H): H\subseteq
G\}=\omega(G)-1$. Thus, $col(G)=\omega(G)$.
\end{proof}
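Theorem 4.3 can be illustrated on a small example. The sketch below (illustrative, not from the paper) checks $col(H)=\omega(H)$ for every induced subgraph of a small chordal graph, and the failure of this identity for the non-chordal $C_4$.

```python
from itertools import combinations

# col(G) = deg(G) + 1, where deg(G) is the degeneracy, computed by
# repeatedly deleting a vertex of minimum degree.

def degeneracy(vertices, adj):
    verts = set(vertices)
    deg = 0
    while verts:
        v = min(verts, key=lambda u: len(adj[u] & verts))
        deg = max(deg, len(adj[v] & verts))
        verts.remove(v)
    return deg

def clique_number(vertices, adj):
    verts = list(vertices)
    best = 1
    for size in range(2, len(verts) + 1):
        for C in combinations(verts, size):
            if all(v in adj[u] for u, v in combinations(C, 2)):
                best = size
    return best

# Chordal graph: triangle {0,1,2} with a pendant path 0-3-4 attached.
chordal = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4}, 4: {3}}
for size in range(1, 6):
    for S in combinations(range(5), size):
        S = set(S)
        H = {v: chordal[v] & S for v in S}     # induced subgraph
        assert degeneracy(S, H) + 1 == clique_number(S, H)

# For C_4 the identity fails: col(C_4) = 3 while omega(C_4) = 2.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assert degeneracy(range(4), c4) + 1 == 3 != 2 == clique_number(range(4), c4)
```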
Let $\alpha, \beta\in \{\omega, \chi, \Gamma, \psi\}$.
A graph $G$ is called {\em $\alpha \beta$-perfect} if for each
induced subgraph $H$ of $G$, $\alpha(H)=\beta(H)$. Among other
things, Christen and Selkow proved that
\begin{theorem} (Christen and Selkow \cite{Chr}) For any graph $G$, the following statements are
equivalent:
(1) $G$ is $\Gamma\omega$-perfect.
(2) $G$ is $\Gamma\chi$-perfect.
(3) $G$ is $P_4$-free.
\end{theorem}
For a graph $G$, $m(G)$ denotes the number of
maximal cliques of $G$. Clearly, $\alpha(G)\leq m(G)$. A graph $G$
is called {\em trivially perfect} if for every induced subgraph $H$
of $G$, $\alpha(H)=m(H)$. A partially ordered set $(V, <)$ is an
arborescence order if for all $x\in V$, $\{y:\ y<x\}$ is a totally
ordered set.
\begin{theorem} (Wolk \cite{Wo}, Golumbic \cite{Gol}) Let $G$ be a
graph. The following conditions are equivalent:
(i) $G$ is the comparability graph of an arborescence order.
(ii) $G$ is $\{P_4, C_4\}$-free.
(iii) $G$ is trivially perfect.
\end{theorem}
Next we provide another characterization of $\{P_{4}, C_4\}$-free
graphs.
\begin{theorem} Let $G$ be a graph. Then $G$ is $\{P_{4}, C_4\}$-free if and only if
$\Gamma(H)=col(H)$ for any induced subgraph $H$ of $G$.
\end{theorem}
\begin{proof}
To show its sufficiency, we assume that $\Gamma(H)=col(H)$ for any
induced subgraph $H$ of $G$. Since $col (C_{4})=3$ while
$\Gamma(C_{4})=2$, and $col(P_{4})=2$ while $\Gamma(P_{4})=3$, it
follows that $G$ is $C_{4}$-free and $P_{4}$-free.
To show its necessity, let $G$ be a $\{P_{4}, C_{4}\}$-free graph.
Let $H$ be an induced subgraph of $G$. Since $G$ is $P_4$-free, by
Theorem 4.4, $\Gamma(H)=\omega(H)$. On the other hand, by Theorem
4.3, $col(H)=\omega(H)$. The result then follows.
\end{proof}
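The two computations used in the sufficiency part can be confirmed by brute force; a sketch (illustrative, not from the paper):

```python
from itertools import permutations

# Verify col(C_4) = 3 while Gamma(C_4) = 2, and col(P_4) = 2 while
# Gamma(P_4) = 3; the Grundy number is computed over all greedy orderings.

def grundy_number(n, adj):
    best = 0
    for order in permutations(range(n)):
        color = {}
        for v in order:
            used = {color[u] for u in adj[v] if u in color}
            c = 1
            while c in used:
                c += 1
            color[v] = c
        best = max(best, max(color.values()))
    return best

def degeneracy(n, adj):
    verts = set(range(n))
    deg = 0
    while verts:
        v = min(verts, key=lambda u: len(adj[u] & verts))
        deg = max(deg, len(adj[v] & verts))
        verts.remove(v)
    return deg

c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
assert degeneracy(4, c4) + 1 == 3 and grundy_number(4, c4) == 2
assert degeneracy(4, p4) + 1 == 2 and grundy_number(4, p4) == 3
```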
Gastineau et al. \cite{Gas} posed the following conjecture.
\vspace{3mm}\noindent{\bf Conjecture 1.} (Gastineau, Kheddouci and
Togni \cite{Gas}). For any integer $r\geq 1$, every $C_4$-free
$r$-regular graph has Grundy number $r+1$.
\vspace{3mm} So, Theorem 4.6 asserts that the above conjecture is
true for every regular $\{P_{4}, C_{4}\}$-free graph. However, it is
not hard to show that a regular graph is $\{P_{4}, C_{4}\}$-free if
and only if it is a complete graph. Indeed, in 2011 Zaker
\cite{Zak11} made the following beautiful conjecture, which implies
Conjecture 1.
\vspace{3mm}\noindent{\bf Conjecture 2.} (Zaker \cite{Zak11}) If $G$
is $C_4$-free graph, then $\Gamma(G)\geq \delta(G)+1$.
\vspace{3mm} Note that Theorem 4.6 shows that Conjecture 2 is valid
for all $\{P_{4}, C_{4}\}$-free graphs.
\vspace{3mm}\noindent{\bf Acknowledgment.} The authors are grateful
to the referees for their careful readings and valuable comments.
\section{Introduction}
Let $(M^4,g)$ be a 4-dimensional Riemannian manifold and denote by $\Ric$ and $W$ the Ricci and Weyl tensors of $g$, respectively. Let $A = \frac{1}{2}(\Ric-Jg)$ be the Schouten tensor and $J$ its trace (in fact, $J$ is a multiple of the scalar curvature $R$, namely $J=\frac{1}{6}R$), and let $dv_g$ be the volume element of the metric $g$.\\
The Paneitz operator $P_g$ on $(M,g)$, first introduced in 1983 \cite{Paneitz83}, is defined by
\begin{equation}\label{definition P}
P_g=(-\Delta_g)^2+\divergence_g\big\{4A_g-2Jg\big\}d.
\end{equation}
The operator $P$ is conformally covariant; in particular, under a conformal change of metric $g_f=e^{2f}g$ it satisfies
\begin{equation}\label{conformal-Paneitz}
P_{g_f}=e^{-4f}P_{g} \quad\text{on }M.
\end{equation}
This operator describes the transformation law for Branson's $Q$-curvature \cite{Branson85}, which is defined by
\begin{equation*}
Q_g= \tfrac{1}{6}(-\Delta R_g-3|\Ric_g\!|^2 + R_g^2).
\end{equation*}
Indeed,
\begin{equation*}
P_{g} f+Q_g=Q_{g_f} e^{4f},\quad\text{for}\quad g_f=e^{2f}g.
\end{equation*}
There is an extensive bibliography on the $Q$-curvature equation in dimension four. Without being exhaustive, we mention
\cite{Chang-Yang:extremal,Wei-Xu,Djadli-Malchiodi,Li-Li-Liu,Gursky-Malchiodi}.
Now, if $M$ is a compact 4-dimensional manifold without boundary, the Chern-Gauss-Bonnet formula \cite{Branson:sharp-inequalities} reads
\begin{equation}\label{Gauss-Bonnet}
\int_M Q_g \,dv_g+\frac{1}{4}\int_M |W_g|_g^2\,dv_g=8\pi^2\chi(M).
\end{equation}
Note then that we may regard $P$ as a four-dimensional analogue of the Laplace operator $\Delta$ on surfaces (which is also conformally covariant in that setting), and the curvature $Q$ as a four-dimensional analogue of the Gaussian curvature, which plays the same role as $Q$ in the classical Gauss-Bonnet formula.\\
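As a quick consistency check (a direct computation, included here for the reader's convenience), consider the round sphere $(\mathbb S^4,g_{\mathbb S^4})$: there $R=12$ and $\Ric=3g$, so that
\begin{equation*}
Q=\tfrac16\big(0-3\cdot 36+144\big)=6,\qquad \int_{\mathbb S^4}Q\,dv=6\cdot\tfrac{8\pi^2}{3}=16\pi^2=8\pi^2\chi(\mathbb S^4),
\end{equation*}
in agreement with \eqref{Gauss-Bonnet}, since $W\equiv 0$ and $\chi(\mathbb S^4)=2$.\\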
The study of eigenvalues of differential operators has an extensive history. In the particular case of the Laplacian in two dimensions it is possible to obtain bounds that depend only on the topology of the manifold (see \cite{Yang-Yau}); more precisely, let $N^2$ be a two-dimensional compact orientable Riemannian manifold without boundary and let $\varsigma_1$ be the first non-zero eigenvalue of the Laplace-Beltrami operator on $N$. Let
\begin{equation}\label{Lambda}
\Theta(N^2):=\sup\{\varsigma_1 \vol(N^2)\},
\end{equation}
where the $\sup$ is taken among all Riemannian metrics on $N^2$. It is well known that $\Theta(N^2)<\infty$ and
\begin{equation}\label{bound:surfaces}
\Theta(N^2)\leq 8\pi(\gamma+1),
\end{equation}
where $\gamma$ is the genus of the surface. See also \cite{Nadirashvili} for a discussion of attainability. In fact, in that paper the discussion is extended to the setting with boundary and a Neumann condition at that boundary.\\
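For instance (a standard computation), on the round sphere $\mathbb S^2$ the first non-zero eigenvalue of the Laplace-Beltrami operator is $\varsigma_1=2$, with the restrictions of the linear coordinates of $\mathbb R^3$ as eigenfunctions, so that
\begin{equation*}
\varsigma_1\vol(\mathbb S^2)=2\cdot 4\pi=8\pi,
\end{equation*}
and hence the round metric attains equality in \eqref{bound:surfaces} in the case $\gamma=0$; this is Hersch's theorem \cite{Hersch}.\\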
The first goal of our paper is to generalize the bound \eqref{bound:surfaces} to the Paneitz operator $P$ on closed 4-manifolds. However, although it is well known that the spectrum of $P_g$ consists of a sequence of eigenvalues converging to $+\infty$, the principal eigenvalue $\lambda_0^g$ may be negative. In consequence, one first imposes restrictions that ensure the positivity of the operator. With this objective, we recall two important conformally invariant quantities in four dimensions: firstly, the \emph{total $Q$-curvature},
\begin{equation}\label{kappa}
\kappa_g:=\int_M Q_g \,dv_g,
\end{equation}
and, secondly, the well known \emph{Yamabe invariant}
\begin{equation}\label{Yamabe-invariant-closed}
\mathcal Y[g]=\inf_{g_f=e^{2f}g} \frac{\int_M R_{g_{f}}\,dv_{g_f}}{\left(\int_M dv_{g_f}\right)^{1/2}}.
\end{equation}
A key theorem by Gursky \cite{Gursky:principal-eigenvalue} yields that, if both the Yamabe invariant $\mathcal Y[g]$ and the total $Q$-curvature $\kappa_g$ are nonnegative, then $\lambda_0^g=0$ and the kernel of $P_g$ contains only the constant functions. Thus, the next eigenvalue $\lambda_1^g$ is positive. A less restrictive condition was given in \cite{Gursky-Viaklovsky:fully-nonlinear-equation}: indeed, if $M$ is a closed 4-manifold with positive scalar curvature and
\begin{equation}\label{Gursky-condition}
\int_M Q_g\,dv_g +\frac{1}{3}(\mathcal Y[g])^2>0,
\end{equation}
then the same conclusion holds. It is interesting to observe that the left-hand side of \eqref{Gursky-condition} is a conformally invariant quantity. Now, the first non-zero eigenvalue of $P_g$ (which we denote by $\lambda_1^g$) can be computed through the Rayleigh quotient
\begin{equation}\label{Rayleigh}
\lambda_1^g=\inf_{\int_M u\,dv_g=0,\; u\ne 0} \,\,\frac{\mathcal E^M_g[u]}{\int_M u^2\,dv_g},
\end{equation}
where
\begin{equation}\label{Eg}
\mathcal E^M_g[u]=\int_{M} (\Delta_g u)^2\,dv_g+\int_M \Big(2Jg_{ab}-4A^g_{ab}\Big)\nabla^a u\nabla^b u\,dv_g.
\end{equation}
This quantity is conformally invariant in dimension four; indeed, if two metrics are related by $g_f=e^{2f}g$, then
\begin{equation*}
\mathcal E_{g_f}[u]=\mathcal E_g[u].
\end{equation*}
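Let us remark that \eqref{Eg} is simply the quadratic form of $P_g$: integrating by parts on the closed manifold $M$ in \eqref{definition P} (a standard computation), one obtains
\begin{equation*}
\int_M u\,P_g u\,dv_g=\int_M(\Delta_g u)^2\,dv_g-\int_M\big(4A^g_{ab}-2Jg_{ab}\big)\nabla^au\nabla^bu\,dv_g=\mathcal E^M_g[u].
\end{equation*}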
Our first theorem is a generalization of the bound \eqref{bound:surfaces} for the first eigenvalue of the Paneitz operator:
\begin{theorem}\label{thm:closed} Let $(M,g)$ be a compact, orientable, closed, locally conformally flat ({\it l.c.f}) Riemannian 4-manifold, and define $\lambda_1^g$ to be the first (non-zero) eigenvalue of the Paneitz operator $P_g$. Then:
\begin{itemize}
\item[\emph{i.}] If $M$ is simply connected, then $M$ is conformally equivalent to $\mathbb S^4 $. In this setting we have $\lambda_1^g>0$ with $\Ker(P_g)=\{\text{constants}\}$ and
\begin{equation*}
\lambda_1^g \, {\vol(M)}\leq 64\pi^2.
\end{equation*}
Equality holds if and only if $M$ is diffeomorphic to $\mathbb S^4$.
\item[\emph{ii.}] If $\mathcal Y[g]>0$ and $\kappa_g>0$, and $M$ is orientable, then $M$ is conformally equivalent to $\mathbb S^4$ and the same conclusions hold.
\item[\emph{iii.}] If $\mathcal Y[g]>0$ and $\kappa_g=0$, then $M$ is conformally equivalent to a quotient of $\mathbb R\times \mathbb S^3$.
Then $\lambda_1^g>0$ with $\Ker(P_g)=\{\text{constants}\}$.
Assume that the fundamental domain is exactly $[0,\varrho) \times\mathbb{S}^3$ for some $\varrho>0$ and let $\Psi: M\to [0,\varrho) \times\mathbb{S}^3$ be the conformal embedding described above. Set $\Psi_{\mathbb{S}^3}$ to be its projection onto the $\mathbb S^3$-coordinates. If, in addition, we impose the
geometric condition that for all $q\in M$, there exist $\delta_0> 0$ and $\varepsilon\in (0,1)$ such that, for every $\delta<\delta_0$, it holds \begin{equation}\label{concentrating}\frac{\vol_M( \mathcal B_\delta(\Psi_{\mathbb{S}^3}(q)))}{\vol_M( \mathcal B^c_\delta(\Psi_{\mathbb{S}^3}(q)))}<\varepsilon,\end{equation} then
\begin{equation*}
\lambda_1^g{\vol(M)}\leq C(\varepsilon, \delta_0) \varrho.
\end{equation*}
Here $C(\varepsilon, \delta_0)$ is a constant that only depends on $\varepsilon, \delta_0$, while
$\mathcal B_\delta(\cdot)$ is the geodesic ball on $\mathbb{S}^3$ with the standard metric centered at $\Psi_{\mathbb{S}^3}(q)$ and $\mathcal B^c_\delta(\cdot)$ its complement in $\mathbb{S}^3$. We also denoted $\vol_M(A)=\int_{M\cap \Psi^{-1}(A)} dv_g$.
\end{itemize}
\end{theorem}
Two remarks regarding statement \emph{iii.} are in order:
\begin{itemize}
\item The geometric condition \eqref{concentrating} can be understood as a quantitative measure that avoids concentration around lines.
\item The exact value of the constant $C(\varepsilon, \delta_0)$ can be calculated precisely, but it is cumbersome and does not provide additional information. It is worth noting, though, that it blows up as a
certain parameter $\delta$ approaches 0, but this is ruled out by condition \eqref{concentrating}.
\end{itemize}
The jump from dimension 2 to dimension 4 is completely non-trivial, since in the two-dimensional case one can use conformal invariance to map any manifold to (a cover of) the sphere. In dimension 4 the difficulty is to find such a conformal immersion of $M$ into a model manifold. Thus we restrict our study to locally conformally flat (l.c.f.) manifolds, where the {\it developing map} plays the role of this immersion. In fact, we will show that in the setting of Theorem \ref{thm:closed} the developing map is injective.
The idea of the proof of Theorem \ref{thm:closed} follows Hersch's original idea in \cite{Hersch}. Indeed, one uses a calibration type argument in order to show that coordinate functions are good test functions for the Rayleigh quotient \eqref{Rayleigh}. In the case that $M$ is conformally equivalent to the sphere, this calculation has also been performed in \cite{PerezAyala}. The case of the cylinder $\mathbb R\times\mathbb S^3$ is much trickier, but a variation of Hersch's calibration argument can still be performed.
Related to this result is the work \cite{PerezAyala}, where the author shows that for the extremal metric for the quantity $\lambda_1^g\vol(M)$ one may obtain an orthonormal basis of eigenfunctions. In addition, these eigenfunctions are the coordinates of a Paneitz map, which is a fourth-order generalization of a harmonic map. \\
Finally, note that different bounds for $\lambda_1^g$ have been introduced in the literature. For instance, if $M$ can be conformally immersed into a unit sphere $\mathbb S^K$, then \cite{Xu-Yang:conformal-energy} gives a bound
in terms of a $K$-conformal energy inspired by the conformal volume of
Li-Yau \cite{Li-Yau}. In addition, \cite{Chen-Li:first-eigenvalue,Cheng:eigenvalues} showed some geometric bounds provided that $M$ is a compact submanifold of $\mathbb R^K$. A comparison theorem for this first eigenvalue was given in \cite{Wang-Zhou:comparison} for dimension $K\geq 5$ in some settings.\\
\subsection{Manifolds with boundary}\label{section:introduction-boundary}
Now we turn our attention to the boundary case. If $N^2$ is a compact surface with boundary, one may ask the same questions for the Steklov eigenvalues, which are the eigenvalues $\vartheta$ of the following boundary value problem
\begin{equation*}
\left\{\begin{split}
&\Delta_g u=0 \text{ in }N,\\
&\partial_\eta u=\vartheta u \text{ on }\partial N.
\end{split}\right.
\end{equation*}
A good reference for this problem is \cite{GP}. Given $N$, it is well known that there exists an increasing sequence of eigenvalues $0=\vartheta_0<\vartheta_1\leq \vartheta_2\leq \ldots$
and, moreover,
\begin{equation*}
\vartheta_1=\inf_{\int_{\partial N}u=0,u\neq 0} \frac{\int_N |\nabla u|^2\,dv}{\int_{\partial N} u^2\,d\sigma}.
\end{equation*}
The extremal problem for the Steklov eigenvalue analogous to \eqref{Lambda} has been studied in a series of papers by Fraser-Schoen \cite{Fraser-Schoen:annulus,Fraser-Schoen:survey,Fraser-Schoen:free-boundary-minimal-surfaces}. If $N^2$ is a surface of genus $\gamma$ and $k$ boundary components, they show the bound
\begin{equation}\label{Steklov}
\vartheta_1(N) \Length(\partial N)\leq 2\pi( \gamma+k).
\end{equation}
For $\gamma = 0$ and $k = 1$ this result was obtained by Weinstock \cite{Weinstock} and it is sharp, while if the surface has two boundary components (i.e., it is an annulus), the bound is not attained.
In addition, Weinstock showed that the bound is attained at a flat disk and the eigenfunctions can be identified with its coordinates. In the general case,
Fraser-Schoen~\cite{Fraser-Schoen:survey,Fraser-Schoen:free-boundary-minimal-surfaces} identified the eigenfunctions associated to maximal eigenvalues (with a given topology and number of boundary components) with coordinate functions of free boundary minimal surfaces in the unit ball $\mathbb B^K$.
In the particular case that $N$ is homeomorphic to the annulus, in \cite{Fraser-Schoen:annulus} and \cite{Fraser-Schoen:free-boundary-minimal-surfaces} it is shown that the quantity $\vartheta_1(N) \Length(\partial N)$ is maximized by the coordinate functions of a critical catenoid (in $\mathbb R^3$) which meets the boundary sphere orthogonally.
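As a basic illustration (a classical computation), consider the unit disk $\mathbb B^2\subset\mathbb R^2$: the harmonic functions $r^m\cos(m\theta)$ and $r^m\sin(m\theta)$ satisfy $\partial_\eta u=mu$ on the unit circle, so the Steklov spectrum is $\vartheta_m=m$, $m\geq 0$, and
\begin{equation*}
\vartheta_1\Length(\partial\mathbb B^2)=1\cdot 2\pi=2\pi,
\end{equation*}
which is precisely the equality case $\gamma=0$, $k=1$ of \eqref{Steklov} obtained by Weinstock.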
This problem has also been studied in the higher dimensional setting \cite{Fraser-Schoen:higher-dimension}, where conformal invariance is lost and the maximizer does not exist in the class of smooth metrics.\\
In this paper we are interested in the analogous question for a conformal third-order boundary operator associated to the Paneitz operator, which yields the natural 4-dimensional generalization of the Steklov problem from the conformal geometry point of view. In addition, it contains strong topological information thanks to the Chern-Gauss-Bonnet formula \eqref{CGB-full} (see the discussion preceding it).
It was introduced in \cite{Chang-Qing:zeta1,Chang-Qing:zeta2} (see also the surveys: \cite{Chang:survey,Chang:survey-CRM}, for instance), and fully generalized in \cite{Case:boundary-operators}. We follow the presentation in the latter.\\
Let $(M^4,g)$ be a 4-dimensional compact Riemannian manifold with boundary $\Sigma=\partial M$. We keep the notation above for the interior quantities, while a tilde denotes the corresponding quantities for the boundary metric. Denote by $h$ the restriction of the metric $g$ to $T\Sigma$ and by $d\sigma_h$ the volume form for $h$ on $\Sigma$. Let $\eta$ be the outward-pointing normal, $I\!I=\nabla\eta|_{T\Sigma}$ the second fundamental form, $H = \trace_h \nabla\eta$ the mean curvature of $\Sigma$, and $I\!I_0=I\!I-\frac{H}{3}h$ the trace-free part of the second fundamental form.\\
We set, on the boundary $\Sigma$:
\begin{equation*}
\begin{split}
&B_g^1 u= \eta u,\\
&B_g^2 u= -\tilde\Delta u+D^2 u (\eta,\eta)+\frac{1}{3}H\eta u,
\end{split}
\end{equation*}
and the third order operator
\begin{equation}\label{Paneitz-boundary}
B_g^{3} u =-\eta\Delta u-2\tilde \Delta \eta u + 2\langle I\!I_0,\tilde D^2 u\rangle -\frac{2}{3}H\tilde \Delta u+\frac{2}{3}\langle\tilde\nabla H,\tilde \nabla u\rangle+\Big(-\frac{1}{3}H^2-2A(\eta,\eta)+2\tilde J+\frac{1}{2}|I\!I_0|^2\Big)\eta u.
\end{equation}
These operators satisfy a conformal covariance property analogous to \eqref{conformal-Paneitz}, namely
\begin{equation}\label{covariance}
B^k_{g_f} =e^{-kf}B^k_g\quad \text{on }\Sigma,\quad k=1,2,3.
\end{equation}
Define the bilinear form
\begin{equation*}
\mathcal Q_g(u_1,u_2)=\int_M u_1 P_g u_2\,dv_g+\oint_\Sigma \left(u_1 B_g^3(u_2)+B_g^1(u_1)B^2_g(u_2)\right)\,d\sigma_h
\end{equation*}
for $u_1,u_2\in\mathcal C^{\infty}(M)$. The main theorem in \cite{Case:boundary-operators} shows that $\mathcal Q_g$ is symmetric. The corresponding energy functional
\begin{equation*}\label{energy}
\mathcal E[u]=\mathcal Q_g(u,u)
\end{equation*}
is a conformal invariant. Indeed,
\begin{equation}\label{conformal-invariance}
\mathcal E_{g_f}[u]=\mathcal E_g[u].
\end{equation}
The boundary operator $B_g^3$ is associated to the following curvature quantity
\begin{equation}\label{T}
T_g=\eta J-\frac{2}{3}\tilde \Delta H-2\langle I\!I_0,\tilde A \rangle+\frac{4}{3}H\tilde J+\frac{1}{3}H|I\!I_0|^2-\frac{2}{27}H^3.
\end{equation}
For a conformal metric $g_f=e^{2f}g$, the $T$-curvature equation is
\begin{equation*}
B^3_{g} f+T_g=T_{g_f} e^{3f}.
\end{equation*}
In addition, the integral quantity
\begin{equation*}
\kappa_{g,h}:=\int_M Q_g\,dv_g+\int_\Sigma T_g\,d\sigma_h
\end{equation*}
is a conformal invariant. \\
It is well known that the mean curvature is the boundary curvature associated to the scalar curvature on $M$, and that the pair $(R,H)$ is conformally covariant. From the PDE point of view, these arise from a boundary value problem for the conformal Laplacian (see \eqref{LN} below). If one considers instead fourth-order equations on manifolds with boundary, the pair $(Q,T)$ is the natural generalization of the pair $(R,H)$, and it has been well studied: for the construction of constant $Q$-curvature metrics with vanishing $T$-curvature, see \cite{Nidiaye:constant-Q}, while the constant $T$-curvature problem was considered in \cite{Ndiaye:constant-T}. A $Q$-curvature flow on manifolds with boundary was analyzed in \cite{Ndiaye:flow}. In the particular case of $(\mathbb B^4, \mathbb S^3)$, sharp Sobolev trace inequalities for the curvature $T$ were proved in
\cite{Ache-Chang}.\\
In addition, the pair $(Q,T)$ controls topology in the 4-dimensional setting. More precisely, there is a Chern-Gauss-Bonnet formula analogous to \eqref{Gauss-Bonnet} for 4-manifolds with boundary \cite{Chang-Qing:zeta1}:
\begin{equation}\label{CGB-full}
8\pi^2\chi(M)=\int_M \Big(\frac{|W|_g^2}{4}+Q_g\Big)\,dv_g+\int_{\Sigma} \Big(T_g-\frac{2}{3}\trace I\!I_0^3\Big)\,d\sigma_h.
\end{equation}
If $M$ is a l.c.f. manifold with umbilic boundary, this formula greatly simplifies:
\begin{equation}\label{CGB}
8\pi^2\chi(M)=\int_M Q_g\,dv_g+\int_{\Sigma} T_g\,d\sigma_h.
\end{equation}
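As a sanity check of \eqref{CGB} (a direct computation on a model), consider the upper half-sphere $\mathbb S^4_+$ with its round metric: the boundary $\mathbb S^3$ is totally geodesic ($H=0$, $I\!I_0=0$) and $J\equiv 2$ is constant, so every term in \eqref{T} vanishes and $T\equiv 0$. Since $Q\equiv 6$ on the round sphere,
\begin{equation*}
\int_{\mathbb S^4_+}Q\,dv+\int_{\mathbb S^3}T\,d\sigma=6\cdot\tfrac{4\pi^2}{3}+0=8\pi^2=8\pi^2\chi(\mathbb S^4_+),
\end{equation*}
in agreement with \eqref{CGB}.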
Our first result in the boundary case is a classification statement based on the injectivity of the developing map $\Phi:M\to\mathbb S^4$ for a l.c.f. manifold, thus partially generalizing the seminal work by Schoen-Yau \cite{Schoen-Yau:paper}, \cite[Chapter VI]{Schoen-Yau:libro} to manifolds with boundary. We observe that the umbilicity assumption in the theorem is a natural one, since it is a conformally invariant property.
We denote by $\mathcal Y[g]$ the Yamabe invariant for manifolds with boundary (see \eqref{Y-invariant} for its precise expression). It is the natural generalization of \eqref{Yamabe-invariant-closed}, thus with a slight abuse of notation we denote it by the same letter.
\begin{theorem}\label{thm:positivity}
Let $M$ be a compact, orientable, l.c.f. Riemannian 4-manifold with umbilical boundary $\Sigma=\partial M$.
Then,
\begin{itemize}
\item[a.]
If $M$ is simply connected and $\Sigma$ has one connected component, then $M$ is conformally equivalent to a half-sphere
$$\mathbb S^4_+=\{(z_0,\ldots,z_4)\in\mathbb R^{5}\,:\, |z|=1,z_0\geq 0\}.$$
\item[b.] If $M$ is not necessarily simply connected, but $\chi(M)=1$ and $\mathcal Y[g]>0$, then the same conclusion holds \cite{Raulot}.
\item[c.] \label{2boundaries} Assume that $M$ is simply connected, $\Sigma$ has exactly two connected components, $R_g > 0$ and $Q_g>0$. Then $M$ is conformally equivalent to an annulus in $\mathbb R^4$, that can be chosen as
$$\mathcal A_\rho:=\{x\in\mathbb R^{4}\,:\,\rho\leq|x|\leq 1\} \quad\text{for some }\rho\in(0,1).$$
\end{itemize}
\end{theorem}
We remark that statement a. of the previous theorem follows from a classical doubling argument and has already been studied in the literature. On the other hand,
statement b. above was proved by Raulot \cite{Raulot} and we include it here for completeness. Thus our main contribution is statement c. for the annulus case, which is partly inspired by the work of Chang, Hang and Yang \cite{Chang-Hang-Yang} for closed manifolds of positive $Q$-curvature.
Part a. may be understood as a 4-dimensional version of the classical Riemann mapping theorem in the plane. For the multiply connected case, part c. tells us that we cannot conformally map two doubly connected regions $M$ and $M'$ to one another unless they share the same $\rho$. This is very similar to what happens in two dimensions, since two ring regions in the plane can be conformally mapped to one another only if they have the same \emph{extremal distance} or \emph{conformal modulus}, which is a conformally invariant quantity. This notion goes back to Ahlfors \cite{Ahlfors:book} (see also, for instance, the more modern exposition of \cite{Lawler}).\\
The proof of Theorem \ref{thm:positivity} also relies on the study of the developing map. A conformally invariant quantity that will be relevant in analyzing this developing map was defined by Escobar: it is the analogue of the Yamabe invariant for manifolds with boundary. This invariant is crucial in the so-called Yamabe problem with boundary, which seeks a metric conformal to a given one on $M$ that has constant scalar curvature and zero mean curvature on the boundary. The Yamabe problem with boundary was solved in many cases by Escobar in \cite{Escobar:Yamabe-with-boundary} (in particular, in dimension four, which suffices for our purposes). Related work can be found in \cite{Cherrier:problemes, Han-Li:Yamabe-with-boundary,Ambrosetti-Li-Malchiodi,Mayer-Ndiaye:remaining-cases,Brendle-Chen} from the variational point of view, and \cite{Brendle:Yamabe-flow,Almaraz-Sun:flow} for flow-type methods.
Note in addition that since the right-hand side of \eqref{CGB} is conformally invariant, it is convenient to take Escobar's solution as a background metric in $M$; in this particular case, $T\equiv 0$.\\
Our final goal in this paper is to understand the properties and eigenvalues of the third-order boundary operator $B_g^3$. To this operator we need to associate a second boundary condition, so we will work on the class of functions
\begin{equation*}
\begin{split}
\mathcal U_0=\{u:\overline M\to\mathbb R \,:\, u \text{ smooth}, \,\partial_\eta u=0\text{ on }\Sigma\}.
\end{split}
\end{equation*}
In this class the energy functional reduces to
\begin{equation}\label{energy1}
\begin{split}
\mathcal E_g^M[u]
&=
\int_{M} (\Delta_g u)^2\,dv_g+\int_M \Big(2Jg_{ab}-4A^g_{ab}\Big)\nabla^a u\nabla^b u\,dv_g+\frac{2}{3}\int_\Sigma H|\tilde\nabla u|_h^2\,d\sigma_h-2\int_\Sigma (I\!I_0)_{ij}\tilde\nabla^i u\tilde\nabla^j u\,d\sigma_h.
\end{split}
\end{equation}
Thus we would like to study the boundary eigenvalue problem
\begin{align}
P_gu &=0 \hbox{ in } M, \label{bi-laplacian-eq}\\
B^1_g u&=0 \hbox{ on } \Sigma, \label{boundary-condition}\\
B^3_g u&= \lambda u \hbox{ on } \Sigma. \label{eigenvalue-problem}
\end{align}
It is possible to show that there exists an increasing sequence of eigenvalues
\begin{equation*}
\lambda_0^g\leq \lambda_1^g\leq \lambda_2^g\leq\ldots
\end{equation*}
A straightforward calculation from the models yields a statement about the positivity of $B^3_g$:
\begin{corollary}\label{cor:positivity}
In all cases a., b., c. in Theorem \ref{thm:positivity} above we have $\lambda_0^g=0$ and the corresponding eigenspace consists only of constant functions.
\end{corollary}
This implies, in particular, that $\lambda_1^g>0$, and it may be characterized by the following Rayleigh-type quotient:
\begin{equation}\label{Rayleigh-quotient1}
\lambda_1^g=\min_{\mathcal U_0\,:\,\int_\Sigma u=0}\frac{\mathcal E_g^M[u]}{\displaystyle\int_\Sigma u^2\,d\sigma_h}.
\end{equation}
The question of positivity of $B^3_g$ has also been analyzed in other contexts, see for example the work in \cite{Case-Chang}.\\
Now we look at the min-max problem for $\lambda_1^g$. Our main theorem is the four-dimensional generalization of \eqref{Steklov}, which may be applied to manifolds satisfying the hypotheses of Theorem \ref{thm:positivity}:
\begin{theorem} \label{thm:bounds-boundary}
We get the following bounds for $\lambda_1^g>0$:
\begin{itemize}
\item[\emph{a.}] If $M$ is conformally equivalent to a half-sphere $\mathbb S^4_+$,
\begin{equation*}
\lambda_1^g \vol(\Sigma)\leq 24\pi^2,
\end{equation*}
and it is attained at a flat disk.
\item[\emph{b.}] If $M$ is conformally equivalent to an annulus $\mathcal A_\rho$ (with boundaries $\Sigma_1$, $\Sigma_ \rho$),
\begin{equation}\label{statement-annulus}
\lambda_1^g \vol(\Sigma)\leq c\Big(\rho,\frac{\vol(\Sigma_\rho)}{\vol(\Sigma_1)}\Big),
\end{equation}
where the constant can be computed explicitly.
In addition, there is $\rho^*>0 $ such that for $\rho\leq \rho^*$ the bound is sharp.
\end{itemize}
\end{theorem}
The bounds of the previous theorem are obtained by comparison with explicit computations in two types of model manifolds: a 4-dimensional ball and 4-dimensional annuli (see Section \ref{section:calculations}). The computations in the ball model are straightforward and provide optimal bounds. On the other hand, the precise calculations for the annuli are based on the ideas in
\cite{Fraser-Schoen:annulus}, and although elementary, they soon become quite technical.\\
Finally, we make some bibliographical remarks on related eigenvalue problems. For a general introduction to boundary value problems for fourth-order operators we refer to the monograph \cite{Gazzola-Grunau-Sweers}. Many versions of (fourth-order) eigenvalue problems in which the eigenvalue appears in the boundary condition have appeared in the literature \cite{Buoso-Provenzano:biharmonic-Steklov,Bucur-Ferrerro-Gazzola,Liu:Weyl,Gazzola-Sweers,Bucur-Gazzola,Knopf-Liu}. These are known as biharmonic Steklov eigenvalue problems. One particular application of this is to study suitable boundary conditions for the Cahn-Hilliard equation. This is a model that describes phase separation
processes of binary mixtures by a non-linear fourth-order equation. In recent years, several types of dynamic
boundary conditions have been proposed in order to account for the
interactions of the material with the solid wall and, in particular, third order boundary conditions play an essential role in the model \cite{Liu-Wu:Cahn-Hilliard,Knopf-Lam-Liu-Metzger}.\\
The organization of the paper is as follows: In Section \ref{section:closed} we discuss the eigenvalue problem for the Paneitz operator on closed manifolds and we prove Theorem \ref{thm:closed}. In Section \ref{section:Escobar} we discuss preliminary notions that are needed in the proof of
the classification Theorem \ref{thm:positivity}, which is finally proved in Section \ref{section:injectivity}. Particular geometric models are analyzed in more detail in Section \ref{section:calculations} and in the Appendix. We finally prove Corollary \ref{cor:positivity} and Theorem \ref{thm:bounds-boundary}
in Section \ref{sec: boundary value}.\\
\noindent{\it Acknowledgments: The authors would like to thank Jeffrey Case, Alice Chang, Gaven Martin, Vicente Mun\~{o}z and Riccardo Piergallini, for many useful discussions and suggestions.
The authors would like to also thank the anonymous referee that pointed out a gap in the first version of this manuscript and led to improvements of our work.}
\section{The closed case}\label{section:closed}
In this section we give the proof of Theorem \ref{thm:closed}, which we summarize here: first, the positive curvature assumptions allow us to control the topology of $M$, so that either $M$ is conformally equivalent to a sphere $\mathbb S^4$, or $M$ is covered by a cylinder $\mathbb R\times \mathbb S^3$. In the first case, we can obtain an upper bound for $\lambda_1^g$ using the scheme of Yang-Yau \cite{Yang-Yau}, which is based on a trick by Hersch \cite{Hersch}. Hersch's idea is to use the coordinate functions of the embedding as test functions in the Rayleigh quotient \eqref{Rayleigh}. A modification of this strategy yields the cylinder case too.\\
We recall now some facts about locally conformally flat (l.c.f.) manifolds; for additional background, we refer to the book \cite[Chapter VI]{Schoen-Yau:libro}. A Riemannian metric $g$ on a smooth manifold $M$ is called l.c.f. if for every point $p\in M$, there exists a neighbourhood $U$ of
$p$ and a smooth function $f$ on $U$ such that the metric $e^{2f}g$ is flat on $U$.
Note that, in dimension $4$, a Riemannian manifold is locally conformally flat if and only if the Weyl tensor $W$ vanishes.\\
We assume, to start with, that $M$ is a simply connected, closed l.c.f. manifold of dimension $n$. Liouville's theorem \cite[Theorem 1.6 in Chapter VI]{Schoen-Yau:libro} allows us to patch all these neighborhoods together to obtain a globally defined conformal immersion $\Phi:M\to \mathbb R^n$ (or, equivalently, $\Phi:M\to \mathbb S^n$ by stereographic projection) such that the locally conformally flat structure of $M$ is induced by $\Phi$. The map $\Phi$ is called the \emph{developing map} and it is unique up to conformal transformations of $\mathbb S^n$.\\
Note that a simple topological argument yields the well known characterization result by Kuiper \cite{Kuiper} (see also the notes \cite{Howard} for remarks on regularity). Indeed, $\Phi(M)$ is at the same time open and closed in $\mathbb S^n$. More precisely:
\begin{theorem}[Kuiper \cite{Kuiper}]
Any $n$-dimensional closed simply-connected locally conformally flat manifold is conformally equivalent to $\mathbb S^n$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm:closed}.i.]
By our previous discussion, there is a bijective conformal embedding $\Phi: (M^4, g)\to (\mathbb{S}^4, g_{\mathbb{S}^4})$, where $g_{\mathbb S^4}$ is the canonical metric on the sphere. We denote by $(z_0,z_1,\ldots,z_4)$ the coordinates of $\mathbb S^4$ in $\mathbb R^5$ and by $\Phi_i$ the $i$-th coordinate of the embedding $\Phi$, $i=0,1,2,3,4$.\\
In this setting, we must have $\lambda_1^g>0$ with $\Ker(P_g)=\{\text{constants}\}$ since condition \eqref{Gursky-condition} is trivially satisfied on the sphere.\\
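Indeed, since the left-hand side of \eqref{Gursky-condition} is conformally invariant, it suffices to check the condition for the round metric (a direct computation): there $\int_{\mathbb S^4}Q\,dv=16\pi^2$ and the infimum in \eqref{Yamabe-invariant-closed} is attained at the round metric itself, giving $\mathcal Y[g_{\mathbb S^4}]=12\,\vol(\mathbb S^4)^{1/2}=8\sqrt6\,\pi$, so
\begin{equation*}
\int_{\mathbb S^4}Q\,dv+\frac13(\mathcal Y[g])^2=16\pi^2+\frac{384\pi^2}{3}=144\pi^2>0.
\end{equation*}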
Let us check now that $\Phi_i$ is an admissible function for the Rayleigh quotient \eqref{Rayleigh}.
A standard calibration argument (see Lemma 1.1 in \cite{Hersch} or page 107 in \cite{Gromov}) yields that we can choose the embedding satisfying
\begin{equation}\label{calibration}
\int_M \Phi_i\,dv_g=0,\quad i=0,\ldots,4.
\end{equation}
Moreover,
\begin{equation}\label{test-function}
\lambda_1^g \int_M \Phi_i^2\,dv_g\leq \mathcal E^M_g[\Phi_i]= \mathcal E^{\Phi(M)}_{ g_{\mathbb{S}^4}}[z_i].
\end{equation}
Summing over $i$, we have
$$ \lambda_1^g\vol(M)\leq \sum_{i=0}^4 \mathcal E^{\Phi(M)}_{ g_{\mathbb{S}^4}}[z_i],$$
here we have used that $\sum_{i=0}^4\Phi_i^2=1$. Now recall that $\Phi:M\to \mathbb S^4$ is a bijection and calculate, from the expression of the energy \eqref{Eg},
$$\mathcal E^{\mathbb{S}^4}_{g_{\mathbb{S}^4}}[z_i]=\mu_1^2\int_{\mathbb{S}^4} z_i^2\,dv_{g_{\mathbb{S}^4}}+ 2\int_{\mathbb{S}^4} |\nabla z_i|^2 \,dv_{\mathbb{S}^4}=
\big(\mu_1^2+2\mu_1\big)\int_{\mathbb{S}^4} z_i^2\,dv_{g_{\mathbb{S}^4}}.$$
Here $\mu_1=4$ is the first non-zero eigenvalue of the (minus) Laplace-Beltrami operator on $\mathbb{S}^4$.
Thus we conclude
$$\sum_{i=0}^4 \mathcal E^{\Phi(M)}_{ g_{\mathbb{S}^4}}[z_i]= 24\vol(\mathbb{S}^4)=64 \pi^2,$$
and that this bound is sharp, since the coordinate functions are already eigenfunctions. This completes the proof.
\end{proof}
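Let us record the arithmetic behind the sharpness (a direct check): on the round metric one has $A=\tfrac12 g$ and $J=2$, so $4A-2Jg=-2g$ and $P_{g_{\mathbb S^4}}=\Delta^2-2\Delta$. Since $-\Delta z_i=4z_i$,
\begin{equation*}
P_{g_{\mathbb S^4}}z_i=(16+8)\,z_i=24\,z_i,
\end{equation*}
so $\lambda_1^{g_{\mathbb S^4}}=24$ and $\lambda_1^{g_{\mathbb S^4}}\,\vol(\mathbb S^4)=24\cdot\tfrac{8\pi^2}{3}=64\pi^2$; equality in the bound of Theorem \ref{thm:closed}.\emph{i.} is thus attained by the round sphere.\\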
We next consider the non-simply connected case; in the l.c.f. 4-dimensional setting it turns out that positive curvature gives information about the topology. Since the Weyl term vanishes for l.c.f. manifolds, under the assumption $\kappa_g\geq 0$ the Gauss-Bonnet formula \eqref{Gauss-Bonnet} implies that $\chi(M)\geq 0$.
The classification of such manifolds according to the Euler characteristic was studied by Gursky in \cite[Theorem A]{Gursky:lcf-4-6}: if $M$ is a compact 4- or 6-dimensional manifold which admits a l.c.f. metric $g$ of non-negative scalar curvature, then $\chi(M) \leq 2$. Furthermore, $\chi(M) = 2$ if and only if $(M,g)$ is conformally equivalent to the sphere with its canonical metric, and $\chi(M) = 1$ if and only if $(M,g)$ is conformally equivalent to projective space with its canonical metric.
The remaining case $\chi(M)=0$ was characterized in \cite[Corollary G]{Gursky:Weyl}: if $(M,g)$ is a compact, l.c.f. 4-manifold with $\mathcal Y[g]>0$ and $\chi(M) = 0$, then $(M,g)$ is conformal to a quotient of the cylinder $\mathbb R\times \mathbb S^3$.\\
A related result was proven by Chang, Hang and Yang \cite[Corollary 1.2]{Chang-Hang-Yang}. More precisely, if $M$ has positive scalar curvature and positive $Q$-curvature, then $M$ is conformally equivalent to a quotient of the sphere. Note that if we remove the positive $Q$-curvature assumption one may construct examples of manifolds that are conformally equivalent to $\mathbb S^4\setminus\{p_1,\ldots,p_N\}$ (see \cite[Theorem 1.3]{Chang-Hang-Yang} and the discussion there).
As a side remark, closed, flat manifolds are isometric to $\mathbb R^n / \Gamma$, for $\Gamma$ a Bieberbach group.
A short overview on Bieberbach manifolds can be found in \cite[Section 4.1]{Bettiol-Piccione}.
\\
\begin{proof}[Proof of Theorem \ref{thm:closed}.ii.]
It follows as in part \emph{i.}, taking into account that $M$ is orientable.
\end{proof}
Now we deal with the remaining case in which $M$ is (conformally) covered by a cylinder $\mathbb R\times \mathbb S^3$. These manifolds have been studied in \cite{Hillman:cylinder}, \cite[Chapter 11]{Hillman:book}. It is known that a closed 4-manifold $M$ is covered by $\mathbb R\times\mathbb S^3$ if and only if $\pi_1 = \pi_1(M)$ has two ends and $\chi(M) = 0$. Its homotopy type is then determined by $\pi_1$ and the first nonzero $k$-invariant $k(M)$. While all the possible subgroups of $\pi_1( \mathbb R\times\mathbb S^3) $ are well known, there is not a complete classification of which manifolds can be actually realized with such fundamental groups (see also \cite{Hamilton} for examples of quotients with positive curvature). In any case, we assume that the fundamental domain $\Omega:=\Phi(M)$ is exactly a region $[0,\varrho)\times \mathbb S^3$ of $\mathbb R\times\mathbb S^3$ for some $\varrho>0$, with periodicity in the real variable.
\begin{proof}[Proof of Theorem \ref{thm:closed}.iii.]
Note first that condition \eqref{Gursky-condition} for positivity of $\lambda_1^g$ is trivially satisfied by our hypothesis.
To construct suitable test functions we consider the coordinates on the sphere $\mathbb S^3$ and use a variation of Hersch's calibration method in \cite{Hersch}. More precisely, given $(t, y_1, y_2, y_3, y_4)\in [0,\varrho)\times\mathbb S^3$ we consider its spherical component $(y_1, y_2, y_3, y_4)\in \mathbb{S}^3$ and apply a Moebius transformation $\varphi_{p,\delta}$ of $\mathbb S^3$, induced by dilations on the tangent plane at a point $p\in\mathbb S^3$. We briefly sketch Hersch's argument to show that by this procedure we can find $p\in \mathbb{S}^3$ and $\delta\in (0,1]$ such that
\begin{equation}\label{calib}\int_M x_i\circ \varphi_{p,\delta}\circ \Psi_{\mathbb{S}^3} \,dv_g=0 \quad\text{for}\quad i\in\{1,2, 3, 4\}. \end{equation}
Here $x_i$ is the $i$-th coordinate function on $\mathbb{S}^3$ and $\Psi_{\mathbb{S}^3}$ the projection onto $\mathbb{S}^3$ of the conformal embedding of $M$ into $[0,\varrho) \times \mathbb S^3$. Let
$$F_i(p,\delta)= \int_M x_i\circ \varphi_{p,\delta}\circ \Psi_{\mathbb{S}^3}\, dv_g$$ and $F=(F_1, F_2, F_3, F_4)\in \mathbb{R}^4 $.
Note first that $F\to p \vol(M)$ as $\delta\to 0$ and, in particular, the surface $F(\cdot,\delta)$ tends to a sphere of radius $\vol(M)$ that does not touch the origin. On the other hand, $F(p,1)$ is a fixed point independent of $p$, while for any given $\delta\in (0,1)$, $F(\cdot,\delta)$ is an immersed 3-dimensional surface in $\mathbb{R}^4$ that is continuous in $\delta$. If $F(p,1)=0$ we already have the desired transformation; otherwise, a continuity argument in $\delta$ and $p$ implies that there are $p\in \mathbb{S}^3$ and $\delta\in (0,1)$ such that $F(p,\delta)=0$, which is precisely \eqref{calib}. Note, in addition, that
$$\sum_{i=1}^4(x_i\circ\varphi_{p,\delta}\circ \Psi_{\mathbb{S}^3})^2=1,$$
which yields
$$ \lambda_1^g\vol(M)\leq \sum_{i=1}^4 \mathcal E^{\Phi(M)}_{ g_{\mathbb R\times\mathbb{S}^3}}[x_i\circ\varphi_{p,\delta}],$$
where we used that the energy is a conformal invariant. To complete the proof we explicitly compute the energy of these transformations and maximize it over $\delta\in [0,1]$. By symmetry, it is enough to consider the Moebius transformations centered at the South pole $p=S$, which are given by $$\varphi_{p, \delta}(\hat{y}, y_4)=\left( \frac{2 \delta\hat{y}}{(1-y_4)+\delta^2 (1+y_4)}, \frac{y_4-1+\delta^2 (1+y_4)}{(1-y_4)+\delta^2 (1+y_4)} \right),$$
where we write $\hat y=(y_1,y_2,y_3)$.
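As a quick consistency check (not needed for the argument), one may verify that $\varphi_{p,\delta}$ indeed maps $\mathbb S^3$ to itself and that $\varphi_{p,1}$ is the identity. Writing $D=(1-y_4)+\delta^2(1+y_4)$ and using $|\hat y|^2=1-y_4^2$,
$$|\varphi_{p,\delta}(\hat y,y_4)|^2=\frac{4\delta^2(1-y_4)(1+y_4)+\big[\delta^2(1+y_4)-(1-y_4)\big]^2}{D^2}=\frac{D^2}{D^2}=1,$$
while for $\delta=1$ we get $D=2$ and $\varphi_{p,1}(\hat y,y_4)=(\hat y,y_4)$.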
Then, denoting by $\varphi^i$ the $i$-th coordinate of $\varphi_{p, \delta}(\hat{y}, y_4)$ (or equivalently, $x_i\circ \varphi_{p, \delta}$) and $f(y_4, \delta)= \frac{1}{(1-y_4)+\delta^2 (1+y_4)}$, the energy is
given by
\begin{equation*}
\begin{split}\sum_{i=1}^4 \mathcal E^{\Phi(M)}_{ g_{[0,\varrho)\times\mathbb{S}^3}}[x_i\circ\varphi_{p,\delta}]=\int_{[0,\varrho)\times \mathbb S^3}
4\delta^2 f^2 |\hat{y}|^2 \big[&3+5y_4(1-\delta^2)f-2 (1-\delta^2)^2(1-y_4^2)f^2\big]^2\\
&+ 16 \delta ^4 f^4\big[3y_4- 2(1-y_4^2)(1-\delta^2) f\big]^2\, d\mu_M.
\end{split}
\end{equation*}
To obtain a uniform bound in $\delta$ for this energy, we parametrize $\mathbb S^3$ by $(\sin \phi \, \, \omega, \cos \phi)$, where $\omega\in \mathbb S^2$ and $\phi\in[0,\pi)$. Then the volume element is given by $\sin^2\phi \,d\phi\, d\mu_{\mathbb{S}^2}$, where $\mu_{\mathbb{S}^2}$ is the volume measure of $\mathbb{S}^2$, and we obtain
\begin{align*}
\sum_{i=1}^4 \mathcal E^{\Phi(M)}_{ g_{[0,\varrho)\times\mathbb{S}^3}}[x_i\circ\varphi_{p,\delta}]&=16\delta ^2 \pi\varrho\int_0^\pi
f^2 \sin^4\phi\,\big[3+5\cos \phi(1-\delta^2)f-2 (1-\delta^2)^2f^2 \sin^2\phi\big]^2 \,d\phi\\ &+
64\delta ^4 \pi\varrho\int_0^\pi
f^4\sin^2 \phi \,\big[3\cos \phi- 2 \sin^2\phi(1-\delta^2) f\big]^2\, d\phi,
\end{align*}
where now $f(\phi, \delta)=\frac{1}{1-\cos \phi+\delta^2(1+\cos\phi)}.$
As $\delta \to 1,$ it is easy to verify that
\begin{equation*}f\to \frac{1}{2}\qquad \text{and}
\qquad \sum_{i=1}^4 \mathcal E^{\Phi(M)}_{ g_{[0,\varrho)\times\mathbb{S}^3}}[x_i\circ\varphi_{p,\delta}]\to
18 \pi^2 \varrho.
\end{equation*}
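The limiting value can be verified by direct substitution: for $\delta=1$ we have $f\equiv\tfrac12$, the two brackets reduce to $9$ and $9\cos^2\phi$, and using $\int_0^\pi\sin^4\phi\,d\phi=\tfrac{3\pi}{8}$ and $\int_0^\pi\sin^2\phi\cos^2\phi\,d\phi=\tfrac{\pi}{8}$ we obtain
$$16\pi\varrho\cdot\tfrac{1}{4}\cdot 9\cdot\tfrac{3\pi}{8}+64\pi\varrho\cdot\tfrac{1}{16}\cdot 9\cdot\tfrac{\pi}{8}=\tfrac{27}{2}\pi^2\varrho+\tfrac{9}{2}\pi^2\varrho=18\pi^2\varrho.$$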
To study the behavior for $\delta$ small we observe that %
$$ \mathcal E^{\Phi(M)}_{ g_{[0,\varrho)\times\mathbb{S}^3}}[x_i\circ\varphi_{p,\delta}]\leq C \delta^2\varrho \pi \int_0^\pi f^4 \sin^4\phi \,d\phi.$$
Since $f$ is decreasing in $\delta,$ we have that $f(\phi, \delta)\leq \frac{1}{1-\cos \phi}$ and $f\sin^2\phi \leq 1+\cos\phi\leq 2.$ Then,
$$ \mathcal E^{\Phi(M)}_{ g_{[0,\varrho)\times\mathbb{S}^3}}[x_i\circ\varphi_{p,\delta}]\leq C \delta^2 \varrho \pi \int_0^\pi f^2 \,d\phi \leq C \frac{\varrho \pi}{\delta}.$$
The constants and the full energy can be computed explicitly, but we omit this here for simplicity.\\
Now we use Condition \eqref{concentrating} to find a lower bound for $\delta$. Again, for simplicity we assume that the Moebius transformation is centered at the South pole $p=S$. With a slight abuse of notation we identify $\mathcal B_\delta(N)=\Psi^{-1}(\mathcal B_\delta(N))$, where $N$ is the North pole.
Then \eqref{calib} yields
$$\int_{M\cap \mathcal B^c_\delta(N)} x_4\circ \varphi_{p,\delta}\circ \Psi_{\mathbb{S}^3} \,dv_g=- \int_{M\cap \mathcal B_\delta(N)} x_4\circ \varphi_{p,\delta}\circ \Psi_{\mathbb{S}^3} \,dv_g. $$
Observe that $y=(y_1,\, y_2, \, y_3, \, y_4) \in \mathcal B_\delta(N)$ implies that $0\leq 1-y_4 < C\delta$. Then for $(y_1,\, y_2, \, y_3, \, y_4)\in \mathcal B^c_\delta(N)$ it holds that $0\leq y_4\circ\varphi_{p, \, \delta}(y) +1 \leq C \delta,$ and
\begin{align*} - \int_{M\cap \mathcal B_\delta(N)} x_4\circ \varphi_{p,\delta}\circ \Psi_{\mathbb{S}^3} \,dv_g & = \int_{M\cap \mathcal B^c_\delta(N)} (1+x_4\circ \varphi_{p,\delta}\circ \Psi_{\mathbb{S}^3}) \,dv_g
-\vol_M(M\cap \mathcal B^c_\delta(N))\\
& \leq (C\delta -1)\vol_M(M\cap \mathcal B^c_\delta(N)). \end{align*}
Since $|x_4\circ \varphi_{p,\delta}\circ \Psi_{\mathbb{S}^3}|\leq 1,$
we conclude that
$$-\vol_M(M\cap \mathcal B_\delta(N))
\leq (C\delta -1) \vol_M(M\cap \mathcal B^c_\delta(N)). $$
Using condition \eqref{concentrating} we have that if $\delta<\delta_0$ then
$$1-\varepsilon\leq 1-\frac{\vol_M(M\cap \mathcal B_\delta(N)) }{\vol_M(M\cap \mathcal B^c_\delta(N))}\leq C\delta.$$
This concludes the proof.\\
Finally, we point out that test functions need to be periodic in the $t\in [0,\varrho)$ variable since $[0,\varrho)\times \mathbb{S}^3$ is the fundamental domain of a quotient. Note, in addition, that the transformations $\varphi_{p,\delta}$ are periodic in $t$, but not conformal on the cylinder. Nonetheless, they provide suitable test functions for which the energy can be explicitly computed.
\end{proof}
In Lemma \ref{lemma:periodic} we will calculate the precise eigenvalue for the canonical metric on $[0,\varrho)\times \mathbb S^3$, which shows, on the one hand, that our bound is far from being sharp as $\varrho\to \infty$ and, on the other hand, justifies the need for a geometric condition such as \eqref{concentrating} as $\varrho\to 0$. Indeed, the lowest positive eigenvalue is
\begin{equation*}
\big[2+(\tfrac{2\pi}{\varrho})^2\big]^2 -4.
\end{equation*}
\section{Preliminaries on the boundary case}\label{section:Escobar}
\subsection{Escobar's problem}
Here we recall some background on the Yamabe invariant for manifolds with boundary. We use the notation from Subsection \ref{section:introduction-boundary} in the Introduction. Let $(M,g)$ be a compact, $n$-dimensional, Riemannian manifold with boundary $\Sigma=\partial M$, and let $h$ be the restriction of the metric $g$ to the boundary. The first observation is that the conformal Laplacian on $M$ can be associated to a boundary operator $N_g$ on $\Sigma$. We set
\begin{equation}\label{LN}
\left\{\begin{split}
&L_g u:=-\Delta_g u+\tfrac{n-2}{4(n-1)}R_g u\quad \text{in }M,\\
&N_g u:=\partial_\eta u+\tfrac{n-2}{2}H_g u\quad \text{on }\Sigma.
\end{split}\right.
\end{equation}
We note that $N_g$ plays the role of a Neumann (more precisely, Robin) boundary condition. The most important property of this system is that the pair $(L_g,N_g)$ is conformally covariant. Indeed, for a conformal change $g_u=u^\frac{4}{n-2}g$ we have
\begin{equation}\label{eq-conformal}
\begin{split}
&L_{g_u}(u^{-1}\phi)=u^{-\frac{n+2}{n-2}}L_ {g}\phi\quad\text{in }M,\\
&N_{g_u}(u^{-1}\phi)=u^{-\frac{n}{n-2}}N_{g}\phi \quad\text{on }\Sigma.
\end{split}
\end{equation}
The Yamabe problem for manifolds with boundary asks for a metric conformal to $g$ with constant scalar curvature on $M$ and zero mean curvature on $\Sigma$. In PDE language, we look for a positive solution to
\begin{equation}\label{problem-Escobar}
\left\{\begin{split}
&L_g u=cu^{\frac{n+2}{n-2}}\quad \text{in }M,\\
&N_g u=0\quad \text{on }\Sigma.
\end{split}\right.
\end{equation}
This problem was first studied by Escobar \cite{Escobar:Yamabe-with-boundary}, who solved it in many cases, including the 4-dimensional, l.c.f. case with umbilic boundary, which is the setting of this paper. More precisely, a solution may be found using variational methods for the following Yamabe invariant
\begin{equation}\label{Y-invariant}\mathcal Y[g]=\inf\{\mathcal R_g[u]\,:\, u\in W^{1,2}(M),\, u\not\equiv 0\}\end{equation}
where
\begin{equation*}
\mathcal R_g[u]=
\frac{\displaystyle\int_M u L_gu\,dv_g}{\displaystyle\Big(\int_M u^{\frac{2n}{n-2}}\,dv_g\Big)^{\frac{n-2}{n}}}
=\frac{\displaystyle\int_M \left(|\nabla u|^2_g+\tfrac{n-2}{4(n-1)} R_g u^2\right)\,dv_g+\tfrac{n-2}{2}\int_\Sigma H_g u^2\,d\sigma_h}{\displaystyle\Big(\int_M u^{\frac{2n}{n-2}}\,dv_g\Big)^{\frac{n-2}{n}}}.
\end{equation*}
It is well known that a (positive) solution exists if $\mathcal Y[g]<\mathcal Y[g_{\mathbb S^n_+}]$, and that if equality is attained then $(M,g)$ is conformally equivalent to the model $\mathbb S^n_+$. In addition, the sign of $\mathcal Y[g]$ coincides with the sign of the first eigenvalue of the conformal Laplacian on $M$ (coupled with the boundary condition $N_gu=0$).
\begin{remark}
In the following, we may assume without loss of generality that our background metric $g$ has constant scalar curvature and zero mean curvature on the boundary.
\end{remark}
\subsection{Raulot's theorem}
In this subsection we recall the following theorem of Raulot~\cite{Raulot} in dimensions 4 and 6:
\begin{theorem}[\cite{Raulot}]\label{thm:Raulot} Let $n$ be 4 or 6, and let $M$ be an $n$-dimensional l.c.f. manifold with umbilical boundary. Assume that the Yamabe invariant satisfies $\mathcal Y[g]\geq 0$. Then the Euler characteristic satisfies the bound $\chi(M)\leq 1$. In addition, if $\chi(M)=1$ and $\mathcal Y[g]>0$, then $(M, g)$ is conformally equivalent to the standard half-sphere $\mathbb S^n_+$.
\end{theorem}
Note that this immediately yields part b. of Theorem \ref{thm:positivity}.
\section{Boundary case: Proof of Theorem \ref{thm:positivity}} \label{section:injectivity}
In this section we prove Theorem \ref{thm:positivity}. We recall that in the locally conformally flat setting there exists a conformal map $\Phi:M\to\mathbb S^4$ (the developing map). The key ingredient of our result is the injectivity of this map under the hypotheses of Theorem \ref{thm:positivity}.
\subsection{One boundary component.}\label{subsection:one-component} We begin by proving the following result.
\begin{proposition}\label{prop:injectivityi}
Let $(M,g)$ be a simply connected, compact, l.c.f. 4-dimensional Riemannian manifold with umbilic boundary $\Sigma$. If $\Sigma$ has one connected component, then the developing map $\Phi:M\to\mathbb S^4$ is injective and hence a diffeomorphism onto its image.
\end{proposition}
\begin{proof}
In this situation the injectivity of the developing map relies on a classical doubling argument. We follow the presentation of Spiegel \cite{Spiegel} to describe the construction.
Consider the doubling of the manifold $M$ defined as $\hat M = M \cup (-M)$, where we write $-M$ for a
second copy of $M$ that is distinguished from $M$ itself (for instance by taking $M\times\{1\} $ and $M\times\{-1\} $). Here the manifold $M$ and its copy $-M$ are identified at their boundaries $\Sigma$ and hence $\hat M$ is a closed manifold (see \cite[Chapter 5]{book:doubling} for more details).
Since $\Sigma=\partial M \subset \hat M$ is umbilic, and umbilicity is a conformally invariant property, the image of $\Sigma$
must be umbilic in $\mathbb S^4$, and thus contained in a hypersphere of $\mathbb S^4$. Now, since $\Phi|_{\Sigma}$ is a local diffeomorphism from a compact manifold to a simply connected one, $\Phi|_{\Sigma}$ is in fact a diffeomorphism.
Composing
with a M\"obius transformation of $\mathbb S^4$, if necessary, we may assume that $\Phi(\Sigma)$ is the
equator $\{z=(z_0,\ldots,z_{4}) \in\mathbb S^4\subset \mathbb R^{5} \,;\, z_{0} = 0\}$.
Now take the odd extension of $\Phi$ to $\hat M$,
as follows:
\begin{equation*}
\hat \Phi(p):=(\hat \Phi_0(p),\ldots,\hat \Phi_{4}(p)),
\end{equation*}
where $\hat \Phi_i(p)=\Phi_i(p)$ for $i=1,\ldots,4$ and
\begin{equation*}
\hat \Phi_{0}(p)=\left\{
\begin{split}
&\Phi_{0}(p) \,\text{ if }p\in M,\\
&-\Phi_{0}(p) \,\text{ if }p\in -M.
\end{split}\right.
\end{equation*}
Now, by a straightforward connectedness argument, we conclude that $\hat \Phi:\hat M\to \mathbb S^4$ is a diffeomorphism and thus $\Phi$ is injective, as desired.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:positivity}.a.]
From the proof above we have (perhaps after a Moebius transformation) that $\Phi(\Sigma)$ is an equator. Since $\hat\Phi:\hat M\to \mathbb{S}^4$ is a diffeomorphism, the restriction of $\Phi$ to $M$ necessarily maps the manifold diffeomorphically onto a hemisphere $\mathbb{S}^4_+$.
\end{proof}
\subsection{Two boundary components.}
We now study the case with two boundaries. As explained in Section \ref{section:Escobar}, we may choose a conformal metric on $M$ such that the scalar curvature $R$ is constant in $M$ and the boundary is umbilic and minimal. Then,
we can again consider the doubling $\hat M$ of the manifold $M$ (following the same construction as in Subsection \ref{subsection:one-component}) and, using the result in the Appendix
of \cite{Escobar:Yamabe-with-boundary} we have that $\hat M$ is smooth with Escobar's metric (and actually has a well characterized Green's function). Moreover,
since $\hat M$ is a compact l.c.f. Riemannian manifold without boundary, the developing map $\Phi:\hat M\to\mathbb S^4$ exists.
We remark that the metric in the doubling $\hat M$, denoted by $\hat g$, can be taken to be $C^{2,\alpha}$ smooth. In addition, $R_{\hat g}>0$. Thus we can apply the results in \cite[Theorem 4.1]{Schoen-Yau:libro} to conclude that $\hat M$ is conformally equivalent to a quotient $\Omega/\Gamma$ for some domain $\Omega$ in $\mathbb S^n$.
Now we restrict to the image of $M$ under the developing map, denoted by $\Omega'=\Phi(M)$; it is a subdomain of $\Omega$. The boundary of $\Omega'$ can be written as $\partial\Omega'=\Phi(\Sigma)\cup \mathcal B$, where $\mathcal B$ is a set of branching points.
We first analyze the boundary $\Sigma=\partial M$, which we recall is umbilic; hence the image of each connected component of $\Sigma$ must be umbilic in $\mathbb S^4$, that is, contained in a hypersphere of $\mathbb S^4$. Moreover, for each connected component $\Sigma'$ of $\Sigma$, $\Phi|_{\Sigma'}$ is a local diffeomorphism from a compact manifold to a simply connected one, so $\Phi|_{\Sigma'}$ is in fact a diffeomorphism. This implies that there is a small neighborhood of $\Sigma$ in $M$ on which $\Phi$ is a local diffeomorphism and hence no branching points can occur in this set.
To analyze $\mathcal B$ away from $\Phi(\Sigma)$, we consider $\Omega'$ with the metric induced by the original metric $g$ in $M$ (not Escobar's). Since with the metric $g$
we assumed that the $Q-$curvature is positive, we
can use the arguments in the proof of \cite[Theorem 1.2]{Chang-Hang-Yang} to conclude that the set $\mathcal B$ is empty. It is important to observe that this is possible since the proof of \cite{Chang-Hang-Yang} is local around each point $x\in\mathcal B$, and we argued in the paragraph above that $\mathcal B$ is at a positive distance from $\Phi(\Sigma)$.
In summary, we conclude that $\Phi$ cannot have branching points and it is a local diffeomorphism.
This in particular implies that $ \hat M$ can be identified with a quotient of $\mathbb{S}^4$ and thus, by restricting the developing map $\Phi:\hat M\to \mathbb S^4$ to $M$ we have that the developing map of $M$ is injective.
We remark that, in fact, under our assumptions, the classical proof of injectivity for the developing map by Schoen-Yau \cite{Schoen-Yau:libro} can be performed directly for manifolds with boundary since all the additional boundary integral terms that appear would vanish.
\begin{proof}[Proof of Theorem \ref{thm:positivity}.c.]
Consider the developing map $\Phi: \hat{M}\to \mathbb{S}^4$. We have that $\Phi$ is a conformal diffeomorphism onto its image (regular at all points).
Recall again that the boundaries are assumed to be umbilic and hence their images are umbilic in $\mathbb S^4$.
Now, since $\Phi:M\to \mathbb S^4$ is an immersion, three situations can occur:
\begin{itemize}
\item Both components of $\Sigma$ are mapped to the same hypersphere in $\mathbb S^4$.
\item The two components of $\Sigma$ are mapped to two different hyperspheres in $\mathbb S^4$ with non-empty intersection.
\item The two components of $\Sigma$ are mapped to two different hyperspheres in $\mathbb S^4$ with empty intersection. Thus $\Phi(M)$ is an annulus-type region in $\mathbb S^4$ (or in $\mathbb R^4$, after stereographic projection).
\end{itemize}
The first and second situations are ruled out by the injectivity of the developing map,
thus we conclude that we are in the third case, which finishes the proof of Theorem \ref{thm:positivity}.c.
\end{proof}
\section{Explicit calculations in known models}\label{section:calculations}
In this section we provide the explicit solution to the eigenvalue problem \eqref{bi-laplacian-eq}-\eqref{eigenvalue-problem} for a family of rotationally symmetric metrics.
In the particular case that $M$ is a flat ball $\mathbb B^4_{r_1}$ of radius $r_1$ in $\mathbb R^4$, and $\Sigma=\partial M$ is the sphere $\mathbb S^3_{r_1}$ with its canonical metric, we can simply write:
\begin{align}
&P_0=(-\Delta)^2,\nonumber\\
&B^1_0=\partial_r,\nonumber\\
\label{Panetiz-boundary-ball}
&B^3_{0}=-\partial_r\Delta-2\tilde \Delta \partial_r -\frac{2}{r_1}\tilde \Delta - \frac{1}{r_1^2}\partial_r.
\end{align}
However, for our purposes it is more convenient to rewrite these operators in cylindrical coordinates.
\subsection{Cylindrical coordinates}
First, we write the flat metric as
\begin{equation}\label{cylinder-metric}
|dx|^2=dr^2+r^2d\theta^2=e^{-2t}[dt^2+d\theta^2]=:e^{-2t}g_c,
\end{equation}
where $r=e^{-t}$ is the radial variable, $d\theta^2$ is the canonical metric on $\mathbb S^3$, and $g_c$ the cylindrical metric on $X=\mathbb R\times\mathbb S^3$. Consider the spherical harmonic decomposition of $\mathbb S^{3}$. For this, let $\mu_\ell$ and $Y_\ell^m$ be the eigenvalues and eigenfunctions for $-\Delta_{\mathbb S^{3}}$, respectively. This is,
$$\mu_\ell=\ell(\ell+2)\quad \text{and}\quad -\Delta_{\mathbb S^{3}} Y_\ell^m=\mu_\ell Y_\ell^m,\quad \ell=0,1,\ldots.$$
Then any function $u$ on $\mathbb R\times\mathbb S^{3}$ can be written as
$$u(t,\theta)=\sum_{\ell,m} u_\ell(t)Y_\ell^m(\theta), \quad t\in\mathbb R, \theta\in\mathbb S^3.$$
In order to write the Paneitz operator $P$ with respect to the metric $g_c$, we observe that $P_{g_c}$ diagonalizes under this eigenfunction decomposition.
Let $P^{(\ell)}$ denote its projection onto the eigenspace $\langle Y_\ell^m\rangle$. Recalling again the conformal relation \eqref{cylinder-metric}, we have that
\begin{equation}\label{Paneitz-cylinder}
\begin{split}
P^{(\ell)}u_\ell&=e^{-4t}(-\Delta_{\mathbb R^4})^2|_{\langle Y_\ell^m\rangle}u_\ell=r^4\Big(\partial_{rr}+\frac{3}{r}\partial_r -\frac{\mu_\ell}{r^2}\Big)\Big(\partial_{rr}+\frac{3}{r}\partial_r -\frac{\mu_\ell}{r^2}\Big)u_\ell\\
&=
\left(\partial_{tt}-\left(2+\ell\right)^2\right)\left( \partial_{tt}-\ell^2\right)u_\ell,
\end{split}
\end{equation}
after the change of variable $r=e^{-t}$.
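For the reader's convenience we record the elementary computation behind \eqref{Paneitz-cylinder}. Since $r=e^{-t}$ we have $\partial_r=-e^{t}\partial_t$ and $\partial_{rr}=e^{2t}(\partial_{tt}+\partial_t)$, so that
$$\partial_{rr}+\frac{3}{r}\partial_r-\frac{\mu_\ell}{r^2}=e^{2t}\big(\partial_{tt}-2\partial_t-\mu_\ell\big).$$
Commuting the factor $e^{2t}$ through the left operator via $\partial_t\, e^{2t}=e^{2t}(\partial_t+2)$, and using $\mu_\ell=\ell(\ell+2)$,
$$r^4\,e^{2t}\big(\partial_{tt}-2\partial_t-\mu_\ell\big)\,e^{2t}\big(\partial_{tt}-2\partial_t-\mu_\ell\big)=\big(\partial_{tt}+2\partial_t-\mu_\ell\big)\big(\partial_{tt}-2\partial_t-\mu_\ell\big)=\big(\partial_{tt}-(2+\ell)^2\big)\big(\partial_{tt}-\ell^2\big),$$
where the last equality follows from the factorizations $\partial_{tt}\pm2\partial_t-\ell(\ell+2)=(\partial_t\pm(\ell+2))(\partial_t\mp\ell)$.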
We deal first with eigenfunctions for $P$ in $\mathbb R\times \mathbb S^3$ with its canonical metric $g_c$, that are $\varrho$-periodic in the variable $t$. More precisely:
\begin{lemma}\label{lemma:periodic}
Non-constant $\varrho$-periodic eigenfunctions of $P^{(\ell)}$ are of the form
\begin{equation*}
u_\ell(t)
=c_1\cos(\tfrac{2\pi}{\varrho}t )+c_2\sin(\tfrac{2\pi}{\varrho}t ),
\end{equation*}
for an eigenvalue
\begin{equation*}
\lambda_\ell^c=\big[(2+2\ell+\ell^2)+(\tfrac{2\pi}{\varrho})^2\big]^2 -(4+8\ell+4\ell^2).
\end{equation*}
\end{lemma}
The proof will be postponed to the Appendix.\\
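Although the full proof is in the Appendix, the value of $\lambda_\ell^c$ can be verified directly from \eqref{Paneitz-cylinder}: for $u_\ell(t)=\cos(\tfrac{2\pi}{\varrho}t)$ (or $\sin(\tfrac{2\pi}{\varrho}t)$) we have $\partial_{tt}u_\ell=-a\,u_\ell$ with $a=(\tfrac{2\pi}{\varrho})^2$, so
$$P^{(\ell)}u_\ell=\big(a+(2+\ell)^2\big)\big(a+\ell^2\big)u_\ell=\Big[a^2+2(2+2\ell+\ell^2)\,a+\ell^2(\ell+2)^2\Big]u_\ell=\lambda_\ell^c\,u_\ell,$$
where the last identity uses $\ell^2+(\ell+2)^2=2(2+2\ell+\ell^2)$ and $(2+2\ell+\ell^2)^2-\ell^2(\ell+2)^2=4+8\ell+4\ell^2$.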
Now let us consider $P_{g_c}$ when $M$ is a ball of radius $r_1$. The corresponding boundary operators on the boundary $\Sigma=\mathbb S^3_{r_1}$, which in $t$ coordinates is $\Sigma=\{t=-\log r_1\}$, will be denoted by $B^1_{r_1}$ and $B^3_{r_1}$. From the conformal change $g_c=e^{2t}|dx|^2$ we have
\begin{equation*}
B^1_{r_1} u=e^{-t} B^1_0 u|_{r=r_1}=-\partial_t u|_{t=-\log r_1},
\end{equation*}
and
\begin{equation}\label{Panetiz-boundary:cylinder}
B^{3,(\ell)}_{r_1}u_\ell=e^{-3t} B^{3}_{0}u_\ell|_{\langle Y_\ell^m\rangle,r=r_1}= \left\{ \partial_{ttt}-\left(3\mu_\ell+3\right)\partial_t\right\}u_\ell|_{t=-\log r_1},
\end{equation}
where we have denoted by $B^{3,(\ell)}_{r_1}$, $\ell=0,1,\ldots$ the projection over spherical harmonics of $B^3_{r_1}$.\\
We next take a new metric in $M$ given by $g_f=e^{2f}g_c$ for a radially symmetric conformal factor $f$. The Paneitz operator with respect to the metric $g_f$ on $M$ will be denoted by $P_f$, and the corresponding boundary operators on $\Sigma$ by $B^1_f$, $B^3_f$. By the conformal property of the operators, we have
\begin{align}
&P_f u=e^{-4f} P_{g_c}u,\nonumber\\
&B^{1}_f u= e^{-f} B^1_{r_1} u\big|_{t=-\log r_1}, \nonumber\\
\label{Panetiz-boundary:cylinder-general-conformal-factor}
&B^{3,(\ell)}_{f}u_\ell=e^{-3f}\left\{ \partial_{ttt}-\left(3\mu_\ell+3\right)\partial_t\right\}u_\ell\big|_{t=-\log r_1}.
\end{align}
\subsection{Eigenvalues for the (unit) ball }\label{subsection:eigenvalues-ball}
Let $M$ be the unit ball in $\mathbb R^4$, parameterized in cylindrical coordinates (here $t\in[0,+\infty)$). We take a conformally flat metric $g_f=e^{2f}g_c$, where $f$ only depends on the radial variable $t$. We normalize $e^{f(0)}=1$, which is the same as normalizing the volume of the boundary sphere to $2\pi^2$. After projecting onto spherical harmonics we obtain:
\begin{lemma}\label{lemma:eigenvalues-ball}
In this setting, eigenvalues for \eqref{bi-laplacian-eq}-\eqref{eigenvalue-problem} are given by
$$\lambda_\ell=4(\ell+2)\quad \text{for }\quad \ell\geq 1,\quad \lambda_0=0,$$
with associated eigenfunctions
\begin{equation}\label{first-eigenfunction-ball}
u_\ell(t)Y_\ell^m\quad \text{for }\ell=1,2,\ldots,\quad\text{and}\quad u_0(t)=1,
\end{equation}
where
$$u_\ell(t)=\frac{(\ell+2)}{2\ell}e^{-\ell t}-\frac{e^{-(\ell+2)t}}{2}.$$
\end{lemma}
The proof will be postponed to the Appendix.
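Two of the three conditions in \eqref{bi-laplacian-eq}-\eqref{eigenvalue-problem} can be checked immediately from the formulas above (the computation of the eigenvalue itself is the part left to the Appendix). Since $P^{(\ell)}=\big(\partial_{tt}-(2+\ell)^2\big)\big(\partial_{tt}-\ell^2\big)$ annihilates both exponentials $e^{-\ell t}$ and $e^{-(\ell+2)t}$, we have $P^{(\ell)}u_\ell=0$. Moreover,
$$\partial_t u_\ell(t)=-\tfrac{\ell+2}{2}\,e^{-\ell t}+\tfrac{\ell+2}{2}\,e^{-(\ell+2)t},$$
which vanishes at $t=0$, so that $B^1 u_\ell=-\partial_t u_\ell|_{t=0}=0$ on $\Sigma$.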
\subsection{Radially symmetric metrics in an annulus}
We now let ${\mathcal A}_\rho=\{\rho \leq r\leq 1\}$ be an annulus in $\mathbb R^4$. In cylindrical coordinates we have $t\in[0,\tau]$, where $\tau=-\log \rho$. Take a conformally flat metric $g_f=e^{2f}g_c$, where $f=f(t)$ is a radially symmetric function. In addition, we impose the normalization
\begin{equation}\label{normalization}
e^{3f(0)}+e^{3f(\tau)}=1.
\end{equation}
This again fixes the volume of the boundary to be $2\pi^2$. We set $\alpha=e^{3f(0)}$, for $\alpha\in(0,1)$.
As in the case of the ball, we decompose in spherical harmonics and look for eigenfunctions for \eqref{bi-laplacian-eq}-\eqref{eigenvalue-problem}
of the form $u_\ell(t) Y_\ell^m(\theta)$, $\ell=0,1,\ldots$:
\begin{lemma}\label{lemma:quadratic-eq}
Fix $\alpha\in(0,1)$. Let $\lambda$ be an eigenvalue for the $\ell$-th projection, $\ell\geq 1$. Then $\lambda$ satisfies the quadratic equation
\begin{equation}a(\ell)\lambda^2+b(\ell)\lambda +c(\ell)=0 \label{eqn:quadratic for lambda},
\end{equation}
where
\begin{align*}
a(\ell)=&-2\ell(\ell+2)+2(\ell+1)^2\cosh(2\tau)-2\cosh((2\ell+2)\tau),\\
b(\ell)=&\frac{1}{\alpha(1-\alpha)}4\ell(\ell+1)(\ell+2) [(\ell+1)\sinh(2\tau)+\sinh((2\ell+2)\tau)],\\
c(\ell)=&-\frac{1}{\alpha(1-\alpha)}8\ell^2(\ell+1)^2(\ell+2)^2 [\cosh((2\ell+2)\tau)-\cosh(2\tau)].\end{align*}
For each $\ell\geq 1$, equation \eqref{eqn:quadratic for lambda} has exactly two solutions $\lambda_\ell^-<\lambda_\ell^+$, and the corresponding eigenfunctions are of the form $u_\ell^{\pm}(t)Y_\ell^m(\theta)$
for
$$u_\ell^{\pm}(t):=u_\ell^1+\frac{\lambda_\ell^{\pm} \alpha}{4\ell(\ell+2)(\ell+1)} u_\ell^2,$$
where
\begin{align*}
u_\ell^1(t)=&(\ell+2)\sinh((\ell+2)\tau)\cosh(\ell t)-\ell\sinh(\ell \tau)\cosh((\ell+2)t),\\
u_\ell^2(t)=&[\ell\sinh(\ell\tau)-(\ell+2)\sinh((\ell+2)\tau)][(\ell+2)\sinh(\ell t)-\ell\sinh((\ell+2) t)]\\
&-\ell(\ell+2)
[\cosh(\ell\tau)-\cosh((\ell+2)\tau)][\cosh(\ell t)-\cosh((\ell+2)t)].
\end{align*}
For $\ell=0$ there are two eigenvalues: $\lambda_0^-=0$ (with just constant eigenfunctions) and
$$\lambda_0^+=\frac{4}{\alpha(1-\alpha)}\frac{\sinh(2\tau)}{1-\cosh(2\tau)+\tau\sinh(2\tau)}>0,$$
with eigenfunction
$$u_0^+(t)=\sinh(2 \tau)(\sinh (2t)-2t)+(1-\cosh(2\tau))\cosh(2t)+2(1-\alpha)(1-\cosh(2\tau)+\tau\sinh(2\tau))-1+\cosh(2\tau).$$
\end{lemma}
The proof is purely computational and is also postponed to the Appendix.\\
If we write
$$\lambda^-_\ell=\frac{-b(\ell)}{2a(\ell)}-\sqrt{\frac{b(\ell)^2}{4a^2(\ell)}-\frac{c(\ell)}{a(\ell)}}$$
it is clear that:
\begin{corollary} \label{cor:annulus model}
All the non-trivial eigenvalues associated to the eigenvalue problem \eqref{bi-laplacian-eq}-\eqref{boundary-condition}-\eqref{eigenvalue-problem} in $(\mathcal A_\rho, g_f)$ are strictly positive. In addition, the eigenspace corresponding to the zero eigenvalue consists only of constant functions.
\end{corollary}
Unfortunately, we are not able to characterize the spectral gap of the operator, since the calculations for general $\tau$, while elementary, are too involved. However, we have strong numerical evidence for the following:
\begin{conjecture}
For each $\tau>0$, the sequence $\{\lambda^-_\ell\}$ is increasing in $\ell$.
\end{conjecture}
Note that a similar computation is performed in \cite{Fraser-Schoen:annulus} for Steklov eigenvalues in 2 dimensions, and in their situation the conjecture holds. In addition, they prove that for each $\alpha$ there is a value $\tau^*(\alpha)$ such that for $\tau\leq \tau^*(\alpha)$ the smallest non-trivial eigenvalue is given by $ \lambda^-_1$, while for $\tau\geq \tau^*(\alpha)$ the smallest non-zero eigenvalue is $ \lambda^+_0$. Moreover, in \cite{Fraser-Schoen:annulus} there is an $\alpha^*$ such that at $\tau^*(\alpha^*)$ the smallest eigenvalue is maximized, and in that case $ \lambda^-_1=\lambda^+_1=\lambda^+_0.$ We prove a partial result in that direction. Set $\beta=\alpha(1-\alpha)$.
\begin{proposition} \label{prop: first eigenvalue}
Given $\beta\in\left(0,\frac{1}{4}\right)$, there are values $\tau^-$, $\tau^*$ and $\tau^+$ such that: for $\tau=\tau^-$ it holds that $\lambda_0^+>\lambda_1^-$, for $\tau=\tau^+$ we have $\lambda_1^->\lambda_0^+$, and for $\tau=\tau^*$ we have $\lambda_1^-=\lambda_0^+$.
\end{proposition}
\begin{proof}
For $\ell=1$ we have
\begin{align*}
a(1)=&-4(\cosh(2\tau)-1)^2,\\
b(1)=&\frac{48}{\alpha(1-\alpha)} \sinh(2\tau) [1+\cosh (2\tau)], \\
c(1)=&- \frac{288}{\alpha(1-\alpha)} [\cosh(4\tau)-\cosh(2\tau)].
\end{align*}
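These expressions follow from substituting $\ell=1$ in Lemma \ref{lemma:quadratic-eq}; for instance, using the identity $\cosh(4\tau)=2\cosh^2(2\tau)-1$,
$$a(1)=-6+8\cosh(2\tau)-2\cosh(4\tau)=-4\big(\cosh(2\tau)-1\big)^2.$$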
Then, the associated eigenvalue is
\begin{equation*}
\begin{split}
\lambda_1^-= \frac{1}{\alpha(1-\alpha)} \Big[&6\frac{\sinh(2\tau)(1+\cosh (2\tau))}{(\cosh (2\tau)-1)^2}\\
&-6\sqrt{\frac{\sinh^2(2\tau)(1+\cosh(2\tau))^2}{(\cosh(2\tau)-1)^4}- 2\alpha(1-\alpha)\frac{ [\cosh(4\tau)-\cosh(2\tau)]}{(\cosh(2\tau)-1)^2}}\,\Big],
\end{split}
\end{equation*}
and it has multiplicity four (this is the number of spherical harmonics in 3 dimensions associated to $\ell=1$).\\
Now we take the quotient $\lambda_1^-/\lambda_0^+$, denoted by
\begin{equation*}
\begin{split}
F(\beta, \tau):=\frac{\lambda_1^-}{\lambda_0^+}= \frac{3}{2}\Big[&\frac{\sinh(2\tau)(1+\cosh (2\tau))}{(\cosh (2\tau)-1)^2}\\
&-\sqrt{\frac{\sinh^2(2\tau)(1+\cosh(2\tau))^2}{(\cosh(2\tau)-1)^4}- 2\beta \frac{ [\cosh(4\tau)-\cosh(2\tau)]}{(\cosh(2\tau)-1)^2}}\,\Big]\\
&\quad\cdot\left(\frac{\sinh(2\tau)}{1-\cosh(2\tau)+\tau\sinh(2\tau)}\right)^{-1}.
\end{split}
\end{equation*}
A tedious, but straightforward computation reveals that
\begin{itemize}
\item $F(\beta, \tau)\to 0 $ as $\tau\to 0$.
\item $F(\beta, \tau)\to \infty $ as $\tau \to \infty$.
\end{itemize}
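Let us sketch the second limit. As $\tau\to\infty$ we have $\sinh(2\tau),\cosh(2\tau)\sim\tfrac12 e^{2\tau}$ and $\cosh(4\tau)\sim\tfrac12 e^{4\tau}$, so the first term inside the bracket tends to $1$, the quantity under the square root tends to $1-4\beta$, and the last factor satisfies $\frac{1-\cosh(2\tau)+\tau\sinh(2\tau)}{\sinh(2\tau)}\sim\tau$. Hence
$$F(\beta,\tau)\sim\frac{3}{2}\big(1-\sqrt{1-4\beta}\,\big)\,\tau\longrightarrow\infty,$$
where the limit coefficient is positive precisely because $\beta\in(0,\tfrac14)$.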
By the continuity of $F(\beta, \tau)$ we conclude that for each $\beta$ there are values of $\tau^-$, $\tau^*$ and $\tau^+$ for which $ F(\beta, \tau^-)<1$, $ F(\beta, \tau^+)>1$ and $ F(\beta, \tau^*)=1$, which concludes the proof.
\end{proof}
\begin{remark}
Note that for each value of $\beta$ we have two values of $\alpha$, since the problem is symmetric under $\alpha\mapsto 1-\alpha$.
\end{remark}
\begin{remark}
Numeric computations strongly suggest that the function $F(\beta, \tau)$ in the proof above is increasing and the value $\tau^*$ is unique.
\end{remark}
\begin{remark}
If the conjecture above holds, then the smallest eigenvalue is either $\lambda_1^-$ or $\lambda_0^+$.
\end{remark}
\section{(Boundary) eigenvalue problems} \label{sec: boundary value}
In this section we study the eigenvalue problem \eqref{bi-laplacian-eq}-\eqref{boundary-condition}-\eqref{eigenvalue-problem}, which we recall here:
\begin{equation}\label{P1}
\left\{\begin{split}
&P_g u=0 \hbox{ in } M,\\
&B^1_g u=0\hbox{ on } \Sigma,\\
&B^3_g u=\lambda u\hbox{ on } \Sigma,
\end{split}\right.
\end{equation}
for $M$ a manifold with boundary satisfying the hypotheses of Theorem \ref{thm:positivity}.
For \eqref{P1}, one can show that there exists a non-decreasing sequence of eigenvalues $\{\lambda_0^g,\lambda_1^g,\lambda_2^g,\ldots\}$. Corollary \ref{cor:positivity} characterizes the zero-eigenspace and yields positivity of the operator if $\Sigma$ has either one or two connected components.
In addition, recall that $\lambda_1^g$ is characterized by the Rayleigh quotient \eqref{Rayleigh-quotient1}. Since the energy $\mathcal E^M_g[u]$ is conformally invariant, the quotient
\eqref{Rayleigh-quotient1} remains strictly positive when conformal transformations of $M$ are applied.
In what follows we prove Corollary \ref{cor:positivity} and Theorem \ref{thm:bounds-boundary} by considering separately the cases in which $M$ is conformally equivalent to a half-sphere $\mathbb S^4_+$ or to an annulus $\mathcal A_\rho$, as given by Theorem \ref{thm:positivity}. This allows us to reduce the study of problem \eqref{P1} to the model cases from the previous section.
\subsection{One boundary component. Proof of Corollary \ref{cor:positivity} and Theorem \ref{thm:bounds-boundary}}
Let $M$ be conformally equivalent to a half-sphere $\mathbb S^4_+$. By stereographic projection, we can assume that $M=\mathbb B^4$, $\Sigma=\partial \mathbb B^4=\mathbb S^3$, with a conformal metric $g=e^{2w}|dx|^2$.
We consider first the trivial setting, namely $w\equiv 0$. Lemma \ref{lemma:eigenvalues-ball} implies that $\lambda_0^g=0$, with $\Ker (B^3)$ given by the constant functions, and that $\lambda_1^g>0$. Nevertheless, this result can also be proved directly by a simple integration by parts argument. Indeed, take the model $M=\mathbb R^4_+$, $\Sigma=\mathbb R^3$, with coordinates $(x_1,x_2,x_3,y)$, $y>0$, $x_1,x_2,x_3\in\mathbb R$. In this particular case we have
\begin{equation*}P=\Delta^2,\quad B^1=-\partial_y,\quad B^3=-\partial_y \Delta.
\end{equation*}
Let $\psi$ be any smooth solution to \eqref{P1}.
Integrating by parts we explicitly see that
\begin{equation*}
\lambda \int_\Sigma \psi^2 \, dx= \int_\Sigma \psi B^3 \psi\,dx=\int_M (\Delta \psi)^2\,dxdy\geq 0,
\end{equation*}
and it is zero if and only if $\Delta \psi=0$. Since we also have that $\partial_y \psi=0$
on $\Sigma$, we conclude that $\psi$ is constant up to the boundary, as desired.
We also remark that the existence of such a solution $\psi$ can be proved by a standard minimization argument, while the uniqueness follows from the previous proof (since the constant obtained above would be 0).\\
For the case of a general manifold $M$ in Corollary \ref{cor:positivity} we recall again that the energy $\mathcal E_g^M[u]$ in \eqref{Rayleigh-quotient1} is conformally invariant. In particular,
if $\tilde u$ is a minimizer of \eqref{Rayleigh-quotient1} with respect to the metric $g=e^{2w}|dx|^2$, we would have that
$$\lambda_1^g=\frac{\mathcal E_g^M[\tilde u]}{\int_\Sigma \tilde u^2 \,d\sigma_h}\geq \lambda_1^0\, \frac{\int_\Sigma \tilde u^2\, dv_0}{\int_\Sigma \tilde u^2 \,d\sigma_h}>0 ,$$
where $\lambda^g_1$ is the first non-zero eigenvalue with respect to the metric $g$, $\lambda_1^0 $ is the first non-zero eigenvalue with respect to the flat metric, and $dv_0$, $d\sigma_h$ are the boundary volume elements with respect to the flat metric on the ball and with respect to the metric $g$, respectively.
To prove Theorem \ref{thm:bounds-boundary} we proceed as in the proof of Theorem \ref{thm:closed}. We use the unit ball model and assume that there exists a conformal embedding $\Phi:M\to \mathbb B^4$ satisfying $\Phi(\Sigma)=\partial \mathbb B^4=\mathbb S^3$. In the two-dimensional case, the bound for the Steklov eigenvalue is obtained by using the coordinate functions of the embedding as test functions in the Rayleigh quotient \eqref{Rayleigh-quotient1}. In our setting we will use instead the (four) eigenfunctions for the first non-zero eigenvalue in the ball model, calculated in \eqref{first-eigenfunction-ball} for $\ell=1$. The precise expression is
$$U_m(r,\theta)=u_1(-\log r)Y_1^m(\theta),\quad m=1,2,3,4.$$
Next, on $M$ we set $\overline U_m=U_m \circ \Phi$.
Denote by $\Phi_m$, $m=1,2,3,4$, the coordinate functions of the embedding. By a calibration argument as in \eqref{calibration}
we can assume that
\begin{equation}\label{calibration1}
\int_\Sigma \Phi_m\,d\sigma_h=0.
\end{equation}
Indeed, after choosing a M\"obius transform of the boundary $\mathbb S^3$ in order to calibrate the center of mass \eqref{calibration1}, one takes its unique extension to a conformal transformation of $\mathbb B^4$ (which is known as the Poincar\'e extension, see \cite[Section 4.4]{Ratcliffe}).
In addition, $B_g^1 \overline U_m=0$ thanks to the covariance property \eqref{covariance} and the construction of $u_1$. Finally, since $u_1(0)=1$, we have $\overline U_m=\Phi_m$ along $\Sigma$. We conclude that $\overline U_m$ is an admissible test function in the Rayleigh quotient \eqref{Rayleigh-quotient1}.
We thus calculate
\begin{equation}\label{test-function}
\lambda^g_1 \int_\Sigma \overline U_m^2\,d\sigma_h\leq \mathcal E^M_g[\overline U_m]= \mathcal E^{\Phi(M)}_{ g_{\mathbb{B}^4}}[U_m].
\end{equation}
Summing over $m=1,2,3,4$ and recalling that $\sum_{m=1}^4 (Y_1^m)^2(\theta) =1$ we have
$$ \lambda^g_1 \vol(\Sigma)\leq \sum_m \mathcal E^{\Phi(M)}_{ g_{\mathbb{B}^4}}[U_m].$$
Next, since $U_m$ is an eigenfunction in the ball model,
$$\mathcal E^{\mathbb B^4}_{ g_{\mathbb{B}^4}}[U_m]=\lambda_1^0 \int_{\mathbb S^3} U_m^2\,d\sigma_{g_{\mathbb S^3}}.$$
Summing over $m$ and taking into account that $\lambda_1^0=12$, we conclude
$$ \lambda^g_1 \vol(\Sigma)\leq 12\vol(\mathbb S^3)=24\pi^2.$$
\subsection{Two boundary components. Proof of Corollary \ref{cor:positivity} and Theorem \ref{thm:bounds-boundary}}
In the light of Theorem \ref{thm:positivity}, it is enough to take $M$ to be conformally equivalent to the annulus $\mathcal A_\rho=\{\rho\leq |x|\leq 1\}$ in $\mathbb R^4$. Denote by $\Sigma_1$, $\Sigma_\rho$ the components of $\Sigma$ corresponding to $|x|=1$, $|x|=\rho$, respectively. In cylindrical variables, this annulus is conformally equivalent to $[0,\tau]\times\mathbb S^3$ with the metric $g_c$. Such $\tau$ plays the role of the conformal modulus, in analogy to the two-dimensional case.
First, Corollary \ref{cor:positivity} follows from Lemma \ref{lemma:quadratic-eq} and Corollary \ref{cor:annulus model}.
For the eigenvalue bound, we follow the same idea as in the proof of Theorem 4.1 in \cite{Fraser-Schoen:annulus}. Let $\tilde A_\rho$ be another annulus with the flat metric, having boundaries $\tilde \Sigma_1$ and $\tilde \Sigma_\rho$ with the same boundary volume as $\Sigma_1$, $\Sigma_\rho$, respectively. Without loss of generality, we may rescale the metric by a constant so that the total boundary volume equals $\vol(\mathbb{S}^3)$ (this corresponds to the normalization \eqref{normalization}).
In the $\tilde A_\rho$ model, we consider the eigenfunction $u_0^+(t)$, given in Lemma \ref{lemma:quadratic-eq}, with eigenvalue $\lambda_0^+>0$, which only depends on $\tau$ and $\alpha$ (recall \eqref{normalization} and the definition of $\alpha$).
We pull back this function to the original manifold, setting $U_0(r)=u_0^+(-\log r)$ and $\overline U_0=U_0 \circ \Phi$. Note that $\overline U_0$ is a constant function on each of the two boundary components $\Sigma_1$, $\Sigma_\rho$. Let us check that $\overline U_0$ is a suitable test function in the Rayleigh quotient \eqref{Rayleigh-quotient1}. We calculate (with a slight abuse of notation)
\begin{equation*}
\begin{split}
\int_\Sigma \overline U_0\,d\sigma_h &= \int_\Sigma U_0\circ\Phi\,d\sigma_h=u_0^+(0)\vol(\Sigma_1)+u_0^+(\tau)\vol(\Sigma_\rho)\\
&=u_0^+(0)\vol(\tilde \Sigma_1)+u_0^+(\tau)\vol(\tilde \Sigma_\rho)=\int_{\tilde\Sigma} U_0\,dx=0.
\end{split}
\end{equation*}
Next, using $\bar U_0$ as a test function,
\begin{equation}\label{test-function-annulus}
\lambda^g_1 \int_\Sigma \overline U_0^2\,d\sigma_h\leq \mathcal E^M_g[\overline U_0]= \mathcal E^{\Phi(M)}_{ g_{\text{flat}}}[U_0].
\end{equation}
Now, on the one hand,
$$\int_\Sigma \overline U_0^2\,d\sigma_h=(u_0^+)^2(0)\vol(\Sigma_1)+(u_0^+)^2(\tau)\vol(\Sigma_\rho)$$
while, on the other hand,
$$\mathcal E^{\Phi(M)}_{ g_{\text{flat}}}[U_0]=\lambda_0^+ \int_{\tilde\Sigma} U_0^2\,dx=\lambda_0^+\Big((u_0^+)^2(0)\vol(\Sigma_1)+(u_0^+)^2(\tau)\vol(\Sigma_\rho)\Big),$$
which yields statement \eqref{statement-annulus}.
Finally, Proposition \ref{prop: first eigenvalue} implies that for $\tau\geq \tau^*$ (or equivalently, $\rho\leq \rho^*$) the bound is sharp.
\section{Appendix}
\begin{proof}[Proof of Lemma \ref{lemma:periodic}]
We use formula \eqref{Paneitz-cylinder} for the Paneitz operator in cylindrical coordinates after a spherical harmonic decomposition. We look for $\varrho$-periodic solutions of the constant coefficient ODE
\begin{equation}
\partial_{tttt}u_\ell-[(2+\ell)^2+\ell^2]\partial_{tt}u_\ell+(2+\ell)^2\ell^2u_\ell=\lambda_\ell u_\ell.
\end{equation}
For this, we need to find purely imaginary roots of its characteristic polynomial
$$m^4-[(2+\ell)^2+\ell^2]m^2+(2+\ell)^2\ell^2-\lambda_\ell=0,$$
which must satisfy
\begin{equation*}
m^2=2+2\ell+\ell^2-\sqrt{4+8\ell+4\ell^2+\lambda}=:-a^2\le 0.
\end{equation*}
Imposing periodicity ($a\varrho=2\pi$),
we arrive at the desired result.\\
\end{proof}
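As a quick independent check (ours, not part of the original argument), the displayed root of the characteristic polynomial can be verified symbolically. A minimal sympy sketch, assuming nothing beyond the polynomial itself:

```python
import sympy as sp

# M stands for m^2 in the characteristic polynomial of the fourth-order ODE above
l, lam, M = sp.symbols('ell lambda M', positive=True)

char_poly = M**2 - ((2 + l)**2 + l**2)*M + (2 + l)**2*l**2 - lam

# claimed root: m^2 = 2 + 2*ell + ell^2 - sqrt(4 + 8*ell + 4*ell^2 + lambda)
claimed = 2 + 2*l + l**2 - sp.sqrt(4 + 8*l + 4*l**2 + lam)
residual = sp.expand(char_poly.subs(M, claimed))  # should vanish identically
```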
\begin{proof}[Proof of Lemma \ref{lemma:eigenvalues-ball}]
This is a straightforward calculation that we detail below.
Taking into account that the Paneitz operator $P$ is conformally invariant, in the interior of the ball equation \eqref{bi-laplacian-eq} reduces to
$$\left(\partial_{tt}-\left(2+\ell\right)^2\right)\left( \partial_{tt}-\ell^2\right)u_\ell=0~.$$
In addition we require
\begin{equation}\label{boundary-condition0}
u'_\ell(0)=0
\end{equation}
(which corresponds to the condition \eqref{boundary-condition} $B^1(u_\ell)=0$), and that $u_\ell$ remains bounded as $t\to\infty$. Let $v_\ell=(\partial_{tt}-\ell^2)u_\ell$, then
$$\left(\partial_{tt}-\left(2+\ell\right)^2\right)v_\ell=0$$
so
$$v_\ell(t)=A e^{-(\ell+2)t}.$$
Hence, for $\ell\geq 1$,
$$u_\ell(t)=Ce^{-\ell t}+\frac{A}{4\ell+4}e^{-(\ell+2)t}.$$
Imposing the boundary condition \eqref{boundary-condition0} we have
$$u_\ell(t)=-\frac{A(\ell+2)}{4\ell(\ell+1)}e^{-\ell t}+\frac{A}{4\ell+4}e^{-(\ell+2)t}.$$
The eigenvalues of $B^3$ are given by $\left.\frac{B^3u_\ell}{u_\ell}\right|_{t=0}$, that is, recalling \eqref{Panetiz-boundary:cylinder-general-conformal-factor},
$$\lambda_\ell=(\ell+2)\frac{\ell^2-(\ell+2)^2}{-(\ell+2)+1}=4(\ell+2),\quad \text{for }\quad \ell\geq 1.$$
For $\ell=0$ the calculation is slightly different, since
$$u_0=C+\frac{A}{4}e^{-2t}.$$
Imposing the boundary condition implies that $A=0$, that is, $u_0$ is constant and $\lambda_0=0$, as expected.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:quadratic-eq}]
Fix $\ell\geq 1$. Using \eqref{Paneitz-cylinder} and the conformal property \eqref{conformal-Paneitz} we observe that the general solution $u_\ell$
can be written as
$$u_\ell(t)=A \sinh (\ell t)+B\cosh(\ell t)+C \sinh ((\ell+2)t)+D \cosh ((\ell+2)t).$$
Imposing that $B^1 u_\ell=0$ at the boundary of $\mathcal A_\rho$, it is easy to see that solutions are spanned by two functions that can be chosen as
\begin{align*}
u_\ell^1(t)=&(\ell+2)\sinh((\ell+2)\tau)\cosh(\ell t)-\ell\sinh(\ell \tau)\cosh((\ell+2)t)\\
u_\ell^2(t)=&[\ell\sinh(\ell\tau)-(\ell+2)\sinh((\ell+2)\tau)][(\ell+2)\sinh(\ell t)-\ell\sinh((\ell+2) t)]\\
&-\ell(\ell+2)
[\cosh(\ell\tau)-\cosh((\ell+2)\tau)][\cosh(\ell t)-\cosh((\ell+2)t)].
\end{align*}
Then, eigenfunctions can be written as $Au_\ell^1+B u_\ell^2$.\\
We can directly compute
\begin{align*}
\partial_{ttt} u_\ell^1(t)=&(\ell+2)\ell^3\sinh((\ell+2)\tau)\sinh(\ell t)-\ell(\ell+2)^3\sinh(\ell\tau)\sinh((\ell+2)t)\\
\partial_{ttt} u_\ell^2(t)=&\ell(\ell+2)[\ell\sinh(\ell\tau)-(\ell+2)\sinh((\ell+2)\tau)][\ell^2\cosh(\ell t)-(\ell+2)^2\cosh((\ell+2) t)]\\
&-\ell(\ell+2)
[\cosh(\ell\tau)-\cosh((\ell+2)\tau)][\ell^3\sinh(\ell t)-(\ell+2)^3\sinh((\ell+2)t)].
\end{align*}
The eigenfunction condition at $t=0$ can be written from \eqref{Panetiz-boundary:cylinder} as
$$e^{-3f(0)}(A \partial_{ttt} u_\ell^1(0)+B \partial_{ttt} u_\ell^2(0))=\lambda (Au_\ell^1(0)+B u_\ell^2(0)),$$
which implies
\begin{equation*}
\begin{split}
& e^{-3f(0)}B\,[\ell\sinh(\ell \tau)-(\ell+2)\sinh((\ell+2)\tau)]\,[(\ell+2)\ell^3-\ell(\ell+2)^3]\\
&\qquad=\lambda A\,((\ell+2)\sinh((\ell+2)\tau)-\ell\sinh(\ell \tau)),
\end{split}
\end{equation*}
or equivalently,
\begin{equation}4B\ell(\ell+2)(\ell+1)=\lambda A e^{3f(0)}.\label{relation A and B}\end{equation}
For the eigenvalue condition at $\tau$ we need to take into account that the outward normal is reversed, so we have
$$-e^{-3f(\tau)}(A \partial_{ttt}u_\ell^1 (\tau)+B \partial_{ttt}u_\ell^2 (\tau))=\lambda(Au_\ell^1(\tau)+B u_\ell^2(\tau)).$$
Multiplying by $\lambda$ and using \eqref{relation A and B} we obtain
\begin{multline*}
- e^{-3f(\tau)}( 4\ell(\ell+2)(\ell+1) e^{-3f(0)} \partial_{ttt}u_\ell^1 (\tau)+ \lambda \partial_{ttt}u_\ell^2 (\tau))=\lambda 4\ell(\ell+2)(\ell+1) e^{-3f(0)} u_\ell^1(\tau)+\lambda^2 u_\ell^2(\tau)~,\end{multline*}
or equivalently,
\begin{equation} a(\ell)\lambda^2+ b(\ell)\lambda+c(\ell)=0 \label{eqn:quadratic for lambda app},\end{equation}
where
\begin{align*}
a(\ell)=& u_\ell^2(\tau),\\
b(\ell)=&4\ell(\ell+1)(\ell+2) e^{-3f(0)} u_\ell^1(\tau)+ e^{-3f(\tau)}\partial_{ttt}u_\ell^2 (\tau),\\
c(\ell)=&4\ell(\ell+1)(\ell+2)e^{-3(f(\tau)+f(0))} \partial_{ttt}u_\ell^1 (\tau).
\end{align*}
From the previous computations we have, at $t=\tau$,
\begin{align*}
u_\ell^1(\tau)
=&(\ell+1)\sinh(2\tau)+\sinh((2\ell+2)\tau),\\
u_\ell^2(\tau)
=&-2\ell(\ell+2)+2(\ell+1)^2\cosh(2\tau)-2\cosh((2\ell+2)\tau),
\end{align*}
and for the derivatives
\begin{align*}
\partial_{ttt}u_\ell^1(\tau)
=&-2\ell(\ell+1)(\ell+2)[\cosh((2\ell+2)\tau)-\cosh(2\tau)],\\
\partial_{ttt}u_\ell^2(\tau)
=&4\ell(\ell+1)(\ell+2)[(\ell+1)\sinh(2\tau)+\sinh((2\ell+2)\tau)].
\end{align*}
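The hyperbolic reductions at $t=\tau$ above are easy to get wrong by hand. Here is a small numerical spot check (ours, with the arbitrarily chosen value $\tau=7/10$ and modes $\ell=1,\dots,4$) of the displayed formulas for $u_\ell^1(\tau)$ and $\partial_{ttt}u_\ell^1(\tau)$:

```python
import sympy as sp

t, tau = sp.symbols('t tau')

def u1(l):
    # first spanning solution u_ell^1(t)
    return (l + 2)*sp.sinh((l + 2)*tau)*sp.cosh(l*t) - l*sp.sinh(l*tau)*sp.cosh((l + 2)*t)

# claimed closed forms at t = tau
def u1_tau(l):
    return (l + 1)*sp.sinh(2*tau) + sp.sinh((2*l + 2)*tau)

def d3u1_tau(l):
    return -2*l*(l + 1)*(l + 2)*(sp.cosh((2*l + 2)*tau) - sp.cosh(2*tau))

tau0 = sp.Rational(7, 10)
err_val = max(abs(float((u1(l).subs(t, tau) - u1_tau(l)).subs(tau, tau0)))
              for l in range(1, 5))
err_der = max(abs(float((sp.diff(u1(l), t, 3).subs(t, tau) - d3u1_tau(l)).subs(tau, tau0)))
              for l in range(1, 5))
```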
To conclude, recall our normalization \eqref{normalization} and set $\alpha=e^{3f(0)}$, for $\alpha\in(0,1)$, so $e^{-3f(\tau)}=(1-\alpha)^{-1}$.
Let us have a closer look at these eigenvalues now. By differentiating twice in $\tau$, we can easily check that $a(\ell)<0$ for $\tau>0$. In addition, $b(\ell)>0$ and $c(\ell)<0$. After some calculation, one can explicitly see that
the discriminant associated to \eqref{eqn:quadratic for lambda app} is strictly positive for $\alpha \in(0,1)$, hence there are two positive real roots, $0<\lambda_\ell^-<\lambda_\ell^+$.\\
Now we compute the eigenfunction for $\ell=0$. In that case we find that the particular solutions are
\begin{align*}
&u^1_0(t)=\sinh(2 \tau)(\sinh (2t)-2t)+(1-\cosh(2\tau))\cosh(2t),
\\ &u^2_0(t)=1,
\end{align*}
so the general solution can be written as $Au^1_0+B u^2_0$. The eigenvalue condition at $t=0$ is equivalent to
\begin{equation}\label{eq10}8e^{-3 f(0)}A\sinh(2\tau) =\lambda(A(1-\cosh(2\tau))+B).\end{equation}
On the other hand, at $t=\tau$
we have
\begin{equation*}
\begin{split}
&-8e^{-3 f(\tau)}A\big\{\sinh(2 \tau)\cosh (2\tau)+(1-\cosh(2\tau))\sinh(2\tau)\big\}
\\ &\quad=\lambda\big\{A\sinh(2 \tau)(\sinh (2\tau)-2\tau)+A(1-\cosh(2\tau))\cosh(2\tau)+B\big\}.
\end{split}
\end{equation*}
Subtracting the two equations, we obtain a simple non-trivial eigenvalue:
$$\lambda_0^+=\frac{4}{\alpha(1-\alpha)}\frac{\sinh(2\tau)}{1-\cosh(2\tau)+\tau\sinh(2\tau)}.$$
Substituting in \eqref{eq10} we obtain
\begin{equation*}
B=\left\{2(1-\alpha)(1-\cosh(2\tau)+\tau\sinh(2\tau))-1+\cosh(2\tau)\right\}A,
\end{equation*}
and this finishes the proof of Lemma \ref{lemma:quadratic-eq}.
\end{proof}
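As an independent numerical check (ours, with arbitrarily chosen values $\alpha=1/3$, $\tau=4/5$, $A=1$), one can confirm that the pair $(\lambda_0^+,B)$ obtained above satisfies both $\ell=0$ boundary conditions, using $e^{-3f(0)}=\alpha^{-1}$ and $e^{-3f(\tau)}=(1-\alpha)^{-1}$:

```python
import math

alpha, tau, A = 1/3, 0.8, 1.0
s, c = math.sinh(2*tau), math.cosh(2*tau)
denom = 1 - c + tau*s

# claimed eigenvalue and coefficient from the end of the proof
lam = 4/(alpha*(1 - alpha))*s/denom
B = (2*(1 - alpha)*denom - 1 + c)*A

# boundary condition at t = 0, eq. (eq10)
res0 = 8/alpha*A*s - lam*(A*(1 - c) + B)

# boundary condition at t = tau (outward normal reversed)
res_tau = (-8/(1 - alpha)*A*(s*c + (1 - c)*s)
           - lam*(A*s*(s - 2*tau) + A*(1 - c)*c + B))
```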
\noindent\textbf{Acknowledgements:} M.d.M. Gonz\'alez is supported by Spanish government grant MTM2017-85757-P and the Severo Ochoa program at ICMAT.
M. S\'aez is supported by the grant Fondecyt Regular 1190388.
% ------------------------------------------------------------------
% Source metadata (arXiv): https://arxiv.org/abs/2102.07873
% Title: Eigenvalue bounds for the Paneitz operator and its associated third-order boundary operator on locally conformally flat manifolds
% Subjects: Differential Geometry (math.DG); Analysis of PDEs (math.AP)
% Abstract: In this paper we study bounds for the first eigenvalue of the Paneitz operator $P$ and its associated third-order boundary operator $B^3$ on four-manifolds. We restrict to orientable, simply connected, locally conformally flat manifolds that have at most two umbilic boundary components. The proof is based on showing that under the hypotheses of the main theorems, the considered manifolds are conformally equivalent to canonical models. This equivalence is proved by showing the injectivity of suitable developing maps. Then the bounds on the eigenvalues are obtained through explicit computations on the canonical models and its connections with the classes of manifolds that we are considering. The fact that $P$ and $B^3$ are conformal in four dimensions is key in the proof.
% ------------------------------------------------------------------
% ------------------------------------------------------------------
% Source metadata (arXiv): https://arxiv.org/abs/2211.01541
% Title: Factorization conditions for nonlinear second-order differential equations
% Abstract: For the case of nonlinear second-order differential equations with a constant coefficient of the first derivative term and polynomial nonlinearities, the factorization conditions of Rosu and Cornejo-Perez are approached in two ways: (i) by commuting the subindices of the factorization functions in the two factorization conditions and (ii) by leaving invariant only the first factorization condition achieved by using monomials or polynomial sequences. For the first case, the factorization brackets commute and the generated equations are only equations of Ermakov-Pinney type. The second modification is non commuting, leading to nonlinear equations with different nonlinear force terms, but the same first-order part as the initially factored equation. It is illustrated for monomials with the examples of the generalized Fisher and FitzHugh-Nagumo initial equations. A polynomial sequence example is also included.
% ------------------------------------------------------------------
\section{Introduction}
Many dynamical systems in mechanics, and in physics in general, are described by nonlinear second-order differential equations, or evolve under the action of internal forces with small nonlinear components, especially during external forcing or along the relaxation stage after the forcing has been removed. In their homogeneous form,
\begin{equation}\label{eq1}
\frac{d^2x}{dt^2}+\gamma(x)\frac{dx}{dt}+f(x)=0~,
\end{equation}
these equations are traditionally known in the literature as Li\'enard equations \cite{hl-2016}, although in the case of the constant parameter $\gamma(x)=\gamma$ they may be
considered to be of Duffing type, because the Duffing oscillator, corresponding to $f(x)=\alpha x +\beta x^3$, is a representative example.
A second important category of equations in the latter case consists of travelling wave reductions of the reaction-diffusion equations and nonlinear evolution equations for which the coefficient $\gamma$ denoted by $\nu$ is the constant velocity of the travelling wave frame.
\medskip
A simple way to obtain particular solutions of these nonlinear second-order differential equations consists in using the factorization method, in which
the second-order differential operator
\begin{equation}\label{oper}
D^2+\gamma(x)D+\frac{f(x)}{x}~, \qquad D=\frac{d}{dt}~,
\end{equation}
is factored in terms of two different first-order differential operators in the operatorial form of equation (\ref{eq1})
\begin{equation}\label{eq2}
\left(D-\phi_2(x)\right)\left(D-\phi_1(x)\right)x=0~.
\end{equation}
This may provide particular solutions of (\ref{eq1}) by a single quadrature of $\left(D-\phi_1(x)\right)x=0$.
To match the factored operator in (\ref{eq2}) to the operator (\ref{oper}), the factoring functions $\phi_i$
should satisfy the conditions
\begin{align}
&\phi_1+\phi_2+x\frac{d\phi_1}{dx}=-\gamma \label{eqc1}\\
&\phi_1\phi_2=\frac{f(x)}{x} \label{eqc2}~
\end{align}
that were introduced in 2005 by two of the authors \cite{rcp1,rcp2}. They applied this kind of factorization to many well-known equations with polynomial nonlinearities, taking additional advantage of the polynomial factorization of the nonlinear part. The second condition shows that $\sqrt{f(x)/x}$ is the geometric mean of the functions $\phi_i$, which can be chosen
from combinations of the factors of $f(x)/x$ if $f(x)$ is a polynomial without a constant term. This also ensures that from (\ref{eq2}) one can obtain a particular solution of (\ref{eq1}) by the quadrature of $\left(D-\phi_1(x)\right)x=0$,
\begin{equation}
\int \frac{dx}{x\phi_1(x)}=t-t_0~
\end{equation}
since $\phi_1$ can be chosen as one of the factors of $f(x)/x$.
Moreover, as in supersymmetric quantum mechanics, the reverting of the factorization brackets has been used in \cite{rcp1,rcp2} to obtain particular solutions of equations with identical first-order operator part, but the different polynomial part $\tilde{f}$ given by
\begin{equation}\label{eqsusy}
\tilde{f}(x)=f(x)+x^2(\phi_{1,x}-\phi_{2,x})\phi_2~,
\end{equation}
where the subscript $x$ denotes the derivative with respect to $x$.
On the other hand, with a different grouping of terms, one can also obtain particular solutions of `supersymmetric' nonlinear equations of the form
\begin{equation}
\label{eqi}
\frac{d^2x}{dt^2}+\tilde{\gamma}\frac{dx}{dt}+f(x)=0~, \quad \tilde{\gamma}=\gamma+x(\phi_{1,x}-\phi_{2,x})~,
\end{equation}
i.e., with the same polynomial nonlinearities, but different first-order operator part.
All these calculus properties have yielded many interesting particular solutions of the kink and soliton type for well-known nonlinear equations obtained by the traveling wave change of variables from evolution equations \cite{rcp1,rcp2,cpetal,estev07,fahmy08,oct09,yesiltas09,justin12,mr13,tiwari15,ww17,rmf17,ziem19} and have been also widely implemented in Matlab and Maple algorithms \cite{GSbook}.\\
In this article, we discuss the equations and their particular solutions obtained through some additional conditions and/or modifications of the factorization functions in the factorization conditions (\ref{eqc1}) and (\ref{eqc2}) for equations (\ref{eq1}) of the Duffing type (constant parameter $\gamma(x)=\gamma$).
Regarding the class with variable $\gamma(x)$, some cases have been presented previously in \cite{rcp2}, and their study with the same focus as here is left for future work.
In particular, we consider here the effect of two types of modifications of the factorization brackets in equations (\ref{eqc1}) and (\ref{eqc2}):
\begin{itemize}
\item The first modification is performed in a way that keeps invariant the two factorization conditions, which leads to a commutative factorization setting in which the reverting of the factorization brackets does not generate a new equation.
\item The second type of modification is by adding a polynomial into the multiplication brackets in such a way that only the first factorization condition is kept invariant which generates a non-commutative factorization.
\end{itemize}
The paper is organized as follows.
In the second section, the conditions for having a commutative factorization scheme are presented together with some physical examples of this approach. In the third section, a non-commutative factorization which generalizes the Rosu and Cornejo-P\'erez factorization is introduced and some examples are presented for illustrative purposes. The conclusions are summarized in the last section.
\section{Commutative factorization setting}
We now study the consequences of interchanging the subindexes in the Rosu--Cornejo-P\'erez (RCP) pair of factorization conditions. This is equivalent to adding another pair of conditions obtained
by commuting the subindexes in both equations. However, one immediately finds that this is a minimal change, since the second factorization condition keeps its form under such an interchange. Therefore, proceeding in this way, we obtain the following triplet of different factorization conditions
\begin{align}
& \phi_1+\phi_2+x\frac{d\phi_1}{dx}= -\gamma \label{eqcc1}\\
& \phi_2+\phi_1+x\frac{d\phi_2}{dx}=-\gamma \label{eqcc2}\\
& \phi_2\phi_1 \left(=\phi_1\phi_2\right)=\frac{f(x)}{x}\label{eqcc3}~.
\end{align}
In this case, by comparing the first two equations, one can see that $d\phi_1/dx=d\phi_2/dx$ implying
\begin{equation}\label{r1}
\phi_2=\phi_1+c_0~,
\end{equation}
where $c_0$ is an arbitrary real constant. In other words, these extended (commuting) factorization conditions impose the additional restriction that the factoring functions differ only by a constant. Furthermore, from (\ref{eqi}) one has $\tilde{\gamma}=\gamma$, so that the interchange of the factorization brackets does not produce a new equation in this case. Thus, in factored form, one deals with equations of the type
\begin{equation}\label{commf}
(D-\phi_1-c_0)(D-\phi_1)x=0~,
\end{equation}
where $\phi_1$ satisfies
\begin{equation}\label{r2}
x\frac{d\phi_1}{dx}+2\phi_1=-\gamma-c_0~,
\end{equation}
which is obtained by substituting (\ref{r1}) into (\ref{eqcc1}).
For constant $\gamma$, $(\ref{r2})$ implies
\begin{equation}\label{r3}
\phi_1(x)=-\frac{\gamma+c_0}{2}+\frac{\kappa_1}{x^2}~, \qquad \phi_2(x)=-\frac{\gamma-c_0}{2}+\frac{\kappa_1}{x^2}~,
\end{equation}
where $\kappa_1$ is an arbitrary integration constant. Besides, $f(x)$ is obtained from (\ref{eqcc3}) as
\begin{equation}\label{r4}
f(x)=\frac{\gamma^2-c_0^2}{4}x-\frac{\kappa_1\gamma}{x}+\frac{\kappa_1^2}{x^3}~.
\end{equation}
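A short symbolic check (ours) confirms that the pair (\ref{r3}) satisfies both first-order conditions with the same $\gamma$, and that the product condition reproduces $f(x)$ of (\ref{r4}):

```python
import sympy as sp

x, gamma, c0, k1 = sp.symbols('x gamma c_0 kappa_1', positive=True)

phi1 = -(gamma + c0)/2 + k1/x**2
phi2 = -(gamma - c0)/2 + k1/x**2

# both first-order conditions (eqcc1), (eqcc2) hold with the same gamma ...
cond1 = sp.expand(phi1 + phi2 + x*sp.diff(phi1, x) + gamma)
cond2 = sp.expand(phi1 + phi2 + x*sp.diff(phi2, x) + gamma)

# ... and the product condition (eqcc3) reproduces f(x)/x of (r4)
f = (gamma**2 - c0**2)/4*x - k1*gamma/x + k1**2/x**3
prod_residual = sp.expand(x*phi1*phi2 - f)
```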
A direct connection, not depending on $\gamma$, between the factoring functions and the nonlinear term $f(x)$ is obtained by substituting (\ref{r1})
in (\ref{eqcc3})
\begin{equation}\label{r5}
\phi_{1,2}=\frac{-c_0 \pm \sqrt{c_0^2+4f(x)/x}}{2}~.
\end{equation}
\medskip
(i) {\em Case $\gamma=0$}. For this case, let us take $c_0=-2a$ and $\kappa_1=b$ in (\ref{r3}), writing the factorization functions as
\begin{equation}\label{eqg1}
\phi_1(x)=a+\frac{b}{x^2} ~, \qquad \phi_2(x)=-a+\frac{b}{x^2}~.
\end{equation}
These factorization functions provide the standard Ermakov-Pinney differential equation
\begin{equation}\label{eq3}
\frac{d^2x}{dt^2}-a^2x+\frac{b^2}{x^3}=0~,
\end{equation}
which admits the following commuting factorizations
\begin{equation}\label{eq4}
\left(D\pm a-\frac{b}{x^2}\right)\left(D\mp a-\frac{b}{x^2}\right)x=0~,
\end{equation}
providing two particular solutions from each of the first-order equations
\begin{equation} \label{eq5}
\frac{dx}{dt}=\pm ax+\frac{b}{x}~.
\end{equation}
For each of the signs of the linear term, these particular solutions are given by
\begin{equation}\label{eq6}
x(t)=\pm\sqrt{-\frac{b}{a}+\frac{e^{2a(t+c)}}{a}}~, \quad x(t)=\pm\sqrt{-\frac{b}{a}+\frac{e^{-2a(t-c)}}{a}}~,
\end{equation}
respectively, where $c$ is an integration constant.
These particular Ermakov solutions are not of the Pinney type \cite{P51}. If one writes the general Ermakov solution of $d^2x/dt^2+p(t)x+\kappa x^{-3}=0$ in the known form $x_g=\sqrt{\alpha_1x_1^2+\alpha_2x_2^2+2\alpha_3 x_1x_2}$, with the superposition constants $\alpha_i$ of $x_{1,2}=e^{\pm at}$ related by $\alpha_1\alpha_2-\alpha_3^2=-\kappa/W^2$, where $W$ is the Wronskian of $x_{1,2}$, two linearly independent solutions of $d^2x/dt^2+p(t)x=0$, one can see that they correspond to $\alpha_1=1/a$, $\alpha_2=0$, and $\alpha_3=\sqrt{\kappa}/W$.
Moreover, if $\gamma=0$, one can obtain the general solution as follows. Substituting $\phi_2=\frac{f(x)}{\phi_1 x}$ in the first factorization condition, the Abel equation of the second kind \cite{estev07}
\begin{equation}\label{ab1}
x\phi_1\frac{d\phi_1}{dx}+\phi_1^2+\frac{f(x)}{x}=0~
\end{equation}
is obtained, which for $f(x)=-a^2x+b^2/x^3$ has the solution
\begin{equation}\label{ab2}
\phi_1(x)=\pm \sqrt{a^2+\frac{\tilde{\kappa}}{x^2}+\frac{b^2}{x^4}}~,
\end{equation}
where $\tilde{\kappa}$ is an integration constant. For $\tilde{\kappa}=\pm 2ab$, one obtains the previous particular cases in equation (\ref{eqg1}).
Next, from
\begin{equation}\label{ab3}
\frac{dx}{dt}=\phi_1(x)x=\pm\sqrt{a^2x^2+\tilde{\kappa}+\frac{b^2}{x^2}}
\end{equation}
for the positive sign, one obtains the solutions
\begin{equation}\label{ab4}
x(t)=
\pm\frac{1}{2a}\sqrt{e^{2a(t-t_0)}-2\tilde{\kappa}
+(\tilde{\kappa}^2-4a^2b^2)e^{-2a(t-t_0)}}
\end{equation}
whereas for the negative sign, the solutions are
\begin{equation}\label{ab5}
x(t)=\pm\frac{1}{2a}\sqrt{e^{-2a(t-t_0)}-2\tilde{\kappa}+(\tilde{\kappa}^2-4a^2b^2)e^{2a(t-t_0)}}~,
\end{equation}
all of which are general Ermakov-Pinney solutions. The solutions (\ref{eq6}) are obtained for $\tilde{\kappa}=2ab$ and $t_0=-\left(c+\frac{\ln(4a)}{2a}\right)$.
\medskip
(ii) {\em Case $\gamma=constant\neq 0$}. For this case, the simplest factorization is obtained by setting $\phi_1=\phi_2=\phi$ ($c_0=0$), which makes the two factorization brackets identical. In this special case, we have
\begin{equation}\label{eq7}
\phi(x)=-\frac{\gamma}{2}+\frac{b}{x^2}~,
\end{equation}
which one can easily verify satisfies the triplet of factorization conditions.
The resulting second-order nonlinear differential equation is of the following Ermakov-Pinney type
\begin{equation}\label{eq8}
\frac{d^2x}{dt^2}+\gamma\frac{dx}{dt}+\frac{\gamma^2}{4}x-\frac{\gamma b}{x}+\frac{b^2}{x^3}=0~,
\end{equation}
or in operatorial form
\begin{equation}\label{eq9}
\left(D+\frac{\gamma}{2}-\frac{b}{x^2}\right)^2x=0~
\end{equation}
which yields the particular solutions given by
\begin{equation}\label{eq10}
x(t)=\pm\sqrt{\frac{2b}{\gamma}+\frac{e^{-\gamma (t-2c)}}{\gamma}}~,
\end{equation}
where $c$ is an integration constant.
\medskip
Another possible pair of factorization functions for this case is
\begin{equation}\label{eq11}
\phi_1(x)=a_1+\frac{b}{x^2}~, \qquad \phi_2(x)=-a_2+\frac{b}{x^2}~,
\end{equation}
which generate the following equation
\begin{equation}\label{eq12}
\frac{d^2x}{dt^2}+(a_2-a_1)\frac{dx}{dt}-a_1a_2x-\frac{(a_2-a_1) b}{x}+\frac{b^2}{x^3}=0~.
\end{equation}
Thus, for $\gamma=a_2-a_1$, equation (\ref{eq12}) admits the following commuting factorizations
\begin{align}\label{eq13}
&\left(D+a_2-\frac{b}{x^2}\right)\left(D-a_1-\frac{b}{x^2}\right)x=0~,\\
&\left(D-a_1-\frac{b}{x^2}\right)\left(D+a_2-\frac{b}{x^2}\right)x=0~,
\end{align}
which lead to two particular solutions
obtained from
\begin{equation}\label{eq14}
\frac{dx}{dt}=a_1x+\frac{b}{x}~, \qquad \frac{dx}{dt}=-a_2x+\frac{b}{x}~.
\end{equation}
These solutions are
\begin{equation}\label{eq15}
x(t)=\pm\sqrt{\frac{-b}{a_1}+\frac{e^{2a_1(t+c_1)}}{a_1}}~, \qquad x(t)=\pm\sqrt{\frac{b}{a_2}+\frac{e^{-2a_2(t-c_2)}}{a_2}}~,
\end{equation}
respectively; $c_1$ and $c_2$ are integration constants.
\medskip
In closing this section, we notice that multiplying each of the factorization brackets by an exponential factor in the independent variable,
\begin{equation}
e^{\pm c_0t}\left(D-\phi_1\right)e^{\pm c_0t}\left(D-\phi_1\right)x=0~,
\end{equation}
is another way of producing the triplet of commuting factorization conditions. However, in this case, only the constants $\pm c_0$ are introduced in the factorization brackets.
\section{Non-commutative factorization setting}
We move now to the study of additive extensions of the factorization functions,
\begin{equation}\label{tilde1}
\tilde{\phi}_1(x)=\phi_1+\epsilon_1(x)~, \qquad \tilde{\phi}_2(x)=\phi_2+\epsilon_2(x)~,
\end{equation}
where the $\epsilon$ functions are arbitrary functions so far.
Of course, both factorization conditions can change under the additive extension, but to keep
a link with the initial equation defined through the $\phi$ factorization functions, we are interested
in those $\tilde{\phi}$ functions for which the first factorization condition is satisfied for the same $\gamma$ parameter while
the product one is changed to a different nonlinear force $\tilde{f}$,
\begin{align}
& \tilde{\phi}_1+\tilde{\phi}_2+x\frac{d\tilde{\phi}_1}{dx}= -\gamma \label{tilde2}\\
& \tilde{\phi}_1\tilde{\phi}_2=\frac{\tilde{f}(x)}{x}~. \label{tilde3}
\end{align}
Therefore the factored equation $\left(D-\tilde{\phi}_2\right)\left(D-\tilde{\phi}_1\right)x=0$ is
\begin{equation}\label{tildeeq}
\frac{d^2x}{dt^2}+\gamma \frac{dx}{dt} +\tilde{f}(x)=0~.
\end{equation}
The additions $\epsilon_1(x)$ and $\epsilon_2(x)$ are not independent, but related through the following relation
\begin{equation}\label{epsilons}
\epsilon_1(x)=-\frac{\int \epsilon_2(x)dx}{x}~,
\end{equation}
obtained by substituting (\ref{tilde1}) into (\ref{tilde2}) (for zero integration constant). This condition
can be fulfilled by power functions or by a finite sum of power functions.
For the monomial case, $\epsilon_1(x)=-{\rm a} x^m$ and $\epsilon_2(x)={\rm a}(m+1)x^m$, $m\in \mathbf{N}$,
the nonlinear force $\tilde{f}(x)$ has the expression
\begin{equation}\label{tildeef}
\tilde{f}_m(x)=f(x)+
{\rm a}\big[(m+1)\phi_1-\phi_2\big]x^{m+1}-{\rm a}^2(m+1)x^{2m+1}~.
\end{equation}
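Both the relation \eqref{epsilons} (in its equivalent differential form $\epsilon_1+\epsilon_2+x\,d\epsilon_1/dx=0$) and the expression for $\tilde f_m$, including the unchanged term $f(x)=x\phi_1\phi_2$, can be verified symbolically, treating $\phi_1$, $\phi_2$ as unspecified quantities. A minimal sympy sketch (ours):

```python
import sympy as sp

x, a, m, p1, p2 = sp.symbols('x a m phi_1 phi_2', positive=True)

e1 = -a*x**m            # epsilon_1
e2 = a*(m + 1)*x**m     # epsilon_2

# differential form of (epsilons): epsilon_1 + epsilon_2 + x*epsilon_1' = 0
rel = sp.expand(e1 + e2 + x*sp.diff(e1, x))

# tilde f_m = x*(phi_1 + eps_1)*(phi_2 + eps_2), with phi_i treated as symbols;
# claimed: f + a[(m+1)phi_1 - phi_2] x^{m+1} - a^2 (m+1) x^{2m+1}, f = x phi_1 phi_2
tf = sp.expand(x*(p1 + e1)*(p2 + e2))
claimed = x*p1*p2 + a*((m + 1)*p1 - p2)*x**(m + 1) - a**2*(m + 1)*x**(2*m + 1)
residual = sp.expand(tf - claimed)
```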
From the physical point of view, it is useful to think of (\ref{tildeeq}) as an equation that replaces (\ref{eq1}) under small perturbations of the nonlinear force. In this perturbative context, the most interesting cases are the lowest powers, $m=0$ and $m=1$, which provide the following $\tilde{f}_m(x)$
\begin{align} \label{tildeef01}
&\tilde{f}_0(x)=f(x)+{\rm a}(\phi_1-\phi_2-{\rm a})x~, \\
&\tilde{f}_1(x)=f(x)+{\rm a}(2\phi_1-\phi_2)x^2-2{\rm a}^2x^3~.
\end{align}
\medskip
\subsection{Examples}
We illustrate the monomial extension with two cases that are traveling wave frame forms of reaction-diffusion equations and also provide a finite polynomial sequence case. In the traveling wave context, the $\gamma$ parameter is the velocity $\nu$ of the traveling wave.\\
(1). {\em The generalized Fisher equation}
\medskip
The generalized Fisher equation has the form \cite{rcp1}
\begin{equation}\label{dF1}
x''+\nu x' +x(1-x^n)=0~,\quad \nu\neq 0~, \quad n\geq 1~,
\end{equation}
where the primes stand for derivatives with respect to $\zeta=s-\nu t$.
In the reaction-diffusion form, the case $n=2$ has been proposed by Fisher as an equation
governing the population dynamics of alleles in genetics. Over the years it has become
a fundamental law of population genetics. The general solution, obtained using {\em Mathematica},
can be written in terms of Kummer's confluent hypergeometric function of the second kind (the Tricomi function), $U$, as
\begin{equation}\label{dF1gs2}
x(\zeta)=\frac{1}{\nu}\Bigg[\frac{\zeta}{\nu}+\frac{\zeta^2}{2}-
\frac{\zeta^{n+2}}{n+2}-U(1,n+3;\nu \zeta)+c_1 e^{\nu \zeta}\Bigg]+c_2~,
\end{equation}
where $c_1$ and $c_2$ are integration constants. Plots of particular solutions derived from this general Fisher solution are provided in Fig.~\ref{Figs12}.
\begin{figure}[h!]
\centering
\subfigure[\ $n=2$; $c_1=1$, $c_2=0$.]{
\includegraphics[scale=0.780]{fig1x.pdf}}
\subfigure[\ $n=2$; $c_1=-1$, $c_2=0$.]{
\includegraphics[scale=0.780]{fig2x.pdf}}
\caption{\label{Figs12} Particular solutions obtained from (\ref{dF1gs2}) for the values of $n$ and constants of integration as displayed.}
\end{figure}
On the other hand, equation (\ref{dF1}) can be factored with \cite{rcp1}
\begin{equation}\label{dF3}
\phi_1=h_n^{-1}\left(1-x^{n/2}\right)~, \qquad \phi_2=h_n\left(1+x^{n/2}\right)~,\qquad h_n^2=1+n/2~,
\end{equation}
for $\nu_n=-\left( h_{n} + h_{n}^{-1}\right)$.
\bigskip
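The factorization (\ref{dF3}) can be verified directly against the two conditions (\ref{tilde2})--(\ref{tilde3}) with $\gamma=\nu_n$; a SymPy sketch:

```python
import sympy as sp

x, n = sp.symbols('x n', positive=True)
h = sp.sqrt(1 + n/2)           # h_n
phi1 = (1 - x**(n/2))/h
phi2 = h*(1 + x**(n/2))
nu = -(h + 1/h)                # nu_n

# product condition: phi1*phi2 = f(x)/x = 1 - x^n
assert sp.simplify(phi1*phi2 - (1 - x**n)) == 0
# sum condition: phi1 + phi2 + x*phi1' = -gamma with gamma = nu_n
assert sp.simplify(phi1 + phi2 + x*sp.diff(phi1, x) + nu) == 0
```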
The monomially-only-extended factoring functions read
\begin{equation}\label{eqrext}
\tilde{\phi}_1(x)= h_n^{-1}\left(1-x^{n/2}\right)-{\rm a}x^m~, \qquad \tilde{\phi}_2(x)=
h_n\left(1+x^{n/2}\right)+{\rm a}(m+1)x^m~,
\end{equation}
which lead to
\begin{equation}\label{tildef}
\tilde{f}(x)= f(x)+ {\rm a} \bigg[\frac{m+1}{h_n}-h_n-{\rm a} (m+1) x^m-
\left(\frac{m+1}{h_n}+h_n\right) x^{n/2} \bigg]x^{m+1}~.
\end{equation}
Using (\ref{eqrext}), a particular solution of
\begin{equation}\label{moeeq}
x''+\nu_n x' +x(1-x^n)+ {\rm a} \bigg[\frac{m+1}{h_n}-h_n-{\rm a} (m+1) x^m-
\left(\frac{m+1}{h_n}+h_n\right) x^{n/2} \bigg]x^{m+1}=0~,
\end{equation}
is obtained from
\begin{equation}\label{stildeeq}
\frac{dx}{d\zeta}-h_n^{-1}\left(1-x^{n/2}\right)x+{\rm a}x^{m+1}=0~,
\end{equation}
as
\begin{equation}\label{solpt}
\int\frac{dx}{x(h_n {\rm a}x^m-x^{n/2}+1)}=\frac{1}{h_n}\int d\zeta~.
\end{equation}
For $n=2$ and the cases $m=0$ and $m=1$, the quadrature in the latter equation provides the following particular solutions
\begin{equation}\label{solpt1}
x_0(\zeta)=
\frac{1-\sqrt{2}{\rm a}}{2}e^{\frac{1-\sqrt{2}{\rm a}}{2}\left(\frac{\zeta}{\sqrt{2}}-c_0\right)}
{\rm sech} \frac{1-\sqrt{2}{\rm a}}{2}\left(\frac{\zeta}{\sqrt{2}}-c_0\right)~,
\quad
x_1(\zeta)=\frac{e^{\frac{\zeta}{\sqrt{2}}-c_1}}{1+\left(1+\sqrt{2}{\rm a}\right)
e^{\frac{\zeta}{\sqrt{2}}-c_1}}~,
\end{equation}
respectively, where $c_0$ and $c_1$ are integration constants. Particular solutions of this kind are presented in Fig.~\ref{Figs34} and have typical traveling wave front profiles.
\begin{figure}[h!]
\centering
\subfigure[\ $n=2$; $\nu=-3/\sqrt{2}$, $c_0=0$.]{
\includegraphics[scale=0.780]{fig3.pdf}}
\subfigure[\ $n=2$; $\nu=-3/\sqrt{2}$, $c_0=0$.]{
\includegraphics[scale=0.780]{fig5.pdf}}
\caption{\label{Figs34} Particular solutions from (\ref{solpt1}) for negative values of ${\rm a}$ and the values of $n$, $\nu$, and constants of integration as displayed.}
\end{figure}
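Both particular solutions can be checked against the first-order factor equation (\ref{stildeeq}) with $n=2$ ($h_2=\sqrt{2}$); a SymPy sketch:

```python
import sympy as sp

z, a, c0, c1 = sp.symbols('zeta a c0 c1', real=True)
s2 = sp.sqrt(2)
k = 1 - s2*a
w = z/s2 - c0

x0 = (k/2)*sp.exp(k*w/2)*sp.sech(k*w/2)                    # m = 0 solution
x1 = sp.exp(z/s2 - c1)/(1 + (1 + s2*a)*sp.exp(z/s2 - c1))  # m = 1 solution

def residual(x, m):
    # equation (stildeeq) with n = 2: x' - (1-x)x/sqrt(2) + a x^{m+1}
    return sp.diff(x, z) - (1 - x)*x/s2 + a*x**(m + 1)

assert sp.simplify(residual(x0, 0).rewrite(sp.exp)) == 0
assert sp.simplify(residual(x1, 1)) == 0
```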
\medskip
The differences between the nonlinear forces for these cases are given by the expressions
\begin{align}
&\Delta f_0(\zeta)=\tilde{f}_0-f=-\frac{{\rm a}}{\sqrt{2}}\left(\sqrt{2}{\rm a}+1+3x_0(\zeta)\right)x_0(\zeta)~, \label{diff-ff0}\\
&\Delta f_1(\zeta)=\tilde{f}_1-f=-2{\rm a}({\rm a}+\sqrt{2})x_1^3(\zeta)~ \label{diff-ff1}
\end{align}
and are plotted in Fig.~\ref{Figs3b4b}. For small values of the parameter ${\rm a}$, they retain the switching profile of the solutions.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.720]{deltaf0ec56.pdf}
\includegraphics[scale=0.720]{deltaf1ec57.pdf}
\caption{\label{Figs3b4b} Differences between the nonlinear forces as given by (\ref{diff-ff0}) and (\ref{diff-ff1}), respectively.}
\end{figure}
\bigskip
(2). {\em The FitzHugh-Nagumo Equation}\\
The FitzHugh-Nagumo equation,
\begin{equation}\label{fhn1}
x''+\nu x' + f(x)=0~, \qquad f(x)=x(x-1)(\beta-x)~,
\end{equation}
emerged in a simplified system of two equations modelling the transmission of electrical impulses through a nerve axon, with the variable $x$ representing the axon membrane potential. In the homogeneous equation (\ref{fhn1}), the effect of a slow negative feedback on the membrane potential is not taken into account, which eliminates the evolution equation of the feedback.
The general solution is
\begin{equation}\label{fhngs}
x(\zeta)=-\frac{1}{\nu}\Bigg[\frac{2p_1(\nu)+\beta \nu^2}{\nu^3}\zeta-\frac{2p_1(\nu)+\beta \nu^2}{\nu^2}\frac{\zeta^2}{2}
+\frac{p_1(\nu)}{\nu}\frac{\zeta^3}{3}-\frac{\zeta^4}{4}+\tilde{c}_1e^{-\nu \zeta}\Bigg]+\tilde{c}_2~,
\end{equation}
where $p_1(\nu)=3+(\beta+1) \nu$ and $\tilde{c}_{1,2}$ are arbitrary integration constants.
Some particular solutions derived from this general FitzHugh-Nagumo solution are plotted in Fig.~\ref{Figs56}. Interestingly, their profiles are not very different from those of the particular Fisher solutions obtained from the general Fisher solution.
\begin{figure}[h!]
\centering
\subfigure[\ $\beta=\pm 1$; $c_1=1$, $c_2=0$.]{
\includegraphics[scale=0.780]{fig7x.pdf}}
\subfigure[\ $\beta=\pm 1$; $c_1=-1$, $c_2=0$.]{
\includegraphics[scale=0.780]{fig8x.pdf}}
\caption{\label{Figs56} Particular solutions obtained from (\ref{fhngs}) for the values of $\beta$ and constants of integration as displayed.
The values of $\nu$, $-1/\sqrt{2}$ and $3/\sqrt{2}$, correspond to positive and negative $\beta$, respectively.}
\end{figure}
For the particular value $\nu_\beta= (1-2\beta)/\sqrt{2}$, equation (\ref{fhn1}) is a particular case of the generalized Burgers-Huxley equation and can be factorized \cite{rcp2} with $\phi_1(x)=(x-1)/\sqrt{2}$ and $\phi_2(x)=\sqrt{2}(\beta-x)$, which we use in the monomially-only-extended factorization functions
\begin{equation}\label{fhn2}
\tilde{\phi}_1=\frac{x-1}{\sqrt{2}}-{\rm a}x^m~, \qquad \tilde{\phi}_2=\sqrt{2}(\beta-x)+{\rm a}(m+1)x^m~
\end{equation}
to factorize the equation
\begin{equation}\label{fhn3}
x''+\nu_\beta x' + \tilde{f}(x)=0~,
\end{equation}
where
\begin{equation}\label{fhn4}
\tilde{f}(x)=f(x)-{\rm a}\left(\sqrt{2}\beta+\frac{m+1}{\sqrt{2}}\right)x^{m+1}+{\rm a}\left(\sqrt{2}+\frac{m+1}{\sqrt{2}}\right)x^{m+2}-{\rm a}^2(m+1)x^{2m+1}~.
\end{equation}
A particular solution of (\ref{fhn3}) is obtained from
\begin{equation}\label{fhn5}
\frac{dx}{d\zeta}-\frac{1}{\sqrt{2}}(x-1)x+{\rm a}x^{m+1}=0~
\end{equation}
through the following quadrature
\begin{equation}\label{fhn6}
\int \frac{dx}{x(x-1-\sqrt{2}{\rm a}x^m)}=\frac{1}{\sqrt{2}}\int d\zeta~.
\end{equation}
For the $m=0$ and $m=1$ cases, the particular solutions are given by
\begin{equation}\label{fhn7}
x_0(\zeta)=\frac{\sqrt{2}(1+\sqrt{2}{\rm a})}{\sqrt{2}-e^{(1+\sqrt{2}{\rm a})\frac{\zeta+2c_0}{\sqrt{2}}}}~, \qquad x_1(\zeta)=\frac{\sqrt{2}}{\sqrt{2}(1-\sqrt{2}{\rm a})-e^{\frac{\zeta+2c_1}{\sqrt{2}}}}
\end{equation}
respectively, where $c_0$ and $c_1$ are integration constants. These solutions, plotted in Fig.~\ref{ffhn1}, are manifestly singular; they blow up at finite values of the traveling variable, given by $\zeta^*_0({\rm a})=\ln 2/(\sqrt{2}+2{\rm a})-2c_0$ and $\zeta^*_1({\rm a})=\sqrt{2}\ln[\sqrt{2}(1-\sqrt{2}{\rm a})]-2c_1$, respectively.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.780]{fig9.pdf}
\includegraphics[scale=0.780]{fig11.pdf}
\caption{\label{ffhn1} Particular solutions as obtained from (\ref{fhn7}) for zero integration constants and the displayed values of the parameter ${\rm a}$.}
\end{figure}
\medskip
From (\ref{fhn4}), one can also obtain the differences between the nonlinear functions of the two equations
\begin{align}
&\Delta f_0(\zeta)=\tilde{f}_0-f=\frac{{\rm a}}{\sqrt{2}}\big[3x_0(\zeta)-2\beta-\sqrt{2}{\rm a}-1\big]x_0(\zeta)~, \label{fhn8a}\\
&\Delta f_1(\zeta)=\tilde{f}_1-f=\sqrt{2}{\rm a}\big[(2-\sqrt{2}{\rm a})x_1(\zeta)-\beta-1\big]x_1^2(\zeta)~, \label{fhn8b}
\end{align}
for $m=0$ and $m=1$, respectively. These differences are plotted in Fig.~\ref{diff-fhn} for several values of the parameter ${\rm a}$ and $\beta=1$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.720]{deltaf0ec66beta1.pdf}
\includegraphics[scale=0.720]{deltaf1ec66beta1.pdf}
\caption{\label{diff-fhn} Differences between the nonlinear forces as given by (\ref{fhn8a}) and (\ref{fhn8b}), respectively, for the displayed values of the parameters.}
\end{figure}
The interesting feature to be noticed is that the particular solutions obtained for the monomially-only-extended FitzHugh-Nagumo equation depend only on the parameter ${\rm a}$, while the forces depend also on the parameter $\beta$. This is because the factorization functions $\phi_1(x)$ and $\tilde{\phi}_1(x)$ do not depend on $\beta$.
\bigskip
(3). {\em Polynomial sequence example}\\
Finally, we discuss a polynomial sequence extension of the factorization functions involving $N$ terms of a $\gamma=0$ initial case for
which the factorization functions $\phi_{1,2}$ are both zero, i.e., a degenerate $D^2x=0$ case.
Then, we have
\begin{equation}\label{eq16}
\tilde{\phi}_1(x)=-\sum_{m=0}^{N}{\rm a}_mx^m~, \qquad \tilde{\phi}_2(x)=\sum_{m=0}^{N}(m+1){\rm a}_mx^m~,
\end{equation}
which, as one can easily verify, satisfy the conditions given in equations (\ref{tilde2}) and (\ref{tilde3}).
\medskip
Using the first pair of factorization functions,
the corresponding second order nonlinear differential equation has the form
\begin{equation}\label{eq17}
\frac{d^2x}{dt^2}-\left(\sum_{m=0}^{N}{\rm a}_mx^m\right)\left(\sum_{m=0}^{N}(m+1){\rm a}_mx^m\right)x \equiv
\frac{d^2x}{dt^2}-\Bigg[\sum_{m=0}^{2N}\left(\sum_{l=0}^{m}(m-l+1){\rm a}_l{\rm a}_{m-l}\right)x^m\Bigg]x=0~,
\end{equation}
where ${\rm a}_l:=0$ for $l>N$. This equation admits the following non-commuting factorization
\begin{equation}\label{eq18}
\left(D-\sum_{m=0}^{N}(m+1){\rm a}_mx^m\right)\left(D+\sum_{m=0}^{N}{\rm a}_mx^m\right)x=0~.\\
\end{equation}
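Since the product of the two polynomial sums has degree $2N$, the convolution rewriting can be checked symbolically; a SymPy sketch for $N=2$, taking ${\rm a}_l=0$ for $l>N$:

```python
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a0:3')          # a_0, a_1, a_2  (N = 2)
N = 2

P = sum(a[m]*x**m for m in range(N + 1))               # sum a_m x^m
Q = sum((m + 1)*a[m]*x**m for m in range(N + 1))       # sum (m+1) a_m x^m

def c(m):  # convolution coefficient, with a_l = 0 for l > N
    return sum((m - l + 1)*a[l]*a[m - l]
               for l in range(m + 1) if l <= N and m - l <= N)

conv = sum(c(m)*x**m for m in range(2*N + 1))          # runs to m = 2N
assert sp.expand(P*Q - conv) == 0
```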
One particular solution for equation (\ref{eq17}) can be obtained from the first order equation
\begin{equation}\label{eq19}
\frac{dx}{dt}=-\sum_{m=0}^{N}{\rm a}_mx^{m+1}~. \\
\end{equation}
While for $N<2$ one can easily obtain simple explicit solutions of (\ref{eq19}), for $N\geq 2$ the solutions will in general be implicit, depending on the roots of cubic, quartic, and higher-order algebraic equations.
\medskip
Let us consider the $N=2$ case for which $\sum_{m=0}^{2}{\rm a}_mx^{m+1}=x({\rm a}_0+{\rm a}_1x+{\rm a}_2x^2)={\rm a}_2x(x-\alpha_1)(x-\alpha_2)$, where $\alpha_{1,2}$ are the roots of the quadratic algebraic equation. Then, we have the quadrature
\begin{equation}\label{eqexf1}
\int \frac{dx}{x(x-\alpha_1)(x-\alpha_2)}=-{\rm a}_2\int dt~.
\end{equation}
The classification of the solutions in terms of the roots is the following \cite{mr2016}:
\begin{itemize}
\item [(i).] If $\alpha_{1,2}=\frac{1}{2{\rm a}_2}\left(-{\rm a}_1\pm\sqrt{\Delta}\right)~, \quad \Delta={\rm a}_1^2-4{\rm a}_0{\rm a}_2 > 0$,
then by the method of partial fraction decompositions, one obtains
\begin{equation}\label{eqexf3}
\frac{1}{\alpha_1\alpha_2(\alpha_1-\alpha_2)}\bigg[\ln x^{(\alpha_1-\alpha_2)} +\ln(x-\alpha_1)^{\alpha_2}-\ln(x-\alpha_2)^{\alpha_1}\bigg]=-{\rm a}_2(t-t_0)~,
\end{equation}
which leads to the implicit solution
\begin{equation}\label{eqexf3b}
\frac{(x-\alpha_1)^{\alpha_2}}{(x-\alpha_2)^{\alpha_1}}x^{(\alpha_1-\alpha_2)}=e^{-{\rm a}_2\alpha_1\alpha_2(\alpha_1-\alpha_2)(t-t_0)}\equiv
e^{-\frac{{\rm a}_0}{{\rm a}_2}\sqrt{\Delta}(t-t_0)}~.
\end{equation}
\medskip
\item [(ii).] When $\alpha_1=\alpha_2=\alpha\,\, (=-\frac{{\rm a}_1}{2{\rm a}_2})$, $\Delta=0$, the implicit solution is
\begin{equation}\label{case2}
-\frac{1}{x-\alpha} + \frac{1}{\alpha}\ln\bigg|\frac{x}{x-\alpha}\bigg|=-\alpha\left({\rm a}_2t+{\rm c}\right)\equiv \frac{{\rm a}_1}{2}(t-t_0)~.
\end{equation}
\medskip
\item [(iii).] If $\alpha_1= \bar{\alpha}_2=r+ i s$, $\Delta<0$, the implicit solution is
\begin{equation}\label{case3}
\ln |x|-\ln\big|\sqrt{(x-r)^2+s^2}\big|+\frac{r}{s}\arctan\frac{x-r}{s}
=-{\rm a}_2(r^2+s^2)(t-t_0)~.
\end{equation}
\item [(iv).] In the degenerate case $\alpha_1=\alpha_2=0$, i.e., ${\rm a}_0={\rm a}_1=0$, one obtains the simple explicit solution
\begin{equation}\label{case4}
\frac{1}{2x^2}={\rm a}_2(t-t_0)~.
\end{equation}
\end{itemize}
\noindent In all cases, $t_0$ is an arbitrary integration constant.
\medskip
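The implicit solutions above can be verified by differentiation; assuming SymPy, the left-hand sides are antiderivatives consistent with $dx/dt=-{\rm a}_2x(x-\alpha_1)(x-\alpha_2)$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
al1, al2, al = sp.symbols('alpha1 alpha2 alpha', positive=True)

# case (i): the log combination is an antiderivative of 1/(x(x-al1)(x-al2))
Fi = ((al1 - al2)*sp.log(x) + al2*sp.log(x - al1)
      - al1*sp.log(x - al2))/(al1*al2*(al1 - al2))
assert sp.simplify(sp.diff(Fi, x) - 1/(x*(x - al1)*(x - al2))) == 0

# case (ii): double root alpha; d/dx of the LHS equals alpha/(x(x-alpha)^2)
Fii = -1/(x - al) + sp.log(x/(x - al))/al
assert sp.simplify(sp.diff(Fii, x) - al/(x*(x - al)**2)) == 0

# case (iv): chain rule with dx/dt = -a2 x^3 gives d/dt [1/(2x^2)] = a2
a2 = sp.symbols('a2')
assert sp.simplify(sp.diff(1/(2*x**2), x)*(-a2*x**3) - a2) == 0
```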
For a very limited number of such zero-$\gamma$ cases, one can also obtain implicit solutions with two integration constants (general solutions). As in the Ermakov-Pinney case, this is achieved through the Abel equation of the second kind for the factorization function $\tilde{\phi}_1$,
which reads
\begin{equation}\label{ab2nd1}
\tilde{\phi}_1\frac{d\tilde{\phi}_1}{dx}+\frac{1}{x}\tilde{\phi}_1^2=\sum_{m=0}^{2N}\left(\sum_{l=0}^{m}(m-l+1){\rm a}_l{\rm a}_{m-l}\right)x^{m-1}~.
\end{equation}
In terms of the function $\psi= \tilde{\phi}_1^2$, this equation becomes a linear first order equation, which in the $N=2$ case leads to the solution
\begin{equation}\label{ab2nd2}
\tilde{\phi}_1(x)=\pm \sqrt{({\rm a}_0+{\rm a}_1x+{\rm a}_2x^2)^2+\frac{k_1}{x^2}}~,
\end{equation}
where $k_1$ is an integration constant. Then, from
\begin{equation}\label{ab2nd3}
\frac{dx}{dt}=\tilde{\phi}_1(x)x=\pm\sqrt{({\rm a}_0x+{\rm a}_1x^2+{\rm a}_2x^3)^2+k_1}~,
\end{equation}
one can obtain for ${\rm a}_0={\rm a}_1=0$, ${\rm a}_2\neq 0$ (case (iv) above) the general implicit solution
\begin{equation}\label{ab2ndd4}
\sqrt{({\rm a}_2x^3)^2+k_1}\,{}_{2}F_{1}\left(\frac{2}{3}, 1, \frac{7}{6}; -\frac{({\rm a}_2x^3)^2}{k_1}\right)=k_1(t-t_0)~,
\end{equation}
where
$_{2}F_{1}$ is Gauss' hypergeometric function.
For $k_1=0$, one obtains the explicit singular solution given in (\ref{case4}).
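A SymPy check of (\ref{ab2nd2}) against the Abel equation for $N=2$, with the right-hand side written as the product $\left(\sum {\rm a}_m x^m\right)\left(\sum (m+1){\rm a}_m x^m\right)/x$:

```python
import sympy as sp

x, k1 = sp.symbols('x k1', positive=True)
a0, a1, a2 = sp.symbols('a0 a1 a2', positive=True)

P = a0 + a1*x + a2*x**2            # sum a_m x^m
Q = a0 + 2*a1*x + 3*a2*x**2        # sum (m+1) a_m x^m
phi1t = sp.sqrt(P**2 + k1/x**2)    # eq. (ab2nd2), plus branch

res = phi1t*sp.diff(phi1t, x) + phi1t**2/x - P*Q/x
assert sp.simplify(res) == 0
```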
\bigskip
\section{Conclusions}
We have discussed some minimal extensions of the factorization conditions of Rosu and Cornejo-P\'erez in the case of a constant coefficient $\gamma$ of the first-derivative term, with emphasis on the generated nonlinear equations and their particular solutions. In the case of commutative factorizations, one can obtain at most equations of the Ermakov-Pinney type, as described in this paper. In the non-commutative case, one can obtain equations with the same $\gamma$ parameter through designed additive monomial extensions of the factorization functions. The new equations have nonlinear forces that differ from the initial forces by supplementary terms. As illustrative examples, we have presented such modified counterparts of the generalized Fisher equation and the FitzHugh-Nagumo equation, whose particular solutions have been obtained by the factorization method. A polynomial sequence extension in the case $\gamma=0$ has also been provided, for which various types of implicit solutions have been given for the sequence size $N=2$.
\bigskip
% Source: https://arxiv.org/abs/2211.01541 -- "Factorization conditions for nonlinear second-order differential equations"
% Source: https://arxiv.org/abs/math/0701014 -- "A lower bound for the size of the largest critical sets in Latin squares"
\section{Introduction}
A {\sf Latin square} of order $n$ is an $n$ $\times$ $n$ array of
integers chosen from the set $X = \{1,2, \ldots, n\}$ such that
each element of $X$ occurs exactly once in each row and exactly
once in each column. A Latin square can also be written as a set
of ordered triples $\{ (i,j;k) \mid$ symbol $k$ occurs in cell
$(i,j)$ of the array$\}$.
A {\sf partial Latin square} $P$ of order $n$ is an $n\times n$
array with entries chosen from the set $X = \{1,2, \ldots, n\}$,
such that each element of $X$ occurs at most once in each row
and at most once in each column. Hence there are cells in the
array that may be empty, but the cells that are filled have been
filled so as to conform with the Latin property of the array. Let
$P$ be a partial Latin square of order $n$. Then $| P |$ is said
to be the {\sf size} of the partial Latin square and the set of
cells ${\cal S}_P=\{(i,j) \mid (i,j;k)\in P\}$ is said to
determine the {\sf shape} of $P$.
A partial Latin square $C$ contained in a Latin square $L$ is said
to be {\sf uniquely completable} if $L$ is the only Latin square
of order $n$ with $k$ in the cell $(i,j)$ for every $(i,j;k) \in
C$. A {\sf critical set} $C$ contained in a Latin square $L$ is a
partial Latin square that is uniquely completable and no proper
subset of $C$ satisfies this requirement. The name ``critical
set'' and the concept were invented by a statistician, John
Nelder, about 1977, and his ideas were first published in a
note~{\bf\cite{Nelder77}}. This note posed the problem of giving
a formula for the size of the largest and smallest critical sets
for a Latin square of a given order. Let $ \lcs{n}$ denote the
size of the {\sf largest critical set} in any Latin square of
order $n$. Nelder~{\bf\cite{Nelder?}} constructed a critical set
of size $(n^2-n)/2$ for the $n \times n$ back circulant Latin
square. He conjectured that $ \lcs{n} =(n^2-n)/2$. This equality
was shown to be false in 1978, when Curran and van
Rees~{\bf\cite{MR80j:05022}} found that $\lcs{4} \geq 7$. The
following is an example of a largest critical set of size $11$
for a $5\times 5$ Latin square, taken
from~{\bf\cite{BeanMahmoodian}}, which also contradicts Nelder's
conjecture.
$$
\begin{latinsq}[1]
\hline 2&&4&3& \\
\hline&&1&2& \\
\hline&2&3&1& \\
\hline 3&1&2&& \\
\hline&&&& \\
\hline
\end{latinsq}
$$
\noindent
In the following table some known values of $\lcs{n}$ for $n \leq
6$ are listed,
$$
\begin{array}{c|cccccc}
n & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline
\lcs{n} & 0 & 1 & 3 & 7 & 11 & 18 \\
\end{array}
$$
and in the following table some known lower bounds for $\lcs{n}$
are shown for $7 \le n \le 10$,
$$
\begin{array}{c|cccc}
n & 7 & 8 & 9 & 10 \\ \hline
\lcs{n} \ge & 25 & 37 & 44 & 57 \\
\end{array}
$$
See~{\bf\cite{BeanMahmoodian}} for the references. Recently Bean
and Mahmoodian~{\bf\cite{BeanMahmoodian}} have found the upper
bound $\lcs{n} \leq n^2-3n+3$. Nelder's $(n^2-n)/2$ is the best
lower bound found for $\lcs{n}$ so far. In this note we
improve this bound for $n$ large enough ($n \ge
195$).
\section{A lower bound for $\lcs{n}$}
\begin{theorem}
For any integer $n$ we have,
\[
\lcs{n} \ge n^2(1-\frac{2 + \ln 2}{\ln n}) +n(1+\frac {\ln \left(
8 \pi \right)} {\ln n})-\frac{\ln 2}{\ln n}.
\]
\end{theorem}
\begin{proof} By Theorem 17.2 in~{\bf\cite{vanLintWilson}},
as a consequence of the van der Waerden conjecture, we know the following
bound for $L(n)$, the number of Latin squares of order~$n$: \ \
$L(n) \ge \frac{(n!)^{2n}}{n^{n^2}} $.
If all the entries of a partial Latin square are given except those
of the first row and the first column, then it is
uniquely completable. So every Latin square has at least one
critical set which has no intersection with its first row and
first column. Since a critical set completes to a unique Latin square,
distinct Latin squares yield distinct such critical sets, so the number
of these critical sets is at least $L(n)$. For choosing the shape of such
a critical set we have at most $2^{(n-1)^2}$ ways, and for
choosing the entries of each given shape we have at most
$n^{\lcs{n}}$ different ways. So the number of critical sets is
less than or equal to $2^{n^2-2n+1}n^{\lcs{n}}$. Thus the
following inequalities hold:
\[ \frac{(n!)^{2n}}{n^{n^2}} \leq L(n) \leq 2^{n^2-2n+1}n^{\lcs{n}}. \]
Now by Stirling's approximation formula, (see for example
{\bf\cite{CLR}}), we can replace $n!$ with a smaller value
$\sqrt{2\pi n} \left( n \over e \right) ^ n$. So
\[ \frac{(2
\pi)^n n^{2n^2+n}}{e^{2n^2} n^{n^2}} \leq
2^{n^2-2n+1}n^{\lcs{n}},\] or
\[ \frac{(2 \pi)^n n^{n^2+n}}{e^{2n^2} 2^{n^2-2n+1}}
\leq n^{\lcs{n}}. \] Thus, \ \ $ n \ln (2 \pi) + (n^2+n) \ln n -
2n^2 - (n^2-2n+1) \ln 2 \leq \lcs{n}\ln n. $ \ This implies that $
n^2(1-\frac{2+\ln 2}{\ln n})+n(1+\frac {2{\ln 2}+\ln \left( 2 \pi
\right)} {\ln n})-\frac{\ln 2}{\ln n} \leq \lcs{n}.$
\end{proof}
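For small orders, the sandwich $\frac{(n!)^{2n}}{n^{n^2}}\le L(n)\le 2^{(n-1)^2}n^{\lcs{n}}$ used in the proof can be checked numerically, using the classical Latin square counts (a well-known sequence, not taken from this note) and the table of $\lcs{n}$ values above:

```python
import math

L = {2: 2, 3: 12, 4: 576, 5: 161280}   # classical Latin square counts
lcs = {2: 1, 3: 3, 4: 7, 5: 11}        # lcs(n) values from the tables above

for n in L:
    assert math.factorial(n)**(2*n) <= L[n] * n**(n*n)     # lower bound
    assert L[n] <= 2**((n - 1)**2) * n**lcs[n]             # upper bound
```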
\noindent {\bf Note.} Stinson and van
Rees~{\bf\cite{MR84g:05036}} have shown that
$\lcs{2^m} \geq 4^m-3^m$. This lower bound for $n=2^m$, is better
than the bound given in Theorem~1.
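The claim that the bound of Theorem~1 overtakes Nelder's $(n^2-n)/2$ precisely from $n=195$ on can be checked numerically:

```python
import math

def new_bound(n):
    ln = math.log(n)
    return (n*n*(1 - (2 + math.log(2))/ln)
            + n*(1 + math.log(8*math.pi)/ln) - math.log(2)/ln)

def nelder(n):
    return (n*n - n)/2

assert new_bound(194) < nelder(194)                          # not yet better
assert all(new_bound(n) > nelder(n) for n in range(195, 400))
```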
\newpage
\def$'${$'$}
% Source: https://arxiv.org/abs/2001.09000 -- "Optimal error estimate of the finite element approximation of second order semilinear non-autonomous parabolic PDEs"
\section{Introduction}
\label{intro}
Nonlinear partial differential equations are powerful tools in modelling real-world phenomena in many fields, such as geo-engineering. For instance, processes such as oil
and gas recovery from hydrocarbon reservoirs and mining heat from geothermal reservoirs can be modelled by nonlinear equations, possibly with degeneracy
in the diffusion and transport terms. Since explicit solutions of many PDEs are rarely known, numerical approximations are essential tools to
quantify them. Approximations are usually done at two levels, namely space and time approximations.
In this paper, we focus on spatial approximation of the following advection-diffusion problem with a nonlinear reaction term using the finite element method.
\begin{eqnarray}
\label{model}
\frac{\partial u}{\partial t}=\mathcal{A}(t)u+F(t,u), \quad u(0)=u_0, \quad t\in(0,T], \quad T>0,
\end{eqnarray}
on the Hilbert space $H=L^2(\Lambda)$, where $\Lambda$ is an open bounded subset of $\mathbb{R}^d$ $(d=1,2,3)$, with smooth boundary. The second order differential operator $\mathcal{A}(t)$ is given by
\begin{eqnarray}
\label{family}
\mathcal{A}(t)u=\sum_{i,j=1}^d\frac{\partial}{\partial x_i}\left(q_{ij}(t,x)\frac{\partial u}{\partial x_j}\right)-\sum_{j=1}^dq_j(t,x)\frac{\partial u}{\partial x_j}+q_0(t,x)u,
\end{eqnarray}
where $q_{ij}, q_{j}$ and $q_0$ are smooth coefficients. Also, there exist $c_1\geq0$ and $0<\gamma\leq 1$ such that
\begin{eqnarray*}
\vert q_{ij}(t,x)-q_{ij}(s,x)\vert\leq c_1\vert t-s\vert^{\gamma},\quad x\in\Lambda,\; t,s\in[0, T],\; i,j\in\{1,\cdots,d\}.
\end{eqnarray*}
Moreover, $q_{i,j}$ satisfies the following ellipticity condition
\begin{eqnarray}
\label{ellip}
\sum_{i,j=1}^dq_{ij}(t,x)\xi_i\xi_j\geq c\vert \xi\vert^2, \quad (t,x)\in [0,T]\times \overline{\Lambda},
\end{eqnarray}
where $c> 0$ is a constant.
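As a concrete illustration (with an assumed coefficient field, not one from the paper), the ellipticity condition \eqref{ellip} can be checked numerically on a grid by bounding the smallest eigenvalue of the symmetric diffusion matrix from below:

```python
import numpy as np

# assumed 2-d diffusion coefficients on Lambda = (0,1)^2, t in [0,1]
def q(t, x1, x2):
    return np.array([[2.0 + np.sin(t)*x1, 0.5],
                     [0.5, 2.0 + np.cos(t)*x2]])

ts = np.linspace(0.0, 1.0, 11)
xs = np.linspace(0.0, 1.0, 11)
c = min(np.linalg.eigvalsh(q(t, x1, x2))[0]    # eigvalsh: ascending order
        for t in ts for x1 in xs for x2 in xs)
assert c > 0   # uniform ellipticity constant over the sampled grid
```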
The finite element approximation of \eqref{model} with a constant linear operator $\mathcal{A}(t)=\mathcal{A}$ has been widely investigated in the scientific
literature, see e.g. \cite{Stig2,Thomee2,Suzuki,Antjd2} and the references therein. The finite volume method for $\mathcal{A}(t)=\mathcal{A}$ was recently investigated in \cite{Tambueseul}. If we turn our attention
to the non-autonomous case, the list of references becomes remarkably short. In the linear homogeneous case ($F(t,u)=0$), the finite element approximation has been
investigated in \cite{Luskin}, \cite[Chapter III, Section 14.2]{Suzuki}.
The linear inhomogeneous version of \eqref{model} ($F(t,u)=f(t)$)
was investigated in \cite{Luskin,Dimitri,Thomee1}, \cite[Chapter III, Section 12]{Suzuki} and the references therein. To the best of our knowledge, the nonlinear case is not yet investigated in the scientific literature.
This paper fills that gap by investigating the error estimate of the finite element method for \eqref{model} with a nonlinear source $F(t,u)$, which is more challenging due to the presence of the unknown $u$ in the source term $F$. This becomes even more challenging when the nonlinear function satisfies a polynomial growth condition. Our strategy is based on the introduction of a two-parameter evolution operator, carefully exploiting its smoothing regularity properties. Our key intermediate result, namely \lemref{spaceerrorlemma}, generalizes \cite[Theorem 3.5]{Thomee2} to time-dependent and not necessarily self-adjoint operators. It also generalizes \cite[Theorem 4.2]{Thomee2}, the results in \cite[Chapter III, Section 12]{Suzuki} and in \cite{Luskin,Dimitri,Thomee1} to smooth and non-smooth initial data. Note that \lemref{spaceerrorlemma} for non-smooth initial data is of great importance in numerical analysis. It is key to obtaining the convergence of the finite element method for many nonlinear problems, including stochastic partial differential equations (SPDEs); see e.g. \cite{Kovcas,Kruse1,Xiaojie2} and references therein for time-independent SPDEs. In fact, in the case of SPDEs, due to the It\^{o} isometry formula or the Burkholder-Davis-Gundy inequality, the non-smooth version of \lemref{spaceerrorlemma} cannot be applied since it brings degenerate integrals, which cause difficulties in the error estimates or reduce considerably the order of convergence. Hence our result is more general than the existing results and has many applications.
The convergence rates achieved for the semilinear problem are in agreement with many results in the literature on autonomous problems and on non-autonomous linear problems. More precisely, we achieve the convergence order $\mathcal{O}\left(h^{2}t^{-1+\beta/2}+h^2\left(1+\ln(t/h^2)\right)\right)$ or $\mathcal{O}(h^{\beta})$, where $\beta$ is the regularity parameter defined in \assref{assumption1}. Under optimal regularity of the nonlinear function $F$, or under a linear growth assumption on $F$, we achieve the optimal convergence order $\mathcal{O}(h^2t^{-1+\beta/2})$. Following \cite{Tambueseul} and using a similar approach based on the two-parameter evolution operator, this work can be extended to the finite volume method.
The rest of this paper is structured as follows. In Section \ref{nummethod}, the well-posedness results are provided along with the finite element approximation. The error estimate is analysed in Section \ref{proof1} for both Lipschitz nonlinearity and polynomial growth nonlinearity.
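To fix ideas before the abstract setting, here is a minimal one-dimensional sketch (assumed model data, P1 elements, implicit Euler in time with the nonlinearity treated explicitly; a purely illustrative scheme, not the one analysed below) for $u_t=(q(t,x)u_x)_x+F(t,u)$ with homogeneous Dirichlet conditions:

```python
import numpy as np

# assumed data: u_t = (q(t,x) u_x)_x + F(t,u) on (0,1), u(0)=u(1)=0
def q(t, x):  return 1.0 + 0.5*np.sin(t)*x     # time-dependent coefficient
def F(t, u):  return np.sin(u)                 # Lipschitz nonlinearity

N, T, M = 50, 1.0, 200                         # elements, final time, steps
h, dt = 1.0/N, T/M
x = np.linspace(0.0, 1.0, N + 1)
u = np.sin(np.pi*x)                            # assumed initial datum

# P1 mass matrix on the interior nodes (tridiagonal)
Mmat = (np.diag(2*h/3*np.ones(N - 1))
        + np.diag(h/6*np.ones(N - 2), 1) + np.diag(h/6*np.ones(N - 2), -1))

def stiffness(t):
    qm = q(t, 0.5*(x[:-1] + x[1:]))            # coefficient at element midpoints
    A = np.zeros((N - 1, N - 1))
    for e in range(N):                         # element e couples nodes e, e+1
        k = qm[e]/h
        for i in (e - 1, e):                   # interior indices of those nodes
            if 0 <= i < N - 1:
                A[i, i] += k
        if 1 <= e <= N - 2:                    # both endpoints interior
            A[e - 1, e] -= k
            A[e, e - 1] -= k
    return A

for step in range(M):
    t = (step + 1)*dt
    rhs = Mmat @ u[1:-1] + dt*(Mmat @ F(t, u[1:-1]))
    u[1:-1] = np.linalg.solve(Mmat + dt*stiffness(t), rhs)

assert np.all(np.isfinite(u)) and np.max(np.abs(u)) < 2.0
```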
\section{Mathematical setting and numerical method}
\label{nummethod}
\subsection{Notations, settings and well-posedness of the problem}
We denote by $\Vert \cdot \Vert$ the norm associated to
the inner product $\langle\cdot ,\cdot \rangle_H$ in the Hilbert space $H=L^{2}(\Lambda)$. We denote by $\mathcal{L}(H)$ the set of bounded linear operators
in $H$. Let $\mathcal{C}:=\mathcal{C}(\overline{\Lambda}, \mathbb{R})$ be the set of continuous functions equipped with the norm $\Vert u\Vert_{\mathcal{C}}=\sup\limits_{x\in\overline{\Lambda}}\vert u(x)\vert$, $u\in\mathcal{C}$. Next, we make the following assumptions.
\begin{Assumption}
\label{assumption1}
The initial data $u_0$ belongs to $\mathcal{D}\left(\left(-A(0)\right)^{\frac{\beta}{2}}\right)$, $0\leq \beta\leq 2$.
\end{Assumption}
\begin{Assumption}
\label{assumption3}
The nonlinear function $F : [0,T]\times H\longrightarrow H$ is Lipschitz continuous, i.e. there exists a constant $K$ such that
\begin{eqnarray}
\label{Lipschitz}
\Vert F(t,v)-F(s,w)\Vert\leq K(\vert t-s\vert+\Vert v-w\Vert),\quad s,t\in[0, T],\quad v,w\in H.
\end{eqnarray}
\end{Assumption}
We introduce two spaces $\mathbb{H}$ and $V$, such that $\mathbb{H}\subset V$, depending on the boundary conditions of $- \mathcal{A}(t)$. For Dirichlet boundary conditions, we take $
V=\mathbb{H}=H^1_0(\Lambda)$.
For Robin boundary condition, we take $V=H^1(\Lambda)$ and
\begin{eqnarray}
\label{espaceR}
\mathbb{H}=\{v\in H^2(\Lambda) : \partial v/\partial v_{\mathcal{A}}+\alpha_0v=0,\quad \text{on}\quad \partial \Lambda\}, \quad \alpha_0\in\mathbb{R},
\end{eqnarray}
where $\partial v/\partial v_{\mathcal{A}}$ stands for the differentiation along the outer conormal vector $v_{\mathcal{A}}$.
One can easily check (see \cite[Chapter III, (11.14$^{\prime}$)]{Suzuki}) that the bilinear form $a(t)$ associated with $-\mathcal{A}(t)$, defined by $a(t)(u,v)=\langle-\mathcal{A}(t)u,v\rangle_H$, $u\in\mathcal{D}(\mathcal{A}(t))$, $v\in V$, satisfies
\begin{eqnarray}
\label{ellip2}
a(t)(v,v)\geq \; \lambda_0\Vert v\Vert_{1}^{2},\;\;\;\;\; v \in V,\quad t\in[0,T],
\end{eqnarray}
where $\lambda_0$ is a positive constant, independent of $t$.
Note that $a(t)(\cdot,\cdot)$ is bounded in $V\times V$ (\cite[Chapter III, (11.13)]{Suzuki}), so the following operator $A(t):V \rightarrow V^*$ is well defined
\begin{eqnarray*}
a(t)(u,v) = \langle -A(t) u, v \rangle \quad u, v\in V,\quad t\in[0, T],
\end{eqnarray*}
where $V^*$ is the dual space of V and $\langle\cdot ,\cdot \rangle$ the duality pairing between $V^*$ and $V$.
Identifying $H$ with its dual space $H^*$, we obtain the following continuous and dense inclusions
\begin{eqnarray*}
V \subset H \subset V^*,\quad \text{and therefore}\quad \langle u, v \rangle_H=\langle u, v \rangle, \quad u \in H,\quad v\in V.
\end{eqnarray*}
Thus, in order to replace $\langle\cdot ,\cdot \rangle$ by the scalar product $\langle\cdot,\cdot \rangle_H$ of $H$, we need $A(t) u \in H$ for $u \in V$. The domain of $-A(t)$ is therefore defined as $$D:= \mathcal{D}\left(-A(t)\right) =\mathcal{D}\left(A(t)\right) =\{ u\in V :\, A(t)u \in H \}.$$
It is well known that \cite[Chapter III, (11.11) \& (11.11$^{\prime}$)]{Suzuki} in the case of Dirichlet boundary conditions $D=H^1_0(\Lambda)\cap H^2(\Lambda)$ and in the case of Robin boundary conditions $D=\mathbb{H}$ in \eqref{espaceR}.
We again write $A(t)$ for the restriction of $A(t):V \longrightarrow V^*$ to $\mathcal{D}\left(A(t)\right)$, which is therefore regarded as
an operator in $H$ (more precisely, the $H$-realization of $\mathcal {A}(t)$).
The coercivity property \eqref{ellip2} implies that $-A(t)$ is a positive operator and its fractional powers are well defined (\cite{Stig2,Suzuki}).
The following equivalence of norms holds \cite{Suzuki,Stig2}
\begin{eqnarray}
\label{equivalence}
\Vert v\Vert_{H^{\alpha}(\Lambda)}&\equiv& \Vert (-A(t))^{\frac{\alpha}{2}}v\Vert:=\Vert v\Vert_{\alpha},\; v\in \mathcal{D}((-A(t))^{\frac{\alpha}{2}})\cap H^{\alpha}(\Lambda),\quad \alpha\in[0, 2].
\end{eqnarray}
It is well known that the family of operators $\{A(t)\}_{0\leq t\leq T}$ generates a two-parameter evolution family $\{U(t,s)\}_{0\leq s\leq t\leq T}$, see e.g. \cite{Sobolev} or \cite[Page 832]{Suzuki}.
The evolution equation \eqref{model} can be written as follows
\begin{eqnarray}
\label{semi0}
\dfrac{du(t)}{dt}=A(t)u(t)+F(t,u(t)), \quad u(0)=u_0,\quad t\in(0,T].
\end{eqnarray}
The following theorem provides the well-posedness of problem \eqref{model} (or \eqref{semi0}).
\begin{theorem}\cite{Sobolev}
\label{theorem1}
Let \assref{assumption3} be fulfilled. If $u_0\in H$, then the initial value problem \eqref{model} has a unique mild solution $u(t)$ given by
\begin{eqnarray}
\label{mild0}
u(t)=U(t,0)u_0+\int_0^tU(t,s)F(s,u(s))ds,\quad t\in(0,T].
\end{eqnarray}
Moreover, if \assref{assumption1} is fulfilled, then the following space regularity holds \footnote{This estimate also holds when $u$ is replaced by its semi-discrete version $u^h$ defined in \secref{finiteelement}.}
\begin{eqnarray}
\label{spacereg1}
\Vert (-A(s))^{\frac{\beta}{2}}u(t)\Vert+\Vert F(t,u(t))\Vert\leq C\left(1+\Vert (-A(s))^{\frac{\beta}{2}}u_0\Vert\right),\quad \beta\in[0,2),\quad s,t\in[0,T].
\end{eqnarray}
\end{theorem}
\subsection{Finite element discretization}
\label{finiteelement}
Let $\mathcal{T}_h$ be a triangulation of $\Lambda$ with maximal mesh size $h$. Let $V_h \subset V$ denote the space of continuous, piecewise
linear functions over the triangulation $\mathcal{T}_h$.
We define the projection $P_h$ from $H=L^2(\Lambda)$ onto $V_h$ by
\begin{eqnarray}
\label{discrete1}
\left\langle P_hu,\chi\right\rangle_H=\langle u,\chi\rangle_H, \quad \chi\in V_h,\, u\in H.
\end{eqnarray}
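For intuition, the defining relation \eqref{discrete1} can be realized concretely in one space dimension: $P_h u$ solves a linear system with the $P_1$ mass matrix. The following sketch is not from the paper; it is a hypothetical 1-D illustration on a uniform mesh, with two-point Gauss quadrature for the right-hand side.

```python
import numpy as np

def l2_projection_p1(f, a, b, n):
    """Nodal values of the L^2(a,b) projection P_h f onto continuous
    piecewise-linear functions on a uniform mesh with n elements."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    # Tridiagonal P1 mass matrix M_ij = <phi_j, phi_i>_H, assembled
    # element by element from the local mass matrix (h/6)[[2,1],[1,2]].
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    g = 1.0 / np.sqrt(3.0)               # 2-point Gauss nodes on [-1,1]
    for e in range(n):
        M[e:e + 2, e:e + 2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
        for q in (-g, g):                # right-hand side <f, phi_i>_H
            xi = (q + 1.0) / 2.0         # local coordinate in [0,1]
            rhs[e] += (h / 2.0) * f(x[e] + h * xi) * (1.0 - xi)
            rhs[e + 1] += (h / 2.0) * f(x[e] + h * xi) * xi
    # <P_h f, chi>_H = <f, chi>_H for all chi in V_h  <=>  M c = rhs.
    return np.linalg.solve(M, rhs)
```

Since $V_h$ contains all affine functions, the projection reproduces them exactly, which gives a quick sanity check of the assembly.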
For any $t\in[0, T]$, the discrete operator $A_h(t) : V_h\longrightarrow V_h$ is defined by
\begin{eqnarray}
\label{discrete2}
\left\langle A_h(t)\phi,\chi\right\rangle_H=\left\langle A(t)\phi,\chi\right\rangle_H=-a(t)(\phi,\chi),\quad \phi \in D \cap V_h, \chi\in V_h.
\end{eqnarray}
The space semi-discrete version of problem \eqref{semi0} consists of finding $u^h(t)\in V_h$ such that
\begin{eqnarray}
\label{semi1}
\dfrac{du^h(t)}{dt}=A_h(t)u^h(t)+P_hF(t,u^h(t)), \quad u^h(0)=P_hu_0,\quad t\in(0,T].
\end{eqnarray}
For $t\in[0,T]$, we introduce the Ritz projection $R_h(t) :V\longrightarrow V_h$ defined by
\begin{eqnarray}
\label{ritz1}
\left\langle -A(t)R_h(t)v,\chi\right\rangle_H=\left\langle -A(t)v,\chi\right\rangle_H=a(t)(v,\chi),\quad v\in V \cap D,\quad \chi\in V_h.
\end{eqnarray}
It is well known (see e.g. \cite[(3.2)]{Luskin} or \cite{Suzuki}) that the following error estimate holds
\begin{eqnarray}
\label{ritz2}
\Vert R_h(t)v-v\Vert+h\Vert R_h(t)v-v\Vert_{H^1(\Lambda)}\leq Ch^{r}\Vert v\Vert_{H^{r}(\Lambda)},\quad v\in V\cap H^{r}(\Lambda),\quad r\in[1,2].
\end{eqnarray}
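The $O(h^{2})$ rate in \eqref{ritz2} (for $r=2$) is easy to observe numerically. In one dimension with $a(t)(u,v)=\int u'v'$ and $P_1$ elements, the Ritz projection coincides with nodal interpolation; the sketch below (an illustration under those simplifying assumptions, not the paper's general setting) checks that halving $h$ divides the $L^2$ error by about four.

```python
import numpy as np

def ritz_error_1d(v, a, b, n, nq=400):
    """||R_h v - v||_{L^2(a,b)} for P1 elements on a uniform mesh with
    n elements; in 1-D with the Dirichlet Laplacian, R_h v equals the
    nodal interpolant of v, which is what we evaluate (midpoint rule)."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    xi = (np.arange(nq) + 0.5) / nq      # quadrature points in [0,1]
    err2 = 0.0
    for e in range(n):
        xq = x[e] + h * xi
        interp = (1.0 - xi) * v(x[e]) + xi * v(x[e + 1])
        err2 += h * np.mean((v(xq) - interp) ** 2)
    return np.sqrt(err2)

e1 = ritz_error_1d(np.sin, 0.0, np.pi, 16)
e2 = ritz_error_1d(np.sin, 0.0, np.pi, 32)   # ratio e1/e2 should be ~4
```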
The following error estimate also holds (see e.g. \cite[(3.3)]{Luskin} or \cite{Suzuki})
\begin{eqnarray}
\label{ritz3}
\Vert D_t\left(R_h(t)v-v\right)\Vert+h\Vert D_t\left(R_h(t)v-v\right)\Vert_{H^1(\Lambda)}\leq Ch^{r}\left(\Vert v\Vert_{H^{r}(\Lambda)}+\Vert D_tv\Vert_{H^{r}(\Lambda)}\right),
\end{eqnarray}
for any $r\in[1,2]$ and $v\in V\cap H^r(\Lambda)$, where $D_t:=\frac{\partial }{\partial t}$ and $D_tR_h(t)=R^{\prime}_h(t)$ is the time derivative of $R_h$.
According to the generation theory, $A_h(t)$ generates a two-parameter evolution family $\{U_h(t,s)\}_{0\leq s\leq t\leq T}$, see e.g. \cite[Page 839]{Suzuki}.
Therefore the mild solution of \eqref{semi1} can be written as follows
\begin{eqnarray}
\label{mild4}
u^h(t)=U_h(t,0)P_hu_0+\int_0^tU_h(t,s)P_hF(s,u^h(s))ds,\quad t\in[0,T].
\end{eqnarray}
In the rest of this paper, $C$ stands for a generic positive constant independent of $h$, which may change from one place to another. It is well known (see e.g. \cite[Chapter III, (12.3) \& (12.4)]{Suzuki}) that for any $0\leq\gamma\leq\alpha\leq 1$ and $0\leq s< t\leq T$, the following estimates hold\footnote{These estimates remain true if $A_h(t)$ and $U_h(t,s)$ are replaced by $A(t)$ and $U(t,s)$ respectively.}
\begin{eqnarray}
\label{ae1}
\left\Vert (-A_h(t))^{\alpha}U_h(t,s)\right\Vert_{\mathcal{L}(H)}\leq C(t-s)^{-\alpha},\quad
\left\Vert U_h(t,s)(-A_h(s))^{\alpha}\right\Vert_{\mathcal{L}(H)}\leq C(t-s)^{-\alpha}.
\end{eqnarray}
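To see what the smoothing estimates \eqref{ae1} say in the simplest situation, take $A_h$ self-adjoint and time-independent, so that $U_h(t,s)=e^{(t-s)A_h}$ and the first bound reduces to $\sup_{\lambda>0}\lambda^{\alpha}e^{-\lambda(t-s)}=(\alpha/e)^{\alpha}(t-s)^{-\alpha}$. The sketch below (an autonomous, self-adjoint stand-in, not the paper's non-autonomous setting) checks this for a discrete 1-D Laplacian.

```python
import numpy as np

def smoothing_norm(n, alpha, t):
    """||(-A_h)^alpha e^{t A_h}||_{L(H)} for the discrete 1-D Dirichlet
    Laplacian A_h on n interior nodes, computed via its eigenvalues."""
    h = 1.0 / (n + 1)
    A = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    lam = -np.linalg.eigvalsh(A)         # eigenvalues of -A_h, all positive
    return np.max(lam**alpha * np.exp(-lam * t))

alpha, t = 0.5, 0.01
val = smoothing_norm(200, alpha, t)
# sup over lambda > 0 gives the constant C = (alpha/e)^alpha:
bound = (alpha / np.e) ** alpha * t ** (-alpha)
```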
\section{Main result}
\label{proof1}
\subsection{Preliminary results}
We consider the following linear homogeneous problem: find $w\in D\subset V$ such that
\begin{eqnarray}
\label{determ1}
w'=A(t)w,\quad w(\tau)=v,\quad t\in(\tau,T],\quad \text{with}\quad 0\leq\tau\leq T.
\end{eqnarray}
The corresponding semi-discrete problem in space is: find $w_h\in V_h$ such that
\begin{eqnarray}
\label{determ2}
w_h'(t)=A_h(t)w_h,\quad w_h(\tau)=P_hv,\quad t\in(\tau,T],\quad \text{with}\quad 0\leq\tau\leq T.
\end{eqnarray}
The following lemma will be useful in our convergence analysis.
\begin{lemma}
\label{spaceerrorlemma}
Let $r\in[0,2]$ and $0\leq\gamma\leq r$. Let \assref{assumption3} be fulfilled. Then the following error estimate holds for the semi-discrete approximation \eqref{determ2}
\begin{eqnarray*}
\label{er0}
\left\Vert w(t)-w_h(t)\right\Vert=\left\Vert[U(t,\tau)-U_h(t,\tau)P_h]v\right\Vert\leq Ch^r(t-\tau)^{-\frac{(r-\gamma)}{2}}\Vert v\Vert_{\gamma},\; v\in \mathcal{D}\left(\left(-A(0)\right)^{\frac{\gamma}{2}}\right).
\end{eqnarray*}
\end{lemma}
\begin{proof}
We split the desired error as follows
\begin{eqnarray}
\label{espa0}
w_h(t)-w(t)=\left(w_h(t)-R_h(t)w(t)\right)+\left(R_h(t)w(t)-w(t)\right)\equiv \theta(t)+\rho(t).
\end{eqnarray}
Using the definition of $R_h(t)$ and $P_h$ (\eqref{discrete1}--\eqref{discrete2}), we can prove exactly as in \cite{Stig2} that
\begin{eqnarray}
\label{espacetamb}
A_h(t)R_h(t)=P_hA(t),\quad t\in[0,T].
\end{eqnarray}
One can easily compute the following derivatives
\begin{eqnarray}
\label{espace1a}
D_t\theta&=&A_h(t)w_h(t)-D_tR_h(t)w(t)-R_h(t)A(t)w(t),\\
\label{espace1b}
D_t\rho&=&D_tR_h(t)w(t)+R_h(t)A(t)w(t)-A(t)w(t).
\end{eqnarray}
Endowing $V$ and the linear subspace $V_h$ with the norm $\Vert \cdot\Vert_{H^1(\Lambda)}$, it follows from \eqref{ritz2} that $R_h(t)\in \mathcal{L}(V, V_h)$ for every $t\in [0, T]$. By the definition of the derivative, it follows that $D_tR_h(t)\in \mathcal{L}(V, V_h)$ for all $t\in[0, T]$. Hence $P_hD_tR_h(t)=D_tR_h(t)$ for all $t\in[0,T]$
and it follows from \eqref{espace1b} that
\begin{eqnarray}
\label{espace1c}
P_hD_t\rho=D_tR_h(t)w(t)+R_h(t)A(t)w(t)-P_hA(t)w(t).
\end{eqnarray}
Adding and subtracting $P_hA(t)w(t)$ in \eqref{espace1a} and using \eqref{espacetamb}, it follows that
\begin{eqnarray}
\label{espa1}
D_t\theta=A_h(t)\theta-P_hD_t\rho,\quad t\in(\tau,T].
\end{eqnarray}
From \eqref{espa1}, the mild solution $\theta$ is given by
\begin{eqnarray}
\label{espa2}
\theta(t)=U_h(t,\tau)\theta(\tau)-\int_{\tau}^tU_h(t,s)P_hD_s\rho(s)ds.
\end{eqnarray}
Splitting the integral part of \eqref{espa2} in two and integrating by parts the first one yields
\begin{eqnarray}
\label{espa3}
\theta(t)&=& U_h(t,\tau)\theta(\tau)+U_h(t,\tau)P_h\rho(\tau)-U_h\left(t,(t+\tau)/2\right)P_h\rho\left((t+\tau)/2\right)\nonumber\\
&+&\int_{\tau}^{(t+\tau)/2}\frac{\partial}{\partial s}\left(U_h(t,s)\right)P_h\rho(s)ds-\int_{(t+\tau)/2}^tU_h(t,s)P_hD_s\rho(s)ds.
\end{eqnarray}
Using the expressions of $\theta(\tau)$ and $\rho(\tau)$ (see \eqref{espa0}) and the fact that $w_h(\tau)=P_hv$, it holds that $\theta(\tau)+P_h\rho(\tau)=0$.
Hence \eqref{espa3} reduces to
\begin{eqnarray}
\label{espa5}
\theta(t)=-U_h\left(t,(t+\tau)/2\right)P_h\rho((t+\tau)/2)+\int_{\tau}^{\frac{(t+\tau)}{2}}\frac{\partial}{\partial s}\left(U_h(t,s)
\right)P_h\rho(s)ds-\int_{\frac{(t+\tau)}{2}}^tU_h(t,s)P_hD_s\rho(s)ds.
\end{eqnarray}
Taking the norm in both sides of \eqref{espa5} and using \eqref{ae1} yields
\begin{eqnarray}
\label{espa6}
\Vert\theta(t)\Vert&\leq& C\left\Vert \rho\left((t+\tau)/2\right)\right\Vert+\int_{\tau}^{\frac{(t+\tau)}{2}}\left\Vert U_h(t,s)A_h(s)\right\Vert_{\mathcal{L}(H)}\Vert \rho(s)\Vert ds+\int_{\frac{(t+\tau)}{2}}^t\Vert D_s\rho(s)\Vert ds\nonumber\\
&\leq& C\left\Vert \rho\left((t+\tau)/2\right)\right\Vert+\int_{\tau}^{\frac{(t+\tau)}{2}}(t-s)^{-1}\Vert \rho(s)\Vert ds+\int_{\frac{(t+\tau)}{2}}^t\Vert D_s\rho(s)\Vert ds.
\end{eqnarray}
Using \eqref{ritz2} and \eqref{ritz3}, it holds that
\begin{eqnarray}
\label{espa7}
\Vert \rho(s)\Vert\leq Ch^r\Vert w(s)\Vert_r,\quad \Vert D_s\rho(s)\Vert \leq Ch^r\left(\Vert w(s)\Vert_r+\Vert D_sw(s)\Vert_r\right).
\end{eqnarray}
Note that the solution of \eqref{determ1} can be represented as follows.
\begin{eqnarray}
\label{encore1}
w(s)=U(s,\tau)v,\quad s\geq \tau.
\end{eqnarray}
Pre-multiplying both sides of \eqref{encore1} by $(-A(s))^{\frac{r}{2}}$ and using \eqref{ae1} yields
\begin{eqnarray}
\label{encore2}
\left\Vert (-A(s))^{\frac{r}{2}}w(s)\right\Vert&\leq& \left\Vert (-A(s))^{\frac{r}{2}}U(s,\tau)(-A(\tau))^{-\frac{\gamma}{2}}\right\Vert_{\mathcal{L}(H)}\left\Vert (-A(\tau))^{\frac{\gamma}{2}}v\right\Vert\nonumber\\
&\leq& C(s-\tau)^{-\frac{(r-\gamma)}{2}}\left\Vert (-A(\tau))^{\frac{\gamma}{2}}v\right\Vert\leq C(s-\tau)^{-\frac{(r-\gamma)}{2}}\Vert v\Vert_{\gamma}.
\end{eqnarray}
Therefore it holds that
\begin{eqnarray}
\label{espa8}
\Vert w(s)\Vert_r\leq C(s-\tau)^{-\frac{(r-\gamma)}{2}}\Vert v\Vert_{\gamma}, \quad 0\leq \gamma\leq r\leq 2,\quad \tau<s.
\end{eqnarray}
Substituting \eqref{espa8} in \eqref{espa7} yields
\begin{eqnarray}
\label{espa8a}
\Vert \rho(s)\Vert\leq Ch^r(s-\tau)^{-\frac{(r-\gamma)}{2}}\Vert v\Vert_{\gamma}.
\end{eqnarray}
Taking the derivative with respect to $s$ in both sides of \eqref{encore1} yields
\begin{eqnarray}
\label{encore3}
D_sw(s)=-A(s)U(s,\tau)v.
\end{eqnarray}
As for \eqref{encore2}, pre-multiplying both sides of \eqref{encore3} by $(-A(s))^{\frac{r}{2}}$ and using \eqref{ae1} yields
\begin{eqnarray}
\label{encore4a}
\Vert D_sw(s)\Vert_r\leq C(s-\tau)^{-1-\frac{(r-\gamma)}{2}}\Vert v\Vert_{\gamma}.
\end{eqnarray}
Substituting \eqref{espa8} and \eqref{encore4a} in the second estimate of \eqref{espa7} yields
\begin{eqnarray}
\label{espa15}
\Vert D_s\rho(s)\Vert\leq Ch^r\left((s-\tau)^{-\frac{(r-\gamma)}{2}}\Vert v\Vert_{\gamma}+(s-\tau)^{-1-\frac{(r-\gamma)}{2}}\Vert v\Vert_{\gamma}\right)\leq Ch^r(s-\tau)^{-1-\frac{(r-\gamma)}{2}}\Vert v\Vert_{\gamma}.
\end{eqnarray}
Substituting the first estimate of \eqref{espa7} and \eqref{espa15} in \eqref{espa6} and using \eqref{espa8a} yields
\begin{eqnarray}
\label{espa17}
\Vert\theta(t)\Vert&\leq& Ch^r(t-\tau)^{-\frac{(r-\gamma)}{2}}\Vert v\Vert_{\gamma}+Ch^r\int_{\tau}^{\frac{t+\tau}{2}}(t-s)^{-1}(s-\tau)^{-\frac{(r-\gamma)}{2}}\Vert v\Vert_{\gamma}ds\nonumber\\
&+&Ch^r\int_{\frac{t+\tau}{2}}^t(s-\tau)^{-1-\frac{(r-\gamma)}{2}}\Vert v\Vert_{\gamma}ds.
\end{eqnarray}
Using the estimate
\begin{eqnarray}
\label{espa18}
\int_{\tau}^{\frac{t+\tau}{2}}(t-s)^{-1}(s-\tau)^{-\frac{(r-\gamma)}{2}}ds+\int_{\frac{t+\tau}{2}}^t(s-\tau)^{-1-\frac{(r-\gamma)}{2}}ds\leq C(t-\tau)^{-\frac{(r-\gamma)}{2}},\nonumber
\end{eqnarray}
it follows from \eqref{espa17} that
\begin{eqnarray}
\label{espa20}
\Vert \theta(t)\Vert\leq Ch^r(t-\tau)^{-\frac{(r-\gamma)}{2}}\Vert v\Vert_{\gamma}.
\end{eqnarray}
Substituting \eqref{espa20} and \eqref{espa8a} in \eqref{espa0} completes the proof of \lemref{spaceerrorlemma}.
\end{proof}
\subsection{Error estimate of the semilinear problem under global Lipschitz condition}
\begin{theorem}
\label{theorem2}
Let Assumptions \ref{assumption1} and \ref{assumption3} be fulfilled. Let $u(t)$ and $u^h(t)$ be defined by \eqref{mild0} and \eqref{mild4} respectively.
Then the following error estimate holds
\begin{eqnarray}
\label{time1}
\Vert u(t)-u^h(t)\Vert\leq Ch^{2}t^{-1+\beta/2}+Ch^2(1+\ln(t/h^2)),\quad 0< t\leq T.
\end{eqnarray}
If in addition the nonlinearity $F$ satisfies the linear growth condition $\Vert F(t,v)\Vert\leq C\Vert v\Vert$ or if there exists $\delta>0$ small enough such that $\Vert(-A(s))^{\delta}F(t, v)\Vert\leq Ct+C\Vert (-A(s))^{\delta}v\Vert$, $s,t\in[0, T]$, $v\in H$, then the following optimal error estimate holds
\begin{eqnarray}
\label{time2}
\Vert u(t)-u^h(t)\Vert\leq Ch^{2}t^{-1+\beta/2},\quad 0<t\leq T,
\end{eqnarray}
where $\beta$ is defined in Assumption \ref{assumption1}.
\end{theorem}
\begin{remark}
Note that the hypothesis $\Vert F(t,v)\Vert \leq C\Vert v\Vert$ is not too restrictive. It is fulfilled by every nonlinearity satisfying \eqref{Lipschitz} and $F(t, 0)=0$, $t\in[0, T]$, since then $\Vert F(t,v)\Vert=\Vert F(t,v)-F(t,0)\Vert\leq K\Vert v\Vert$. A concrete example is $F(t,v)=f(t)\frac{v}{1+\vert v\vert}$, with $f:[0, T]\longrightarrow\mathbb{R}$ continuous or bounded on $[0, T]$.
\end{remark}
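A quick numerical check of the remark's example: for $F(t,v)(x)=f(t)\,v(x)/(1+|v(x)|)$ with $\sup|f|\leq 1$, one has the linear growth bound $\Vert F(t,v)\Vert\leq\Vert v\Vert$ and a Lipschitz bound in $v$, since $x\mapsto x/(1+|x|)$ is $1$-Lipschitz. The sketch below (with the hypothetical choice $f=\cos$) verifies both on random samples.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.cos                               # any f bounded by 1 on [0, T]

def F(t, v):
    # Nemytskii operator F(t, v)(x) = f(t) * v(x) / (1 + |v(x)|)
    return f(t) * v / (1.0 + np.abs(v))

for _ in range(200):
    t = rng.uniform(0.0, 1.0)
    v = rng.normal(size=64)
    w = rng.normal(size=64)
    # Linear growth: ||F(t, v)|| <= sup|f| * ||v|| (here sup|f| = 1).
    assert np.linalg.norm(F(t, v)) <= np.linalg.norm(v) + 1e-12
    # Lipschitz in v: ||F(t, v) - F(t, w)|| <= ||v - w||.
    assert np.linalg.norm(F(t, v) - F(t, w)) <= np.linalg.norm(v - w) + 1e-12
```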
\begin{remark}
It is possible to obtain an error estimate without the singular term $t^{-1+\beta/2}$, with the drawback that the convergence rate is no longer $2$ but depends on the regularity of the initial data. The proof follows the same lines as that of \thmref{theorem2}, using \lemref{spaceerrorlemma}, and yields
\begin{eqnarray*}
\Vert u(t)-u^h(t)\Vert\leq Ch^{\beta},\quad t\in [0, T].
\end{eqnarray*}
\end{remark}
\begin{proof}[Proof of \thmref{theorem2}]
We start with the proof of \eqref{time1}.
Subtracting \eqref{mild4} from \eqref{mild0}, taking the norm on both sides and using the triangle inequality yields
\begin{eqnarray}
\label{estiI}
\Vert u(t)-u^h(t)\Vert&\leq& \left\Vert U(t,0)u_0-U_h(t,0)P_hu_0\right\Vert\nonumber\\
&+&\left\Vert\int_0^{t}\left[U(t,s)F\left(s,u(s)\right)-U_h(t,s)P_hF\left(s,u^h(s)\right)\right]
ds\right\Vert=:I_0+I_1.
\end{eqnarray}
Using \lemref{spaceerrorlemma} with $r=2$ and $\gamma=\beta$ yields
\begin{eqnarray}
\label{jour1}
I_0\leq Ch^{2}t^{-1+\beta/2}\Vert u_0\Vert_{\beta}\leq Ch^{2}t^{-1+\beta/2}.
\end{eqnarray}
Using \assref{assumption3}, \eqref{ae1} and \eqref{spacereg1} yields
\begin{eqnarray}
\label{estiI2}
I_1&\leq& \int_0^t\left\Vert U(t,s)\left[F\left(s,u(s)\right)-F\left(s,u^h(s)\right)\right]\right\Vert ds+\int_0^t\left\Vert \left[U(t,s)-U_h(t,s)P_h\right]F\left(s,u^h(s)\right)\right\Vert ds\nonumber\\
&\leq& C\int_0^t\left\Vert u(s)-u^h(s)\right\Vert ds+C\int_0^t\left\Vert \left[U(t,s)-U_h(t,s)P_h\right]F\left(s,u^h(s)\right)\right\Vert ds.
\end{eqnarray}
If $0\leq t\leq h^2$, then using \eqref{ae1} easily yields $I_1\leq Ch^2+\int_0^t\Vert u(s)-u^h(s)\Vert ds$. If $0<h^2\leq t$, using \lemref{spaceerrorlemma} (with $r=2$ and $\gamma=0$), and splitting the second integral in two parts yields
\begin{eqnarray}
\label{estiI3}
I_1&\leq& C\int_0^t\Vert u(s)-u^h(s)\Vert ds+Ch^2\int_0^{t-h^2}(t-s)^{-1}ds+Ch^2\int_{t-h^2}^t(t-s)^{-1}ds\nonumber\\
&\leq& C\int_0^t\Vert u(s)-u^h(s)\Vert ds+Ch^2(1+\ln(t/h^2)).
\end{eqnarray}
Substituting \eqref{estiI3} and \eqref{jour1} in \eqref{estiI} and applying Gronwall's lemma proves \eqref{time1}. To prove \eqref{time2}, we only need to re-estimate the term $I_3:=\int_0^t\left\Vert \left[U(t,s)-U_h(t,s)P_h\right]F\left(s,u^h(s)\right)\right\Vert ds$. Under the assumption $\Vert(-A(s))^{\delta}F(t, v)\Vert\leq Ct+C\Vert (-A(s))^{\delta}v\Vert$, using \lemref{spaceerrorlemma} (with $r=2$ and $\gamma=\delta$) and \eqref{spacereg1}, and following the same lines as above, one easily obtains $I_3\leq Ch^2$. Let us now estimate $I_3$ under the hypothesis $\Vert F(t,v)\Vert \leq C\Vert v\Vert$. Using \assref{assumption3}, \eqref{spacereg1} and the mild solution \eqref{mild4}, one easily obtains
\begin{eqnarray}
\label{estiI4}
\Vert F(t,u^h(t))\Vert\leq \Vert u^h(t)\Vert\leq C\vert t-s\vert^{\epsilon}s^{-\epsilon},\;\;\; \Vert F(s, u^h(s))-F(t, u^h(t))\Vert\leq C\vert t-s\vert^{\epsilon}s^{-\epsilon},
\end{eqnarray}
for some $\epsilon\in(0, 1)$ and any $s, t\in[0, T]$. Using \lemref{spaceerrorlemma} (with $r=2$ and $\gamma=0$), triangle inequality and \eqref{estiI4} yields
\begin{eqnarray*}
I_3&\leq& Ch^2\int_0^t(t-s)^{-1}\left\Vert F\left(s,u^h(s)\right)-F(t,u^h(t))\right\Vert ds+Ch^2\int_0^t(t-s)^{-1}\Vert F(t,u^h(t))\Vert ds\nonumber\\
&\leq& Ch^2\int_0^t(t-s)^{-1+\epsilon}s^{-\epsilon}ds\leq Ch^2.
\end{eqnarray*}
Hence the new estimate of $I_1$ is given below
\begin{eqnarray}
\label{estiI5}
I_1\leq Ch^2+C\int_0^t\Vert u(s)-u^h(s)\Vert ds.
\end{eqnarray}
Substituting \eqref{estiI5} and \eqref{jour1} in \eqref{estiI} and applying Gronwall's lemma proves \eqref{time2} and the proof of \thmref{theorem2} is completed.
\end{proof}
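The last step of the proof is Gronwall's lemma: an inequality of the form $\Vert u(t)-u^h(t)\Vert\leq a+C\int_0^t\Vert u(s)-u^h(s)\Vert ds$ is turned into an explicit bound at the cost of a factor $e^{Ct}$. The discrete analogue is easy to check numerically; the sketch below (a generic illustration, not taken from the paper) saturates the discrete inequality and verifies the exponential bound.

```python
import math

# Discrete Gronwall: if e_k <= a + C*dt*sum_{j<k} e_j for all k, then
# e_k <= a*exp(C*(k+1)*dt).  Saturating the inequality gives the
# worst case e_k = a*(1 + C*dt)^k, which the bound must dominate
# because 1 + x <= exp(x).
a, C, dt, nsteps = 0.5, 2.0, 0.01, 300
e = []
for k in range(nsteps):
    e.append(a + C * dt * sum(e))        # worst-case sequence
    assert e[-1] <= a * math.exp(C * (k + 1) * dt) + 1e-12
```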
\subsection{Error estimate of the semilinear problem under polynomial growth condition}
\label{sectpoly}
In this section, we take $\beta\in\left(\frac{d}{2}, 2\right]$. We make the following assumptions on the nonlinearity.
\begin{Assumption}
\label{assumption2}
There exist two constants $L_1, c_1\in[0, \infty)$ such that the nonlinear function $F$ satisfies the following
\begin{eqnarray}
\label{Polynome1}
\Vert F(w)\Vert&\leq& L_1+ L_1\Vert w\Vert\left(1+\Vert w\Vert^{c_1}_{\mathcal{C}}\right), \quad w\in H,\\
\label{Polynome2}
\Vert F(w)-F(v)\Vert&\leq& L_1\Vert w-v\Vert\left(1+\Vert w\Vert^{c_1}_{\mathcal{C}}+\Vert v\Vert^{c_1}_{\mathcal{C}}\right),\quad w, v\in H.
\end{eqnarray}
\end{Assumption}
Let us recall the following continuous Sobolev embedding, which follows from \eqref{equivalence} and the embedding $H^{2\delta}(\Lambda)\subset \mathcal{C}$ for $2\delta>\frac{d}{2}$: \begin{eqnarray}
\label{sobolev1}
\mathcal{D}\left((-A(0))^{\delta}\right)\subset C\left(\Lambda, \mathbb{R}\right),\quad \text{for}\quad \delta >\frac{d}{4},\quad d\in\{1,2,3\}.
\end{eqnarray}
It is a classical result that under \assref{assumption2}, problem \eqref{semi0} has a unique mild solution $u$ satisfying\footnote{This remains true if $u$ is replaced by its discrete version $u^h$.} $u\in C\left([0, T], \mathcal{D}\left((-A(0))^{\frac{\beta}{2}}\right)\right)$, see e.g. \cite{Sobolev}. Hence, using the Sobolev embedding \eqref{sobolev1}, it holds that
\begin{eqnarray}
\label{sobolev2}
\Vert u(t)\Vert_{\mathcal{C}}\leq C\left\Vert (-A(0))^{\frac{\beta}{2}}u(t)\right\Vert\leq C,\quad \Vert u^h(t)\Vert_{\mathcal{C}}\leq C\left\Vert (-A(0))^{\frac{\beta}{2}}u^h(t)\right\Vert\leq C,\quad t\in[0, T].
\end{eqnarray}
\begin{theorem}
\label{mainresul2}
Let $u(t)$ and $u^h(t)$ be the solutions of \eqref{semi0} and \eqref{semi1}, respectively. Let Assumptions \ref{assumption1} and \ref{assumption2} be fulfilled. Then the following error estimate holds
\begin{eqnarray}
\Vert u(t)-u^h(t)\Vert\leq Ch^{2}t^{-1+\beta/2}+Ch^2\left(1+\ln(t/h^2)\right),\quad 0< t\leq T.
\end{eqnarray}
If in addition there exist $c_1, c_2\geq 0$ such that the nonlinearity $F$ satisfies the polynomial growth condition
\begin{eqnarray}
\label{Polysharp1}
\Vert F(t,v)\Vert\leq C\Vert v\Vert^{c_1}\Vert v\Vert_{\mathcal{C}}^{c_2},
\end{eqnarray}
then the following optimal error estimate holds
\begin{eqnarray}
\label{time2poly}
\Vert u(t)-u^h(t)\Vert\leq Ch^{2}t^{-1+\beta/2},\quad 0<t\leq T.
\end{eqnarray}
\end{theorem}
\begin{proof}
The proof goes along the same lines as that of \thmref{theorem2} by using appropriately \assref{assumption2} and \eqref{sobolev2}.
\end{proof}
\begin{remark}
It is possible in \thmref{mainresul2} to obtain a convergence estimate without the singular term $t^{-1+\beta/2}$. However, the convergence rate then depends on the regularity of the initial data and takes the form
\begin{eqnarray*}
\Vert u(t)-u^h(t)\Vert\leq Ch^{\beta},\quad t\in[0, T].
\end{eqnarray*}
\end{remark}
\begin{remark}
\assref{assumption2} is weaker than \assref{assumption3} and therefore covers more nonlinearities. However, the price to pay when using \assref{assumption2} is that more regularity is required on the initial data.
\end{remark}
\begin{remark}
\label{Remarkpol}
Let $\varphi: \mathbb{R}\longrightarrow \mathbb{R}$ be a polynomial of any degree. The Nemytskii operator $F$ defined by
\begin{eqnarray*}
F\left(u\right)(x)=\varphi\left(u(x)\right),\quad u\in H,\quad x\in \Lambda,
\end{eqnarray*}
is a concrete example satisfying \assref{assumption2}.
In fact, assume without loss of generality that $\varphi$ is a polynomial of degree $l>1$, that is
\begin{eqnarray}
\varphi(x)=\sum_{i=0}^la_ix^i,\quad x\in\mathbb{R}.
\end{eqnarray}
Note that the proofs in the cases $l=0, 1$ are obvious.
For any $v\in H\cap \mathcal{C}(\overline{\Lambda}, \mathbb{R})$, using the triangle inequality and the fact that $\left(\sum\limits_{i=0}^lc_i\right)^2\leq (l+1)\sum\limits_{i=0}^lc_i^2$, $c_i\geq 0$, we obtain
\begin{eqnarray}
\Vert F(v)\Vert^2&=&\int_{\Lambda}\vert F(v)(x)\vert^2dx=\int_{\Lambda}\vert \varphi(v(x))\vert^2dx\leq (l+1)\sum_{i=0}^l\vert a_i\vert^2\int_{\Lambda}\vert v(x)\vert^{2i}dx\nonumber\\
&\leq&(l+1)\vert a_0\vert^2\vert\Lambda\vert+(l+1)\vert a_1\vert^2\int_{\Lambda}\vert v(x)\vert^2dx\nonumber\\
&+&(l+1)\max_{2\leq i\leq l}\sup_{x\in\overline{\Lambda}}\vert v(x)\vert^{2i-2}\sum_{i=2}^l\vert a_i\vert^2\int_{\Lambda}\vert v(x)\vert^2dx\nonumber\\
&\leq&(l+1)\vert a_0\vert^2\vert\Lambda\vert+(l+1)\vert a_1\vert^2\Vert v\Vert^2+(l+1)\max_{2\leq i\leq l}\Vert v\Vert^{2i-2}_{\mathcal{C}}\left(\max_{2\leq i\leq l}\vert a_i\vert^2\right)\Vert v\Vert^2\nonumber\\
&\leq&(l+1)\vert a_0\vert^2\vert\Lambda\vert+(l+1)\vert a_1\vert^2\Vert v\Vert^2+(l+1)\max_{2\leq i\leq l}\left(\Vert v\Vert_{\mathcal{C}}+1\right)^{2i-2}\left(\max_{2\leq i\leq l}\vert a_i\vert^2\right)\Vert v\Vert^2\nonumber\\
&\leq&(l+1)\vert a_0\vert^2\vert\Lambda\vert+(l+1)\vert a_1\vert^2\Vert v\Vert^2+(l+1)2^{2l-3}\left(\Vert v\Vert_{\mathcal{C}}^{2l-2}+1\right)\left(\max_{2\leq i\leq l}\vert a_i\vert^2\right)\Vert v\Vert^2\nonumber\\
&\leq& L_1+L_1\Vert v\Vert^2\left(1+\Vert v\Vert^{2l-2}_{\mathcal{C}}\right).
\end{eqnarray}
This completes the proof of \eqref{Polynome1}. The proof of \eqref{Polynome2} is similar to that of \eqref{Polynome1}, using the following well-known identity
\begin{eqnarray}
a^n-b^n=(a-b)\sum_{i=0}^{n-1}a^ib^{n-1-i},\quad a, b\in \mathbb{R},\quad n\geq 1.
\end{eqnarray}
\end{remark}
\begin{remark}
If in \rmref{Remarkpol} we take the constant term of $\varphi$ to be $0$, then the hypothesis \eqref{Polysharp1} is fulfilled.
\end{remark}
% Source: https://arxiv.org/abs/2001.09000 -- "Optimal error estimate of the finite element approximation of second order semilinear non-autonomous parabolic PDEs" (math.NA; math.FA).
% Source: https://arxiv.org/abs/2105.02715 -- "Generalized tournament matrices with the same principal minors".
% Abstract: A generalized tournament matrix $M$ is a nonnegative matrix that satisfies $M+M^{t}=J-I$, where $J$ is the all ones matrix and $I$ is the identity matrix. In this paper, a characterization of generalized tournament matrices with the same principal minors of orders $2$, $3$, and $4$ is given. In particular, it is proven that the principal minors of orders $2$, $3$, and $4$ determine the rest of the principal minors.
\section{Introduction}
Let $M=(m_{ij})$ be an $n\times n$ matrix. With each nonempty subset $X
\subseteq \{1,\ldots,n\}$, we associate the \emph{principal submatrix} $M[X]$
of $M$ whose rows and columns are indexed by the elements of $X$. A
\emph{principal minor} of $M$ is the determinant of a principal submatrix of $M$.
The \emph{order} of a minor is $k$ if it is the determinant of a $k\times k$
submatrix. In this paper, we address the following problem.
\begin{problem}\label{prob:1}
What is the relationship between matrices with equal corresponding
principal minors?
\end{problem}
Clearly, if two matrices are diagonally similar then they have the same
corresponding principal minors. Conversely, it follows from the main result
of Engel and Schneider \cite{engel1980matrices} that two symmetric matrices
with no zeroes off the diagonal having the same principal minors of order $1$,
$2$ and $3$ are necessarily diagonally similar.
Hartfiel and Loewy \cite{hartfiel1984matrices} identified a special class of
matrices in which two matrices with equal corresponding principal minors of
all orders are diagonally similar up to transposition. This result was
improved in \cite{boussairi2015skew} for skew-symmetric matrices with no
zeroes off the diagonal by considering only the equality of corresponding
principal minors of order $2$ and $4$.
Boussaïri and Chergui \cite{boussairi2016transformation} consider the class
of skew-symmetric matrices with entries from $\{-1,0,1\}$ and such that all
off-diagonal entries of the first row are nonzero. They characterize the
pairs of matrices of this class that have equal corresponding principal
minors of order $2$ and $4$. This characterization involves a new
transformation that generalizes diagonal similarity up to transposition.
A \emph{tournament matrix} of order $n$ is the adjacency matrix of some
tournament. In other words, it is an $n\times n$ $(0, 1)$-matrix $M$ which
satisfies
\begin{equation}\label{eq:1} M + M^{t} = J_n - I_n,\end{equation}
where $J_n$ denotes the all ones $n\times n$ matrix and $I_n$ denotes the $n
\times n$ identity matrix. Boussaïri et al. \cite{boussairi2004c3}
characterize the pairs of tournaments having the same $3$-cycles. Clearly,
two tournaments have the same 3-cycles if and only if their adjacency
matrices have the same principal minors of order $3$. This implies a
characterization of tournament matrices with the same principal minors of
order $3$.
A \emph{generalized tournament matrix} $M = (m_{ij})$ is a nonnegative matrix that
satisfies \eqref{eq:1}. By definition $m_{ij} = 1 - m_{ji}\in [0, 1]$ for all $i
\neq j\in\{1,\ldots,n\}$. Thus, we can interpret $m_{ij}$ as the a priori
probability that player $i$ defeats player $j$ in a round-robin tournament
\cite{moon1970generalized}.
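A generalized tournament matrix is easy to sample: pick the strictly upper-triangular entries uniformly in $[0,1]$ and let \eqref{eq:1} determine the rest. The sketch below (hypothetical helper, not from the paper) does this and checks the defining relation.

```python
import numpy as np

def random_generalized_tournament(n, seed=0):
    """A random n x n generalized tournament matrix: zero diagonal,
    m_ij in [0, 1], and m_ij + m_ji = 1 for i != j."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(size=(n, n))
    M = np.triu(P, k=1)                  # m_ij = p_ij for i < j
    M += np.tril(1.0 - P.T, k=-1)        # m_ij = 1 - m_ji for i > j
    return M

M = random_generalized_tournament(5)
# M + M^t = J_5 - I_5 and all entries are nonnegative:
assert np.allclose(M + M.T, np.ones((5, 5)) - np.eye(5))
assert (M >= 0).all()
```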
In this work, we characterize the pairs of generalized tournament matrices with
the same principal minors of order at most $4$. We prove in particular that if
two generalized tournament matrices have the same principal minors of orders
at most $4$, then they have the same principal minors of all orders.
\section{Preliminaries and main result}
Let $T$ be a tournament with vertex set $V$. A \emph{clan} of $T$ is a subset
$X$ of $V$, such that for all $a, b\in X$ and $x\in V\setminus X$, $(a, x)$ is
an arc of $T$ if and only if $(b, x)$ is an arc of $T$. For a subset $Y$ of $V
$, we denote by ${\rm Inv}(T, Y)$ the tournament obtained by reversing all the arcs
with both ends in $Y$. If $Y$ is a clan, we call this operation \emph{clan
reversal}. It is easy to check that clan reversal preserves $3$-cycles.
Conversely, Boussaïri et al. \cite{boussairi2004c3} proved that two
tournaments on the same vertex set have the same $3$-cycles if and only if
one is obtained from the other by a sequence of clan reversals.
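The fact that clan reversal preserves $3$-cycles can be checked at the matrix level: reversing a clan leaves every principal minor of order $3$ unchanged. The sketch below uses a hand-built $5$-vertex tournament in which $\{1,2\}$ is a clan (a hypothetical example, not taken from the paper).

```python
import numpy as np
from itertools import combinations

def inv(M, Y):
    """Inv(M, Y): transpose the principal submatrix of M indexed by Y."""
    N = M.copy()
    idx = np.ix_(list(Y), list(Y))
    N[idx] = N[idx].T
    return N

# Tournament on {0,...,4}; vertices 1 and 2 form a clan: they relate
# identically to each of the outside vertices 0, 3 and 4.
M = np.array([[0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 0],
              [1, 0, 0, 0, 1],
              [0, 1, 1, 0, 0]], dtype=float)
N = inv(M, [1, 2])                       # clan reversal on {1, 2}

# Every principal minor of order 3 is preserved by the clan reversal:
for S in combinations(range(5), 3):
    idx = np.ix_(S, S)
    assert np.isclose(np.linalg.det(M[idx]), np.linalg.det(N[idx]))
```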
Let $M = (m_{ij})$ be an $n\times n$ matrix. A \emph{clan} $X$ of $M$ is a
subset of $[n]:=\{1,\ldots,n\}$ such that for all $i, j\in X$ and $k\in [n]\setminus
X$, $m_{ik}=m_{jk}$ and $m_{ki} = m_{kj}$. Denote by $M[X, [n]\setminus X]$ the
submatrix of $M$ whose rows and columns are indexed by elements of $X$ and $[n]
\setminus X$ respectively. Clearly, $X$ is a clan of $M$ if and only if $M[X,
[n]\setminus X] = \mathbf{1}\cdot v^{t}$ and $M[[n]\setminus X, X] = w\cdot
\mathbf{1}^{t}$ for some column vectors $v$ and $w$.
The empty set, the singletons $\{i\}$ where $i\in [n]$, and $[n]$ are clans
called \emph{trivial}. We say that $M$ is \emph{indecomposable} if all its clans are
trivial, otherwise it is called \emph{decomposable}. For a subset $Y$ of $[n]$,
we denote by ${\rm Inv}(M, Y)$ the matrix obtained from $M$ by replacing the entry $
m_{ij}$ by $m_{ji}$ for all $i, j\in Y$. As for tournaments, if $Y$ is a clan
of $M$, we call this operation clan reversal.
Let $M$ be a tournament matrix and let $T$ be its corresponding tournament. A
subset $X$ of $[n]$ is a clan of $M$ if and only if it is a clan of $T$. Moreover,
for every $Y\subset [n]$, the corresponding tournament of ${\rm Inv}(M, Y)$ is ${\rm Inv}(
T, Y)$. As the two possible tournaments on $3$ vertices have different
determinants, we can write Theorem 2 of \cite{boussairi2004c3}
as follows.
\begin{theorem}\label{theo11}
Let $A$ and $B$ be two tournament matrices. The following assertions are
equivalent:
\begin{enumerate}[i)]
\item $A$ and $B$ have the same principal minors of order $3$.
\item There exists a sequence $A_0 = A, \ldots, A_m=B$, such that $A_{i+1}
= {\rm Inv}(A_i, X_i)$ where $X_i$ is a clan of $A_i$ for all $i\in\{0, \ldots, m-1
\}$.
\end{enumerate}
\end{theorem}
This theorem solves Problem \ref{prob:1} completely in the case of tournament
matrices. Another result, in relation to our work, is the following theorem
due to Loewy \cite{loewy1986principal}.
\begin{theorem}\label{theo:lowey}
Let $A, B$ be two $n\times n$ matrices. Suppose that $n\geq 4$, $A$ is
irreducible, and for every partition of $[n]$ into two subsets $X, Y$ with $|X|
\geq 2$ and $|Y|\geq 2$, ${\rm rank}(A[X, Y])\geq 2$ or ${\rm rank}(A[Y, X])\geq 2$. If $A
$ and $B$ have equal corresponding principal minors of all orders, then they are
diagonally similar up to transposition.
\end{theorem}
Let $M$ be an $n\times n$ generalized tournament matrix and let $X, Y$ be a
bipartition of $[n]$ with $|X|\geq 2$ and $|Y|\geq 2$. It is not hard to prove
that if ${\rm rank}(M[X, Y]) \leq 1$ and ${\rm rank}(M[Y, X])\leq 1$, then $X$ or $Y$ is a
nontrivial clan of $M$. It follows that an indecomposable generalized
tournament matrix satisfies the conditions of Theorem \ref{theo:lowey}.
Another fact is that if two generalized tournament matrices are diagonally
similar, then they are equal. Then, from Theorem \ref{theo:lowey}, we have the
following proposition.
\begin{proposition}\label{propo:12}
Let $A$ and $B$ be two $n\times n$ generalized tournament matrices. Suppose
that $n\geq 4$ and $A$ is indecomposable. If $A$ and $B$ have equal
corresponding principal minors of all orders, then $A=B$ or $A=B^{t}$.
\end{proposition}
It follows from Theorem \ref{theo11} that, in the case of tournament
matrices, it is enough to consider only principal minors of orders at most $3$.
This is no longer true for arbitrary generalized tournament matrices. Indeed,
we will give in Section \ref{section:indec} two indecomposable $4\times 4$
matrices which have the same principal minors of orders $2$ and $3$, but do
not have the same determinant. Afterward, we will prove that Proposition
\ref{propo:12} still holds if we consider principal minors of orders at most
$4$. Then, we prove the following theorem.
\begin{theorem}\label{theo:1}
Let $A$ and $B$ be two $n\times n$ generalized tournament matrices. The
following assertions are equivalent:
\begin{enumerate}[i)]
\item $A$ and $B$ have the same principal minors of orders at most $4$.
\item $A$ and $B$ have the same principal minors of every order.
\item There exists a sequence $A_0=A,\ldots,A_m=B$ of $n\times n$
generalized tournament matrices, such that for $k=0,\ldots,m-1$,
$A_{k+1} = {\rm Inv}(A_k, X_k)$, where $X_k$ is a clan of $A_k$.
\end{enumerate}
\end{theorem}
It is worth noting that the proof of Theorem \ref{theo:lowey} in
\cite{loewy1986principal} uses tools from linear algebra. It seems hard to
prove Theorem \ref{theo:1} in a similar fashion, even for $n=5$. We will use
graph theoretic tools via a correspondence between generalized tournament matrices
and weighted oriented graphs.
Let $M=(m_{ij})$ be an $n\times n$ generalized tournament matrix. For all
$i\neq j \in [n]$, $m_{ij}$ is in $[0, 1]$, and $m_{ij} = m_{ji}$ if and only
if $m_{ij} = 1/2$. Then, we associate to $M$ a weighted oriented graph
$\Gamma_M$ with vertex set $[n]:=\{1,\ldots,n\}$, such that $(i, j)$ is an arc
with weight $m_{ij}$ if and only if $m_{ij} \in (1/2, 1]$. Conversely, let
$\Gamma$ be a weighted oriented graph with vertex set $[n]$ and weights
in $(1/2, 1]$. We associate to $\Gamma$ a generalized tournament matrix $M=(m_
{ij})$, such that if $(i, j)$ is an arc then $m_{ij}$ is equal to the weight
of $(i, j)$, and $m_{ij} = m_{ji} = 1/2$ if $(i, j)$ and $(j, i)$ are not
arcs of $\Gamma$.
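This correspondence is straightforward to implement. The sketch below (an added illustration; the function names are ours) extracts the arc set of $\Gamma_M$ from a generalized tournament matrix and reconstructs the matrix from the arcs, with $1/2$ entries for non-adjacent pairs.

```python
from fractions import Fraction

HALF = Fraction(1, 2)

def matrix_to_arcs(m):
    # Keep the ordered pair (i, j) with its weight exactly when m_ij > 1/2.
    n = len(m)
    return {(i, j): m[i][j]
            for i in range(n) for j in range(n)
            if i != j and m[i][j] > HALF}

def arcs_to_matrix(arcs, n):
    # Rebuild the generalized tournament matrix: m_ji = 1 - m_ij across an
    # arc, and m_ij = m_ji = 1/2 when neither (i, j) nor (j, i) is an arc.
    m = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if (i, j) in arcs:
                m[i][j] = arcs[(i, j)]
            elif (j, i) in arcs:
                m[i][j] = 1 - arcs[(j, i)]
            else:
                m[i][j] = HALF
    return m

F = Fraction
M = [[F(0), F(9, 10), F(1, 2)],
     [F(1, 10), F(0), F(3, 10)],
     [F(1, 2), F(7, 10), F(0)]]
assert arcs_to_matrix(matrix_to_arcs(M), 3) == M  # round trip
```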
This correspondence between generalized tournament matrices and weighted
oriented graphs allows us to use some techniques from \cite{boussairi2004c3}
in the proof of Theorem \ref{theo:1}.
\section{Decomposable and indecomposable weighted oriented graphs}
Let $\Gamma$ be a weighted oriented graph with vertex set $V$. We write $x
\overset{\alpha}{\rightarrow} y$ if $(x, y)$ is an arc of $\Gamma$ with
weight $\alpha$, and $x \cdots y$ if there is no arc between $x$ and $y$.
Similarly, if $X$ and $Y$ are two disjoint subsets of $V$, we write $X \overset
{\alpha}{\rightarrow} Y$ if $(x, y)$ is an arc with weight $\alpha$ for every
$x\in X$ and $y\in Y$. If $X=\{x\}$ we simply write $x\overset{\alpha}{
\rightarrow} Y$ and $Y\overset{\alpha}{\rightarrow}x$ instead of $\{x
\}\overset{\alpha}{\rightarrow} Y$ and $Y\overset{\alpha}{\rightarrow}\{x\}$.
The notations $X\cdots Y$, $x\cdots Y$ and $Y\cdots x$ are defined in the
same way.
A \emph{clan} of a weighted oriented graph $\Gamma$ with vertex set $V$
is a subset $X$ of $V$ such that for every $x\in V\setminus X$, either $x
\cdots X$, $x\overset{\alpha}{\rightarrow} X$ or $X\overset{\alpha}{
\rightarrow}x$ for some weight $\alpha$. The empty set, the singletons $\{x\}$
where $x\in V$, and $V$ are clans called \emph{trivial}. We say that $\Gamma$ is
\emph{indecomposable} if all its clans are trivial, otherwise it is called
\emph{decomposable}. The notion of clans was introduced, under different names,
for graphs, digraphs, and more generally $2$-structures
\cite{ehrenfeucht1990theory}. The next proposition gives some basic properties
of clans.
\begin{proposition}\label{eq:clan_prop}
Let $\Gamma$ be a weighted oriented graph with vertex set $V$. Let $X$, $Y
$ and $Z$ be subsets of $V$.
\begin{enumerate}[i)]
\item If $X$ is a clan of $\Gamma$, then $X\cap Z$ is a clan of
$\Gamma [Z] $.
\item If $X$ and $Y$ are clans of $\Gamma$, then $X\cap Y$ is a clan of
$\Gamma$.
\item If $X$ and $Y$ are clans of $\Gamma$,\ such that $X\cap
Y\neq \emptyset$, then $X\cup Y$ is a clan of $\Gamma$.
\item If $X$ and $Y$ are clans of $\Gamma$, such that $X\setminus
Y\neq \emptyset$, then $Y\setminus X$ is a clan of $\Gamma$.
\item If $X$ and $Y$ are clans of $\Gamma$, such that $X\cap Y=\emptyset
$, then either $X\overset{\alpha}{\rightarrow}Y$, $Y\overset{\alpha}{
\rightarrow}X$ or $X\cdots Y$ for some weight $\alpha$.
\end{enumerate}
\end{proposition}
The following theorem due to Ehrenfeucht and Rozenberg \cite{ehrenfeucht1990}
shows that indecomposability is hereditary.
\begin{theorem}\label{theo:rozenberg}
Let $\Gamma$ be an indecomposable weighted oriented graph with $n\geq5$
vertices. Then, $\Gamma$ contains an indecomposable weighted oriented graph
with $n-1$ or $n-2$ vertices.
\end{theorem}
A weighted oriented graph $\Gamma$ is said to be \emph{separable} if its
vertex set $V$ can be partitioned into two nonempty clans; otherwise it is
\emph{inseparable}. If $\Gamma$ is separable, then there exists a bipartition
$X, Y$ of $V$ such that $X\overset{\alpha}{\rightarrow} Y$ for some weight $
\alpha$, or $X\cdots Y$. In the first case, $\Gamma$ is called
\emph{$\alpha$-separable}. We say that $\Gamma$ is \emph{$\alpha$-linear} if
its vertices can be ordered into a sequence $x_1,\ldots,x_n$ such that $x_i
\overset{\alpha}{\rightarrow} x_j$ if $i<j$. The notions defined above can be
extended naturally to generalized tournament matrices.
By definition, a tournament is inseparable if and only if it is irreducible.
It is well-known that every irreducible tournament with $n$ vertices contains
an irreducible tournament with $n-1$ vertices. The next theorem extends this
result to weighted oriented graphs and will be used in the proof of the main
theorem.
\begin{theorem}\label{theo:moon_wog}
Let $\Gamma$ be an inseparable weighted oriented graph with $n\geq5$
vertices. Then, $\Gamma$ contains an inseparable weighted oriented graph
with $n-1$ vertices.
\end{theorem}
\begin{proof}
Suppose first that $\Gamma$ is decomposable and let $C$ be a nontrivial clan
of $\Gamma$. Let $u$ be a vertex in $C$. We will prove that
$\Gamma[V\setminus\{u\}]$ is inseparable. Suppose, for the sake of
contradiction, that $\Gamma[V\setminus\{u\}]$ is separable, and let $X, Y$ be
a bipartition of $V\setminus\{u\}$ into two clans. Without loss of generality,
we can suppose that $X\overset{\alpha}{\rightarrow} Y$ (the case $X\cdots Y$
is handled in the same way). Since $C\neq V$, we have
$C\setminus\{u\}\neq X\cup Y$.
\\
\textbf{1.} If $C\setminus\{u\}\subseteq X$, then $C\setminus\{u\} \overset{
\alpha}{\rightarrow} Y$. As $C$ is a clan of $\Gamma$, $u
\overset{\alpha}{\rightarrow} Y$. Hence, $X\cup\{u\} \overset{\alpha}{
\rightarrow} Y$, which contradicts the fact that $\Gamma$ is inseparable.
Similarly, $C\setminus\{u\}\subseteq Y$ yields a contradiction.\\
\textbf{2.} Suppose now that $(C\setminus\{u\})\cap X$ and
$(C\setminus\{u\})\cap Y$ are both nonempty. Since $C\neq V$, the set
$V\setminus C$ is nonempty; without loss of generality, we may assume that
$X\setminus C\neq\emptyset$. Every $z\in X\setminus C$ satisfies
$z\overset{\alpha}{\rightarrow} Y$, in particular
$z\overset{\alpha}{\rightarrow} Y\cap C$, and hence, $C$ being a clan,
$z\overset{\alpha}{\rightarrow} C$. It follows that
$(X\setminus C)\overset{\alpha}{\rightarrow} V\setminus (X\setminus C)$, so
that $X\setminus C$ and $V\setminus(X\setminus C)$ form a bipartition of $V$
into two clans, contradicting the inseparability of $\Gamma$.
Suppose now that $\Gamma$ is indecomposable. The result is trivial if $
\Gamma$ contains an indecomposable graph with $n-1$ vertices. If no such
graph exists, then, by Theorem \ref{theo:rozenberg}, there exist two distinct
vertices $x, y\in V$ such that $\Gamma[V\setminus\{x, y\}]$ is indecomposable.
Then, $\Gamma[V\setminus\{x\}]$ or $\Gamma[V\setminus\{y\}]$ is inseparable.
Indeed, if $\Gamma[V\setminus\{x\}]$ is separable, then there exists a
bipartition $X, Y$ of $V\setminus\{x\}$ into two clans. Suppose that $X
\overset{\alpha}{\rightarrow}Y$. If $(V\setminus \{x, y\})\cap X$ and $(V
\setminus \{x, y\})\cap Y$ are both nonempty, then they form a
bipartition of $V\setminus \{x, y\}$ into two clans, which contradicts the
fact that $\Gamma[V\setminus \{x, y\}]$ is indecomposable. Hence, $V\setminus
\{x, y\}$ is a clan of $\Gamma[V\setminus\{x\}]$. Similarly, if $\Gamma[V
\setminus\{y\}]$ is separable, then $V\setminus \{x, y\}$ is a clan of $\Gamma[
V\setminus\{y\}]$. It follows that if $\Gamma[V\setminus\{x\}]$ and $\Gamma[V
\setminus\{y\}]$ are both separable, then $V\setminus \{x, y\}$ is a clan of $
\Gamma$, which contradicts the assumption that $\Gamma$ is indecomposable.
\end{proof}
\section{Indecomposable generalized tournament matrices}\label{section:indec}
In this section, we improve Proposition \ref{propo:12} by showing that it is
enough to consider principal minors of orders at most $4$. More precisely, we
prove the following result.
\begin{theorem}\label{propo:BILT}
Let $A$ and $B$ be two $n\times n$ generalized tournament matrices. Suppose
that $n\geq 4$ and $A$ is indecomposable. If $A$ and $B$ have equal
corresponding principal minors of orders at most $4$, then $A=B$ or $A=B^{t}$.
\end{theorem}
Let $A = (a_{ij})$ and $B = (b_{ij})$ be two $n \times n$ generalized
tournament matrices with the same principal minors of order $2$. Then, for all
$i\neq j\in [n]$, $a_{ij} = b_{ij}$ or $a_{ij} = 1- b_{ij}$. It follows that
the set $\binom{[n]}{2}$ can be partitioned into three subsets:
\begin{itemize}
\item $P_{=}:= \{ \{ i,j \} \in \binom{[n]}{2} : a_{ij}=b_{ij}\text{ and }a_{ij}\neq1/2 \}$,
\item $P_{\neq}:= \{ \{ i,j \} \in \binom{[n]}{2} : a_{ij}=1-b_{ij}\text{ and }a_{ij}\neq1/2 \}$,
\item $P_{1/2}:= \{ \{ i,j \} \in \binom{[n]}{2} : a_{ij}=b_{ij}=1/2 \}$.
\end{itemize}
The \emph{equality graph} and the \emph{difference graph} of $A$ and $B$,
denoted by $\mathcal{E}(A, B)$ and $\mathcal{D}(A, B)$ respectively, are the
undirected graphs with vertex set $V=[n]$ whose edge sets are $P_{=}$ and
$P_{\neq}$, respectively. It follows from the definition that \begin{align}\label{eq:transpose}
\mathcal{E}(A, B) = \mathcal{D}(A, B^{t}).\end{align}
In what follows, we give some information about generalized tournament
matrices with the same principal minors of orders $2$ and $3$, via the
equality and difference graphs.
\begin{lemma}\label{lemma:three_of_triangle}
Let $A = (a_{i, j})$ and $B = (b_{i, j})$ be two $n \times n$ generalized
tournament matrices with the same principal minors of orders $2$ and $3$.
For every $i,j,k\in [ n]$ we have
\begin{enumerate}[i)]
\item if $ \{ i,j \} \in P_{\neq}$ and
$ \{ i,k \} , \{ j,k \} \in P_{=}$, then $a_{ik}=a_{jk}=b_{ik}=b_{jk}$.
\item if $ \{
i,j \} \in P_{=}$ and $ \{ i,k \} , \{ j,k \} \in
P_{\neq}$, then $a_{ik}=a_{jk}=1-b_{ik}=1-b_{jk}$.
\item if $ \{
i,j \} \in P_{=}$ and $ \{ i,k \} , \{ j,k \} \notin
P_{=}$, then $a_{ik}=1/2$ if and only if $a_{jk}=1/2$.
\item if $ \{
i,j \} \in P_{\neq}$ and $ \{ i,k \} , \{ j,k \} \notin
P_{\neq}$, then $a_{ik}=1/2$ if and only if $a_{jk}=1/2$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $i,j,k\in [ n] $. Then we have
\begin{align*}
\det A[\{i, k\}] &= \det{B[\{i, k\}]}\\
\det A[\{j, k\}] &= \det{B[\{j, k\}]}\\
\det A[\{i, j, k\}] &= \det{B[\{i, j, k\}]}
\end{align*}
It follows that
\begin{align}
a_{i k} & = b_{ik} \mbox{ or } a_{ik} = 1 - b_{ik} \label{eq:01}\\
a_{j k} & = b_{jk} \mbox{ or } a_{jk} = 1-b_{jk}\label{eq:02}\\
a_{ik}-a_{ij}a_{ik}+a_{ij}a_{jk}-a_{ik}a_{jk} &= b_{ik}-b_{ij}b_{ik}+b_{ij}b_{jk}-b_{ik}b_{jk} \label{eq:03}
\end{align}
If $ \{ i,j \} \in P_{\neq}$ and $ \{ i,k \} ,
\{ j,k \} \in P_{=}$, then $a_{ij}=1-b_{ij}$, $a_{ik}=b_{ik}$
and $a_{jk}=b_{jk}$. Using \eqref{eq:03}, we get $a_{jk}=a_{ik}$ and then $b_{
jk}=b_{ik}$. This proves assertion $i)$.
To prove $iii)$ suppose that $ \{i,j \} \in P_{=}$, $ \{
j,k \}\notin P_{=}$ and $a_{ik}=1/2$. Then $b_{ij}=a_{ij} \neq 1/2$, $b_
{jk}=1-a_{jk}$ and $b_{ik}=1/2$. By substituting in \eqref{eq:03}, we get $a_
{jk}=1/2$. Assertions $ii)$ and $iv)$ can be obtained from $i)$ and $iii)$,
respectively, by using \eqref{eq:transpose}.
\end{proof}
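The computations above all rest on the expansion $\det A[\{i,j,k\}] = a_{ik}-a_{ij}a_{ik}+a_{ij}a_{jk}-a_{ik}a_{jk}$, obtained by substituting $a_{ji}=1-a_{ij}$, $a_{ki}=1-a_{ik}$ and $a_{kj}=1-a_{jk}$ into the $3\times 3$ determinant. Both sides have degree at most one in each of the three variables, so agreement on a grid with at least two values per variable proves the identity; the added Python sketch below checks this in exact arithmetic.

```python
from fractions import Fraction

def det3(m):
    # Cofactor expansion of a 3x3 determinant along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

grid = [Fraction(k, 4) for k in range(5)]  # 0, 1/4, 1/2, 3/4, 1
for aij in grid:
    for ajk in grid:
        for aik in grid:
            # Generic 3x3 generalized tournament principal submatrix.
            A = [[0, aij, aik],
                 [1 - aij, 0, ajk],
                 [1 - aik, 1 - ajk, 0]]
            assert det3(A) == aik - aij * aik + aij * ajk - aik * ajk
```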
\begin{proposition}\label{coro:same_clans}
Let $A = (a_{i, j})$ and $B = (b_{i, j})$ be two $n \times n$ generalized
tournament matrices. If $A$ and $B$ have the same principal minors of orders
$2$ and $3$, then the connected components of $\mathcal{E}(A, B)$ and
$\mathcal{D}(A, B)$ are clans of $A$ and $B$.
\end{proposition}
\begin{proof}
By \eqref{eq:transpose}, it suffices to consider $\mathcal{E}(A, B)$. Let $C
$ be a connected component of $\mathcal{E}(A, B)$. If $C = [n]$ or $C = \{i\}$
for some $i\in [n]$, then $C$ is a trivial clan of $A$ and $B$. Otherwise,
let $i\neq j\in C$ be two adjacent vertices and let $k\in [n]\setminus C$.
Then $ \{i,j \} \in P_{=}$ and $ \{i,k \} , \{j,k
\} \notin P_{=}$. We have to prove that $a_{ik} = a_{jk}$ and $b_{ik}
= b_{jk}$; since $C$ is connected, this suffices to show that every vertex
outside $C$ relates uniformly to $C$. There are two cases to consider.
\begin{itemize}
\item[1)] If $a_{ik} = 1/2$ then by assertion $iii)$ of Lemma
\ref{lemma:three_of_triangle} we have $a_{jk} = 1/2$, and hence
$b_{ik} = b_{jk} = 1/2$.
\item[2)] If $a_{ik}\neq 1/2$, then $a_{jk}\neq 1/2$. It follows that $
\{ i,k \} , \{ j,k \} \in P_{\neq}$. We conclude
by assertion $ii)$ of Lemma \ref{lemma:three_of_triangle}.
\end{itemize}
\end{proof}
Let $A$ and $B$ be two $n \times n$ generalized tournament matrices with the
same principal minors of orders $2$ and $3$. Suppose that $A$ is indecomposable.
By Proposition \ref{coro:same_clans}, $A=B$ or $A=B^{t}$ if
and only if $\mathcal{E}(A, B)$ and $\mathcal{D}(A, B)$ are not both connected.
In general, $\mathcal{E}(A, B)$ and $\mathcal{D}(A, B)$ can be both
connected. Indeed, let $a,b\in [0,1] \setminus \{1/2 \}$ and
consider the matrix $M_{a, b}$ defined as follows
\[
M_{a,b}=
\begin{pmatrix}
0 & a & b & b\\
1-a & 0 & 1-a & b\\
1-b & a & 0 & a\\
1-b & 1-b & 1-a & 0
\end{pmatrix}
\]
It is easy to check that the matrices $M_{a,b}$ and $M_{1-a,b}$ have equal
corresponding principal minors of orders $2$ and $3$, and that both $\mathcal{E}(M_{a,b}, M_{1
-a,b})$ and $\mathcal{D}(M_{a,b}, M_{1-a,b})$ are connected. Moreover, if $a
\neq b$ and $a\neq 1-b$, then $M_{a,b}$ and $M_{1-a,b}$ are indecomposable
and do not have the same determinant. This shows that the equality of
principal minors of order $4$ cannot be dropped from the assumptions of
Theorem \ref{propo:BILT}. With this assumption in place, we obtain the
following result, which implies Theorem \ref{propo:BILT}.
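This counterexample is easy to verify computationally. The added sketch below (not part of the original text) instantiates $a=9/10$ and $b=7/10$, so that $a\neq b$ and $a\neq 1-b$, and checks in exact arithmetic that $M_{a,b}$ and $M_{1-a,b}$ share all principal minors of orders $2$ and $3$ while their determinants differ.

```python
from fractions import Fraction
from itertools import combinations, permutations

def det(m):
    # Exact Leibniz-formula determinant, fine for orders up to 4.
    n = len(m)
    total = Fraction(0)
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        prod = Fraction(1)
        for i in range(n):
            prod *= m[i][p[i]]
        total += sign * prod
    return total

def M(a, b):
    # The matrix M_{a,b} from the text.
    return [[0, a, b, b],
            [1 - a, 0, 1 - a, b],
            [1 - b, a, 0, a],
            [1 - b, 1 - b, 1 - a, 0]]

a, b = Fraction(9, 10), Fraction(7, 10)   # a != b and a != 1 - b
A, B = M(a, b), M(1 - a, b)

for k in (2, 3):
    for S in combinations(range(4), k):
        AS = [[A[i][j] for j in S] for i in S]
        BS = [[B[i][j] for j in S] for i in S]
        assert det(AS) == det(BS)   # equal principal minors of orders 2, 3
assert det(A) != det(B)             # but different determinants
```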
\begin{proposition}\label{propo:several_classes}
Let $A$ and $B$ be two $n \times n$ generalized
tournament matrices with the same principal minors of orders at most $4$.
If $A$ is inseparable, then $\mathcal{E}(A, B)$ or $\mathcal{D}(A, B)$ is
not connected.
\end{proposition}
We will prove this proposition by induction on $n$. The next lemma allows
us to solve the base case $n=4$.
\begin{lemma}\label{propo:connected_cases_gtm}
Let $A = (a_{ij})$ and $B = (b_{ij})$ be two $4 \times 4$ generalized
tournament matrices with equal corresponding principal minors of orders $2$ and $3$. If
$\mathcal{D}(A, B)$ and $\mathcal{E}(A, B)$ are connected, then there exists
a permutation matrix $P$ such that $A=PM_{a,b}P^{t}$ and $B=PM_{1-a,b}P^{t}$,
where $a,b\in [0,1] \setminus \{1/2 \}$. Moreover, $\det(A)=
\det(B)$ if and only if $a= b$ or $a= 1-b$.
\end{lemma}
\begin{proof}
Suppose that $\mathcal{D}(A, B)$ and $\mathcal{E}(A, B)$ are connected. They
are edge-disjoint and together have at most the six edges of $\binom{[4]}{2}$,
so each is a spanning tree; since the complement of a star on $4$ vertices is
disconnected, the only possibility is that $\mathcal{D}(A, B)$ and
$\mathcal{E}(A, B)$ are edge-disjoint paths of length three. Then, there is a
permutation matrix $P$ such that
\begin{itemize}
\item the edges of $\mathcal{D}(P^{t}AP,P^{t}BP)$ are $\{1,2\}$, $\{2,3\}$ and $\{3,4\}$.
\item the edges of $\mathcal{E}(P^{t}AP,P^{t}BP)$ are $\{1,3\}$, $\{1,4\}$ and $\{2,4\}$.
\end{itemize}
Let $A^{'}:=P^{t}AP$ and $B^{'}:=P^{t}BP$. The off-diagonal entries of $A^{'
}=(a_{ij}^{'})$ and $B^{'}=(b_{ij}^{'})$ are not equal to $1/2$. Moreover,
we have $a_{12}^{\prime}=1-b_{12}^{\prime}$, $a_{32}^{\prime}=1-b_{32}^{\prime
}$, $a_{34}^{\prime}=1-b_{34}^{\prime}$, $a_{13}^{\prime}=b_{13}^{\prime}$, $a
_{14}^{\prime}=b_{14}^{\prime}$ and $a_{24}^{\prime}=b_{24}^{\prime}$.
The matrices $A^{'}$ and $B^{'}$ have equal corresponding minors of orders $2$
and $3$. Then, by assertions $i)$ and $ii)$ of Lemma \ref{lemma:three_of_triangle},
we get $a_{12}^{\prime}=a_{32}^{\prime}=a_{34}^{\prime}$ and $a_{
13}^{\prime}=a_{14}^{\prime}=a_{24}^{\prime}$. Let $a:=a_{12}^{\prime}$ and $
b:=a_{13}^{\prime}$. We have $a,b\in [ 0,1] \setminus \{1/2 \}$, $A
^{'}=M_{a,b}$ and $B^{'}=M_{1-a,b}$. Hence $A=PM_{a,b}P^{t}$ and $B=PM_{1-a,b
}P^{t}$. Then $\det ( A) -\det ( B) =\det (M_{
a,b}) -\det ( M_{1-a,b}) = ( 2b-1) (2a-1
) ( b-a) ( a+b-1)$. It follows that $\det(A)=
\det(B)$ if and only if $a= b$ or $a= 1-b$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{propo:several_classes}]
The result is trivial if $n=2$ or $n=3$. For $n=4$, suppose that $\mathcal{D}
(A,B) $ and $\mathcal{E} ( A,B) $ are both
connected. By Lemma \ref{propo:connected_cases_gtm} and the equality of the
determinants of $A$ and $B$, we have $A=PM_{a,b}P^{t}$ for some permutation
matrix $P$ and $a,b\in [0,1]\setminus \{1/2\}$ with $a=b$ or $a=1-b$. If
$a=b$, the first row of $M_{a,a}$ is constant off the diagonal; if $a=1-b$,
the second row of $M_{a,1-a}$ is. In both cases $A$ is separable, a
contradiction.
We now proceed by induction on $n$, for $n\geq5$. Suppose that $A$ is
inseparable and, for the sake of contradiction, that $\mathcal{D}(A,B)$ and
$\mathcal{E}(A,B)$ are both connected. By Theorem \ref{theo:moon_wog}, there
is $i\in [n]$ such that the principal submatrix $A[[n] \setminus \{i\}]$ is
inseparable. By the induction hypothesis and without loss of generality
(replacing $B$ by $B^{t}$ if necessary, see \eqref{eq:transpose}), we can
assume that $\mathcal{D}^{\prime}
:=\mathcal{D}(A[[n]\setminus \{i\}], B[[n]\setminus \{i\}])$ is not connected.
Since $\mathcal{E}(A,B)$ is connected, there exists $j\neq i$ such that
$\{i,j\}$ is an edge of $\mathcal{E}(A,B)$. Let $C$ be the connected component
of $\mathcal{D}^{\prime}$ containing $j$. As $A[[n]\setminus \{i\}]$ is
inseparable, there exist $k\in C$ and $h\in ([n] \setminus \{i\}) \setminus C$
such that $a_{hk}\neq a_{ij}$; otherwise every vertex outside $C$ would relate
to $C$ with the constant weight $a_{ij}$, and $C$,
$([n]\setminus\{i\})\setminus C$ would form a bipartition of $[n]\setminus\{i\}$
into two clans. Let $C^{\prime}$ be the connected component of
$\mathcal{D}^{\prime}$ containing $h$. Every edge of $\mathcal{D}(A,B)$ not
incident to $i$ joins two vertices of the same component of
$\mathcal{D}^{\prime}$; since $\mathcal{D}(A,B)$ is connected, every component
of $\mathcal{D}^{\prime}$, in particular $C^{\prime}$, contains a neighbor of
$i$. Hence there exists $l\in C^{\prime}$ such that $\{l,i\}$ is an edge of
$\mathcal{D}(A,B)$. By Proposition \ref{coro:same_clans}, $C$ and $C^{\prime}$
are clans of $A[[n]\setminus\{i\}]$ and $B[[n]\setminus\{i\}]$, so
$a_{lj}=a_{hk}$. It follows that
$\det A[\{i,j,l\}]-\det B[\{i,j,l\}]= ( 2a_{il}-1) (1-a_{ij}-a_{jl})
= ( 2a_{il}-1) ( a_{lj}-a_{ij})\neq 0$, because $a_{il}\neq1/2$ and
$a_{lj}\neq a_{ij}$. This contradicts the fact that $A$ and $B$ have the same
principal minors of order $3$.
\end{proof}
Let $A$ and $B$ be two generalized tournament matrices with the same principal
minors of orders at most $4$, and suppose that $A$ is indecomposable. Then $A$
is inseparable and, by the proposition we have just proved, $\mathcal{E}(A, B)$
or $\mathcal{D}(A, B)$ is not connected. The connected components of the
disconnected graph are clans of $A$; as $A$ is indecomposable and none of
these components is $[n]$, they are all singletons, so the graph is empty.
Hence, $A=B$ or $A=B^{t}$. This proves Theorem \ref{propo:BILT}.
Let $\mathcal{F}$ be the family of $4 \times 4$ matrices permutationally
similar to a matrix $M_{a,b}$ with $a,b\in [ 0,1] \setminus \{1/
2 \}$, $a\neq b$ and $a\neq 1-b$. We say that a matrix is \emph{$\mathcal{F
}$-free} if it contains no member of $\mathcal{F}$ as a principal submatrix.
Let $A $ and $B $ be two $n \times n$ $\mathcal{F}$-free generalized
tournament matrices with the same principal minors of orders at most $3$.
By Lemma \ref{propo:connected_cases_gtm}, $A$ and $B$ have the same principal
minors of orders at most $4$. Hence, for $\mathcal{F}$-free generalized
tournament matrices, it is enough to consider equality of principal minors of
orders at most $3$ in Theorem \ref{propo:BILT}.
\section{Proof of the main theorem}
Let $A$ be a generalized tournament matrix and let $X$ be a clan of $A$.
If $X$ is a trivial clan then ${\rm Inv}(A, X) = A$ or ${\rm Inv}(A, X) = A^{t}$.
In both cases $\det A = \det {\rm Inv}(A, X)$. Assume now that $X$ is a
nontrivial clan of $A$. Then, up to permutation, $A$ can be written as follows
\[
A=
\begin{pmatrix}
A_{11} & \alpha\beta^{t}\\
\beta(\mathbf{1}-\alpha)^{t} & A[X]
\end{pmatrix}
\mbox{ and }
{\rm Inv}(A, X) =
\begin{pmatrix}
A_{11} & \alpha\beta^{t}\\
\beta(\mathbf{1}-\alpha)^{t} & A[X]^{t}
\end{pmatrix}
\mbox{,}
\]
where $\beta=(1,\ldots,1)^{t}\in\mathbb{R}^{|X|}$,
$\mathbf{1}=(1,\ldots,1)^{t}\in\mathbb{R}^{n-|X|}$ and
$\alpha=(a_{1},\ldots,a_{n-|X|})^{t}$, the entry $a_{i}\in [0, 1]$ being the
common weight from the $i$th vertex outside $X$ to the vertices of $X$. By Proposition 3 of
\cite{bankoussou2019spectral}, $A$ and ${\rm Inv}(A, X)$ have the same determinant.
Then we have the following result.
\begin{lemma}\label{lemma:12}
Clan reversal preserves principal minors.
\end{lemma}
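This lemma can be stress-tested by brute force. The added Python sketch below (an illustration with names of our choosing) builds a $5\times 5$ generalized tournament matrix in which the last three indices form a clan (the first two rows are constant on the corresponding columns), applies ${\rm Inv}$ to that clan, and compares every principal minor in exact arithmetic.

```python
from fractions import Fraction
from itertools import combinations, permutations

def det(m):
    # Exact Leibniz-formula determinant; fine for orders up to 5.
    n = len(m)
    total = Fraction(0)
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        prod = Fraction(1)
        for i in range(n):
            prod *= m[i][p[i]]
        total += sign * prod
    return total

def inv(m, clan):
    # Inv(M, Y): replace m_ij by m_ji for all i, j in the clan Y.
    r = [row[:] for row in m]
    for i in clan:
        for j in clan:
            r[i][j] = m[j][i]
    return r

F = Fraction
# {2, 3, 4} is a clan: rows 0 and 1 are constant on columns 2, 3, 4.
A = [[F(0), F(3, 5), F(4, 5), F(4, 5), F(4, 5)],
     [F(2, 5), F(0), F(3, 10), F(3, 10), F(3, 10)],
     [F(1, 5), F(7, 10), F(0), F(9, 10), F(1, 5)],
     [F(1, 5), F(7, 10), F(1, 10), F(0), F(7, 10)],
     [F(1, 5), F(7, 10), F(4, 5), F(3, 10), F(0)]]
B = inv(A, [2, 3, 4])

for k in range(1, 6):
    for S in combinations(range(5), k):
        AS = [[A[i][j] for j in S] for i in S]
        BS = [[B[i][j] for j in S] for i in S]
        assert det(AS) == det(BS)   # every principal minor is preserved
```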
It follows from this lemma that matrices obtained from one another by a series
of clan reversals have the same principal minors. This proves the implication
$iii)\Rightarrow ii)$ of Theorem \ref{theo:1}. The implication
$ii)\Rightarrow i)$ is trivial. The remainder of the section is devoted to
proving the implication $i)\Rightarrow iii)$, that is, that any pair of
matrices with the same principal minors of orders at most $4$ can be obtained
from one another by a series of clan reversals.
We start by reducing the problem to the case of matrices with a common
nontrivial clan.
\begin{proposition}\label{propo1}
Let $A $ and $B $ be two $n \times n$ generalized tournament matrices with
the same principal minors of orders at most $4$. Suppose that $A$ is
inseparable. If $A$ and $B$ have no common nontrivial clans then $A = B$ or
$A = B^{t}$.
\end{proposition}
\begin{proof}
Since $A$ is inseparable, by Proposition \ref{propo:several_classes},
$\mathcal{E}(A, B)$ or $\mathcal{D}(A, B)$ is not connected. If $\mathcal{D}(A,
B)$ is not connected, then by Proposition \ref{coro:same_clans} its connected
components are common clans of $A$ and $B$. If $A$ and $B$ have no common
nontrivial clans, then the connected components of $\mathcal{D}(A, B)$ must be
singletons and hence $A = B$. If $\mathcal{E}(A, B)$ is not connected, using
\eqref{eq:transpose}, we get $A=B^{t}$.
\end{proof}
\begin{proposition}\label{propo22}
Let $A$ and $B$ be two $n\times n$ generalized tournament matrices. If $A$ and
$B$ are $\alpha$-linear for some $\alpha>1/2$, then there exists a clan $X$ of
$A$ such that ${\rm Inv}(A, X)$ and $B$ have a common nontrivial clan.
\end{proposition}
\begin{proof}
Let $\Gamma_A$ and $\Gamma_B$ be the corresponding graphs of $A$ and $B$.
Without loss of generality, we can suppose that for all $i\neq j\in [n]$, $i
\overset{\alpha}{\rightarrow}j$ in $\Gamma_A$ if $i<j$. There
exists a permutation $\sigma$ of $[n]$, such that for all $i\neq j\in [n]$,
$\sigma(i)\overset{\alpha}{\rightarrow}\sigma(j)$ in $\Gamma_B$ if $i<j$.
Consider the clan $X = \{1, \ldots, \sigma(1)\}$ of $A$. Clearly, $\sigma(1)
\overset{\alpha}{\rightarrow}[n]\setminus\{\sigma(1)\}$ in the graph
corresponding to ${\rm Inv}(A, X)$. Hence, $\{\sigma(2),\ldots,\sigma(n)\}$ is a
common nontrivial clan of ${\rm Inv}(A, X)$ and $B$.
\end{proof}
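The construction in this proof can be traced on a small example. In the added sketch below (helper names are ours), $A$ is $\alpha$-linear in the natural order, $B$ is $\alpha$-linear in the order $2, 0, 3, 1$, and reversing the initial segment of $A$'s order ending at $B$'s first vertex produces a common nontrivial clan, namely everything except that vertex.

```python
from fractions import Fraction

ALPHA = Fraction(4, 5)

def linear_gt(order):
    # alpha-linear generalized tournament matrix for the given vertex order.
    n = len(order)
    pos = {v: k for k, v in enumerate(order)}
    return [[Fraction(0) if i == j else (ALPHA if pos[i] < pos[j] else 1 - ALPHA)
             for j in range(n)] for i in range(n)]

def inv(m, clan):
    # Inv(M, Y): replace m_ij by m_ji for all i, j in the clan Y.
    r = [row[:] for row in m]
    for i in clan:
        for j in clan:
            r[i][j] = m[j][i]
    return r

def is_clan(m, S):
    # S is a clan iff every vertex outside S relates uniformly to S.
    S = set(S)
    return all(len({m[v][x] for x in S}) == 1
               for v in range(len(m)) if v not in S)

A = linear_gt([0, 1, 2, 3])
B = linear_gt([2, 0, 3, 1])   # B's linear order starts with vertex 2
X = [0, 1, 2]                 # initial segment of A's order, ending at 2
common = [0, 1, 3]            # everything except B's first vertex
assert is_clan(inv(A, X), common) and is_clan(B, common)
```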
For the last case, when $A$ is separable and there is no $\alpha>1/2$ such
that $A$ and $B$ are $\alpha$-linear, we need the following results.
\begin{lemma}\label{lemma3}
Let $A$ be an $n\times n$ decomposable generalized tournament matrix and let
$I$ be a nontrivial clan of $A$, and let $x\in I$. Then $A$ is
inseparable if and only if $A[([n]\setminus I) \cup \{x\}]$ is inseparable.
\end{lemma}
\begin{proof}
Suppose that $V:=[n]$ can be partitioned into two clans $X$, $Y$ of $A$. If
$((V\setminus I) \cup \{x\})\cap X$ and $((V\setminus I) \cup \{x\})\cap Y$ are
nonempty, then they are a bipartition of $(V\setminus I) \cup \{x\}$ into two
clans of $A[(V\setminus I) \cup \{x\}]$. Otherwise, suppose for example that
$((V\setminus I) \cup \{x\})\cap X$ is empty; then $X\subseteq I\setminus \{x\}$.
Pick $\xi\in X$ and let $w\in V\setminus I$. Since $Y$ is a clan and $x, w\in Y$,
we have $a_{\xi x}=a_{\xi w}$, and since $I$ is a clan and $\xi, x\in I$, we
have $a_{w\xi}=a_{wx}$; hence $a_{xw}=1-a_{wx}=1-a_{w\xi}=a_{\xi w}=a_{\xi x}$
does not depend on $w$. Thus $\{x\}$, $V\setminus I$ is a bipartition of
$(V\setminus I) \cup \{x\}$ into two clans. In both cases
$A[(V\setminus I) \cup \{x\}]$ is separable.
Conversely, let $X, Y$ be a bipartition of $(V\setminus I) \cup \{x\}$ into
two clans of $A[(V\setminus I) \cup \{x\}]$ and assume for example that
$x\in X$. Then $X\cup I$, $Y$ is a bipartition of $V$ into two clans of $A$:
each $y\in Y$ relates uniformly to $X$ and to $I$, with the same weight since
$x\in X\cap I$, and each $v\in I\setminus\{x\}$ relates to $Y$ as $x$ does,
because $I$ is a clan of $A$. Hence $A$ is separable.
\end{proof}
\begin{proposition}
Let $A$ and $B$ be two $n\times n$ generalized tournament matrices with the
same principal minors of orders at most $4$. Then $A$ is inseparable if and
only if $B$ is inseparable.
\end{proposition}
\begin{proof}
We proceed by induction on $n$. For $n=3$ the result is trivial. Suppose
that $A$ is inseparable. If $B = A$ or $B = A^{t}$ then $B$ is inseparable.
Otherwise, by Proposition \ref{propo1}, $A$ and $B$ have a common nontrivial
clan $I$. Let $x\in I$, then by Lemma \ref{lemma3}, $A[(V\setminus I)\cup \{x
\}]$ is inseparable, and so is $B[(V\setminus I)\cup \{x\}]$ by the induction
hypothesis. It follows by Lemma \ref{lemma3}, applied to $B$ and $I$, that $B$
is inseparable.
\end{proof}
\begin{corollary}\label{remark1}
Under the assumptions of the preceding proposition, for $\alpha>1/2$, $A$ is $
\alpha$-separable if and only if $B$ is $\alpha$-separable.
\end{corollary}
Clearly, if a matrix $A$ is $\alpha$-linear, then for every clan $I$ of $A$
with $|I|\geq 2$, $A[I]$ is $\alpha$-separable. Conversely, by induction on
$n$, we obtain the following result.
\begin{lemma}\label{lemma2}
If there exists $\alpha>1/2$ such that for every clan $I$ of $A$ with
$|I|\geq 2$, $A[I]$ is $\alpha$-separable, then $A$ is $\alpha$-linear.
\end{lemma}
\begin{proposition}\label{propo:19}
Let $A$ and $B$ be two $n\times n$ generalized tournament matrices with the
same principal minors of orders at most $4$. Suppose that $A$ is
separable. If there is no $\alpha>1/2$ such that $A$ and $B$ are $\alpha$-linear,
then $A$ and $B$ have a common nontrivial clan.
\end{proposition}
\begin{proof}
Let $\Gamma_A$ and $\Gamma_B$ be the corresponding graphs of $A$ and $B$.
Suppose that there exists a bipartition $X, Y$ of $[n]$ such that $X\cdots Y$
in $\Gamma_A$. Clearly, $X\cdots Y$ in $\Gamma_B$. As $n\geq 3$, $X$ or $Y$
is a common nontrivial clan of $A$ and $B$. Suppose now that $A$ is
$\alpha$-separable for some $\alpha>1/2$. Let $\mathcal{J}_A$ be the set of
clans $I$ of $A$ with $|I|\geq 2$ such that $A[I]$ is not $\alpha$-separable;
the set $\mathcal{J}_B$ is defined similarly. Assume that $A$ or $B$ is not
$\alpha$-linear. Then, by Lemma \ref{lemma2}, $\mathcal{J}_A \cup
\mathcal{J}_B$ is not empty. Let $I$ be an element of $\mathcal{J}_A \cup
\mathcal{J}_B$ of maximum cardinality and assume,
for example, that $I\in\mathcal{J}_A$. Consider the smallest clan $\tilde{I}$ of $B$
containing $I$. Clearly, $B[\tilde{I}]$ is not $\alpha$-separable. Indeed, if
$X, Y$ is a bipartition of $\tilde{I}$ such that $X\overset{\alpha}{\rightarrow}
Y$ in $\Gamma_B$, then $I\subset X$ or $I\subset Y$ because, by Corollary
\ref{remark1}, $B[I]$ is not $\alpha$-separable. This contradicts the minimality
of $\tilde{I}$ because $X$ and $Y$ are both clans of $B$. Then, $\tilde{I}\in
\mathcal{J}_B$ and, hence, $\tilde{I} = I$ by maximality of the cardinality
of $I$. It follows that $I$ is a common nontrivial clan of $A$ and $B$.
\end{proof}
Now we are able to complete the proof of Theorem \ref{theo:1}.
The implications $\mathbf{iii) \Rightarrow ii)}$ and $\mathbf{ii)\Rightarrow
i)}$ are already proven. The proof of the implication $\mathbf{i) \Rightarrow
iii)}$ is similar to that of \cite[Theorem~2]{boussairi2004c3}; we include it
in order to keep the paper self-contained.
Let $A$ and $B$ be two $n\times n$ generalized tournament matrices with the same
principal minors of orders at most $4$. We want to prove that $B$ is obtained
from $A$ by a sequence of clan inversions. For this, we proceed by induction on
$n$. The result is trivial for $n=2$. Assume that $n\geq 3$.
There is nothing to prove if $A = B$ or $A=B^{t}$.
Otherwise, by Propositions \ref{propo1}, \ref{propo22} and
\ref{propo:19}, we can suppose that $A$ and $B$ have a common nontrivial clan
$X$, replacing $A$ by ${\rm Inv}(A, X^{\prime})$ for a suitable clan
$X^{\prime}$ if necessary, which only prepends one clan reversal to the
sequence.
Let $x\in X$ and denote by $U$ the set $(V\setminus X)\cup \{x\}$, where
$V:=[n]$. By the induction hypothesis, there exist matrices
$S_0=A[U],\ldots,S_l=B[U]$ such that $S_{k+1}={\rm Inv}(S_k, Y_k)$, where
$Y_k$ is a clan of $S_k$ for all $k\in \{0,\ldots,l-1\}$.
For each $k\in\{0,\ldots,l-1\}$, define the subset $\tilde{Y}_k$ of $V$ by
$\tilde{Y}_k = Y_k$ if $x\notin Y_k$ and $\tilde{Y}_k = Y_k\cup X$ if
$x\in Y_k$. Now, define the sequence $(\tilde{S}_k)$ by $\tilde{S}_0 = A$ and
$\tilde{S}_{k+1} = {\rm Inv}(\tilde{S}_k, \tilde{Y}_k)$ for all
$k \in \{0,\ldots, l-1\}$. Clearly, $\tilde{S}_l[U] = B[U]$ and
$\tilde{S}_l[X] = A[X]$ or $A[X]^{t}$; since $A[X]$ and $B[X]$ also have the
same principal minors of orders at most $4$, so do $\tilde{S}_l[X]$ and $B[X]$.
By the induction hypothesis again, there are matrices
$R_0=\tilde{S}_l[X],\ldots,R_p=B[X]$ such that $R_{k+1} = {\rm Inv}(R_k, Z_k)$,
where $Z_k$ is a clan of $R_k$. Setting $\tilde{R}_0=\tilde{S}_l$ and
$\tilde{R}_{k+1} = {\rm Inv}(\tilde{R}_k, Z_k)$ for all
$k\in\{0,\ldots,p-1\}$, we obtain $\tilde{R}_p = B$.
\section{Remarks and Questions}
\textbf{1.} Let $T$ be a tournament with vertex set $V$. We can associate to $T$
the $3$-uniform hypergraph $\mathcal{H}_T$ with vertex set $V$ whose hyperedges
are the $3$-subsets of $V$ that induce $3$-cycles in $T$. We call this hypergraph
the \emph{$C3$-structure} of $T$. Clearly, not every $3$-uniform hypergraph arises
as the $C3$-structure of some tournament. Linial and Morgenstern
\cite{linial2016number} asked if the $C3$-structure of tournaments can be
recognized in polynomial time. Some progress on this problem has been made in
\cite{boussairi20203}.
As the determinant of a tournament matrix of order $3$ is $1$ if the
tournament is a $3$-cycle and $0$ otherwise, Linial and Morgenstern's problem
can be stated in matrix terms as follows: does there exist a polynomial time
algorithm that decides whether a vector $P\in\{0, 1\}^{\binom{n}{3}}$ arises
as the principal minors of order $3$ of a tournament matrix? This problem
generalizes naturally to generalized tournament matrices.
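The dichotomy used here can be confirmed by enumerating all $2^3=8$ labeled tournaments on three vertices (an added sanity check, not part of the original text): exactly the two cyclic orientations have determinant $1$, and the six transitive ones have determinant $0$.

```python
def det3(m):
    # Cofactor expansion of a 3x3 determinant along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def is_cyclic(m):
    # A 3-vertex tournament is a 3-cycle iff every vertex has out-degree 1.
    return all(sum(row) == 1 for row in m)

cyclic = 0
for bits in range(8):                 # one orientation bit per pair
    m = [[0] * 3 for _ in range(3)]
    for idx, (i, j) in enumerate([(0, 1), (0, 2), (1, 2)]):
        if bits >> idx & 1:
            m[i][j] = 1
        else:
            m[j][i] = 1
    if is_cyclic(m):
        cyclic += 1
    assert det3(m) == (1 if is_cyclic(m) else 0)
assert cyclic == 2
```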
\begin{problem}
Is there a polynomial time algorithm that decides whether a collection
$(P_\alpha)_{\alpha\subseteq [n],\, 2\leq|\alpha|\leq4}$ of real numbers arises
as the principal minors of orders $2$, $3$ and $4$ of a generalized tournament
matrix?
\end{problem}
\textbf{2.} Let $n\geq4$ be an integer. Denote by $GT_n$ the set of all
$n\times n$ generalized tournament matrices and by $PM_n$ the set of collections
$(P_\alpha)_{\alpha\subseteq [n],\, 2\leq|\alpha|\leq4}$ of real numbers that arise
as the principal minors of orders $2$, $3$ and $4$ of $n\times n$ generalized
tournament matrices. Let $\phi:GT_n\rightarrow PM_n$ be the map which associates to
each generalized tournament matrix the collection of its principal minors of
orders $2$, $3$ and $4$. By Theorem \ref{theo:1}, the determinant of a generalized
tournament matrix is determined by its principal minors of orders at most $4$.
Hence, there exists a unique map $\psi:PM_n\rightarrow\mathbb{R}$ such that
$\psi\circ\phi(M) = \det(M)$ for every $n\times n$ generalized tournament matrix
$M$. That is, the following diagram is commutative.
\begin{center}
\begin{tikzcd}
GT_n \arrow[r, "\phi"] \arrow[dr, "\det"]
& PM_n \arrow[d, "\psi"]\\
& \mathbb{R}
\end{tikzcd}
\end{center}
We ask whether the map $\psi$ can be described explicitly, that is, whether
the determinant of an $n\times n$ generalized tournament matrix can be
expressed in terms of its principal minors of orders at most $4$.
\bibliographystyle{plain}
| {
"timestamp": "2021-05-07T02:21:37",
"yymm": "2105",
"arxiv_id": "2105.02715",
"language": "en",
"url": "https://arxiv.org/abs/2105.02715",
"abstract": "A generalized tournament matrix $M$ is a nonnegative matrix that satisfies $M+M^{t}=J-I$, where $J$ is the all ones matrix and $I$ is the identity matrix. In this paper, a characterization of generalized tournament matrices with the same principal minors of orders $2$, $3$, and $4$ is given. In particular, it is proven that the principal minors of orders $2$, $3$, and $4$ determine the rest of the principal minors.",
"subjects": "Combinatorics (math.CO)",
"title": "Generalized tournament matrices with the same principal minors"
} |
https://arxiv.org/abs/2201.12391 | Efficient optimization-based quadrature for variational discretization of nonlocal problems | Casting nonlocal problems in variational form and discretizing them with the finite element (FE) method facilitates the use of nonlocal vector calculus to prove well-posedeness, convergence, and stability of such schemes. Employing an FE method also facilitates meshing of complicated domain geometries and coupling with FE methods for local problems. However, nonlocal weak problems involve the computation of a double-integral, which is computationally expensive and presents several challenges. In particular, the inner integral of the variational form associated with the stiffness matrix is defined over the intersections of FE mesh elements with a ball of radius $\delta$, where $\delta$ is the range of nonlocal interaction. Identifying and parameterizing these intersections is a nontrivial computational geometry problem. In this work, we propose a quadrature technique where the inner integration is performed using quadrature points distributed over the full ball, without regard for how it intersects elements, and weights are computed based on the generalized moving least squares method. Thus, as opposed to all previously employed methods, our technique does not require element-by-element integration and fully circumvents the computation of element-ball intersections. This paper considers one- and two-dimensional implementations of piecewise linear continuous FE approximations, focusing on the case where the element size h and the nonlocal radius $\delta$ are proportional, as is typical of practical computations. When boundary conditions are treated carefully and the outer integral of the variational form is computed accurately, the proposed method is asymptotically compatible in the limit of $h \sim \delta \to 0$, featuring at least first-order convergence in L^2 for all dimensions, using both uniform and nonuniform grids. 
| \section{Introduction}
\label{sec:Introduction}
Nonlocal models have become viable alternatives to partial differential equations (PDEs) for applications where small-scale effects affect the global behavior of a system or when discontinuities in the quantity of interest make it impractical to use differential operators. In fact, nonlocal operators embed length scales in their definitions and allow for irregular functions. For these reasons, nonlocal models are currently employed in several scientific and engineering applications including surface or subsurface transport
\cite{Benson2000,Benson2001,d2021analysis,Deng2004,Schumer2003,Schumer2001},
fracture mechanics
\cite{Ha2011,Littlewood2010,Silling2000},
turbulence
\cite{Scalar_FSGS,DiLeoni-2020,Pang2020},
image processing
\cite{Buades2010,DElia2019imaging,Gilboa2007}
and stochastic processes
\cite{Burch2014,DElia2017,Meerschaert2012,Metzler2000,Metzler2004}.
Nonlocal operators are integral operators that embed length scales in the domain of integration; as such, they allow one to model long-range forces within the length scale and to reduce the regularity requirements on the solutions. The most general form of nonlocal Laplace operator is given by \cite{DElia2020Unified}
\begin{equation*}
\mathcal{L}_\delta u(\mathbf{x}) = 2\int_{\mathbb{R}^n}
(u(\mathbf{y})-u(\mathbf{x})) \gamma(\mathbf{x},\mathbf{y})\,d\mathbf{y},
\end{equation*}
where $u:{\mathbb{R}^n}\to\mathbb{R}$ is a scalar function and $\gamma$ is a symmetric {\it kernel} function whose support is $\mathscr{H}{(\mathbf{x},\delta)}$, the ball centered at $\mathbf{x}$ of radius $\delta$, the so-called horizon or interaction radius. In most cases, the ball is understood in the Euclidean sense (to maintain rotational invariance), but recent works also employ more general balls, including $\ell^\infty$ balls, see, e.g. \cite{capodaglio2019,xu2021feti,xu2021machine}. The function $\gamma$ determines the function space that the nonlocal solution belongs to. Its choice is nontrivial and non-intuitive; in fact, the selection of the optimal kernel is a widely studied research question \cite{Pang2020,burkovska2020,DElia2014DistControl,DElia2016ParamControl,Gulian2019,Pang2019fPINNs,Pang2017discovery,Xu2020learning,You2020Regression,You2020aaai,you2021data}.
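To illustrate the action of $\mathcal{L}_\delta$, the following one-dimensional sketch (our own, not from the paper) evaluates it at a point for the constant kernel $\gamma=\zeta/\delta^{3}$ supported on $|y-x|\leq\delta$; the choice $\zeta=3/2$ is made so that the operator reproduces $u''$ exactly on quadratics, since $2(\zeta/\delta^3)\int_{-\delta}^{\delta}s^2\,ds=4\zeta/3$.

```python
import numpy as np

# 1-D nonlocal Laplacian with a constant kernel gamma = zeta / delta^3 on
# |y - x| <= delta.  With zeta = 3/2 it returns u'' exactly for quadratics.
delta, zeta = 0.1, 1.5

def nonlocal_laplacian(u, x, n=20001):
    y = np.linspace(x - delta, x + delta, n)        # the ball H(x, delta)
    f = 2.0 * (u(y) - u(x)) * (zeta / delta**3)     # integrand of L_delta u
    dy = y[1] - y[0]
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dy    # composite trapezoid rule

val = nonlocal_laplacian(lambda y: y**2, x=0.3)     # should be close to u'' = 2
```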
Because of the integral nature of nonlocal operators, the discretization and numerical solution of nonlocal equations raises several unresolved challenges. These include the design of accurate and efficient discretization schemes and the development of efficient numerical solvers \cite{AinsworthGlusa2018,Capodaglio2020DD,acta20,DEliaFEM2020,Pasetto2019,Pasetto2018,silling2005meshfree,Wang2010}. With the ultimate goal of easily handling nontrivial domains and possibly using mesh adaptivity, this work focuses on variational discretizations and, specifically, the finite element method. However, we point out that the nonlocal literature offers a broad class of meshfree techniques, widely used at the engineering level. We refer the interested reader to, e.g., \cite{Pasetto2018,silling2005meshfree,Chen2006meshless,parks2012peridigm,parks2010lammps,trask2019asymptotically}. One advantage of the FE method is that the nonlocal vector calculus facilitates its numerical analysis. This theoretical framework, first introduced in \cite{Gunzburger2010}, further developed in \cite{Du2013}, and generalized in \cite{DElia2020Unified}, allows us to cast nonlocal equations in a variational setting and analyze them in the same way as PDEs. Using this framework, one can prove well-posedness, convergence, and stability of nonlocal FE schemes. Nonetheless, variational discretizations introduce further computational challenges due to the presence of an additional integration over the domain of the problem. In fact, the nonlocal weak problem associated with the operator $\mathcal{L}_\delta$ involves the computation of a double integral. Specifically, the core computation required by standard codes to assemble the FE stiffness matrix is given by an integral of the form
\begin{equation}\label{eq:double-int}
\int_{\Omega^h_i}\int_{\Omega^h_j\cap \mathscr{H}{(\mathbf{x},\delta)}}
[\psi_k(\mathbf{y})-\psi_k(\mathbf{x})]\gamma(\mathbf{x},\mathbf{y})[\psi_l(\mathbf{y})-\psi_l(\mathbf{x})]\,d\mathbf{y}\,d\mathbf{x},
\end{equation}
where $\Omega^h_i$ is the $i$-th element of the partition and $\psi_k$ is the $k$-th FE basis function. (See Section \ref{sec:discrete-form} for a complete formulation.) In the formula above, we have purposely written the inner domain of integration explicitly, to highlight the fact that, prior to numerically evaluating the integral, we must identify the region of the $j$-th element that overlaps with the support of the kernel function, since a na\"ive, global integration over the whole element would not guarantee numerical convergence of the overall scheme. Identifying this region is a nontrivial, time-consuming task. Furthermore, the presence of a double integral inevitably adds computational cost and it is often the case that the integrand function is singular, requiring the use of sophisticated, possibly adaptive quadrature rules.
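In one dimension the geometric task just described degenerates to clipping an interval, which the following hedged sketch (our illustration) makes explicit; in two and three dimensions the element--ball intersection is a genuinely nontrivial polygon/ball problem, which is what motivates the method proposed in this work.

```python
# Intersection of the 1-D element (a, b) with the ball (x - delta, x + delta):
# the domain of the inner integral in the stiffness-matrix entry.
def element_ball_intersection(a, b, x, delta):
    lo, hi = max(a, x - delta), min(b, x + delta)
    return (lo, hi) if lo < hi else None   # None: element outside the ball

seg = element_ball_intersection(0.3, 0.6, 0.25, 0.25)   # clipped to (0.3, 0.5)
```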
A thorough description of the computational challenges that arise in the computation of the double integral \eqref{eq:double-int} can be found in \cite{DEliaFEM2020}. For the case of finite horizon, the authors of \cite{DEliaFEM2020} propose efficient ways to circumvent the problem of finding intersections between FEs and nonlocal neighborhoods by introducing the concept of ``approximate balls'' given by FE patches that roughly approximate $\mathscr{H}{(\mathbf{x},\delta)}$; their results indicate that in the case of piecewise linear FE spaces, optimal numerical convergence can be preserved. Alternatively, in \cite{aulisa2021efficient}, the authors propose a technique that allows one to compute the inner integral over the whole element $\Omega^h_j$ by introducing a smoothing of the kernel function. The smoothed kernel is still compactly supported, but it continuously decays to zero, allowing for simple Gaussian quadrature rules over each FE.
In this work, under the assumption that the discretization parameter $h$ (the size of the FE) and the nonlocal radius $\delta$ are proportional, we propose a change of perspective and introduce a technique where the inner integration is performed over the ball $\mathscr{H}{(\mathbf{x},\delta)}$ rather than on a single element, i.e., the core computation in the stiffness matrix assembly now becomes
\begin{equation}\label{eq:double-int-B}
\int_{\Omega^h_i}\int_{\mathscr{H}{(\mathbf{x},\delta)}}
[\psi_k(\mathbf{y})-\psi_k(\mathbf{x})]\gamma(\mathbf{x},\mathbf{y})[\psi_l(\mathbf{y})-\psi_l(\mathbf{x})]\,d\mathbf{y}\,d\mathbf{x},
\end{equation}
where we utilize special quadrature rules for the numerical computation of the inner integral. Specifically, we consider quadrature rules based on the generalized moving least squares (GMLS) method, successfully used for strong-form meshfree discretizations of nonlocal problems in \cite{trask2019asymptotically,Leng2021_AsymptoticallyCR,Gross2020}.
The main idea behind this approach is to determine the quadrature weights associated with quadrature points (the meshfree discretization nodes in a meshfree setting) by solving an equality constrained optimization problem (see Section \ref{sec:GMLS_construction} for a thorough discussion).
The introduction of a technique that fully circumvents the computation of element--ball intersections and that allows for the use of global quadrature rules over the support of the kernel function is the major contribution of this work. Additionally, the technique we propose requires minimal implementation effort, as the GMLS subroutine can be embedded into an existing FE code. As such, we envision this technique as a key component of agile FE engineering codes. In this work, we consider one- and two-dimensional implementations of piecewise linear continuous FE approximations. When boundary conditions are carefully treated and when the outer integral in \eqref{eq:double-int-B} is accurately computed, this method is asymptotically compatible in the limit of $h$ and $\delta$ vanishing and features first-order convergence in the $L^2$ norm for all dimensions and for both uniform and nonuniform grids. Furthermore, in the case of uniform grids, the proposed method is patch-test consistent (i.e., it is machine-precision accurate for linear solutions) and, according to numerical evidence, features an optimal, second-order convergence rate. Our numerical tests also indicate that, even for nonuniform grids, second-order convergence may be observed in the pre-asymptotic regime. Another contribution of the current work is a preliminary theoretical study, where, in a simplified, uniformly-discretized, one-dimensional setting, we show that the proposed method features optimal first-order numerical convergence in the $H^1$ norm. These results set the groundwork for more rigorous studies that we will pursue in future works.
\paragraph{Paper outline} Section \ref{sec:Nonlocal_diff_model} introduces the nonlocal Laplace operator and the corresponding volume-constrained nonlocal problem in its strong and weak form. Section \ref{sec:GMLS_construction} introduces the GMLS technique for the numerical evaluation of general integrals. In Section \ref{sec:discrete-form}, we formulate the discrete variational problem for a FE discretization and we provide a detailed description of the quadrature rules and the resulting, fully-discrete problem. We also introduce a technique for the treatment of nonlocal boundary conditions that guarantees an improved convergence behavior. In Section \ref{sec:convergence}, we introduce the concept of asymptotic compatibility and prove that, under certain assumptions, the proposed method features linear numerical convergence in the $H^1$ norm with respect to the mesh size (and, as a consequence, with respect to the interaction radius). Section \ref{sec:numerics} illustrates the accuracy of the proposed method with several one- and two-dimensional numerical tests on uniform and non-uniform grids using piecewise linear continuous FE discretizations.
We first show the improved convergence behavior induced by the special treatment of the nonlocal boundary condition and then show that, in the $L^2$ norm, our scheme is second-order accurate for uniform discretizations and at least first-order accurate for non-uniform ones, with respect to $h$ and $\delta$ and is, hence, asymptotically compatible. Moreover, we also show that convergence in the $H^1$ norm is consistent with the theoretical predictions discussed in Section \ref{sec:convergence}. Lastly, we make some concluding remarks in Section \ref{sec:conclusion}.
\section{Nonlocal Laplace operator and model problem}\label{sec:Nonlocal_diff_model}
In this section we set the notation that will be used throughout the paper and introduce relevant definitions and results. In particular, we formulate the strong and weak forms of the nonlocal Poisson problem used to describe the technique proposed in this work.
Let $\gamma(\mathbf{x},\mathbf{y}):\mathbb{R}^d\times\mathbb{R}^d\rightarrow\mathbb{R}^{+}_0$ be a symmetric, i.e., $\gamma(\mathbf{x},\mathbf{y})=\gamma(\mathbf{y},\mathbf{x})$, non-negative kernel\footnote{Examples and analysis of nonsymmetric and sign-changing kernels can be found in \cite{d2017nonlocal,felsinger2015dirichlet} and \cite{mengesha2013analysis}, respectively.} with bounded support in the norm-induced ball of radius $\delta$, i.e.,
\begin{equation}\label{ball}
\mathscr{H}{(\mathbf{x},\delta)}\coloneqq\supp(\gamma(\mathbf{x},\cdot))=
\left \{\mathbf{y}\in {\mathbb{R} ^{d}}:
\lvert \mathbf{y}-\mathbf{x} \rvert_{\ell^{\tilde{p}}} \leq \delta
\right \},
\end{equation}
where $\delta>0$ is referred to as the \textit{horizon} and $\tilde{p}\in\left[1,\infty\right]$. In this work, without loss of generality, we consider Euclidean balls, i.e., we take $\tilde{p}=2$. Furthermore, we restrict ourselves to kernels of the form
\begin{equation}\label{kernel_form1}
\gamma{(\mathbf{x},\mathbf{y})}=\left\{\begin{aligned}
\ \frac{\zeta}{\delta^{d+2}} \quad \ &\text{for}\ \lvert\mathbf{y}-\mathbf{x}\rvert_{\ell^{\tilde{p}}}\leq\delta,\\
\ 0 \ \ \quad\ &\text{for}\ \lvert\mathbf{y}-\mathbf{x}\rvert_{\ell^{\tilde{p}}}>\delta,
\end{aligned}\right.
\end{equation}
and
\begin{equation}\label{kernel_form2}
\gamma{(\mathbf{x},\mathbf{y})}=\left\{\begin{aligned}
\ \frac{\zeta}{\delta^{d+1}\lvert \mathbf{y}-\mathbf{x} \rvert_{\ell^{\tilde{p}}}} \quad \ &\text{for}\ \lvert\mathbf{y}-\mathbf{x}\rvert_{\ell^{\tilde{p}}}\leq\delta,\\
\ 0 \ \ \quad\ &\text{for}\ \lvert\mathbf{y}-\mathbf{x}\rvert_{\ell^{\tilde{p}}}>\delta,
\end{aligned}\right.
\end{equation}
\noindent
with $\zeta\in\mathbb{R}^+$.
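The two kernel classes above can be sketched as plain functions (a hedged illustration; $\zeta$, $\delta$, and the dimension $d$ are free parameters here, and returning $0$ at $\mathbf{y}=\mathbf{x}$ for the singular kernel is our own convention for the excluded point):

```python
import numpy as np

# Constant kernel: zeta / delta^(d+2) inside the ball, zero outside.
def gamma_const(x, y, delta=0.5, zeta=1.0, d=2):
    r = np.linalg.norm(np.asarray(y, float) - np.asarray(x, float))
    return zeta / delta**(d + 2) if r <= delta else 0.0

# Singular kernel: zeta / (delta^(d+1) |y - x|) inside the ball, zero outside.
def gamma_singular(x, y, delta=0.5, zeta=1.0, d=2):
    r = np.linalg.norm(np.asarray(y, float) - np.asarray(x, float))
    return zeta / (delta**(d + 1) * r) if 0.0 < r <= delta else 0.0

x, y_in, y_out = [0.0, 0.0], [0.3, 0.1], [1.0, 1.0]   # y_in inside the ball
```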
Let $\Omega\subset\mathbb{R}^d$ be a bounded open domain. Its associated \textit{interaction domain} is defined as the set of points outside of $\Omega$ that interact with points inside of it (see Figure \ref{Interaction_domain}), i.e.,
\begin{equation}\label{interaction domain}
\mathscr{B}\Omega\coloneqq
\left \{\mathbf{y}\in {\mathbb{R} ^{d}}\setminus\Omega:
\exists \mathbf{x}\in\Omega \; \text{such that} \;\lvert \mathbf{y}-\mathbf{x} \rvert_{\ell^{\tilde{p}}} \leq \delta
\right \}.
\end{equation}
Note that $\mathscr{B}\Omega\cap\partial\Omega=\partial\Omega$, where $\partial\Omega$ is the boundary of $\Omega$ \cite{delia2020cookbook}.
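A minimal one-dimensional instance of this definition (our illustration): for $\Omega=(0,1)$ the interaction domain is $[-\delta,0]\cup[1,1+\delta]$, and, consistently with the remark above, the boundary points $0$ and $1$ belong to $\mathscr{B}\Omega$.

```python
# Membership tests for Omega = (0, 1) and its interaction domain.
delta = 0.25

def in_omega(y):
    return 0.0 < y < 1.0

def in_interaction_domain(y):
    dist = max(0.0, -y, y - 1.0)   # distance from y to the interval (0, 1)
    return (not in_omega(y)) and dist <= delta
```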
\begin{figure}[H]
\begin{center}
\scalebox{0.75}{\includegraphics[trim = 50mm 375mm 50mm 375mm, clip=true,width=1\textwidth]{Figures/Interaction_domain}}
\caption{A square domain $\Omega$ (in white) with its corresponding interaction domain of thickness $\delta$ (in light-blue). In yellow, two balls of radius $\delta$, centred at two points in $\Omega\cup\mathscr{B}\Omega$, depicted by black dots, one of which is in $\Omega$, while the other is located on the boundary $\partial\Omega$ between $\Omega$ and $\mathscr{B}\Omega$.}
\label{Interaction_domain}
\end{center}
\end{figure}
We introduce the strong form of a nonlocal volume-constrained Poisson problem \cite{Leng2021_AsymptoticallyCR,Du2012_NonlocalVolumes,Gunzburger2010_NonlocalVolumes2,DElia2020_Acta}: given $b:{\Omega}\rightarrow{\mathbb{R}}$ and $g:\mathscr{B}\Omega\rightarrow{\mathbb{R}}$, find $u:\Omega\cup\mathscr{B}\Omega\rightarrow{\mathbb{R}}$, such that
\begin{equation}
\left\{\begin{aligned}
-\mathcal{L}_{\delta}u(\mathbf{x})
&=b(\mathbf{x}), \quad \ \mathbf{x}\in \Omega,\\
u(\mathbf{x})&=g(\mathbf{x}), \quad\ \mathbf{x}\in\mathscr{B}\Omega,
\end{aligned}\right.
\label{strong form_NLD.}
\end{equation}
where $\mathcal{L}_{\delta}u(\mathbf{x})$ is the nonlocal Laplacian
\begin{equation}
\begin{aligned}
\mathcal{L}_{\delta}u(\mathbf{x})
&=2\int_{\Omega\cup\mathscr{B}\Omega}\gamma(\mathbf{x},\mathbf{y})(u(\mathbf{y})-u(\mathbf{x}))d\mathbf{y},
\end{aligned}
\label{nonlocal_Laplacian.}
\end{equation}
and where the second equation is a \textit{Dirichlet volume constraint}. In this work, we only consider Dirichlet constraints\footnote{Examples of the numerical treatment of Neumann constraints can be found in, e.g., \cite{d2020physically,d2021prescription}}.
\subsection{Weak formulation}\label{weak_form}
To derive the weak formulation associated with the problem in Eq.~(\ref{strong form_NLD.}), we multiply the first equation in (\ref{strong form_NLD.}) by a test function $v(\mathbf{x}):\Omega\cup\mathscr{B}\Omega\rightarrow\mathbb{R}$ and then integrate over $\Omega$, i.e.
\begin{equation}
\begin{aligned}
0&=\int_{\Omega}v(\mathbf{x})\left[-\mathcal{L}_{\delta}u(\mathbf{x})-b(\mathbf{x})\right]d\mathbf{x}\\
&=-2\int_{\Omega}v(\mathbf{x})\int_{\Omega\cup\mathscr{B}\Omega}\gamma(\mathbf{x},\mathbf{y})(u(\mathbf{y})-u(\mathbf{x}))d\mathbf{y}d\mathbf{x}-\int_{\Omega}v(\mathbf{x})b(\mathbf{x})d\mathbf{x}.
\end{aligned}
\label{weak_form_s1}
\end{equation}
Now we recast $-2\int_{\Omega}v(\mathbf{x})\int_{\Omega\cup\mathscr{B}\Omega}\gamma(\mathbf{x},\mathbf{y})(u(\mathbf{y})-u(\mathbf{x}))d\mathbf{y}d\mathbf{x}$ as
\begin{equation}
\begin{aligned}
&-2\int_{\Omega}v(\mathbf{x})\int_{\Omega\cup\mathscr{B}\Omega}\gamma(\mathbf{x},\mathbf{y})(u(\mathbf{y})-u(\mathbf{x}))d\mathbf{y}d\mathbf{x}\\
=&-2\int_{\Omega}v(\mathbf{x})\int_{\Omega\cup\mathscr{B}\Omega}\frac{1}{2}\left[\gamma(\mathbf{x},\mathbf{y})(u(\mathbf{y})-u(\mathbf{x}))-\gamma(\mathbf{x},\mathbf{y})(u(\mathbf{x})-u(\mathbf{y}))\right]d\mathbf{y}d\mathbf{x}\\
=&-\int_{\Omega}v(\mathbf{x})\int_{\Omega\cup\mathscr{B}\Omega}\left[\gamma(\mathbf{x},\mathbf{y})(u(\mathbf{y})-u(\mathbf{x}))-\gamma(\mathbf{y},\mathbf{x})(u(\mathbf{x})-u(\mathbf{y}))\right]d\mathbf{y}d\mathbf{x},
\end{aligned}
\label{weak_form_s2}
\end{equation}
where we employed the symmetry of $\gamma$.
As is standard in the presence of Dirichlet conditions, we require $v(\mathbf{x})$ to be zero on $\mathscr{B}\Omega$. We then apply \textit{Green's first identity of nonlocal vector calculus} \cite{Gunzburger2010_NonlocalVolumes2} to the term in Eq.~(\ref{weak_form_s2}), which gives us, with $v(\mathbf{x})=0$ for $\mathbf{x}\in\mathscr{B}\Omega$,
\begin{equation}
\begin{aligned}
&-\int_{\Omega}v(\mathbf{x})\int_{\Omega\cup\mathscr{B}\Omega}\left[\gamma(\mathbf{x},\mathbf{y})(u(\mathbf{y})-u(\mathbf{x}))-\gamma(\mathbf{y},\mathbf{x})(u(\mathbf{x})-u(\mathbf{y}))\right]d\mathbf{y}d\mathbf{x}\\
=&\int_{\Omega\cup\mathscr{B}\Omega}\int_{\Omega\cup\mathscr{B}\Omega}\left[v(\mathbf{y})-v(\mathbf{x})\right]\gamma(\mathbf{x},\mathbf{y})\left[u(\mathbf{y})-u(\mathbf{x})\right]d\mathbf{y}d\mathbf{x}.
\end{aligned}
\label{weak_form_s3}
\end{equation}
Therefore, by combining Eq.~(\ref{weak_form_s2}) and Eq.~(\ref{weak_form_s3}) we get
\begin{equation}
\begin{aligned}
&-2\int_{\Omega}v(\mathbf{x})\int_{\Omega\cup\mathscr{B}\Omega}\gamma(\mathbf{x},\mathbf{y})(u(\mathbf{y})-u(\mathbf{x}))d\mathbf{y}d\mathbf{x}\\
=&\int_{\Omega\cup\mathscr{B}\Omega}\int_{\Omega\cup\mathscr{B}\Omega}\left[v(\mathbf{y})-v(\mathbf{x})\right]\gamma(\mathbf{x},\mathbf{y})\left[u(\mathbf{y})-u(\mathbf{x})\right]d\mathbf{y}d\mathbf{x}.
\end{aligned}
\label{weak_form_s4}
\end{equation}
By substituting the latter in Eq.~(\ref{weak_form_s1}), we obtain
\begin{equation}
\begin{aligned}
&\int_{\Omega\cup\mathscr{B}\Omega}\int_{\Omega\cup\mathscr{B}\Omega}\left[v(\mathbf{y})-v(\mathbf{x})\right]\gamma(\mathbf{x},\mathbf{y})\left[u(\mathbf{y})-u(\mathbf{x})\right]d\mathbf{y}d\mathbf{x}\\
=&\int_{\Omega}v(\mathbf{x})b(\mathbf{x})d\mathbf{x}.
\end{aligned}
\label{weak_form_s5}
\end{equation}
By defining the bilinear form $D(\cdot,\cdot)$ and the linear functional $G(\cdot)$ as
\begin{equation}
\begin{aligned}
D(u,v)\coloneqq\int_{\Omega\cup\mathscr{B}\Omega}\int_{\Omega\cup\mathscr{B}\Omega}\left[v(\mathbf{y})-v(\mathbf{x})\right]\gamma(\mathbf{x},\mathbf{y})\left[u(\mathbf{y})-u(\mathbf{x})\right]d\mathbf{y}d\mathbf{x},
\end{aligned}
\label{weak_form_s6}
\end{equation}
and
\begin{equation}
\begin{aligned}
G(v)\coloneqq\int_{\Omega}v(\mathbf{x})b(\mathbf{x})d\mathbf{x},
\end{aligned}
\label{weak_form_s7}
\end{equation}
we can rewrite Eq.~(\ref{weak_form_s5}) as
\begin{equation}
\begin{aligned}
D(u,v)=G(v).
\end{aligned}
\label{weak_form_s8}
\end{equation}
In double integral operators of the form $\int\left(\int\left(...\right)d\mathbf{y}\right)d\mathbf{x}$, we refer to $\int\left(...\right)d\mathbf{y}$ as the \textit{inner integral}, and to $\int\left(...\right)d\mathbf{x}$ as the \textit{outer integral}.
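The symmetrization argument of Eqs.~(\ref{weak_form_s2})--(\ref{weak_form_s4}) survives discretization verbatim: on any symmetric quadrature grid, the single-ball form and the symmetric double form agree exactly once $v$ vanishes on $\mathscr{B}\Omega$. The following sketch (our own check, with a constant kernel, $\zeta=1$, and a midpoint grid on a 1-D geometry) verifies this.

```python
import numpy as np

# Midpoint grid on Omega u B(Omega) = [-delta, 1 + delta], constant kernel.
delta, N = 0.2, 160
h = (1 + 2 * delta) / N
x = -delta + h * (np.arange(N) + 0.5)                  # midpoints
gamma = (np.abs(x[:, None] - x[None, :]) <= delta) / delta**3

u = np.sin(np.pi * x)
v = np.where((x > 0) & (x < 1), x * (1 - x), 0.0)      # v = 0 on B(Omega)

du = u[None, :] - u[:, None]                           # u(y) - u(x)
dv = v[None, :] - v[:, None]                           # v(y) - v(x)
lhs = -2 * np.sum(v[:, None] * gamma * du) * h * h     # -2 int_Omega v (...)
rhs = np.sum(dv * gamma * du) * h * h                  # D(u, v)
```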
We define the following function spaces for functions $w(\mathbf{x})$ defined for $\mathbf{x}\in\Omega\cup\mathscr{B}\Omega$:
\begin{equation}
\begin{aligned}
\mathcal{V}(\Omega\cup\mathscr{B}\Omega)\coloneqq\left\{w\in L^2(\Omega\cup\mathscr{B}\Omega):\vertiii{w}<\infty\right\},
\end{aligned}
\label{func_spaces1}
\end{equation}
where we define the norm
\begin{equation}
\begin{aligned}
\vertiii{w}^2& =\int_{\Omega\cup\mathscr{B}\Omega}\int_{\Omega\cup\mathscr{B}\Omega}\lvert w(\mathbf{y})-w(\mathbf{x})\rvert^2\gamma(\mathbf{x},\mathbf{y})d\mathbf{y}d\mathbf{x}+\lVert w \rVert_{L^2(\Omega\cup\mathscr{B}\Omega)}^2\\[2mm]
& = D(w,w) + \lVert w \rVert_{L^2(\Omega\cup\mathscr{B}\Omega)}^2.
\end{aligned}
\label{NL_energy_norm1}
\end{equation}
We also introduce the constrained energy space
\begin{equation}
\begin{aligned}
\mathcal{V}_0(\Omega\cup\mathscr{B}\Omega)\coloneqq\left\{w\in \mathcal{V}(\Omega\cup\mathscr{B}\Omega):\left.w\right\vert_{\mathscr{B}\Omega}=0\right\},
\end{aligned}
\label{func_spaces2}
\end{equation}
for which
\begin{equation}\label{eq:V0norm}
\vertiii{w}_0^2 = D(w,w),
\end{equation}
defines a norm. Finally, we define the nonlocal trace space as $\mathcal{V}_t(\Omega\cup\mathscr{B}\Omega)=\left\{\left.w\right\vert_{\mathscr{B}\Omega}:w\in\mathcal{V}(\Omega\cup\mathscr{B}\Omega)\right\}$. Let $\mathcal{V}'$ denote the dual space of bounded linear functionals on $\mathcal{V}_0$ with respect to the $L^2$ duality pairing, i.e., the space of functionals $\varphi:\mathcal{V}_0\times\mathcal{W}\rightarrow\mathbb{R}$ of the type
\begin{equation}
\begin{aligned}
\varphi(\cdot,\cdot)=\int_{\Omega}(\cdot)(\cdot)d\mathbf{x},
\end{aligned}
\label{func_spaces_att1}
\end{equation}
where $\mathcal{W}(\Omega)$ is a space of real-valued functions defined on $\Omega$. Thus, for every $w\in\mathcal{W}(\Omega)$, the functional $\varphi(\cdot,w):\mathcal{V}_0\rightarrow\mathbb{R}$ belongs to $\mathcal{V}'$ and can be written as
\begin{equation}
\begin{aligned}
\varphi(\cdot,w)=\int_{\Omega}(\cdot)w(\mathbf{x})d\mathbf{x}.
\end{aligned}
\label{func_spaces_att2}
\end{equation}
By comparing Eqs.~(\ref{weak_form_s7}) and (\ref{func_spaces_att2}) we see that $G(\cdot)=\varphi(\cdot,b)$ with $b\in\mathcal{W}(\Omega)$.
Then, the weak form of (\ref{strong form_NLD.}) is defined as follows: given $g(\mathbf{x})\in\mathcal{V}_t(\Omega\cup\mathscr{B}\Omega)$, and $b(\mathbf{x})\in\mathcal{W}(\Omega)$, find $u(\mathbf{x})\in\mathcal{V}(\Omega\cup\mathscr{B}\Omega)$ such that $\forall v(\mathbf{x})\in\mathcal{V}_0(\Omega\cup\mathscr{B}\Omega)$
\begin{equation}
\begin{aligned}
D(u,v)=G(v),
\end{aligned}
\label{weak_form_s9}
\end{equation}
subject to $u(\mathbf{x})=g(\mathbf{x})$ for $\mathbf{x}\in\mathscr{B}\Omega$. Discussions on the well-posedness of (\ref{weak_form_s9}) can be found in \cite{Du2012_NonlocalVolumes,DElia2020_Acta}.
\section{Quadrature weights using generalized moving least squares}\label{sec:GMLS_construction}
In this section we review the quadrature approach based on the generalized moving least squares (GMLS) \cite{Mirzaei2012,Salehi2013,Mirzaei2013} method, proposed in \cite{trask2019asymptotically}. For given positions of quadrature points, this method determines their associated quadrature weights by solving an equality constrained optimization problem. In \cite{trask2019asymptotically}, the GMLS-based quadrature was employed within the framework of collocation-based meshfree discretizations of strong-form nonlocal problems.
Consider a collection of points $\mathbf{X}_p=\{\mathbf{x}_j\}_{j=1,...,N_p}\subset\mathscr{H}(\mathbf{x},\delta)$, with $N_p\in\mathbb{N}$, and a quadrature rule for functions $f(\mathbf{x},\mathbf{y})\in\mathbf{V}$, given by
\begin{equation}\label{GMLSpurposeFunc1}
\int_{\mathscr{H}(\mathbf{x},\delta)}f(\mathbf{x},\mathbf{y})d\mathbf{y}\approx\sum_{\substack{j=1\\j:\mathbf{x}_j\neq\mathbf{x}}}^{N_p}f_{j}\omega_{j},
\end{equation}
where $\mathbf{V}$ denotes a Banach space, $f_{j}=f(\mathbf{x},\mathbf{x}_j)$, and $\{{\omega}_j\}_{j=1,...,N_p}\in\mathbb{R}^{N_p}$ is a collection of quadrature weights to be determined. Notice that in Eq.~(\ref{GMLSpurposeFunc1}) we exclude $\mathbf{x}_j=\mathbf{x}$ to account for the possibility of $f(\mathbf{x},\mathbf{y})$ having a singularity when $\mathbf{x}=\mathbf{x}_j=\mathbf{y}$. If the function does not exhibit such a singularity, then $\mathbf{x}_j=\mathbf{x}$ could also be included in the summation. In order to find the quadrature weights, we define the following equality constrained optimization problem: find
\begin{equation}
\argmin_{\{\omega_j\}\in\mathbb{R}^{N_p}}\sum_{\substack{j=1\\j:\mathbf{x}_j\neq\mathbf{x}}}^{N_p}\omega_j^2
\label{GMLS_opt1}
\end{equation}
\begin{equation*}
\text{subject to }\sum_{\substack{j=1\\j:\mathbf{x}_j\neq\mathbf{x}}}^{N_p}f_{j}\omega_{j}=\int_{\mathscr{H}(\mathbf{x},\delta)}f(\mathbf{x},\mathbf{y})d\mathbf{y}\;\;\forall f\in\mathbf{V}_h\subset\mathbf{V},
\label{GMLS_opt2}
\end{equation*}
where $\mathbf{V}_h$ is a finite dimensional subspace of $\bf V$, consisting of functions to be integrated exactly. The problem in (\ref{GMLS_opt1}) leads to the following saddle-point problem:
\begin{equation}
\setlength{\arraycolsep}{2.2pt}
\renewcommand\arraystretch{0.6}
\begin{bmatrix}
\mathbf{I} & \mathbf{B}^T \\
\mathbf{B} & \mathbf{0}
\end{bmatrix}
\begin{bmatrix}
\boldsymbol{\omega} \\ \boldsymbol{\lambda}
\end{bmatrix}
=
\begin{bmatrix}
\mathbf{0}\\\mathbf{g}
\end{bmatrix},
\label{saddle-point}
\end{equation}
where $\mathbf{I}\in\mathbb{R}^{N_p\times N_p}$ is the identity matrix; $\boldsymbol{\omega}=\{{\omega}_j\}_{j=1,...,N_p}\in\mathbb{R}^{N_p}$ is the vector containing the set of quadrature weights; and $\boldsymbol{\lambda}\in\mathbb{R}^{ \operatorname{dim}(\mathbf{V}_h)}$ is the vector of Lagrange multipliers enforcing the constraint. The matrix $\mathbf{B}\in\mathbb{R}^{\operatorname{dim}(\mathbf{V}_h)\times N_p}$ is defined by $B_{\alpha j}=f^{\alpha}(\mathbf{x},\mathbf{x}_j)$, where $\{f^{\alpha}\}_{\alpha=1,...,\operatorname{dim}(\mathbf{V}_h)}$ is a basis of $\mathbf{V}_h$. The vector $\mathbf{g}\in\mathbb{R}^{\operatorname{dim}(\mathbf{V}_h)}$ contains the exact integrals of the basis functions $\{f^{\alpha}\}_{\alpha=1,...,\operatorname{dim}(\mathbf{V}_h)}$, i.e., $g_{\alpha}=\int_{\mathscr{H}(\mathbf{x},\delta)}f^\alpha(\mathbf{x},\mathbf{y})d\mathbf{y}$. Based on Eq.~(\ref{saddle-point}), the quadrature weights can be obtained as
\begin{equation}
\boldsymbol{\omega}=\mathbf{B}^\mathrm{T}\mathbf{S}^{-1}\mathbf{g},
\label{gmls_final_equation}
\end{equation}
where $\mathbf{S}=\mathbf{BB}^\mathrm{T}$. Note that the set of integration weights satisfying a given constraint might not be unique \cite{Pasetto2019,Leng2021_AsymptoticallyCR} and that redundant (linearly dependent) conditions might be present in the constraints, in which case the matrix $\mathbf{S}$ is singular. In this work, as in \cite{trask2019asymptotically}, a pseudoinverse is used in place of $\mathbf{S}^{-1}$ whenever $\mathbf{S}$ is singular. Note also that, as discussed in \cite{Pasetto2019,Leng2021_AsymptoticallyCR}, this set of integration weights can be constructed equivalently using the reproducing kernel particle method (RKPM) \cite{MP:progress20years}, owing to the equivalence of RKPM and GMLS.
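To make the construction concrete, the following sketch (our own illustration, not code from the paper) assembles the minimum-norm weights $\boldsymbol{\omega}=\mathbf{B}^\mathrm{T}(\mathbf{B}\mathbf{B}^\mathrm{T})^{+}\mathbf{g}$ of Eq.~(\ref{gmls_final_equation}) on a one-dimensional ball, with $\mathbf{V}_h$ spanned by monomials up to degree $m$; the point cloud, degree, and seed are our choices.

```python
import numpy as np

# GMLS quadrature on the 1-D ball [xc - delta, xc + delta].
rng = np.random.default_rng(0)
xc, delta, m, Np = 0.0, 1.0, 3, 12
pts = xc + delta * (2 * rng.random(Np) - 1)        # quadrature points in the ball

# B[alpha, j] = f^alpha(pts[j]) for the basis f^alpha(y) = y^alpha.
B = np.vander(pts, m + 1, increasing=True).T       # (m+1) x Np
# g[alpha] = exact integral of y^alpha over [xc - delta, xc + delta].
a, b = xc - delta, xc + delta
g = np.array([(b**(k + 1) - a**(k + 1)) / (k + 1) for k in range(m + 1)])

# Minimum-norm solution of B w = g (pseudoinverse handles rank deficiency).
w = B.T @ np.linalg.pinv(B @ B.T) @ g

# The rule integrates every basis polynomial exactly (up to roundoff).
errs = np.abs(B @ w - g)
```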
\section{Discrete variational form}\label{sec:discrete-form}
In this section we introduce the discrete form of the variational problem in Eq.~(\ref{weak_form_s9}); specifically, for piecewise linear, finite element discretizations, we describe the computational domain, the discrete representation of the unknown field $u(\mathbf{x})$ and the trial function $v(\mathbf{x})$, and the quadrature rules utilized for the numerical evaluation of the integrals.
\subsection{Finite element discretization of the weak formulation}\label{FEM_discretization}
Let $\mathcal{M}^h_\Omega\coloneqq\{\Omega^h_e\}_{e=1,\ldots,n_{el,\Omega}}$, $n_{el,\Omega}\in\mathbb{N}$, be a collection of non-overlapping elements, which are open, simply connected subsets of $\mathbb{R}^d$, and let $\partial\Omega^h_e$ denote their corresponding boundaries. Therefore, $\Omega^h_i\cap\Omega^h_j=\varnothing$ and $\overline{\Omega^h_i}\cap\overline{\Omega^h_j}=\partial\Omega^h_i\cap\partial\Omega^h_j$ for $i\neq j$, $i,j=1,\ldots,n_{el,\Omega}$, where the bar denotes the closure of the set.
We assume that the domain $\Omega$, introduced in Section~\ref{sec:Nonlocal_diff_model}, is polyhedral, so that it can be exactly covered by the mesh $\mathcal M_\Omega^h$, i.e., $\Omega =\cup_{e=1}^{n_{el,\Omega}}\Omega^h_e$. Note that when $\Omega$ is not polyhedral, one can introduce a polyhedral approximation $\Omega^h\approx\Omega$ for which a covering exists. When the nonlocal interaction region is a Euclidean ball, the interaction domain $\mathscr{B}\Omega$ is generally not polyhedral, since vertices of $\Omega$ create rounded corners in $\mathscr{B}\Omega$ (see, for example, Figure \ref{Interaction_domain}). Therefore, following \cite{delia2020cookbook}, we approximate $\mathscr{B}\Omega$ by a polyhedral domain, replacing rounded corners by vertices (see Figure \ref{Interaction_domain_poly} for an illustration). From now on, we will refer to this approximate polyhedral domain also as $\mathscr{B}\Omega$. Note that there is no need to extend the boundary data $g(\mathbf{x})$ to the added regions between the original curved corners and the new corners of the polyhedral approximation, since these portions of the domain are never accessed during the numerical solution process.
\begin{figure}[H]
\begin{center}
\scalebox{0.75}{\includegraphics[trim = 0mm 225mm 0mm 145mm, clip=true,width=1\textwidth]{Figures/Interaction_domain_poly2}}
\caption{Left: a square domain $\Omega$ (in white) with its corresponding interaction domain $\mathscr{B}\Omega$ (in light-blue). Right: the same square domain and a polygonal approximate interaction domain, still referred to as $\mathscr{B}\Omega$.}
\label{Interaction_domain_poly}
\end{center}
\end{figure}
Since we consider a polyhedral $\mathscr{B}\Omega$, we can construct another exact mesh $\mathcal{M}^h_{\mathscr{B}\Omega}\coloneqq\{\Omega^h_e\}_{e=n_{el,\Omega}+1,\ldots,n_{el}}$ containing $n_{el}-n_{el,\Omega}$ elements, with $n_{el}\in\mathbb{N}$, i.e., $\mathscr{B}\Omega^h\coloneqq\cup_{e=n_{el,\Omega}+1}^{n_{el}}\Omega^h_e=\mathscr{B}\Omega$. Meshing $\Omega$ and $\mathscr{B}\Omega$ separately guarantees that elements do not straddle the shared boundary between $\Omega$ and $\mathscr{B}\Omega$, i.e., $\partial\Omega=\overline{\Omega}\cap\mathscr{B}\Omega$. Moreover, we require that the vertices of the elements of $\mathcal{M}^h_{\mathscr{B}\Omega}$ and $\mathcal{M}^h_{\Omega}$ coincide along the boundary $\partial\Omega$, so that $\mathcal{M}^h=\mathcal{M}^h_{\Omega}\cup\mathcal{M}^h_{\mathscr{B}\Omega}=\{\Omega^h_e\}_{e=1}^{n_{el}}$ is a regular mesh for $\Omega\cup\mathscr{B}\Omega$.\newline
\smallskip
We consider continuous finite element spaces with Lagrange-type compactly supported linear polynomial bases defined with respect to the nodes of $\mathcal{M}^h$.
With $J\in\mathbb{N}$ and $J_\Omega\in\mathbb{N}$, let $\{\tilde{\mathbf{x}}_j\}_{j=1}^{J}$ be the set of all the nodes in $\mathcal{M}^h$, with $\{\tilde{\mathbf{x}}_j\}_{j=1}^{J_\Omega}$ and $\{\tilde{\mathbf{x}}_j\}_{j=J_\Omega+1}^{J}$ being the subsets of nodes located in the open domain $\Omega$ and in the closed domain $\mathscr{B}\Omega$, respectively. Notice that, in this way, the nodes located on $\partial\Omega=\overline{\Omega}\cap\mathscr{B}\Omega$ are assigned to $\mathscr{B}\Omega$. For $j=1,\ldots,J$, let $\psi_j(\mathbf{x})$ denote the piecewise linear polynomial function such that $\psi_j(\tilde{\mathbf{x}}_k)=\delta_{jk}$ for $k=1,\ldots,J$, where $\delta_{jk}$ is the Kronecker delta. We then define the finite element spaces as
\begin{equation}
\mathcal{V}^h=\operatorname{span}\{\psi_j\}_{j=1}^{J}\subset\mathcal{V}(\Omega\cup\mathscr{B}\Omega),
\end{equation}
and
\begin{equation}\label{Vh0}
\mathcal{V}^h_0=\operatorname{span}\{\psi_j\}_{j=1}^{J_\Omega}\subset\mathcal{V}_0(\Omega\cup\mathscr{B}\Omega),
\end{equation}
of dimensions $J$ and $J_\Omega$, respectively. Note that all functions belonging to $\mathcal{V}^h$ and $\mathcal{V}^h_0$ are continuous by construction.
The finite element approximation $u^h(\mathbf{x})\in\mathcal{V}^h$ of the solution $u(\mathbf{x})$ of the nonlocal problem is defined as
\begin{equation}
u^h(\mathbf{x})=\sum_{j=1}^{J}\psi_j(\mathbf{x})u_j=\sum_{j=1}^{J_\Omega}\psi_j(\mathbf{x})u_j+\sum_{j=J_\Omega+1}^{J}\psi_j(\mathbf{x})g(\tilde{\mathbf{x}}_j)=w^h+g^h,
\label{uh_fem}
\end{equation}
for a set of coefficients $\{u_j\}_{j=1}^{J}$. Here, the volume constraint in (\ref{strong form_NLD.}) has been imposed at the nodes located in $\mathscr{B}\Omega$, i.e.,
\begin{equation}
u_j=g(\tilde{\mathbf{x}}_j)\;\;\;\text{for}\;j=J_\Omega+1,\ldots,J,
\end{equation}
so that
\begin{equation}
w^h\coloneqq\sum_{j=1}^{J_\Omega}\psi_j(\mathbf{x})u_j\;\;\;\text{and}\;\;\;g^h\coloneqq\sum_{j=J_\Omega+1}^{J}\psi_j(\mathbf{x})g(\tilde{\mathbf{x}}_j).
\label{uh_fem2}
\end{equation}
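As a concrete illustration, the splitting $u^h=w^h+g^h$ can be sketched in one dimension with hat functions on a uniform mesh. The mesh parameters, the data $g$, and the zero initialization of the interior coefficients below are assumptions made for this example only.

```python
import numpy as np

# Assumed 1D setup: Omega = (0,1), delta = m*h, uniform nodes on [-delta, 1+delta].
h, m = 0.25, 2
nodes = np.arange(-m, int(1 / h) + m + 1) * h        # all nodes x_tilde_j

def hat(j, x):
    """Piecewise linear Lagrange basis psi_j (hat function at node j)."""
    return np.maximum(0.0, 1.0 - np.abs(x - nodes[j]) / h)

g = lambda x: 1.0 + x                                # assumed boundary data
inside = (nodes > 0) & (nodes < 1)                   # nodes in the open domain Omega
u = np.where(inside, 0.0, g(nodes))                  # unknowns (init 0); data imposed

def uh(x):
    """u^h(x) = sum_j psi_j(x) u_j = w^h(x) + g^h(x)."""
    return sum(u[j] * hat(j, x) for j in range(len(nodes)))
```

Note that nodes on $\partial\Omega$ (here $x=0$ and $x=1$) carry the data $g$, consistent with their assignment to $\mathscr{B}\Omega$.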
The finite element approximation $u^h$ associated with the nonlocal problem in (\ref{weak_form_s9}) is then found by solving the following discrete weak formulation: given $g(\mathbf{x})\in\mathcal{V}_t(\Omega\cup\mathscr{B}\Omega)$, and {$b(\mathbf{x})\in\mathcal{W}(\Omega)$} (see Section~\ref{weak_form}), find $u^h\in\mathcal{V}^h$ such that $\forall v^h\in\mathcal{V}^h_0$
\begin{equation}
\begin{aligned}
D(u^h,v^h)=G(v^h).
\end{aligned}
\label{weak_form_discrete1}
\end{equation}
By substituting Eq.~(\ref{uh_fem}) in Eq.~(\ref{weak_form_discrete1}) and by choosing $v^h(\mathbf{x})$ from the set of basis functions $\{\psi_i\}_{i=1}^{J_\Omega}$ we get
\begin{equation}
\begin{aligned}
D(w^h,v^h)=G(v^h)-D(g^h,v^h),
\end{aligned}
\label{weak_form_discrete2}
\end{equation}
which results in the linear system
\begin{equation}
\begin{aligned}
\sum_{j=1}^{J_\Omega}D(\psi_j,\psi_i)u_j=G(\psi_i)-D(g^h,\psi_i),
\end{aligned}
\label{weak_form_discrete3}
\end{equation}
for $i=1,\ldots,J_\Omega$.
Eq.~(\ref{weak_form_discrete3}) can be expressed in matrix form as
\begin{equation}
\begin{aligned}
\mathbf{A}\mathbf{u}=\mathbf{f},
\end{aligned}
\label{weak_form_discrete4}
\end{equation}
where $\mathbf{A}$ is a $J_\Omega\times J_\Omega$ matrix with components
\begin{equation}
\begin{aligned}
A_{ij}&=D(\psi_j,\psi_i)\\
&=\int_{\Omega\cup\mathscr{B}\Omega}\int_{\Omega\cup\mathscr{B}\Omega}\left[\psi_i(\mathbf{y})-\psi_i(\mathbf{x})\right]\gamma(\mathbf{x},\mathbf{y})\left[\psi_j(\mathbf{y})-\psi_j(\mathbf{x})\right]d\mathbf{y}d\mathbf{x},
\end{aligned}
\label{weak_form_discrete5}
\end{equation}
$\mathbf{f}$ is a $J_\Omega\times 1$ vector with components
\begin{equation}
\begin{aligned}
f_{i}&=G(\psi_i)-D(g^h,\psi_i)=\int_{\Omega}\psi_i(\mathbf{x})b(\mathbf{x})d\mathbf{x}\\
&\phantom{=}-\int_{\Omega\cup\mathscr{B}\Omega}\int_{\Omega\cup\mathscr{B}\Omega}\left[\psi_i(\mathbf{y})-\psi_i(\mathbf{x})\right]\gamma(\mathbf{x},\mathbf{y})\left[g^h(\mathbf{y})-g^h(\mathbf{x})\right]d\mathbf{y}d\mathbf{x},
\end{aligned}
\label{weak_form_discrete6}
\end{equation}
and $\mathbf{u}$ is a vector of size $J_\Omega\times 1$ containing the set of unknown coefficients $\{u_j\}_{j=1}^{J_\Omega}$ to be determined.
\subsection{Discrete quadrature}\label{quadrature_discretization}
We introduce the numerical quadrature used to solve Eq.~(\ref{weak_form_discrete2}). As described in Section \ref{FEM_discretization}, we discretize $\Omega\cup\mathscr{B}\Omega$ using the mesh $\mathcal{M}^h$, and $\Omega$ with $\mathcal{M}^h_\Omega\subset\mathcal{M}^h$. Therefore, we can express the left-hand side (LHS) and the right-hand side (RHS) of Eq.~(\ref{weak_form_discrete2}) as
\begin{equation}
\begin{aligned}
&D(w^h,v^h)\\=&\int_{\Omega\cup\mathscr{B}\Omega}\int_{\Omega\cup\mathscr{B}\Omega}\left[v^h(\mathbf{y})-v^h(\mathbf{x})\right]\gamma(\mathbf{x},\mathbf{y})\left[w^h(\mathbf{y})-w^h(\mathbf{x})\right]d\mathbf{y}d\mathbf{x}\\
=&\sum_{\Omega_e^h\in\mathcal{M}^h}\int_{\Omega_e^h}\int_{\Omega\cup\mathscr{B}\Omega}\left[v^h(\mathbf{y})-v^h(\mathbf{x})\right]\gamma(\mathbf{x},\mathbf{y})\left[w^h(\mathbf{y})-w^h(\mathbf{x})\right]d\mathbf{y}d\mathbf{x}\\
=&\sum_{\Omega_e^h\in\mathcal{M}^h}\int_{\Omega_e^h}\int_{\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x},\delta)}\left[v^h(\mathbf{y})-v^h(\mathbf{x})\right]\gamma(\mathbf{x},\mathbf{y})\left[w^h(\mathbf{y})-w^h(\mathbf{x})\right]d\mathbf{y}d\mathbf{x},
\end{aligned}
\label{weak_form_discretequad1}
\end{equation}
and
\begin{equation}
\begin{aligned}
&G(v^h)-D(g^h,v^h)\\=&\int_{\Omega}v^h(\mathbf{x})b(\mathbf{x})d\mathbf{x}\\
&-\int_{\Omega\cup\mathscr{B}\Omega}\int_{\Omega\cup\mathscr{B}\Omega}\left[v^h(\mathbf{y})-v^h(\mathbf{x})\right]\gamma(\mathbf{x},\mathbf{y})\left[g^h(\mathbf{y})-g^h(\mathbf{x})\right]d\mathbf{y}d\mathbf{x}\\
=&\sum_{\Omega_e^h\in\mathcal{M}^h_\Omega}\int_{\Omega_e^h}v^h(\mathbf{x})b(\mathbf{x})d\mathbf{x}\\
&-\sum_{\Omega_e^h\in\mathcal{M}^h}\int_{\Omega_e^h}\int_{\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x},\delta)}\left[v^h(\mathbf{y})-v^h(\mathbf{x})\right]\gamma(\mathbf{x},\mathbf{y})\left[g^h(\mathbf{y})-g^h(\mathbf{x})\right]d\mathbf{y}d\mathbf{x},
\end{aligned}
\label{weak_form_discretequad2}
\end{equation}
where we have restricted the inner integral to $\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x},\delta)$ (see Eq.~(\ref{ball})).
We now describe how to discretize the integrals over the mesh elements $\Omega_e^h$ and over $\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x},\delta)$.
For the outer integral (over the elements) we consider a high-order Gauss quadrature; for further discussion of outer quadrature schemes we refer the reader to \cite{delia2020cookbook}. For $N_q\in\mathbb{N}$, we denote the sets of element Gauss quadrature points and weights used for the element integrals in $D(w^h,v^h)$ by $\{\mathbf{x}^e_q\}_{q=1}^{N_q}\subset\Omega^h_e$ and $\{\omega^e_q\}_{q=1}^{N_q}$, respectively. Similarly, for $N_{b}\in\mathbb{N}$, we denote the sets of element Gauss quadrature points and weights used for the integration over the elements in $G(v^h)$ by $\{\mathbf{x}^e_b\}_{b=1}^{N_b}\subset\Omega^h_e$ and $\{\omega^e_b\}_{b=1}^{N_b}$, respectively. Therefore, from Eqs.~(\ref{weak_form_discretequad1}) and (\ref{weak_form_discretequad2}), we get
\begin{equation}
\begin{aligned}
& D(w^h,v^h)\\=&\sum_{\Omega_e^h\in\mathcal{M}^h}\int_{\Omega_e^h}\int_{\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x},\delta)}\left[v^h(\mathbf{y})-v^h(\mathbf{x})\right]\gamma(\mathbf{x},\mathbf{y})\\&\left[w^h(\mathbf{y})-w^h(\mathbf{x})\right]d\mathbf{y}d\mathbf{x}\\
\approx &\sum_{\Omega_e^h\in\mathcal{M}^h}\sum_{q=1}^{N_q}\int_{\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x}^e_q,\delta)}\left[v^h(\mathbf{y})-v^h(\mathbf{x}^e_q)\right]\gamma(\mathbf{x}^e_q,\mathbf{y})\\&\left[w^h(\mathbf{y})-w^h(\mathbf{x}^e_q)\right]d\mathbf{y}\omega^e_q,
\end{aligned}
\label{weak_form_discretequad3}
\end{equation}
and
\begin{equation}
\begin{aligned}
& G(v^h)-D(g^h,v^h)\\=&\sum_{\Omega_e^h\in\mathcal{M}^h_\Omega}\int_{\Omega_e^h}v^h(\mathbf{x})b(\mathbf{x})d\mathbf{x}\\
&-\sum_{\Omega_e^h\in\mathcal{M}^h}\int_{\Omega_e^h}\int_{\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x},\delta)}\left[v^h(\mathbf{y})-v^h(\mathbf{x})\right]\gamma(\mathbf{x},\mathbf{y})\left[g^h(\mathbf{y})-g^h(\mathbf{x})\right]d\mathbf{y}d\mathbf{x}\\
\approx&\sum_{\Omega_e^h\in\mathcal{M}^h_\Omega}\sum_{b=1}^{N_b}v^h(\mathbf{x}^e_b)b(\mathbf{x}^e_b)\omega^e_b\\
&-\sum_{\Omega_e^h\in\mathcal{M}^h}\sum_{q=1}^{N_q}\int_{\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x}^e_q,\delta)}\left[v^h(\mathbf{y})-v^h(\mathbf{x}^e_q)\right]\gamma(\mathbf{x}^e_q,\mathbf{y})\left[g^h(\mathbf{y})-g^h(\mathbf{x}^e_q)\right]d\mathbf{y}\omega^e_q.
\end{aligned}
\label{weak_form_discretequad4}
\end{equation}
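As a concrete illustration of the outer rule, a reference Gauss--Legendre quadrature can be mapped to an element $[a,b]$ as follows; the 3-point rule and the element endpoints are illustrative choices only.

```python
import numpy as np

# Reference Gauss-Legendre rule on [-1, 1].
xi, wgt = np.polynomial.legendre.leggauss(3)

def element_rule(a, b):
    """Affine map of the reference rule to the element [a, b]."""
    return 0.5 * (a + b) + 0.5 * (b - a) * xi, 0.5 * (b - a) * wgt

xq, wq = element_rule(0.0, 0.25)     # points and weights on a sample element
```

A 3-point rule integrates polynomials of degree up to five exactly, e.g., $\int_0^{0.25}x^2\,dx=0.25^3/3$.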
To discretize the remaining inner integrals over $\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x}^e_q,\delta)$ in Eqs.~(\ref{weak_form_discretequad3}) and (\ref{weak_form_discretequad4}) we use the GMLS quadrature introduced in Section \ref{sec:GMLS_construction}. We start with the case in which $\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x}^e_q,\delta)=\mathscr{H}(\mathbf{x}^e_q,\delta)$, i.e., the integration domain is the full ball of radius $\delta$ around $\mathbf{x}^e_q$. Note that this is the case for all $\mathbf{x}^e_q\in\left(\Omega\cup\partial\Omega\right)$. We then consider the following set of points placed in a regular uniform grid, symmetric around $\mathbf{x}^e_q$:
\begin{equation}
\begin{aligned}\label{quad_points_ball_grid}
\left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{\overline{N}_{qp}}\coloneqq\bigg\{&{\mathbf{x}}^e_{qp}\in\mathbb{R}^d;k_1,k_2,\ldots,k_d\in\mathbb{Z}\setminus\left\{0\right\}:
\\&{\mathbf{x}}^e_{qp}=\left({x}^e_{qp1},{x}^e_{qp2},\ldots,{x}^e_{qpd}\right)\\
&=\left(x^e_{q1}+(2k_1-\sgn(k_1))\frac{\overline{h}}{2},x^e_{q2}+(2k_2-\sgn(k_2))\frac{\overline{h}}{2},\ldots,\right.\\& \left. x^e_{qd}+(2k_d-\sgn(k_d))\frac{\overline{h}}{2}\right),\\
&-\overline{N}_{qp,\delta}\leq k_1,k_2,\ldots,k_d\leq \overline{N}_{qp,\delta}
\bigg\},
\end{aligned}
\end{equation}
where $\overline{N}_{qp,\delta}\in\mathbb{N}$,
\begin{equation}
\overline{h}=\frac{\delta}{\overline{N}_{qp,\delta}}
\end{equation}
is the spacing between grid points, and
\begin{equation}
\overline{N}_{qp}=\left(2\overline{N}_{qp,\delta}\right)^d,
\end{equation}
is the overall number of points. In this work, we take $\overline{N}_{qp,\delta}$ to be a constant independent of $q$, i.e., $\overline{N}_{{q_i}p,\delta}=\overline{N}_{{q_j}p,\delta}$ $\forall q_i,q_j$ such that $\mathbf{x}^e_{q_i},\mathbf{x}^e_{q_j}\in\Omega\cup\mathscr{B}\Omega$.
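A minimal sketch of the grid construction in Eq.~(\ref{quad_points_ball_grid}) follows; the function name \texttt{symmetric\_grid} and the parameter values are illustrative only.

```python
import numpy as np

def symmetric_grid(center, delta, N):
    """Uniform grid of (2N)^d points, symmetric about `center`, spacing delta/N.
    Implements x_qp = x_q + (2k - sgn(k)) * hbar / 2, k in {-N,...,-1,1,...,N}."""
    center = np.asarray(center, dtype=float)
    hbar = delta / N
    k = np.concatenate([np.arange(-N, 0), np.arange(1, N + 1)])
    offs = (2 * k - np.sign(k)) * hbar / 2.0       # no point at the center itself
    grids = np.meshgrid(*([offs] * center.size), indexing="ij")
    return center + np.stack([g.ravel() for g in grids], axis=1)
```

For $d=2$ and $N=\overline{N}_{qp,\delta}=3$ this yields $(2\cdot3)^2=36$ points whose offsets from the center range from $\overline{h}/2$ to $(2N-1)\overline{h}/2$ per axis.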
The subset of $N_{qp}$ points of $\left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{\overline{N}_{qp}}$ contained in $\mathscr{H}(\mathbf{x}^e_q,\delta)\cap\left(\Omega\cup\mathscr{B}\Omega\right)$ is given by
\begin{equation}\label{Inner_quadrature_set}
\begin{aligned}
\left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{{N}_{qp}}&\coloneqq\left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{\overline{N}_{qp}}\cap\mathscr{H}(\mathbf{x}^e_q,\delta)\cap\left(\Omega\cup\mathscr{B}\Omega\right)\\
&=\left \{\mathbf{x}^e_{qp}\in \left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{\overline{N}_{qp}}\cap\left(\Omega\cup\mathscr{B}\Omega\right):
\lvert \mathbf{x}^e_{qp}-\mathbf{x}^e_q \rvert_{\ell^{\tilde{p}}} \leq \delta
\right \}.
\end{aligned}
\end{equation}
This is the set of quadrature points used to discretize the integrals over $\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x}^e_q,\delta)$ in Eqs.~(\ref{weak_form_discretequad3}) and (\ref{weak_form_discretequad4}). When $\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x}^e_q,\delta)=\mathscr{H}(\mathbf{x}^e_q,\delta)$, this set reduces to
\begin{equation}\label{Inner_quadrature_set2}
\begin{aligned}
\left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{{N}_{qp}}=\left \{\mathbf{x}^e_{qp}\in \left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{\overline{N}_{qp}}:
\lvert \mathbf{x}^e_{qp}-\mathbf{x}^e_q \rvert_{\ell^{\tilde{p}}} \leq \delta
\right \}.
\end{aligned}
\end{equation}
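The selection in Eq.~(\ref{Inner_quadrature_set2}) can be sketched as follows for a two-dimensional Euclidean ball ($\tilde{p}=2$); the values of $\delta$ and $\overline{N}_{qp,\delta}$ are illustrative.

```python
import numpy as np

# Assumed parameters: two-dimensional Euclidean ball, delta = 1, N = 3.
delta, N = 1.0, 3
hbar = delta / N
k = np.concatenate([np.arange(-N, 0), np.arange(1, N + 1)])
off = (2 * k - np.sign(k)) * hbar / 2.0     # per-axis offsets {±1/6, ±1/2, ±5/6}
grid = np.stack(np.meshgrid(off, off, indexing="ij"), axis=-1).reshape(-1, 2)

# Keep only the grid points whose l^2 distance from the center is at most delta.
ball = grid[np.linalg.norm(grid, ord=2, axis=1) <= delta]
```

Here the four corner points at offsets $(\pm5/6,\pm5/6)$ are discarded, so $N_{qp}=32$ of the $\overline{N}_{qp}=36$ grid points survive.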
Figures \ref{1D_ball_quadrature} and \ref{2D_ball_quadrature} show the distribution of quadrature points for one-dimensional and two-dimensional Euclidean balls.
\begin{figure}[H]
\begin{center}
\scalebox{1.0}{\includegraphics[trim = 50mm 430mm 50mm 320mm, clip=true,width=1\textwidth]{Figures/Quadrature_schemes_1d}}
\caption{One-dimensional Euclidean ball quadrature points. The filled red dot represents $x^e_q$ while the blue crosses are the associated quadrature points $x^e_{qp}$.}
\label{1D_ball_quadrature}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\scalebox{1.0}{\includegraphics[trim = 50mm 300mm 50mm 175mm, clip=true,width=1\textwidth]{Figures/Quadrature_schemes4}}
\caption{Two-dimensional Euclidean ball quadrature points. The filled red dot represents $\mathbf{x}^e_q$ while the blue crosses are the associated quadrature points $\mathbf{x}^e_{qp}$.}
\label{2D_ball_quadrature}
\end{center}
\end{figure}
To determine the set of quadrature weights $\left\{\omega^e_{qp}\right\}_{p=1}^{N_{qp}}$ associated with $\left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{{N}_{qp}}$, we employ the approach presented in Section \ref{sec:GMLS_construction}, with $\mathbf{x}=\mathbf{x}^e_q$. As our finite dimensional space $\mathbf{V}_h$, i.e., as the space of functions for which we impose exactness of integration, we select
\begin{equation}\label{Vh_exact_constraints}
\begin{aligned}
\mathbf{V}_h=\{&f(\mathbf{x},\mathbf{y}):\left(\Omega\cup\mathscr{B}\Omega\right)\times\left(\Omega\cup\mathscr{B}\Omega\right)\rightarrow\mathbb{R}, \\
&f(\mathbf{x},\mathbf{y})=\gamma(\mathbf{x},\mathbf{y})\left(\mathbf{y}-\mathbf{x}\right)^{\boldsymbol{\beta}} \text{ with }\lvert\boldsymbol{\beta}\rvert=2\},
\end{aligned}
\end{equation}
where we are using multi-index notation: $\boldsymbol{\beta}=(\beta_1,\ldots,\beta_d)$ is a collection of $d$ non-negative integers with length $\lvert\boldsymbol{\beta}\rvert=\sum_{i=1}^{d}\beta_i$, and, for a given $\boldsymbol{\beta}$, $(\mathbf{y}-\mathbf{x})^{\boldsymbol{\beta}}=(y_1-x_1)^{\beta_1}\ldots(y_d-x_d)^{\beta_d}$. Eq.~(\ref{Vh_exact_constraints}) corresponds to assuming that the test and trial functions $v(\mathbf{x})$ and $u(\mathbf{x})$ in Eqs.~(\ref{weak_form_discretequad3}) and (\ref{weak_form_discretequad4}) are linear, consistent with our choice of piecewise linear finite element approximations (see Section \ref{FEM_discretization}). In fact, in the one-dimensional case, Eq.~(\ref{Vh_exact_constraints}) corresponds to imposing exact integration of $\int_{\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(x,\delta)}(y-x)\gamma(x,y)(y-x)dy$, while in the two-dimensional case, of $\int_{\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x},\delta)}(y_1-x_1)\gamma(\mathbf{x},\mathbf{y})(y_1-x_1)d\mathbf{y}$, $\int_{\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x},\delta)}(y_2-x_2)\gamma(\mathbf{x},\mathbf{y})(y_2-x_2)d\mathbf{y}$, and $\int_{\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x},\delta)}(y_1-x_1)\gamma(\mathbf{x},\mathbf{y})(y_2-x_2)d\mathbf{y}$. Furthermore, for kernels $\gamma(\mathbf{x},\mathbf{y})$ of the types expressed in Eqs.~(\ref{kernel_form1}) and (\ref{kernel_form2}), the functions in $\mathbf{V}_h$ depend only on $\mathbf{y}-\mathbf{x}$; as a consequence, the quadrature weights depend only on the relative position $\mathbf{x}^e_{qp}-\mathbf{x}^e_{q}$ between the quadrature points in $\left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{{N}_{qp}}$ and the center of the ball $\mathbf{x}^e_{q}$, for arbitrary $p,q$.
Since the positions of the points $\mathbf{x}^e_{qp}$ are defined relative to $\mathbf{x}^e_{q}$ (see Eq.~(\ref{quad_points_ball_grid})), their relative positions with respect to the ball centers are the same for all full balls. Therefore, the quadrature weights can be evaluated once for a representative full ball and reused for all full balls $\mathscr{H}(\mathbf{x}^e_q,\delta)$, $\forall\mathbf{x}^e_q\in\Omega\cup\partial\Omega$.
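A simplified sketch of the weight computation follows, assuming $\gamma\equiv1$ and replacing the GMLS functional of Section \ref{sec:GMLS_construction} with a plain minimum-norm solve of the exactness constraints over a full two-dimensional Euclidean ball; the exact monomial integrals over the disk are $\pi\delta^4/4$, $0$, and $\pi\delta^4/4$.

```python
import numpy as np

# Symmetric grid of candidate points in the full ball (assumed delta = 1, N = 4).
delta, N = 1.0, 4
hbar = delta / N
k = np.concatenate([np.arange(-N, 0), np.arange(1, N + 1)])
off = (2 * k - np.sign(k)) * hbar / 2.0
S = np.stack(np.meshgrid(off, off, indexing="ij"), axis=-1).reshape(-1, 2)
S = S[np.linalg.norm(S, axis=1) <= delta]    # offsets x_qp - x_q inside the ball

# Exactness constraints: integrate (y-x)^beta, |beta| = 2, exactly over the disk.
B = np.stack([S[:, 0]**2, S[:, 0] * S[:, 1], S[:, 1]**2])
b = np.array([np.pi * delta**4 / 4, 0.0, np.pi * delta**4 / 4])  # exact moments

# Minimum-norm weights satisfying B w = b; the weights depend only on the
# offsets, so they can be reused for every full ball.
w = B.T @ np.linalg.solve(B @ B.T, b)
```

Since the constraint matrix $B$ depends only on the offsets, translating the ball center leaves the weights unchanged, which is the reuse property discussed above.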
Note that in \cite{Leng2021_AsymptoticallyCR} a similar placement of quadrature points within the full ball $\mathscr{H}(\mathbf{x}^e_q,\delta)$, i.e., quadrature points in a regular uniform grid, symmetrically distributed around $\mathbf{x}^e_q$, was employed for the numerical quadrature of strong-form nonlocal diffusion. Furthermore, conditions for obtaining positive quadrature weights, as well as expressions for them, are also provided in \cite{Leng2021_AsymptoticallyCR}. While in this work we do not explicitly impose any restriction on the positivity of the weights, in all our tests the quadrature weights $\left\{\omega^e_{qp}\right\}_{p=1}^{N_{qp}}$ are verified to be positive.
Next, we consider the case in which $\mathbf{x}^e_q\in\mathscr{B}\Omega\setminus\partial\Omega$. In this case, $\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x}^e_q,\delta)\subset\mathscr{H}(\mathbf{x}^e_q,\delta)$, i.e. the integration over $\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x}^e_q,\delta)$ is over a \textit{partial} or \textit{truncated} ball (see Figure \ref{2D_ball_quadrature_boundary} for an example of a two-dimensional partial Euclidean ball). Therefore, the set of points defined by Eq. (\ref{Inner_quadrature_set}) will not be symmetrical with respect to its associated $\mathbf{x}^e_q$. Moreover, its dimension $N_{qp}$ will also be different depending on the position of $\mathbf{x}^e_q$. For this reason, a different set of weights $\left\{\omega^e_{qp}\right\}_{p=1}^{N_{qp}}$ needs to be computed $\forall q$ such that $\mathbf{x}^e_q\in\left(\mathscr{B}\Omega\setminus\partial\Omega\right)$.
Recall that the expressions in Section \ref{sec:GMLS_construction} were presented for integrals over full balls. For partial balls, the constraints in the optimization problem in Eq.~(\ref{GMLS_opt1}) can be stated as
\begin{equation}
\sum_{p=1}^{N_{qp}}f(\mathbf{x}^e_q,\mathbf{x}^e_{qp})\,\omega^e_{qp}=\int_{\mathscr{H}(\mathbf{x}^e_q,\delta)\cap\left(\Omega\cup\mathscr{B}\Omega\right)}f(\mathbf{x}^e_q,\mathbf{y})d\mathbf{y}\;\;\forall f\in\mathbf{V}_h.
\label{GMLS_opt2_partial_balls}
\end{equation}
Due to the complex geometry of $\mathscr{H}(\mathbf{x}^e_q,\delta)\cap\left(\Omega\cup\mathscr{B}\Omega\right)$, the integral on the right-hand side of Eq.~(\ref{GMLS_opt2_partial_balls}) is cumbersome to evaluate analytically. Therefore, in this work, we follow \cite{You2020Regression,trask2019asymptotically,Leng2021_AsymptoticallyCR,Gross2020} and approximate it with the integral over the full ball, as in Eq.~(\ref{GMLS_opt1}).
\subsubsection{Special treatment of the nonlocal boundary}
Let us now consider two elements $e_1$ and $e_2$ and outer quadrature points $\mathbf{x}^{e_1}_{q_1}\in\Omega\cup\partial\Omega$ and $\mathbf{x}^{e_2}_{q_2}\in\mathscr{B}\Omega\setminus\partial\Omega$, together with two points $\mathbf{x}^{e_1}_{q_1p_i}\in\left\{\mathbf{x}^{e_1}_{q_1p}\right\}_{p=1}^{N_{{q_1}p}}$ and $\mathbf{x}^{e_2}_{q_2p_j}\in\left\{\mathbf{x}^{e_2}_{q_2p}\right\}_{p=1}^{N_{{q_2}p}}$, such that
\begin{equation}
\mathbf{x}^{e_1}_{q_1p_i}-\mathbf{x}^{e_1}_{q_1}=\mathbf{x}^{e_2}_{q_2p_j}-\mathbf{x}^{e_2}_{q_2}.
\end{equation}
By employing the optimization-based procedure described in Section~\ref{sec:GMLS_construction} with the finite-dimensional space in (\ref{Vh_exact_constraints}), we can determine the sets of weights $\{\omega^{e_1}_{q_1p}\}_{p=1}^{N_{q_1p}}$ and $\{\omega^{e_2}_{q_2p}\}_{p=1}^{N_{q_2p}}$ associated with $\left\{\mathbf{x}^{e_1}_{q_1p}\right\}_{p=1}^{N_{{q_1}p}}$ and $\left\{\mathbf{x}^{e_2}_{q_2p}\right\}_{p=1}^{N_{{q_2}p}}$, respectively. In general, $\omega^{e_1}_{{q_1}p_i}\neq\omega^{e_2}_{{q_2}p_j}$, meaning that two quadrature points with the same relative position with respect to the centers of the corresponding balls may have different weights. As illustrated numerically in Section \ref{numerical_1D_uniform}, this may cause the discretization error to increase near the interaction domain $\mathscr{B}\Omega$. To circumvent this issue, we consider an extension of the interaction domain of size $t_e$, with $0\leq t_e\leq\delta$, for the computation of the inner quadrature weights. To this end, we define
\begin{equation}\label{interaction domain_extended}
\mathscr{B}\Omega^{t_e}\coloneqq
\left \{\mathbf{y}\in {\mathbb{R} ^{d}}\setminus\Omega:
\exists \mathbf{x}\in\Omega \; \text{such that} \;\lvert \mathbf{y}-\mathbf{x} \rvert_{\ell^{\tilde{p}}} \leq \left(\delta+t_e\right)
\right \}\setminus\mathscr{B}\Omega.
\end{equation}
As with the interaction domain $\mathscr{B}\Omega$, in the case of Euclidean balls, i.e., $\tilde{p}=2$, $\mathscr{B}\Omega^{t_e}$ has rounded corners, which we replace with vertices so that $\mathscr{B}\Omega^{t_e}$ becomes a polyhedral domain that can be easily meshed (see Figure \ref{Interaction_domain_poly_te} for a two-dimensional illustration).
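For the unit box $\Omega=(0,1)^d$, membership in the exact (non-polyhedral) sets $\Omega$, $\mathscr{B}\Omega$, and $\mathscr{B}\Omega^{t_e}$ can be tested directly from the $\ell^{\tilde{p}}$ distance to $\Omega$; the function below and its lumping of $\partial\Omega$ with the closed box are simplifications for illustration.

```python
import numpy as np

def region(y, delta, t_e, p=2):
    """Classify y for the unit box Omega = (0,1)^d using the exact (rounded)
    sets: 'omega' (closed box), 'BOmega', 'BOmega_te', or 'outside'."""
    y = np.asarray(y, dtype=float)
    excess = np.maximum(np.maximum(y - 1.0, -y), 0.0)  # componentwise dist to [0,1]
    dist = np.linalg.norm(excess, ord=p)               # l^p distance to Omega
    if dist == 0.0:
        return "omega"
    if dist <= delta:
        return "BOmega"
    if dist <= delta + t_e:
        return "BOmega_te"
    return "outside"
```

Near a corner of $\Omega$ the distance is genuinely multi-dimensional, which is exactly where the rounded corners of $\mathscr{B}\Omega$ and $\mathscr{B}\Omega^{t_e}$ arise.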
\begin{figure}[H]
\begin{center}
\scalebox{0.75}{\includegraphics[trim = 0mm 200mm 0mm 100mm, clip=true,width=1\textwidth]{Figures/Interaction_domain_poly2_te}}
\caption{Left: a square domain $\Omega$ (in white) with its corresponding interaction domain $\mathscr{B}\Omega$ (in light-blue) and its interaction domain extension $\mathscr{B}\Omega^{t_e}$ (in yellow). Right: the same square domain with a polygonal approximate interaction domain and interaction domain extension, still referred to as $\mathscr{B}\Omega$ and $\mathscr{B}\Omega^{t_e}$, respectively.}
\label{Interaction_domain_poly_te}
\end{center}
\end{figure}
Now, $\forall \mathbf{x}^e_q\in\left(\Omega\cup\mathscr{B}\Omega\right)$, we define the following set of points
\begin{equation}\label{Inner_quadrature_set_extended}
\begin{aligned}
\left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{\tilde{N}_{qp}}&\coloneqq\left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{\overline{N}_{qp}}\cap\mathscr{H}(\mathbf{x}^e_q,\delta)\cap\left(\Omega\cup\mathscr{B}\Omega\cup\mathscr{B}\Omega^{t_e}\right)\\
&=\left \{\mathbf{x}^e_{qp}\in \left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{\overline{N}_{qp}}\cap\left(\Omega\cup\mathscr{B}\Omega\cup\mathscr{B}\Omega^{t_e}\right):
\lvert \mathbf{x}^e_{qp}-\mathbf{x}^e_q \rvert_{\ell^{\tilde{p}}} \leq \delta
\right \},
\end{aligned}
\end{equation}
which coincides with the one defined in (\ref{Inner_quadrature_set}) for $t_e=0$ and, regardless of $t_e$, for all $\mathbf{x}^e_q\in\Omega\cup\partial\Omega$.
For $t_e=\delta$, we have $\left(\Omega\cup\mathscr{B}\Omega\cup\mathscr{B}\Omega^{t_e}\right)\cap\mathscr{H}(\mathbf{x}^e_q,\delta)=\mathscr{H}(\mathbf{x}^e_q,\delta)$, meaning that for every $\mathbf{x}^e_q\in\left(\Omega\cup\mathscr{B}\Omega\right)$ the set of points defined in (\ref{Inner_quadrature_set_extended}) is distributed across each full ball $\mathscr{H}(\mathbf{x}^e_q,\delta)$, as illustrated in Figures \ref{1D_ball_quadrature_boundary} and \ref{2D_ball_quadrature_boundary} for a one-dimensional and a two-dimensional case, respectively.
\begin{figure}[H]
\begin{center}
\scalebox{0.5}{\includegraphics[trim = 0mm 0mm 0mm 0mm, clip=true,width=1\textwidth]{Figures/extension_1D}}
\caption{One-dimensional partial integration ball for the filled red point and mesh extension of size $t_e$. The shown region is the left region of a domain $\mathscr{B}\Omega\cup\Omega=\left[-\delta,1+\delta\right]$, with $\Omega=\left(0,1\right)$.}
\label{1D_ball_quadrature_boundary}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\scalebox{0.5}{\includegraphics[trim = 23mm 0mm 23mm 0mm, clip=true,width=1\textwidth]{Figures/extension_2D}}
\caption{Two-dimensional partial integration Euclidean ball for the filled red point (shaded area) and mesh extension of size $t_e$. The shown region is the top-left region of a domain $\mathscr{B}\Omega\cup\Omega=\left[-\delta,1+\delta\right]\times\left[-\delta,1+\delta\right]$, with $\Omega=\left(0,1\right)\times\left(0,1\right)$.}
\label{2D_ball_quadrature_boundary}
\end{center}
\end{figure}
We can now employ the optimization procedure from Section \ref{sec:GMLS_construction}, for the finite dimensional space defined in (\ref{Vh_exact_constraints}), to determine the set of weights $\{\omega^e_{qp}\}_{p=1}^{\tilde{N}_{qp}}$.
We can then select $\left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{{N}_{qp}}\subseteq\left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{\tilde{N}_{qp}}$ as
\begin{equation}\label{Inner_quadrature_set_extended_reduced}
\begin{aligned}
\left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{{N}_{qp}}&\coloneqq\left \{\mathbf{x}^e_{qp}\in \left\{{\mathbf{x}}^e_{qp}\right\}_{p=1}^{\tilde{N}_{qp}}\cap\left(\Omega\cup\mathscr{B}\Omega\right)
\right \},
\end{aligned}
\end{equation}
and their associated weights $\left\{\omega^e_{qp}\right\}_{p=1}^{{N}_{qp}}\subseteq\left\{{\omega}^e_{qp}\right\}_{p=1}^{\tilde{N}_{qp}}$. Therefore, from Eqs. (\ref{weak_form_discretequad3}) and (\ref{weak_form_discretequad4}), we can obtain
\begin{equation}
\begin{aligned}
& D(w^h,v^h)\\
\approx&\sum_{\Omega_e^h\in\mathcal{M}^h}\sum_{q=1}^{N_q}\int_{\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x}^e_q,\delta)}\left[v^h(\mathbf{y})-v^h(\mathbf{x}^e_q)\right]\gamma(\mathbf{x}^e_q,\mathbf{y})\left[w^h(\mathbf{y})-w^h(\mathbf{x}^e_q)\right]d\mathbf{y}\omega^e_q\\
\approx&\sum_{\Omega_e^h\in\mathcal{M}^h}\sum_{q=1}^{N_q}\sum_{p=1}^{N_{qp}}\left[v^h(\mathbf{x}^e_{qp})-v^h(\mathbf{x}^e_q)\right]\gamma(\mathbf{x}^e_q,\mathbf{x}^e_{qp})\left[w^h(\mathbf{x}^e_{qp})-w^h(\mathbf{x}^e_q)\right]\omega^e_{qp}\omega^e_q\\
=& D^h(w^h,v^h),
\end{aligned}
\label{weak_form_discretequad_full1}
\end{equation}
and
\begin{equation}
\begin{aligned}
&G(v^h)-D(g^h,v^h)\\
\approx&\sum_{\Omega_e^h\in\mathcal{M}^h_\Omega}\sum_{b=1}^{N_b}v^h(\mathbf{x}^e_b)b(\mathbf{x}^e_b)\omega^e_b\\
&-\sum_{\Omega_e^h\in\mathcal{M}^h}\sum_{q=1}^{N_q}\int_{\left(\Omega\cup\mathscr{B}\Omega\right)\cap\mathscr{H}(\mathbf{x}^e_q,\delta)}\left[v^h(\mathbf{y})-v^h(\mathbf{x}^e_q)\right]\gamma(\mathbf{x}^e_q,\mathbf{y})\left[g^h(\mathbf{y})-g^h(\mathbf{x}^e_q)\right]d\mathbf{y}\omega^e_q\\
\approx&\sum_{\Omega_e^h\in\mathcal{M}^h_\Omega}\sum_{b=1}^{N_b}v^h(\mathbf{x}^e_b)b(\mathbf{x}^e_b)\omega^e_b\\
&-\sum_{\Omega_e^h\in\mathcal{M}^h}\sum_{q=1}^{N_q}\sum_{p=1}^{N_{qp}}\left[v^h(\mathbf{x}^e_{qp})-v^h(\mathbf{x}^e_q)\right]\gamma(\mathbf{x}^e_q,\mathbf{x}^e_{qp})\left[g^h(\mathbf{x}^e_{qp})-g^h(\mathbf{x}^e_q)\right]\omega^e_{qp}\omega^e_q\\
=&G^h(v^h)-D^h(g^h,v^h),
\end{aligned}
\label{weak_form_discretequad_full2}
\end{equation}
where we defined
\begin{equation}
\begin{aligned}
D^h(\cdot,v^h)\coloneqq\sum_{\Omega_e^h\in\mathcal{M}^h}\sum_{q=1}^{N_q}\sum_{p=1}^{N_{qp}}&\left[v^h(\mathbf{x}^e_{qp})-v^h(\mathbf{x}^e_q)\right]\gamma(\mathbf{x}^e_q,\mathbf{x}^e_{qp})\\&\left[(\cdot)(\mathbf{x}^e_{qp})-(\cdot)(\mathbf{x}^e_q)\right]\omega^e_{qp}\omega^e_q,
\end{aligned}
\label{weak_form_Dh}
\end{equation}
and
\begin{equation}
\begin{aligned}
&G^h(v^h)\coloneqq\sum_{\Omega_e^h\in\mathcal{M}^h_\Omega}\sum_{b=1}^{N_b}v^h(\mathbf{x}^e_b)b(\mathbf{x}^e_b)\omega^e_b.
\end{aligned}
\label{weak_form_Gh}
\end{equation}
\subsection{Fully discrete variational form for nonlocal diffusion}
We combine the finite element discretization from Section \ref{FEM_discretization} with the discrete quadrature approach discussed in Section \ref{quadrature_discretization}. By replacing the continuous operators in (\ref{weak_form_discrete2}) with the discrete ones defined in Eqs.~(\ref{weak_form_Dh}) and (\ref{weak_form_Gh}), we get
\begin{equation}
\begin{aligned}
D^h(w^h,v^h)=G^h(v^h)-D^h(g^h,v^h),
\end{aligned}
\label{weak_form_fullydiscrete1}
\end{equation}
which, by employing Eqs. (\ref{uh_fem}) and (\ref{uh_fem2}), results in the following linear system
\begin{equation}
\begin{aligned}
\sum_{j=1}^{J_\Omega}D^h(\psi_j,\psi_i)u_j=G^h(\psi_i)-D^h(g^h,\psi_i),
\end{aligned}
\label{weak_form_fullydiscrete2}
\end{equation}
for $i=1,\ldots,J_\Omega$.
Eq.~(\ref{weak_form_fullydiscrete2}) can be expressed in matrix form as
\begin{equation}
\begin{aligned}
\mathbf{A}^h\mathbf{u}=\mathbf{f}^h,
\end{aligned}
\label{weak_form_fullydiscrete3}
\end{equation}
where $\mathbf{A}^h$ is a $J_\Omega\times J_\Omega$ matrix with components
\begin{equation}
\begin{aligned}
A^h_{ij}&=D^h(\psi_j,\psi_i)\\
&=\sum_{\Omega_e^h\in\mathcal{M}^h}\sum_{q=1}^{N_q}\sum_{p=1}^{N_{qp}}\left[\psi_i(\mathbf{x}^e_{qp})-\psi_i(\mathbf{x}^e_q)\right]\gamma(\mathbf{x}^e_q,\mathbf{x}^e_{qp})\left[\psi_j(\mathbf{x}^e_{qp})-\psi_j(\mathbf{x}^e_q)\right]\omega^e_{qp}\omega^e_q,
\end{aligned}
\label{weak_form_fullydiscrete4}
\end{equation}
$\mathbf{f}^h$ is a $J_\Omega\times 1$ vector with components
\begin{equation}
\begin{aligned}
f^h_{i}&=G^h(\psi_i)-D^h(g^h,\psi_i)=\sum_{\Omega_e^h\in\mathcal{M}^h_\Omega}\sum_{b=1}^{N_b}\psi_i(\mathbf{x}^e_b)b(\mathbf{x}^e_b)\omega^e_b\\
&-\sum_{\Omega_e^h\in\mathcal{M}^h}\sum_{q=1}^{N_q}\sum_{p=1}^{N_{qp}}\left[\psi_i(\mathbf{x}^e_{qp})-\psi_i(\mathbf{x}^e_q)\right]\gamma(\mathbf{x}^e_q,\mathbf{x}^e_{qp})\left[g^h(\mathbf{x}^e_{qp})-g^h(\mathbf{x}^e_q)\right]\omega^e_{qp}\omega^e_q,
\end{aligned}
\label{weak_form_fullydiscrete5}
\end{equation}
and $\mathbf{u}$ is a vector of size $J_\Omega\times 1$ containing the set of unknown coefficients $\{u_j\}_{j=1}^{J_\Omega}$ to be determined.
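To make the full pipeline concrete, the following one-dimensional sketch assembles $\mathbf{A}^h$ and $\mathbf{f}^h$ and solves the system. It assumes a constant kernel value $\gamma_0$, uses uniform inner weights $\overline{h}$ in place of the GMLS weights, and imposes constant boundary data $g\equiv2$ with $b=0$, for which the discrete solution is exactly the constant $2$ (constants lie in the kernel of $D^h$ because the basis functions form a partition of unity).

```python
import numpy as np

n, m = 12, 3                          # elements in Omega; m = delta / h
h = 1.0 / n
delta = m * h
nodes = np.arange(-m, n + m + 1) * h  # uniform nodes on [-delta, 1 + delta]
J = len(nodes)
gamma0 = 3.0 / (2.0 * delta**3)       # assumed constant kernel value

def hat(j, x):
    """Hat function psi_j at node j."""
    return np.maximum(0.0, 1.0 - np.abs(x - nodes[j]) / h)

# Outer rule: 2-point Gauss on every element of Omega u BOmega.
gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)
xq = (0.5 * (nodes[:-1] + nodes[1:])[:, None] + 0.5 * h * gp).ravel()
wq = np.full(xq.size, 0.5 * h)

# Inner rule: symmetric uniform grid with plain weights hbar (stand-in for
# GMLS), truncated at the outer boundary of the meshed domain.
Nin = 6
hbar = delta / Nin
k = np.concatenate([np.arange(-Nin, 0), np.arange(1, Nin + 1)])
s = (2 * k - np.sign(k)) * hbar / 2.0

A = np.zeros((J, J))                  # full D^h(psi_j, psi_i) over all nodes
for q in range(xq.size):
    y = xq[q] + s
    y = y[(y >= nodes[0]) & (y <= nodes[-1])]
    P = np.stack([hat(j, y) - hat(j, xq[q]) for j in range(J)])
    A += wq[q] * gamma0 * hbar * (P @ P.T)

# Volume constraint: u_j = g = 2 at nodes outside the open domain; b = 0.
interior = np.where((nodes > 1e-12) & (nodes < 1 - 1e-12))[0]
bnd = np.setdiff1d(np.arange(J), interior)
f = -A[np.ix_(interior, bnd)] @ (2.0 * np.ones(bnd.size))
u = np.linalg.solve(A[np.ix_(interior, interior)], f)
```

Assembling the full matrix over all nodes and then extracting the interior block is algebraically equivalent to restricting the sums to $i,j=1,\ldots,J_\Omega$ and moving the $g^h$ terms to the right-hand side, as in Eqs.~(\ref{weak_form_fullydiscrete4}) and (\ref{weak_form_fullydiscrete5}).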
\section{Properties of the numerical scheme}\label{sec:convergence}
In this section, we investigate the numerical properties of the proposed scheme. We first describe different types of convergence in the context of nonlocal models (see Figure \ref{Convergence_types_PD}) and then provide a convergence analysis in the $H^1$ norm in a simplified, one-dimensional setting.
\begin{figure}[H]
\begin{center}
\scalebox{1.0}{\includegraphics[trim = 23mm 40mm 23mm 23mm, clip=true,width=1\textwidth]{Figures/Convergence_types_PD}}
\caption{Different convergence paths for a nonlocal model with a finite length scale $\delta$. $u$ and $u_0$ represent the continuum nonlocal and local solutions, respectively, while $u^h$ and $u_0^h$ are their discrete counterparts.}
\label{Convergence_types_PD}
\end{center}
\end{figure}
\subsection{Brief review of asymptotically compatible schemes}
As described in Section \ref{sec:Nonlocal_diff_model}, continuum nonlocal models are characterized by the length scale $\delta$. Under proper regularity assumptions, as $\delta\to 0$, nonlocal solutions converge to their local, PDE counterparts \cite{du12}; we refer to this type of convergence as $\delta$-convergence. When a discretization scheme is employed, its size $h$ introduces a second length scale. For a fixed horizon $\delta$, a discretization scheme is $h$-convergent if the nonlocal discrete solution converges to the continuum nonlocal solution as $h\rightarrow0$.
Lastly, a discretization scheme is called \textit{asymptotically compatible} if, in addition to the $\delta$- and $h$-convergence above, the discrete solution to the nonlocal problem also converges to the analytical solution of its local PDE counterpart as $\delta\rightarrow0$ and $h\rightarrow0$ \cite{XTian2014_AC,peryhandbook}. For numerical schemes where $h$ and $\delta$ are tied together by the relationship $m=\delta/h$, the term \textit{asymptotic compatibility} refers only to this last type of convergence, i.e., as $\delta\rightarrow0$ and $h\rightarrow0$ the discrete solution to the nonlocal problem converges to the analytical solution of the associated local problem \cite{trask2019asymptotically,Leng2021_AsymptoticallyCR}.
As the proposed scheme is such that $m=\delta/h\in\mathbb{N}$, we will only focus on the latter type of convergence (i.e., $\delta\rightarrow0$ and $h\rightarrow0$ simultaneously).
\subsection{Convergence analysis}
In this section we derive a preliminary estimate for the convergence of discrete solutions obtained via inexact quadrature of the inner integral. We assume that the outer quadrature is performed with a high-accuracy scheme whose contribution to the overall error is negligible, and we therefore treat the outer integration as exact in the following analysis. Then,
\begin{equation}
\begin{aligned}
D^h(u^h,u^h)&=\int_{\Omega\cup\mathscr{B}\Omega}\sum_{j=1}^{N_j}\gamma(\mathbf{x},\mathbf{x}_j)\left[u^h(\mathbf{x}_j)-u^h(\mathbf{x})\right]^2\omega_j d\mathbf{x}\\
&=\sum_{\Omega_e^h\in\mathcal{M}^h}\int_{\Omega_e^h}\sum_{j=1}^{N_j}\gamma(\mathbf{x},\mathbf{x}_j)\left[u^h(\mathbf{x}_j)-u^h(\mathbf{x})\right]^2\omega_j d\mathbf{x},
\end{aligned}
\label{Proof1_Dh}
\end{equation}
where $\mathbf{x}_j$ and $\omega_j$ are the $j$-th inner quadrature point and associated weight in the ball $\mathscr{H}(\mathbf{x},\delta)$, respectively, and $N_j$ is the total number of inner quadrature points. Furthermore, we restrict ourselves to kernels of the form \eqref{kernel_form1}.
\subsubsection{Uniform $\mathcal{V}^h$-coercivity}\label{proof_coercivity}
We show, under certain conditions, that the approximate bilinear form $D^h(\cdot,\cdot):\mathcal{V}^h\times\mathcal{V}^h\rightarrow\mathbb{R}$ is uniformly $\mathcal{V}^h$-coercive. Recall that we are considering a $C^0$ linear FE approximation. Therefore, $\forall \mathbf{x}$
\begin{equation}\label{Proof2_uh}
u^h({\xb_j})-u^h(\mathbf{x}) =\left\{\begin{aligned}
\left.\nabla u^h(\mathbf{x})\right|_e\cdot({\xb_j}-\mathbf{x}), &\ \text{for ${\xb_j}$, $\mathbf{x}$ in element $e$,}\\
u^h({\xb_j})-u^h(\mathbf{x}), &\ \text{otherwise.}\\
\end{aligned}\right.
\end{equation}
We assume that we have a subset of the quadrature points $\{\mathbf{x}_j\}$ that are in the same element as $\mathbf{x}$, $\{\mathbf{x}_{j_\text{in}}\}$, and another subset of the quadrature points that are not, $\{\mathbf{x}_{j_\text{out}}\}$. Membership of points in these subsets is linked to the chosen spacing for the quadrature points; we assume that this spacing is small enough relative to the element size so that $\{\mathbf{x}_{j_\text{in}}\}$ is nonempty. We also assume that the inner quadrature weights $\{\omega_j\}$ are positive. Then, Eq.~(\ref{Proof1_Dh}) can be recast as
\begin{equation}
\begin{aligned}
D^h(u^h,u^h)=&\sum_{\Omega_e^h\in\mathcal{M}^h}\int_{\Omega_e^h}\left[\sum_{{j_\text{in}}=1}^{N_{j_\text{in}}}\gamma(\mathbf{x},\mathbf{x}_{j_\text{in}})\left[u^h(\mathbf{x}_{j_\text{in}})-u^h(\mathbf{x})\right]^2\omega_{j_\text{in}}\right.\\
&\left.+\sum_{{j_\text{out}}=1}^{N_{j_\text{out}}}\gamma(\mathbf{x},\mathbf{x}_{j_\text{out}})\left[u^h(\mathbf{x}_{j_\text{out}})-u^h(\mathbf{x})\right]^2\omega_{j_\text{out}}\right] d\mathbf{x}\\
\geq& \sum_{\Omega_e^h\in\mathcal{M}^h}\int_{\Omega_e^h}\sum_{{j_\text{in}}=1}^{N_{j_\text{in}}}\gamma(\mathbf{x},\mathbf{x}_{j_\text{in}})\left[u^h(\mathbf{x}_{j_\text{in}})-u^h(\mathbf{x})\right]^2\omega_{j_\text{in}}d\mathbf{x}\\
=&\frac{\zeta}{\delta^{d+2}}\sum_{\Omega_e^h\in\mathcal{M}^h}\int_{\Omega_e^h}\sum_{{j_\text{in}}=1}^{N_{j_\text{in}}}\left[u^h(\mathbf{x}_{j_\text{in}})-u^h(\mathbf{x})\right]^2\omega_{j_\text{in}}d\mathbf{x}\\
=&\frac{\zeta}{\delta^{d+2}}\sum_{\Omega_e^h\in\mathcal{M}^h}\int_{\Omega_e^h}\sum_{{j_\text{in}}=1}^{N_{j_\text{in}}}\left[\left.\nabla u^h(\mathbf{x})\right|_e\cdot(\mathbf{x}_{j_\text{in}}-\mathbf{x})\right]^2\omega_{j_\text{in}}d\mathbf{x}.
\end{aligned}
\label{Proof3_Dh_inout}
\end{equation}
For simplicity, we now restrict our discussion to the one-dimensional case. In this setting, we assume that there exists a constant $C > 0$ independent of $h$ and $\delta$, such that $C\delta < \omega_\text{min} < \omega_{j_\text{in}}$ (which we verify by direct computation in \ref{sec:1d-weights}). Under these assumptions, we have the following coercivity result for the discrete bilinear form.
\begin{lem}\label{lem:coercive} There exists a constant $c>0$ independent of $h$ and $\delta$ such that $\forall u^h\in\mathcal{V}^h_0$
\begin{equation}\label{Proof5_Dh_inout2}
\begin{aligned}
D^h(u^h,u^h)\geq c\vert u^h\vert_{H^1}^2.
\end{aligned}
\end{equation}
\end{lem}
\begin{proof}
Restricting Eq.~(\ref{Proof3_Dh_inout}) to the one-dimensional setting, using our assumed lower bound on $\omega_{j_\text{in}}$, abbreviating restriction of $u^h$ to element $e$ as $u^h_e$, and allowing the symbol $C$ to be a generic constant independent of $h$ and $\delta$ (possibly with different numerical values in different places), we get
\newpage
\begin{equation}\label{Proof4_Dh_inout2}
\begin{aligned}
D^h(u^h,u^h)&\geq\frac{\zeta}{\delta^{3}}\sum_{\Omega_e^h\in\mathcal{M}^h}\int_{\Omega_e^h}\sum_{{j_\text{in}}=1}^{N_{j_\text{in}}}\left[u^h({x}_{j_\text{in}})-u^h({x})\right]^2\omega_{j_\text{in}}d{x}\\
&=\frac{\zeta}{\delta^{3}}\sum_{\Omega_e^h\in\mathcal{M}^h}\int_{\Omega_e^h}\left\lbrack\sum_{{j_\text{in}}=1}^{N_{j_\text{in}}}\left\{\frac{du^h_e}{dx}\cdot(x_{j_\text{in}}-x)\right\}^2\omega_{j_\text{in}}\right\rbrack\,d{x}\\
&=\frac{\zeta}{\delta^{3}}\sum_{\Omega_e^h\in\mathcal{M}^h}\left(\frac{du^h_e}{dx}\right)^2\int_{\Omega_e^h}\left\lbrack\sum_{{j_\text{in}}=1}^{N_{j_\text{in}}}(x_{j_\text{in}}-x)^2\omega_{j_\text{in}}\right\rbrack\,d{x}\\
&=\frac{\zeta}{\delta^{3}}\sum_{\Omega_e^h\in\mathcal{M}^h}\left(\frac{du^h_e}{dx}\right)^2\sum_{{j_\text{in}}=1}^{N_{j_\text{in}}}\left\{\int_{\Omega_e^h}(x_{j_\text{in}}-x)^2\omega_{j_\text{in}}d{x}\right\}\\
&\geq\frac{\zeta}{\delta^{3}}\sum_{\Omega_e^h\in\mathcal{M}^h}\left(\frac{du^h_e}{dx}\right)^2\sum_{{j_\text{in}}=1}^{N_{j_\text{in}}}\left\{\int_{\Omega_e^h\setminus (x_{j_\text{in}}-h/4,x_{j_\text{in}}+h/4)}(x_{j_\text{in}}-x)^2\omega_{j_\text{in}}d{x}\right\}\\
&\geq\frac{\zeta}{\delta^{3}}\sum_{\Omega_e^h\in\mathcal{M}^h}\left(\frac{du^h_e}{dx}\right)^2\sum_{{j_\text{in}}=1}^{N_{j_\text{in}}}\left\{\int_{\Omega_e^h\setminus (x_{j_\text{in}}-h/4,x_{j_\text{in}}+h/4)}\left(\frac{h}{4}\right)^2\omega_{j_\text{in}}d{x}\right\}\\
&\geq\frac{\zeta}{\delta^{3}}\sum_{\Omega_e^h\in\mathcal{M}^h}\left(\frac{du^h_e}{dx}\right)^2\sum_{{j_\text{in}}=1}^{N_{j_\text{in}}}\left\{\left(\frac{h}{2}\right)\left(\frac{h}{4}\right)^2\omega_{j_\text{in}}\right\}\\
&\geq\frac{\zeta}{\delta^{3}}\sum_{\Omega_e^h\in\mathcal{M}^h}\left(\frac{du^h_e}{dx}\right)^2\left\{Ch^3\omega_\text{min}\right\}\\
&\geq\frac{\zeta}{\delta^{3}}\sum_{\Omega_e^h\in\mathcal{M}^h}\left(\frac{du^h_e}{dx}\right)^2\left\{Ch^3\delta\right\}\\
&\geq\frac{C}{\delta^{3}}\left(\sum_{\Omega_e^h\in\mathcal{M}^h}\left(\frac{du^h_e}{dx}\right)^2h\right)h^2\delta\\
&\geq\frac{Ch^2}{\delta^2}\vert u^h\vert_{H^1}^2,
\end{aligned}
\end{equation}
which gives the desired result when $h\sim\delta$.
\end{proof}
\subsubsection{Preliminary convergence estimate}
We prove the convergence of numerical solutions for a simple one-dimensional case with a uniform grid and $\delta=h$. To do that, we first prove a lemma that holds in more general situations.
We assume that for any $v^h\in \mathcal{V}^h_0$, $\int b(\mathbf{x}) v^h(\mathbf{x})d\mathbf{x}$ is exactly integrated. By using the same arguments as in Strang's first lemma, we have the following result.
\begin{lem}
\label{lem:strang}
There exists $C>0$ independent of $\delta$ such that
\[
\| u-u^h\|_{\mathcal{V}} \leq C \inf_{v^h\in \mathcal{V}_0^h} \left(\| u-v^h\|_{\mathcal{V}} +
\sup_{w^h\in \mathcal{V}_0^h}\frac{|D(v^h, w^h)-D^h( v^h, w^h )|}{\| w^h\|_{\mathcal{V}}} \right).
\]
In addition, if $u\in H^1$, then
\[
\| u-u^h\|_{H^1} \leq C \inf_{v^h\in \mathcal{V}_0^h} \left(\| u-v^h\|_{H^1} +
\sup_{w^h\in \mathcal{V}_0^h}\frac{|D(v^h, w^h)-D^h( v^h, w^h )|}{\| w^h\|_{\mathcal{V}}} \right).
\]
\end{lem}
\begin{proof}
By Lemma \ref{lem:coercive}, for any $v^h \in \mathcal{V}_0^h$, we have
\[
\begin{split}
c |u^h - v^h|_{H^1}^2 \leq& D^h(u^h- v^h, u^h- v^h ) \\
=& D(u-v^h, u^h -v^h) +(D(v^h, u^h -v^h)
- D^h( v^h, u^h- v^h )) \\
&+ (D^h(u^h, u^h- v^h) - D(u,u^h -v^h )) \\
=& D(u-v^h, u^h -v^h) + (D(v^h, u^h -v^h)-D^h( v^h, u^h- v^h )).
\end{split}
\]
By the boundedness of the bilinear form, i.e., $|D(u-v^h, u^h -v^h)|\leq C \| u-v^h\|_{\mathcal{V}} \| u^h -v^h\|_{\mathcal{V}} $, where we write $\| w\|_{\mathcal{V}} = \sqrt{D(w,w)}$ for $w\in \mathcal{V}$, we have
\begin{equation}
\label{eq:stranglemma_1}
\begin{split}
c \frac{|u^h - v^h|_{H^1}^2}{\| u^h -v^h\|_{\mathcal{V}}} &\leq C \| u-v^h\|_{\mathcal{V}} + \frac{|D(v^h, u^h -v^h)-D^h( v^h, u^h- v^h )|}{\| u^h -v^h\|_{\mathcal{V}}} \\
& \leq C \| u-v^h\|_{\mathcal{V}} +
\sup_{w^h\in \mathcal{V}_0^h}\frac{|D(v^h, w^h)-D^h( v^h, w^h )|}{\| w^h\|_{\mathcal{V}}}.
\end{split}
\end{equation}
Notice that $H^1$ is continuously embedded in $\mathcal{V}$, i.e.,
\[
\| v\|_{\mathcal{V}}\leq C \| v\|_{H^1} \quad \forall v\in H^1,
\]
where the constant $C$ is independent of $\delta$ (see e.g., \cite{BBM01}). From \eqref{eq:stranglemma_1}, for any $v^h \in \mathcal{V}_0^h$
\[
\| u^h - v^h \|_{\mathcal{V}}\leq C \left( \| u-v^h\|_{\mathcal{V}} +
\sup_{w^h\in \mathcal{V}_0^h}\frac{|D(v^h, w^h)-D^h( v^h, w^h )|}{\| w^h\|_{\mathcal{V}}} \right)
\]
and, if in addition $u\in H^1$,
\[
\| u^h - v^h \|_{H^1}\leq C \left( \| u-v^h\|_{H^1} +
\sup_{w^h\in \mathcal{V}_0^h}\frac{|D(v^h, w^h)-D^h( v^h, w^h )|}{\| w^h\|_{\mathcal{V}}} \right).
\]
Therefore from the triangle inequality
\[
\| u - u^h\|\leq \| u - v^h\| + \| u^h - v^h \|
\]
with either the $\mathcal{V}$-norm or the $H^1$-norm, we can get the desired result.
\end{proof}
The following theorem provides a convergence result for the simple one-dimensional case with a uniform grid, $\delta=h$, and the kernel function $\gamma(x,y) = \frac{1}{\delta^3} 1_{\{ |y-x|<\delta \}}$, where, for simplicity, we have dropped the constant $\zeta$.
\begin{thm}\label{thm:conv}
Assume we have a uniform grid in one-dimension and $\delta=h$.
In addition, assume that $u\in H^2$. Then,
\[
\| u - u^h\|_{H^1} \leq C h ,
\]
where $C>0$ is a constant that depends on $\| u\|_{H^2}$, but is independent of $h$ and $\delta$.
\end{thm}
\begin{proof}
By Lemma \ref{lem:strang} and $u\in H^2$, we have
\[
\| u-u^h\|_{H^1} \leq C \inf_{v^h\in \mathcal{V}_0^h} \left(\| u-v^h\|_{H^1} +
\sup_{w^h\in \mathcal{V}_0^h}\frac{|D(v^h, w^h)-D^h( v^h, w^h )|}{\| w^h\|_{\mathcal{V}}} \right)
\]
where, from Eq. (\ref{Vh0}), $\mathcal{V}^h_0$ is the space of continuous piecewise linear functions that satisfy zero Dirichlet boundary conditions. Taking $v^h:= I_h u$, the piecewise linear interpolation of $u$, it is well known in finite element analysis that
\[
\| u-I_h u \|_{H^1}\leq C h \| u\|_{H^2}.
\]
Let $\Omega\cup\mathscr{B}\Omega = \cup_{i=1}^N \Omega^h_i$ and extend functions by zero outside $\Omega\cup\mathscr{B}\Omega$ when necessary (e.g., on $\mathscr{B}\Omega^{t_e}$). Then, by assuming $\delta=h$, we have
\begin{equation}
\begin{split}
&D(v^h, w^h)\\ &= \sum_{i=1}^N \int_{\Omega^h_i } \int_{\mathscr{H}{(x,\delta)}} \gamma(x,y) (v^h(y) - v^h (x)) (w^h(y) - w^h (x)) dy dx \\
&= \sum_{i=1}^N \int_{\Omega^h_i } \int_{\Omega^h_{i-1} \cup \Omega^h_i \cup \Omega^h_{i+1}} \gamma(x,y) (v^h(y) - v^h (x)) (w^h(y) - w^h (x)) dy dx \,.
\end{split}
\end{equation}
Notice that in the above equation, $\Omega^h_0$ and $\Omega^h_{N+1}$ are outside $\Omega\cup\mathscr{B}\Omega$. Since the functions are always zero on $\Omega^h_0, \Omega^h_1, \Omega^h_N, \Omega^h_{N+1}$, we see that $\int_{\Omega^h_1} \int_{\Omega^h_0} \cdots$ and $\int_{\Omega^h_N} \int_{\Omega^h_{N+1}} \cdots$ are zero.
Assume that on each $\Omega^h_i$, $v^h(x)$ is a linear function with slope $a_i \in \mathbb{R}$ and $w^h(x)$ is a linear function with slope $b_i \in \mathbb{R}$. Then we have
\begin{equation}
\begin{aligned}
&\int_{\Omega^h_i } \int_{\Omega^h_i} \gamma(x,y) (v^h(y) - v^h (x)) (w^h(y) - w^h (x)) dy dx \\&= \int_{\Omega^h_i } \int_{\Omega^h_i} \gamma(x,y) (y-x)^2 a_i b_i dy dx .
\end{aligned}
\end{equation}
Now we calculate $\int_{\Omega^h_i } \int_{\Omega^h_{i+1}} \cdots dy dx$. In this case, we have $y \in \Omega^h_{i+1}$ and $x \in \Omega^h_i$. Let $s_i$ denote the point that connects $\Omega^h_i$ and $\Omega^h_{i+1}$, then we can write
\begin{equation}
\begin{split}
&v^h(y) = v^h(s_i) + (y-s_i) a_{i+1}, \quad w^h(y) = w^h(s_i) + (y-s_i) b_{i+1} , \\
&v^h(x) = v^h(s_i) + (x-s_i) a_{i}, \quad w^h(x) = w^h(s_i) + (x-s_i) b_{i} ,
\end{split}
\end{equation}
for all $y\in \Omega^h_{i+1}$ and $x \in \Omega^h_i$. Therefore,
\begin{equation}
v^h(y) - v^h(x) = (y-s_i) a_{i+1} + (s_i-x) a_i = (y-s_i) (a_{i+1} -a_i) + (y-x) a_i
\end{equation}
and similarly for $w^h(y) - w^h(x) $. We then have
\begin{equation}
\begin{aligned}
&\int_{\Omega^h_i } \int_{\Omega^h_{i+1}} \gamma(x,y) (v^h(y) - v^h (x)) (w^h(y) - w^h (x)) dy dx \\
= &\int_{\Omega^h_i } \int_{\Omega^h_{i+1}} \gamma(x,y)\left[(y-s_i) (a_{i+1} -a_i) + (y-x) a_i \right] \\ &\phantom{ \int_{\Omega^h_i } \int_{\Omega^h_{i+1}} \gamma(x,y)}\left[(y-s_i) (b_{i+1} -b_i) + (y-x) b_i \right] dy dx . \\
\end{aligned}
\end{equation}
Notice that, in the above, there is a term $\int_{\Omega^h_i } \int_{\Omega^h_{i+1}} \gamma(x,y) (y-x)^2 a_i b_i dydx $ which can be combined with $\int_{\Omega^h_i } \int_{\Omega^h_{i}} \gamma(x,y) (y-x)^2 a_i b_i dy dx$.
The rest of the terms can be written as
\begin{equation}\label{eq:I_i_right}
\begin{aligned}
&\int_{\Omega^h_i } \int_{\Omega^h_{i+1}} \gamma(x,y) \left((y-s_i)^2 (a_{i+1} -a_i)(b_{i+1} - b_i)\right. \\ &\left.+ (y-s_i)(y-x) \left[ (a_{i+1}-a_i) b_i + (b_{i+1} - b_i) a_i \right] \right) dy dx \\
= & \int_{\Omega^h_i } \int_{\Omega^h_{i+1}} \gamma(x,y) \left((y-s_i)^2 (a_{i+1} b_{i+1} - a_i b_i) \right.\\ &\left.+ (y-s_i)(s_i-x) \left[ (a_{i+1}-a_i) b_i + (b_{i+1} - b_i) a_i \right] \right) dy dx \\
= & (a_{i+1} b_{i+1} - a_i b_i) \int_{\Omega^h_i} \int_{s_i}^{x+\delta} \frac{(y-s_i)^2}{\delta^3} dy dx \\ & + \left[ (a_{i+1}-a_i) b_i + (b_{i+1} - b_i) a_i \right] \int_{\Omega^h_i} \int_{s_i}^{x+\delta} \frac{(y-s_i)(s_i-x)}{\delta^3} dy dx .
\end{aligned}
\end{equation}
We can similarly calculate $\int_{\Omega^h_i} \int_{\Omega^h_{i-1}} \cdots dydx$ and get $\int_{\Omega^h_i } \int_{\Omega^h_{i-1}} \gamma(x,y) (y-x)^2 a_i b_i dydx $ (which is to be combined with $\int_{\Omega^h_i } \int_{\Omega^h_{i}} \gamma(x,y) (y-x)^2 a_i b_i dy dx$) and
\begin{equation}
\label{eq:I_i_left}
\begin{aligned}
&(a_{i-1} b_{i-1} - a_i b_i) \int_{\Omega^h_i} \int_{x-\delta}^{s_{i-1}} \frac{(y-s_{i-1})^2}{\delta^3} dy dx + \\& \left[ (a_{i-1}-a_i) b_i + (b_{i-1} - b_i) a_i \right] \int_{\Omega^h_i} \int_{x-\delta}^{s_{i-1}} \frac{(y-s_{i-1})(s_{i-1}-x)}{\delta^3} dy dx .
\end{aligned}
\end{equation}
Replacing $i-1$ with $i$ in \eqref{eq:I_i_left}, we get the contribution from $\int_{\Omega^h_{i+1}} \int_{\Omega^h_i} \cdots dydx$:
\begin{equation}
\label{eq:I_i+1_left}
\begin{aligned}
&(a_{i} b_{i} - a_{i+1} b_{i+1}) \int_{\Omega^h_{i+1}} \int_{x-\delta}^{s_{i}} \frac{(y-s_{i})^2}{\delta^3} dy dx \\&+ \left[ (a_{i}-a_{i+1}) b_{i+1} + (b_{i} - b_{i+1}) a_{i+1} \right] \int_{\Omega^h_{i+1}} \int_{x-\delta}^{s_{i}} \frac{(y-s_{i})(s_{i}-x)}{\delta^3} dy dx
\end{aligned}
\end{equation}
Now by adding \eqref{eq:I_i_right} with \eqref{eq:I_i+1_left} and noticing, from symmetry, that
\begin{equation}
\label{eq:symmetery_continuous}
\begin{split}
& \int_{\Omega^h_i} \int_{s_i}^{x+\delta} \frac{(y-s_i)^2}{\delta^3} dy dx = \int_{\Omega^h_{i+1}} \int_{x-\delta}^{s_{i}} \frac{(y-s_{i})^2}{\delta^3} dy dx \\
&\int_{\Omega^h_i} \int_{s_i}^{x+\delta} \frac{(y-s_i)(s_i-x)}{\delta^3} dy dx = \int_{\Omega^h_{i+1}} \int_{x-\delta}^{s_{i}} \frac{(y-s_{i})(s_{i}-x)}{\delta^3} dy dx ,
\end{split}
\end{equation}
we get
\begin{equation}
\begin{aligned}
| \eqref{eq:I_i_right} + \eqref{eq:I_i+1_left} | &\leq 2 |a_{i+1} -a_{i}| |b_{i+1} - b_i| \int_{\Omega^h_i} \int_{s_i}^{x+\delta} \frac{|y-s_i||s_i-x|}{\delta^3} dy dx \\& \leq C h |a_{i+1} -a_{i}| |b_{i+1} - b_i|.
\end{aligned}
\end{equation}
Combining the above results, we have
\begin{equation}\label{eq:continuous_form}
\begin{aligned}
D(v^h, w^h) &= \sum_{i=1}^N \int_{\Omega^h_i} \int_{\mathscr{H}{(x,\delta)}} \gamma(x,y) (y-x)^2 a_i b_i dydx \\&\phantom{= }+ \sum_{i=0}^N |a_{i+1} -a_{i}| |b_{i+1} - b_i| O(h) .
\end{aligned}
\end{equation}
Now to estimate $ D^h(v^h, w^h) $, we follow the same procedure as for $D(v^h, w^h)$, but with the inner integral replaced by GMLS quadrature.
In particular, if we have symmetry of the quadrature points, then
\begin{equation}
\label{eq:symmetery_discrete}
\begin{split}
& \int_{\Omega^h_i} \sum_{ s_i < y_j < x+\delta} \frac{(y_j -s_i)^2}{\delta^3} \omega_j dx = \int_{\Omega^h_{i+1}} \sum_{x-\delta< y_j < s_{i}} \frac{(y_j-s_{i})^2}{\delta^3} \omega_j dx \\
&\int_{\Omega^h_i} \sum_{ s_i < y_j < x+\delta} \frac{(y_j-s_i)(s_i-x)}{\delta^3} \omega_j dx = \int_{\Omega^h_{i+1}} \sum_{x-\delta< y_j < s_{i}} \frac{(y_j-s_{i})(s_{i}-x)}{\delta^3} \omega_j dx .
\end{split}
\end{equation}
Then we can show that
\begin{equation}\label{eq:discrete_form}
\begin{aligned}
D^h(v^h, w^h) &= \sum_{i=1}^N \int_{\Omega^h_i} \sum_{j=1}^{NP} \gamma(x,y_j) (y_j-x)^2 a_i b_i \omega_j dx\\&\phantom{= } + \sum_{i=0}^N |a_{i+1} -a_{i}| |b_{i+1} - b_i| O(h) .
\end{aligned}
\end{equation}
Comparing \eqref{eq:continuous_form} with \eqref{eq:discrete_form}, we notice that $\int_{\Omega^h_i}\int_{\mathscr{H}{(x,\delta)}} \gamma(x,y) (y-x)^2 dy dx= \int_{\Omega^h_i} \sum_{j=1}^{NP} \gamma(x,y_j) (y_j-x)^2 \omega_j dx$. We therefore only need an estimate of
\begin{equation}
\label{eq:strang_estimate}
\sup_{w^h \in \mathcal{V}^h_0} \frac{ \sum_{i=0}^N |a_{i+1} -a_{i}| |b_{i+1} - b_i| O(h) }{\| w^h\|_{\mathcal{V}}}
\end{equation}
with $v^h=I_h u$.
Notice that $\| w^h\|_{\mathcal{V}} = \sqrt{D(w^h, w^h)}$, so we can write it out by the same procedure above and get
\begin{equation}
\begin{aligned}
\| w^h\|_{\mathcal{V}}^2 &= \sum_{i=1}^N \int_{\Omega^h_i} \int_{\mathscr{H}{(x,\delta)}} \gamma(x,y) (y-x)^2 b^2_i dydx \\ &\phantom{= } -2 \sum_{i=0}^N (b_{i+1} - b_i)^2 \int_{\Omega^h_i} \int_{s_i}^{x+\delta} \frac{(y-s_i)(s_i-x)}{\delta^3} dy dx.
\end{aligned}
\end{equation}
By letting $\gamma(x,y) = \frac{1}{\delta^3} 1_{\{ |y-x|<\delta \}}$ and a direct calculation of the above integrals, we get
\begin{equation}
\| w^h\|_{\mathcal{V}}^2 = \frac{2h}{3} \sum_{i=1}^N b_i^2 -\frac{h }{12} \sum_{i=0}^N (b_{i+1} - b_i)^2 = h \left(\frac{2}{3 } \sum_{i=1}^{N} b_i^2 - \frac{1}{12} \sum_{i=1}^{N-1} (b_{i+1} - b_i)^2 \right) ,
\end{equation}
where the last equality is a result of $b_0=b_1=b_{N}=b_{N+1} = 0$. Therefore,
\begin{equation}
\begin{split}
& \phantom{= }\;\; \frac{ \sum_{i=0}^N |a_{i+1} -a_{i}| |b_{i+1} - b_i| O(h) }{\| w^h\|_{\mathcal{V}}} \\ &= \frac{ \sum_{i=1}^{N-1} |a_{i+1} -a_{i}| |b_{i+1} - b_i| O(h) }{ \sqrt{h \left(\frac{2}{3 } \sum_{i=1}^{N} b_i^2 - \frac{1}{12} \sum_{i=1}^{N-1} (b_{i+1} - b_i)^2\right)} } \\
&\leq O(h) \frac{ \sqrt{ \sum_{i=1}^{N-1}|a_{i+1} -a_{i}|^2 } \sqrt{\sum_{i=1}^{N-1} |b_{i+1} - b_i|^2} }{ \sqrt{h \left(\frac{2}{3 } \sum_{i=1}^{N} b_i^2 - \frac{1}{12} \sum_{i=1}^{N-1} (b_{i+1} - b_i)^2\right)} } \\
& = O(h) \frac{ \sqrt{ \sum_{i=1}^{N-1}|a_{i+1} -a_{i}|^2 }}{ \sqrt{h \left( \frac{2}{3 } (\sum_{i=1}^{N} b_i^2)/ (\sum_{i=1}^{N-1} (b_{i+1} - b_i)^2) - \frac{1}{12} \right)} } .
\end{split}
\end{equation}
Notice that $\sum_{i=1}^{N-1} (b_{i+1} - b_i)^2 = 2 \sum_{i=1}^N b_i^2 - 2\sum_{i=1}^{N-1} b_{i+1}b_{i} \leq 4 \sum_{i=1}^N b_i^2 $, therefore,
\begin{equation}
\begin{aligned}
\frac{ \sum_{i=0}^N |a_{i+1} -a_{i}| |b_{i+1} - b_i| O(h) }{\| w^h\|_{\mathcal{V}}} & \leq O(h) \frac{ \sqrt{ \sum_{i=1}^{N-1}|a_{i+1} -a_{i}|^2 }}{ \sqrt{h \left( \frac{1}{6} - \frac{1}{12} \right)} } \\& = O(h) \sqrt{\frac{ \sum_{i=1}^{N-1}|a_{i+1} -a_{i}|^2 }{ h}}.
\end{aligned}
\end{equation}
Since $v^h$ is the piecewise linear interpolation of $u$, $a_i = u^\prime(x_i)$ for some $x_i \in \Omega^h_i$ by the mean value theorem, so
\begin{equation}
|a_{i+1} - a_{i}| = |u^\prime(x_{i+1}) - u^\prime(x_i)|= \left|\int_{x_i}^{x_{i+1}} u^{\prime\prime}(s)ds\right| \leq \sqrt{x_{i+1}-x_i}\left(\int_{x_i}^{x_{i+1}} |u^{\prime\prime}(s)|^2ds\right)^{\frac{1}{2}},
\end{equation}
where the last inequality comes from the Cauchy--Schwarz inequality. Since $x_{i+1}-x_i\leq 2h$ and the intervals $(x_i,x_{i+1})$ are pairwise disjoint, we have $\sqrt{\frac{ \sum_{i=1}^{N-1}|a_{i+1} -a_{i}|^2 }{ h} }\leq C\| u\|_{H^2}$.
All together, we have shown
\begin{equation}
\sup_{w^h \in \mathcal{V}^h_0} \frac{|D(v^h, w^h) - D^h(v^h, w^h)|}{\| w^h\|_{\mathcal{V}}} \leq C h \| u\|_{H^2}
\end{equation}
for $v^h = I_h u$,
and therefore the desired result.
\end{proof}
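The closed-form expression for $\|w^h\|^2_{\mathcal{V}}$ used in the proof can be checked numerically. The following sketch (ours, in Python) evaluates $D(w^h,w^h)$ for the kernel $\gamma(x,y)=\delta^{-3}1_{\{|y-x|<\delta\}}$ with $\delta=h$ by splitting the inner integral at the element interfaces, where the piecewise-linear integrand has kinks, and compares the result with $\frac{2h}{3}\sum_i b_i^2-\frac{h}{12}\sum_i(b_{i+1}-b_i)^2$:

```python
import numpy as np

def bilinear_form(nodes, w_vals, h, ng=8):
    """D(w,w) = sum_i int_{Omega_i} int_{x-h}^{x+h} (w(y) - w(x))^2 / h^3 dy dx
    for piecewise-linear w (extended by zero beyond the grid) and delta = h."""
    t, gw = np.polynomial.legendre.leggauss(ng)
    w = lambda z: np.interp(z, nodes, w_vals)       # end nodal values are zero
    total = 0.0
    for a, b in zip(nodes[:-1], nodes[1:]):
        for x, wx in zip(0.5*(a + b) + 0.5*(b - a)*t, 0.5*(b - a)*gw):
            # split the ball (x-h, x+h) at the interfaces a and b so that
            # every inner piece has a smooth (quadratic) integrand
            for lo, hi in [(x - h, a), (a, b), (b, x + h)]:
                yq = 0.5*(lo + hi) + 0.5*(hi - lo)*t
                wy = 0.5*(hi - lo)*gw
                total += np.sum((w(yq) - w(x))**2 * wy) / h**3 * wx
    return total

h = 0.25
nodes = np.arange(-h, 1.0 + h + h/2, h)             # uniform grid with delta = h
w_vals = np.array([0.0, 0.0, 0.3, -0.2, 0.5, 0.0, 0.0])  # zero on the boundary-layer elements
b = np.diff(w_vals) / h                             # element slopes b_1, ..., b_N
closed_form = 2*h/3 * np.sum(b**2) - h/12 * np.sum(np.diff(b)**2)
assert abs(bilinear_form(nodes, w_vals, h) - closed_form) < 1e-10
```

The two values agree to machine precision, since all inner pieces are quadratic and the outer integrand is a low-degree polynomial on each element.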
\begin{rem}\label{remark_rate}
The proof of Theorem \ref{thm:conv} utilizes the structure of the uniform grid. In particular, \eqref{eq:symmetery_continuous} and \eqref{eq:symmetery_discrete} hold only if we have a uniform grid. For quasi-uniform grids, i.e., non-uniform grids with a bounded ratio between the maximum mesh size $h_{\max}$ and the minimum mesh size $h_{\min}$, we can follow similar arguments so that \eqref{eq:strang_estimate} is then replaced with \[
\sup_{w^h \in \mathcal{V}^h_0} \frac{ \left( \sqrt{\sum_{i=0}^N |a_{i}|^2}\sqrt{\sum_{i=0}^N |b_i|^2} \right) O(h_{\max}) }{\| w^h\|_{\mathcal{V}}}
\]
from which one can proceed to show an $O(1)$ estimate of $\| u- u^h \|_{H^1}$.
This estimate will also be numerically verified later.
\end{rem}
\section{Numerical examples}\label{sec:numerics}
In this section, we present numerical convergence results obtained by employing the proposed quadrature scheme. We consider one-dimensional and two-dimensional problems discretized on uniform and non-uniform grids.
To evaluate the accuracy of the numerical solutions and test the convergence properties of the proposed method, we employ the $L^2$ and $H^1$ norms of the difference between the nonlocal numerical solution, $u^h$, and the analytical solution, $u_0$, to a local Poisson problem, i.e.,
\begin{equation}
\lVert u^h(\mathbf{x})-u_0(\mathbf{x})\rVert_{L^2}=\left[\int_{\Omega}\left(u^h(\mathbf{x})-u_0(\mathbf{x})\right)^2d\mathbf{x}\right]^{\frac{1}{2}},
\end{equation}
and
\begin{equation}
\lVert u^h(\mathbf{x})-u_0(\mathbf{x})\rVert_{H^1}=\left[\int_{\Omega}\left(u^h(\mathbf{x})-u_0(\mathbf{x})\right)^2+\left(\nabla u^h(\mathbf{x})-\nabla u_0(\mathbf{x})\right)^2d\mathbf{x}\right]^{\frac{1}{2}}.
\end{equation}
These norms are computed numerically with Gauss quadrature over the mesh elements, i.e.,
\begin{equation}
\lVert u^h(\mathbf{x})-u_0(\mathbf{x})\rVert_{L^2}\approx\left[\sum_{\Omega^h_e\in\mathcal{M}^h_\Omega}\sum_{\mathbf{x}^e_{gs}\in\Omega^h_e}\left(u^h(\mathbf{x}_{gs})-u_0(\mathbf{x}_{gs})\right)^2\omega_{gs}\right]^{\frac{1}{2}},
\end{equation}
and
\begin{equation}
\begin{aligned}
\lVert u^h(\mathbf{x})-u_0(\mathbf{x})\rVert_{H^1}\approx&\Bigg\{\sum_{\Omega^h_e\in\mathcal{M}^h_\Omega}\sum_{\mathbf{x}^e_{gs}\in\Omega^h_e}\left[\left(u^h(\mathbf{x}_{gs})-u_0(\mathbf{x}_{gs})\right)^2\right.\\&+\left.\left(\nabla u^h(\mathbf{x}_{gs})-\nabla u_0(\mathbf{x}_{gs})\right)^2\right]\omega_{gs}\Bigg\}^{\frac{1}{2}},
\end{aligned}
\end{equation}
where $\left\{\mathbf{x}_{gs}\right\}_{gs=1}^{N_{gs}}$ and $\left\{\omega_{gs}\right\}_{gs=1}^{N_{gs}}$, ${N_{gs}}\in\mathbb{N}$, are the element Gauss quadrature points and weights, respectively. In this work, we take $N_{gs}=8^d$, where $d$ is the dimension of the problem. Also, in all our numerical examples, we employ fixed ratios $m=\delta/h\in\mathbb{N}$.
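The element-wise Gauss quadrature of the $L^2$ error can be sketched as follows for a one-dimensional mesh (a minimal Python illustration with our own function names; the $H^1$ norm adds the analogous term for the element-wise constant gradient of $u^h$):

```python
import numpy as np

def l2_error(nodes, uh_vals, u0, ngs=8):
    """Element-wise Gauss quadrature of ||u^h - u_0||_{L^2} for a piecewise-linear
    u^h given by its nodal values on a one-dimensional mesh."""
    t, w = np.polynomial.legendre.leggauss(ngs)
    err2 = 0.0
    for a, b in zip(nodes[:-1], nodes[1:]):
        xg = 0.5*(a + b) + 0.5*(b - a)*t            # Gauss points on the element
        wg = 0.5*(b - a)*w
        uh = np.interp(xg, nodes, uh_vals)          # evaluate u^h at the Gauss points
        err2 += np.sum((uh - u0(xg))**2 * wg)
    return np.sqrt(err2)

nodes = np.linspace(0.0, 1.0, 101)
u0 = lambda x: np.sin(2*np.pi*x)
err = l2_error(nodes, u0(nodes), u0)                # interpolation error, O(h^2)
```

For a linear $u_0$ whose interpolant is supplied as `uh_vals`, the computed error is zero up to round-off, mirroring the patch test discussed below.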
\subsection{One-dimensional test cases}\label{numerical_1D}
\begin{figure} [H]
\centering
\vspace{0pt}
\includegraphics[trim = 0mm 0mm 0mm -10mm, clip=true,width=0.65\textwidth]{Figures/PD_1Ddomain.png}
\caption{One-dimensional domain $\Omega$, with associated boundary layer $\mathscr{B}\Omega$.}
\label{PD_1Ddomain_figure}
\end{figure}
We consider a one-dimensional domain $\Omega=(0,1)$. For a given horizon $\delta$, its associated interaction domain is $\mathscr{B}\Omega=[-\delta,0]\cup[1,1+\delta]$. Note that the inner domain, where the function $u$ is unknown (see Eq. (\ref{strong form_NLD.})), is fixed in size, while the boundary layer varies with the value of $\delta$. Thus, during our convergence studies, the inner solution domain $\Omega$ remains the same under refinement ($\delta\rightarrow 0$), so that the $L^2$ error norms associated with each considered value of $\delta$ are comparable. We consider the following kernel functions: the constant kernel
\begin{equation}
\gamma_{1,c}({x},{y}) = \left\{\begin{aligned}
\ \frac{3}{2\delta^3} \quad\ &\text{for}\ |{y}-{x}|\leq\delta,\\
\ 0 \ \ \quad\ &\text{for}\ |{y}-{x}|>\delta,\\
\end{aligned}\right.
\label{kernel_form1_1D}
\end{equation}
and the rational kernel
\begin{equation}
\gamma_{1,r}({x},{y}) =\left\{\begin{aligned}
\ \frac{1}{\delta^2 |{y}-{x}|} \quad \ &\text{for}\ |{y}-{x}|\leq\delta,\\
\ 0 \ \ \quad\ &\text{for}\ |{y}-{x}|>\delta,\\
\end{aligned}\right.
\label{kernel_form2_1D}
\end{equation}
which correspond to the expressions in Eqs. (\ref{kernel_form1}) and (\ref{kernel_form2}) for $\zeta=3/2$ and $\zeta=1$, respectively. These values of $\zeta$ are such that
\begin{equation}
\lim_{\delta\to0}\mathcal{L}_{\delta}u({x})=\Delta u(x),
\label{limit_for1D_operator}
\end{equation}
where $\Delta$ is the local Laplace operator.
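The limit in Eq. (\ref{limit_for1D_operator}) can be checked numerically. The sketch below (ours) assumes the customary normalization $\mathcal{L}_{\delta}u(x)=2\int_{\mathscr{H}(x,\delta)}\gamma(x,y)\left(u(y)-u(x)\right)dy$, which is consistent with the stated values of $\zeta$, and evaluates the operator with the constant kernel $\gamma_{1,c}$ at a single point:

```python
import numpy as np

def nonlocal_laplacian(u, x, delta, nq=50):
    """2 * int_{x-delta}^{x+delta} gamma_{1,c}(x, y) (u(y) - u(x)) dy by Gauss quadrature."""
    t, w = np.polynomial.legendre.leggauss(nq)
    y = x + delta*t                                 # map [-1, 1] onto the ball H(x, delta)
    wy = delta*w
    gamma = 3.0 / (2.0*delta**3)                    # constant kernel gamma_{1,c}, zeta = 3/2
    return 2.0*np.sum(gamma*(u(y) - u(x))*wy)

u = lambda x: np.sin(2*np.pi*x)
x0 = 0.3
local = -4*np.pi**2*np.sin(2*np.pi*x0)              # Delta u at x0
errs = [abs(nonlocal_laplacian(u, x0, d) - local) for d in (0.1, 0.05, 0.025)]
```

For this smooth $u$, the error with respect to $\Delta u(x_0)$ decreases by a factor of roughly four per halving of $\delta$, i.e., at the rate $O(\delta^2)$.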
To illustrate the numerical convergence of the proposed method, we consider manufactured solutions, i.e., we choose analytical solutions, $u_0(x)$, to the local Poisson equation and compute the corresponding forcing term $b(x)$ and Dirichlet volume constraint $g(x)$. These are then used for the nonlocal Poisson problem (\ref{strong form_NLD.}). Specifically, we consider two cases: a sinusoidal and a linear solution (the latter with the purpose of performing the so-called patch test). For the first case we set $u_0(x)=\sin(2\pi x)$, for which $g(x)=\sin(2\pi x)$ and
\begin{equation}
\begin{aligned}
b(x) &= -\Delta u_0(x)= -\Delta \sin{(2\pi x)}= 4\pi^2 \sin{(2\pi x)}.
\end{aligned}
\end{equation}
For the second case, instead, we have $u_0(x)=x$, $g(x)=x$ and
\begin{equation}
\begin{aligned}
b(x) &= 0.
\end{aligned}
\end{equation}
\subsubsection{Uniform discretizations}\label{numerical_1D_uniform}
We investigate the convergence behavior for uniform discretizations. The finite element mesh has a uniform discretization size, $h$, over $[-\delta,1+\delta]=\left([-\delta,0]\cup[1,1+\delta]\right)\cup(0,1)$. Recall that we consider cases for which $m=\delta/h\in\mathbb{N}$, meaning that elements of size $h$ subdivide $(0,1)$ and $[-\delta,1+\delta]$ exactly. The same applies when the domain extension $[-\delta-t_e,1+\delta+t_e]$, with $t_e=\delta$, is employed. For the outer quadrature, we consider $N_q=40$ Gauss points, while for the inner quadrature we use $\overline{N}_{qp}=10$.
For the sinusoidal solution we use $\gamma_{1,r}$, $h=0.01$, and $m=2$, meaning that $\Omega$ is discretized using 100 elements and $\delta=0.02$. For the construction of the inner quadrature weights, we consider two cases: one without domain extension, i.e., $t_e=0$, and one with domain extension $t_e=\delta$. The obtained numerical solutions are reported in Figure \ref{d_m2_IF2_bc2bc3}, while Figure \ref{absError_m2_IF2_bc2bc3} shows the absolute error obtained for the two considered cases. We observe that, when $t_e=0$, the error concentrates near the boundary of the domain, whereas this does not occur for $t_e=\delta$. Next, we perform an $L^2$ norm convergence study by varying $h$ and $\delta$, with fixed ratio $m=2$. As shown in Figure \ref{1D_CG_U_sine_extension_noextension_convergence}, for $t_e=0$, we observe a linear convergence, whereas, for $t_e=\delta$, the rate is quadratic. This suggests that the concentration of error near the boundary observed for $t_e=0$ reduces the overall convergence rate. Therefore, from now on, we only employ $t_e=\delta$ in the construction of the inner quadrature weights.
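The convergence rates quoted in these studies are estimated from errors on successively halved grids as $\log_2(e_{2h}/e_h)$; a short sketch (ours, with synthetic errors generated from the model $e=Ch^2$ purely for illustration):

```python
import numpy as np

# observed rate between successive refinements: log2(e_{2h} / e_h)
h = np.array([0.04, 0.02, 0.01, 0.005])
errors = 0.5 * h**2                         # synthetic stand-in for measured L^2 errors
rates = np.log2(errors[:-1] / errors[1:])   # each entry equals 2 for exactly quadratic decay
```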
\begin{figure}[H]
\begin{center}
\subfigure[Numerical and exact solutions]{\includegraphics[trim = 5mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_U/d_m2_IF2_bc2bc3}\label{d_m2_IF2_bc2bc3}}
\subfigure[Absolute error]{\includegraphics[trim = 5mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_U/absError_m2_IF2_bc2bc3}\label{absError_m2_IF2_bc2bc3}}
\caption{Numerical solution and associated absolute error for the one-dimensional problem with sinusoidal solution for $\gamma_{1,r}$, $m=2$, ${N}_{q}=40$, and $\overline{N}_{qp}=10$. A uniform element size $h=0.01$, corresponding to 100 elements for the discretization of $\Omega$ is employed.}
\label{1D_CG_U_sine_extension_noextension_solutionanderror}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\subfigure[$\gamma_{1,c}$]{\includegraphics[trim = 15mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_U/1D_CG_U_OuterGauss_InnerMethod2_sine_f3_bx1_1del_2del_m2_IF3_No40_Ni10}\label{1D_CG_U_OuterGauss_InnerMethod2_sine_f3_bx1_1del_2del_m2_IF3_No40_Ni10}}
\subfigure[$\gamma_{1,r}$]{\includegraphics[trim = 15mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_U/1D_CG_U_OuterGauss_InnerMethod2_sine_f3_bx1_1del_2del_m2_IF2_No40_Ni10}\label{1D_CG_U_OuterGauss_InnerMethod2_sine_f3_bx1_1del_2del_m2_IF2_No40_Ni10}}
\caption{$L^2$ norm convergence behaviors of the one-dimensional numerical solutions for the case with sinusoidal solution. $m=2$, uniform discretization, and $t_e=0$ and $t_e=\delta$. ${N}_{q}=40$ and $\overline{N}_{qp}=10$ are employed. $\gamma_{1,c}$ and $\gamma_{1,r}$ are both considered.}
\label{1D_CG_U_sine_extension_noextension_convergence}
\end{center}
\end{figure}
\noindent
Figures \ref{1D_CG_U_oGauss_iGMLS_sine_f3_bx1_2del_m123_IF3_No40_Ni10} and \ref{1D_CG_U_oGauss_iGMLS_sine_f3_bx1_2del_m123_IF2_No40_Ni10} show the $L^2$ norm convergence behavior for $\gamma_{1,c}$ and $\gamma_{1,r}$, respectively. For both cases, we employ $t_e=\delta$, ${N}_{q}=40$, and $\overline{N}_{qp}=10$. For all considered values of $m$ (i.e., $m=1,2,3$), we observe a second-order convergence rate in the $L^2$ norm. The convergence behavior in the $H^1$ norm is presented in Figure \ref{1D_CG_U_sine_m123_H1}. For all of the considered cases, first-order convergence is obtained, which is consistent with the theoretical prediction from Section \ref{sec:convergence}. Note that convergence in the $H^1$ norm is one order lower than in $L^2$.
\begin{figure}[H]
\begin{center}
\subfigure[$\gamma_{1,c}$]{\includegraphics[trim = 15mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_U/1D_CG_U_oGauss_iGMLS_sine_f3_bx1_2del_m123_IF3_No40_Ni10}\label{1D_CG_U_oGauss_iGMLS_sine_f3_bx1_2del_m123_IF3_No40_Ni10}}
\subfigure[$\gamma_{1,r}$]{\includegraphics[trim = 15mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_U/1D_CG_U_oGauss_iGMLS_sine_f3_bx1_2del_m123_IF2_No40_Ni10}\label{1D_CG_U_oGauss_iGMLS_sine_f3_bx1_2del_m123_IF2_No40_Ni10}}
\caption{$L^2$ norm convergence behaviors of the one-dimensional numerical solutions for the case with sinusoidal solution. $m=1,2,3$, uniform discretization, and $t_e=\delta$. ${N}_{q}=40$ and $\overline{N}_{qp}=10$ are employed. $\gamma_{1,c}$ and $\gamma_{1,r}$ are both considered.}
\label{1D_CG_U_sine_m123}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\subfigure[$\gamma_{1,c}$]{\includegraphics[trim = 15mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_U/1D_CG_U_oGauss_iGMLS_sine_f3_bx1_2del_m123_IF3_No40_Ni10_H1}\label{1D_CG_U_oGauss_iGMLS_sine_f3_bx1_2del_m123_IF3_No40_Ni10_H1}}
\subfigure[$\gamma_{1,r}$]{\includegraphics[trim = 15mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_U/1D_CG_U_oGauss_iGMLS_sine_f3_bx1_2del_m123_IF2_No40_Ni10_H1}\label{1D_CG_U_oGauss_iGMLS_sine_f3_bx1_2del_m123_IF2_No40_Ni10_H1}}
\caption{$H^1$ norm convergence behaviors of the one-dimensional numerical solutions for the case with sinusoidal solution. $m=1,2,3$, uniform discretization, and $t_e=\delta$. ${N}_{q}=40$ and $\overline{N}_{qp}=10$ are employed. $\gamma_{1,c}$ and $\gamma_{1,r}$ are both considered.}
\label{1D_CG_U_sine_m123_H1}
\end{center}
\end{figure}
Next, we consider the case with a linear solution. For the reasons illustrated above, we consider $t_e=\delta$; we set $h=0.01$ and $m=2$, meaning that $\Omega$ is discretized using 100 elements and $\delta=0.02$. The $L^2$ norms of the error for the cases with $\gamma_{1,r}$ and $\gamma_{1,c}$ are $1.59$E$-13$ and $6.96$E$-14$, respectively. This implies that the proposed approach passes the patch test for uniform discretizations, i.e., the numerical solution is accurate up to machine precision for linear solutions. This is expected, since the exact, local solution belongs to the discretization space $\mathcal V^h$.
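The underlying reason the patch test can pass is that linear functions lie in the piecewise-linear space $\mathcal V^h$, so they are representable exactly up to round-off. The following self-contained Python sketch (an illustration, not the paper's solver) checks this exact-reproduction property for piecewise-linear interpolation on a uniform mesh with $h=0.01$:

```python
def p1_interpolate(nodes, vals, x):
    """Evaluate the piecewise-linear (P1) interpolant through (nodes, vals) at x."""
    for a, b, va, vb in zip(nodes, nodes[1:], vals, vals[1:]):
        if a <= x <= b:
            t = (x - a) / (b - a)
            return (1 - t) * va + t * vb
    raise ValueError("x outside mesh")

u = lambda x: 2.0 * x + 1.0            # a linear "exact" solution
nodes = [k / 100 for k in range(101)]  # uniform mesh, h = 0.01, as in the text
vals = [u(x) for x in nodes]
err = max(abs(p1_interpolate(nodes, vals, x) - u(x)) for x in [0.013, 0.5, 0.997])
print(err)  # on the order of machine precision
```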
\subsubsection{Nonuniform discretizations}\label{numerical_1D_nonuniform}
Next, we investigate the performance of the proposed method for non-uniform discretizations. The non-uniform discretizations are constructed by perturbing uniform discretizations of size $h$. This is achieved by moving each finite element node in $(0,1)$ and $(-\delta,0)\cup(1,1+\delta)$ from its original position $x^{u}$ to a new, randomly selected position $x^{nu}=x^u+\epsilon h R_a$, where $\epsilon$ is a chosen perturbation factor and $R_a$ is a random number in $\left[-1,1\right]$.
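The perturbation rule $x^{nu}=x^u+\epsilon h R_a$ can be sketched as follows. This is an assumed re-implementation for illustration only: the function name and the choice of keeping the two outermost nodes fixed are ours. Note that for $\epsilon<1/2$ each node moves by at most $\epsilon h<h/2$, so the node ordering is preserved.

```python
import random

def perturb_mesh(nodes, h, eps, seed=0):
    """Move each interior node x^u to x^nu = x^u + eps*h*R_a, with R_a
    drawn uniformly from [-1, 1]; the end points are kept fixed (our choice)."""
    rng = random.Random(seed)
    out = [nodes[0]]
    out += [x + eps * h * rng.uniform(-1.0, 1.0) for x in nodes[1:-1]]
    out.append(nodes[-1])
    return out

h, eps = 0.1, 0.1
uniform_mesh = [k * h for k in range(11)]          # uniform mesh on [0, 1]
nonuniform_mesh = perturb_mesh(uniform_mesh, h, eps)
# With eps = 0.1, displacements are at most eps*h = 0.01, so ordering holds.
assert all(a < b for a, b in zip(nonuniform_mesh, nonuniform_mesh[1:]))
```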
As for uniform discretizations, we first consider the sinusoidal solution. We select $t_e=\delta$, ${N}_{q}=40$, and $\overline{N}_{qp}=10$. Figures \ref{1D_CG_NU_sine_m23} and \ref{1D_CG_NU_sine_m23_H1} show the convergence behavior for $m=2,3$ for both $\gamma_{1,c}$ and $\gamma_{1,r}$, in the $L^2$ and $H^1$ norms, respectively. We observe an apparent second-order convergence rate in the $L^2$ norm and a first-order rate in the $H^1$ norm, i.e., one order lower. However, Figure \ref{1D_CG_NU_sine_m23_H1} shows a reduction in the $H^1$ convergence rate for the finer cases, suggesting that, asymptotically, the rate may approach zeroth order, as discussed in Remark \ref{remark_rate}.
\begin{figure}[H]
\begin{center}
\subfigure[$\gamma_{1,c}$]{\includegraphics[trim = 15mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_NU/1D_CG_NU_OuterGauss_InnerMethod2_sine_f3_bx1_2del_m2_m3_IF3_No40_Ni10_unih}\label{1D_CG_NU_OuterGauss_InnerMethod2_sine_f3_bx1_2del_m2_m3_IF3_No40_Ni10_unih}}
\subfigure[$\gamma_{1,r}$]{\includegraphics[trim = 15mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_NU/1D_CG_NU_OuterGauss_InnerMethod2_sine_f3_bx1_2del_m2_m3_IF2_No40_Ni10_unih}\label{1D_CG_NU_OuterGauss_InnerMethod2_sine_f3_bx1_2del_m2_m3_IF2_No40_Ni10_unih}}
\caption{$L^2$ convergence behaviors of the one-dimensional numerical solutions for the case with sinusoidal solution. $m=2,3$, non-uniform discretization with $\epsilon=0.1$, and $t_e=\delta$. ${N}_{q}=40$ and $\overline{N}_{qp}=10$ are employed. $\gamma_{1,c}$ and $\gamma_{1,r}$ are both considered.}
\label{1D_CG_NU_sine_m23}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\subfigure[$\gamma_{1,c}$]{\includegraphics[trim = 15mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_NU/1D_CG_NU_OuterGauss_InnerMethod2_sine_f3_bx1_2del_m2_m3_IF3_No40_Ni10_unih_H1}\label{1D_CG_NU_OuterGauss_InnerMethod2_sine_f3_bx1_2del_m2_m3_IF3_No40_Ni10_unih_H1}}
\subfigure[$\gamma_{1,r}$]{\includegraphics[trim = 15mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_NU/1D_CG_NU_OuterGauss_InnerMethod2_sine_f3_bx1_2del_m2_m3_IF2_No40_Ni10_unih_H1}\label{1D_CG_NU_OuterGauss_InnerMethod2_sine_f3_bx1_2del_m2_m3_IF2_No40_Ni10_unih_H1}}
\caption{$H^1$ norm convergence behaviors of the one-dimensional numerical solutions for the case with sinusoidal solution. $m=2,3$, non-uniform discretization with $\epsilon=0.1$, and $t_e=\delta$. ${N}_{q}=40$ and $\overline{N}_{qp}=10$ are employed. $\gamma_{1,c}$ and $\gamma_{1,r}$ are both considered.}
\label{1D_CG_NU_sine_m23_H1}
\end{center}
\end{figure}
We then consider the linear solution. As before, we take $t_e=\delta$, ${N}_{q}=40$, and $\overline{N}_{qp}=10$.
In contrast to the uniform case, the proposed method does not pass the patch test for non-uniform discretizations. As shown in Figures \ref{1D_CG_NU_linear_m23} and \ref{1D_CG_NU_linear_m23_H1}, which report the convergence behavior in the $L^2$ and $H^1$ norms, respectively, for $m=2,3$ and for both $\gamma_{1,c}$ and $\gamma_{1,r}$, the method exhibits first-order convergence in the $L^2$ norm and zeroth-order convergence in the $H^1$ norm (see Remark \ref{remark_rate}). Comparing Figure \ref{1D_CG_NU_sine_m23} and Figure \ref{1D_CG_NU_linear_m23}, the $L^2$ errors obtained for the linear solution are much smaller in magnitude than those obtained for the sinusoidal solution.
This supports our conjecture that the method is asymptotically first-order convergent in the $L^2$ norm, and that the observed second-order convergence is pre-asymptotic.
A natural question is whether further refinement of the sinusoidal case would reveal this reduction of the $L^2$ convergence rate; in practice, upon further refinement the error becomes dominated by floating-point arithmetic.
Nonetheless, since the $H^1$ convergence rate in Figure \ref{1D_CG_NU_sine_m23_H1} starts to decrease for the finer refinements, and since the $L^2$ convergence has consistently been one order higher than the $H^1$ convergence, it is reasonable to expect that the $L^2$ rate would eventually decrease as well.
\begin{figure}[H]
\begin{center}
\subfigure[$\gamma_{1,c}$]{\includegraphics[trim = 15mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_NU/1D_CG_NU_oGauss_iGMLS_linear_f3_bx1_2del_m2_m3_IF3_No40_Ni10}\label{1D_CG_NU_oGauss_iGMLS_linear_f3_bx1_2del_m2_m3_IF3_No40_Ni10}}
\subfigure[$\gamma_{1,r}$]{\includegraphics[trim = 15mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_NU/1D_CG_NU_oGauss_iGMLS_linear_f3_bx1_2del_m2_m3_IF2_No40_Ni10}\label{1D_CG_NU_oGauss_iGMLS_linear_f3_bx1_2del_m2_m3_IF2_No40_Ni10}}
\caption{$L^2$ norm convergence behaviors of the one-dimensional numerical solutions for the case with linear solution. $m=2,3$, non-uniform discretization with $\epsilon=0.1$, and $t_e=\delta$. ${N}_{q}=40$ and $\overline{N}_{qp}=10$ are employed. $\gamma_{1,c}$ and $\gamma_{1,r}$ are both considered.}
\label{1D_CG_NU_linear_m23}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\subfigure[$\gamma_{1,c}$]{\includegraphics[trim = 15mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_NU/1D_CG_NU_oGauss_iGMLS_linear_f3_bx1_2del_m2_m3_IF3_No40_Ni10_H1}\label{1D_CG_NU_oGauss_iGMLS_linear_f3_bx1_2del_m2_m3_IF3_No40_Ni10_H1}}
\subfigure[$\gamma_{1,r}$]{\includegraphics[trim = 15mm 60mm 15mm 60mm, clip=true,width=0.49\textwidth]{Figures/1D_NU/1D_CG_NU_oGauss_iGMLS_linear_f3_bx1_2del_m2_m3_IF2_No40_Ni10_H1}\label{1D_CG_NU_oGauss_iGMLS_linear_f3_bx1_2del_m2_m3_IF2_No40_Ni10_H1}}
\caption{$H^1$ norm convergence behaviors of the one-dimensional numerical solutions for the case with linear solution. $m=2,3$, non-uniform discretization with $\epsilon=0.1$, and $t_e=\delta$. ${N}_{q}=40$ and $\overline{N}_{qp}=10$ are employed. $\gamma_{1,c}$ and $\gamma_{1,r}$ are both considered.}
\label{1D_CG_NU_linear_m23_H1}
\end{center}
\end{figure}
\subsection{Two-dimensional test cases}
\begin{figure} [H]
\centering
\vspace{0pt}
\includegraphics[trim = 75mm 95mm 75mm 60mm, clip=true,width=0.9\textwidth]{Figures/PD_2Ddomain_dashedBoundary}
\caption{Two-dimensional domain $\Omega$, with associated boundary layer $\mathscr{B}\Omega$}
\label{PD_2Ddomain_figure}
\end{figure}
We consider the two-dimensional domain $\Omega=(0,1)\times(0,1)$ with associated interaction domain $\mathscr{B}\Omega=\left([-\delta,1+\delta]\times[-\delta,1+\delta]\right)\setminus\Omega$.
As in the previous section, this guarantees that in the convergence studies the inner solution domain $\Omega$ remains consistent during the refinement ($\delta\rightarrow 0$), so that the $L^2$ error norms are comparable for all $\delta$. We consider two kernel functions: a constant influence function
\begin{equation}
\gamma_{2,c}(\mathbf{x},\mathbf{y}) = \left\{\begin{aligned}
\ \frac{4}{\pi\delta^4} \quad\ &\rm{for}\ \lVert\mathbf{y}-\mathbf{x}\rVert\leq\delta,\\
\ 0 \ \ \quad\ &\rm{for}\ \lVert\mathbf{y}-\mathbf{x}\rVert>\delta,\\
\end{aligned}\right.
\label{kernel_form1_2D}
\end{equation}
and a rational one
\begin{equation}
\gamma_{2,r}(\mathbf{x},\mathbf{y}) =\left\{\begin{aligned}
\ \frac{3}{\pi\delta^3 \lVert\mathbf{y}-\mathbf{x}\rVert} \quad \ &\rm{for}\ \lVert\mathbf{y}-\mathbf{x}\rVert\leq\delta,\\
\ 0 \ \ \quad\ &\rm{for}\ \lVert\mathbf{y}-\mathbf{x}\rVert>\delta,\\
\end{aligned}\right.
\label{kernel_form2_2D}
\end{equation}
which correspond to the expressions in Eqs. (\ref{kernel_form1}) and (\ref{kernel_form2}) for $\zeta=4/\pi$ and $\zeta=3/\pi$, respectively. As for the one-dimensional case, these values of $\zeta$ are such that
\begin{equation}
\lim_{\delta\to0}\mathcal{L}_{\delta}u(\mathbf{x})=\Delta u(\mathbf{x}).
\label{limit_for2D_operator}
\end{equation}
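A quick consistency check (ours, for illustration): in two dimensions both kernels have the same second moment, $\int_{B_\delta(\mathbf{x})}\gamma(\mathbf{x},\mathbf{y})\,\lVert\mathbf{y}-\mathbf{x}\rVert^2\,d\mathbf{y}=2$, which can be verified in closed form in polar coordinates. This shared, $\delta$-independent moment is what makes both kernels produce the same diffusion limit, up to the normalization convention implied by Eqs. (\ref{kernel_form1_2D})--(\ref{kernel_form2_2D}) and (\ref{limit_for2D_operator}). The following sketch checks the moment numerically with a midpoint rule:

```python
import math

def second_moment(gamma, delta, n=20000):
    """2*pi * integral_0^delta gamma(r) * r^3 dr, i.e. the second moment of a
    radial kernel over the ball B_delta, via a composite midpoint rule."""
    dr = delta / n
    return 2.0 * math.pi * sum(gamma((k + 0.5) * dr) * ((k + 0.5) * dr) ** 3
                               for k in range(n)) * dr

delta = 0.02
gamma_c = lambda r: 4.0 / (math.pi * delta**4)        # constant kernel
gamma_r = lambda r: 3.0 / (math.pi * delta**3 * r)    # rational kernel
print(second_moment(gamma_c, delta), second_moment(gamma_r, delta))  # both ≈ 2
```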
As discussed in Section \ref{sec:Nonlocal_diff_model}, the supports of the kernels in (\ref{kernel_form1_2D}) and (\ref{kernel_form2_2D}) are Euclidean ($\ell^2$) balls, i.e., disks. However, kernels associated with $\ell^\infty$ balls (i.e., square supports) were also investigated, and results similar to those presented in this section for Euclidean balls were obtained.
As before, we employ the method of manufactured solutions.
We select $u_0(\mathbf{x})=\sin(2\pi x_1)\sin(2\pi x_2)$, where $\mathbf{x}=(x_1,x_2)$, which corresponds to $g(\mathbf{x})=\sin(2\pi x_1)\sin(2\pi x_2)$ and to the following source term:
\begin{equation}
\begin{aligned}
b(\mathbf{x}) &= -\Delta u_0(\mathbf{x})\\
&= -\Delta\left( \sin(2\pi x_1)\sin(2\pi x_2)\right)\\
&= 8\pi^2\sin(2\pi x_1)\sin(2\pi x_2).
\end{aligned}
\end{equation}
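The manufactured source term can be double-checked numerically: a second-order central-difference approximation of $-\Delta u_0$ should match $8\pi^2\sin(2\pi x_1)\sin(2\pi x_2)$ up to $O(h^2)$. A short sketch (illustration only):

```python
import math

def u0(x1, x2):
    """Manufactured solution u_0(x) = sin(2*pi*x1) * sin(2*pi*x2)."""
    return math.sin(2 * math.pi * x1) * math.sin(2 * math.pi * x2)

def minus_laplacian_fd(f, x1, x2, h=1e-4):
    """-Laplacian of f at (x1, x2) by second-order central differences."""
    lap = (f(x1 + h, x2) - 2 * f(x1, x2) + f(x1 - h, x2)) / h**2 \
        + (f(x1, x2 + h) - 2 * f(x1, x2) + f(x1, x2 - h)) / h**2
    return -lap

x1, x2 = 0.3, 0.7
b_exact = 8 * math.pi**2 * u0(x1, x2)
print(abs(minus_laplacian_fd(u0, x1, x2) - b_exact))  # small: O(h^2) truncation error
```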
\subsubsection{Uniform discretizations}
As for the one-dimensional case, we first investigate the convergence behavior for uniform discretizations. The two-dimensional uniform mesh is constructed as a tensor product $\mathcal{M}^{h,u}_{2}=\mathcal{M}^{h,u}_{x_1}\times\mathcal{M}^{h,u}_{x_2}$, where $\mathcal{M}^{h,u}_{x_1}$ and $\mathcal{M}^{h,u}_{x_2}$ are one-dimensional uniform meshes of size $h$ over $\left[-\delta,0\right]\cup\left(0,1\right)\cup\left[1,1+\delta\right]$. For the convergence study, we set $t_e=\delta$, ${\overline{N}_{qp}=64}$, and use a four by four Gauss quadrature rule for the outer integral (${N}_{q}=16$). Figures \ref{2D_CG_U_OutGauss_IGMLS_sine_f3_bx1_2del_m2_IF5_No4_Ni8} and \ref{2D_CG_U_OutGauss_IGMLS_sine_f3_bx1_2del_m2_IF4_No4_Ni8} show the obtained results for $\gamma_{2,c}$ and $\gamma_{2,r}$, respectively. Up to the considered level of refinement, we observe a second-order convergence rate in the $L^2$ norm for both kernels.
\begin{figure}[H]
\begin{center}
\subfigure[$\gamma_{2,c}$]{\includegraphics[trim = 15mm 75mm 15mm 80mm, clip=true,width=0.49\textwidth]{Figures/2D_U/2D_CG_U_OutGauss_IGMLS_sine_f3_bx1_2del_m2_IF5_No4_Ni8}\label{2D_CG_U_OutGauss_IGMLS_sine_f3_bx1_2del_m2_IF5_No4_Ni8}}
\subfigure[$\gamma_{2,r}$]{\includegraphics[trim = 15mm 75mm 15mm 80mm, clip=true,width=0.49\textwidth]{Figures/2D_U/2D_CG_U_OutGauss_IGMLS_sine_f3_bx1_2del_m2_IF4_No4_Ni8}\label{2D_CG_U_OutGauss_IGMLS_sine_f3_bx1_2del_m2_IF4_No4_Ni8}}
\caption{$L^2$ convergence behaviors of the two-dimensional numerical solutions for $m=2$, uniform discretization, and $t_e=\delta$. ${N}_{q}=16$ (as $4\times4$) and ${\overline{N}_{qp}=64}$. $\gamma_{2,c}$ and $\gamma_{2,r}$ are both considered.}
\label{2D_CG_U_OuterGauss_InnerGMLS_sine}
\end{center}
\end{figure}
\subsubsection{Nonuniform discretizations}
In this section, we investigate the performance of the proposed quadrature approach for two-dimensional non-uniform discretizations. We construct the two-dimensional non-uniform mesh as a tensor product of one-dimensional non-uniform discretizations, i.e., $\mathcal{M}^{h,nu}_{2}=\mathcal{M}^{h,nu}_{x_1}\times\mathcal{M}^{h,nu}_{x_2}$, where $\mathcal{M}^{h,nu}_{x_1}$ and $\mathcal{M}^{h,nu}_{x_2}$ are obtained by perturbing $\mathcal{M}^{h,u}_{x_1}$ and $\mathcal{M}^{h,u}_{x_2}$, which are uniform meshes with spacing $h$ over $\left[-\delta,0\right]\cup\left(0,1\right)\cup\left[1,1+\delta\right]$. Similarly to the one-dimensional non-uniform case, the perturbation is achieved by moving the finite element nodes in $(0,1)$ and $(-\delta,0)\cup(1,1+\delta)$ from their original positions $x_1^u$ and $x_2^u$ to new, randomly selected positions $x_1^{nu}=x_1^{u}+\epsilon h R_a$ and $x_2^{nu}=x_2^{u}+\epsilon h R_a$, where $\epsilon$ is a chosen perturbation factor and $R_a$ is a random number in $\left[-1,1\right]$.
For a visual example of $\mathcal{M}^{h,nu}_{2}$, see Figure \ref{Discretizations_schemes_2D}.
\begin{figure} [H]
\centering
\vspace{0pt}
\includegraphics[trim = 550mm 265mm 75mm 250mm, clip=true,width=0.5\textwidth]{Figures/Discretizations_schemes_2D}
\caption{Example of two-dimensional non-uniform mesh obtained as a tensor product of perturbed one-dimensional meshes}
\label{Discretizations_schemes_2D}
\end{figure}
For the convergence studies we use $t_e=\delta$, ${\overline{N}_{qp}=64}$, and ${N}_{q}=16$ (four by four Gauss quadrature), with $\epsilon=0.1$. Figures \ref{2D_CG_NU_OutGauss_IGMLS_sine_f3_bx1_2del_m2_IF5_No4_Ni8} and \ref{2D_CG_NU_OutGauss_IGMLS_sine_f3_bx1_2del_m2_IF4_No4_Ni8} show the results for $\gamma_{2,c}$ and $\gamma_{2,r}$, respectively. Up to the considered level of refinement, we observe a second-order convergence rate in the $L^2$ norm for both kernels in the non-uniform case as well. As discussed in more detail for the one-dimensional non-uniform case in Section \ref{numerical_1D_nonuniform}, we conjecture that this rate is pre-asymptotic, and that the first-order asymptotic regime is difficult to observe in practice.
\begin{figure}[H]
\begin{center}
\subfigure[$\gamma_{2,c}$]{\includegraphics[trim = 15mm 75mm 15mm 80mm, clip=true,width=0.49\textwidth]{Figures/2D_NU/2D_CG_NU_OutGauss_IGMLS_sine_f3_bx1_2del_m2_IF5_No4_Ni8}\label{2D_CG_NU_OutGauss_IGMLS_sine_f3_bx1_2del_m2_IF5_No4_Ni8}}
\subfigure[$\gamma_{2,r}$]{\includegraphics[trim = 15mm 75mm 15mm 80mm, clip=true,width=0.49\textwidth]{Figures/2D_NU/2D_CG_NU_OutGauss_IGMLS_sine_f3_bx1_2del_m2_IF4_No4_Ni8}\label{2D_CG_NU_OutGauss_IGMLS_sine_f3_bx1_2del_m2_IF4_No4_Ni8}}
\caption{$L^2$ convergence behaviors of the two-dimensional numerical solutions for $m=2$, non-uniform discretization with $\epsilon=0.1$, and $t_e=\delta$. ${N}_{q}=16$ (as $4\times4$) and ${\overline{N}_{qp}=64}$. $\gamma_{2,c}$ and $\gamma_{2,r}$ are both considered.}
\label{2D_CG_NU_OuterGauss_InnerGMLS_sine}
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:conclusion}
We proposed a novel quadrature rule for the computation of integrals that arise in the matrix assembly of finite element discretizations of nonlocal problems. In contrast to all previously employed methods, our technique does not require element-by-element integration, but relies on global integration over the nonlocal neighborhood. Specifically, we consider quadrature rules based on the generalized moving least squares method, where the (global) quadrature weights are obtained by solving an equality-constrained optimization problem. The major advantage of this technique is that the computation of element--ball intersections, a nontrivial and time-consuming task, is avoided. Additionally, this technique requires minimal implementation effort, as it can be built on top of an existing finite element code. For this reason, we expect the proposed approach to become a building block of agile engineering codes. Our numerical experiments show that, when boundary conditions are treated carefully and the outer integral is computed accurately, our method is asymptotically compatible in the limit of $h\sim\delta\to 0$, featuring at least first-order convergence in $L^2$ for all dimensions and for both uniform and nonuniform grids. For piecewise linear finite element implementations on uniform grids, our method features an optimal, second-order convergence rate in $L^2$ and passes the patch test. For nonuniform grids, we see effective second-order convergence over a long pre-asymptotic regime, whereas the asymptotic first-order convergence is only evident in deviations from the patch test, which are very small relative to errors in more complicated solutions. Convergence rates in $H^1$ are consistently one order lower than the $L^2$ rates.
We also carry out a preliminary numerical analysis of the method, albeit in the $H^1$ norm and restricted to the case of $h = \delta$ in one spatial dimension. This analysis is consistent with the convergence rates observed in the numerical experiments, but it does not account for the increase in convergence rate when measuring the error in the $L^2$ norm instead of $H^1$. As such, we believe that an interesting future direction for the numerical analysis of this quadrature scheme would be to obtain sharp $L^2$ error estimates.
\section*{Acknowledgements}
Kamensky and Pasetto were supported by start-up funding from the University of California San Diego. The work of Tian was partially supported by the National Science
Foundation grant DMS-2111608.
D'Elia and Trask are supported by the Sandia National Laboratories (SNL) Laboratory-directed Research and Development program and by the U.S. Department of Energy, Office of Advanced Scientific Computing Research under the Collaboratory on Mathematics and Physics-Informed Learning Machines for Multiscale and Multiphysics Problems (PhILMs) project. SNL is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract {DE-NA0003525}. This paper, SAND2022-0834, describes objective technical results and analysis. Any subjective views or opinions that might be expressed in this paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
% arXiv:2201.12391: "Efficient optimization-based quadrature for variational discretization of nonlocal problems" (math.NA; 2022-02-01)
% arXiv:2207.01320: "City products of right-angled buildings and their universal groups"
\begin{abstract}
We introduce the notion of city products of right-angled buildings, a construction that produces a new right-angled building out of smaller ones. More precisely, if $M$ is a right-angled Coxeter diagram of rank $n$ and $\Delta_1,\dots,\Delta_n$ are right-angled buildings, then we construct a new right-angled building $\Delta := \mathrm{cityproduct}_M(\Delta_1,\dots,\Delta_n)$. We can recover the buildings $\Delta_1,\dots,\Delta_n$ as residues of $\Delta$, but we can also construct a skeletal building of type $M$ from $\Delta$ that captures the large-scale geometry of $\Delta$. We then proceed to study universal groups for city products of right-angled buildings, and we show that the universal group of $\Delta$ can be expressed in terms of the universal groups for the buildings $\Delta_1,\dots,\Delta_n$ and the structure of $M$. As an application, we show the existence of many examples of pairs of different buildings of the same type that admit (topologically) isomorphic universal groups, thereby vastly generalizing a recent example by Lara Beßmann.
\end{abstract}
\section{Introduction}
A building is called \emph{right-angled} if its Coxeter group is right-angled, which means that the only values occurring in its Coxeter matrix are $1$, $2$ and $\infty$.
The prototypical example is the case where the Coxeter matrix has rank $2$ with a label $\infty$, in which case the building is a tree. In general, the behavior of right-angled buildings is somewhat comparable to that of trees, but in a combinatorially much more complicated (and therefore much more interesting) way.
The first systematic study of right-angled buildings is by Fr\'ed\'eric Haglund and Fr\'ed\'eric Paulin \cite{haglundpaulin}, who showed the existence and uniqueness of right-angled buildings for any set of parameters (see \cref{thm:rabsexist} below).
Later, right-angled buildings have been used to construct interesting examples of \emph{lattices}, as in the work of Angela Kubena, Anne Thomas and Kevin Wortman \cite{thomas1,thomas3,thomas2}.
Our motivation for studying right-angled buildings, initiated by Pierre-Emmanuel Caprace in \cite{caprace2014}, is the connection with totally disconnected locally compact groups. More precisely, the automorphism group of a right-angled building is always totally disconnected with respect to the permutation topology, and if the building is locally finite, then the automorphism group is also locally compact. Local compactness fails in general, but these automorphism groups contain many interesting subgroups, namely the so-called \emph{universal groups}, which can still be locally compact even if the building is not locally finite.
These universal groups were first introduced and studied for trees by Marc Burger and Shahar Mozes in their seminal paper \cite{burgermozes}.
This concept has been generalized to right-angled buildings by the second author in joint work with Ana C. Silva and Koen Struyve in \cite{silva1,silva2} in the locally finite case, and has been further generalized and studied without this assumption in our paper \cite{bossaert}, focussing on topological properties.
\medskip
For some right-angled buildings, the large-scale geometry looks like a tree; see, for instance, \cref{fig:coxeterrab} below. This raises the question of whether it is possible, in these cases, to somehow reverse the process, i.e., whether we can start from a tree and obtain a more complicated right-angled building by ``inserting'' more complicated blocks at each vertex of the tree.
This idea gave rise to the construction that we introduce and study in this paper. We call it the \emph{city product} of buildings, as it is a way to construct larger objects out of a given number of buildings, guided by the rough structure of yet another right-angled diagram.
More precisely, if $M$ is a right-angled Coxeter diagram of rank $n$ and $\Delta_1,\dots,\Delta_n$ are right-angled buildings, then we construct a new right-angled building $\Delta := \ensuremath\operatorname{\faMapMarker}_M(\Delta_1,\dots,\Delta_n)$. We can recover the buildings $\Delta_1,\dots,\Delta_n$ as residues of $\Delta$, but we can also construct a \emph{skeletal building} $\Phi$ of type $M$ from $\Delta$ that captures the large-scale geometry of $\Delta$.
Constructing this building $\Phi$ is not difficult, but it turns out to be far from trivial to show that it is indeed a building. This is the content of \cref{prop:cityproductskeletalstuff}, which relies on the new notions of \emph{weak homotopies} and \emph{parkour maps} that we have introduced for this purpose.
\medskip
It turns out that the universal groups for these city products can be described as the universal group of this skeletal building $\Phi$ with respect to universal groups for each of the smaller buildings $\Delta_1,\dots,\Delta_n$; this is the content of \cref{thm:universalcityproduct}.
\medskip
In a recent preprint \cite{bessmann}, Lara Be{\ss}mann has shown the existence of pairs of different right-angled buildings, both of type
\raisebox{.8ex}{%
\begin{tikzpicture}[scale=.6]
\useasboundingbox (-.2,.1) rectangle (2.2,.4);
\path (0,0) node[myvertex] (A) {}
++(1,0) node[myvertex] (B) {}
++(1,0) node[myvertex] (C) {};
\draw[myedge] (A) -- node[above] {\footnotesize $\infty$} (B) -- node[above] {\footnotesize $\infty$} (C);
\end{tikzpicture}},
admitting universal groups that are topologically isomorphic.
Her method relies on the notion of tree-wall trees from \cite{silva1} and only works for star-shaped diagrams (see \itemref{ex:isom}{1}).
We show that this can be interpreted in terms of city products, which allows us to produce many more examples of such pairs. This is the content of \cref{thm:application}.
\subsection*{Acknowledgment.}
The first author has been supported by the UGent BOF PhD mandate BOF17/DOC/274.
\section{Preliminaries}
\subsection{Coxeter systems}
\begin{definition}\label{def:cox}
\begin{enumerate}
\item
Let $I$ be any index set and $M$ a function
\[M\colon I\times I\to \mathbb N\cup\{\infty\}\colon (i,j)\mapsto m_{ij}\]
satisfying $m_{ii}=1$, $m_{ij}\geq 2$, and $m_{ij}=m_{ji}$ for all $i\neq j\in I$. Then the \emph{Coxeter group of type $M$} is the group defined by the presentation
\[W = \bigl\langle s_i \mathrel{\bigm|} \text{$(s_i s_j)^{m_{ij}} = 1$ for all $i,j\in I$}\bigr\rangle.\]
When $m_{ij}=\infty$, this means that no relation on $s_is_j$ should be imposed. Note that the assumption that $m_{ii}=1$ for all $i\in I$ immediately implies that the generators $s_i$ are involutions. Additionally, note that when $m_{ij}=2$, the generators $s_i$ and $s_j$ commute.
Together with the generating set $S=\{s_i\mid i\in I\}$, the pair $(W,S)$ is called the \emph{Coxeter system of type $M$}. The \emph{rank} of $(W,S)$ is the cardinality of $I$.
We can represent $M$ by means of its \emph{Coxeter matrix $(m_{ij})$}, or more commonly its \emph{Coxeter diagram}: the nodes of the diagram are the elements of $I$ (sometimes with explicit labels), and two nodes are connected by a decorated edge according to the following rules:
\[\newcommand\coxeterdiagram[2]{\begin{tikzpicture}[x=6mm]
\node[myvertex,label=above:$i$] (A) at (-1,0) {};
\node[myvertex,label=above:$j$] (B) at (1,0) {};
\path[myedge] #2;
\node[yshift=-6mm] {#1};
\end{tikzpicture}}
\coxeterdiagram{$m_{ij}=2$}{}
\qquad\qquad\coxeterdiagram{$m_{ij}=3$}{(A) -- (B)}
\qquad\qquad\coxeterdiagram{$m_{ij}=4$}{(A.15) -- (B.165) (A.345) -- (B.195)}
\qquad\qquad\coxeterdiagram{$m_{ij}\geq 5$}{(A) -- node[above] {\smash{\scriptsize $m_{ij}$}} (B)}\]
\item
We call a Coxeter system $(W,S)$ \emph{irreducible} if the underlying graph of its Coxeter diagram is connected, and \emph{reducible} otherwise.
\item\label{def:cox:RA}
We call a Coxeter system $(W,S)$ \emph{right-angled} if $m_{ij}\in\{2,\infty\}$ for all $i\neq j$.
\end{enumerate}
\end{definition}
In general, non-isomorphic Coxeter systems may have isomorphic Coxeter groups, but this cannot occur for right-angled Coxeter systems:
\begin{theorem}
\label{thm:racgrigid}
If a right-angled Coxeter group $W$ admits two Coxeter systems $(W,S)$ and $(W,S')$, then these Coxeter systems are isomorphic (i.e.~there is a diagram-preserving bijection $S\to S'$).
\end{theorem}
\begin{proof}
We refer to \cite{radcliffe} or \cite{hosaka}.
\end{proof}
\begin{definition}
\label{def:evaluationmorphism}
Let $(W,S)$ be a Coxeter system over some index set $I$.
\begin{enumerate}
\item
We will write $I^*$ for the free monoid over $I$. The elements of $I^*$ will be called \emph{words}.
\item
There is a natural surjective \emph{evaluation morphism} of monoids
\[\epsilon\colon I^* \to W\colon i\mapsto s_i.\]
\end{enumerate}
\end{definition}
\begin{definition}
For every $i\neq j$ such that $m_{ij}$ is finite, define in $I^*$ the word
\[p(i,j) = \begin{cases}
(ij)^k & \text{if $m_{ij}=2k$ is even,}\\
j(ij)^k & \text{if $m_{ij}=2k+1$ is odd.}
\end{cases}\]
In other words, $p(i,j)$ is the word with $m_{ij}$ alternating letters $i$ and $j$, ending in $j$. When $m_{ij}=\infty$, $p(i,j)$ remains undefined.
\end{definition}
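A small Python helper (ours, for illustration) that builds the word $p(i,j)$ directly from the case distinction above:

```python
def p(i, j, m):
    """The alternating word p(i,j): m letters drawn from {i, j}, ending in j."""
    if m % 2 == 0:                     # m = 2k: (ij)^k
        return (i, j) * (m // 2)
    return (j,) + (i, j) * (m // 2)    # m = 2k+1: j(ij)^k

print(p("i", "j", 3), p("i", "j", 4))  # ('j', 'i', 'j') ('i', 'j', 'i', 'j')
```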
\begin{definition}
\label{def:homotopy}
Let $i,j\in I$ and $w_1,w_2\in I^*$.
\begin{enumerate}
\item An \emph{elementary homotopy} (or also a \emph{braid relation}) is a transformation of a word $w_1\,p(i,j)\,w_2$ into the word $w_1\,p(j,i)\,w_2$.
\item Two words $w$ and $w'$ are \emph{homotopic} if $w$ can be transformed into $w'$ by a sequence of elementary homotopies; we denote this by $w\simeq w'$. Clearly, homotopy is an equivalence relation and preserves the length of the words.
\item An \emph{elementary contraction} is a transformation of a word $w_1\,ii\,w_2$ into the word $w_1\,w_2$.
\item An \emph{elementary expansion} is a transformation of a word $w_1\,w_2$ into a word $w_1\,ii\,w_2$.
\item A word is called \emph{reduced} if it is not homotopic to a word of the form $w_1\,ii\,w_2$ (for some $i\in I$).
\item Two words $w$ and $w'$ are called \emph{equivalent} if $w$ can be transformed into $w'$ by a sequence of elementary homotopies, contractions, and expansions. Clearly, every equivalence class contains some reduced word.
\end{enumerate}
\end{definition}
\begin{theorem}
\label{thm:homotopicstuff}
\begin{enumerate}
\item\label{thm:homotopicstuff:1} Two words $w$ and $w'$ are equivalent if and only if $\epsilon(w)=\epsilon(w')$.
\item\label{thm:homotopicstuff:2} Two reduced words $w$ and $w'$ are equivalent if and only if they are homotopic.
\item\label{thm:homotopicstuff:3} Let $w$ be a reduced word and let $i\in I$. If $iw$ (or $wi$) is not reduced, then $w$ is homotopic to a word that begins (or ends, respectively) with $i$.
\end{enumerate}
\end{theorem}
\begin{proof}
By the defining relations $(s_is_j)^{m_{ij}} = 1$ in the presentation, $p(i,j)$ and $p(j,i)$ have the same image under $\epsilon$, and $\epsilon(ii)$ is the identity. Statement \ref{thm:homotopicstuff:1} follows immediately. For \ref{thm:homotopicstuff:2}, we refer to \cite[Theorem 2.11]{ronan}. Statement \ref{thm:homotopicstuff:3} is \cite[Corollary 2.13]{ronan}.
\end{proof}
\subsection{Chamber systems}
Our approach is based on \cite{ronan}.
\begin{definition}
Let $I$ be any index set. A \emph{chamber system over $I$} is a set $\Delta$ together with, for every $i\in I$, an equivalence relation called \emph{$i$-adjacency}. The elements of $\Delta$ are called \emph{chambers}. If two chambers $c$ and $d$ are $i$-adjacent, we write $c\sim_i d$, or simply $c\sim d$ if we do not want to stress the adjacency type. The cardinality $|I|$ is called the \emph{rank} of $\Delta$. In this paper, the rank will always be finite.
We will usually say that ``$\Delta$ is a chamber system'' when the equivalence relations on~$\Delta$ are clear from the context.
\end{definition}
\begin{definition}
Let $\Delta$ be a chamber system over $I$.
A \emph{gallery $\gamma$} in $\Delta$ is a finite sequence of pairwise adjacent chambers
\[c_0 \sim_{i_1} c_1 \sim_{i_2} {\cdots} \sim_{i_n} c_n\]
for certain $i_1,\dots,i_n\in I$. We call the word $i_1\mathbin{\cdots} i_n \in I^*$ the \emph{type} of~$\gamma$, and the integer $n$ the \emph{length} of $\gamma$. If there is no strictly shorter gallery from $c_0$ to $c_n$, then we call $\gamma$ a \emph{minimal} gallery.
Chamber systems come equipped with a natural metric
\[\dist\colon \Delta\times\Delta\to\mathbb N\cup\{\infty\}\]
defined by declaring $\dist(c,d)$ to be the minimal length of all galleries joining $c$~and~$d$ (or $\infty$ if there is no such gallery). It is clear that this distance function is positive-definite, symmetric, and satisfies the triangle inequality.
\end{definition}
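The gallery distance just introduced is the graph metric on chambers, so it can be computed by breadth-first search. A small Python sketch (a toy encoding of ours, with the chamber system given as a neighbor function):

```python
import math
from collections import deque

def dist(neighbors, c, d):
    """Gallery distance in a chamber system: the length of a shortest
    gallery from c to d, or infinity if no gallery exists (BFS)."""
    seen, queue = {c}, deque([(c, 0)])
    while queue:
        x, n = queue.popleft()
        if x == d:
            return n
        for y in neighbors(x):
            if y not in seen:
                seen.add(y)
                queue.append((y, n + 1))
    return math.inf

# Toy thin example: chambers {0,1}^2, adjacency = flip one coordinate.
flip = lambda c: [(1 - c[0], c[1]), (c[0], 1 - c[1])]
```

On the toy example one gets $\dist((0,0),(1,1)) = 2$, realized by a gallery of type $12$ or $21$.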
\begin{definition}
Let $J\subseteq I$. A subset $C\subseteq\Delta$ is called \emph{$J$-connected} if any two chambers in $C$ can be joined by a gallery of type in $J^*$. A \emph{residue of type $J$}, or simply a \emph{$J$-residue}, is a $J$-connected component of $\Delta$. A \emph{panel of type $j$}, or simply a \emph{$j$-panel}, is a residue of type $\{j\}$. The set of all $J$-residues of the chamber system $\Delta$ will be denoted by $\Res_J(\Delta)$.
Note that each $J$-residue is, in its own right, a connected chamber system over the index set $J$.
\end{definition}
\begin{definition}
A chamber system is called \emph{thin} if every panel contains exactly two chambers, and \emph{thick} if every panel contains at least three chambers. (Panels containing only a single chamber are degenerate cases that should not occur in any reasonable application.)
\end{definition}
Note that a chamber system might be neither thin nor thick.
\begin{definition}
A map $\varphi\colon\Delta_1\to\Delta_2$ between two chamber systems is a \emph{morphism} if $\varphi(c) \sim \varphi(d)$ in $\Delta_2$ whenever $c \sim d$ in $\Delta_1$. As usual, an \emph{isomorphism} is a bijective morphism, and an \emph{automorphism} is an isomorphism to the same chamber system. Assuming that $\Delta_1$ and $\Delta_2$ have the same index set, a morphism is \emph{type-preserving} if $\varphi(c) \sim_i \varphi(d)$ whenever $c \sim_i d$. In this paper, we shall always assume morphisms to be type-preserving.
The set of all automorphisms of a chamber system $\Delta$ forms a group, denoted by $\Aut(\Delta)$.
\end{definition}
\begin{definition}
Let $(W,S)$ be a Coxeter system of type $M$ over $I$. Define a chamber system over $I$ with the elements of $W$ as chambers, and declare two group elements $v$ and $w$ to be $i$-adjacent if and only if $v = w$ or $vs_i = w$ (so that $i$-adjacency is indeed an equivalence relation).
The resulting chamber system is called the \emph{Coxeter complex of type $M$}.
Coxeter complexes are always connected and thin: every chamber is $i$-adjacent to exactly one other chamber for every $i\in I$.
Observe that the Coxeter complex associated to a Coxeter system $(W,S)$ is nothing more than the (undirected) Cayley graph of $W$ with respect to the generating set $S$.
\end{definition}
\begin{theorem}\label{thm:coxcpx}
A gallery in a Coxeter complex is minimal if and only if its type is reduced.
\end{theorem}
\begin{proof}
See \cite[Theorem 2.11]{ronan}.
\end{proof}
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale=.8]
\pgfmathsetmacro{\s}{1+sqrt(2)/2}
\foreach\P/\M in {(0,0)/C, (-\s,-\s)/LL, (-\s,\s)/UL, (\s,\s)/UR, (\s,-\s)/LR}
\foreach\Q/\N in {(-.5,-.5)/LL, (-.5,.5)/UL, (.5,.5)/UR, (.5,-.5)/LR}
\path \P +\Q node[myvertex] (\M-\N) {};
\begin{scope}[myedge,ugentgreen]
\draw[-dots] (LL-LR) -- ++(285:.4);
\draw[-dots] (LL-LL) -- ++(225:.4);
\draw[-dots] (LL-UL) -- ++(165:.4);
\draw[-dots] (UL-LL) -- ++(195:.4);
\draw[-dots] (UL-UL) -- ++(135:.4);
\draw[-dots] (UL-UR) -- ++(75:.4);
\draw[-dots] (UR-UL) -- ++(105:.4);
\draw[-dots] (UR-UR) -- ++(45:.4);
\draw[-dots] (UR-LR) -- ++(345:.4);
\draw[-dots] (LR-UR) -- ++(15:.4);
\draw[-dots] (LR-LR) -- ++(315:.4);
\draw[-dots] (LR-LL) -- ++(255:.4);
\draw (C-LL) -- (LL-UR) (C-UL) -- (UL-LR) (C-UR) -- (UR-LL) (C-LR) -- (LR-UL);
\end{scope}
\draw[myedge,ugentred]
\foreach\M in {C, LL, UL, UR, LR} {(\M-LL) -- (\M-LR) (\M-UL) -- (\M-UR)};
\draw[myedge,ugentblue]
\foreach\M in {C, LL, UL, UR, LR} {(\M-LL) -- (\M-UL) (\M-LR) -- (\M-UR)};
\end{tikzpicture}
\par\bigskip
\begin{tikzpicture}
\path (0,0) node[myvertex,ugentblue,label=below:$1$] (A) {}
++(1,0) node[myvertex,ugentgreen,label=below:$2$] (B) {}
++(1,0) node[myvertex,ugentred,label=below:$3$] (C) {};
\draw[myedge] (A) -- node[above] {$\infty$} (B) -- node[above] {$\infty$} (C);
\end{tikzpicture}
\caption{A right-angled Coxeter complex}
\label{fig:coxeterrab}
\end{figure}
\subsection{Right-angled buildings}
\begin{definition}
Let $(W,S)$ be a Coxeter system of type $M$ over some index set~$I$. A \emph{building $(\Delta,\delta)$ of type $M$} is a chamber system $\Delta$ over $I$ such that every panel contains at least two chambers, equipped with a map $\delta\colon\Delta\times\Delta\to W$ satisfying the following property for every reduced word $w\in I^*$:
\[\text{$\delta(c,d) = \epsilon(w)$ if and only if $c$ and $d$ can be joined by a gallery of type $w$.}\]
Such a gallery is automatically minimal by \cref{thm:coxcpx}. In particular, the distance between two chambers $c$ and $d$ is exactly the length of $\delta(c,d)$ in the word metric of~$W$ (w.r.t.~generating set $S$).
The group $W$ is called the \emph{Weyl group} of the building, and the map $\delta$ is called the \emph{$W$\!-distance} or \emph{Weyl distance function}.
We shall usually identify the building with its chamber set and abbreviate $(\Delta,\delta)$ to $\Delta$.
\end{definition}
\begin{definition}
A building $\Delta$ is called \emph{right-angled} if its underlying Coxeter system $(W,S)$ is right-angled (as defined in \itemref{def:cox}{RA}).
\end{definition}
\begin{definition}
A building $\Delta$ over $I$ is called \emph{semiregular with parameters $(q_i)_{i\in I}$} if for each $i\in I$, all panels of type $i$ have the same (possibly infinite) cardinality $q_i\geq 2$. Note that the thin buildings are precisely the semiregular buildings with parameters $q_i=2$ for all $i$.
\end{definition}
The following result is due to Haglund and Paulin, although they point out that it was already known to Mark Globus (unpublished), to Michael Davis and Gabor Moussong, and to Tadeusz Januszkiewicz and Jacek \'Swi\c{a}tkowski.
\begin{theorem}
\label{thm:rabsexist}
For any choice of (possibly infinite) cardinal numbers $(q_i)_{i\in I}$ with $q_i \geq 2$, there exists a semiregular right-angled building $\Delta$ with these parameters. Moreover, $\Delta$ is unique up to isomorphism, the automorphism group $\Aut(\Delta)$ acts transitively on the chambers, and every automorphism of a residue of $\Delta$ extends to an automorphism of $\Delta$.
\end{theorem}
\begin{proof}
See \cite[Proposition 1.2]{haglundpaulin}.
\end{proof}
\subsection{Colorings and implosions of right-angled buildings}
In order to keep track of the local behavior of a building automorphism, it is useful to introduce colorings of the building.
Throughout this section, $\Delta$ is a semiregular right-angled building with parameters $(q_i)_{i\in I}$.
The following notion of legal colorings was introduced in \cite[Definition 2.42]{silva1}.
\begin{definition}
\label{def:coloring}
Consider a set $\Omega_i$ of cardinality $q_i$ for each $i\in I$, the elements of which we call \emph{$i$-colors} or \emph{$i$-labels}. A \emph{legal coloring} of $\Delta$ (with color sets $\Omega_i$) is a map
\[\lambda\colon \Delta\to\prod_{i\in I} \Omega_i\colon c\mapsto(\lambda_i(c))_{i\in I}\]
satisfying the following properties for every $i\in I$ and for every $i$-panel $\P$:
\begin{enumerate}
\item the restriction $\restrict{\lambda_i}{\P}\colon \P\to\Omega_i$ is a bijection;
\item for every $j\neq i$, the restriction $\restrict{\lambda_j}{\P}\colon \P\to\Omega_j$ is a constant map.
\end{enumerate}
\end{definition}
Such a legal coloring is essentially unique:
\begin{proposition}
\label{prop:colortransformation}
Let $\lambda$ and $\lambda'$ be two legal colorings of a right-angled building $\Delta$ using identical color sets. Let $c$ and $c'$ be two chambers such that $\lambda(c) = \lambda'(c')$. Then there exists an automorphism $g\in\Aut(\Delta)$ such that $g\acts c=c'$ and $\lambda'\circ g = \lambda$.
\end{proposition}
\begin{proof}
See \cite[Proposition 2.44]{silva1}.
\end{proof}
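For thin buildings a legal coloring can be written down explicitly and the two defining properties verified mechanically. A small Python sketch for the thin building of type $A_1\times A_1$ (a toy encoding of ours, with color sets $\Omega_1 = \Omega_2 = \{0,1\}$):

```python
from itertools import product

# Thin right-angled building of type A_1 x A_1: chambers are pairs in {0,1}^2,
# and the i-panel of c consists of the two chambers agreeing with c outside i.
chambers = list(product((0, 1), repeat=2))

def panel(c, i):
    return [d for d in chambers if all(d[j] == c[j] for j in range(2) if j != i)]

lam = lambda c: c          # candidate coloring: lambda_i(c) = c_i

def is_legal(lam):
    for i in range(2):
        for c in chambers:
            P = panel(c, i)
            if sorted(lam(d)[i] for d in P) != [0, 1]:           # bijection onto Omega_i
                return False
            for j in range(2):
                if j != i and len({lam(d)[j] for d in P}) != 1:  # constant for j != i
                    return False
    return True
```

Here the identity coloring $\lambda_i(c) = c_i$ is legal, while $c\mapsto(c_0,c_0)$ fails the constancy requirement on panels.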
We now recall the notion of an implosion of a right-angled building, introduced in \cite[Definition 5.2]{bossaert}.
\begin{definition}
\label{def:implosion1}
Let $\Delta$ be a semiregular right-angled building over $I$ and let $\lambda$ be a legal coloring of $\Delta$ (using color sets $\Omega_i$).
For each $i \in I$, consider an equivalence relation $\equiv_i$ on $\Omega_i$, let $\Omega'_i := \Omega_i/{\equiv_i}$ and set $q'_i := \lvert \Omega'_i \rvert$. For each $\lambda_i \in \Omega_i$, we write $[\lambda_i]$ for the corresponding element of $\Omega'_i$.
Let
\[ I' = \{i\in I \mid \text{${\equiv_i}$ is not the universal relation}\} = \{ i \in I \mid q'_i \neq 1 \}. \]
Define a new semiregular right-angled building $\Delta'$ over $I'$ whose diagram is the restriction of the diagram of $\Delta$ to $I'$, with parameters $q'_i$ (for every $i\in I'$), and with a legal coloring $\lambda'$ using the quotients $\Omega'_i$ as the sets of $i$-colors.
\end{definition}
Recall that a map $f\colon X\to Y$ between metric spaces is called \emph{nonexpansive} if it does not increase distances, i.e., if $\dist_Y(f(x_1),f(x_2)) \leq \dist_X(x_1,x_2)$ for every pair $(x_1,x_2)$ of points in $X$.
\begin{proposition}
\label{prop:implosion}
Let $\Delta$, $\lambda$, $\equiv_i$ and $\Delta'$ be as in \cref{def:implosion1}. Let $c_0\in\Delta$ be any chamber and let $c'_0\in\Delta'$ be such that $\lambda'_i(c'_0) = [\lambda_i(c_0)]$ for every $i\in I'$. Then there exists a unique nonexpansive epimorphism $\tau$ of chamber systems from $\Delta$ onto $\Delta'$ mapping $c_0$ to $c'_0$ such that $\lambda'_i(\tau(c)) = [\lambda_i(c)]$ for all $c\in\Delta$.
\end{proposition}
\begin{proof}
See \cite[Proposition 5.1 and Remark 5.4]{bossaert}.
\end{proof}
\begin{definition}
We call the pair $(\Delta', \tau)$ from \cref{prop:implosion} the \emph{implosion} of $\Delta$ with \emph{centre} $c_0$ (with respect to the relations $\equiv_i$).
\end{definition}
\begin{corollary}
\label{cor:residuecoloring}
Let $\Delta$ be a semiregular right-angled building of type $M$ over $I$, let $J\subseteq I$, and let $\Gamma$ be the semiregular building of type $M_J$ over $J$ with the same parameters as $\Delta$. Then there is a map $\varphi_J\colon\Delta\to\Gamma$ with the following properties:
\begin{enumerate}
\item for every residue $\ensuremath\mathcal{R}$ of type $J$, the restriction $\restrict{\varphi_J}{\ensuremath\mathcal{R}}$ is an isomorphism;
\item for every residue $\ensuremath\mathcal{R}$ of type $I\setminus J$, the restriction $\restrict{\varphi_J}{\ensuremath\mathcal{R}}$ is a constant map.
\end{enumerate}
\end{corollary}
\begin{proof}
This follows from \cref{prop:implosion} by taking as equivalence relations $\equiv_i$ either the equality relation if $i\in J$ or the universal relation if $i\notin J$.
\end{proof}
\subsection{Universal groups}
Universal groups for right-angled buildings were first introduced in \cite{silva1} and further studied in the locally finite case in \cite{silva2}.
Their topological properties in the general case have been further investigated in \cite{bossaert}.
\begin{definition}
\label{def:universal}
Let $\Delta$ be a semiregular right-angled building over $I$, with parameters $(q_i)_{i \in I}$.
For each $i$, let $\Omega_i$ be a color set of size $q_i$ and let $\lambda$ be a corresponding legal coloring of $\Delta$.
\begin{enumerate}
\item
Consider an automorphism $g\in\Aut(\Delta)$ and an arbitrary $i$-panel $\P$. Then we define the \emph{local action of $g$ at $\P$} as the map
\[\sigma_\lambda(g,\P) = \restrict{\lambda_i}{g\P} \circ \restrict{g}{\P} \circ \restrict[-1]{\lambda_i}{\P}, \]
which is a permutation of $\Omega_i$ by definition of $\lambda$.
In other words, the local action $\sigma_\lambda(g,\P)$ is the map that makes the following diagram commute.
\[\begin{tikzcd}[dims={7em}{4em}]
\P \ar[r,"g"] \ar[d,"\lambda_i"]
& g\P \ar[d,"\lambda_i"]\\
\Omega_i \ar[r,"{\sigma_\lambda(g,\,\P)}"] & \Omega_i
\end{tikzcd}\]
\item
Let $\ensuremath\boldsymbol{F}\@Fhack$ be a collection of permutation groups $F_i\leq\Sym(\Omega_i)$, indexed by $i\in I$. The \emph{universal group} of $\ensuremath\boldsymbol{F}\@Fhack$ over $\Delta$ (with respect to $\lambda$) is the group
\[
\U_\Delta^\lambda(\ensuremath\boldsymbol{F}\@Fhack) = \bigl\{g\in\Aut(\Delta) \bigm| \sigma_\lambda(g,\P)\in F_i \text{ for each $i\in I$ and each $\P\in\Res_i(\Delta)$}\bigr\}.
\]
In words, $\U_\Delta^\lambda(\ensuremath\boldsymbol{F}\@Fhack)$ is the group of automorphisms that locally act like permutations in $F_i$. We hence call the groups $F_i$ the \emph{local groups} and we refer to the collection $\ensuremath\boldsymbol{F}\@Fhack$ as the \emph{local data} for the universal group.
\end{enumerate}
\end{definition}
\begin{remark}\label{rem:Unotation}
\begin{enumerate}
\item
When the coloring $\lambda$ is clear from the context, we will usually omit the explicit reference to $\lambda$ and simply use the notation $\sigma(g,\P)$ and $\U_\Delta(\ensuremath\boldsymbol{F}\@Fhack)$ instead.
In fact, the choice of $\lambda$ is irrelevant, since different colorings give rise to conjugate subgroups of $\Aut(\Delta)$; see \cite[Proposition 3.7(1)]{silva1}\footnote{The statement of \cite[Proposition 3.7(1)]{silva1} is for \emph{locally finite} right-angled buildings only, but the proof continues to hold for arbitrary right-angled buildings, as pointed out already in \cite[\S 2.3]{bossaert}.}.
\item\label{rem:Unotation:2}
When each of the groups $F_i$ in the local data $\ensuremath\boldsymbol{F}\@Fhack$ is given as a permutation group acting on some set $\Omega_i$ which is clear from the context, then we will also use the notation $\U_M(\ensuremath\boldsymbol{F}\@Fhack)$ for $\U_\Delta(\ensuremath\boldsymbol{F}\@Fhack)$, where $\Delta$ is then the unique right-angled building of type $M$ over $I$ with parameters $(\lvert \Omega_i \rvert)_{i \in I}$.
\end{enumerate}
\end{remark}
The universal groups come equipped with a natural topology, namely the \emph{permutation topology}, which is defined by taking as an identity neighborhood basis the collection of all pointwise stabilizers of finite subsets of $\Delta$.
\medskip
The following observation is worth mentioning, because this is precisely the type of result we will be generalizing later.
\begin{lemma}
\label{lem:universalreducible}
Let $\Delta$ be a reducible right-angled building over $I$.
Let $J_1,\dots, J_m$ be the connected components of the diagram of $\Delta$. Then the universal group $\U_\Delta(\ensuremath\boldsymbol{F}\@Fhack)$ splits as~a direct product
\[\U_\Delta(\ensuremath\boldsymbol{F}\@Fhack) \cong \U_{\ensuremath\mathcal{R}_1}\!\left(\textstyle\restrict\ensuremath\boldsymbol{F}\@Fhack{J_1}\right) \times \dotsm \times \U_{\ensuremath\mathcal{R}_m}\!\left(\textstyle\restrict\ensuremath\boldsymbol{F}\@Fhack{J_m}\right)\!,\]
where each $\ensuremath\mathcal{R}_\ell$ is a residue of type $J_\ell$.
\end{lemma}
\begin{proof}
Since $\Delta$ is isomorphic to the direct product $\ensuremath\mathcal{R}_1 \times \dotsm \times \ensuremath\mathcal{R}_m$ and has automorphism group $\Aut(\Delta) \cong \Aut(\ensuremath\mathcal{R}_1) \times \dotsm \times \Aut(\ensuremath\mathcal{R}_m)$, this follows immediately from the definition.
\end{proof}
\section{City products}
\label{sec:cityproduct}
In this section, we develop a construction for creating new right-angled buildings of a higher rank by gluing together lower rank buildings along another diagram. Our construction is inspired by the observation that the large-scale geometry of certain right-angled buildings (such as \cref{fig:coxeterrab}) resembles that of a tree; the city product structure explains this behavior in a broad sense.
\subsection{Weak homotopies}
We start with some combinatorics, the goal of which will become clear later on.
\begin{definition}
\label{def:weakhomotopy}
Let $i,j\in I$ with $m_{ij}=2$ and define the set
\[P(i,j) = \bigl\{w\in \{i,j\}^* \bigm| \text{$w$ contains at least one $i$ and at least one $j$}\bigr\}.\]
A \emph{weak homotopy} is a transformation of a word $w_1\,p\,w_2$ into a word $w_1\,p'\,w_2$ where $w_1,w_2\in I^*$ and $p,p'\in P(i,j)$. Two words $w$ and $w'$ are \emph{weakly homotopic} if $w$ can be transformed into $w'$ by a sequence of weak homotopies.
\end{definition}
\begin{definition}
Let $\prec$ be a total order on $I$. Endow $I^*$ with the induced lexico\-graphical order. Then every word $w\in I^*$ is homotopic to a unique lexicographically minimal word that we call the \emph{normal form} of $w$.
\end{definition}
\begin{proposition}
\label{prop:normalform}
Let $\prec$ be a total order on $I$.
\begin{enumerate}
\item If two words are homotopic, then their normal forms are equal.
\item\label{prop:normalform:2} A word is reduced if and only if its normal form contains no consecutive duplicate letters.
\item\label{prop:normalform:3} The normal forms of weakly homotopic words are equal up to consecutive duplicate letters.
\end{enumerate}
\end{proposition}
\begin{proof}
Claim (i) follows immediately from the definitions. For (ii), let $w\simeq w_1\,ii\,w_2$ and assume that the normal form contains no subword $ii$. Mark the two letters $i$ in $w_1\,ii\,w_2$ and write the normal form as $n_1\,i\,n_2\,i\,n_3$ (where the two letters $i$ are the marked ones). Then $n_2$ is not the empty word, so let $k$ be its first letter; by assumption, $k\neq i$. By homotopy, all letters in $n_2$ are contained in $\{i\} \cup \{i\}^\perp$. It follows that the normal form cannot be lexicographically minimal: if $i\prec k$, then the homotopic word $n_1\,ii\,n_2\,n_3$ is lexicographically smaller, and if $i\succ k$, then $n_1\,n_2\,ii\,n_3$ is smaller. Claim (ii) follows. For claim (iii), it suffices to note that the effect of a weak homotopy of a word on its normal form is that a subword $i^mj^n$ with $m\geq 1$, $n\geq 1$, is replaced by another such word.
\end{proof}
\begin{corollary}
\label{cor:weaklyhomotopicreduced}
If two reduced words $w,w'\in I^*$ are weakly homotopic, then they are homotopic.
\end{corollary}
\begin{proof}
Letting $\prec$ be any total order, this follows readily from \cref{prop:normalform}\ref{prop:normalform:2} and \ref{prop:normalform:3}.
\end{proof}
\subsection{City product of diagrams}
Now let us go back to the building realm and define an operation on the diagrams first.
\begin{definition}
Let $M$ be a diagram of rank $n$ over the index set $\{1,\dots,n\}$, and for each $\ell \in \{ 1,\dots,n \}$, let $M_\ell$ be a diagram over an index set $I_\ell$. Then we define a new diagram as follows:
\begin{enumerate}
\item the index set is the disjoint union $I = \bigsqcup_{\ell=1}^n I_\ell$;
\item for every pair of elements $i\in I_\ell$ and $i'\in I_{\ell'}$, we set
\[ m_{ii'} := \begin{cases}
m_{ii'} \text{ (considered in $M_\ell$)} & \text{ if } \ell = \ell' ; \\
m_{\ell\ell'} \text{ (considered in $M$)} & \text{ if } \ell \neq \ell' .
\end{cases} \]
\end{enumerate}
We call this the \emph{city product} of the diagrams $M_1,\dots,M_n$ over $M$ and denote it by $\ensuremath\operatorname{\faMapMarker}_{M}(M_1,\dots,M_n)$. Clearly its rank is $\sum_{\ell=1}^n |I_\ell|$.
\end{definition}
Notice that the special case of a city product over an empty diagram (i.e., $m_{ij}=2$ for all $1\leq i\neq j\leq n$) results in nothing more than the disjoint union of the diagrams $M_1,\dots,M_n$. Two more examples are given in \cref{fig:cityprod}. (More examples will occur in \cref{ex:isom} later.) Our choice for the symbol $\ensuremath\operatorname{\faMapMarker}$ for city products is inspired by the example from \cref{fig:cityprod:B}.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{.46\textwidth}
\centering
\begin{tikzpicture}[scale=.85]
\path (0,0) node[myvertex,ugentblue] (A1) {}
+(1,0) node[myvertex,ugentblue] (A2) {}
++(0,1) node[myvertex,ugentblue] (A3) {}
+(1,0) node[myvertex,ugentblue] (A4) {}
++(.5,1) node[myvertex,ugentblue] (A5) {}
(2,0) node[myvertex,ugentred] (B1) {}
++(0,1) node[myvertex,ugentred] (B2) {}
++(0,1) node[myvertex,ugentred] (B3) {};
\draw[myedge,ultra thick,mydarkgray,rounded corners=5pt]
(-.4,-.4) rectangle (1.4,.4) (1.4,0) -- (B1)
(-.4,.6) rectangle (1.4,1.4) (1.4,1) -- (B2)
(-.4,1.6) rectangle (1.4,2.4) (1.4,2) -- (B3);
\draw[myedge] (A1) -- node[below] {$\infty$} (A2);
\draw[myedge] (B1) -- (B2) (B2.105) -- (B3.255) (B2.75) -- (B3.285);
\end{tikzpicture}
\hspace*{1ex}$\Rightarrow$\hspace*{1ex}
\begin{tikzpicture}[scale=.85]
\path (0,0) node[myvertex,ugentblue] (A1) {}
+(1,0) node[myvertex,ugentblue] (A2) {}
++(0,1) node[myvertex,ugentblue] (A3) {}
+(1,0) node[myvertex,ugentblue] (A4) {}
++(.5,1) node[myvertex,ugentblue] (A5) {};
\draw[myedge] (A1) -- node[below] {$\infty$} (A2) -- (A3) -- (A1) -- (A4) -- (A2)
(A3.80) -- (A5.225) (A3.50) -- (A5.255) (A4.130) -- (A5.285) (A4.100) -- (A5.315);
\draw[myedge,ultra thick,mydarkgray,rounded corners=5pt] (-.4,-.4) rectangle (1.4,2.4);
\end{tikzpicture}
\caption{A general example}
\end{subfigure}
\qquad
\begin{subfigure}[b]{.46\textwidth}
\centering
\begin{tikzpicture}[scale=.85]
\path (0,0) node[myvertex,ugentblue] (A1) {}
+(1,0) node[myvertex,ugentblue] (A2) {}
+(.5,1) node[myvertex,ugentblue] (A5) {}
+(0,2) node[myvertex,ugentblue] (A3) {}
+(1,2) node[myvertex,ugentblue] (A4) {}
(2,0) node[myvertex,ugentred] (B1) {}
++(0,1) node[myvertex,ugentred] (B2) {}
++(0,1) node[myvertex,ugentred] (B3) {};
\draw[myedge,ultra thick,mydarkgray,rounded corners=5pt]
(-.4,-.4) rectangle (1.4,.4) (1.4,0) -- (B1)
(-.4,.6) rectangle (1.4,1.4) (1.4,1) -- (B2)
(-.4,1.6) rectangle (1.4,2.4) (1.4,2) -- (B3);
\draw[myedge] (A1) -- node[below] {$\infty$} (A2);
\draw[myedge] (A3) -- node[above] {$\infty$} (A4);
\draw[myedge] (B1) -- node[left] {$\infty$} (B2) -- node[left] {$\infty$} (B3);
\end{tikzpicture}
\hspace*{1ex}$\Rightarrow$\hspace*{1ex}
\begin{tikzpicture}[scale=.85]
\path (0,0) node[myvertex,ugentblue] (A1) {}
+(1,0) node[myvertex,ugentblue] (A2) {}
+(.5,1) node[myvertex,ugentblue] (A5) {}
+(0,2) node[myvertex,ugentblue] (A3) {}
+(1,2) node[myvertex,ugentblue] (A4) {};
\draw[myedge,ultra thick,mydarkgray,rounded corners=5pt] (-.4,-.4) rectangle (1.4,2.4);
\draw[myedge] (A1) -- node[left] {$\infty$} (A5) -- node[right] {$\infty$} (A2);
\draw[myedge] (A3) -- node[left] {$\infty$} (A5) -- node[right] {$\infty$} (A4);
\draw[myedge] (A1) -- node[below] {$\infty$} (A2);
\draw[myedge] (A3) -- node[above] {$\infty$} (A4);
\end{tikzpicture}
\caption{A right-angled example}
\label{fig:cityprod:B}
\end{subfigure}
\caption{City products of diagrams}
\label{fig:cityprod}
\end{figure}
\begin{lemma}
The diagram $\ensuremath\operatorname{\faMapMarker}_{M}(M_1,\dots,M_n)$ with $n\geq 2$ is irreducible if and only if $M$ is irreducible.
\end{lemma}
\begin{proof}
This follows immediately from the definition.
\end{proof}
\subsection{City product of right-angled buildings}
We can now continue to define city products of right-angled buildings.
\begin{definition}
Let $M$ be a right-angled diagram over the index set $\{1,\dots,n\}$, and for each $\ell \in \{ 1,\dots,n \}$, let $\Delta_\ell$ be a semiregular right-angled building of type $M_\ell$ over $I_\ell$. Then we define the \emph{city product} of the buildings $\{\Delta_1,\dots,\Delta_n\}$ over $M$ as follows:
\begin{enumerate}
\item the index set is the disjoint union $I = \bigsqcup_{\ell=1}^n I_\ell$;
\item the (right-angled) diagram is the city product of diagrams $\ensuremath\operatorname{\faMapMarker}_{M}(M_1,\dots,M_n)$;
\item for each $i\in I$, the parameter $q_i$ of the new building is the parameter $q_i$ of $\Delta_\ell$, where $i\in I_\ell$.
\end{enumerate}
By \cref{thm:rabsexist}, this defines a unique semiregular right-angled building (up to isomorphism), that we denote by $\ensuremath\operatorname{\faMapMarker}_M(\Delta_1,\dots,\Delta_n)$.
It will be convenient to define $\ell(i)$ (for $i \in I$) as the unique number in $\{1,\dots,n\}$ such that $i\in I_{\ell(i)}$.
\end{definition}
Note that for each $\ell \in \{ 1,\dots,n \}$, the residues of type $I_\ell\subseteq I$ of the city product $\ensuremath\operatorname{\faMapMarker}_M(\Delta_1,\dots,\Delta_n)$ are isomorphic to the original building $\Delta_\ell$. As a special case of \cref{cor:residuecoloring}, we then obtain:
\begin{lemma}
\label{lem:cityproductresidue}
Let $\Delta = \ensuremath\operatorname{\faMapMarker}_M(\Delta_1,\dots,\Delta_n)$ be a city product. Then for each $\ell \in \{ 1,\dots,n \}$, there is a map $\varphi_\ell\colon \Delta\to\Delta_\ell$ with the following properties:
\begin{enumerate}
\item for each residue $\ensuremath\mathcal{R}$ of type $I_\ell$, the restriction $\restrict{\varphi_\ell}{\ensuremath\mathcal{R}}\colon\ensuremath\mathcal{R}\to\Delta_\ell$ is an isomorphism;
\item for each residue $\ensuremath\mathcal{R}$ of type $I\setminus I_\ell$, the restriction $\restrict{\varphi_\ell}{\ensuremath\mathcal{R}}\colon\ensuremath\mathcal{R}\to\Delta_\ell$ is a constant map.
\end{enumerate}
\end{lemma}
\begin{proof}
This follows immediately from \cref{cor:residuecoloring}.
\end{proof}
We can then easily lift colorings of the subbuildings to a coloring of the full city product.
\begin{lemma}
\label{lem:cityproductcoloring}
Let $\Delta = \ensuremath\operatorname{\faMapMarker}_M(\Delta_1,\dots,\Delta_n)$ be a city product. For each $\ell \in \{ 1,\dots,n \}$, let $\lambda^\ell$ be a legal coloring of $\Delta_\ell$ with color sets $\Omega_i$ (where $i$ ranges over $I_\ell$). Then the collection of maps
\[\lambda'_i = \lambda^{\ell(i)}_i\circ\varphi_{\ell(i)}\]
provides a legal coloring of $\Delta$ with color sets $\Omega_i$ (where $i$ ranges over $I = \bigsqcup_{\ell=1}^n I_\ell$).
\end{lemma}
\begin{proof}
This follows immediately from \cref{lem:cityproductresidue} and the definition of legal colorings.
\end{proof}
The city product construction over a diagram $M$ essentially glues together smaller rank buildings as if they were chambers of a building of type $M$, hence the fact that the original buildings reemerge locally as residues (\cref{lem:cityproductresidue}) should not be surprising. However, we can also recover a building of type $M$ at the global scale by relaxing the adjacencies.
\begin{definition}
Let $\Delta = \ensuremath\operatorname{\faMapMarker}_M(\Delta_1,\dots,\Delta_n)$ be a city product, where $M$ is a right-angled diagram over $\{1,\dots,n\}$. The \emph{skeletal building} of $\Delta$ is the chamber system $\Phi$ over the index set $\{1,\dots,n\}$ with the same chamber set as $\Delta$, but with coarser adjacencies: we declare two chambers $c,d\in\Delta$ to be $\ell$-adjacent in $\Phi$ if and only if they lie in the same residue of type $I_\ell$ in $\Delta$.
\end{definition}
We will prove in \cref{prop:cityproductskeletalstuff} that the skeletal building of a city product is, in fact, a building. First, we need an auxiliary definition and some combinatorial lemmas, laying the bridge between city products and weak homotopies.
\begin{definition}
Let $I = \bigsqcup_{\ell=1}^n I_\ell$.
\begin{enumerate}
\item
The \emph{parkour map} of $(I_1,\dots,I_n)$ is the map
\[r\colon I^* \to \{1,\dots,n\}^*\]
that first replaces every letter $i\in I$ by $\ell(i)\in\{1,\dots,n\}$ and then replaces every run of consecutive equal letters by a single letter.
\item
The maximal subwords of a word $w\in I^*$ with letters in a common subset $I_\ell$ are called the \emph{blocks} of $w$. These are precisely the maximal subwords such that the image under~$r$ is a single letter.
\end{enumerate}
\end{definition}
\begin{example}
Consider the index sets
\[I_1 = \{1_a, 1_b, 1_c\},
\quad I_2 = \{2_a, 2_b\},
\quad I_3 = \{3_a, 3_b, 3_c\},
\quad I = I_1\cup I_2\cup I_3.\]
Then for the word $w = 2_a\, 2_b\, 3_c\, 1_c\, 1_a\, 1_b\, 1_a\, 3_b$, the image is $r(w) = 2313$. The blocks are the words
\[2_a\, 2_b,\quad 3_c,\quad 1_c\, 1_a\, 1_b\, 1_a,\quad 3_b.\]
\end{example}
The interpretation in terms of the skeletal building is now clear: Let $\Delta = \ensuremath\operatorname{\faMapMarker}_M(\Delta_1,\dots,\Delta_n)$ be a city product of type $I = \bigsqcup_{\ell=1}^n I_\ell$ and let $\Phi$ be its skeletal building.
If $w$ is the type of a gallery in $\Delta$, then $r(w)$ is the type of a gallery in $\Phi$ with the same extremities, but replacing subgalleries in residues of type $I_\ell$ by a single jump of type $\ell$. (This behavior explains our choice for the terminology ``parkour map''.)
\medskip
When viewed as elements of the corresponding Coxeter groups, the interplay between words in $I^*$ and words in $\{1,\dots,n\}^*$ is not completely trivial --- especially when considering reduced words. As illustrated in \cref{fig:parkourmap}, images of reduced words under the parkour map are not necessarily reduced, nor are images of equivalent words necessarily equivalent.
\begin{figure}[!ht]
\newcommand{\drawgrid}{%
\foreach\i in {0,...,4}
\foreach\j in {0,...,3}
\node[myvertex] (P\i\j) at (\i,\j) {};
\foreach\i in {0,...,4}
{\draw[myedge,mydarkgray] (P\i0) -- (P\i1) -- (P\i2) -- (P\i3);
\draw[myedge,mydarkgray,-dots] (P\i0) -- (\i,-.3);
\draw[myedge,mydarkgray,-dots] (P\i3) -- (\i,3.3);
}
\foreach\j in {0,...,3}
{\draw[myedge,mydarkgray] (P0\j) -- (P1\j) -- (P2\j) -- (P3\j) -- (P4\j);
\draw[myedge,mydarkgray,-dots] (P0\j) -- (-.3,\j);
\draw[myedge,mydarkgray,-dots] (P4\j) -- (4.3,\j);
}
}
\centering
\begin{subfigure}{\textwidth}
\centering
\begin{tikzpicture}
\path (0,0) node[myvertex,ugentblue] (A1) {}
+(1,0) node[myvertex,ugentblue] (A2) {}
++(0,1) node[myvertex,ugentblue] (B1) {}
+(1,0) node[myvertex,ugentblue] (B2) {}
(2,0) node[myvertex,ugentred] (X1) {}
++(0,1) node[myvertex,ugentred] (X2) {};
\draw[myedge,ultra thick,mydarkgray,rounded corners=5pt]
(-.4,-.4) rectangle (1.4,.4) (1.4,0) -- (X1)
(-.4,.6) rectangle (1.4,1.4) (1.4,1) -- (X2);
\draw[myedge] (A1) -- node[above] {$\infty$} (A2);
\draw[myedge] (B1) -- node[above] {$\infty$} (B2);
\end{tikzpicture}
\quad$\Rightarrow$\quad
\begin{tikzpicture}
\path (0,0) node[myvertex,ugentblue,label=left:$2_a$] (A1) {}
+(1,0) node[myvertex,ugentblue,label=right:$2_b$] (A2) {}
++(0,1) node[myvertex,ugentblue,label=left:$1_a$] (B1) {}
+(1,0) node[myvertex,ugentblue,label=right:$1_b$] (B2) {};
\draw[myedge,ultra thick,mydarkgray,rounded corners=5pt] (-.6,-.4) rectangle (1.6,1.4);
\draw[myedge] (A1) -- node[above] {$\infty$} (A2) (B1) -- node[above] {$\infty$} (B2);
\end{tikzpicture}
\caption{The ambient (reducible) city product. Notice that this occurs as a residue of the irreducible city product from \cref{fig:cityprod:B}.}
\end{subfigure}
\par
\begin{subfigure}{\textwidth}
\begin{subfigure}[b]{.32\textwidth}
\centering
\begin{tikzpicture}[x=7mm,y=7mm]
\drawgrid
\draw[myedge,black] (P03) -- (P13) -- (P23) -- (P33) -- (P43) -- (P42) -- (P41) -- (P40);
\end{tikzpicture}
$r(w) = 12$,
\end{subfigure}
\begin{subfigure}[b]{.32\textwidth}
\centering
\begin{tikzpicture}[x=7mm,y=7mm]
\drawgrid
\draw[myedge,black] (P03) -- (P13) -- (P23) -- (P22) -- (P21) -- (P20) -- (P30) -- (P40);
\end{tikzpicture}
$r(w) = 121$,
\end{subfigure}
\begin{subfigure}[b]{.32\textwidth}
\centering
\begin{tikzpicture}[x=7mm,y=7mm]
\drawgrid
\draw[myedge,black] (P03) -- (P02) -- (P12) -- (P11) -- (P21) -- (P31) -- (P30) -- (P40);
\end{tikzpicture}
$r(w) = 212121$.
\end{subfigure}
\caption{The parkour map $r \colon \{ 1_a, 1_b, 2_a, 2_b \}^* \to \{ 1, 2 \}^*$. Horizontal lines alternate between $1_a$~and~$1_b$, vertical lines alternate between $2_a$ and $2_b$.}
\end{subfigure}
\caption{The effect of the parkour map on equivalent types of minimal galleries}
\label{fig:parkourmap}
\end{figure}
The following slightly technical lemma explains the connection in more detail.
\begin{lemma}
\label{lem:parkourmap}
Let $M$ be a diagram of rank $n$ over the index set $\{1,\dots,n\}$ and for each $\ell \in \{ 1,\dots,n \}$, let $M_\ell$ be a diagram over $I_\ell$.
Consider the city product $\ensuremath\operatorname{\faMapMarker}_{M}(M_1,\dots,M_n)$, with index set $I = \bigsqcup_{\ell=1}^n I_\ell$.
Let $u\in I^*$ and let $r\colon I^*\to\{1,\dots,n\}^*$ be the parkour map.
\begin{enumerate}
\item\label{lem:parkourmap:1} If $u\simeq u'$, then $r(u)$ and $r(u')$ are weakly homotopic (in the sense of \cref{def:weakhomotopy}).
\item\label{lem:parkourmap:2} Assume that we have a homotopy $r(u)\simeq v$ and let $v'$ be the word obtained from $v$ by removing consecutive duplicate letters.
Then there exists $u'\in I^*$ such that $u'\simeq u$ and $r(u')=v'$.
\[\begin{tikzcd}[dims={5em}{4.5em}]
u \ar[d,"r"] \ar[rr,"\simeq"] && u' \ar[d,"r"] \\
r(u) \ar[r,"\simeq"] & v \ar[r,dashed] & v'
\end{tikzcd}\]
\item\label{lem:parkourmap:3} If\, $u$ is reduced, then all blocks of $u$ are reduced.
\item\label{lem:parkourmap:4} If all blocks of $u$ are reduced and $r(u)$ is reduced, then $u$ is reduced.
\item\label{lem:parkourmap:5} If\, $u\simeq u'$ and both $r(u)$ and $r(u')$ are reduced, then $r(u)\simeq r(u')$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item
Consider an elementary homotopy $u=u_1\,ij\,u_2 \simeq u_1\,ji\,u_2$. If $\ell(i)=\ell(j)$, then the image under $r$ remains unchanged. Assume now that $\ell(i)\neq\ell(j)$. We distinguish three cases for the subword $u_1$ of $u$:
\begin{description}\setlength{\itemindent}{-7ex}
\item[{[\textcolor{ugentblue}{\textsf{L.a}}]}] $u_1$ is nonempty and the last letter of $u_1$ is in $I_{\ell(i)}$,
\item[{[\textcolor{ugentred}{\textsf{L.b}}]}] $u_1$ is nonempty and the last letter of $u_1$ is in $I_{\ell(j)}$,
\item[{[\textsf{L.c}]}] $u_1$ is the empty word, or the last letter of $u_1$ is neither in $I_{\ell(i)}$ nor $I_{\ell(j)}$.
\end{description}
Analogously, we distinguish three cases for $u_2$:
\begin{description}\setlength{\itemindent}{-7ex}
\item[{[\textcolor{ugentblue}{\textsf{R.a}}]}] $u_2$ is nonempty and the first letter of $u_2$ is in $I_{\ell(i)}$,
\item[{[\textcolor{ugentred}{\textsf{R.b}}]}] $u_2$ is nonempty and the first letter of $u_2$ is in $I_{\ell(j)}$,
\item[{[\textsf{R.c}]}] $u_2$ is the empty word, or the first letter of $u_2$ is neither in $I_{\ell(i)}$ nor $I_{\ell(j)}$.
\end{description}
\smallskip
Depending on the nine combinations of possibilities, the elementary homotopy $u_1\,ij\,u_2 \simeq u_1\,ji\,u_2$ transforms the image $r(u)$ by replacing some subword in $\{ij,ji,iji,jij,ijij,jiji\}$ with another such word; see \cref{fig:parkourhomotopy}. In each case, the result is weakly homotopic to $r(u)$.
\begin{table}[!ht]
\centering
\begin{tikzpicture}
\path (0,0) node[myvertex] (L) {}
+(45:1) node[myvertex] (U) {}
++(-45:1) node[myvertex] (D) {}
++(45:1) node[myvertex] (R) {};
\path (D.center) to node[midway,rotate=-90] {\strut$\smash{\Longrightarrow}$} (U.center);
\draw[myedge,ugentblue] (L) -- node[above left] {$i$\vphantom{$j$}} (U);
\draw[myedge,ugentblue,dashed] (D) -- node[below right] {$i$} (R);
\draw[myedge,ugentblue,-dots] (L) -- +(225:.667) node[pos=1.7,left] {\bfseries\sffamily\textcolor{ugentblue}{L.a}};
\draw[myedge,ugentblue,-dots] (R) -- +(45:.667) node[pos=1.7,right] {\bfseries\sffamily\textcolor{ugentblue}{R.a}};
\draw[myedge,ugentred,dashed] (L) -- node[below left] {$j$} (D);
\draw[myedge,ugentred] (U) -- node[above right] {$j$} (R);
\draw[myedge,ugentred,-dots] (L) -- +(135:.667) node[pos=1.7,left] {\bfseries\sffamily\textcolor{ugentred}{L.b}};
\draw[myedge,ugentred,-dots] (R) -- +(-45:.667) node[pos=1.7,right] {\bfseries\sffamily\textcolor{ugentred}{R.b}};
\draw[myedge,-dots] (L) -- +(180:.667) node[pos=2.3] {\bfseries\sffamily L.c};
\draw[myedge,-dots] (R) -- +(0:.667) node[pos=2.3] {\bfseries\sffamily R.c};
\end{tikzpicture}
\par\bigskip
\renewcommand{\arraystretch}{1.25}
\newcommand{\blah}[2]{$\mathllap{#1}\rightsquigarrow\mathrlap{#2}$}
\begin{tabular}{p{6mm}*{3}{>{\centering\arraybackslash}p{20mm}}}
\toprule
& {\bfseries\sffamily\textcolor{ugentblue}{R.a}} & {\bfseries\sffamily\textcolor{ugentred}{R.b}} & {\bfseries\sffamily {R.c}}\\
\cmidrule(l){2-4}
{\bfseries\sffamily\textcolor{ugentblue}{L.a}} & \blah{iji}{iji} & \blah{ij}{ijij} & \blah{ij}{iji} \\
{\bfseries\sffamily\textcolor{ugentred}{L.b}} & \blah{jiji}{ji} & \blah{jij}{jij} & \blah{jij}{ji} \\
{\bfseries\sffamily L.c} & \blah{iji}{ji} & \blah{ij}{jij} & \blah{ij}{ji} \\
\bottomrule
\end{tabular}
\bigskip
\caption{The effect of an elementary homotopy on the image of the parkour map. We have simply written $i$ and $j$ instead of $\ell(i)$ and $\ell(j)$ for better readability.}
\label{fig:parkourhomotopy}
\end{table}
\vspace*{-3ex}
\item
Consider an elementary homotopy $r(u) = v_1\,\ell_1\ell_2\,v_2 \simeq v_1\,\ell_2\ell_1\,v_2 = v$ with $1\leq\ell_1\neq \ell_2\leq n$ and such that $\ell_1$ and $\ell_2$ commute in $M$. Then we can write $u = u_1\,b_1b_2\,u_2$, where $b_1$ and $b_2$ are the blocks corresponding to $\ell_1$ and~$\ell_2$, respectively. Since $b_1$ contains only letters in $I_{\ell_1}$ and $b_2$ only letters in $I_{\ell_2}$, and every generator in $I_{\ell_1}$ commutes with every generator in $I_{\ell_2}$ (because $\ell_1$ and $\ell_2$ commute in $M$), we have a homotopy $u' = u_1\,b_2b_1\,u_2 \simeq u_1\,b_1b_2\,u_2$ satisfying $r(u') = v'$.
The claim now follows by induction on the number of elementary homotopies needed to go from $r(u)$ to~$v$.
\item
This is obvious since any subword of a reduced word is reduced.
\item
We argue by contraposition: assume that every block of $u$ is reduced while $u$ is not, i.e., there is a homotopy $u\simeq w_1\,ii\,w_2$. Mark these two occurrences of the letter $i$ and let $b_1$ and $b_2$ be the two blocks of $u$ containing the marked letters. Since every block is reduced, we have $b_1\neq b_2$, hence $u = u_1\,b_1\,u_2\,b_2\,u_3$ such that $u_2$ is nonempty and $m_{ij}=2$ for every letter $j$ in $u_2$. The image then satisfies $r(u) = r(u_1)\,\ell(i)\,r(u_2)\,\ell(i)\,r(u_3)$. By construction, $\ell(i)$ commutes with every letter in $r(u_2)$. Hence $r(u)$ is not reduced.
\item
This follows from \ref{lem:parkourmap:1} and \cref{cor:weaklyhomotopicreduced}.
\qedhere
\end{enumerate}
\end{proof}
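As a sanity check (ours, not the paper's), individual rows of the table in \cref{fig:parkourhomotopy} can be verified mechanically with the same string encoding of letters, where the first character of an encoded letter records $\ell(i)$:

```python
from itertools import groupby

def r(word):
    # Parkour map: apply ell (first character) and collapse consecutive duplicates.
    return "".join(k for k, _ in groupby(letter[0] for letter in word))

# Case [L.c] x [R.a]: u = i j u2, where the first letter of u2 lies in I_{ell(i)}.
u  = ["1a", "2a", "1b"]        # r(u)  = "121"  (the subword "iji" with i = 1, j = 2)
u2 = ["2a", "1a", "1b"]        # after swapping the first two letters: r = "21" ("ji")
print(r(u), r(u2))             # 121 21

# Case [L.a] x [R.b]: u = i' i j j' with ell(i') = ell(i) and ell(j') = ell(j).
v  = ["1a", "1b", "2a", "2b"]  # r(v)  = "12"   ("ij")
v2 = ["1a", "2a", "1b", "2b"]  # after swapping the middle letters: r = "1212" ("ijij")
print(r(v), r(v2))             # 12 1212
```

In both cases the images before and after the swap match the substitutions $iji \rightsquigarrow ji$ and $ij \rightsquigarrow ijij$ recorded in the table.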
\begin{proposition}
\label{prop:cityproductskeletalstuff}
Let $M$ be a diagram of rank $n$, let $\Delta_1,\dots,\Delta_n$ be right-angled buildings,
let $\Delta :=\ensuremath\operatorname{\faMapMarker}_M(\Delta_1,\dots,\Delta_n)$ be their city product and let $\Phi$ be its skeletal building.
Then
\begin{enumerate}
\item $\Phi$ is a right-angled building of type $M$ over $\{1,\dots,n\}$;
\item $\Phi$ is semiregular with parameters $q_\ell = \lvert\Delta_\ell\rvert$ for every $\ell \in \{ 1,\dots,n \}$;
\item $\ell$-panels of\, $\Phi$ (as sets of chambers) are $I_\ell$-residues of $\Delta$ and vice versa;
\item the maps $\varphi_\ell$ with $\ell \in \{ 1,\dots,n \}$ (introduced in \cref{lem:cityproductresidue}) provide a legal coloring of\, $\Phi$ with color sets $\Delta_\ell$.
\end{enumerate}
\end{proposition}
\begin{proof}
The only nontrivial claim is that $\Phi$ is indeed a building of type $M$; the other claims will then follow immediately from the definitions. Let us first write out the Weyl distance function in $\Phi$ and then verify that it satisfies the necessary properties.
Denote by $W_\Delta$ the Weyl group of the building $\Delta$. Recall from \cref{def:evaluationmorphism} the evaluation morphism $\epsilon_\Delta\colon I^* \to W_\Delta$. On the other hand, we have a Coxeter group $W_\Phi$ of type $M$, together with an evaluation morphism $\epsilon_\Phi\colon \{1,\dots,n\}^* \to W_\Phi$. Next, we let $s\colon W_\Delta\to I^*$ be a section of $\epsilon_\Delta$ with reduced images such that for each $w\in W_\Delta$, the word length $\lvert r(s(w))\rvert$ is minimal among all possible choices for $s(w)$.
Observe that, by \itemref{lem:parkourmap}{2}, this implies that $r(s(w))$ is a reduced word in $\{ 1,\dots,n \}^*$, for all $w \in W_\Delta$.
Finally, we can define
\[\delta_\Phi = \epsilon_\Phi\circ r\circ s\circ \delta_\Delta \colon \Phi\times\Phi\to W_\Phi.\]
(Notice that the composition $\epsilon_\Phi\circ r\circ s$ is a map $W_\Delta\to W_\Phi$ but by no means a group homomorphism.)
\[\begin{tikzcd}[dims={8em}{4.5em}]
\Delta\times\Delta \ar[r,"\delta_\Delta"] \ar[d,equal] &
W_\Delta \ar[d,dashed] \arrow[bend left=18]{r}{s}
& I^* \ar[d,"r"] \arrow[bend left=18]{l}{\epsilon_\Delta} \\
\Phi\times\Phi \ar[r,"\delta_\Phi"]
& W_\Phi
& \{1,\dots,n\}^* \ar[l,"\epsilon_\Phi",swap]
\end{tikzcd}\]
\smallskip
Clearly, panels of $\Phi$ contain at least two chambers, since every such panel of $\Phi$ contains a panel of $\Delta$. Now consider a reduced word $v$ in $\{1,\dots,n\}^*$. Our goal is to show that $\delta_\Phi(c,d)=\epsilon_\Phi(v)$ if and only if there exists a gallery of type $v$ from $c$ to $d$ in $\Phi$.
First, assume that $\delta_\Phi(c,d)=\epsilon_\Phi(v)$. By definition of $\delta_\Phi$, this means that the words $(r\circ s\circ\delta_\Delta)(c,d)$ and $v$ are equivalent. Moreover, both words are reduced, and hence homotopic by \itemref{thm:homotopicstuff}{2}. By \itemref{lem:parkourmap}{2}, this homotopy can be realized in $I^*$, i.e., we can find a word $u\in I^*$ such that $u \simeq (s\circ\delta_\Delta)(c,d)$ and $r(u) = v$. The homotopy $u \simeq (s\circ\delta_\Delta)(c,d)$ yields that $u$ is reduced, hence by the building axioms for $\Delta$, there is a minimal gallery in $\Delta$ of type $u$ from $c$ to $d$. Then $r(u)=v$ is the type of a gallery in $\Phi$ from $c$ to $d$.
Conversely, assume that $\gamma$ is a gallery of type $v$ from $c$ to $d$ in $\Phi$. We can ``lift'' $\gamma$ to a gallery $\widebar\gamma$ in $\Delta$ with the same extremities, by replacing each $\ell$-adjacency in $\gamma$ by a minimal gallery in a residue of type $I_\ell$ of $\Delta$. Let $\widebar v$ be the type of $\widebar\gamma$. Note that $r(\widebar v)=v$ and that $\widebar v$ is reduced by \itemref{lem:parkourmap}{4}. Hence, we have $\delta_\Delta(c,d)=\epsilon_\Delta(\widebar v)$, so that $s(\delta_\Delta(c,d))$ and $\widebar v$ are homotopic by \cref{thm:homotopicstuff}\ref{thm:homotopicstuff:1}~and~\ref{thm:homotopicstuff:2}. Then by \itemref{lem:parkourmap}{5}, the images $(r\circ s\circ\delta_\Delta)(c,d)$ and $r(\widebar v)=v$ are homotopic, so that finally
\[\delta_\Phi(c,d) = (\epsilon_\Phi\circ r\circ s\circ\delta_\Delta)(c,d) = \epsilon_\Phi(v).\]
This concludes our proof that $\Phi$ is a right-angled building of type $M$.
\end{proof}
\begin{remark}
The city product of right-angled diagrams can be interpreted in a purely graph-theoretical way.
More precisely, a right-angled diagram is a non-trivial city product of lower rank right-angled diagrams if and only if its underlying graph is not \emph{prime}, where a graph is called prime if it has no non-trivial \emph{modules}; here a subset $X$ of the vertex set of a graph is called a \emph{module} if every vertex $v\notin X$ is either adjacent to all vertices in $X$ or adjacent to no vertex in $X$.
A decomposition of a right-angled diagram as a non-trivial city product of lower rank right-angled diagrams then corresponds to a \emph{modular partition} of the underlying graph.
\end{remark}
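As a hypothetical illustration of this graph-theoretic criterion (the encoding and vertex names below are ours), the module condition can be tested directly:

```python
def is_module(adj, X):
    # X is a module of the graph `adj` (a dict: vertex -> set of neighbours)
    # iff every vertex outside X is adjacent to all of X or to none of X.
    X = set(X)
    for v in adj:
        if v in X:
            continue
        if len(adj[v] & X) not in (0, len(X)):
            return False
    return True

# Path graph a -- b -- c:
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(is_module(adj, {"a", "c"}))  # True: b is adjacent to both a and c
print(is_module(adj, {"a", "b"}))  # False: c is adjacent to b but not to a
```

So the path on three vertices is not prime: $\{a,c\}$ is a non-trivial module, corresponding to a decomposition of the associated right-angled diagram as a city product.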
\section{Universal groups of city products}
Earlier in \cref{lem:universalreducible}, we already observed that the universal group construction behaves nicely with respect to disjoint unions of diagrams. This operation on diagrams is a special case of the city product from \cref{sec:cityproduct} (over a diagram with only isolated nodes). In this section, we generalize \cref{lem:universalreducible} to arbitrary city products.
More precisely, we will show that the universal group over a city product of buildings $\ensuremath\operatorname{\faMapMarker}_M(\Delta_1,\dots,\Delta_n)$ is isomorphic to the universal group over the skeletal building of the universal groups over the buildings $\Delta_i$.
\begin{theorem}
\label{thm:universalcityproduct}
Let $M$ be a right-angled diagram of rank $n$. For each $\ell \in \{ 1,\dots,n \}$, let $\Delta_\ell$ be a semiregular right-angled building of type $M_\ell$ over $I_\ell$, equipped with a legal coloring $\lambda_\ell$ with color sets $\Omega_i$ (indexed by $i \in I_\ell$).
Let $\Delta :=\ensuremath\operatorname{\faMapMarker}_M(\Delta_1,\dots,\Delta_n)$ be their city product over $I = \bigsqcup_{\ell=1}^n I_\ell$ and let $\Phi$ be its skeletal building over $\{ 1,\dots,n \}$.
Assume that for each $i \in I$, we have a permutation group $F_i \leq \Sym(\Omega_{\ell(i)})$, giving rise to local data $\ensuremath\boldsymbol{F}\@Fhack$ over $I$.
These local data restrict to local data $\ensuremath\boldsymbol{F}\@Fhack_\ell$ over $I_\ell$ for each $\ell \in \{ 1,\dots,n \}$.
Then we have an isomorphism of topological groups
\[ \U_\Delta(\ensuremath\boldsymbol{F}\@Fhack) \cong \U_\Phi\Bigl( \bigl( \U_{\Delta_\ell}(\ensuremath\boldsymbol{F}\@Fhack_\ell) \bigr)_{\ell \in \{ 1,\dots,n \}} \Bigr) .\]
\end{theorem}
\begin{proof}
Equip $\Delta$ with the coloring $\lambda'$ from \cref{lem:cityproductcoloring}, assigning colors in the sets $\Omega_i$ (indexed by $i\in I$). Also equip its skeletal building $\Phi$ with the coloring $\varphi$ from \cref{prop:cityproductskeletalstuff}, assigning colors in the sets $\Delta_\ell$ (indexed by $\ell\in\{1,\dots,n\}$).
Let $\ensuremath\boldsymbol{F}\@Fhack'$ be the local data over $\{ 1,\dots,n \}$ defined by
\[ F'_\ell := \U_{\Delta_\ell}(\ensuremath\boldsymbol{F}\@Fhack_\ell) \quad \text{for each } \ell \in \{ 1,\dots,n \} . \]
First, every automorphism of $\Delta$ induces an automorphism of its skeletal building, hence we have a natural monomorphism
\[\iota \colon \Aut(\Delta) \hookrightarrow \Aut(\Phi).\]
Let $g\in\U_\Delta(\ensuremath\boldsymbol{F}\@Fhack) \leq \Aut(\Delta)$, let $\ensuremath\mathcal{R}$ be any panel of $\Phi$ of type $\ell$, and consider the local action of $\iota(g)$ as an automorphism of $\Phi$ at the panel $\ensuremath\mathcal{R}$. For readability, we will identify $g$ with its image $\iota(g)$. We can also identify $\ensuremath\mathcal{R}$ with a residue of $\Delta$ of type $I_\ell$ (which is isomorphic to $\Delta_\ell$). Then the local action
\[\sigma_\varphi(g,\ensuremath\mathcal{R})
= \restrict{\varphi_\ell}{g\acts\ensuremath\mathcal{R}} \circ \restrict{g}{\ensuremath\mathcal{R}} \circ \restrict[-1]{\varphi_\ell}{\ensuremath\mathcal{R}}\]
is the composition of three isomorphisms $\Delta_\ell \to \ensuremath\mathcal{R} \to g\acts\ensuremath\mathcal{R} \to \Delta_\ell$ and is hence an automorphism of the building $\Delta_\ell$. It thus makes sense to consider the local action of $\sigma_\varphi(g,\ensuremath\mathcal{R})$ at an $i$-panel $\P$ of $\Delta_\ell$ with $i\in I_\ell$. Let $\P' = \restrict[-1]{\varphi_\ell}{\ensuremath\mathcal{R}}(\P)$, which is an $i$-panel of $\Delta$ by \cref{lem:cityproductresidue}.
We then have $\P' \subseteq \ensuremath\mathcal{R} \subseteq \Delta$, and the map $\varphi_\ell \colon \Delta \to \Delta_\ell$ restricts to isomorphisms $\restrict{\varphi_\ell}{\ensuremath\mathcal{R}} \colon \ensuremath\mathcal{R} \to \Delta_\ell$ and $\restrict{\varphi_\ell}{\P'} \colon \P' \to \P$.
Then
\begin{align*}
\label{eq:locallocal}
\sigma_{\lambda_\ell}(\sigma_\varphi(g,\ensuremath\mathcal{R}),\P)
& = \restrict{(\lambda_\ell)_i}{\sigma_\varphi(g,\ensuremath\mathcal{R})\acts\P} \ \circ\
\restrict{\sigma_\varphi(g,\ensuremath\mathcal{R})}{\P} \ \circ\
\restrict[-1]{(\lambda_\ell)_i}{\P}\\
& = \restrict{(\lambda_\ell)_i}{\sigma_\varphi(g,\ensuremath\mathcal{R})\acts\P} \ \circ\
\restrict{\bigl(\restrict{\varphi_\ell}{g\acts\ensuremath\mathcal{R}} \circ \restrict{g}{\ensuremath\mathcal{R}} \circ \restrict[-1]{\varphi_\ell}{\ensuremath\mathcal{R}}\bigr)}{\P} \ \circ\
\restrict[-1]{(\lambda_\ell)_i}{\P}\\
& = \restrict{(\lambda_\ell)_i}{\sigma_\varphi(g,\ensuremath\mathcal{R})\acts\P} \ \circ\
\restrict{\varphi_\ell}{g\acts\P'} \circ \restrict{g}{\P'} \circ \restrict[-1]{\varphi_\ell}{\P'} \ \circ\
\restrict[-1]{(\lambda_\ell)_i}{\P}\\
& = \restrict{\lambda'_i}{g\acts\P'}
\circ \restrict{g}{\P'}
\circ \restrict[-1]{\lambda'_i}{\P'}\\
& = \sigma_{\lambda'}(g,\P').
\tag{$\ast$}
\end{align*}
%
\[\begin{tikzcd}[dims={9em}{4.2em}]
\Delta \ar[r,"g"{name=A}] \ar[d,equal] \ar[ddd,"{\lambda'_i}",swap,relay arrow=-10mm]
& \Delta \ar[d,equal] \ar[ddd,"{\lambda'_i}",relay arrow=10mm]\\
\Phi \ar[r,"\iota(g)"{name=B},swap] \ar[d,"\varphi_\ell",swap]
& \Phi \ar[d,"\varphi_\ell"]\\
\Delta_\ell \ar[r,"{\sigma_\varphi(g,\,{\bullet})}"] \ar[d,"(\lambda_\ell)_i",swap]
& \Delta_\ell \ar[d,"(\lambda_\ell)_i"]\\
\Omega_i \ar[r,"{\sigma_{\lambda_\ell}(\sigma_\varphi(g,\,{\bullet}),\,{\bullet})}"]
& \Omega_i
\arrow[from=A, to=B, Rightarrow, shorten <=12pt, shorten >=10pt, "\strut\iota"]
\end{tikzcd}\]
\smallskip
Since $g\in\U_\Delta(\ensuremath\boldsymbol{F}\@Fhack)$, the result of \cref{eq:locallocal} is a permutation in $F_i$, so we conclude that $\sigma_\varphi(g,\ensuremath\mathcal{R}) \in \U_{\Delta_\ell}(\ensuremath\boldsymbol{F}\@Fhack_\ell)$. Since this holds for every $\ell$-panel $\ensuremath\mathcal{R}$ of $\Phi$, this shows that $\iota(g) \in \U_\Phi(\boldsymbol{F'})$.
\medskip
Conversely, let $g\in\U_\Phi(\boldsymbol{F'})$. We can identify $g \in \Aut(\Phi)$ with a permutation of~$\Delta$, and we first verify that this permutation is type-preserving, i.e., that $g$ is in fact an automorphism of $\Delta$. Indeed, let $c\sim_i d$ be $i$-adjacent chambers in $\Delta$. Let $\ell := \ell(i)$ and let $\ensuremath\mathcal{R}$ be the residue of $\Delta$ of type $I_\ell$ containing $c$ and $d$. Then $\ensuremath\mathcal{R}$ is an $\ell$-panel of $\Phi$. The local action $\sigma_\varphi(g,\ensuremath\mathcal{R})$ is an element of $F'_\ell = \U_{\Delta_\ell}(\ensuremath\boldsymbol{F}\@Fhack_\ell) \leq \Aut(\Delta_\ell)$. Hence
\[\restrict{g}{\ensuremath\mathcal{R}} = \restrict[-1]{\varphi_\ell}{g\acts\ensuremath\mathcal{R}} \circ \sigma_\varphi(g,\ensuremath\mathcal{R}) \circ \restrict{\varphi_\ell}{\ensuremath\mathcal{R}}\]
is a composition of isomorphisms $\ensuremath\mathcal{R} \to \Delta_\ell \to \Delta_\ell \to g\acts\ensuremath\mathcal{R}$, each of which preserves $i$-adjacency. In particular, $g\acts c\sim_i g\acts d$. Since $c$ and $d$ were arbitrary, we conclude that $g\in\Aut(\Delta)$.
Next, let $\P'$ be any $i$-panel in $\Delta$ with $i\in I_\ell$, let $\ensuremath\mathcal{R}$ be the $I_\ell$-residue of $\Delta$ containing $\P'$ and let $\P := \varphi_\ell(\P')$ in $\Delta_\ell$. The reverse calculation of \cref{eq:locallocal} shows that the local action satisfies
\[\sigma_{\lambda'}(g,\P') = \sigma_{\lambda_\ell}(\sigma_\varphi(g,\ensuremath\mathcal{R}),\P).\]
Since $\sigma_\varphi(g,\ensuremath\mathcal{R}) \in \U_{\Delta_\ell}(\ensuremath\boldsymbol{F}\@Fhack_\ell)$, we have $\sigma_{\lambda'}(g,\P') \in (\ensuremath\boldsymbol{F}\@Fhack_\ell)_i = F_i$. We conclude that indeed $g\in \U_\Delta(\ensuremath\boldsymbol{F}\@Fhack)$.
\medskip
In conclusion, the restriction of $\iota \colon \Aut(\Delta) \hookrightarrow \Aut(\Phi)$ to $\U_\Delta(\ensuremath\boldsymbol{F}\@Fhack) \leq \Aut(\Delta)$ is an isomorphism $\kappa \colon \U_\Delta(\ensuremath\boldsymbol{F}\@Fhack) \to \U_\Phi({\boldsymbol F'})$.
Finally, $\iota$ is clearly a \emph{homeomorphism} onto its image, because $\Delta$ and $\Phi$ have the same underlying set and the topology on $\Aut(\Delta)$ and $\Aut(\Phi)$ is independent of the additional building structure on $\Delta$ and $\Phi$.
In particular, $\kappa$ is an isomorphism of topological groups.
\end{proof}
\section{Application: Different right-angled buildings of the same type with isomorphic universal groups}
Inspired by \cite{bessmann}, we provide a construction to produce pairs of right-angled buildings of the same type $M$ over $I$ but with different parameters $(q_i)_{i \in I}$, that nevertheless admit isomorphic universal groups for appropriate choices of the local data $\ensuremath\boldsymbol{F}\@Fhack$.
The following result is certainly not the most general one possible, but it seems to be a good trade-off between producing a large number of examples and still being ``readable''.
\begin{theorem}\label{thm:application}
Let $M$ be a right-angled diagram of rank $n$ admitting a non-trivial symmetry $\rho \in \Sym(n)$, and assume that $k \in \{ 1,\dots,n \}$ is not fixed by $\rho$. Let $\Lambda$ be the support of $\rho$ (i.e., the set of elements not fixed by $\rho$).
For each $\ell \in \{ 1,\dots, n \}$, let $M_\ell$ be a right-angled diagram over $I_\ell$, such that
\begin{itemize}
\item $|I_\ell| = 1$ for each $\ell \in \Lambda \setminus \{ k \}$, and
\item $|I_k| =: t > 1$. Write $I_k = \{ i_1,\dots,i_t \}$.
\end{itemize}
Consider the city product $N := \ensuremath\operatorname{\faMapMarker}_M(M_1,\dots,M_n)$, with index set $I = \bigsqcup_{\ell=1}^{n} I_\ell$.
\begin{itemize}
\item For each $i \in I_k$, let $G_i$ and $G'_i$ be two arbitrary permutation groups (acting on sets $\Omega_i$ and $\Omega'_i$, respectively).
\item For each $i \in I_\ell$ for $\ell \not\in \{ k, \rho(k) \}$, let $H_i$ be an arbitrary permutation group (acting on a set $\Omega_i$).
\end{itemize}
Finally, consider the two collections $\ensuremath\boldsymbol{F}\@Fhack$ and $\ensuremath\boldsymbol{F}\@Fhack'$ of local data over $I$ defined by
\begin{itemize}
\item $F_i := G_i$ for $i \in I_k$,
\item $F_i := \U_{M_k}(G'_{i_1},\dots,G'_{i_t})$ for the unique $i \in I_{\rho(k)}$,
\item $F_i := H_i$ for all other $i \in I$, \\[-1.8ex]
\item $F'_i := G'_i$ for $i \in I_k$,
\item $F'_i := \U_{M_k}(G_{i_1},\dots,G_{i_t})$ for the unique $i \in I_{\rho^{-1}(k)}$,
\item $F'_i := H_j$ for all $i \in I_\ell$ with $\ell \in \Lambda \setminus \{ k, \rho^{-1}(k) \}$, where $I_{\rho(\ell)} = \{ j \}$,
\item $F'_i := H_i$ for all other $i \in I$.
\end{itemize}
Then the universal groups $\U_N(\ensuremath\boldsymbol{F}\@Fhack)$ and $\U_N(\ensuremath\boldsymbol{F}\@Fhack')$ are isomorphic (as topological groups).
\end{theorem}
\begin{proof}
We will use the notation from \itemref{rem:Unotation}{2}.
Consider the collections $\ensuremath\boldsymbol{L}\@Fhack$ and $\ensuremath\boldsymbol{L}\@Fhack'$ of local data over $\{ 1,\dots,n \}$ defined by
\begin{align*}
L_k &= \U_{M_k}(G_{i_1},\dots,G_{i_t}), \\
L_{\rho(k)} &= \U_{M_k}(G'_{i_1},\dots,G'_{i_t}), \\
L_\ell &= H_i \quad \text{for all } \ell \in \Lambda \setminus \{ k, \rho(k) \}, \text{ where } I_\ell = \{ i \} , \\
L_\ell &= \U_{M_\ell}(H_i \mid i \in I_\ell) \quad \text{for all } \ell \notin \Lambda, \\[1.6ex]
%
L'_k &= \U_{M_k}(G'_{i_1},\dots,G'_{i_t}), \\
L'_{\rho^{-1}(k)} &= \U_{M_k}(G_{i_1},\dots,G_{i_t}), \\
L'_\ell &= H_j \quad \text{for all } \ell \in \Lambda \setminus \{ k, \rho^{-1}(k) \}, \text{ where } I_{\rho(\ell)} = \{ j \} , \\
L'_\ell &= \U_{M_\ell}(H_i \mid i \in I_\ell) \quad \text{for all } \ell \notin \Lambda.
\end{align*}
By \cref{thm:universalcityproduct}, we then have
\begin{align*}
\U_N(\ensuremath\boldsymbol{F}\@Fhack) &\cong \U_M(\ensuremath\boldsymbol{L}\@Fhack) \quad \text{and} \\
\U_N(\ensuremath\boldsymbol{F}\@Fhack') &\cong \U_M(\ensuremath\boldsymbol{L}\@Fhack').
\end{align*}
Since $L'_\ell = L_{\rho(\ell)}$ for all $\ell$, it is now immediately clear that the symmetry $\rho$ of the diagram $M$ ensures that $\U_M(\ensuremath\boldsymbol{L}\@Fhack) \cong \U_M(\ensuremath\boldsymbol{L}\@Fhack')$, and the result follows.
\end{proof}
\begin{examples}\label{ex:isom}
\begin{enumerate}[label={\rm (\arabic*)}]
\item\label{ex:isom:1}
The first set of examples covers precisely the cases that can also be obtained with Lara Be\ss mann's method from \cite{bessmann}.
Let $M$ be the diagram of rank $2$ with label $\infty$, let $M_1$ be the diagram of rank $1$ and let $M_2$ be the diagram of rank $t > 1$ without edges.
The city product $N := \ensuremath\operatorname{\faMapMarker}_M(M_1, M_2)$ of these diagrams has rank $t+1$; we label the $t+1$ nodes of $N$ as below.
\[
\begin{tikzpicture}[scale=.81]
\path (0,0) node[myvertex,ugentblue] (A1) {}
+(.6,0) node[myvertex,ugentblue] (A2) {}
+(1.32,0) node[ugentblue] (A3) {$\dots$}
+(2,0) node[myvertex,ugentblue] (A4) {}
++(1,1) node[myvertex,ugentblue] (A5) {}
(3,0) node[myvertex,ugentred] (B1) {}
++(0,1) node[myvertex,ugentred] (B2) {};
\draw[myedge,ultra thick,mydarkgray,rounded corners=5pt]
(-.4,-.4) rectangle (2.4,.4) (2.4,0) -- (B1)
(-.4,.6) rectangle (2.4,1.4) (2.4,1) -- (B2);
\draw[myedge] (B1) -- node[left] {$\infty$} (B2);
\node[] at (-1,1) {$M_1$};
\node[] at (-1,0) {$M_2$};
\end{tikzpicture}
\hspace*{2ex} \Rightarrow \hspace*{2ex}
\begin{tikzpicture}
\path (0,0) node[myvertex,ugentblue] (A1) {}
+(.6,0) node[myvertex,ugentblue] (A2) {}
+(1.32,0) node[ugentblue] (A3) {$\dots$}
+(2,0) node[myvertex,ugentblue] (A4) {}
++(1,1) node[myvertex,ugentblue] (B1) {};
\draw[myedge,ultra thick,mydarkgray,rounded corners=5pt] (-.4,-.5) rectangle (2.5,1.5);
\draw[myedge] (A1) -- node[left] {$\infty$} (B1) -- node[right] {$\infty$} (A2);
\draw[myedge] (A4) -- node[right] {$\infty$} (B1);
\node[] at ([yshift=.8em] B1) {$t+1$};
\node[] at ([yshift=-.8em] A1) {$1$};
\node[] at ([yshift=-.8em] A2) {$2$};
\node[] at ([yshift=-.8em] A4) {$t$};
\end{tikzpicture}
\]
We can now apply \cref{thm:application}, with $\rho$ the unique non-trivial symmetry of $M$ and with $k=2$, so $\rho(k) = \rho^{-1}(k) = 1$ and $|I_k| = t > 1$.
For each $i \in \{ 1,\dots, t \}$, let $G_i$ and $G'_i$ be two arbitrary permutation groups (acting on sets $\Omega_i$ and $\Omega'_i$, respectively).
Notice that there are no values $\ell \notin \{ k, \rho(k) \}$, so we do not have to choose groups $H_i$ as in the theorem.
Moreover, notice that because $M_2$ has no edges, we simply have $\U_{M_2}(G_1,\dots,G_t) \cong G_1 \times \dots \times G_t$ (see \cref{lem:universalreducible}).
It now follows from \cref{thm:application} that
\[ \U_N(G_1,\ G_2,\ \dots,\ G_t,\ G'_1 \times \dots \times G'_t) \cong \U_N(G'_1,\ G'_2,\ \dots,\ G'_t,\ G_1 \times \dots \times G_t) . \]
\item\label{ex:isom:2}
We now present an example with a non-involutory symmetry.
Let $M$ be the diagram of rank $3$ with all labels $\infty$, let $M_1$ and $M_3$ be diagrams of rank $1$ and let $M_2$ be the diagram of rank $2$ without edges.
The city product $N := \ensuremath\operatorname{\faMapMarker}_M(M_1, M_2, M_3)$ of these diagrams has rank $4$, labeled as below.
\[
\begin{tikzpicture}[scale=.81]
\path (.5,-1) node[myvertex,ugentblue] (A1) {}
(0,0) node[myvertex,ugentblue] (A2) {}
+(1,0) node[myvertex,ugentblue] (A3) {}
++(.5,1) node[myvertex,ugentblue] (A4) {}
(3,-1) node[myvertex,ugentred] (B1) {}
+(-.7,1) node[myvertex,ugentred] (B2) {}
+(0,2) node[myvertex,ugentred] (B3) {};
\draw[myedge,ultra thick,mydarkgray,rounded corners=5pt]
(-.4,-.4) rectangle (1.4,.4) (1.4,0) -- (B2)
(-.4,.6) rectangle (1.4,1.4) (1.4,1) -- (B3)
(-.4,-1.4) rectangle (1.4,-.6) (1.4,-1) -- (B1);
\draw[myedge] (B1) -- node[left] {$\infty$} (B2) -- node[left] {$\infty$} (B3) -- node[right] {\!$\infty$} (B1);
\node[] at (-1,1) {$M_1$};
\node[] at (-1,0) {$M_2$};
\node[] at (-1,-1) {$M_3$};
\end{tikzpicture}
\hspace*{2ex} \Rightarrow \hspace*{2ex}
\begin{tikzpicture}
\path (1,0) node[myvertex,ugentblue] (A1) {}
+(-1,1) node[myvertex,ugentblue] (A2) {}
+(1,1) node[myvertex,ugentblue] (A3) {}
+(0,2) node[myvertex,ugentblue] (A4) {};
\draw[myedge,ultra thick,mydarkgray,rounded corners=5pt] (-.5,-.5) rectangle (2.5,2.5);
\draw[myedge] (A1) -- node[left] {$\infty$} (A2) -- node[left] {$\infty$} (A4);
\draw[myedge] (A1) -- node[right] {$\infty$} (A3) -- node[right] {$\infty$} (A4);
\draw[myedge] (A1) -- node[right] {\!$\infty$} (A4);
\node[] at ([yshift=-.8em] A1) {$3$};
\node[] at ([xshift=-.8em] A2) {$1$};
\node[] at ([xshift=.8em] A3) {$2$};
\node[] at ([yshift=.8em] A4) {$4$};
\end{tikzpicture}
\]
We now choose $\rho$ to be the cyclic symmetry $(123)$ of $M$ of order $3$ and we choose $k=2$, so $\rho(k)=3$ and $\rho^{-1}(k)=1$.
We let $G_1,G_2,G'_1,G'_2,H$ be five permutation groups (acting on sets $\Omega_1,\Omega_2,\Omega'_1,\Omega'_2,\Omega$, respectively).
It now follows from \cref{thm:application} that
\[ \U_N(G_1,\ G_2,\ G'_1 \times G'_2,\ H) \cong \U_N(G'_1,\ G'_2,\ H,\ G_1 \times G_2) . \]
\end{enumerate}
\end{examples}
Notice that both examples can exist for \emph{locally finite} buildings (i.e., buildings where each of the parameters $q_i$, $i \in I$, is finite). This happens because the diagram $M_k$ has no edges, so that the corresponding universal group $\U_{M_k}(G_1,\dots,G_t)$ is just a direct product of the groups $G_i$. On the other hand, if the diagram $M_k$ has edges, then the universal group $\U_{M_k}(G_1,\dots,G_t)$ is never finite, so those examples do not occur in the locally finite case.
\clearpage
\nocite{*}
\footnotesize
\bibliographystyle{alpha}
| {
"timestamp": "2022-07-05T02:25:54",
"yymm": "2207",
"arxiv_id": "2207.01320",
"language": "en",
"url": "https://arxiv.org/abs/2207.01320",
"abstract": "We introduce the notion of city products of right-angled buildings that produces a new right-angled building out of smaller ones.More precisely, if $M$ is a right-angled Coxeter diagram of rank $n$ and $\\Delta_1,\\dots,\\Delta_n$ are right-angled buildings, then we construct a new right-angled building $\\Delta := \\mathrm{cityproduct}_M(\\Delta_1,\\dots,\\Delta_n)$. We can recover the buildings $\\Delta_1,\\dots,\\Delta_n$ as residues of $\\Delta$, but we can also construct a skeletal building of type $M$ from $\\Delta$ that captures the large-scale geometry of $\\Delta$.We then proceed to study universal groups for city products of right-angled buildings, and we show that the universal group of $\\Delta$ can be expressed in terms of the universal groups for the buildings $\\Delta_1,\\dots,\\Delta_n$ and the structure of $M$. As an application, we show the existence of many examples of pairs of different buildings of the same type that admit (topologically) isomorphic universal groups, thereby vastly generalizing a recent example by Lara Beßmann.",
"subjects": "Group Theory (math.GR)",
"title": "City products of right-angled buildings and their universal groups"
} |
https://arxiv.org/abs/1909.06022 | Error Analysis of Supremizer Pressure Recovery for POD based Reduced Order Models of the time-dependent Navier-Stokes Equations | For incompressible flow models, the pressure term serves as a Lagrange multiplier to ensure that the incompressibility constraint is satisfied. In engineering applications, the pressure term is necessary for calculating important quantities based on stresses like the lift and drag. For reduced order models generated via a Proper orthogonal decomposition, it is common for the pressure to drop out of the equations and produce a velocity-only reduced order model. To recover the pressure, many techniques have been numerically studied in the literature; however, these techniques have undergone little rigorous analysis. In this work, we examine two of the most popular approaches: pressure recovery through the Pressure Poisson equation and recovery via the momentum equation through the use of a supremizer stabilized velocity basis. We examine the challenges that each approach faces and prove stability and convergence results for the supremizer stabilized approach. We also investigate numerically the stability and convergence of the supremizer based approach, in addition to its performance against the Pressure Poisson method. | \section{Introduction}
Let $\Omega \subset \mathbb{R}^{d}$, $d=2,3$ be a regular open domain with Lipschitz continuous boundary $\Gamma$. We consider the Navier-Stokes equations (NSE) with no-slip boundary conditions:
\begin{equation}\label{eqn:nse-1}
\begin{aligned}
&u_t + u\cdot\nabla u + \nabla p - \nu\Delta u = f,\ \text{and } \nabla \cdot u = 0,\ \text{in} \ \Omega \times (0,T] \\
&u = 0, \ \text{on} \ \Gamma \times (0,T], \ \text{and } u(x,0) = u_0(x), \ \text{in} \ \Omega, \\
\end{aligned}
\end{equation}
where $u$ is the velocity, $p$ is the pressure, $f$ is the known body force, and $\nu$ is the viscosity.
In recent years, there has been a growing interest in the application of reduced order models (ROMs) to modeling incompressible flows \cite{FMPT18,HRS15,NMT11,RAMBR17,SR18,XWWI17,V05}. Galerkin-based ROMs use experimental data, or solutions generated from full-order numerical schemes, i.e., finite element or finite volumes schemes, to generate a low dimensional basis. Due to the low dimensionality of the ROM basis, computational costs can be orders of magnitude smaller when compared to these full-order schemes. In practice, the data used to generate the ROM basis will often be weakly divergence-free. This divergence-free property causes the pressure term to drop out of the ROM formulation, leading to a velocity-only ROM. However, in almost every setting, accurate recovery of the pressure is required to calculate forces on walls or immersed boundaries. Additionally, the pressure term can be used to calibrate codes and models with (reliable) pressure data.
The problem tackled herein is how to recover the discrete pressure, $p_{m}$, reliably and accurately from a (discretely) divergence-free POD velocity $u_{r}$. Several approaches have been used in the literature, but no validation of their accuracy and stability has been conducted. The two most popular approaches are:
\\\\
{ (1) Solving the Pressure Poisson equation (PPE)}: \
\begin{equation}
\Delta p_{m}
= - \nabla\cdot((u_{r} \cdot \nabla) u_{r}) + \nabla \cdot f + BC
\quad \mbox{in } \Omega\,,
\label{eqn:pressure-poisson}
\end{equation}
which is obtained by taking the divergence of the NSE~\eqref{eqn:nse-1}. Here, $BC$ is a Neumann boundary condition which will be derived in Section \ref{sec:PPE}.
\\\\
{(2) Determining the pressure via the momentum equation recovery formulation \newline (MER):} \
\begin{equation}
\nabla p_{m} = u_{t,r} + u_{r} \cdot \nabla u_{r} - \nu \Delta u_{r} + f \quad \mbox{in } \Omega\,.
\end{equation}
In practice, this involves using the supremizer stabilization technique developed in \cite{BMQR15,RV07} to ensure compatibility between the pressure and velocity spaces.
Herein, we analyze the stability and convergence of the MER method and briefly review the PPE approach. In the ROM literature, the PPE has yielded accurate results; however, we will see in the derivation of the discrete equations, as well as in the numerical experiments, that the Neumann boundary condition leads to a loss of accuracy, especially within the boundary layer.
The MER method does not require any boundary conditions. Surprisingly, however, it does not work universally. Its reliability will be dependent on the classic inf-sup condition, as well as an \textit{a priori} computable constant dependent on the angle between the initial POD velocity space and supremizer space. We show in the numerical experiments that for the same number of basis functions, the MER approach yields more accurate solutions for the pressure than the PPE method.
The rest of this paper is organized as follows: In Section \ref{sec:notation}, we introduce notation and state preliminary results. In Section \ref{sec:POD_sec}, we outline the construction of our ROM via a proper orthogonal decomposition. In Section \ref{sec:pressure_recovery}, we present the derivation of the PPE and MER. In Section \ref{sec:error_analysis}, we prove stability and convergence results for the PPE and MER formulations. In Section \ref{sec:numerical_experiments}, we numerically investigate the performance of these pressure recovery techniques. In Section \ref{sec:conclusions}, we end the paper with conclusions and discussion of future research directions.
\subsection{Related Work}
For pressure recovery, the PPE has been studied extensively within both the finite element setting \cite{GM87,JL04,SSPG06} and the ROM setting \cite{ANR09,CIJS14,NPM05,SHMLR17}. In \cite{CIJS14}, a numerical comparison was performed for a formulation of the PPE involving pressure basis functions versus one which strictly relied on the velocity modes. In \cite{NPM05}, the authors explored the need for a pressure term, determined via the PPE, for ROM simulations of shear flows. In \cite{SHMLR17}, the authors used the PPE to recover the pressure for a finite volume based ROM of vortex shedding around a circular cylinder.
The supremizer stabilization approach for recovering the pressure was introduced in \cite{BMQR15} for the parameterized steady NSE. It was extended to the case where a strongly divergence-free POD velocity basis is used in \cite{FBKR19}. Supremizers have also been used in the context of Petrov Galkerin methods in \cite{AB15, CC19, Y14}.
A different class of approaches studied for recovering the pressure incorporates a pressure stabilization. This approach relaxes the incompressbility constraint, ensuring that the pressure term does not drop out of the ROM formulation. These include the artificial compression scheme studied in the ROM setting in \cite{DILMS19} and the Local Projection Stabilization ROM studied in \cite{R19}.
\section{Notation and Preliminaries}\label{sec:notation}
In this section, we establish notation and collect preliminary results needed for the numerical analysis and experiments in the following sections. We denote by $\|\cdot\| = \|\cdot\|_{0}$ the $L^{2}(\Omega)$ norm and by $(\cdot,\cdot)$ the $L^{2}(\Omega)$ inner product. The standard velocity space $X$ and pressure space $Q$ are defined as:
$$
\begin{aligned}
X : =& H^{1}_{0}(\Omega)^{d} = \{ v \in H^{1}(\Omega)^{d} \,:\, v|_{\Gamma} = 0 \} \\
Q : =& L^{2}_{0}(\Omega) = \{ q \in L^{2}(\Omega) \,:\, \int_{\Omega} q dx = 0 \}.
\end{aligned}
$$
For functions $v \in X$, the Poincar\'{e} inequality holds
\begin{equation*}
\begin{aligned}
&\|v\| \leq C_{P}\|\nabla v\|.
\end{aligned}
\end{equation*}
\noindent The space $H^{-1}(\Omega)$ denotes the dual space of bounded linear functionals defined on $H^{1}_{0}(\Omega)=\{v\in H^{1}(\Omega)\,:\,v=0 \mbox{ on } \Gamma\}$; this space is equipped with the norm
$$
\|f\|_{-1}=\sup_{0\neq v\in X}\frac{(f,v)}{\| \nabla v\| }
\quad\forall f\in H^{-1}(\Omega).
$$
\noindent We assume that the solution of the NSE is a strong solution satisfying the weak formulation
\begin{equation}\label{wfwf}
\begin{aligned}
(u_{t},v)+(u\cdot\nabla u,v)+\nu(\nabla u,\nabla v)-(p
,\nabla\cdot v) & =(f,v)&\quad\forall v\in X\\
(\nabla\cdot u,q) & =0&\quad\forall q\in Q.\\
\end{aligned}
\end{equation}
We will consider a discretization of the time interval $[0,T]$ into $N$ separate intervals such that $\Delta t = \frac{T}{N}$ and $t_{n} = n \Delta t$ for $n = 0, \ldots, N$.
We then define the norms
$$
||v||_{p,s} : = \Big(\int_{0}^{T}\|v(\cdot,t)\|_{s}^{p}dt\Big)^{\frac{1}{p}}
\qquad \text{and} \qquad
||v||_{\infty,s} := \text{ess\,sup}_{[0,T]}\|v(\cdot,t)\|_{s} ,
$$
and their discrete counterparts
$$
|||v|||_{p,s} : = \Big(\sum_{n=0}^{N}\|v^{n}\|_{s}^{p} \Delta t\Big)^{\frac{1}{p}}
\qquad \text{and} \qquad
|||v|||_{\infty,s} := \max_{0 \leq n \leq N}\| v^{n}\|_{s}
$$
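As a small illustrative sketch (not part of the paper's method), the discrete norms $|||v|||_{p,s}$ and $|||v|||_{\infty,s}$ can be evaluated directly from the per-step spatial norms $\|v^{n}\|_{s}$; the function name and the constant-in-time test field below are hypothetical.

```python
import numpy as np

def discrete_norm(step_norms, dt, p=2.0):
    """Discrete norm |||v|||_{p,s} built from the values ||v^n||_s, n = 0, ..., N.

    p = np.inf returns the max-in-time norm |||v|||_{inf,s}.
    """
    step_norms = np.asarray(step_norms, dtype=float)
    if np.isinf(p):
        return step_norms.max()
    return (np.sum(step_norms**p) * dt) ** (1.0 / p)

# Constant-in-time field with ||v^n||_s = c for all n:
# |||v|||_{2,s} = c * sqrt((N+1) * dt) and |||v|||_{inf,s} = c.
c, N, dt = 3.0, 9, 0.1
norms = np.full(N + 1, c)
```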
For the spatial discretization of the NSE, we use a conforming finite element space for the velocity $X_{h}\subset X$ and pressure $Q_{h}\subset Q$ based on a regular triangulation of $\Omega$ having maximum triangle diameter $h$.
We assume that the finite element spaces satisfy the discrete inf-sup condition: There exists a constant $\beta_{h} >0$ independent of $h$ such that
\begin{equation}\label{eqn:inf-supFE}
\inf_{q_{h} \in Q_{h} \backslash \{0\} } \sup_{v_{h} \in X_{h} \backslash \{0\}} \frac{(\nabla \cdot v_{h},q_{h})}{\| \nabla v_{h} \| \|q_{h}\|} \geq \beta_{h}.
\end{equation}
In addition, we assume that these finite element spaces fulfill the following approximation properties:
$$
\begin{aligned}
\inf_{v_h\in X_h}\| v- v_h \|&\leq C(v,\nu) h^{s+1}&\forall v\in H^{s+1}(\Omega)^d,\\
\inf_{v_h\in X_h}\| \nabla ( v- v_h )\|&\leq C(v,\nu) h^s&\forall v\in H^{s+1}(\Omega)^d,\\
\inf_{q_h\in Q_h}\| q- q_h \|&\leq C(q,\nu) h^k&\forall q\in H^{k}(\Omega).
\end{aligned}
$$
\noindent We define the trilinear form
$$
b(w,u,v) = (w\cdot\nabla u,v)
\qquad\forall u,v,w\in H^1(\Omega)^d
$$
and the explicitly skew-symmetric trilinear form by
$$
b^{\ast}(w,u,v):=\frac{1}{2}(w\cdot\nabla u,v)-\frac{1}{2}(w\cdot\nabla v,u)
\qquad\forall u,v,w\in H^1(\Omega)^d \, .
$$
The term $b^{\ast}$ satisfies the following bound
\begin{lemma}\label{lemma:trilinear}
There exists a constant $C_{b^{\ast}}>0$ only dependent on the domain $\Omega$ such that
\begin{equation*}
b^{\ast}(w,u,v)\leq C_{b^{\ast}} \|\nabla w\| \| \nabla u\| \| \nabla
v \| \qquad\forall u, v, w \in X.
\end{equation*}
\end{lemma}
\begin{proof}
See Lemma 6.11 of \cite{J16}.
\end{proof}
We define the space of discretely divergence free functions as
\begin{equation}
V^{div}_{h}:= \{v_{h} \in X_{h} : (\nabla \cdot v_{h},q_{h}) = 0 \ \forall q_{h}\in Q_{h}\} \subset X.
\end{equation}
From Hilbert space theory, the function space $X_{h}$ can be decomposed into the orthogonal subspaces
\begin{equation}\label{ortho_subspaces}
X_{h} = V^{div}_{h} \oplus (V^{div}_{h})^{\perp},
\end{equation}
where the orthogonality is in the sense of the $H^{1}$ inner product.
Throughout the rest of this paper we assume that the solution to the NSE satisfies the following regularity assumptions
\\
\begin{assumption}\label{assumption:regularity} In \eqref{wfwf} we assume that $u$, $p,$ and $f$ satisfy:
\begin{equation*}
\begin{aligned}
&u \in L^{\infty}(0,T,X \cap H^{s+1}(\Omega)), u_{t} \in L^{2}(0,T,H^{s+1}(\Omega)), u_{tt} \in L^{2}(0,T,H^{s+1}(\Omega)),
\\
&f \in L^{2}(0,T,L^{2}(\Omega)), p \in L^{2}(0,T,Q \cap H^{k}(\Omega)).
\end{aligned}
\end{equation*}
\end{assumption}
The calculation of snapshots to construct the ROM in the ensuing sections is done using the $P^{2}-P^{1}$ Taylor-Hood finite element pair along with a backward Euler time discretization. Specifically, given $u^{0}_h \in X_h$, for $n=0,1,\ldots,N-1$ we find $u^{n+1}_h\in X_h$ and $p_h^{n+1}\in Q_h$ satisfying
\begin{equation}\label{eqn:BE_FEM}
\begin{aligned}
&\Big(\frac{u^{n+1}_h - u^{n}_h}{\Delta t}, v_h \Big) + b^{\ast}(u_h^{n} , u^{n+1}_h ,v_h) +\nu (\nabla u^{n+1}_h, \nabla v_h) \\
&- (p^{n+1}_h , \nabla \cdot v_h) =( f^{n+1}, v_h) \quad \quad \qquad \forall v_h\in X_h\\
&(\nabla \cdot u_h^{n+1}, q_h )= 0 \qquad \qquad \qquad \qquad \ \ \ \forall q_h\in Q_h.
\end{aligned}
\end{equation}
It has been shown in Theorem 7.78 of \cite{J16} that, using Taylor-Hood elements and under the regularity conditions given in Assumption \ref{assumption:regularity}, the solution of \eqref{eqn:BE_FEM} satisfies the following error estimate
\begin{equation}
\|u(t^{N}) - u_{h}^{N}\|^{2} + \nu \Delta t \sum_{n=1}^{N} \|\nabla (u(t^{n}) - u_{h}^{n}) \|^{2} \leq C(\nu) \left(h^{2s}(1 + \nu^{-1}\|p\|_{\infty,k}^{2}) + \Delta t^{2} \right),
\end{equation}
with $C$ independent of $h$, $p$, and $\Delta t$.
\section{Proper Orthogonal Decomposition Preliminaries}
\label{sec:POD_sec}
In this section, we briefly describe the POD method. We will closely follow the notation and presentation in \cite{DILMS19}. A more detailed description of this method can be found in \cite{KV01}.
We discretize the time interval $[0,T]$ into $N$ separate intervals such that $\Delta t = \frac{T}{N}$ and $t_{n} = n \Delta t$ for $n = 0, \ldots, N$. We will denote by $u_{h}^{n}(x)\in X_h$, $p_{h}^{n}(x)\in Q_h$, $n=0,\ldots,N$, the finite element solution to \eqref{eqn:BE_FEM} evaluated at $t=t_n$, $n=1,\ldots,N$.
Letting ${u}_S^{n}$ and ${p}_S^{n}$ be the vector of coefficients corresponding to the finite element functions $u_{h}^{n}(x)$ and $p_{h}^{n}(x)$, we define the velocity snapshot matrix $\mathbb{V}$ and pressure snapshot matrix $\mathbb{P}$ as
\begin{equation*}
\begin{aligned}
\mathbb{V} = \big({u}_S^{0},{u}_S^{1}, \ldots , {u}_S^{N}\big)\ \ \text{and} \ \ \mathbb{P} = \big({p}_S^{0},{p}_S^{1}, \ldots , {p}_S^{N}\big).
\end{aligned}
\end{equation*}
We consider the set of finite element velocity $\{u^{n}_{h,S}\}_{n=0}^{N}$ and pressure $\{p^{n}_{h,S}\}_{n=0}^{N}$ functions corresponding to the velocity and pressure snapshots.
Defining the velocity and pressure spaces spanned by these functions as
\begin{equation*}
X_{h,S} :=\text{span}\{u^{n}_{h,S}\}_{n=0}^{N} \subset X_h \ \ \text{and} \ \
Q_{h,S} :=\text{span}\{p^{n}_{h,S}\}_{n=0}^{N} \subset Q_h,
\end{equation*}
the POD method then seeks a low-dimensional representation of these spaces. Denoting by $\{{\varphi_i(x)}\}_{i=1}^r$ the velocity POD basis and $\{{\psi_i(x)}\}_{i=1}^m$ the pressure POD basis we define the reduced velocity and pressure spaces as
\begin{equation*}
X_r :=\text{span}\{{\varphi}_i\}_{i=1}^r \subset X_{h,S} \subset X_h \ \ \text{and} \ \
Q_m :=\text{span}\{{\psi}_i\}_{i=1}^m \subset Q_{h,S} \subset Q_h.
\end{equation*}
We let $\delta_{ij}$ denote the Kronecker delta and $\mathcal{H}_{V}$ and $\mathcal{H}_{P}$ a Hilbert space for the velocity and pressure space, respectively. The POD method determines these bases by solving the constrained minimization problems: find $\{{\varphi}_i\}_{i=1}^r$ and $\{{\psi}_i\}_{i=1}^m$ satisfying
\begin{equation}\label{Min-velocity}
\begin{aligned}
\min \frac{1}{N+1}\sum_{n=0}^{N} \Big \| u_{h}^{n}-\sum_{j=1}^r (u_{h}^{n}, \varphi_j)_{\mathcal{H}_{V}}\varphi_j\Big \|_{\mathcal{H}_{V}}^2 \\
\text{subject to } (\varphi_i, \varphi_j)_{\mathcal{H}_{V}}= \delta_{ij}\quad\mbox{for $i,j=1,\ldots,r$},
\end{aligned}
\end{equation}
and
\begin{equation}\label{Min-pressure}
\begin{aligned}
\min \frac{1}{N+1} \sum_{n=0}^{N} \Big \| p_{h}^{n}-\sum_{j=1}^m (p_{h}^{n}, \psi_j)_{\mathcal{H}_{P}}\psi_j\Big \|_{\mathcal{H}_{P}}^2 \\
\text{subject to } (\psi_i, \psi_j)_{\mathcal{H}_{P}}= \delta_{ij}\quad\mbox{for $i,j=1,\ldots,m$}.
\end{aligned}
\end{equation}
Defining the velocity and pressure correlation matrices entrywise by $(\mathbb{C}_{V})_{nk} = \frac{1}{N+1}(u^{n}_{S},u^{k}_{S})_{\mathcal{H}_{V}}$ and $(\mathbb{C}_{P})_{nk} = \frac{1}{N+1}(p^{n}_{S},p^{k}_{S})_{\mathcal{H}_{P}}$ for $n,k = 0, \ldots, N,$ these problems can then be solved by considering the eigenvalue problems
\begin{equation*}
\mathbb{C}_{V}\vec{a}_{i} = \lambda_{i}\vec{a}_{i},
\end{equation*}
and
\begin{equation*}
\mathbb{C}_{P}\vec{b}_{i} = \sigma_{i}\vec{b}_{i}.
\end{equation*}
The eigenvalues of $\mathbb{C}_{V}$, $\lambda_{1} \geq \cdots \geq \lambda_{N_V} > 0$, and of $\mathbb{C}_{P}$, $\sigma_{1} \geq \cdots \geq \sigma_{N_P} > 0$, are sorted in descending order. Here, $N_{V}$ and $N_{P}$ are the ranks of $\mathbb{V}$ and $\mathbb{P},$ respectively. It follows that the finite element basis coefficients corresponding to the POD basis functions are given by
\begin{equation*}
\vec\varphi_i = \frac{1}{\sqrt{\lambda_i}}\mathbb{V}\vec{a}_{i}, \ \ \ i = 1, \ldots, r,
\end{equation*}
and
\begin{equation*}
\vec\psi_i = \frac{1}{\sqrt{\sigma_i}}\mathbb{P}\vec{b}_{i}, \ \ \ i = 1, \ldots, m.
\end{equation*}
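The eigenvalue computation above (the method of snapshots) can be sketched in a few lines of linear algebra. The sketch below is a minimal illustration, assuming $\mathcal{H}_{V} = L^{2}$ with an identity mass matrix and random stand-in snapshots; all sizes and variable names are hypothetical. Note that with the factor $\frac{1}{N+1}$ included in $\mathbb{C}_{V}$, the orthonormal basis coefficient vectors carry the normalization $\mathbb{V}\vec{a}_i/\sqrt{(N+1)\lambda_i}$.

```python
import numpy as np

rng = np.random.default_rng(0)
Nh, Ns, r = 50, 12, 4                 # FE dofs, number of snapshots N+1, POD modes

V = rng.standard_normal((Nh, Ns))     # stand-in snapshot matrix (columns u_S^n)
C = V.T @ V / Ns                      # correlation matrix C_V (mass matrix = I)

lam, A = np.linalg.eigh(C)            # eigenpairs C_V a_i = lambda_i a_i
lam, A = lam[::-1], A[:, ::-1]        # sort in descending order

# POD basis coefficient vectors; the 1/(N+1) factor in C_V reappears here so
# that the basis is orthonormal in the discrete L^2 inner product.
Phi = V @ A[:, :r] / np.sqrt(Ns * lam[:r])
```

The minimization property of the POD basis then gives the identity $\frac{1}{N+1}\sum_{n}\|u^{n}_h - \sum_{i \leq r}(u^{n}_h,\varphi_i)\varphi_i\|^{2} = \sum_{i > r}\lambda_i$, which can be checked numerically on this toy data.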
Throughout the rest of this paper, we will assume that $\mathcal{H}_{V} = L^{2}$ and $\mathcal{H}_{P} = L^{2}$. POD error analysis has been conducted for $\mathcal{H}_{V} = H_{0}^{1}$ in the semidiscrete setting for the NSE in \cite{KV02}. Analyses and numerical tests comparing the different POD bases were conducted in the semidiscrete setting for the heat equation using a variety of different error norms in \cite{IW142} and for the NSE in \cite{S14}. We note that the results in this paper could be extended to the case where $\mathcal{H}_{V} = H_{0}^{1}$ and $\mathcal{H}_{P} = H^{1}$, but we do not do so here for clarity of presentation. A rigorous comparison between the $L^{2}$ and $H^{1}$ POD bases in the fully discrete setting for the velocity approximation and the pressure recovery techniques explored in this paper is a subject of ongoing research.
Using the velocity POD basis $\{\varphi\}_{i=1}^{r}$ we will construct the {BE-ROM} scheme. We seek a solution in $X_{r}$ using the POD basis $\{\varphi_{i}\}_{i=1}^{r}$ as opposed to the finite element basis as done in \eqref{eqn:BE_FEM}. The {BE-ROM} scheme can be written as:
\begin{equation}\label{eqn:BE_ROM}
\Big(\frac{u^{n+1}_r - u_{r}^{n}}{\Delta t}, \varphi \Big) + b^{\ast}(u_r^{n} , u^{n+1}_r ,\varphi) +\nu (\nabla u^{n+1}_r, \nabla \varphi) =( f^{n+1}, \varphi) \ \ \forall \varphi \in X_r.
\end{equation}
The terms involving the pressure have dropped out of \eqref{eqn:BE_ROM} due to the fact that $X_{r} \subset V^{div}_{h}$, yielding a velocity only ROM.
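In fully algebraic form, one step of {BE-ROM} with an $L^{2}$-orthonormal basis (so that the POD mass matrix is the identity) amounts to a small dense $r \times r$ solve. The sketch below illustrates the structure of the step with random stand-in matrices (a hypothetical SPD stiffness matrix and a skew-symmetric matrix mirroring the skew-symmetry of $b^{\ast}(u^{n}_r,\cdot,\cdot)$); it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
r, dt, nu = 5, 0.01, 0.1

Y = rng.standard_normal((r, r))
S = Y @ Y.T + r * np.eye(r)        # SPD stand-in for the POD stiffness matrix
W = rng.standard_normal((r, r))
Nmat = W - W.T                     # skew stand-in for b*(u_r^n, . , .)

c_n = rng.standard_normal(r)       # coefficients of u_r^n
f = rng.standard_normal(r)         # POD load vector

# Backward Euler ROM step: (I/dt + nu*S + Nmat) c^{n+1} = c^n/dt + f
A = np.eye(r) / dt + nu * S + Nmat
c_np1 = np.linalg.solve(A, c_n / dt + f)

# With f = 0, skew-symmetry of Nmat yields the discrete energy decay
# underlying the stability estimate for this scheme.
c_hom = np.linalg.solve(A, c_n / dt)
```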
\section{Pressure Recovery Formulations}\label{sec:pressure_recovery}
As shown in the derivation of {BE-ROM} \eqref{eqn:BE_ROM}, because $X_{r} \subset V^{div}_{h}$, the pressure term drops out of the formulation, yielding a velocity-only ROM. In this section, we review two ways in which the pressure can be recovered from the velocity solution $u_{r}^{n+1}$.
\subsection{Momentum Equation Recovery}
The MER approach for recovering the pressure involves just the weak form of the momentum equation, i.e., given the ROM solution $u^{n}_{r}$, $u_{r}^{n+1},$ determined by \eqref{eqn:BE_ROM}, find $p^{n+1}_{m} \in Q_{m}$ satisfying
\begin{equation}\label{eqn:mom-temp}
\begin{aligned}
(p^{n+1}_m , \nabla \cdot s) &= -( f^{n+1}, s) +\Big(\frac{u^{n+1}_r - u^{n}_r}{\Delta t}, s \Big) + b^{\ast}(u_r^{n} , u^{n+1}_r ,s)
\\
&+\nu (\nabla u^{n+1}_r, \nabla s) \ \ \ \quad \quad \qquad \qquad \forall s\in S \subset (V^{div}_{h})^{\perp}.\\
\end{aligned}
\end{equation}
This method was studied in the ROM setting for the steady NSE in \cite{FBKR19}. An important consideration is that the test space $S$ must be determined such that it is inf-sup stable with respect to the pressure space $Q_{m}$. To do so, we follow the same approach from \cite{FBKR19}, and use the supremizer stabilization method developed in \cite{BMQR15,RV07}.
\\
\begin{remark}
Due to the fact that $u^{n+1}_{r} \in V_{h}^{div}$ and $s\in (V^{div}_{h})^{\perp}$, we have $\nu (\nabla u^{n+1}_r, \nabla s) = 0$ in \eqref{eqn:mom-temp}.
\end{remark}
\subsubsection{Supremizer Stabilization and weak formulation}
We consider the inf-sup quantity obtained from \eqref{eqn:inf-supFE} by replacing the pressure finite element space with the ROM space $Q_{m}$:
\begin{equation}\label{eqn:inf-supROM1}
\inf_{\psi \in Q_{m} \backslash \{0\} } \sup_{v_{h} \in X_{h} \backslash \{0\}} \frac{(\nabla \cdot v_{h},\psi)}{\| \nabla v_{h} \| \|\psi\|}.
\end{equation}
Given a function $p_{m} \in Q_{m}$, its supremizer is the velocity function $s_{h} \in X_{h}$ that realizes the supremum in \eqref{eqn:inf-supROM1}. This can be interpreted as the Riesz representation in $X_{h}$ of the linear functional $(\nabla \cdot \,, p_{m})$, i.e., the solution of: find $s_{h} \in X_{h}$ such that
\begin{equation}\label{eqn:supremizer}
(\nabla s_{h}, \nabla v_{h}) = - (\nabla \cdot v_{h},p_{m}) \ \ \ \forall v_{h} \in X_{h}.
\end{equation}
The supremizer enrichment algorithm consists of solving \eqref{eqn:supremizer} for each basis function $\{\psi_{i}\}_{i=1}^{m}$. Then, applying a Gram-Schmidt orthonormalization procedure to the set of solutions yields a set of basis functions $\{\zeta_{i}\}_{i=1}^{m}$. Letting
\begin{equation}
S_m :=\text{span}\{{\zeta}_i\}_{i=1}^m \subset (V^{div}_{h})^{\perp} \subset X_h ,
\end{equation}
the following inf-sup stability condition holds for the spaces $S_{m}$ and $Q_{m}$.
\begin{lemma}\label{lemma:inf-sup}
Let $\beta_{h} >0$ be the inf-sup constant for the finite element basis in \eqref{eqn:inf-supFE}. The spaces $S_{m}$ and $Q_{m}$ will then be inf-sup stable with a constant $\beta_{m} \geq \beta_{h}$, i.e.,
\begin{equation}\label{eqn:inf-supROM2}
\beta_{m} = \inf_{\psi \in Q_{m} \backslash \{0\} } \sup_{\zeta \in S_{m} \backslash \{0\}} \frac{(\nabla \cdot \zeta ,\psi)}{\| \nabla \zeta \| \|\psi\|} \geq \beta_{h}.
\end{equation}
\end{lemma}
\begin{proof}
See section 4 of \cite{BMQR15}.
\end{proof}
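A minimal linear-algebra sketch of the supremizer construction, with random stand-ins for the stiffness matrix (the gradient inner product on $X_h$) and the divergence pairing matrix, and pressure modes assumed $L^{2}$-orthonormal; all sizes and names are hypothetical. Each supremizer solves the matrix analogue of \eqref{eqn:supremizer}, and the inf-sup constant restricted to the supremizer space can be compared against the full-space one.

```python
import numpy as np

rng = np.random.default_rng(2)
nh, m = 40, 5                        # velocity dofs, pressure modes (toy sizes)

X = rng.standard_normal((nh, nh))
S = X @ X.T + nh * np.eye(nh)        # SPD stand-in for the stiffness matrix
B = rng.standard_normal((m, nh))     # stand-in for B[i, j] = (div v_j, psi_i)

# Supremizer for psi_k: solve S z_k = -B^T e_k (matrix form of the Riesz problem)
Z = -np.linalg.solve(S, B.T)         # columns span the supremizer space S_m

# Full-space inf-sup constant: beta_h = sqrt(lambda_min(B S^{-1} B^T))
G = B @ np.linalg.solve(S, B.T)
beta_h = np.sqrt(np.linalg.eigvalsh(G).min())

# Restricted to S_m: for zeta = Z y, (div zeta, p) = -p^T G y and
# |grad zeta|^2 = y^T G y, so the sup over S_m for a unit p equals
# sqrt(p^T G p) >= beta_h, illustrating beta_m >= beta_h.
```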
Using the space $S_{m}$ in \eqref{eqn:mom-temp} the MER formulation is then given by: find $p_{m}^{n+1} \in Q_{m}$ satisfying
\begin{equation}\label{eqn:posteriori-momentum}
\begin{aligned}
(p_{m}^{n+1}, \nabla \cdot \zeta) = &\Big(\frac{u^{n+1}_r - u^{n}_r}{\Delta t}, \zeta \Big) + b^{\ast}(u_r^{n} , u^{n+1}_r,\zeta)
\\
& - ( f^{n+1}, \zeta) \ \ \forall \zeta \in S_m.
\end{aligned}
\end{equation}
It can be shown (see section 4 of \cite{FBKR19}) that solving \eqref{eqn:BE_ROM} followed by \eqref{eqn:posteriori-momentum} is equivalent to solving the coupled system generated by discretizing \eqref{eqn:BE_FEM} with the combined velocity basis $X_{r} \oplus S_{m}$ and pressure space $Q_{m}$. The disadvantage of the coupled approach is that it requires solving a system of size $r + 2m$ instead of separate ones of size $r$ and $m$.
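In matrix form, \eqref{eqn:posteriori-momentum} is a square $m \times m$ system for the pressure coefficients. In the toy setup below (hypothetical sizes, $L^{2}$-orthonormal pressure modes, random stand-in matrices), the system matrix with entries $(\psi_{j}, \nabla \cdot \zeta_{i})$ works out to $-B S^{-1} B^{T}$, which is symmetric negative definite, so the inf-sup property guarantees a unique solution.

```python
import numpy as np

rng = np.random.default_rng(3)
nh, m = 40, 5                        # toy sizes: velocity dofs, pressure modes

X = rng.standard_normal((nh, nh))
S = X @ X.T + nh * np.eye(nh)        # SPD stand-in stiffness matrix
B = rng.standard_normal((m, nh))     # stand-in divergence pairing matrix
Z = -np.linalg.solve(S, B.T)         # supremizer basis (columns)

# MER system matrix: A[i, j] = (psi_j, div zeta_i), i.e. A = (B Z)^T = -B S^{-1} B^T
A = (B @ Z).T

rhs = rng.standard_normal(m)         # stand-in for the momentum-residual load
p_m = np.linalg.solve(A, rhs)        # recovered pressure coefficients
```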
\\
\begin{remark}
We note that the computational cost of this approach is comparable to other methods used for pressure recovery in the time-dependent setting. In \cite{BMQR15}, the authors considered the steady NSE in a parameterized domain. This resulted in the inf-sup constant in \eqref{eqn:inf-supROM2} being parameter dependent. Therefore, each time a different parameter was sampled, the supremizer stabilization algorithm needed to be rerun for \eqref{eqn:inf-supROM2} to be satisfied. Because of the large computational cost, the authors proposed an approximate supremizer algorithm that did not rigorously satisfy \eqref{eqn:inf-supROM2}. We stress that for the problem setting studied in this paper the inf-sup constant is not parameter dependent. Therefore, the supremizer stabilization algorithm only needs to be run once in the offline stage. This cost is negligible compared to the cost of generating the snapshot matrices $\mathbb{V}$ and $\mathbb{P}$ in the offline phase.
\end{remark}
\subsection{Pressure Poisson}\label{sec:PPE}
In the ROM literature, the most frequently used technique for recovering the pressure is the PPE. The PPE has been studied in the continuous, finite difference, and finite element settings \cite{GM87,JL04,SSPG06}. In the ROM setting, numerical studies have been performed in \cite{CIJS14,NPM05}.
In this section, we rederive the PPE and its corresponding weak formulation. We follow the approach used in \cite{JL04}.
\subsubsection{Pressure Poisson Formulation}
Taking the divergence of the momentum equation in \eqref{eqn:nse-1}, assuming sufficient regularity, and using $\nabla \cdot u = 0$ gives
\begin{equation}\label{eqn:PPE-strongI}
\Delta p = - \nabla \cdot (u \cdot \nabla u) + \nabla \cdot f.
\end{equation}
For this equation to be equivalent to the NSE, we need to impose additional constraints on \eqref{eqn:PPE-strongI}. Some possibilities include the enforcement of a no-slip boundary for the divergence of the velocity, retaining the term $\Delta (\nabla \cdot u)$ in \eqref{eqn:PPE-strongI}, or incorporating a Neumann boundary condition into \eqref{eqn:PPE-strongI}. Full details on these different approaches can be found in \cite{GM87,JL04,SSPG06}.
We will consider the most common approach used in the ROM setting, adding a Neumann boundary condition to \eqref{eqn:PPE-strongI}. To this end, we take the normal component of the momentum equation along the boundary $\Gamma$. Using the vector identity
\begin{equation*}
\Delta u = -\nabla \times \nabla \times u + \nabla(\nabla \cdot u)
\end{equation*}
along with $\nabla \cdot u = 0$ gives
\begin{equation}
\frac{\partial p}{\partial n}\biggr|_{\Gamma} = \bigg [-\nu\, n \cdot (\nabla \times \nabla \times u) + n \cdot f \bigg]\biggr|_{\Gamma},
\end{equation}
where $n$ is the unit normal along $\Gamma$. Equipping \eqref{eqn:PPE-strongI} with this boundary condition then gives the full PPE
\begin{subequations}\label{eqn:PPE-strongII}
\begin{align}
&\Delta p = - \nabla \cdot (u \cdot \nabla u) + \nabla \cdot f \label{eqn:PPE-strongIIa}
\\
&\frac{\partial p}{\partial n}\biggr|_{\Gamma} = \bigg [-\nu\, n \cdot (\nabla \times \nabla \times u) + n \cdot f \bigg]\biggr|_{\Gamma}. \label{eqn:PPE-strongIIb}
\end{align}
\end{subequations}
Putting this into a weak formulation, we multiply \eqref{eqn:PPE-strongIIa} by a test function $q$.
Integrating the left hand side and right hand side of \eqref{eqn:PPE-strongIIa} by parts and applying the vector identity
\begin{equation}
\int_{\Gamma} n\cdot (\nabla \times \nabla \times u) q = - \int_{\Gamma} (\nabla \times u) \cdot (n \times \nabla q)
\end{equation}
gives the weak form of the PPE
\begin{equation}\label{eqn:PPE-weak}
(\nabla p, \nabla q) = - (u \cdot \nabla u,\nabla q) + (f,\nabla q) + \nu \int_{\Gamma} (\nabla \times u) \cdot (n \times \nabla q).
\end{equation}
Equation \eqref{eqn:PPE-weak} can then be discretized using the pressure POD basis along with the discrete velocity solution to recover the pressure at each time step. Specifically, given the ROM velocity solution $u_{r}^{n+1},$ we find $p_{m}^{n+1} \in Q_{m}$ satisfying
\begin{equation}
\begin{aligned}\label{eqn:D-PPE}
(\nabla p^{n+1}_{m},\nabla \psi) &= -\left( u^{n+1}_{r} \cdot \nabla u_{r}^{n+1},\nabla \psi \right) + (f^{n+1}, \nabla \psi)
\\
&+ \nu \int_{\Gamma} (\nabla \times u^{n+1}_{r}) \cdot (n \times \nabla \psi) \qquad \qquad \qquad \forall \psi \in Q_{m}.
\end{aligned}
\end{equation}
\begin{remark}\label{remark:ppe}
For the boundary term appearing in \eqref{eqn:D-PPE} to be well posed, we require either that $(\nabla \times u^{n+1}_{r})|_{\Gamma} \in H^{1/2}(\Gamma)$ and $(n \times \nabla \psi) \in H^{-1/2}(\Gamma)$, or that $(\nabla \times u^{n+1}_{r})|_{\Gamma} \in L^{2}(\Gamma)$ and $(n \times \nabla \psi) \in L^{2}(\Gamma)$. The first of these conditions is satisfied if $u_{r}^{n+1} \in H^{2}$ and $\psi \in H^{1}$. Since $u^{n+1}_{r} \in X_{r} \subset X_{h}$ and $\psi \in Q_{m} \subset Q_{h},$ this will not hold when a $C^{0}$ finite element space is used in the offline phase. The second condition, however, holds for $C^{0}$ finite elements: since $u^{n+1}_{r}$ and $\psi$ are piecewise polynomials on the boundary, they belong to $L^{2}(\Gamma)$.
Even though this term will be well defined, it will present difficulties in terms of the theoretical analysis. In order to obtain stability and error estimates the terms involving the boundary need to be bounded in terms of the domain $\Omega$. A standard finite element approach would be to use a trace inequality (see \cite{BS02}) on these terms. However, due to the lack of regularity of these terms, it is not possible to do so here. To our knowledge, the analysis of this equation, even in the finite element setting, is an open problem.
\end{remark}
\section{Error Analysis}\label{sec:error_analysis}
In this section, we conduct an error analysis for the pressure determined by the MER formulation, \eqref{eqn:posteriori-momentum}. We begin by stating preliminary results and establishing notation.
The following stability result for {BE-ROM}, \eqref{eqn:BE_ROM}, holds.
\begin{lemma}\label{lemma:BE-ROM-stability}
Consider the method \eqref{eqn:BE_ROM} and, for $1 \leq N' \leq N$, let
\begin{equation*}
C_{stab} := \|u^{0}_{r}\|^{2} + \nu^{-1}\Delta t\sum_{n=0}^{N'-1}\|f^{n+1}\|_{-1}^{2}.
\end{equation*}
Then
\begin{equation}\label{energy_inequality}
\|u_{r}^{N'}\|^{2} + \nu \Delta t \sum_{n=0}^{N'-1}\|\nabla u_{r}^{n+1}\|^{2} \leq C_{stab}.
\end{equation}
\end{lemma}
\begin{proof}
The result follows by letting $\varphi = u_{r}^{n+1}$ and using the Cauchy-Schwarz inequality, the skew-symmetry of $b^{\ast}$, Young's inequality, and a polarization identity.
\end{proof}
\begin{definition}
{Let $C$ denote a generic constant which may depend on $f$, $u$, $p$, $C_{b^*}$, $\nu$, and $C_{stab}$, but is independent of $h$, $\Delta t$, $r$, $m$, $\lambda_{i}$, and $\sigma_{i}$.}
\end{definition}
The POD mass and stiffness matrices of the velocity space are defined entrywise as
\begin{equation*}
(\mathbb{M}_{r})_{ij} = (\varphi_{i}, \varphi_{j})_{L^{2}}, \ \ \ (\mathbb{S}_{r})_{ij} = (\nabla \varphi_{i}, \nabla \varphi_{j})_{L^{2}}.
\end{equation*}
The following POD inverse estimate then holds:
\begin{lemma}\label{lemma:POD-inveq}
For all $\varphi \in X_{r}$ it holds that
\begin{equation*}
\|\nabla \varphi \| \leq |||\mathbb{S}_{r}|||_{2}^{1/2}\|\varphi\|.
\end{equation*}
\end{lemma}
\begin{proof}
See Lemma 2 of \cite{KV01}.
\end{proof}
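The POD inverse estimate is a Rayleigh-quotient bound: writing $\varphi = \sum_i c_i \varphi_i$ with the $L^{2}$-orthonormal basis, $\|\nabla \varphi\|^{2} = c^{T}\mathbb{S}_{r}c \leq |||\mathbb{S}_{r}|||_{2}\,|c|^{2} = |||\mathbb{S}_{r}|||_{2}\,\|\varphi\|^{2}$. A toy numerical check, with a random SPD matrix standing in for $\mathbb{S}_{r}$ (all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
r = 6
Y = rng.standard_normal((r, r))
Sr = Y @ Y.T                       # SPD stand-in for the POD stiffness matrix
norm_Sr = np.linalg.norm(Sr, 2)    # spectral norm |||S_r|||_2

c = rng.standard_normal(r)         # coefficients of phi in the orthonormal basis
grad_sq = c @ Sr @ c               # ||grad phi||^2
l2_sq = c @ c                      # ||phi||^2 (mass matrix = identity)
```

Equality is attained when $c$ is a leading eigenvector of $\mathbb{S}_{r}$, so the constant in the estimate cannot be improved.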
We next define the $L^{2}$ projection into the velocity space $X_{r}$, and the pressure space $Q_{m}$.
\begin{definition} We define the $L^{2}$ projection into the velocity space $X_{r}$, and the pressure space $Q_{m}$ as $P_{r}: L^{2}(\Omega) \rightarrow X_{r}$ and $\chi_{m}: L^{2}(\Omega) \rightarrow Q_{m}$ such that
\begin{equation}
\begin{aligned}
(u - P_{r}u,\varphi) &= 0, \qquad \forall \varphi \in X_{r}, \ \ \text{and} \\
(p - \chi_{m}p, \psi) &= 0, \qquad \forall \psi \in Q_{m}.
\end{aligned}
\end{equation}
\end{definition}
The following lemmas, proven in \cite{KV01,S14},
provide bounds for the error between the snapshots and their projections onto the POD space.
\begin{lemma}\label{v-proj-errL2} It holds that
\begin{equation}
\begin{aligned}
&\frac{1}{N+1}\sum_{n = 0}^{N} \left \|u_{h}^{n} - \sum_{i=1}^{r}(u_{h}^{n},{\varphi_i}){\varphi}_i \right \|^{2} = \sum_{i=r+1}^{N_{V}} {\lambda_i},\ \ \text{and} \\
&\frac{1}{N+1}\sum_{n = 0}^{N} \left \| p_{h}^{n} - \sum_{i=1}^{m}(p_{h}^{n},\psi_i)\psi_i \right \|^{2} = \sum_{i = m + 1}^{N_P} \sigma_i .
\end{aligned}
\end{equation}
\end{lemma}
We also have the following $H^{1}$ error bound for the velocity.
\begin{lemma}\label{v-proj-errH1}It holds that
\begin{equation}
\begin{aligned}
&\frac{1}{N+1}\sum_{n = 0}^{N} \left \|\nabla(u_{h}^{n} - \sum_{i=1}^{r}(u_{h}^{n},{\varphi_i}){\varphi}_i) \right \|^{2} = \sum_{i=r+1}^{N_{V}} \| \nabla {\varphi}_i\| ^2\lambda_i.
\end{aligned}
\end{equation}
\end{lemma}
From these projection estimates we can derive error estimates for the $L^{2}$ projection error into the velocity space $X_{r}$ using the approach of Lemma 3.3 in \cite{IW14}.
\begin{lemma}\label{pod-velo-proj-lemma-L2}
For any $u^{n} \in V$ the $L^{2}$ projection error into $X_{r}$ satisfies the following estimates
\begin{equation}\label{proj-err-1}
\begin{aligned}
&\frac{1}{N+1}\sum_{n=0}^{N} \|u^{n} - P_{r}u^{n}\|^{2} \leq C(\nu,p) \left (h^{2s} + \Delta t^{2} + \sum_{i= r+1}^{N_V} {\lambda}_{i} \right), \ \ \text{and} \\
& \frac{1}{N+1}\sum_{n=0}^{N} \|\nabla(u^{n} - P_{r}u^{n})\|^{2} \leq {C(\nu,p)}\bigg((1+|||{\mathbb S}_r|||_{2}) h^{2s} + (1+|||{\mathbb S}_r|||_{2})\Delta t^2
\\ & \hspace{4cm} + \sum_{i=r+1}^{N_{V}} \| \nabla {\varphi}_i\| ^2\lambda_i\bigg) .
\end{aligned}
\end{equation}
\end{lemma}
A similar result holds for the $L^{2}$ projection error into the pressure space $Q_{m}$.
\begin{lemma}\label{pod-press-proj-lemma-L2-basis}
For any $p^{n} \in Q$ the $L^{2}$ projection error satisfies the following estimate
\begin{equation}\label{proj-err-2}
\begin{aligned}
&\frac{1}{N+1}\sum_{n=0}^{N} \|p^{n} - \chi_{m}p^{n}\|^{2} \leq C(\nu,p) \left (h^{2k} + \Delta t^{2} + \sum_{i= m+1}^{N_{P}} \sigma_{i} \right)
\end{aligned}
\end{equation}
\end{lemma}
To prove pointwise in time error estimates for the velocity, we must make the following assumption similar to the one stated in \cite{IW14}.
\begin{assumption}\label{assumption:conv}
For any $u^{n} \in V$, the $L^{2}$ projection error into $X_{r}$ satisfies the following estimates
\begin{equation}\label{proj-err-v-assump}
\begin{aligned}
& \max_{n} \|u^{n} - P_{r}u^{n}\|^{2} \leq C(\nu,p) \left (h^{2s} + \Delta t^{2} + \sum_{i= r+1}^{N_V} {\lambda}_{i} \right), \ \ \text{and} \\
&\max_{n} \|\nabla(u^{n} - P_{r}u^{n})\|^{2} \leq {C(\nu,p)}\bigg((1+|||{\mathbb S}_r|||_{2}) h^{2s} + (1+|||{\mathbb S}_r|||_{2})\Delta t^2
\\ & \hspace{4cm} + \sum_{i=r+1}^{N_{V}} \| \nabla {\varphi}_i\| ^2\lambda_i\bigg) .
\end{aligned}
\end{equation}
\end{assumption}
We denote by $e_{u}$ and $e_{p}$ the errors between the true velocity and pressure solutions and their respective POD approximations. We then split the velocity and pressure errors via the $L^{2}$ projections into the spaces $X_{r}$ and $Q_{m}$, respectively:
\begin{equation*}
\begin{aligned}
e^{n+1}_{u} = u^{n+1} - u^{n+1}_{r} = (u^{n+1} - P_{r}(u^{n+1})) + (P_{r}(u^{n+1}) - u^{n+1}_{r}) &= \eta^{n+1} - \xi_{r}^{n+1}
\\ e^{n+1}_{p} = p^{n+1} - p^{n+1}_{m} = (p^{n+1} - \chi_{m}(p^{n+1})) + (\chi_m(p^{n+1}) - p^{n+1}_{m}) &= \kappa^{n+1} - \pi_{m}^{n+1}.
\end{aligned}
\end{equation*}
Lastly, we state a convergence result for the velocity determined by the BE-ROM scheme \eqref{eqn:BE_ROM}.
\begin{theorem}\label{theorem:velocity_err}
Consider BE-ROM \eqref{eqn:BE_ROM} and let $C$ be a constant which may depend on $f,u,p,C_{b^{\ast}},C_{stab}$, and $\nu$, but is independent of $h, \Delta t,r,m, \lambda_{i},$ and ${\mathbb S}_r$. Under the regularity conditions from Assumption \ref{assumption:regularity} and the projection error estimates from Assumption \ref{assumption:conv}, for any $0 \leq n \leq N$, the following bound on the velocity error holds
\begin{equation}
\|e_{u}^{n+1}\|^{2} + \nu ||| \nabla e_{u} |||_{2,0}^{2} \leq C\left((1 + |||{\mathbb S}_r|||_{2}) (h^{2s} + \Delta{t}^{2}) + \sum_{i=r+1}^{N_{V}}\lambda_i + \sum_{i=r+1}^{N_{V}}\lambda_i \|\nabla \varphi_{i}\|^{2}\right) .
\end{equation}
\begin{proof}
The proof is identical to that of Theorem 4.1 in \cite{MRXI17}.
\end{proof}
\end{theorem}
\subsection{Momentum Equation Stability and Error Analysis}
Next, we conduct a full stability and error analysis for the MER formulation \eqref{eqn:posteriori-momentum}. We begin by stating some preliminary definitions and lemmas.
The spaces $X_{r}$ and $S_{m}$ have the following dual norms
\begin{equation*}
\|w\|_{X^{\ast}_{r}}:= \sup_{\varphi \in X_{r}} \frac{(w,\varphi)}{\| \nabla \varphi \|} \qquad \|w\|_{S^{\ast}_{m}}:= \sup_{\zeta \in S_{m}} \frac{(w,\zeta)}{\| \nabla \zeta\|}.
\end{equation*}
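These dual norms can be evaluated exactly in the discrete setting: writing $\varphi = \sum_i c_i \varphi_i$, so that $(w,\varphi) = b^{\top}c$ with $b_i = (w,\varphi_i)$ and $\|\nabla \varphi\|^{2} = c^{\top}K_{r}c$ with the reduced stiffness matrix $(K_{r})_{ij} = (\nabla \varphi_j, \nabla \varphi_i)$, the supremum equals $\sqrt{b^{\top}K_{r}^{-1}b}$. A sketch under the identity-mass-matrix simplification, with a generic SPD matrix standing in for the stiffness matrix (all names illustrative):

```python
import numpy as np

def dual_norm(w, Phi, K):
    """||w||_{X_r^*} = sup_{phi in X_r} (w, phi) / ||grad phi||.

    With phi = Phi @ c, we have (w, phi) = b @ c for b = Phi.T @ w and
    ||grad phi||^2 = c @ Kr @ c; the supremum equals sqrt(b @ Kr^{-1} @ b).
    """
    b = Phi.T @ w
    Kr = Phi.T @ K @ Phi                    # reduced stiffness matrix
    return float(np.sqrt(b @ np.linalg.solve(Kr, b)))

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 40))
K = A @ A.T + 40 * np.eye(40)               # generic SPD stand-in for stiffness
Phi, _ = np.linalg.qr(rng.standard_normal((40, 6)))
w = rng.standard_normal(40)

d = dual_norm(w, Phi, K)
# No single trial direction can exceed the supremum.
c = rng.standard_normal(6)
print((Phi.T @ w) @ c / np.sqrt(c @ (Phi.T @ K @ Phi) @ c) <= d + 1e-12)
```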
We recall the strengthened Cauchy-Buniakowskii-Schwarz (CBS) inequality. This inequality has been used in the analysis of multilevel schemes \cite{EV91} and recently in the analysis of ROMs \cite{DILMS19,M14,R19}.
\begin{lemma}\label{CBS}
Let $V$ be a Hilbert space and let $V_{1} \subset V$ and $V_{2} \subset V$ be two finite-dimensional subspaces with trivial intersection:
\begin{equation*}
V_{1} \cap V_{2} = \{0\},
\end{equation*}
Then there exists $ 0 \leq \alpha < 1$ such that
\begin{equation*}
|(v_{1},v_{2})| \leq \alpha \|v_{1}\|\|v_{2}\| \ \ \ \forall v_1 \in V_{1}, v_2 \in V_{2}.
\end{equation*}
\end{lemma}
In the ensuing analysis we will be interested in computing the value of $\alpha$ between the spaces $X_{r}$ and $S_{m}$. This can also be interpreted as determining the first principal angle defined as
\begin{equation}\label{first_principal_angle}
\theta_{1} := \min_{\varphi \neq 0, \zeta \neq 0} \left\{\arccos\left({\frac{|(\varphi,\zeta)|}{\|\varphi\|\|\zeta\|}}\right) \bigg| \varphi \in X_{r}, \zeta \in S_{m} \right\},
\end{equation}
with $0 < \theta_{1} \leq \frac{\pi}{2}$; in particular, $\alpha = \cos(\theta_{1})$.
Numerous methods for calculating the principal angle between two spaces using either a QR or SVD factorization have been devised in \cite{KA02,WS03} and the references therein. We note that due to the relatively small size of the reduced basis, this computation is negligible in terms of computational cost and storage.
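For instance, with $L^{2}$-orthonormal bases of $X_{r}$ and $S_{m}$ stored as matrix columns, $\alpha = \cos(\theta_{1})$ is the largest singular value of the cross-Gramian of the two bases. A small SVD-based sketch (identity mass matrix assumed; for finite element coefficient vectors the cross-Gramian would be mass-matrix weighted):

```python
import numpy as np

def cbs_alpha(Phi, Z):
    """alpha = cos(theta_1): the largest singular value of the
    cross-Gramian Phi^T Z, for bases with orthonormal columns."""
    return np.linalg.svd(Phi.T @ Z, compute_uv=False)[0]

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((60, 10)))
Phi, Z = Q[:, :5], Q[:, 5:]       # exactly orthogonal subspaces: alpha = 0
print(cbs_alpha(Phi, Z))

P1, _ = np.linalg.qr(rng.standard_normal((60, 5)))
P2, _ = np.linalg.qr(rng.standard_normal((60, 5)))
alpha = cbs_alpha(P1, P2)         # generic pair (almost surely trivial
print(alpha)                      # intersection): 0 <= alpha < 1
```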
Next, we prove an $H^{1}$ stability result for the $L^{2}$ projection from $S_{m}$ into $X_{r}$. In the finite element setting, this type of result is known to hold independent of the cardinality of the basis for quasi-uniform and certain regular meshes \cite{BY14}. In the ROM setting, however, this is currently an open problem (see Remark 4.1 in \cite{XWWI18}).
\begin{lemma}
Let $u_{m} \in S_{m}$ and $P_{r}:S_{m} \rightarrow X_{r}$ denote the $L^{2}$ projection from $S_{m}$ to $X_{r}$. Letting
\begin{equation*}
C_{r}^{H^{1}} := \left\|\sum_{i=1}^{r} \nabla \varphi_{i} \right\|,
\end{equation*}
the following stability bound holds
\begin{equation}
\|\nabla P_{r} u_{m}\| \leq \alpha C_{P} C_{r}^{H^{1}} \|\nabla u_{m}\|.
\end{equation}
\end{lemma}
\begin{proof}
By the definition of the $L^{2}$ projection into $X_{r}$ we have
\begin{equation}
\|\nabla P_{r} u_{m}\| = \left\|\sum_{i=1}^{r}(u_{m},\varphi_{i}) \nabla \varphi_{i} \right\|.
\end{equation}
Since $X_{r} \subset V^{div}_{h}$ and $S_{m} \subset (V^{div}_{h})^{\perp}$, we have $X_{r} \cap S_{m} = \{0\}$. Therefore, by Lemma \ref{CBS} it follows that
\begin{equation}
\left\|\sum_{i=1}^{r}(u_{m},\varphi_{i}) \nabla \varphi_{i} \right\| \leq \alpha \|u_{m}\| \left\|\sum_{i=1}^{r}\|\varphi_{i}\| \nabla \varphi_{i} \right\| .
\end{equation}
Then by the $L^{2}$ orthonormality of the basis and Poincar\'{e} inequality we have
\begin{equation}
\alpha \|u_{m}\| \left\|\sum_{i=1}^{r}\|\varphi_{i}\| \nabla \varphi_{i} \right\| \leq \alpha C_{P} \left\|\sum_{i=1}^{r} \nabla \varphi_{i} \right\|\|\nabla u_{m}\| .
\end{equation}
\end{proof}
Unlike in the finite element setting, this stability result indicates that the bound will not be independent of the number of POD basis functions used. However, if $\alpha$ is sufficiently small, i.e., $\theta_{1}$ is close to $\pi/2$, indicating that the spaces $X_{r}$ and $S_{m}$ are nearly orthogonal in the $L^{2}$ sense, then the stability bound will be well behaved.
Using this stability result we prove a bound on the dual norm of $S^{\ast}_{m}$ in terms of $X^{\ast}_{r}$.
\begin{lemma}\label{lemma:eqvn_norms}
Let $u_{r} \in X_{r}$. Then the following bound holds between the dual norms
\begin{equation}
\|u_{r}\|_{S^{\ast}_{m}} \leq \alpha C_{P} C_{r}^{H^{1}} \|u_{r}\|_{X^{\ast}_{r}}.
\end{equation}
\end{lemma}
\begin{proof}
Writing $\zeta = P_{r}\zeta + P_{r}^{\perp}\zeta$, using that $(u_{r}, P_{r}^{\perp}\zeta) = 0$ for $u_{r} \in X_{r}$, and applying the stability bound from the previous lemma, we have
\begin{equation}
\begin{aligned}
\|u_{r}\|_{S^{\ast}_{m}} &= \sup_{\zeta \in S_{m}} \frac{(u_{r},\zeta)}{\| \nabla \zeta \|}
\\
&= \sup_{\zeta \in S_{m}} \frac{(u_{r}, P_{r}\zeta + P^{\perp}_{r}\zeta)}{\| \nabla \zeta \|}
\\
&= \sup_{\zeta \in S_{m}} \frac{(u_{r}, P_{r}\zeta )}{\| \nabla \zeta \|}
\\
&\leq {\alpha C_{P} C_{r}^{H^{1}}} \sup_{\zeta \in S_{m}} \frac{(u_{r}, P_{r}\zeta )}{\|\nabla P_{r} \zeta \|}
\\
& \leq {\alpha C_{P} C_{r}^{H^{1}}} \sup_{\varphi \in X_{r}} \frac{(u_{r}, \varphi )}{\|\nabla \varphi \|} = {\alpha C_{P} C_{r}^{H^{1}}} { \|u_{r}\|_{X^{\ast}_{r}}}.
\end{aligned}
\end{equation}
\end{proof}
Next, we give an $L^{1}(0,T,L^{2}(\Omega))$ stability result for the pressure determined via the MER formulation.
\begin{theorem}\label{theorem:stability_ME}
Consider the pressure approximation determined from \eqref{eqn:posteriori-momentum}. The following energy inequality holds
\begin{equation}\label{eqn:stability_mom_L2}
\begin{aligned}
\beta_{m}|||p_{m}|||_{1,0}&\leq \left(1 + {\alpha C_{P} C_{r}^{H^{1}}}\right)\bigg(C_{b^{\ast}} C_{stab}\nu^{-1} + \Delta{t} \sum_{n=0}^{N}\|f^{n+1}\|_{-1}\bigg)
\\
&+{\alpha C_{P} C_{r}^{H^{1}}} \sqrt{\nu T C_{stab}}.
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof}
We follow a similar proof path to that in \cite{J18}. Let $\varphi \in X_{r}$, then taking equation \eqref{eqn:BE_ROM} and isolating the time derivative gives
\begin{equation}\label{theorem:stability_ME-eqn1}
\Big(\frac{u^{n+1}_r - u_{r}^{n}}{\Delta t}, \varphi \Big) = ( f^{n+1}, \varphi) - b^{\ast}(u_r^{n} , u^{n+1}_r ,\varphi) - \nu (\nabla u^{n+1}_r, \nabla \varphi).
\end{equation}
Standard bounds on the right hand side yield
\begin{equation}\label{theorem:stability_ME-eqn2}
\begin{aligned}
- b^{\ast}(u_r^{n} , u^{n+1}_r ,\varphi) &\leq C_{b^{\ast}} \|\nabla u_r^{n} \|\|\nabla u_r^{n+1} \|\|\nabla \varphi \| \\
- \nu (\nabla u^{n+1}_r, \nabla \varphi) &\leq \nu \|\nabla u^{n+1}_{r}\|\|\nabla \varphi \| \\
( f^{n+1}, \varphi) &\leq \| f^{n+1}\|_{-1} \| \nabla \varphi \|.
\end{aligned}
\end{equation}
Using these estimates, dividing both sides by $\|\nabla \varphi\|$, and taking the supremum over $\varphi \in X_{r}$, it then follows that
\begin{equation}\label{theorem:stability_ME-eqn3}
\left\|\frac{u^{n+1}_r - u_{r}^{n}}{\Delta t} \right\|_{X^{\ast}_{r}} \leq C_{b^{\ast}} \|\nabla u^{n}_{r} \| \|\nabla u^{n+1}_{r} \| + \nu\|\nabla u^{n+1}_{r} \| + \|f^{n+1}\|_{-1}.
\end{equation}
Using Lemma \ref{lemma:eqvn_norms} we then
have
\begin{equation}\label{theorem:stability_ME-eqn4}
\left\|\frac{u^{n+1}_r - u_{r}^{n}}{\Delta t} \right\|_{S^{\ast}_{m}} \leq {\alpha C_{P} C_{r}^{H^{1}}} \left( (C_{b^{\ast}}\|\nabla u^{n}_{r} \| + \nu)\|\nabla u^{n+1}_{r} \| + \|f^{n+1}\|_{-1}\right).
\end{equation}
Now considering \eqref{eqn:posteriori-momentum} and using the bounds from \eqref{theorem:stability_ME-eqn2}, we obtain
\begin{equation}\label{theorem:stability_ME-eqn5}
\begin{aligned}
(p_{m}^{n+1}, \nabla \cdot \zeta) &\leq \Big(\frac{u^{n+1}_r - u_{r}^{n}}{\Delta t}, \zeta \Big)
+ C_{b^{\ast}} \|\nabla u^{n}_{r} \| \|\nabla u^{n+1}_{r} \|\|\nabla \zeta \| + \|\nabla \zeta \|\|f^{n+1}\|_{-1}.
\end{aligned}
\end{equation}
Dividing both sides by $\|\nabla \zeta \|$, taking the supremum over $\zeta \in S_{m}$, and using the discrete inf-sup condition from Lemma \ref{lemma:inf-sup} and estimate \eqref{theorem:stability_ME-eqn4} gives
\begin{equation}
\begin{aligned}
\beta_{m}\|p_{m}^{n+1}\| &\leq \left(1 + \alpha C_{P} C_{r}^{H^{1}}\right) \left(C_{b^{\ast}} \|\nabla u^{n}_{r} \| \|\nabla u^{n+1}_{r} \|+ \|f^{n+1}\|_{-1}\right)
\\
&+ \alpha C_{P} C_{r}^{H^{1}}\nu\|\nabla u^{n+1}_{r} \|.
\end{aligned}
\end{equation}
Multiplying by $\Delta{t}$ and summing from $n=0$ to $n=N$ then yields
\begin{equation}
\begin{aligned}
\beta_{m} \Delta{t} \sum_{n=0}^{N}\|p_{m}^{n+1} \| \leq &\left(1 + \alpha C_{P} C_{r}^{H^{1}}\right) \times \biggr(C_{b^{\ast}} \Delta{t} \sum_{n=0}^{N}\|\nabla u^{n}_{r} \| \|\nabla u^{n+1}_{r} \|
\\
&+ \Delta{t} \sum_{n=0}^{N}\|f^{n+1}\|_{-1}\biggr)+ \alpha C_{P} C_{r}^{H^{1}}\nu \Delta{t} \sum_{n=0}^{N}\|\nabla u_{r}^{n+1}\|.
\end{aligned}
\end{equation}
Bounding the terms on the right-hand side by Cauchy-Schwarz, Young's inequality, and Lemma \ref{lemma:BE-ROM-stability} gives
\begin{equation}
\begin{aligned}
&C_{b^{\ast}} \Delta{t} \sum_{n=0}^{N}\|\nabla u^{n}_{r} \| \|\nabla u^{n+1}_{r} \| \leq \frac{C_{b^{\ast}} \Delta{t}}{2} \sum_{n=0}^{N}\|\nabla u^{n+1}_{r} \|^{2} + \frac{C_{b^{\ast}} \Delta{t}}{2} \sum_{n=0}^{N}\|\nabla u^{n}_{r} \|^{2} \leq \frac{C_{b^{\ast}} C_{stab}}{\nu}
\\
&\nu \Delta{t} \sum_{n=0}^{N}\|\nabla u_{r}^{n+1}\| \leq \sqrt{\nu T} \sqrt{\nu \Delta{t} \sum_{n=0}^{N}\|\nabla u_{r}^{n+1}\|^{2}} \leq \sqrt{\nu T C_{stab}}.
\end{aligned}
\end{equation}
Combining and simplifying terms, \eqref{eqn:stability_mom_L2} follows.
\end{proof}
According to Theorem \ref{theorem:stability_ME}, if the product $\alpha C_{r}^{H^{1}}$ is sufficiently small, the stability estimate for the pressure will scale similarly to the velocity determined by the BE-ROM scheme.
Finally, we state the main result of this section, an $L^{1}(0,T,L^{2}(\Omega))$ convergence result for the pressure determined via the MER formulation.
\begin{theorem}\label{theorem:pressure_convergence}
Consider the MER scheme \eqref{eqn:posteriori-momentum} and {BE-ROM} \eqref{eqn:BE_ROM}. Under the regularity conditions made in Assumption \ref{assumption:regularity}, the following bound on the pressure error holds
\begin{equation}
\begin{aligned}
\beta_m &|||e_{p}|||_{1,0}
\\
&\leq C\biggl[ (1+\beta_m)\sqrt{T} ||| \kappa |||_{2,0}
+ \Delta{t} \|\eta_{t} \|_{L^2(0,T,L^2(\Omega))} +
\Delta{t}^{3/2} \|\eta_{tt} \|_{L^2(0,T,L^2(\Omega))}
\\
&
+ (1 + \alpha C^{H^{1}}_{r}) \biggl( \Delta{t}^{3/2} \| u_{tt} \|_{L^2(0,T,L^2(\Omega))} +
\Delta{t}^{2} \|\nabla u_{t} \|_{L^2(0,T,L^2(\Omega))}
\\
& + \Delta{t}^{5/2} \|\nabla u_{tt} \|_{L^2(0,T,L^2(\Omega))}+ \Big(\sqrt{T} + \sqrt{C_{stab}} \Big)|||\nabla e_{u} |||_{2,0}
\biggr) \biggr] .
\end{aligned}
\end{equation}
\begin{proof}
The weak solution of the NSE satisfies
\begin{equation}
\begin{aligned}
\label{er-eq1-lc}
\left(u_t^{n+1}, \varphi \right) + b^{\ast}(u^{n+1},u^{n+1},\varphi) + \nu(\nabla u^{n+1},\nabla {\varphi}) = (f^{n+1},\varphi).
\end{aligned}
\end{equation}
Subtracting \eqref{eqn:BE_ROM} from \eqref{er-eq1-lc} yields
\begin{equation}
\begin{aligned}
\Big(\frac{e_u^{n+1} - e_u^{n}}{\Delta t}, \varphi \Big) +
b^{\ast}(&u^{n+1}- u^{n} , u^{n+1} ,\varphi)
+b^{\ast}(e_u^{n}, u^{n+1} ,\varphi)
\\ &+b^{\ast}(u_r^{n} , e_u^{n+1} ,\varphi)
+\nu (\nabla e_u^{n+1}, \nabla \varphi)
=\Big(\frac{u^{n+1} - u^{n}}{\Delta t}-u_t^{n+1}, \varphi \Big).
\end{aligned}
\end{equation}
Splitting the error, using the fact that $\left( \frac{\eta^{n+1} - \eta^{n}}{\Delta t} ,\varphi \right) = 0$ by the definition of the $L^{2}$ projection, and rearranging terms gives
\begin{equation}
\begin{aligned}
\Big(\frac{\xi_{r}^{n+1} - \xi_{r}^{n}}{\Delta t}, \varphi \Big)& =
\nu (\nabla e_u^{n+1}, \nabla \varphi)
-\Big(\frac{u^{n+1} - u^{n}}{\Delta t}-u_t^{n+1}, \varphi \Big)
\\
&+ b^{\ast}(u^{n+1}- u^{n} , u^{n+1} ,\varphi)
+ b^{\ast}(e_u^{n}, u^{n+1} ,\varphi) +
b^{\ast}(u_r^{n} , e_u^{n+1} ,\varphi) .
\end{aligned}
\end{equation}
Applying Cauchy-Schwarz, Taylor's Theorem, the Poincar\'{e} inequality, and Lemma \ref{lemma:trilinear} to the terms on the right-hand side yields
\begin{equation}\label{a_1}
\begin{aligned}
\nu (\nabla e_u^{n+1}, \nabla \varphi) &\leq \nu \| \nabla e_u^{n+1} \| \| \nabla \varphi \|
\\
\Big(\frac{u^{n+1} - u^{n}}{\Delta t}-u_t^{n+1}, \varphi \Big) &\leq C C_{P} \sqrt{\Delta t} \| u_{tt} \|_{L^2(t^n,t^{n+1},L^2(\Omega))} \| \nabla \varphi \|
\\
b^{\ast}(e_u^{n}, u^{n+1} ,\varphi) &\leq C_{b^{\ast}} \| \nabla e_u^n \|\| \nabla u^{n+1} \| \| \nabla \varphi \|
\\
b^{\ast}(u_r^{n} , e_u^{n+1} ,\varphi) &\leq C_{b^{\ast}} \| \nabla u_r^n \|\| \nabla e_u^{n+1} \| \| \nabla \varphi \|
\\
b^{\ast}(u^{n+1}- u^{n} , u^{n+1} ,\varphi) &\leq C\Delta t^{3/2} \| \nabla u_{tt} \|_{L^2(t^n,t^{n+1},L^2(\Omega))}\| \nabla u^{n+1} \| \| \nabla \varphi \|
\\
&+ C \Delta t \| \nabla u_{t} \|_{L^2(t^n,t^{n+1},L^2(\Omega))}\| \nabla u^{n+1} \| \| \nabla \varphi \|.
\end{aligned}
\end{equation}
Next, dividing by $\| \nabla \varphi \|$ and taking the supremum over all $\varphi \in X_r$ gives a bound on the dual norm $X^{\ast}_{r}$
\begin{equation}\label{eqn:xi_bound1}
\begin{aligned}
\Big\|\frac{\xi_{r}^{n+1} - \xi_{r}^{n}}{\Delta t} \Big\|_{X^{\ast}_r} \leq
& \nu\| \nabla e_u^{n+1}\|+ C C_{P} \sqrt{\Delta t} \| u_{tt} \|_{L^2(t^n,t^{n+1},L^2(\Omega))}+
C_{b^{\ast}}\| \nabla e_u^{n+1} \|\| \nabla u_r^{n}\| +
\\&
\| \nabla u^{n+1} \| \Big(\Delta t \| \nabla u_t \|_{L^2(t^n,t^{n+1},L^2(\Omega))}+
\\
&\Delta t^{3/2} \| \nabla u_{tt} \|_{L^2(t^n,t^{n+1},L^2(\Omega))}
+ C_{b^{\ast}} \| \nabla e_u^n \| \Big) .
\end{aligned}
\end{equation}
Using Lemma \ref{lemma:eqvn_norms} then yields a bound on the dual norm $S_{m}^{\ast}$
\begin{equation}\label{eqn:xi_bound2}
\Big\|\frac{\xi_{r}^{n+1} - \xi_{r}^{n}}{\Delta t} \Big\|_{S^{\ast}_m} \leq
{\alpha C_{P} C_{r}^{H^{1}}} \Big\|\frac{\xi_{r}^{n+1} - \xi_{r}^{n}}{\Delta t} \Big\|_{X^{\ast}_r}.
\end{equation}
Next, we consider the weak form of the NSE, with a test function $\zeta \in S_m$
\begin{equation}\label{cts press}
\begin{aligned}
(p^{n+1}, \nabla \cdot \zeta) = (u_t^{n+1}, \zeta ) + b^{\ast}(u^{n+1} , u^{n+1},\zeta)
&+\nu (\nabla u^{n+1}, \nabla \zeta) - ( f^{n+1}, \zeta) \ \ \forall \zeta \in S_m.
\end{aligned}
\end{equation}
Subtracting \eqref{eqn:posteriori-momentum} from \eqref{cts press}, splitting the pressure error, and adding and subtracting $\eta^{n+1}_{t}$ gives
\begin{equation}
\begin{aligned}
(\pi_{m}^{n+1}, \nabla \cdot \zeta) &= (\kappa^{n+1}, \nabla \cdot \zeta) - \nu (\nabla e_u^{n+1}, \nabla \zeta) -
b^{\ast}(u^{n+1}-u^n ,u^{n+1},\zeta)
\\
& -
b^{\ast}(e_u^{n} , u^{n+1},\zeta) -b^{\ast}(u_r^{n} , e_u^{n+1},\zeta)
- \Big(u_{t}^{n+1} - \frac{u^{n+1} - u^{n}}{\Delta t}, \zeta \Big)
\\
&- \Big(\frac{ \eta^{n+1} - \eta^{n}}{\Delta t} - \eta^{n+1}_{t}, \zeta \Big) -\Big(\eta^{n+1}_{t}, \zeta \Big) + \Big(\frac{\xi_{r}^{n+1} - \xi_{r}^{n}}{\Delta t}, \zeta \Big).
\end{aligned}
\end{equation}
The first eight terms on the right hand side are bounded using Cauchy-Schwarz, Taylor's Theorem, the Poincar\'{e} inequality, and Lemma \ref{lemma:trilinear}
\begin{equation}
\begin{aligned}
(\kappa^{n+1}, \nabla \cdot \zeta) \leq& \sqrt{d} \| \kappa^{n+1}\| \| \nabla \zeta \|
\\
- \nu (\nabla e_u^{n+1}, \nabla \zeta) \leq& \nu \|\nabla e_u^{n+1}\|\|\nabla \zeta\|
\\
-b^{\ast}(e_u^{n}, u^{n+1} ,\zeta) \leq& C_{b^{\ast}} \| \nabla e_u^n \|\| \nabla u^{n+1} \| \| \nabla \zeta \|
\\
-b^{\ast}(u_r^{n} , e_u^{n+1} ,\zeta) \leq& C_{b^{\ast}} \| \nabla u_r^n \|\| \nabla e_u^{n+1} \| \| \nabla \zeta \|
\\
-b^{\ast}(u^{n+1}- u^{n} , u^{n+1} ,\zeta) \leq& C\Delta t^{3/2} \| \nabla u_{tt} \|_{L^2(t^n,t^{n+1},L^2(\Omega))}\| \nabla u^{n+1} \| \| \nabla \zeta \|
+ \\
&C \Delta t \| \nabla u_{t} \|_{L^2(t^n,t^{n+1},L^2(\Omega))}\| \nabla u^{n+1} \| \| \nabla \zeta \|
\\
-\Big(\frac{u^{n+1} - u^{n}}{\Delta t}-u_t^{n+1}, \zeta \Big) \leq& C C_{P} \sqrt{\Delta t} \| u_{tt} \|_{L^2(t^n,t^{n+1},L^2(\Omega))} \| \nabla \zeta\|
\\
-\Big(\frac{\eta^{n+1} - \eta^{n}}{\Delta t}-\eta_t^{n+1}, \zeta \Big) \leq& C C_{P} \sqrt{\Delta t} \| \eta_{tt} \|_{L^2(t^n,t^{n+1},L^2(\Omega))} \| \nabla \zeta\|
\\
-\Big(\eta^{n+1}_{t}, \zeta \Big) \leq& C_{P} \| \eta_{t} \|_{L^2(t^n,t^{n+1},L^2(\Omega))} \| \nabla \zeta\|.
\end{aligned}
\end{equation}
Now, applying these bounds, dividing by $\| \nabla \zeta \|$, and taking the supremum over all $\zeta \in S_m$ gives
\begin{equation}
\begin{aligned}
\sup_{\zeta \in S_m}& \frac{(\pi_m^{n+1}, \nabla \cdot \zeta)}{\| \nabla \zeta \|} \leq
\sqrt{d} \| \kappa^{n+1}\| + \nu \| \nabla e_u^{n+1} \|
+ C_{b^{\ast}} \| \nabla e_u^{n}\| \| \nabla u^{n+1}\|
\\
&+ C_{b^{\ast}} \| \nabla u^{n}_r\| \| \nabla e_u^{n+1}\|
+
C\Delta t^{3/2} \| \nabla u_{tt} \|_{L^2(t^n,t^{n+1},L^2(\Omega))}\| \nabla u^{n+1} \|
+ \\
&C \Delta t \| \nabla u_{t} \|_{L^2(t^n,t^{n+1},L^2(\Omega))}\| \nabla u^{n+1} \| +
CC_{P} \sqrt{\Delta t} \| u_{tt} \|_{L^2(t^n,t^{n+1},L^2(\Omega))} +
\\ &
CC_{P} \sqrt{\Delta t} \| \eta_{tt} \|_{L^2(t^n,t^{n+1},L^2(\Omega))}
+ C_P \| \eta_t \|_{L^2(t^n,t^{n+1},L^2(\Omega))}
+ \Big\| \frac{\xi^{n+1} - \xi^{n}}{\Delta t} \Big\|_{S^{\ast}_m}.
\end{aligned}
\end{equation}
Recalling from Lemma \ref{lemma:inf-sup} that $S_m$ and $Q_m$ are inf-sup stable with constant $\beta_m$ and using the bound on $ \Big\| \frac{\xi^{n+1} - \xi^{n}}{\Delta t} \Big\|_{S^{\ast}_m}$ from \eqref{eqn:xi_bound1}-\eqref{eqn:xi_bound2} yields
\begin{equation}\label{error1}
\begin{aligned}
\beta_m \| \pi_m^{n+1} \|
\leq & \sqrt{d} \| \kappa^{n+1}\|
+ (1+ {\alpha C_{P} C_{r}^{H^{1}}}) \Big[
\nu\| \nabla e_u^{n+1}\|
+ C C_{P} \sqrt{\Delta t} \| u_{tt}\|_{L^2(t^n,t^{n+1},L^2(\Omega))}
\\
& + C_{b^{\ast}}\| \nabla e_u^{n+1} \|\| \nabla u_r^{n}\|
+ \| \nabla u^{n+1} \| \Big(\Delta t \| \nabla u_t \|_{L^2(t^n,t^{n+1},L^2(\Omega))}
\\
&+ \Delta t^{3/2} \| \nabla u_{tt} \|_{L^2(t^n,t^{n+1},L^2(\Omega))}
+ C_{b^{\ast}} \| \nabla e_u^{n} \| \Big)\Big] +
\\ &
C_P \| \eta_t \|_{L^2(t^n,t^{n+1},L^2(\Omega))}
+ CC_{P} \sqrt{\Delta t} \| \eta_{tt} \|_{L^2(t^n,t^{n+1},L^2(\Omega))}.
\end{aligned}
\end{equation}
Now, multiplying by $\Delta t$, absorbing all constants into a single constant $C$, using the regularity from Assumption \ref{assumption:regularity}, summing from $n = 0$ to $n = N-1$, using Cauchy-Schwarz, and using the fact that $||| \nabla u_r |||_{2,0} \leq \sqrt{\frac{ C_{stab}}{\nu}}$ by Lemma \ref{lemma:BE-ROM-stability}, we have
\begin{equation}
\begin{aligned}
\beta_m &\Delta t \sum_{n = 0}^{N-1} \| \pi_m^{n+1} \|
\\
&\leq C\biggl[ \sqrt{T} ||| \kappa |||_{2,0}
+ \Delta{t} \|\eta_{t} \|_{L^2(0,T,L^2(\Omega))} +
\Delta{t}^{3/2} \|\eta_{tt} \|_{L^2(0,T,L^2(\Omega))}+
\\
&
(1 + \alpha C^{H^{1}}_{r}) \biggl( \Delta{t}^{3/2} \| u_{tt} \|_{L^2(0,T,L^2(\Omega))} +
\Delta{t}^{2} \|\nabla u_{t} \|_{L^2(0,T,L^2(\Omega))}
\\
& + \Delta{t}^{5/2} \|\nabla u_{tt} \|_{L^2(0,T,L^2(\Omega))}+\Big(\sqrt{T} +\sqrt{C_{stab}}\Big)|||\nabla e_{u} |||_{2,0}
\biggr) \biggr] .
\end{aligned}
\end{equation}
By the triangle inequality we have
\begin{equation}
\beta_m \Delta t \sum_{n = 0}^{N-1} \| e_p^{n+1} \| \leq \beta_m \Delta t \sum_{n = 0}^{N-1} \| \pi_m^{n+1} \| + \beta_m \Delta t \sum_{n = 0}^{N-1} \| \kappa^{n+1} \|.
\end{equation}
Then, applying Cauchy-Schwarz on the second term
\begin{equation}
\beta_m \Delta t \sum_{n = 0}^{N-1} \| \kappa^{n+1} \| \leq \beta_m \Delta t \sqrt{N} \sqrt{\sum_{n = 0}^{N-1} \| \kappa^{n+1} \|^{2}} = \beta_m \sqrt{T}||| \kappa|||_{2,0}.
\end{equation}
This then yields the estimate
\begin{equation}
\begin{aligned}
\beta_m &\Delta t \sum_{n = 0}^{N-1} \| e_p^{n+1} \|
\\
&\leq C\biggl[ (1+\beta_m)\sqrt{T} ||| \kappa |||_{2,0}
+ \Delta{t} \|\eta_{t} \|_{L^2(0,T,L^2(\Omega))} +
\Delta{t}^{3/2} \|\eta_{tt} \|_{L^2(0,T,L^2(\Omega))}+
\\
&
(1 + \alpha C^{H^{1}}_{r}) \biggl( \Delta{t}^{3/2} \| u_{tt} \|_{L^2(0,T,L^2(\Omega))} +
\Delta{t}^{2} \|\nabla u_{t} \|_{L^2(0,T,L^2(\Omega))}
\\
& + \Delta{t}^{5/2} \|\nabla u_{tt} \|_{L^2(0,T,L^2(\Omega))}+ \Big(\sqrt{T} + \sqrt{C_{stab}} \Big)|||\nabla e_{u} |||_{2,0}
\biggr) \biggr] .
\end{aligned}
\end{equation}
\end{proof}
\end{theorem}
\begin{corollary}\label{corollary:convergence}
Under the assumptions of Theorem \ref{theorem:pressure_convergence}, along with Assumption \ref{assumption:conv}, the following inequality on the pressure error holds.
\begin{equation}
\begin{aligned}
\beta_{m}|||e_{p}|||_{1,0} &\leq C\bigg\{\alpha C_{r}^{H^{1}}\sqrt{(1 + |||{\mathbb S}_r|||_{2}) (h^{2s} + \Delta{t}^{2}) + \sum_{i=r+1}^{N_{V}}\lambda_i + \sum_{i=r+1}^{N_{V}}\lambda_i \|\nabla \varphi_{i}\|^{2}}
\\
&+ \sqrt{h^{2k} + \Delta{t}^{2} + \sum_{i=m+1}^{N_{P}}\sigma_{i}}\bigg\}.
\end{aligned}
\end{equation}
\begin{proof}
Using the regularity condition from Assumption \ref{assumption:regularity}, and applying the estimates from Theorem \ref{theorem:velocity_err} and Assumption \ref{assumption:conv} to the inequality from Theorem \ref{theorem:pressure_convergence}, the result follows.
\end{proof}
\end{corollary}
\section{Numerical Experiments}\label{sec:numerical_experiments}
In this section, we perform a numerical investigation of the MER formulation \eqref{eqn:posteriori-momentum} and the PPE \eqref{eqn:D-PPE}. To carry out the numerical experiments, we utilize the FEniCS software suite \cite{LNW12}.
\subsection{Problem Setting}
The problem setting is the same as that used in Section 6 of \cite{DILMS19}. Letting $r_{1}=1$, $r_{2}=0.1$, $c_{1}=1/2$, and $c_{2}=0$, the domain is given by
\[
\Omega=\{(x,y):x^{2}+y^{2}\leq r_{1}^{2} \text{ and } (x-c_{1})^{2}%
+(y-c_{2})^{2}\geq r_{2}^{2}\}.
\]
This represents a disk with a smaller off-center disk removed (see Fig. \ref{fig:mesh}).
\begin{figure}[!ht]
\centering
\includegraphics[width = .25\linewidth]{pics_jpg/mesh.png}
\caption{Spatial mesh for the finite element approximation.}
\label{fig:mesh}
\end{figure}
The viscosity is $\nu = \frac{1}{100}$ and the counterclockwise rotational body force is given by
\begin{equation}\notag
f(x) = (-4y(1-x^2-y^2),4x(1-x^2-y^2)).
\end{equation}
No-slip boundary conditions are imposed on both cylinders. Because $f = 0$ on the outer circle, most of the complex structures arise from the interaction of the flow with the inner cylinder. Specifically, the inner cylinder causes a von K\'{a}rm\'{a}n vortex street to develop, which then rotates and reinteracts with the inner cylinder.
For the offline calculation, the snapshots are calculated via the $P^{2}-P^{1}$ Taylor-Hood backward Euler discretization \eqref{eqn:BE_FEM}. The flow is initialized at rest with $u_h^0 \equiv 0$. The velocity space $X_h$ and pressure space $Q_h$ have 114,792 and 14,474 degrees of freedom, respectively. We take $\Delta t = 2.5\times 10^{-4}$ and collect velocity and pressure snapshots at every time step in the interval $[12,16]$. The first fifty singular values for the velocity and pressure are shown in Fig. \ref{singular_value_plots}.
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\linewidth}
\includegraphics[width=\linewidth]{pics_jpg/v_sing_vals.jpg}
\end{subfigure}
\begin{subfigure}[b]{0.48\linewidth}
\includegraphics[width=\linewidth]{pics_jpg/p_sing_vals.jpg}
\end{subfigure}
\caption{The first 50 singular values for the velocity (left) and pressure (right) modes. }
\label{singular_value_plots}
\end{figure}
The smaller cylinder exerts a force due to lift and a force due to drag on the flow. The drag force is in opposition to the counterclockwise rotation, and the force due to lift is perpendicular to the rotation, in this case chosen to be inward. We calculate the lift and drag using the volume integral approach from \cite{J04}.
\subsection{MER Convergence Test}
In this section, we numerically verify the convergence rates for the pressure determined by the MER formulation with respect to the ROM projection errors established in Theorem \ref{theorem:pressure_convergence}. We measure the $\ell^{1}L^{2}$ error between the ROM solution $p_{m}$ and the offline solution $p_{h}$ for varying values of $r$ and $m$. The same step size $\Delta t = 2.5\times 10^{-4}$ used in the offline stage is used in the calculation of the ROM solution.
Corollary \ref{corollary:convergence} shows that the pressure error bound depends on $h$, $\Delta t$, $|||{\mathbb S}_r|||_{2}$, $\beta_{m}$, $\alpha C_{r}^{H^{1}}$, and the ROM truncation errors $\Lambda_m = \sqrt{\sum_{i=m+1}^{N_{P}}\sigma_{i}}$ and $\Lambda_r =\sqrt{\sum_{i=r+1}^{N_{V}}\lambda_{i} + \sum_{i=r+1}^{N_{V}}\lambda_{i}\| \nabla \varphi_i \|^{2}}$. Because we are comparing the MER solution, $p_{m}$, to the offline solution, $p_{h}$, with the same underlying spatial and time discretization, the contribution to the error from terms involving $h$ and $\Delta t$ will be negligible. Therefore, we examine the convergence of the pressure with respect to the terms $\beta_{m}$, $\alpha C_{r}^{H^{1}}$, $\Lambda_m$, and $\Lambda_r$.
First, we examine the convergence with respect to $\Lambda_m$. Setting $r = 50$, $\Lambda_r$ becomes negligible, thereby isolating the dependence of the pressure recovery error on $\Lambda_m$. In Fig. \ref{alphbeta_1}, we see that the inf-sup constant $\beta_{m}$ and the term $\alpha C_{r}^{H^{1}}$ remain well behaved for $m=1$ to $m=50$. Thus, Corollary \ref{corollary:convergence} predicts the following convergence of $||| e_p^{MER} |||_{1,0}$ with respect to $\Lambda_m$:
\begin{equation}
||| e_p^{MER} |||_{1,0} = \mathcal{O}(\Lambda_m).
\end{equation}
We list the error $||| e_p^{MER} |||_{1,0}$ for increasing $m$ in Table \ref{table:MER_lambda_m}. In addition, the corresponding power law regression is given in Fig. \ref{mer-conv-lambdam}. This regression agrees with our theoretical estimate, yielding:
\begin{equation}
||| e_p^{MER} |||_{1,0} = \mathcal{O}((\Lambda_m)^{0.993}).
\end{equation}
\begin{figure}
\centering
\begin{subfigure}{0.4\linewidth}
\includegraphics[width=\linewidth]{pics_jpg/beta_m.jpg}
\end{subfigure}
\begin{subfigure}{0.4\linewidth}
\includegraphics[width=\linewidth]{pics_jpg/alphcrh1_vary_m.jpg}
\end{subfigure}
\caption{Value of the inf-sup constant $\beta_m$ (left) and $\alpha C_{r}^{H^{1}}$ (right).}
\label{alphbeta_1}
\end{figure}
\begin{center}
\begin{table}
\centering\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c| c| c |}
\hline
$m$ & $||| e_p^{MER} |||_{1,0}$ & $\Lambda_m$ \\
\hline
3 & 6.533e-01 & 1.596e-01 \\
\hline
6 & 1.594e-01 & 4.021e-02 \\
\hline
9 & 1.028e-01 & 2.495e-02 \\
\hline
12 & 5.762e-02 & 1.504e-02\\
\hline
15 & 3.494e-02 & 8.767e-03 \\
\hline
18 & 2.586e-02 & 6.293e-03 \\
\hline
21 & 1.928e-02 & 4.482e-03 \\
\hline
24 & 1.432e-02 & 3.039e-03 \\
\hline
27 & 1.002e-02 & 2.253e-03 \\
\hline
30 & 7.838e-03 & 1.709e-03 \\
\hline
\end{tabular}
\caption{MER approximation errors for increasing $m$ values.}
\label{table:MER_lambda_m}
\end{table}
\end{center}
\begin{figure}
\centering
\begin{subfigure}{0.4\linewidth}
\includegraphics[width=\linewidth]{pics_jpg/MER_conv_pressure2.jpg}
\end{subfigure}
\caption{Power law regression of $||| e_p^{MER} |||_{1,0}$ with respect to $\Lambda_m$. }
\label{mer-conv-lambdam}
\end{figure}
Next, we examine the convergence with respect to $\Lambda_r$. Setting $m = 50$, we isolate the relationship between the pressure recovery error and $\Lambda_r$. For fixed $m$, the inf-sup value stays constant at $\beta_m = 0.6402$. In Fig. \ref{alphbeta_2}, we show that $\alpha C_{r}^{H^{1}}$ grows slowly for $r=1$ to $r=50$. Corollary \ref{corollary:convergence} predicts the following convergence of $||| e_p^{MER} |||_{1,0}$ with respect to $\Lambda_r$:
\begin{equation}
||| e_p^{MER} |||_{1,0} = \mathcal{O}(\Lambda_r).
\end{equation}
In Table \ref{table:MER_lambda_r}, we list the error $||| e_p^{MER} |||_{1,0}$ for increasing $r$. In addition, we give the corresponding power law regression in Fig. \ref{mer-conv-lambdar}. This agrees with our theoretical estimate, yielding:
\begin{equation}
||| e_p^{MER} |||_{1,0} = \mathcal{O}((\Lambda_r)^{1.32}).
\end{equation}
\begin{figure}
\centering
\begin{subfigure}{0.4\linewidth}
\includegraphics[width=\linewidth]{pics_jpg/alphcrh1_vary_m.jpg}
\end{subfigure}
\caption{Value of $\alpha C_{r}^{H^{1}}$ with $m = 50$ and varying $r$.}
\label{alphbeta_2}
\end{figure}
\begin{center}
\begin{table}
\centering\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c| c| c |}
\hline
$r$ & $||| e_p^{MER} |||_{1,0}$ & $\Lambda_r$ \\
\hline
10 & 2.500e-01& 1.922e+01 \\
\hline
15& 9.054e-02 & 1.062e+01 \\
\hline
20 & 4.407e-02& 8.225e+00 \\
\hline
25 & 2.704e-02 & 5.421e+00\\
\hline
30& 1.338e-02 & 2.963e+00 \\
\hline
35& 9.698e-03 & 2.652e+00 \\
\hline
40& 6.195e-03 & 2.582e+00 \\
\hline
45& 5.885e-03 & 1.302e+00 \\
\hline
50 & 3.442e-03& 1.051e+00 \\
\hline
\end{tabular}
\caption{MER approximation errors for increasing $r$ values.}
\label{table:MER_lambda_r}
\end{table}
\end{center}
\begin{figure}
\centering
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{pics_jpg/MER_conv_velocity2.jpg}
\end{subfigure}
\caption{Power law regression of $||| e_p^{MER} |||_{1,0}$ with respect to $\Lambda_r$. }
\label{mer-conv-lambdar}
\end{figure}
\subsection{Comparison between MER and PPE}
Lastly, we compare the performance of the PPE against the MER formulation for pressure recovery. To this end, we set $r=50$ while varying $m$ to examine the convergence of the two schemes as the size of the pressure basis increases. For both approaches, we calculate the force due to lift, the force due to drag, and the errors $||| e_p |||_{1,0}$.
Based on the discussion from Remark \ref{remark:ppe}, we expect a consistency error to be present in the PPE pressure solution due to the Neumann boundary condition. In Table \ref{table:PPE-MER-comp}, we list the errors for the MER solution, $||| e_p^{MER} |||_{1,0}$, and for the PPE solution, $||| e_p^{PPE} |||_{1,0}$, for increasing values of $m$. While the MER solution error improves for increasing $m$, the error for the PPE stagnates. We can also see the error stagnation in the time evolution of the lift and drag errors for $m=21$ in Fig. \ref{fig:lift_drag_errors}. In Fig. \ref{Avg_err_plots}, we show the time-averaged pressure error for the PPE and MER methods with $m = 50$. We see that the error for the PPE approach is primarily located at the boundary of the smaller offset cylinder, whereas the average error for the MER is distributed evenly throughout the domain.
\begin{table}
\centering\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c| c| c |}
\hline
$m$ & $||| e_p^{MER} |||_{1,0}$ & $||| e_p^{PPE} |||_{1,0}$ \\
\hline
3 & 6.533e-01 & 6.754e-01 \\
\hline
6 & 1.594e-01 & 2.530e-01 \\
\hline
9 & 1.028e-01 & 2.247e-01 \\
\hline
12 & 5.762e-02 & 2.090e-01\\
\hline
15 & 3.494e-02 & 1.819e-01 \\
\hline
18 & 2.586e-02 & 1.781e-01 \\
\hline
21 & 1.928e-02 & 1.738e-01 \\
\hline
24 & 1.432e-02 & 1.733e-01 \\
\hline
27 & 1.002e-02 & 1.773e-01 \\
\hline
30 & 7.838e-03 & 1.756e-01 \\
\hline
\end{tabular}
\caption{Pressure error for MER and PPE approximations with $r = 50$ and varying $m$.}
\label{table:PPE-MER-comp}
\end{table}
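The stagnation visible in Table~\ref{table:PPE-MER-comp} can be quantified with a short script (a sketch only; the values are transcribed from the table):

```python
# values transcribed from the comparison table (r = 50, m = 3, 6, ..., 30)
mer = [6.533e-01, 1.594e-01, 1.028e-01, 5.762e-02, 3.494e-02,
       2.586e-02, 1.928e-02, 1.432e-02, 1.002e-02, 7.838e-03]
ppe = [6.754e-01, 2.530e-01, 2.247e-01, 2.090e-01, 1.819e-01,
       1.781e-01, 1.738e-01, 1.733e-01, 1.773e-01, 1.756e-01]

# total error reduction from the smallest basis (m = 3) to the largest (m = 30)
mer_gain = mer[0] / mer[-1]   # the MER error keeps decreasing
ppe_gain = ppe[0] / ppe[-1]   # the PPE error levels off near 1.7e-1

# successive error ratios close to 1 over the last rows indicate stagnation
ppe_tail = [ppe[i + 1] / ppe[i] for i in range(5, len(ppe) - 1)]
print(f"MER reduced {mer_gain:.0f}x, PPE reduced {ppe_gain:.1f}x")
```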
\begin{figure}
\centering
\begin{subfigure}[b]{0.4\linewidth}
\includegraphics[width=\linewidth]{pics_jpg/Drag_err.jpg}
\end{subfigure}
\begin{subfigure}[b]{0.4\linewidth}
\includegraphics[width=\linewidth]{pics_jpg/Lift_err.jpg}
\end{subfigure}
\caption{Time evolution of the drag (left) and lift (right) errors with $r = 50$ and $m =21$.}
\label{fig:lift_drag_errors}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{pics_jpg/Pressure_poisson_average_error.jpeg}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{pics_jpg/momentum_equation_average_error.jpeg}
\end{subfigure}
\caption{Time averaged error for the pressure solution recovered from the PPE (left) and MER (right) methods with $r = m = 50$.}
\label{Avg_err_plots}
\end{figure}
\section{Conclusion}\label{sec:conclusions}
In this paper, we analyze the MER approach for recovering the pressure from a velocity-only ROM. We prove stability and convergence of the method and conduct numerical experiments illustrating the efficacy of this approach. Additionally, we perform a numerical comparison of the MER and PPE approach. We see that the Neumann boundary condition present in the PPE formulation leads to a loss of accuracy when a $C^{0}$ finite element space is used in the offline basis construction.
In the future, we intend to pursue multiple research directions. First, we will conduct an analysis of the MER scheme for the time-dependent NSE with a parameterized domain. Second, we will investigate improving the supremizer stabilization algorithm by accounting for the computable constant, $\alpha C^{H_{1}}_{r}$, in constructing the supremizer space. Lastly, we will examine whether the loss of accuracy in the PPE approach still occurs when other numerical schemes, such as finite volume or discontinuous Galerkin methods, are used to collect solution snapshots for the POD basis construction.
\bibliographystyle{plain}
% arXiv:1909.06022 -- Error Analysis of Supremizer Pressure Recovery for POD based Reduced Order Models of the time-dependent Navier-Stokes Equations
% arXiv:1907.07866 -- On the equality of domination number and $ 2 $-domination number
\section{Introduction}
In this paper, we continue to expand on the study of graphs that satisfy the equality $\gamma(G) = \gamma_2(G)$, where $\gamma(G)$ and $\gamma_2(G)$ stand for the domination number and the $ 2 $-domination number of a graph $ G $, respectively. If $\gamma(G) = \gamma_2(G)$ holds for a graph $G$, then we call it a $ (\gamma,\gamma_2) $\textit{-graph}. We prove that the corresponding recognition problem is NP-hard and that there is no forbidden subgraph characterization for $ (\gamma,\gamma_2) $-graphs in general. On the other hand, in one of our main results, we consider a large graph class ${\cal H}$ and give a special type of forbidden subgraph characterization for $ (\gamma,\gamma_2) $-graphs over ${\cal H}$. Although the number of these forbidden subgraphs is infinite, we prove that the recognition problem is solvable in polynomial time on ${\cal H}$.
Putting the question into another setting, we give a complete characterization for $(\gamma, \gamma_2)$-perfect graphs, that is, we characterize the graphs for which all induced subgraphs with minimum degree at least two satisfy the equality of domination number and $ 2 $-domination number.
\subsection{Terminology and Notation}
\indent Let $ G $ be a simple undirected graph, where $ V(G) $ and $ E(G) $ denote the set of vertices and the set of edges of $ G $, respectively. The \textit{(open) neighborhood} of a vertex $ v $ is the set $ N_G(v) = \{u \in V(G): uv \in E(G)\} $ and its \textit{closed neighborhood} is $ N_G[v] = N_G(v) \cup\{v\}$. The \textit{degree} of $ v $ is given by the cardinality of $ N_G(v) $, that is, $ \deg_G(v) =|N_G(v)| $. We will write $N(v)$, $N[v]$ and $\deg(v)$ instead of $N_G(v)$, $N_G[v]$ and $\deg_G(v)$, if $G$ is clear from the context. An edge $ uv $ is a \textit{pendant edge} if $ \deg(u)=1 $ or $ \deg(v)=1 $, otherwise the edge is \textit{non-pendant}. The minimum and maximum vertex degrees of $ G $ are denoted by $ \delta(G) $ and $ \Delta(G) $, respectively. For a subset $ S\subseteq V(G) $, let $ G[S] $ denote the subgraph induced by $S$. We say that $S$ is \textit{independent} if $G[S]$ does not contain any edges. For disjoint subsets $ U, W \subseteq V(G)$, we let $ E[U,W] $ denote the set of edges between $ U $ and $ W $.
For a positive integer $ k $, the \textit{$ k^{th} $ power} of a graph $ G $, denoted by $ G^k $, is the graph on the same vertex set as $ G $ such that $ uv $ is an edge if and only if the distance between $ u $ and $ v $ is at most $ k $ in $ G $. An edge $ uv \in E(G) $ is \textit{subdivided} by deleting the edge $ uv $, then adding a new vertex $ x $ and two new edges $ ux $ and $ xv $. Let $ K_n $, $ C_n $ and $P_n$ denote the complete graph, the cycle and the path, all of order $ n $, respectively; and let $ S_n $ denote the star of order $ n+1 $. For any positive integer $ n $, let $ [n] $ be the set of positive integers not exceeding $ n $. For notation and terminology not defined here, we refer the reader to \cite{West2001}.
For a positive integer $ k $, a subset $ D \subseteq V(G) $ is a \textit{$ k $-dominating set} of the graph $ G $ if $ |N_G(v) \cap D|\geq k $ for every $ v \in V(G)\setminus D $. The \textit{$ k $-domination number} of $ G $, denoted by $ \gamma_k(G) $, is the minimum cardinality among the $ k $-dominating sets of $ G $. Note that the $ 1 $-domination number, $ \gamma_1(G) $, is the classical domination number $ \gamma(G) $.
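For small graphs, these invariants can be computed by exhaustive search, which is convenient for checking examples. A minimal sketch (not part of the paper; graphs are given as dictionaries of neighborhood sets):

```python
from itertools import combinations

def gamma_k(adj, k):
    """Smallest |D| such that every vertex outside D has >= k neighbors in D.
    Exhaustive search; only intended for small graphs."""
    for size in range(1, len(adj) + 1):
        for cand in combinations(adj, size):
            D = set(cand)
            if all(len(adj[v] & D) >= k for v in adj if v not in D):
                return size
    return len(adj)

# C_4 satisfies gamma = gamma_2 = 2, whereas C_5 has gamma = 2 < 3 = gamma_2
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
```

In particular, $C_4$ is a $(\gamma,\gamma_2)$-graph, while $C_5$ is not.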
A graph $G$ is called \textit{$F$-free} if it does not contain any induced subgraph isomorphic to $F$. More generally, let ${\cal F}$ be a (finite or infinite) class of graphs; then $G$ is ${\cal F}$-free if it is $F$-free for all $F\in {\cal F}$. On the other hand, let $G^D$ denote a graph $G$ with a specified subset $D \subseteq V(G)$. Then, $F^{D'}$ is an (induced) subgraph of $G^D$ if $F$ is an (induced) subgraph of $G$ and $D'=V(F)\cap D$. We say that $F_1^{D_1}$ is isomorphic to $F_2^{D_2}$ if there is an edge-preserving bijection between $V(F_1)$ and $V(F_2)$ which maps $D_1$ onto $D_2$. Analogously, we may define the $F^{D'}$-freeness of $G^D$ and forbidden (induced) subgraph characterizations with a specified vertex subset $D$.
\subsection{Preliminary results}
The concept of $ k $-domination in graphs was introduced by Fink and Jacobson \cite{Fink85, Fink85-2} and it has been studied extensively by many researchers (see for example \cite{bonomo2018, Bujtas2017, Caro1990-2, Caro1990, desormeaux2014, Favaron1988, Favaron2008, Hansberg2015, Hansberg2013, krzywkowski2017, Shaheen2009, yue2020}). For more details, we refer the reader to the books on domination by Haynes, Hedetniemi and Slater \cite{ DominationBook2, DominationBook1} and to the survey on $ k $-domination and $ k $-independence by Chellali \textit{et al.}\ \cite{ChellaliSurvey}.
Fink and Jacobson \cite{Fink85} established the following basic theorem.
\begin{thm}\label{thm:FJ} \cite{Fink85}
For any graph $ G $ with $ \Delta(G)\geq k\geq 2 $, $ \gamma_k(G) \geq \gamma(G)+k-2$.
\end{thm}
Although it is proved that the above inequality is sharp for every $k\ge 2$, the characterization of graphs attaining the equality is still open, even for the case when $ k = 2$. The corresponding characterization problem was studied in~\cite{Hansberg2015, Hansberg2016, Hansberg2008},
while similar problems involving different domination-type graph and hypergraph invariants were considered for example in~\cite{Arumugam2013, Blidia2006, hartnell1995, krzywkowski2017, Randerath1998}.
In this paper, we study $ (\gamma,\gamma_2) $-graphs, that is, graphs for which Theorem~\ref{thm:FJ} holds with equality for $k=2$. Note that $ G $ is a $ (\gamma,\gamma_2) $-graph, that is, $\gamma_2(G)=\gamma(G)$, if and only if every component of $ G $ is a $ (\gamma,\gamma_2) $-graph. Thus, we only deal with connected graphs in the rest of the paper.
Hansberg and Volkmann \cite{Hansberg2008} characterized the cactus graphs (i.e., graphs in which no two cycles share an edge) which are $ (\gamma,\gamma_2) $-graphs and they also gave some general properties of the graphs attaining the equality. In 2016, the claw-free (i.e., $S_3$-free) $ (\gamma,\gamma_2) $-graphs and the line graphs which are $ (\gamma,\gamma_2) $-graphs were characterized by Hansberg \textit{et al.} \cite{Hansberg2016}. We will refer to the following basic lemmas proved in these papers.
\begin{lemma} \cite{Hansberg2008} \label{lem:0}
If $ G $ is a connected nontrivial graph with $ \gamma_2(G)=\gamma(G) $, then $ \delta(G)\geq 2 $.
\end{lemma}
\begin{lemma}\cite{Hansberg2016} \label{lem:1}
Let $D$ be a minimum $ 2 $-dominating set of a graph $G$. If $ \gamma_2(G)=\gamma(G) $, then $ D $ is independent.
\end{lemma}
\begin{lemma}\cite{Hansberg2016} \label{lem:2}
Let $ G $ be a connected nontrivial graph with $ \gamma_2(G)=\gamma(G) $ and let $ D $ be a minimum $ 2 $-dominating set of $ G $. Then, for each vertex $ u' \in V\setminus D $ and $ u,v \in D \cap N(u') $, there is a vertex $ v' \in V \setminus D $ such that $ u,u',v $ and $ v' $ induce a $ C_4 $.
\end{lemma}
We strengthen Lemma \ref{lem:2} by proving the following statement.
\begin{lemma}
\label{lem:2nghbrs}
Let $ G $ be a connected nontrivial graph with $ \gamma_2(G)=\gamma(G) $ and let $ D $ be a minimum $ 2 $-dominating set of $ G $. For every pair $ u,v \in D$, if $ N_G(u) \cap N_G(v) \neq \emptyset $, then there exists a nonadjacent pair $ u',v' \in V\setminus D $ such that $ N_G(u') \cap D = N_G(v')\cap D = \{u,v\}$.
\end{lemma}
\begin{proof}
For every vertex $ x \in N_G(u)\cap N_G(v) $, there is a vertex $ y $ different from $x$ such that $N_G(y)\cap D = \{u,v\}$ and $xy \notin E(G)$, since otherwise $ (D \setminus \{u,v\})\cup\{x \} $ would be a dominating set of $ G $, a contradiction. This proves that we have at least two non-adjacent vertices $u'$ and $v'$ with the property $ N_G(u') \cap D = N_G(v')\cap D = \{u,v\}$.
\end{proof}
The following simple proposition demonstrates that $(\gamma,\gamma_2)$-graphs form a rich class and it indicates the possible difficulties in a general characterization.
\begin{prop}
\label{prop:0}
There is no forbidden (induced) subgraph for the graphs satisfying the equality of domination number and $ 2 $-domination number.
\end{prop}
\begin{proof} Consider an arbitrary graph $F$ and a four-cycle $ C_4 $ which is vertex-disjoint from $F$. Let $ u $ and $ v $ be two non-adjacent vertices of $ C_4$. Construct the graph $ G_F $ by joining each vertex of $ F $ to both $ u $ and $ v $. Since, for any $F$, the graph $ G_F$ contains $ F $ as an induced subgraph and satisfies the equality $ \gamma_2(G_F) = \gamma(G_F) =2$, there is no forbidden induced subgraph for $(\gamma,\gamma_2)$-graphs.
\end{proof}
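The construction $G_F$ can be verified computationally for a concrete choice of $F$. A small sketch (not from the paper), taking $F = K_3$ and checking $\gamma(G_F) = \gamma_2(G_F) = 2$ by brute force:

```python
from itertools import combinations

def gamma_k(adj, k):
    """Smallest |D| such that every vertex outside D has >= k neighbors in D."""
    for size in range(1, len(adj) + 1):
        for cand in combinations(adj, size):
            D = set(cand)
            if all(len(adj[v] & D) >= k for v in adj if v not in D):
                return size
    return len(adj)

def build_G_F(F_adj):
    """Join every vertex of F (labelled 0..n-1) to the two non-adjacent
    vertices u, v of a new, vertex-disjoint C_4."""
    n = len(F_adj)
    u, a, v, b = n, n + 1, n + 2, n + 3          # the four-cycle u-a-v-b-u
    adj = {x: set(N) for x, N in F_adj.items()}
    adj.update({u: {a, b}, a: {u, v}, v: {a, b}, b: {u, v}})
    for x in F_adj:
        adj[x] |= {u, v}
        adj[u].add(x)
        adj[v].add(x)
    return adj

K3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
G = build_G_F(K3)   # contains K_3 induced, yet gamma = gamma_2 = 2
```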
As a consequence of the Lemmas \ref{lem:0}-\ref{lem:2nghbrs}, we will prove that all $(\gamma,\gamma_2)$-graphs belong to the following graph class ${\cal G}$ that we define together with its subclasses ${\cal G}_1$ and ${\cal G}_2$.
\begin{definition} Given an arbitrary simple graph $F$ with vertex set $V(F)=D=\{v_1,\dots v_d\}$, a graph $G$ belongs to the class ${\cal G}(F)$ if $G$ can be obtained from $F$ by the following rules.
\begin{itemize}
\item[$(i)$] Define a pair of vertices $X_{i,j}=\{x_{i,j}^1, x_{i,j}^2\}$ for every edge $v_iv_j$ of $F$, and further, let $Y$ be an arbitrary (possibly empty) set of vertices, such that $D$, $Y$ and all the pairs $X_{i,j}$ are mutually disjoint sets of vertices. Define $V(G)=D \cup X \cup Y$, where $X=\bigcup_{v_iv_j\in E(F)}X_{i,j}$.
\item[$(ii)$] The edges between $D$ and $X\cup Y$ are defined such that $N_G(x_{i,j}^s)\cap D=\{v_i,v_j\}$ for every vertex $x_{i,j}^s\in X$, and the set $N_G(u)\cap D$ contains at least two vertices and induces a complete subgraph in $F$ for any $u\in Y$. The induced subgraph $G[D]$ cannot contain edges.
\item[$(iii)$] The edges inside $X\cup Y$ can be chosen arbitrarily, but each $X_{i,j}$ must remain independent.
\end{itemize}
Moreover, $G$ belongs to ${\cal G}_1(F)$ if $ |N_G(y)\cap D| = 2 $ for each $ y \in Y $; and $G$ belongs to ${\cal G}_2(F)$ if $Y=\emptyset$. The graph classes ${\cal G}$, ${\cal G}_1$, ${\cal G}_2$ contain those graphs $G$ for which there exists a graph $F$ such that $G$ belongs to ${\cal G}(F)$, ${\cal G}_1(F)$, ${\cal G}_2(F)$, respectively.
\end{definition}
For $G\in {\cal G}(F)$ with the fixed partition $V(G)=D\cup X\cup Y$ as in the definition above, a vertex $v$ is a \textit{$D$-vertex} (or original vertex) if $v\in D$; $v$ is a \textit{subdivision vertex} if $v\in X$; and $v$ is a \textit{supplementary vertex} if $v\in Y$. The edges inside $G[X\cup Y]$ are called \textit{supplementary edges}, and $F$ is said to be the \textit{underlying graph} of $G$. In Section 5, we will show that the underlying graph is not necessarily unique by presenting a $(\gamma, \gamma_2)$-graph having two non-isomorphic underlying graphs. Note that the construction in the proof of Proposition \ref{prop:0} always belongs to the class $ {\cal G}_1 $. Hence, Proposition \ref{prop:0} remains true under the condition $ G \in {\cal G}_1 $. This motivates us to focus on the smaller class ${\cal G}_2$.
Alternatively, we may define the graph class ${\cal G}_2(F)$ in the following constructive way. Let $ F $ be a simple graph with vertex set $ V(F) $ and edge set $ E(F) $. Consider the \textit{double subdivision graph} $ F^* $ obtained by replacing each edge $ v_iv_j $ by two parallel edges and subdividing each of them once, which adds the vertices $ x_{i,j}^1 $ and $ x_{i,j}^2 $. Let $ X_{i,j} = \{x_{i,j}^1, x_{i,j}^2\} $ and define the set of subdivision vertices $X = \bigcup_{v_iv_j \in E(F)}^{} X_{i,j}$. The graph class ${\cal G}_2(F)$ consists of the graphs obtained by adding some (possibly zero) supplementary edges between subdivision vertices of $ F^* $ such that each $ X_{i,j}$ remains independent.
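The double-subdivision construction is straightforward to realize programmatically. A minimal sketch (not from the paper) that builds $F^*$ from an edge list and checks that $D = V(F)$ is an independent $2$-dominating set of the resulting graph:

```python
def double_subdivision(vertices, edges):
    """Replace each edge v_i v_j of F by two internally disjoint two-edge
    paths v_i - x_{ij}^s - v_j (s = 1, 2); no supplementary edges added."""
    adj = {v: set() for v in vertices}
    for (vi, vj) in edges:
        for s in (1, 2):
            x = ("x", vi, vj, s)       # subdivision vertex x_{ij}^s
            adj[x] = {vi, vj}
            adj[vi].add(x)
            adj[vj].add(x)
    return adj

# F = P_3, the path on the vertices 1, 2, 3
adj = double_subdivision([1, 2, 3], [(1, 2), (2, 3)])
D = {1, 2, 3}
independent = all(adj[v].isdisjoint(D) for v in D)
two_dominated = all(len(adj[x] & D) == 2 for x in adj if x not in D)
```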
\begin{prop}
\label{prop:1}
If $G$ is a graph with $ \gamma_2(G) = \gamma(G) $, then $G \in \mathcal{G} $.
\end{prop}
\begin{proof} Assuming $ \gamma_2(G) = \gamma(G) $, choose a minimum $ 2 $-dominating set $D$ of $G$ and define the graph $ F = G^2[D] $. We first note that, by Lemma~\ref{lem:1}, $D$ is independent in $G$. Since $D$ is a $ 2 $-dominating set, every $u\in V(G)\setminus D$ has at least two neighbors in $D$ and, by the definition of $F$, the set $N_G(u)\cap D$ induces a complete subgraph in $F$. By Lemma~\ref{lem:2nghbrs}, for every edge $v_iv_j$ of $F$, there exist at least two different and non-adjacent vertices $u$, $u' \in V(G) \setminus D$ such that $ N_G(u) \cap D = N_G(u') \cap D = \{v_i,v_j\}$. If we select such a pair and define $X_{i,j}=\{u, u'\}$ for every $v_iv_j \in E(F)$, and let $Y=V(G)\setminus (D\cup X)$, then $G$ can be obtained from the underlying graph $F$ with the vertex partition $V(G)=D\cup X\cup Y$, proving that $G\in {\cal G}(F)$.
\end{proof}
In a follow-up paper to the present work \cite{ekinci2020}, we studied the analogous problem for each $k \ge 3$. There we gave a characterization for connected bipartite graphs satisfying $\gamma_k(G)=\gamma(G)+k-2$ and $\Delta(G) \ge k$. This result is based on the notion of the $k$-uniform ``underlying hypergraph'', which, for $k=2$, coincides with the underlying graph defined here.
\subsection{Structure of the paper}
In Section 2, we define the class ${\cal H}$ of those graphs which are contained in ${\cal G}_2$ with an underlying graph of girth at least 5, and we give a characterization for $(\gamma, \gamma_2)$-graphs over ${\cal H}$. Then, in Section 3, we discuss algorithmic complexity questions. First, we prove that the recognition problem of $(\gamma, \gamma_2)$-graphs is NP-hard on ${\cal G}_1$ (even if a minimum $ 2 $-dominating set is given together with the problem instance). Then, on the positive side, we show that there is a polynomial-time algorithm which recognizes $(\gamma, \gamma_2)$-graphs over the class ${\cal H}$ if the instance is given together with the minimum $ 2 $-dominating set $D=V(F)$. The algorithm is based on our characterization theorem and Edmonds' Blossom Algorithm. In Section 4, we consider the hereditary version of the property and characterize $(\gamma, \gamma_2)$-perfect graphs. As a direct consequence, we get that $(\gamma, \gamma_2)$-perfect graphs are easy to recognize. In the concluding section, we make remarks on the underlying graphs and discuss some open problems.
\section{Characterization of $(\gamma, \gamma_2)$-graphs over ${\cal H}$} \label{sec:2}
To formulate the main result of this section, we will refer to the following definitions.
\begin{definition} Let ${\cal H}$ be the union of those graph classes ${\cal G}_2(F)$ where the underlying graph $F$ is $(C_3,C_4)$-free.
\end{definition}
When we consider a graph $G\in {\cal H}$, we will always assume that a fixed $(C_3,C_4)$-free underlying graph $F$ and a corresponding partition $V(G)=D\cup X$ are given. In order to indicate this structure, we will use the notation $ G^D $.
\begin{figure}[h]
\centerline{\includegraphics[width=0.4\textwidth]{A4_v2.eps}}
\vspace*{8pt}
\caption{The graph $ A_4 $}
\label{fig:A4B4}
\end{figure}
\begin{figure}[h]
\centerline{\includegraphics[width=0.7\textwidth]{C1_v2.eps}}
\vspace*{8pt}
\caption{The graph $ B$ }
\label{fig:C1C2}
\end{figure}
\begin{definition}
For a positive integer $k\geq 2$, let $A_k^W$ be the graph on the vertex set
$$V(A_k)=\{v, w_1,\dots ,w_k, x_1^1, \dots, x_k^1, x_1^2, \dots ,x_k^2\}$$
and with the edge set
$$E(A_k)=\{vx_i^1, vx_i^2, w_ix_i^1, w_ix_i^2: 1\le i \le k\}\cup \{x_i^1x_{i+1}^2: 1\le i \le k-1\}\cup \{x_k^1x_1^2\}.$$
The specified vertex set is $W_k = W =\{v\} \cup\{w_i: 1\le i \le k\}$ (for illustration see Fig.~\ref{fig:A4B4}).
\end{definition}
\begin{definition}
Let $B^W$ be the graph of order 8 with
$$V(B)=\{v_1, u_1, v_2, u_2, x_1^1,x_1^2,x_2^1,x_2^2\},$$
$$E(B) = \{v_ix_i^1, v_ix_i^2,u_ix_i^1,u_ix_i^2: 1\le i \le 2\}\cup \{x_1^1x_2^1\}$$
The specified vertex set is $W=\{v_1, u_1, v_2, u_2\}$ (for illustration see Fig.~\ref{fig:C1C2}).
\end{definition}
Note that $ A_k \in {\cal G}_2(S_k)$ and $ B \in {\cal G}_2(2K_2) $.
We first prove a lemma which will be referred to in the proof of our main theorem and also in later sections.
\begin{lemma} \label{lem:d2dom}
If $ G^D \in \mathcal{G}_1(F) $, then $D$ is a minimum $ 2 $-dominating set of $G$.
\end{lemma}
\begin{proof}
By definition, every vertex from $X \cup Y$ has two neighbors in $ D $. Thus, $D$ is a $ 2 $-dominating set in $G$.
Suppose, to the contrary, that $ D' $ is a $ 2 $-dominating set of $ G $ such that $ |D'|<|D| $. Let $ D_1 = D \cap D'$ and $ D_2 = D \setminus D' $. Since $ D $ is independent in $ G $, the vertices in $ D_2 $ have to be $ 2 $-dominated by the vertices of $ D' \setminus D $, that is, every vertex in $ D_2 $ has at least two neighbors in $ D' $. Then we have \[|E[D',D_2]|\geq 2|D_2|.\]
Moreover, by the definition of $\mathcal{G}_1(F) $, every vertex in $ D'\setminus D $ has exactly two neighbors in $ D $, so we have \[2|D'\setminus D|\geq |E[D',D_2]|.\]
Thus, $ |D'\setminus D| \geq |D_2| $.
Since $ D'=(D' \setminus D) \cup D_1 $, we conclude $ |D'| \geq |D_2|+|D_1|=|D| $, a contradiction.
\end{proof}
\begin{thm} \label{thm:2}
Let $G^D$ be a graph from ${\cal H}$. Then $\gamma(G)=\gamma_2(G)$ holds if and only if $G^D$ contains no subgraph isomorphic to $B^W$ and no subgraph isomorphic to $A_k^{W}$ for any $k\ge 2$.
\end{thm}
\begin{proof}
Throughout the proof, we assume that $G\in {\cal H}$ and hence there exists a $(C_3,C_4)$-free underlying graph $F$ such that $G \in {\cal G}_2(F)$. By Lemma~\ref{lem:d2dom}, $D=V(F)$ is a minimum $ 2 $-dominating set of $G$.
First assume that $G^D$ contains a (not necessarily induced) subgraph which is isomorphic to $B^W$. We may assume, without loss of generality, that this subgraph contains the vertices $ S = \{v_1, u_1, v_2, u_2, x_1^1,x_1^2,x_2^1,x_2^2\} $, the edges correspond to those in Fig.~\ref{fig:C1C2}, and $ S \cap D =\{v_1, u_1, v_2, u_2\} $. Since $F$ is $(C_3,C_4)$-free, the induced subgraph $F[S \cap D]$ is $(C_3,C_4)$-free as well. Therefore, as $|S\cap D|=4$, $F[S \cap D]$ is a forest. It contains at least two edges, namely $v_1u_1$ and $v_2u_2$. Hence, $F[S \cap D]$ contains a leaf, say $ v_1 $. Consider the set $ D' = (D \setminus S) \cup \{u_1, x_1^1, x_2^2\} $. Observe that $ D' $ dominates all the vertices in $D$; the vertex $x_1^1 \in D'$ dominates $ x_2^1 $; the vertex $ u_1 $ dominates $ x_1^2 $. By the choice of $ v_1 $ and $u_1$, $ F[\{v_1,v_2,u_2\}] $ contains only the edge $ v_2u_2 $. Hence, all the subdivision vertices different from $ \{x_1^1,x_1^2,x_2^1,x_2^2\} $ are dominated either by $ D \setminus S $ or by $ u_1 $. Therefore, $D'$ is a dominating set in $G$ and $|D'|<|D|$. These imply $\gamma(G)<\gamma_2(G)$.
Next assume that $G^D$ contains a subgraph which is isomorphic to $A_k^W$. We may assume, without loss of generality, that the vertices of this subgraph are named as given in the definition of $ A_k^W $. Consider the set $D'=(D\setminus W) \cup \{x_1^1, \dots x_k^1\}$. Observe that $D'$ dominates all the vertices in $D$; the set $\{x_1^1, \dots x_k^1\} \subseteq D'$ dominates all the vertices of the form $x_i^s$ ($i \in [k]$, $s\in [2]$). Since $F$ is assumed to be $C_3$-free, for any further subdivision vertex $x_{i,j}^s$ of $G$, at least one of its neighbors which is a $D$-vertex, namely at least one of $v_i$ and $v_j$, is not included in $W$. Thus, $x_{i,j}^s$ is dominated by a vertex in $D\setminus W$. We may conclude that $D'$ is a dominating set in $G$. Since $|W|=k+1$, we have $|D'|<|D|$ from which $\gamma(G)<\gamma_2(G)$ follows. This finishes the proof of one direction of our theorem.
For the converse, we assume that $G$ contains no subgraph isomorphic to $B^W$ and no subgraph isomorphic to $A_k^{W}$ for any $k\ge 2$, and then prove that $\gamma(G)=\gamma_2(G)$. In particular, having no subgraph isomorphic to $B^W$ means that every supplementary edge is inside a neighborhood of a $D$-vertex and, therefore, $N[x_{i,j}^s]\subseteq N[v_i]\cup N[v_j]$ holds for each supplementary vertex $x_{i,j}^s$. Now, suppose for a contradiction that $\gamma(G)<\gamma_2(G)$. Let $D'$ be a minimum dominating set of $G$ such that $|D'\cap D|$ is maximum under this condition. It is clear that $|D'|=\gamma(G)<\gamma_2(G)=|D|$.
We first prove that no pair $x_{i,j}^1$, $x_{i,j}^2$ are contained together in $D'$. Suppose, to the contrary, that $\{x_{i,j}^1$, $x_{i,j}^2 \}\subseteq D'$. Then, since $N[x_{i,j}^1]\cup N[x_{i,j}^2]\subseteq N[v_i]\cup N[v_j]$,
the set $D''= (D'\setminus \{x_{i,j}^1, x_{i,j}^2\}) \cup \{v_i,v_j\}$ would be a dominating set of $ G $. This contradicts either the minimality of $ |D'| $ or the maximality of $|D'\cap D|$.
If we have some edges $v_iv_j \in E(F)$ such that $|X_{i,j}\cap D'|=0$, then we delete all these $X_{i,j}$ pairs from $G$, delete all the associated edges from $F$ and obtain $G'$ and $F'$. Note that, by definition, $G'\in {\cal G}_2(F')$ and $F'$ is still $(C_3,C_4)$-free. As $D'$ contains exactly one vertex from each remaining pair $X_{i,j}$, we infer that $|E(F')| \le |D'|$. By Lemma~\ref{lem:d2dom}, $\gamma_2(G')$ remains $|D|$ (we did not delete the possibly arising isolated vertices). We deleted only subdivision vertices not contained in $D\cup D'$ and $D'$ contains exactly one vertex from each pair $X_{i,j}$ corresponding to an edge $v_iv_j \in E(F')$. Therefore,
\begin{equation} \label{eq:1}
|E(F')| \le |D' \cap V(G')|< |D\cap V(G')|
\end{equation}
holds and $D' \cap V(G')$ is a dominating set in $G'$. By Lemma~\ref{lem:d2dom}, $D\cap V(G')$ remains a minimum $2$-dominating set in $G'$.
$G'$ might contain several components. By the inequality (\ref{eq:1}), there is a component, say $G''$, such that $|D'\cap V(G'')| <|D \cap V(G'')|=\gamma_2(G'')$. It is clear that $G''$ is not an isolated vertex.
Recall that $N_G[x_{i,j}^s] \subseteq N_G[v_i] \cup N_G[v_j]$ holds for each supplementary vertex $x_{i,j}^s$ in $G$ and hence, by construction, the analogous statement remains true in $G''$. Thus, the connectivity of the underlying graph $F''$ of $G''$ follows from the connectivity of $G''$. It also holds that $V(F'')= D \cap V(G'')$. Moreover, as $D' \cap V(G'')$ intersects each pair $X_{i,j} $ from $G''$, we have $|E(F'')| \le |D' \cap V(G'')|$. We may conclude
\begin{equation} \label{eq:2}
|E(F'')| \le |D' \cap V(G'')| < |V(F'') |.
\end{equation}
The underlying graph $F''$ is therefore a tree and
\begin{equation} \label{eq:3}
|E(F'')|= |D' \cap V(G'')| = |V(F'') |-1
\end{equation}
holds. By the first equality in (\ref{eq:3}), $D' \cap D \cap V(G'')=\emptyset$. Note that $F''$ is not necessarily an induced subgraph of $F$ but, as $F$ is $C_3$-free, all the star-subgraphs of $F''$ are induced stars in $F$.
Consider a non-pendant edge $ v_iv_j $ in $ F'' $ (if one exists). We know that $ D'\cap V(G'') $ is a dominating set in $G''$ and it contains exactly one vertex from $ X_{i,j}$. Renaming the vertices if necessary, we may suppose $ x_{i,j}^1 \in D' $. Then the vertex $ x_{i,j}^2 $ must be dominated by a vertex from $ D' $ which is a neighbor of either $ v_i $ or $ v_j $. Without loss of generality, assume that $ x_{i,j}^2 $ is dominated by a neighbor of $ v_i $. Let $ S = V(G'')\setminus (N_{G''}(v_j) \setminus X_{i,j}) $ and consider the induced subgraph $ G''[S] $. Let $ H $ be the component of $ G''[S] $ that contains both $ v_i $ and $ v_j $.
Recall that $D' \cap V(G'')$ dominates all vertices in $G''$. By construction, $N_{G''}[v_p] \subseteq V(H)$ is true for every vertex $v_p\neq v_j$ from $ D' \cap V(H)$ and
$$N_{G''}[x_{p,q}^s] \subseteq N_{G''}[v_p] \cup N_{G''}[v_q] \subseteq V(H)$$
holds for every $x_{p,q}^s \in X \cap V(H)$ if $p \neq j \neq q$. The set $D' \cap V(H)$ therefore dominates all vertices from $V(H) \setminus N_H[v_j]$. As $N_H[v_j]=\{v_j, x_{i,j}^1, x_{i,j}^2 \}$, it can be readily seen that $D'\cap V(H)$ is a dominating set in $H$.
We repeat this procedure of deleting non-pendant edges in the underlying graph sequentially. At the end, we obtain a graph $ H_r $ whose underlying graph $ F_r $ is isomorphic to a star $ K_{1,m} $. Then the set $ D_r = V(H_r) \cap D' $ is a dominating set of $ H_r $ and it contains exactly one vertex from each pair $ X_{i,j} $ of subdivision vertices.
We will construct a directed graph $ R $ as follows. We create a vertex $ x_{i,j} $ corresponding to each pair $ X_{i,j} \subset V(H_r) $ of subdivision vertices. Then, we add a directed edge from $ x_{i,j} $ to $ x_{k,\ell} $ in $R $, if the vertex in $ X_{i,j}\setminus D_r $ is dominated by the vertex in $ X_{k,\ell}\cap D_r $. As $ D_r $ has exactly one vertex from each pair $ X_{i,j} $, the outdegree of each vertex $ x_{i,j} \in V(R) $ is at least one. Thus, there is a directed cycle of order at least $ t \ge 2 $, which corresponds to a subgraph isomorphic to $ A_t^W $ in $ H_r^{D \cap V(H_r)} \subseteq G^D $. This contradicts our assumption and finishes the proof of the theorem.
\end{proof}
\section{Algorithmic complexity} \label{sec:3}
Since there are infinitely many forbidden subgraphs, Theorem \ref{thm:2} does not directly give a polynomial-time recognition algorithm for $(\gamma,\gamma_2)$-graphs on ${\cal H}$. However, based on this characterization, we can design a polynomial-time algorithm that checks whether $ \gamma(G) = \gamma_2(G) $ holds for a general instance $ G^D \in \mathcal{H} $.
\begin{thm}
\label{thm:complexity}
Let $ G^D \in {\cal H}$ be given. It can be decided in polynomial time whether the graph $ G^D $ satisfies the equality $ \gamma(G) = \gamma_2(G) $.
\end{thm}
\begin{proof}
By Theorem \ref{thm:2}, $\gamma(G)=\gamma_2(G)$ holds if and only if $G^D$ contains no subgraph isomorphic to $B^W$ and no subgraph isomorphic to $A_k^{W}$ for any $k\ge 2$.
\medskip
\noindent \hrulefill\\
\noindent \textbf{Algorithm}
\noindent \hrulefill\\
\indent \textit{Input:} A graph $ G^D \in {\cal H}$ \\
\indent \textit{Output:} If $ \gamma(G) = \gamma_2(G) $, then true; else false. \\
\indent \hspace{1cm} for each supplementary edge $ uv $ in $ G $ \\
\indent \hspace{2cm}if $D \cap (N_G(u) \cap N_G(v)) = \emptyset $, then return false\\
\indent \hspace{1cm} for each vertex $ x $ in $ D $ \\
\indent \hspace{2cm} $ X \leftarrow N_G(x)$ and $ G'\leftarrow G[X] $\\
\indent \hspace{2cm} $ k= (\deg_{G}x) /2$ \\
\indent \hspace{2cm} for $ i \leftarrow 1 $ to $ k $ do \\
\indent \hspace{3cm} $ E \leftarrow E(G') $\\
\indent \hspace{3cm} for $ j \leftarrow 1 $ to $ k $ do \\
\indent \hspace{4cm} if $ j\ne i $, then $ E \leftarrow E \cup \{x_j^1 x_j^2\}$ \\
\indent \hspace{3cm} $ \mu \leftarrow $ the size of a maximum matching in $ (X,E) $\\
\indent \hspace{3cm} if $ \mu = k $, then return false\\
\indent \hspace{2cm}end-for\\
\indent \hspace{1cm} end-for\\
\indent \hspace{1cm} return true\\
\indent end.
\noindent \hrulefill
\medskip
The algorithm above first determines whether $B^W \subseteq G^D $; if so, it halts and returns false. It can be readily checked that this part of the algorithm requires polynomial time.
\noindent Then, in the next steps of the algorithm, the existence of subgraphs isomorphic to $A_\ell^{W}$ is tested. In order to find such a subgraph (if it exists), the algorithm searches for an appropriate matching in $G[N_G(v_i)]$ for every vertex $ v_i$ from $ D $. Since a subgraph $A_\ell^W$ does not necessarily contain all the neighbors of $ v_i $, it is not enough to check the existence of a perfect matching in $ G[N_G(v_i)] $. Instead, we define the edge set $E_i= \{x_{i,j}^1x_{i,j}^2: v_j \in N_F(v_i)\}$. Let $ G_i^* $ be the graph $ G[N_G(v_i)] $ extended by the edges from $ E_i $. Clearly, $G_i^* $ contains a perfect matching, namely $ E_i $. On the other hand, $ G_i^* $ contains a perfect matching different from $ E_i $ if and only if $ G[N_G(v_i)] $ has a subgraph isomorphic to $ A_\ell^W $. Hence, the algorithm checks all possible $G_i^*-e$ graphs, where $e\in E_i$, and if any of them has a perfect matching, then there exists a subgraph isomorphic to $ A_\ell^W $.
In order to find a maximum matching in $G_i^*-e$, we can use Edmonds' blossom algorithm \cite{Edmonds1965}, which was improved by Micali and Vazirani \cite{Micali1980} to run in time $ O(\sqrt{n}m) $ on any graph of order $ n $ and size $ m $. The procedure is repeated $ \deg_{G}(x)/2 = \deg_F(x)$ times for every vertex $ x\in D $, that is, $ \sum_{v\in V(F)}\deg_F(v) = 2|E(F)| $ times in total. Thus, the second part of the algorithm requires polynomial time. This finishes the proof.
\end{proof}
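The key subroutine above, testing whether $G_i^*-e$ has a perfect matching, is handled by the blossom algorithm; for small instances its role can be mimicked by exhaustive search. A Python sketch (illustration only, exponential time):

```python
def has_perfect_matching(vertices, edges):
    """Exhaustive search: try every partner for one fixed vertex and recurse.
    vertices: a set; edges: a set of frozensets of size two."""
    if not vertices:
        return True
    v = min(vertices)
    return any(
        frozenset((v, u)) in edges
        and has_perfect_matching(vertices - {v, u}, edges)
        for u in vertices - {v}
    )

# A 4-cycle has a perfect matching; a path on three vertices does not.
c4_edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
assert has_perfect_matching({0, 1, 2, 3}, c4_edges)
assert not has_perfect_matching({0, 1, 2}, {frozenset((0, 1)), frozenset((1, 2))})
```

The actual algorithm of Theorem~\ref{thm:complexity} replaces this brute force by the Micali--Vazirani implementation to stay within polynomial time.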
\medskip
\noindent We now show that the same problem is NP-hard even on the graph class $ \mathcal{G}^D_1 $.
\begin{thm} \label{thm:4}
Let a graph $ G \in \mathcal{G}_1 $ be given together with a specified set $ D $ such that $ G^2[D] \cong F $ and $ G \in \mathcal{G}_1(F) $. Then, it is NP-complete to decide whether the inequality $ \gamma(G) < \gamma_2(G)$ holds for a general instance $ G \in \mathcal{G}_1 $.
\end{thm}
\begin{proof}
By Lemma \ref{lem:d2dom}, we have $ \gamma_2(G) = |D| $ and it can be checked in polynomial time whether a given set $ D' $ with $ |D'| < |D| $ is a dominating set of $ G $. Thus, the decision problem belongs to NP.
In order to prove the NP-hardness, we present a polynomial-time reduction from the well-known $ 3 $-SAT problem, which is proved to be NP-complete \cite{Garey1979}.
Let $ X = \{x_1, x_2, \dots, x_k\}$ be a set of Boolean variables. A truth assignment for $ X $ is a function $ \varphi:X \rightarrow \{t,f\} $. If $ \varphi(x_i)=t $ holds, then the variable $ x_i $ is called \textit{true}; if $ \varphi(x_i)=f$ holds, then $ x_i $ is called \textit{false}. If $ x_i $ is a variable in $ X $, then $ x_i $ and $ \neg{x_i} $ are literals over $ X $. The literal $ x_i $ is true under $ \varphi $ if and only if the variable $ x_i $ is true under $ \varphi $; the literal $ \neg x_i $ is true if and only if the variable $ x_i $ is false. A clause over $ X $ is a set of three literals over $ X $; it represents the disjunction of those literals and is satisfied by a truth assignment if and only if at least one of its members is true under that assignment. A collection $ \mathcal{C} $ of clauses over $ X $ is \textit{satisfiable} if and only if there exists some truth assignment for $ X $ that satisfies all the clauses in $ \mathcal{C} $. Such a truth assignment is called a \textit{satisfying truth assignment} for $ \mathcal{C} $. The $ 3 $-SAT problem is specified as follows.
\medskip
\medskip
\noindent \textbf{3-SATISFIABILITY (3-SAT) PROBLEM}
\vspace{0.25cm}
\noindent\textbf{\textit{Instance:}} A collection $ \mathcal{C} = \{C_1,C_2, \dots ,C_\ell\} $ of clauses over a finite set $ X $ of variables such that $ |C_j| = 3 $ for $1 \le j \le \ell $.
\vspace{0.25cm}
\noindent\textbf{\textit{Question:}} Is there a truth assignment for $ X $ that satisfies all the clauses in $ \mathcal{C} $?
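Encoding literals as signed integers ($+i$ for $x_i$, $-i$ for $\neg x_i$; an encoding of our own choosing), satisfaction can be checked exactly as in the definition. A short Python sketch:

```python
def satisfies(assignment, clauses):
    """assignment maps each variable index to True/False; a clause (a set of
    signed integers) is satisfied iff at least one of its literals is true."""
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

clauses = [{1, 2, 3}, {-1, -2, -3}]
assert satisfies({1: True, 2: False, 3: False}, clauses)
assert not satisfies({1: True, 2: True, 3: True}, clauses)
```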
\vspace{0.5cm}
Let $ \mathcal{C} $ be a $ 3 $-SAT instance with clauses $ C_1,C_2,\dots, C_\ell $ over the Boolean variables $ X=\{x_1,x_2, \dots, x_k\} $. We may assume that for every three variables $ x_{i_1}, x_{i_2}, x_{i_3} $ there exists a clause $ C_j $, where $ j\in [\ell] $, such that $ C_j $ does not contain any of the variables $ x_{i_1}, x_{i_2}, x_{i_3} $ (neither in positive form, nor in negative form). Otherwise, the problem could be reduced to at most eight (separated) $ 2$-SAT problems, which are solvable in polynomial time.
We now construct a graph $ G \in \mathcal{G}_1(F) $, where $ F \cong S_{k+1} $, such that the given instance $ \mathcal{C} $ of $ 3 $-SAT problem is satisfiable if and only if $ \gamma(G) < \gamma_2(G)$.
The construction is as follows.
For every variable $ x_i $, we create three vertices $ \{x_i^t, x_i^f, v_i\} $ and then we add the edges $ x_i^tv_i $ and $ x_i^fv_i $. For every clause $ C_j \in \mathcal{C}$, we create a vertex $ c_j $, and if $ x_i $ is a literal in $ C_j $, then $ x_i^tc_j \in E(G) $; if $ \neg x_i $ is a literal in $ C_j $, then $ x_i^fc_j \in E(G) $. Moreover, we add a vertex $ c^* $ and the edges $ c^*x_i^t $ and $ c^*x_i^f $ for every $ i\in [k] $. We also add a vertex $ v_{k+1} $ and the edge set $ \{c_iv_{k+1}:1\le i\le\ell\} \cup \{c^*v_{k+1}\}$. Finally, we add a new vertex $ v_0 $, which is adjacent to every vertex in $ V(G)\setminus \{v_1,v_2,\dots,v_{k+1}\} $ (for an illustration of the construction see Fig.~\ref{fig:construction}). The order of $ G $ is obviously $ 3k+\ell +3 $ and this construction can be done in polynomial time. Note that $ G \in \mathcal{G}_1(F) $, where $ F $ is a star with center $ v_0 $ and leaves $ v_1,\dots, v_{k+1} $. Thus, we have $ \gamma_2(G)=k+2 $, by Lemma \ref{lem:d2dom}.
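The construction can be sketched in Python as follows (vertex labels are our own encoding; literals are signed integers, $+i$ for $x_i$ and $-i$ for $\neg x_i$). The assertion confirms the order $3k+\ell+3$:

```python
def reduction_graph(k, clauses):
    """Build the reduction graph G from a 3-SAT instance over variables 1..k."""
    edges = set()
    def add(u, v):
        edges.add(frozenset((u, v)))
    for i in range(1, k + 1):
        add(('xt', i), ('v', i))   # x_i^t v_i
        add(('xf', i), ('v', i))   # x_i^f v_i
        add('c*', ('xt', i))       # c* x_i^t
        add('c*', ('xf', i))       # c* x_i^f
        add('v0', ('xt', i))       # v_0 sees everything except v_1, ..., v_{k+1}
        add('v0', ('xf', i))
    for j, clause in enumerate(clauses, 1):
        for lit in clause:
            add(('c', j), ('xt', lit) if lit > 0 else ('xf', -lit))
        add(('c', j), ('v', k + 1))
        add('v0', ('c', j))
    add('c*', ('v', k + 1))
    add('v0', 'c*')
    vertices = {u for e in edges for u in e}
    return vertices, edges

V, E = reduction_graph(3, [{1, -2, 3}, {-1, 2, -3}])
assert len(V) == 3 * 3 + 2 + 3          # order 3k + l + 3 with k = 3, l = 2
assert frozenset(('v0', 'c*')) in E
```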
\begin{figure}[h]
\centerline{\includegraphics[width=0.8\textwidth]{construction.eps}}
\captionsetup{width=0.8\textwidth}
\centering
\caption{\protect An illustration of the construction for $ 3 $-SAT reduction: The clauses $ C_1 $ and $ C_\ell $ corresponding to the vertices $ c_1 $ and $ c_\ell $, resp., are $ C_1=(x_1 \vee \neg x_3 \vee \neg x_k) $ and $ C_\ell=(x_1 \vee \neg x_2 \vee x_k) $.}
\label{fig:construction}
\end{figure}
We now prove that $ \mathcal{C} $ is satisfiable if and only if $ \gamma(G) < \gamma_2(G) $. First, consider a truth assignment $ \varphi : X \rightarrow \{t,f\} $ which satisfies $ \mathcal{C} $. Let $D_1=\bigcup_{i\in [k] }\{x_i^t :\varphi(x_i)=t\} $ and let $D_2 = \bigcup_{i\in [k] }\{x_i^f : \varphi(x_i)=f\}$. Consider the set $ D' = D_1 \cup D_2 \cup \{c^*\} $. It can be readily checked that $ D' $ is a dominating set of cardinality $ k+1 $. Hence, $ \gamma(G)< \gamma_2(G)$ follows.
Conversely, assume that $ \gamma(G) < \gamma_2(G) $ and consider a minimum dominating set $ D' $ of cardinality at most $ k+1 $. In order to dominate $ v_i $, the set $ D' $ contains at least one vertex from the set $ \{x_i^t, x_i^f, v_i\} $, for each $ i\in [k] $. Similarly, to dominate $ v_{k+1} $, the set $ D' $ contains at least one vertex from the set $ \{c_1,c_2,\dots,c_\ell,c^*,v_{k+1}\}$. Since $ |D'|\le k+1 $, we have $|D' \cap \{x_i^t, x_i^f, v_i\}| = 1 $ for every $ i\in [k] $. Moreover, $ |D'\cap \{c_1,c_2,\dots,c_\ell,c^*,v_{k+1}\}| = 1$ and $ v_0 \notin D' $.
Suppose that $ v_{k+1} \in D'$. In order to dominate the vertices $ x_i^t $ and $ x_i^f $, the set $ D' $ contains the vertex $ v_i $ for all $ i \in [k] $. Hence, $ N_G(v_0) \cap D' = \emptyset $. From the discussion above, we know that $ v_0 \notin D' $. Thus, $ v_0 $ is not dominated by a vertex from $ D' $, a contradiction.
Suppose that $ c_j \in D' $ for some $ j \in [\ell] $. Let $ C_j $ be the corresponding clause containing the variables $x_{i_1},x_{i_2}, x_{i_3} $. Consider any variable $ x_s \in X \setminus \{x_{i_1},x_{i_2}, x_{i_3}\} $. Since $|D' \cap \{x_i^t, x_i^f, v_i\}| = 1 $ for each $ i\in [k] $, $ D' $ contains $ v_s $ in order to dominate both of the vertices $ x_s^t $ and $ x_s^f $. By our assumption, there exists a clause $C_q $ not containing the variables $x_{i_1},x_{i_2}, x_{i_3} $ neither in positive nor in negative form. Thus, $ c_q $ is not dominated by a vertex from $ D' $, a contradiction.
Since $ |D'\cap \{c_1,c_2,\dots,c_\ell,c^*\}| = 1$, the only remaining case is $ c^*\in D' $. Under this assumption, every vertex $ c_i $ must be dominated by the vertices corresponding to the literals in $ C_i$. Thus, the truth assignment
\[ \varphi(x_i) = \begin{cases}
t, & \text{if }x_i^t \in D' \\
f, & \text{if }x_i^f \in D' \text { or if }v_i \in D' \\
\end{cases}
\]
satisfies $ \mathcal{C} $. This finishes the proof.
\end{proof}
Theorem~\ref{thm:4} implies that it is coNP-complete to decide whether the equality $ \gamma(G) = \gamma_2(G)$ holds for a general instance $ G $ from $ \mathcal{G}_1$. On the other hand, we cannot prove that the problem belongs to NP. Instead, we consider the complexity class $\Theta^p_2$, which consists of the problems solvable by a polynomial-time deterministic algorithm that queries an NP-oracle only $O(\log n)$ times (for a detailed introduction see \cite{Marx2006}).
\begin{prop}
The complexity of deciding whether $\gamma(G)=\gamma_2(G)$ holds for a general instance $G$ is in the class $\Theta^p_2$.
\end{prop}
\begin{proof}
Using binary search, the parameters $\gamma(G)$ and $\gamma_2(G)$ can be determined by asking the NP-oracle $O(\log n)$ times whether the inequalities $\gamma(G) \le k$ and $\gamma_2(G) \le k$ hold. Thus, the decision problem belongs to $\Theta^p_2$.
\end{proof}
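The $O(\log n)$ oracle-call pattern is ordinary binary search over the threshold $k$. In the Python sketch below the NP-oracle is stubbed with brute force on a small graph (illustration only; names are our own):

```python
from itertools import combinations

def binary_search_min(pred, lo, hi):
    """Smallest k in [lo, hi] with pred(k) true; pred is monotone in k,
    so O(log(hi - lo)) oracle calls suffice."""
    while lo < hi:
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Brute-force stand-in for the NP-oracle "is gamma(G) <= k?" on C_6.
c6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
def oracle(k):
    # a dominating set of size < k extends to one of size exactly k,
    # so checking size-k subsets answers "gamma <= k" and is monotone in k
    return any(
        all(v in D or c6[v] & D for v in c6)
        for D in map(set, combinations(c6, k))
    )

assert binary_search_min(oracle, 1, 6) == 2   # gamma(C_6) = 2
```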
Note that in \cite{Arumugam2013}, a similar statement was proved for the problem of deciding whether the transversal number $\tau({\cal H})$ equals the domination number $\gamma({\cal H})$ for a general instance hypergraph ${\cal H}$.
\section{Characterization of $(\gamma, \gamma_2)$-perfect graphs} \label{sec:4}
Recently, Alvarado, Dantas, Rautenbach \cite{Alvarado2015-2, Alvarado2015} and Henning, J\"ager, Rautenbach \cite{Henning2018} studied graphs for which the equality between two fixed domination-type invariants hereditarily holds. The analogous problem for transversal and domination numbers of graphs and hypergraphs was considered in \cite{Arumugam2013}.
In this section, we characterize $(\gamma, \gamma_2)$-perfect graphs, that is, we characterize the graphs for which the equality between the domination and the $ 2 $-domination numbers hereditarily holds. By Lemma \ref{lem:0}, $ \delta(G) \geq 2$ is a necessary condition for $ \gamma(G)=\gamma_2(G) $. Hence, we define $(\gamma, \gamma_2)$-perfect graphs as follows.
\begin{definition}
Let $ G $ be a graph with $ \delta(G) \geq 2 $. Then $ G $ is a $(\gamma, \gamma_2)$-perfect graph if the equality $ \gamma(H)=\gamma_2(H) $ holds for every induced subgraph $ H $ of minimum degree at least two.
\end{definition}
Note that a disconnected graph $ G $ is $(\gamma, \gamma_2)$-perfect if and only if all of its components are $(\gamma, \gamma_2)$-perfect.
In order to formulate the results of this section we will define the following class.
\begin{definition}
Let $ S_{k} $ be the star with center vertex $ v $ and end vertices $ \{v_1,v_2,\dots,\allowbreak v_k\} $ such that $ k\geq 1 $. Denote the edge $ vv_j\in E(S_{k})$ by $ e_j $ for $ j\in [k] $. Let $ S(i_1,i_2,\dots,i_k) $ be the graph obtained by substituting each edge $ e_j $ of $ S_{k} $ by $ i_j $ parallel edges $ e_j^1,e_j^2,\dots, e_j^{i_j} $, where $ i_j \geq 2 $, and then subdividing each edge $ e_j^r $ by adding the vertex $ x_j^r $ for all $ r \in [i_j] $ and all $ j\in [k] $. A graph $ G $ belongs to the class $ {\cal S} $ if it is isomorphic to $ S(i_1,i_2,\dots,i_k) $ for some $ k\geq 1 $, where $ i_j\geq 2 $ for all $ j\in [k] $.
\end{definition}
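A Python sketch constructing $S(i_1,\dots,i_k)$ (vertex labels our own) and confirming $\gamma = \gamma_2 = k+1$ by brute force on $S(2,2)$, the smallest member beyond $C_4 \cong S(2)$:

```python
from itertools import combinations

def build_S(multiplicities):
    """S(i_1,...,i_k): center 'v', leaves ('v', j), subdivision vertices ('x', j, r)."""
    adj = {}
    def add(a, b):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    for j, i_j in enumerate(multiplicities, 1):
        for r in range(1, i_j + 1):
            add('v', ('x', j, r))
            add(('x', j, r), ('v', j))
    return adj

def smallest(adj, need):
    """Brute-force gamma (need = 1) or gamma_2 (need = 2)."""
    for size in range(1, len(adj) + 1):
        for D in map(set, combinations(adj, size)):
            if all(w in D or len(adj[w] & D) >= need for w in adj):
                return size

g = build_S((2, 2))                            # k = 2, order 1 + 2 + 4 = 7
assert len(g) == 7
assert smallest(g, 1) == smallest(g, 2) == 3   # gamma = gamma_2 = k + 1
```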
We clearly have $ {\cal S} \subseteq {\cal G}_1$, since any $S(i_1,i_2,\dots,i_k) \in {\cal G}_1(F) $, where $ F \cong S_{k} $. On the other hand, if $G' \in {\cal G}(S_k)$, the underlying graph does not contain a clique of order larger than two and consequently, $|N(y)\cap D|=2$ for every supplementary vertex $y$. This implies that $G'\in {\cal G}_1(S_k)$. By the definitions above, we have the following equivalence.
\begin{prop}
\label{prop:perfect}
For any graph, $G\in {\cal S}$ holds if and only if $G\in {\cal G}_1(S_k)$ $ ( $or, equivalently, $G\in {\cal G}(S_k))$ for a non-trivial star $S_k$ and $G$ does not contain a supplementary edge.
\end{prop}
The main result of this section is a characterization theorem for $(\gamma,\gamma_2)$-perfect graphs.
\begin{thm}
\label{thm:perfect}
$ G $ is a connected $ (\gamma,\gamma_2)$-perfect graph if and only if $ G \in {\cal S}. $
\end{thm}
\begin{proof}
We first prove that if $ G \in {\cal S} $, then it is a $ (\gamma,\gamma_2) $-perfect graph.
By Proposition \ref{prop:perfect}, we know that $ G \in {\cal G}_1(F) $, where $ F \cong S_{k} $ for $ k\geq 1 $. Then, by Lemma \ref{lem:d2dom}, $ \gamma_2(G) = |V(F)|= k+1$. Since a minimal $ 2 $-dominating set is a dominating set, we have the inequality $ \gamma(G) \leq k+1 $. In order to prove that $\gamma(G)=\gamma_2(G)$, it is enough to show that $ \gamma(G) > k $. Suppose, to the contrary, that $ D' $ is a minimum dominating set of $ G $ such that $ |D'|\leq k $.
Consider the vertices of $ G $ corresponding to the end vertices of the star $ S_{k} $. Let $ \{v_1,v_2,\dots, v_k\} = V(F)\setminus \{v\} \subseteq V(G)$, where $ v $ is the center of $ F\cong S_{k} $. Since $D'$ is a dominating set, $ |N_G[v_j]\cap D'| \geq 1 $ for each $ j \in [k] $. Note that the closed neighborhoods of any two vertices from the set $ \{v_1,v_2,\dots,v_k\} $ are disjoint. Since $ |D'|\leq k $ by our assumption, we have $v\notin D'$ and $ |N_G[v_j] \cap D'|=1 $, for every $ j\in[k] $. Moreover, as the center $v$ must also be dominated, there exists some $ j \in [k] $ and $ r\in [i_j] $ such that $ x_j^r \in D' $. Then, $ v_j \notin D' $ and the vertices in $ (X_j \cup Y_j)\setminus \{x_j^r\} $ are not dominated by $ D' $, which is a contradiction. Consequently, $ k $ vertices are not enough to dominate all the vertices of $ G $, that is, $ \gamma(G) \geq k+1 $. It follows that $ \gamma(G) = \gamma_2(G) $ for any $ G \in {\cal S} $.
Next, suppose that $ H $ is an induced subgraph of $ G $ with minimum degree at least two. If $H$ does not contain any subdivision vertices, we have $ \delta(H)=0 $, a contradiction. Thus, $ H $ contains a subdivision vertex. Let $ x_p^q \in V(H)$ for some $ p\in [k] $ and $ q\in [i_p] $. Since $ \deg_G(x_p^q)=2 $, both neighbors of $ x_p^q $ must be in $ V(H) $, i.e., $ N_G(x_p^q) = \{v,v_p\}\subseteq V(H) $. Since $ \delta(H) \geq 2 $ by the assumption, using an argument similar to the above, we have $ \deg_H(v_p)\geq 2 $. Thus, $ |((X_p \cup Y_p)\setminus \{x_p^q\})\cap V(H)| \geq 1 $. Consequently, $ H \in {\cal S} $ and, as proved above, $ \gamma(H) = \gamma_2(H) $ holds for every induced subgraph of $ G $ with minimum degree at least two.
To prove the converse, assume that $ G $ is a connected $ (\gamma, \gamma_2) $-perfect graph. Note that $ \gamma(C_n) = \ceil{\frac{n}{3}} \ $ and $ \gamma_2(C_n) = \ceil{\frac{n}{2}}$, where $ n\ge 3 $. Thus, the $(\gamma,\gamma_2)$-perfect graph $ G $ does not contain an induced cycle $ C_n $, where $ n =3 $ or $ n\geq 5 $.
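The two cycle formulas are easily confirmed by brute force for small $n$ (exhaustive search, illustration only):

```python
from itertools import combinations
from math import ceil

def cycle(n):
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def smallest(adj, need):
    """Brute-force gamma (need = 1) or gamma_2 (need = 2)."""
    for size in range(1, len(adj) + 1):
        for D in map(set, combinations(adj, size)):
            if all(v in D or len(adj[v] & D) >= need for v in adj):
                return size

for n in range(3, 10):
    assert smallest(cycle(n), 1) == ceil(n / 3)   # gamma(C_n)
    assert smallest(cycle(n), 2) == ceil(n / 2)   # gamma_2(C_n)
```

The two values coincide exactly for $n = 4$, in line with the claim that only induced $4$-cycles may occur.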
\medskip
\begin{figure}[h]
\centerline{\includegraphics[width=0.5\textwidth]{H1_H3.eps}}
\caption{The graphs $ H_1 $, $ H_2 $ and $ H_3 $}
\label{fig:induced3Graphs}
\end{figure}
Now, suppose that $ G $ has a non-induced subgraph isomorphic to $ C_r $, for some $ r\geq 5 $. Since all of its induced cycles are $ 4 $-cycles, $ G $ contains at least one of the three graphs $ H_1, H_2 $ and $ H_3 $, shown in Figure \ref{fig:induced3Graphs}, as an induced subgraph. Observe that $ \gamma(H_i) < \gamma_2(H_i) $ for all $ i \in \{1,2,3\} $. This contradicts our assumption that $ G $ is a $ (\gamma, \gamma_2)$-perfect graph. Thus, $ G $ does not contain a cycle $ C_r $, where $ r\neq 4 $.
Since $ G $ is $ (\gamma, \gamma_2)$-perfect by the assumption, then the equality $ \gamma(G)=\gamma_2(G) $ holds. By Proposition \ref{prop:1}, we know that $ G\in {\cal G} $. Thus, if $ D $ is a minimum $ 2 $-dominating set of $ G $, then $ D $ is independent and $ F=G^2[D] $ is the underlying graph of $ G $.
First, note that $ F $ does not contain a cycle $ C_r $ for $ r\geq 3 $. Otherwise, $ G $ would contain a subgraph isomorphic to $ C_{2r} $, which is a contradiction. Thus, $ F $ is a forest and $G\in {\cal G}_1(F)$. Now suppose that $ F $ is not connected. Since $ G $ is connected, there is a supplementary edge $ e = uv $, where $ u $ and $ v $ are two subdivision vertices of $ G $ such that $N_G(u)\cap V(F)$ and $N_G(v)\cap V(F)$ lie in different components of $ F $. By the definition of the graph class $ {\cal G}_1$, there are two vertices $ u' $ and $ v' $ such that $N_G(u) \cap V(F) = N_G(u') \cap V(F) $ and $N_G(v) \cap V(F) = N_G(v') \cap V(F) $. Let $ \{x_1,x_2\} = N_G(u) \cap V(F) $ and $ \{x_3,x_4\} = N_G(v) \cap V(F) $, where the sets $ \{x_1,x_2\} $ and $ \{x_3,x_4\} $ lie in different components of $ F $. Consider the set $ A = \{x_1,x_2,x_3,x_4,u,v,u',v'\} $ and the induced subgraph $ G[A] $. It is easy to check that $\delta(G[A]) \ge 2$, $\gamma(G[A])\le 3$ and $\gamma_2(G[A])=4 $, which is a contradiction. Thus, $ F $ is a tree.
Suppose that $ G $ has a supplementary edge $ e=uv \in E(G) $, where $ u,v \in V(G)\setminus V(F) $. Let $ N_G(u) \cap V(F) = \{x_1,x_2\} $ and $ N_G(v) \cap V(F) = \{x_3,x_4\} $. Note that $|\{ x_1,x_2\} \cap \{x_3,x_4\}|\leq 1 $, otherwise $ G $ would contain a subgraph isomorphic to $ C_3 $. By Lemma~\ref{lem:2nghbrs}, there exist two further vertices $u'$ and $v'$ satisfying $ N_G(u') \cap V(F) = \{x_1,x_2\} $ and $ N_G(v') \cap V(F) = \{x_3,x_4\} $. If $|\{ x_1,x_2\} \cap \{x_3,x_4\}|= 1 $, then without loss of generality, assume that $ x_2 = x_3 $. Then, there is a subgraph of $ G $ isomorphic to $ C_3 $ induced by the vertices $ u$, $v$ and $x_2 $, a contradiction. If $\{ x_1,x_2\} \cap \{x_3,x_4\} = \emptyset$, then let $ S= \{x_1,x_2,x_3,x_4,u,v,u',v'\}$. A similar argument applied to the subgraph of $ G $ induced by the vertex set $ S $ yields the inequality $ \gamma(G[S])\le 3 < \gamma_2(G[S])=4$. Thus, $ G $ does not have any supplementary edges.
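The eight-vertex gadget used in the last two paragraphs can be verified by brute force. A Python sketch with labels following the proof (`u2`, `v2` standing for $u'$, $v'$):

```python
from itertools import combinations

# u and u' see x1, x2; v and v' see x3, x4; uv is the supplementary edge.
edges = [("u", "x1"), ("u", "x2"), ("u2", "x1"), ("u2", "x2"),
         ("v", "x3"), ("v", "x4"), ("v2", "x3"), ("v2", "x4"), ("u", "v")]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def smallest(need):
    """Brute-force gamma (need = 1) or gamma_2 (need = 2)."""
    for size in range(1, len(adj) + 1):
        for D in map(set, combinations(adj, size)):
            if all(w in D or len(adj[w] & D) >= need for w in adj):
                return size

assert min(len(adj[w]) for w in adj) >= 2      # minimum degree at least two
gamma, gamma2 = smallest(1), smallest(2)
assert (gamma, gamma2) == (3, 4)
```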
\medskip
\begin{figure}[h]
\centerline{\includegraphics[width=0.3\textwidth]{H4.eps}}
\caption{The graph $ H_4 $}
\label{fig:H4}
\end{figure}
Suppose that $ F $ contains a subgraph isomorphic to $ P_4 $. Since $ G $ does not have a supplementary edge, it contains an induced subgraph isomorphic to $ H_4 $ given in Figure \ref{fig:H4}. Note that $ \delta(H_4) \geq 2 $ and $ 3=\gamma(H_4) < \gamma_2(H_4) = 4 $, which contradicts the assumption that $ G $ is $ (\gamma, \gamma_2)$-perfect. Thus, $ F $ is a star, $G\in {\cal G}_1(F)$, and $G$ does not contain supplementary edges. This finishes the proof by Proposition~\ref{prop:perfect}.
\end{proof}
The graph obtained from an edge by attaching two pendant edges to each of its ends will be called $ T_6 $ (see Fig.~\ref{fig:T} for an illustration).
\medskip
\begin{figure}[h]
\centerline{\includegraphics[width=0.15\textwidth]{T.eps}}
\caption{The graph $ T_6 $}
\label{fig:T}
\end{figure}
\begin{prop}
\label{prop:S}
$ G \in {\cal S} $ if and only if $ G $ is a connected graph with $ \delta(G)\geq 2 $ and it contains no subgraph isomorphic to any of $T_6,P_8$, or $C_k $ where $ k\neq 4 $.
\end{prop}
\begin{proof}
If $ G \in {\cal S} $, then it is easy to see that $ G $ is a connected graph with $ \delta(G)\geq 2 $ and it does not contain a subgraph isomorphic to $T_6,P_8$, or $C_k $ where $ k\neq 4 $.
Now, assume that $ G $ is a connected graph of minimum degree at least two which does not contain a subgraph isomorphic to $T_6,P_8$, or $C_k $ where $ k\neq 4 $. Note that $G$ is bipartite, since it contains no odd cycles. We further have $\min\{\deg_G(u),\deg_G(v)\}=2$ for each edge $uv \in E(G)$, since $\delta(G) \ge 2$ and $G$ does not contain a subgraph isomorphic to $T_6$ or $C_3$.
First, suppose that $ G $ contains an edge $ e=uv \in E(G) $, which is a bridge. Then $ G-e $ has two components, say $ G_1 $ and $ G_2$. Since $ \delta(G)\geq 2 $, both $G_1$ and $G_2$ are non-trivial graphs and may contain at most one vertex, namely either $u$ or $v$, which is of degree 1. Thus, both of the components contain a cycle. These cycles must be vertex-disjoint $ 4 $-cycles with a path between them. Hence, $ G $ contains a subgraph isomorphic to $ P_8 $ and this contradicts our assumption.
Since $ G $ does not contain a bridge, every edge of $ G $ lies on a $ 4 $-cycle. If all the vertices of $ G $ have degree two, then $ G $ is isomorphic to $ C_4 $ and $ G \in {\cal S} $. If $ G $ is not isomorphic to $ C_4 $, then every $ 4 $-cycle contains a vertex of degree at least three. For a vertex $ v $ of degree two, we define the function $ f(v) $ to denote the vertex opposite to $ v $ in a $4 $-cycle. Let $ A = \{v \in V(G): \deg(v) \geq 3$ or $\deg(f(v))\geq 3\} $.
Consider two vertices $ u,v \in A $. If $ uv \in E(G) $, then $ uv $ belongs to a $ 4 $-cycle, say $ uvv'u' $. At least one of $ u $ and $ v $ is of degree two; without loss of generality, say $ \deg(u)=2 $. Thus, $ u $ belongs only to this $ 4 $-cycle. Since $ f(u) = v' $, by the definition of $ A $ we have $ \deg(v')\geq 3 $. If $ \deg(v)\geq 3 $, then the edge $ vv' $ joins two vertices of degree at least three, a contradiction. If $ \deg(v)= 2 $, then $ v\in A $ implies $ \deg(f(v))\geq 3 $, and $ v $ belongs only to the $ 4 $-cycle $ uvv'u' $. Thus, $ f(v)=u' $, $ \deg(u')\geq 3 $, and the edge $ u'v' $ joins two vertices of degree at least three, which is again a contradiction. Hence, $ A $ is independent.
Consider two vertices $ u,v \in V(G) \setminus A $. If $ uv \in E(G) $, then at least one of $ f(u) $ or $ f(v) $ is of degree at least three. Then, by the definition of the function $ f $, we have $ u \in A $ or $ v \in A $, which is a contradiction. Hence, $ V(G) \setminus A $ is independent.
Consequently, $ (A, V(G)\setminus A) $ is a bipartition of $ V(G) $. Note that every $ 4 $-cycle has exactly two vertices in $ A $. Hence, $G^A \in {\cal G}_1(F)$ where $F\cong G^2[A]$, and there are no supplementary edges. Since $G$ does not have a subgraph isomorphic to $C_n$ for $n\ge 6$, the underlying graph is a tree. If $ F $ contains a subgraph isomorphic to $ P_4 $, then $ G $ contains a subgraph isomorphic to $ P_8 $, which is a contradiction. Thus, $ F $ is a star, and Proposition~\ref{prop:perfect} implies that $G\in {\cal S}$.
\end{proof}
Thus, Proposition~\ref{prop:S} allows us to state Theorem~\ref{thm:perfect} in a different form as follows.
\begin{thm}
Let $ G $ be a connected graph with $ \delta(G)\geq 2 $. Then $ G $ is a $ (\gamma,\gamma_2 )$-perfect graph if and only if $ G $ contains no subgraph isomorphic to any of $T_6,P_8$, or $C_k $ where $ k\neq 4 $.
\end{thm}
Note that for any $G\in {\cal S}$, the center of the underlying star can be chosen as a vertex $v$ of degree $\Delta(G)$ and then, the subdivision vertices are exactly those contained in $N_G(v)$. Therefore, the characterization given in Theorem~\ref{thm:perfect} directly yields a polynomial-time algorithm which recognizes $(\gamma, \gamma_2)$-perfect graphs.
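A recognition sketch along these lines in Python (our own code; it assumes the input graph is connected and is given as an adjacency dictionary):

```python
def in_class_S(adj):
    """Check membership in S: a center of maximum degree whose neighbors are
    exactly the degree-2 subdivision vertices, each leading to a leaf that has
    at least two subdivision neighbors and no other neighbors."""
    v = max(adj, key=lambda u: len(adj[u]))
    subdiv = adj[v]
    if any(len(adj[x]) != 2 for x in subdiv):
        return False
    leaves = {next(iter(adj[x] - {v})) for x in subdiv}
    if set(adj) != {v} | subdiv | leaves or v in leaves or subdiv & leaves:
        return False
    return all(len(adj[u]) >= 2 and adj[u] <= subdiv for u in leaves)

c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}      # C_4 = S(2)
c6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
assert in_class_S(c4)
assert not in_class_S(c6)
```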
\section{Concluding remarks and open problems} \label{sec:5}
In Section 1, we defined the graph class ${\cal G}$ which contains all $(\gamma, \gamma_2)$-graphs. Then, in Section 2, we gave a characterization for $(\gamma, \gamma_2)$-graphs over a specified subclass ${\cal H}$ of ${\cal G}$. In the definition of ${\cal H}$ and in the proof of the main theorem, we referred to the properties of the
\begin{figure}
\centerline{\includegraphics[width=0.7\textwidth]{Example_nonIsoF.eps}}
\caption{$ G^* $ is a graph with $ \gamma (G^*)= \gamma_2 (G^*) = 6$, which has two non-isomorphic underlying graphs and $ G^* \in {\cal H}(F_1) \cap {\cal H}(F_2) $.}
\label{fig:nonIso}
\end{figure}
underlying graph. We noted there that the underlying graph is not always unique when a graph $G$ from ${\cal G}$ is given. In Figure \ref{fig:nonIso}, we show a $ (\gamma,\gamma_2) $-graph having two non-isomorphic underlying graphs. Analogously, one can construct infinitely many graphs with the same property.
In the definition of the class ${\cal H}$, we forbid $ 3 $-cycles and $ 4 $-cycles in the underlying graph. The characterization given in Theorem~\ref{thm:2} does not hold if $ 3 $-cycles are not forbidden in the underlying graph. This is shown by the graph $A_4^* \in {\cal G}_2(F)$ (see Figure~\ref{fig:A4_star}), where the underlying graph $F$ is a star supplemented by an edge. One can readily check that even if $A_4^*$ contains an induced $A_4^W$ subgraph, it remains a $(\gamma, \gamma_2)$-graph as $\gamma(A_4^*)=\gamma_2(A_4^*)=5$. Similarly, it is possible to construct graphs whose underlying graphs are $C_3$-free but not $C_4$-free such that the statement of Theorem~\ref{thm:2} does not remain valid for them. Therefore, the following problems are still open.
\begin{figure}[h]
\centerline{\includegraphics[width=0.3\textwidth]{A4_star.eps}}
\caption{The graph $ A_4^* $ }
\label{fig:A4_star}
\end{figure}
\begin{prob}
Characterize $(\gamma, \gamma_2)$-graphs over the following graph classes:
\begin{enumerate}
\item Over the subclass of ${\cal G}_2$ where the underlying graph does not contain any $C_4$ subgraphs;
\item Over the subclass of ${\cal G}_2$ where the underlying graph is $C_3$-free;
\item Over ${\cal G}_2$.
\end{enumerate}
\end{prob}
\vspace{1.5cc}
\noindent \textbf{Acknowledgment.} Research of Csilla Bujt\'as was partially supported by the Slovenian Research Agency under the project N1-0108.
\vspace{2cc}
\bibliographystyle{abbrv}
| {
"timestamp": "2021-01-05T02:24:19",
"yymm": "1907",
"arxiv_id": "1907.07866",
"language": "en",
"url": "https://arxiv.org/abs/1907.07866",
"abstract": "The 2-domination number $\\gamma_2(G)$ of a graph $G$ is the minimum cardinality of a set $ D \\subseteq V(G) $ for which every vertex outside $ D $ is adjacent to at least two vertices in $ D $. Clearly, $ \\gamma_2(G) $ cannot be smaller than the domination number $ \\gamma(G) $. We consider a large class of graphs and characterize those members which satisfy $\\gamma_2=\\gamma$. For the general case, we prove that it is NP-hard to decide whether $\\gamma_2=\\gamma$ holds. We also give a necessary and sufficient condition for a graph to satisfy the equality hereditarily.",
"subjects": "Combinatorics (math.CO)",
"title": "On the equality of domination number and $ 2 $-domination number"
} |
https://arxiv.org/abs/1610.04161 | Why Deep Neural Networks for Function Approximation? | Recently there has been much interest in understanding why deep neural networks are preferred to shallow networks. We show that, for a large class of piecewise smooth functions, the number of neurons needed by a shallow network to approximate a function is exponentially larger than the corresponding number of neurons needed by a deep network for a given degree of function approximation. First, we consider univariate functions on a bounded interval and require a neural network to achieve an approximation error of $\varepsilon$ uniformly over the interval. We show that shallow networks (i.e., networks whose depth does not depend on $\varepsilon$) require $\Omega(\text{poly}(1/\varepsilon))$ neurons while deep networks (i.e., networks whose depth grows with $1/\varepsilon$) require $\mathcal{O}(\text{polylog}(1/\varepsilon))$ neurons. We then extend these results to certain classes of important multivariate functions. Our results are derived for neural networks which use a combination of rectifier linear units (ReLUs) and binary step units, two of the most popular type of activation functions. Our analysis builds on a simple observation: the multiplication of two bits can be represented by a ReLU. | \section{Introduction}
Neural networks have drawn significant interest from the machine learning community, especially due to their recent empirical successes (see the survey \citep{bengio2009learning}). Neural networks are used to build state-of-the-art systems in various applications such as image recognition, speech recognition, natural language processing and others (see, \citealt{krizhevsky2012imagenet}; \citealt{goodfellow2013maxout}; \citealt{wan2013regularization}, for example). The result that neural networks are universal approximators is one of the theoretical results most frequently cited to justify their use in these applications. Numerous results have shown the universal approximation property of neural networks for different function classes (see, e.g., \citealt{cybenko1989approximation}; \citealt{hornik1989multilayer}; \citealt{funahashi1989approximate}; \citealt{hornik1991approximation}; \citealt{chui1992approximation}; \citealt{barron1993universal}; \citealt{poggio2015notes}).
All these results and many others provide upper bounds on the network size and assert that small approximation error can be achieved if the network size is sufficiently large. More recently, there has been much interest in understanding the approximation capabilities of deep versus shallow networks. \citet{delalleau2011shallow} have shown that there exist deep sum-product networks which cannot be approximated by shallow sum-product networks unless they use an exponentially larger number of units or neurons. \citet{montufar2014number} have shown that the number of linear regions increases exponentially with the number of layers in the neural network. \citet{telgarsky2016benefits} has established such a result for feedforward neural networks, the model studied in this paper. \citet{eldan2015power} have shown that, to approximate a specific function, a two-layer network requires an exponential number of neurons in the input dimension, while a three-layer network requires a polynomial number of neurons. These recent papers demonstrate the power of deep networks by showing that depth can lead to an exponential reduction in the number of neurons required, for specific functions or specific neural networks. Our goal here is different: we are interested in function approximation specifically and would like to show that, for a given upper bound on the approximation error, shallow networks require exponentially more neurons than deep networks for a large class of functions.
The multilayer neural networks considered in this paper are allowed to use either rectifier linear units (ReLU) or binary step units (BSU), or any combination of the two. The main contributions of this paper are
\begin{itemize}[leftmargin=*]
\item We have shown that, for {$\varepsilon$-approximation} of functions with enough piecewise smoothness, a multilayer neural network which uses $\Theta(\log(1/\varepsilon))$ layers only needs $\mathcal{O}(\text{poly}\log(1/\varepsilon))$ neurons, while $\Omega(\text{poly}(1/\varepsilon))$ neurons are required by neural networks with $o(\log(1/\varepsilon))$ layers. In other words, shallow networks require exponentially more neurons than deep networks to achieve the same level of accuracy in function approximation.
\item We have shown that for all differentiable and strongly convex functions, multilayer neural networks need $\Omega(\log(1/\varepsilon))$ neurons to achieve an {$\varepsilon$-approximation}. Thus, our results for deep networks are tight.
\end{itemize}
The outline of this paper is as follows. In Section~\ref{sec::preliminary}, we present the necessary definitions and the problem statement. In Section~\ref{sec::upperbound}, we present upper bounds on the network size, while lower bounds are provided in Section~\ref{sec::lowerbound}. Conclusions are presented in Section~\ref{sec::conclusions}. Around the same time that our paper was uploaded to arXiv, a similar paper by \citet{2016arXiv161001145Y} also appeared there. The results in the two papers are similar in spirit, but the details and the general approach are substantially different.
\section{Preliminaries and problem statement}\label{sec::preliminary}
In this section, we present definitions on feedforward neural networks and formally present the problem statement.
\subsection{Feedforward Neural Networks}
A \textit{feedforward neural network} is composed of layers of computational units and defines a unique function $\tilde{f}:\mathbb{R}^d\rightarrow\mathbb{R}$. Let $L$ denote the number of hidden layers, $N_{l}$ the number of units in layer $l$, and $N=\sum_{l=1}^{L}N_{l}$ the size of the neural network. Let the vector $\mbox{\boldmath$x$}=(x^{(1)},...,x^{(d)})$ denote the input of the neural network, $z_{j}^{l}$ the output of the $j$th unit in layer $l$, $w_{i,j}^{l}$ the weight of the edge connecting unit $i$ in layer $l$ to unit $j$ in layer $l+1$, and $b_{j}^{l}$ the bias of unit $j$ in layer $l$. The outputs of consecutive layers of the feedforward neural network are then related by the following iterations:
\begin{align*}z_{j}^{l+1}=\sigma\left(\sum_{i=1}^{N_{l}}w_{i,j}^{l}z_{i}^{l}+b_{j}^{l+1}\right),\quad l\in[L-1],j\in[N_{l+1}],\end{align*}
with
\begin{align*}\text{input layer: }&z_{j}^{1}=\sigma\left(\sum_{i=1}^{d}w_{i,j}^{0}x^{(i)}+b_{j}^{1}\right),\quad j\in[N_{1}],\\
\text{output layer: }&\tilde{f}(\mbox{\boldmath$x$})=\sigma\left(\sum_{i=1}^{N_{L}}w_{i}^{L}z_{i}^{L}+b^{L+1}\right).\end{align*}
Here, $\sigma(\cdot)$ denotes the activation function and $[n]$ denotes the index set $[n]=\{1,...,n\}$. In this paper, we only consider two important types of activation functions:
\begin{itemize}
\item{Rectifier linear unit: }$\sigma(x)=\max\{0,x\}$, $x\in\mathbb{R}$.
\item {Binary step unit: }$\sigma(x)=\mathbb{I}\{x\ge0\}, x\in\mathbb{R}$.
\end{itemize}
We refer to the number of layers and the number of neurons in the network as the \textit{depth} and the \textit{size} of the feedforward neural network, respectively. We use the set $\mathcal{F}(N,L)$ to denote the set of all functions implementable by feedforward neural networks of depth $L$ and size $N$, composed of any combination of rectifier linear units (ReLUs) and binary step units. We say that one feedforward neural network is \textit{deeper} than another if and only if it has a larger depth. Throughout this paper, the terms \textit{feedforward neural network} and \textit{multilayer neural network} are used interchangeably.
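For concreteness, the model just defined is easy to simulate directly. The following sketch (the helper names \texttt{relu}, \texttt{bsu} and \texttt{forward} are ours, not part of the paper's notation) evaluates a small feedforward network built from the two unit types:

```python
import numpy as np

def relu(x):
    """Rectifier linear unit: sigma(x) = max{0, x}."""
    return np.maximum(0.0, x)

def bsu(x):
    """Binary step unit: sigma(x) = 1{x >= 0}."""
    return (np.asarray(x) >= 0).astype(float)

def forward(x, weights, biases, activations):
    """Evaluate the layer iterations z^{l+1} = sigma(W^T z^l + b).

    weights[l] has shape (N_l, N_{l+1}); biases[l] has shape (N_{l+1},);
    activations[l] is applied elementwise, as in the definition above.
    """
    z = np.asarray(x, dtype=float)
    for W, b, sigma in zip(weights, biases, activations):
        z = sigma(z @ W + b)
    return z

# A toy network computing max(0, x^(1) + x^(2) - 1) with one hidden ReLU.
W1, b1 = np.array([[1.0], [1.0]]), np.array([-1.0])
W2, b2 = np.array([[1.0]]), np.array([0.0])
y = forward([0.7, 0.6], [W1, W2], [b1, b2], [relu, relu])
```

Note that the output layer also applies $\sigma$, matching the definition above.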
\subsection{Problem Statement}
In this paper, we focus on bounds on the size of feedforward neural networks used for function approximation. Given a function $f$, our goal is to understand whether a multilayer neural network $\tilde{f}$ of depth $L$ and size $N$ exists that satisfies \begin{equation}\label{eq::A}\min_{\tilde{f}\in \mathcal{F}(N,L)}\|f-\tilde{f}\|\le \varepsilon.\end{equation}
Specifically, we aim to answer the following questions:
\begin{itemize}
\item[1] Do there exist $L(\varepsilon)$ and $N(\varepsilon)$ such that \eqref{eq::A} is satisfied? We will refer to such $L(\varepsilon)$ and $N(\varepsilon)$ as upper bounds on the depth and size of the required neural network.
\item[2] Given a fixed depth $L$, what is the minimum value of $N$ such that \eqref{eq::A} is satisfied? We will refer to such an $N$ as a lower bound on the size of a neural network of a given depth $L$.
\end{itemize}
The first question asks what depth and size are sufficient to guarantee an $\varepsilon$-approximation. The second question asks, for a fixed depth, what the minimum size of a neural network required to guarantee an $\varepsilon$-approximation is. Tight answers to these two questions provide tight bounds on the network size and depth required for function approximation. Moreover, the answers to these two questions can be combined to address the following question: if a deeper neural network of size $N_{d}$ and a shallower neural network of size $N_{s}$ are used to approximate the same function with the same error, how fast does the ratio $N_{d}/N_{s}$ decay to zero as the error decays to zero?
\begin{figure*}[t]
\vspace{-0.9cm}
\centering
\includegraphics[width=0.8\linewidth]{binaryexpansion.pdf}
\vspace{-0.7cm}
\caption{An $n$-layer neural network structure for finding the binary expansion of a number in $[0,1]$.}
\label{fig::binaryexpansion}
\end{figure*}
\section{Upper bounds on function approximations}\label{sec::upperbound}
In this section, we present upper bounds on the size of the multilayer neural network which are sufficient for function approximation. Before stating the results, some notation and terminology deserve further explanation. First, the upper bound on the network size is the number of neurons required, at most, for approximating a given function with a certain error. Second, the notion of approximation is the $L_{\infty}$ distance: for two functions $f$ and $g$, the $L_{\infty}$ distance between them is their maximum point-wise disagreement over the cube $[0,1]^{d}$.
\subsection{Approximation of univariate functions}
In this subsection, we present all results on approximating univariate functions.
We first present a theorem on the size of the network for approximating a simple quadratic function. As part of the proof, we present the structure of the multilayer feedforward neural network used and show how the neural network parameters are chosen. Results on approximating general functions can be found in Theorems~\ref{thm::polynomials} and~\ref{thm::anyfunction}.
\begin{theorem}\label{thm::quadratic}
For the function $f(x)=x^{2},x\in[0,1]$, there exists a multilayer neural network $\tilde{f}(x)$ with $\mathcal{O}\left(\log\frac{1}{\varepsilon}\right)$ layers, $\mathcal{O}\left(\log\frac{1}{\varepsilon}\right)$ binary step units and $\mathcal{O}\left(\log\frac{1}{\varepsilon}\right)$ rectifier linear units such that $|f(x)-\tilde{f}(x)|\le\varepsilon,\quad \forall x\in[0,1].$
\end{theorem}
\begin{proof}
The proof is composed of three parts. For any $x\in[0,1]$, we first use the multilayer neural network to approximate $x$ by its finite binary expansion $\sum_{i=0}^{n}\frac{x_{i}}{2^{i}}$. We then construct a two-layer neural network to implement the function $f\left(\sum_{i=0}^{n}\frac{x_{i}}{2^{i}}\right)$. Finally, we analyze the approximation error.
Each $x\in[0,1]$ can be represented by its binary expansion $x=\sum_{i=0}^{\infty}\frac{x_{i}}{2^{i}},$ where $x_{i}\in\{0,1\}$ for all $i\ge0$. It is straightforward to see that the $n$-layer neural network shown in Figure~\ref{fig::binaryexpansion} can be used to find $x_{0},...,x_{n}$.
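The bit-extraction step can be checked numerically. The sketch below (our own illustration, not the exact network of Figure~\ref{fig::binaryexpansion}) extracts one bit per layer with a single binary step unit and subtracts its contribution from a running residual:

```python
def binary_bits(x, n):
    """Extract bits x_0, ..., x_n of x in [0, 1], one threshold per layer.

    Layer i computes x_i = 1{r >= 2^{-i}} with a binary step unit and
    passes the residual r - x_i / 2^i on to the next layer.
    """
    bits, r = [], x
    for i in range(n + 1):
        b = 1.0 if r >= 2.0 ** (-i) else 0.0
        bits.append(b)
        r -= b * 2.0 ** (-i)
    return bits

bits = binary_bits(0.8, 5)
approx = sum(b / 2.0 ** i for i, b in enumerate(bits))
# the residual after n + 1 layers is below 2^{-n}
```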
Next, we implement the function $\tilde{f}(x)=f\left(\sum_{i=0}^{n}\frac{x_{i}}{2^{i}}\right)$ by a two-layer neural network.
Since ${f(x)=x^{2}}$, we then rewrite $\tilde{f}(x)$ as follows:
\begin{align*}
\tilde{f}(x)&=\left(\sum_{i=0}^{n}\frac{x_{i}}{2^{i}}\right)^{2}=\sum_{i=0}^{n}\left[x_{i}\cdot\left(\frac{1}{2^{i}}\sum_{j=0}^{n}\frac{x_{j}}{2^{j}}\right)\right]=\sum_{i=0}^{n}\max\left(0,2(x_{i}-1)+\frac{1}{2^{i}}\sum_{j=0}^{n}\frac{x_{j}}{2^{j}}\right).
\end{align*}
The third equality follows from the fact that $x_{i}\in\{0,1\}$ for all $i$. Therefore, the function $\tilde{f}(x)$ can be implemented by a multilayer network containing a deep structure shown in Figure~\ref{fig::binaryexpansion} and another hidden layer with $n$ rectifier linear units. This multilayer neural network has $\mathcal{O}(n)$ layers, $\mathcal{O}(n)$ binary step units and $\mathcal{O}(n)$ rectifier linear units.
Finally, we consider the approximation error of this multilayer neural network, \begin{align*}|f(x)&-\tilde{f}(x)|=\left|x^{2}-\left(\sum_{i=0}^{n}\frac{x_{i}}{2^{i}}\right)^{2}\right|\le2\left|x-\sum_{i=0}^{n}\frac{x_{i}}{2^{i}}\right|=2\left|\sum_{i=n+1}^{\infty}\frac{x_{i}}{2^{i}}\right|\le\frac{1}{2^{n-1}}.\end{align*}
Therefore, in order to achieve an $\varepsilon$-approximation error, one should choose $n=\left\lceil\log_{2}\frac{1}{\varepsilon}\right\rceil+1$. In summary, the deep neural network has $\mathcal{O}\left(\log\frac{1}{\varepsilon}\right)$ layers, $\mathcal{O}\left(\log\frac{1}{\varepsilon}\right)$ binary step units and $\mathcal{O}\left(\log\frac{1}{\varepsilon}\right)$ rectifier linear units.
\end{proof}
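As a sanity check of the construction, the proof's identity $x_i\cdot 2^{-i}\tilde{x}=\max\left(0,2(x_i-1)+2^{-i}\tilde{x}\right)$ for $x_i\in\{0,1\}$ can be exercised numerically. The sketch below is our own illustration built on a greedy bit-extraction helper, not the paper's exact network:

```python
import numpy as np

def bits_of(x, n):
    """Greedy binary expansion x ~ sum_{i=0}^{n} x_i / 2^i for x in [0, 1]."""
    bits, r = [], x
    for i in range(n + 1):
        b = 1.0 if r >= 2.0 ** (-i) else 0.0
        bits.append(b)
        r -= b * 2.0 ** (-i)
    return bits

def approx_square(x, n):
    """ReLU-layer form of (sum_i x_i / 2^i)^2 used in the proof."""
    bits = bits_of(x, n)
    xt = sum(b / 2.0 ** i for i, b in enumerate(bits))
    # x_i * (xt / 2^i) equals max(0, 2(x_i - 1) + xt / 2^i) since x_i is 0 or 1
    return sum(max(0.0, 2.0 * (b - 1.0) + xt / 2.0 ** i)
               for i, b in enumerate(bits))

n = 12
err = max(abs(x * x - approx_square(x, n))
          for x in np.linspace(0.0, 1.0, 1001))
# err stays below 2^{-(n-1)}, as in the error analysis above
```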
Next, a theorem on the size of the network for approximating general polynomials is given as follows.
\begin{theorem}\label{thm::polynomials}
For polynomials $f(x)=\sum_{i=0}^{p}a_{i}x^{i}$, ${x\in[0,1]}$ and $\sum_{i=1}^{p}|a_{i}|\le1$, there exists a multilayer neural network $\tilde{f}(x)$ with $\mathcal{O}\left(p+\log\frac{p}{\varepsilon}\right)$ layers, $\mathcal{O}\left(\log\frac{p}{\varepsilon}\right)$ binary step units and $\mathcal{O}\left(p\log\frac{p}{\varepsilon}\right)$ rectifier linear units such that $|f(x)-\tilde{f}(x)|\le\varepsilon$, $\forall x\in[0,1].$
\end{theorem}
\begin{proof}
The proof is composed of three parts. We first use the deep structure shown in Figure~\ref{fig::binaryexpansion} to find the $n$-bit binary expansion $\sum_{i=0}^{n}\frac{x_{i}}{2^{i}}$ of $x$. Then we construct a multilayer network to approximate the monomials $g_{i}(x)=x^{i}$, $i=1,...,p$. Finally, we analyze the approximation error.
Using the same deep structure shown in Figure~\ref{fig::binaryexpansion}, we find the binary expansion sequence $\{x_{0},...,x_{n}\}$. In this step, we use $n$ binary step units in total. Now we rewrite $g_{m+1}\left(\sum_{i=0}^{n}\frac{x_{i}}{2^{i}}\right)$,
\begin{align}
g_{m+1}\left(\sum_{i=0}^{n}\frac{x_{i}}{2^{i}}\right)&=\sum_{j=0}^{n}\left[x_{j}\cdot\frac{1}{2^{j}}g_{m}\left(\sum_{i=0}^{n}\frac{x_{i}}{2^{i}}\right)\right]=\sum_{j=0}^{n}\max\left[2(x_{j}-1)+\frac{1}{2^j}g_{m}\left(\sum_{i=0}^{n}\frac{x_{i}}{2^{i}}\right),0\right].\label{eq::iteration}
\end{align}
Clearly, equation \eqref{eq::iteration} defines a recursion between the outputs of neighboring layers.
Therefore, the deep neural network shown in Figure~\ref{fig::quadratic} can be used to implement the iteration given by \eqref{eq::iteration}. Implementing this network requires $\mathcal{O}(p)$ layers with $\mathcal{O}(pn)$ rectifier linear units in total.
\begin{figure*}[t]
\vspace{-0.5cm}
\centering
\includegraphics[width=1\linewidth]{universalapproximation.pdf}
\vspace{-1cm}
\caption{Implementation of a polynomial function.}
\label{fig::quadratic}
\vspace{-0.2cm}
\end{figure*}
We now define the output of the multilayer neural network as
$\tilde{f}(x)=\sum_{i=0}^{p}a_{i}g_{i}\left(\sum_{j=0}^{n}\frac{x_{j}}{2^{j}}\right).$
For this multilayer network, the approximation error is
\begin{align*}
|f(x)-\tilde{f}(x)|&=\left|\sum_{i=0}^{p}a_{i}g_{i}\left(\sum_{j=0}^{n}\frac{x_{j}}{2^{j}}\right)-\sum_{i=0}^{p}a_{i}x^{i}\right|\le \sum_{i=0}^{p}\left[|a_{i}|\cdot\left|g_{i}\left(\sum_{j=0}^{n}\frac{x_{j}}{2^{j}}\right)-x^{i}\right|\right]\le\frac{p}{2^{n-1}}
\end{align*}
This indicates that, to achieve an $\varepsilon$-approximation error, one should choose $n=\left\lceil\log\frac{p}{\varepsilon}\right\rceil+1$. Since we used $\mathcal{O}(n+p)$ layers with $\mathcal{O}(n)$ binary step units and $\mathcal{O}(pn)$ rectifier linear units in total, this multilayer neural network thus has $\mathcal{O}\left(p+\log\frac{p}{\varepsilon}\right)$ layers, $\mathcal{O}\left(\log\frac{p}{\varepsilon}\right)$ binary step units and $\mathcal{O}\left(p\log\frac{p}{\varepsilon}\right)$ rectifier linear units.
\end{proof}
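The recursion~\eqref{eq::iteration} is easy to simulate. Under our own bit-extraction helper (an illustration, not the exact network of Figure~\ref{fig::quadratic}), iterating it $p-1$ times reproduces $\tilde{x}^{p}$ within the error bound derived above:

```python
def bits_of(x, n):
    """Greedy binary expansion x ~ sum_{i=0}^{n} x_i / 2^i for x in [0, 1]."""
    bits, r = [], x
    for i in range(n + 1):
        b = 1.0 if r >= 2.0 ** (-i) else 0.0
        bits.append(b)
        r -= b * 2.0 ** (-i)
    return bits

def power_via_relu(x, p, n):
    """Iterate g_{m+1} = sum_j max(0, 2(x_j - 1) + g_m / 2^j) to get x-tilde^p."""
    bits = bits_of(x, n)
    g = sum(b / 2.0 ** j for j, b in enumerate(bits))  # g_1 = x-tilde
    for _ in range(p - 1):
        g = sum(max(0.0, 2.0 * (b - 1.0) + g / 2.0 ** j)
                for j, b in enumerate(bits))
    return g

n, p = 14, 5
err = max(abs(x ** p - power_via_relu(x, p, n))
          for x in [k / 200.0 for k in range(201)])
# err stays below p / 2^{n-1}, as in the proof
```

The max trick is valid here because $g_m\le1$ at every step, so the argument of the maximum is negative whenever $x_j=0$.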
In Theorem~\ref{thm::polynomials}, we have shown an upper bound on the size of a multilayer neural network for approximating polynomials. Observe that the number of neurons in the network grows as $p\log p$ with respect to $p$, the degree of the polynomial. We note that both \citet{andoni2014learning} and \citet{barron1993universal} showed that the sizes of the networks grow exponentially with respect to $p$ if only 3-layer neural networks are used for approximating polynomials.
Moreover, every function $f$ with $p+1$ continuous derivatives on a bounded set can easily be approximated by a polynomial of degree $p$. This is shown by the following well-known result on Lagrangian interpolation, which allows us to further generalize Theorem~\ref{thm::polynomials}. The proof can be found in the reference~\citep{gil2007numerical}.
\begin{lemma}[\textbf{Lagrangian interpolation at Chebyshev points}]\label{thm::interpolation}
If a function $f$ is defined at the points $z_{0},...,z_{n}$, $z_{i}=\cos((i+1/2)\pi/(n+1))$, $i=0,...,n$, there exists a polynomial of degree not more than $n$ such that
$P_{n}(z_{i})=f(z_{i})$, $i=0,...,n.$
This polynomial is given by
$P_{n}(x)=\sum_{i=0}^{n}f(z_{i})L_{i}(x) $
where
$L_{i}(x)=\frac{\pi_{n+1}(x)}{(x-z_{i})\pi'_{n+1}(z_{i})}$ and $\pi_{n+1}(x)=\prod_{j=0}^{n}(x-z_{j}).$
Additionally, if $f$ is continuous on $[-1,1]$ and $n+1$ times differentiable in $(-1,1)$, then
$$\left\|R_n\right\|=\left\|f-P_{n}\right\|\le\frac{1}{2^{n}(n+1)!}\left\|f^{(n+1)}\right\|,$$
where $f^{(n)}(x)$ is the derivative of $f$ of the $n$th order and $\left\|f\right\|$ is the $L_{\infty}$ norm ${\left\|f\right\|=\max_{x\in [-1,1]}|f(x)|}$.
\end{lemma}
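Lemma~\ref{thm::interpolation} can be verified numerically. The sketch below is our own construction: for $f=\exp$, which satisfies $\|f^{(n+1)}\|\le e$ on $[-1,1]$, it builds the Lagrange interpolant at the Chebyshev points and checks the stated error bound:

```python
import math
import numpy as np

def chebyshev_interpolant(f, n):
    """Lagrange interpolation of f at the n+1 Chebyshev points on [-1, 1]."""
    z = np.array([math.cos((i + 0.5) * math.pi / (n + 1)) for i in range(n + 1)])
    fz = f(z)
    def P(x):
        total = 0.0
        for i in range(n + 1):
            # L_i(x) = prod_{j != i} (x - z_j) / (z_i - z_j)
            L = np.prod([(x - z[j]) / (z[i] - z[j])
                         for j in range(n + 1) if j != i])
            total += fz[i] * L
        return total
    return P

n = 6
P = chebyshev_interpolant(np.exp, n)
err = max(abs(math.exp(x) - P(x)) for x in np.linspace(-1.0, 1.0, 201))
bound = math.e / (2.0 ** n * math.factorial(n + 1))  # ||f^{(n+1)}|| / (2^n (n+1)!)
```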
Then the upper bound on the network size for approximating more general functions follows directly from Theorem~\ref{thm::polynomials} and Lemma~\ref{thm::interpolation}.
\begin{theorem}\label{thm::anyfunction}
Assume that the function $f$ is continuous on $[0,1]$ and $\left\lceil\log\frac{2}{\varepsilon}\right\rceil+1$ times differentiable in $(0,1)$. Let $f^{(n)}$ denote the derivative of $f$ of the $n$th order and $\left\|f\right\|=\max_{x\in[0,1]}|f(x)|$. If ${\left\|f^{(n)}\right\|}\le{n!}$ holds for all ${n\in\left[\left\lceil\log\frac{2}{\varepsilon}\right\rceil+1\right]}$, then there exists a deep neural network $\tilde{f}$ with $\mathcal{O}\left(\log\frac{1}{\varepsilon}\right)$ layers, $\mathcal{O}\left(\log\frac{1}{\varepsilon}\right)$ binary step units, $\mathcal{O}\left(\left(\log\frac{1}{\varepsilon}\right)^2\right)$ rectifier linear units such that
$\left\|f-\tilde{f}\right\|\le\varepsilon$.
\end{theorem}
\begin{proof}
Let $N=\left\lceil\log\frac{2}{\varepsilon}\right\rceil$. From Lemma~\ref{thm::interpolation}, it follows that there exists polynomial $P_{N}$ of degree $N$ such that for any $x\in[0,1]$, $$\left|f(x)-P_N(x)\right|\le\frac{\left\|f^{(N+1)}\right\|}{2^{N}(N+1)!}\le\frac{1}{2^{N}}.$$
Let $x_{0},...,x_{N}$ denote the first $N+1$ bits of the binary expansion of $x$ and define ${\tilde{f}(x)=P_{N}\left(\sum_{i=0}^{N}\frac{x_{i}}{2^{i}}\right)}.$
In the following, we first analyze the approximation error of $\tilde{f}$ and next show the implementation of this function. Let $\tilde{x}=\sum_{i=0}^{N}\frac{x_{i}}{2^{i}}$. The error can now be upper bounded by
\begin{align*}
|f(x)-\tilde{f}(x)|&=\left|f(x)-P_{N}\left(\tilde{x}\right)\right|\le \left|f(x)-f\left(\tilde{x}\right)\right|+\left|f\left(\tilde{x}\right)-P_{N}\left(\tilde{x}\right)\right|\\
&\le\left\|f^{(1)}\right\|\cdot\left|x-\sum_{i=0}^{N}\frac{x_{i}}{2^{i}}\right|+\frac{1}{2^{N}}\le \frac{1}{2^{N}}+\frac{1}{2^{N}}\le \varepsilon
\end{align*}
In the following, we describe the implementation of $\tilde{f}$ by a multilayer neural network. Since $P_N$ is a polynomial of degree $N$, function $\tilde{f}$ can be rewritten as
$$\tilde{f}(x)=P_{N}\left(\sum_{i=0}^{N}\frac{x_{i}}{2^{i}}\right)=\sum_{n=0}^{N}c_{n}g_{n}\left(\sum_{i=0}^{N}\frac{x_{i}}{2^{i}}\right)$$
for some coefficients $c_{0},...,c_{N}$ and $g_{n}(x)=x^{n}$, $n=0,...,N$. Hence, the multilayer neural network shown in Figure~\ref{fig::quadratic} can be used to implement $\tilde{f}(x)$. Notice that the network uses $\mathcal{O}(N)$ layers with $\mathcal{O}(N)$ binary step units in total to decode $x_0$,...,$x_N$ and $\mathcal{O}(N)$ layers with $\mathcal{O}(N^2)$ rectifier linear units in total to construct the polynomial $P_{N}$. Substituting $N=\left\lceil\log\frac{2}{\varepsilon}\right\rceil$, we have proved the theorem.
\end{proof}
\textbf{Remark: }Note that, to implement the architecture in Figure~\ref{fig::quadratic} using the definition of a feedforward neural network in Section~\ref{sec::preliminary}, we need the outputs $g_i$, $i\in[p]$, at the output layer. This can be accomplished by using $\mathcal{O}(p^2)$ additional ReLUs. Since $p=\mathcal{O}(\log(1/\varepsilon))$, this does not change the order result in Theorem~\ref{thm::anyfunction}.
Theorem~\ref{thm::anyfunction} shows that any function $f$ with enough smoothness can be approximated with $\varepsilon$ error by a multilayer neural network containing polylog$\left(\frac{1}{\varepsilon}\right)$ neurons. Further, Theorem~\ref{thm::anyfunction} can be used to show that, for functions $h_{1}$,...,$h_{k}$ with enough smoothness, linear combinations, multiplications and compositions of these functions can likewise be approximated with $\varepsilon$ error by multilayer neural networks containing polylog$\left(\frac{1}{\varepsilon}\right)$ neurons. Specific results are given in the following corollaries.
\begin{corollary}[Function addition]\label{cor::functionadd} Suppose that all functions $h_{1},...,h_{k}$ satisfy the conditions in Theorem~\ref{thm::anyfunction}, and the vector $\bm{\beta}\in\{\bm{\omega}\in\mathbb{R}^{k}:\left\|\bm{\omega}\right\|_{1}=1\}$, then for the linear combination $f=\sum_{i=1}^{k}\beta_{i}h_{i}$, there exists a deep neural network $\tilde{f}$ with $\mathcal{O}\left(\log\frac{1}{\varepsilon}\right)$ layers, $\mathcal{O}\left(\log\frac{1}{\varepsilon}\right)$ binary step units, $\mathcal{O}\left(\left(\log\frac{1}{\varepsilon}\right)^2\right)$ rectifier linear units such that
$|f(x)-\tilde{f}(x)|\le\varepsilon$, $\forall x\in[0,1].$
\end{corollary}
\textbf{Remark: }Clearly, Corollary~\ref{cor::functionadd} follows directly from the fact that the linear combination $f$ satisfies the conditions in Theorem~\ref{thm::anyfunction} if all the functions $h_{1}$,...,$h_{k}$ satisfy those conditions. We note here that the upper bound on the network size for approximating linear combinations is independent of $k$, the number of component functions.
\begin{corollary}[Function multiplication]\label{cor::functionmul} Suppose that all functions $h_{1}$,...,$h_{k}$ are continuous on $[0,1]$ and $\left\lceil4k\log_{2}4k+4k+2\log_{2}\frac{2}{\varepsilon}\right\rceil+1$ times differentiable in $(0,1)$. If ${\|h_{i}^{(n)}\|}\le{n!}$ holds for all ${i\in[k]}$ and $n\in\left[\left\lceil4k\log_{2}4k+4k+2\log_{2}\frac{2}{\varepsilon}\right\rceil+1\right]$ then for the multiplication $f=\prod_{i=1}^{k}h_{i}$, there exists a multilayer neural network $\tilde{f}$ with $\mathcal{O}\left(k\log k+\log\frac{1}{\varepsilon}\right)$ layers, $\mathcal{O}\left(k\log k+\log\frac{1}{\varepsilon}\right)$ binary step units and $\mathcal{O}\left((k\log k)^{2}+\left(\log\frac{1}{\varepsilon}\right)^{2}\right)$ rectifier linear units
such that
$|f(x)-\tilde{f}(x)|\le \varepsilon$, $\forall {x\in[0,1]}.$
\end{corollary}
\begin{corollary}[Function composition]\label{cor::functioncom} Suppose that all functions $h_{1},...,h_{k}:[0,1]\rightarrow[0,1]$ satisfy the conditions in Theorem~\ref{thm::anyfunction}, then for the composition $f=h_{1}\circ h_{2}\circ...\circ h_{k}$, there exists a multilayer neural network $\tilde{f}$ with $\mathcal{O}\left(k\log k\log\frac{1}{\varepsilon}+\log k\left(\log\frac{1}{\varepsilon}\right)^{2}\right)$ layers, $\mathcal{O}\left(k\log k\log\frac{1}{\varepsilon}+\log k\left(\log\frac{1}{\varepsilon}\right)^{2}\right)$ binary step units and $\mathcal{O}\left(k^{2}\left(\log\frac{1}{\varepsilon}\right)^{2}+\left(\log\frac{1}{\varepsilon}\right)^{4}\right)$ rectifier linear units such that
$|f(x)-\tilde{f}(x)|\le \varepsilon$, $\forall x\in[0,1].$
\end{corollary}
\textbf{Remark: }Proofs of Corollaries~\ref{cor::functionmul} and \ref{cor::functioncom} can be found in the appendix. We observe that, in contrast to the case of linear combinations, the upper bound on the network size grows as $k^{2}\log^{2}k$ in the case of function multiplication and as $k^{2}\left(\log\frac{1}{\varepsilon}\right)^{2}$ in the case of function composition, where $k$ is the number of component functions.
In this subsection, we have shown a poly$\log\left(\frac{1}{\varepsilon}\right)$ upper bound on the network size for $\varepsilon$-approximation of both univariate polynomials and general univariate functions with enough smoothness. We have also shown that linear combinations, multiplications and compositions of univariate functions with enough smoothness can likewise be approximated with $\varepsilon$ error by a multilayer neural network of size poly$\log\left(\frac{1}{\varepsilon}\right)$. In the next subsection, we will show upper bounds on the network size for approximating multivariate functions.
\subsection{Approximation of multivariate functions}\label{sec::multivariate}
In this subsection, we present all results on approximating multivariate functions. We first present a theorem on the upper bound on the neural network size for approximating a product of multivariate linear functions. We next present a theorem on the upper bound on the neural network size for approximating general multivariate polynomial functions. Finally, similar to the results in the univariate case, we present the upper bound on the neural network size for approximating the linear combination, the multiplication and the composition of multivariate functions with enough smoothness.
\begin{theorem}\label{thm::multinomial}
Let $W=\{\bm{w}\in\mathbb{R}^{d}:\left\|\bm{w}\right\|_{1}=1\}$.
For $f(\bm{x})=\prod_{i=1}^{p}\left( \bm{w}_{i}^{T}\bm{ x}\right)$, $\bm{x}\in[0,1]^{d}$ and $\bm{w}_{i}\in W$, $i=1,...,p$, there exists a deep neural network $\tilde{f}(\bm{x})$ with $\mathcal{O}\left(p+\log\frac{pd}{\varepsilon}\right)$ layers and $\mathcal{O}\left(\log\frac{pd}{\varepsilon}\right)$ binary step units and $\mathcal{O}\left(pd\log\frac{pd}{\varepsilon}\right)$ rectifier linear units such that
$|f(\bm{x})-\tilde{f}(\bm{x})|\le\varepsilon$, $\forall \bm{x}\in[0,1]^{d}.$
\end{theorem}
\iffalse
\begin{proof}
The proof is composed of two parts. As before, we first use the deep structure shown in Figure~\ref{fig::binaryexpansion} to find the binary expansion of $\bm{x}$ and next use a multilayer neural network to approximate the polynomial.
Let $\bm{x}=(x^{(1)},...,x^{(d)})$ and $\bm{w}_{i}=(w_{i1},...,w_{id})$. We could now use the deep structure shown in Figure~\ref{fig::binaryexpansion} to find the binary expansion for each $x^{(k)}$, $k\in [d]$. Let $\tilde{x}^{(k)}=\sum_{r=0}^{n}\frac{x_{r}^{(k)}}{2^{r}}$ denote the binary expansion of~$x^{(k)}$, where $x_{r}^{(k)}$ is the $r$th bit in the binary expansion of $x^{(k)}$. Obviously, to decode all the $n$-bit binary expansions of all $x^{(k)}$, $k\in[d]$, we need a multilayer neural network with $n$ layers and $dn$ binary units in total. Besides, we let $\tilde{\bm{x}}=(\tilde{x}^{(1)},...,\tilde{x}^{(d)})$. Now we define $$\tilde{f}({\bm{x}})=f(\tilde{\bm{x}})=\prod_{i=1}^{p}\left(\sum_{k=1}^{d}w_{ik}\tilde{x}^{(k)}\right).$$
We further define $$g_{l}(\tilde{\bm{x}})=\prod_{i=1}^{l}\left(\sum_{k=1}^{d}w_{ik}\tilde{x}^{(k)}\right).$$
Since for $l=1,...,p-1$,$$g_{l}(\tilde{\bm{x}})=\prod_{i=1}^{l}\left(\sum_{k=1}^{d}w_{ik}\tilde{x}^{(k)}\right)\le\prod_{i=1}^{l}\left\|\bm{w}_{i}\right\|_{1}=1,$$
then we can rewrite $g_{l+1}(\tilde{\bm{x}})$, $l=1,...,p-1$ into
\begin{align}
&g_{l+1}(\tilde{\bm{x}})=\prod_{i=1}^{l+1}\left(\sum_{k=1}^{d}w_{ik}\tilde{x}^{(k)}\right)=\sum_{k=1}^{d}\left[w_{(l+1)k}\tilde{x}^{(k)}\cdot g_{l}(\tilde{\bm{x}})\right]\notag\\
&=\sum_{k=1}^{d}\left\{w_{(l+1)k}\sum_{r=0}^{n}\left[x_{r}^{(k)}\cdot\frac{g_{l}(\tilde{\bm{x}})}{2^{r}}\right]\right\}\notag\\
&=\sum_{k=1}^{d}\left\{w_{(l+1)k}\sum_{r=0}^{n}\max\left[2(x_{r}^{(k)}-1)+\frac{g_{l}(\tilde{\bm{x}})}{2^{r}},0\right]\right\}\label{eq::seconditeration}
\end{align}
Obviously, the equation \eqref{eq::seconditeration} defines relationship between the outputs of neighbor layers and thus can be used to implement the multilayer neural network. In this implementation, we need $dn$ rectifier linear units in each layer and thus $dnp$ rectifier linear units. Therefore, to implement function $\tilde{f}(\bm{x})$, we need $p+n$ layers, $dn$ binary step units and $dnp$ rectifier linear units in total.
In the rest of proof, we consider the approximation error. Since for $k=1,...,d$ and $\forall \bm{x}\in[0,1]^{d}$,
$$\left|\frac{\partial f(\bm{x})}{\partial x^{(k)}}\right|=\left|\sum_{j=1}^{p}\left[w_{jk}\cdot\prod_{i=1,i\neq j}^{p}\left(\bm{w}_{i}^{T}\bm{x}\right)\right]\right|\le \sum_{j=1}^{p}|w_{jk}|\le p,$$
then
\begin{align*}|f(\bm{x})-\tilde{f}(\bm{x})|&=|f(\bm{x})-f(\tilde{\bm{x}})|\le \left\|\nabla f\right\|_{2}\cdot\left\|\bm{x}-\tilde{\bm{x}}\right\|_2\\&\le\frac{pd}{2^{n}}.\end{align*}
By choosing $n=\left\lceil\log_{2}\frac{pd}{\varepsilon}\right\rceil$, we have
$$|f(\bm{x})-f(\tilde{\bm{x}})|\le \varepsilon.$$
Since we use $nd$ binary step units converting the input to binary form and $dnp$ neurons in function approximation, we thus use $\mathcal{O}\left(d\log\frac{pd}{\varepsilon}\right)$ binary step units and $\mathcal{O}\left(pd\log\frac{pd}{\varepsilon}\right)$ rectifier linear units in total. In addition, since we have used $n$ layers to convert input to binary form and $p$ layers in the function approximation section of the network, then whole deep structure has $\mathcal{O}\left(p+\log\frac{pd}{\varepsilon}\right)$ layers.
\end{proof}\fi
Theorem~\ref{thm::multinomial} shows an upper bound on the network size for $\varepsilon$-approximation of a product of multivariate linear functions. Furthermore, since any general multivariate polynomial can be viewed as a linear combination of products, the result on general multivariate polynomials directly follows from Theorem~\ref{thm::multinomial}.
\begin{theorem}\label{thm::generalmultinomial}Let the multi-index vector $\bm{\alpha}=(\alpha_{1},...,\alpha_{d})$, the norm $|\bm{\alpha}|=\alpha_{1}+...+\alpha_{d}$, the coefficient $C_{\bm{\alpha}}=C_{\alpha_{1}...\alpha_{d}}$, the input vector $\bm{x}=(x^{(1)},...,x^{(d)})$ and the multinomial $\bm{x}^{\bm{\alpha}}={x^{(1)}}^{\alpha_{1}}...{x^{(d)}}^{\alpha_{d}}$. For positive integer $p$ and polynomial $f(\bm{x})=\sum_{\bm{\alpha}:|\bm{\alpha}|\le p}C_{\bm{\alpha}}\bm{x}^{\bm{\alpha}}$, $\bm{x}\in[0,1]^{d}$ and $\sum_{\bm{\alpha}:|\bm{\alpha}|\le p}|C_{\bm{\alpha}}|\le 1$, there exists a deep neural network $\tilde{f}(\bm{x})$ of depth $\mathcal{O}\left(p+\log\frac{dp}{\varepsilon}\right)$ and size $N(d,p,\varepsilon)$ such that
$|f(\bm{x})-\tilde{f}(\bm{x})|\le \varepsilon,$
where \vspace{-0.2cm}
$$N(d,p,\varepsilon)=p^{2}\binom{p+d-1}{d-1}\log\frac{pd}{\varepsilon}.$$
\end{theorem}\vspace{-0.3cm}
\textbf{Remark:} The proof is given in the appendix. By further analyzing the results on the network size, we obtain the following: (a) fixing the degree $p$, $N(d,\varepsilon)=\mathcal{O}\left(d^{p+1}\log\frac{d}{\varepsilon}\right)$ as $d\rightarrow \infty$; and (b) fixing the input dimension $d$, ${N(p,\varepsilon)=\mathcal{O}\left(p^{d}\log\frac{p}{\varepsilon}\right)}$ as $p\rightarrow \infty$. Similar results on approximating multivariate polynomials were obtained by \citet{andoni2014learning} and \citet{barron1993universal}. \citet{barron1993universal} showed that one can use a 3-layer neural network of size $d^{p}/\varepsilon^{2}$ to approximate any multivariate polynomial of degree $p$ in dimension $d$. \citet{andoni2014learning} showed that one can use gradient descent to train a 3-layer neural network of size $d^{2p}/\varepsilon^{2}$ to approximate any multivariate polynomial. In contrast, Theorem~\ref{thm::generalmultinomial} shows that a deep neural network reduces the dependence of the network size on the error from $\mathcal{O}\left(\text{poly}(1/\varepsilon)\right)$ to $\mathcal{O}\left(\log\frac{1}{\varepsilon}\right)$. Moreover, for a fixed input dimension $d$, the size of the 3-layer neural networks used by \citet{andoni2014learning} and \citet{barron1993universal} grows exponentially with the degree $p$, while the size of the deep neural network in Theorem~\ref{thm::generalmultinomial} grows only polynomially with the degree. Therefore, the deep neural network reduces the network size from $\mathcal{O}(\exp(p))$ to $\mathcal{O}(\text{poly}(p))$ when the degree $p$ becomes large.
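The asymptotics just discussed can be checked by direct evaluation of the size bound. The helper below is our own (with a ceiling added so the neuron count is an integer):

```python
from math import comb, log2, ceil

def network_size(d, p, eps):
    """Evaluate N(d, p, eps) = p^2 * C(p + d - 1, d - 1) * log2(p d / eps)."""
    return p * p * comb(p + d - 1, d - 1) * ceil(log2(p * d / eps))

# For fixed degree p, the bound grows only polynomially in the dimension d,
# in contrast with the exp(p) growth of the cited 3-layer constructions.
sizes = [network_size(d, 3, 1e-3) for d in (2, 4, 8)]
```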
Theorem~\ref{thm::generalmultinomial} shows an upper bound on the network size for approximating multivariate polynomials. Further, by combining Theorem~\ref{thm::anyfunction} and Corollary~\ref{cor::functioncom}, we could obtain an upper bound on the network size for approximating more general functions. The results are shown in the following corollary.
\begin{corollary}\label{cor::multivariatemul} Assume that all univariate functions $h_{1},...,h_{k}:[0,1]\rightarrow[0,1]$, $k\ge 1$, satisfy the conditions in Theorem~\ref{thm::anyfunction}. Assume that the multivariate polynomial ${l(\bm{x}):[0,1]^{d}\rightarrow[0,1]}$ is of degree $p$. For composition ${f=h_{1}\circ h_{2}\circ...\circ h_{k}\circ l(\bm{x})}$, there exists a multilayer neural network $\tilde{f}$ of depth $\mathcal{O}\left(p+\log d+k\log k\log\frac{1}{\varepsilon}+\log k\left(\log\frac{1}{\varepsilon}\right)^{2}\right)$ and of size $N(k,p,d,\varepsilon)$ such that
$|\tilde{f}(\bm{x})-f(\bm{x})|\le \varepsilon$ for all $\bm{x}\in[0,1]^{d},$
where \vspace{-0.6cm} \begin{align*}N(k,p,d,\varepsilon)=\mathcal{O}&\left(p^{2}\binom{p+d-1}{d-1}\log\frac{pd}{\varepsilon}+k^{2}\left(\log\frac{1}{\varepsilon}\right)^{2}+\left(\log\frac{1}{\varepsilon}\right)^{4}\right).\end{align*}
\end{corollary}
\textbf{Remark}: Corollary~\ref{cor::multivariatemul} shows an upper bound on the network size for approximating compositions of multivariate polynomials and general univariate functions. The upper bound can be loose due to the assumption that $l(\bm{x})$ is a general multivariate polynomial of degree $p$. For some specific cases, the upper bound can be much smaller. We present two specific examples in Appendices~\ref{appendix::gaussion} and~\ref{appendix::ridge}.
In this subsection, we have shown a similar poly$\log\left(\frac{1}{\varepsilon}\right)$ upper bound on the network size for $\varepsilon$-approximation of general multivariate polynomials and of functions which are compositions of univariate functions and multivariate polynomials.
The results in this section can be used to find a multilayer neural network of size {polylog$\left(\frac{1}{\varepsilon}\right)$} which provides an approximation error of at most $\varepsilon$. In the next section, we will present lower bounds on the network size for approximating both univariate and multivariate functions. The lower bound together with the upper bound shows a tight bound on the network size required for function approximations.
While we have presented results in both the univariate and multivariate cases for smooth functions, the results automatically extend to functions that are piecewise smooth, with a finite number of pieces. In other words, if the domain of the function is partitioned into regions, and the function is sufficiently smooth (in the sense described in the Theorems and Corollaries earlier) in each of the regions, then the results essentially remain unchanged except for an additional factor which will depend on the number of regions in the domain.
\vspace{-0.3cm}
\section{Lower bounds on function approximations}\label{sec::lowerbound}\vspace{-0.3cm}
In this section, we present lower bounds on the network size required to approximate certain classes of functions. By combining these lower bounds with the upper bounds shown in the previous section, we can analytically demonstrate the advantages of deeper neural networks over shallower ones. The theorem below is inspired by a similar result \citep{dasgupta1993power} for univariate quadratic functions, where it is stated without a proof. Here we show that the result extends to general multivariate strongly convex functions.
\begin{theorem}\label{thm::lowerbounduni}
Assume function ${f:[0,1]^{d}\rightarrow \mathbb{R}}$ is differentiable and strongly convex with parameter $\mu$. Assume the multilayer neural network $\tilde{f}$ is composed of rectifier linear units and binary step units. If $|f(\bm{x})-\tilde{f}(\bm{x})|\le \varepsilon$, $\forall \bm{x}\in[0,1]^d,$
then the network size
$N\ge\log_{2}\left(\frac{\mu}{16\varepsilon}\right).$
\end{theorem}
\textbf{Remark: }The proof is in Appendix~\ref{appendix::lowerbound}. Theorem~\ref{thm::lowerbounduni} shows that no strongly convex function can be approximated with error $\varepsilon$ by a multilayer neural network with rectifier linear units and binary step units of size smaller than $\log_{2}(\mu/\varepsilon)-4$. Theorem~\ref{thm::lowerbounduni} together with Theorem~\ref{thm::quadratic} directly shows that to approximate the quadratic function $f(x)=x^{2}$ with error $\varepsilon$, the network size should be of order $\Theta\left(\log\frac{1}{\varepsilon}\right)$. Further, by combining Theorem~\ref{thm::lowerbounduni} and Theorem~\ref{thm::anyfunction}, we can analytically show the benefits of deeper neural networks. The result is given in the following corollary.
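For intuition, the matching $\Theta\left(\log\frac{1}{\varepsilon}\right)$ upper bound for $f(x)=x^{2}$ can be realized by the classical sawtooth construction, in which each tent map is computed by a constant-size ReLU subnetwork. The following is a numerical sketch of that classical construction (our illustration, with our own helper names; not necessarily the exact network built in the proofs).

```python
# Sketch: approximate f(x) = x^2 on [0, 1] by composing tent maps.
# Each tent map is computable by a tiny ReLU subnetwork,
# g(x) = 2*relu(x) - 4*relu(x - 0.5), so f_m below corresponds to a
# network of depth Theta(m) achieving uniform error 2^(-2m-2) -- i.e.,
# depth Theta(log(1/eps)) suffices for error eps.

def tent(x):
    return 2 * x if x < 0.5 else 2 * (1 - x)

def f_m(x, m):
    """Sawtooth approximation of x^2 using m composed tent maps."""
    total, g = x, x
    for s in range(1, m + 1):
        g = tent(g)          # g is now the s-fold composition
        total -= g / 4 ** s
    return total

# Uniform error on a fine grid stays within the 2^(-2m-2) bound.
for m in range(1, 6):
    err = max(abs(x ** 2 - f_m(x, m)) for x in (i / 1024 for i in range(1025)))
    assert err <= 2 ** (-2 * m - 2) + 1e-12
```

The approximant is exact at the breakpoints $i\,2^{-m}$ and attains its worst error $2^{-2m-2}$ at the midpoints between them.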
\begin{corollary}\label{cor::benefits}
Assume that univariate function $f$ satisfies conditions in both Theorem~\ref{thm::anyfunction} and Theorem~\ref{thm::lowerbounduni}. If a neural network $\tilde{f}_{s}$ is of depth ${L_{s}=o\left(\log\frac{1}{\varepsilon}\right)}$, size $N_{s}$ and $|f(x)-\tilde{f}_{s}(x)|\le\varepsilon$, for ${\forall x\in[0,1]}$, then there exists a deeper neural network $\tilde{f}_{d}(x)$ of depth $\Theta\left(\log\frac{1}{\varepsilon}\right)$, size $N_{d}=\mathcal{O}(L^{2}_{s}\log^{2} N_{s})$ such that
$|f(x)-\tilde{f}_{d}(x)|\le\varepsilon$, $\forall x\in[0,1].$
\end{corollary}
\textbf{Remarks: }(i) The strong convexity requirement can be relaxed: the result obviously holds if the function is strongly concave, and it also holds if the function consists of pieces which are strongly convex or strongly concave. (ii) Corollary~\ref{cor::benefits} shows that, in the approximation of the same function, the size of the deep neural network $N_{d}$ is only polylogarithmic in the size of the shallow neural network $N_{s}$, \textit{i.e.}, $N_{d}=\mathcal{O}(\text{polylog} (N_{s}))$. Similar results can be obtained for multivariate functions of the type considered in Section~\ref{sec::multivariate}.
\vspace{-0.3cm}
\section{Conclusions}\label{sec::conclusions}\vspace{-0.3cm}
In this paper, we have shown that an exponentially large number of neurons are needed for function approximation using shallow networks, when compared to deep networks. The results are established for a large class of smooth univariate and multivariate functions. Our results are established for the case of feedforward neural networks with ReLUs and binary step units.
\subsubsection*{Acknowledgments}
The research reported here was supported by NSF Grants CIF 14-09106, ECCS 16-09370, and ARO Grant W911NF-16-1-0259.
https://arxiv.org/abs/2211.10607 | Bounds for the collapsibility number of a simplicial complex and non-cover complexes of hypergraphs | The collapsibility number of simplicial complexes was introduced by Wegner in order to understand the intersection patterns of convex sets. This number also plays an important role in a variety of Helly type results. There are only a few upper bounds for the collapsibility number of complexes available in literature. In general, it is difficult to establish such non-trivial upper bounds. In this article, we construct a sequence of upper bounds $\theta_k(X)$ for the collapsibility number of a simplicial complex $X$. We also show that the bound given by $\theta_k$ is tight if the underlying complex is $k$-vertex decomposable. We then give an upper bound for $\theta_k$ and therefore for the collapsibility number of the non-cover complex of a hypergraph in terms of its covering number. | \section{Introduction}
Let $X$ be a (finite) simplicial complex and let $d$ be a non-negative integer. Let $\gamma,\sigma \in X$ be such that $|\gamma|\leq d$ and $\sigma$ is the only maximal simplex of $X$ that contains $\gamma$. Then, $(\gamma, \sigma)$ is called a {\it free pair} and $\gamma$ is called a {\it free face} of $\sigma$ in $X$. An {\it elementary $d$-collapse} of $X$ is the simplicial complex $X'$ obtained from $X$ by
removing all those simplices $\tau $ of $X$ such that
$\gamma \subseteq \tau \subseteq \sigma$, and we denote this elementary $d$-collapse by $
X \xrightarrow{\gamma} X'$.
The complex $X$ is called \emph{$d$-collapsible} if there exists a sequence of elementary $d$-collapses
$$
X=X_1\xrightarrow{\gamma_1} X_2 \xrightarrow{\gamma_2} \cdots \xrightarrow{\gamma_{k-1}} X_k=\emptyset
$$
from $X$ to the empty complex $\emptyset$. Note that every $d$-dimensional complex is always $(d+1)$-collapsible. Clearly, if $X$ is $d$-collapsible and $d < c$, then $X$ is $c$-collapsible. The \emph{collapsibility number} of $X$, denoted as $\mathcal{C}(X)$, is the minimal integer $d$ such that $X $ is $d$-collapsible.
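The definition can be checked mechanically on small examples. The following Python sketch (our illustration, with our own helper names; exponential-time search, tiny complexes only) represents a complex as a set of frozensets and searches for a sequence of elementary $d$-collapses.

```python
from itertools import combinations

def maximal_faces(X):
    return [s for s in X if not any(s < t for t in X)]

def is_d_collapsible(X, d):
    """Brute-force test of d-collapsibility; X is a set of frozensets
    (the nonempty faces). Exponential search -- tiny complexes only."""
    if not X:
        return True
    maxes = maximal_faces(X)
    for gamma in X:
        if len(gamma) > d:
            continue
        containing = [s for s in maxes if gamma <= s]
        if len(containing) == 1:          # (gamma, sigma) is a free pair
            sigma = containing[0]
            X2 = {t for t in X if not (gamma <= t <= sigma)}
            if is_d_collapsible(X2, d):   # try this elementary d-collapse
                return True
    return False

def full_complex(vertices):
    """All nonempty faces of the simplex on the given vertex set."""
    vs = list(vertices)
    return {frozenset(c) for r in range(1, len(vs) + 1)
            for c in combinations(vs, r)}

triangle = full_complex({1, 2, 3})             # the full 2-simplex
hollow = {f for f in triangle if len(f) <= 2}  # its boundary circle
assert is_d_collapsible(triangle, 1)
assert not is_d_collapsible(hollow, 1)
assert is_d_collapsible(hollow, 2)
```

The boundary of the triangle has no free face of size at most $1$, so $\mathcal{C}$ of that complex is $2$, while the full $2$-simplex is $1$-collapsible.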
The notion of $d$-collapsibility of simplicial complexes was introduced by Wegner \cite{Wegner75}. The motivation for introducing $d$-collapsibility comes from combinatorial geometry, where it serves as a tool for studying intersection patterns of convex sets
\cite{Eckhoff,Kalai2005,Tancer2013, Wegner75}. The collapsibility number plays an important role in the study of various Helly type results \cite{ AhroniHolzmanJiang2019, Kalai1984, Kalai2005}. The collapsibility number is also related to another well-studied combinatorial invariant of a simplicial complex called the Leray number $\mathcal{L}(X)$ (\Cref{Leray}). In \cite{Wegner75}, Wegner proved that $\mathcal{L}(X) \leq \mathcal{C}(X)$ for any simplicial complex $X$.
In general, it is difficult to determine the collapsibility number, or even bounds for it, for a given simplicial complex.
In \cite{TC22}, Biyiko{\u{g}}lu and Civan introduced a combinatorial invariant $\theta(X)$ for a simplicial complex $X$ and proved that
$\mathcal{C}(X) \leq \theta(X)$ always holds. They also gave an example of a complex $X$ where $\mathcal{C}(X) \neq \theta(X)$. The main motivation behind this project is to find an invariant for a simplicial complex which approximates the collapsibility number better than the $\theta$ number does. In this article, we introduce a new combinatorial invariant $\theta_k(X)$ for a simplicial complex $X$ and each non-negative integer $k$, and show that $\mathcal{C}(X) \leq \theta_k(X) \leq \theta_{k-1}(X) \leq \ldots \leq \theta_1(X) \leq \theta_{0}(X) = \theta(X)$ (see \Cref{remark:theta} and \Cref{thetakprop}). We also give an example of a complex $X$ where $\mathcal{C}(X) = \theta_1(X) < \theta(X)$ (see \Cref{example1}). Further, we prove that for $k$-vertex decomposable simplicial complexes $X$, $ \mathcal{C}(X) = \theta_k(X)$ (see \Cref{thm:kvdecomposable}).
One of the most commonly used methods of establishing an upper bound on the collapsibility number of a complex $X$ is to compute the number $d(X,\prec)$, which is given by the minimal exclusion sequences of simplices of $X$ (see \Cref{prop:minimal_exclusion} for more details). In this article, we prove that $d(X,\prec)$ is in fact an upper bound for $\theta_0(X)$ (which can be strictly greater than $\mathcal{C}(X)$ for several complexes), and therefore for every $\theta_k$, for any simplicial complex $X$.
In \cite[Theorem 3]{Choi2020}, the authors showed that the collapsibility number of the non-cover complex of a graph $G$ (denoted $\mathrm{NC}(G)$) is bounded by $ |V(G)|- i_\gamma(G)-1$ by computing $d(\mathrm{NC}(G), \prec)$, where $i_\gamma(G)$ denotes the independence domination number of $G$. In \Cref{thm:bound for nc}, we prove that for any hypergraph $\H$, $\theta_0(\mathrm{NC}(\H))$ (and therefore $\mathcal{C}(\mathrm{NC}(\H))$) is bounded above by $ |V(\H)|- \tau(\H)-1$, where $\tau(\H)$ (defined in \Cref{section3}) is the covering number of $\H$.
In \cite{KimKim2021}, the authors defined three invariants of a hypergraph $\mathcal{H}$: the {strong total domination number} $\tilde{\gamma}(\mathcal{H})$, the {strong independence domination number} $\gamma_{si}(\mathcal{H})$ and the edgewise-domination number $\gamma_E(\mathcal{H})$ (see Definitions \ref{defn:strong total domination number}, \ref{defn:strong independence domination number} and \ref{defn:edgewise-domination number}). Under suitable conditions on $\H$, they showed that $\mathcal{L}(\mathrm{NC}(\H)) \leq |V(\H)| - \max\{\left \lceil{\tilde{\gamma}(\mathcal{H})}/{2}\right\rceil, \gamma_{si}(\H), \gamma_{E}(\H)\} -1 $. We give a class of hypergraphs (see \Cref{counterexample}) for which $$\tau(\H) > \max\{\tilde{\gamma}(\H), \gamma_{si}(\H), \gamma_E(\H)\}.$$ Since $\mathcal{L}(X) \leq \mathcal{C}(X)$ for any simplicial complex $X$, our result, \Cref{thm:bound for nc},
implies a more general result,
$$\mathcal{L}(\mathrm{NC}(\H)) \leq |V(\H)| - \max\{\tau(\H), \left\lceil\frac{ \tilde{\gamma}(\mathcal{H})}{2}\right\rceil, \gamma_{si}(\H), \gamma_{E}(\H)\} -1 .$$
\section{The $\theta_k$ number of a complex}
Tancer \cite{Tancerstrongd} showed that the collapsibility number of a simplicial complex $X$ is bounded in terms of the collapsibility numbers of the link and the deletion of any vertex $v$ of $X$. This allows for inductive arguments to find bounds on the collapsibility number of a simplicial complex \cite{Lew18}.
For any simplicial complex $X$ and $\sigma \in X$, the subcomplexes \emph{link} and \emph{deletion} of $\sigma$ in $X$ are defined as follows
\begin{equation*}
\begin{split}
\mathrm{lk}(\sigma,X) & = \{\tau \in X : \sigma \cap \tau = \emptyset,~ \sigma \cup \tau \in X\}, \\
\mathrm{del}(\sigma,X) & = \{\tau \in X : \sigma \nsubseteq \tau\}.
\end{split}
\end{equation*}
Biyiko{\u{g}}lu and Civan \cite{TC22} defined $\theta(X)$ inductively along these ideas for any simplicial complex $X$. Let $X^o$ denote the set of vertices $v$ in $X$ such that $\mathrm{lk}(v, X) \neq \mathrm{del}(v, X)$. If $X^o$ is empty then $\theta(X)=0$, else $$\theta(X)=\min_{v \in X^o}\{\max \{\theta(\mathrm{lk}(v, X))+1 , \theta(\mathrm{del}(v, X))\}\}.$$
They showed that $\mathcal{C}(X) \leq \theta(X)$ for all $X$ and proved several properties for $\theta(X)$. We expand on their ideas and define a sequence of invariants $\theta_k(X)$ which lie between $\mathcal{C}(X)$ and $\theta(X)$.
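The recursion defining $\theta$ above is straightforward to evaluate on small complexes. The following Python sketch (our helper names; exponential time, illustration only) reproduces $\theta = 0$ for the full $2$-simplex and $\theta = 2$ for the hollow triangle, matching the latter's collapsibility number.

```python
def lk(sigma, X):
    """Link of the face sigma in X (X: set of frozensets, nonempty faces)."""
    return {t for t in X if not (t & sigma) and (t | sigma) in X}

def dele(sigma, X):
    """Deletion of sigma ('del' is a Python keyword, hence the name)."""
    return {t for t in X if not sigma <= t}

def theta(X):
    """theta(X), computed directly from the recursive definition."""
    verts = {v for f in X for v in f}
    Xo = [v for v in verts
          if lk(frozenset({v}), X) != dele(frozenset({v}), X)]
    if not Xo:
        return 0
    return min(max(theta(lk(frozenset({v}), X)) + 1,
                   theta(dele(frozenset({v}), X))) for v in Xo)

hollow = {frozenset(s) for s in ({1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3})}
simplex = hollow | {frozenset({1, 2, 3})}
assert theta(simplex) == 0   # every vertex of a simplex is a cone point
assert theta(hollow) == 2    # matches C(hollow triangle) = 2
```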
We first introduce the notation we use throughout this paper. Let $X$ be a simplicial complex. We denote the set of vertices of $X$ by $V(X)$. For $A \subseteq V(X)$, the induced subcomplex on vertex set $A$ is $X[A] = \{\sigma \in X : \sigma \subseteq A\}$. For $k \geq 0$, we let $X_{(k)}$ denote the set of $k$-dimensional faces of $X$ and
$$X_{(k)}^o=\{ \sigma \in X_{(k)} : \mathrm{lk}{(\sigma, X)} \neq X[V(X)\setminus \sigma] \}.$$
\begin{lemma}\label{lemma:conesimplex}
Let $X$ be a simplicial complex of dimension at least $k$. If $X_{(k)}^o = \emptyset$, then $X$ is a simplex.
\end{lemma}
\begin{proof}
If $k = 0$, we prove the result by induction on the number of vertices of $X$. The base case ({\it i.e.,} $X$ is a single vertex) is trivially true. Suppose $X_{(0)}^o = \emptyset$. Then for any vertex $v$ we have $\mathrm{lk}(v, X) = X - \{v\}$, which implies that $ X = (X - \{v\}) \ast \{v\}$. Moreover, $X_{(0)}^o = \emptyset$ implies $\mathrm{lk}(v, X)_{(0)}^o = \emptyset$. Hence, by induction on the number of vertices, $\mathrm{lk}(v, X)$ is a simplex and therefore $X$ is a simplex.
Let $k > 0$ and let $\sigma \in X$ be a $k$-dimensional simplex. Then $\mathrm{lk}(\sigma, X) = X[V(X) \setminus \sigma]$, hence $X = \mathrm{lk}(\sigma, X) \ast \sigma$. Let $Y = \mathrm{lk}(\sigma, X)$. If $Y_{(0)}^o \neq \emptyset$, then $\mathrm{lk}(v, Y) \neq Y - \{v\}$ for some $v \in V(Y)$. Choose $w \in \sigma$ and let $\tau = (\sigma \setminus \{w\}) \cup \{v\}$. Then $\mathrm{lk}(\tau, X) \neq X[V(X) \setminus \tau]$, a contradiction. Hence $Y_{(0)}^o = \emptyset$. By the case $k = 0$, $Y$ is a simplex, and therefore
$X = Y \ast \sigma$ is a simplex.
\end{proof}
\begin{definition}\label{definition:theta_k}
Define $\theta_0(X) = \theta'_0(X)= \theta(X)$ and for $k \geq 1$, define $\theta_k$ inductively as follows.
\begin{equation*}
\theta_k'(X)=\begin{cases}
\theta_{k-1}(X) & \text{if } X_{(k)}^o=\emptyset,\\
\min\limits_{\sigma\in X_{(k)}^o}\{ \max \{ \theta'_k(\mathrm{del}(\sigma, X)), \theta'_k(\mathrm{lk}(\sigma, X))+k+1\}\} & \text{otherwise},
\end{cases} \end{equation*}
and $\theta_k(X) = \min\{\theta'_k(X), \theta_{k-1}(X)\}$.
\end{definition}
\begin{remark}\label{remark:theta}
Note that, by definition, $\theta_k(X) \leq \theta_{k-1}(X)$ for all $k\geq 1$.
\end{remark}
We now give an example where $\theta_1 < \theta_0$.
\begin{example} (Example V6F10-6 from \cite{MorTak}) \label{example1}
Let $\Delta$ be the simplicial complex on the vertex set $\{1, 2, 3, 4, 5,6\} $ with the set of facets
$$\{\{1, 2, 3\}, \{1, 2, 4\}, \{1, 2, 5\}, \{1, 3, 4\},\{1, 3, 6\}, \{2, 4,5\}, \{2,5,6\}, \{3,4,6\},\{3, 5,6\}, \{4,5,6\}\}.$$
This example was also discussed in \cite{Dochvdecomps} as an example of a complex which is $1$-vertex decomposable but not $0$-vertex decomposable. Thus, from \Cref{thm:kvdecomposable}, we get that $\mathcal{C}(\Delta)=\theta_1(\Delta)$. We now show that $\theta_1(\Delta)\leq 2<3\leq \theta_0(\Delta)$.
To compute $\theta_1(\Delta)$, let us look at $\theta_1(\mathrm{lk}(\{1,5\},\Delta))$ and $\theta_1(\mathrm{del}(\{1,5\},\Delta))$. Observe that $\mathrm{lk}(\{1,5\},\Delta)$ is a point $\{2\}$ implying that $\theta_1(\mathrm{lk}(\{1,5\},\Delta))=0$. Thus, $$\theta_1(\Delta)\leq\max\{\theta_1(\mathrm{del}(\{1,5\},\Delta)), 2\}.$$
Here, the set of facets of $\mathrm{del}(\{1,5\},\Delta)$ is $$\{\{1,2,3\}, \{1,2,4\}, \{1,3,4\},\{1,3,6\}, \{2,4,5\}, \{2,5,6\},\{3,4,6\},\{3,5,6\}, \{4,5,6\}\}.$$ Using the sequence $\{\{1,6\}, \{2,3\},\{1,3\},\{1,2\},\{2,4\},\{2,5\},\{3,4\},\{3,5\},\{4,5\}\}$ of $1$-faces, it is easy to verify by a similar calculation on the successive deletion complexes that the link complex at every step is a simplex and that the deletion complex at the end is $1$-dimensional. Since $\mathcal{C}(X)$ is always at most the dimension of $X$, we conclude that $\theta_1(\Delta) \leq 2$.
Observe that the link of every vertex contains an induced subcomplex isomorphic to a triangulation of a circle. Hence the collapsibility number of the link of every vertex is at least $2$. Thus for each vertex $v\in \Delta$, $\theta_0(\mathrm{lk}(v,\Delta))\ge \mathcal{C}(\mathrm{lk}(v,\Delta))\geq 2$. Hence, by definition, $\theta_0(\Delta)\geq 3$.
\end{example}
\begin{theorem}\label{thetakprop}
Let $X$ be a simplicial complex. Then for any $k \geq 0$,
$$ \mathcal{C}(X)\le \theta_k(X).$$
\end{theorem}
\begin{proof}
The proof is by induction on $k$. From \cite[Theorem 25]{TC22}, $\mathcal{C}(X) \leq \theta_0(X) = \theta(X)$. Let $k \geq 1$ and assume that
$\mathcal{C}(X) \leq \theta_r(X)$ for $0 \leq r < k$. We now prove that $\mathcal{C}(X) \le \theta_k(X)$ by induction on the number of $k$-simplices of $X$. If $X$ has no $k$-simplex then clearly $X_{(k)}^o=\emptyset$ implying that $\theta_k(X)=\theta_{k-1}(X)$ and hence the result follows.
By definition, $\theta_k(X)=\min\{\theta_{k-1}(X),\theta_k'(X)\}$. If $\theta_k(X)=\theta_{k-1}(X)$ then the result follows from induction. Now assume that $\theta_k(X)=\theta_k'(X)$.
We first prove the following claim.
\begin{claim}\label{claim}
For any $\sigma \in X_{(k)}$ ({\it i.e.}, $\sigma$ is a $k$-face), $$\mathcal{C}(X) \le \max\{ \mathcal{C}(\mathrm{del}(\sigma, X)), \mathcal{C}(\mathrm{lk}(\sigma, X)) +k+1 \}.$$
\end{claim}
\begin{proof}[Proof of \Cref{claim}]Let $\mathrm{lk}(\sigma, X)$ be $d$-collapsible. Then there exists a sequence of elementary $d$-collapses such that
$$ \mathrm{lk}(\sigma, X)=X_0 \xrightarrow{\sigma_1} X_1 \xrightarrow{\sigma_2} X_2 \ldots \xrightarrow{\sigma_r} X_r=\emptyset.$$
Since $\mathrm{lk}(\sigma, X) \xrightarrow{\sigma_1} X_1$ is an elementary collapse, there exists a facet $\tau_1\in \mathrm{lk}(\sigma, X)$ such that $\sigma_1$ is a free face of $\tau_1$ in $\mathrm{lk}(\sigma, X)$. Therefore, $\sigma_1 \cup \sigma$ is a free face of $\tau_1\cup \sigma$ in $X$. Furthermore, since $|\sigma_1 \cup \sigma|\le d+k+1$, we get an elementary $(d+k+1)$-collapse in $X$. Hence, the sequence
$$ X=Y_0 \xrightarrow{\sigma_1\cup \sigma } Y_1 \xrightarrow{\sigma_2\cup \sigma } Y_2 \ldots \xrightarrow{\sigma_r\cup \sigma} Y_r=\mathrm{del}(\sigma, X)$$
gives a sequence of elementary $(d+k+1)$-collapses of $X$ onto $\mathrm{del}(\sigma, X)$.
This implies that the collapsibility number of $X$ is less than or equal to $\max\{ \mathcal{C}(\mathrm{del}(\sigma, X)), d+k+1\}$.
\end{proof}
If $X_{(k)}^o= \emptyset$, then the result follows from the induction on $k$ (since $\theta_k(X)=\theta_{k-1}(X)$). For $X_{(k)}^o\neq \emptyset$, let $\sigma\in X_{(k)}^o$ be such that $\theta_k'(X)=\max\{\theta_k'(\mathrm{del}(\sigma, X)), \theta_k'(\mathrm{lk}(\sigma,X))+k+1\}$.
From the previous claim, we have that
\begin{equation*}
\begin{split}
\mathcal{C}(X) & \le \max\{ \mathcal{C}(\mathrm{del}(\sigma, X)), \mathcal{C}(\mathrm{lk}(\sigma, X)) +k+1 \}\\
& \le \max\{\theta_k(\mathrm{del}(\sigma, X)), \theta_k(\mathrm{lk}(\sigma,X))+k+1\}\\
& \leq \max\{\theta_k'(\mathrm{del}(\sigma, X)), \theta_k'(\mathrm{lk}(\sigma,X))+k+1\}\\
&=\theta_k'(X)=\theta_k(X).
\end{split}
\end{equation*}
Here, the second inequality follows from induction, and the third inequality follows from the fact that $\theta_k(X)\leq \theta_k'(X)$.
\end{proof}
\begin{remark} Note that \Cref{thetakprop}, along with \Cref{example1}, implies that $\theta_1$ is a better approximation to $\mathcal{C}(X)$ than $\theta_{0}$.
\end{remark}
In our next result, we show that the bound obtained in \Cref{thetakprop} is tight for a particular class of complexes known as $k$-vertex decomposable complexes. Given a simplicial complex $X$, its \emph{pure $n$-skeleton} $X^{[n]}$ is the subcomplex of $X$ spanned by all $n$-faces of $X$. The complex $X$ is said to be a \emph{pure} $n$-dimensional complex if $X=X^{[n]}$.
A pure $d$-dimensional simplicial complex $X$ is said to be {\it shellable},
if its maximal simplices can be ordered $\Gamma_1, \Gamma_2 \ldots, \Gamma_t$ in such a way
that the subcomplex $(\bigcup\limits_{i = 1}^{k-1} \Gamma_i) \cap \Gamma_k$ is pure
and $(d-1)$-dimensional for all $k = 2, \ldots, t$. A pure simplicial complex $X$ is said to be \emph{Cohen Macaulay} if, for all simplices $\sigma \in X$, the complex $\mathrm{lk}(\sigma,X)$ is homologically ($\text{dim}(\mathrm{lk}(\sigma,X))-1$)-connected, {\it i.e.}, $\tilde{H}_i(\mathrm{lk}(\sigma,X))=0$ for all $i < \text{dim}(\mathrm{lk}(\sigma,X))$. As a consequence, we get that if $X$ is Cohen Macaulay, then $\mathrm{lk}(\sigma, X)$ is also Cohen Macaulay for any $\sigma \in X$.
Alternatively, a pure simplicial complex $X$ is said to be \emph{Cohen Macaulay} if each induced subcomplex $A$ of $X$ is homologically ($\text{dim}(A)-1$)-connected.
From this definition and standard facts on homology, it can be easily verified that if $X$ is Cohen Macaulay, then any skeleton of $X$ is also Cohen Macaulay.
\begin{definition} \cite[Definition 5.1]{Dochvdecomps}
For $k \geq 0$, a pure $r$-dimensional simplicial complex $X$ is said to be {\it $k$-vertex decomposable} if $X$ is a simplex or $X$ contains a face $\sigma$ such that
\begin{enumerate}
\item $\text{dim}(\sigma) \leq k $.
\item both $\mathrm{del}(\sigma, X)$ and $\mathrm{lk}(\sigma, X)$ are $k$-vertex decomposable, and
\item $\mathrm{del}(\sigma, X)$ is pure and its dimension is the same as that of $X$. (Such a face $\sigma$ is called a {\it shedding} face of $X$.)
\end{enumerate}
\end{definition}
The $k$-vertex decomposability ($k \geq 1$) of a complex interpolates between the shellability and $0$-vertex decomposability of the complex. More precisely,
$$
0\text{-vertex decomp.} \implies k\text{-vertex decomp.} \implies \text{shellability} \implies \text{Cohen Macaulay}.
$$
The first two implications are discussed in \cite[Section 5]{Dochvdecomps}. The last implication follows from \cite[Section 11]{bjorner}.
\begin{theorem}\label{thm:kvdecomposable}
If $X$ is $k$-vertex decomposable for some $k\geq 0$, then $$\mathcal{C}(X)=\theta_k(X)=\theta'_k(X).$$
\end{theorem}
Note that the authors in \cite{TC22} showed that $\mathcal{C}(X)=\theta_0(X)$ whenever $X$ is a vertex decomposable complex, {\it i.e.}, $0$-vertex decomposable. To prove \Cref{thm:kvdecomposable}, we need a few results, which we prove now.
Recall that $X_{(k)}^o=\{ \sigma \in X_{(k)} : \mathrm{lk}{(\sigma, X)} \neq X[V(X)\setminus \sigma] \}$.
\begin{lemma}\label{lem:existance_of_nonconeface}
Let $X$ be a $k$-vertex decomposable simplicial complex of dimension at least $k$. Then $X_{(k)}^o \neq \emptyset$.
\end{lemma}
\begin{proof}
Let $X$ be $r$-dimensional, where $r\ge k$. Since $X$ is $k$-vertex decomposable, there exists a shedding face $\tau \in X$ such that $\dim(\tau)\leq k$ and $\dim(\mathrm{del}(\tau, X))=r$. Since $X$ is pure and $r \geq k$, there exists a $k$-face $\sigma \in X$ such that $\tau \subseteq \sigma$. We now prove that $\sigma \in X_{(k)}^o$. On the contrary, assume that $\sigma \notin X_{(k)}^o$, {\it i.e.}, $\mathrm{lk}{(\sigma, X)} = X[V(X)\setminus \sigma]$. Let $\gamma$ be a facet of $X[V(X)\setminus \sigma]$. This implies that $ \gamma\sqcup \sigma$ is a facet of $X$. Hence, $(\gamma\cup\sigma)\setminus \tau$ is a facet of $\mathrm{del}(\tau,X)$. Now observe that $\dim((\gamma\cup\sigma)\setminus \tau)< \dim(\gamma\cup \sigma)= r$. This contradicts the fact that $\mathrm{del}(\tau,X)$ is pure and of dimension $r$. Hence $\sigma \in X_{(k)}^o$.
\end{proof}
The following proposition is a generalization of \cite[Theorem 4.2]{LerayResult}.
\begin{lemma} \label{thm-MayerVietoris}
Let $Y$ be a simplicial complex and suppose that $\tilde{H}_{n-k}(\mathrm{lk}(\sigma,Y)) \neq 0$ for a $k$-face $\sigma \in Y$. If $\mathrm{lk}(\sigma,Y)^{[n-k]}$ is contained in a subcomplex $Y_0$ of $\mathrm{del}(\sigma,Y)$ with $\tilde{H}_{n-k}(Y_0) =0$, then $\tilde{H}_{n+1}(Y) \neq 0$.
\end{lemma}
\begin{proof}
Given a $k$-face $\sigma$, let st$(\sigma, Y)=\{\tau \in Y : \sigma \subseteq \tau\}$. Then, $Y=\mathrm{del}(\sigma, Y) \cup \text{st}(\sigma, Y) $ and $\mathrm{del}(\sigma, Y) \cap \text{st}(\sigma, Y) =\mathrm{lk}( \sigma, Y) \ast \partial(\sigma)$, where $\partial(\sigma)= \{\tau : \tau \subsetneq \sigma\}$.
Using Mayer-Vietoris we get that the sequence
$$ \tilde{H}_{n+1}(Y) \to \tilde{H}_{n}(\mathrm{lk}(\sigma,Y)\ast \partial(\sigma)) \xrightarrow{i} \tilde{H}_{n}(\mathrm{del}(\sigma,Y)) $$
is exact. Note that,
$$\tilde{H}_n(\mathrm{lk}(\sigma,Y)\ast \partial(\sigma))\cong \tilde{H}_n(\Sigma^{k}(\mathrm{lk}(\sigma,Y))) \cong \tilde{H}_{n-k}(\mathrm{lk}(\sigma,Y)).$$
Since the map $i$ is induced by an inclusion of $\mathrm{lk}(\sigma,Y)^{[n-k]}$ into $Y_0\subseteq\mathrm{del}(\sigma,Y)$ and $\tilde{H}_{n-k}(Y_0)=0$, the map $i$ is trivial. Thus, by the exactness of the above sequence, we get the required result.
\end{proof}
\begin{lemma} \label{SCM}
Let $k\geq 1$ be a positive integer, and $Y$ be a simplicial complex. If $\tau$ is a shedding $k$-face for $Y$, $\mathrm{del}( \tau,Y)$ is Cohen Macaulay and $\tilde{H}_{n-k}(\mathrm{lk}(\tau,Y)) \neq 0$, then $\tilde{H}_{n+1}(Y) \neq 0$.
\end{lemma}
\begin{proof}
If $\tau$ is a shedding $k$-face for $Y$, then $\mathrm{del}(\tau,Y)$ is a pure complex of dimension $\text{dim}(Y)$. Moreover, $\mathrm{lk}(\tau,Y)^{[n-k]}$ is contained in the subcomplex $\mathrm{del}(\tau,Y)^{[n-k]} \subseteq \mathrm{del}(\tau,Y)^{[n+1]}$. Further, since $\mathrm{del}(\tau,Y)$ is a Cohen Macaulay complex, $\mathrm{del}(\tau,Y)^{[n]}$ is Cohen Macaulay for all $n\ge 1$. In particular, choosing $\sigma = \emptyset \in \mathrm{del}( \tau,Y)$ we get that $\mathrm{del}( \tau, Y)^{[n+1]}=\mathrm{lk}(\sigma,\mathrm{del}(\tau,Y)^{[n+1]})$ is homologically $n$-connected.
Therefore, $\tilde{H}_{n+1}(Y)$ $ \neq 0$ by \Cref{thm-MayerVietoris}.
\end{proof}
Our next result establishes the commutativity of link and deletion of disjoint faces in a complex.
\begin{lemma}\label{lem:linkdeletionface}
Let $\sigma, \tau \in X$ such that $\sigma \cap \tau =\emptyset$. Then, $\mathrm{lk}(\tau,\mathrm{del}(\sigma,X))=\mathrm{del}(\sigma,\mathrm{lk}(\tau,X))$.
\end{lemma}
\begin{proof}
Let $\gamma\in \mathrm{lk}(\tau,\mathrm{del}(\sigma,X))$. Thus $\gamma \cup \tau \in \mathrm{del}(\sigma,X)$, implying that $\sigma \nsubseteq (\gamma \cup \tau)$.
This gives us that $\sigma \nsubseteq \gamma$. Moreover, we know that $\gamma \in \mathrm{lk}(\tau,\mathrm{del}(\sigma,X)) \subseteq \mathrm{lk}(\tau,X)$. Therefore, the last two statements imply that $\gamma \in \mathrm{del}(\sigma, \mathrm{lk}(\tau, X))$.
Now let $\eta \in \mathrm{del}(\sigma,\mathrm{lk}(\tau,X))$. So, $\sigma \nsubseteq \eta$. Furthermore, $\sigma \cap \tau = \emptyset$ implies that $\sigma \nsubseteq \eta \cup \tau$. Hence $\eta\cup \tau \in \mathrm{del}(\sigma,X)$ which gives us that $\eta \in \mathrm{lk}(\tau,\mathrm{del}( \sigma,X))$.
\end{proof}
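The identity of \Cref{lem:linkdeletionface} can also be confirmed mechanically on small examples. The following Python sketch (our helper names; illustration only) checks it over all pairs of disjoint faces of the boundary of the $3$-simplex.

```python
from itertools import combinations

def lk(sigma, X):
    """Link of the face sigma in X (X: set of frozensets, nonempty faces)."""
    return {t for t in X if not (t & sigma) and (t | sigma) in X}

def dele(sigma, X):
    """Deletion of sigma in X ('del' is a Python keyword)."""
    return {t for t in X if not sigma <= t}

# Boundary of the 3-simplex: all nonempty proper subsets of {1,2,3,4}.
X = {frozenset(c) for r in range(1, 4) for c in combinations(range(1, 5), r)}

# Exhaustively check lk(tau, del(sigma, X)) = del(sigma, lk(tau, X))
# over all ordered pairs of disjoint faces.
for sigma in X:
    for tau in X:
        if not (sigma & tau):
            assert lk(tau, dele(sigma, X)) == dele(sigma, lk(tau, X))
```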
\begin{definition}\label{Leray}
A simplicial complex $X$ is called {\em $k$-Leray} if $\tilde{H}_i(\mathsf{L}) = 0 $ for all $i\geq k$ and for every induced subcomplex $\mathsf{L}\subseteq X$. The Leray number $\mathcal{L}(X)$ of $X$ is the least integer $k$ for which $X$ is $k$-Leray.
\end{definition}
\begin{proposition}\label{sheddingface} If $\sigma$ is a shedding $k$-face for a simplicial complex $X$ such that $\mathrm{del}(\sigma,X)$ is Cohen-Macaulay, then $\mathcal{L}(X)\geq \max\{\mathcal{L}(\mathrm{del}(\sigma,X)), \mathcal{L}(\mathrm{lk}(\sigma,X))+k+1\}$.
\end{proposition}
\begin{proof}
By \cite[Lemma 2.3]{LerayResult}, $\mathcal{L}(X)\ge d$ if and only if $\tilde{\text{H}}_{d-1}(\mathrm{lk}(\gamma,X)) \neq 0$ for some $\gamma \in X$.
Let $\mathcal{L}(\mathrm{lk}(\sigma,X))=d$. Then there exists a face $\tau \in \mathrm{lk}(\sigma,X)$ such that $\tilde{H}_{d-1}(\mathrm{lk}(\tau,\mathrm{lk}(\sigma,X) )) \neq 0$.
Since $\tau \cap \sigma =\emptyset$, by \Cref{lem:linkdeletionface}, $\mathrm{lk}(\tau,\mathrm{del}(\sigma,X))=\mathrm{del}(\sigma,\mathrm{lk}(\tau,X))$. Furthermore, since the link of any simplex in a Cohen-Macaulay complex is again Cohen-Macaulay, the complex $\mathrm{lk}(\tau,\mathrm{del}(\sigma,X))=\mathrm{del}(\sigma, \mathrm{lk}(\tau,X))$ is Cohen-Macaulay. Since the link of any face in a pure complex is again pure, it is easy to check that $\sigma$ is a shedding face for $\mathrm{lk}(\tau,X)$ as well. Since $\tilde{H}_{d-1}(\mathrm{lk}(\sigma,\mathrm{lk}(\tau,X))) \neq 0$, by \Cref{SCM}, we get that $\tilde{H}_{d+k}(\mathrm{lk}( \tau,X)) \neq 0$. This implies that $\mathcal{L}(X)\geq d+k+1$.
Now it is sufficient to prove that $\mathcal{L}(X) \geq \mathcal{L}(\mathrm{del}(\sigma, X))$. The proof is by induction on the number of vertices of $X$. Let $Y = \mathrm{del}(\sigma, X)$ and let $A \subseteq V(Y)$. Then observe that $Y[A] = \mathrm{del}(\sigma \cap A, X [A])$. By induction, $\mathcal{L}(X[A]) \geq \mathcal{L}(Y[A])$. Since $\mathcal{L}(X) \geq \mathcal{L}(X[A])$, by taking $A = V(Y)$ we get that
$$
\mathcal{L}(X) \geq \mathcal{L}(X[A]) \geq \mathcal{L}(Y[A]) = \mathcal{L}(Y).
$$
\end{proof}
We can now prove \Cref{thm:kvdecomposable}.
\begin{proof}[Proof of \Cref{thm:kvdecomposable}]
We know that $\mathcal{L}(X) \leq \mathcal{C}(X) \leq \theta_k(X)\leq \theta_k'(X)$. We will now prove that $\theta_k'(X)\leq \mathcal{L}(X)$ by induction on the number of $k$-faces of $X$. The base case is when the complex has only one $k$-face, {\it i.e.,} the complex is a simplex; in this case $\mathcal{L}(X)=0=\theta_k'(X)$. Since $X$ is $k$-vertex decomposable, \Cref{lem:existance_of_nonconeface} implies that $X_{(k)}^o\neq \emptyset$ and any shedding $k$-face is in $X_{(k)}^o$. Also, since $X$ is $k$-vertex decomposable, there exists a $k$-dimensional shedding face $\sigma$ of $X$ such that $\sigma \in X_{(k)}^o$ and $\mathrm{del}(\sigma, X)$ is a pure $k$-vertex decomposable complex, and therefore Cohen-Macaulay. From Proposition \ref{sheddingface} we get that $\mathcal{L}(X)\geq \max\{\mathcal{L}(\mathrm{del}(\sigma,X)), \mathcal{L}(\mathrm{lk}(\sigma,X)) + k+1\}$. Thus, from \Cref{definition:theta_k}, we have that
\begin{equation*}
\begin{split}
\theta_k'(X)& \leq \max\{\theta_k'(\mathrm{del}(\sigma,X)),\theta_k'(\mathrm{lk}(\sigma,X))+k+1 \}\\
& \leq \max\{\mathcal{L}(\mathrm{del}(\sigma,X)),\mathcal{L}(\mathrm{lk}(\sigma,X))+k+1 \}\\
&\leq \mathcal{L}(X).
\end{split}
\end{equation*}
Here, the second inequality follows from induction.
\end{proof}
The following can be easily inferred from \Cref{thm:kvdecomposable} and the fact that every shellable pure $k$-dimensional complex is $k$-vertex decomposable.
\begin{remark}
If $X$ is a pure $k$-dimensional complex and $\theta_k(X) \neq \mathcal{C}(X)$, then $X$ is not shellable.
\end{remark}
\section{Non-cover complexes of hypergraphs}\label{section3}
A {\it hypergraph} $\H$ is an ordered pair $(V(\H), E(\H))$, where $V(\H)$ is a (finite) set and $E(\H)$ is a family of subsets of $V(\H)$. The elements of $V(\H)$ are called the vertices of $\H$, and the elements of $E(\H)$ its edges. Let $v \in V(\H)$. A vertex $w \in V(\H)$ is a {\it neighbour} of $v$ if there exists $e \in E(\H)$ such that $\{v, w\} \subseteq e$. The neighbour set of $v$ is defined as $N(v)= \{w : w \text{ is a neighbour of } v\}$. For $A \subseteq V(\H)$, the neighbour set of $A$ is $N(A) := \bigcup_{v \in A} N(v)$. For $S \subseteq V(\H)$, the induced subhypergraph $\H[S]$ is the hypergraph on the vertex set $S$ whose edges are the sets $e \subseteq S$ with $e \in E(\H)$.
A set $B \subseteq V(\H)$ is called a {\it cover} of $\H$ if $e \cap B \neq \emptyset$ for all $e \in E(\H)$. The {\it covering number} of $\H$ is defined as
$$
\tau(\H) = \min \{|B| : B \text{ is a cover of } \H\}.
$$
A set $A \subseteq V(\H)$ is called a {\it non-cover} if it is not a cover of $\H$. The {\it non-cover complex} $\mathrm{NC}(\H)$ of $\H$ is a simplicial complex defined as
$$
\mathrm{NC}(\H) = \{ A \subseteq V(\H): A \ \text{is a non-cover of} \ \H \}.
$$
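Both quantities are easy to compute by brute force on toy instances. The following sketch (the helper names are ours, not notation from the paper) computes $\tau(\H)$ and the facets of $\mathrm{NC}(\H)$, using the observation that the maximal non-covers are exactly the complements of the inclusion-minimal edges.

```python
from itertools import combinations

def covering_number(V, E):
    """tau(H): size of a smallest set meeting every edge (brute force)."""
    return next(r for r in range(len(V) + 1)
                if any(all(set(B) & e for e in E)
                       for B in combinations(V, r)))

def noncover_facets(V, E):
    """Facets of NC(H): complements of the inclusion-minimal edges."""
    minimal = [e for e in E if not any(f < e for f in E)]
    return [frozenset(set(V) - e) for e in minimal]

# toy hypergraph: all 3-subsets of {1, ..., 5}
V = [1, 2, 3, 4, 5]
E = [frozenset(e) for e in combinations(V, 3)]
print(covering_number(V, E))        # -> 3
print(len(noncover_facets(V, E)))   # -> 10 facets, each of size 2
```

For this hypergraph every $2$-subset of $V$ is a facet of $\mathrm{NC}(\H)$, since every $3$-subset is a (minimal) edge.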
\begin{theorem}\label{thm:bound for nc}
Let $\H$ be a hypergraph. Then
\[\mathcal{C}(\mathrm{NC}(\H))\leq \theta_0(\mathrm{NC}(\H))\leq |V(\H)|- \tau(\H)-1.\]
\end{theorem}
Before proving \Cref{thm:bound for nc}, we first review the minimal exclusion principle, which will play a key role in the proof of \Cref{thm:bound for nc}.
Let $X$ be a simplicial complex on a linearly ordered vertex set $V$, and let $\prec: \gamma_1,\ldots,\gamma_m$ be a linear ordering of the maximal simplices of $X$.
Given $\gamma \in X$, the \textit{minimal exclusion sequence} $\textrm{mes}(\gamma, \prec)$ is defined as follows:
Let $j$ denote the smallest index such that $\gamma \subseteq \gamma_j$.
If $j=1$, then $\textrm{mes}(\gamma, \prec)$ is the null sequence.
If $j\geq 2$, then $\textrm{mes}(\gamma, \prec)=(v_1,\ldots, v_{j-1})$ is a finite sequence of length $j-1$ such that
$v_1=\min (\gamma\setminus \gamma_1)$ and for each $k\in\{2, \ldots, j-1\}$,
$$v_k=\begin{cases}
\min(\{v_1,\dots,v_{k-1}\}\cap (\gamma \setminus \gamma_k)) & \text{if } \{v_1,\dots,v_{k-1}\}\cap (\gamma \setminus \gamma_k)\neq\emptyset,\\
\min (\gamma\setminus \gamma_k) & \text{otherwise.}
\end{cases} $$
Let $M(\gamma, \prec)$ denote the set of vertices appearing in $\textrm{mes}(\gamma, \prec)$. Define
$$ d(X, \prec):=\max_{\gamma \in X}|M(\gamma, \prec)|.$$
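For small inputs the minimal exclusion sequence can be computed directly from the definition. The following is a minimal sketch (the function and variable names are ours); it represents a complex by a list of its faces and an ordered list of its facets.

```python
def mes(gamma, facets):
    """Minimal exclusion sequence of the face gamma with respect to the
    ordering facets[0] < facets[1] < ... of the maximal simplices."""
    gamma = set(gamma)
    # j = smallest index with gamma contained in facets[j]
    j = next(i for i, g in enumerate(facets) if gamma <= g)
    seq = []
    for k in range(j):                   # facets preceding facets[j]
        excluded = gamma - facets[k]     # gamma \ gamma_k, nonempty for k < j
        reused = set(seq) & excluded     # previously chosen vertices, reused first
        seq.append(min(reused) if reused else min(excluded))
    return seq

def d(faces, facets):
    """d(X, <): max number of distinct vertices in any mes."""
    return max(len(set(mes(g, facets))) for g in faces)

# boundary of a triangle, facets ordered {1,2} < {2,3} < {1,3}
facets = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]
print(mes({1, 3}, facets))   # -> [3, 1]
```

For the boundary of the triangle, taking all six nonempty faces gives $d(X,\prec)=2$, consistent with the circle being $2$-collapsible but not collapsible.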
The following result bounds the collapsibility number of the complex $X$ in terms of $d(X, \prec)$.
\begin{proposition}\cite[Theorem 6]{Lew18} \label{prop:minimal_exclusion}
If $\prec$ is a linear ordering of the maximal simplices of $X$, then $X$ is $d(X, \prec)$-collapsible.
\end{proposition}
The proof of \cite[Theorem 6]{Lew18} can be modified to show that $\theta_0(X)$ is bounded above by $d(X, \prec)$.
\begin{proposition}\label{thm:theta_mes}
Let $X$ be a simplicial complex and let $\prec$ be a linear ordering of its maximal simplices. Then $\theta_0(X) \leq d(X,\prec)$.
\end{proposition}
\begin{proof}
The proof is by induction on the number of vertices of $X$. If $X$ is a simplex, then $\theta_0(X)=0 \leq d(X, \prec)$.
If $X$ has at least one non-cone vertex $v$, then by definition $\theta_0(X) \leq \max\{\theta_0(\mathrm{del}(v, X)), \theta_0(\mathrm{lk}(v, X))+1\}$.
The argument in \cite[Theorem 6]{Lew18} shows that $$d(\mathrm{lk}(v, X), \prec)-1 \leq d(X,\prec) \text{ and }d(\mathrm{del}(v, X), \prec) \leq d(X, \prec).$$ Hence the proof follows by induction.
\end{proof}
For a positive integer $n$, let $[n]$ denote the ordered set $\{1, \ldots, n\}$. In the rest of this section, we assume that $V(\H) = [n]$ and that every edge $e = \{v_1,v_2,\dots,v_r\}\in E(\H)$ is written with $v_1<v_2<\cdots<v_r$. Let $<_{L}$ be the lexicographic order on $E(\H)$ induced by the ordering of $V(\H)$. Note that every facet of $\mathrm{NC}(\H)$ is the complement of an edge of $\H$. We define a linear order $\prec$ on the set of facets of $\mathrm{NC}(\H)$ as follows: $\gamma \prec \gamma'$ if $\overline{\gamma} <_{L} \overline{\gamma'}$. Let $\gamma_1 \prec \gamma_2 \prec \ldots \prec \gamma_m$ be all the facets of $\mathrm{NC}(\H)$.
Let $D$ be a cover of $\H$ such that $|D|= \tau(\H)$. Without loss of generality, we assume that $D=\{1, \ldots, \tau(\H) \}$. For $A \subseteq [n]$, we let $\overline{A} = [n] \setminus A$.
\begin{lemma}\label{lem:mes_equal}
Let $\gamma, \gamma' \in \mathrm{NC}(\H)$. If $\overline{\gamma} \cap D = \overline{\gamma'} \cap D$ and the induced subgraph $\H[\overline{\gamma} \cap D]$ contains an edge, then
$$\text{mes}(\gamma, \prec) = \text{mes}(\gamma', \prec).$$
\end{lemma}
\begin{proof}
Let $k$ be the smallest index such that $\gamma \subseteq \gamma_k$. Observe that for any edge $e' \in E(\H[D])$ and any edge $e \in E(\H)$ with $e \cap D \neq \emptyset$ and $e \cap \overline{D} \neq \emptyset$, we have $e' <_{L} e$. Since $\H[\overline{\gamma} \cap D]$ contains an edge, and the complement of any such edge is a facet containing $\gamma$, it follows that $\overline{\gamma_{k}} \subseteq D$. Therefore, by the definition of $\prec$, every facet $\gamma_i$ with $1 \leq i < k$ satisfies $\overline{\gamma_i} \subseteq D$ (since $\overline{\gamma_i}<_{L} \overline{\gamma_{k}}$ and $\overline{\gamma_{k}} \subseteq D$). Since $\overline{\gamma} \cap D = \overline{\gamma'} \cap D$, we have $\gamma \cap D = \gamma'\cap D$. Hence, for every $1 \leq i \leq k$,
$$\overline{\gamma_i}\cap \gamma =\overline{\gamma_i}\cap \gamma\cap D =\overline{\gamma_i}\cap \gamma' \cap D=\overline{\gamma_i}\cap \gamma'.$$
Therefore, $k$ is also the smallest index such that $\gamma' \subseteq \gamma_k$, and for every $i \in [k-1]$ the $i$th entry of $\textrm{mes}(\gamma', {\prec})$ equals the $i$th entry of $\textrm{mes}(\gamma, {\prec})$. Thus $\text{mes}(\gamma, \prec) = \text{mes}(\gamma', \prec)$.
\end{proof}
\begin{lemma}\label{lem:neighbor_inequality}
For any $S \subseteq D$, $N(S) \neq \emptyset $ and
$$ |N(S) \cap \overline{D}| -|S| \leq |\overline{D}| - |D|. $$
\end{lemma}
\begin{proof}
If $S = D$, then result vacuously true. So, assume that $ D \setminus \neq \emptyset$. We prove this by defining an injective map $\phi : D\setminus S \longrightarrow \overline{D}\setminus (N(S)\cap \overline{D})$. For any $v \in D$, let $NC(D,v)= \{e\in E(\H): (D\setminus \{v\}) \cap e =\emptyset \}$. Let $D\setminus S = \{v_1,\dots,v_r\}$. Define $$\phi_1: \{v_1\} \longrightarrow \overline{D}\setminus (N(S)\cap \overline{D})$$ as $\phi_1(v_1)= \text{min}\big{(}\bigcup\limits_{e\in NC(D,v_1)} e\setminus \{v_1\}\big{)}$. For any $v \in D$ and for any $e\in NC(D,v)$, it is easy to see that $e\setminus \{v\}\subseteq \overline{D}\setminus (N(S)\cap \overline{D})$. Therefore, the map $\phi_1$ is well defined. For each $2\leq j \le r$, we will now construct an injective map $$ \phi_j: \{v_1,\dots,v_j\} \longrightarrow \overline{D}\setminus (N(S)\cap \overline{D})$$ using the map $\phi_{j-1}$. We divide the construction of $\phi_j$ in two parts.
\vspace{0.3cm}
\noindent{\bf Case 1:} $\big{(}\bigcup\limits_{e\in NC(D,v_j)} e \big{)}\setminus \{v_j,\phi_{j-1}(v_1),\dots,\phi_{j-1}(v_{j-1})\}\neq \emptyset$.
In this case define the map $\phi_j$ as follows.
\begin{equation*}
\phi_j(v_i)=\begin{cases}
\phi_{j-1}(v_i), & \text{ if } i \le j-1, \\
\text{min}\big{(}\big{(}\bigcup\limits_{e\in NC(D,v_j)} e\big{)} \setminus \{v_j,\phi_{j-1}(v_1),\dots,\phi_{j-1}(v_{j-1})\}\big{)}, & \text{ if } i=j.
\end{cases}
\end{equation*}
\vspace{0.3cm}
\noindent{\bf Case 2:} $\big{(}\bigcup\limits_{e\in NC(D,v_j)} e \big{)}\setminus \{v_j,\phi_{j-1}(v_1),\dots,\phi_{j-1}(v_{j-1})\}= \emptyset$.
In this case, if $\big{(}\bigcup\limits_{e\in NC(D,v_t)} e \big{)}\subseteq \{v_t,\phi_{j-1}(v_1),\dots,\phi_{j-1}(v_{j-1})\}$ held for every $t\in [j]$, then $(D\setminus \{v_1,\dots,v_{j}\})\cup \{\phi_{j-1}(v_1),\dots,\phi_{j-1}(v_{j-1})\}$ would be a cover of size smaller than $|D|$, which is a contradiction. Hence we may choose $t \in [j]$ such that $\big{(}\bigcup\limits_{e\in NC(D,v_t)} e \big{)}\nsubseteq \{v_t,\phi_{j-1}(v_1),\dots,\phi_{j-1}(v_{j-1})\}$; by the assumption of Case 2, $t \neq j$. Now define
$$ \phi_j: \{v_1,\dots,v_j\} \longrightarrow \overline{D}\setminus (N(S)\cap \overline{D})$$
as follows.
\begin{equation*}
\phi_j(v_i)=
\begin{cases}
\phi_{j-1}(v_i), & \text{ if } i \neq t,j,\\
\phi_{j-1}(v_t), & \text{ if } i =j, \\
\text{min}\big{(} \big{(}\bigcup\limits_{e\in NC(D,v_j)} e \big{)} \setminus \{v_t,\phi_{j-1}(v_1),\dots,\phi_{j-1}(v_{j-1})\}\big{)}, & \text{ if } i = t.
\end{cases}
\end{equation*}
By construction, the map $\phi_j$ is injective. Hence $\phi=\phi_r: D\setminus S \longrightarrow \overline{D}\setminus (N(S)\cap \overline{D})$ is an injective map.
\end{proof}
We now prove the main result of this section. This is done by extending the idea of \cite{Choi2020} to a new setting.
\begin{proof}[Proof of \Cref{thm:bound for nc}]
For $\sigma \in \mathrm{NC}(\H)$, let $\Psi_{\sigma}=|N(\overline{\sigma}\cap D)\cap \overline{\sigma} \cap \overline{D}|$.
Let $\gamma \in \mathrm{NC}(\H)$. We first show that if $v \in \gamma \cap \overline{D}$ and $v \in M(\gamma, \prec)$, then $v$ is a neighbour of some vertex in
$\overline{\gamma} \cap D$. Let $k$ be the smallest index such that the $k$th entry of $\textrm{mes}(\gamma, \prec)$ is $v$. Then $v \in \gamma \setminus \gamma_k$,
which implies that $v \in \overline{\gamma_k}$. Since $D$ is a cover and $\overline{\gamma_k}\in E(\H)$, we have $\overline{\gamma_k} \cap D \neq \emptyset$. Choose $w \in \overline{\gamma_k} \cap D$. Since $w < v$
and $v$ is the $k$th entry of $\textrm{mes}(\gamma, \prec)$, we see that $w \notin \gamma$, so $w \in \overline{\gamma} \cap D$. Furthermore, since $v, w \in \overline{\gamma_k}$, $v$ is a neighbour of $w$.
Therefore,
\begin{align}\label{eq:mes}
|M(\gamma, \prec)| &\le |\gamma \cap D| + |N(\overline{\gamma}\cap D)\cap (\gamma \cap \overline{D})| \nonumber \\
&= |D|-|\overline{\gamma}\cap D| +
|N(\overline{\gamma}\cap D)\cap \overline{D}|-\Psi_{\gamma}\nonumber\\
&\le |D|-\tau(\H)+|\overline{D}|-\Psi_{\gamma} \nonumber\\
&=|V(\H)|-\tau(\H)-\Psi_{\gamma},
\end{align}
where the last inequality holds by applying \Cref{lem:neighbor_inequality} to the set $\overline{\gamma}\cap D$.
By \Cref{thm:theta_mes}, it suffices to show that $|M(\gamma, \prec)| \leq |V(\H)|-\tau(\H)-1$ for every $\gamma \in \mathrm{NC}(\H)$; by \eqref{eq:mes}, this holds whenever $\Psi_{\gamma} \geq 1$. Suppose now that $\Psi_{\gamma}=0$. Since $\gamma$ is a non-cover, there exists an edge $e$ such that $e \subseteq \overline{\gamma}$. If $e \cap \overline{D} \neq \emptyset$, then, since $e \cap D$ is a non-empty subset of $\overline{\gamma} \cap D$, every vertex of $e \cap \overline{D}$ lies in $N(\overline{\gamma}\cap D)\cap \overline{\gamma} \cap \overline{D}$, so $\Psi_{\gamma} \geq 1$, a contradiction.
Hence every edge contained in $\overline{\gamma}$ lies in $D$, and in particular $\H[\overline{\gamma} \cap D]$ has an edge.
Let $\gamma'=\gamma\cap D$.
Then $\overline{\gamma}\cap D=\overline{\gamma'}\cap D$.
By \Cref{lem:mes_equal}, $\textrm{mes}(\gamma, \prec)=\textrm{mes}(\gamma', \prec)$ and therefore
$M(\gamma, \prec)=M(\gamma', \prec)$.
Observe that $\Psi_{\gamma'} \geq 1$. Thus, replacing $\gamma$ by $\gamma'$ in \eqref{eq:mes}, we conclude that $|M(\gamma, \prec)| = |M(\gamma', \prec)| \leq |V(\H)|-\tau(\H)-1$. The result now follows from \Cref{thm:theta_mes}.
\end{proof}
\vspace{0.2 cm}
A set $B \subseteq V(\H)$ is called a {\it dominating set} of $\H$ if every vertex in $V(\H)\setminus B$ is a neighbour of at least one vertex from $B$. The {\it domination number} of $\H$ is
$$
\gamma(\H) = \min \{|B| : B \text{ is a dominating set of } \H\}.
$$
It is easy to observe that $\gamma(\H)\leq \tau(\H)$ for every hypergraph $\H$. Therefore the following result is an immediate corollary of \Cref{thm:bound for nc} and \Cref{thetakprop}.
\begin{corollary}\label{cor:collapsibility_bound}
For any hypergraph $\H$,
\[ \mathcal{C}(\mathrm{NC}(\H))\leq \theta_0(\mathrm{NC}(\H)) \leq |V(\H)|- \gamma(\H)-1. \]
\end{corollary}
We now compare \Cref{thm:bound for nc} with the results of Kim and Kim \cite{KimKim2021}, who established upper bounds on the Leray number of $\mathrm{NC}(\H)$ in terms of various domination parameters of the hypergraph $\H$. For this comparison, we first recall the required terminology from \cite{KimKim2021}.
Let $\mathcal{H}$ be a hypergraph.
Let $v \in V(\H)$ and let $B$ be a subset of $V(\H)$. We say that
$B$ {\em strongly totally dominates} $v$ if there exists $B' \subseteq B \setminus \{v\}$ such that $B' \cup \{v\} \in E(\H)$.
Let $W$ be a subset of $V(\H)$.
If $B \subseteq V(\H)$ strongly totally dominates every vertex in $W$, then $B$ is said to {\em strongly totally dominate} $W$.
The {\em strong total domination number of $W$ in $\mathcal{H}$} is defined as
\[\gamma(\mathcal{H}; W) := \min\{|B|:B\subseteq V(\H), ~B\text{ strongly totally dominates } W\}.\]
\begin{definition}\label{defn:strong total domination number}
The {\em strong total domination number} $\tilde{\gamma}(\mathcal{H})$ of $\mathcal{H}$ is the strong total domination number of $V(\H)$, {\it i.e.}, $\tilde{\gamma}(\mathcal{H}) = \gamma(\mathcal{H}; V(\H))$.
\end{definition}
A set $ \mathcal{I} \subseteq V(\H)$ is said to be {\em strongly independent} in $\mathcal{H}$ if it is independent and every edge of $\mathcal{H}$ contains at most one vertex of $\mathcal{I}$.
\begin{definition}\label{defn:strong independence domination number}
The {\em strong independence domination number} of $\mathcal{H}$ is the integer
\[\gamma_{si}(\mathcal{H}) := \max\{\gamma(\mathcal{H}; \mathcal{I}): \mathcal{I} \text{ is a strongly independent set of }\mathcal{H}\}.\]
\end{definition}
\begin{definition}\label{defn:edgewise-domination number}
The {\em edgewise-domination number} of $\mathcal{H}$ is the minimum number of edges whose union strongly totally dominates $V(\H)$, {\it i.e.},
\[\gamma_E(\mathcal{H}) := \min\{|\mathcal{F}|: \mathcal{F} \subseteq E(\H), \bigcup_{e \in \mathcal{F}} e\text{ strongly totally dominates }V(\H)\}.\]
\end{definition}
\begin{theorem}\cite[Theorem 1.6]{KimKim2021} \label{leray numbers}
Let $\mathcal{H}$ be a hypergraph with no isolated vertices.
Then
\begin{enumerate}
\item[(i)] If $|e| \leq 3$ for every $e \in E(\mathcal{H})$, then $\mathcal{L}(\mathrm{NC}(\mathcal{H})) \leq |V(\mathcal{H})| - \left \lceil\frac{\tilde{\gamma}(\mathcal{H})}{2}\right\rceil -1$.
\item[(ii)] If $|e| \leq 2$ for every $e \in E(\mathcal{H})$, then $\mathcal{L}(\mathrm{NC}(\mathcal{H})) \leq |V(\mathcal{H})| - \gamma_{si}(\mathcal{H}) -1$.
\item[(iii)] $\mathcal{L}(\mathrm{NC}(\mathcal{H})) \leq |V(\mathcal{H})| - \gamma_{E}(\mathcal{H}) -1$.
\end{enumerate}
\end{theorem}
\begin{example}\label{counterexample}
Let $n \geq 5$ be a positive integer, and let $\H = \big([n], \binom{[n]}{3}\big)$ be the hypergraph whose edges are all $3$-subsets of $[n]$. For any set $A \subseteq [n]$ with $|A| \leq n-3$, there exists an edge $e \subseteq [n] \setminus A$, and therefore $\tau(\H) \geq n-2$. Since $[n-2] \cap e \neq \emptyset$ for all $e \in E(\H)$, we conclude that $\tau(\H) = n-2$. Clearly any set of cardinality $3$ strongly totally dominates $[n]$; hence $\gamma_{E}(\H) = 1$. Since no set of cardinality $1$ or $2$ can strongly totally dominate $[n]$, we see that $\tilde{\gamma}(\H) = 3$. Observe that any strongly independent set has cardinality $1$, and therefore $\gamma_{si}(\H) = 2$.
\end{example}
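The four quantities in \Cref{counterexample} can be verified by exhaustive search for a fixed small $n$. The sketch below (with illustrative helper names of our own) does this for $n=5$.

```python
from itertools import combinations

# the hypergraph of the example for n = 5: all 3-subsets of [n]
n = 5
V = list(range(1, n + 1))
E = [frozenset(e) for e in combinations(V, 3)]
edge_set = set(E)

# covering number tau(H): smallest r with some r-set meeting every edge
tau = next(r for r in range(n + 1)
           if any(all(set(B) & e for e in E) for B in combinations(V, r)))

def std_vertex(B, v):
    """B strongly totally dominates v: some B' <= B\\{v} with B'|{v} an edge."""
    rest = sorted(set(B) - {v})
    return any(frozenset(Bp) | {v} in edge_set
               for r in range(1, len(rest) + 1)
               for Bp in combinations(rest, r))

def std(B, W):
    return all(std_vertex(B, v) for v in W)

# strong total domination number
tilde_gamma = next(r for r in range(1, n + 1)
                   if any(std(B, V) for B in combinations(V, r)))

# edgewise-domination number
gamma_E = next(k for k in range(1, len(E) + 1)
               if any(std(set().union(*F), V) for F in combinations(E, k)))

# strong independence domination number
strongly_ind = [I for r in range(1, n + 1) for I in combinations(V, r)
                if all(len(e & set(I)) <= 1 for e in E)]
gamma_si = max(next(r for r in range(1, n + 1)
                    if any(std(B, I) for B in combinations(V, r)))
               for I in strongly_ind)

print(tau, tilde_gamma, gamma_si, gamma_E)   # -> 3 3 2 1
```

The output matches the values $\tau(\H)=n-2=3$, $\tilde{\gamma}(\H)=3$, $\gamma_{si}(\H)=2$ and $\gamma_E(\H)=1$ computed in the example.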
\begin{remark}\label{remark:example}
Note that for the above class of examples, $\tau(\H) > \max\{\tilde{\gamma}(\H), \gamma_{si}(\H), \gamma_E(\H)\}$.
Wegner \cite[Lemma 3]{Wegner75} proved that $\mathcal{L}(X) \leq \mathcal{C}(X)$ for every complex $X$. Thus, for the class of hypergraphs in \Cref{counterexample}, \Cref{thm:bound for nc} yields a much tighter bound on $\mathcal{L}(\mathrm{NC}(\H))$ than those obtained in \cite{KimKim2021}.
\end{remark}
\section{Future directions}
In \Cref{example1}, we gave a simplicial complex $X$ such that $\theta_k(X) < \theta_{k-1}(X)$ for $k=1$; however, we have been unable to find examples for general $k$. We therefore pose the following problem.
\begin{ques}Given a $k \geq 2$, does there exist a simplicial complex $X$ such that $\theta_k(X) < \theta_{k-1}(X)$?
\end{ques}
In \Cref{thm:kvdecomposable}, we proved that $\mathcal{C}(X) = \theta_k(X)$, if $X$ is $k$-vertex decomposable. It would be interesting to find classes of simplicial complexes for which $\theta_k$ is equal to the collapsibility number.
\begin{ques}
Classify simplicial complexes $X$ for which $\mathcal{C}(X)=\theta_k(X)$.
\end{ques}
For complexes $X$ and $Y,$ we know that the collapsibility number of join, $\mathcal{C}(X \ast Y) = \mathcal{C}(X) + \mathcal{C}(Y)$ (\cite[Proposition 4.2.3]{Igor18} and the discussion below it) and $\theta_0(X \ast Y) = \theta_0(X) + \theta_0(Y)$ (\cite[Theorem 11]{TC22}). This raises the following natural question.
\begin{ques}
Is it true that for every $k\ge 1$ and all simplicial complexes $X$ and $Y$, $$\theta_k(X\ast Y)= \theta_k(X)+\theta_k(Y)?$$
\end{ques}
\section*{Acknowledgements} We thank the Department of Mathematics, Indian Institute of Technology Bombay, for hosting us, where the major part of this work was done. The third author is partially supported by the Start-up Research Grant SRG/2022/000314 from SERB, DST, India.
\bibliographystyle{plain}
% https://arxiv.org/abs/2005.11135
\title{Almost sure behavior of linearly edge-reinforced random walks on the half-line}
\begin{abstract}
We study linearly edge-reinforced random walks on $\mathbb{Z}_+$, where each edge $\{x,x+1\}$ has the initial weight $x^{\alpha} \vee 1$, and each time an edge is traversed, its weight is increased by $\Delta$. It is known that the walk is recurrent if and only if $\alpha \leq 1$. The aim of this paper is to study the almost sure behavior of the walk in the recurrent regime. For $\alpha<1$ and $\Delta>0$, we obtain a limit theorem which is a counterpart of the law of the iterated logarithm for simple random walks. This reveals that the speed of the walk with $\Delta>0$ is much slower than with $\Delta=0$. In the critical case $\alpha=1$, our (almost sure) bounds for the trajectory of the walk show that there is a phase transition of the speed at $\Delta=2$.
\end{abstract}
\section{Introduction}
Reinforced random walks (RRWs), introduced by Coppersmith and Diaconis, are a class of self-interacting random walks that have attracted many researchers for three decades or more. Quoting from Diaconis \cite{Diaconis88},
\begin{quote}
{\it ``It was introduced as a simple model of exploring a new city. At first all routes are equally unfamiliar, and one chooses at random between them. As time goes on, routes that have been traveled more in the past are more likely to be traveled."}
\end{quote}
Consider a finite connected graph in which each edge is given a positive initial weight.
At each step the traveller jumps to an adjacent vertex by traversing an edge, with probability proportional to the weight of that edge. Each time an edge is traversed, its weight is increased by a fixed constant $\Delta>0$ {\it (linear edge-reinforcement)}. One can show that the walk is recurrent, that is, every vertex is visited infinitely often with probability one. The limiting density of the normalized occupation measure on the edges, obtained by Coppersmith and Diaconis in 1986, can be found in \cite{Diaconis88} (see also Keane and Rolles \cite{KeaneRolles00}).
In this paper we consider the linearly edge-reinforced random walk (LERRW) on the half-line in the recurrent regime, and give almost sure results on how far the traveller is from the origin. We begin with a motivating example. Let $\{S_n\}$ be the symmetric simple random walk on $\mathbb{Z}$, starting at the origin. Notice that $\{|S_n|\}$ is the symmetric simple random walk on $\mathbb{Z}_+=\{0,1,2,\cdots\}$, with a reflection at the origin. This walk is recurrent a.s., that is,
\[ \liminf_{n \to \infty} |S_n|=0, \quad \mbox{and} \quad \limsup_{n \to \infty} |S_n|=+\infty \quad \mbox{a.s..} \]
More precisely, by the law of the iterated logarithm, we have
\begin{align*}
\limsup_{n \to \infty} \dfrac{|S_n|}{\sqrt{2n\log \log n}} = 1 \quad \mbox{a.s..}
\end{align*}
Now consider the LERRW on $\mathbb{Z}_+$,
where the initial weights are all one and $\Delta=1$. Let $X_n$ be the position of the walk at time $n$, and assume that $X_0=0$.
Again $\{X_n\}$ is recurrent a.s. (see \cite{Davis90}), but is quantitatively quite different from $\{|S_n|\}$: Theorem \ref{thm:Takei20Main1} below
implies that
\begin{align*}
\limsup_{n \to \infty} \dfrac{X_n}{\log_4 n} = 1 \quad \mbox{a.s..}
\end{align*}
In this way, the random walk with reinforcement is much slower than ordinary random walk.
We also discuss how strongly the linear reinforcement affects the long time behavior of the walk on $\mathbb{Z}_+$ with heterogeneous initial weights, where each edge $\{x,x+1\}$ has initial weight $x^{\alpha} \vee 1$. To summarize, our aim is to describe the phase transitions of the speed in the recurrent regime $\alpha \leq 1$, $\Delta \geq 0$.
For $\alpha<1$, we obtain a limit theorem which shows a strong slow-down effect compared with $\Delta=0$, and which is a counterpart of the law of the iterated logarithm for simple random walks. To the best of our knowledge, results of this kind are the first for RRWs. On the other hand, for $\alpha=1$, which is critical for recurrence, essential slow-down effects appear only for $\Delta>2$.
\section{Definitions and Results} \label{sec:DefResults}
\subsection{Model} We define the {\it edge-reinforced random walk} (ERRW) on $\mathbb{Z}_+$, denoted by $\boldsymbol{X} =\{X_n\}$, as follows. This process takes values on the vertices of $\mathbb{Z}_+=\{0,1,2,\ldots\}$, and at each step it jumps to one of the nearest neighbors.
For each $x \in \mathbb{Z}_+$, let $\mathbf{f}_{x} =(f(\ell,x) \colon \ell \in \mathbb{Z}_+)$ be a non-decreasing sequence of positive numbers, called the {\it reinforcement scheme} at $x$.
For $x \in \mathbb{Z}_+$, let $\phi_n(x)$ be the number of traversals of the edge $\{x,\,x+1\}$ by time $n$, namely
\begin{equation}
\label{def:phi}
\phi_n(x) := \sum_{i=1}^n 1_{ \{ \{X_{i-1},\,X_i\} = \{x,\,x+1\} \}}.
\end{equation}
For each $n \geq 0$, the weights at time $n$ are defined by
\[
w_n (x) = f(\phi_n(x),x)\quad \mbox{for $x \in \mathbb{Z}_+$}.
\]
We set $w_n(-1) = 0$ for all $n$, which implies a reflection at the origin.
We call $w_0(x)=f(0,x)$ the {\it initial weight} of the edge $\{x,x+1\}$.
Assume that $P(X_0=0)=1$. The transition probability is given by
\begin{align}
P(X_{n+1}=X_n+1\,|\,X_0,\ldots,X_n)&=1-P(X_{n+1}=X_n-1\,|\,X_0,\ldots,X_n) \notag \\
&=\frac{w_n (X_n)}{w_n(X_n-1)+w_n(X_n)}. \label{eq:ERRWtransition}
\end{align}
The {\it linearly edge-reinforced random walk} (LERRW) is the ERRW whose reinforcement scheme is defined by
\begin{align}
f(\ell,x)=f(0,x) + \ell \Delta \quad \mbox{for $\ell,x \in \mathbb{Z}_+$}. \label{eq:LERRWschemeDef}
\end{align}
We call $\Delta \geq 0$ the {\it reinforcement parameter}.
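The transition rule \eqref{eq:ERRWtransition} together with the linear scheme \eqref{eq:LERRWschemeDef} is straightforward to simulate. The following sketch (not code from the paper; the function and parameter names are ours) generates a trajectory of the LERRW on $\mathbb{Z}_+$ with the initial weights $f(0,x)=x^{\alpha}\vee 1$ considered below.

```python
import random

def lerrw(n_steps, alpha=0.0, delta=1.0, seed=0):
    """Simulate the LERRW on Z_+ with initial weight f(0,x) = x^alpha v 1
    (i.e. 1 at x = 0, x**alpha for x >= 1) and reinforcement parameter delta."""
    rng = random.Random(seed)
    w = {}                                    # current weight of edge {x, x+1}

    def weight(x):
        if x < 0:
            return 0.0                        # w_n(-1) = 0: reflection at 0
        if x not in w:
            w[x] = 1.0 if x == 0 else float(x) ** alpha
        return w[x]

    X, path = 0, [0]
    for _ in range(n_steps):
        right, left = weight(X), weight(X - 1)
        if rng.random() < right / (left + right):
            edge, X = X, X + 1                # traverse {X, X+1}
        else:
            edge, X = X - 1, X - 1            # traverse {X-1, X}
        w[edge] += delta                      # linear reinforcement
        path.append(X)
    return path
```

Since $w_n(-1)=0$, the walk always steps right from the origin, so the trajectory stays in $\mathbb{Z}_+$.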
\subsection{Recurrence classification}
We say that the path $\boldsymbol{X}$ is {\it recurrent} if every point is visited infinitely often, and {\it transient} if every point is visited only finitely many times.
The recurrence problem for the LERRW on $\mathbb{Z}_+$ was solved by Takeshima \cite{Takeshima00} (although only the case $\Delta=1$ is treated in \cite{Takeshima00}, his argument works for any $\Delta > 0$ as well). In Appendix \ref{sec:AppendixRecTraPres}, we give an elementary and short proof of Theorem \ref{thm:Takeshima00linear}.
\begin{theorem}[\cite{Takeshima00}, Theorem 4.1] \label{thm:Takeshima00linear}
Let $\boldsymbol{X}$ be the linearly edge-reinforced random walk on $\mathbb{Z}_+$
with the reinforcement parameter $\Delta \geq 0$. Define
\[ F_0:= \sum_{x=0}^{\infty} \frac{1}{f(0,x)}. \]
\begin{itemize}
\item[(i)] If $F_0=+\infty$, then $\boldsymbol{X}$ is recurrent a.s..
\item[(ii)] If $F_0<+\infty$, then $\boldsymbol{X}$ is transient a.s..
\end{itemize}
\end{theorem}
Theorem \ref{thm:Takeshima00linear} shows that the recurrence of the LERRW is completely determined by the initial weights: In particular, if the walk is transient when $\Delta=0$, then it never becomes recurrent even if $\Delta>0$ is very large.
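Since the criterion depends only on the divergence of $F_0$, a quick numerical look at partial sums illustrates the dichotomy for the weights $f(0,x)=x^{\alpha}\vee 1$ considered below (a sketch; the function name is ours).

```python
def F0_partial(alpha, N):
    """Partial sum of F_0 = sum_{x >= 0} 1/f(0,x), where f(0,0) = 1 and
    f(0,x) = x**alpha for x >= 1."""
    return 1.0 + sum(x ** (-alpha) for x in range(1, N + 1))

# harmonic-like divergence at alpha = 1 (recurrence) ...
print(F0_partial(1.0, 10 ** 4) - F0_partial(1.0, 10 ** 3))   # ~ log 10 ~ 2.30
# ... versus a rapidly vanishing tail at alpha = 1.5 (transience)
print(F0_partial(1.5, 10 ** 4) - F0_partial(1.5, 10 ** 3))   # ~ 0.04
```

Of course the partial sums only illustrate the behavior; the actual dichotomy $F_0=+\infty \iff \alpha\leq 1$ is the usual $p$-series criterion.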
\subsection{Main results: Almost sure behavior of LERRWs} Consider the LERRW $\boldsymbol{X}$ on $\mathbb{Z}_+$ with the initial weight
\begin{align}
f(0,x) = x^{\alpha} \vee 1 =
\begin{cases}
1 & \mbox{($x=0$),} \\
x^{\alpha} & \mbox{($x \in \mathbb{N}=\{1,2,\cdots\}$),} \\
\end{cases}
\label{eq:InitWeightPowerIkenami}
\end{align}
and the reinforcement parameter $\Delta>0$.
By Theorem \ref{thm:Takeshima00linear}, $\boldsymbol{X}$ is recurrent a.s. if and only if $\alpha \leq 1$. Ikenami \cite{Ikenami01} showed in his Master's thesis that if $0 \leq \alpha < 1$ and the reinforcement parameter $\Delta=1$, then for any $\varepsilon>0$,
\[
\lim_{n \to \infty} \frac{X_n}{(\log n)^{(1+\varepsilon)/(1-\alpha)}} =0
\quad \mbox{a.s..}
\]
The proof in \cite{Ikenami01} is inspired by the Lyapunov function method in Comets, Menshikov, and Popov \cite{CometsMenshikovPopov98}.
Our first result is for the off-critical case, $\alpha<1$, and the precise order of oscillation of $X_n$ is indeed $(\log n)^{1/(1-\alpha)}$.
\begin{theorem} \label{thm:Takei20Main1} Assume that $\alpha <1$ and $\Delta >0$. Let
\begin{align}
K(\alpha,\Delta) :=
\begin{cases}
\dfrac{1-\alpha}{2\Delta} &(\alpha<0), \\[2mm]
\left( \Psi\left(\dfrac{1}{2\Delta}+\dfrac{1}{2}\right)-\Psi\left(\dfrac{1}{2}\right) \right)^{-1}&(\alpha=0), \\[3mm]
\dfrac{1-\alpha}{\Delta} &(0<\alpha<1), \\
\end{cases}
\label{eq:Takei20Main1Const}
\end{align}
where $\Psi(z) = \Gamma'(z) / \Gamma(z)$ is the digamma function.
The LERRW $\boldsymbol{X}$ with the initial weight \eqref{eq:InitWeightPowerIkenami} and the reinforcement parameter $\Delta$ satisfies that
\begin{align*}
\limsup_{n \to \infty} \dfrac{X_n}{\{ K(\alpha,\Delta)\log n \}^{1/(1-\alpha)}} = 1
\quad \mbox{a.s..}
\end{align*}
\end{theorem}
\begin{remark}
It is known that $K(0,1)=1/(\log 4)$, and $K(0,\Delta) \sim 1/(2\Delta)$ as $\Delta \to \infty$.
(See \eqref{eq:digammaDifflog4} and Lemma \ref{lem:Takeshima00(4.10)(4.12)improved} below, respectively.)
\end{remark}
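The value $K(0,1)=1/\log 4$ can be checked numerically from \eqref{eq:Takei20Main1Const}, approximating the digamma function by a finite difference of $\log\Gamma$ (a sketch using only the standard library; the helper names and the step size $h$ are our own choices).

```python
import math

def digamma(x, h=1e-6):
    """Crude numerical digamma: central difference of log Gamma."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def K0(delta):
    """K(0, Delta) = 1 / (Psi(1/(2 Delta) + 1/2) - Psi(1/2))."""
    return 1.0 / (digamma(1.0 / (2 * delta) + 0.5) - digamma(0.5))

# Psi(1) - Psi(1/2) = 2 log 2 = log 4, so K(0,1) = 1/log 4
print(abs(K0(1.0) - 1.0 / math.log(4)))   # ~ 0
```

This rests on the classical values $\Psi(1)=-\gamma$ and $\Psi(1/2)=-\gamma-2\log 2$.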
The next theorem is concerning the growth in the critical case, $\alpha=1$.
\begin{theorem} \label{thm:Takei20Main2} Assume that $\alpha =1$ and $\Delta >0$, and consider the LERRW $\boldsymbol{X}$ with the initial weight \eqref{eq:InitWeightPowerIkenami} and the reinforcement parameter $\Delta$.
\begin{itemize}
\item[(i)] If $\Delta >2$, then for any $\varepsilon>0$,
\begin{align*}
\limsup_{n \to \infty} \dfrac{X_n}{n^{(1-\varepsilon)/\Delta}} =+\infty,
\quad \mbox{and} \quad
\lim_{n \to \infty} \dfrac{X_n}{n^{(1+\varepsilon)/\Delta}} = 0\quad \mbox{a.s..}
\end{align*}
\item[(ii)] If $0< \Delta \leq 2$, then for any $\varepsilon>0$,
\begin{align*}
\limsup_{n \to \infty} \dfrac{X_n}{n^{(1-\varepsilon)/2}} =+\infty,
\quad \mbox{and} \quad
\lim_{n \to \infty} \dfrac{X_n}{n^{(1+\varepsilon)/2}} = 0
\quad \mbox{a.s..}
\end{align*}
\end{itemize}
\end{theorem}
\subsection{Effect of linear reinforcement} For comparison, we give almost sure bounds for unreinforced case. In the case $\alpha<1$ the speed of the walker becomes much slower as soon as $\Delta>0$, while it is not in the critical case $\alpha=1$.
\begin{theorem} \label{thm:Takei20Main3}
Consider the LERRW $\boldsymbol{X}$ with the initial weight \eqref{eq:InitWeightPowerIkenami} and the reinforcement parameter $\Delta=0$ (i.e. unreinforced).
\begin{itemize}
\item[(i)] If $\alpha < -1$, then for any $\varepsilon>0$,
\begin{align*}
\limsup_{n \to \infty} \dfrac{X_n}{n^{1/(1-\alpha)}} >0,
\quad \mbox{and} \quad
\lim_{n \to \infty} \dfrac{X_n}{\{ n(\log n)^{1+\varepsilon}\}^{1/(1-\alpha)}} = 0
\quad \mbox{a.s..}
\end{align*}
\item[(ii)] If $\alpha = -1$, then for any $\varepsilon>0$,
\begin{align*}
\limsup_{n \to \infty} \dfrac{X_n}{n^{(1-\varepsilon)/2}} =+\infty,
\quad \mbox{and} \quad
\lim_{n \to \infty}\dfrac{X_n}{\{ n(\log n)^{1+\varepsilon}\}^{1/2}} = 0
\quad \mbox{a.s..}
\end{align*}
\item[(iii)] If $-1<\alpha \leq 1$, then for any $\varepsilon>0$,
\begin{align*}
\limsup_{n \to \infty} \dfrac{X_n}{n^{1/2}} >0,
\quad \mbox{and} \quad
\lim_{n \to \infty} \dfrac{X_n}{\{ n(\log n)^{1+\varepsilon}\}^{1/2}} = 0
\quad \mbox{a.s..}
\end{align*}
\end{itemize}
\end{theorem}
\iffalse
Our results are summarized in Table \ref{table:Takei20LERRW}.
\begin{table}[th]
\begin{tabular}{|c||c|c|} \hline
& $\Delta=0$ & $\Delta>0$ \\ \hline
$\alpha<-1$ & around $n^{1/(1-\alpha)}$ & $(\log n)^{1/(1-\alpha)}$ \\ \cline{1-2}
$ -1 \leq \alpha <1$ & & \\ \cline{1-1} \cline{3-3}
$\alpha=1$ & around $n^{1/2}$ & around $n^{1/\Delta}$ ($\Delta > 2$)\\
& & around $n^{1/2}$ ($0 < \Delta \leq 2$) \\ \hline
\end{tabular}
\caption {The speed of the LERRW $\boldsymbol{X}=\{X_n\}$ with $f(0,x)=x^{\alpha} \vee 1$ and $\Delta \geq 0$.}
\label{table:Takei20LERRW}
\end{table}
\fi
\subsection{Related works}
We briefly review the related literature concerning limit theorems for ERRWs in one dimension. In Davis \cite{Davis90}, the strong law of large numbers
\[ \lim_{n \to \infty} \dfrac{X_n}{n}=0\quad \mbox{a.s.} \]
is proved for initially fair, sequence-type RRWs (that is, $\mathbf{f}_x$ does not depend on $x$). See also Takeshima \cite{Takeshima00} for a possible generalization.
For limit theorems for sublinear ERRWs, see Davis \cite{Davis96} and T\'{o}th \cite{Toth96AOP,Toth97SSMHungary}, among others.
The continuous time vertex-reinforced jump process (VRJP) was introduced by Davis and Volkov \cite{DavisVolkov02}. The LERRW and the VRJP are known to be closely related, see Sabot and Tarr\`{e}s \cite{SabotTarres15} and references therein. The analog of Theorem \ref{thm:Takeshima00linear} for the VRJP on $\mathbb{Z}_+$ is proved in Davis and Dean \cite{DavisDean10}. For the VRJP $\{X_t\}$ on $\mathbb{Z}_+$ corresponding to the LERRW with $f(\ell,x) = 1+\ell$, Davis and Volkov \cite{DavisVolkov02} showed that
\begin{align*}
\lim_{t \to \infty} \dfrac{1}{\log t} \left(\max_{0 \leq s \leq t}X_s \right) = 2.768\cdots \quad \mbox{a.s..}
\end{align*}
In Lupu, Sabot, and Tarr\'{e}s \cite{LupuSabotTarres19}, the continuous space limit of the VRJP in one dimension is constructed, and it is also obtained as a fine mesh limit of the LERRW.
\iffalse
\subsection{Organization of the paper}
The rest of the paper is organized as follows. In section \ref{sec:prelim}, we summarize preliminary results for a random walk in random environment (RWRE) closely related to the LERRW. Ikenami's proof \cite{Ikenami01}
is inspired by the Lyapunov function method due to Comets, Menshikov, and Popov \cite{CometsMenshikovPopov98}, which is further developed by Menshikov and Wade \cite{MenshikovWade08SPA} and Hryniv, Menshikov, and Wade \cite{HrynivMenshikovWade13}, among others. The readers interested in this powerful method are referred to the monograph by Menshikov, Popov, and Wade \cite{MenshikovPopovWade16}. In section \ref{sec:asbound}, we list important tools for obtaining almost sure bounds, and prove Theorem \ref{thm:Takei20Main3}. The proofs of our main results, Theorems \ref{thm:Takei20Main1} and \ref{thm:Takei20Main2}, are given in sections \ref{sec:proofmain} and \ref{sec:ProofPropTakei20SxAsymp}. In the appendix \ref{sec:AppendixRecTraPres}, we give an another proof of recurrence classification for the LERRW.
\fi
\section{Preliminaries} \label{sec:prelim}
\subsection{Reduction of LERRW to RWRE}
Following Pemantle \cite{Pemantle88}, we introduce a random walk in random environment (RWRE), which is equivalent to the LERRW on $\mathbb{Z}_+$ with $\Delta>0$.
Let $p_0=1$. Assume that $\{ p_i(\omega) \}_{i \in \mathbb{N} }$ is a sequence of independent random variables, and the distribution of $p_i$ is $\mbox{\rm Beta}\left(\dfrac{w_0(i)}{2\Delta},\dfrac{w_0(i-1)+\Delta}{2\Delta} \right)$,
that is, for $0\leq \alpha < \beta \leq 1$,
\begin{align*}
&\mathbb{P}(\alpha \leq p_i \leq \beta) \\
&= B \left(\frac{w_0(i)}{2\Delta},\frac{w_0(i-1)+\Delta}{2\Delta} \right)^{-1} \int_{\alpha}^{\beta} u^{\frac{w_0(i)}{2\Delta}-1}(1-u)^{\frac{w_0(i-1)+\Delta}{2\Delta}-1} \,du,
\end{align*}
where
\begin{align*}
B \left(\frac{w_0(i)}{2\Delta},\frac{w_0(i-1)+\Delta}{2\Delta} \right) = \int_0^1 t^{\frac{w_0(i)}{2\Delta}-1}(1-t)^{\frac{w_0(i-1)+\Delta}{2\Delta}-1} \,dt.
\end{align*}
The expectation and variance under $\mathbb{P}$ are denoted by $\mathbb{E}[\,\cdot \,]$ and $\mathbb{V}[\,\cdot \,]$, respectively.
Given a random environment $\{p_i(\omega)\}_{i \in \mathbb{Z}_+}$, a Markov chain $\boldsymbol{Y}=\{Y_n\}$ on $\mathbb{Z}_+$ is defined by $\mathbf{P}^{\omega}_{i_0} (Y_0=i_0)=1$ and
\begin{align*}
\begin{cases}
\mathbf{P}^{\omega}_{i_0} (Y_{n+1}=i+1\,|\,Y_n=i)=p_i(\omega), \\
\mathbf{P}^{\omega}_{i_0} (Y_{n+1}=i-1\,|\,Y_n=i)=q_i(\omega):=1-p_i(\omega) \\
\end{cases}
\end{align*}
for $n \geq 0$ and $i \in \mathbb{Z}_+$. The next result is found in \cite{Pemantle88}, Section 3. (See also Eckhoff and Rolles \cite{EckhoffRolles09} for the uniqueness of representation.)
\begin{lemma}
For any $n \geq 0$ and any $i_0,i_1,\ldots,i_n \in \mathbb{Z}_+$, we have
\begin{align*}
P(X_1=i_1,\ldots,X_n=i_n\mid X_0=i_0)
= \mathbb{E}\left[ \mathbf{P}^{\omega}_{i_0}(Y_1=i_1,\ldots,Y_n=i_n) \right].
\end{align*}
\end{lemma}
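To make the quenched description concrete, here is a minimal simulation sketch (ours, not from the paper): it samples the Beta environment with the standard library's \texttt{random.betavariate} and runs the walk $Y$ in it. The parameter choice $\alpha=0$, $\Delta=1$, the seed, and the horizon are arbitrary assumptions for illustration.

```python
import random

# Sketch: sample the random environment and run the quenched walk Y in it.
# Assumed parameters: alpha = 0, Delta = 1, so w0(x) = 1 for every x and
# p_i ~ Beta(w0(i)/(2*Delta), (w0(i-1)+Delta)/(2*Delta)) = Beta(1/2, 1).
random.seed(0)
alpha, delta = 0.0, 1.0

def w0(x):
    # initial weight of the edge {x, x+1}; the case x = 0 is set to 1
    return 1.0 if x == 0 else float(x) ** alpha

N = 1001                                  # environment on the sites 0, ..., N-1
p = [1.0] + [random.betavariate(w0(i) / (2 * delta),
                                (w0(i - 1) + delta) / (2 * delta))
             for i in range(1, N)]

# quenched dynamics: from y, step to y+1 with probability p[y], else to y-1
Y, path = 0, [0]
for _ in range(1000):
    Y = Y + 1 if random.random() < p[Y] else Y - 1
    path.append(Y)
```

Averaging functionals of such trajectories over many sampled environments approximates the annealed law appearing in the lemma above.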
\subsection{RW in a fixed environment}
In this subsection, we fix an environment $\{p_i\}$.
Define $\{\gamma_x \}_{x \in \mathbb{Z}_+}$ by
\[ \gamma_0:=1,\quad \mbox{and} \quad \gamma_x := \prod_{i=1}^{x} \dfrac{q_i}{p_i} \quad \mbox{for $x\in \mathbb{N}$.} \]
In the electric network interpretation (see e.g. Chapter 2 in Lyons and Peres \cite{LyonsPeres16}), $\gamma_x$ is the resistance of the edge $\{x,x+1\}$.
Let
\begin{align*}
h(x) := \sum_{i=0}^{x-1} \gamma_i
\end{align*}
be a harmonic function with $h(0)=0$ and $h(1)=1$. The effective resistance from the origin to infinity is $\displaystyle h(\infty) = \sum_{i=0}^\infty \gamma_i$.
Using the conductance $w_x:=1/\gamma_x$ of the edge $\{x,x+1\}$, we have
\begin{align}
p_x = \dfrac{w_x}{w_{x-1}+w_x},
\quad \mbox{and} \quad
q_x = \dfrac{w_{x-1}}{w_{x-1}+w_x}\quad \mbox{for $x \in \mathbb{Z}_+$},
\label{eq:1dRWREpxqxweight}
\end{align}
where $w_{-1}:=0$. Define $\{\pi_x \}_{x \in \mathbb{Z}_+}$ by
\[
\pi_x:=w_{x-1}+w_x
\quad \mbox{for $x\in \mathbb{Z}_+$.} \]
From \eqref{eq:1dRWREpxqxweight}, we can see that $\{\pi_x \}$ is a reversible measure.
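As a quick sanity check on \eqref{eq:1dRWREpxqxweight} and on the reversibility of $\{\pi_x\}$, one can verify the detailed balance relation $\pi_x p_x=\pi_{x+1}q_{x+1}$ in exact rational arithmetic. The environment below is an arbitrary illustrative choice, not taken from the model.

```python
from fractions import Fraction

# Illustrative (not model-specific) environment: p_0 = 1 and four more p_i.
p = [Fraction(1), Fraction(1, 3), Fraction(2, 5), Fraction(3, 4), Fraction(1, 2)]
q = [1 - t for t in p]

# gamma_0 = 1, gamma_x = prod_{i<=x} q_i/p_i; conductances w_x = 1/gamma_x.
gamma = [Fraction(1)]
for i in range(1, len(p)):
    gamma.append(gamma[-1] * q[i] / p[i])
w = [1 / g for g in gamma]

# pi_x = w_{x-1} + w_x with the convention w_{-1} = 0.
pi = [(w[x - 1] if x > 0 else Fraction(0)) + w[x] for x in range(len(w))]
```

The assertions below recover $p_x,q_x$ from the conductances exactly as in \eqref{eq:1dRWREpxqxweight} and confirm detailed balance edge by edge.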
Notice that
\begin{align} Z:=\sum_{i=0}^{\infty} \pi_i<+\infty
\quad \mbox{if and only if} \quad
\sum_{i=0}^{\infty} \dfrac{1}{\gamma_i}<+\infty.
\label{equiv:PosRecCriterion}
\end{align}
The following recurrence classification is classical (see e.g. Theorem 2.2.5 in \cite{MenshikovPopovWade16}).
\begin{lemma} Consider the random walk $\boldsymbol{Y}=\{Y_n\}$ in a fixed environment $\{p_i\}$.
\begin{itemize}
\item[(i)] If $\displaystyle \sum_{x=0}^{\infty} \gamma_x <+\infty$, then $\boldsymbol{Y}$ is transient.
\item[(ii)] If $\displaystyle \sum_{x=0}^{\infty} \gamma_x = \sum_{x=0}^{\infty} \dfrac{1}{\gamma_x} =+\infty$, then $\boldsymbol{Y}$ is null recurrent.
\item[(iii)] If $\displaystyle \sum_{x=0}^{\infty} \gamma_x =+\infty$ and $\displaystyle \sum_{x=0}^{\infty} \dfrac{1}{\gamma_x} <+\infty$, then $\boldsymbol{Y}$ is positive recurrent. The unique stationary distribution is given by
\[ \pi(x):= \dfrac{1}{Z} \pi_x \quad \mbox{for $x \in \mathbb{Z}_+$.}\]
\end{itemize}
\end{lemma}
\section{Almost sure bound} \label{sec:asbound}
\subsection{Almost sure bound by the Lyapunov function method}
We consider the RWRE $\boldsymbol{Y}$, defined in the previous section. The first hitting time to $x \in \mathbb{Z}_+$ is defined by
\[ \tau_x := \inf \{ n \geq 0 : Y_n= x \}. \]
The next lemma is a consequence of the hitting time identity (see Proposition 2.20 in \cite{LyonsPeres16}).
\begin{lemma} \label{lem:Etaux} Define
\begin{align}
T^{\omega}(x):= \sum_{j=0}^{x-1} \pi_j \{ h(x)-h(j) \}
= \sum_{j=0}^{x-1} \pi_j \sum_{i=j}^{x-1} \gamma_i
=\sum_{i=0}^{x-1} \gamma_i \sum_{j=0}^i \pi_j
\label{eq:BDchainT(x)Def}
\end{align}
for $x \in \mathbb{Z}_+$. Then the expectation of $\tau_x$ under $\mathbf{P}^{\omega}_0$ is given by $\mathbf{E}^{\omega}_0[\tau_x]=T^{\omega}(x)$.
\end{lemma}
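Lemma \ref{lem:Etaux} is easy to confirm numerically: first-step analysis gives $m_i=\mathbf{E}^{\omega}_i[\tau_{i+1}]=(1+q_im_{i-1})/p_i$ with $m_0=1$ (since $p_0=1$), and $\mathbf{E}^{\omega}_0[\tau_x]=m_0+\cdots+m_{x-1}$ should coincide with $T^{\omega}(x)$. A sketch with an arbitrary seeded environment (our assumption, for illustration only):

```python
import random

random.seed(1)
X = 30
p = [1.0] + [random.uniform(0.2, 0.8) for _ in range(1, X)]
q = [1.0 - t for t in p]

# resistances gamma_i, conductances w_i, and the measure pi_j of Section 2.2
gamma = [1.0]
for i in range(1, X):
    gamma.append(gamma[-1] * q[i] / p[i])
w = [1.0 / g for g in gamma]
pi = [(w[j - 1] if j > 0 else 0.0) + w[j] for j in range(X)]

def T(x):
    # T(x) = sum_{i<x} gamma_i * sum_{j<=i} pi_j, eq. (BDchainT(x)Def)
    total, inner = 0.0, 0.0
    for i in range(x):
        inner += pi[i]
        total += gamma[i] * inner
    return total

# first-step analysis: m_i = (1 + q_i m_{i-1}) / p_i, with m_0 = 1
m = [1.0]
for i in range(1, X - 1):
    m.append((1.0 + q[i] * m[i - 1]) / p[i])
E_tau = [0.0]                  # E_tau[x] = E_0[tau_x] = m_0 + ... + m_{x-1}
for mi in m:
    E_tau.append(E_tau[-1] + mi)
```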
To obtain the almost sure upper bound, we use the following lemma (see Lemma 6.1.4 and Theorem 2.8.1 in \cite{MenshikovPopovWade16}).
\begin{lemma}
\label{lem:MenshikovWade08Lem9(i)}
Let $t_1$ be an increasing, nonnegative function on $\mathbb{Z}_+$ with $t_1(x) \to \infty$ as $x \to \infty$.
If
\[ \mbox{$\mathbb{P}$-a.e. $\omega$,} \quad T^{\omega}(x) \geq t_1(x) \quad \mbox{for all but finitely many $x \in \mathbb{Z}_+$}, \]
then for any $\varepsilon > 0$, $\mathbb{P}$-a.e. $\omega$ and $\mathbf{P}^{\omega}_0$-a.s.,
\[ Y_n \leq t_1^{-1} \bigl( 2n\{\log (2n)\}^{1+\varepsilon} \bigr)\quad \mbox{for all but finitely many $n$}. \]
\end{lemma}
As for the almost sure lower bound, we use the following version of Lemma 4.3 in \cite{HrynivMenshikovWade13}. No essential change is needed for the proof.
\begin{lemma}
\label{lem:HrynivMenshikovWade13Lem4.2bis}
Let $t_2$ be an increasing, nonnegative function on $\mathbb{Z}_+$ with
\[\sum_{x=1}^{\infty} \dfrac{t_2(x)}{t_2(x^2)}<\infty.
\]
If
\begin{align*}
\mbox{$\mathbb{P}$-a.e. $\omega$,} \quad T^{\omega}(x) \leq t_2(x) \quad \mbox{for infinitely many $x \in \mathbb{Z}_+$},
\end{align*}
then for any $\varepsilon > 0$,
\[ \mbox{$\mathbb{P}$-a.e. $\omega$ and $\mathbf{P}^{\omega}_0$-a.s.,}\quad Y_n \geq t_2^{-1} \bigl( (1-\varepsilon)n \bigr)\quad \mbox{for all but finitely many $n$}. \]
\end{lemma}
We list useful bounds for $T^{\omega}(x)$, easily derived from \eqref{eq:BDchainT(x)Def}: Eqs. \eqref{ineq:T(x)lower} and \eqref{ineq:T(x)upper} are due to \cite{HrynivMenshikovWade13}, Lemma 3.5.
\begin{lemma}
\label{lem:HrynivMenshikovWade13Lem3.5}
For any $x \in \mathbb{Z}_+$, we have
\begin{align}
T^{\omega}(x) & \geq h(x) \geq \max_{0 \leq i < x} \gamma_i \geq \gamma_{x-1},\quad \mbox{and} \label{ineq:T(x)lower} \\
T^{\omega}(x) &\leq 2 x^2 \left( \max_{0 \leq i < x} \gamma_i\right)\left( \max_{0 \leq j < x} \dfrac{1}{\gamma_j}\right). \label{ineq:T(x)upper}
\end{align}
If $\displaystyle Z^{\omega}=\sum_{i=0}^{\infty} \pi_i<\infty$, then \eqref{ineq:T(x)upper} can be improved as follows:
\begin{align}
T^{\omega}(x) & \leq Z^{\omega} h(x) \leq Z^{\omega} x \left( \max_{0 \leq i < x} \gamma_i \right). \label{ineq:T(x)upperPosRec}
\end{align}
\end{lemma}
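The bounds of Lemma \ref{lem:HrynivMenshikovWade13Lem3.5} can be checked in the same numerical spirit; once again the seeded environment below is an arbitrary illustrative assumption.

```python
import random

random.seed(2)
X = 40
p = [1.0] + [random.uniform(0.1, 0.9) for _ in range(1, X)]
q = [1.0 - t for t in p]

gamma = [1.0]
for i in range(1, X):
    gamma.append(gamma[-1] * q[i] / p[i])
w = [1.0 / g for g in gamma]
pi = [(w[j - 1] if j > 0 else 0.0) + w[j] for j in range(X)]

# h(x) = sum_{i<x} gamma_i and T(x) = sum_{i<x} gamma_i sum_{j<=i} pi_j
h = [0.0]
for g in gamma:
    h.append(h[-1] + g)

def T(x):
    total, inner = 0.0, 0.0
    for i in range(x):
        inner += pi[i]
        total += gamma[i] * inner
    return total
```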
\iffalse
\begin{proof}
\eqref{ineq:T(x)lower} and \eqref{ineq:T(x)upperPosRec} are direct consequences from \eqref{eq:BDchainT(x)Def}. \eqref{ineq:T(x)upper} follows from
\begin{align*}
T^{\omega} (x) = \sum_{i=0}^{x-1} \gamma_i \sum_{j=0}^i \pi_j
&\leq 2\left( \max_{0 \leq j < x} \dfrac{1}{\gamma_j}\right) \sum_{i=0}^{x-1} (i+1) \gamma_i \\
&\leq 2\left( \max_{0 \leq i < x} \gamma_i\right)\left( \max_{0 \leq j < x} \dfrac{1}{\gamma_j}\right) \sum_{i=0}^{x-1} (i+1) \\
&= x(x+1) \left( \max_{0 \leq i < x} \gamma_i\right)\left( \max_{0 \leq j < x} \dfrac{1}{\gamma_j}\right),
\end{align*}
and $x(x+1) \leq 2x^2$ for $x>0$.
\end{proof}
\fi
\subsection{LERRW with $\Delta=0$} As a warm-up, we prove almost sure bounds for the case $\Delta=0$. We use the next lemma, which is an infinite series version of l'H\^{o}pital's rule, due to Stolz and Ces\`aro.
\begin{lemma}[see e.g. Knopp \cite{Knopp56Dover}, p. 34] \label{lem:StolzCesaro} If a real sequence $\{a_n\}$ and a positive sequence $\{b_n\}$ satisfy
\[ \lim_{n\to \infty} \dfrac{a_n}{b_n} = L \in \mathbb{R} \cup \{ \pm \infty \},\quad\mbox{and}\quad \sum_{n=1}^{\infty} b_n = +\infty, \]
then
\[ \lim_{n\to \infty} \dfrac{\sum_{k=1}^n a_k}{\sum_{k=1}^n b_k} = L. \]
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm:Takei20Main3}] Assume that $\alpha \leq 1$, and consider the random walk $\boldsymbol{Y}$ in a fixed environment $\{p_i\}$ given by \eqref{eq:1dRWREpxqxweight} and $w_x=x^{\alpha} \vee 1$.
Notice that $\displaystyle Z=\sum_{j=0}^{\infty} \pi_j<+\infty$ if and only if $\alpha <-1$. Throughout this proof we write $T(x)$ for $T^{\omega}(x)$.
\noindent (i) Suppose that $\alpha<-1$. Since
\begin{align*}
h(x)=1+\sum_{i=1}^{x-1} i^{-\alpha} \sim \dfrac{1}{1-\alpha}x^{1-\alpha} \quad \mbox{as $x \to \infty$,}
\end{align*}
it follows from Lemma \ref{lem:HrynivMenshikovWade13Lem3.5} that
for any $\varepsilon > 0$,
\begin{align*}
\dfrac{1-\varepsilon}{1-\alpha}x^{1-\alpha} \leq T(x) \leq \dfrac{(1+\varepsilon)Z}{1-\alpha}x^{1-\alpha} \quad \mbox{for all but finitely many $x$.}
\end{align*}
By Lemmata \ref{lem:MenshikovWade08Lem9(i)} and \ref{lem:HrynivMenshikovWade13Lem4.2bis}, we have
\begin{align*}
Y_n \leq \left(\dfrac{1-\alpha}{1-\varepsilon} \cdot 2n\{\log(2n)\}^{1+\varepsilon/2}\right)^{1/(1-\alpha)} \quad \mbox{for all large $n$,}
\intertext{and}
Y_n \geq \left\{\dfrac{1-\alpha}{(1+\varepsilon)Z} \cdot (1-\varepsilon)n\right\}^{1/(1-\alpha)}\mbox{for infinitely many $n$,}
\end{align*}
$\mathbf{P}_0$-a.s. (Notice that $Z \geq \pi_0=1$). Thus we obtain the conclusion of (i).
\noindent (ii) When $\alpha=-1$, we have
\begin{align*}
\gamma_i \sum_{j=0}^i \pi_j &= i\left( 2+ 2\sum_{j=1}^i \dfrac{1}{j} -\dfrac{1}{i}\right) \sim 2i \log i \quad \mbox{as $i \to \infty$.}
\end{align*}
By Lemma \ref{lem:StolzCesaro},
\begin{align*}
T(x) &= \sum_{i=0}^{x-1} \gamma_i \sum_{j=0}^i \pi_j \sim x^2\log x \quad \mbox{as $x \to \infty$.}
\end{align*}
For simplicity, we content ourselves with a weaker bound: For any $\varepsilon > 0$,
\begin{align*}
(1-\varepsilon) x^2 \leq T(x) \leq (1+\varepsilon) x^{2+\varepsilon}\quad \mbox{for all but finitely many $x$.}
\end{align*}
We can obtain the conclusion of (ii) by a calculation similar to that in (i).
\noindent (iii) Suppose that $-1 < \alpha \leq 1$. We have
\begin{align*}
\gamma_i \sum_{j=0}^i \pi_j &= i^{-\alpha}\left( 2+ 2\sum_{j=1}^i j^{\alpha} - i^{\alpha} \right) \sim
\dfrac{2}{\alpha+1}i \quad \mbox{as $i \to \infty$,}
\intertext{and}
T(x) &= \sum_{i=0}^{x-1} \gamma_i \sum_{j=0}^i \pi_j \sim \dfrac{1}{\alpha+1} x^2 \quad \mbox{as $x \to \infty$,}
\end{align*}
again by Lemma \ref{lem:StolzCesaro}. The rest of the proof is the same as above.
\end{proof}
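The asymptotics $T(x)\sim x^2\log x$ from part (ii) can also be observed numerically, although the relative error decays only like $1/\log x$, so we check slow monotone improvement rather than closeness. The sketch below follows the proof's setting $\alpha=-1$, where $\gamma_0=1$ and $\gamma_i=i$ for $i\geq 1$; the cut-offs $10^4$ and $10^6$ are arbitrary.

```python
import math

# alpha = -1 as in part (ii): gamma_0 = 1, gamma_i = i (i >= 1),
# conductances w_0 = 1, w_i = 1/i, and pi_j = w_{j-1} + w_j.
def T_over_x2logx(x):
    # returns T(x) / (x^2 log x), with T(x) computed via a running prefix sum
    P, total, w_prev = 0.0, 0.0, 0.0      # P accumulates sum_{j<=i} pi_j
    for i in range(x):
        w_i = 1.0 if i == 0 else 1.0 / i
        g_i = 1.0 if i == 0 else float(i)
        P += w_prev + w_i                 # pi_i = w_{i-1} + w_i
        total += g_i * P
        w_prev = w_i
    return total / (x * x * math.log(x))

r_small, r_large = T_over_x2logx(10**4), T_over_x2logx(10**6)
# the relative error is of order 1/log x, so convergence to 1 is slow
```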
\section{Proof of main theorems} \label{sec:proofmain}
The following proposition allows us to estimate the random resistance $\{\gamma_x\}_{x\in\mathbb{Z}_+}$.
\begin{proposition} \label{prop:Takei20SxAsymp} Assume that $\{ p_i(\omega) \}_{i \in \mathbb{N} }$ is a sequence of independent random variables, and the distribution of $p_i$ is $\mbox{\rm Beta}\left(\dfrac{w_0(i)}{2\Delta},\dfrac{w_0(i-1)+\Delta}{2\Delta} \right)$. Let
\[ S_x := \log \gamma_x = \sum_{i=1}^x \log \dfrac{1-p_i}{p_i} \quad \mbox{for $x \in \mathbb{N}$}. \]
\begin{itemize}
\item[(i)] If $\alpha<1$ and $\Delta >0$, then
\[ \lim_{x \to \infty} \dfrac{S_x}{ x^{1-\alpha}}=\dfrac{1}{K(\alpha,\Delta)}\quad \mbox{$\mathbb{P}$-a.e. $\omega$,} \]
where $K(\alpha,\Delta)$ is defined in \eqref{eq:Takei20Main1Const}.
\item[(ii)] If $\alpha=1$ and $\Delta>0$, then
\[ \lim_{x \to \infty} \dfrac{S_x}{ \log x}=\Delta-1 \quad \mbox{$\mathbb{P}$-a.e. $\omega$.} \]
\end{itemize}
\end{proposition}
The proof of Proposition \ref{prop:Takei20SxAsymp} consists of several steps, and will be given in the next section.
We prove our main results first. Notice that
\begin{align}
\displaystyle Z^{\omega} = \sum_{x=0}^{\infty} \pi_x<+\infty \quad \mbox{if $\alpha<1$ and $\Delta>0$, or $\alpha=1$ and $\Delta>2$.} \label{cond:LERRWPosRec}
\end{align}
\begin{proof}[Proof of Theorem \ref{thm:Takei20Main1}] We fix $\alpha<1$ and $\Delta>0$, and write $K=K(\alpha,\Delta)$.
By Proposition \ref{prop:Takei20SxAsymp} (i) and Eq. \eqref{ineq:T(x)lower}, for any $\varepsilon>0$,
\begin{align*}
T^{\omega}(x) \geq \gamma_{x-1} \geq \exp\left(\dfrac{1-\varepsilon}{K} x^{1-\alpha} \right) \quad \mbox{for all large $x$.}
\end{align*}
By Lemma \ref{lem:MenshikovWade08Lem9(i)}, $\mathbb{P}$-a.e. $\omega$ and $\mathbf{P}^{\omega}_0$-a.s.,
\begin{align*}
Y_n \leq \left\{ \dfrac{K \log [2n\{\log (2n)\}^{1+\varepsilon}]}{1-\varepsilon}\right\}^{1/(1-\alpha)} \quad \mbox{for all large $n$,}
\end{align*}
which implies
\begin{align*}
\limsup_{n \to \infty} \dfrac{Y_n}{(K\log n)^{1/(1-\alpha)}} \leq \dfrac{1}{(1-\varepsilon)^{1/(1-\alpha)}}.
\end{align*}
Thus we have
\begin{align*}
\limsup_{n \to \infty} \dfrac{X_n}{(K\log n)^{1/(1-\alpha)}} \leq 1\quad \mbox{$P$-a.s.}
\end{align*}
We turn to the lower bound. Fix an arbitrary $\varepsilon>0$. Using Proposition \ref{prop:Takei20SxAsymp} (i), we can see that
\begin{align*}
\max_{0 \leq i < x} \gamma_i \leq \exp\left( \dfrac{1+\varepsilon/2}{K} x^{1-\alpha} \right)\quad \mbox{for all large $x$.}
\end{align*}
By \eqref{cond:LERRWPosRec},
\begin{align*}
Z^{\omega} x \leq \exp\left( \dfrac{\varepsilon/2}{K} x^{1-\alpha} \right) \quad \mbox{for all large $x$.}
\end{align*}
It follows from \eqref{ineq:T(x)upperPosRec} that
\begin{align*}
T^{\omega}(x) \leq \exp\left( \dfrac{1+\varepsilon}{K} x^{1-\alpha} \right)\quad \mbox{ for all large $x$.}
\end{align*}
By Lemma \ref{lem:HrynivMenshikovWade13Lem4.2bis}, $\mathbb{P}$-a.e. $\omega$ and $\mathbf{P}^{\omega}_0$-a.s.,
\begin{align*}
Y_n \geq \left\{ \dfrac{K\log \{(1-\varepsilon)n\}}{1+\varepsilon}\right\}^{1/(1-\alpha)} \mbox{for infinitely many $n$,}
\end{align*}
which implies
\begin{align*}
\limsup_{n \to \infty} \dfrac{Y_n}{(K\log n)^{1/(1-\alpha)}} \geq \dfrac{1}{(1+\varepsilon)^{1/(1-\alpha)}}.
\end{align*}
Thus we have
\begin{align*}
\limsup_{n \to \infty} \dfrac{X_n}{(K\log n)^{1/(1-\alpha)}} \geq 1\quad \mbox{$P$-a.s.}
\end{align*}
This completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:Takei20Main2}] The proof of case (i) (resp.\ (ii)) is closely related to that of case (i) (resp.\ (iii)) of Theorem \ref{thm:Takei20Main3}.
\noindent (i) Assume that $\alpha=1$ and $\Delta>2$. Proposition \ref{prop:Takei20SxAsymp} (ii) implies that for any $\varepsilon \in (0,1)$,
\begin{align*}
\gamma_i \geq \exp((\Delta-1-\varepsilon)\log i)=i^{\Delta-1-\varepsilon}\quad \mbox{for all large $i$}.
\end{align*}
By \eqref{ineq:T(x)lower},
\begin{align}
T^{\omega}(x) \geq h(x)=\sum_{i=0}^{x-1} \gamma_i \geq \dfrac{(1-\varepsilon)x^{\Delta-\varepsilon}}{\Delta-\varepsilon} \quad \mbox{for all large $x$.} \label{ineq:Takei20Main2T(x)lower1}
\end{align}
By Lemma \ref{lem:MenshikovWade08Lem9(i)}, $\mathbb{P}$-a.e. $\omega$ and $\mathbf{P}^{\omega}_0$-a.s.,
\begin{align*}
Y_n \leq \left(\dfrac{\Delta-\varepsilon}{1-\varepsilon} \cdot 2n\{ \log (2n) \}^{1+\varepsilon} \right)^{1/(\Delta-\varepsilon)} \quad \mbox{for all large $n$,}
\end{align*}
which implies that
\begin{align*}
\lim_{n \to \infty} \dfrac{X_n}{n^{(1+\varepsilon)/(\Delta-\varepsilon)}} =0
\quad \mbox{$P$-a.s.}
\end{align*}
Now we turn to the lower bound. By Proposition \ref{prop:Takei20SxAsymp} (ii) and \eqref{cond:LERRWPosRec}, for any $\varepsilon>0$,
\begin{align*}
\gamma_i \leq \exp((\Delta-1+\varepsilon/2) \log i)=i^{\Delta-1+\varepsilon/2},
\end{align*}
and $Z^{\omega} \leq i^{\varepsilon/2}$ for all large $i$, which together with \eqref{ineq:T(x)upperPosRec} imply that
\begin{align*}
T^{\omega} (x) \leq Z^{\omega} h (x) \leq \dfrac{(1+\varepsilon)x^{\Delta+\varepsilon}}{\Delta+\varepsilon/2}\quad \mbox{for all large $x$.}
\end{align*}
By Lemma \ref{lem:HrynivMenshikovWade13Lem4.2bis}, $\mathbb{P}$-a.e. $\omega$ and $\mathbf{P}^{\omega}_0$-a.s.,
\begin{align*}
Y_n \geq \left\{\dfrac{\Delta+\varepsilon/2}{1+\varepsilon} \cdot (1-\varepsilon)n\right\}^{1/(\Delta+\varepsilon)}\mbox{for infinitely many $n$,}
\end{align*}
which implies that
\begin{align*}
\limsup_{n \to \infty} \dfrac{X_n}{n^{1/(\Delta+2\varepsilon)}}=+\infty
\quad \mbox{$P$-a.s.}
\end{align*}
\noindent
(ii) First we assume that $\alpha=1$ and $0<\Delta< 2$. For any $\varepsilon > 0$,
\begin{align*}
i^{\Delta-1-\varepsilon} \leq \gamma_i \leq i^{\Delta-1+\varepsilon}\quad \mbox{for all large $i$.}
\end{align*}
Notice that
\begin{align*}
T^{\omega}(x) &=\sum_{i=0}^{x-1} \gamma_i \sum_{j=0}^i \pi_j = 1+ \sum_{i=1}^{x-1} \gamma_i \left( 2 + 2 \sum_{j=1}^i \dfrac{1}{\gamma_j} - \dfrac{1}{\gamma_i} \right),
\end{align*}
and
\begin{align*}
\sum_{i=1}^{x-1} i^{\Delta-1 \pm \varepsilon} \sum_{j=1}^i j^{-(\Delta-1) \pm \varepsilon} \sim \dfrac{1}{(2-\Delta\pm \varepsilon)(2\pm2\varepsilon)}x^{2\pm2\varepsilon} \quad \mbox{as $x \to \infty$.}
\end{align*}
For any $\varepsilon>0$, we have
\begin{align}
T^{\omega}(x) \leq \dfrac{1+\varepsilon}{(2-\Delta+\varepsilon)(2+2\varepsilon)}x^{2+2\varepsilon} \quad \mbox{for all large $x$.} \label{ineq:Takei20Main2T(x)upper2}
\end{align}
On the other hand, for any $\varepsilon \in (0,(2-\Delta) \wedge 1)$,
\begin{align*}
T^{\omega}(x) \geq \dfrac{1-\varepsilon}{(2-\Delta-\varepsilon)(2-2\varepsilon)}x^{2-2\varepsilon} \quad \mbox{for all large $x$.}
\end{align*}
The rest of the proof is similar to that of Theorem \ref{thm:Takei20Main3} (iii).
For the case $\alpha=1$ and $\Delta=2$, note that \eqref{ineq:Takei20Main2T(x)lower1} is actually valid for any $\Delta>0$, and \eqref{ineq:Takei20Main2T(x)upper2} is true also for $\Delta=2$. This completes the proof.
\end{proof}
\section{Proof of Proposition \ref{prop:Takei20SxAsymp}}
\label{sec:ProofPropTakei20SxAsymp}
Define $\zeta_i:=\log \dfrac{1-p_i}{p_i}$ for $i \in \mathbb{N}$. We begin with a particularly simple case, $\alpha=0$ and $\Delta>0$.
Since $\{p_i\}_{i \in \mathbb{N}}$ is an i.i.d. sequence
with common distribution $\mbox{Beta} \left(\dfrac{1}{2\Delta},\dfrac{1+\Delta}{2\Delta} \right)$,
the strong law of large numbers for i.i.d. sequences together with (4.1) in \cite{Takeshima00} implies that
\[
\lim_{x \to \infty} \dfrac{S_x}{x} = \mathbb{E} [\zeta_1]=\Psi\left(\frac{1}{2\Delta}+\dfrac{1}{2} \right) - \Psi\left(\frac{1}{2\Delta}\right)
\quad \mbox{$\mathbb{P}$-a.e. $\omega$.}
\]
If $\Delta=1$, then we have
\begin{align}
\Psi(1) - \Psi\left(\frac{1}{2}\right) &= \int_0^1 \left( \log\dfrac{1-u}{u}\right) \cdot \dfrac{1}{2\sqrt{u}}\,du \notag \\
&= \int_0^1 \{\log(1+t) + \log (1-t) - 2 \log t\} \,dt \quad (t=\sqrt{u}) \notag \\
&= \log 4.
\label{eq:digammaDifflog4}
\end{align}
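For $\alpha=0$ and $\Delta=1$ the convergence $S_x/x\to\log 4$ is easy to observe by simulation, since the $p_i$ are then i.i.d.\ $\mbox{Beta}(1/2,1)$. The Monte Carlo sketch below is ours; the seed, sample size, and clamping threshold are arbitrary choices.

```python
import math
import random

# alpha = 0, Delta = 1: the p_i are i.i.d. Beta(1/2, 1), and S_x / x should
# approach Psi(1) - Psi(1/2) = log 4 for P-a.e. environment.
random.seed(3)
x = 20000
S = 0.0
for _ in range(x):
    u = random.betavariate(0.5, 1.0)
    u = min(max(u, 1e-12), 1.0 - 1e-12)   # guard against extreme draws
    S += math.log((1.0 - u) / u)
mean_increment = S / x                     # expected to be near log 4
```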
To obtain the result for the other cases, we prepare some lemmata.
From (4.15) and (4.13) in \cite{Takeshima00}, for $x \in \mathbb{N}$,
\begin{align}
\mathbb{E}[S_x]
&= \sum_{i=1}^x \left\{ \Psi\left( \dfrac{w_0(i-1)+\Delta}{2\Delta} \right) - \Psi\left(\dfrac{w_0(i)}{2\Delta}\right) \right\} \label{eq:Takeshima00(4.15)a} \\
&= \Psi\left( \dfrac{w_0(0)}{2\Delta} \right) - \Psi\left( \dfrac{w_0(x)}{2\Delta} \right) + \sum_{i=0}^{x-1} \left\{ \Psi\left( \dfrac{w_0(i)}{2\Delta} +\dfrac{1}{2}\right) - \Psi\left(\dfrac{w_0(i)}{2\Delta}\right) \right\}, \label{eq:Takeshima00(4.15)b}\\
\mathbb{V}[S_x]
&= \sum_{i=1}^x \left\{ \Psi'\left( \dfrac{w_0(i-1)+\Delta}{2\Delta} \right) + \Psi'\left(\dfrac{w_0(i)}{2\Delta}\right) \right\}. \label{eq:Takeshima00(4.13)}
\end{align}
\begin{lemma} \label{lem:Takeshima00(4.10)(4.12)improved}
We have
\begin{align}
\Psi'(z) \sim \begin{cases}
\dfrac{1}{z} & (z \to \infty), \\[2mm]
\dfrac{1}{z^2} &(z \to 0), \\
\end{cases}
\label{asymp:Takeshima00(4.12)improved}
\end{align}
and
\begin{align}
\Psi\left( z +\dfrac{1}{2}\right) - \Psi\left(z\right) \sim \begin{cases}
\dfrac{1}{2z} & (z \to \infty), \\[2mm]
\dfrac{1}{z} &(z \to 0). \\
\end{cases}
\label{asymp:Takeshima00(4.10)improved}
\end{align}
\end{lemma}
\begin{proof} The series expansions of $\Psi(z)$ and $\Psi'(z)$ are given by
\begin{align}
\Psi(z) &= -\gamma-\sum_{k=0}^{\infty} \left( \dfrac{1}{z+k}-\dfrac{1}{1+k} \right), \label{eq:Takeshima00(4.6)} \\
\Psi'(z) &= \sum_{k=0}^{\infty} \dfrac{1}{(z+k)^2}, \label{eq:Takeshima00(4.7)}
\end{align}
where $\gamma$ is Euler's constant. Eq. \eqref{asymp:Takeshima00(4.12)improved} can be obtained as follows:
\begin{align*}
\Psi'(z) = \dfrac{1}{z^2}+\sum_{\ell=1}^{\infty} \dfrac{1}{(z+\ell)^2}
\begin{cases}
\displaystyle \geq \dfrac{1}{z^2} + \int_1^{\infty} \dfrac{1}{(z+x)^2}\,dx = \dfrac{1}{z^2}+\dfrac{1}{z+1}, \\[3mm]
\displaystyle \leq \dfrac{1}{z^2} +\int_0^{\infty} \dfrac{1}{(z+x)^2}\,dx = \dfrac{1}{z^2}+\dfrac{1}{z}. \\
\end{cases}
\end{align*}
We turn to \eqref{asymp:Takeshima00(4.10)improved}. Since $\Psi$ is increasing and $\Psi'$ is decreasing,
\begin{align}
\Psi\left( z + \dfrac{1}{2} \right) - \Psi(z) \begin{cases} \leq \Psi\left( z + 1 \right) - \Psi(z) = \dfrac{1}{z},
\vspace{1mm} \\[3mm]
\displaystyle = \int_z^{z+1/2} \Psi'(u) \,du \leq \dfrac{1}{2} \Psi'(z) \leq\dfrac{1}{2z}+\dfrac{1}{2z^2}.
\end{cases}
\label{ineq:Takeshima00(4.10)bis1}
\end{align}
We use the first bound for $0<z \leq 1$, and the second for $z \geq 1$.
On the other hand, by \eqref{eq:Takeshima00(4.6)},
\begin{align}
\Psi\left( z + \dfrac{1}{2} \right) - \Psi(z)
= \dfrac{1}{2}\sum_{\ell=0}^{\infty} \dfrac{1}{(z+\ell)(z+\frac{1}{2}+\ell)} \geq \dfrac{1}{z(2z+1)},
\label{ineq:Takeshima00(4.10)bis2}
\end{align}
and from (4.10) in \cite{Takeshima00},
\begin{align}
\Psi\left( z + \dfrac{1}{2} \right) - \Psi(z) \geq \dfrac{1}{2z}.
\label{ineq:Takeshima00(4.10)bis3}
\end{align}
We use \eqref{ineq:Takeshima00(4.10)bis2} for $0<z \leq 1/2$, and \eqref{ineq:Takeshima00(4.10)bis3} for $z \geq 1/2$.
Thus \eqref{asymp:Takeshima00(4.10)improved} follows from \eqref{ineq:Takeshima00(4.10)bis1}--\eqref{ineq:Takeshima00(4.10)bis3}.
\end{proof}
We also use the following bound, which is (4.11) in \cite{Takeshima00}: for $s,t>0$,
\begin{align}
\log t - \log s - \dfrac{1}{t} \leq \Psi(t) - \Psi(s) \leq \log t - \log s + \dfrac{1}{s}. \label{eq:Takeshima00(4.11)}
\end{align}
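These digamma facts are easy to verify numerically. The sketch below uses a standard implementation of $\Psi$ (recurrence plus asymptotic series; the implementation is our own, not from the paper) to check \eqref{eq:digammaDifflog4}, the bound \eqref{eq:Takeshima00(4.11)}, and the two regimes of \eqref{asymp:Takeshima00(4.10)improved}.

```python
import math

def digamma(z):
    # Psi(z): push the argument above 10 via Psi(z+1) = Psi(z) + 1/z, then
    # apply the asymptotic series; the absolute error is of order 1e-9 here.
    val = 0.0
    while z < 10.0:
        val -= 1.0 / z
        z += 1.0
    return (val + math.log(z) - 1.0 / (2 * z)
            - 1.0 / (12 * z**2) + 1.0 / (120 * z**4))

gap = digamma(1.0) - digamma(0.5)   # should equal log 4, eq. (digammaDifflog4)
```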
\begin{lemma} \label{lem:E(Sx)asymp} Assume that $\alpha \leq 1$ and $\Delta>0$. As $x \to \infty$,
\begin{align*}
\mathbb{E}[S_x] \sim \begin{cases}
\dfrac{2\Delta}{1-\alpha} x^{1-\alpha}&(\alpha<0), \\
\dfrac{\Delta}{1-\alpha} x^{1-\alpha}&(0<\alpha<1), \\
(\Delta-1) \log x&(\alpha=1,\,\Delta \neq 1). \\
\end{cases}
\end{align*}
If $\alpha=\Delta=1$, then $\mathbb{E}[S_x] \equiv \log 4$ for all $x \in \mathbb{N}$, by \eqref{eq:Takeshima00(4.15)a} and \eqref{eq:digammaDifflog4}.
\end{lemma}
\begin{proof} By \eqref{eq:Takeshima00(4.11)} and
\[ \log \dfrac{w_0(0)}{2\Delta} - \log \dfrac{w_0(x)}{2\Delta} = -\alpha \log x, \]
the first term in \eqref{eq:Takeshima00(4.15)b} satisfies
\begin{align}
- \alpha \log x -2\Delta\leq
\Psi\left( \dfrac{w_0(0)}{2\Delta} \right) - \Psi\left( \dfrac{w_0(x)}{2\Delta} \right) \leq - \alpha \log x +2\Delta x^{-\alpha}.
\label{ineq:E(Sx)asymp}
\end{align}
First we assume that $0<\alpha \leq 1$. From \eqref{ineq:E(Sx)asymp}, we have
\begin{align}
\Psi\left( \dfrac{w_0(0)}{2\Delta} \right) - \Psi\left( \dfrac{w_0(x)}{2\Delta} \right) \sim - \alpha \log x \quad \mbox{as $x \to \infty$.}
\label{asymp:E(Sx)asympPos1}
\end{align}
Since $w_0(x) \to \infty$ as $x \to \infty$, by \eqref{asymp:Takeshima00(4.10)improved},
\begin{align*}
\Psi\left( \dfrac{w_0(i)}{2\Delta} +\dfrac{1}{2}\right) - \Psi\left(\dfrac{w_0(i)}{2\Delta}\right) \sim \dfrac{\Delta}{i^{\alpha}} \quad \mbox{as $i \to \infty$,}
\end{align*}
which implies
\begin{align}
&\sum_{i=0}^{x-1} \left\{ \Psi\left( \dfrac{w_0(i)}{2\Delta} +\dfrac{1}{2}\right) - \Psi\left(\dfrac{w_0(i)}{2\Delta}\right) \right\} \notag \\
&\sim \Delta \sum_{i=1}^{x-1} \dfrac{1}{i^{\alpha}}
\sim
\begin{cases}
\dfrac{\Delta}{1-\alpha} x^{1-\alpha} &(0<\alpha<1), \\
\Delta \log x & (\alpha=1) \\
\end{cases}
\quad \mbox{as $x \to \infty$.}
\label{asymp:E(Sx)asympPos2}
\end{align}
Eqs. \eqref{asymp:E(Sx)asympPos1} and \eqref{asymp:E(Sx)asympPos2} give the conclusion for $0<\alpha \leq 1$.
Next we assume that $\alpha < 0$. Since $w_0(x)$ vanishes as $x \to \infty$, by \eqref{asymp:Takeshima00(4.10)improved},
\begin{align*}
\Psi\left( \dfrac{w_0(i)}{2\Delta} +\dfrac{1}{2}\right) - \Psi\left(\dfrac{w_0(i)}{2\Delta}\right) \sim \dfrac{2\Delta}{i^{\alpha}}\quad \mbox{as $i \to \infty$,}
\end{align*}
which implies
\begin{align*}
\sum_{i=0}^{x-1} \left\{ \Psi\left( \dfrac{w_0(i)}{2\Delta} +\dfrac{1}{2}\right) - \Psi\left(\dfrac{w_0(i)}{2\Delta}\right) \right\} \sim 2\Delta \sum_{i=1}^{x-1} \dfrac{1}{i^{\alpha}} \sim \dfrac{2\Delta}{1-\alpha} x^{1-\alpha}\quad \mbox{as $x \to \infty$.}
\end{align*}
In view of \eqref{ineq:E(Sx)asymp}, the second term is dominant in \eqref{eq:Takeshima00(4.15)b} as $x \to \infty$.
\end{proof}
\begin{lemma} \label{lem:V(Sx)asymp} Assume that $\alpha \leq 1$ and $\Delta>0$. As $x \to \infty$,
\begin{align*}
\mathbb{V}[S_x] \sim \begin{cases}
\dfrac{4\Delta^2}{1-2\alpha}x^{1-2\alpha}&(\alpha<0), \\[1mm]
\dfrac{4\Delta}{1-\alpha} x^{1-\alpha}&(0<\alpha<1), \\
4\Delta \log x&(\alpha=1). \\
\end{cases}
\end{align*}
\end{lemma}
\begin{proof}
First we assume that $0<\alpha \leq 1$. Since $w_0(x) \to \infty$ as $x \to \infty$, by \eqref{asymp:Takeshima00(4.12)improved},
\begin{align*}
\Psi'\left(\dfrac{w_0(i)}{2\Delta}\right) &\sim \dfrac{2\Delta}{i^{\alpha}}, \quad \mbox{and} \\
\Psi'\left( \dfrac{w_0(i-1)+\Delta}{2\Delta} \right) &\sim \dfrac{2\Delta}{(i-1)^{\alpha}+\Delta}\sim \dfrac{2\Delta}{i^{\alpha}} \quad \mbox{as $i \to \infty$.}
\end{align*}
This together with \eqref{eq:Takeshima00(4.13)} implies
\begin{align*}
\mathbb{V}[S_x] \sim 4\Delta \sum_{i=1}^x \dfrac{1}{i^{\alpha}} \sim \begin{cases}
\dfrac{4\Delta}{1-\alpha} x^{1-\alpha} &(0<\alpha<1), \\
4\Delta \log x&(\alpha=1)
\end{cases} \quad \mbox{as $x \to \infty$}.
\end{align*}
Next we assume that $\alpha < 0$. As $w_0(x)$ vanishes as $x \to \infty$, by \eqref{asymp:Takeshima00(4.12)improved},
\begin{align*}
\Psi'\left(\dfrac{w_0(i)}{2\Delta}\right) &\sim \left(\dfrac{2\Delta}{i^{\alpha}}\right)^2 =\dfrac{4\Delta^2}{i^{2\alpha}}, \quad \mbox{and}\\
\Psi'\left( \dfrac{w_0(i-1)+\Delta}{2\Delta} \right) &\to
\Psi'\left( \dfrac{1}{2}\right)>0
\quad \mbox{as $i \to \infty$,}
\end{align*}
which implies
\begin{align*}
\sum_{i=1}^x \Psi'\left( \dfrac{w_0(i-1)+\Delta}{2\Delta} \right) &\sim \Psi'\left( \dfrac{1}{2}\right)x,\quad \mbox{and}\\
\sum_{i=1}^x \Psi'\left(\dfrac{w_0(i)}{2\Delta}\right) & \sim 4\Delta^2\sum_{i=1}^x\dfrac{1}{i^{2\alpha}} \sim \dfrac{4\Delta^2}{1-2\alpha}x^{1-2\alpha}
\end{align*}
as $x \to \infty$. This together with \eqref{eq:Takeshima00(4.13)} gives the conclusion.
\end{proof}
The following lemma is a consequence of Kolmogorov's strong law of large numbers (see e.g. \cite{Ito84}, Theorem 4.5.2).
\begin{lemma} \label{lem:takeshima00Lemma4.3}
Let $\{ \zeta_i \}$ be a sequence of independent, square-integrable random variables, and
$\displaystyle S_x:=\sum_{i=1}^x \zeta_i$.
If $V[S_x]$ diverges as $x \to \infty$, then for any $\delta >0$,
\[ \lim_{x \to \infty} \dfrac{ S_x-E[S_x]}{ (V[S_x])^{1/2+\delta} } = 0\quad \mbox{a.s.} \]
In particular, if
\[\lim_{x \to \infty} \dfrac{(V[S_x])^{1/2+\delta}}{E[S_x]} = 0 \quad \mbox{for some $\delta>0$}, \]
then we have
\[ \lim_{x \to \infty} \dfrac{S_x}{E[S_x]}=1 \quad \mbox{a.s.}\]
\end{lemma}
The conclusion of Proposition \ref{prop:Takei20SxAsymp} for $\alpha \leq 1$ and $\alpha \neq 0$ follows from Lemmata \ref{lem:E(Sx)asymp}, \ref{lem:V(Sx)asymp}, and \ref{lem:takeshima00Lemma4.3}.
% arXiv:2005.11135 (math.PR), https://arxiv.org/abs/2005.11135
% Title: Almost sure behavior of linearly edge-reinforced random walks on the half-line
% Abstract: We study linearly edge-reinforced random walks on $\mathbb{Z}_+$, where
% each edge $\{x,x+1\}$ has the initial weight $x^{\alpha} \vee 1$, and each time
% an edge is traversed, its weight is increased by $\Delta$. It is known that the
% walk is recurrent if and only if $\alpha \leq 1$. The aim of this paper is to
% study the almost sure behavior of the walk in the recurrent regime. For
% $\alpha<1$ and $\Delta>0$, we obtain a limit theorem which is a counterpart of
% the law of the iterated logarithm for simple random walks. This reveals that
% the speed of the walk with $\Delta>0$ is much slower than $\Delta=0$. In the
% critical case $\alpha=1$, our (almost sure) bounds for the trajectory of the
% walk show that there is a phase transition of the speed at $\Delta=2$.
% arXiv:1907.10398, https://arxiv.org/abs/1907.10398
\title{Medians in median graphs and their cube complexes in linear time}

\begin{abstract}
The median of a set of vertices $P$ of a graph $G$ is the set of all vertices $x$ of $G$ minimizing the sum of distances from $x$ to all vertices of $P$. In this paper, we present a linear time algorithm to compute medians in median graphs, improving over the existing quadratic time algorithm. We also present a linear time algorithm to compute medians in the $\ell_1$-cube complexes associated with median graphs. Median graphs constitute the principal class of graphs investigated in metric graph theory and have a rich geometric and combinatorial structure, due to their bijections with CAT(0) cube complexes and domains of event structures. Our algorithm is based on the majority rule characterization of medians in median graphs and on a fast computation of parallelism classes of edges ($\Theta$-classes or hyperplanes) via Lexicographic Breadth First Search (LexBFS). To prove the correctness of our algorithm, we show that any LexBFS ordering of the vertices of $G$ satisfies the following fellow traveler property of independent interest: the parents of any two adjacent vertices of $G$ are also adjacent. Using the fast computation of the $\Theta$-classes, we also compute the Wiener index (total distance) of $G$ in linear time and the distance matrix in optimal quadratic time.
\end{abstract}
The median problem (also called the Fermat-Torricelli problem or the
Weber problem) is one of the oldest optimization problems in Euclidean
geometry~\cite{LoMoWe}. The \emph{median problem} can be defined for
any metric space $(X,d)$: given a finite set $P\subset X$ of points
with positive weights, compute the points $x$ of $X$ minimizing the
sum of the distances from $x$ to the points of $P$ multiplied by their
weights. The median problem in graphs is one of the principal models
in network location theory~\cite{Hakimi,TaFrLa} and is equivalent to
finding nodes with largest closeness centrality in network
analysis~\cite{Bav,Beau,Sabi}. It also occurs in social group choice
as the Kemeny median. In the consensus problem in social group choice,
given individual rankings of $d$ candidates one has to compute a
consensual group decision. By the classical Arrow's impossibility
theorem, there is no consensus function satisfying natural
``fairness'' axioms. It is also well-known that the majority rule
leads to Condorcet's paradox, i.e., to the existence of cycles in the
majority relation. In this respect, the Kemeny median~\cite{Ke,KeSn}
is an important consensus function and corresponds to the median
problem in graphs of permutahedra (the graph whose vertices are all
$d!$ permutations of the candidates and whose edges are the pairs of
permutations differing by adjacent transpositions). Other classical
algorithmic problems related to distances are the diameter and center
problems. Yet another such problem comes from chemistry and consists
in the computation of the Wiener index of a graph. This is a
topological index of a molecule, defined as the sum of the distances
between all pairs of vertices in the associated chemical
graph~\cite{Wie}. The Wiener index is closely related to the
closeness centrality of a vertex in a graph, a quantity inversely
proportional to the sum of all distances between the given vertex and
all other vertices that has been frequently used in sociometry and the
theory of social networks.
The median problem in Euclidean spaces cannot be solved in symbolic
form~\cite{Baj}, but can be solved numerically by Weiszfeld's
algorithm~\cite{Weis} and its convergent modifications (see e.g.\
\cite{Os}), and can be approximated in nearly linear time with
arbitrary precision~\cite{CoLeMiPaSi}. For the $\ell_1$-metric the
median problem becomes easier and can be solved by the majority rule
on coordinates, i.e., by taking as median a point whose $i$th
coordinate is the median of the list of $i$th coordinates of the
points of $P$. This kind of rule was used in~\cite{Jordan} to define
centroids of trees (which coincide with their
medians~\cite{GoWi,TaFrLa}) and can be viewed as an instance of the
majority rule in social choice theory. For graphs with $n$ vertices,
$m$ edges, and standard graph distance, the median problem can be
trivially solved in $O(nm)$ time by solving the All Pairs Shortest
Paths (APSP) problem. One may ask whether solving APSP is necessary to
compute the median. By~\cite{AbGrVa}, the APSP and median problems
are in fact equivalent under subcubic reductions. It was also shown
in~\cite{AbVaWa} that computing the medians of sparse graphs in
subquadratic time refutes the HS (Hitting Set) conjecture, and it was
noted in~\cite{Cab} that computing the Wiener index of a sparse graph
in subquadratic time would refute the Strong Exponential Time
Hypothesis (SETH). Note also that computing the Kemeny median is
NP-hard~\cite{DwKuNaSi} if the input is the list of individual
preferences.
In this paper, we show that the medians in median graphs can be
computed in optimal $O(m)$ time (i.e., without solving APSP). Median
graphs are the graphs in which each triplet $u,v,w$ of vertices has a
unique median, i.e., a vertex metrically lying between $u$ and $v$,
$v$ and $w$, and $w$ and $u$. Median graphs originally arose in
universal algebra~\cite{Av,BiKi}, and their properties were first
investigated in~\cite{Mu,Ne}. It was shown in~\cite{Ch_CAT,Ro} that
the cube complexes of median graphs are exactly the CAT(0) cube
complexes, i.e., cube complexes of global non-positive
curvature. CAT(0) cube complexes, introduced and nicely characterized
in~\cite{Gromov} in a local-to-global way, are now one of the
principal objects of investigation in geometric group
theory~\cite{Sa_survey}. Median graphs also occur in Computer Science:
by~\cite{ArOwSu,BaCo} they are exactly the domains of event structures
(one of the basic abstract models of concurrency)~\cite{NiPlWi} and
median-closed subsets of hypercubes are exactly the solution sets of
2-SAT formulas~\cite{MuSch,Schaefer}. The bijections between median
graphs, CAT(0) cube complexes, and event structures have been used
in~\cite{CC-ICALP17,CC-MSO,Ch_nice} to disprove three conjectures in
concurrency~\cite{RoTh,Thi_conjecture,ThiaYa}. Finally, median
graphs, viewed as median closures of sets of vertices of a hypercube,
contain all most parsimonious (Steiner) trees~\cite{BaFoRo} and as
such have been extensively applied in human genetics. For a survey on
median graphs and their connections with other discrete and geometric
structures, see the books~\cite{HaImKla,Knuth}, the
surveys~\cite{BaCh_survey,KlMu_survey}, and the paper~\cite{ChChHiOs}.
As we noticed, median graphs have strong geometric and metric
properties. For the median problem, the concept of $\Theta$-classes
is essential. Two edges of a median graph $G$ are called opposite if
they are opposite in a common square of $G$. The equivalence relation
$\Theta$ is the reflexive and transitive closure of this oppositeness
relation. Each equivalence class of $\Theta$ is called a
$\Theta$-class ($\Theta$-classes correspond to hyperplanes in CAT(0)
cube complexes~\cite{Sa} and to events in event
structures~\cite{NiPlWi}). Removing the edges of a $\Theta$-class
splits the graph $G$ into two connected components, which are convex
and gated; these components are called halfspaces of $G$. The
convexity of halfspaces implies via~\cite{Dj} that any median graph
$G$ isometrically embeds into a hypercube of dimension equal to the
number $q$ of $\Theta$-classes of $G$.
\subsection*{Our results and motivation}
In this paper, we present a linear time algorithm to compute medians
in median graphs and in associated $\ell_1$-cube complexes. Our main
motivation and technique stem from the majority rule characterization
of medians in median graphs and the unimodality of the median
function~\cite{BaBa,SoCh_Weber}. Even if the majority rule is simple
to state and is a commonly approved consensus method, its algorithmic
implementation is less trivial if one has to avoid the computation of
the distance matrix. On the other hand, the unimodality of the median
function implies that one may find the median set by using local
search. More generally, consider a partial orientation of the input
median graph $G$, where an edge $uv$ is transformed into the arc
$\overrightarrow{uv}$ iff the median function at $u$ is larger than
the median function at $v$ (in case of equality we do not orient the
edge $uv$). Then the median set is exactly the set of all the sinks
in this partial orientation of $G$. Therefore, it remains to compare
for every edge $uv$ the median function at $u$ and at $v$ in constant
time. For this we use the partition of the edge-set of a median graph
$G$ into $\Theta$-classes and, for every $\Theta$-class, the partition
of the vertex-set of $G$ into complementary halfspaces. It is easy to
notice that all edges of the same $\Theta$-class are oriented in the
same way because for any such edge $uv$ the difference between the
median functions at $u$ and at $v$, respectively, can be expressed as
the sum of weights of all vertices in the same halfspace as $v$ minus
the sum of weights of all vertices in the same halfspace as $u$.
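As a naive illustration of this orientation argument (not the linear-time algorithm developed below), the following Python sketch computes the median function by brute force and returns the sinks of the induced partial orientation; in a median graph, the unimodality of the median function guarantees that these sinks form the median set. The adjacency-dict representation and all names are our own choices.

```python
from collections import deque

def sinks_of_median_orientation(adj, w):
    """Orient each edge uv toward the endpoint with the smaller value of
    the median function F_w; return the sinks of this partial orientation.
    Naive O(n*m) computation of F_w via one BFS per vertex."""
    def dist_from(s):
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        return d

    F = {}
    for x in adj:
        dx = dist_from(x)
        F[x] = sum(w[v] * dx[v] for v in adj)

    # x is a sink iff no incident edge is oriented away from x,
    # i.e., no neighbor has strictly smaller F-value
    return {x for x in adj if all(F[y] >= F[x] for y in adj[x])}
```

On the path $0-1-2-3$ with unit weights, the sinks are exactly the two middle vertices, which indeed form the median set.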
Our main technical contribution is a new method for computing the
$\Theta$-classes of a median graph $G$ with $n$ vertices and $m$ edges
in linear $O(m)$ time. For this, we prove that Lexicographic Breadth
First Search (LexBFS)~\cite{RoTaLu} produces an ordering of the
vertices of $G$ satisfying the following \emph{fellow traveler
property}: for any edge $uv$, the parents of $u$ and $v$ are
adjacent. With the $\Theta$-classes of $G$ at hand and the majority
rule for halfspaces, we can compute the weights of halfspaces of $G$
in optimal time $O(m)$, leading to an algorithm of the same complexity
for computing the median set. We adapt our method to compute in
linear time the median of a finite set of points in the $\ell_1$-cube
complex associated with $G$. We also show that this method can be
applied to compute the Wiener index in optimal $O(m)$ time and the
distance matrix of $G$ in optimal $O(n^2)$ time.
In all previous results we assumed that the input of the problem is
given by the median graph or its cube complex, together with the set
of terminals and their weights. However, analogously to the Kemeny
median problem, the median problem in a median graph $G$ can be
defined in a more compact way. We mentioned above that median graphs
are exactly the domains of configurations of event structures and the
solution sets of 2-SAT formulas (with no equivalent variables). The
underlying event structure or the underlying 2-SAT formula provide a
much more compact (but implicit) description of the median graph
$G$. Therefore, we can formulate a median problem by supposing that
the input is a set of configurations of an event structure and their
weights. The goal is to compute a configuration minimizing the sum of
the weighted (Hamming) distances to the terminal-configurations.
Thanks to the majority rule, we show that this median problem can be
solved efficiently with respect to the size of the input. Finally, we suppose that
the input is an event structure and the goal is to compute a
configuration minimizing the sum of distances to \emph{all}
configurations.
We show that this problem is $\#$P-hard. For this we establish a
direct correspondence between event structures and 2-SAT formulas and
we use a result by Feder~\cite{Fe} that an analogous median problem
for 2-SAT formulas is $\#$P-hard.
\subsection*{Related work}
The study of the median problem in median graphs originated
in~\cite{BaBa,SoCh_Weber} and continued
in~\cite{BaBrChKlKoSu,McMuRo,Mu_Exp,MuNo,PuSl}. Using different
techniques and extending the majority rule for trees~\cite{GoWi}, the
following \emph{majority rule} has been established
in~\cite{BaBa,SoCh_Weber}: a halfspace $H$ of a median graph $G$
contains at least one median iff $H$ contains at least one half of the
total weight of $G$; moreover, the median of $G$ coincides with the
intersection of halfspaces of $G$ containing strictly more than half
of the total weight. It was shown in~\cite{SoCh_Weber} that the median function of a median
graph is weakly convex (an analog of a discrete convex function).
This convexity property characterizes all graphs in which all local
medians are global~\cite{BaCh_median}. A nice axiomatic
characterization of medians of median graphs via three basic axioms
has been obtained in~\cite{MuNo}. More recently, \cite{PuSl}
characterized median graphs as \emph{closed Condorcet domains},
i.e., as sets of linear orders with the property that, whenever the
preferences of all voters belong to the set, their majority relation
has no cycles and also belongs to the set. Below we will show that the
median graphs are the bipartite graphs in which the medians are
characterized by the majority rule.
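The majority rule above can be made concrete as follows. Assume each vertex is given by its $0/1$-coordinates in a hypercube embedding, one coordinate per $\Theta$-class recording the side of the corresponding halfspace. The sketch below (representation and names are our own) returns the vertices lying in every halfspace that carries strictly more than half of the total weight; a class whose two halfspaces are tied leaves its coordinate free.

```python
def majority_rule_median(coords, w):
    """coords: dict vertex -> 0/1-tuple (one bit per Theta-class);
    w: dict vertex -> nonnegative weight.
    Returns the vertices lying in every halfspace carrying strictly
    more than half of the total weight (a tie leaves the bit free)."""
    total = sum(w.values())
    q = len(next(iter(coords.values())))
    forced = {}
    for i in range(q):
        side1 = sum(w[v] for v in coords if coords[v][i] == 1)
        if 2 * side1 > total:
            forced[i] = 1        # halfspace {coord i = 1} is heavy
        elif 2 * side1 < total:
            forced[i] = 0        # halfspace {coord i = 0} is heavy
    return {v for v, c in coords.items()
            if all(c[i] == b for i, b in forced.items())}
```

For the $4$-cycle embedded in $\{0,1\}^2$, putting weight $3$ on one vertex forces both coordinates and returns that vertex alone, while unit weights tie every class and return the whole cycle, in accordance with the majority rule.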
Prior to our work, the best algorithm to compute the $\Theta$-classes
of a median graph $G$ has complexity $O(m\log n)$~\cite{HagImKl}. It
was used in~\cite{HagImKl} to recognize median graphs in subquadratic
time. The previous best algorithm for the median problem in a median
graph $G$ with $n$ vertices and $q$ $\Theta$-classes has complexity
$O(qn)$~\cite{BaBrChKlKoSu}, which is quadratic in the worst case:
indeed, $q$ may be linear in $n$ (as in the case of trees) and is
always at least $d(\sqrt[d]{n} - 1)$, as shown below ($d$ is the
largest dimension of a hypercube which is an induced subgraph of $G$).
Additionally, \cite{BaBrChKlKoSu} assumes that an isometric embedding
of $G$ in a $q$-hypercube is given. The description of such an
embedding has already size $O(qn)$. The $\Theta$-classes of a median
graph $G$ correspond to the coordinates of the smallest hypercube in
which $G$ isometrically embeds (this is called the \emph{isometric
dimension} of $G$~\cite{HaImKla}). Thus one can define
$\Theta$-classes for all partial cubes, i.e., graphs isometrically
embeddable into hypercubes. An efficient computation (in $O(n^2)$
time) of all $\Theta$-classes was the main step of the $O(n^2)$
algorithm of~\cite{Epp} for recognizing partial cubes. The
fellow-traveler property (which is essential in our computation of
$\Theta$-classes) is a notion coming from geometric group
theory~\cite{ECHLPT} and is a main tool for proving the (bi)automaticity
of a group. In a slightly stronger form, it allows one to establish the
dismantlability of graphs (see~\cite{BrChChGoOs,Ch_dism,Ch_CAT} for
examples of classes of graphs in which a fellow traveler order was
obtained by BFS or LexBFS). LexBFS has been used to solve optimally
several algorithmic problems in different classes of graphs, in
particular for their recognition (for a survey, see~\cite{Co}).
Cube complexes of median graphs with $\ell_1$-metric have been
investigated in~\cite{vdV}. The same complexes but endowed with the
$\ell_2$-metric are exactly the CAT(0) cube complexes. As we noticed
above, they are of great importance in geometric group
theory~\cite{Sa_survey}. The paper~\cite{BiHoVo} established that the
space of trees with a fixed set of leaves is a CAT(0) cube complex. A
polynomial-time algorithm to compute the $\ell_2$-distance between two
points in this space was proposed in~\cite{OwPr}. This result was
recently extended in~\cite{Hayashi} to all CAT(0) cube complexes. A
convergent numerical algorithm for the median problem in CAT(0) spaces
was given in~\cite{Bacak}.
Finally, for an extensive bibliography on Wiener index in graphs,
see~\cite{HaImKla,Kl_wiener}. The Wiener index of a tree can be
computed in linear time~\cite{MoPi}. Using this and the fact that
benzenoids (i.e., subgraphs of the hexagonal grid bounded by a simple
curve) isometrically embed in the product of three trees,~\cite{ChKl}
proposed a linear time algorithm for the Wiener index of
benzenoids. Finally, in a recent breakthrough~\cite{Cab}, a
subquadratic algorithm for the Wiener index and the diameter of planar
graphs was presented.
\begin{figure}[t]
\captionsetup[subfigure]{singlelinecheck=true}
\centering
\qquad \qquad \qquad
\subcaptionbox{\label{fig:ExGrMed1}}
{\includegraphics[scale=0.40]{Images/ExGraphMedian.pdf}}
\hfill \subcaptionbox{\label{fig:ExGrMed2}}
{\includegraphics[scale=0.35,page=1]{Images/Graphe2.pdf}}
\qquad \qquad \qquad
\caption{Two median graphs, the second one (denoted by $D$) will be
our running example.}
\end{figure}
\section{Preliminaries}
All graphs $G=(V,E)$ in this paper are finite, undirected, simple, and
connected; $V$ is the vertex-set and $E$ is the edge-set of $G$. We
write $u\sim v$ if $u,v\in V$ are adjacent. The \emph{distance}
$d(u,v)=d_G(u,v)$ between two vertices $u$ and $v$ is the length of a
shortest $(u,v)$-path, and the \emph{interval}
$I(u,v)=\{ x \in V : d(u,x) + d(x,v) = d(u,v) \}$ consists of all the
vertices on shortest $(u,v)$--paths. A set $H$ (or the subgraph
induced by $H$) is \emph{convex} if $I(u,v)\subseteq H$ for any two
vertices $u,v$ of $H$; $H$ is a \emph{halfspace} if $H$ and
$V\setminus H$ are convex. Finally, $H$ is \emph{gated} if for every
vertex $v \in V$, there exists a (unique) vertex $v' \in V(H)$ (the
\emph{gate} of $v$ in $H$) such that for all $u \in V(H)$,
$v' \in I(u,v)$. The \emph{$k$-dimensional hypercube $Q_k$} has all
subsets of $\{1,\ldots,k\}$ as the vertex-set and $A \sim B$ iff
$|A \triangle B| = 1$. A graph $G$ is called \emph{median} if
$I(x,y) \cap I(y,z) \cap I(z,x)$ is a singleton for each triplet
$x,y,z$ of vertices; this unique intersection vertex $m(x,y,z)$ is
called the \emph{median} of $x,y,z$. Median graphs are bipartite and
do not contain induced $K_{2,3}$. The \emph{dimension} $d=\dim(G)$ of
a median graph $G$ is the largest dimension of a hypercube of $G$. In
$G$, we refer to the $4$-cycles as \emph{squares}, and the hypercube
subgraphs as \emph{cubes}.
A map $w:V\rightarrow \ensuremath{\mathbb{R}}\xspace^+\cup \{ 0\}$ is called a \emph{weight
function}. For a vertex $v\in V$, $w(v)$ denotes the weight of $v$
(for a set $S\subseteq V$, $w(S)=\sum_{x\in S} w(x)$ denotes the
weight of $S$). Then $F_w(x)=\sum_{v\in V} w(v)d(x,v)$ is called the
\emph{median function} of the graph $G$ and a vertex $x$ minimizing
$F_w$ is called a \emph{median vertex} of $G$. Finally,
$\Med_w(G)=\{x\in V : x \mbox{ is a median vertex of } G\}$ is called the
\emph{median set} (or simply, the \emph{median}) of $G$ with respect
to the weight function $w$. The \emph{Wiener index} $W(G)$ (called
also the \emph{total distance}) of a graph $G$ is the sum of all
pairwise distances between the vertices of $G$. For a weight function
$w$, the \emph{Wiener index} of $G$ is the sum $W_w(G)=\sum_{u,v\in V}
w(u)w(v)d(u,v)$.
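To fix conventions, here is a direct quadratic-time transcription of the weighted Wiener index in Python (representation and names are ours). Note that, as written above, the sum runs over ordered pairs, so each unordered pair is counted twice.

```python
from collections import deque

def all_bfs_distances(adj):
    """Distance matrix of an unweighted connected graph (one BFS per vertex)."""
    dist = {}
    for s in adj:
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        dist[s] = d
    return dist

def wiener_index(adj, w):
    """W_w(G) = sum over ordered pairs (u, v) of w(u) * w(v) * d(u, v)."""
    dist = all_bfs_distances(adj)
    return sum(w[u] * w[v] * dist[u][v] for u in adj for v in adj)
```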
\section{Facts about median graphs}\label{sec:properties}
We recall the principal properties of median graphs used in the
algorithms. Some of these results are part of the folklore among
people working in metric graph theory, and others can be found in the
papers~\cite{Mu,Mu_Exp} by Mulder. For the reader's convenience, we
provide proofs (sometimes different from the original ones) of all
these results in the Appendix.
From now on, $G=(V,E)$ is a median graph with $n$ vertices and $m$
edges. The first three properties follow from the definition.
\begin{lemma}[Quadrangle Condition]\label{quadrangle}
For any vertices $u,v,w,z$ of $G$ such that $v, w\sim z$ and
$d(u,v) = d(u,w) = d(u,z)-1 = k$, there is a unique vertex
$x\sim v,w$ such that $d(u,x) = k-1$.
\end{lemma}
\begin{lemma}[Cube Condition]\label{cube}
Any three squares of $G$, pairwise intersecting in three edges and
all three intersecting in a single vertex, belong to a 3-dimensional
cube of $G$.
\end{lemma}
\begin{lemma}[Convex=Gated]\label{convex-gated}
A subgraph of $G$ is convex if and only if it is gated.
\end{lemma}
Two edges $uv$ and $u'v'$ of $G$ are in relation $\Theta_0$ if
$uvv'u'$ is a square of $G$ and $uv$ and $u'v'$ are opposite edges of
this square. Let $\Theta$ denote the reflexive and transitive closure
of $\Theta_0$. Denote by $E_1,\ldots,E_q$ the equivalence classes of
$\Theta$ and call them \emph{$\Theta$-classes} (see
Fig.~\ref{fig-halfspaces}(a)).
\begin{lemma}[Halfspaces and $\Theta$-classes~\cite{Mu}]\label{halfspaces}
For any $\Theta$-class $E_i$ of
$G$, the graph $G_i=(V,E\setminus E_i)$ consists of exactly two
connected components $H'_i$ and $H''_i$ that are halfspaces of $G$;
all halfspaces of $G$ have this form. If $uv\in E_i$, then $H'_i$
and $H''_i$ are the subgraphs of $G$ induced by
$W(u,v)=\{ x\in V: d(u,x)<d(v,x)\}$ and
$W(v,u)=\{ x\in V: d(v,x)<d(u,x)\}$.
\end{lemma}
By~\cite{Dj}, $G$ is a partial cube (i.e., isometrically embeds into
a hypercube) iff $G$ is bipartite and $W(u,v)$ is convex for any edge
$uv$ of $G$. Consequently, we obtain the following corollary.
\begin{corollary}\label{cor-halfspaces-convex}
$G$ isometrically embeds into a hypercube of dimension equal to the
number $q$ of $\Theta$-classes of $G$.
\end{corollary}
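Given the $\Theta$-classes as a partition of the edges, this embedding is easy to realize explicitly: the $i$-th coordinate of a vertex records on which side of the $i$-th pair of complementary halfspaces it lies. A self-contained Python sketch (our own names and representation; the classes are supplied as input):

```python
from collections import deque

def hypercube_embedding(adj, classes):
    """Embed a median graph into {0,1}^q using its Theta-classes:
    coordinate i of v records the side of the i-th class containing v.
    `classes` is a list of edge lists partitioning the edge-set."""
    def component(start, forbidden):
        # connected component of `start` after deleting `forbidden` edges
        seen = {start}
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen and frozenset((u, v)) not in forbidden:
                    seen.add(v)
                    q.append(v)
        return seen

    coords = {v: [] for v in adj}
    for cls in classes:
        forbidden = {frozenset(e) for e in cls}
        side = component(next(iter(adj)), forbidden)   # one of the two halfspaces
        for v in adj:
            coords[v].append(1 if v in side else 0)
    return {v: tuple(c) for v, c in coords.items()}
```

On the $4$-cycle with its two $\Theta$-classes, the resulting coordinates realize the graph metric as the Hamming metric, as the corollary asserts.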
\begin{lemma}\label{convex-int-halfspaces}
Each convex subgraph $S$ of $G$ is the intersection of all
halfspaces containing $S$.
\end{lemma}
Two $\Theta$-classes $E_i$ and $E_j$ are \emph{crossing} if each
halfspace of the pair $\{ H'_i,H''_i\}$ intersects each halfspace of
the pair $\{ H'_j,H''_j\}$; otherwise, $E_i$ and $E_j$ are called
\emph{laminar}.
\begin{lemma}[Crossing $\Theta$-classes]\label{crossing}
Any vertex $v \in V(G)$ and incident edges
$vv_1\in E_1, \ldots, vv_k \in E_k$ belong to a single cube of $G$
if and only if $E_1, \ldots, E_k$ are pairwise crossing.
\end{lemma}
The \emph{boundary} $\partial H'_i$ of a halfspace $H'_i$ is the
subgraph of $H'_i$ induced by all vertices $v'$ of $H'_i$ having a
neighbor $v''$ in $H''_i$. A halfspace $H'_i$ of $G$ is
\emph{peripheral} if $\partial H'_i=H'_i$ (See
Fig.~\ref{fig-halfspaces}(b)).
\begin{lemma}[Boundaries]\label{boundary}
For any $\Theta$-class $E_i$ of $G$, $\partial H'_i$ and
$\partial H''_i$ are isomorphic and gated.
\end{lemma}
From now on, we suppose that $G$ is rooted at an arbitrary vertex
$v_0$ called the \emph{basepoint}. For any $\Theta$-class $E_i$, we
assume that $v_0$ belongs to the halfspace $H''_i$. Let
$d(v_0,H'_i)=\min \{ d(v_0,x): x\in H'_i\}$. Since $H'_i$ is gated,
the gate of $v_0$ in $H'_i$ is the unique vertex of $H'_i$ at distance
$d(v_0,H'_i)$ from $v_0$. Since median graphs are bipartite, the
choice of $v_0$ defines a canonical orientation of the edges of $G$:
$uv\in E$ is oriented from $u$ to $v$ (notation $\overrightarrow{uv}$)
if $d(v_0,u)<d(v_0,v)$. Let $\overrightarrow{G}_{v_0}$ denote the
resulting oriented pointed graph.
\begin{lemma}[Peripheral Halfspaces~\cite{Mu_Exp}]\label{peripheral}
Any halfspace $H'_i$ maximizing $d(v_0,H'_i)$ is peripheral.
\end{lemma}
\begin{figure}[t]
\captionsetup[subfigure]{singlelinecheck=true}
\centering
\subcaptionbox{\label{fig:ExGrMedGate}}{\includegraphics[scale=0.64,page=7]{Images/Graphe2.pdf}}\qquad \subcaptionbox{\label{fig:ExGrMedBoun}}{\includegraphics[scale=0.64,page=6]{Images/Graphe2.pdf}}\qquad \subcaptionbox{\label{fig:ExGrMedPer}}{\includegraphics[scale=0.64,page=5]{Images/Graphe2.pdf}}
\caption{~(a) In dashed, the $\Theta$-class $E_i$ of $D$, its two
complementary halfspaces $H'_i$ and $H''_i$ and their boundaries
$\partial H_i'$ and $\partial H_i''$,~(b) two peripheral
halfspaces of $D$, and~(c) a LexBFS ordering of
$D$.}\label{fig-halfspaces}
\end{figure}
For a vertex $v$, all vertices $u$ such that $\overrightarrow{uv}$ is
an edge of $\overrightarrow{G}_{v_0}$ are called \emph{predecessors}
of $v$ and are denoted by $\Lambda(v)$. Equivalently, $\Lambda(v)$
consists of all neighbors of $v$ in the interval $I(v_0,v)$. A median
graph $G$ satisfies the \emph{downward cube property} if any vertex
$v$ and all its predecessors $\Lambda(v)$ belong to a single cube of
$G$.
\begin{lemma}[Downward Cube Property~\cite{Mu}]\label{descendent_cube}
$G$ satisfies the downward cube property.
\end{lemma}
Lemma~\ref{descendent_cube} immediately implies the following upper
bound on the number of edges of $G$:
\begin{corollary}\label{upper_edges}
If $G$ has dimension $d$, then $m\le dn\le n\log n$.
\end{corollary}
We now give a sharp lower bound on the number $q$ of $\Theta$-classes,
which, to our knowledge, is new.
\begin{proposition}\label{prop-nbthetaclasses}
If $G$ has $q$ $\Theta$-classes and dimension $d$, then
$q \geq d(\sqrt[d]{n}-1)$. This lower bound is realized for products
of $d$ paths of length $\sqrt[d]{n}-1$.
\end{proposition}
\begin{proof}
Let $\Gamma(G)$ be the \emph{crossing graph} of $G$:
$V(\Gamma(G))$ is the set of $\Theta$-classes of $G$, and two
$\Theta$-classes are adjacent in $\Gamma(G)$ if they are crossing.
Note that $|V(\Gamma(G))| = q$. Let $X(\Gamma(G))$ be the clique
complex of $\Gamma(G)$. By the characterization of median graphs
among ample classes~\cite[Proposition~4]{BaChDrKo}, the number of
vertices of $G$ is equal to the number $|X(\Gamma(G))|$ of simplices
of $X(\Gamma(G))$. Since $G$ is of dimension $d$,
by~\cite[Proposition~4]{BaChDrKo}, $\Gamma(G)$ does not contain
cliques of size $d+1$. By Zykov's theorem~\cite{Zyk} (see
also~\cite{Wood}), the number of $k$-simplices in $X(\Gamma(G))$ is
at most $\binom{d}{k}\left(\frac{q}{d}\right)^k$. Hence
$ n = |V(G)| = |X(\Gamma(G))| \leq \sum_{k=0}^d
\binom{d}{k}\left(\frac{q}{d}\right)^k = \left(1+
\frac{q}{d}\right)^d$ and thus $q \geq d(\sqrt[d]{n}-1)$. Let now
$G$ be the Cartesian product of $d$ paths of length
$(\sqrt[d]{n}-1)$. Then $G$ has $(\sqrt[d]{n}-1+1)^d = n$ vertices
and $d(\sqrt[d]{n}-1)$ $\Theta$-classes (each $\Theta$-class of $G$
corresponds to an edge of one of factors).
\end{proof}
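A quick numerical sanity check of the extremal case (products of paths), using the vertex and class counts stated at the end of the proof rather than an explicit graph construction; the helper name is ours:

```python
def grid_counts(d, ell):
    """Cartesian product of d paths of length ell: number of vertices n
    and number of Theta-classes q (one class per edge of each factor)."""
    n = (ell + 1) ** d
    q = d * ell
    return n, q

# the bound q >= d(n^(1/d) - 1) holds with equality for grids
for d in range(1, 5):
    for ell in range(1, 6):
        n, q = grid_counts(d, ell)
        assert q >= d * (n ** (1.0 / d) - 1) - 1e-9
```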
\section{Computation of the $\Theta$-classes}\label{sec:theta}
In this section we describe two algorithms to compute the
$\Theta$-classes of a median graph $G$: one runs in time $O(dm)$ and
uses BFS, and the other runs in time $O(m)$ and uses LexBFS.
\subsection{$\Theta$-classes via BFS}
The \emph{Breadth-First Search (BFS)} refines the basepoint order and
defines the same orientation $\overrightarrow{G}_{v_0}$ of $G$.
BFS uses a queue $Q$, and insertion into $Q$ defines a total order $<$
on the vertices of $G$: $x<y$ iff $x$ is inserted into $Q$ before $y$.
When a vertex $u$ arrives at the head of $Q$, it is removed from $Q$
and all not yet discovered neighbors $v$ of $u$ are inserted into $Q$;
$u$ becomes the \emph{parent} $f(v)$ of $v$. For any vertex
$v\neq v_0$, $f(v)$ is the smallest predecessor of $v$. The arcs
$\overrightarrow{f(v)v}$ define the \emph{BFS-tree} of $G$.
For each vertex $v$, BFS produces the list $\Lambda(v)$ of
predecessors of $v$ ordered by $<$; denote this ordered list by
$\Lambda_{<}(v)$.
By Lemma~\ref{descendent_cube}, each list $\Lambda_{<}(v)$ has size at
most $d:=\dim(G)$. Notice also that the total order $<$ on the
vertices of $G$ gives rise to a total order on the edges of $G$: for
two edges $uv$ and $u'v'$ with $u<v$ and $u'<v'$, we have $uv<u'v'$
if and only if $u<u'$, or $u=u'$ and $v<v'$.
Now we show how to use a BFS rooted at $v_0$ to compute, for each edge
$uv$ of a median graph $G$, the unique $\Theta$-class $E(uv)$
containing the edge $uv$. Suppose that $uv$ is oriented by BFS from
$u$ to $v$, i.e., $d(v_0,u)<d(v_0,v)$. There are only two
possibilities: either the edge $uv$ is the first edge of the
$\Theta$-class $E(uv)$ discovered by BFS or the $\Theta$-class of $uv$
already exists. The following lemma shows how to distinguish between
these two cases:
\begin{lemma}\label{prime_traces}
An edge $uv\in E_i$ with $d(v_0,u)<d(v_0,v)$ is the first edge of a
$\Theta$-class $E_i$ iff $u$ is the unique predecessor of $v$, i.e.,
$\Lambda_<(v)=\{ u\}$.
\end{lemma}
\begin{proof}
First let $uv$ be the first edge of $E_i$ discovered by {BFS}. Since
$H'_i$ is gated, $v$ is the gate of $v_0$ in $H'_i$ and $u$ is the
unique neighbor of $v$ in $H''_i$. We assert that $u$ is the unique
neighbor of $v$ in $I(v_0,v)$. Suppose $I(v_0,v)$ contains a second
neighbor $u''$ of $v$. Since $v$ is the gate of $v_0$ in $H'_i$ and
$u''$ is closer to $v_0$ than $v$, $u''$ necessarily belongs to
$H''_i$, a contradiction with the uniqueness of $u$. Conversely,
suppose that $v$ has only $u$ as a neighbor in $I(v_0,v)$ but $uv$
is not the first edge of $E_i$ with respect to the BFS. This implies
that the gate $x$ of $v_0$ in $H'_i$ is different from $v$. Let
$u'$ be a neighbor of $v$ in $I(x,v)$ and note that
$I(x,v)\subseteq I(v_0,v)$. Since $v,x\in H'_i$ and $H'_i$ is
convex, $u'$ belongs to $H'_i$. Since $u$ belongs to $H''_i$, we
conclude that $u$ and $u'$ are two different neighbors of $v$ in
$I(v_0,v)$, a contradiction.
\end{proof}
If $uv$ is not the first edge of its $\Theta$-class, the following
lemma shows how to find its $\Theta$-class:
\begin{lemma}\label{nonprime_traces}
Let $uv$ be an edge of a median graph with $u\in \Lambda_<(v)$. If
$v$ has a second predecessor $v'$, then there exists a square
$u'uvv'$ in which $uv$ and $u'v'$ are opposite edges and
$u'\in \Lambda_<(u)\cap \Lambda_< (v')$.
\end{lemma}
\begin{proof}
Indeed, by the quadrangle condition, the vertices $u$ and $v'$ have
a unique common neighbor $u'$ such that $u'uvv'$ is a square of $G$
and $u'$ is closer to $v_0$ than $u$ and $v'$. Consequently,
$u'\in \Lambda_<(u)\cap \Lambda_<(v')$ and $uv$ and $u'v'$ are
opposite edges of $u'uvv'$.
\end{proof}
From Lemmas~\ref{prime_traces} and~\ref{nonprime_traces} we deduce the
following algorithm for computing the $\Theta$-classes of $G$. First,
run a BFS and return a BFS-ordering of the vertices and edges of $G$
and the ordered lists $\Lambda_<(v), v\in V$. Then consider the edges
of $G$ in the BFS-order. Pick a current edge $uv$ and suppose that
$u\in \Lambda_<(v)$. If $\Lambda_<(v)=\{u\}$, by
Lemma~\ref{prime_traces} $uv$ is the first edge of its $\Theta$-class,
thus create a new $\Theta$-class $E_i$ and insert $uv$ in
$E_i$. Otherwise, if $v$ has a second predecessor $v'$, then traverse
the ordered lists $\Lambda_<(u)$ and $\Lambda_<(v')$ to find their
unique common predecessor $u'$ (which exists by
Lemma~\ref{nonprime_traces}). Then insert the edge $uv$ in the
$\Theta$-class of the edge $u'v'$. Since the two sorted lists
$\Lambda_<(u)$ and $\Lambda_<(v')$ are of size at most $d$, their
intersection (that contains only $u'$) can be computed in time $O(d)$,
and thus the $\Theta$-class of each edge $uv$ of $G$ can be computed
in $O(d)$ time. Consequently, we obtain:
\begin{proposition}\label{BFS}
The $\Theta$-classes of a median graph $G$ with $n$ vertices, $m$
edges, and dimension $d$ can be computed in $O(dm)=O(d^2n)$ time.
\end{proposition}
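The procedure just described can be sketched in Python as follows (adjacency-dict representation and names are our own; the input is assumed to be a median graph, so the squares and unique common predecessors invoked by the two lemmas exist):

```python
from collections import deque

def theta_classes(adj, v0):
    """Compute the Theta-class of every edge of a median graph by BFS
    from v0: an edge uv with u the unique predecessor of v opens a new
    class (Lemma on first edges); otherwise uv inherits the class of the
    opposite edge u'v' of the square given by the quadrangle condition."""
    order, dist, preds = {v0: 0}, {v0: 0}, {v0: []}
    bfs = [v0]
    queue = deque([v0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in order:
                order[v] = len(order)
                dist[v] = dist[u] + 1
                preds[v] = []
                bfs.append(v)
                queue.append(v)
            if dist[v] == dist[u] + 1:
                preds[v].append(u)      # appended in BFS order: Lambda_<(v)
    cls = {}                            # edge (as frozenset) -> class index
    n_classes = 0
    for v in bfs[1:]:                   # vertices in BFS order
        for u in preds[v]:
            if len(preds[v]) == 1:      # uv is the first edge of its class
                cls[frozenset((u, v))] = n_classes
                n_classes += 1
            else:                       # inherit from the opposite edge u'v'
                vp = next(x for x in preds[v] if x != u)
                up = (set(preds[u]) & set(preds[vp])).pop()
                cls[frozenset((u, v))] = cls[frozenset((up, vp))]
    return cls, n_classes
```

On the $4$-cycle, the two pairs of opposite edges land in two distinct classes; on a tree, every edge forms its own class, as expected.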
\subsection{$\Theta$-classes via LexBFS}
The \emph{Lexicographic Breadth-First Search (LexBFS)}, proposed
in~\cite{RoTaLu}, is a refinement of BFS. In BFS, if $u$ and $v$ have
the same parent, then the algorithm orders them arbitrarily. Instead,
LexBFS chooses between $u$ and $v$ by considering the ordering of
their second-earliest predecessors. If only one of them has a
second-earliest predecessor, then that one is chosen. If both $u$ and
$v$ have the same second-earliest predecessor, then the tie is broken
by considering their third-earliest predecessors, and so on (see
Fig.~\ref{fig-halfspaces}(c)). LexBFS uses a set-partitioning
data structure and can be implemented in linear time~\cite{RoTaLu}.
In median graphs, the next lemma shows that it suffices to consider
only the earliest and second-earliest predecessors, leading to a
simpler implementation of LexBFS:
\begin{lemma}\label{LexBFSmedian}
If $u$ and $v$ are two vertices of a median graph $G$, then
$|\Lambda (u)\cap \Lambda (v)|\le 1$.
\end{lemma}
\begin{proof}
Let $x,x'$ be two distinct predecessors of $u$ and $v$. Since
$x,x'\in \Lambda (u)\cap \Lambda (v)$, we have
$d(v_0,u)=d(v_0,v)=d(v_0,x)+1=d(v_0,x')+1=k+1$. By
Lemma~\ref{quadrangle}, there is a vertex $y\sim x,x'$ at distance
$k-1$ from $v_0$. But then $x,x',u,v,y$ induce a forbidden
$K_{2,3}$.
\end{proof}
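For concreteness, here is a naive quadratic-time LexBFS in Python: each undiscovered vertex carries the list of marks of its already-visited neighbors, with earlier visits receiving larger marks, and the vertex with the lexicographically largest label is visited next. This is an illustrative sketch, not the linear-time partition-refinement implementation of~\cite{RoTaLu}; representation and names are ours.

```python
def lex_bfs(adj, v0):
    """Naive LexBFS: visit next the unvisited vertex whose label (marks of
    already-visited neighbors; earlier visits get larger marks) is
    lexicographically largest. Runs in O(n^2 + nm)."""
    n = len(adj)
    labels = {v: [] for v in adj}
    labels[v0] = [n + 1]                 # force v0 to be visited first
    pos, order = {}, []
    while len(order) < n:
        v = max((u for u in adj if u not in pos), key=lambda u: labels[u])
        pos[v] = len(order)
        order.append(v)
        for u in adj[v]:
            if u not in pos:
                labels[u].append(n - pos[v])   # earlier positions, larger marks
    return order
```

A vertex with a second-earliest predecessor has a longer label extending the shorter one and is therefore preferred, exactly the tie-breaking rule described above.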
A graph $G$ satisfies the \emph{fellow-traveler property} if for any
LexBFS ordering of the vertices of $G$, for any edge $uv$ with
$v_0 \notin\{u,v\}$, the parents $f(u)$ and $f(v)$ are adjacent.
\begin{theorem}\label{fellow-traveler}
Any median graph $G$ satisfies the fellow-traveler property.
\end{theorem}
\begin{proof}
Let $<$ be an arbitrary LexBFS order of the vertices of $G$ and $f$
be its parent map. Since any LexBFS order is a BFS order, $<$ and
$f$ satisfy the following properties of BFS:
\begin{enumerate}[({BFS}1)]
\item if $u<v$, then $f(u) \leq f(v)$;
\item if $f(u)<f(v)$, then $u<v$;
\item if $v\neq v_0$, then
$f(v)=\min_<\{u: u\sim v\}$;
\item if $u<v$ and $v \sim f(u)$, then $f(v)=f(u)$.
\end{enumerate}
Notice also the following simple but useful property:
\begin{lemma}\label{claim1}
If $abcd$ is a square of $G$ with $d(v_0,c)=k$,
$d(v_0,b)=d(v_0,d)=k+1, d(v_0,a)=k+2$ and $f(a)=b$, and the edge
$ad$ satisfies the fellow-traveler property, then $f(d)=c$.
\end{lemma}
\begin{proof}
By the fellow traveler property, $f(d) \sim f(a) = b$. If
$f(d) \neq c$, then $a, b, c, d, f(d)$ induce a forbidden
$K_{2,3}$.
\end{proof}
\begin{figure}
\captionsetup[subfigure]{singlelinecheck=true}
\centering
\subcaptionbox{\label{fig:lexcara}}
{\includegraphics[scale=0.6,page=1]{Images/FT-Property.pdf}}
\hfill \subcaptionbox{\label{fig:lexcarb}}
{\includegraphics[scale=0.6,page=2]{Images/FT-Property.pdf}}
\hfill \subcaptionbox{\label{fig:lexcarc}}
{\includegraphics[scale=0.6,page=3]{Images/FT-Property.pdf}}
\hfill \subcaptionbox{\label{fig:lexard}}
{\includegraphics[scale=0.6,page=4]{Images/FT-Property.pdf}}
\hfill \subcaptionbox{\label{fig:lexcare}}
{\includegraphics[scale=0.6,page=5]{Images/FT-Property.pdf}}
\vspace{3ex}
\subcaptionbox{\label{fig:lexcarf}}
{\includegraphics[scale=0.6,page=6]{Images/FT-Property.pdf}}
\hfill \subcaptionbox{\label{fig:lexcarg}}
{\includegraphics[scale=0.6,page=8]{Images/FT-Property.pdf}}
\hfill \subcaptionbox{\label{fig:lexcarh}}
{\includegraphics[scale=0.6,page=9]{Images/FT-Property.pdf}}
\hfill \subcaptionbox{\label{fig:lexcarj}}
{\includegraphics[scale=0.6,page=11]{Images/FT-Property.pdf}}
\caption{Animated proof of
Theorem~\ref{fellow-traveler}.}\label{FT-Property}
\end{figure}
We prove the fellow-traveler property by induction on the total
order on the edges of $G$ defined by $<$. The proof is illustrated
by several figures (the arcs of the parent map are represented in
bold). We use the following convention: all vertices having the
same distance to the basepoint $v_0$ will be labeled by the same
letter but will be indexed differently; for example, $w_1$ and $w_2$
are two vertices having the same distance to $v_0$.
Suppose by way of contradiction that $e=u_1v_3$ with $v_3<u_1$ is
the first edge in the order $<$ such that the parents $f(u_1)$ and
$f(v_3)$ of $u_1$ and $v_3$ are not adjacent. Then necessarily
$f(u_1)\ne v_3$. Set $v_1=f(u_1)$ and $w_3=f(v_3)$
(Fig.~\ref{fig:lexcara}). Since $d(v_0, v_1)=d(v_0,v_3)$ and
$u_1\sim v_1,v_3$, by the quadrangle condition $v_1$ and $v_3$ have
a common neighbor at distance $d(v_0,v_1)-1$ from $v_0$. This vertex
cannot be $w_3$, otherwise $f(u_1)$ and $f(v_3)$ would be
adjacent. Therefore there is a vertex $w_4\sim v_1,v_3$ at distance
$d(v_0,v_1)-1$ from $v_0$ (Fig.~\ref{fig:lexcarb}). By induction
hypothesis, the parent $x_3=f(w_4)$ of $w_4$ is adjacent to
$w_3=f(v_3)$. Since $u_1\sim v_1=f(u_1),v_3$ and
$v_3\sim w_3=f(v_3),w_4$, by (BFS3) we conclude that $v_1 < v_3$ and
$w_3<w_4$. By (BFS2), $f(v_1)\leq f(v_3)$, whence $f(v_1)\leq w_3$
and since $f(v_1)\neq f(v_3)$ (otherwise, $f(u_1)\sim f(v_3)$), we
deduce that $f(v_1)< w_3<w_4$. Hence $f(v_1) \neq w_4$. Set
$w_1=f(v_1)$. By the induction hypothesis, $f(v_1)=w_1$ is adjacent
to $f(w_4)=x_3$ (Fig.~\ref{fig:lexcarc}). By the cube condition
applied to the squares $w_4v_1w_1x_3$, $w_4v_1u_1v_3$, and
$w_4v_3w_3x_3$ there is a vertex $v_2$ adjacent to $u_1$, $w_1$, and
$w_3$. Since $u_1\sim v_2$ and $f(u_1)=v_1$, by (BFS3) we obtain
$v_1<v_2$. Since $v_2$ is adjacent to $w_1$ and $w_1=f(v_1)$, by
(BFS4) we obtain $f(v_2)=f(v_1)=w_1$, and by (BFS2),
$v_2<v_3$. Since $f(v_2)=w_1$, by Lemma~\ref{claim1} for $v_2w_1x_3w_3$, we obtain $f(w_3)=x_3$ (Fig.~\ref{fig:lexcard}).
Since $v_1<v_2$, $f(v_1)=f(v_2)=w_1$, and $v_2\sim w_1,w_3$, by
LexBFS $v_1$ is adjacent to a predecessor different from $w_1$ and
smaller than $w_3$. Since $w_3<w_4$, this predecessor cannot be
$w_4$. Denote by $w_2$ the second smallest predecessor of $v_1$
(Fig.~\ref{fig:lexcare}) and note that $w_1 < w_2 < w_3 <w_4$.
By the quadrangle condition, $w_2$ and $w_4$ are adjacent to a
vertex $x_5$, which is necessarily different from $x_3$ because $G$
is $K_{2,3}$-free. By the induction hypothesis, $f(w_2)$ and
$f(v_1)=w_1$ are adjacent. Then $f(w_2)\ne x_3,x_5$, otherwise we
obtain a forbidden $K_{2,3}$. Set $f(w_2)=x_2$. Analogously,
$f(x_5)=y_5$ and $f(w_2)=x_2$ are adjacent as well as $f(x_5)=y_5$
and $f(w_4)=x_3$ (Fig.~\ref{fig:lexcarf}). By (BFS1),
$x_2 = f(w_2) < f(w_3) = x_3$ and by (BFS3), $x_3 = f(w_4) < x_5$.
Since $w_3<w_4$ with $f(w_3)=f(w_4)$ and $w_4$ is adjacent to $x_5$,
by LexBFS $w_3$ must have a predecessor different from $x_3$ and
smaller than $x_5$. This vertex cannot be $x_2$ by (BFS3) since
$f(w_3) = x_3$. Denote this predecessor of $w_3$ by $x_4$ and
observe that $x_2 <x_3<x_4<x_5$. By the induction hypothesis, the
parent of $x_4$ is adjacent to $f(w_3)=x_3$. Let $y_4=f(x_4)$.
If $y_4=y_5$, applying the cube condition to the squares
$x_3w_3x_4y_5$, $x_3w_4x_5y_5$, and $x_3w_4v_3w_3$ we find a vertex
$w$ adjacent to $x_4$, $v_3$, and $x_5$. Applying the cube
condition to the squares $w_4v_3wx_5$, $w_4v_1w_2x_5$, and
$w_4v_1u_1v_3$ we find a vertex $v$ adjacent to $u_1$, $w_2$, and
$w$. Since $v\sim w_2$, by (BFS3) $f(v)\leq w_2<w_3=f(v_3)$, hence
by (BFS2) we obtain $v<v_3$. Therefore we can apply the induction
hypothesis, and by Lemma~\ref{claim1} for $u_1v_1w_2v$, we deduce that $f(v)=w_2$. By Lemma~\ref{claim1}
for $v_3w_3x_4w$, we deduce that $f(w)=x_4$
(Fig.~\ref{fig:lexcarg}). Applying the induction hypothesis to the
edge $vw$ we have that $f(v)=w_2$ is adjacent to $f(w) = x_4$,
yielding a forbidden $K_{2,3}$ induced by $v, x_5, x_4, w, w_2$
(Fig.~\ref{fig:lexcarg}). All this shows that $y_4\neq y_5$. By the
quadrangle condition, $y_5$ and $y_4$ have a common neighbor $z_3$
(Fig.~\ref{fig:lexcarh}).
Recall that $x_2<x_3<x_4<x_5$, and note that by (BFS1),
$y_4=f(x_4)<f(x_5)=y_5$. We denote by $H$ the subgraph of $G$
induced by the vertices
$V'=\{w_1,x_2,x_3,x_4,x_5,y_4,y_5,z_3\}$. The set of edges of $H$ is
$E'=\{z_3y_4, z_3y_5, y_4x_3, y_4x_4, y_5x_2, y_5x_3, y_5x_5,
x_2w_1, x_3w_1\}$. To conclude the proof, we use the following technical
lemma.
\begin{lemma}\label{claim2}
Let $H=(V',E')$ (Fig.~\ref{fig-lem-aux-lexBFS-a}) be an induced
subgraph of $G$, where
$d(v_0, w_1)=d(v_0, x_2)+1=\cdots=d(v_0, x_5)+1=d(v_0,
y_4)+2=d(v_0, y_5)+2=d(v_0, z_3)+3$ and $f(x_5)=y_5$ and
$f(x_4)=y_4$, such that $x_2<x_3<x_4<x_5$ and $y_4<y_5$. If $G$
satisfies the fellow-traveler property up to distance
$d(v_0, w_1)$, then there exists a vertex $x_0$ such that
$x_0<x_2$ and $x_0\sim w_1,y_4$ (Fig.~\ref{fig-lem-aux-lexBFS-b}).
\end{lemma}
\begin{figure}[h]
\captionsetup[subfigure]{singlelinecheck=true}
\centering
\subcaptionbox{\label{fig-preuve-aux-lexBFS-c}}
{\includegraphics[scale=0.6,page=16]{Images/FT-Property.pdf}}\hfill
\subcaptionbox{\label{fig-preuve-aux-lexBFS-d}}
{\includegraphics[scale=0.6,page=17]{Images/FT-Property.pdf}}\hfill
\subcaptionbox{\label{fig-preuve-aux-lexBFS-e}}
{\includegraphics[scale=0.6,page=18]{Images/FT-Property.pdf}}\hfill
\subcaptionbox{\label{fig-preuve-aux-lexBFS-g}}
{\includegraphics[scale=0.6,page=20]{Images/FT-Property.pdf}}
\caption{To the proof of Lemma~\ref{claim2}.}\label{fig-preuve-aux-lexBFS}
\end{figure}
\begin{proof}[Proof of Lemma~\ref{claim2}]
Consider a median graph $G$ for which Lemma~\ref{claim2} does not
hold. Among all induced subgraphs of $G$ satisfying the conditions
of the lemma but for which there does not exist a vertex
$x_0\sim w_1,y_4$ with $x_0<x_2$, we select a copy of $H$
minimizing the distance $d(v_0,w_1)$. First, suppose that
$f(w_1)=x_2$. By Lemma~\ref{claim1} for $w_1x_2y_5x_3$, we deduce $f(x_3)=y_5$. Then, by (BFS1), we
get $y_5=f(x_3)\leq f(x_4) \leq f(x_5)=y_5$. Hence, $f(x_4)=y_5$,
a contradiction. Therefore $f(w_1) \neq x_2$. Since $G$ satisfies
the fellow-traveler property up to distance $d(v_0,w_1)$, we get
$f(x_2)\sim f(w_1)$. Let $x_1$ be the parent of $w_1$
(Fig.~\ref{fig-preuve-aux-lexBFS-c}) and let $y_2 = f(x_2)$ be the
parent of $x_2$. To avoid an induced $K_{2,3}$, $y_2$ cannot
coincide with $y_5$. Moreover, $y_2$ does not coincide with $y_4$
because otherwise $x_1$ would be the common neighbor of $w_1$ and
$y_4$ required by Lemma~\ref{claim2}. Let $z_5$ be the parent of
$y_5$. By the fellow-traveler property, $z_5 = f(y_5)$ is adjacent
to $y_2=f(x_2)$. By the cube condition applied to the squares
$x_2w_1x_1y_2$, $x_2w_1x_3y_5$, and $x_2y_2z_5y_5$, we find a
neighbor $y_3$ of $x_3$, $x_1$, and $z_5$. If $z_5 = z_3$, then
$y_3=y_4$ (otherwise we get a $K_{2,3}$) and $x_1$ is the neighbor
of $w_1$ and $y_4$ required by Lemma~\ref{claim2}, a
contradiction. Thus $y_3 \neq y_4$ and $z_5 \neq z_3$. Moreover,
by Lemma~\ref{claim1} for $w_1x_1y_3x_3$, $y_3=f(x_3)$ (see
Fig.~\ref{fig-preuve-aux-lexBFS-d}). Let $t$ be the parent of
$z_3$. By induction hypothesis, $z_5=f(y_5)\sim
t=f(z_3)$. Applying the cube condition to the squares
$y_5z_3tz_5$, $y_5x_3y_3z_5$, and $y_5x_3y_4z_3$, we find a
neighbor $z_4$ of $t$, $y_3$ and $y_4$. By Lemma~\ref{claim1}
for $x_3y_3z_4y_4$, $f(y_4)=z_4$ (Fig.~\ref{fig-preuve-aux-lexBFS-e})
and by (BFS1), $x_2<x_3<x_4<x_5$ implies
$y_2=f(x_2)< y_3=f(x_3)<y_4=f(x_4)< y_5=f(x_5)$. Since
$d(x_1, v_0)<d(w_1,v_0)$, our choice of $H$ implies the existence
of a neighbor $y_0$ of $x_1$ and $z_4$ such that $y_0<y_2$
(Fig.~\ref{fig-preuve-aux-lexBFS-g}). Applying the cube condition
to the squares $y_3x_1y_0z_4$, $y_3x_1w_1x_3$ and $y_3x_3y_4z_4$,
we find a neighbor $x_0$ of $w_1$, $y_4$, and $y_0$. By (BFS3),
$f(x_0)\leq y_0 < y_2 = f(x_2)$ and thus, by (BFS2), $x_0<x_2$
(Fig.~\ref{fig-preuve-aux-lexBFS-g}), a contradiction with the
choice of $H$.
\end{proof}
Since $G$ contains a subgraph $H$ satisfying the conditions of
Lemma~\ref{claim2}, there exists a vertex $x_0$ such that $x_0<x_2$
and $x_0\sim w_1,y_4$ (Fig.~\ref{fig:lexcarj}). By the cube
condition applied to the squares $x_3w_1x_0y_4$, $x_3w_1v_2w_3$, and
$x_3w_3x_4y_4$, there exists $w_0\sim x_0,v_2,x_4$
(Fig.~\ref{fig:lexcarj}). Since $x_0$ is adjacent to $w_0$, by
(BFS3) $f(w_0)\leq x_0<x_2=f(w_2)$. By (BFS2), $w_0<w_2$. Recall
that $f(v_1)=w_1=f(v_2)$ and that $w_2$ is the second-earliest
predecessor of $v_1$. Since $w_0<w_2$ and $w_0$ is a predecessor of
$v_2$, by LexBFS we deduce that $v_2<v_1$. Since $v_1$ and $v_2$ are
both adjacent to $u_1$ we obtain a contradiction with
$f(u_1)=v_1$. This contradiction shows that any median graph $G$
satisfies the fellow-traveler property. This finishes the proof of
Theorem~\ref{fellow-traveler}.
\end{proof}
\begin{figure}
\captionsetup[subfigure]{singlelinecheck=true}
\centering
\subcaptionbox{\label{fig-lem-aux-lexBFS-a}}
{\includegraphics[scale=0.6,page=12]{Images/FT-Property.pdf}}
\qquad\qquad \subcaptionbox{\label{fig-lem-aux-lexBFS-b}}
{\includegraphics[scale=0.6,page=13]{Images/FT-Property.pdf}}
\caption{The induced subgraph $H$ in Lemma~\ref{claim2}.}\label{fig-lem-aux-lexBFS}
\end{figure}
We now explain how to implement LexBFS in a median graph $G$ in a
simpler way than in the general case. By Lemma~\ref{LexBFSmedian}, it
suffices to keep for each vertex $v$ only its earliest and
second-earliest predecessors, i.e., if $v$ and $w$ have the same earliest predecessor, then LexBFS will
order $v$ before $w$ iff either the second-earliest predecessor of $v$
is ordered before the second-earliest predecessor of $w$, or if $v$ has
a second-earliest predecessor and $w$ does not. Similarly to BFS,
LexBFS can be implemented using a single queue $Q$. In addition to
BFS, each already labeled vertex $u$ must store the position $\pi(u)$
in $Q$ of the earliest vertex of $Q$ having $u$ as its unique
predecessor. In $Q$, all vertices having $u$ as their parent occur
consecutively. Additionally, among these vertices, the ones having a
second predecessor must occur before the vertices having only $u$ as a
predecessor and the vertices having a second predecessor must be
ordered according to that second predecessor. To ensure this property,
we use the following rule: if a vertex $v$ in $Q$, currently having
only $u$ as a predecessor, discovers yet another predecessor $u'$,
then $v$ is swapped in $Q$ with the vertex at position $\pi(u)$, and $\pi(u)$ is
updated. Clearly, this yields an $O(m)$ implementation.
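For concreteness, here is a minimal Python sketch of LexBFS in the direct label-based formulation (quadratic time, not the $O(m)$ queue implementation described above); the adjacency-dictionary representation and all names are our own. Since median graphs are bipartite, the neighbors of $v$ ordered before $v$ are exactly its predecessors, so the sketch also returns the ordered lists $\Lambda_<(v)$ and the parent map $f$.

```python
from math import inf

def lexbfs(adj, v0):
    # Label-based LexBFS sketch (quadratic, not the O(m) queue version).
    # Returns the vertex order, the ordered predecessor lists Lambda_<(v),
    # and the parent f(v) (earliest predecessor) of each vertex v != v0.
    verts = list(adj)
    pos = {v0: 0}
    order = [v0]
    label = {v: [] for v in adj}   # positions of already-ordered neighbors
    for u in adj[v0]:
        label[u].append(0)
    while len(order) < len(verts):
        # earlier neighbors are better: the lexicographically smallest
        # position list wins, and a longer list beats its proper prefix
        best = min((v for v in verts if v not in pos),
                   key=lambda v: (tuple(label[v]) + (inf,), verts.index(v)))
        pos[best] = len(order)
        order.append(best)
        for u in adj[best]:
            if u not in pos:
                label[u].append(pos[best])
    # in a bipartite graph, the earlier neighbors of v are its predecessors
    pred = {v: sorted((u for u in adj[v] if pos[u] < pos[v]),
                      key=lambda u: pos[u])
            for v in adj if v != v0}
    parent = {v: pred[v][0] for v in pred}
    return order, pred, parent
```

On the $4$-cycle $0\sim 1,2$ and $3\sim 1,2$ with basepoint $0$, the sketch orders the vertices $0,1,2,3$ and records $\Lambda_<(3)=(1,2)$ with $f(3)=1$.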
\begin{algorithm}[ht]\caption{$\Theta$-classes via LexBFS}\label{alg:calcthlin}
\DontPrintSemicolon
\SetAlgoVlined
\KwData{$G=(V,E)$, $v_0\in V$}
\KwResult{The $\Theta$-classes $\Theta$ of $G$ ordered by
increasing distance from $v_0$}
\Begin{
$\Theta \leftarrow \emptyset$\;
$(E, \Lambda, f) \leftarrow LexBFS (G,v_0)$\;
\tcp{$E$ : the list of edges ordered by LexBFS}
\tcp{$\Lambda:V\mapsto 2^V$ such that $\Lambda(v)$ is the set of
predecessors of $v$}
\tcp{$f:V\mapsto V$ such that $f(v)$ is the parent of $v$}
\ForEach{$uv \in E$ }
{
\uIf {$|\Lambda[v]|=1$}
{
Add a new $\Theta$-class $\{uv\}$ to $\Theta$
\tcp*{first edge in the $\Theta$-class}
}
\uElseIf{$f(v)\neq u$}{
Add the edge $uv$ to the $\Theta$-class of the edge
$f(u)f(v)$
}
\Else{
Pick any $x$ in $\Lambda(v)\setminus\{u\}$\;
Add the edge $uv$ to the $\Theta$-class of the edge
$f(x)x$\;
}
}
\Return{$\Theta$}
}
\end{algorithm}
Now we use Theorem~\ref{fellow-traveler} to compute the
$\Theta$-classes of $G$. We run LexBFS, which returns a LexBFS-ordering of $V(G)$ and
$E(G)$ together with the ordered lists $\Lambda_<(v)$, $v\in V$. Then we consider the edges
of $G$ in the LexBFS-order. Pick the first unprocessed edge $uv$ and
suppose that $u\in \Lambda_<(v)$. If $\Lambda_<(v)=\{u\}$, by
Lemma~\ref{prime_traces}, $uv$ is the first edge of its
$\Theta$-class, thus we create a new $\Theta$-class $E_i$ and insert
$uv$ as the first edge of $E_i$. We call $uv$ the \emph{root} of $E_i$
and keep $d(v_0,v)$ as the distance from $v_0$ to $H'_i$. Now suppose
$|\Lambda_<(v)|\ge 2$. We consider two cases: (i) $u\neq f(v)$ and
(ii) $u=f(v)$. For (i), by
Theorem~\ref{fellow-traveler},
$uv$ and $f(u)f(v)$ are opposite edges of a square. Therefore $uv$
belongs to the $\Theta$-class of $f(u)f(v)$ (which was already
computed because $f(u)f(v)<uv$). In order to recover the $\Theta$-class of the edge $f(u)f(v)$ in
constant time, we use a (non-initialized) matrix $A$ whose rows and
columns correspond to the vertices of $G$: $A[x,y]$ contains
the $\Theta$-class of the edge $xy$ when $x$ and $y$ are adjacent and
the $\Theta$-class of $xy$ has already been computed, and $A[x,y]$ is
undefined if $x$ and $y$ are not adjacent or if the $\Theta$-class of
$xy$ has not been computed yet. For (ii), pick any
$x\in \Lambda_<(v), x\ne u$. By Theorem~\ref{fellow-traveler},
$uv=f(v)v$ and $f(x)x$ are opposite edges of a square. Since $f(x)x$
appears before $uv$ in the LexBFS order, the $\Theta$-class of $f(x)x$
has already been computed, and the algorithm inserts $uv$ in the
$\Theta$-class of $f(x)x$. Each $\Theta$-class $E_i$ is totally ordered by the order in which the
edges are inserted in $E_i$. Consequently, we obtain:
\begin{theorem}\label{LexBFS}
The $\Theta$-classes of a median graph $G$ can be computed in $O(m)$ time.
\end{theorem}
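The classification step of Algorithm~\ref{alg:calcthlin} can be sketched as follows (a minimal Python illustration, assuming the LexBFS data --- the edges in LexBFS order with the earlier endpoint first, the ordered predecessor lists, and the parent map --- has already been computed; here it is hand-computed for the $4$-cycle). It relies on the fellow-traveler property exactly as described above.

```python
def theta_classes(edges, pred, parent):
    # Classify the edges into Theta-classes, following Algorithm 1.
    # `edges`: edges in LexBFS order, each written (u, v) with u in pred[v];
    # `pred[v]`: predecessors of v in LexBFS order; `parent[v]` = pred[v][0].
    cls = {}                         # frozenset{u, v} -> class index
    classes = []                     # class index -> list of its edges
    for u, v in edges:
        if len(pred[v]) == 1:        # first edge of a new Theta-class
            cls[frozenset((u, v))] = len(classes)
            classes.append([(u, v)])
            continue
        if parent[v] != u:           # case (i): uv opposite to f(u)f(v)
            ref = frozenset((parent[u], parent[v]))
        else:                        # case (ii): use another predecessor x
            x = next(p for p in pred[v] if p != u)
            ref = frozenset((parent[x], x))
        i = cls[ref]                 # the class of ref was already computed
        cls[frozenset((u, v))] = i
        classes[i].append((u, v))
    return classes

# LexBFS data for the 4-cycle 0~1,2 and 3~1,2 with basepoint 0 (hand-computed)
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
pred = {1: [0], 2: [0], 3: [1, 2]}
parent = {1: 0, 2: 0, 3: 1}
```

On this input the sketch produces the two $\Theta$-classes $\{01,23\}$ and $\{02,13\}$ of the $4$-cycle.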
\section{The median of $G$}\label{sec:medianWiener}
We use Theorem~\ref{LexBFS} to compute the median set
$\Med_w(G)$ of a median graph $G$ in $O(m)$ time.
We also use the existence of peripheral halfspaces
and the majority rule.
\subsection{Peripheral peeling} The order $E_1,E_2,\ldots, E_q$ in which the $\Theta$-classes $E_i$ of
$G$ are constructed corresponds to the order of the distances from
$v_0$ to $H'_i$: if $i<j$ then $d(v_0,H'_i)\le d(v_0,H'_j)$ (recall
that $v_0\in H''_i$). By Lemma~\ref{peripheral}, the halfspace $H'_q$
of $E_q$ is peripheral. If we contract all edges of $E_q$ (i.e., we
identify the vertices of $H'_q=\partial H'_q$ with their neighbors in
$\partial H''_q$) we get a smaller median graph
$\ensuremath{\widetilde{G}}=H''_q$; $\ensuremath{\widetilde{G}}$ has $q-1$ $\Theta$-classes $\ensuremath{\widetilde{E}}_1,\ldots,\ensuremath{\widetilde{E}}_{q-1}$, where
$\ensuremath{\widetilde{E}}_i$ consists of the edges of $E_i$ in $\ensuremath{\widetilde{G}}$. The halfspaces of $\ensuremath{\widetilde{G}}$ have the form $\ensuremath{\widetilde{H}}'_i=H'_i\cap H''_q$
and $\ensuremath{\widetilde{H}}''_i=H''_i\cap H''_q$. Then $\ensuremath{\widetilde{E}}_1,\ldots,\ensuremath{\widetilde{E}}_{q-1}$ corresponds to the ordering of the
halfspaces $\ensuremath{\widetilde{H}}'_1,\ldots,\ensuremath{\widetilde{H}}'_{q-1}$ of $\ensuremath{\widetilde{G}}$ by their distances to
$v_0$. Hence the last halfspace $\ensuremath{\widetilde{H}}'_{q-1}$ is peripheral in $\ensuremath{\widetilde{G}}$. Thus the ordering $E_q,E_{q-1},\ldots,E_1$ of the $\Theta$-classes of $G$
provides us with a set $G_q=G,G_{q-1}=\ensuremath{\widetilde{G}},\ldots,G_0$ of median graphs
such that $G_0$ is a single vertex and for each $i\ge 1$, the
$\Theta$-class $E_i$ defines a peripheral halfspace in the graph $G_i$
obtained after the successive contractions of the peripheral
halfspaces of $G_q,G_{q-1},\ldots, G_{i+1}$ defined by
$E_q, E_{q-1}, \ldots, E_{i+1}$. We call $G_q,G_{q-1},\ldots,G_0$ a
\emph{peripheral peeling} of $G$.
Since each vertex of $G$ and each $\Theta$-class is contracted only
once, we do not need to explicitly compute the restriction of each
$\Theta$-class of $G$ to each $G_i$. For this it is enough to keep for each vertex $v$ a variable
indicating whether this vertex belongs to an already contracted
peripheral halfspace or not.
Hence, when the $i$th $\Theta$-class must be contracted, we simply
traverse the edges of $E_i$ and select those edges both of whose
endpoints are not yet contracted.
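The peeling loop with the ``alive'' flags can be sketched in Python as follows (our own quadratic illustration: the side of each edge is decided once via BFS distances from $v_0$, using the fact that for an edge of $E_i$ the endpoint in $H'_i$ is the one farther from $v_0$, since $v_0\in H''_i$).

```python
from collections import deque

def bfs_dist(adj, src):
    # single-source distances by breadth-first search
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def peripheral_peeling(adj, theta, v0):
    # Peel G_q, ..., G_0: process the Theta-classes in reverse order;
    # for E_i keep only the edges with both endpoints still alive and
    # remove the endpoint on the peripheral side (farther from v0).
    # Returns the vertex counts of G_q, G_{q-1}, ..., G_0.
    dist = bfs_dist(adj, v0)
    alive = set(adj)
    counts = [len(alive)]
    for cls in reversed(theta):
        for u, v in cls:
            if u in alive and v in alive:
                alive.discard(u if dist[u] > dist[v] else v)
        counts.append(len(alive))
    return counts
```

On the $4$-cycle with its two $\Theta$-classes, the peeling shrinks the graph from $4$ vertices to $2$ and then to a single vertex.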
\subsection{Computing the weights of the halfspaces of
$G$}\label{ssec:weight-halfpaces}
We use a peripheral peeling $G_q,G_{q-1},\ldots,G_0$ of $G$ to compute
the weights $w(H'_i)$ and $w(H''_i)$, $i=1,\ldots,q$ of all halfspaces
of $G$. As above, let $\ensuremath{\widetilde{G}}$ be obtained from $G$ by contracting the
$\Theta$-class $E_q$. Consider the weight function $\ensuremath{\widetilde{w}}$ on
$\ensuremath{\widetilde{G}}=H''_q$ defined as follows:
\begin{equation}\label{weightsecond}
\ensuremath{\widetilde{w}}(v'') =
\begin{cases}
w(v'') + w(v') & \text{if } v''\in \partial H''_q, v'\in H'_q, \text{ and }
v''\sim v',\\
w(v'') & \text{if } v''\in H''_q \setminus \partial H''_q.\\
\end{cases}
\end{equation}
\begin{algorithm}[]\caption{ComputeWeightsOfHalfspaces($G,w,\Theta$)}\label{alg:compweight}
\SetKw{Add}{add}
\SetKw{To}{to}
\DontPrintSemicolon
\SetAlgoVlined
\KwData{A median graph $G=(V,E)$, a weight function $w : V\rightarrow
\ensuremath{\mathbb{R}}\xspace^+\cup \{0\}$, the $\Theta$-classes $\Theta = (E_1, \hdots,E_q)$ of
$G$ ordered by increasing distance to the basepoint $v_0$.}
\KwResult{The list of the pairs of weights
$\left((w(H''_q),w(H'_q)), \ldots, (w(H''_1),w(H'_1))\right)$}
\Begin{
\uIf{$|V| = 1$}{
\Return the empty list
}
\Else
{
Let $H'$ and $H''$ be the two complementary halfspaces defined
by $E_q$ ($v_0\in H''$)\;
$w(H') \leftarrow \sum_{v\in H'} w(v)$\;
$w(H'') \leftarrow w(V) - w(H')$\;
\ForEach{$v'v''\in E_q$ with $v' \in H'$ and $v'' \in H''$}
{
$w(v'')\leftarrow w(v')+w(v'')$ \;
}
$L \leftarrow$ ComputeWeightsOfHalfspaces($H'', w,
\Theta\setminus \{E_q\}$) \;
\Add $(w(H''),w(H'))$ \To $L$\;
\Return $L$\;
}
}
\end{algorithm}
\begin{lemma}\label{weight-halfspaces}
For any $\Theta$-class $\ensuremath{\widetilde{E}}_i$ of $\ensuremath{\widetilde{G}}$, $\ensuremath{\widetilde{w}}(\ensuremath{\widetilde{H}}'_i)=w(H'_i)$ and
$\ensuremath{\widetilde{w}}(\ensuremath{\widetilde{H}}''_i)=w(H''_i)$.
\end{lemma}
By Lemma~\ref{weight-halfspaces}, to compute all $w(H'_i)$ and
$w(H''_i),$ it suffices to compute the weight of the peripheral
halfspace of $E_i$ in the graph $G_i$, set it as $w(H'_i)$, and set
$w(H''_i):=w(G)-w(H'_i)$.
Let $G$ be the current median graph, let $H'_q$ be a peripheral
halfspace of $G$, and $\ensuremath{\widetilde{G}}=H''_q$ be the graph obtained from $G$ by
contracting the edges of $E_q$. To compute $w(H'_q)$, we traverse the
vertices of $H'_q$ (by considering the edges of $E_q$). Set
$w(H''_q)=w(G)-w(H'_q)$. Let $\ensuremath{\widetilde{w}}$ be the weight function on $\ensuremath{\widetilde{G}}$
defined by Equation~\ref{weightsecond}. Clearly, $\ensuremath{\widetilde{w}}$ can be computed
in $O(|V(H'_q)|)=O(|E_q|)$ time. Then by Lemma~\ref{weight-halfspaces}
it suffices to recursively apply the algorithm to the graph $\ensuremath{\widetilde{G}}$ and
the weight function $\ensuremath{\widetilde{w}}$. Since each edge of $G$ is considered only
when its $\Theta$-class is contracted, the algorithm has complexity
$O(m)$.
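Combining the peeling with the weight transfer of Equation~\ref{weightsecond} gives all halfspace weights; the following Python sketch is our own quadratic illustration of Algorithm~\ref{alg:compweight} (BFS distances from $v_0$ replace the $O(m)$ bookkeeping).

```python
from collections import deque

def bfs_dist(adj, src):
    # single-source distances by breadth-first search
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def halfspace_weights(adj, theta, v0, w):
    # For each Theta-class E_i (ordered by distance from v0), return the
    # pair (w(H'_i), w(H''_i)).  Classes are peeled in reverse order; when
    # E_i is peeled, the weight of a vertex of H''_i already accumulates
    # the weights pushed across the later classes (Equation weightsecond).
    dist = bfs_dist(adj, v0)
    w = dict(w)                       # do not clobber the caller's weights
    alive = set(adj)
    total = sum(w.values())
    out = [None] * len(theta)
    for i in range(len(theta) - 1, -1, -1):
        w_periph = 0
        for u, v in theta[i]:
            if u in alive and v in alive:
                far, near = (u, v) if dist[u] > dist[v] else (v, u)
                w_periph += w[far]
                w[near] += w[far]     # push the weight across E_i
                alive.discard(far)
        out[i] = (w_periph, total - w_periph)
    return out
```

On the $4$-cycle with weights $1,2,3,4$ this yields the halfspace weights $(6,4)$ and $(7,3)$ for the two $\Theta$-classes.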
\subsection{The median $\Med_w(G)$}\label{s-algomed}
We start with a simple property of the median function $F_w$ that
follows from Lemma~\ref{halfspaces}:
\begin{lemma}\label{F(x)-F(y)}
If $xy\in E_i$ with $x\in H'_i$ and $y\in H''_i$, then
$F_w(x)-F_w(y)=w(H''_i)-w(H'_i)$.
\end{lemma}
A halfspace $H$ of $G$ is \emph{majoritary} if $w(H)>\frac{1}{2}w(G)$,
\emph{minoritary} if $w(H)<\frac{1}{2}w(G)$, and \emph{egalitarian} if
$w(H)=\frac{1}{2}w(G)$. Let
$\ensuremath{\Med^{\rm{loc}}_w}\xspace(G) = \{v \in V : F_w(v)\le F_w(u), \forall u \sim v\}$ be the
set of local medians of $G$. We continue with the majority
rule:
\begin{proposition}[\!\!\cite{BaBa,SoCh_Weber}]\label{majority}
$\Med_w(G)$ is the intersection of all majoritary halfspaces and
$\Med_w(G)$ intersects all egalitarian halfspaces: if $H'_i$ and
$H''_i$ are egalitarian halfspaces, then $\Med_w(G)$ intersects both
$H'_i$ and $H''_i$. Moreover, $\Med_w(G)=\ensuremath{\Med^{\rm{loc}}_w}\xspace(G)$.
\end{proposition}
\begin{proof}
Let us first prove a generalization of Lemma~\ref{F(x)-F(y)} from
which the different statements of Proposition~\ref{majority} easily
follow.
\begin{lemma}\label{majority-claim}
Let $E_i$ be a $\Theta$-class of a median graph $G$ and let
$H'_i,H''_i$ be the two halfspaces defined by $E_i$. If
$x''\in H''_i$ and $x'$ is its gate in $H'_i$, then
$F_w(x'')\ge F_w(x')+d(x'',x')(w(H'_i)-w(H''_i))$.
\end{lemma}
\begin{proof}
By definition of the median function,
\[
F_w(x'')-F_w(x') = \sum_{u\in V}{d(x'',u)w(u)}-\sum_{u\in V}
{d(x',u)w(u)}=\sum_{u\in V}{(d(x'',u)-d(x',u))w(u)}.
\]
Then, we decompose the sum over the complementary halfspaces
$H'_i$ and $H''_i:$
\[
F_w(x'')-F_w(x')=\sum_{u'\in
H_i'}{(d(x'',u')-d(x',u'))w(u')}+\sum_{u''\in
H_i''}{(d(x'',u'')-d(x',u''))w(u'')}.
\]
Since $x'$ is the gate of $x''$ on $H_i'$, for any $u'\in H_i'$,
$d(x'',u')-d(x',u')= d(x'',x')$. By the triangle inequality,
$d(x'',u'')-d(x',u'')\geq -d(x'',x')$ for every $u''\in H_i''$.
We get
\[
F_w(x'')-F_w(x') \geq \sum_{u' \in H_i'}d(x'',x')w(u') -
\sum_{u''\in H_i''}d(x'',x')w(u'')
\]
and conclude that
$F_w(x'') \geq F_w(x') + d(x'',x')(w(H_i')-w(H_i''))$.
\end{proof}
Let $H''_i$ and $H'_i$ be two complementary halfspaces such that
$w(H'_i)>w(H''_i)$. Pick any vertex $x''\in H''_i$ and its gate $x'$
in $H'_i$. By Lemma~\ref{majority-claim}, $F_w(x'')>F_w(x')$ and
therefore $x''$ cannot be a median. This shows that the complement
of a majoritary halfspace does not contain any median vertex. This
implies that $\Med_w(G)$ is contained in the intersection $M$ of the
majoritary halfspaces. If $\Med_w(G)$ is a proper subset of $M$,
since $M$ is convex we can find two adjacent vertices
$x\in \Med_w(G)$ and $y\in M\setminus \Med_w(G)$. Let $xy\in E_i$
with $x\in H'_i$ and $y\in H''_i$. Since $y\in M$, $H'_i$ cannot be
a majoritary halfspace. Since $x\in \Med_w(G)$, $H'_i$ cannot be a
minoritary halfspace. Thus $H'_i$ and $H''_i$ are egalitarian
halfspaces. Since $F_w(x)-F_w(y)=w(H''_i)-w(H'_i)=0$, we deduce that
$y$ is a median vertex, thus $\Med_w(G)=M$.
Now, consider two egalitarian complementary halfspaces $H''_i$ and
$H'_i$. Suppose that a median vertex $x'$ belongs to $H'_i$ and let
$x''$ be its gate on $H''_i$. By Lemma~\ref{majority-claim} (with the
roles of $H'_i$ and $H''_i$ exchanged), $F_w(x'')\le F_w(x')$. Therefore, $x''$ is also median. By
symmetry, we conclude that both $H'_i$ and $H''_i$ contain a median
vertex.
We now show that any local median is a median. Pick any
vertex $v\notin \Med_w(G)$. Since $\Med_w(G)$ is the intersection of all majoritary halfspaces of
$G$, there exists a majoritary halfspace $H$ containing $\Med_w(G)$
and not containing $v$. Let $v'$ be the gate of $v$ in $H$ and $u$
be a neighbor of $v$ in $I(v,v')$. Then necessarily
$H\subseteq W(u,v)$, thus $W(u,v)$ is a majoritary halfspace. This
implies that $F_w(u)<F_w(v)$, i.e., $v$ is not a local median. This
concludes the proof of Proposition~\ref{majority}.
\end{proof}
We use Proposition~\ref{majority} and the weights of halfspaces
computed above to derive $\Med_w(G)$. For this, we direct the edges $v'v''$ of each
$\Theta$-class $E_i$ of $G$ as follows. If $v'\in H'_i$ and
$v''\in H''_i$, then we direct $v'v''$ from $v'$ to $v''$ if $w(H''_i)>w(H'_i)$ and
from $v''$ to $v'$ if $w(H'_i)>w(H''_i)$. If $w(H'_i)=w(H''_i)$, then
the edge $v'v''$ is not directed. We denote this partially directed
graph by $\ensuremath{\overrightarrow{G}}$. A vertex $u$ of $G$ is a \emph{sink} of $\ensuremath{\overrightarrow{G}}$ if there is no edge $uv$
directed in $\ensuremath{\overrightarrow{G}}$ from $u$ to $v$. From Lemma~\ref{F(x)-F(y)}, $u$ is
a sink of $\ensuremath{\overrightarrow{G}}$ if and only if $u$ is a local median of $G$. By
Proposition~\ref{majority}, $\ensuremath{\Med^{\rm{loc}}_w}\xspace(G)=\Med_w(G)$ and
thus $\Med_w(G)$ coincides with the set $S(\ensuremath{\overrightarrow{G}})$ of sinks of $\ensuremath{\overrightarrow{G}}$. Note
that in the graph induced by $\Med_w(G)$, all edges are non-oriented
in $\ensuremath{\overrightarrow{G}}$.
Once all $w(H'_i)$ and $w(H''_i)$ have been computed, the orientation
$\ensuremath{\overrightarrow{G}}$ of $G$ can be constructed in $O(m)$ by traversing all
$\Theta$-classes $E_i$ of $G$. The graph induced by $S(\ensuremath{\overrightarrow{G}})$ can then
be found in $O(m)$.
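The sink characterization is easy to test on small examples: the Python sketch below (our own; halfspace weights are recomputed naively from all-pairs BFS distances instead of the $O(m)$ scheme above) orients every edge toward its heavier halfspace, returns the sinks, and can be cross-checked against a brute-force minimization of $F_w$.

```python
from collections import deque

def bfs_dist(adj, src):
    # single-source distances by breadth-first search
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def median_by_sinks(adj, w):
    # Direct every edge uv toward the heavier of the halfspaces W(u,v)
    # and W(v,u) (ties stay undirected); the median is the set of sinks,
    # i.e., the vertices with no outgoing edge.
    dist = {v: bfs_dist(adj, v) for v in adj}
    has_out = {v: False for v in adj}
    for u in adj:
        for v in adj[u]:
            if u < v:                  # treat each edge once
                wu = sum(w[x] for x in adj if dist[x][u] < dist[x][v])
                wv = sum(w[x] for x in adj if dist[x][v] < dist[x][u])
                if wu < wv:
                    has_out[u] = True  # edge directed from u to v
                elif wv < wu:
                    has_out[v] = True  # edge directed from v to u
    return {v for v in adj if not has_out[v]}

def median_brute(adj, w):
    # brute-force argmin of the median function F_w
    dist = {v: bfs_dist(adj, v) for v in adj}
    F = {v: sum(w[u] * dist[v][u] for u in adj) for v in adj}
    best = min(F.values())
    return {v for v in adj if F[v] == best}
```

On the path $0\sim 1\sim 2\sim 3$ with unit weights the sinks are the interval $\{1,2\}$, and moving all the weight onto vertex $0$ shifts the median to $\{0\}$, in both cases agreeing with the brute-force computation.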
\begin{theorem}\label{median}
The median $\Med_w(G)$ of a median graph $G$ can be computed in
$O(m)$ time.
\end{theorem}
The next remark follows immediately from the majority rule and the
fast computation of the $\Theta$-classes:
\begin{remark}\label{medmaj}
Given the median set $\Med_w(G)$ of a median graph $G$, one can find
all majoritary halfspaces of $G$ in linear time $O(m)$.
\end{remark}
\subsection*{Computing a diametral pair of $\Med_w(G)$}
The article~\cite{BaBa} proved that in a median graph, the median set
coincides with the interval between a diametral pair of its
vertices. We show how to find such a pair in $O(m)$ time, using a
corollary of Proposition~\ref{majority}:
\begin{corollary}\label{disjoint-halfspaces}
If two disjoint halfspaces $H'$ and $H''$ defined by two laminar
$\Theta$-classes of $G$ both intersect $\Med_w(G)$,
then $w(v)=0$ for any vertex $v\in V\setminus(H'\cup H'')$.
\end{corollary}
\begin{proof}
Since $\Med_w(G)\cap H'\ne \emptyset$ and
$\Med_w(G)\cap H''\ne \emptyset$, and since $H'$ and $H''$ are
disjoint, by Proposition~\ref{majority}, both $H'$ and $H''$ are
egalitarian halfspaces. Since $H'$ and $H''$ are disjoint,
$w(H')+w(H'')=w(V)$, hence $w(V\setminus (H'\cup H''))=0$.
\end{proof}
\begin{corollary}\label{cor-interval}
If $w(G)> 0$, we can find $u,v \in V(G)$ in $O(m)$ such that
$\Med_w(G) = I(u,v)$.
\end{corollary}
The proof of Corollary~\ref{cor-interval} is based on a result
of~\cite[Proposition~6]{BaBa} stating that in a median graph, the
median set is an interval. Let $H$ be a gated subgraph of $G$ and $u$
be a vertex of $H$. The set
$P_{H}(u) = \{ v\in V : u \text{ is the gate of } v \mbox{ in } H \}$
is called the \emph{fiber} of $u$ with respect to $H$. We say that a
fiber $P_{H}(u)$ is \emph{positive} if $w(P_H(u))>0$. The fibers
$\{P_{H}(u): u\in H\}$ define a partition of $V(G)$. We give below a
non-lattice-based proof of the result of Bandelt and
Barth\'el\'emy~\cite[Proposition~6]{BaBa}.
\begin{proposition}\label{interval}
Let $M = \Med_w(G)$ and let $u$ be a vertex of $M$ with a positive
fiber $P_{M}(u)$. Then $M=I(u,v)$ for the vertex $v \in M$
maximizing $d(u,v)$.
\end{proposition}
\begin{proof}
Since $M$ is convex and $u,v \in M$, we have $I(u,v) \subseteq
M$. We now prove the reverse inclusion. Suppose by way of
contradiction that there exists a vertex $z \in M \setminus I(u,v)$.
Consider such a vertex $z$ minimizing $d(v,z)$. Let $z'$ be the
median of $u,v,z$. Then $z'\in I(u,v)$. Since $M$ is convex and
$z,z'\in M$, from the minimality of $d(v,z)$ we conclude that $z$
and $z'$ are adjacent. Since $G$ is bipartite and $v$ is a vertex of
$M$ furthest from $u$, necessarily $z'\ne v$. Since
$z\notin I(u,v),z'\in I(u,v)$ and $G$ is bipartite,
$z'\in I(z,u)\cap I(z,v)$, i.e., $u,v\in W(z',z)$. Let $y$ be any
neighbor of $z'$ in $I(z',v)$ and suppose that $z'z\in E_i$ and
$z'y\in E_j$.
We assert that the $\Theta$-classes $E_i$ and $E_j$ are
laminar. Indeed otherwise, by Lemma~\ref{crossing}, there exists a
vertex $x' \sim z,y$.
Since $x'\in I(z,y)$ and $z,y\in M$, $x'$ also belongs to $M$ and is
one step closer to $v$ than $z$. The minimality of $d(v,z)$
implies that $x'\in I(v,u)$. Since $u\in W(z,x')$, we have
$z\in I(x',u)$, yielding $z\in I(v,u)$. This contradiction shows
that $E_i$ and $E_j$ are laminar, i.e., the halfspaces $W(z,z')$ and
$W(y,z')$ are disjoint. Since they both intersect $M$, by
Corollary~\ref{disjoint-halfspaces}, all vertices of $G$ not
belonging to $W(z,z')\cup W(y,z')$ have weight 0.
Pick any vertex $p\in P_M(u)$. Since $p$ belongs to the fiber
$P_M(u)$ and $z,y\in M$, necessarily $u\in I(y,p)\cap I(z,p)$.
Since $y$ is a neighbor of $z'$ in $I(z',v)$ and $z'\in I(v,u)$, we
obtain that $z'\in I(y,u)\subseteq I(y,p)$, yielding $p\in W(z',y)$.
Analogously, from the choice of $z$ we deduce that
$z'\in I(z,u)\subseteq I(z,p)$, yielding $p\in W(z',z)$. This
establishes that $P_M(u)\subseteq W(z',y)\cap W(z',z)$, i.e.,
$P_M(u)$ is disjoint from the halfspaces $W(y,z')$ and $W(z,z')$.
This contradicts the fact that $P_M(u)$ has positive weight.
\end{proof}
Now we can prove the corollary:
\begin{proof}[Proof of Corollary~\ref{cor-interval}]
Once $M=\Med_w(G)$ has been computed, one can pick an arbitrary
vertex $v_1$ such that $w(v_1) > 0$. By running a BFS-algorithm, we
find in linear time the vertex $u$ of $M$ closest to $v_1$ and the
vertex $v$ of $M$ furthest from $v_1$. Since $M$ is gated, $u$ is
unique, $v_1 \in P_M(u)$, and $v$ is at maximum distance from $u$ in
$M$. By Proposition~\ref{interval}, $I(u,v) = \Med_w(G)$.
\end{proof}
\subsection*{Median graphs and the majority rule}
In the Introduction we mentioned that the median graphs are the
bipartite graphs satisfying the majority rule. We now make this
statement precise. Let $G=(V,E)$ be a bipartite graph. A
\emph{halfspace} of $G$ is a subgraph induced by
$W(u,v)=\{ x\in V: d(x,u)<d(x,v)\}$ for some edge $uv$.
Recall that all halfspaces of $G$ are convex iff $G$ is isometrically
embeddable into a hypercube~\cite{Dj}. For a weight function $w$ on
$G$ and any pair of complementary halfspaces $W(u,v)$ and $W(v,u)$, we
have $w(W(u,v))+w(W(v,u))=w(V)$. A \emph{majoritary halfspace} of $G$
is a halfspace $W(u,v)$ such that $w(W(u,v))>w(W(v,u))$. A bipartite
graph $G$ \emph{satisfies the majority rule} if for any weight
function $w$ on $G$, $\Med_w(G)$ is the intersection of all majoritary
halfspaces of $G$.
\begin{proposition}\label{prop-maj-median}
A bipartite graph $G$ satisfies the majority rule if and only if $G$
is median.
\end{proposition}
\begin{proof}
One direction is covered by Proposition~\ref{majority}. Now, suppose
that a bipartite graph $G$ satisfies the majority rule. To prove
that $G$ is median it suffices to show that $G$ satisfies the
quadrangle condition and does not contain $K_{2,3}$~\cite{Mu}. To
establish the quadrangle condition, let
$d(u,z)=k+1, d(u,x)=d(u,y)=k\ge 2$ and $z\sim x,y$.
Consider the weight function $w(u)=w(x)=w(y)=1$ and $w(v)=0$ if
$v\in V\setminus \{ u,x,y\}$. Notice that
$F_w(x)=F_w(y)=k+2$, $F_w(z)=k+3$, $F_w(u)=2k$, and that $F_w(v)\ge k+1$
for any other vertex $v$. Since $W(x,z)$ and $W(y,z)$ are
majoritary halfspaces and $x\notin W(y,z), y\notin W(x,z)$, the
vertices $x$ and $y$ are not medians. This implies that if
$v\in \Med_w(G)$, then $F_w(v)=k+1$. Since $G$ is bipartite, this is
possible only if $d(v,u)=k-1, d(v,x)=d(v,y)=1$, i.e., $G$ satisfies
the quadrangle condition. Suppose now that $G$ contains a $K_{2,3}$
induced by the vertices $x,y,z,u,u'$, where $u$ and $u'$ are
adjacent to $x,y,z$. Consider the weight function
$w(x)=w(y)=w(z)=w(u)=w(u')=1$ and $w(v)=0$ for any vertex
$v\in V\setminus \{ x,y,z,u,u'\}$. Then $F_w(u)=F_w(u')=5$ and
$F_w(x)=F_w(y)=F_w(z)=6$. Since $G$ is bipartite, $F_w(v)\ge 6$ for
any other vertex $v$. Thus $\Med_w(G)=\{ u,u'\}$. Since
$u,y,z\in W(u,x)$ and $x,u'\in W(x,u)$, we deduce that $W(u,x)$ is
a majoritary halfspace and $u'\in \Med_w(G)\setminus W(u,x)$, a
contradiction with the majority rule.
\end{proof}
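The $K_{2,3}$ computation in this proof can be verified numerically; in the short Python check below (the helper and vertex names are ours, with $u'$ written as `u2`) we recompute $F_w$, the median set, and the halfspace $W(u,x)$.

```python
from collections import deque

def bfs_dist(adj, src):
    # single-source distances by breadth-first search
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# K_{2,3}: u and u2 are both adjacent to x, y, z; unit weights everywhere
adj = {'u': ['x', 'y', 'z'], 'u2': ['x', 'y', 'z'],
       'x': ['u', 'u2'], 'y': ['u', 'u2'], 'z': ['u', 'u2']}
w = {v: 1 for v in adj}
dist = {v: bfs_dist(adj, v) for v in adj}
F = {v: sum(w[t] * dist[v][t] for t in adj) for v in adj}
# F == {'u': 5, 'u2': 5, 'x': 6, 'y': 6, 'z': 6}, so Med_w = {u, u2}
med = {v for v in adj if F[v] == min(F.values())}
# W(u,x): the vertices strictly closer to u than to x
W_ux = {t for t in adj if dist[t]['u'] < dist[t]['x']}
```

Indeed $W(u,x)=\{u,y,z\}$ has weight $3>\tfrac52$, so it is majoritary, yet the median vertex $u'$ lies outside it: the majority rule fails on $K_{2,3}$.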
\subsection{The Wiener index $W_w(G)$ and the distance matrix $D(G)$
of $G$}\label{app-Wiener}
\begin{algorithm}[h]
\caption{DistanceMatrix($G,\Theta$)}
\label{alg:calcapsp}
\DontPrintSemicolon \SetAlgoVlined
\KwData{A median graph $G=(V,E)$, the $\Theta$-classes $\Theta =
(E_1, \hdots,E_q)$ ordered by increasing distance to the basepoint
$v_0$.}
\KwResult{The distance matrix $D : V\times V \rightarrow \ensuremath{\mathbb{N}}\xspace$}
\Begin{
\uIf{$G$ contains a single vertex $v$}{
$D(v,v) \leftarrow 0$\;
\Return $D$
}
\Else{
Let $H'$ and $H''$ be two complementary halfspaces defined by
$E_q$ ($v_0\in H''$)\;
$D\leftarrow$ DistanceMatrix($H'', \Theta \setminus \{E_q\}$) \;
\ForEach{$u'u'' \in E_q$ with $u' \in H'$ and $u''\in H''$}
{
\lForEach{$v''\in H''$}{
$D(u', v'')\leftarrow D(u'',v'')+1$}
\lForEach{$v'v'' \in E_q$ with $v'\in H'$ and $v''\in H''$}
{
$D(u', v')\leftarrow D(u'',v'')$
}
}
\Return $D$
}
}
\end{algorithm}
Using the fast computation of the $\Theta$-classes and of the weights
of halfspaces of a median graph $G$, we can compute the Wiener index
of $G$ in linear time.
\begin{proposition}\label{t-wiener}
The Wiener index $W_w(G)$ of a median graph $G$ can be computed in
$O(m)$ time.
\end{proposition}
\begin{proof}
Given the weights $w(H'_i)$ and $w(H''_i)$ of all halfspaces of $G$,
the Wiener index $W_w(G)$ of $G$ can be computed in $O(q)$ time
using the following formula (which holds for all partial cubes, see
e.g.~\cite{Kl_wiener}):
\begin{lemma}\label{wiener_folklore}
$W_w(G)=\sum_{i=1}^q w(H'_i)\cdot w(H''_i)$.
\end{lemma}
\begin{proof}
If $G$ is a partial cube, then for any two vertices $u,v$ of $G$,
$d(u,v)$ is equal to the number of $\Theta$-classes $E_i$
separating $u$ and $v$ ($u$ and $v$ belong to different halfspaces
defined by $E_i$). We write $u|_iv$ if $E_i$ separates $u$ and
v$. This implies that
\[
\sum_{u\in V}\sum_{v\in V} w(u)\cdot w(v)\cdot d(u,v)=\sum_{u\in
V}\sum_{v\in V}\sum_{i:\, u|_i v} w(u)\cdot w(v)
=2\sum_{i=1}^{q} \sum_{u\in H'_i} \sum_{v\in H''_i} w(u)\cdot w(v),
\]
and since the double sum on the left counts every unordered pair
$\{u,v\}$ twice, $W_w(G)=\sum_{i=1}^q w(H'_i)\cdot w(H''_i)$.
\end{proof}
Since the weight of all halfspaces can be computed in $O(m)$ time
and since $q \leq m$, $W_w(G)$ can also be computed in $O(m)$ time.
\end{proof}
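As an illustration of Lemma~\ref{wiener_folklore} (our own sketch, not part of the algorithm), the following Python code encodes a partial cube by its 0/1 coordinates, so that the $\Theta$-classes are the coordinates, the halfspaces $H'_i$ and $H''_i$ consist of the vertices whose $i$-th bit is $1$ resp.\ $0$, and distances are Hamming distances; the halfspace formula is compared with the direct pairwise sum on the 3-cube:

```python
from itertools import combinations, product

def wiener_halfspaces(vertices, weight):
    # Halfspace formula W_w(G) = sum_i w(H'_i) * w(H''_i); Theta-class E_i is
    # coordinate i of the 0/1 embedding of the partial cube.
    q = len(vertices[0])
    return sum(
        sum(weight[v] for v in vertices if v[i] == 1)    # w(H'_i)
        * sum(weight[v] for v in vertices if v[i] == 0)  # w(H''_i)
        for i in range(q)
    )

def wiener_bruteforce(vertices, weight):
    # Direct sum of w(u) * w(v) * d(u, v) over unordered pairs {u, v},
    # where d is the Hamming distance of the embedding.
    return sum(
        weight[u] * weight[v] * sum(a != b for a, b in zip(u, v))
        for u, v in combinations(vertices, 2)
    )

cube = list(product((0, 1), repeat=3))  # the 3-cube Q_3, a median graph
unit = {v: 1 for v in cube}             # unit weights
```

With unit weights on $Q_3$, both computations give $3\cdot 4\cdot 4 = 48$.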
The distance matrix $D(G)$ of a median graph $G$ can be computed in
$O(n^2)$ time by traversing the reverse peripheral peeling
$G_0,\ldots,G_{q-1},G_q=G$ of $G$ (the pseudo-code is given in
Algorithm~\ref{alg:calcapsp}). For each $i$, we compute $D(G_i)$
assuming $D(G_{i-1})$ already computed. Since $G_{i-1}$ coincides
with the halfspace $H''_i$ of $G_i$, $G_{i-1}$ is gated in $G_i$, and
$D(G_i)$ restricted to $G_{i-1}$ coincides with $D(G_{i-1})$. Hence, to
obtain $D(G_i)$ it remains to compute the distances from all vertices
$v'$ of $H'_i$ to all other vertices of $G_i$. For each pair $u',v'$
of $H'_i$, let $u'',v''$ be their unique neighbors in $G_{i-1}=H''_i$.
Since $H'_i$ is peripheral in $G_i$, $H'_i$ is isomorphic to the
boundary $\partial H''_i$ of $H''_i$. Since $\partial H''_i$ is gated
(by Lemma~\ref{boundary}), $d(u',v')=d(u'',v'')$. Since $v''$ is the
gate of $v'$ in $H''_i$, for each vertex $w''\in H''_i$ we have
$d(v',w'')=d(v'',w'')+1$. This establishes how to complete the
distance matrix $D(G_i)$ from $D(G_{i-1})$. The cost of this
completion is proportional to the number of entries of $D(G_i)$ that are not in
$D(G_{i-1})$. This shows that $D(G)$ can be computed in total $O(n^2)$
time. Consequently, we obtain the following result:
\begin{proposition}\label{t-dist}
The distance matrix $D(G)$ of a median graph $G$ can be computed in
$O(n^2)$ time.
\end{proposition}
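The halfspace-peeling recursion can be sketched in Python on partial cubes given by a 0/1 embedding (our own simplification: it assumes the peeled halfspace $H'$ is peripheral at every step, which holds for the hypercubes used below; in the algorithm this is guaranteed by the peripheral peeling order):

```python
from itertools import product

def distance_matrix(vertices, q=None):
    # Distance matrix of a partial cube given by its 0/1 embedding, computed by
    # peeling halfspaces as in Algorithm DistanceMatrix: recurse on the
    # basepoint-side halfspace H'', then extend the matrix across the peeled
    # Theta-class.  Classes are peeled by decreasing coordinate index.
    vertices = list(vertices)
    if q is None:
        q = len(vertices[0])
    if len(vertices) == 1:
        v = vertices[0]
        return {(v, v): 0}
    i = q - 1
    while all(v[i] == vertices[0][i] for v in vertices):
        i -= 1                                  # skip classes that do not split
    H1 = [v for v in vertices if v[i] == 1]     # halfspace H'
    H2 = [v for v in vertices if v[i] == 0]     # halfspace H'' (basepoint side)
    D = distance_matrix(H2, i)
    flip = lambda u: u[:i] + (0,) + u[i + 1:]   # unique neighbor across E_i
    for u1 in H1:
        for v2 in H2:                           # D(u', v'') = D(u'', v'') + 1
            D[(u1, v2)] = D[(v2, u1)] = D[(flip(u1), v2)] + 1
        for v1 in H1:                           # D(u', v') = D(u'', v'')
            D[(u1, v1)] = D[(flip(u1), flip(v1))]
    return D

Q3 = list(product((0, 1), repeat=3))
D = distance_matrix(Q3)
```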
\section{The median problem in the cube complex of
$G$}\label{sec-cubecomplex}
\subsection{The main result}\label{s-problegeom}
In this section, we describe a linear time algorithm to compute
medians in cube complexes of median graphs.
\subsection*{The problem}
Let $G=(V,E)$ be a median graph with $n$ vertices, $m$ edges, and $q$
$\Theta$-classes $E_1,\ldots, E_q$. Let $\ensuremath{\mathcal{G}}\xspace$ be the \emph{cube
complex} of $G$ obtained by replacing each graphic cube of $G$ by a
unit solid cube and by isometrically identifying common subcubes. We
refer to $\ensuremath{\mathcal{G}}\xspace$ as the \emph{geometric realization} of $G$ (see
Fig.~\ref{the_complex}(a)). We suppose that $\ensuremath{\mathcal{G}}\xspace$ is endowed with the
intrinsic $\ell_1$-metric $d_1$. Let $P$ be a finite set of points of
$(\ensuremath{\mathcal{G}}\xspace,d_1)$ (called \emph{terminals}) and let $w$ be a weight function
on $\ensuremath{\mathcal{G}}\xspace$ such that $w(p)>0$ if $p\in P$ and $w(p)=0$ if $p\notin
P$. The goal of the \emph{median problem} is to compute the set
$\Med_w(\ensuremath{\mathcal{G}}\xspace)$ of median points of $\ensuremath{\mathcal{G}}\xspace$, i.e., the set of all points
$x\in \ensuremath{\mathcal{G}}\xspace$ minimizing the function
$F_w(x)=\sum_{p\in \ensuremath{\mathcal{G}}\xspace} w(p)d_1(x,p)=\sum_{p\in P} w(p)d_1(x,p)$.
\subsection*{The input}
The cube complex $\ensuremath{\mathcal{G}}\xspace$ is given by its 1-skeleton $G$.
Each terminal $p\in P$ is given by its coordinates in the smallest
cube $Q(p)$ of $\ensuremath{\mathcal{G}}\xspace$ containing $p$. Namely, we give a vertex $v(p)$ of
$Q(p)$ together with its neighbors in $Q(p)$ and the coordinates of $p$ in
the embedding of $Q(p)$ as a unit cube in which $v(p)$ is the origin
of coordinates.
Let $\delta$ be the sum of the sizes of the encodings of the
points of $P$. Thus the input of the median problem has size
$O(m+\delta)$.
\subsection*{The output} Unlike $\Med_w(G)$ (which is a gated subgraph
of $G$), $\Med_w(\ensuremath{\mathcal{G}}\xspace)$ is not a subcomplex of $\ensuremath{\mathcal{G}}\xspace$. Nevertheless we
show that $\Med_w(\ensuremath{\mathcal{G}}\xspace)$ is a subcomplex of the box complex $\ensuremath{\widehat{\mathcal{G}}}\xspace$
obtained by subdividing $\ensuremath{\mathcal{G}}\xspace$, using the hyperplanes passing via the
terminals of $P$. The output is the 1-skeleton $\ensuremath{\widehat{M}}\xspace$ of $\Med_w(\ensuremath{\widehat{\mathcal{G}}}\xspace)$ (a subgraph of $\ensuremath{\widehat{G}}$), together with the local
coordinates of the vertices of $\ensuremath{\widehat{M}}\xspace$ in $\ensuremath{\mathcal{G}}\xspace$. We show that the output
has linear size $O(m)$.
\begin{theorem}\label{mediancomplexx}
Let $G$ be a median graph with $m$ edges and let $P$ be a finite set
of terminals of $\ensuremath{\mathcal{G}}\xspace$ described by an input of size $\delta$. The
1-skeleton $\ensuremath{\widehat{M}}\xspace$ of $\Med_w(\ensuremath{\mathcal{G}}\xspace)$ can be computed in linear time
$O(m+\delta)$.
\end{theorem}
\begin{figure}[t]
\captionsetup[subfigure]{singlelinecheck=true}
\centering
\subcaptionbox{\label{fig:Complex}}{\includegraphics[scale=0.64,page=5]{Images/Complexe.pdf}}
\qquad\qquad \subcaptionbox{\label{fig:hyperplane}}
{\includegraphics[scale=0.64,page=4]{Images/Complexe.pdf}}
\qquad\qquad \subcaptionbox{\label{fig:boxcomplex}}
{\includegraphics[scale=0.64,page=6]{Images/Complexe.pdf}}
\caption{~(a) The cube complex ${\mathcal D}$ of $D$,~(b) a
hyperplane of $\mathcal D$, and~(c) the box complex
$\widehat{\mathcal D}$ and $\Med_w({\mathcal D})$ (in gray)
defined by 4 terminals of weight $1$.}\label{the_complex}
\end{figure}
\subsection{Geometric halfspaces and hyperplanes}\label{ss-geomhalfs}
In the following, we fix a basepoint $v_0$ of $G$. For each point $x$
of $\ensuremath{\mathcal{G}}\xspace$, let $Q(x)$ be the smallest cube of $\ensuremath{\mathcal{G}}\xspace$ containing $x$ and
let $v(x)$ be the gate of $v_0$ in $Q(x)$. For each $\Theta$-class
$E_i$ defining a dimension of $Q(x)$, let $\epsilon_i(x)$ be the
coordinate of $x$ along $E_i$ in the embedding of $Q(x)$ as a unit
cube in which $v(x)$ is the origin.
For a $\Theta$-class $E_i$ and a cube $Q$ having $E_i$ as a dimension,
the \emph{$i$-midcube} of $Q$ is the subspace of $Q$ obtained by
restricting the $E_i$-coordinate of $Q$ to $\frac{1}{2}$.
A \emph{midhyperplane} $\ensuremath{\mathfrak{h}}\xspace_i$
of $\ensuremath{\mathcal{G}}\xspace$
is the union of all $i$-midcubes.
Each $\ensuremath{\mathfrak{h}}\xspace_i$ cuts $\ensuremath{\mathcal{G}}\xspace$ into two components~\cite{Sa} and the union of
each of these components with $\ensuremath{\mathfrak{h}}\xspace_i$ is called a \emph{geometric
halfspace} (see Fig.~\ref{the_complex}(b)).
The \emph{carrier} $\ensuremath{\mathcal{N}}\xspace_i$ of $E_i$ is the union of all cubes of $\ensuremath{\mathcal{G}}\xspace$
intersecting $\ensuremath{\mathfrak{h}}\xspace_i$; $\ensuremath{\mathcal{N}}\xspace_i$ is isomorphic to $\ensuremath{\mathfrak{h}}\xspace_i\times[0,1]$. For
a $\Theta$-class $E_i$ and $0< \epsilon < 1$, the \emph{hyperplane}
$\ensuremath{\mathfrak{h}}\xspace_i(\epsilon)$ is the set of all points $x\in \ensuremath{\mathcal{N}}\xspace_i$ such that
$\epsilon_i(x)=\epsilon$. Let $\ensuremath{\mathfrak{h}}\xspace_i(0)$ and $\ensuremath{\mathfrak{h}}\xspace_i(1)$ be the
respective geometric realizations of $\partial H_i''$ and
$\partial H_i'$. Note that $\ensuremath{\mathfrak{h}}\xspace_i(\epsilon)$ is obtained from $\ensuremath{\mathfrak{h}}\xspace_i$
by a translation.
The \emph{open carrier} $\ensuremath{\mathcal{N}}\xspace^{\circ}_i$ is
$\ensuremath{\mathcal{N}}\xspace_i\setminus(\ensuremath{\mathfrak{h}}\xspace_i(0)\cup \ensuremath{\mathfrak{h}}\xspace_i(1))$.
We denote by $\ensuremath{\mathcal{H}}\xspace'_i(\epsilon)$ and $\ensuremath{\mathcal{H}}\xspace''_i(\epsilon)$ the geometric
halfspaces of $\ensuremath{\mathcal{G}}\xspace$ defined by
$\ensuremath{\mathfrak{h}}\xspace_i(\epsilon)$. Let $\ensuremath{\mathcal{H}}\xspace''_i:=\ensuremath{\mathcal{H}}\xspace''_i(0)$ and $\ensuremath{\mathcal{H}}\xspace'_i:=\ensuremath{\mathcal{H}}\xspace'_i(1)$; they are the
geometric realizations of $H''_i$ and $H'_i$, respectively. Note that $\ensuremath{\mathcal{G}}\xspace$ is the
disjoint union of $\ensuremath{\mathcal{H}}\xspace'_i$, $\ensuremath{\mathcal{H}}\xspace''_i$, and $\ensuremath{\mathcal{N}}\xspace^{\circ}_i$.
\subsection{The majority rule for $\ensuremath{\mathcal{G}}\xspace$}
Now we show how to reduce the median problem in $\ensuremath{\mathcal{G}}\xspace$ to a median problem
in a median graph.
\subsection*{The box complex $\ensuremath{\widehat{\mathcal{G}}}\xspace$}
By~\cite[Theorem 3.16]{vdV}, $(\ensuremath{\mathcal{G}}\xspace,d_1)$ is a median metric space
(i.e., $|I(x,y)\cap I(y,z)\cap I(z,x)|=1$ $\forall x,y,z\in \ensuremath{\mathcal{G}}\xspace$) and
the graph $G$ is isometrically embedded in $(\ensuremath{\mathcal{G}}\xspace,d_1)$. For each
$p\in P$ and each coordinate $\epsilon_i(p)$,
consider the hyperplane $\ensuremath{\mathfrak{h}}\xspace_i(\epsilon_i(p))$. All such hyperplanes
subdivide $\ensuremath{\mathcal{G}}\xspace$ into a box complex $\ensuremath{\widehat{\mathcal{G}}}\xspace$ (see
Fig.~\ref{the_complex}(c)).
Clearly, $(\ensuremath{\widehat{\mathcal{G}}}\xspace,d_1)$ is a median space. By~\cite[Theorem 3.13]{vdV},
the 1-skeleton $\ensuremath{\widehat{G}}$ of $\ensuremath{\widehat{\mathcal{G}}}\xspace$ is a median graph and each point of $P$
corresponds to a vertex of $\ensuremath{\widehat{G}}$. The $\Theta$-classes of $\ensuremath{\widehat{G}}$ are
subdivisions of the $\Theta$-classes of $G$. In $\ensuremath{\widehat{\mathcal{G}}}\xspace$, all edges of a
$\Theta$-class of $\ensuremath{\widehat{G}}$ have the same length. Let $\ensuremath{\widehat{G}}_l$ be the graph
$\ensuremath{\widehat{G}}$ in which the edges have these lengths. $\ensuremath{\widehat{G}}_l$ is a median
space, thus $\Med_w(\ensuremath{\widehat{G}}_l)=\Med_w(\ensuremath{\widehat{G}})$ by~\cite{SoCh_Weber}. By
Proposition~\ref{majority}, $\Med_w(\ensuremath{\widehat{G}}_l)$
is the intersection of the majoritary halfspaces of
$\ensuremath{\widehat{G}}$.
\begin{proposition}\label{med_as_subcomplex}
$\Med_w(\ensuremath{\mathcal{G}}\xspace)$ is the subcomplex of $\ensuremath{\widehat{\mathcal{G}}}\xspace$ defined by
$\ensuremath{\widehat{M}}\xspace:=\Med_w(\ensuremath{\widehat{G}}_l)$.
\end{proposition}
\begin{proof}
Let $\ensuremath{\widehat{E}}\xspace_1, \ldots, \ensuremath{\widehat{E}}\xspace_{\hat{q}}$ be the
$\Theta$-classes of $\ensuremath{\widehat{G}}$. For a point $x \in
\ensuremath{\mathcal{G}}\xspace$, we denote by $\ensuremath{\widehat{Q}}\xspace(x)$ the smallest box of $\ensuremath{\widehat{\mathcal{G}}}\xspace$ containing
$x$, and for any $\Theta$-class $\ensuremath{\widehat{E}}\xspace_i$ of
$\ensuremath{\widehat{Q}}\xspace(x)$, let $\ensuremath{\widehat{\epsilon}}\xspace_i(x)$ be the coordinate of
$x$ along the dimension $\ensuremath{\widehat{E}}\xspace_i$ in the embedding of
$\ensuremath{\widehat{Q}}\xspace(x)$ as a unit cube where the origin is the gate of
$v_0$ on $\ensuremath{\widehat{Q}}\xspace(x)$.
For a point $x\in \ensuremath{\mathcal{G}}\xspace$, let $\ensuremath{\widehat{\mathcal{G}}}\xspace(x)$ be the subdivision of the
complex $\ensuremath{\widehat{\mathcal{G}}}\xspace$ by the hyperplanes passing via $x$, let $\ensuremath{\widehat{G}}(x)$
denote the 1-skeleton of $\ensuremath{\widehat{\mathcal{G}}}\xspace(x)$ and let $\ensuremath{\widehat{G}}_l(x)$ be the
corresponding weighted graph. Again, $\ensuremath{\widehat{G}}(x)$ is a median graph and
$\ensuremath{\widehat{G}}_l(x)$ is an isometric subgraph of $\ensuremath{\widehat{\mathcal{G}}}\xspace(x)$, which as a metric space coincides with
$\ensuremath{\mathcal{G}}\xspace$ and $\ensuremath{\widehat{\mathcal{G}}}\xspace$.
We show by induction on the dimension of $\ensuremath{\widehat{Q}}\xspace(x)$ that
$x \in \Med_w(\ensuremath{\mathcal{G}}\xspace) = \Med_w(\ensuremath{\widehat{\mathcal{G}}}\xspace) = \Med_w(\ensuremath{\widehat{\mathcal{G}}}\xspace(x))$ if and only if for
any vertex $v$ of $\ensuremath{\widehat{Q}}\xspace(x)$, $v \in \Med_w(\ensuremath{\mathcal{G}}\xspace)$. If $\ensuremath{\widehat{Q}}\xspace(x) = \{x\}$,
then there is nothing to prove. Otherwise, pick a $\Theta$-class
$\ensuremath{\widehat{E}}\xspace_i$ of $\ensuremath{\widehat{G}}$ such that $0 < \ensuremath{\widehat{\epsilon}}\xspace_i(x) <1$. In $\ensuremath{\widehat{G}}(x)$, $x$
has exactly two neighbors $x',x''$ belonging to two opposite facets
of $\ensuremath{\widehat{Q}}\xspace(x)$ such that $\ensuremath{\widehat{\epsilon}}\xspace_i(x'')=0$ and $\ensuremath{\widehat{\epsilon}}\xspace_i(x')=1$.
Observe that $V(\ensuremath{\widehat{Q}}\xspace(x)) = V(\ensuremath{\widehat{Q}}\xspace(x')) \cup V(\ensuremath{\widehat{Q}}\xspace(x''))$ and
$x \in I_{\ensuremath{\widehat{G}}(x)}(x',x'')$. By the definition of $\ensuremath{\widehat{G}}$, there is no
terminal $p \in P$ with $0 <\ensuremath{\widehat{\epsilon}}\xspace_i(p) <1$. Consequently in
$\ensuremath{\widehat{G}}(x)$, $W(x'',x) \cap P = W(x,x') \cap P$ and
$W(x',x) \cap P = W(x,x'') \cap P$. Therefore in $\ensuremath{\widehat{G}}(x)$ (and in
$\ensuremath{\widehat{G}}_l(x)$), the halfspace $W(x'',x)$ is majoritary
(resp.\ egalitarian, minoritary) if and only if $W(x',x)$ minoritary
(resp.\ egalitarian, majoritary).
Suppose that $x \in \Med_w(\ensuremath{\mathcal{G}}\xspace)$. If $W(x,x'')$ (resp.\ $W(x,x')$) is
minoritary in $\ensuremath{\widehat{G}}(x)$, then by Lemma~\ref{F(x)-F(y)} applied to
$\ensuremath{\widehat{G}}_l(x)$, $F_w(x) > F_w(x'')$ (resp.\ $F_w(x) > F_w(x')$) and
$x \notin \Med_w(\ensuremath{\mathcal{G}}\xspace)$, a contradiction. Thus, necessarily $W(x'',x)$
and $W(x',x)$ are egalitarian and by Lemma~\ref{F(x)-F(y)} in
$\ensuremath{\widehat{G}}_l(x)$, $F_w(x')=F_w(x'')=F_w(x)$. Since $\ensuremath{\widehat{Q}}\xspace(x')$ and
$\ensuremath{\widehat{Q}}\xspace(x'')$ are facets of $\ensuremath{\widehat{Q}}\xspace(x)$, by induction hypothesis, all
vertices in $V(\ensuremath{\widehat{Q}}\xspace(x)) = V(\ensuremath{\widehat{Q}}\xspace(x')) \cup V(\ensuremath{\widehat{Q}}\xspace(x''))$ belong to
$\Med_w(\ensuremath{\mathcal{G}}\xspace)$. Conversely, suppose that
$V(\ensuremath{\widehat{Q}}\xspace(x)) = V(\ensuremath{\widehat{Q}}\xspace(x')) \cup V(\ensuremath{\widehat{Q}}\xspace(x'')) \subseteq \Med_w(\ensuremath{\mathcal{G}}\xspace)$. Then
by induction hypothesis, $x',x'' \in \Med_w(\ensuremath{\mathcal{G}}\xspace)$. Since
$\Med_w(\ensuremath{\widehat{G}}_l(x)) = \Med_w(\ensuremath{\widehat{G}}(x))$ is convex and
$x \in I_{\ensuremath{\widehat{G}}(x)}(x',x'')$, we have $F_w(x) = F_w(x') = F_w(x'')$ and
consequently, $x \in \Med_w(\ensuremath{\mathcal{G}}\xspace)$.
\end{proof}
\subsection*{The $E_i$-median problems}
We adapt now Proposition~\ref{majority} to the continuous setting. In
our algorithm and next results we will not explicitly construct the
box complex $\ensuremath{\widehat{\mathcal{G}}}\xspace$ and its 1-skeleton $\widehat{G}$ (because they are
too large), but we will only use them in proofs.
For a $\Theta$-class of $G$, the $E_i$-\emph{median} is the median of
the multiset of points of the segment $[0,1]$ weighted as follows: the
weight $w_i(0)$ of $0$ is $w(\ensuremath{\mathcal{H}}\xspace''_i)$, the weight $w_i(1)$ of 1 is
$w(\ensuremath{\mathcal{H}}\xspace'_i)$, and for each $p\in P\cap \ensuremath{\mathcal{N}}\xspace_i^{\circ}$, there is a point
$\epsilon_i(p)$ of $[0,1]$ of weight $w_i(\epsilon_i(p))=w(p)$. It is
well-known that this median is a segment $[\varrho''_i,\varrho'_i]$
defined by two consecutive points $\varrho''_i\le \varrho'_i$ of
$[0,1]$ with positive weights,
and for any $p \in P$, $\epsilon_i(p)\leq \varrho_i''$ or
$\epsilon_i(p) \geq \varrho_i'$. \emph{Majoritary}, \emph{minoritary},
and \emph{egalitarian} geometric halfspaces of $\ensuremath{\mathcal{G}}\xspace$ are defined in the
same way as the halfspaces of $G$.
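Each $E_i$-median problem is thus an instance of the classical weighted median problem on a line. A minimal Python helper (our own sketch, with hypothetical names) returning the segment $[\varrho''_i,\varrho'_i]$ of minimizers:

```python
def ei_median(points):
    # Weighted median of a finite multiset of weighted points of [0, 1]:
    # returns the segment [rho2, rho1] of minimizers of f(x) = sum_j w_j |x - x_j|.
    # `points` is a list of (position, weight) pairs with positive weights.
    pts = sorted(points)
    half = sum(w for _, w in pts) / 2
    acc = 0.0
    for x, w in pts:             # rho'': first point reaching half the total weight
        acc += w
        if acc >= half:
            rho2 = x
            break
    acc = 0.0
    for x, w in reversed(pts):   # rho': last point reaching half from the right
        acc += w
        if acc >= half:
            rho1 = x
            break
    return rho2, rho1
```

The returned endpoints are always input points of positive weight, and no input point lies strictly between them, as stated above.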
\begin{proposition}\label{majority_bis}
Let $E_i$ be a $\Theta$-class of $G$. Then the following holds:
\begin{enumerate}
\item $\Med_w(\ensuremath{\mathcal{G}}\xspace)\subseteq \ensuremath{\mathcal{H}}\xspace''_i$ (resp.\
  $\Med_w(\ensuremath{\mathcal{G}}\xspace)\subseteq \ensuremath{\mathcal{H}}\xspace'_i$) if and only if $\ensuremath{\mathcal{H}}\xspace''_i$ is
  majoritary (resp.\ $\ensuremath{\mathcal{H}}\xspace'_i$ is majoritary), i.e.,
  $\varrho_i'' = \varrho_i' = 0$ (resp.\ $\varrho_i'' = \varrho_i' = 1$);
\item $\Med_w(\ensuremath{\mathcal{G}}\xspace)\subseteq \ensuremath{\mathcal{H}}\xspace''_i\cup \ensuremath{\mathcal{N}}\xspace^{\circ}_i$ (resp.\
  $\Med_w(\ensuremath{\mathcal{G}}\xspace)\subseteq \ensuremath{\mathcal{H}}\xspace'_i\cup \ensuremath{\mathcal{N}}\xspace^{\circ}_i$) and $\Med_w(\ensuremath{\mathcal{G}}\xspace)$
  intersects each of the sets $\ensuremath{\mathcal{H}}\xspace''_i$ (resp.\ $\ensuremath{\mathcal{H}}\xspace'_i$) and
  $\ensuremath{\mathcal{N}}\xspace^{\circ}_i$ if and only if $\ensuremath{\mathcal{H}}\xspace''_i$ (resp.\ $\ensuremath{\mathcal{H}}\xspace'_i$) is
  egalitarian and $\ensuremath{\mathcal{H}}\xspace'_i$ (resp.\ $\ensuremath{\mathcal{H}}\xspace''_i$) is minoritary, i.e.,
  $0 = \varrho_i'' < \varrho_i' < 1$ (resp.\ $0 < \varrho_i'' < \varrho_i' = 1$);
\item $\Med_w(\ensuremath{\mathcal{G}}\xspace)\subseteq \ensuremath{\mathcal{N}}\xspace^{\circ}_i$ if and only if $\ensuremath{\mathcal{H}}\xspace'_i$
  and $\ensuremath{\mathcal{H}}\xspace''_i$ are minoritary, i.e.,
  $0 < \varrho_i'' \leq \varrho_i' < 1$;
\item $\Med_w(\ensuremath{\mathcal{G}}\xspace)$ intersects the three sets $\ensuremath{\mathcal{H}}\xspace'_i$, $\ensuremath{\mathcal{H}}\xspace''_i$, and
  $\ensuremath{\mathcal{N}}\xspace^{\circ}_i$ if and only if $\ensuremath{\mathcal{H}}\xspace'_i$ and $\ensuremath{\mathcal{H}}\xspace''_i$ are
  egalitarian, i.e., $0 = \varrho_i'' < \varrho_i' = 1$ (and thus
  $w(\ensuremath{\mathcal{N}}\xspace^{\circ}_i)=0$).
\end{enumerate}
\end{proposition}
\begin{proof}
Let $0<\epsilon_1<\cdots<\epsilon_k<1$ denote the possible values of
coordinates of points of $P$ with respect to the $\Theta$-class
$E_i$. They define parallel hyperplanes
$\ensuremath{\mathfrak{h}}\xspace_i(0),\ensuremath{\mathfrak{h}}\xspace_i(\epsilon_1),\ldots,\ensuremath{\mathfrak{h}}\xspace_i(\epsilon_k),\ensuremath{\mathfrak{h}}\xspace_i(1)$. The
pieces of the edges of $E_i$ bounded by two such consecutive
hyperplanes define a $\Theta$-class of $\widehat{G}$ and all such
$\Theta$-classes are laminar. In fact, we have the following chains
of inclusions between the geometric halfspaces of $\ensuremath{\mathcal{G}}\xspace$ (or $\ensuremath{\widehat{\mathcal{G}}}\xspace$)
defined by those $\Theta$-classes:
$\ensuremath{\mathcal{H}}\xspace''_i=\ensuremath{\mathcal{H}}\xspace_i''(0)\subset \ensuremath{\mathcal{H}}\xspace''_i(\epsilon_1)\subset\cdots\subset
\ensuremath{\mathcal{H}}\xspace''_i(\epsilon_k)$ and
$\ensuremath{\mathcal{H}}\xspace'_i=\ensuremath{\mathcal{H}}\xspace'_i(1)\subset \ensuremath{\mathcal{H}}\xspace'_i(\epsilon_k)\subset\cdots\subset
\ensuremath{\mathcal{H}}\xspace'_i(\epsilon_1)$. Similar inclusions hold between corresponding
halfspaces of the graph $\widehat{G}$. Notice also that, by the
definition of $\widehat{G}$ and $\ensuremath{\widehat{\mathcal{G}}}\xspace$, the geometric halfspaces and
the corresponding graphic halfspaces, have the same weight.
Therefore, to deduce the different cases of the proposition, it
suffices to apply the majority rule (Proposition~\ref{majority} and
Corollary~\ref{disjoint-halfspaces}) to the halfspaces of
$\widehat{G}$ occurring in the two chains of inclusions and use the
fact that $\ensuremath{\widehat{M}}\xspace$ is the $1$-skeleton of $\Med_w(\ensuremath{\mathcal{G}}\xspace) = \Med_w(\ensuremath{\widehat{\mathcal{G}}}\xspace)$ in
$\ensuremath{\widehat{\mathcal{G}}}\xspace$ from Proposition~\ref{med_as_subcomplex}.
\end{proof}
\subsection{The algorithm}
\begin{algorithm}[]
\caption{ComputeMedianCubeComplex($G,P,w,\Theta$)}\label{alg:compmedcomplex}
\SetKw{Add}{add}
\SetKw{To}{to}
\SetKwFunction{ComputeWeightsOfHalfspaces}{ComputeWeightsOfHalfspaces}
\DontPrintSemicolon
\SetAlgoVlined
\KwData{A median graph $G=(V,E)$, a set of terminals $P$, a weight
function $w : P\rightarrow \ensuremath{\mathbb{R}}\xspace^+$, the $\Theta$-classes $\Theta =
(E_1, \hdots,E_q)$ of $G$ ordered by increasing distance to the
basepoint $v_0$.}
\KwResult{The graph $\ensuremath{\widehat{M}}\xspace$ and the coordinates of each vertex $\ensuremath{\widehat{m}} \in
\ensuremath{\widehat{M}}\xspace$ in $\ensuremath{\mathcal{G}}\xspace$}
\Begin{
Modify the root $v(p)$ of each point $p \in P$ such that $v(p)$
is the gate of $v_0$ on $Q(p)$ \;
Compute $w(P_i)$ for all $\Theta$-classes $E_i$ by
traversing $P$\;
Compute $w_*(v)=w(P_v)$ for all $v \in V$ by traversing $P$ \;
Apply \ComputeWeightsOfHalfspaces$(G,w_*,\Theta)$ to compute the
weights $w_*(H_i')$ and $w_*(H_i'')$ for each $\Theta$-class $E_i$\;
Set $w(\ensuremath{\mathcal{H}}\xspace_i') \leftarrow w_*(H_i')$ and $w(\ensuremath{\mathcal{H}}\xspace_i'') \leftarrow
w_*(H_i'') - w(P_i)$ for each $\Theta$-class $E_i$\;
Compute the $E_i$-median instance for all $\Theta$-classes $E_i$ by
traversing $P$\;
Compute the $E_i$-median $[\varrho_i'',\varrho_i']$ for each
$\Theta$-class $E_i$ \;
Orient the edges of $G$ and compute the set of half-edges around
each vertex $v \in V$\;
Compute the set $S(\ensuremath{\overrightarrow{G}})$ of sinks of $\ensuremath{\overrightarrow{G}}$ by traversing the
edges of $G$\;
$V(\ensuremath{\widehat{M}}\xspace) \leftarrow \left\{g(v):v \in S(\ensuremath{\overrightarrow{G}})\right\}$\;
$E(\ensuremath{\widehat{M}}\xspace) \leftarrow \left\{g(u)g(v):uv \in E \text{ and } u,v \in
S(\ensuremath{\overrightarrow{G}})\right\}$ \;
\Return $\left(V(\ensuremath{\widehat{M}}\xspace),E(\ensuremath{\widehat{M}}\xspace)\right)$ \;
}
\end{algorithm}
\subsection*{Preprocessing the input}
We first compute the $\Theta$-classes $E_1, E_2, \ldots, E_q$ of $G$
ordered by increasing distance from $v_0$ to $H'_i$. Using this, we
first modify the input of the median problem in linear time
$O(m + \delta)$ in such a way that for each terminal $p \in P$, $v(p)$
is the gate of $v_0$ in $Q(p)$. Once the $\Theta$-classes have been
computed, we can assume that each terminal $p$ is described by its
root $v(p)$ as well as a list of coordinates $\Delta(p)$, one
coordinate $0 <\epsilon_i(p)<1$ for each $\Theta$-class $E_i$ of
$Q(p)$ such that $\epsilon_i(p)$ is the coordinate of $p$ along $E_i$
in the embedding of $Q(p)$ as a unit cube in which $v(p)$ is the
origin. To update $v(p)$ and $\Delta(p)$, we use a (non-initialized)
matrix $B$ whose rows and columns are indexed respectively by the
vertices and the $\Theta$-classes of $G$ and such that if a vertex $v$
has a neighbor $v'$ such that $vv'$ belongs to the $\Theta$-class
$E_i$, then $B[v,E_i] = v'$ (and $B[v,E_i]$ is undefined if $v$ does
not have such a neighbor $v'$). One can construct $B$ in time $O(m)$
by traversing all edges of $G$ once the $\Theta$-classes have been
computed. With the matrix $B$ at hand, for each terminal $p \in P$, we
consider the coordinates of $p$ in order and for each coordinate
$\epsilon_i(p)$ of $p$, if $v'=B[v(p),E_i]$ is closer to $v_0$ than
$v(p)$, we replace $v(p)$ by $v'$ and $\epsilon_i(p)$ by
$1-\epsilon_i(p)$. Observe that each time we modify $v(p)$, $v(p)$ is
still a vertex of $Q(p)$ and thus $B[v(p),E_j]$ is still defined for
any coordinate $\epsilon_j \in \Delta(p)$. Note that $v(p)$ can move
to distance up to $|\Delta(p)|$ from its original position during the
process. Once the matrix $B$ has been computed, the modification of
the roots of all the points $p \in P$ can be performed in time
$O(\sum_{p\in P} |\Delta(p)|) = O(\delta)$.
In this way, the local coordinates of the terminals of $P$ coincide
with the coordinates $\epsilon_i(p)$ defined in
Section~\ref{ss-geomhalfs}.
For each $\Theta$-class $E_i$, let
$P_i =P\cap \ensuremath{\mathcal{N}}\xspace^{\circ}_i = \{p \in P : 0 < \epsilon_i(p) <1\}$, and
for each vertex $v \in V(G)$, let $P_v = \{p \in P : v(p) = v\}$. By
traversing the points of $P$, we can compute all sets $P_i$,
$1\leq i \leq q$ and $P_v$, $v \in V$ and the weights of these sets in
time $O(\delta)$.
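On the 0/1 hypercube encoding, the root-update rule above can be sketched as follows (our own illustration with hypothetical names; in a general median graph, "closer to $v_0$" is tested via the matrix $B$ and the ordering of the $\Theta$-classes, while here it simply means clearing a bit of the root):

```python
def normalize_terminal(root, coords):
    # Re-root one terminal p on the 0/1 hypercube encoding: whenever the root
    # v(p) can be moved closer to the basepoint v0 = (0, ..., 0) along a
    # Theta-class E_i of Q(p), flip that bit and replace eps_i by 1 - eps_i.
    root = list(root)
    coords = dict(coords)          # Theta-class index -> eps_i(p) in (0, 1)
    for i, eps in list(coords.items()):
        if root[i] == 1:           # the neighbor across E_i is closer to v0
            root[i] = 0
            coords[i] = 1 - eps
    return tuple(root), coords
```

After the update, the root is the gate of $v_0$ in the minimal cube $Q(p)$, as required.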
\subsection*{Computing the $E_i$-medians}
We first compute the weights $w_i(0) = w(\ensuremath{\mathcal{H}}\xspace_i'')$ and
$w_i(1) = w(\ensuremath{\mathcal{H}}\xspace_i')$ of the geometric halfspaces $\ensuremath{\mathcal{H}}\xspace_i'', \ensuremath{\mathcal{H}}\xspace_i'$ of
$G$. For each vertex $v$ of $G$, let $\ensuremath{{w}_*}\xspace(v) = w(P_v)$. Note that
$\ensuremath{{w}_*}\xspace(V) = w(P)$. Since $v_0 \in H_i''$, $\ensuremath{{w}_*}\xspace(H_i') = w(\ensuremath{\mathcal{H}}\xspace_i')$ and
$\ensuremath{{w}_*}\xspace(H_i'') = w(\ensuremath{\mathcal{H}}\xspace_i'') + w(\ensuremath{\mathcal{N}}\xspace^{\circ}_i)$ for each $\Theta$-class
$E_i$. We apply the algorithm of Section~\ref{ssec:weight-halfpaces}
to $G$ with the weight function $\ensuremath{{w}_*}\xspace$ to compute the weights
$\ensuremath{{w}_*}\xspace(H_i')$ and $\ensuremath{{w}_*}\xspace(H_i'')$ of all halfspaces of $G$. Since
$w(\ensuremath{\mathcal{N}}\xspace^{\circ}_i) = w(P_i)$ is known, we can compute $w(\ensuremath{\mathcal{H}}\xspace_i') = \ensuremath{{w}_*}\xspace(H_i')$
and $w(\ensuremath{\mathcal{H}}\xspace_i'') = \ensuremath{{w}_*}\xspace(H_i'') - w(P_i)$. This allows us to complete
the definition of each $E_i$-median problem, which altogether can be
solved linearly in the size of the input~\cite[Problem 9.2]{Cormen},
i.e., in time $O(\sum_{i=1}^q (|P_i| +2)) = O(\delta+m)$.
\subsection*{Computing $\ensuremath{\widehat{M}}\xspace$}
To compute the 1-skeleton $\ensuremath{\widehat{M}}\xspace$ of $\Med_w(\ensuremath{\mathcal{G}}\xspace)$ in $\ensuremath{\widehat{G}}$, we orient
the edges of $E_i$ according to the weights of $\ensuremath{\mathcal{H}}\xspace'_i$ and $\ensuremath{\mathcal{H}}\xspace''_i$:
$v'v''\in E_i$ with $v'\in \ensuremath{\mathcal{H}}\xspace'_i$ and $v''\in \ensuremath{\mathcal{H}}\xspace''_i$ is directed
from $v''$ to $v'$ if $\varrho'_i=\varrho''_i=1$ ($\ensuremath{\mathcal{H}}\xspace'_i$ is
majoritary) and from $v'$ to $v''$ if $\varrho'_i=\varrho''_i=0$
($\ensuremath{\mathcal{H}}\xspace''_i$ is majoritary), otherwise the edges of $E_i$ are not
oriented. Denote this partially directed graph by $\ensuremath{\overrightarrow{G}}$ and let
$S(\ensuremath{\overrightarrow{G}})$ be the set of sinks of $\ensuremath{\overrightarrow{G}}$. A non-directed edge
$v'v'' \in E_i$ defines a \emph{half-edge with origin} $v''$ if
$\varrho''_i>0$ and a \emph{half-edge with origin} $v'$ if
$\varrho'_i < 1$ (an edge $v'v''$ such that
$0< \varrho''_i\le \varrho'_i< 1$ defines two half-edges).
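The orientation and its sinks can be sketched on the hypercube encoding (our own illustration; in a hypercube every vertex is incident to an edge of every $\Theta$-class, so the rule below directs some edge away from $v$ whenever a majoritary halfspace does not contain $v$):

```python
from itertools import product

def sinks(vertices, medians):
    # Sinks of the partial orientation of a hypercube-encoded median graph:
    # the E_i-edges are directed toward H'_i when rho''_i = rho'_i = 1, toward
    # H''_i when rho''_i = rho'_i = 0, and left undirected otherwise.
    # A sink is a vertex with no outgoing edge.
    result = []
    for v in vertices:
        out_edge = False
        for i, (rho2, rho1) in enumerate(medians):
            if (rho2, rho1) == (1, 1) and v[i] == 0:   # edge directed v -> H'_i
                out_edge = True
            if (rho2, rho1) == (0, 0) and v[i] == 1:   # edge directed v -> H''_i
                out_edge = True
        if not out_edge:
            result.append(v)
    return result

Q2 = list(product((0, 1), repeat=2))
```

For instance, on $Q_2$ with $\ensuremath{\mathcal{H}}\xspace'_1$ majoritary and $\ensuremath{\mathcal{H}}\xspace''_2$ majoritary, the unique sink is the vertex of $\ensuremath{\mathcal{H}}\xspace'_1\cap \ensuremath{\mathcal{H}}\xspace''_2$.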
\begin{proposition}\label{one_sink}
For any vertex $v$ of $\ensuremath{\overrightarrow{G}}$, all half-edges with origin $v$ define a
cube $Q_v$ of $\ensuremath{\mathcal{G}}\xspace$.
\end{proposition}
\begin{proof}
For any vertex $v$ and two $\Theta$-classes $E_i,E_j$ defining
half-edges with origin $v$, let $v_i$ and $v_j$ be the respective
neighbors of $v$ in $\ensuremath{\widehat{G}}$ along the directions $E_i$ and $E_j$. By
Proposition~\ref{majority_bis}, $vv_i$ and $vv_j$ point to two
majoritary halfspaces of $\ensuremath{\widehat{G}}$ (and $\ensuremath{\mathcal{G}}\xspace$). Since those two
halfspaces cannot be disjoint, $E_i$ and $E_j$ are crossing.
The proposition then follows from Lemma~\ref{crossing}.
\end{proof}
For any cube $Q$ of $\ensuremath{\mathcal{G}}\xspace$, let $B(Q) \subseteq Q$ be the subcomplex of
$\ensuremath{\widehat{\mathcal{G}}}\xspace$ that is the Cartesian product of the $E_i$-medians
$[\varrho''_i,\varrho'_i]$ over all $\Theta$-classes $E_i$ which define
dimensions of $Q$. By the definition of the $E_i$-medians, $B(Q)$ is a
single box of $\ensuremath{\widehat{\mathcal{G}}}\xspace$ and its vertices belong to $\ensuremath{\widehat{G}}$.
\begin{proposition}
For any cube $Q$ of $\ensuremath{\mathcal{G}}\xspace$, if $Q \cap \Med_{w}(\ensuremath{\mathcal{G}}\xspace) \neq \emptyset$,
then $B(Q) = \Med_w(\ensuremath{\mathcal{G}}\xspace) \cap Q$.
\end{proposition}
\begin{proof}
If a vertex $x$ of $B(Q)$ is not a median of $\ensuremath{\widehat{G}}$, by
Proposition~\ref{majority}, $x$ is not a local median of $\ensuremath{\widehat{G}}$. Thus
$F_w(x)>F_w(y)$ for an edge $xy$ of $\ensuremath{\widehat{G}}$. Suppose that $xy$ is
parallel to the edges of $E_i$ of $G$. Then $\epsilon_i(x)$
coincides with $\varrho''_i$ or $\varrho'_i$. Since $F_w(x)>F_w(y)$,
the halfspace $W(y,x)$ of $\ensuremath{\widehat{G}}$ is majoritary, contrary to the
assumption that $\epsilon_i(x)$ is an $E_i$-median point. Thus all
vertices of $B(Q)$ belong to $\ensuremath{\widehat{M}}\xspace$ and by
Proposition~\ref{med_as_subcomplex}, $B(Q)\subseteq \Med_w(\ensuremath{\mathcal{G}}\xspace)$. It
remains to show that any point of $Q\setminus B(Q)$ is not
median. Otherwise, by Proposition~\ref{med_as_subcomplex} and since
$\ensuremath{\widehat{M}}\xspace$ is convex, there exists a vertex $y$ of
$(\ensuremath{\widehat{M}}\xspace\cap Q)\setminus B(Q)$ adjacent to a vertex $x$ of $B(Q)$. Let
$xy$ be parallel to $E_i$. Then $\epsilon_i(x)$ coincides with
$\varrho''_i$ or $\varrho'_i$ and $\epsilon_i(y)$ does not belong to
the $E_i$-median $[\varrho''_i,\varrho'_i]$. Hence the halfspace
$W(y,x)$ of $\ensuremath{\widehat{G}}$ is minoritary, contrary to $F_w(y)=F_w(x)$.
\end{proof}
For a sink $v$ of $\ensuremath{\overrightarrow{G}}$, let $g(v)$ be the point of $Q_v$ such that
for each $\Theta$-class $E_i$ of $Q_v$, $\epsilon_i(g(v)) = \varrho'_i$
if $v \in \ensuremath{\mathcal{H}}\xspace_i'$ and $\epsilon_i(g(v)) = \varrho''_i$ if
$v \in \ensuremath{\mathcal{H}}\xspace_i''$. Note that $g(v)$ is the gate of $v$ in $B(Q_v)$ and
$g(v)$ is a vertex of $\ensuremath{\widehat{M}}\xspace$. Conversely, let $x \in \ensuremath{\widehat{M}}\xspace$ and consider
the cube $Q(x)$. Since $B(Q(x))$ is a cell of $\ensuremath{\widehat{\mathcal{G}}}\xspace$, for each
$\Theta$-class $E_i$ of $Q(x)$, we have
$\epsilon_i(x) \in \{\varrho'_i,\varrho_i''\}$. Let $f(x)$ be the
vertex of $Q(x)$ such that $f(x) \in \ensuremath{\mathcal{H}}\xspace_i''$ if
$\epsilon_i(x) = \varrho''_i$ and $f(x) \in \ensuremath{\mathcal{H}}\xspace_i'$ otherwise.
\begin{proposition}\label{prop-sources}
For any $v\in S(\ensuremath{\overrightarrow{G}})$, $g(v)$ is the gate of $v$ in $\ensuremath{\widehat{M}}\xspace$ and
$\Med_w(\ensuremath{\mathcal{G}}\xspace)$. For any $x\in \ensuremath{\widehat{M}}\xspace$, $x = g(f(x))$ is the gate of
$f(x)$ in $\ensuremath{\widehat{M}}\xspace$ and $\Med_w(\ensuremath{\mathcal{G}}\xspace)$.
Furthermore, for any edge $uv$ of $G$ with $u,v\in S(\ensuremath{\overrightarrow{G}})$, either
$g(u) = g(v)$ or $g(u)g(v)$ is an edge of $\ensuremath{\widehat{M}}\xspace$. Conversely, for any
edge $xy$ of $\ensuremath{\widehat{M}}\xspace$, $f(x)f(y)$ is an edge of $G$.
\end{proposition}
\begin{proof}By Proposition~\ref{majority_bis} applied to $\ensuremath{\mathcal{G}}\xspace$,
Proposition~\ref{majority} applied to $\ensuremath{\widehat{G}}$, and the definition of
sinks of $\ensuremath{\overrightarrow{\widehat{G}}}$, $g(v)$ is a sink of $\ensuremath{\overrightarrow{\widehat{G}}}$, thus $g(v)$ is a median
of $\ensuremath{\widehat{G}}$ and $\ensuremath{\mathcal{G}}\xspace$. Since $B(Q_v) = \Med_w(\ensuremath{\mathcal{G}}\xspace) \cap Q_v$ is gated
and non-empty, the gate of $v$ in $\Med_w(\ensuremath{\mathcal{G}}\xspace)$ belongs to $B(Q_v)$
and thus the gate of $v$ in $\Med_w(\ensuremath{\mathcal{G}}\xspace)$ is the gate of $v$ on
$B(Q_v)$. Conversely, let $x\in \ensuremath{\widehat{M}}\xspace$. Since $\epsilon_i(x) \notin \{0,1\}$ for any $E_i$
defining a dimension of $Q(x)$, there is an $E_i$-half-edge
with origin $f(x)$. Pick now any $E_j$-edge incident to $f(x)$ such
that $E_j$ does not define a dimension of $Q(x)$. Without loss of
generality, assume that $f(x)\in \ensuremath{\mathcal{H}}\xspace_j'$. Then $x \in \ensuremath{\mathcal{H}}\xspace_j'$,
yielding $w(\ensuremath{\mathcal{H}}\xspace_j') \geq \frac{1}{2}w(P)$. By
Proposition~\ref{majority_bis}, $\varrho_j'=1$ and thus $f(x)$ is
not the origin of an $E_j$-edge or $E_j$-half-edge. Consequently,
$Q_{f(x)} = Q(x)$ by Proposition~\ref{one_sink} and by the
definition of $f(x)$ and $g(f(x))$, we have $x= g(f(x))$.
Let $v'v''$ be an $E_i$-edge between two sinks of $\ensuremath{\overrightarrow{G}}$ with
$v' \in \ensuremath{\mathcal{H}}\xspace_i'$ and $v'' \in \ensuremath{\mathcal{H}}\xspace''_i$. Let $x' = g(v')$ and
$x'' = g(v'')$ and assume that $x' \neq x''$. Let $u', u''$ be the
points of $v'v''$ such that $\epsilon_i(u') = \varrho_i'$ and
$\epsilon_i(u'') = \varrho_i''$. Note that $u'$ and $u''$ are
adjacent vertices of $\ensuremath{\widehat{G}}$ and that $u' \in I_{\ensuremath{\widehat{G}}}(v',x')$ and
$u'' \in I_{\ensuremath{\widehat{G}}}(v'',x'')$. In $\ensuremath{\widehat{G}}$, $x''$ is the gate of $u''$
(and $x'$ is the gate of $u'$) in $\ensuremath{\widehat{M}}\xspace$. Since
$d_{\ensuremath{\widehat{G}}}(u',x') + d_{\ensuremath{\widehat{G}}}(x',x'') = d_{\ensuremath{\widehat{G}}}(u',x'') \leq
d_{\ensuremath{\widehat{G}}}(u'',x'') +1$ and
$d_{\ensuremath{\widehat{G}}}(u'',x'') + d_{\ensuremath{\widehat{G}}}(x',x'') = d_{\ensuremath{\widehat{G}}}(u'',x') \leq
d_{\ensuremath{\widehat{G}}}(u',x') +1$, we obtain that $d_{\ensuremath{\widehat{G}}}(x',x'') \leq 1$.
Any edge $x'x''$ of $\ensuremath{\widehat{M}}\xspace$ is parallel to a $\Theta$-class $E_i$ of
$G$. For any $\Theta$-class $E_j$ of $Q(x')$ (resp.\ $Q(x'')$) with
$j \neq i$, $E_j$ is a $\Theta$-class of $Q(x'')$ (resp.\ $Q(x')$)
and $\epsilon_j(x') = \epsilon_j(x'')$. By their definition, $f(x')$
and $f(x'')$ can be separated only by $E_i$, i.e.,
$d_G(f(x'),f(x'')) \leq 1$. Since $f$ is an injection from $V(\ensuremath{\widehat{M}}\xspace)$
to $S(\ensuremath{\overrightarrow{G}})$, necessarily $f(x')$ and $f(x'')$ are adjacent.
\end{proof}
The algorithm computes the set $S(\ensuremath{\overrightarrow{G}})$ of all sinks of $\ensuremath{\overrightarrow{G}}$ and for
each sink $v\in S(\ensuremath{\overrightarrow{G}})$, it computes the gate $g(v)$ of $v$ in
$\ensuremath{\widehat{M}}\xspace$ and the local coordinates of $g(v)$ in $\ensuremath{\mathcal{G}}\xspace$. The algorithm
returns $\left\{g(v): v\in S(\ensuremath{\overrightarrow{G}})\right\}$ as
$V(\ensuremath{\widehat{M}}\xspace)$ and $\left\{g(u)g(v) : uv \in E \text{ and } u,v \in
S(\ensuremath{\overrightarrow{G}})\right\}$ as
$E(\ensuremath{\widehat{M}}\xspace)$. Proposition~\ref{prop-sources} implies that $V(\ensuremath{\widehat{M}}\xspace)$ and
$E(\ensuremath{\widehat{M}}\xspace)$ are correctly computed and that $\ensuremath{\widehat{M}}\xspace$ contains at most $n$
vertices and $m$ edges. Moreover each vertex $x$ of $\ensuremath{\widehat{M}}\xspace$ is the gate
$g(f(x))$ of the vertex $f(x)$ of $Q(x)$ that has dimension at most
$\deg(f(x))$. Hence the size of the description of the vertices of
$\ensuremath{\widehat{M}}\xspace$ is at most $O(m)$. This finishes the proof of
Theorem~\ref{mediancomplexx}.
\subsection{Wiener index in $\ensuremath{\mathcal{G}}\xspace$}
We describe a linear time algorithm to compute the Wiener index of a
set of terminals in the $\ell_1$-cube complex $\ensuremath{\mathcal{G}}\xspace$ of a median graph
$G$. By analogy with graphs, the Wiener index in $\ensuremath{\mathcal{G}}\xspace$ is the sum of
the weighted distances between all pairs of terminals.
\begin{proposition}
Let $G$ be a median graph with $m$ edges and let $P$ be a finite set
of terminals of $\ensuremath{\mathcal{G}}\xspace$ described by an input of size $\delta$. The
Wiener index of $P$ in $\ensuremath{\mathcal{G}}\xspace$ can be computed in $O(m+\delta)$ time.
\end{proposition}
\begin{proof}
The proof is similar to the proof of Proposition~\ref{t-wiener}.
Let $0<\epsilon_1<\cdots<\epsilon_k<1$ denote the $E_i$-coordinates
of the points in $P$ and let $\epsilon_0=0$ and $\epsilon_{k+1}=1$.
Just like in the proof of Proposition~\ref{majority_bis}, we have
the following chains of inclusions between the halfspaces defined by
the hyperplanes
$\ensuremath{\mathfrak{h}}\xspace_i(\epsilon_0),\ensuremath{\mathfrak{h}}\xspace_i(\epsilon_1),\ldots,\ensuremath{\mathfrak{h}}\xspace_i(\epsilon_k),\ensuremath{\mathfrak{h}}\xspace_i
(\epsilon_{k+1})$:
\[ \ensuremath{\mathcal{H}}\xspace''_i=\ensuremath{\mathcal{H}}\xspace_i''(\epsilon_0)\subset
\ensuremath{\mathcal{H}}\xspace''_i(\epsilon_1)\subset\cdots\subset \ensuremath{\mathcal{H}}\xspace''_i(\epsilon_k) \]
and
\[\ensuremath{\mathcal{H}}\xspace'_i=\ensuremath{\mathcal{H}}\xspace'_i(\epsilon_{k+1})\subset
\ensuremath{\mathcal{H}}\xspace'_i(\epsilon_k)\subset\cdots\subset \ensuremath{\mathcal{H}}\xspace'_i(\epsilon_1).\]
Then, similarly to Lemma~\ref{wiener_folklore}, we get the following
result:
\begin{lemma}
$W_w(\ensuremath{\mathcal{G}}\xspace)=\sum_{i=1}^{q} \sum_{j=0}^k w(\ensuremath{\mathcal{H}}\xspace''_i(\epsilon_j))\cdot
w(\ensuremath{\mathcal{H}}\xspace'_i(\epsilon_{j+1}))\cdot(\epsilon_{j+1}-\epsilon_j)$.
\end{lemma}
Once the $O(\delta)$ hyperplanes are ordered, we can compute the
weights of the halfspaces in $O(m+\delta)$ time and compute the
Wiener index of $P$ in $\ensuremath{\mathcal{G}}\xspace$ in $O(m+\delta)$ time.
\end{proof}
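The telescoping formula of the lemma can be checked on a toy instance restricted to a single $\Theta$-class, where the cube complex degenerates to the unit segment and each terminal is a coordinate in $[0,1]$ with a weight. A minimal Python sketch (the function names and the sample terminals are ours, not from the paper):

```python
from itertools import combinations

def wiener_direct(points):
    # points: list of (coordinate in [0,1], weight) on one Theta-class segment
    return sum(w1 * w2 * abs(x1 - x2)
               for (x1, w1), (x2, w2) in combinations(points, 2))

def wiener_halfspaces(points):
    # Telescoping formula of the lemma restricted to one Theta-class:
    # sum_j w(H''(eps_j)) * w(H'(eps_{j+1})) * (eps_{j+1} - eps_j)
    eps = [0.0] + sorted({x for x, _ in points}) + [1.0]
    total = 0.0
    for j in range(len(eps) - 1):
        w_below = sum(w for x, w in points if x <= eps[j])      # w(H''(eps_j))
        w_above = sum(w for x, w in points if x >= eps[j + 1])  # w(H'(eps_{j+1}))
        total += w_below * w_above * (eps[j + 1] - eps[j])
    return total

pts = [(0.2, 1.0), (0.5, 2.0), (0.9, 1.0)]
assert abs(wiener_direct(pts) - wiener_halfspaces(pts)) < 1e-12
```

In the real algorithm the weights of the nested halfspaces are maintained incrementally across all $\Theta$-classes, which is what yields the $O(m+\delta)$ bound.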
\section{The median problem in event structures}\label{sec:compact}
In this section, we consider the median problem in which the median
graph is implicitly defined as the domain of configurations of an
event structure. We show that the problem can be solved efficiently in
the size of the input. However, if the input consists solely of the
event structure and the goal is to compute the median of all
configurations of the domain, then this algorithmic problem is
$\#$P-hard. To prove this we provide a direct (polynomial size)
correspondence between event structures and 2-SAT formulas and use
$\#$P-hardness of a similar median problem for 2-SAT established
in~\cite{Fe}.
\subsection{Definitions and bijections}
We start with the definition of event structures and 2-SAT formulas
and their bijections with median graphs.
\subsubsection{Event structures}
Event structures, introduced by Nielsen, Plotkin, and
Winskel~\cite{NiPlWi,Winskel}, are a widely recognized abstract model
of concurrent computation. An \emph{event structure} is a triple
${\E}=(E,\le, \#)$, where
\begin{itemize}
\item $E$ is a set of \emph{events},
\item $\le\subseteq E\times E$ is a partial order of \emph{causal
dependency},
\item $\#\subseteq E\times E$ is a binary, irreflexive, symmetric
relation of \emph{conflict},
\item $\downarrow \!e:=\{ e'\in E: e'\le e\}$ is finite for any
$e\in E$,
\item $e\# e'$ and $e'\le e''$ imply $e\# e''$.
\end{itemize}
Two events $e',e''$ are \emph{concurrent} (notation $e'\| e''$) if
they are order-incomparable and they are not in conflict.
A \emph{configuration} of an event structure ${\E}$ is any finite
subset $c\subset E$ of events which is \emph{conflict-free}
($e,e'\in c$ implies that $e,e'$ are not in conflict) and
\emph{downward-closed} ($e\in c$ and $e'\le e$ implies that
$e'\in c$). Notice that $\varnothing$ is always a configuration and
that $\downarrow \!e$ and $\downarrow \!e\setminus \{ e\}$ are
configurations for any $e\in E$. The \emph{domain} of $\E$ is the set
$\cD({\E})$ of all configurations of ${\E}$ ordered by inclusion;
$(c',c)$ is a (directed) edge of the Hasse diagram of the poset
$({\cD}({\E}),\subseteq)$ if and only if $c=c'\cup \{ e\}$ for an
event $e\in E\setminus c$. The domain $\cD(\E)$ can be endowed with
the \emph{Hamming distance} $d(c,c')=|c\Delta c'|$ between any
configurations $c$ and $c'$. By the following result, the Hamming
distance coincides with the graph distance in the Hasse diagram.
Barth\'elemy and Constantin~\cite{BaCo}
established the following bijection between event structures and
pointed median graphs:
\begin{theorem}[\!\!\cite{BaCo}]\label{median_domain}
The (undirected) Hasse diagram of the domain $(\cD(\E),\subseteq)$
of an event structure $\E=(E,\le, \#)$ is
median. Conversely, for any median graph $G$ and any basepoint $v$
of $G$, the pointed median graph $G_v$ is the Hasse diagram of the
domain of an event structure $\E_v$.
\end{theorem}
We briefly recall the construction of the event structure $\E_v$.
Consider a median graph $G$ and an arbitrary basepoint $v$. The events
of the event structure $\E_{v}$ are the hyperplanes of the cube
complex $\ensuremath{\mathcal{G}}\xspace$ (or the $\Theta$-classes of $G$). Two hyperplanes $H$ and
$H'$ define concurrent events if and only if they cross (i.e., there
exists a square with two opposite edges in one $\Theta$-class and the
other two opposite edges in the second $\Theta$-class). The hyperplanes $H$
and $H'$ are in relation $H\leq H'$ if and only if $H=H'$ or $H$
separates $H'$ from $v$. Finally, the events defined by $H$ and $H'$
are in conflict if and only if $H$ and $H'$ do not cross and neither
separates the other from $v$.
\begin{example}\label{example-event-structure1}
The pointed median graph $G$ described in Fig.~\ref{fig:ExStrEvt} is
the domain of the event structure $\E=(E,\leq, \# )$. The seven
events $e_1,\ldots, e_7$ of $E$ correspond to the seven
$\Theta$-classes of $G$. The causal dependency is defined by
$e_1 \leq e_3, e_5, e_6, e_7$; $e_2 \leq e_4, e_5, e_6, e_7$;
$e_3, e_4, e_5 \leq e_6, e_7$. The events $e_6$ and $e_7$ are in
conflict and all remaining pairs of events are concurrent.
\end{example}
\begin{figure}[h]
\centering
{\includegraphics[scale=0.37]{Images/ExempleStrEvt.pdf}
\caption{The domain of the event structure from
Example~\ref{example-event-structure1}.}\label{fig:ExStrEvt}}
\end{figure}
Consequently, event structures encode median graphs and this
representation is much more compact than the standard one using
vertices and edges. For example, the hypercube of dimension $d$ is
the domain of the event structure with $d$ events that are pairwise
concurrent.
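The definitions above are straightforward to exercise on small instances. The following Python sketch enumerates the domain $\cD(\E)$ by brute force over all subsets; the function names and the toy event structure (with $e_1\le e_2$ and $e_2\,\#\,e_3$) are ours, and a real implementation would of course traverse the Hasse diagram instead:

```python
from itertools import chain, combinations

def domain(events, leq, conflict):
    """Enumerate D(E): all conflict-free, downward-closed subsets of events."""
    def is_configuration(c):
        c = set(c)
        # downward-closed: e <= e' and e' in c imply e in c
        downward = all(a in c for (a, b) in leq if b in c)
        # conflict-free: no two events of c are in conflict
        conflict_free = all(not (a in c and b in c) for (a, b) in conflict)
        return downward and conflict_free

    subsets = chain.from_iterable(combinations(events, r)
                                  for r in range(len(events) + 1))
    return [frozenset(c) for c in subsets if is_configuration(c)]

# e1 <= e2 and e2 # e3; the domain has 5 configurations
configs = domain(['e1', 'e2', 'e3'], leq=[('e1', 'e2')], conflict=[('e2', 'e3')])
```

With empty relations the domain is the full hypercube: `domain(['a','b','c'], [], [])` has $2^3=8$ configurations, matching the remark about $d$ pairwise concurrent events.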
\subsubsection{The median problem in event structures}
Let ${\E}=(E,\le, \#)$ be a finite event structure. The input is a
set $C=\{ c_1,\ldots,c_k\}$ of configurations of $\E$ and their
weights $w_1,\ldots,w_k$, where each $c_i$ is given by the list of
events belonging to $c_i$. The goal of the \emph{median problem in the
event structure} $\E$ is to compute a configuration $c$ minimizing
the function $F_w(c)=\sum_{i=1}^k w_i d(c,c_i)$, where $d(c,c')$ is
the Hamming distance between $c$ and $c'$. Consider also a special
case of this median problem, in which $C$ is the set of \emph{all}
configurations of $\E$ and the input is the event structure $\E$,
i.e., the graphs $(E,\le)$ and $(E,\#)$. We call this problem the
\emph{compact median problem}.
\subsubsection{2-SAT formulas}
A \emph{2-SAT formula} on variables $x_1,\ldots,x_n$ is a formula
$\varphi$ in conjunctive normal form with two literals per clause,
i.e., a set of clauses of the form $(u \vee v)$, where each of the two
literals $u,v$ is either a positive literal $x_i$ or a negative
literal $\neg x_i$. A \emph{solution} for $\varphi$ is an assignment
$S$ of variables to 0 or 1 that satisfies all clauses. The
\emph{solution set} ${\mathcal S}(\varphi)$ of $\varphi$ is the set of
all solutions of $\varphi$. We consider each solution set as a subset
of vertices of the $n$-dimensional hypercube $Q_n$. A subset $Y$ of
vertices of the hypercube $Q_n$ (viewed as a median graph) is called
\emph{median-stable} if the median of each triplet $x,y,z\in Y$ also
belongs to $Y$.
\begin{proposition}[\!\!\cite{Scha,MuSch}]\label{median-2SAT}
Median-stable sets are exactly the solution sets of 2-SAT formulas.
\end{proposition}
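Median-stability of a solution set can be checked by brute force on a small formula: the median of three hypercube vertices is their coordinatewise majority. A hedged Python sketch with a hypothetical 3-variable formula of our own choosing (the literal encoding is ours as well):

```python
from itertools import product, combinations

# Clauses as pairs of literals; a literal (i, pol) means x_i if pol else not x_i.
# Hypothetical formula: (x0 v ~x1) & (~x1 v ~x2)
clauses = [((0, True), (1, False)), ((1, False), (2, False))]

def satisfies(assignment, clauses):
    return all(assignment[i] == pol or assignment[j] == qol
               for ((i, pol), (j, qol)) in clauses)

solutions = [a for a in product((False, True), repeat=3)
             if satisfies(a, clauses)]

def median(a, b, c):
    # Coordinatewise majority: the median operation of the hypercube Q_n.
    return tuple(sum((x, y, z)) >= 2 for x, y, z in zip(a, b, c))

# Median-stability: the median of any triplet of solutions is a solution.
assert all(median(a, b, c) in solutions
           for a, b, c in combinations(solutions, 3))
```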
A variable of a 2-SAT formula $\varphi$ is \emph{trivial} if it has
the same value in each solution. Two nontrivial variables $x_i$ and
$x_j$ in $\varphi$ are \emph{equivalent} if either $x_i=x_j$ in all
solutions or $x_i=\neg x_j$ in all solutions. Here is a
characterization of 2-SAT formulas corresponding to median graphs:
\begin{proposition}[\!\!{\cite[Corollary 3.34]{Fe}}]\label{2sat=median-graph}
The solution set ${\mathcal S}(\varphi)$ of a 2-SAT formula
$\varphi$ induces a median graph if and only if $\varphi$ has no
equivalent variables.
\end{proposition}
\subsection{A direct correspondence between event structures and 2-SAT formulas}
We provide a canonical correspondence between event structures and
2-SAT formulas (which may be useful also for other purposes).
By Theorem~\ref{median_domain} and
Propositions~\ref{median-2SAT} and~\ref{2sat=median-graph}, there is a
bijection between the domains of event structures and pointed median
graphs and a bijection between (unpointed) median graphs and 2-SAT
formulas not containing equivalent variables. Since $\varnothing$ and
$\downarrow \!e, e\in E$ are configurations, their characteristic
vectors must be solutions of the associated 2-SAT formula. This can
be ensured by requiring that the 2-SAT formula does not contain
clauses of the form $(x_i\vee x_j)$.
Let $\E=(E, \leq, \#)$ be an event structure with
$E=\{ e_1,\ldots,e_n\}$. We associate to $\E$ a 2-SAT formula
$\varphi_{\E}$ on $n$ variables $x_1,\ldots,x_n$. For each pair of
events such that $e_i \leq e_j$ we define the clause
$(x_i \vee \neg x_j)$ and for each pair of events such that
$e_i \# e_j$ we assign the clause $(\neg x_i \vee \neg x_j)$. Next for
each subset $c$ of $E$ we denote by $S_c$ its characteristic vector.
\begin{proposition}\label{event-struc->2SAT}
${\mathcal S}(\varphi_{\E})$ coincides with $\cD(\E)$.
\end{proposition}
\begin{proof}
Let $c\subseteq E$ and $c\notin \cD(\E)$, i.e., $c$ is either not
downward-closed or not conflict-free. In the first case, there exist
two events $e_i$ and $e_j$ such that $e_i \leq e_j$ and
$e_j\in c, e_i\notin c$. This implies that $\varphi_{\E}$ contains
the clause $(\neg x_j \vee x_i)$. Since $S_c(x_j)=1$ and
$S_c(x_i)=0$, $S_c$ is not a solution of $\varphi_{\E}$. In the
second case, there exist two events $e_i,e_j\in c$ such that
$e_i \# e_j$. This implies that $\varphi_{\E}$ contains the clause
$(\neg x_i \vee \neg x_j)$. Since $S_c(x_i)=S_c(x_j)=1$, $S_c$ is
not a solution of $\varphi_{\E}$.
Conversely, suppose that $S_c$ is not a solution of
$\varphi_{\E}$. Recall that $\varphi_{\E}$ contains only clauses of
the form $(\neg x_i \vee x_j)$ or $(\neg x_i \vee \neg x_j)$. If a
clause $(\neg x_i \vee x_j)$ of $\varphi_{\E}$ is false, then
$S_c(x_i)=1$ and $S_c(x_j)=0$. Thus, the events $e_i$ and $e_j$ are
such that $e_j \leq e_i$ and $e_i\in c$ and $e_j\notin c$.
Consequently, $c$ is not a configuration of $\E$. Similarly, if a
clause $(\neg x_i \vee \neg x_j)$ in $\varphi_{\E}$ is false, then
$S_c(x_i)=S_c(x_j)=1$. Thus $c$ contains two events $e_i$ and $e_j$
such that $e_i \# e_j$, whence $c$ is not a configuration. This
shows that ${\mathcal S}(\varphi_{\E})$ and $\cD(\E)$ coincide.
\end{proof}
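The construction of $\varphi_{\E}$ and Proposition~\ref{event-struc->2SAT} can be verified exhaustively on a toy event structure. A minimal Python sketch (the index-based encoding of events and literals, and the sample relations $e_1\le e_2$, $e_2\,\#\,e_3$, are ours):

```python
from itertools import product

n = 3
leq = [(0, 1)]        # e_i <= e_j as index pairs (reflexive pairs omitted)
conflict = [(1, 2)]   # e_i # e_j

def phi(assignment):
    # phi_E: clause (x_i v ~x_j) for e_i <= e_j,
    #        clause (~x_i v ~x_j) for e_i # e_j.
    ok_leq = all(assignment[i] or not assignment[j] for i, j in leq)
    ok_conf = all(not assignment[i] or not assignment[j] for i, j in conflict)
    return ok_leq and ok_conf

def is_configuration(c):
    downward = all(i in c for i, j in leq if j in c)
    conflict_free = all(not (i in c and j in c) for i, j in conflict)
    return downward and conflict_free

# Characteristic vectors of configurations = solutions of phi_E.
for a in product((False, True), repeat=n):
    c = {i for i in range(n) if a[i]}
    assert phi(a) == is_configuration(c)
```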
Let $\varphi$ be a 2-SAT formula on variables $x_1,\ldots, x_n$ not
containing trivial and equivalent variables and clauses of the form
$(x_i\vee x_j)$. We associate to $\varphi$ an event structure
$\E_{\varphi}=(E, \leq, \#)$ consisting of a set
$E=\{ e_1,\ldots,e_n\}$ and the binary relations $\leq$ and $\#$
defined as follows. First we define two binary relations $\#_0$ and
$\leq_0$: for $e_i,e_j$ we set $e_i \#_0 e_j$ if and only if $\varphi$
contains the clause $(\neg x_i \vee \neg x_j)$ and we set
$e_i \leq_0 e_j$ if and only if $\varphi$ contains the clause
$(x_i \vee \neg x_j)$. Let $\leq$ be the transitive and reflexive
closure of $\leq_0$. Let also $\#$ be the relation obtained by setting
$e_j \# e_{\ell}$ for each quadruplet $e_i, e_j, e_k, e_{\ell}\in E$
such that $e_i \leq e_j,e_k \leq e_{\ell}$, and $e_i \#_0 e_k$. Note
that $\#_0\subseteq \#$ and that $\#$ satisfies the last axiom of
event structures.
\begin{proposition}\label{2SAT->event-struc}
$\E_{\varphi}=(E, \leq, \#)$ is an event structure and
$\cD(\E_{\varphi})$ coincides with ${\mathcal S}(\varphi)$.
\end{proposition}
\begin{proof}
In view of previous conclusions, to show that $\E_{\varphi}$ is an
event structure it remains to prove that $\leq$ is
antisymmetric. Suppose by way of contradiction that there exist
$e_i,e_j\in E$ such that $e_i \leq e_j$ and $e_j \leq e_i$. By
definition of $\leq$, there exist $e_1, \ldots, e_p\in E$ and
$e_{p+1}, e_{p+2}, \ldots, e_q\in E$ such that
$e_i \leq_0 e_1 \leq_0 e_2 \leq_0 \cdots \leq_0 e_p \leq_0 e_j$ and
$e_j \leq_0 e_{p+1} \leq_0 e_{p+2} \leq_0 \cdots \leq_0 e_q \leq_0
e_i$. Consequently, the formula $\varphi$ contains the clauses
$(\neg x_i \vee x_1), (\neg x_1 \vee x_2), \ldots, (\neg x_p \vee
x_j),(\neg x_j \vee x_{p+1}), \ldots, (\neg x_q \vee x_i)$. Thus,
the variables $x_1, \ldots, x_q, x_i, x_j$ are equivalent, which is
impossible because $\varphi$ is a 2-SAT formula without trivial and
equivalent variables.
To prove the second assertion, let $c=\{ e_1, \ldots, e_p\}$ be a
subset of $E$ which is not a configuration of $\E_{\varphi}$, i.e.,
either $c$ is not downward-closed or is not conflict-free. First,
suppose that $c$ contains an event $e_j$ such that there is an event
$e_i\notin c$ with $e_i \leq e_j$. Since $\leq$ is the transitive
and reflexive closure of $\leq_0$, there exists a pair of events
$e_k,e_{\ell}$ such that $e_{\ell}$ belongs to $c$, $e_k$ does not
belong to $c$ and $e_k \leq_0 e_{\ell}$. Thus $\varphi$ contains
the clause $(\neg x_{\ell} \vee x_k)$. Since $S_c(x_{\ell})=1$ and
$S_c(x_k)=0$, the assignment $S_c$ is not a solution of
$\varphi$. Therefore, we can suppose now that $c$ is
downward-closed. Second, suppose that $c$ contains two events $e_i$
and $e_j$ such that $e_i \# e_j$. By definition of $\#$ and $\#_0$,
there exists a pair of events $e_k \leq e_i$ and $e_{\ell} \leq e_j$
such that $e_k \#_0 e_{\ell}$. Since $c$ is downward-closed and
contains both $e_i$ and $e_j$, necessarily both $e_k$ and $e_{\ell}$
belong to $c$. Thus, $S_c(x_k)=S_c(x_{\ell})=1$. But since
$e_k \#_0 e_{\ell}$, $\varphi$ contains the clause
$(\neg x_k \vee \neg x_{\ell})$, which is not satisfied by
$S_c$. Consequently, if $c$ is not a configuration of $\E$, then
$S_c$ is not a solution of $\varphi$.
Conversely, suppose that $S$ is an assignment which is not a solution
of $\varphi$. Let $c\subseteq E$ such that $S_c=S$. Since $\varphi$
does not contain clauses of the form $(x_i\vee x_j)$, this implies
that $\varphi$ either contains a clause $(\neg x_i \vee x_j)$ such
that $S_c(x_i)=1$ and $S_c(x_j)=0$, or $\varphi$ contains a clause
$(\neg x_i \vee \neg x_j)$ such that $S_c(x_i)=S_c(x_j)=1$. If
$\varphi$ contains $(\neg x_i \vee x_j)$ with $S_c(x_i)=1$ and
$S_c(x_j)=0$, then $e_j \leq e_i$ in $\E_{\varphi}$. Consequently,
the corresponding subset $c$ of events is not a configuration of
$\E_{\varphi}$ because $c$ contains $e_i$ but not $e_j$. If
$\varphi$ contains $(\neg x_i \vee \neg x_j)$ with
$S_c(x_i)=S_c(x_j)=1$, then $e_i \# e_j$ and again $c$ is not a
configuration of $\E_{\varphi}$ because $c$ contains two conflicting
events $e_i$ and $e_j$.
\end{proof}
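The reverse construction can likewise be sketched: take $\leq$ to be the reflexive-transitive closure of $\leq_0$ and lift $\#_0$ to $\#$ along $\leq$, as in the definition above. A hedged Python illustration on a hypothetical 3-variable formula with clauses $(x_0\vee\neg x_1)$ and $(\neg x_0\vee\neg x_2)$ (all names are ours):

```python
from itertools import product

n = 3
leq0 = {(0, 1)}       # clause (x0 v ~x1) gives e0 <=_0 e1
conf0 = {(0, 2)}      # clause (~x0 v ~x2) gives e0 #_0 e2

# <= is the reflexive-transitive closure of <=_0 (naive fixpoint iteration).
leq = {(i, i) for i in range(n)} | set(leq0)
changed = True
while changed:
    changed = False
    for (a, b), (c, d) in product(list(leq), repeat=2):
        if b == c and (a, d) not in leq:
            leq.add((a, d))
            changed = True

# # is obtained from #_0 by the quadruplet rule:
# e_j # e_l whenever e_i <= e_j, e_k <= e_l and e_i #_0 e_k (symmetrized).
conf = set()
for (i, k) in conf0 | {(k, i) for i, k in conf0}:
    for (i2, j) in leq:
        for (k2, l) in leq:
            if i2 == i and k2 == k:
                conf.add((j, l))
                conf.add((l, j))
```

Here $e_0\le e_1$ and $e_0\,\#_0\,e_2$ force $e_1\,\#\,e_2$, which is exactly the closure axiom of event structures.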
\subsection{The median problem in event structures}\label{median_event}
In this subsection we show that the median problem in event structures
can be solved in linear time in the size of the input. We also show
that a diametral pair of median configurations can be computed in
linear time. On the other hand, we show that the compact median
problem is hard.
\subsubsection{An algorithm for the median problem in event structures}
Let $\E=(E, \leq, \#)$ be an event structure with
$E=\{ e_1,\ldots,e_n\}$. Let $C=\{ c_1,\ldots,c_k\}$ be a set of
configurations of $\E$ and $w_1,\ldots,w_k$ be their weights. Let
$c^*$ be a subset of $E$ defined by the majority rule in the hypercube
$Q_n=\{ 0,1\}^E$. Namely, $c^*$ consists of all events $e_i$ such
that the total weight of the configurations of $C$ containing $e_i$ is
strictly larger than the total weight of the configurations not
containing $e_i$:
$c^*=\{ e_i\in E: \sum_{j: e_i\in c_j} w_j>\sum_{j: e_i\notin c_j}
w_j\}$. We assert that $c^*$ is a configuration of $\E$ and that $c^*$
minimizes the median function $F_w(c)=\sum_{i=1}^k w_i d(c,c_i)$.
Since $\cD(\E)$ is an isometric subgraph of $Q_n$, for each
$c\in \cD(\E)$, the values of the median function $F_w(c)$ in
$\cD(\E)$ and $Q_n$ are the same.
Since $Q_n$ is a median graph, by the majority rule
(Proposition~\ref{majority}), the median set of $Q_n$ is the
intersection of all majoritary halfspaces of $Q_n$. Each pair
$H'_i,H''_i$ of complementary halfspaces of $Q_n$ corresponds to an
event $e_i$ of $\E$: one halfspace $H'_i$ consists of all
$c\subseteq E$ containing $e_i$ and its complement $H''_i$ consists of
all $c\subseteq E$ not containing $e_i$. If $w(H'_i)>w(H''_i)$, then
$H'_i$ is majoritary, which means that the weight of all
configurations of $C$ containing $e_i$ is strictly larger than one
half of the total weight. By definition of $c^*$, $e_i$ belongs to
$c^*$, i.e., $c^*$ (viewed as a characteristic vector) is a vertex of
$H'_i$. Similarly, if $w(H''_i)>w(H'_i)$, then $H''_i$ is majoritary,
which means that the weight of all configurations of $C$ not
containing $e_i$ is strictly larger than one half. By definition of
$c^*$, $e_i$ does not belong to $c^*$, i.e., $c^*$ is a vertex of
$H''_i$. Consequently, $c^*$ is a vertex of the hypercube $Q_n$
minimizing the function $F_w(c)=\sum_{i=1}^k w_i d(c,c_i)$ over all
$c\subseteq E$. Since the minimum of this function taken over
$\cD(\E)$ cannot be smaller than this minimum, to finish the proof it
remains to show that $c^*$ is a configuration of $\E$. Suppose that
$e_i\in c^*$ and $e_j\leq e_i$. Since each configuration of $\E$ is
downward-closed, all configurations of $C$ containing $e_i$ also
contain $e_j$. Thus the weight of all configurations of $C$ containing
$e_j$ is strictly larger than the weight of all configurations not
containing $e_j$, whence $e_j\in c^*$. Now suppose by way of
contradiction that $e_i,e_j\in c^*$ and $e_i\# e_j$. Since
$e_i,e_j\in c^*$, the weight of all configurations of $C$ containing
$e_i$ is strictly larger than one half of the total weight and the
weight of all configurations of $C$ containing $e_j$ is also strictly
larger than one half of the total weight. Therefore $C$ must contain
a configuration containing both $e_i$ and $e_j$, which is
impossible because $e_i\# e_j$. Consequently, we obtain the following
result:
\begin{proposition}
A median configuration $c^*$ of any set $C=\{ c_1,\ldots,c_k\}$ of
configurations of an event structure $\E$ can be computed in linear
time in the size $O(\sum_{i=1}^k |c_i|)$ of the input.
\end{proposition}
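The majority rule of the proof takes only a few lines. In the sketch below the toy configurations and weights are ours; the brute-force minimization over all subsets is only a sanity check (it is exponential, unlike the linear-time algorithm of the proposition) and is legitimate because $\cD(\E)$ embeds isometrically in the hypercube:

```python
from collections import Counter
from itertools import chain, combinations

# Hypothetical weighted configurations (events as strings).
configs = [({'a'}, 2.0), ({'a', 'b'}, 1.0), ({'c'}, 1.5), (set(), 0.5)]
total = sum(w for _, w in configs)

# Majority rule: keep e_i iff the weight of configurations containing e_i
# strictly exceeds the weight of those not containing it.
weight_with = Counter()
for c, w in configs:
    for e in c:
        weight_with[e] += w
c_star = {e for e, w in weight_with.items() if w > total - w}

# Median function F_w(c) = sum_i w_i * d(c, c_i) with Hamming distance.
def F(c):
    return sum(w * len(c ^ ci) for ci, w in configs)

# Sanity check: c_star minimizes F_w over the whole hypercube.
events = set().union(*(c for c, _ in configs))
subsets = chain.from_iterable(combinations(events, r)
                              for r in range(len(events) + 1))
assert all(F(c_star) <= F(set(s)) for s in subsets)
```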
\begin{remark}
We mentioned in the Introduction that the space of trees with a
fixed set of $n$ leaves is a CAT(0) cube complex~\cite{BiHoVo}. The
vertices of this complex are the so-called $n$-trees and it has been
known since 1981 that the set of all $n$-trees is a median
semilattice~\cite{MaMcM}, thus a median graph. Let
$E=\{ e_1,\ldots, e_n\}$. An $n$-\emph{tree} $T$ is a collection of
subsets of $E$ satisfying the following conditions: (1)
$E\in T, \varnothing \notin T$, (2) $\{ e_i\}\in T$ for any
$e_i\in E$, (3) $A\cap B\in \{ \varnothing, A,B\}$ for any
$A,B\in T$. Any set $A\in T$ is called a \emph{cluster}. This name
is justified by the fact that $n$-trees are exactly the collections
of clusters occurring in hierarchical clustering: any two clusters
either are disjoint or one is contained in another one. Barth\'elemy
and McMorris~\cite{BaMcM} considered the median problem for
$n$-trees, where the input consists of the $n$-trees
$T_1,\ldots, T_k$ on $E$ and the goal is to compute an $n$-tree $T$
minimizing $\sum_{i=1}^k d(T,T_i)$, where $d(T,T')=|T\Delta T'|$ is
the number of clusters in $T$ but not in $T'$ plus the number of
clusters in $T'$ but not in $T$. Since the space of all $n$-trees is
a median semilattice, the authors of~\cite{BaMcM} deduced that the
majority $n$-tree $T^*$ is a median $n$-tree; the \emph{majority
$n$-tree} $T^*$ consists of all clusters included in strictly more
than one half of the $n$-trees $T_1,\ldots, T_k$. This can be viewed
as another compact formulation of the median problem in an
implicitly defined median graph, where the input is given by the
$n$-trees $T_1,\ldots, T_k$.
\end{remark}
\begin{remark}\label{median-2sat}
Due to the correspondence between event structures and 2-SAT
formulas established in Propositions~\ref{event-struc->2SAT}
and~\ref{2SAT->event-struc}, we can define a similar median problem
for a set of solutions $S_{c_1},\ldots,S_{c_k}$ of the 2-SAT formula
$\varphi_{\E}$ and search for a solution
$S_c\in {\mathcal S}(\varphi_{\E})$ minimizing
$\sum_{i=1}^k w_id(S_c,S_{c_i})$. From our bijections we deduce that
$S_{c^*}$ belongs to ${\mathcal S}(\varphi_{\E})$ and therefore is
an optimal solution.
\end{remark}
\subsubsection{Computing a diametral pair of median configurations}
We know from~\cite{BaBa} that the median set of a median graph
coincides with the interval between two diametral pairs of its
vertices. In Proposition~\ref{interval} we gave a different proof of
this result and in Corollary~\ref{cor-interval} we showed how to find
such a diametral pair in linear time. Now we will show how to compute
a diametral pair $\{ c'_*,c''_*\}$ of median configurations in linear
time in the size of the input (the description of the event structure
and of the set of configurations).
\begin{remark}
Similarly to the median problem in the cube complexes associated to
median graphs, we cannot explicitly return all median
configurations, because one can have an exponential number of such
optimal solutions.
\end{remark}
For the moment, we will suppose that $\{ c',c''\}$ is a diametral pair
of the median set of configurations (which exists by the result
of~\cite{BaBa}). Recall also that in the previous subsection we
defined a canonical median configuration $c^*$. Similarly to the
classification of $\Theta$-classes of a median graph, we can classify
the events of $\E$ in three classes: an event $e\in E$ is called (i)
\emph{majoritary} if $e$ belongs to $c^*$, (ii) \emph{minoritary} if
the halfspace defined by $e$ and containing $c^*$ is a minoritary
halfspace, and (iii) \emph{egalitarian} if the two halfspaces defined
by $e$ have the same weight. We denote by $E_=$ the set of all
egalitarian events. We start with several simple assertions.
\begin{lemma}\label{egalitarian_distance}
The distance $d(c',c'')$ between $c'$ and $c''$ in the median graph
$\cD(\E)$ equals the number of egalitarian events.
\end{lemma}
\begin{proof}
By Proposition~\ref{majority}, no majoritary or minoritary event of
$\E$ separates two configurations of the median set. Therefore, any
event corresponding to a $\Theta$-class separating $c'$ and $c''$ is
egalitarian; this establishes that $d(c',c'')$ is not larger than
$|E_=|$. Conversely, we assert that any event $e\in E_=$ separates
$c'$ and $c''$. By Proposition~\ref{majority}, the two halfspaces
defined by $e$ both intersect the median set. If $c',c''$ are not
separated by these halfspaces, then they both belong to the same
halfspace and some median configuration $c$ belongs to the
complementary halfspace. Since $c\in I(c',c'')$, we obtain a
contradiction with the convexity of halfspaces. This proves that
any egalitarian event separates $c'$ and $c''$.
\end{proof}
\begin{lemma}\label{median_c*}
The median configuration $c^*$ is the gate of the empty
configuration $c_{\varnothing}=\varnothing$ in the interval
$I(c',c'')$. In particular, $c^*$ is the median of the triplet
$c_{\varnothing},c',$ and $c''$.
\end{lemma}
\begin{proof}
Suppose by way of contradiction that $c\ne c^*$ is the gate of
$c_{\varnothing}$ in $I(c',c'')$. This implies that
$c\in I(c_{\varnothing},c^*)$, i.e., $c^*$ is the union of $c$ and
the events separating $c$ and $c^*$. Since $c,c^*\in I(c',c'')$, by
Lemma~\ref{egalitarian_distance} any event separating $c$ and $c^*$
is an egalitarian event. This contradicts the definition of $c^*$:
by its definition, $c^*$ contains only majoritary events.
\end{proof}
By Lemma~\ref{median_c*},
$c^*\in I(c_{\varnothing},c')\cap I(c_{\varnothing},c'')$ and we
conclude that $c'=c^* \cup A$ and $c''=c^*\cup B$, where $A$ and $B$
are sets of egalitarian events. By Lemma~\ref{egalitarian_distance},
$d(c',c'')=|E_=|$ and since it coincides with the Hamming distance
$|c'\Delta c''|=|A\Delta B|$, the sets $A$ and $B$ must constitute a
partition of $E_{=}$. Since $c'=c^*\cup A$ and $c''=c^*\cup B$ are
configurations, we conclude that the sets $c^*\cup A, c^*\cup B$ are
conflict-free and downward-closed. Therefore, the events of $c^*$ are
not in conflict with the events of $E_{=}=A\cup B$.
On the set $E_=$ we define the following binary relation $R_0$: for
$e_1,e_2\in E_=$ we set $e_1R_0 e_2$ if $e_1 \leq e_2$ or
$e_2 \leq e_1$. Let $R$ be the transitive closure of the relation
$R_0$. Observe that the equivalence classes of $R$ are the connected
components of the graph obtained by forgetting the orientation of the
Hasse diagram of $(E_{=},\leq)$. Now define the following conflict
graph $\Gamma$: the vertices of $\Gamma$ are the equivalence classes
of $R$ and two such classes $C'$ and $C''$ are linked by an edge in
$\Gamma$ if and only if there exists an event $e'\in C'$ and an event
$e''\in C''$ such that $e'\# e''$.
\begin{lemma}\label{Gamma-bipartite}
Any equivalence class $C$ of the relation $R$ is
conflict-free. Consequently, the conflict graph $\Gamma$ is
bipartite.
\end{lemma}
\begin{proof}
Let $A,B$ be a bipartition of $E_=$ such that
$c'=c^*\cup A, c''=c^*\cup B$ is a diametral pair of median
configurations (that $c',c''$ have such a representation follows
from the discussion after Lemma~\ref{median_c*}). Since the sets $A$
and $B$ are conflict-free, it suffices to prove that $C$ is
contained in $A$ or in $B$.
Suppose by way of contradiction that there exists $e \in A\cap C$
and $e' \in B\cap C$. By the definition of $R_0$, there exist events
$e=e_0,e_1,\ldots,e_p,e_{p+1}=e'\in E_=$ such that
$(e_0,e_1), (e_1,e_2),\ldots,(e_{p-1},e_p),(e_p,e_{p+1})\in
R_0$. Since $A,B$ is a partition of $E_{=}$, there exists
$(e_{j-1},e_j) \in R_0$ such that $e_{j-1} \in A \setminus B$ and
$e_j \in B \setminus A$. Since $(e_{j-1},e_j) \in R_0$, either
$e_{j-1}\leq e_j$ or $e_j \leq e_{j-1}$. Without loss of generality,
assume the first. Consequently, since $c''=c^* \cup B$ is
downward-closed and contains $e_j$, necessarily $e_{j-1}$ belongs to
$c''$. Since $e_{j-1}$ is egalitarian and all events in $c^*$ are
majoritary, necessarily $e_{j-1} \in B$, a contradiction.
Therefore, any equivalence class of $R$ is contained in $A$ or in
$B$. Since $A$ and $B$ are conflict-free, any edge of the conflict
graph $\Gamma$ must run between $A$ and $B$. Therefore, the
equivalence classes of $R$ included in $A$ and those included in $B$
define a bipartition of $\Gamma$ into two independent sets.
\end{proof}
\begin{lemma}\label{Gammma-bipartition}
For the partition $A_*,B_*$ of $E_=$ induced by any bipartition
$Q'_*,Q''_*$ of $\Gamma$ into two independent sets,
$c'_*=c^*\cup A_*$ and $c''_*=c^*\cup B_*$ is a diametral pair of
median configurations.
\end{lemma}
\begin{proof}
Let $A_*$ be the union of all equivalence classes of $R$ contained
in $Q'_*$ and let $B_*$ be the union of all equivalence classes of
$R$ contained in $Q''_*$. We assert that $c'_*=c^*\cup A_*$ and
$c''_*=c^*\cup B_*$ are configurations of $\E$. We prove this
assertion for $c'_*$. Since by Lemma~\ref{Gamma-bipartite} each
equivalence class of $R$ is conflict-free and since $Q'_*$ is an
independent set of $\Gamma$, the set $A_*$ is necessarily
conflict-free. As we noticed above, no event of $c^*$ and of $E_=$
are in conflict. Consequently, the set $c'_*=c^*\cup A_*$ is
conflict-free.
Now we show that $c'_*=c^*\cup A_*$ is downward-closed. Pick any
event $e\in c'_*$ and any event $e'\leq e$. If $e\in c^*$, then
$e'\in c^*$ because we proved that $c^*$ is a configuration. Now
suppose that $e\in A_*$. Suppose by way of contradiction that
$e'\notin c'_* = c^*\cup A_*$, i.e., either $e'$ is minoritary or
$e'$ is egalitarian but belongs to $B_*$.
Since any configuration containing $e$ also contains $e'$, the total
weight of the configurations containing $e'$ is at least the total
weight of the configurations containing $e$. Since $e$ is
egalitarian, either $e'$ is majoritary or $e'$ is egalitarian. In
the first case, $e' \in c^*$ by the definition of $c^*$. In the
second case, $e$ and $e'$ belong to the same equivalence class of
$R$ and thus they both belong to $A_*$ or they both belong to $B_*$.
Consequently, $c'_*=c^*\cup A_*$ and $c''_*=c^*\cup B_*$ are
configurations of $\E$. We assert that both $c'_*$ and $c''_*$ are
median configurations. Clearly, both $c'_*$ and $c''_*$ are
contained in all halfspaces $H'_e$ for all majoritary events
$e\in c^*$. On the other hand, $c'_*$ and $c''_*$ are contained in
the halfspaces $H''_e$ for all minoritary events $e$. Since in both
cases those halfspaces have weight strictly larger than one half of
the total weight, $c'_*$ and $c''_*$ belong to all majoritary
halfspaces, thus by Proposition~\ref{majority} they are median
configurations. Finally, since $d(c'_*,c''_*)=|A_*\cup B_*|=|E_=|$,
we conclude that $c'_*,c''_*$ is a diametral pair of the set of
median configurations.
\end{proof}
From previous results, we obtain the following linear time algorithm
for computing a diametral pair $c'_*,c''_*$ of median
configurations. First we compute the set $E_{\maj}$ of majoritary
events and the set $E_=$ of egalitarian events. This can be done in
total $O(\sum_{i=1}^k |c_i|)$ time by traversing the lists of events
describing the set of configurations $c_1,\ldots,c_k$. Next, compute
the binary relation $R_0$ and its reflexive and transitive closure
$R$. As noticed above, this can be done by computing the connected
components of the subgraph induced by $E_{=}$ of the Hasse diagram of
$\leq$ where we forget the orientation. This can be done in linear
time in the size of $(E,\leq)$. To compute the graph $\Gamma$, one
just needs to consider the conflict graph $(E,\#)$ and, for any
$e_1, e_2 \in E_{=}$ such that $e_1 \# e_2$, add an edge between
the equivalence classes of $e_1$ and $e_2$. This can be done in linear
time in the size of $(E,\#)$ and the size of the graph $\Gamma$ is
linear in the size of $\E$.
\begin{proposition}\label{median-interval-event-structure}
A diametral pair $c'_*=c^*\cup A_*,c''_*=c^*\cup B_*$ of median
configurations of any set $C=\{ c_1,\ldots,c_k\}$ of configurations
of an event structure $\E$ can be computed in
$O(|\E|+\sum_{i=1}^k |c_i|)$ time.
\end{proposition}
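The algorithm above can be sketched in code. The following is a minimal illustration with our own identifiers (the paper does not fix a data representation): the event structure is given flatly as lists of Hasse edges and conflict pairs, we take $c^*$ to be the set of majoritary events, and the downward-closure bookkeeping is elided.

```python
from collections import defaultdict

def diametral_median_pair(events, hasse, conflicts, configs, weights):
    """Sketch of the linear-time diametral-pair computation.

    events    -- the events of the structure
    hasse     -- pairs (e, f) with e covered by f in the causality order
    conflicts -- pairs (e, f) with e # f
    configs   -- the input configurations c_1, ..., c_k (sets of events)
    weights   -- their weights
    """
    total = sum(weights)
    w = defaultdict(int)                      # weight of configs containing e
    for conf, wt in zip(configs, weights):
        for e in conf:
            w[e] += wt
    majoritary = {e for e in events if 2 * w[e] > total}    # the set c^*
    egalitarian = {e for e in events if 2 * w[e] == total}  # the set E_=

    # Equivalence classes of R: connected components of the Hasse diagram
    # restricted to the egalitarian events (union-find with path halving).
    parent = {e: e for e in egalitarian}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for e, f in hasse:
        if e in egalitarian and f in egalitarian:
            parent[find(e)] = find(f)

    # Conflict graph Gamma on the classes; 2-colour it (it is bipartite by
    # the lemma).  Colour-0 classes form A_*, colour-1 classes form B_*.
    adj = defaultdict(set)
    for e, f in conflicts:
        if e in egalitarian and f in egalitarian and find(e) != find(f):
            adj[find(e)].add(find(f))
            adj[find(f)].add(find(e))
    colour = {}
    for root in {find(e) for e in egalitarian}:
        if root not in colour:
            colour[root] = 0
            stack = [root]
            while stack:
                u = stack.pop()
                for v in adj[u]:
                    if v not in colour:
                        colour[v] = 1 - colour[u]
                        stack.append(v)
    A = {e for e in egalitarian if colour[find(e)] == 0}
    return majoritary | A, majoritary | (egalitarian - A)
```

For instance, two conflicting events of equal weight are both egalitarian, and the returned pair places one in each median configuration.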
\subsubsection{The compact median problem is $\#$P-hard}\label{reduction}
An analogue of the compact median problem for 2-SAT formulas was already studied by Feder~\cite{Fe}, who proved the following result:
\begin{proposition}[{\!\!\cite[Lemma 3.54]{Fe}}]\label{feder-median}
For a 2-SAT formula $\varphi$ without trivial and equivalent
variables, the problem of finding the median of the median graph
${\mathcal S}(\varphi)$ is $\#$P-hard.
\end{proposition}
The literals of any satisfiable 2-SAT formula $\varphi$ can be renamed
to transform $\varphi$ into an equivalent formula not containing
clauses $(x_i\vee x_j)$. Thus Proposition~\ref{feder-median} holds for
2-SAT formulas without trivial and equivalent variables and without clauses
$(x_i\vee x_j)$. Since the size of the event structure $\E_{\varphi}$
in Proposition~\ref{2SAT->event-struc} is quadratic in the size of the
2-SAT formula $\varphi$, we obtain the following result from
Propositions~\ref{2SAT->event-struc},~\ref{feder-median}, and
Remark~\ref{median-2sat}:
\begin{proposition}\label{compact-median-problem}
The compact median problem in an event structure $\E$ is $\#$P-hard.
\end{proposition}
\subsection*{Acknowledgements}
We would like to thank Florent Capelli, Nadia Creignou, and Yann
Strozecki for providing us with the proof of
Proposition~\ref{feder-median} much before we discovered this result
in~\cite{Fe}. The work on this paper was supported by ANR project
DISTANCIA (ANR-17-CE40-0015).
\bibliographystyle{amsplain}
\input{BCCV-medians-in-medians.bbl}
| {
"timestamp": "2020-07-24T02:18:43",
"yymm": "1907",
"arxiv_id": "1907.10398",
"language": "en",
"url": "https://arxiv.org/abs/1907.10398",
"abstract": "The median of a set of vertices $P$ of a graph $G$ is the set of all vertices $x$ of $G$ minimizing the sum of distances from $x$ to all vertices of $P$. In this paper, we present a linear time algorithm to compute medians in median graphs, improving over the existing quadratic time algorithm. We also present a linear time algorithm to compute medians in the $\\ell_1$-cube complexes associated with median graphs. Median graphs constitute the principal class of graphs investigated in metric graph theory and have a rich geometric and combinatorial structure, due to their bijections with CAT(0) cube complexes and domains of event structures. Our algorithm is based on the majority rule characterization of medians in median graphs and on a fast computation of parallelism classes of edges ($\\Theta$-classes or hyperplanes) via Lexicographic Breadth First Search (LexBFS). To prove the correctness of our algorithm, we show that any LexBFS ordering of the vertices of $G$ satisfies the following fellow traveler property of independent interest: the parents of any two adjacent vertices of $G$ are also adjacent. Using the fast computation of the $\\Theta$-classes, we also compute the Wiener index (total distance) of $G$ in linear time and the distance matrix in optimal quadratic time.",
"subjects": "Discrete Mathematics (cs.DM); Data Structures and Algorithms (cs.DS); Combinatorics (math.CO)",
"title": "Medians in median graphs and their cube complexes in linear time"
} |
https://arxiv.org/abs/1201.5909 | Brownian approximation to counting graphs | Let C(n,k) denote the number of connected graphs with n labeled vertices and n+k-1 edges. For any sequence (k_n), the limit of C(n,k_n) as n tends to infinity is known. It has been observed that, if k_n=o(\sqrt{n}), this limit is asymptotically equal to the $k_n$th moment of the area under the standard Brownian excursion. These moments have been computed in the literature via independent methods. In this article we show why this is true for k_n=o(\sqrt[3]{n}) starting from an observation made by Joel Spencer. The elementary argument uses a result about strong embedding of the Uniform empirical process in the Brownian bridge proved by Komlos, Major, and Tusnady. | \section{Introduction} Let $C(n,k)$ denote the number of connected graphs with $n$ labeled vertices and $n+k-1$ edges. For example, $C(n,0)$ is the number of labeled trees on $n$ vertices and is equal to $n^{n-2}$ by Cayley's theorem. There is a rich history of the study of the asymptotics of the sequence $C(n,k)$. Wright gives the asymptotic formula when $k$ is fixed and $n\rightarrow \infty$ in \cite{W77} and when $k=o(\sqrt[3]{n})$ in \cite{W80}. Two different approaches were taken in analyzing the case when both $n,k\rightarrow \infty$, one by Bender, Canfield, and McKay \cite{brcm}, and the other by Coja-Oghlan, Moore, and Sanwalani \cite{coms}, and van der Hofstad and Spencer \cite{vdhs}.
When $k=o(\sqrt{n})$ it has been observed that these limits (up to scaling) are also given by the moments of the area under a standard Brownian excursion. A standard Brownian excursion is a random element taking values in the subset of all nonnegative continuous functions on the interval $[0,1]$ (denoted by $C[0,1]$) given by
\[
\left\{ \omega \in C[0,1]:\; \omega(0)=\omega(1)=0,\;\text{and}\; \omega(t) >0\; \text{for all $t \in (0,1)$} \right\}.
\]
Informally, one can describe this process as a standard Brownian motion (starting from zero) conditioned to return to zero for the first time at time one. We refer the readers to the book by Revuz and Yor \cite{ry99} for a proper introduction. Another description of combinatorial interest is that a standard Brownian excursion is the contour of the Brownian Continuum Random Tree defined by Aldous \cite{A93}. In any case, the area under this random continuous curve is well-defined and measurable. Exact and asymptotic formulas for the moments of this random area can be found in the article by Louchard \cite{L} and the recent survey by Janson \cite{Jsurv}.
Our main result is the following.
\begin{thm}\label{thm:intro}
Let $A$ be a random variable whose law is given by the area under the standard Brownian excursion. Then, for all $(n,k)$ such that $k=k_n=o(\sqrt[3]{n})$ and $n\rightarrow \infty$ we have
\[
\lim_{n\rightarrow \infty} \frac{k!}{n^{n+3k/2-2}} \frac{C(n,k)}{ E A^k} = 1.
\]
\end{thm}
It can be found in \cite[Page 90, eqn.~(53)]{Jsurv} that
\begin{equation}\label{eq:momentasymp}
E A^k \sim 3\sqrt{2} k \left( \frac{k}{12 e} \right)^{k/2}, \qquad \text{as $k\rightarrow \infty$}.
\end{equation}
Hence, the previous theorem reproves the result of \cite{coms} and \cite[Sec 2.1]{vdhs} (when $k=o(\sqrt[3]{n})$):
\begin{equation}\label{eq:cnkfromexc}
C(n,k) \sim n^{n-2} n^{3k/2} \left( e/12k \right)^{k/2}\left( 3/\sqrt{\pi} \right)k^{1/2}.
\end{equation}
The first explanation of the connection between $C(n,k)$ and the area under the standard Brownian excursion was given in a beautiful paper by Spencer \cite{spen97}. Let $Z_1, \ldots, Z_n$ be independent Poisson random variables with mean one. Let $Y_j=\sum_{i=1}^j Z_i$ denote the partial sum process. Define the queue walk by $Q_0=1$, and
\begin{equation}\label{eq:poissonsum}
Q_i= Q_{i-1} + (Z_i-1)=Y_i - (i-1), \quad i=1,2,\ldots, n.
\end{equation}
Define the event $\mathrm{Exc}:=\{Q_i>0, \; i=1,\ldots, n-1, \; \text{and}\; Q_n=0 \}$. Consider the empirical area
\[
M= \sum_{i=1}^{n-1} \left(Q_i -1\right)=\sum_{i=1}^{n-1} \left(Y_i - i\right).
\]
Let $E^*$ denote the expectation conditioned on the event $\mathrm{Exc}$. Then (see \cite[Theorem 3.2]{spen97}) the following \textit{exact} relationship holds
\begin{equation}\label{eq:spencer}
C(n,k)= n^{n-2} E^*\left[ \combi{M}{k} \right].
\end{equation}
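Since $Q_n=0$ forces $Z_1+\cdots+Z_n=n-1$, the conditional expectation in \eqref{eq:spencer} is a finite sum and can be evaluated exactly for small $n$: the conditional weight of an outcome $(Z_1,\ldots,Z_n)$ is proportional to $\prod_i 1/Z_i!$. The sketch below is our own illustration (all names are ours); it recovers, e.g., $C(4,1)=15$, the number of connected graphs on $4$ labeled vertices with $4$ edges.

```python
from fractions import Fraction
from math import comb, factorial

def compositions(total, parts):
    """All ways to write `total` as an ordered sum of `parts` nonnegative ints."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def connected_graph_count(n, k):
    """Exact C(n, k) via Spencer's identity C(n,k) = n^(n-2) E*[binom(M, k)]."""
    num = den = Fraction(0)
    for zs in compositions(n - 1, n):      # outcomes with Q_n = 0
        weight = Fraction(1)
        for z in zs:
            weight /= factorial(z)         # Poisson weight; e^{-n} cancels
        q, M, exc = 1, 0, True             # queue walk Q_0 = 1
        for i, z in enumerate(zs, 1):
            q += z - 1
            if i < n:
                if q <= 0:                 # walk must stay positive before n
                    exc = False
                    break
                M += q - 1
        if exc:
            den += weight
            num += weight * comb(M, k)
    return n ** (n - 2) * num / den

# sanity checks against known values
assert connected_graph_count(4, 0) == 16   # Cayley: 4^2 labeled trees
assert connected_graph_count(3, 1) == 1    # the triangle
assert connected_graph_count(4, 1) == 15   # connected 4-vertex, 4-edge graphs
```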
Now, under proper scaling, the law of the queue walk under $\mathrm{Exc}$ converges in distribution to the law of a standard Brownian excursion. The area under the curve is a continuous function of the curve under the uniform distance. And hence, under proper scaling, which is dividing by $n^{3/2}$, $M$ converges in distribution to the area under the standard Brownian excursion.
The original article by Wright \cite{W77}, had identified the limiting value of $C(n,k)$, when $k$ is fixed and as $n$ tends to infinity, as $n^{n-2} n^{3k/2} c_k/ k!$, where this sequence of constants $(c_k,\; k\in \mathbb{N})$ came to be known as Wright's constants.
Spencer observed, but did not prove, that if the weak convergence of $M$ to that of $A$ can be strengthened to a convergence of moments of fixed order, this would explain the factor of $n^{3k/2}$ in Wright's expression, and provide an interpretation of Wright's constants as the moment sequence of $A$.
The limits of all sequences $C(n,k_n)$ have been derived in \cite{brcm} by analytic combinatorial methods, and in \cite{coms, vdhs} by probabilistic methods. For any sequence $k_n=o\left( \sqrt{n}\right)$, they are still given by the formula \eqref{eq:cnkfromexc}. However, these methods neither employ nor shed any light on the Brownian approximation.
The main result of this article establishes this moment convergence in the regime $k_n=o(\sqrt[3]{n})$. Our main tool is an explicit coupling made feasible by a strong approximation result due to Koml\'os, Major, and Tusn\'ady (KMT). This strong approximation result is the Brownian bridge version of the more famous KMT embedding that approximates a random walk by a Brownian motion.
Interestingly, the proof breaks down beyond $k_n=o(\sqrt[3]{n})$ for technical reasons and cannot be improved by the currently known version of the KMT result. It is curious that this is the same regime to which Wright \cite{W80} was able to extend his argument.
\section*{Acknowledgement} I am grateful to Prof.~Joel Spencer for suggesting the problem to me and for several useful discussions. I also thank an anonymous referee for a careful reading of a previous version of the paper which corrected an earlier error.
\section{Proof of the main result}
We will use the following notation throughout: For any event $A$, the random variable $1\{ A\}$ takes the value one if $A$ occurs, and takes the value zero otherwise.
\begin{thm}\label{thm:main}
Consider the queue walk and the event $\mathrm{Exc}$ from \eqref{eq:poissonsum}. Consider the rescaled continuous time process
\begin{equation}\label{whatisxt}
X_n(t)= \frac{1}{\sqrt{n-1}}\left(Q_{i}-1\right), \quad i/(n-1) \le t < (i+1)/(n-1), \quad i=0,\ldots n-2.
\end{equation}
Also define $X_n(1)=0$. Then $X_n$ is a path, defined on $[0,1]$, that starts and ends at zero.
Consider the empirical area
\begin{equation}\label{eq:whatismn}
M_n:= \int_0^1 X_n(t)dt= \frac{1}{(n-1)^{3/2}} \sum_{i=0}^{n-1} \left( Q_i - 1 \right).
\end{equation}
Let $E^*$ denote the expectation conditioned on the event $\mathrm{Exc}$. Let $A$ be the area under a standard Brownian excursion. For all $k=k_n=o\left( \sqrt[3]{n} \right)$, we have
\[
\lim_{n\rightarrow \infty} \frac{E^*\left( M_n \right)^k}{ E A^k} =1.
\]
\end{thm}
\bigskip
Let us outline the steps of the proof in reverse of the order in which they appear.
\begin{enumerate}
\item[Step 3.] We prove an embedding of the partial sum process, conditioned on $\mathrm{Exc}$, in a standard Brownian excursion and estimate errors.
\item[Step 2.] Step 3 follows by a known KMT embedding of the empirical process (to be defined later) in a Brownian bridge.
\item[Step 1.] We describe how Step 3 follows from Step 2 by \textit{re-rooting at the minimum}.
\end{enumerate}
We now expand each step of the proof.
\subsection{Re-rooting at the minimum} Consider a deterministic sequence of numbers $x := (x_1, \ldots, x_n)$. The walk with steps $x$ is the sequence of partial sums of $x$ starting at zero. That is
\[
s_0=0, \quad s_{i}= s_{i-1} + x_{i}, \quad i=1,2,\ldots
\]
For $i \in \{1,2,\ldots, n\}$ let $x^{(i)}$ denote the $i$th cyclic shift of $x$, that is, the sequence of length $n$ whose $j$th term is $x_{i+j}$ where $i+j$ refers to $(i + j)\bmod n$. The following lemma stems from the classical ballot problem and is used by Tak\'acs in \cite{T} to prove Kemperman's formula. For more details and the proofs see \cite[Section 6.1]{combstoc}.
\begin{lemma}\label{lemma:cyclicshift}
Let $x:=(x_1, \ldots, x_n)$ be a sequence with values in $\{ -1,0,1,\ldots\}$ and sum $-1$. Let $\sigma=\min\{i:\; s_i=\min_{1\le j\le n} s_j \}$, i.e., the first time the walk reaches its absolute minimum. Then the walk with steps $x^{(\sigma)}$ hits $-1$ for the first time at step $n$. Moreover, $\sigma$ is the only $i$ such that the walk with steps $x^{(i)}$ hits $-1$ for the first time at step $n$.
\end{lemma}
We say the path $S$ with steps $x$ has been re-rooted at the minimum to denote the path $R(S)$ with steps $x^{(\sigma)}$. The following lemma, called the discrete Vervaat's transform, follows from above and the exchangeability of increments \cite[page 125]{combstoc}.
\begin{lemma}\label{lemma:exchangeable}
Suppose $X=(X_1, \ldots, X_n)$ denotes a sequence of iid random variables with values in $\{-1,0,1,\ldots\}$. Let $S_0=0$, and $S_{i+1}=S_i+X_{i+1}$, for $i=0,1,\ldots,n-1$. We refer to the sequence $S=(S_0, S_1, \ldots, S_n)$ as a path. Let $\text{Bridge}$ denote the event $S_n=-1$. Then, conditioned on $\text{Bridge}$, the re-rooted path $R(S)$ is distributed as a path conditioned on $\mathrm{Exc}=\{ S_n=-1,\; S_i \ge0, \; 1\le i<n \}$.
\end{lemma}
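Lemma~\ref{lemma:cyclicshift} is easy to check computationally. A minimal sketch (our own illustrative code, not from the paper):

```python
def reroot_at_min(x):
    """Return x^(sigma): the cyclic shift of the step sequence x at the
    first time sigma the partial-sum walk attains its minimum."""
    n, s, walk = len(x), 0, []
    for step in x:
        s += step
        walk.append(s)                  # walk[i-1] = s_i
    sigma = walk.index(min(walk)) + 1   # first argmin, 1-based
    return x[sigma % n:] + x[:sigma % n]
```

For any step sequence with values in $\{-1,0,1,\ldots\}$ summing to $-1$, the re-rooted walk stays nonnegative before step $n$ and hits $-1$ exactly at step $n$, as the lemma asserts.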
A continuous version of this can be easily guessed \cite[page 125]{combstoc}. Recall that the Brownian bridge is a standard BM conditioned to be zero at time one.
\begin{lemma}\label{lemma:vervaat}
Let $\sigma$ be the (almost surely) unique time when a Brownian bridge $B$ achieves its global minimum. Then the process given by the cyclic transformation $\{ B_{\sigma+t} - B_\sigma, \; 0\le t \le 1\}$ (understood $\bmod 1$) is a standard Brownian excursion.
\end{lemma}
To use these results for our problem we need to understand the event $\text{Bridge}$ for the queue walk. Consider a unit rate Poisson point process on the half line $[0,\infty)$. That is, consider iid Exponential random variables $\{ T_i, \; i \in \mathbb{N}\}$ with mean one, and consider their partial sums $S_0=0$, and $S_n = \sum_{i=1}^n T_i$, $n\ge 1$. We say an `event' occurs at each `time point' $S_n$. Then, one can define $Z_i$ to be the number of events occurring during the interval $[i-1, i)$, for $i\ge 1$. This ensures that $Z_1, Z_2, \ldots$ are iid Poisson random variables with mean one.
The event $\{Q_n=0\}=\{Y_n=n-1\}$ is equivalent to the statement that there are $(n-1)$ events in the time interval $[0,n]$. It is well known that, conditioned on the number of events in a given interval, their points of occurrence under the Poisson process are independently and uniformly distributed. Thus, under $\text{Bridge}$, the times of occurrence of the $(n-1)$ events are given by $(n-1)$ iid Uniform$(0,1)$ random variables multiplied by $n$.
Let $U_1, U_2, \ldots,$ denote a countable sequence of iid Uniform$(0,1)$ random variables. Consider the process
\begin{equation}\label{eq:whatisf}
F_{n-1}(t):= \sum^{n-1}_{i=1}1\left\{ U_i \le t \right\} - nt, \quad t \in [0,1].
\end{equation}
Then the sequence $(F_{n-1}(k/n), \; k=0,1,2,\ldots,n)$ has the same law as $(Q_k-1,\; k=0,1,2,\ldots,n)$ under $\text{Bridge}$. Note that we have suppressed the dependence on $n$ of $Q_k$. In what follows the correct $n$ will be obvious from the context.
Also define what is known as the \textit{empirical process}
\begin{equation}\label{eq:whatisgn}
G_{n}(t):= \sqrt{n}\left( \frac{1}{n}\sum^{n}_{i=1}1\left\{ U_i \le t \right\} - t\right) = \frac{1}{\sqrt{n}} \left( F_n(t) + t \right), \quad t\in [0,1].
\end{equation}
\subsection{Strong embedding for the empirical process.} The famous Koml\'os-Major-Tusn\'ady (KMT) paper \cite{kmt} contains a result on the strong embedding of the empirical process in the Brownian bridge. Several variations of the result have since been discovered due to its importance in the empirical process theory. See \cite{cchm}, \cite{mz87}. We take the statement of the following lemma from \cite{mz87}.
\begin{lemma}\label{lemma:strong}
There is a probability space on which one can define random variables $U_1, U_2, \ldots$ that are iid Uniform $(0,1)$ and a sequence of processes $B_1, B_2, \ldots$ that are each distributed as a standard Brownian bridge such that if we define
\[
\Delta_1:=\Delta_1(n)= \sup_{0\le s \le 1}\abs{G_n(s) - B_n(s)},
\]
then there exist universal positive constants $a,K,\lambda$ such that
\begin{equation}\label{eq:kmtbnd}
P\left( \sqrt{n} \Delta_1 > a \log n + x \right) < K e^{- \lambda x}, \quad \text{for all}\; x\in [0, \infty).
\end{equation}
\end{lemma}
Consider now a path of $G_n$ and $B_n$ as defined on the probability space above. We now have three processes: the process $X_{n+1}$ as defined in \eqref{whatisxt}, the empirical process $G_n$ and the Brownian bridge $B_n$. We know by Lemma \ref{lemma:strong} that the supremum distance between the processes $G_n$ and $B_n$ is $\Delta_1$.
We now estimate the supremum distance between $X_{n+1}$ and $G_n$. Note from the line following \eqref{eq:whatisf}
\[
X_{n+1}(t)= \frac{1}{\sqrt{n}} F_n\left( \frac{k}{n+1} \right), \quad \frac{k}{n+1}\le t < \frac{k+1}{n+1},\quad k=0,1,\ldots,n.
\]
Subtracting, from \eqref{eq:whatisgn}, we get
\[
\abs{X_{n+1}(t) - G_n(t) } \le \frac{1}{\sqrt{n}} \abs{F_n(t) - F_n\left( \frac{k}{n+1} \right)}.
\]
Using \eqref{eq:whatisgn} again, for $k/(n+1) \le t < (k+1)/(n+1)$, we have
\[
\abs{X_{n+1}(t) - G_n(t) } \le \abs{G_n(t) - G_n\left(\frac{k}{n+1} \right)} + \frac{1}{(n+1)\sqrt{n}}.
\]
Taking supremum on both sides above and using Lemma \ref{lemma:strong} we get
\[
\begin{split}
\Delta_2&:=\sup_{0\le t\le 1} \abs{X_{n+1}(t) - G_n(t) } \\
& \le \frac{1}{n^{3/2}} + \sup_{0\le k \le n}\sup_{k< (n+1) t < k+1} \abs{B_n(t) - B_n(k/(n+1))} + 2\Delta_1.
\end{split}
\]
Express the Brownian bridge $B_n$ as $(\beta_n(t) - t \beta_n(1),\; 0\le t\le 1)$, for some standard Brownian motion $\beta_n$ (see \cite[page 37]{ry99}). Then
\[
\begin{split}
\sup_{k/(n+1)< t < (k+1)/(n+1)}& \abs{B_n(t) - B_n(k/(n+1))} \le \\
&\sup_{k/(n+1)< t < (k+1)/(n+1)} \abs{\beta_n(t) - \beta_n(k/(n+1))} + \frac{\abs{\beta_n(1)}}{n+1}.
\end{split}
\]
By the Markov property of Brownian motion we know that the quantities
\[
\abs{Z_k}/\sqrt{n+1}:=\sup_{k/(n+1)\le t\le (k+1)/(n+1)}\abs{\beta_n(t) - \beta_n(k/(n+1))}
\]
are independent and identically distributed. In fact, by the stationary increment property of Brownian motion and scaling, this distribution is the same as that of $\abs{\beta_n}^*=\sup_{0\le s \le 1} \abs{\beta_n(s)}$. Moreover, by Paul L\'evy's Characterization Theorem (\cite[page 240]{ry99}), we know that $\overline{\beta_n}:=\sup_{0\le s \le 1} \beta_n(s)$ and $\underline{\beta_n}:=\sup_{0\le s\le 1} -\beta_n(s)$ are both distributed as the absolute value of a standard Normal random variable $N$. Observe that, for any positive $x$,
\[
P(\abs{N} > x)= P(\overline{\beta_n} > x) \le P\left( \abs{\beta_n}^* > x \right) \le P(\overline{\beta_n} > x) + P\left( \underline{\beta_n} > x \right)= 2P(\abs{N} > x).
\]
Thus the distribution of each $Z_k$ (and all of its moments) is comparable to that of the absolute value of the standard Normal. We will use this fact implicitly in the following argument. In any case
\[
\Delta_2 \le \frac{1}{n^{3/2}} + \frac{1}{\sqrt{n+1}} \max_{0\le i \le n} \abs{Z_i} + \frac{\abs{\beta_n(1)}}{n+1}+ 2 \Delta_1.
\]
Hence, the supremum distance between the continuous time walk $X_{n+1}$ and the Brownian bridge
\begin{equation}\label{eq:supest}
\Delta_{(n)}:= \sup_{0\le t \le 1}\abs{X_{n+1}(t) - B_n(t)}\le \Delta_1 + \Delta_2 \le \frac{1}{n^{3/2}} + \frac{1}{\sqrt{n+1}} \max_{0\le i \le n} \abs{Z_i} + \frac{\abs{\beta_n(1)}}{n+1}+ 3 \Delta_1.
\end{equation}
\bigskip
\begin{figure}[t]
\centering
\includegraphics[width=5in, height=1.6in]{reroot.pdf}
\caption{Re-rooting at the minimum}
\label{fig:reroot}
\end{figure}
We now re-root at the minimum both the continuous time walk $X_{n+1}$ and the Brownian bridge $B_n$. Please see Figure \ref{fig:reroot}, where the smooth curve is a sine curve approximated by a jagged path. By \eqref{eq:supest} their minima differ by at most $\Delta_{(n)}$, and they can be attained at different times $\sigma^{w}$ (for the walk) and $\sigma^{b}$ (for the Brownian bridge). Depending on how close $\sigma^{w}$ and $\sigma^{b}$ are, after re-rooting we get a walk excursion and a standard Brownian excursion which might be a little off. However, this has little effect on the total area.
To see what we mean, suppose $y$ is a continuous curve on $[0,1]$ such that $y(0)=y(1)=0$, with absolute minimum $y_{\min}\le 0$. Then the area under the curve $\tilde{y}(\cdot):= y(\cdot) - y_{\min}$ is equal to the area under the curve that is obtained by re-rooting $y$ at its minimum. The difference between the minima of the walk and the Brownian bridge is at most $\Delta_{(n)}$, which is also the uniform distance between the two curves. Thus the areas under the two re-rooted curves differ by at most $2 \Delta_{(n)}$. We have the following result.
\begin{thm}\label{thm:space}
Consider the set-up in Theorem \ref{thm:main}. On the KMT space
described in Lemma \ref{lemma:strong}, it is possible to construct,
for each $n\in \mathbb{N}$, a copy of $M_n$ under $\mathrm{Exc}$ and
a random variable $A_n$ distributed as the area of a standard
Brownian excursion such that, writing $D_n=\abs{M_n - A_n}$, there
exist two absolute positive constants $C_1$ and $C_2$ such that for
all $k,n \in \mathbb{N}$ we have
\[
\left[ E D_n^k\right]^{1/k} \le C_1 \left( \frac{\log n}{\sqrt{n}} \right) + C_2 \left( \frac{k}{\sqrt{n}}\right).
\]
\end{thm}
\begin{proof}[Proof of Theorem \ref{thm:space}] From \eqref{eq:supest}, the discussion above, and the triangle inequality, we get
\begin{equation}\label{eq:triangle}
\begin{split}
\left(E D_n^{k}\right)^{1/k} &\le 2 \left[ E\left( \frac{1}{n^{3/2}} + \frac{1}{\sqrt{n+1}} \max_{0\le i \le n} \abs{Z_i} + \frac{\abs{\beta_n(1)}}{n+1}+ 3 \Delta_1 \right)^k\right]^{1/k}\\
&\le 2\left[ \frac{1}{n^{3/2}} + \frac{1}{\sqrt{n}} \left( E\left(\max_{i} \abs{Z_i}\right)^k\right)^{1/k} +\frac{1}{n+1}\left(E\abs{\beta_n(1)}^k\right)^{1/k}+ 3 \left(E \Delta_1^k\right)^{1/k} \right]\\
&\le \frac{2}{n^{3/2}} + \frac{2}{\sqrt{n}} \left[ E\left(\max_{i} \abs{Z_i}\right)^k\right]^{1/k} + \frac{2}{n+1}\left(E\abs{\beta_n(1)}^k\right)^{1/k}+ 6 \left( E \Delta_1^k\right)^{1/k}.
\end{split}
\end{equation}
Let $\mu_n=E\max_i \abs{Z_i}$. It is well-known that $\mu_n=O(\sqrt{\log n})$. From the Gaussian concentration of measure we know that $\abs{\max_i \abs{Z_i} - \mu_n}$ has a sub-Gaussian tail with variance proxy $1$. Thus, from known moments of the standard Normal distribution we infer that
\[
\begin{split}
\left(E\left(\max_{i} \abs{Z_i}\right)^k\right)^{1/k} &\le \mu_n + \left( E\abs{\max_i \abs{Z_i} - \mu_n}^k\right)^{1/k} \le \mu_n + C^* \sqrt{k},\\
\left(E\abs{\beta_n(1)}^k\right)^{1/k} &\le C^* \sqrt{k},
\end{split}
\]
where $C^*$ is some absolute constant.
Further, note that $\Delta_1=\sup_s\abs{G_n(s) - B_n(s)}$. Now
\[
\sqrt{n} \Delta_1 \le a\log n + \left(\sqrt{n}\,\Delta_1 - a \log n \right)^+,
\]
where $x^+= x1\{ x>0\}$. Again using the triangle inequality for the $k$th norm and the bound from \eqref{eq:kmtbnd}, we get
\[
\begin{split}
\sqrt{n} \left( E \Delta_1^k\right)^{1/k} &\le a \log n + \left[ k K \int_0^\infty x^{k-1} e^{-\lambda x} dx\right]^{1/k}= a \log n + K^{1/k} \lambda^{-1} (k!)^{1/k}.
\end{split}
\]
Thus
\[
\left[ E \Delta_1^k \right]^{1/k} \le \frac{a\log n}{\sqrt{n}} + \frac{O(k)}{\sqrt{n}}.
\]
Substituting in \eqref{eq:triangle} we get our desired result.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main}]
From the last theorem we know that
\begin{equation}\label{eq:poitoexp}
\left( E D_n^{k} \right)^{1/k} \le C_1\left( \frac{\log n}{\sqrt{n}} \right) + C_2 \left( \frac{k}{\sqrt{n}} \right).
\end{equation}
From here one can estimate the deviation of moments. By elementary calculus,
\begin{equation}\label{eq:calcul}
\abs{ (1+\epsilon)^k - 1} \le 2k \abs{\epsilon}, \quad \text{for all}\; \abs{\epsilon} \le \frac{1}{2k}.
\end{equation}
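A quick numerical sanity check of \eqref{eq:calcul} (our own illustration; the analytic proof uses $(1+\epsilon)^k\le e^{k\epsilon}$ and $k\abs{\epsilon}\le 1/2$):

```python
# Check |(1+eps)^k - 1| <= 2k|eps| on a grid of eps in [-1/(2k), 1/(2k)].
for k in range(1, 41):
    for j in range(-100, 101):
        eps = j / (200 * k)          # |eps| <= 1/(2k)
        assert abs((1 + eps) ** k - 1) <= 2 * k * abs(eps) + 1e-12
```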
Choose $\epsilon$ such that
\[
(1+\epsilon)= \left[ E\left( M_n \right)^{k}\right]^{1/k} / \left[ E\left( A_n \right)^{k} \right]^{1/k}.
\]
Clearly then
\[
\abs{\epsilon}\le \frac{\abs{\left[ E\left( M_n \right)^{k}\right]^{1/k} - \left[ E\left( A_n \right)^{k} \right]^{1/k}} }{\left[ E\left( A_n \right)^{k} \right]^{1/k}}\le \frac{\left[E \left(D_n\right)^k\right]^{1/k}}{\left[ E\left( A_n \right)^{k} \right]^{1/k}}.
\]
Note that $A_n$, being the area under a standard Brownian excursion, has a law that does not depend on $n$. From \eqref{eq:momentasymp} we claim the existence of another absolute constant $C_3 >0$ such that for all $k \in \mathbb{N}$,
\[
\left[ E\left( A_n \right)^{k} \right]^{1/k} \ge C_3 \sqrt{k}.
\]
Thus
\[
\abs{\epsilon}\le O\left(\frac{\log n}{\sqrt{kn}} \right) + O\left( \sqrt{\frac{k}{n}} \right).
\]
Thus $k\epsilon$ converges to zero for all sequences such that $k=o(\sqrt[3]{n})$. This finishes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:intro}]
To finish the proof of Theorem \ref{thm:intro} we simply need to argue that when $k=o(\sqrt[3]{n})$, then
\[
\lim_{n\rightarrow \infty} k!\frac{E^*\left[ \combi{M}{k} \right]}{E^*\left(M^k\right)}=1
\]
and use \eqref{eq:spencer}. However, this follows from elementary bounds since $M=O(n^{3/2})$ and $k=o(\sqrt[3]{n})$. We skip the details.
\end{proof}
\bibliographystyle{alpha}
| {
"timestamp": "2012-06-04T02:05:29",
"yymm": "1201",
"arxiv_id": "1201.5909",
"language": "en",
"url": "https://arxiv.org/abs/1201.5909",
"abstract": "Let C(n,k) denote the number of connected graphs with n labeled vertices and n+k-1 edges. For any sequence (k_n), the limit of C(n,k_n) as n tends to infinity is known. It has been observed that, if k_n=o(\\sqrt{n}), this limit is asymptotically equal to the $k_n$th moment of the area under the standard Brownian excursion. These moments have been computed in the literature via independent methods. In this article we show why this is true for k_n=o(\\sqrt[3]{n}) starting from an observation made by Joel Spencer. The elementary argument uses a result about strong embedding of the Uniform empirical process in the Brownian bridge proved by Komlos, Major, and Tusnady.",
"subjects": "Probability (math.PR); Combinatorics (math.CO)",
"title": "Brownian approximation to counting graphs"
} |
https://arxiv.org/abs/1711.01194 | New Bounds on the Biplanar Crossing Number of Low-dimensional Hypercubes | In this note we provide an improved upper bound on the biplanar crossing number of the 8-dimensional hypercube. The $k$-planar crossing number of a graph $cr_k(G)$ is the number of crossings required when every edge of $G$ must be drawn in one of $k$ distinct planes. It was shown in Czabarka et al. that $cr_2(Q_8) \leq 256$ which we improve to $cr_2(Q_8) \leq 128$. Our approach highlights the relationship between symmetric drawings and the study of $k$-planar crossing numbers. We conclude with several open questions concerning this relationship. | \section{Introduction}
The traditional \textit{crossing number} of a graph $G=(V,E)$, denoted by $cr(G)$, is the minimum number of edge crossings required to draw $G$ in the 2-dimensional Euclidean plane. To study printed circuit boards, Owens \cite{Owe} generalized the question: what is the minimum number of edge crossings required by a drawing that is allowed to carefully divide the edges of $G$ among two different 2-dimensional Euclidean planes? Since then the definition has been extended to $k \geq 2$ planes \cite{Cza}.
Suppose that $E$ is partitioned into $k$ disjoint subsets, $E_1,E_2,...,E_k$, and let $G_i=(V, E_i)$. Each $G_i$ has some crossing number $cr(G_i)$. Suppose further that $G_i$ will be drawn in the $i$th plane from a set of $k$ distinct planes. The \emph{$k-planar$ crossing number of $G$}, denoted $cr_k(G)$ is then the minimum of
\[cr(G_1)+cr(G_2)+...+cr(G_k)\]
over all partitions of the edge set $E$.
Trivially, letting $E_1=E$ shows that $cr_k(G)\leq cr(G)$. The question remains: given the freedom to consider any partition of $G$'s edges among $k$ disjoint planes, how low can we drive the number of required crossings?
A significant challenge in designing a crossing-minimizing $k$-planar drawing of $G$ is that, even for quite simple $G_i$, $cr(G_i)$ could be unknown. For example: for $Q_4$, the 4-dimensional hypercube, it is known that $cr(Q_4)=8$; however, the exact value of $cr(Q_d)$ is unknown for $d>4$ \cite{Far}.
The previous upper bound $cr_2(Q_8) \leq 256$ was given by a construction of Czabarka, S\'ykora, Sz\'ekely, and Vr\'to in \cite{Cza}. Czabarka et al. give a general construction for an upper bound on $cr_2(Q_d)$ that achieves 256 crossings when $d=8$. Their approach specifies a bi-planar partition of the edges of $Q_8$ based on a set of lower-dimensional hypercube subgraphs. Their upper bound is minimized when these hypercube subgraphs are as-uniform-as-possible in size. In particular, for $Q_8$ their construction specifies sixteen disjoint $Q_4$ subgraphs in Plane 1 and a further sixteen disjoint $Q_4$ subgraphs in Plane 2. Recall that $cr(Q_4)=8$, so drawing each disjoint copy of $Q_4$ optimally yields \[cr_2(Q_8) \leq 16\times2\times8=256.\]
We now present our main result, which improves the best known upper bound on $cr_2(Q_8)$ by a factor of 2.
\begin{theorem}\label{Tmain}
There exists a 2-planar drawing of the 8-dimensional hypercube with 128 crossings so that $cr_2(Q_8)\leq 128$.
\end{theorem}
\section{A biplanar drawing of $Q_8$ with 128 crossings}
To prove Theorem \ref{Tmain}, we provide a biplanar drawing of $Q_8$ with 128 crossings. We improve the previous construction by plane-swapping edges to give a net reduction in total edge crossings. Our drawing consists of graphs $G_1$ and $G_2$ in Planes 1 and 2, respectively, such that $G_1 \cong G_2$ and $cr(G_i) \leq 64$. We found several distinct biplanar drawings of $Q_8$ with exactly 128 crossings which satisfy these conditions. For ease of exposition, we present a highly symmetric drawing.
We define a \emph{depleted $n$-dimensional hypercube} to be a spanning subgraph of $Q_n$, that is, a graph with vertex set $V(Q_n)$ whose edge set is a subset of $E(Q_n)$, and will refer to such graphs as \textit{depleted $n$-cubes}. We will make use of depleted 5-cubes. To this end we introduce the following partition $V(Q_4) := C_1 \sqcup C_2$ where
\begin{align*}
C_1&:=\{0000,1000,0010,1010,0011,1011,0001,1001\} \\
C_2 &:= \{0111,1111,0101,1101,0100,1100,0110,1110\}.
\end{align*}
For ease of notation, we write $\hat c$ for an element of $C_1$ and $\check c$ for an element of $C_2$. Moreover, we let $b \in \{0,1\}$ denote a single binary bit. Maintaining the notation of \cite{Cza}, we refer to each node of $Q_8$ by a length-8 binary string from $\{0,1\}^8$. Given two binary strings $s_1$ and $s_2$, we write $s_1s_2$, or $s_1-s_2$ for readability, for the usual string concatenation.
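As a sanity check on this notation, the following short Python sketch (ours, not part of the paper; all names are our own) verifies that $C_1$ and $C_2$ partition $V(Q_4)$ and that prefix-suffix concatenation of two length-4 strings enumerates the 256 vertices of $Q_8$.

```python
# Sketch (ours, not from the paper): check that C_1 and C_2 partition
# V(Q_4), and that prefix-suffix concatenation enumerates V(Q_8).
from itertools import product

C1 = {"0000", "1000", "0010", "1010", "0011", "1011", "0001", "1001"}
C2 = {"0111", "1111", "0101", "1101", "0100", "1100", "0110", "1110"}

V_Q4 = {"".join(bits) for bits in product("01", repeat=4)}

assert C1 | C2 == V_Q4 and C1.isdisjoint(C2)   # a partition of V(Q_4)

# Each node of Q_8 is a length-8 string s1 + s2 with s1, s2 in {0,1}^4.
V_Q8 = {s1 + s2 for s1 in V_Q4 for s2 in V_Q4}
assert len(V_Q8) == 256
```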
In our construction, each plane contains 512 edges, and furthermore, $G_1$ and $G_2$ are isomorphic. For exposition, suppose that we initially have a Plane 0 which contains all the edges and vertices of $Q_8$. Further suppose that there exist Planes 1 and 2 which each initially contain the vertices of $Q_8$ and no edges. We move every edge from Plane 0 to either Plane 1 or Plane 2 to create our biplanar partition. In the following table, we describe explicitly the 512 edges we add to Plane 1.
Consider the set of pairs \[P_1:=\{(0000,1000), (0010, 1010), (0011, 1011), (0001, 1001)\} \subset \binom{C_1}{2}.\] For $(\hat c_1, \hat c_2) \in P_1$ define the \emph{depleted 5-cube of Type 1}, denoted $D_1(\hat c_1, \hat c_2)$, according to Table \ref{Type1}.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{$E(D_1(\hat c_1, \hat c_2))$ for $\hat c \in \{\hat c_1, \hat c_2\}$, $(\hat c_1, \hat c_2) \in P_1$, and $b \in \{0,1\}$}\\
\hline
$(\hat{c}-b000, \hat{c}-b001)$& $(\hat{c}-b000, \hat{c}-b100)$& $ (\hat{c}-b100, \hat{c}-b101)$&$(\hat{c}-b001, \hat{c}-b101)$\\ \hline
$(\hat{c}-b010, \hat{c}-b011) $&$ (\hat{c}-b010, \hat{c}-b110) $& $(\hat{c}-b110, \hat{c}-b111) $& $(\hat{c}-b011, \hat{c}-b111)$\\ \hline
$(\hat{c}-b000, \hat{c}-b010)$ & $(\hat{c}-b001, \hat{c}-b011)$ &$(\hat{c}-b100, \hat{c}-b110)$ &$(\hat{c}-b101, \hat{c}-b111)$\\ \hline
$(\hat{c}-0101, \hat{c}-1101)$ &$(\hat{c}-0111, \hat{c}-1111)$ &$(\hat{c}-0110, \hat{c}-1110)$& $(\hat{c}-0100, \hat{c}-1100)$\\
\hline
$(\hat c_1-0000, \hat c_2-0000)$ & $(\hat c_1-0100, \hat c_2-0100)$ & $(\hat c_1-1100, \hat c_2-1100)$ & $(\hat c_1-1000, \hat c_2-1000)$\\ \hline
$(\hat c_1-1001, \hat c_2-1001)$ & $(\hat c_1-1101, \hat c_2-1101)$ & $(\hat c_1-0101, \hat c_2-0101)$ & $(\hat c_1-0001, \hat c_2-0001)$\\
\hline
\end{tabular}
\caption{Table of the 64 edges of \textit{depleted 5-cubes of Type 1}.}
\label{Type1}
\end{table}
The four \textit{depleted 5-cubes of Type 1} are vertex disjoint (this follows from the form of the pairs in $P_1$). We present an eight-crossing drawing of a \textit{depleted 5-cube of Type 1} in Figure \ref{F:Type1}, which proves the following claim.
\begin{claim}
$cr(D_1(\hat c_1, \hat c_2)) \leq 8$.
\end{claim}
We similarly define $D_2(\check c_1, \check c_2)$, the \emph{depleted 5-cube of Type 2}, according to Table \ref{Type2}, given \[P_2 := \{(0111, 1111), (0101, 1101), (0100, 1100), (0110, 1110)\} \subset \binom{C_2}{2}.\]
\begin{figure}[h!]
\hspace*{-17mm}\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, height=14.5cm]{drawing1matchedpairofcubes.pdf}
\caption{A drawing of $D_1(\hat c_1, \hat c_2)$ for $(\hat c_1, \hat c_2) \in P_1$ with eight crossings.}
\label{F:Type1}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{$E(D_2(\check c_1, \check c_2))$ for $\check c \in \{\check c_1, \check c_2\}$, $(\check c_1, \check c_2) \in P_2$, and $b \in \{0,1\}$} \\
\hline
$(\check{c}-b000, \check{c}-b001)$& $(\check{c}-b000, \check{c}-b100)$& $(\check{c}-b100, \check{c}-b101)$&$(\check{c}-b001, \check{c}-b101)$\\ \hline
$(\check{c}-b010, \check{c}-b011)$ & $(\check{c}-b010, \check{c}-b110)$ & $(\check{c}-b110, \check{c}-b111)$ & $(\check{c}-b011, \check{c}-b111)$\\ \hline
$(\check{c}-b000, \check{c}-b010)$ & $(\check{c}-b001, \check{c}-b011)$ &$(\check{c}-b100, \check{c}-b110)$ &$(\check{c}-b101, \check{c}-b111)$\\ \hline
$(\check{c}-0011, \check{c}-1011)$ &$(\check{c}-0001, \check{c}-1001)$ &$(\check{c}-0000, \check{c}-1000)$ & $(\check{c}-0010, \check{c}-1010)$\\
\hline
$(\check c_1-0110, \check c_2-0110)$ & $(\check c_1-0111, \check c_2-0111)$ & $(\check c_1-0011, \check c_2-0011)$ & $(\check c_1-1011, \check c_2-1011)$\\ \hline
$(\check c_1-1111, \check c_2-1111)$ & $(\check c_1-1110, \check c_2-1110)$ & $(\check c_1-1010, \check c_2-1010)$ & $(\check c_1-0010, \check c_2-0010)$\\
\hline
\end{tabular}
\caption{Table of 64 edges of \textit{depleted 5-cubes of Type 2.}}
\label{Type2}
\end{table}
Again, the four \textit{depleted 5-cubes of Type 2} are vertex disjoint.
An eight-crossing drawing of a \textit{depleted 5-cube of Type 2} is given in Figure \ref{F:Type2}, which proves the following claim.
\begin{claim}
$cr(D_2(\check c_1, \check c_2)) \leq 8$.
\end{claim}
Each \textit{depleted 5-cube} has 64 edges, so Plane 1 contains $8 \times 64 = 512$ edges. Further, no
\textit{depleted 5-cube of Type 1} shares a vertex with a \textit{depleted 5-cube of Type 2}; this follows from the form of the pairs in $P_1$ and $P_2$ and the form of the edge sets described in Tables \ref{Type1} and \ref{Type2}. Thus, these 512 edges can be drawn in Plane 1 with at most $8 \times 8 = 64$ crossings.
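The counting and disjointness claims above are mechanical to verify. The following Python sketch (ours, not from the paper; the row groupings are read off from Tables \ref{Type1} and \ref{Type2}, and all function names are our own) regenerates the Plane-1 edge set and checks that each depleted 5-cube has 64 edges, that every one of them is a genuine $Q_8$ edge, that Plane 1 contains 512 edges in total, and that the eight cubes are pairwise vertex disjoint.

```python
# Sketch (ours): regenerate the Plane-1 edge set from Tables 1 and 2
# and verify the counting and disjointness claims.
SUFFIX_PAIRS = [("000", "001"), ("000", "100"), ("100", "101"), ("001", "101"),
                ("010", "011"), ("010", "110"), ("110", "111"), ("011", "111"),
                ("000", "010"), ("001", "011"), ("100", "110"), ("101", "111")]

def depleted_5cube(c1, c2, cross, matched):
    """Rows 1-3 of a table (SUFFIX_PAIRS with b in {0,1}), row 4 (cross),
    rows 5-6 (matched): 48 + 8 + 8 = 64 edges."""
    E = set()
    for c in (c1, c2):
        for b in "01":
            E |= {frozenset({c + b + s, c + b + t}) for s, t in SUFFIX_PAIRS}
        E |= {frozenset({c + s, c + t}) for s, t in cross}
    E |= {frozenset({c1 + s, c2 + s}) for s in matched}
    return E

P1 = [("0000", "1000"), ("0010", "1010"), ("0011", "1011"), ("0001", "1001")]
P2 = [("0111", "1111"), ("0101", "1101"), ("0100", "1100"), ("0110", "1110")]
cross1 = [("0101", "1101"), ("0111", "1111"), ("0110", "1110"), ("0100", "1100")]
cross2 = [("0011", "1011"), ("0001", "1001"), ("0000", "1000"), ("0010", "1010")]
match1 = ["0000", "0100", "1100", "1000", "1001", "1101", "0101", "0001"]
match2 = ["0110", "0111", "0011", "1011", "1111", "1110", "1010", "0010"]

cubes = ([depleted_5cube(a, b, cross1, match1) for a, b in P1] +
         [depleted_5cube(a, b, cross2, match2) for a, b in P2])
assert all(len(E) == 64 for E in cubes)               # 64 edges per cube
assert all(sum(a != b for a, b in zip(*sorted(e))) == 1
           for E in cubes for e in E)                 # all are Q_8 edges
plane1 = set().union(*cubes)
assert len(plane1) == 512                             # 8 x 64 distinct edges
verts = [set().union(*map(set, E)) for E in cubes]
assert all(verts[i].isdisjoint(verts[j])              # pairwise vertex disjoint
           for i in range(8) for j in range(i + 1, 8))
```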
\begin{remark}
Plane 2 contains all the edges of $Q_8$ which are not in Plane 1. Moreover, $G_1 \cong G_2$.
\end{remark}
\begin{figure}[h!]
\hspace*{-17mm}\includegraphics[trim = 0mm 0mm 0mm 0mm, clip, height=14.5cm]{drawing2matchedpairofcubessecondtype.pdf}
\caption{A drawing of $D_2(\check c_1, \check c_2)$ for $(\check c_1, \check c_2) \in P_2$ with eight crossings.}
\label{F:Type2}
\end{figure}
We now provide a more illuminating description of the edges of Plane 2, which have a symmetric representation in terms of the edges in Plane 1. Let $\rho:E(Q_8) \to E(Q_8)$ be the map \[\rho((v_pv_s, u_pu_s)) = (v_sv_p, u_su_p),\] where $v_p = v_1v_2v_3v_4$ is the length-four prefix and $v_s = v_5v_6v_7v_8$ the length-four suffix of the vertex $v = v_1v_2\dots v_8$ (and similarly for $u$). The map $\rho$ captures the symmetric relationship between the edges in Plane 1 and the edges in Plane 2; fixing an ordering on the vertices of $Q_8$, one can check that $\rho$ is indeed a bijection. As an example, in Table \ref{Type1} we assign the edge $(\hat{c}-b000, \hat{c}-b001)$ to Plane 1, so we send \[\rho((\hat{c}-b000,\hat{c}-b001)) = (b000-\hat{c}, b001-\hat{c})\] to Plane 2. If we let ${\cal P}_i$ denote the set of edges assigned to Plane $i$, then ${\cal P}_2 = \rho({\cal P}_1).$
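A minimal Python sketch of $\rho$ (ours; the function names are our own) makes the bijectivity transparent: in the string representation, $\rho$ simply swaps the length-4 prefix and suffix of each endpoint, so it is an involution and hence a bijection on edges.

```python
# Sketch (ours): the prefix/suffix swap rho acting on an edge of Q_8,
# with vertices represented as length-8 bit strings.
def swap(v):
    return v[4:] + v[:4]          # exchange the length-4 prefix and suffix

def rho(edge):
    u, v = edge
    return (swap(u), swap(v))

# Example from the text, with c-hat = 0000 and b = 1:
e = ("00001000", "00001001")      # the edge (c-b000, c-b001)
assert rho(e) == ("10000000", "10010000")
assert rho(rho(e)) == e           # rho is an involution, hence a bijection
```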
Moreover, the drawings provided in Figures \ref{F:Type1} and \ref{F:Type2} for \textit{depleted 5-cubes of Type 1} and \textit{Type 2}, respectively, are also drawings of their images under $\rho$. It follows that, for the edge partition we describe, each plane can be drawn with at most $64$ crossings, implying that $cr_2(Q_8) \leq 128$ as desired.
A natural next step in this research is to determine whether or not this bound is sharp. The authors believe this to be the case; however, such a proof remains elusive. For now, we leave the reader with the following conjecture.
\begin{conjecture}
$cr_2(Q_8) = 128.$
\end{conjecture}
\section{Lower Bounds on \textit{structurally-symmetric} $k$-planar crossing numbers for Hypercubes}
Notably, our biplanar drawing of $Q_8$ satisfies $G_1 \cong G_2$. This is a rather special property and is termed \emph{self-complementary} in \cite{Cza}. It could be the case that there exists a non-isomorphic partition of $E(Q_8)$ which admits strictly fewer crossings. Yet, we wonder whether demanding that the $G_i$ be isomorphic truly forces a suboptimal number of crossings for $k$-planar drawings. In particular, such symmetry would be expected when considering highly symmetric graphs like hypercubes.
To formalize this question, we introduce the following generalization of self-complementary edge partitions.
\begin{definition} For a finite graph $G=(V,E)$, let $P$ denote an edge-partition $E = (E_1, E_2,\dots, E_k)$ and define $G_i=(V, E_i)$ for all $i$. If $G_r \cong G_s$ for all pairs $(r,s) \in [k] \times [k]$, then $P$ is a $k$\textit{-structurally-symmetric partition} of $G$.
\end{definition}
Trivially, when $|E|$ is not a multiple of $k$, no $k$\textit{-structurally-symmetric partition} of $E$ exists.\\
\begin{definition} If there exists a $k$\textit{-structurally-symmetric partition} for $G$ that can be drawn with $cr_k(G)$ crossings then we say that the graph $G$ is $k$\textit{-structurally-symmetric}.
\end{definition}
It is unclear whether graphs exist for which any $k$\textit{-structurally-symmetric partition} of $E$ forces a sub-optimal $k$-planar drawing (which requires strictly more than $cr_k(G)$ crossings).
In particular, we leave the reader with the following question.
\begin{question}Is the $d$-dimensional hypercube $2$\textit{-structurally-symmetric}?
\end{question}
This question motivates the following definition.
\begin{definition}
Let $cr_{kss}(G)$ denote the minimum number of crossings required among all $k$-structurally symmetric partitions of $G$. We call $cr_{kss}$ \emph{the $k$-structurally-symmetric crossing number of $G$}.
\end{definition}
Trivially, $cr_{kss}(G)\geq cr_k(G)$, so the $k$-\textit{structurally-symmetric} graphs are precisely those graphs $G$ with $cr_k(G)=cr_{kss}(G)$. We conclude by presenting the reader with questions concerning $k$-structurally-symmetric crossing numbers.
\begin{question} Characterize the set of all $k$\textit{-structurally-symmetric} graphs. To this end, what structural properties ensure that a graph is $k$-structurally-symmetric or otherwise?
\end{question}
\begin{question}
Provide a graph for which the difference between $cr_{kss}(G)$ and $cr_k(G)$ is large (or even $>0$). Further, is there an infinite family $(G_n)_{n \geq 1}$ such that $G_{n} \subseteq G_{n+1}$ and $(cr_{kss}(G_n) - cr_{k}(G_{n}))_{n \geq 1} \uparrow \infty$?
\end{question}
\section{Acknowledgements}
This material is based upon work that started at the Mathematics Research Communities workshop ``Beyond Planarity: Crossing Numbers of Graphs", organized by the American Mathematical Society, with the support of the National Science Foundation under Grant Number DMS 1641020.
We would like to extend our thanks to the organizers of the workshop for their commitment to engendering academic growth in young career mathematicians. We are particularly thankful for L\'aszl\'o Sz\'ekely and his exemplary mentoring which made this project possible.
% End of arXiv:1711.01194, "New Bounds on the Biplanar Crossing Number of Low-dimensional Hypercubes" (math.CO, 2017-11-06).

% arXiv:1403.3127
\title{An asymptotic relationship between coupling methods for stochastically modeled population processes}

\begin{abstract}
This paper is concerned with elucidating a relationship between two common coupling methods for the continuous time Markov chain models utilized in the cell biology literature. The couplings considered here are primarily used in a computational framework by providing reductions in variance for different Monte Carlo estimators, thereby allowing for significantly more accurate results for a fixed amount of computational time. Common applications of the couplings include the estimation of parametric sensitivities via finite difference methods and the estimation of expectations via multi-level Monte Carlo algorithms. While a number of coupling strategies have been proposed for the models considered here, and a number of articles have experimentally compared the different strategies, to date there has been no mathematical analysis describing the connections between them. Such analyses are critical in order to determine the best use for each. In the current paper, we show a connection between the common reaction path (CRP) method and the split coupling (SC) method, which is termed coupled finite differences (CFD) in the parametric sensitivities literature. In particular, we show that the two couplings are both limits of a third coupling strategy we call the ``local-CRP'' coupling, with the split coupling method arising as a key parameter goes to infinity, and the common reaction path coupling arising as the same parameter goes to zero. The analysis helps explain why the split coupling method often provides a lower variance than does the common reaction path method, a fact previously shown experimentally.
\end{abstract}

\section{Introduction}
\label{sec:intro}
Models of biochemical reaction networks with stochastic dynamics have become increasingly popular in the science literature over the past fifteen years, where they are often studied via computational methods and, in particular, Monte Carlo methods. These computational methods tend to be extremely expensive and time-consuming without the use of variance reduction techniques. One of the most common ways to achieve a large reduction of variance is to couple two relevant processes in order to increase their covariance. There are three main couplings found in the relevant literature: (i) the use of common random numbers (CRN), (ii) the common reaction path (CRP) coupling \cite{Khammash2010}, and (iii) a \textit{split coupling} (SC) method termed coupled finite differences in the setting of parametric sensitivities \cite{AndCFD2012,AndHigham2012}. It has been observed in the literature that both the CRP and SC couplings are far superior to the CRN coupling in terms of variance reduction \cite{AndCFD2012,Khammash2010,Srivastava2013}. It has also been observed through examples that the SC method tends to perform much better than the CRP method, though some exceptions exist \cite{AndCFD2012,Srivastava2013}. To the best of the authors' knowledge there has to date been no analytical work on understanding the connections between these two couplings. In the present paper we prove that both the CRP and SC couplings arise naturally as different limits of a third family of couplings we term the \textit{local-CRP} coupling. In particular, the CRP coupling arises in the limit in which the local-CRP coupling is as loosely coupled as possible, whereas the SC coupling arises in the limit in which the local-CRP coupling ``recouples'' as often as possible. Such an analysis sheds light on why the split coupling often provides a lower variance than does the CRP coupling.
The outline for the remainder of the paper is as follows. In Section \ref{sec:MathModel}, we formally present the mathematical models considered in this paper, together with a brief description of the computational methods that serve as motivation for the present work. In Section \ref{sec:couplings}, we present the different coupling strategies for the models presented in Section \ref{sec:MathModel}. In Section \ref{sec:Analysis}, we state and prove our main results. In Section \ref{sec:examples}, we provide numerical examples demonstrating our main results, and in Section \ref{sec:discussion} we conclude with some brief remarks.
\section{Mathematical model and motivating computational methods}
\label{sec:MathModel}
Motivated by models in biochemistry, we consider continuous time
Markov chain models in $\mathbb Z^d$, in which the $i$th component of the
process typically represents the number of molecules of ``species''
$i$ present in the system. The transitions of the chain are specified by vectors, $\zeta_k
\in \mathbb Z^d$, for $k \in \{1,\dots,R\}$ with $R < \infty$, determining
the net change in the chain due to the occurrence of a single
``reaction,'' and by the intensity functions $\lambda_k: \mathbb Z^d \to
\mathbb R_{\ge 0}$, which determine the rate at which the different reactions
are occurring.\footnote{Intensity functions are termed ``propensity''
functions in the biochemistry literature.} Specifically, letting $N_k(t)$ be the number of times transition $k \in \{1,\dots,R\}$ has occurred by time $t\ge 0$, we will consider the continuous time Markov chain $X$ satisfying the equation
\begin{equation*}
X(t) = X(0) + \sum_{k = 1}^R N_k(t) \zeta_k,
\end{equation*}
where $N_k$ is a counting process with local intensity function $\lambda_k$. That is, $\{N_k\}$ are the counting processes for which the processes
\[
N_k(t) - \int_0^t \lambda_k(X(s))ds
\]
are local martingales.
One useful representation for the counting processes $N_k(t)$ is via time-changed unit-rate Poisson processes \cite{Anderson2007a,AndKurtz2011,Kurtz86,KurtzPop81},
\[
N_k(t) = Y_k\left( \int_0^t \lambda_k (X(s)) ds\right),
\]
yielding the stochastic equation
\begin{equation}\label{eq:RTC_X}
X(t) = X(0) + \sum_{k = 1}^R Y_k\left( \int_0^t \lambda_k (X(s)) ds\right)\zeta_k
\end{equation}
where $\{Y_k\}_{k = 1}^R$ is a collection of independent unit-rate Poisson processes.
Note that $X$ can also be specified by its infinitesimal generator,
\begin{equation}\label{eq:gen_A}
(Af)(x) = \sum_{k = 1}^R \lambda_k(x) (f(x+\zeta_k) - f(x)),
\end{equation}
where $f$ is any bounded function with compact support.
We denote $Z$ as the process on $\mathbb Z^d$ with the same transition directions $\{\zeta_k\}$ as $X$, but with intensities $\widetilde \lambda_k:\mathbb Z^d \to \mathbb R_{\ge 0}$. That is, $Z$ is the Markov process with infinitesimal generator
\begin{equation}\label{eq:gen_B}
(Bf) (x) = \sum_{k=1}^R \widetilde \lambda_k(x)(f(x+\zeta_k) - f(x)),
\end{equation}
and which satisfies the stochastic equation
\begin{equation}\label{eq:RTC_Z}
Z(t) = Z(0) + \sum_{k = 1}^R Y_k\left( \int_0^t \widetilde \lambda_k (Z(s)) ds\right)\zeta_k,
\end{equation}
where $\{Y_k\}_{k = 1}^R$ is a collection of independent unit-rate Poisson processes. In the remainder of the paper, we consider different ways to couple $X$ and $Z$ and provide an asymptotic relationship between two of the couplings.
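For concreteness, here is a minimal Python sketch (ours, not from the paper) of the standard Gillespie simulation of a chain with generator \eqref{eq:gen_A}, for an illustrative birth-death model with $\zeta_1 = +1$, $\zeta_2 = -1$ and intensities $\lambda_1(x) = \kappa$, $\lambda_2(x) = x$; the model choice and all names are our own assumptions.

```python
# Sketch (ours): Gillespie simulation of a chain with generator A, for a
# birth-death example with zeta_1 = +1, zeta_2 = -1 and intensities
# lambda_1(x) = kappa (birth), lambda_2(x) = x (death).
import random

def simulate(x0, T, kappa=10.0, rng=None):
    rng = rng or random.Random(0)
    t, x = 0.0, x0
    while True:
        rates = [kappa, float(x)]          # lambda_1(x), lambda_2(x)
        total = sum(rates)                 # lambda_0(x) > 0 since kappa > 0
        t += rng.expovariate(total)        # exponential holding time
        if t > T:
            return x
        # jump by zeta_k with probability lambda_k(x) / lambda_0(x)
        x += 1 if rng.random() * total < rates[0] else -1

print(simulate(x0=0, T=50.0))              # stationary law is Poisson(kappa)
```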
\subsection{Motivating computational methods}
We briefly present two computational methods that serve as the motivation for the analysis of the different coupling strategies: finite difference methods for parametric sensitivity analysis and multi-level Monte Carlo for the estimation of expectations.
\subsubsection{Parametric sensitivity analysis}
Suppose that $\{X^\theta\} $ is a parametric family of processes about $\theta$ on
a state space $E$, and $f:E \to \RR$ is some statistic of interest. For example, $f(X(t)) = X_i(t)$ may provide the abundance of species $i$ at time $t \ge 0$. It is common to wish to evaluate
\begin{align}
\begin{split}
\frac{d}{d\theta} \mathbb{E} [f(X^\theta(t))] \approx \frac{\mathbb{E}[f(X^{\theta+ h}(t))] - \mathbb{E}[f(X^{\theta}(t))]}{h}
\end{split} \label{eq:motivation_Derivs}
\end{align}
as a measurement of the sensitivity of $\mathbb{E} [f(X^\theta(t))]$ with respect to $\theta$. Such a strategy is usually called a \textit{finite difference} method.
We would like to empirically evaluate the right-hand side of \eqref{eq:motivation_Derivs} in as efficient a manner as possible. By coupling the processes $(X^{\theta + h}, X^{\theta})$, we may evaluate
\[
h^{-1}\mathbb{E}[f(X^{\theta+ h}(t)) - f(X^{\theta}(t))],
\]
with the magnitude of $\textsf{Var}(f(X^{\theta+ h}(t)) - f(X^{\theta}(t)))$ determining the quality of the coupling. In particular, we wish to minimize $\textsf{Var}(f(X^{\theta+ h}(t)) - f(X^{\theta}(t)))$ without greatly increasing the computational cost of producing realizations of the coupled processes $(X^{\theta + h}, X^{\theta})$.
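The role of the coupling is transparent from the following elementary identity, which is standard and implicit in the discussion above:
\begin{align*}
\textsf{Var}\big(f(X^{\theta+h}(t)) - f(X^{\theta}(t))\big)
= \textsf{Var}\big(f(X^{\theta+h}(t))\big) + \textsf{Var}\big(f(X^{\theta}(t))\big)
- 2\,\textsf{Cov}\big(f(X^{\theta+h}(t)),\, f(X^{\theta}(t))\big).
\end{align*}
The two variances on the right are determined by the marginal laws and are therefore the same for every coupling; only the covariance term depends on the joint construction, and a tight coupling reduces the variance of the estimator precisely by driving this covariance up.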
We explicitly note that in the setting of the previous section, we have
$$\lambda_k(\cdot) = \eta_k(\theta, \cdot) , ~~~~ \widetilde \lambda_k(\cdot) =
\eta_k(\theta + h, \cdot),$$
where for each $k$, $\{\eta_k(\theta, \cdot) : \RR^d \to \RR_{\ge 0}\}$ is a
parametric family of functions about $\theta$. In this case, we have
\[
X = X^{\theta}, \quad \text{and} \quad Z = X^{\theta+h}.
\]
As mentioned in Section \ref{sec:intro}, there has been a large amount of work in the literature on developing good coupling strategies for the estimation of parametric sensitivities via finite differences \eqref{eq:motivation_Derivs}; see, for example, \cite{AndCFD2012,AndSkubak,Khammash2010,Srivastava2013}. To the best of the authors' knowledge there has been no mathematical analysis detailing the connections between the different couplings used, though see the discussion in Section \ref{sec:discussion} for details pertaining to a recent work by Arampatzis and Katsoulakis \cite{Markos}.
\subsubsection{Multi-level Monte Carlo}
In \cite{Giles2008}, Mike Giles introduced the multi-level Monte Carlo (MLMC) method for the approximation of expectations of diffusion processes. Specifically, if $X$ is the diffusion process of interest and $\{Z_\ell\}$ are a family of approximations to $X$, with higher values of $\ell$ corresponding to better approximations, then we observe that for any function $f$ of interest,
\begin{align*}
\mathbb{E} [f(X(t))] &\approx \mathbb{E} [f(Z_L(t))] = \sum_{\ell = 1}^L \mathbb{E}[ f(Z_{\ell}(t)) - f(Z_{\ell-1}(t))] + \mathbb{E} [ f(Z_0(t))],
\end{align*}
where $L$ is chosen large enough so that $\left| \mathbb{E}[ f(X(t))] - \mathbb{E} [f(Z_L(t))]\right|$ is below some target accuracy. It is typical to choose $Z_\ell$ to be the process produced by Euler-Maruyama with a step size of $M^{-\ell}$ for some $M \in \{2,3,\dots,7\}$. If each term $f(Z_{\ell}(t)) - f(Z_{\ell-1}(t))$ is tightly coupled, then the variance of each of the intermediate estimators will be low, thereby moving the computational cost to the lowest level, $\mathbb{E}[ f(Z_0(t))]$, which can be estimated quickly via Euler-Maruyama with large time-steps.
In \cite{AndHigham2012}, Anderson and Higham extended the multi-level Monte Carlo method to the setting of this paper by utilizing the split coupling detailed in Section \ref{sec:couplings}. They further noted that an unbiased estimator can be produced for jump models by coupling the exact process $X$ with the approximate process at the finest time discretization:
\begin{align*}
\mathbb{E} [ f(X(t)) ] =\mathbb{E}[ f(X(t)) - f(Z_L(t))] + \sum_{\ell = 1}^L \mathbb{E}[ f(Z_{\ell}(t)) - f(Z_{\ell-1}(t))] + \mathbb{E}[ f(Z_0(t))],
\end{align*}
where, again, it is the quality of the coupling at each level that determines the overall quality of the method.
We point out that in the diffusive case the most natural coupling is to re-use the driving Brownian path for each of the coupled processes. This is relatively easy to do via the Brownian bridge. However, as will be noted in the next section, there are multiple natural couplings to choose from in the context of jump processes with state dependent intensity functions, and different choices lead to computational methods with vastly different computational complexities and, hence, runtimes.
\section{Different Couplings}
\label{sec:couplings}
We return to the notation introduced at the beginning of Section
\ref{sec:MathModel} and focus our discussion on ways to couple $X$
and $Z$ with intensities $\lambda_k$ and $\widetilde \lambda_k$,
respectively.
\subsection{Split coupling}
We will begin by introducing the split coupling (SC), which first appeared as an analytic tool in \cite{Kurtz82} and later appeared in the context of computational methods in \cite{AndCFD2012,AndersonGangulyKurtz,AndHigham2012,AHS13,Markos}. Let $a\wedge b \overset{\mbox{\tiny def}}{=} \min\{a,b\}$, and let
$\mathcal U$ and $\mathcal V$ be any c\`adl\`ag processes on $\RR^d$. Then for each $k \in \{1,\dots,R\}$ we define the operators $r_{1k},r_{2k},$ and $r_{3k}$ via
\begin{align}
\begin{split}
r_{1k}(\lambda_k, \widetilde \lambda_k, \mathcal U, \mathcal V)(s) &\overset{\mbox{\tiny def}}{=} \lambda_k(\mathcal U(s)) \wedge \widetilde \lambda_k( \mathcal V(s))\\
r_{2k}(\lambda_k, \widetilde \lambda_k, \mathcal U, \mathcal V)(s) &\overset{\mbox{\tiny def}}{=} \lambda_k(\mathcal U(s)) -r_{1k}(\lambda_k, \widetilde \lambda_k, \mathcal U, \mathcal V)(s)\\
r_{3k}(\lambda_k, \widetilde \lambda_k, \mathcal U, \mathcal V)(s) &\overset{\mbox{\tiny def}}{=} \widetilde \lambda_k(\mathcal V(s)) - r_{1k}(\lambda_k, \widetilde \lambda_k, \mathcal U, \mathcal V)(s).
\end{split} \label{couple_rates}
\end{align}
The split coupling of the processes $X$ and $Z$ is then given by
\begin{align}
\begin{split}
X_{\text{sc}}(t) = X(0) + &\sum_{k=1}^R\Bigg\{ Y_{1k} \left(
\int_0^t r_{1k}(\lambda_k, \widetilde \lambda_k, X_{\text{sc}}, Z_{\text{sc}})(s) ds \right) \\
&+ Y_{2k}
\left( \int_0^t r_{2k}(\lambda_k, \widetilde \lambda_k, X_{\text{sc}}, Z_{\text{sc}})(s) ds \right)
\Bigg\} \zeta_k \\
Z_{\text{sc}}(t) = Z(0) + &\sum_{k=1}^R \Bigg\{ Y_{1k} \left( \int_0^t
r_{1k}(\lambda_k, \widetilde \lambda_k, X_{\text{sc}}, Z_{\text{sc}})(s) ds
\right) \\
& + Y_{3k} \left( \int_0^t r_{3k}(\lambda_k, \widetilde \lambda_k, X_{\text{sc}}, Z_{\text{sc}})(s) ds \right) \Bigg\} \zeta_k,
\end{split} \label{eq:split_coupling}
\end{align}
where $\{Y_{1k}\}_{k = 1}^R\cup \{Y_{2k}\}_{k = 1}^R\cup \{Y_{3k}\}_{k = 1}^R$ are mutually independent unit-rate Poisson processes. Note that $X_{\text{sc}}$ and $Z_{\text{sc}}$ share the family of counting processes determined by the Poisson processes $Y_{1k}$. Further note that $(X,Z)$ satisfying the stochastic equation \eqref{eq:split_coupling} is simply a continuous time Markov chain on $\mathbb Z^d\times \mathbb Z^d$ with infinitesimal generator
\begin{align*}
(\mathcal L_{\text{sc}}g)(x,z) &= \sum_{k = 1}^R \min\{\lambda_k(x),\widetilde \lambda_k(z)\} ( g(x+\zeta_k,z+\zeta_k) - g(x,z))\\
&\hspace{.1in} + \sum_{k = 1}^R(\lambda_k(x) - \min\{\lambda_k(x),\widetilde \lambda_k(z)\} ) ( g(x+\zeta_k,z) - g(x,z))\\
&\hspace{.1in}+ \sum_{k = 1}^R(\widetilde\lambda_k(z) - \min\{\lambda_k(x),\widetilde \lambda_k(z)\} ) ( g(x,z+\zeta_k) - g(x,z)),
\end{align*}
where $g: \mathbb Z^d\times \mathbb Z^d\to \mathbb R$ is any bounded function with compact support.
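Since $(X_{\text{sc}}, Z_{\text{sc}})$ is itself a continuous time Markov chain with generator $\mathcal L_{\text{sc}}$, it can be simulated directly by Gillespie's method on the pair chain. The following Python sketch (ours, not from the paper) does so for the illustrative birth-death model with $\lambda_1(x) = \kappa$, $\lambda_2(x) = x$, $\widetilde\lambda_1(z) = \kappa + h$, $\widetilde\lambda_2(z) = z$; the model and all names are our own assumptions.

```python
# Sketch (ours): Gillespie simulation of the split-coupled pair from its
# generator L_sc, for a birth-death model where the two processes differ
# only in the birth rate (kappa vs. kappa + h).
import random

ZETA = [1, -1]                                  # zeta_1, zeta_2

def split_coupled(x0, T, kappa=10.0, h=0.5, rng=None):
    rng = rng or random.Random(1)
    t, x, z = 0.0, x0, x0
    while True:
        lam = [kappa, float(x)]                 # lambda_k(x)
        tlam = [kappa + h, float(z)]            # tilde-lambda_k(z)
        # three channels per reaction k: both jump, only X jumps, only Z jumps
        chans = []
        for k in range(2):
            m = min(lam[k], tlam[k])
            chans += [(m, ZETA[k], ZETA[k]),
                      (lam[k] - m, ZETA[k], 0),
                      (tlam[k] - m, 0, ZETA[k])]
        total = sum(r for r, _, _ in chans)     # positive, since kappa > 0
        t += rng.expovariate(total)
        if t > T:
            return x, z
        u, acc = rng.random() * total, 0.0
        for r, dx, dz in chans:
            acc += r
            if u < acc:
                x, z = x + dx, z + dz
                break

x, z = split_coupled(0, 20.0)
print(x, z)                                     # x and z typically stay close
```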
\subsection{Common random numbers}
In the common random numbers (CRN) coupling, we simply simulate the embedded discrete time Markov chain for each process concurrently with the exponential holding time for each transition. The processes $X$ and $Z$ are then coupled by using (i) the same stream of random variables for the generation of the embedded discrete time chain, and (ii) the same stream of random variables for the exponential holding times.
More explicitly, let $\{U_i\}_{i=0}^\infty$
be a sequence of uniform random variables over the interval $[0,1]$, and let $\eta: \mathbb R_{\ge 0}^R \times [0,1]\to \{\zeta_1,\dots,\zeta_R\}$ be defined via
\[
\eta (c_1, \dots, c_R, u) = \zeta_k ~~~ \textrm{ if } ~~~ \frac{{\sum_{i=1}^{k-1} c_i}}{\sum_{i=1}^R c_i} \leq
u < \frac{{\sum_{i=1}^k c_i}}{\sum_{i=1}^R c_i},
\]
so that for $U$ uniform on $[0,1]$, $\eta(c_1,\dots,c_R,U)$ is a categorical random variable parametrized by $c_1,\dots, c_R$.
Also, let us denote
\begin{equation}
\lambda_0(x) = \sum_{k=1}^R \lambda_k(x) \quad \text{and} \quad \widetilde \lambda_0(x) = \sum_{k=1}^R \widetilde \lambda_k(x).
\end{equation}
Then for a common unit-rate Poisson process $Y$, which will determine the exponential holding times, we consider the following system:
\begin{align}
\begin{split}
R_X(t) &= Y\left( \int_0^t \lambda_0(X_{\text{crn}}(s)) ds \right) \\
R_Z(t) &= Y\left( \int_0^t \widetilde \lambda_0(Z_{\text{crn}}(s)) ds \right) \\
X_{\text{crn}}(t) &= X_{\text{crn}}(0) + \int_0^t \eta (\lambda_1(X_{\text{crn}}(s-)),\dots, \lambda_R(X_{\text{crn}}(s-)), U_{R_X(s-)} ) dR_X(s) \\
Z_{\text{crn}}(t) &= Z_{\text{crn}}(0) + \int_0^t \eta (\widetilde \lambda_1(Z_{\text{crn}}(s-)),\dots,\widetilde\lambda_R(Z_{\text{crn}}(s-)),
U_{R_Z(s-)} ) dR_Z(s),
\end{split}
\end{align}
where we note that the processes share not just the Poisson process $Y$, but also the sequence of uniform $[0,1]$ random variables $\{U_i\}_{i = 0}^\infty$.
The solution to this system exists and is unique by construction \cite{AndKurtz2011,Gill76,Gill77}. We note that while the
representations are different, the marginal processes $X_{\text{crn}}$ and $X_{\text{sc}}$ have the same distribution, while the coupled processes $(X_{\text{crn}}, Z_{\text{crn}})$ and $(X_{\text{sc}}, Z_{\text{sc}})$ obviously do not.
\subsection{Common reaction path coupling and the local common reaction path coupling}
The common reaction path (CRP) coupling arises by simply noting that we may couple the processes \eqref{eq:RTC_X} and \eqref{eq:RTC_Z} via the Poisson processes $\{Y_k\}$. That is, in the CRP coupling $(X_{\text{crp}},Z_{\text{crp}})$ satisfies
\begin{align}
\begin{split}
X_{\text{crp}}(t) = X_{\text{crp}}(0) + \sum_{k=1}^R Y_{k} \left(
\int_0^t \lambda_k(X_{\text{crp}}(s)) ds \right) \zeta_k \\
Z_{\text{crp}}(t) = Z_{\text{crp}}(0) + \sum_{k=1}^RY_{k} \left( \int_0^t \widetilde
\lambda_k(Z_{\text{crp}}(s) ) ds
\right)\zeta_k,
\end{split} \label{CRP}
\end{align}
where the $Y_k$ are independent unit-rate Poisson processes.
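Simulating the CRP coupling requires realizing the shared Poisson paths $Y_k$; one standard way, the modified next reaction method of \cite{Anderson2007a}, tracks for each $k$ the internal time consumed and the next firing time of $Y_k$. The Python sketch below (ours; the birth-death model with $\lambda_1(x) = \kappa$, $\lambda_2(x) = x$ is again only illustrative, and all names are our own) realizes each $Y_k$ lazily from a seeded stream of unit-exponential gaps, so reusing the seeds for both processes is exactly what sharing the $Y_k$ means.

```python
# Sketch (ours): CRP coupling via the modified next reaction method.
# Both processes are driven by the SAME unit-rate Poisson paths Y_k,
# realized lazily as seeded streams of unit-exponential gaps.
import random

def poisson_path(seed):
    rng, times = random.Random(seed), []
    def firing(i):                     # i-th internal firing time of Y_k
        while len(times) <= i:
            gap = rng.expovariate(1.0)
            times.append((times[-1] if times else 0.0) + gap)
        return times[i]
    return firing

def mnrm(x0, T, kappa, paths):
    t, x = 0.0, x0
    Tk = [0.0, 0.0]                    # internal time consumed per reaction
    n = [0, 0]                         # firings of each Y_k used so far
    while True:
        lam = [kappa, float(x)]        # lambda_1(x) = kappa, lambda_2(x) = x
        dt = [(paths[k](n[k]) - Tk[k]) / lam[k] if lam[k] > 0 else float("inf")
              for k in range(2)]
        k = min(range(2), key=dt.__getitem__)
        if t + dt[k] > T:
            return x
        t += dt[k]
        for i in range(2):
            Tk[i] += lam[i] * dt[k]    # advance the internal clocks
        n[k] += 1
        x += [1, -1][k]                # zeta_1 = +1, zeta_2 = -1

# Same seeds => same Poisson paths Y_k for both processes (the CRP coupling).
x = mnrm(0, 20.0, 10.0, [poisson_path(0), poisson_path(1)])
z = mnrm(0, 20.0, 10.5, [poisson_path(0), poisson_path(1)])
print(x, z)
```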
Numerical experiments have shown that this coupling is significantly tighter than the CRN coupling, in that it produces a lower variance between the coupled processes, for many situations \cite{AndCFD2012,Khammash2010,Srivastava2013}. However, the variance between the processes often increases substantially as $t$ grows. In fact, the variance of the relevant estimators oftentimes approaches that of independent realizations of $X$ and $Z$ as $t$ grows towards infinity \cite{AndCFD2012,Srivastava2013}.
We postulate that the variance of the CRP coupling increases in this manner because of its
inability to fix a ``decoupling'' once it occurs. To understand this heuristically, suppose that given $X_{\text{crp}}(t_0)$ and $Z_{\text{crp}}(t_0)$ for some $t_0>0$ we also have
\begin{align}
\int_0^{t_0} \widetilde
\lambda_k(Z_{\text{crp}}(s)) ds \ll \int_0^{t_0} \lambda_k(X_{\text{crp}}(s)) ds \label{trap}
\end{align}
for all $k$. Then the time and type of the next jump of $X_{\text{crp}}$ is nearly uncorrelated from the time and type of the next jump of $Z_{\text{crp}}$. This is true even if $X_{\text{crp}}(t_0)$ and $Z_{\text{crp}}(t_0)$ are very close or even equal.
This problem does not occur with the split coupling since the next jump times of
$X$ and $Z$ are always correlated via the counting processes with
intensity
\[
\lambda_k(X_{\text{sc}}(s)) \wedge \widetilde \lambda_k(Z_{\text{sc}}(s)).
\]
The above discussion motivates us to consider the following modification to the CRP coupling. We discretize $[0,T]$ into multiple subintervals. For each such subinterval we generate the coupled processes using a new set of independent unit-rate Poisson processes and initial conditions given by the values of the processes at the terminal time of the previous subinterval. Note that if the processes $X_{\text{crp}}$ and $Z_{\text{crp}}$ are equal to each other at a transition between subintervals, then the processes will have recoupled.
We will elaborate on this strategy.
Let $\pi = \{0 = s_0 < s_1 < \cdots < s_n = T\}$ be a partition of
$[0,T]$. Also let $\{Y_{km}: k = 1,\dots,R, ~m = 0,1,2,\dots\}$ be a
set of independent, unit-rate Poisson processes.
Then we define the local-CRP coupling over $[0, T]$ with respect to $\pi$ as the solution of
\begin{align}
\begin{split}
X_{\text{crp}}^\pi(t) &= X(0) + \sum_{k =1}^R \sum_{m=0}^{\infty} Y_{km} \left(
\int_{t \wedge s_m}^{t \wedge s_{m+1}} \lambda_k(X^{\pi}_{\text{crp}}(s) ) ds \right)
\zeta_k \\
Z_{\text{crp}}^\pi(t) &= Z(0)+ \sum_{k =1}^R \sum_{m=0}^{\infty} Y_{km} \left(
\int_{t \wedge s_m}^{t \wedge s_{m+1}} \widetilde \lambda_k(Z^{\pi}_{\text{crp}}(s)) ds \right)
\zeta_k .
\end{split} \label{localCRP}
\end{align}
We remark that, irrespective of $\pi$, the marginal distribution of
$X_{\text{crp}}^\pi$ is the same as that of $X$, our process of interest, and the same goes for $Z^{\pi}_{\text{crp}}$ and $Z$.
Also, when $\pi$ is a trivial partition with $n =1$, the coupling \eqref{localCRP}
is precisely the CRP coupling of \eqref{CRP}. In the next section, we
will consider the limit of the family of local-CRP couplings as $n \to \infty$ and prove that, under reasonable conditions, the coupled processes converge weakly to the processes coupled via the split coupling \eqref{eq:split_coupling}.
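The subinterval construction above can be simulated directly. The following sketch (our own illustration, for a single reaction channel) couples two linear birth processes, the model revisited in Example \ref{ex:408957}: within each subinterval both processes consume the same lazily generated stream of unit-rate Poisson points via the random time change, and the stream is discarded and refreshed at every partition point.

```python
import random


def local_crp_birth(theta, h, x0, partition, rng):
    """Local-CRP coupling of two linear birth processes (A -> 2A).

    X has intensity theta*x and Z has intensity (theta + h)*x.  Within each
    subinterval [s_m, s_{m+1}) both processes are driven by one shared
    unit-rate Poisson path, realized lazily as cumulative Exp(1) points and
    discarded (refreshed) at every partition point.
    """
    x, z = x0, x0
    for m in range(len(partition) - 1):
        s_lo, s_hi = partition[m], partition[m + 1]
        points = []  # shared Poisson points of Y_m for this subinterval

        def point(j):
            # Lazily extend the shared unit-rate Poisson path.
            while len(points) <= j:
                prev = points[-1] if points else 0.0
                points.append(prev + rng.expovariate(1.0))
            return points[j]

        def evolve(n, slope):
            # Random time change: the process jumps when its internal
            # clock, integral of slope*n, crosses the next shared point.
            s, t_int, j = s_lo, 0.0, 0
            while True:
                wait = (point(j) - t_int) / (slope * n)
                if s + wait >= s_hi:
                    return n  # stream is discarded at the partition point
                s += wait
                t_int = point(j)
                j += 1
                n += 1  # birth

        x = evolve(x, theta)
        z = evolve(z, theta + h)
    return x, z
```

When $h = 0$ the two processes consume the shared stream identically and remain equal, which is the recoupling mechanism the local-CRP construction is designed to exploit.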
\section{Limit of the local-CRP coupling}
\label{sec:Analysis}
We begin this section by specifying some notation. First, when $X$ and $Z$ are stochastic processes built on the probability space $(\Omega,\mathcal F,P)$, we denote by $X(s,\omega)$ the process $X$ evaluated at time $s$ for a given choice $\omega \in \Omega$. Further, by $(X, Z)(s,\omega)$ we mean $( X(s, \omega), Z(s,\omega)) $, a vector of random
variables evaluated at time $s$. As is usual, we will often omit $\omega$ from the notation when no confusion is expected. Finally,
when $\mathbf{t} = (t_1,\dots,t_K)$ is a $K$-dimensional vector of time points, we denote
\[
X(\mathbf{t}) = [X(t_1), \dots, X(t_K)].
\]
Also, throughout the section, we assume that $X(0) = Z(0)$.
\subsection{Weak convergence of finite dimensional distributions}
We will first articulate what we mean by taking $n\to \infty$ in
the context of the last section.
\begin{defn}
Let $\pi_n = \{0 = s_0 \le s_1 \le \cdots \le s_n= T\} $ be a partition of
$[0,T]$. For $m \in \{0,\dots,n-1\}$ let
$$\Delta_m \pi_n= s_{m+1} - s_m. $$
The mesh of $\pi_n$ is defined as
\[
\text{mesh}(\pi_n) \overset{\mbox{\tiny def}}{=} \max\{ \Delta_m \pi_n : m \in \{0,\dots,n-1\}\}.
\]
\end{defn}
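The mesh just defined is immediate to compute; a small helper (ours, for use in numerical experiments with these couplings) is:

```python
def mesh(partition):
    """Largest gap Delta_m = s_{m+1} - s_m of a partition [s_0, ..., s_n]."""
    return max(b - a for a, b in zip(partition, partition[1:]))
```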
Supposing that $\text{mesh}(\pi_n) \to 0$ as $n \to \infty$, the limit of interest to us is the weak limit of $(X^{\pi_n}_{\text{crp}},Z^{\pi_n}_{\text{crp}})$ as $n \to \infty$. We begin with Proposition \ref{Theorem} showing the weak convergence of $X_{\text{crp}}^{\pi_n}$ to $X_{\text{sc}}$ over finite coordinates as $n\to \infty$. In Subsection \ref{sec:weak_conv} we prove weak convergence at the process level.
\begin{prop}
Suppose that neither of the nominal processes $X$ and $Z$ is explosive and let $(X_{\text{sc}}(t), Z_{\text{sc}}(t))$ be coupled in the way of \eqref{eq:split_coupling}. Let
\[
\pi_n = \{0 = s_0 \leq s_1 \leq \cdots \leq s_{n} =T \}
\]
be a sequence of partitions such that $\text{mesh} (\pi_n) \to 0$, as $n \to \infty$, and for each $n$ let $(X^{\pi_n}_{\text{crp}}(t), Z^{\pi_n}_{\text{crp}}(t))$ be coupled in the way of \eqref{localCRP}.
Then for any $K \in \ZZ_{> 0}$ and
$\mathbf{t} \in [0,T]^K$, and any bounded Lipschitz $f: (\RR^d \times \RR^d)^K \to \RR$,
\[
\mathbb{E}[f((X^{\pi_n}_{\text{crp}} , Z^{\pi_n}_{\text{crp}})( \mathbf{t}))] \rightarrow \mathbb{E}[f((X_{\text{sc}}, Z_{\text{sc}}) (\mathbf{t}))], \quad \text{as } n \to \infty.
\]
\label{Theorem}
\end{prop}
We will briefly outline the proof of Proposition \ref{Theorem}. For a fixed $n$, let
\begin{align}
\{ Y_{ikm}^n; ~~i = 1,2,3, \quad k = 1,\dots,R, \quad m = 0,1,2,... \} \label{OrdSpace}
\end{align}
and
\begin{align}
\{ Y_{km}^n; ~~ k = 1,\dots,R, \quad m = 0,1,2,\dots \} \label{OrdSpace2}
\end{align}
be two sets of independent unit-rate Poisson processes. At this point, we do not make any assumption on the
correlation between the processes in the set \eqref{OrdSpace} and the processes in the set
\eqref{OrdSpace2}, except to note that they will not be independent. In fact, we will construct the Poisson processes of \eqref{OrdSpace} as functions of the Poisson processes of \eqref{OrdSpace2}.
For now, simply consider the processes built using the Poisson processes of \eqref{OrdSpace}
\begin{align}
\begin{split}
X_{\text{sc}}^{\pi_n}(t)= X_{\text{sc}}(0) + &\sum_{m = 0}^{\infty}\sum_{k=1}^R \Bigg\{ Y_{1km}^n \left(
\int_{s_m \wedge t}^{s_{m+1}\wedge t} r_{1k}(\lambda_k, \widetilde \lambda_k, X_{\text{sc}}^{\pi_n}, Z_{\text{sc}}^{\pi_n})(s) ds \right) \\
&+ Y_{2km}^n
\left( \int_{s_m \wedge t}^{s_{m+1}\wedge t} r_{2k}(\lambda_k, \widetilde \lambda_k, X_{\text{sc}}^{\pi_n}, Z_{\text{sc}}^{\pi_n})(s) ds \right)
\Bigg\} \zeta_k \\
Z_{\text{sc}}^{\pi_n}(t) = X_{\text{sc}}(0) + &\sum_{m = 0}^{\infty}\sum_{k=1}^R\Bigg\{ Y_{1km}^n \left(
\int_{s_m\wedge t}^{s_{m+1}\wedge t}
r_{1k}(\lambda_k, \widetilde \lambda_k, X_{\text{sc}}^{\pi_n}, Z_{\text{sc}}^{\pi_n})(s) ds
\right) \\
& + Y_{3km}^n \left( \int_{s_m \wedge t}^{s_{m+1}\wedge t} r_{3k}(\lambda_k, \widetilde \lambda_k, X_{\text{sc}}^{\pi_n}, Z_{\text{sc}}^{\pi_n})(s) ds \right) \Bigg\} \zeta_k,
\end{split} \label{ordinary2}
\end{align}
along with
\begin{align}
\begin{split}
X_{\text{crp}}^{\pi_n}(t) &= X_{\text{crp}}(0) + \sum_{m=0}^{\infty} \sum_{k =1}^R Y_{km}^n \left(
\int_{t \wedge s_m}^{t \wedge s_{m+1}} \lambda_k(X^{\pi_n}_{\text{crp}}(s) ) ds \right)
\zeta_k \\
Z_{\text{crp}}^{\pi_n} (t) &= X_{\text{crp}}(0) + \sum_{m=0}^{\infty} \sum_{k =1}^R Y_{km}^n \left(
\int_{t \wedge s_m}^{t \wedge s_{m+1}} \widetilde \lambda_k(Z^{\pi_n}_{\text{crp}}(s) ) ds \right)
\zeta_k,
\end{split} \label{localCRP2}
\end{align}
which are built with the Poisson processes \eqref{OrdSpace2}.
Note that
$(X_{\text{sc}},Z_{\text{sc}}) \overset{dist}{=} (X_{\text{sc}}^{\pi_n}, Z_{\text{sc}}^{\pi_n})$ irrespective of $n$.
The construction we will employ will allow us to conclude that
$(X_{\text{sc}}^{\pi_n}, Z_{\text{sc}}^{\pi_n})$ and
$(X_{\text{crp}}^{\pi_n},Z_{\text{crp}}^{\pi_n})$ satisfy
\begin{align}
\begin{split}
\lim_{n \to \infty}P\left( \max_{i\in\{1,\dots,K \}}
|(X_{\text{sc}}^{\pi_n}(t_i),Z_{\text{sc}}^{\pi_n}(t_i)) - (X_{\text{crp}}^{\pi_n}(t_i),
Z_{\text{crp}}^{\pi_n}(t_i))| > \gamma\right) = 0
\end{split} \label{Couple}
\end{align}
for any $\gamma > 0$. We can then appeal to a standard Portmanteau
type argument to finish the proof of Proposition \ref{Theorem}: let $\epsilon >0$, and consider any bounded
Lipschitz map $f: (\RR^d \times \RR^d)^K \to \RR$ with Lipschitz constant $L$. Then
\begin{align*}
\begin{split}
&|\mathbb{E} [f( (X_{\text{sc}}, Z_{\text{sc}})(\textbf{t}))] - \mathbb{E}[ f(
(X_{\text{crp}}^{\pi_n},Z_{\text{crp}}^{\pi_n})(\textbf{t}))] | \\
&~~~~~~~~~= | \mathbb{E} [f( (X_{\text{sc}}^{\pi_n},
Z_{\text{sc}}^{\pi_n})(\textbf{t}))]- \mathbb{E} [f( (X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})(\textbf{t}))] |\\
&~~~~~~~~~\leq L K \gamma + 2\|f\|_\infty\, P\left(\max_{i=1,\dots,K } |(X_{\text{sc}}^{\pi_n}(t_i),Z_{\text{sc}}^{\pi_n}(t_i)) - (X_{\text{crp}}^{\pi_n}(t_i), Z_{\text{crp}}^{\pi_n}(t_i))| > \gamma\right),
\end{split}
\end{align*}
where we bounded the difference by $LK\gamma$ on the event that every coordinate difference is at most $\gamma$, and by $2\|f\|_\infty$ on its complement. We can first choose $\gamma < \epsilon/(2LK)$. With this $\gamma$ fixed, we
may choose $n$ large enough so that, by \eqref{Couple}, the second piece is bounded by
$\epsilon/2$, and the claim is achieved.
\vspace{.1in}
We must still describe the specific construction alluded to above that will allow us to conclude \eqref{Couple}.
For each $n$, let
\begin{align}
\{ Y_{km}^n, Y_{ikm}^{n,aug}, i = 1,2,3, ~ k = 1,\dots,R, ~m = 0,1,2,\dots\}, \label{Space}
\end{align}
be independent unit-rate Poisson processes.
We generate $(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})$ up to time $T$ using the processes $Y_{km}^n$ according to \eqref{localCRP2}. We now turn our attention to constructing the required independent unit-rate Poisson processes $Y_{ikm}^n$, and the coupled processes $(X_{\text{sc}}^{\pi_n}, Z_{\text{sc}}^{\pi_n})$ built using them according to \eqref{ordinary2}.
Inductively arguing on $m$, suppose we have already generated $(X_{\text{sc}}^{\pi_n}, Z_{\text{sc}}^{\pi_n})$ given by \eqref{ordinary2} up to time $s_m\ge 0$. We further suppose that we have constructed the relevant Poisson processes $Y_{ik\tilde m}^n$ for all $\tilde m < m$. We must now describe how to construct $Y_{ikm}^n$ for each valid pair $(i,k)$. We define the following random times for each $i\in \{1,2,3\}$ and $k\in \{1,\dots,R\}$:
\begin{align}\label{eq:cT_gen}
\begin{split}
&\mathcal{T}_{ikm} \overset{\mbox{\tiny def}}{=} r_{ik}(\lambda_k, \widetilde \lambda_k, X_{\text{sc}}^{\pi_n}, Z_{\text{sc}}^{\pi_n})(s_m) \cdot \Delta_m(\pi_n)
\end{split}
\end{align}
and
\begin{align*}
T_{km}^{\text{crp}} \overset{\mbox{\tiny def}}{=} \left(\int_{s_m}^{s_{m+1}} \lambda_k(X_{\text{crp}}^{\pi_n}(s) ) ds\right) \vee
\left(\int_{s_m}^{s_{m+1}} \widetilde \lambda_k (Z_{\text{crp}}^{\pi_n}(s) ) ds\right),
\end{align*}
where, as usual, $a \vee b \overset{\mbox{\tiny def}}{=} \max\{a,b\}$, and we recall that $(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})$ has already been generated up to time $T$.
To keep the notation light, we suppress the dependence on $n$ in the random times defined above.
We now define
$Y_{1km}^n$ in the following manner:
\begin{align*}
\begin{split}
Y_{1km}^n(u) &=Y_{km}^n(u) \hspace{1.77in} \textrm{for}~~ u\leq \mathcal{T}_{1km} \\[1ex]
Y_{1km}^n(u) &= Y_{1km}^n (\mathcal{T}_{1km} ) + Y_{1km}^{n,aug}(u-\mathcal{T}_{1km}) ~~~~~~ \textrm{for}~~ u > \mathcal{T}_{1km}.
\end{split}
\end{align*}
Having defined $Y_{1km}^n$, we turn to the construction of $Y_{2km}^n$ and $Y_{3km}^n$, which depends on which of the following two cases holds.
\begin{enumerate}[1.]
\item If $ \widetilde \lambda_k(Z_{\text{sc}}^{\pi_n}(s_m)) \le \lambda_k(X_{\text{sc}}^{\pi_n}(s_m))$, then let $Y_{2km}^n$ satisfy
\begin{align*}
\begin{split}
Y_{2km}^n(u) &= Y_{km}^n(u + \mathcal{T}_{1km}) - Y_{km}^n(\mathcal{T}_{1km}) \hspace{0.77in} \textrm{for}~~
u \leq \mathcal{T}_{2km} \\[1ex]
Y_{2km}^n(u) &= Y_{2km}^n( \mathcal{T}_{2km}) + Y_{2km}^{n,aug}(u-\mathcal{T}_{2km}) \hspace{.56in} \textrm{for}~~ u > \mathcal{T}_{2km},
\end{split}
\end{align*}
and let $Y_{3km}^n(u) = Y_{3km}^{n, aug}(u)$ for all $u\ge 0$.
\item If $ \widetilde \lambda_k(Z_{\text{sc}}^{\pi_n}(s_m)) > \lambda_k(X_{\text{sc}}^{\pi_n}(s_m))$, then let $Y_{3km}^n$ satisfy
\begin{align*}
\begin{split}
Y_{3km}^n(u) &= Y_{km}^n(u + \mathcal{T}_{1km}) - Y_{km}^n(\mathcal{T}_{1km}) \hspace{0.77in} \textrm{for}~~
u \leq \mathcal{T}_{3km} \\[1ex]
Y_{3km}^n(u) &= Y_{3km}^n( \mathcal{T}_{3km}) + Y_{3km}^{n,aug}(u-\mathcal{T}_{3km}) \hspace{.56in} \textrm{for}~~ u > \mathcal{T}_{3km},
\end{split}
\end{align*}
and let $Y_{2km}^n(u) = Y_{2km}^{n, aug}(u)$ for all $u\ge 0$.
\end{enumerate}
\noindent Note that the strong Markov property guarantees that the processes $\{Y_{ikm}^n\}$ so constructed are independent, unit-rate Poisson processes. We then generate $(X_{\text{sc}}^{\pi_n}, Z_{\text{sc}}^{\pi_n})$ between times $s_m$ and $s_{m+1}$ according to \eqref{ordinary2} with the processes $\{Y_{ikm}^n\}$. Note that in so doing, we have also created a coupling between $(X^{\pi_n}_{\text{sc}}, Z^{\pi_n}_{\text{sc}})$ and $(X^{\pi_n}_{\text{crp}}, Z^{\pi_n}_{\text{crp}})$.
Note that for each $i, k$, and $m$, the value $\mathcal{T}_{ikm}$ as defined in \eqref{eq:cT_gen} is an
approximation to
$$T_{ikm}^{\text{sc}} \overset{\mbox{\tiny def}}{=} \int_{s_m}^{s_{m+1}} r_{ik}(\lambda_k,
\widetilde \lambda_k, X_{\text{sc}}^{\pi_n},Z_{\text{sc}}^{\pi_n})(s) \, ds.$$
We would like to make a few observations about this approximation before proceeding further.
\begin{lemma}
Fix $n$, and let $m \in \{0,1,\dots\}$.
If
\begin{align}
\sum_{k=1}^R \sum_{i=1}^3 Y_{ikm}^n(\mathcal{T}_{ikm} \vee T_{ikm}^{\text{sc}}) =1 \label{HalfCondition}
\end{align}
then
there is a unique $j \in \{1,2,3\} $ and $\ell \in \{1, ..., R \} $ for
which
\[
Y_{j \ell m}^n(\mathcal{T}_{j \ell m} \wedge T_{j \ell m}^{\text{sc}}) =1.
\]
\label{LetmeGo}
\end{lemma}
Note the difference between $\wedge$ and $\vee$ in the above statement.
\begin{proof}
For each $(i,k)$, define
\[
Q_{ik}(t) \overset{\mbox{\tiny def}}{=} Y_{ikm}^n \left( \int_{s_m}^{t+s_m} r_{ik}(\lambda_k, \widetilde \lambda_k, X_{\text{sc}}^{\pi_n},
Z_{\text{sc}}^{\pi_n})(s) ds \right),
\]
for $t \ge 0$.
Note that \eqref{HalfCondition} implies that $Y_{j \ell m}^n(\mathcal{T}_{j \ell m}
\vee T_{j \ell m}^{\text{sc}}) = 1$ for some $j$ and $\ell$ and $Y_{i k
m}^n(\mathcal{T}_{i k m}
\vee T_{ik m}^{\text{sc}}) = 0$ for all $(i,k) \neq (j, \ell)$. In particular, this implies $Q_{j \ell}$ is the first one among the set of counting processes $\{Q_{ik} \} $ to jump. (This follows since for all $(i,k)$, $r_{ik}(\lambda_k, \widetilde \lambda_k, X_{\text{sc}}^{\pi_n},
Z_{\text{sc}}^{\pi_n})(s)$ will not change from $r_{ik}(\lambda_k, \widetilde \lambda_k, X_{\text{sc}}^{\pi_n},
Z_{\text{sc}}^{\pi_n})(s_m)$ until the first jump of $(X_{\text{sc}}^{\pi_n},
Z_{\text{sc}}^{\pi_n})$ after time $s_m$.) By the definitions of $\mathcal{T}_{j\ell m}$ and $T^{\text{sc}}_{j \ell m}$, it easily follows that $Y_{j\ell m}^n (\mathcal{T}_{j\ell m} \wedge T^{\text{sc}}_{j\ell m}) = 1$. No other pair $(i,k)$ can satisfy this relation.
\end{proof}
The following is an analogue to Lemma \ref{LetmeGo}.
\begin{lemma}
If $( X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})(s_m) = ( X_{\text{sc}}^{\pi_n}, Z_{\text{sc}}^{\pi_n})(s_m)$ and
\[
\sum_k Y_{km}^n \left( \left( \sum_{i=1}^3 \mathcal{T}_{i k m} \right) \vee T^{\text{crp}}_{km} \right) = 1
\]
then there is a unique $j$ for which
\[
Y_{j m}^n \left( \left( \sum_{i=1}^3 \mathcal{T}_{i j m} \right) \wedge T^{\text{crp}}_{j m} \right) =1.
\]
Further, the first jump time of the Poisson process $Y_{j m}^n$ occurs at some $t_0$ satisfying
\[
t_0 < \left( \lambda_{j}(X_{\text{crp}}^{\pi_n}(s_m)) \vee \widetilde
\lambda_{j}(Z_{\text{crp}}^{\pi_n}(s_m)) \right) \Delta_m.
\]
\label{LetmeGo2}
\end{lemma}
\begin{proof}
Because the two processes are equal at time $s_m$, we have that
\[
\sum_{i=1}^3 \mathcal{T}_{i k m} = \left ( \lambda_{k}(X_{\text{crp}}^{\pi_n}(s_m)) \vee \widetilde
\lambda_{k}(Z_{\text{crp}}^{\pi_n}(s_m)) \right) \Delta_m
\]
for every $k$. As neither $Z_{\text{crp}}^{\pi_n}$ nor $X_{\text{crp}}^{\pi_n}$ changes until the first
firing of $Y_{j m}^n$, the claim follows.
\end{proof}
Based on the last two observations, we have the following lemma which will be useful in proving Proposition \ref{Theorem}.
\begin{lemma}
Fix $n$ and suppose that, for a given $\omega$, the paths of
$(X^{\pi_n}_{\text{sc}},Z^{\pi_n}_{\text{sc}})(\omega)$ and $(X^{\pi_n}_{\text{crp}}, Z^{\pi_n}_{\text{crp}})(\omega)$,
coupled in the way we described above, satisfy
\begin{align}
\begin{split}
H_{m,n}(\omega) \overset{\mbox{\tiny def}}{=} \sum_{k=1}^R \max\left \{ \sum_{i=1}^3
Y_{ikm}^n(\mathcal{T}_{ikm} \vee T_{ikm}^{\text{sc}}), Y_{km}^n\left(\left(
\sum_{i=1}^3 \mathcal{T}_{ikm} \right) \vee T_{km}^{\text{crp}} \right) \right \} \leq 1,
\end{split} \label{Condition}
\end{align}
for all $m$.
Then for all $m = 0,\dots, n$,
$$(X_{\text{sc}}^{\pi_n},Z_{\text{sc}}^{\pi_n})(s_m,\omega) = (X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})(s_m,\omega).$$ \label{Proper}
\end{lemma}
\begin{proof}
We will omit $\omega$ in the expressions. We have
\[
(X^{\pi_n}_{\text{sc}}, Z^{\pi_n}_{\text{sc}})(s_0) = (X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})(s_0)
\]
by assumption.
Arguing inductively, assume that
\begin{align*}
\begin{split}
(X^{\pi_n}_{\text{sc}}, Z^{\pi_n}_{\text{sc}})(s_m) = (X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})(s_m).
\end{split}
\end{align*}
We will show that
$$(X^{\pi_n}_{\text{sc}}, Z^{\pi_n}_{\text{sc}})(s_{m+1}) = (X^{\pi_n}_{\text{crp}}, Z^{\pi_n}_{\text{crp}})(s_{m+1})$$
when \eqref{Condition} holds.
If $H_{m,n} = 0$ for this $m$,
then
\[
(X^{\pi_n}_{\text{sc}}, Z^{\pi_n}_{\text{sc}})(s_{m+1}) = (X^{\pi_n}_{\text{sc}}, Z^{\pi_n}_{\text{sc}})(s_m) = (X^{\pi_n}_{\text{crp}}, Z^{\pi_n}_{\text{crp}})(s_m) = (X^{\pi_n}_{\text{crp}}, Z^{\pi_n}_{\text{crp}})(s_{m+1}),
\]
and there is nothing to show. Therefore we consider the case in which $H_{m,n} =
1$.
More specifically, suppose that for some $k_0$,
$$\max\left \{ \sum_{i=1}^3 Y_{i k_0 m}^n(\mathcal{T}_{i k_0 m} \vee T_{i k_0 m}^{\text{sc}}),
Y_{k_0 m}^n\left( \left(\sum_{i=1}^3 \mathcal{T}_{i k_0 m} \right)\vee T_{ k_0 m}^{\text{crp}} \right) \right \}
=1.$$
This means that, by condition \eqref{Condition},
$$\max\left \{ \sum_{i=1}^3 Y_{i k m}^n (\mathcal{T}_{i k m} \vee T_{i
k m}^{\text{sc}}),
Y_{k m}^n\left( \left(\sum_{i=1}^3 \mathcal{T}_{i k m} \right)\vee
T_{k m}^{\text{crp}} \right) \right \}
=0$$
for all $k \neq k_0$.
Combined with Lemmas \ref{LetmeGo} and \ref{LetmeGo2}, these conditions guarantee that each of the processes $X_{\text{sc}}^{\pi_n}, Z^{\pi_n}_{\text{sc}}, X^{\pi_n}_{\text{crp}}, Z^{\pi_n}_{\text{crp}}$ jumps precisely once in the time interval $[s_m,s_{m+1}]$, and that the jump happens according to reaction channel $k_0$ (see \cite{KoyamaThesis} for more details). That is, we have
\[
X_{\text{sc}}^{\pi_n}(s_{m+1}) =Z_{\text{sc}}^{\pi_n}(s_{m+1}) =X_{\text{crp}}^{\pi_n}(s_{m+1}) =Z_{\text{crp}}^{\pi_n}(s_{m+1}) = X_{\text{sc}}^{\pi_n}(s_{m}) + \zeta_{k_0},
\]
and we are done.
\end{proof}
It is not too difficult to see that if $\lambda_k$ and $\widetilde \lambda_k$ are uniformly bounded for all $k$, then we can
make the condition in Lemma \ref{Proper} hold with a probability greater than
$1-\epsilon$ for any $\epsilon>0$ by setting
$\text{mesh}(\pi_n)$ small enough. Of course, we do not have such a uniform bound on the
intensity functions. Also, note that Lemma \ref{Proper} does \textbf{not} imply that
\[
(X^{\pi_n}_{\text{crp}},Z^{\pi_n}_{\text{crp}})(t) = (X^{\pi_n}_{\text{sc}}, Z^{\pi_n}_{\text{sc}})(t) \text{ for } t \in [s_{m}, s_{m+1}],
\]
even if the conditions of the lemma are met, as the processes may (and most likely will) jump at slightly different times. However, we trivially note that under the conditions of Lemma \ref{Proper},
\begin{equation}\label{eq:245423}
(X^{\pi_n}_{\text{crp}},Z^{\pi_n}_{\text{crp}})(t) = (X^{\pi_n}_{\text{sc}}, Z^{\pi_n}_{\text{sc}})(t) \text{ for all } t \in [s_{m}, s_{m+1}]
\end{equation}
if neither $(X^{\pi_n}_{\text{crp}},Z^{\pi_n}_{\text{crp}})$ nor
$(X^{\pi_n}_{\text{sc}},Z^{\pi_n}_{\text{sc}})$ jump at all in $[s_m, s_{m+1}]$.
We are now in a position to prove Proposition \ref{Theorem}.
\begin{proof} [Proof of Proposition \ref{Theorem}]
We first recall that $\textbf{t} = (t_1,\dots,t_K)$ for some $K\in \{1,2,\dots\}$. Next, we define
\[
K_0^n \overset{\mbox{\tiny def}}{=} \{m \in \{ 0,...,n-1\} ~ ; ~ \{t_j\}_{j=1}^K \cap [s_{m} , s_{m+1}) \neq \emptyset \}.
\]
Fix $\epsilon>0$. As we remarked around \eqref{Couple}, it suffices to show that, for large enough $n$,
\[
P\left( \max_{i=1,\dots,K }|(X_{\text{sc}}^{\pi_n}(t_i),Z_{\text{sc}}^{\pi_n}(t_i)) - (X_{\text{crp}}^{\pi_n}(t_i),
Z_{\text{crp}}^{\pi_n}(t_i))| > 0\right) < \epsilon,
\]
where we converted the $\gamma$ in \eqref{Couple} to a zero as our processes take values in $\mathbb Z^d$.
We will resort to a localization
argument and take advantage of the fact that $X$ and $Z$ are both
nonexplosive.
Let $M > 0$, and let $H_{m,n}$ be defined as in Lemma \ref{Proper}.
Define
\begin{align}
\begin{split}
A_n(\mathbf{t}) &\overset{\mbox{\tiny def}}{=} \left\{ \omega :
H_{m,n}(\omega) \leq 1 \textrm{~if } m
\not \in K_0^n \text{ and } H_{m,n}(\omega)= 0 \textrm{~ if } m \in K_0^n\right\},
\end{split}
\end{align}
and
\begin{align}
\begin{split}
B_{M,n} \overset{\mbox{\tiny def}}{=} &\{ \omega: \max_{k\le R}\max\{ \sup_{s\leq T} \lambda_k(X^{\pi_n}_{\text{sc}}(s)),
\sup_{s\leq T} \widetilde \lambda_k(Z^{\pi_n}_{\text{sc}}(s)), \sup_{s\leq T}\lambda_k(X^{\pi_n}_{\text{crp}}(s)),
\sup_{s\leq T}\widetilde \lambda_k(Z^{\pi_n}_{\text{crp}}(s))\} \leq M \}.
\end{split}
\end{align}
Note that by the non-explosivity of the processes, the suprema appearing above are attained.
By Lemma \ref{Proper} and the arguments in and around \eqref{eq:245423}, we have that
\[
A_n(\mathbf{t}) \subset \{(X^{\pi_n}_{\text{sc}},Z^{\pi_n}_{\text{sc}})(\mathbf{t}) = (X^{\pi_n}_{\text{crp}}, Z^{\pi_n}_{\text{crp}})(\mathbf{t})\}.
\]
Therefore
\begin{align}
P( (X^{\pi_n}_{\text{sc}},Z^{\pi_n}_{\text{sc}})(\mathbf{t}) \neq (X^{\pi_n}_{\text{crp}}, Z^{\pi_n}_{\text{crp}})(\mathbf{t}) ) &\leq P(A_n^C(\mathbf{t})) \notag \\
&= P(A_n^C(\mathbf{t}) \cap B_{M,n}) + P(A_n^C (\mathbf{t})\cap B_{M,n}^C). \label{Ineq}
\end{align}
We handle the two pieces on the right hand side of \eqref{Ineq} separately.
For the second term in \eqref{Ineq}, we first note that
\begin{align*}
\begin{split}
B_{M,n}^C \subset \bigcup_{k=1}^R \Big( \{\sup_{s\leq T} \lambda_k(X^{\pi_n}_{\text{sc}} (s)) >M\} &\cup \{\sup_{s\leq
T}\widetilde \lambda_k(Z^{\pi_n}_{\text{sc}}(s)) > M \} \cup \\
&\{\sup_{s\leq T} \lambda_k(X^{\pi_n}_{\text{crp}}(s)) >M\} \cup
\{\sup_{s\leq T}\widetilde \lambda_k(Z_{\text{crp}}^{\pi_n}(s)) >M\}\Big).
\end{split}
\end{align*}
Now, recall that the marginal distributions of $X^{\pi_n}_{\text{crp}}$ and $X^{\pi_n}_{\text{sc}}$ are the same as the
marginal distribution of $X$, and that the same goes for $Z^{\pi_n}_{\text{crp}}$ and $Z^{\pi_n}_{\text{sc}}$
compared with $Z$. Therefore, for all $n$ we have
\begin{align}
P(B_{M,n}^C) \leq 2 \sum_{k=1}^R \left [ P(\sup_{s \leq T} \lambda_k(X(s)) > M) + P(\sup_{s \leq T}
\widetilde \lambda_k(Z(s)) > M) \right ] . \label{Monotone}
\end{align}
By the monotone convergence theorem and the fact that the processes
are all non-explosive, the right-hand side of \eqref{Monotone} tends to $0$ as $M \to \infty$.
Therefore, we can take $M$ large enough so
that the second piece of \eqref{Ineq} is smaller than $\epsilon /2$.
We fix this $M$, and turn attention to the first term on the right hand side of \eqref{Ineq}.
We consider the localized version of $H$. In particular, for our fixed $M>0$
let
\[
H_{m,n}^M(\omega) \overset{\mbox{\tiny def}}{=} \sum_{k=1}^R \max\left \{ \sum_{i=1}^3
Y_{ikm}^n(M\Delta_m(\pi_n)), Y_{km}^n\left( 3M\Delta_m(\pi_n) \right)
\right \}.
\]
Then it is clear that, for any $q> 0, $
\[
\left\{ \{ H_{m,n} > q\} \cap B_{M,n} \right\} \subset \left\{ \{
H_{m,n}^M > q\} \cap B_{M,n} \right\} \subset \{H_{m,n}^M > q\}
\]
and therefore
\begin{align}
P(A_n^C(\mathbf{t}) & \cap B_{M,n}) \notag \\
&\leq P( H_{m,n}^M > 1\textrm{ for some } m \not \in K_0^n \textrm{~~\textbf{OR}~~} H_{m,n}^M >0 \textrm{ for some } m \in K_0^n
) \notag \\
& \leq \sum_{m \not \in K_0^n} P(H_{m,n}^M > 1) + \sum_{m \in K_0^n} P(H_{m,n}^M > 0).\label{eq:45873450}
\end{align}
To handle these two pieces, we recall two basic facts pertaining to Poisson random variables. First, if $W(\Lambda) \sim \text{Poisson}(\Lambda)$, then
\begin{align*}
P (W(\Lambda) > 1) = &1- \exp(-\Lambda) (1 +\Lambda) \\
\leq & 1- (1- \Lambda) (1 +\Lambda) \\
= &\Lambda^2,
\end{align*}
where we used the inequality $\exp(-x)\ge 1- x$. Second, and using the same inequality,
\begin{align*}
P (W(\Lambda) > 0) &= 1- \exp(-\Lambda) \leq \Lambda.
\end{align*}
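These two elementary bounds, $P(W(\Lambda)>1)\le \Lambda^2$ and $P(W(\Lambda)>0)\le \Lambda$, are easy to verify numerically; the snippet below (an illustration, not part of the proof) checks them against the exact Poisson probabilities.

```python
import math


def poisson_tail_bounds_hold(lam):
    """Check P(W > 1) <= lam**2 and P(W > 0) <= lam for W ~ Poisson(lam)."""
    p_gt0 = 1.0 - math.exp(-lam)              # exact P(W > 0)
    p_gt1 = 1.0 - math.exp(-lam) * (1.0 + lam)  # exact P(W > 1)
    return p_gt1 <= lam ** 2 and p_gt0 <= lam
```

Both inequalities follow from $e^{-x} \ge 1-x$, so they hold for every $\Lambda \ge 0$, not only for small $\Lambda$.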
Now note that
\[
P( \{ H_{m,n}^M > q\}) \leq P(W(6RM \Delta_m (\pi_n)) > q ).
\]
Hence, if $\text{mesh}(\pi_n) = \delta_n$
then by the two facts above and \eqref{eq:45873450}, we have
\begin{align}
P(A^C_n(\mathbf{t}) \cap B_{M,n}) &\leq \sum_{m \not \in K_0^n} (6RM \Delta_m (\pi_n))^2 + \sum_{m \in K_0^n} (6RM \Delta_m (\pi_n)) \notag\\
&\leq (6RM)^2 \delta_n \sum_{m \not \in K_0^n} \Delta_m(\pi_n)+ 6RM|K_0^n|
\delta_n \notag\\
&\leq (6RM)^2 \delta_n T + 6RM |K_0^n|
\delta_n,\label{Last}
\end{align}
where we used that $\Delta_m(\pi_n) \le \delta_n$ for every $m$ (the definition of the mesh) and that $\sum_{m} \Delta_m(\pi_n) = T$.
We can now take $n$ large enough so that \eqref{Last} is less
than $\epsilon/2$. Collecting the above, we may now conclude that for such $n$,
$$P( |( X^{\pi_n}_{\text{sc}},Z^{\pi_n}_{\text{sc}})(\mathbf{t}) - (X^{\pi_n}_{\text{crp}}, Z^{\pi_n}_{\text{crp}})(\mathbf{t})| > 0 ) <
\epsilon,$$
as required.
\end{proof}
The following is an immediate corollary to Proposition \ref{Theorem}.
\begin{cor}
Let $s= \{s_0 < s_1 < s_2 < \cdots < s_{m_1}\} $ and $t = \{t_0 < t_1 <
t_2 < \cdots < t_{m_2}\}$. Let $f_i : \RR^d \to \RR,$ $i = 0, \dots, m_1$, and $g_j : \RR^d \to \RR,$ $j = 0, \dots, m_2,$
be bounded and continuous functions on $\RR^d$, and assume the
conditions set forth in Proposition \ref{Theorem}. Then
\[
\mathbb{E}\left[\prod_{i=0}^{m_1} f_i(X_{\text{crp}}^{\pi_n}(s_i)) \prod_{j=0}^{m_2}g_j( Z_{\text{crp}}^{\pi_n}(t_j)) \right] \to \mathbb{E} \left[ \prod_{i=0}^{m_1} f_i(X_{\text{sc}}(s_i)) \prod_{j=0}^{m_2}g_j( Z_{\text{sc}}(t_j))\right], \ \text{as } n \to \infty.
\]
\label{Cor}
\end{cor}
Of course, we hope that Proposition \ref{Theorem} together with Corollary \ref{Cor} imply the
weak convergence of $(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})$ to $(X_{\text{sc}}, Z_{\text{sc}})$ at the process level. Since it is natural to view $(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})$ as taking values in $\mathbb R^{2d}$, we would ideally like to show that $(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n}) \Rightarrow (X_{\text{sc}}, Z_{\text{sc}})$ weakly as stochastic processes with state space $\mathbb R^{2d}$. For such convergence to hold, we require the laws of
$\{ (X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})\}$ to be relatively compact (i.e., every sequence has a weakly convergent subsequence).
Unfortunately, and perhaps surprisingly, this is not the case, as we now show.
The following result is Theorem 7.2 on page 128 of \cite{Kurtz86}.
Following the notation in \cite{Kurtz86}, when $E$ is a metric space we let $D_E[0, \infty)$ be the set of all c\`adl\`ag functions from $[0,\infty)$ to $E$.
\begin{thm}
Let $(E,r)$ be a complete and separable metric space, and let $\{X_n\}$ be a family of processes with sample paths in $D_E[0,\infty)$ endowed with the Skorohod metric. Then $\{X_n\}$ is relatively compact if and only if the
following two conditions hold:
\begin{enumerate}[1.]
\item For each $\eta>0$ and rational $t\ge 0$, there exists a compact set $\Gamma_{\eta,t} \subset E$ such that
$$\inf_n P (X_n(t) \in \Gamma_{\eta, t}) \geq 1- \eta. $$
\item For every $\eta >0$ and $T>0$, there exists $\delta > 0$ such that
$$\sup_{n} P(w'(X_n, \delta, T) \geq \eta) < \eta,$$
where
$$w'(X, \delta, T) \overset{\mbox{\tiny def}}{=} \inf_{\pi} \max_i \sup_{a, b \in [t_i, t_{i+1})}
|X(a)- X(b)|, $$
with $\pi$ ranging over all partitions $\{t_i\}$ of $[0,T]$ satisfying $t_{i+1} - t_{i}
> \delta$ for all $i\ge 0$.
\end{enumerate} \label{Precompact}
\end{thm}
Unfortunately the conditions of Theorem \ref{Precompact} do not
hold in general for our set of processes $\{ (X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})\} $ over the Skorohod space $D_{\RR^{2d}} [ 0, \infty)$.
To see this, we note the following two facts:
\begin{enumerate}[1.]
\item For jump processes whose jump sizes are bounded below, as in our present setting where every jump has magnitude at least one, for small enough $\eta>0$ we have
\[
\{w'((X,Z),\delta,T)<\eta\} = \{w'((X,Z), \delta, T)= 0\}.
\]
\item The event $\{ w'((X,Z), \delta, T)= 0\}$ occurs if and only if
the minimum time between jumps of $(X, Z)$ is greater than $\delta$.
\end{enumerate}
To understand the second statement, simply note that if the minimum time between jumps is less than $\delta$, then for any partition $\pi$ satisfying $t_{i+1}-t_i>\delta$ for all $i$, the process must change by at least the smallest jump size ($\min_k |\zeta_k|$ in our case) in \textit{some} interval of the partition. Conversely, if the minimum holding time of the process is greater than $\delta$, then we achieve a value of $0$ for $w'$ by choosing $\pi$ so that the jump times correspond with a subset of the partition times $t_i$.
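The reduction just described can be made concrete for a single integer-valued jump path recorded by its jump times. The helper below is our own sketch of that equivalence, ignoring boundary effects at $0$ and $T$:

```python
def w_prime_is_zero(jump_times, delta, T):
    """For an integer-valued jump path on [0, T], w'(X, delta, T) = 0
    iff consecutive jumps in (0, T) are separated by more than delta,
    so that a partition with spacing > delta can isolate each jump.
    """
    times = sorted(t for t in jump_times if 0.0 < t < T)
    return all(b - a > delta for a, b in zip(times, times[1:]))
```

For the coupled pair $(X,Z)$, the jump times to feed in are those of the joint process, which is exactly why two nearly simultaneous but unequal jump times force $w' \ge 1$ below.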
The following example explicitly shows that the conditions of Theorem \ref{Precompact} fail for our choice of $\{ (X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})\} $ with $E = \RR^{2d}$. Essentially the same argument works for any model considered in this paper.
\begin{example}\label{ex:408957}
Consider the chemical reaction network
\begin{align*}
A \to 2A
\end{align*}
which models increases in $A$ as a counting process with a linear intensity (i.e. a linear birth process). We consider the corresponding coupled processes $(X_{\text{sc}},Z_{\text{sc}})$ and $(X_{\text{crp}}^{\pi_n},Z_{\text{crp}}^{\pi_n})$ with
\[
\lambda_1(x) = \theta x, ~~~~~ \widetilde \lambda_1(x) = (\theta + h)x,
\]
and initial condition
\[
X_{\text{sc}}(0) = Z_{\text{sc}}(0) =X_{\text{crp}}^{\pi_n}(0) = Z_{\text{crp}}^{\pi_n}(0) >0.
\]
For any $\delta>0$, the probability that the processes $X_{\text{sc}}$ and $Z_{\text{sc}}$ jump simultaneously in the time period $[0,\delta]$ \textit{and} that their simultaneous jump is the first jump for both processes is
\[
\alpha_\delta \overset{\mbox{\tiny def}}{=} \frac{\theta}{\theta+h} \left( 1 - e^{-(\theta + h)X(0) \delta}\right)>0.
\]
By the arguments in the proof of Proposition \ref{Theorem}, for any $\epsilon >0$ there exists some $M_\epsilon$
such that if $n > M_\epsilon$, then with probability greater than $\alpha_\delta -
\epsilon$, both $X_{\text{crp}}^{\pi_n}$ and $Z_{\text{crp}}^{\pi_n}$ will also make
a first jump in $[0,\delta]$. However, with probability one, $X_{\text{crp}}^{\pi_n}$ and $Z_{\text{crp}}^{\pi_n}$ jump at different times.
Hence, when they jump in the time interval $[0,\delta)$, we have
\begin{align*}
\sup_{a,b\in [0,\delta)} | (X^{\pi_n}_{\text{crp}},Z^{\pi_n}_{\text{crp}})(a) - (X^{\pi_n}_{\text{crp}},Z^{\pi_n}_{\text{crp}})(b)| \ge 1.
\end{align*}
This in particular means that for
any $0< \eta < 1$,
\[
\sup _n P (w'( (X_{\text{crp}}^{\pi_n},Z_{\text{crp}}^{\pi_n}) , \delta, T) \geq \eta)
\geq \alpha_\delta,
\]
and the laws of $\{ (X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})\}$ fail to be relatively compact.
\end{example}
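The lower bound $\alpha_\delta$ in the example is explicit and simple to evaluate; the helper below (our own, following the formula in the example) computes it and lets one confirm that it is positive and increasing in $\delta$.

```python
import math


def alpha_delta(theta, h, x0, delta):
    """Probability, under the split coupling of the birth example, that the
    first jump of (X_sc, Z_sc) occurs before time delta and is shared:
    alpha = theta/(theta + h) * (1 - exp(-(theta + h) * x0 * delta)).
    """
    total = theta + h
    return theta / total * (1.0 - math.exp(-total * x0 * delta))
```

As $\delta \to \infty$ the value increases to $\theta/(\theta+h)$, the probability that the first jump of the coupled pair is a shared one.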
\subsection{Weak convergence in the product Skorohod topology}
\label{sec:weak_conv}
Example \ref{ex:408957} demonstrates that the measures induced by $(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})$ on $D_{\mathbb R^{2d}}[0,\infty)$ are not relatively compact. Hence, the processes $(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})$ do not converge weakly to $(X_\text{sc},Z_{\text{sc}})$ in $D_{\mathbb R^{2d}}[0,\infty)$. However, in this section we demonstrate that there is convergence in
$$\mathcal{D} :=D_{\RR^d}[0, \infty) \times D_{\RR^d}[0, \infty)$$
endowed with the product Skorohod topology.
As is usual, the main work that remains to be done is in showing that $\{ (X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})\}$ is relatively compact in the appropriate topological space.
\begin{prop}\label{prop:30498}
Let $\mathcal{D} \overset{\mbox{\tiny def}}{=} D_{\RR^d}[0, \infty) \times D_{\RR^d}[0, \infty)$, with the product Skorohod topology. The family of processes $\{ (X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})\}$ is relatively compact in $\mathcal{D}$.
\end{prop}
\begin{proof}
By Theorem 2.2 on page 104 of \cite{Kurtz86}, it is enough to show that
for any $\epsilon > 0$,
there exists a compact set $C^\epsilon \subset \mathcal{D}$ such that
$$ \inf_n P ((X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n}) \in C^\epsilon) > 1- \epsilon.$$
To show this, we consider the marginal processes, which we recall
satisfy $X \overset{dist}{=} X_{\text{crp}}^{\pi_n}$ and $ Z \overset{dist}{=} Z_{\text{crp}}^{\pi_n}$ for each $n \ge 1$. Note that if $A^\epsilon, B^\epsilon \subset D_{\RR^d}[0,\infty)$ are compact, then the inequalities
\begin{align}
\begin{split}
P (X \in A^\epsilon) &= P (X_{\text{crp}}^{\pi_n} \in A^\epsilon) > 1- \frac{\epsilon}2\\
P (Z \in B^\epsilon)&= P (Z_{\text{crp}}^{\pi_n} \in B^\epsilon) > 1- \frac{\epsilon}2
\end{split}
\label{eq:490875}
\end{align}
imply the inequality
\[
P ((X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n}) \in A^\epsilon\times B^\epsilon) > 1- \epsilon,
\]
with $A^\epsilon\times B^\epsilon$ compact in $\mathcal{D}$.
Hence, it is sufficient to prove the pair of inequalities \eqref{eq:490875} for the marginal processes, which live in $D_{\mathbb R^d}[0,\infty)$. These inequalities hold so long as the marginal processes are tight in $D_{\mathbb R^d}[0,\infty)$, and so Theorem \ref{Precompact} may be used.
Therefore it
suffices to show that $X$ and $Z$ both separately satisfy the conditions in
Theorem \ref{Precompact}, which we do now.
Since $X$ is a nonexplosive pure jump process, it clearly
passes the first condition of Theorem \ref{Precompact}. Also, recall that $X$ is constructed with $R\in \mathbb Z_{>0}$ Poisson processes, one for each jump direction. Then for any $T >0$ and $M>0$,
\begin{align}
\begin{split}
P(w'(X, \delta, T) > 0 ) &\leq P\left(w'(X, \delta, T) > 0, \sup_{k=1,\dots,R,\, s < T} \lambda_k(X(s)) \leq M\right)\\
&\hspace{.3in} + P\left( \sup_{k=1,\dots,R,\, s <T} \lambda_k(X(s)) > M \right) \\
&\le P \left(w'(Y(MR \cdot), \delta, T) > 0\right) + P\left( \sup_{k=1,\dots,R,\, s <T}
\lambda_k(X(s)) > M\right)
\end{split}
\end{align}
where $Y(M R\cdot)$ is a Poisson process with rate $MR$. Since $X$ is nonexplosive, we
may take $M$ large enough to control the second piece, and for this
$M$ we can choose $\delta$ small enough to control the first
piece. That is,
$\lim_{\delta \to 0 }P( w'(X, \delta, T) > 0 ) = 0$.
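One way to make this last limit quantitative (a sketch, ignoring the boundary behavior of $w'$ at $0$ and $T$): if $w'(Y(MR\,\cdot), \delta, T) > 0$, then $Y(MR\,\cdot)$ has two jumps within $\delta$ of each other, and any such pair of jumps lies in a common window $[m\delta, (m+2)\delta)$ for some $0 \le m \le \lceil T/\delta\rceil$. A union bound then gives
\begin{align*}
P\left(w'(Y(MR\,\cdot), \delta, T) > 0\right) \le \left(\left\lceil \tfrac{T}{\delta}\right\rceil + 1\right)\left(1 - e^{-2MR\delta}(1 + 2MR\delta)\right) \le \left(\left\lceil \tfrac{T}{\delta}\right\rceil + 1\right)\frac{(2MR\delta)^2}{2},
\end{align*}
which is of order $\delta$ for fixed $M$, $R$, and $T$.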
This tells us that $X$ also passes the second condition of Theorem \ref{Precompact}. The same procedure works for $Z$. Thus, $\{ (X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n}) \} $ is relatively compact in $\mathcal{D}$ with the product
topology.
\end{proof}
With this proposition in hand, we can prove the main result of our paper.
\begin{thm}
Suppose $X$ and $Z$ are both non-explosive, c\`adl\`ag processes as given above.
Let $D_{\RR^d}[0, \infty)$ be the Skorohod Space as defined in \cite{Kurtz86}. Consider the
product topology on
$$\mathcal{D} :=D_{\RR^d}[0, \infty) \times D_{\RR^d}[0, \infty).$$
Also, let $\pi_n = \{s_j^n\} $ be a sequence of partitions of
$[0,\infty)$ such that
\[
\text{mesh}(\pi_n) = \sup_{j \ge 1} (s_{j}^n - s_{j-1}^n) \to 0, \quad \text{as } n \to \infty.
\]
Then for all $f : \mathcal{D} \to \RR$ that are bounded and
continuous,
\[
\mathbb{E}[f(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})] \to \mathbb{E}[f(X_{\text{sc}}, Z_{\text{sc}})], \ \ \ \text{as } n \to \infty.
\]
That is, $(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n} ) \to (X_{\text{sc}}, Z_{\text{sc}})$, as $n \to \infty$, weakly in the product Skorohod topology.
\label{Goal}
\end{thm}
We would like to emphasize that the test function $f$ considered above maps a path in $\mathcal{D}$ to $\RR$. The test functions for Proposition \ref{Theorem}, on the other hand, are evaluated at discrete time points.
Now we put everything together to prove Theorem
\ref{Goal}.
\begin{proof}[Proof of Theorem \ref{Goal}]
By Proposition \ref{prop:30498} it is sufficient to show that every convergent (in distribution) subsequence of $(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n} )$ converges in distribution to $(X_{\text{sc}}, Z_{\text{sc}})$.
By Corollary \ref{Cor}, it is sufficient to show that if
\begin{align}
\mathbb{E} \left[ \prod_{i=0}^{m_1} f_i(X_{\text{sc}}(s_i)) \prod_{j=0}^{m_2}g_j( Z_{\text{sc}}(t_j))
\right] &= \mathbb{E} \left[ \prod_{i=0}^{m_1} f_i(X^*(s_i)) \prod_{j=0}^{m_2}g_j( Z^*(t_j))
\right]
\label{marginal}
\end{align}
for all $\{ s_i\}, \{ t_j\} \subset [0,\infty),$ and $f_i, g_j \in \overline C(\RR^d)$ (bounded and continuous functions),
then $\mathbb{E}[h(X_{\text{sc}}, Z_{\text{sc}})]= \mathbb{E}[h(X^*, Z^*)]$ for any bounded and continuous function $h: \mathcal{D} \to \mathbb R$. A standard monotone class argument (for example, see page 132 in \cite{Kurtz86}) shows that \eqref{marginal} is more than enough to guarantee that $\mathbb{E}[h(X_{\text{sc}}, Z_{\text{sc}})] = \mathbb{E}[h(X^*, Z^*)]$ for all
$h$ continuous with respect to $D_{\RR^{2d}}[0, \infty).$ From the definition of the Skorohod metric, it is straightforward to show that the topology of $D_{\RR^{2d}}[0, \infty)$ is finer than that of $\mathcal{D}$. This in particular means that the continuous functions with respect to $\mathcal{D}$ are a subset of those of $D_{\RR^{2d}}[0, \infty)$. Thus, we may conclude that $\mathbb{E}[h(X_{\text{sc}}, Z_{\text{sc}})] = \mathbb{E}[h(X^*, Z^*)]$ if $h$ is continuous with respect to $\mathcal{D}$, and the result is shown.
\end{proof}
While the results presented so far pertain to the specific couplings found in the numerical analysis literature, a slightly more general theorem can be proved by following an identical line of reasoning.
\begin{thm}\label{general}
For $i \in \{1,2,3\}$ and $k \in \{1,\dots,R\}$, let $r_{ik}:\RR^d\times \RR^d \to \RR_{\ge 0}$ be a non-negative measurable function. Suppose that $\{\pi_n\}$ is a sequence of partitions of $[0,\infty)$ for which $\text{mesh}(\pi_n) \to 0$, as $n \to \infty$. Define $(X_{\text{sc}},Z_{\text{sc}})$ and $(X^{\pi_n}_{\text{crp}},Z^{\pi_n}_{\text{crp}})$ via
\begin{align*}
X_{\text{sc}}(t) = X(0) + &\sum_{k=1}^R\Bigg\{ Y_{1k} \left(
\int_0^t r_{1k}(X_{\text{sc}}, Z_{\text{sc}})(s) ds \right) + Y_{2k}
\left( \int_0^t r_{2k}(X_{\text{sc}}, Z_{\text{sc}})(s) ds \right)
\Bigg\} \zeta_k \\
Z_{\text{sc}}(t) = Z(0) + &\sum_{k=1}^R \Bigg\{ Y_{1k} \left( \int_0^t
r_{1k}(X_{\text{sc}}, Z_{\text{sc}})(s) ds
\right)
+ Y_{3k} \left( \int_0^t r_{3k}(X_{\text{sc}}, Z_{\text{sc}})(s) ds \right) \Bigg\} \zeta_k,
\end{align*}
and
\begin{align*}
X_{\text{crp}}^{\pi_n}(t) &= X(0) + \sum_{m=0}^{\infty} \sum_{k =1}^R Y_{km}^n \left(
\int_{t \wedge s_m}^{t \wedge s_{m+1}} \{ r_{1k}(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})(s) + r_{2k}(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})(s) \} ds \right)
\zeta_k \\
Z_{\text{crp}}^{\pi_n} (t) &= Z(0) + \sum_{m=0}^{\infty} \sum_{k =1}^R Y_{km}^n \left(
\int_{t \wedge s_m}^{t \wedge s_{m+1}} \{r_{1k}(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})(s) + r_{3k}(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n})(s)\} ds \right)
\zeta_k,
\end{align*}
where all notation is as before.
Finally, we suppose that all processes are non-explosive. Then, $(X_{\text{crp}}^{\pi_n}, Z_{\text{crp}}^{\pi_n} ) \to (X_{\text{sc}}, Z_{\text{sc}})$, as $n \to \infty$, weakly in the product Skorohod topology.
\end{thm}
Theorem \ref{Goal} is therefore a special case of Theorem \ref{general} in which each $r_{ik}$ depends on $\lambda_k$ and $\tilde \lambda_k$ in a specific way.
\section{Numerical examples}
\label{sec:examples}
In this section, we provide two numerical examples demonstrating the convergence of the local-CRP coupling to the split coupling. Motivated by variance reduction, we focus on the variance of the difference between the coupled processes.
\begin{example}\label{example1}
We begin by considering a basic model of gene transcription and translation, which tracks the counts of genes ($G$), mRNA molecules ($M$), and proteins ($P$) in the system. We suppose that the system can undergo the following possible reactions,
\begin{align*}
G &\rightarrow G+M\tag{R1}\\
M &\rightarrow M+P\tag{R2}\\
M &\rightarrow \emptyset\tag{R3}\\
P &\rightarrow \emptyset,\tag{R4}
\end{align*}
where, for example, reaction (R1) implies a net change to the system of one extra mRNA molecule. Since no reaction changes the number of genes present in the system, we may take that to be a fixed quantity. Hence, there are two dynamic components, and the stochastic model for this system is
\begin{align*}
X(t) = X(0) &+ Y_1\left( \int_0^t \lambda_1(X(s)) ds\right) \left[\begin{array}{c} 1 \\ 0\end{array}\right] + Y_2\left( \int_0^t \lambda_2(X(s)) ds\right) \left[\begin{array}{c} 0 \\ 1\end{array}\right]\\
&+ Y_3\left( \int_0^t \lambda_3(X(s)) ds\right) \left[\begin{array}{c} -1 \\ 0\end{array}\right]+ Y_4\left( \int_0^t \lambda_4(X(s)) ds\right) \left[\begin{array}{c} 0\\-1\end{array}\right],
\end{align*}
where $X_1$ counts the number of mRNA molecules, and $X_2$ counts the number of proteins. We now let $X$ be the process with intensity functions
\begin{align*}
\lambda_1(x) = 2, \quad \lambda_2(x) = 10x_1, \quad \lambda_3(x) = (1/4 + 1/80)x_1, \quad \lambda_4(x) = x_2,
\end{align*}
and let $Z$ be the process with intensity functions
\begin{align*}
\lambda_1(x) = 2, \quad \lambda_2(x) = 10x_1, \quad \lambda_3(x) = (1/4 - 1/80)x_1, \quad \lambda_4(x) = x_2.
\end{align*}
These are reasonable choices, for example, if we were attempting to estimate the sensitivity of some statistic with respect to the rate parameter for the third intensity function evaluated at $1/4$.
Let $\pi_n$ be a partition of $[0,30]$ into $n$ equally sized intervals. In Figure \ref{fig:figure1}, we plot numerical estimates of $Var(X_{\text{sc}}(t) - Z_{\text{sc}}(t))$, $Var(X_{\text{crp}}(t) - Z_{\text{crp}}(t)),$ and $Var(X^{\pi_n}_{\text{crp}}(t) - Z^{\pi_n}_{\text{crp}}(t))$, for $n \in \{2,6,30,300\}$, over the time period $[0,30]$. The estimates were achieved via Monte Carlo methods with 10,000 sample paths.
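To make the simulation procedure concrete, the following sketch (our illustration, not the authors' code) applies Gillespie's method to the $3R$ channels of the split coupling (shared, $X$-only, and $Z$-only) for this model; the initial condition $X(0) = Z(0) = (0,0)$ is an assumption, since the example does not specify one.

```python
import random

# Net change vectors for reactions R1-R4 of the transcription/translation model.
ZETA = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def lam(v, c3):
    # Intensities for the model; c3 is the perturbed rate constant of R3.
    return [2.0, 10.0 * v[0], c3 * v[0], float(v[1])]

def split_coupling(T, c3x, c3z, rng):
    """One path of (X_sc(T), Z_sc(T)) via Gillespie's method applied to the
    3R channels of the split coupling: rate min(lam, lam~) moves both
    processes; the residual rates move only one of them."""
    x, z = [0, 0], [0, 0]   # assumed initial condition X(0) = Z(0) = (0, 0)
    t = 0.0
    while True:
        lx, lz = lam(x, c3x), lam(z, c3z)
        channels = []        # (rate, reaction index, jump X?, jump Z?)
        for k in range(4):
            m = min(lx[k], lz[k])
            channels += [(m, k, True, True),
                         (lx[k] - m, k, True, False),
                         (lz[k] - m, k, False, True)]
        total = sum(c[0] for c in channels)   # >= 2 here, since lam_1 = 2
        t += rng.expovariate(total)
        if t > T:
            return tuple(x), tuple(z)
        u, acc = rng.uniform(0.0, total), 0.0
        for rate, k, jx, jz in channels:
            acc += rate
            if u <= acc:
                if jx:
                    x[0] += ZETA[k][0]; x[1] += ZETA[k][1]
                if jz:
                    z[0] += ZETA[k][0]; z[1] += ZETA[k][1]
                break

def var_of_difference(T, paths, seed=0):
    """Monte Carlo estimate of Var(X_sc(T) - Z_sc(T)), componentwise."""
    rng = random.Random(seed)
    pairs = [split_coupling(T, 0.25 + 1/80, 0.25 - 1/80, rng)
             for _ in range(paths)]
    diffs = [(x[0] - z[0], x[1] - z[1]) for x, z in pairs]
    out = []
    for i in range(2):
        mu = sum(d[i] for d in diffs) / paths
        out.append(sum((d[i] - mu) ** 2 for d in diffs) / (paths - 1))
    return out
```

Calling `var_of_difference(30.0, 10000)` would produce the kind of estimate plotted for the split coupling in Figure \ref{fig:figure1}, at the corresponding computational cost.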
\begin{figure}[t]
\begin{center}
\includegraphics[scale = 0.6]{MergedGeneNew.pdf}
\end{center}
\caption{Numerical approximations (via Monte Carlo with 10,000 sample paths) for the variance of the difference between the processes $X$ and $Z$ of Example \ref{example1} for the split coupling (blue), CRP coupling (black), and various local-CRP couplings. Convergence of the variance of the local-CRP coupling to the variance of the split coupling is clear.}
\label{fig:figure1}
\end{figure}
We observe the uniform convergence of $Var(X_{\text{crp}}^{\pi_n}(\cdot) - Z_{\text{crp}}^{\pi_n}(\cdot))$ to $Var(X_{\text{sc}}(\cdot) - Z_{\text{sc}}(\cdot))$ as $\text{mesh}(\pi_n) \to 0$. We also observe a sharp drop in the variance of $X^{\pi_n}_{\text{crp}}(\cdot) - Z_{\text{crp}}^{\pi_n}(\cdot)$ at the ``resetting'' times of the Poisson processes, which occur at the end of each interval of the discretization $\pi_n$.
\end{example}
\begin{example}\label{example2}
Consider a simple quadratic birth and death model
\begin{align*}
\emptyset &\rightarrow 2A\tag{r1}\\
2A &\rightarrow \emptyset\tag{r2}
\end{align*}
with initial count $X(0)$ given by a Poisson random variable with parameter $15$.
We can model the dynamics of this system with the stochastic equations
\[
X(t) = X(0) + 2Y_1\left(\int_0^t \lambda_1(X(s))ds\right) - 2Y_2\left(\int_0^t \lambda_2(X(s))ds\right),
\]
where
\[
\lambda_1(x) = 400, \quad \text{and} \quad \lambda_2(x) = kx(x -1),
\]
and where $k$ is a parameter of the model. We consider the model $X$ with $k = 0.1 + 1/25$ and the model $Z$ with $k = 0.1 - 1/25$. Further, we let the initial conditions of $X$ and $Z$ be independent Poisson random variables with parameter 15 (that is, the initial conditions of $X$ and $Z$ are independent of each other). Let $\pi_n$ be a partition of $[0,1]$ into $n$ equally sized intervals. In Figure \ref{figure2}, we plot numerical estimates of $Var(X_{\text{sc}}(t) - Z_{\text{sc}}(t))$, $Var(X_{\text{crp}}(t) - Z_{\text{crp}}(t)),$ and $Var(X^{\pi_n}_{\text{crp}}(t) - Z^{\pi_n}_{\text{crp}}(t))$, for $n \in \{2,4,8,100\}$, over the time period $[0,1]$. The estimates were obtained via Monte Carlo methods with 5,000 sample paths. We again observe the sharp drop in variance at the ``resetting'' times of the processes.
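The local-CRP processes themselves can be simulated with a modified next reaction method in which $X$ and $Z$ consume the same unit-rate streams, regenerated at each partition point. The sketch below is our illustration (the function names are ours), specialized to this birth--death model; taking $n = 1$ recovers the plain CRP coupling on $[0,T]$.

```python
import random

def intensities(x, k):
    # Birth rate 400 and quadratic death rate k*x*(x-1).
    return [400.0, k * x * (x - 1)]

def local_crp_pair(T, n, kx, kz, x0, z0, rng):
    """One path of (X_crp^{pi_n}(T), Z_crp^{pi_n}(T)) for the birth-death
    model, on the partition of [0, T] into n equal intervals.  Both
    processes consume the same unit-rate Poisson firing points, which are
    regenerated ("reset") at each partition point."""
    zeta = [2, -2]
    x, z, t = x0, z0, 0.0
    for m in range(1, n + 1):
        s_end = m * T / n
        # Fresh shared unit-rate streams for this subinterval.
        pts = [[rng.expovariate(1.0)] for _ in range(2)]
        Tx, Tz = [0.0, 0.0], [0.0, 0.0]   # internal time consumed
        ix, iz = [0, 0], [0, 0]           # next-point indices
        while True:
            lx, lz = intensities(x, kx), intensities(z, kz)
            # Real time until each process reaches its next shared point.
            best, who, kk = s_end - t, None, None
            for k in range(2):
                for rate, Tk, idx, tag in ((lx[k], Tx, ix, "x"),
                                           (lz[k], Tz, iz, "z")):
                    if rate > 0.0:
                        while idx[k] >= len(pts[k]):   # extend stream lazily
                            pts[k].append(pts[k][-1] + rng.expovariate(1.0))
                        dt = (pts[k][idx[k]] - Tk[k]) / rate
                        if dt < best:
                            best, who, kk = dt, tag, k
            for k in range(2):    # advance both internal clocks
                Tx[k] += lx[k] * best
                Tz[k] += lz[k] * best
            t += best
            if who is None:       # reached the end of the subinterval
                break
            if who == "x":
                x += zeta[kk]; ix[kk] += 1
            else:
                z += zeta[kk]; iz[kk] += 1
    return x, z
```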
\begin{figure}[t]
\begin{center}
\includegraphics[scale = 0.6]{BDMergedNew.pdf}
\caption{Numerical approximations (via Monte Carlo with 5,000 sample paths) for the variance of the difference between the processes $X$ and $Z$ of Example \ref{example2} for the split coupling (blue), CRP coupling (black), and various local-CRP couplings. Convergence of the variance of the local-CRP coupling to the variance of the split coupling is clear.}
\label{figure2}
\end{center}
\end{figure}
\end{example}
\section{Discussion}
\label{sec:discussion}
The stochastic models finding widespread use in the cell biology literature are typically immensely complicated, and computational methods often provide the only effective way to probe the dynamics. As Persi Diaconis recently noted \cite{Diaconis2013}, this presents mathematicians with an opportunity to make contributions by explicitly studying the different simulation and computational algorithms themselves. Such analyses will not only shed light on which methods to use in different contexts, but will inevitably lead to a deeper understanding of the underlying processes, and hence to better computational methods.
In this work we have clarified the connection between two couplings commonly found in the computational cell biology literature and, in particular, showed that the split coupling can be regarded as a natural limit of a localized version of the CRP coupling.
There are other interesting ways to understand the split coupling. For example, Arampatzis and Katsoulakis \cite{Markos} recently studied a group of couplings that is included in the family of general split couplings considered in Theorem \ref{general}. They note that for each test function $f$
there is an optimal choice for the function
$r_{1k}(\lambda_k, \widetilde \lambda_k, \mathcal U, \mathcal V)(s)$ in \eqref{couple_rates}
that minimizes the variance of the finite difference $\mathbb{E}[f(X_t) - f(Z_t)]$ in the setting of \eqref{eq:split_coupling}.
When the test function is $f(x) = x$, the correct choice of $r_{1k}$ is the one given in \eqref{couple_rates}, which yields the split coupling.
\vspace{.2in}
\noindent\textbf{Acknowledgments.}
We thank Thomas Kurtz for several illuminating discussions and for reading an early version of this work. We thank two anonymous referees for careful readings that substantially improved the clarity of Section \ref{sec:weak_conv}. Anderson was supported by NSF grants DMS-1009275 and DMS-1318832 and Army Research Office grant W911NF-14-1-0401. Koyama was supported by NSF grants DMS-1009275, DMS-1318832, and DMS-0805793.
\bibliographystyle{amsplain}
\section{Introduction} Regular polytopes in four dimensions are notoriously difficult to understand geometrically. Coxeter's classic text \cite{cox} is an excellent resource, concentrating on both the metric properties and the symmetry groups of regular polytopes. Another approach to understanding these polytopes is through combinatorics; we use matroids to model the linear dependence of a collection of vectors associated to the polytope. That is the context for this paper, and we concentrate on the matroid associated with the 120-cell or the 600-cell, two dual 4-dimensional regular polytopes.
The connection between polytopes and matroids, or, more generally, between root systems and matroids, is as follows. Given a finite set $S$ of vectors in $\mathbb{R}^n$ possessing a high degree of symmetry, define the (linear) matroid $M(S)$ as the dependence matroid for the set $S$ over $\mathbb{R}$. Then there should be a close relationship between the symmetry group of the original set $S$ ({\it geometric} symmetry) and the matroid automorphism group $\Aut(M(S))$ ({\it combinatorial} symmetry). In particular, every geometric symmetry necessarily preserves the dependence structure of $S$, so gives rise to a matroid automorphism.
The root system $H_4$ can be obtained by choosing the 120 vectors in $\mathbb{R}^4$ that form the vertices of the 600-cell. These vectors come in 60 pairs, and each pair corresponds to a single point in the matroid. Thus, $M(H_4)$ is a rank-4 matroid on 60 points.
This paper generalizes and extends \cite{eg}. In particular, we are interested in understanding the structure of the matroid automorphism group $\Aut(M(H_4))$. We show (Theorem~\ref{T:aut}) that $\Aut(M(H_4))$ contains {\it non-geometric} automorphisms in the sense that half of the 14,400 elements of $\Aut(M(H_4))$ do not arise from the Coxeter/Weyl group $W(H_4)$. We also prove the automorphism group of $M(H_4)$ acts transitively on each class of flats of the matroid (Lemma~\ref{L:trans}), and that the action is primitive (Theorem~\ref{T:prim}). A key tool for understanding the structure of the automorphisms is the incidence relation among the flats of $M(H_4)$ (Lemma~\ref{L:flatinc} and Proposition~\ref{P:planeintersect}). This incidence structure allows us to compute the stabilizer of a point of the matroid (Lemma~\ref{L:stab}), a fact we need to understand the structure of the group.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=4in]{alllines}
\caption{A projection of the matroid $M(H_4)$.}
\label{F:alllines}
\end{center}
\end{figure}
The connection between the geometric and combinatorial symmetry of certain root systems has been explored in \cite{dut,eg,fggp,fglp}. In \cite{fglp}, matroid automorphism groups are computed for the root systems $A_n, B_n$ and $D_n$, while \cite{eg} considers the root system $H_3$ associated with the icosahedron and \cite{fggp} examines the matroid associated with the root system $F_4$. The general case is treated in \cite{dut}, where a computer program is employed to show that $\Aut (M(S))\cong G_S/W$ for all root systems $S$ {\it except} $F_4, H_3$ and $H_4$, where $G_S$ is the Coxeter/Weyl group associated with $S$ and $W$ is either the 2-element group $\mathbb{Z}_2$ (when $G$ has {\it central inversion}) or $W$ is trivial (when $G$ does not have central inversion). No attempt is made to understand the structure of these matroids in \cite{dut}, however.
Other models for connecting geometric and combinatorial symmetry are possible, of course. In particular, since each pair of vectors $\pm {\bf v}$ in a root system corresponds to a double point in the associated linear matroid, we could consider both vectors in the matroid. This has the effect of doubling the number of automorphisms for each such pair; in our case, this increases the number of automorphisms by a factor of $2^{60}$. Alternatively, we could associate an {\it oriented} matroid with the root system. This doubles the number of automorphisms considered here. Another option is to consider a projective version of the root system. We point out, however, that all of these modifications differ from our treatment in transparent ways that do not change our understanding of the connection between the geometry and the combinatorics.
This paper is organized as follows: The matroid $M(H_4)$ is defined as the column dependence matroid for a $4 \times 60$ matrix in Section~\ref{S:def}. In Section~\ref{S:flats}, we describe the flats and {\it orthoframes} of the matroid and their incidence. Orthoframes are special bases of the matroid, and they are important for understanding a certain kind of duality between points and 15-point planes. This point-plane correspondence is made explicit in Propositions~\ref{P:orthodata}(4), \ref{P:planeeqns}, and \ref{P:ptplanedual}, where it is interpreted combinatorially, algebraically and geometrically, respectively. Orthoframes also allow us to reconstruct the matroid - Proposition~\ref{P:orthorecon}.
Section~\ref{S:aut} is the heart of this paper, concentrating on the structure of the matroid automorphisms. We show that the stabilizer of a point $x$ is $\stab(x)\cong S_5\times \mathbb{Z}_2$ (Lemma~\ref{L:stab}), then use this to show that $\Aut(M(H_4))$ acts transitively on flats (Lemma~\ref{L:trans}) and primitively on the matroid (Theorem~\ref{T:prim}). This allows us to understand the structure of the group - Theorem~\ref{T:aut}. We conclude (Section~\ref{S:geo}) with a few connections between the flats of the matroid and various classes of faces of the 120- and 600-cell.
We would like to thank Derek Smith and David Richter for useful discussions about the Coxeter/Weyl group $W$ for the $H_4$ root system. The third author especially thanks Prof. Thomas Brylawski for teaching him about matroids and the beauty of symmetry groups.
\section{Preliminaries}\label{S:def}
We assume some basic familiarity with matroids and root systems. We refer the reader to the first chapter of \cite{ox} for an introduction to matroids and \cite{gb,h} for much more on root systems. The study of root systems is very important for Lie algebras, and the term `root' can be traced to characteristic roots of certain Lie operators. For our purposes, the collection of roots forms a matroid, and the Coxeter/Weyl group of the root system is closely related to the automorphism group of that matroid.
The root system $H_4$ has an interpretation via two dual 4-dimensional regular polytopes, the 120-cell and the 600-cell. The 120-cell is composed of 120 dodecahedra and the 600-cell is composed of 600 tetrahedra. Each vertex of the 120-cell is incident to precisely 3 dodecahedra and each vertex of the 600-cell meets 5 tetrahedra, justifying the intuitive notion that the 120-cell is a 4-dimensional analogue of the dodecahedron while the 600-cell is a 4-dimensional version of the icosahedron.
As dual polytopes, the 120-cell and the 600-cell have the same set of hyperplane reflections and symmetry groups. Then the connection between these two dual solids and the root system $H_4$ is direct: The roots are precisely the normal vectors of all the reflecting hyperplanes that preserve the 120-cell (or, dually, the 600-cell). A copy of this root system also appears as the collection of 120 vertices of the 600-cell (where the 600-cell is positioned with the origin at its center and we identify a vertex with the vector from the origin to that vertex). Extensive information about these polytopes appears in Table 5 of the appendix of \cite{cox}.
\begin{Def} The matroid $M(H_4)$ is defined to be the linear dependence matroid on the set of 60 column vectors of the matrix $H$ over $\mathbb{Q}[\tau]$, where $\tau=\frac{1+\sqrt{5}}{2}$ satisfies $\tau^2=\tau+1$.
$$H=\left[\begin{array}{cccccccccccc}
1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 &1&1 \\
0 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & -1 & -1&-1&-1 \\
0 & 0 & 1 & 0 & 1 & 1 & -1 & -1 & 1 & 1&-1&-1 \\
0 & 0 & 0 & 1 & 1 & -1 & 1 & -1 & 1 & -1&1&-1
\end{array}\right .\dots$$
$$\left .\begin{array}{cccccccccccc}
0&0&0&0&0&0&0&0&0&0&0&0\\
\tau & \tau & \tau & \tau &\tau^2 & \tau^2 & \tau^2 &\tau^2 &1 &1&1 &1 \\
\tau^2 & \tau^2 &-\tau^2 &-\tau^2 & 1 &1 &- 1&-1&\tau&\tau &-\tau &-\tau \\
1 &-1&1&-1 &\tau&-\tau&\tau&-\tau& \tau^2&-\tau^2 &\tau^2 &- \tau^2
\end{array}\right.\dots$$
$$\left . \begin{array}{cccccccccccc}
\tau & \tau & \tau & \tau & \tau^2 & \tau^2 &\tau^2&\tau^2 &1 &1 & 1 &1 \\
0& 0 & 0 & 0 &0 & 0 & 0 &0 &0 &0 &0 &0 \\
1 & 1 &-1 &-1 & \tau& \tau &- \tau&-\tau &\tau^2&\tau^2 &-\tau^2 &-\tau^2 \\
\tau^2 &-\tau^2&\tau^2&-\tau^2 &1 &-1 &1 &-1 & \tau&-\tau&\tau&-\tau
\end{array}\right.\dots$$
$$\left . \begin{array}{cccccccccccc}
\tau& \tau & \tau & \tau &\tau^2 & \tau^2 & \tau^2 & \tau^2 &1 & 1 & 1&1 \\
\tau^2 & \tau^2 & -\tau^2 &-\tau^2 &1 &1&-1 &-1 & \tau&\tau&-\tau &-\tau \\
0& 0 &0 &0 & 0 &0 &0 &0 &0&0&0 &0 \\
1 &-1&1&-1 &\tau&-\tau&\tau&-\tau &\tau^2&-\tau^2&\tau^2&-\tau^2
\end{array}\right.\dots$$
$$\left .\begin{array}{cccccccccccc}
\tau & \tau &\tau & \tau &\tau^2 & \tau^2 & \tau^2 &\tau^2 &1 &1 &1 &1 \\
1 & 1 &-1 &-1 & \tau& \tau &-\tau&-\tau &\tau^2&\tau^2 &-\tau^2 &-\tau^2 \\
\tau^2&-\tau^2&\tau^2&-\tau^2&1&-1&1&-1&\tau&-\tau&\tau&-\tau \\
0&0&0&0&0&0&0&0&0&0&0&0
\end{array}\right]$$
\end{Def}
The full root system $H_4$ consists of these 60 column vectors together with their 60 negatives. Note that replacing any column vector by its negative does not change the matroid. See Sec. 8.7 of \cite{cox} for more on the derivation of these coordinates.
Since $r(M(H_4))=4$, we can represent the matroid with an affine picture in $\mathbb{R}^3$. We find affine coordinates in $\mathbb{R}^3$ as follows: First, find a non-singular linear transformation of the column vectors of $H_4$ that maps each vector to an ordered 4-tuple in which the first entry is non-zero, then project onto the plane $x_1=1$ and plot the remaining ordered triples. We note that choosing different transformations gives rise to different projections; choosing the `best' such projection is subjective. In Figure \ref{F:alllines}, we give a projection of one such representation.
\section{The flats and orthoframes of $M(H_4)$} \label{S:flats}
We describe the rank-4 matroid $M(H_4)$ by determining the number of flats of each kind and the flat incidence structure. This incidence structure will also be important for determining the automorphisms of the matroid. We use lower case letters to label the points of the matroid and upper case letters for flats of rank 2 or 3.
\subsection{Flats} Every line in $M(H_4)$ has 2, 3 or 5 points, and there are 4 different isomorphism classes of planes (rank-3 flats) in $M(H_4)$. The planes are shown in Figure~\ref{F:3planes}. This fact can be proven by a direct computation using the column dependences of the matrix $H$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=4in]{planes}
$\Pi_3$\hspace{1.4in} $\Pi_5$\hspace{1.2in} $\Pi_6$
\bigskip
\includegraphics[width=6in]{MatroidH3}
$\Pi_{15}$
\caption{The four planes that appear in $M(H_4)$.}
\label{F:3planes}
\end{center}
\end{figure}
\begin{Lem}\label{L:flatcount} Flat counts: In Table \ref{T:flatcount}, we list the number of flats of rank 1, 2 and 3 in the matroid $M(H_4)$.
\end{Lem}
\begin{table}[htdp]
\caption{The number of flats of each kind in the matroid.}
\begin{center}
\begin{tabular}{|c||c||c|c|c||c|c|c|c|} \hline
Rank & Rank 1& \multicolumn{3}{c||}{Rank 2} & \multicolumn{4}{c|}{Rank 3} \\ \hline
Flat& Points & 2-pt lines & 3-pt lines & 5-pt lines &
$\Pi_3$ & $\Pi_5$ & $\Pi_6$ & $\Pi_{15}$ \\ \hline
No. & 60 & 450 & 200 & 72 &
600 & 360 & 300 & 60 \\ \hline
\end{tabular}
\end{center}
\label{T:flatcount}
\end{table}
Diagrams of $M(H_4)$ that emphasize the 3-point lines and 5-point lines appear in Figure \ref{F:35lines}.
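Such direct computations are well suited to machine verification, since every entry of $H$ lies in $\mathbb{Z}[\tau]$ and rank tests can be carried out exactly. The following sketch (our illustration; it reconstructs the 60 columns in the order of the definition above) recovers the line counts of Table~\ref{T:flatcount} by collecting, for each pair of points, all points in their span.

```python
from itertools import combinations

# Exact arithmetic in Z[tau], with tau^2 = tau + 1: (a, b) encodes a + b*tau.
def mul(p, q):
    a, b = p; c, d = q
    return (a * c + b * d, a * d + b * c + b * d)

def sub(p, q): return (p[0] - q[0], p[1] - q[1])
def neg(p): return (-p[0], -p[1])

ZERO, ONE, TAU, TAU2 = (0, 0), (1, 0), (0, 1), (1, 1)

cols = []
for i in range(4):                        # e_1, ..., e_4
    v = [ZERO] * 4; v[i] = ONE; cols.append(tuple(v))
for s2 in (1, -1):                        # the eight columns (1, +-1, +-1, +-1)
    for s3 in (1, -1):
        for s4 in (1, -1):
            cols.append((ONE, (s2, 0), (s3, 0), (s4, 0)))
# The four blocks with a zero coordinate, read off from the matrix H.
TRIPLES = [[(TAU, TAU2, ONE), (TAU2, ONE, TAU), (ONE, TAU, TAU2)],
           [(TAU, ONE, TAU2), (TAU2, TAU, ONE), (ONE, TAU2, TAU)]]
for zpos in range(4):
    for a, b, c in TRIPLES[zpos % 2]:
        for sb in (1, -1):
            for sc in (1, -1):
                nz = [a, b if sb == 1 else neg(b), c if sc == 1 else neg(c)]
                cols.append(tuple(nz[:zpos] + [ZERO] + nz[zpos:]))
assert len(cols) == 60

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    t1 = mul(a, sub(mul(e, i), mul(f, h)))
    t2 = mul(b, sub(mul(d, i), mul(f, g)))
    t3 = mul(c, sub(mul(d, h), mul(e, g)))
    return sub(sub(t1, t2), neg(t3))      # t1 - t2 + t3

def in_span(u, v, w):
    # w lies in span{u, v} iff every 3x3 minor of the 4x3 matrix [u v w] vanishes.
    return all(det3([(u[r], v[r], w[r]) for r in rows]) == ZERO
               for rows in combinations(range(4), 3))

lines = {frozenset(k for k in range(60) if in_span(cols[i], cols[j], cols[k]))
         for i, j in combinations(range(60), 2)}
counts = {}
for L in lines:
    counts[len(L)] = counts.get(len(L), 0) + 1
print(counts)   # {2: 450, 3: 200, 5: 72}, matching Table 1
```

As a consistency check, $450\cdot\binom{2}{2} + 200\cdot\binom{3}{2} + 72\cdot\binom{5}{2} = 1770 = \binom{60}{2}$, so every pair of points lies on exactly one of these lines.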
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=2.45in]{3lines}
\includegraphics[width=2.45in]{5lines}
\caption{The 3-point lines (left) and 5-point lines (right) of $M(H_4)$.}
\label{F:35lines}
\end{center}
\end{figure}
\begin{Lem}\label{L:flatinc}
Flat incidence: In Table \ref{T:flatincidence}, we list the number of flats of a certain kind that contain a given flat of lower rank.
\end{Lem}
\begin{table}[htdp]
\begin{center}
\begin{tabular}{|c||c|c|c||c|c|c|c|} \hline
& \multicolumn{3}{c||}{Rank 2} & \multicolumn{4}{c|}{Rank 3} \\ \hline
& 2-pt lines & 3-pt lines & 5-pt lines &
$\Pi_3$ & $\Pi_5$ & $\Pi_6$ & $\Pi_{15}$ \\ \hline
A point is in & 15 & 10 & 6 & $10^{(a)}$& $6^{(a)}$ & 30 & 15 \\ \hline
A 2-pt line is in& 1 & - & - &4 &4& 2 & 2 \\ \hline
A 3-pt line is in & -&1&- &3 &- & 6 & 3 \\ \hline
A 5-pt line is in& -& -&1 &- &5 & - & 5 \\ \hline
\end{tabular}
\end{center}
\smallskip
\caption{The number of flats of one kind that contain a given flat of another kind. (a) The point is the apex of the $\Pi_3$ or $\Pi_5$.}
\label{T:flatincidence}
\end{table}
Both lemmas can be verified by computer calculations, but we give an example of how the various counts are interrelated. Assuming the point-flat incidence counts for 3-point lines and $\Pi_{15}$ planes, we will count the number of $\Pi_6$ planes; other counts may be obtained with similar arguments.
For a given point $x \in M(H_4)$, there are ten 3-point lines through $x$, giving $\binom{10}{2}=45$ pairs of 3-point lines containing $x$. Now $x$ is in 15 $\Pi_{15}$ planes, and each of these planes is completely determined by a pair of 3-point lines through $x$ that it contains; since each of the ten 3-point lines through $x$ lies in three $\Pi_{15}$'s, these planes account for $10 \cdot 3/2 = 15$ of the 45 pairs. Each of the remaining 30 pairs of 3-point lines containing $x$ uniquely determines a $\Pi_6$ containing $x$. Thus, there are 30 $\Pi_6$ planes containing a given point.
To get the total number of $\Pi_6$ planes, consider the point-$\Pi_6$ incidence. Each point is in 30 $\Pi_6$ planes, and each $\Pi_6$ contains 6 points. Thus, the total number of $\Pi_6$ planes is $300$.
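In symbols, counting incident point--plane pairs in two ways gives
\[
\#\Pi_6 = \frac{60 \cdot 30}{6} = 300, \qquad \#\Pi_{15} = \frac{60 \cdot 15}{15} = 60,
\]
the latter serving as a consistency check against Table~\ref{T:flatcount}.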
The flats of a matroid satisfy the {\it flat covering property}:
\begin{quotation}
If $F$ is a flat in a matroid $M$, then $\{F'-F \mid F' \mbox{ is a flat that covers } F\}$ partitions $E-F$.
\end{quotation}
We illustrate this partitioning property for $M(H_4)$:
\begin{description}
\item[Point/line incidence] From Table~\ref{T:flatincidence}, we know a given point $x$ is covered by precisely 15 2-point lines, ten 3-point lines and six 5-point lines. Then it is easy to see this pencil of lines contains precisely 59 points (not counting $x$), partitioning $E-x$, as required.
\item [Line/plane incidence] We consider the three kinds of lines in $M(H_4)$.
\begin{itemize}
\item 2-point lines: Each 2-point line $L$ is covered by four $\Pi_3$'s, four $\Pi_5$'s, two $\Pi_6$'s and two $\Pi_{15}$'s. Each $\Pi_3$ that covers $L$ contains two points not on $L$. Similarly, each $\Pi_5$ covering $L$ has four more points, each such $\Pi_6$ also has four more points, and each such $\Pi_{15}$ has 13 points. This gives us the required partition of the remaining 58 points.
\item 3-point lines: Each 3-point line $L$ is in 3 $\Pi_3$'s, 6 $\Pi_6$'s and 3 $\Pi_{15}$'s. As above, counting the points in these covering planes gives a total of 57 points partitioned by these planes.
\item 5-point lines: If $L$ is a 5-point line, then only two kinds of planes contain $L$: the 5 $\Pi_5$'s and the 5 $\Pi_{15}$'s. These 10 planes contain 55 points (excluding the points on $L$), again giving us the required partition of $E-L$.
\end{itemize}
\end{description}
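In summary, combining the covering counts of Table~\ref{T:flatincidence} with the flat sizes used above, each pencil exhausts the complement of the given flat:
\begin{align*}
\text{point:} \quad & 15\cdot 1 + 10\cdot 2 + 6\cdot 4 = 59 = 60 - 1,\\
\text{2-point line:} \quad & 4\cdot 2 + 4\cdot 4 + 2\cdot 4 + 2\cdot 13 = 58 = 60 - 2,\\
\text{3-point line:} \quad & 3\cdot 1 + 6\cdot 3 + 3\cdot 12 = 57 = 60 - 3,\\
\text{5-point line:} \quad & 5\cdot 1 + 5\cdot 10 = 55 = 60 - 5,
\end{align*}
where each term is the number of covering flats of a given type times the number of points such a flat contains off the given flat.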
As an application of the incidence data given above, we prove the following.
\begin{Prop}\label{P:planeintersect} Every pair of $\Pi_{15}$ planes intersect.
\end{Prop}
\begin{proof}
Let $P$ be a 15-point plane and let $L_5$ be a 5-point line contained in $P$. Since every 5-point line is contained in precisely five $\Pi_{15}$'s, there are four $\Pi_{15}$'s that meet $P$ along the line $L_5$. Since $P$ contains six 5-point lines, this gives a total of 24 $\Pi_{15}$'s that meet our given plane $P$ in a 5-point line.
We repeat this argument for 3-point lines: Each of the ten 3-point lines in $P$ is contained in two more $\Pi_{15}$'s, accounting for another 20 $\Pi_{15}$'s meeting $P$.
Finally, each 2-point line is in two $\Pi_{15}$'s, and there are 15 2-point lines in $P$. This gives another 15 $\Pi_{15}$'s that meet $P$ in a 2-point line. We have now accounted for $24+20+15=59$ $\Pi_{15}$'s, all of which meet $P$ in either a 2-, 3- or 5-point line. Since there are exactly 59 other 15-point planes, every pair of $\Pi_{15}$'s meet.
\end{proof}
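The count in this proof is a one-line verification (numbers as in the proof):

```python
other_planes = 60 - 1      # 15-point planes other than the fixed plane P
via_5pt = 6 * (5 - 1)      # six 5-point lines in P, four more Pi_15's through each
via_3pt = 10 * (3 - 1)     # ten 3-point lines, two more Pi_15's each
via_2pt = 15 * (2 - 1)     # fifteen 2-point lines, one more Pi_15 each
assert via_5pt + via_3pt + via_2pt == other_planes == 59
```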
In fact, all of these intersections are {\it modular}: $r(P_1 \cap P_2)=2$ for all pairs of 15-point planes $P_1$ and $P_2$. We also remark that the 15-point planes are isomorphic (as matroids) to $M(H_3)$, the matroid associated to the root system $H_3$ (see \cite{eg}). We will need this connection in Section~\ref{S:aut}.
\subsection{Orthoframes}
Of special interest is the symmetry between points and $\Pi_{15}$ planes: there are 60 points and 60 $\Pi_{15}$'s, where each point is in 15 $\Pi_{15}$'s and each $\Pi_{15}$ contains 15 points. The easiest way to understand this symmetry is through {\it orthoframes}.
\begin{Def}\label{D:ortho}
A basis $B$ for $M(H_4)$ is an {\it orthoframe} if each pair of points in $B$ forms a 2-point line in the matroid.
\end{Def}
For instance, the basis formed by the first 4 columns of the matrix $H$ is an orthoframe. In general, these bases correspond to column vectors in $H$ that are pairwise orthogonal. Two more orthoframes are:
$$\left[\begin{array}{cccc}
0& 1 & \tau & \tau^2 \\
1 & 0 & \tau^2 & -\tau \\
-\tau & \tau^2 & 0 & -1 \\
-\tau^2 & -\tau& 1 & 0
\end{array}\right] \hspace{.5in}
\left[\begin{array}{cccc}
1& 0 & 1 & \tau^2 \\
1 & \tau & \tau & -\tau \\
1 & -\tau^2 & 0 & -1 \\
1 & 1& -\tau^2 & 0
\end{array}\right]
$$
Orthoframes are important to us for two reasons: matroid automorphisms give group actions on the set of orthoframes, and orthoframes have an immediate geometric interpretation in the root system $H_4$.
We state (without proof) several useful facts about orthoframes that we will need. The proofs are routine, and follow in much the same way as the incidence counts of Lemmas~\ref{L:flatcount} and \ref{L:flatinc}.
\begin{Prop}\label{P:orthodata}
\begin{enumerate}
\item $B$ is an orthoframe if and only if the four column vectors corresponding to the points of $B$ are pairwise orthogonal.
\item There are 75 orthoframes.
\item Each point is in 5 orthoframes, and each 2-point line is in exactly one orthoframe.
\item If $O_1, O_2, \dots, O_5$ are the 5 orthoframes that contain a given point $x$, then $\displaystyle{\bigcup_{i=1}^5 O_i -x}$ is a 15-point plane.
\end{enumerate}
\end{Prop}
Part (4) of this proposition allows us to define a bijection between the points of the matroid and the 15-point planes: Given a point $x$, let $O_1, O_2, \dots, O_5$ be the five orthoframes that contain $x$. Then define $P_x:=\displaystyle{\bigcup_{i=1}^5 O_i -x}$. Conversely, a given 15-point plane can be partitioned into five partial orthoframes (this partition is visible in the picture of a $\Pi_{15}$ in Fig.~\ref{F:3planes} -- see also Sec. 2.1 of \cite{eg}). Then a 15-point plane $P_x$ uniquely determines a point $x$ that ``completes'' each of these orthoframes.
We will use this correspondence frequently; we introduce some terminology suggestive of the relationship between the column vectors corresponding to the point and the plane.
\begin{Def}\label{D:names}
Suppose the point $x$ corresponds to the 15-point plane $P_x$ as above. Then we say the point $x$ is the {\it orthopoint} of the plane $P_x$ and the 15-point plane $P_x$ is the {\it orthoplane} of the point $x$.
\end{Def}
We can also use the orthoframes to uniquely reconstruct the matroid $M(H_4)$.
\begin{Prop}\label{P:orthorecon} The collection of 75 orthoframes completely determines all the flats of the matroid $M(H_4)$.
\end{Prop}
\begin{proof} We show how the orthoframe data allows us to reconstruct all the flats.
\begin{itemize}
\item {\bf $\Pi_{15}$ planes:} The union of the orthoframes containing a given point $x$ form the 15-point orthoplane $P_x$ (where the common point is removed), so we can construct all the $\Pi_{15}$'s this way.
\item {\bf Lines:} Since each 2-point line is in a unique orthoframe, we simply list the six 2-point lines contained in each of the 75 orthoframes, giving us the 450 2-point lines. By the proof of Prop. \ref{P:planeintersect}, every 3-point line and every 5-point line occurs as the intersection of some pair of $\Pi_{15}$'s. This allows us to reconstruct all rank-2 flats.
\item {\bf $\Pi_3$ and $\Pi_5$ planes:} For the trivial planes $\Pi_3$ and $\Pi_5$, each such plane arises as the union of a 3 or 5-point line in a $\Pi_{15}$ with the plane's orthopoint as the apex of the $\Pi_3$ or $\Pi_5$.
\item {\bf $\Pi_6$ planes:} The remaining non-trivial flats are the 300 $\Pi_6$ planes. We consider all pairs of intersecting 3-point lines. Each intersecting pair determines either a $\Pi_{15}$ or a $\Pi_6$. We know all the 15-point planes at this point, so we can determine all pairs giving a $\Pi_6$. To reconstruct each $\Pi_6$ from this information, note that each $\Pi_6$ contains four 3-point lines, every pair of which intersect. This allows us to uniquely determine each $\Pi_6$ from the collection of 3-point lines.
\end{itemize}
\end{proof}
Compared with bases, orthoframes provide a much more efficient way to describe the matroid: while there are only 75 orthoframes, a computer search gives 398,475 bases, so a random subset of four columns has an approximately 81.7\% chance of being a basis.
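The basis probability can be checked directly (398,475 is the figure reported by the computer search mentioned above):

```python
from math import comb

total = comb(60, 4)         # 4-element subsets of the 60-point ground set
assert total == 487635
bases = 398475              # basis count from the computer search
assert abs(bases / total - 0.817) < 0.001   # approximately 81.7%
```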
We conclude this section by noting an algebraic explanation for the point-orthoplane correspondence. Each point corresponds to an ordered 4-tuple $[a,b,c,d]$, and each $\Pi_{15}$ corresponds to the solution set of a linear equation. The connection between the coordinates of a point $z$ and the linear equation satisfied by its orthoplane $P_z$ is simple.
\begin{Prop}\label{P:planeeqns}
Let $z$ be a point with corresponding orthoplane $P_z$, and suppose $z$ corresponds to the ordered 4-tuple $[a,b,c,d]$. Then $P_z$ is defined by the linear equation $ax_1+bx_2+cx_3+dx_4=0$.
\end{Prop}
\begin{proof}
Let $z$ be a point and let $O_1, O_2, \dots, O_5$ be the five orthoframes containing $z$. If $\displaystyle{y \in \bigcup_{i=1}^5 O_i -z =P_z}$, then the 4-tuples corresponding to the points $y$ and $z$ are orthogonal (by Prop.~\ref{P:orthodata}(1)). Thus, if the coordinates for $z$ are $[a,b,c,d]$, we have $ax_1+bx_2+cx_3+dx_4=0$ for all column vectors $[x_1,x_2,x_3,x_4] \in P_z$.
\end{proof}
As an example of this algebraic connection, let $z$ be the point with coordinates $[\tau^2,0,\tau,-1]$. Then the equation $\tau^2 x_1+\tau x_3-x_4=0$ is satisfied by every column of $P_z$:
$$\left(
\begin{array}{ccccccccccccccc}
0 & 1 & 1 & 0 & 0 & 0 & 0 & \tau & 1 & 1 & 1 & \tau & \tau & 1 & 1 \\
1 & 1 & -1 & \tau^2 & \tau^2 & 1 & 1 & 0 & 0 & \tau & -\tau & 1 & -1 & \tau^2 & -\tau^2 \\
0 & -1 & -1 & 1 & -1 & \tau & -\tau & -1 & -\tau^2 & 0 & 0 & -\tau^2 & -\tau^2 & -\tau & -\tau \\
0 & 1 & 1 & \tau & -\tau & \tau^2 & -\tau^2 & \tau^2 & -\tau & \tau^2 & \tau^2 & 0 & 0 & 0 & 0
\end{array}
\right).$$
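This can be confirmed numerically; the sketch below (a floating-point check, with $\tau=(1+\sqrt 5)/2$) verifies that all 15 columns displayed above satisfy $\tau^2 x_1+\tau x_3-x_4=0$:

```python
# Check that every column of P_z is orthogonal to z = [tau^2, 0, tau, -1].
t = (1 + 5 ** 0.5) / 2               # the golden ratio tau; note tau^2 = tau + 1
t2 = t * t
z = (t2, 0.0, t, -1.0)
cols = [(0, 1, 0, 0), (1, 1, -1, 1), (1, -1, -1, 1), (0, t2, 1, t),
        (0, t2, -1, -t), (0, 1, t, t2), (0, 1, -t, -t2), (t, 0, -1, t2),
        (1, 0, -t2, -t), (1, t, 0, t2), (1, -t, 0, t2), (t, 1, -t2, 0),
        (t, -1, -t2, 0), (1, t2, -t, 0), (1, -t2, -t, 0)]
assert len(cols) == 15
assert all(abs(sum(zi * xi for zi, xi in zip(z, v))) < 1e-9 for v in cols)
```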
\section{Automorphisms}\label{S:aut}
We turn to our main topic: the structure of the automorphisms of $M(H_4)$.
For a group $G$ acting on a set $X$ with $x \in X$, recall the {\it stabilizer} of $x$:
$$\stab(x) = \{g \in G \mid g(x)=x\}.$$
\begin{Lem}\label{L:stab}
Let $x$ be a point of $M(H_4)$ and $P_x$ its 15-point orthoplane. Then $\stab(x)=\stab(P_x)\cong S_5 \times \mathbb{Z}_2$.
\end{Lem}
\begin{proof}
The point-orthoplane correspondence (Prop.~\ref{P:orthodata}(4) or Prop.~\ref{P:planeeqns}) gives $\stab(x)= \stab(P_x)$. Note that $P_x \cong M(H_3)$, the matroid associated with the icosahedral root system. Then, by Theorem 3.3 of \cite{eg}, $\Aut(M(H_3)) \cong S_5$, so $S_5$ fixes the plane $P_x$. ($S_5$ acts on the five rank-3 orthoframes.) Thus $S_5 \leq \stab(x)$.
We may now suppose $\sigma \in \stab(P_x)$ where $\sigma$ fixes the orthoplane $P_x$ pointwise. We will show that $\sigma =I$ or $\sigma$ is the matroid automorphism induced by geometric reflection $r_x$ of the root system through $P_x$. (Note that reflection fixes $P_x$ pointwise, but also fixes the orthopoint $x$.)
So assume $\sigma(w)=w$ for $w=x$ and for all $w \in P_x$. Then $\sigma$ fixes (at least) 16 points; we partition the remaining 44 points of the matroid into two classes:
\begin{description}
\item[Class 1] Let $\{L_1, L_2, \dots, L_{10}\}$ be the pencil of 3-point lines through $x$. Then $\mathcal{C}_1:=\bigcup L_i - x$ contains 20 points. We write $L_i=\{x, y_i, z_i\}$ for $1 \leq i \leq 10$, so $\mathcal{C}_1=\{y_1, y_2, \dots, y_{10}, z_1, z_2, \dots, z_{10}\}$. For a given $i$, we first show $\sigma$ either fixes both $y_i$ and $z_i$ or it swaps them.
\begin{figure}
\begin{center}
\includegraphics[width=2in]{3ptlines}
\caption{The three points $x, y_1$ and $z_1$ forming the apexes of the three $\Pi_3$'s containing the line $K$ are collinear. $K$ and $L_1$ are skew, i.e., $r(K \cup L_1)=4$.}
\label{F:3ptlines}
\end{center}
\end{figure}
Consider a 3-point line $K$ in the 15-point plane $P_x$; since $P_x$ is fixed pointwise by $\sigma$, so is $K$. The line $K$ is contained in three $\Pi_3$'s (Lemma~\ref{L:flatinc}), with three different apexes, one of which is $x$. It is straightforward to show these three apexes form a 3-point line, so they correspond to one of the lines, say $L_1$, in the pencil through $x$, as in Figure~\ref{F:3ptlines}. Since matroid automorphisms preserve all $\Pi_3$'s, and since $K$ is fixed, we must have $\sigma(y_1)\in\{y_1,z_1\}$.
But there are ten 3-point lines in $P_x$, and each of these lines will correspond to one of the $L_j$ in precisely the same way $K$ corresponds to $L_1$. Thus, we have $\sigma(y_i)\in \{y_i,z_i\}$ for all $i$.
Now suppose $\sigma(y_1)=y_1$. We will show that $\sigma(y_i)=y_i$ for all $i$ (and so $\sigma$ is the identity on $\mathcal{C}_1$). Now every pair of lines $L_i, L_j$ determines either a 6-point plane $\Pi_6$ or a 15-point plane $\Pi_{15}$. Our incidence counts from Lemma~\ref{L:flatinc} can be used to show that, for a given $i$, precisely 6 lines $L_j$ can be paired with $L_i$ to generate a $\Pi_6$, and the remaining three lines will generate $\Pi_{15}$'s when paired with $L_i$. We concentrate on the $\Pi_6$'s.
\begin{figure}
\begin{center}
\includegraphics[width=3in]{6ptplanes}
\caption{The pencil of 3-point lines through $x$. $L_1$ and $L_2$ generate a $\Pi_6$.}
\label{F:6ptplanes}
\end{center}
\end{figure}
Suppose $L_1$ and $L_2$ determine a $\Pi_6$, where $w$ is the unique point of the $\Pi_6$ not on $L_1$ or $L_2$, as in Figure~\ref{F:6ptplanes}. Since $\sigma(x)=x$ and $\sigma(y_1)=y_1$, we know $\sigma(z_1)=z_1$. Thus, if $\sigma$ swaps $y_2$ and $z_2$, then the 3-point line $\{y_1, w, z_2\}$ is mapped to the independent set $\{y_1, w, y_2\}$, which is impossible for a matroid automorphism. Thus, $\sigma$ fixes $y_2$ and $z_2$.
To show that $\sigma$ fixes all $y_i$ and $z_i$, construct a graph $\Gamma$ as follows: The 10 vertices are labeled by the lines $L_i$, with an edge between $L_i$ and $L_j$ if and only if these two lines determine a $\Pi_6$. Then $\Gamma$ is a 6-regular graph on 10 vertices; since any connected component of a 6-regular graph has at least 7 vertices, $\Gamma$ is connected. Thus we can find a path from $L_1$ to any line $L_j$, and each edge of the path forces $\sigma$ to fix the points on the corresponding line. Thus, $\sigma$ fixes each point in $\mathcal{C}_1$. (Incidentally, we note the point $w$ is on the fixed 15-point plane $P_x$. By choosing different pairs of lines in the pencil, we can locate all 15 points of $P_x$ in this way.)
Finally, if $\sigma$ swaps any pair $y_i, z_i$, then $\sigma$ swaps {\it all} pairs, by a similar argument. Then $\sigma$ corresponds to the reflection $r_x$ through $P_x$.
\smallskip
\item[Class 2] Let $\{M_1, M_2, \dots, M_{6}\}$ be the pencil of six 5-point lines through $x$ (again, from Lemma~\ref{L:flatinc}). Then $\mathcal{C}_2:=\bigcup M_i - x$ contains 24 points. As we did for $\mathcal{C}_1$, we show that these 24 points are either swapped in 12 transpositions (when $\sigma$ corresponds to reflection) or are all fixed pointwise (when $\sigma=I$).
As before, fix a 5-point line $K$ in the fixed plane $P_x$ and consider the five points in the matroid that form the apexes of $\Pi_5$ planes which use $K$. Then it is again straightforward to show that these five apexes form a 5-point line, so they correspond to one of the $M_i$. (This is completely analogous to the situation with $\Pi_3$'s that contain a fixed 3-point line, as in Figure~\ref{F:3ptlines}.) Since matroid automorphisms preserve $\Pi_5$'s, each line $M_i$ in the pencil must be fixed.
We need to show that $\mathcal{C}_2$ is fixed by $\sigma$ when $\sigma$ fixes $\mathcal{C}_1$ pointwise. Now the 15 pairs of lines in the pencil $\{M_1, M_2, \dots, M_{6}\}$ generate the 15 $\Pi_{15}$ planes containing $x$. Thus, if $\sigma$ fixes $\mathcal{C}_1$ pointwise, it fixes two intersecting 3-point lines in each of these $\Pi_{15}$'s, since the $\Pi_{15}$'s containing $x$ are also generated by 15 pairs of lines from $\{L_1, L_2, \dots, L_{10}\}$. Thus, in each $\Pi_{15}$ that contains $x$, we have a pair of intersecting 3-point lines that are fixed pointwise, and a pair of intersecting 5-point lines that are also fixed (not necessarily pointwise).
But the only automorphism of a 15-point plane with this cycle structure on its 3- and 5-point lines is the identity -- this follows from the last two columns of Table 1 of \cite{eg}. Thus, $\sigma$ fixes $\mathcal{C}_2$ pointwise.
If $\sigma$ swaps each pair $(y_i,z_i)$ in $\mathcal{C}_1$, then we obtain reflection again, and the 24 points in $\mathcal{C}_2$ are all moved in 12 transpositions, corresponding to the reflection $r_x$ through the plane $P_x$.
\end{description}
Thus, every $\sigma \in \stab(x)$ can be decomposed as an automorphism of the plane $P_x$ followed or not by reflection through that plane. These two operations commute, so we have $\stab(x) \cong S_5 \times \mathbb{Z}_2$.
\end{proof}
Recall there are seven different equivalence classes of flats: 2, 3 and 5-point lines, and 4 different classes of planes.
\begin{Lem}\label{L:trans}
$\Aut(M(H_4))$ acts transitively on each equivalence class of flats of the matroid.
\end{Lem}
\begin{proof}
The proof makes use of the fact that the Coxeter/Weyl group acts transitively on the roots of $H_4$ (see \cite{cox}). Since every geometric symmetry of the root system gives rise to a matroid automorphism, we immediately get that $\Aut(M(H_4))$ acts transitively on the points of the matroid. The point-orthoplane correspondence then gives us a transitive action on the $\Pi_{15}$'s.
We now consider the remaining flat classes.
\begin{description}
\item[Rank 2 flats] Let $\mathcal{L}_k$ be the class of all $k$-point lines, for $k=2,3$ and 5, and let $L_1$ and $L_2$ be two $k$-point lines. If $L_1$ and $L_2$ are both in the same $\Pi_{15}$, then we use the fact (see \cite{eg}) that $\Aut(M(H_3))$ acts transitively on lines to get an automorphism $\sigma$ mapping $L_1$ to $L_2$.
If $L_1$ and $L_2$ are not contained in a common $\Pi_{15}$, then either $r(L_1\cup L_2)=4$, i.e., the lines $L_1$ and $L_2$ are skew, or $L_1$ and $L_2$ are 3-point lines in a $\Pi_6$. In the former case, find two 15-point planes $P_1$ and $P_2$ with $L_1 \subseteq P_1$ and $L_2 \subseteq P_2$. Now use the transitivity on 15-point planes to map $P_1$ to $P_2$, and then use transitivity on lines within $P_2$ to map the image of $L_1$ to $L_2$.
If $L_1=\{a,b,c\}$ and $L_2=\{a,d,e\}$ are intersecting 3-point lines in a $\Pi_6$, then use transitivity on points to map $b$ to $d$. This must carry $L_1$ to $L_2$.
\smallskip
\item[Rank 3 flats] We have already shown that $\Aut(M(H_4))$ is transitive on 15-point planes. It is also clear that transitivity on 3- and 5-point lines gives us transitivity on $\Pi_3$ and $\Pi_5$ planes. It remains to prove transitivity for $\Pi_6$ planes.
Let $G_1$ and $G_2$ be two $\Pi_6$ planes, and let $L_1$ and $L_2$ be 2-point lines with $L_i \subseteq G_i$ ($i=1$ or 2). Then transitivity on 2-point lines allows us to map $L_1 \mapsto L_2$. So we can assume $G_1$ and $G_2$ share the 2-point line $xy$, as in Figure~\ref{F:6ptplanestrans}. By Lemma~\ref{L:flatinc}, $G_1$ and $G_2$ are the only two $\Pi_6$'s that contain $xy$. Then there are two 15-point planes that also contain the 2-point line $xy$; call these two planes $P_1$ and $P_2$. Then reflecting through either $P_1$ or $P_2$ will map $G_1$ to $G_2$, since reflection must send a $\Pi_6$ to a $\Pi_6$, reflections move 44 points, and $G_1$ and $G_2$ are the only two $\Pi_6$'s containing $xy$.
\end{description}
\begin{figure}
\begin{center}
\includegraphics[width=4in]{6ptplanestrans}
\caption{Two $\Pi_6$ planes share the 2-point line $xy$. $r(G_1 \cup G_2)=4$.}
\label{F:6ptplanestrans}
\end{center}
\end{figure}
\end{proof}
As an example of how transitivity on $\Pi_6$ planes works, consider the matrices $A$ and $B$ below. The columns of $A$ satisfy the equation $x_3=x_4$, and the columns of $B$ satisfy $x_1=x_2$. Note that the corresponding 6-point planes have two points in common: the 2-point line $ef$.
$$
A=\kbordermatrix{&a&b&c&d&e&f\\
& 1 & 0 & 1 & 1 & 1 & 1 \\
& 0 & 1 & -1& -1 & 1 & 1\\
& 0 & 0 & 1 & -1 & -1& 1\\
& 0 & 0 & 1 & -1 & -1& 1}
\hskip.25in
B=\kbordermatrix{&a'&b'&c'&d'&e&f\\
& 0 & 0 & 1 & 1 & 1 & 1 \\
& 0 & 0 & 1& 1 & 1 & 1\\
& 1 & 0 & 1 & -1 & -1& 1\\
& 0 & 1 & -1 & 1 & -1& 1}
$$
To find a matroid automorphism that maps $G_1$ to $G_2$, we let $x=[1,-1,1,-1]$ and $y=[1,-1,-1,1]$. Then $\{e,f,x,y\}$ is an orthoframe, i.e., $e,f \in P_x$ and $e, f \in P_y$. Reflection through the plane $P_x$ is accomplished by $\displaystyle{v\mapsto v-\frac{2v\cdot x}{x\cdot x}x}$. This maps $a \mapsto d', b \mapsto c', c \mapsto b', d \mapsto a'$. The reader can check that reflection through $P_y$ maps $a \mapsto c', b \mapsto d', c \mapsto a', d \mapsto b'$. In either case, we have a map interchanging $G_1$ and $G_2$.
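The reflection computation can be verified in exact arithmetic. The sketch below checks the stated images of $a,b,c,d$ under reflection through $P_x$, using projective equality (vectors agree up to a nonzero scalar), since matroid points correspond to $\pm$ pairs of roots:

```python
from fractions import Fraction as F

def reflect(v, x):
    # v -> v - 2 (v.x)/(x.x) x, in exact rational arithmetic
    s = F(2) * sum(a * b for a, b in zip(v, x)) / sum(a * a for a in x)
    return tuple(a - s * b for a, b in zip(v, x))

def proj_eq(u, v):
    # equality up to a nonzero scalar: all 2x2 minors vanish
    return all(u[i] * v[j] == u[j] * v[i]
               for i in range(4) for j in range(i + 1, 4))

x = (1, -1, 1, -1)
# columns of A and B, read off the matrices above
a, b, c, d = (1, 0, 0, 0), (0, 1, 0, 0), (1, -1, 1, 1), (1, -1, -1, -1)
a2, b2, c2, d2 = (0, 0, 1, 0), (0, 0, 0, 1), (1, 1, 1, -1), (1, 1, -1, 1)
assert proj_eq(reflect(a, x), d2) and proj_eq(reflect(b, x), c2)
assert proj_eq(reflect(c, x), b2) and proj_eq(reflect(d, x), a2)
# e and f lie on P_x, hence are fixed by the reflection
for v in [(1, 1, -1, -1), (1, 1, 1, 1)]:
    assert reflect(v, x) == v
```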
Alternatively, we can map one plane to the other by performing two row swaps on the matrix $H$: $(13)(24)$. This is an {\it even} permutation of the rows, and so maps $M(H_4)$ to itself.
It is interesting to note that although $\Aut(M(H_4))$ acts transitively on pairs of intersecting 5-point lines, it does not act transitively on pairs of intersecting 3-point lines. The latter fall into two equivalence classes, as we have already seen: A pair of intersecting 3-point lines determines either a $\Pi_6$ or a $\Pi_{15}$.
$\Aut(M(H_4))$ also acts transitively on orthoframes. We omit the short proof.
\begin{Lem}\label{L:orthotrans}
$\Aut(M(H_4))$ acts transitively on orthoframes.
\end{Lem}
Recall a group $G$ acting on a set $X$ is {\it primitive} if $G$ acts transitively and preserves no non-trivial blocks of $X$. We now prove the action of $\Aut(M(H_4))$ on the points of the matroid is primitive.
\begin{Thm}\label{T:prim} The automorphism group action is primitive on the 60 points of the ground set of $M(H_4)$.
\end{Thm}
\begin{proof} Suppose $E$ is partitioned into blocks, and suppose $\Delta$ is a block. Then, for any $\sigma \in \Aut(M(H_4)), \Delta \cap \sigma(\Delta)=\Delta$ or $\emptyset$. We must prove $|\Delta|=1$ or 60.
Suppose $x\in \Delta$. Note for all $\sigma \in \stab(x)$, we must have $ \sigma(\Delta)=\Delta$. Since $\Aut(M(H_3))\cong S_5 \leq \stab(x)$ acts transitively on the 15 points of $P_x$, we must have either $P_x \subseteq \Delta$ or $P_x \cap \Delta = \emptyset.$ There are now two cases to consider.
\begin{itemize}
\item If $P_x \subseteq \Delta$, then since $P_x$ meets every other 15-point plane (from Prop.~\ref{P:planeintersect}), we get $\sigma(P_x) \cap P_x \neq \emptyset$ for all $\sigma \in \Aut(M(H_4))$. Thus, $\Delta \cap \sigma(\Delta) = \Delta$ for all $\sigma \in \Aut(M(H_4))$, i.e., $\sigma(\Delta)=\Delta$ for all $\sigma$. But this immediately gives $\Delta=E$, i.e., $\Delta$ is the trivial block formed by the entire ground set of the matroid.
\item If $P_x \cap \Delta = \emptyset$, we restrict to stab$(x)$ and consider all the lines that contain $x$. We know $x$ is in 15 2-point lines, but the 15 points that produce these 2-point lines form $P_x$, so none of these 15 points is in $\Delta$.
There are ten 3-point lines through $x$, which we denote $\{L_1, L_2, \dots, L_{10}\}$, as in the proof of Lemma~\ref{L:stab}. From that proof and the fact that the action of $\Aut(M(H_4))$ is transitive on 3-point lines (Lemma~\ref{L:trans}), we must have either $L_i \subseteq \Delta$ for all $1 \leq i \leq 10$, or $\Delta \cap L_i = \{x\}$ for all $i$. (Note: Every $\sigma \in \stab(x)$ maps the pencil of lines through $x$ to itself, so each line contributes the same number of points to $\Delta$, and reflecting through the plane $P_x$ forces us to take 0 or 2 points from each $L_i$, not counting $x$.) Thus, $\Delta$ contains either 0 points or 20 points from the $L_i$ pencil, not counting $x$.
Using an analogous argument on the pencil of six 5-point lines through $x$, we find each such line must meet $\Delta$ in the same number of points, and that number must be 0, 2 or 4 per line (not counting $x$). This means $\Delta$ contains 0, 12 or 24 points from this pencil, again not counting $x$.
Putting all of this together gives $|\Delta|\in\{1, 13, 21, 25, 33, 45\}$. But $|\Delta|$ must divide 60, since the blocks partition $E$ and all blocks of a transitive action have the same size. Thus $|\Delta|=1$, so $\Delta=\{x\}$.
\end{itemize}
\end{proof}
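The final arithmetic in this proof is easily checked; in the sketch below, $\{0,20\}$ and $\{0,12,24\}$ are the possible contributions of the two pencils computed above, and the extra 1 counts $x$ itself:

```python
sizes = sorted(1 + a + b for a in (0, 20) for b in (0, 12, 24))
assert sizes == [1, 13, 21, 25, 33, 45]
# blocks of a transitive action all have the same size, which divides 60
assert [s for s in sizes if 60 % s == 0] == [1]
```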
The next result follows immediately from Theorem 1.7 of \cite{cam}.
\begin{Cor}\label{C:maxsubgp}
$\stab(x)$ is a maximal subgroup of $\Aut(M(H_4))$.
\end{Cor}
It is worth pointing out that the action of $\Aut(M(H_3))$ on $M(H_3)$ is imprimitive -- the partition into rank-3 orthoframes is a non-trivial partition of the 15 elements of the matroid into 5 blocks. This corresponds geometrically to permuting the 5 cubes embedded in a dodecahedron.
The root system $H_4$ has a Coxeter/Weyl group of size 14,400. Coxeter's notation \cite{cox} for the group $[3,3,5]$ suggests its construction as a reflection group.
$$[3,3,5] = \langle R_1, R_2, R_3, R_4 \mid (R_1R_2)^3=(R_2R_3)^3=(R_3R_4)^5=I \rangle$$
In this presentation, we assume each $R_i$ is a reflection, i.e., $R_i^2=I$, and that $(R_iR_j)^2=I$ for $|i-j|>1$, i.e., reflections $R_i$ and $R_j$ are orthogonal for $|i-j|>1$.
Conway and Smith (Table 4.3 of \cite{cs}) express this group as $\pm [I \times I] \cdot 2$, where $I \cong A_5$ is the chiral (or direct) symmetry group of the icosahedron. In four dimensions, $I \times I$ is best understood as a rotation group via quaternion multiplication.
\begin{Thm}\label{T:aut} Let $W$ be the Coxeter/Weyl isometry group for the root system $H_4$, with center $Z$ generated by central inversion $\bf{v} \mapsto -\bf{v}$.
\begin{enumerate}
\item $|\Aut(M(H_4))|=|W|=14,400$.
\item $W/Z$ is an index $2$ subgroup of $\Aut(M(H_4))$.
\end{enumerate}
\end{Thm}
\begin{proof}
\begin{enumerate}
\item From Lemma~\ref{L:stab}, we have $\stab(x)\cong S_5 \times \mathbb{Z}_2$. Since the orbit of $x$ is all of $E$ (as the automorphism group is transitive), we have $|\Aut(M(H_4))|=|S_5 \times \mathbb{Z}_2| \cdot |E|=14,400$.
\item Every isometry of $W$ gives a matroid automorphism, and central inversion in $W$ corresponds to the identity in $\Aut(M(H_4))$. The result now follows from (1).
\end{enumerate}
\end{proof}
In \cite{dut}, $\Aut(M(H_4))$ is obtained as follows: First extend the root system $H_4$ by adding an isomorphic copy $H_4 '$ of $H_4$. Then $\Aut(M(H_4)) \cong W(H_4 \cup H_4')/Z$, where $Z \cong \mathbb{Z}_2$ is the subgroup generated by central inversion ($Z$ is the center of $W$). The $H_4'$ copy is obtained by using the field automorphism $\phi:\mathbb{Q}[\tau] \to \mathbb{Q}[\tau]$ given by $\tau \mapsto \bar{\tau}$ on the original root system $H_4$. (Note that this map must operate on a different set of coordinates than those treated here, since the 24 roots whose coordinates avoid $\tau$ are fixed by this map.)
We summarize this section with the following consequence of Theorem~\ref{T:aut}:
\begin{quotation}
The automorphism groups of the root systems $H_3$ and $H_4$ have the same connection to the Coxeter/Weyl groups $W(H_3)$ and $W(H_4)$. In each case, half of the matroid automorphisms are geometric and half are not. The non-geometric automorphisms arise from the $S_5$ action in $\stab(x)$, which permits odd permutations of the rank-3 orthoframes in $\Pi_{15}$ planes.
\end{quotation}
\section{Geometric interpretations of $M(H_4)$.}\label{S:geo}
We can interpret the flats and orthoframes of $M(H_4)$ in terms of the 120-cell and its dual, the 600-cell. We give the number of vertices, edges, 2-dimensional faces and 3-dimensional facets for the 120- and 600-cell in Table~\ref{T:geoinfo}; this information appears in Table 1(ii) of \cite{cox}.
\begin{table}[htdp]
\caption{Number of elements of the 120- and 600-cell.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
Object & Vertices & Edges & 2D faces & 3D facets \\ \hline
120-cell & 600 & 1200 & 720 & 120 \\ \hline
600-cell & 120 & 720 & 1200 & 600 \\ \hline
\end{tabular}
\end{center}
\label{T:geoinfo}
\end{table}%
The 2-dimensional faces of the 120-cell are pentagons and the 3-dimensional facets are dodecahedra; for the 600-cell, 2-dimensional faces are triangles and 3-dimensional facets are tetrahedra.
Now each of the 60 points of the matroid corresponds to a pair of roots $\pm \bf{v}$ of the root system $H_4$. Since the roots are also the vertices of the 600-cell, we immediately get a correspondence between the points of the matroid and the pairs of opposite vertices of the 600-cell. We can interpret the other geometric elements of the 120- and 600-cell through the matroid $M(H_4)$ in Table~\ref{T:geocorr}. We also remark that the 75 matroid orthoframes correspond to 75 embedded hypercubes in the 120-cell or 600-cell.
\begin{table}[htdp]
\caption{Correspondence between geometric elements and flats in $M(H_4)$.}
\begin{center}
\begin{tabular}{|| l c l||} \hline
Geometric Family & & Matroid Flat \\ \hline \hline
120 Vertices of 600-cell & $\Leftrightarrow$ & 60 Points \\ \hline
1200 Triangles of 600-cell & $\Leftrightarrow$ & 600 $\Pi_3$'s \\
720 Pentagons of 120-cell & $\Leftrightarrow$ & 360 $\Pi_5$'s \\ \hline
600 Tetrahedra of 600-cell & $\Leftrightarrow$ & 300 $\Pi_6$'s \\
120 Dodecahedra of 120-cell & $\Leftrightarrow$ & 60 $\Pi_{15}$'s \\ \hline
\end{tabular}
\end{center}
\label{T:geocorr}
\end{table}%
We comment briefly on some of these connections. For the root system $H_3$, this correspondence is explored in detail in \cite{eg}. In that case, the roots are parallel to the edges of an icosahedron. This makes the matroid correspondence immediate: 3-point lines of the matroid correspond to pairs of triangles in the icosahedron and 5-point lines correspond to pairs of vertices of the icosahedron (or pentagons of the dual dodecahedron).
The chief difficulty in applying the results of \cite{eg} to $H_4$ arises from the fact that the edges of the 120-cell or 600-cell are no longer parallel to the roots. But, since each 15-point plane is isomorphic to $M(H_3)$ as a matroid, the correspondence between dodecahedra and $\Pi_{15}$'s is clear. We explain the connection between the 720 pentagons in the 120-cell and the 360 $\Pi_5$'s in the matroid. In the 120-cell, a given pentagon is in two dodecahedra, but in $M(H_4)$, a given 5-point line is in 5 $\Pi_{15}$'s. We can ``correct'' this by using $\Pi_5$'s, since each 5-point line of the matroid is in precisely five $\Pi_5$'s.
Finally, we can use the orthopoint-orthoplane bijection to get a matroidal interpretation for the 120-cell/600-cell duality.
\begin{Prop}\label{P:ptplanedual}
Let $F$ be the collection of 15-point planes, and let $B$ be the bipartite graph with vertex set $E \cup F$ with an edge joining the point $x$ to the plane $P$ if and only if $x \in P$. Then $\Aut(B)\cong \Aut(M(H_4)) \times \mathbb{Z}_2$.
\end{Prop}
\begin{proof}
It is clear the bipartite graph $B$ allows us to reconstruct all the flats of the matroid, and any matroid automorphism acting on $E$ will necessarily be a graph automorphism of $B$. Further, we can swap the points and the planes -- map a point $x$ to its orthoplane $P_x$.
\end{proof}
We conclude by observing that it should be possible to treat matroids associated to other root systems (especially the exceptional $E_6, E_7,$ and $E_8$) in a coherent way that also explains the structure of those matroids. We hope to undertake such a program in the future.
| {
"timestamp": "2010-10-28T02:00:46",
"yymm": "1005",
"arxiv_id": "1005.5492",
"language": "en",
"url": "https://arxiv.org/abs/1005.5492",
"abstract": "We study the rank 4 linear matroid $M(H_4)$ associated with the 4-dimensional root system $H_4$. This root system coincides with the vertices of the 600-cell, a 4-dimensional regular solid. We determine the automorphism group of this matroid, showing half of the 14,400 automorphisms are geometric and half are not. We prove this group is transitive on the flats of the matroid, and also prove this group action is primitive. We use the incidence properties of the flats and the {\\it orthoframes} of the matroid as a tool to understand these automorphisms, and interpret the flats geometrically.",
"subjects": "Combinatorics (math.CO)",
"title": "Matroid automorphisms of the H_4 root system",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9817357227168956,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7085610764988144
} |
https://arxiv.org/abs/1901.06974 | A variational scheme for hyperbolic obstacle problems | We consider an obstacle problem for (possibly non-local) wave equations, and we prove existence of weak solutions through a convex minimization approach based on a time discrete approximation scheme. We provide the corresponding numerical implementation and raise some open questions. | \section{Introduction}
Obstacle type problems are nowadays a well-established subject with many dedicated contributions in the recent literature. Obstacle problems for the minimizers of classical energies and regularity of the arising free boundary have been extensively studied, both for local operators (see, e.g., \cite{Ca, Ro18} and references therein) and non-local fractional type operators (see, e.g., \cite{Si} and the review \cite{Ro18}). The corresponding evolutive equations have also been considered, mainly in the parabolic context \cite{CaPeSh, CaFi13, NoOk15, BaFiRo18}. What seems to be missing in the picture is the hyperbolic scenario which, despite being in some cases as natural as the previous ones, has received little attention so far.
Among the available results for hyperbolic obstacle problems there is a series of works by Schatzman and collaborators \cite{Schatzman78, Schatzman80, Schatzman81, PaoliSchatzman02I}, where the existence of a solution is proved via penalty methods and, furthermore, existence of energy-preserving solutions is proved in dimension $1$ whenever the obstacle is concave \cite{Schatzman80}. The problem is also considered in \cite{Maruo85}, where the author proves the existence of a (possibly dissipative) solution within a more general framework but under technical hypotheses. More recently the $1$d situation has been investigated in \cite{Ki09} through a minimization approach based on time discretization, see also \cite{GiSv09, OmKaNa09, Ta94, DaLa11} for contributions on related problems using the same point of view.
Another variational approach to hyperbolic problems, through an elliptic regularization suggested by De Giorgi, is given in \cite{SeTi} and subsequent papers (see for instance \cite{DaDe} for time dependent domains).
In this paper we use a convex minimization approach, relying on a semi-discrete approximation scheme (as in \cite{Ki09, GiSv09, DaLa11}), to deal with more general situations so as to include also non-local hyperbolic problems in the presence of obstacles, in arbitrary dimension. As main results we prove existence of a suitably defined weak solution to the wave equation involving the fractional Laplacian, with or without an obstacle, together with the corresponding energy estimates. These results are summarized in Theorem \ref{thm:main1} and Theorem \ref{thm:main2} (see Sections \ref{sec:free} and \ref{sec:obstacle}). The approximating scheme makes it possible to perform numerical simulations which give quite precise evidence of dynamical effects. In particular, based on our numerical experiments for the obstacle problem, we conjecture that this method is able to select, in cases of non-uniqueness, the most dissipative solution, that is to say the one losing the maximum amount of energy at contact times.
Eventually, we remark that this approach is quite robust and can be extended for instance to the case of adhesive phenomena: in these situations an elastic string interacts with a rigid substrate through an adhesive layer \cite{CocliteFlorioLigaboMaddalena17} and the potential energy governing the interaction can be easily incorporated in our variational scheme.
The paper is organized as follows. We first recall the main properties of the fractional Laplace operator and fractional Sobolev spaces in Section \ref{sec:fractional} and then, in Section \ref{sec:free}, we introduce the time-discretized variational scheme and apply it to the non-local wave equation (with the fractional Laplacian), proving Theorem \ref{thm:main1}. In Section \ref{sec:obstacle} we adapt the scheme so as to include the obstacle problem, proving existence of weak solutions in Theorem \ref{thm:main2}. In the last section we describe the corresponding numerical implementation, provide some examples, and conclude with some remarks and open questions.
\section{Fractional Sobolev spaces and the fractional Laplacian operator}\label{sec:fractional}
In this section we briefly review the main definitions and properties of the fractional setting and we fix the notation used in the rest of the paper. For a more complete introduction to fractional Sobolev spaces we refer to \cite{DiNezzaPalatucciValdinoci12, mclean2000strongly} and references therein.
\textbf{Fractional Sobolev spaces.} Let $\Omega \subset \R^d$ be an open set. For $s \in \R$, we define the Sobolev spaces $H^s(\Omega)$ as follows:
\begin{itemize}
\item for $s \in (0,1)$ and $u \in L^2(\Omega)$, define the Gagliardo semi-norm of $u$ as
\[
[u]_{H^s(\Omega)} = \left( \int_\Omega \int_\Omega \frac{\abs{u(x)-u(y)}^2}{\abs{x-y}^{d+2s}} \, dxdy \right)^{\frac{1}{2}}.
\]
The fractional Sobolev space $H^s(\Omega)$ is then defined as
\[
H^s(\Omega) = \left\{ u \in L^2(\Omega) \,:\, [u]_{H^s(\Omega)} < \infty \right\},
\]
with norm $||u||_{H^s(\Omega)} = (||u||_{L^2(\Omega)}^2 + [u]_{H^s(\Omega)}^2)^{1/2}$;
\item for $s \geq 1$ let us write $s = [s] + \{s\}$, with $[s]$ integer and $0\leq \{s\} < 1$. The space $H^s(\Omega)$ is then defined as
\[
H^s(\Omega) = \{ u \in H^{[s]}(\Omega) \,:\, D^\alpha u \in H^{\{s\}}(\Omega) \text{ for any } \alpha \text{ s.t. } |\alpha| = [s] \},
\]
with norm $||u||_{H^s(\Omega)} = (||u||_{H^{[s]}(\Omega)}^2+\sum_{|\alpha|=[s]}||D^\alpha u||_{H^{\{s\}}(\Omega)}^2)^{1/2}$;
\item for $s < 0$ we define $H^s(\Omega) = (H^{-s}_0(\Omega))^*$, where as usual the space $H^s_0(\Omega)$ is obtained as the closure of $C^\infty_c(\Omega)$ in the $||\cdot||_{H^s(\Omega)}$ norm.
\end{itemize}
\textbf{Fractional Laplacian.} For any $s > 0$, denote by $(-\Delta)^s$ the fractional Laplace operator, which (up to normalization factors) can be defined as follows:
\begin{itemize}
\item for $s \in (0,1)$, we set
\[
-(-\Delta)^s u(x) = \int_{\R^d} \frac{u(x+y)-2u(x)+u(x-y)}{\abs{y}^{d+2s}} \, dy, \quad x \in \R^d;
\]
\item for $s \geq 1$, $s = [s] + \{s\}$, we set $(-\Delta)^s = (-\Delta)^{\{s\}} \circ (-\Delta)^{[s]}$.
\end{itemize}
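Up to normalization, $(-\Delta)^s$ acts on the whole space as the Fourier multiplier $\abs{\xi}^{2s}$. As an illustration (not part of the paper's construction; the function name and periodic grid are our own choices), this multiplier definition can be realized numerically on a uniform periodic grid via the FFT:

```python
import numpy as np

def fractional_laplacian_periodic(u, s, L=2 * np.pi):
    """Apply (-Delta)^s to samples of a periodic function on [0, L),
    using the Fourier multiplier |k|^(2s) (spectral definition)."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # integer wavenumbers for L = 2*pi
    return np.real(np.fft.ifft(np.abs(k) ** (2 * s) * np.fft.fft(u)))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
v = fractional_laplacian_periodic(np.sin(x), s=0.5)
# On the single mode sin(x) we have |k| = 1, so (-Delta)^s sin = sin for every s
```

For $s = 1$ this reduces to $-u''$; for instance $(-\Delta)\sin(2x) = 4\sin(2x)$ on the same grid.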
Let us define, for any $u, v \in H^s(\R^d)$, the bilinear form
\[
[u, v]_{s} = \int_{\R^{d}} (-\Delta)^{s/2}u(x) \cdot (-\Delta)^{s/2}v(x) \, dx
\]
and the corresponding semi-norm $[u]_s = \sqrt{[u,u]_s} = ||(-\Delta)^{s/2}u||_{L^2(\R^d)}$. Define on $H^s(\R^d)$ the norm $||u||_s = (||u||_{L^2(\R^d)}^2 + [u]_s^2 )^{1/2}$, which in turn is equivalent to the norm $||\cdot||_{H^s(\R^d)}$.
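Up to the normalization factors already mentioned, Plancherel's theorem gives a Fourier-side expression for this semi-norm, which also explains the multiplier action of the fractional Laplacian:

```latex
[u]_s^2 = \big\| (-\Delta)^{s/2} u \big\|_{L^2(\R^d)}^2
        = \int_{\R^d} \abs{\xi}^{2s} \, \abs{\hat{u}(\xi)}^2 \, d\xi,
\qquad
\widehat{(-\Delta)^{s} u}(\xi) = \abs{\xi}^{2s}\, \hat{u}(\xi).
```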
\textbf{The spaces $\tilde{H}^s(\Omega)$.} Let $s > 0$ and fix $\Omega$ to be an open bounded set with Lipschitz boundary. The space we are going to work with throughout this paper is
\[
\tilde{H}^s(\Omega) = \{ u \in H^s(\R^d) \,:\, u = 0 \text{ a.e. in } \R^d\setminus \Omega \},
\]
endowed with the $||\cdot||_s$ norm. This space corresponds to the closure of $C^\infty_c(\Omega)$ with respect to the $||\cdot||_s$ norm.
We also have $(\tilde{H}^s(\Omega))^* = H^{-s}(\Omega)$, see \cite[Theorem 3.30]{mclean2000strongly}.
We finally recall the following embedding results (see \cite{DiNezzaPalatucciValdinoci12}).
\begin{theorem}
Let $s > 0$. The following holds:
\begin{itemize}
\item if $2s < d$, then $\tilde H^s(\Omega)$ embeds in $L^q(\Omega)$ continuously for any $q \in [1,2^*]$ and compactly for any $q \in [1,2^*)$, with $2^* = 2d/(d-2s)$;
\item if $2s = d$, then $\tilde H^s(\Omega)$ embeds in $L^q(\Omega)$ continuously for any $q \in [1,\infty)$ and compactly for any $q \in [1,2]$;
\item if $2s > d$, then $\tilde H^s(\Omega)$ embeds continuously in $C^{0,\alpha}(\Omega)$ with $\alpha = (2s-d)/2$.
\end{itemize}
\end{theorem}
\medskip
\section{A variational scheme for the fractional wave equation}\label{sec:free}
In this section, as a first step towards obstacle problems, we extend to the fractional wave equation a time-discretized variational scheme which traces back to Rothe \cite{Ro30} and has since been extensively applied to many different hyperbolic type problems, see e.g. \cite{Ta94, Om97, SvadlenkaOmata08, DaLa11}.
Let $\Omega \subset \R^d$ be an open bounded domain with Lipschitz boundary.
Given $u_0 \in \tilde{H}^s(\Omega)$ and $v_0 \in L^2(\Omega)$, the problem we are interested in is the following: find $u = u(t,x)$ such that
\begin{equation}\label{eq:freewaves}
\begin{system}
& u_{tt} + (-\Delta)^s u = 0 &\quad&\text{in } (0,T) \times \Omega \\
& u(t,x) = 0 &\quad&\text{in } [0,T] \times (\R^d \setminus \Omega) \\
& u(0,x) = u_0(x) &\quad&\text{in } \Omega \\
& u_t(0,x) = v_0(x) &\quad&\text{in } \Omega \\
\end{system}
\end{equation}
where the ``boundary'' condition is imposed on the complement of $\Omega$ due to the non-local nature of the fractional operator. In particular, we look for weak type solutions of \eqref{eq:freewaves}.
\begin{definition}\label{def:weak}
We say a function
\[
u \in L^\infty(0,T; \tilde{H}^s(\Omega)) \cap W^{1,\infty}(0,T;L^2(\Omega)), \quad u_{tt} \in L^\infty(0,T;H^{-s}(\Omega)),
\]
is a weak solution of \eqref{eq:freewaves} if
\begin{equation}\label{eq:eqweak}
\int_{0}^T \int_\Omega u_{tt}(t) \phi(t) \,dxdt + \int_{0}^T [ u(t), \phi(t) ]_{s} \, dt = 0
\end{equation}
for all $\phi \in L^{1}(0,T;\tilde{H}^s(\Omega))$ and the initial conditions are satisfied in the following sense:
\begin{equation}\label{eq:u0free}
\lim_{h\to0^+} \frac1h \int_0^h\left( ||u(t)-u_0||_{L^2(\Omega)}^2 + [u(t)-u_0]_{s}^2 \right)dt = 0
\end{equation}
and
\begin{equation}\label{eq:v0free}
\lim_{h\to0^+} \frac1h \int_0^h||u_t(t)-v_0||_{L^2(\Omega)}^2\,dt = 0.
\end{equation}
\end{definition}
\medskip
\noindent The aim of this section is then to prove the next theorem.
\begin{theorem}\label{thm:main1}
There exists a weak solution of the fractional wave equation \eqref{eq:freewaves}.
\end{theorem}
The existence of such a weak solution will be proved by means of an implicit variational scheme based on the idea of minimizing movements \cite{ambrosio1995minimizing} introduced by De Giorgi, also known as the discrete Morse semiflow approach or Rothe's scheme \cite{Ro30}.
\subsection{Approximating scheme}
For any $n > 0$ let $\tau_n = T/n$, $u_{-1}^n = u_0 - \tau_n v_0$, and $u_0^n = u_0$ (we conventionally set $v_0(x) = 0$ for $x \in \R^d \setminus \Omega$). For any $0 < i \leq n$, given $u^n_{i-2}$ and $u^n_{i-1}$, define
\begin{equation}\label{eq:scheme}
u_i^n = \arg \min_{u \in \tilde{H}^s(\Omega)} J_i^n(u) = \arg \min_{u \in \tilde{H}^s(\Omega)} \left[ \int_\Omega \frac{\abs{u-2u_{i-1}^n+u_{i-2}^n}^2}{2\tau_n^2}\,dx + \frac12 [u]_s^2 \right].
\end{equation}
Each $u_i^n$ is well defined: indeed, existence of a minimizer can be obtained via the direct method of the calculus of variations, while uniqueness follows from the strict convexity of the functional $J_i^n$. Each minimizer $u_i^n$ can be characterized in the following way: take any test function $\phi \in \tilde{H}^s(\Omega)$; then, by minimality of $u_i^n$ in $\tilde{H}^s(\Omega)$, one has
\[
\frac{d}{d\varepsilon} J_i^n(u_i^n+\varepsilon \phi) |_{\varepsilon=0} = 0,
\]
which rewrites as
\begin{equation}\label{eq:ELnoobstacle}
\int_{\Omega} \frac{u_i^n-2u_{i-1}^n+u_{i-2}^n}{\tau_n^2}\,\phi \,dx + [u_i^n , \phi]_{s} = 0 \quad \text{for all } \phi \in \tilde{H}^s(\Omega).
\end{equation}
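In finite dimensions the Euler--Lagrange characterization \eqref{eq:ELnoobstacle} turns each minimization into a single linear solve. A minimal sketch (our own illustration, assuming a generic symmetric positive semi-definite matrix \texttt{A} as a stand-in for a discretization of $(-\Delta)^s$; the function name is ours):

```python
import numpy as np

def step(u_prev2, u_prev1, A, tau):
    """One step u_i of the scheme: the minimizer of J_i^n solves the
    linear system (I / tau^2 + A) u = (2 u_{i-1} - u_{i-2}) / tau^2."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) / tau**2 + A,
                           (2 * u_prev1 - u_prev2) / tau**2)
```

With $A = 0$ the step reduces to the free second-order recursion $u_i = 2u_{i-1} - u_{i-2}$.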
We define the piecewise constant and piecewise linear interpolation in time of the sequence $\{u_i^n\}_i$ over $[-\tau_n,T]$ as follows: let $t_i^n = i\tau_n$, then the piecewise constant interpolant is given by
\begin{equation}\label{eq:uhbar}
\bar{u}^n(t,x) =
\begin{system}
& u_{-1}^n(x) &\quad &t=-\tau_n \\
& u_i^n(x) &\quad &t \in (t_{i-1}^n,t_i^n], \\
\end{system}
\end{equation}
and the piecewise linear one by
\begin{equation}\label{eq:uh}
u^n(t,x) =
\begin{system}
& u_{-1}^n(x) &\quad &t=-\tau_n \\
& \frac{t-t_{i-1}^n}{\tau_n}u_i^n(x) + \frac{t_i^n-t}{\tau_n}u_{i-1}^n(x) &\quad &t \in (t_{i-1}^n,t_i^n]. \\
\end{system}
\end{equation}
Define $v_i^n = (u_i^n-u_{i-1}^n)/\tau_n$, $0\leq i \leq n$, and let $v^n$ be the piecewise linear interpolation over $[0,T]$ of the family $\{v_i^n\}_{i=0}^n$, defined similarly to \eqref{eq:uh}. Taking the variational characterization \eqref{eq:ELnoobstacle} and integrating over $[0,T]$ we obtain
\[
\int_{0}^T \int_\Omega \left( \frac{u^n_t(t) - u^n_t(t-\tau_n)}{\tau_n} \right) \phi(t) \,dxdt + \int_{0}^T [ \bar{u}^n(t), \phi(t) ]_{s} \, dt = 0
\]
for all $\phi \in L^1(0,T;\tilde{H}^s(\Omega))$, or equivalently
\begin{equation}\label{eq:ELn}
\int_{0}^T \int_\Omega v_t^n(t) \phi(t) \,dxdt + \int_{0}^T [ \bar{u}^n(t), \phi(t) ]_{s} \, dt = 0.
\end{equation}
The idea is now to pass to the limit $n \to \infty$ and prove, using \eqref{eq:ELn}, that the approximations $u^n$ and $\bar{u}^n$ converge to a weak solution $u$ of \eqref{eq:freewaves}. To do so, the main tool is the following estimate.
\begin{prop}[Key estimate]\label{prop:keyestimate}
The approximate solutions $\bar{u}^n$ and $u^n$ satisfy
\[
\norm{ u_t^n(t) }_{ L^2(\Omega) }^{ 2 } + [ \bar{u}^n(t) ]_{s}^{ 2 } \leq C(u_0, v_0)
\]
for all $t \in [0,T]$, with $C(u_0, v_0)$ a constant independent of $n$.
\end{prop}
\begin{proof}
For each fixed $i \in \{1,\dots,n\}$ consider equation \eqref{eq:ELnoobstacle} with $\phi = u_{i-1}^n-u_i^n$, so that we have
\[
\begin{aligned}
0 &= \int_\Omega \frac{(u_{i}^n-2u_{i-1}^n+u_{i-2}^n)(u_{i-1}^n-u_{i}^n)}{\tau_n^2}\,dx + [u_i^n, u_{i-1}^n - u_i^n]_{s} \\
&\leq \frac{1}{2\tau_n^2} \int_\Omega (u_{i-1}^n-u_{i-2}^n)^2 - (u_{i}^n-u_{i-1}^n)^2 \, dx + \frac{1}{2} ( [u_{i-1}^n]_{s}^{ 2 } - [u_i^n]_{s}^{ 2 } ),
\end{aligned}
\]
where we use the elementary inequality $b(a-b) = \frac{1}{2}(a^2-b^2) - \frac{1}{2}(a-b)^2 \leq \frac{1}{2}(a^2-b^2)$, applied both in $L^2(\Omega)$ and with respect to the bilinear form $[\cdot,\cdot]_s$. Summing for $i = 1,\dots,k$, with $1 \leq k \leq n$, we get
\[
\begin{aligned}
\left\lVert\frac{u_k^n-u_{k-1}^n}{\tau_n}\right\rVert_{ L^2(\Omega) }^{ 2 } + [u_k^n]_{s}^{ 2 } &\leq \frac{1}{\tau_n^2} \norm{ u_0-u_{-1}^n }_{ L^2(\Omega) }^{ 2 } + [u_0]_{s}^{ 2 } \\
&= ||v_0||_{L^2(\Omega)}^2 + [u_0]_{s}^{ 2 }.
\end{aligned}
\]
The result follows by the very definition of $u^n$ and $\bar{u}^n$.
\end{proof}
\begin{remark}\label{rem:energy}{\rm
Given a weak solution $u$ of \eqref{eq:freewaves} we can consider the energy
\[
E(t) = ||u_t(t)||_{L^2(\Omega)}^2 + [u(t)]_s^2.
\]
One can easily see by an approximation argument that $E$ is conserved throughout the evolution. As a by-product of the last proof, the energy of our approximations is non-increasing, i.e. $E_i^n \leq E_{i-1}^n$, where $E_i^n = E(u^n(t_i^n)) = ||v_i^n||_{L^2(\Omega)}^2 + [u_i^n]_{s}^2$. We also remark that this estimate cannot be improved: generally speaking, the approximations $u^n$ are not energy preserving.
}
\end{remark}
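The discrete energy monotonicity $E_i^n \le E_{i-1}^n$ of the remark can be observed numerically. A small experiment (our own illustration, using the standard finite-difference Dirichlet Laplacian in place of $(-\Delta)^s$; all parameter choices are ours):

```python
import numpy as np

m, n_steps, T = 50, 200, 1.0
tau = T / n_steps
h = 1.0 / (m + 1)
# Standard 1d Dirichlet Laplacian as a stand-in for a discretization of (-Delta)^s
A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2

x = np.linspace(h, 1 - h, m)
u0, v0 = np.sin(np.pi * x), np.zeros(m)
u_prev2, u_prev1 = u0 - tau * v0, u0

M = np.eye(m) / tau**2 + A
energies = []
for _ in range(n_steps):
    u = np.linalg.solve(M, (2 * u_prev1 - u_prev2) / tau**2)  # minimize J_i^n
    v = (u - u_prev1) / tau
    energies.append(h * (v @ v) + h * (u @ A @ u))            # discrete E_i^n
    u_prev2, u_prev1 = u_prev1, u
# The list `energies` is non-increasing, as the proof of the key estimate predicts
```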
Thanks to Proposition \ref{prop:keyestimate}, we can now prove convergence of the $u^n$.
\begin{prop}[Convergence of $u^n$]\label{prop:convun}
There exists a subsequence of steps $\tau_n \to 0$ and a function $u \in L^\infty(0,T;\tilde{H}^s(\Omega)) \cap W^{1,\infty}(0,T;L^2(\Omega))$, with $u_{tt} \in L^\infty(0,T;H^{-s}(\Omega))$, such that
\[
\begin{aligned}
&u^n \to u &\text{ in }& C^0([0,T];L^2(\Omega)) \\
&u_t^n \rightharpoonup^* u_t &\text{ in }& L^\infty(0,T;L^2(\Omega)) \\
&u^n(t) \rightharpoonup u(t) &\text{ in }& \tilde{H}^s(\Omega) \text{ for any } t \in [0,T]. \\
\end{aligned}
\]
\end{prop}
\begin{proof}
From Proposition \ref{prop:keyestimate} it follows that
\begin{equation}
u_t^n(t) \text{ and } v^n(t) \text{ are bounded in } L^2(\Omega) \text{ uniformly in $t$ and $n$,}
\end{equation}
\begin{equation}\label{eq:boundhsdot}
u^n(t) \text{ is bounded in the } [\cdot]_s \text{ semi-norm uniformly in $t$ and $n$.}
\end{equation}
Observe now that $u^n(\cdot,x)$ is absolutely continuous on $[0,T]$; thus, for all $t_1, t_2 \in [0,T]$ with $t_1 < t_2$, we have
\[
\begin{aligned}
||u^n(t_2,\cdot) - u^n(t_1,\cdot)||_{L^2(\Omega)} &= \left( \int_{\Omega} \left( \int_{t_1}^{t_2} u_t^n(t,x) \,dt \right)^2\,dx \right)^\frac12 \\
&\leq \left( \int_{t_1}^{t_2} ||u_t^n(t,\cdot)||_{L^2(\Omega)}^2 \,dt \right)^\frac12 (t_2-t_1)^\frac12 \leq C(t_2-t_1)^\frac12,
\end{aligned}
\]
where we made use of H\"{o}lder's inequality and Fubini's theorem.
This estimate yields
\begin{equation}\label{eq:boundl2}
u^n(t) \text{ is bounded in } L^2(\Omega) \text{ uniformly in $t$ and $n$},
\end{equation}
\begin{equation}\label{eq:equicont}
u^n \text{ is equicontinuous in } C^0([0,T];L^2(\Omega)).
\end{equation}
From \eqref{eq:ELn}, using \eqref{eq:boundl2} and \eqref{eq:boundhsdot}, we can also deduce that $v_t^n(t)$ is bounded in $H^{-s}(\Omega)$ uniformly in $t$ and $n$. All together we have
\begin{equation}\label{eq:unbounds}
u^n \text{ is bounded in } W^{1,\infty} (0,T;L^2(\Omega)) \text{ and in } L^\infty(0,T;\tilde{H}^s(\Omega)),
\end{equation}
\begin{equation}\label{eq:vnbounds}
v^n \text{ is bounded in } L^\infty(0,T;L^2(\Omega)) \text{ and in } W^{1,\infty}(0,T;H^{-s}(\Omega)).
\end{equation}
Thanks to \eqref{eq:equicont}, \eqref{eq:unbounds} and \eqref{eq:vnbounds} there exists a function $u \in L^\infty(0,T;\tilde{H}^s(\Omega)) \cap W^{1,\infty}(0,T;L^2(\Omega)) \cap C^0([0,T];L^2(\Omega))$ such that
\[
\begin{aligned}
&u^n \to u &\text{ in }& C^0([0,T];L^2(\Omega)) \\
&u_t^n \rightharpoonup^* u_t &\text{ in }& L^\infty(0,T;L^2(\Omega)) \\
&u^n(t) \rightharpoonup u(t) &\text{ in }& \tilde{H}^s(\Omega) \text{ for any } t \in [0,T] \\
\end{aligned}
\]
and there exists $v \in W^{1,\infty}(0,T;H^{-s}(\Omega))$ such that
\[
v^n \rightharpoonup^* v \text{ in } L^\infty(0,T;L^2(\Omega)) \quad\text{and}\quad v^n \rightharpoonup^* v \text{ in } W^{1,\infty}(0,T;H^{-s}(\Omega)).
\]
As one would expect $v(t) = u_t(t)$ as elements of $L^2(\Omega)$ for a.e. $t \in [0,T]$: indeed, for $t \in (t_{i-1}^n,t_i^n]$ and $\phi \in \tilde{H}^s(\Omega)$, we have by construction $u_t^n(t) = v^n(t^n_i)$, and so
\[
\begin{aligned}
\int_{\Omega} (u_t^n(t) - v^n(t))\phi\,dx &= \int_{\Omega} (v^n(t_i^n) - v^n(t))\phi\,dx = \int_{\Omega} \left(\int_{t}^{t^n_i} v_t^n(s)\,ds\right)\phi\,dx \\
&\leq \tau_n ||v_t^n||_{L^\infty(0,T;H^{-s}(\Omega))} ||\phi||_{H^s(\R^d)}
\end{aligned}
\]
which implies, for any $\psi(t,x) = \phi(x)\eta(t)$ with $\phi \in \tilde{H}^s(\Omega)$ and $\eta \in C^1_0([0,T])$, that
\[
\begin{aligned}
&\int_0^T \left[ \int_\Omega (u_t(t)-v(t))\phi\,dx\right]\eta(t) \,dt = \int_0^T\int_\Omega (u_t(t)-v(t))\psi \,dxdt \\
&= \lim_{n\to\infty} \int_0^T\int_\Omega (u^n_t(t)-v^n(t))\psi \,dxdt = \lim_{n\to\infty} \int_0^T\left[\int_\Omega (u^n_t(t)-v^n(t))\phi\,dx\right]\eta(t)\,dt \\
&\leq \lim_{n\to\infty} \tau_n T ||v_t^n||_{L^\infty(0,T;H^{-s}(\Omega))} ||\phi||_{H^s(\R^d)} ||\eta||_{\infty} = 0.
\end{aligned}
\]
Hence we have
\[
\int_\Omega (u_t(t)-v(t))\phi\,dx = 0 \quad \text{ for all }\phi\in \tilde{H}^s(\Omega) \text{ and a.e. } t \in [0,T],
\]
which yields the desired conclusion. Thus, $v_t = u_{tt}$ and
$
u_{tt} \in L^\infty(0,T;H^{-s}(\Omega)).
$
\end{proof}
\begin{prop}[Convergence of $\bar{u}^n$]\label{prop:convunbar}
Let $u$ be the limit function obtained in Proposition \ref{prop:convun}, then
\[
\bar{u}^n \rightharpoonup^* u \text{ in } L^\infty(0,T;\tilde{H}^s(\Omega)).
\]
\end{prop}
\begin{proof}
By definition we have
\[
\begin{aligned}
&\sup_{t \in [0,T]} \int_\Omega \abs{u^n(t,x)-\bar{u}^n(t,x)}^2 \, dx = \max_{1\leq i\leq n} \sup_{t \in [t_{i-1}^n,t_i^n]} (t-t_i^n)^2 \int_{\Omega} (v_i^n)^2\,dx \\
&\leq \tau_n^2 \max_{1\leq i\leq n} ||v_i^n||_{L^2(\Omega)}^2 \leq C\tau_n^2
\end{aligned}
\]
which implies $\bar{u}^n \to u$ in $L^\infty(0,T;L^2(\Omega))$. Furthermore, taking into account Proposition \ref{prop:keyestimate}, $\bar{u}^n(t)$ is bounded in $\tilde{H}^s(\Omega)$ uniformly in $t$ and $n$, so that we have $\bar{u}^n \rightharpoonup^* u$ in $L^\infty(0,T;\tilde{H}^s(\Omega))$ and, as it happens for $u^n$, $\bar{u}^n(t) \rightharpoonup u(t)$ in $\tilde{H}^s(\Omega)$ for any $t \in [0,T]$.
\end{proof}
We can now pass to the limit in \eqref{eq:ELn} to prove that $u$ is a weak solution, thus proving Theorem \ref{thm:main1}.
\begin{proof}[Proof of Theorem \ref{thm:main1}]
The limit function $u$ obtained in Proposition \ref{prop:convun} is a weak solution of \eqref{eq:freewaves}. Indeed, for each $n>0$, by \eqref{eq:ELn} one has
\[
\int_{0}^T \int_\Omega v_t^n(t) \phi(t) \,dxdt + \int_{0}^T [ \bar{u}^n(t), \phi(t) ]_{s} \, dt = 0
\]
for any $\phi \in L^1(0,T;\tilde{H}^s(\Omega))$. Passing to the limit as $n \to \infty$, using Propositions \ref{prop:convun} and \ref{prop:convunbar}, we immediately get
\[
\int_{0}^T \int_\Omega u_{tt}(t) \phi(t) \,dxdt + \int_{0}^T [ u(t), \phi(t) ]_{s} \, dt = 0.
\]
Regarding the initial conditions \eqref{eq:u0free} and \eqref{eq:v0free} it suffices to prove that, if $t_k \to 0$ are Lebesgue points for both $t \mapsto ||u_t(t)||_{L^2(\Omega)}^2$ and $t \mapsto [u(t)]_{s}^2$, then
\begin{equation}\label{eq:normconv}
[u(t_k)]_{s}^2 \to [u_0]_{s}^2 \quad \text{and} \quad ||u_t(t_k)||_{L^2(\Omega)}^2\to ||v_0||_{L^2(\Omega)}^2.
\end{equation}
From the fact that $u_t \in W^{1,\infty}(0,T;H^{-s}(\Omega))$ we have $u_t(t_k) \to v_0$ in $H^{-s}(\Omega)$ and, since $u_t(t_k)$ is bounded in $L^2(\Omega)$ and $\tilde{H}^s(\Omega)$ is dense in $L^2(\Omega)$, we also have $u_t(t_k) \rightharpoonup v_0$ in $L^2(\Omega)$. On the other hand $u(t_k) \to u(0) = u_0$ strongly in $L^2(\Omega)$ because $u \in C^0([0,T];L^2(\Omega))$ and, since $u(t_k)$ is bounded in $\tilde{H}^s(\Omega)$, $u(t_k) \rightharpoonup u(0)$ in $\tilde{H}^s(\Omega)$ and $[u_0]_{s} \leq \liminf_k [u(t_k)]_{s}$. To prove \eqref{eq:normconv} it suffices to observe that
\[
\limsup_{k\to\infty} \left([u(t_k)]_{s}^2 + ||u_t(t_k)||_{L^2(\Omega)}^2\right) \leq [u_0]_{s}^2 + ||v_0||_{L^2(\Omega)}^2
\]
by energy conservation.
\end{proof}
\section{The obstacle problem}\label{sec:obstacle}
In this section we switch our focus to hyperbolic obstacle problems for the fractional Laplacian. We will see how a weak solution can be obtained by means of a slight modification of the previously presented scheme, whose core idea has already been used in other obstacle type problems (for example in \cite{Ki09, NoOk15}).
As above, let $\Omega \subset \R^d$ be an open bounded domain with Lipschitz boundary and consider $g \colon \Omega \to \R$, with
\[
g \in C^0(\bar{\Omega}), \quad g<0 \text{ on } \partial \Omega.
\]
We are still interested in a non-local wave type dynamic like the one of equation \eqref{eq:freewaves}, where now we require the solution $u$ to lie above $g$: this way $g$ can be interpreted as a physical obstacle that our solution cannot go below. Consider then an initial datum
\[
u_0 \in \tilde{H}^s(\Omega), \quad u_0 \geq g \text{ a.e. in } \Omega,
\]
and $v_0 \in L^2(\Omega)$.
Equation \eqref{eq:freewaves}, with the addition of the obstacle $g$, reads as follows: find a function $u = u(t,x)$ such that
\begin{equation}\label{eq:obstaclewaves}
\begin{system}
& u_{tt} + (-\Delta)^s u \geq 0 &\quad&\text{in } (0,T) \times \Omega \\
& u(t,\cdot) \geq g &\quad&\text{in } [0,T] \times \Omega \\
& (u_{tt} + (-\Delta)^s u)(u-g) = 0 &\quad&\text{in } (0,T) \times \Omega \\
& u(t,x) = 0 &\quad&\text{in } [0,T] \times (\R^d \setminus \Omega) \\
& u(0,x) = u_0(x) &\quad&\text{in } \Omega \\
& u_t(0,x) = v_0(x) &\quad&\text{in } \Omega \\
\end{system}
\end{equation}
In this system the function $u$ is required to solve the obstacle-free equation away from the obstacle, where $u-g>0$, while only a variational inequality (first line) is required when $u$ touches $g$. The main difficulty in \eqref{eq:obstaclewaves} is the treatment of contact times: the system does not specify what behavior arises when $u$ hits $g$, leaving us free to choose between ``bouncing'' solutions, in which the profile hits the obstacle and bounces back with a fraction of its previous velocity (see e.g. \cite{PaoliSchatzman02I}), and ``adherent'' solutions, in which the profile hits the obstacle and stops (thus dissipating energy). The definition of weak solution we are going to consider includes both of these cases.
\begin{definition}\label{def:weakobst}
We say a function $u = u(t,x)$ is a weak solution of \eqref{eq:obstaclewaves} if
\begin{enumerate}
\item $u \in L^\infty(0,T; \tilde{H}^s(\Omega)) \cap W^{1,\infty}(0,T;L^2(\Omega))$ and $u(t,x) \geq g(x)$ for a.e. $(t,x) \in (0,T)\times \Omega$;
\item there exist weak left and right derivatives $u_t^{\pm}$ on $[0,T]$ (with appropriate modifications at endpoints);
\item for all $\phi \in W^{1,\infty}(0,T;L^2(\Omega)) \cap L^1(0,T;\tilde{H}^s(\Omega))$ with $\phi \geq 0$, $\text{spt}\,\phi \subset [0,T)$, we have
\[
-\int_{0}^{T} \int_{\Omega} u_t\phi_t \, dxdt + \int_{0}^{T} [u, \phi]_{s} \, dt - \int_\Omega v_0\,\phi(0) \, dx \geq 0
\]
\item the initial conditions are satisfied in the following sense
\[
u(0,\cdot) = u_0, \quad \int_\Omega (u_t^+(0)-v_0)(\phi-u_0) \, dx \geq 0 \quad \forall \phi \in \tilde{H}^s(\Omega), \phi \geq g.
\]
\end{enumerate}
\end{definition}
\medskip
\noindent Within this framework we can partially extend the construction presented in the previous section so as to prove existence of a weak solution.
\begin{theorem}\label{thm:main2}
There exists a weak solution $u$ of the hyperbolic obstacle problem \eqref{eq:obstaclewaves}, and $u$ satisfies the energy inequality
\begin{equation}\label{eq:eneine}
||u_t^\pm(t)||_{L^2(\Omega)}^2 + [u(t)]_{s}^2 \leq ||v_0||_{L^2(\Omega)}^2 + [u_0]_{s}^2 \quad \text{for a.e. }t\in[0,T].
\end{equation}
\end{theorem}
\medskip
We remark here that this definition of weak solution is weaker than the one proposed in \cite{Maruo85, Eck05}, in which the authors construct a solution to \eqref{eq:obstaclewaves} as a limit of (energy preserving) solutions $u^n$ of regularized systems, where the constraint $u^n \geq g$ is turned into a penalization term in the equation. Furthermore, to the best of our knowledge, the problem of the existence of an energy preserving weak solution to \eqref{eq:obstaclewaves} is still open: one would expect the limit function in \cite{Maruo85, Eck05} to be the best known candidate, while a partial result for concave obstacles in $1$d was provided by Schatzman in \cite{Schatzman80}.
\subsection{Approximating scheme}\label{subsec:appschemeobtacle}
The idea is to replicate the scheme presented in Section \ref{sec:free} for the obstacle-free dynamic: define
\[
K_g = \{ u \in \tilde{H}^s(\Omega) \,|\, u \geq g \text{ a.e. in } \Omega \}
\]
and, for any $n > 0$, let $\tau_n = T/n$. Define $u_{-1}^n = u_0 - \tau_n v_0$ and $u_0^n = u_0$, and construct recursively the family of functions $\{u_i^n\}_{i=1}^n \subset \tilde{H}^s(\Omega)$ as
\[
u_i^n = \arg \min_{u \in K_g} J_i^n(u),
\]
with $J_i^n$ defined as in \eqref{eq:scheme}. Notice how the minimization is now over functions $u \geq g$ in $\Omega$, so as to respect the additional constraint introduced by the obstacle. Since $K_g \subset \tilde{H}^s(\Omega)$ is convex, existence and uniqueness of each $u_i^n$ can be proved by standard arguments. Regarding the variational characterization of each minimizer $u_i^n$, we cannot take arbitrary variations $\phi \in \tilde{H}^s(\Omega)$ (we may exit the feasible set $K_g$), so we need to be more careful: we take any test function $\phi \in K_g$ and consider $(1-\varepsilon)u_i^n + \varepsilon \phi$, which by convexity belongs to $K_g$ for any $\varepsilon \in [0,1]$. Thus, since $u_i^n$ minimizes $J_i^n$, we have the inequality
\[
\frac{d}{d\varepsilon} J_i^n(u_i^n+\varepsilon (\phi-u_i^n)) |_{\varepsilon=0} \geq 0,
\]
which rewrites as
\begin{equation}\label{eq:vardis}
\int_{\Omega} \frac{u_i^n-2u_{i-1}^n+u_{i-2}^n}{\tau_n^2}(\phi-u_i^n)\,dx + [u_i^n , \phi-u_i^n]_{s} \geq 0 \quad \text{for all } \phi \in K_g.
\end{equation}
In particular, since every $\phi \geq u_i^n$ is an admissible test function, we also have
\begin{equation}\label{eq:vardis_simple}
\int_{\Omega} \frac{u_i^n-2u_{i-1}^n+u_{i-2}^n}{\tau_n^2}\phi\,dx + [u_i^n , \phi]_{s} \geq 0 \quad \text{for all } \phi \in \tilde{H}^s(\Omega), \phi \geq 0.
\end{equation}
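In a finite-dimensional discretization, the constrained minimization over $K_g$ can be approximated, for instance, by projected gradient descent on the quadratic functional $J_i^n$. A sketch under our own assumptions (the solver choice and names are ours; \texttt{A} is a symmetric positive semi-definite stand-in for $(-\Delta)^s$):

```python
import numpy as np

def obstacle_step(u_prev2, u_prev1, g, A, tau, iters=500):
    """Approximately minimize J_i^n over {u >= g} by projected gradient descent.
    The gradient of J_i^n is M u - b, with M = I/tau^2 + A and
    b = (2 u_{i-1} - u_{i-2}) / tau^2; each iteration takes a gradient
    step and projects back onto the feasible set {u >= g}."""
    M = np.eye(len(g)) / tau**2 + A
    b = (2 * u_prev1 - u_prev2) / tau**2
    lr = 1.0 / np.linalg.norm(M, 2)              # step size 1/L, L = spectral norm of M
    u = np.maximum(u_prev1, g)                   # feasible starting point
    for _ in range(iters):
        u = np.maximum(g, u - lr * (M @ u - b))  # gradient step + projection onto K_g
    return u
```

When the unconstrained minimizer already lies above $g$ this returns the free step; otherwise the profile is clipped at the obstacle, mimicking the loss of velocity at contact.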
We define $\bar{u}^n$ and $u^n$ as, respectively, the piecewise constant and the piecewise linear interpolation in time of $\{u_i^n\}_i$ (as in \eqref{eq:uh}, \eqref{eq:uhbar}), and $v^n$ as the piecewise linear interpolant of velocities $v_i^n = (u_i^n-u_{i-1}^n)/\tau_n$, $0\leq i \leq n$. Using \eqref{eq:vardis_simple}, the analogue of \eqref{eq:ELnoobstacle} takes the following form
\[
\int_{0}^T \int_\Omega \left( \frac{u^n_t(t) - u^n_t(t-\tau_n)}{\tau_n} \right) \phi(t) \,dxdt + \int_{0}^T [ \bar{u}^n(t), \phi(t) ]_{s} \, dt \geq 0
\]
for all $\phi \in L^1(0,T;\tilde{H}^s(\Omega))$, $\phi(t,x) \geq 0$ for a.e. $(t,x) \in (0,T)\times \Omega$.
In view of a convergence result, we observe that the same energy estimate of Proposition \ref{prop:keyestimate} extends to this new context: for any $n>0$, we have
\[
\norm{ u_t^n(t) }_{ L^2(\Omega) }^{ 2 } + [ \bar{u}^n(t) ]_{s}^{ 2 } \leq C(u_0, v_0)
\]
for all $t \in [0,T]$, with $C(u_0, v_0)$ a constant independent of $n$. The exact same proof of Proposition \ref{prop:keyestimate} applies: just observe that, taking $\phi = u_{i-1}^n$ in \eqref{eq:vardis}, one gets
\[
0 \leq \int_\Omega \frac{(u_{i}^n-2u_{i-1}^n+u_{i-2}^n)(u_{i-1}^n-u_{i}^n)}{\tau_n^2}\,dx + [u_i^n, u_{i-1}^n - u_i^n]_{s}
\]
and then the rest follows. Convergence of the interpolants is then a direct consequence.
\begin{prop}[Convergence of $u^n$ and $\bar{u}^n$, obstacle case]\label{prop:convunobstacle}
There exists a subsequence of steps $\tau_n \to 0$ and a function $u \in L^\infty(0,T;\tilde{H}^s(\Omega)) \cap W^{1,\infty}(0,T;L^2(\Omega))$ such that
\[
\begin{aligned}
&u^n \to u \text{ in } C^0([0,T];L^2(\Omega)), &\quad& \bar{u}^n \rightharpoonup^* u \text{ in } L^\infty(0,T;\tilde{H}^s(\Omega)), \\
&u_t^n \rightharpoonup^* u_t \text{ in } L^\infty(0,T;L^2(\Omega)), &\quad& u^n(t) \rightharpoonup u(t) \text{ in } \tilde{H}^s(\Omega) \text{ for any } t \in [0,T], \\
\end{aligned}
\]
and furthermore $u(t,x)\geq g(x)$ for a.e. $(t,x) \in [0,T]\times\Omega$.
\end{prop}
\begin{proof}
To obtain the existence of $u$ and all the convergences we can repeat the first half of the proof of Proposition \ref{prop:convun} and the proof of Proposition \ref{prop:convunbar}. The fact that $u(t,x)\geq g(x)$ for a.e. $(t,x) \in [0,T]\times\Omega$ is a direct consequence of the fact that $u_i^n \in K_g$ for all $n$ and $0\leq i\leq n$.
\end{proof}
The missing step with respect to the obstacle-free dynamic is that, generally speaking, $u_{tt} \notin L^\infty(0,T;H^{-s}(\Omega))$. The cause of this behavior is clear already in $1$d: suppose the obstacle is $g=0$ and imagine a flat region of $u$ moving downwards at constant speed; when this region reaches the obstacle the motion cannot continue downwards (we need to stay above $g$), and so the velocity must display an instantaneous change on a region of non-zero measure (within our scheme the motion stops on the obstacle and the velocity drops to $0$ on the whole contact region). Due to this possible behavior of $u_t$, we cannot expect $u_{tt}$ to possess the same regularity as in the obstacle-free case. Nevertheless, such discontinuities in time of $u_t$ are somehow controllable and we can still provide some regularity results, which are collected in the following propositions.
\begin{prop}\label{prop:FBV}
Let $u$ be the weak limit obtained in Proposition \ref{prop:convunobstacle} and, for any fixed $0\leq \phi\in \tilde{H}^s(\Omega)$, let $F \colon [0,T] \to \R$ be defined as
\begin{equation}\label{eq:F}
F(t) = \int_{\Omega}u_t(t)\phi\,dx.
\end{equation}
Then $F \in BV(0,T)$ and, in particular, $u^n_t(t) \rightharpoonup u_t(t)$ in $L^2(\Omega)$ for a.e. $t \in [0,T]$.
\end{prop}
\begin{proof}
Let us fix $\phi \in \tilde{H}^s(\Omega)$ with $\phi \geq 0$, and consider the functions $F^n \colon [0,T] \to \R$ defined as
\begin{equation}\label{eq:Fn}
F^n(t) = \int_{\Omega} u_t^n(t)\phi \,dx.
\end{equation}
Observe that $||F^n||_{L^1(0,T)}$ is uniformly bounded because $u_t^n$ is bounded in $L^2(\Omega)$ uniformly in $n$ and $t$. Furthermore, for every fixed $n > 0$ and $0\leq i\leq n$, we deduce from \eqref{eq:vardis_simple} that
\begin{equation}\label{eq:ELn2}
\Bigg\lvert \int_{\Omega} (v_i^n - v_{i-1}^n)\phi\,dx \Bigg\rvert - \int_\Omega (v_i^n-v_{i-1}^n)\phi\,dx \leq \tau_n\abs{[u_i^n,\phi]_{s}} + \tau_n[u_i^n,\phi]_{s}.
\end{equation}
Summing over $i = 1,\dots,n$ and using Proposition \ref{prop:keyestimate}, we get
\[
\begin{aligned}
\sum_{i=1}^{n} &\Bigg\lvert \int_{\Omega} (v_i^n - v_{i-1}^n)\phi\,dx \Bigg\rvert \leq \int_\Omega v_{n}^n\phi \, dx - \int_\Omega v_0\phi\,dx + \sum_{i=1}^{n} \tau_n\abs{[u_i^n,\phi]_{s}} + \sum_{i=1}^{n} \tau_n[u_i^n,\phi]_{s} \\ &\leq ||v_n^n||_{L^2(\Omega)} ||\phi||_{L^2(\Omega)} + ||v_0||_{L^2(\Omega)} ||\phi||_{L^2(\Omega)} + 2\tau_n \sum_{i=1}^{n} \abs{[u_i^n,\phi]_{s}} \\ &\leq ||v_n^n||_{L^2(\Omega)} ||\phi||_{L^2(\Omega)} + ||v_0||_{L^2(\Omega)} ||\phi||_{L^2(\Omega)} + 2\tau_n \sum_{i=1}^{n} [u_i^n]_{s} [\phi]_{s} \\ &\leq C||\phi||_{H^s(\R^d)}
\end{aligned}
\]
with $C$ independent of $n$. Thus, $\{F^n\}_n$ is uniformly bounded in $BV(0,T)$ and, by Helly's selection theorem, there exist a subsequence (not relabeled) and a function $\bar{F}$ of bounded variation such that $F^n(t) \to \bar{F}(t)$ for every $t \in (0,T)$.
Now take $\psi(t,x) = \phi(x) \eta(t)$ with $\eta \in C^\infty_c(0,T)$; using that $u_t^n \rightharpoonup^* u_t$ in $L^\infty(0,T;L^2(\Omega))$, one has
\[
\begin{aligned}
&\int_{0}^{T}\int_{\Omega} u_t(t)\psi \,dxdt = \lim_{n \to \infty} \int_{0}^{T} \int_\Omega u_t^n(t)\psi \,dxdt =
\lim_{n \to \infty} \int_{0}^{T} \int_\Omega u_t^n(t)\phi \,dx \,\eta(t) dt \\ &=\int_{0}^{T} \lim_{n \to \infty} \int_\Omega u_t^n(t)\phi \,dx \,\eta(t) \,dt = \int_{0}^{T} \bar{F}(t)\eta(t)\,dt
\end{aligned}
\]
where the passage to the limit under the sign of integral is possible due to the pointwise convergence of $F^n$ to $\bar{F}$ combined with the dominated convergence theorem. We conclude
\[
\int_{0}^{T} \left( \int_{\Omega} u_t(t)\phi \,dx -\bar{F}(t)\right) \eta(t)\,dt = 0
\]
and, by the arbitrariness of $\eta$, we have $F = \bar{F}$ for a.e. $t \in (0,T)$, which is to say $F \in BV(0,T)$. In particular,
\[
\int_{\Omega} u_t(t) \phi \,dx = F(t) = \lim_{n\to\infty} \int_{\Omega} u_t^n(t)\phi\,dx \quad \text{ for a.e. }t\in (0,T),
\]
meaning $u_t^n(t) \rightharpoonup u_t(t)$ in $L^2(\Omega)$ for almost every $t\in(0,T)$: indeed, the last equality first extends to every $\phi \in \tilde{H}^s(\Omega)$ (decomposing $\phi=\phi^+-\phi^-$ into its positive and negative parts) and then to every $\phi \in L^2(\Omega)$, since $\tilde{H}^s(\Omega)$ is dense in $L^2(\Omega)$.
\end{proof}
\begin{remark}
In the rest of this section we choose to use the ``precise representative'' of $u_t$ given by $u_t(t) = $ weak-$L^2$ limit of $u^n_t(t)$, which is then defined for all $t \in [0,T]$.
\end{remark}
\begin{prop}
Fix $0\leq \phi\in \tilde{H}^s(\Omega)$ and let $F$ be defined as in \eqref{eq:F}. Then, for any $t \in (0,T)$, we have
\[
\lim_{r\to t^-} F(r) \leq \lim_{s\to t^+}F(s).
\]
\end{prop}
\begin{proof}
First of all, we observe that the limits we are interested in exist because $F \in BV(0,T)$. Now fix $t \in (0,T)$ and let $0<r<t<s<T$. For each $n$ define $r_n$ and $s_n$ such that $r \in (t^n_{r_n-1},t^n_{r_n}]$ and $s \in (t^n_{s_n-1},t^n_{s_n}]$. Considering the functions $F^n$ defined in \eqref{eq:Fn} and taking into account \eqref{eq:ELn2}, one can see that
\[
\begin{aligned}
F^n(s)-F^n(r) &= \int_{\Omega} (u_t^n(s)-u_t^n(r))\phi\,dx = \int_{\Omega} (v^n_{s_n}-v^n_{r_n})\phi \,dx \\
&= \sum_{i=r_n+1}^{s_n} \int_{\Omega} (v^n_{i}-v^n_{i-1})\phi \,dx \geq \tau_n \sum_{i=r_n+1}^{s_n} \left( [u_i^n,\phi]_{s} - \abs{[u_i^n,\phi]_{s}} \right) \\
&\geq -2C \tau_n (s_n-r_n) ||\phi||_{H^s(\R^d)}
\end{aligned}
\]
for some positive constant $C$ independent of $n$. Since $|s-r|\geq |t^n_{s_n-1}-t^n_{r_n}|=\tau_n(s_n-1-r_n)$ we can conclude
\[
F^n(s)-F^n(r) \geq -2C|s-r|\cdot||\phi||_{H^s(\R^d)}-2C\tau_n||\phi||_{H^s(\R^d)}.
\]
Passing to the limit $n \to \infty$ we get $F(s)-F(r) \geq -2C|s-r|\cdot||\phi||_{H^s(\R^d)}$, which in turn implies the conclusion.
\end{proof}
The last result tells us that the velocity $u_t$ does not exhibit sudden changes in regions where it is positive, in accordance with the fact that, whenever the profile moves upwards, there are no obstacles to the dynamics, and $u_t$ is expected to have, at least locally in time and space, the same regularity it has in the obstacle-free case.
We now turn to proving conditions 2, 3 and 4 of our definition of weak solution, thus proving Theorem \ref{thm:main2}.
\begin{proof}[Proof of Theorem \ref{thm:main2}]
Let $u$ be the limit function obtained in Proposition \ref{prop:convunobstacle}. We verify one by one the four conditions required in Definition \ref{def:weakobst}.
\emph{(1.)} The first condition is verified thanks to Proposition \ref{prop:convunobstacle}.
\emph{(2.)} Existence of weak left and right derivatives $u_t^{\pm}$ on $[0,T]$ follows from Proposition \ref{prop:FBV}: just observe that, for any fixed $\phi \in \tilde{H}^s(\Omega)$, the function
\[
F(t) = \int_{\Omega}u_t(t)\phi\,dx
\]
belongs to $BV(0,T)$, and thus left and right limits of $F$ are well defined for any $t \in [0,T]$. This, in turn, implies condition 2. in our definition of weak solution.
\emph{(3.)} For $n > 0$ and any test function $\phi \in W^{1,\infty}(0,T;L^2(\Omega)) \cap L^1(0,T;\tilde{H}^s(\Omega))$, with $\phi \geq 0$, $\text{spt}\,\phi \subset [0,T)$, we recall that
\[
\int_{0}^T \int_\Omega \left( \frac{u^n_t(t) - u^n_t(t-\tau_n)}{\tau_n} \right) \phi(t) \,dxdt + \int_{0}^T [ \bar{u}^n(t), \phi(t) ]_{s} \, dt \geq 0.
\]
Thanks to Proposition \ref{prop:convunobstacle}, we have
\[
\begin{aligned}
\int_{0}^{T} [\bar{u}^n(t), \phi(t)]_{s} \, dt \to \int_{0}^{T} [u(t), \phi(t)]_{s} \, dt \quad \text{as } n \to \infty
\end{aligned}
\]
while, on the other hand, we also have
\[
\begin{aligned}
& \int_{0}^{T} \int_\Omega \frac{u^n_t(t)-u^n_t(t-\tau_n)}{\tau_n}\,\phi(t) \,dxdt
= \int_{0}^{T-\tau_n}\int_\Omega u^n_t(t) \left( \frac{\phi(t)-\phi(t+\tau_n)}{\tau_n} \right) \, dxdt \\
&- \int_{0}^{\tau_n} \int_\Omega \frac{v_0}{\tau_n}\,\phi(t) \, dxdt + \int_{T-\tau_n}^{T} \int_\Omega \frac{u^n_t(t)}{\tau_n}\,\phi(t) \, dxdt \\
& \to \int_{0}^{T} \int_\Omega u_t(t)(-\phi_t(t)) \, dxdt - \int_\Omega v_0\,\phi(0) \, dx + 0 \quad \text{as } n \to \infty.
\end{aligned}
\]
This proves condition 3. for weak solutions.
\emph{(4.)} The fact that $u(0) = u_0$ is a direct consequence of $u^n(0)=u_0$ and of the convergence of $u^n$ to $u$ in $C^0([0,T];L^2(\Omega))$.
We are left to check the initial condition on the velocity. Suppose, without loss of generality, that the sequence $u^n$ is constructed by taking $n \in \{2^m \,: m > 0\}$ (each successive time grid is obtained by subdividing the previous one). Fix now $n$ and $\phi \in K_g$, and let $T^* = m\tau_n$ for some $0 \leq m \leq n$ (i.e. $T^*$ is a ``grid point''). Let us evaluate
\[
\begin{aligned}
& \int_0^{T^*} \int_{\Omega}^{} \frac{u_t^n(t)-u_t^n(t-\tau_n)}{\tau_n} (\phi - \bar{u}^n(t)) = \sum_{i=1}^m \int_{t_{i-1}^n}^{t_i^n} \int_\Omega \frac{u_i^n-2u_{i-1}^n+u_{i-2}^n}{\tau_n^2}(\phi-u_i^n) \\
& = \int_{\Omega}^{} \sum_{i=1}^m \frac{u_i^n-2u_{i-1}^n+u_{i-2}^n}{\tau_n}(\phi-u_i^n) = \int_{\Omega}^{} \sum_{i=1}^m (v_i^n-v_{i-1}^n)(\phi-u_i^n) \\
&= -\int_{\Omega}^{} v_0^n(\phi-u_1^n)\,dx + \int_\Omega v_m^n(\phi-u_m^n) \,dx + \tau_n \sum_{i=1}^{m-1} \int_\Omega v_i^nv_{i-1}^n\,dx \\
&= -\int_{\Omega} v_0(\phi-u^n(\tau_n))\,dx + \int_\Omega u_t^n(T^*)(\phi-u^n(T^*)) \,dx + \tau_n \sum_{i=1}^{m-1} \int_\Omega v_i^nv_{i-1}^n\,dx. \\
\end{aligned}
\]
Using \eqref{eq:vardis} we observe that
\[
\int_{0}^{T^*} \int_\Omega \frac{u^n_t(t) - u^n_t(t-\tau_n)}{\tau_n} ( \phi - \bar{u}^n(t) ) \,dxdt + \int_{0}^{T^*} [ \bar{u}^n(t), \phi-\bar{u}^n(t) ]_{s} \, dt \geq 0,
\]
which combined with the above expression and previous estimates on $u_i^n$ and $v_i^n$ leads to
\[
\begin{aligned}
&-\int_{\Omega} v_0(\phi-u^n(\tau_n))\,dx + \int_\Omega u_t^n(T^*)(\phi-u^n(T^*)) \,dx \geq\\
& -\tau_n \sum_{i=1}^{m-1} \int_\Omega v_i^nv_{i-1}^n\,dx - \tau_n \sum_{i=1}^{m} [u_i^n, \phi-u_i^n]_{s} \geq -CT^* - CT^* ||\phi||_{H^s(\R^d)}.
\end{aligned}
\]
Passing to the limit as $n \to \infty$, using $u^n(\tau_n) \to u(0)$ and $u_t^n(T^*) \rightharpoonup u_t(T^*)$ (due to the use of the precise representative), we get
\[
\begin{aligned}
&-\int_{\Omega}^{} v_0(\phi-u(0))\,dx + \int_\Omega u_t(T^*)(\phi-u(T^*)) \,dx \geq -CT^* - C||\phi||_{H^s(\R^d)} T^*.
\end{aligned}
\]
Taking now $T^* \to 0$ along a sequence of ``grid points'' we have
\[
\int_\Omega (u_t^+(0)-v_0)(\phi-u(0)) \,dx \geq 0.
\]
This completes the first part of the proof. We are left to prove the energy inequality \eqref{eq:eneine}. To this end, recall from Remark \ref{rem:energy} that, for all $n > 0$,
\[
||v^n(t)||_{L^2(\Omega)}^2 + [u^n(t)]_{s}^2 \leq ||v_0||_{L^2(\Omega)}^2 + [u_0]_{s}^2 \quad \text{for all }t\in[0,T].
\]
Passing to the limit as $n \to \infty$ we immediately get \eqref{eq:eneine}.
\end{proof}
We conclude this section with some remarks and observations about the solution $u$ obtained through the proposed semi-discrete convex minimization scheme in the scenario $s = 1$. First of all, we show that the weak solution $u$ obtained above is in fact more regular wherever the approximations $u^n$ stay strictly above $g$.
\begin{prop}[Regions without contact]\label{prop:nocontact}
Let $s = 1$ and, for $\delta > 0$, suppose there exists an open set $A_\delta \subset \Omega$ such that $u^n(t,x) > g(x) + \delta$ for a.e. $(t,x) \in (0,T)\times \Omega$ and for all $n > 0$. Then $u_{tt} \in L^\infty(0,T;H^{-1}(A_\delta))$ and $u$ satisfies \eqref{eq:eqweak} for all $\phi \in L^{1}(0,T;H^1_0(A_\delta))$.
\end{prop}
\begin{proof}
Take $\phi \in H^1_0(\Omega)$ with $\textup{spt}\,\phi \subset A_\delta$. Then, for every $n$ and $0 \leq i \leq n$, the function $u^n_i + \varepsilon\phi$ belongs to $K_g$ for $\varepsilon$ sufficiently small: indeed, for $x \in A_\delta$, we have $u_i^n(x) + \varepsilon \phi(x) \geq g(x) + \delta + \varepsilon\phi(x) \geq g(x)$ for small $\varepsilon$, regardless of the sign of $\phi(x)$. In particular, equation \eqref{eq:vardis_simple} can be written as
\[
\int_{\Omega} \frac{u_i^n-2u_{i-1}^n+u_{i-2}^n}{\tau_n^2}\phi\,dx + \int_\Omega \nabla u_i^n \cdot \nabla \phi\,dx = 0 \quad \text{for all } \phi \in H^1_0(\Omega), \textup{spt}\,\phi \subset A_\delta.
\]
This equality allows us to carry out the second part of the proof of Proposition \ref{prop:convun}, so that, in the same notation, we can prove $v^n_t(t)$ to be bounded in $H^{-1}(A_\delta)$ uniformly in $t$ and $n$. Thus, $v \in W^{1,\infty}(0,T;H^{-1}(A_\delta))$ and
\[
v^n \rightharpoonup^* v \text{ in } L^\infty(0,T;L^2(A_\delta)) \quad\text{and}\quad v^n \rightharpoonup^* v \text{ in } W^{1,\infty}(0,T;H^{-1}(A_\delta)).
\]
Localizing everything on $A_\delta$, we can prove $v_t = u_{tt}$ in $A_\delta$ so that
\[
u_{tt} \in L^\infty(0,T;H^{-1}({A_\delta})),
\]
and equation \eqref{eq:eqweak} follows by passing to the limit as done in the proof of Theorem \ref{thm:main1} (cf. \cite{SvadlenkaOmata08, DaLa11}).
\end{proof}
\begin{remark}[One dimensional case with $s=1$]{\rm
In the one dimensional case and for $s=1$ the analysis boils down to the problem considered by Kikuchi in \cite{Ki09}. In this particular situation a stronger version of Proposition \ref{prop:nocontact} holds: suppose that $\Omega = [0,1]$, then for any $\phi \in C^0_0([0,T),L^2(0,1)) \cap W^{1,2}_0((0,T)\times(0,1))$ with $\textup{spt}\,\phi \subset \{ (t,x) \,: u(t,x)>0 \}$,
\[
-\int_{0}^{T} \int_0^1 u_t\phi_t \, dxdt + \int_{0}^{T} \int_0^1 u_x \phi_x \, dxdt - \int_0^1 v_0\,\phi(0) \, dx = 0.
\]
}
\end{remark}
\section{Numerical implementation and open questions}
The constructive scheme presented in the previous sections can be easily used to provide a numerical simulation of the relevant dynamics, at least in the case $s = 1$, where we can employ a classical finite element discretization. However, we observe that a similar finite element approach can be extended to the fractional setting $s < 1$, following for example the pipeline described in \cite{AiGl18, AiGl17}.
Minimization of energies $J_i^n$ can be carried out by means of a piecewise linear finite element approximation in space: given a triangulation $\mathcal{T}_h$ of the domain $\Omega$ we introduce the classical space
\[
X_h^1 = \{ u_h \in C^0(\bar{\Omega}) \,:\, u_h |_K \in \mathbb{P}_1(K),\text{ for all } K \in \mathcal{T}_h \}.
\]
For $n > 1$, $0 < i \leq n$, and given $u_{i-1}^n, u_{i-2}^n \in X_h^1$, we optimize $J_i^n$ among functions in $X_h^1$ respecting the prescribed Dirichlet boundary conditions (which are local because $s=1$). In this way we obtain a finite dimensional optimization problem for the degrees of freedom of $u_i^n$, which we solve by a gradient descent method combined with a dynamic adaptation of the descent step size.
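As an illustration of this minimization step, the following sketch (entirely ours, not the implementation used for the experiments below) performs one time step of the scheme for $s = 1$ in one dimension, using a lumped piecewise linear discretization on a uniform grid and projected gradient descent with a fixed, conservative step size; all function names and parameter values are our own choices.

```python
import numpy as np

def step(u_prev, u_prev2, g, tau, h, n_iter=800, lr=None):
    """One minimization of J_i^n for s = 1 in 1D (lumped P1 elements):
    minimize  h/(2*tau^2) * sum((u - 2*u_prev + u_prev2)^2)
              + 1/(2*h)   * sum((u[j+1] - u[j])^2)
    over the obstacle constraint u >= g, endpoint values held fixed,
    via projected gradient descent."""
    u = u_prev.copy()
    if lr is None:
        lr = 1.0 / (h / tau**2 + 4.0 / h)        # conservative step size
    for _ in range(n_iter):
        grad = (u - 2.0 * u_prev + u_prev2) * (h / tau**2)  # inertia term
        grad[1:-1] += (2.0 * u[1:-1] - u[:-2] - u[2:]) / h  # discrete -u_xx
        grad[0] = grad[-1] = 0.0                 # Dirichlet nodes stay put
        u = np.maximum(u - lr * grad, g)         # project onto {u >= g}
        u[0], u[-1] = u_prev[0], u_prev[-1]
    return u

# toy setup: sine profile pushed down onto the obstacle g = 0
x = np.linspace(0.0, 2.0 * np.pi, 201)
h, tau = x[1] - x[0], 1.0 / 100.0
g = np.zeros_like(x)
u0 = np.sin(x) + 1.2                  # initial profile
u_m1 = u0 - tau * (-2.0)              # encodes the initial velocity v0 = -2
u1 = step(u0, u_m1, g, tau, h)        # first time step
```

Each call to `step` plays the role of one minimization of $J_i^n$; the pointwise projection $\max(\cdot, g)$ keeps every iterate admissible.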
\begin{figure}[tbh]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.4\linewidth]{scontro} &
\includegraphics[width=0.7\linewidth]{evolution_01}
\end{tabular}
\caption{Time evolution of the solution till $t = 1.5$ (left) and space-time depiction of the same evolution till $T = 10$ (right).}
\label{fig:1d_exe}
\end{figure}
In the simulation in figure \ref{fig:1d_exe} we take $\Omega = (0,2\pi)$ and $u_0(x) = \sin(x) + 1.2$, with a constant initial velocity of $-2$ which pushes the string towards the obstacle $g=0$. The boundary conditions are set to be $u(t,0) = u(t,2\pi) = 1.2$ and the simulation is performed up to $T = 10$ using a uniform grid with $h = 2\pi/200$ and a time step $\tau = 1/100$. We can see how the profile stops on the obstacle after impact (blue region in the right picture of figure \ref{fig:1d_exe}) and how the impact causes the velocity to drop to $0$, with a corresponding loss of energy (as displayed in figure \ref{fig:1d_exe_E}). As soon as the profile leaves the obstacle, the dynamics revert to the classical wave dynamics and the energy essentially stabilizes, even if, as expected, it is not fully conserved at the discrete level. Due to the energy dissipated at impact times, in the long run we expect the solution to never hit the obstacle again, because the residual energy will only allow the profile to reach the obstacle at zero speed, i.e. without any further loss of energy.
\begin{figure}[tbh]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.4\linewidth]{Figure_6} &
\includegraphics[width=0.4\linewidth]{Figure_5}
\end{tabular}
\caption{Time evolution of the velocity up to $t = 2$ (left) and energy (right).}
\label{fig:1d_exe_E}
\end{figure}
Thus, also in higher dimensions, we expect the solution $u$ obtained through the proposed scheme to become an obstacle-free solution of the wave equation as soon as the energy of the system drops below a certain value, thereby preventing future collisions. This can be roughly summarized in the following conjecture.
\begin{conj}[Long time behavior]\label{conj:1}
Let $s = 1$ and, given an obstacle problem in the form of equation \eqref{eq:obstaclewaves}, let $u$ be the weak solution obtained through the convex minimization approach of Section \ref{subsec:appschemeobtacle}. Then, at least for sufficiently regular obstacles $g$, there exists $\bar{t} > 0$ such that $E(u(t))$ is constant for any $t > \bar{t}$.
\end{conj}
Alongside the previous conjecture, we observe that the solution $u$ obtained here seems to be, among all possible weak solutions, the one dissipating its kinetic energy at the highest rate when colliding with the obstacle $g$, and so the one realizing the ``adherent'' behavior we mentioned before. From the opposite perspective, one could ask whether it is possible to revise the scheme so as to obtain energy preserving approximations $u^n$, and try to use these approximations to produce an energy preserving weak solution (possibly under suitable additional hypotheses on the obstacle).
As already observed in the introduction, the proposed method can be extended to the case of semi-linear wave equations of the type
\[
u_{tt} + (-\Delta)^s u + f(u) = 0
\]
with $f$ a suitable function, possibly non-smooth. For example, one can consider $f$ to be the (scaled) derivative of a balanced, double-well potential, e.g. $f(u) = \frac{1}{\epsilon^2}(u^3-u)$ for $\epsilon > 0$: certain solutions of that equation are intimately related to timelike minimal hypersurfaces, i.e. hypersurfaces with vanishing mean curvature with respect to the Minkowski space-time metric \cite{del2018interface, jerrard2011defects, bellettini2010time}. On the other hand, as we said in the introduction, one could also manage adhesive-type dynamics assuming $f$ to be the (non-smooth) derivative of a smooth potential $\Phi$, as it is done in \cite{CocliteFlorioLigaboMaddalena17}.
We finally observe that the proposed approximations $u^n$ can be constructed, theoretically and numerically, also for a double obstacle problem, i.e. $g(x) \leq u(t,x) \leq f(x)$ for a suitable lower obstacle $g$ and upper obstacle $f$. However, in this new context, the previous convergence analysis cannot be replicated, because even the basic variational characterization \eqref{eq:vardis_simple} is generally false and a more localized analysis would be necessary. Nevertheless, also in this situation one would expect the solution to behave like an obstacle-free solution after some time, as suggested in Conjecture \ref{conj:1}.
\section*{Acknowledgements}
The authors are partially supported by GNAMPA-INdAM. The second author acknowledges partial support
by the University of Pisa Project PRA 2017-18.
\bibliographystyle{plain}
% arXiv:1901.06974 (https://arxiv.org/abs/1901.06974)
% Title: A variational scheme for hyperbolic obstacle problems
% Subjects: Analysis of PDEs (math.AP)
% Abstract: We consider an obstacle problem for (possibly non-local) wave equations, and we prove existence of weak solutions through a convex minimization approach based on a time discrete approximation scheme. We provide the corresponding numerical implementation and raise some open questions.
% arXiv:2203.08815 (https://arxiv.org/abs/2203.08815)
% Title: QUBOs for Sorting Lists and Building Trees
% Abstract: We show that the fundamental tasks of sorting lists and building search trees or heaps can be modeled as quadratic unconstrained binary optimization problems (QUBOs). The idea is to understand these tasks as permutation problems and to devise QUBOs whose solutions represent appropriate permutation matrices. We discuss how to construct such QUBOs and how to solve them using Hopfield nets or (adiabatic) quantum computing. In short, we show that neurocomputing methods or quantum computers can solve problems usually associated with abstract data structures.

\section{Introduction}
In this paper, we are concerned with quadratic unconstrained binary optimization problems (QUBOs) of the form
\begin{equation}
\label{eq:QUBO}
\vec{z}_* = \amin{\vec{z} \in \{ 0, 1 \}^N} \, \trn{\vec{z}} \mat{R} \, \vec{z} + \ipt{\vec{r}}{\vec{z}}
\end{equation}
where the objective is to find an optimal vector $\vec{z}_*$ of $N$ binary decision variables and where $\mat{R} \in \mathbb{R}^{N \times N}$ and $\vec{r} \in \mathbb{R}^N$ contain application specific parameters.
QUBOs are surprisingly versatile and occur in numerous settings \cite{Kochenberger2014-TUB,Lucas2014-IFO,Calude2017-QUBO,Glover2018-ATO,Bauckhage2019-AQF,Muecke2019-LBB,Bauckhage2020-AQC,Date2021-QFF,Biesner2022-SSS}. Notable use cases include RNA folding, budget allocation, portfolio optimization, routing, location planning, or item diversification. More generally, QUBOs arise in clique finding, graph partitioning, satisfiability testing, data clustering, or classifier training.
In either case, QUBOs constitute combinatorial optimization problems. Indeed, as they deal with binary decision variables, QUBOs are specific integer programming problems and thus NP-hard in general. Yet they are also isomorphic to Hopfield- or Ising energy minimization problems known from neuro- or adiabatic quantum computing \cite{Hopfield1982-NNA,Farhi2000-QCB,Johnson2011-QAW,Farhi2000-QAOA,Bauckhage2020-PSW}. Since solvers can thus be implemented on emerging, potentially superior platforms such as neuromorphic- or quantum computers, research on the general capabilities and merits of QUBOs is increasing.
The work reported here falls into this category. We are interested in the universality of QUBOs and explore their use in tasks rarely seen as combinatorial optimization problems. In particular, we consider the basic computer scientific problems of sorting and tree building. Since there exist well established classical algorithms for this purpose, our study is a proof of principle. We demonstrate that problems which usually involve the manipulation of abstract data structures can also be expressed as QUBOs of the form in \eqref{eq:QUBO}. This, in turn, establishes that neuromorphic- or quantum computers can sort and build trees.
\subsection{Basic Ideas and Notation}
Throughout, we assume we are given an unordered list or sequence $x_1, x_2, \ldots, x_n$ of $n$ numbers $x_i \in \mathbb{R}$ which we gather in a vector $\vec{x} \in \mathbb{R}^n$. The basic idea is then to determine an $n \times n$ permutation matrix $\mat{P}$ such that the entries of the vector $\vec{y} = \mat{P} \vec{x} \in \mathbb{R}^n$ are arranged in a certain, problem specific order. We therefore recall that a permutation matrix is a square binary matrix whose rows and columns sum to one. Formally, we express these characteristics as
\begin{align}
\mat{P} & \in \{0, 1\}^{n \times n} \\
\mat{P} \, \vec{1} & = \vec{1} \\
\trn{\mat{P}} \vec{1} & = \vec{1}
\end{align}
where $\vec{1} \in \mathbb{R}^n$ denotes the vector of all $1$s.
Our main contribution is to show that searching for an appropriate permutation matrix is tantamount to solving a QUBO as in \eqref{eq:QUBO} where $N = n^2$. Our derivation will involve the operation $\vec{v} = \operatorname*{vec}(\mat{M})$ which vectorizes a matrix $\mat{M}$ by stacking its columns into a vector $\vec{v}$. The operation $\mat{M} = \operatorname*{mat}(\vec{v})$ reverses this operation and matricizes $\vec{v}$ into $\mat{M}$. Finally, our derivation will also involve Kronecker products of the $n \times n$ identity matrix $\mat{I}$ and $n$-dimensional vectors; these Kronecker products are denoted by $\otimes$.
\subsection{Overview}
Next, in Section~\ref{sec:sorting}, we devise a QUBO formulation of the sorting problem. This will form the basis for our discussion in Section~\ref{sec:searching} where we set up QUBOs for constructing abstract data structures such as trees and heaps. In Section~\ref{sec:examples}, we present baseline experiments in which we use Hopfield networks to solve sorting and tree building QUBOs. Our results corroborate the viability of the proposed modeling framework. Finally, in Section~\ref{sec:conclusion}, we summarize key findings and discuss potential implications.
\section{QUBOs for Sorting Lists}
\label{sec:sorting}
In this section, we show that the basic computer scientific problem of sorting an unordered list of numbers can be cast as a QUBO. Without loss of generality, we focus on sorting in ascending order. Our discussion will be rather detailed because, once we have established that QUBOs can be used for sorting in ascending order, it will be easy to see how to adapt them to sorting in orders which represent serialized trees or heaps.
To begin with, we gather the given unordered numbers into an $n$-dimensional vector
\begin{equation}
\vec{x} = \trn{[x_1, x_2, \ldots, x_n]}
\end{equation}
This innocuous preparatory step allows us to treat the sorting problem as the problem of finding an $n \times n$ permutation matrix $\mat{P}$ such that the entries of the permuted vector $\vec{y} = \mat{P} \vec{x}$
obey $y_1 \leq y_2 \leq \ldots \leq y_n$.
In order to devise an objective function whose minimization would yield the sought after permutation matrix, we introduce yet another, rather specific $n$-dimensional vector, namely
\begin{equation}
\vec{n} = \trn{[1, 2, \ldots, n]}
\end{equation}
As the entries of this auxiliary vector are characterized by $n_1 \leq n_2 \leq \ldots \leq n_n$, we next appeal to the \emph{rearrangement inequality} of Hardy, Littlewood, and Polya \cite{Hardy1952-I}. This classical result implies that the negated inner product $-\ipt{\vec{y}}{\vec{n}}$ is minimal whenever $y_1 \leq y_2 \leq \ldots \leq y_n$. In other words, the expression $-\ipt{\vec{y}}{\vec{n}}$ is minimal if the entries of $\vec{y}$ and $\vec{n}$ are similarly sorted. Moreover, since we defined $\vec{y} = \mat{P} \vec{x}$, it follows that a permutation matrix $\mat{P}$ which sorts
the entries of $\vec{x}$ can be found by solving
\begin{equation}
\label{eq:qubo-step1}
\begin{aligned}
\mat{P} = \amin{\mat{Z} \in \{0,1\}^{n \times n}} \, & \, -\trn{\vec{x}} \trn{\mat{Z}} \vec{n} \\
\operatorname{s.\!t.} \quad & \;\;
\begin{aligned}
\mat{Z} \, \vec{1} & = \vec{1} \\
\trn{\mat{Z}} \vec{1} & = \vec{1}
\end{aligned}
\end{aligned}
\end{equation}
This intermediate (arguably lesser known) result shows that sorting can indeed be understood as an optimization problem \cite{Bauckhage2021-SAL,Aspnes2004-NOL}. However, \eqref{eq:qubo-step1} is a linear programming problem over binary matrices rather than a QUBO over binary vectors. Next, we therefore transform it into a problem of the form in \eqref{eq:QUBO}. This will involve two major steps: First, we rewrite the linear program over binary matrices as a linear program over binary vectors and, second, express the latter as a QUBO.
To rewrite our matrix linear program as a vector linear program, we resort to a lemma which we state and prove in the \hyperref[sec:appendix]{Appendix}. With respect to the matrix vector product $\trn{\mat{Z}} \vec{n}$ which occurs in the objective of \eqref{eq:qubo-step1}, the lemma establishes that vectorizing $\mat{Z} \in \{0,1\}^{n \times n}$ into $\vec{z} \in \{0, 1\}^{n^2}$ and expanding $\vec{n} \in \mathbb{R}^n$ into the matrix $\mat{N} \in \mathbb{R}^{n \times n^2}$ by means of
\begin{align}
\vec{z} & = \operatorname*{vec}\,(\mat{Z}) \\
\mat{N} & = \mat{I} \otimes \trn{\vec{n}} \label{eq:matN}
\intertext{provides us with the following identity}
\trn{\mat{Z}} \vec{n} & = \mat{N} \, \vec{z} \label{eq:qubo-step2}
\end{align}
In other words, we can equivalently express the product of the $n \times n$ matrix $\trn{\mat{Z}}$ and the $n$-dimensional vector $\vec{n}$ as a product of an $n \times n^2$ matrix $\mat{N}$ and an $n^2$-dimensional vector $\vec{z}$.
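As a quick numeric sanity check (ours, not part of the paper), the identity \eqref{eq:qubo-step2} with $\vec{z} = \operatorname*{vec}(\mat{Z})$ and $\mat{N} = \mat{I} \otimes \trn{\vec{n}}$ can be verified in NumPy, where column-stacking corresponds to `order="F"`:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Z = np.eye(n)[rng.permutation(n)]     # a random n x n permutation matrix
nn = np.arange(1.0, n + 1)            # the vector (1, 2, ..., n)

z = Z.flatten(order="F")              # vec(Z): stack the columns of Z
N = np.kron(np.eye(n), nn[None, :])   # N = I (x) n^T, shape (n, n^2)

assert np.allclose(N @ z, Z.T @ nn)   # the identity Z^T n = N z
```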
Similar arguments apply to the expressions in the equality constraints in \eqref{eq:qubo-step1}. That is, we further have
\begin{align}
\mat{Z} \, \vec{1} & = \mat{C}_r \, \vec{z} \\
\trn{\mat{Z}} \vec{1} & = \mat{C}_c \, \vec{z}
\intertext{where the $n \times n^2$ matrices $\mat{C}_r$ and $\mat{C}_c$ on the right hand sides capture row- and column sum constraints and are given by}
\mat{C}_r & = \trn{\vec{1}} \otimes \mat{I} \label{eq:matCr} \\
\mat{C}_c & = \mat{I} \otimes \trn{\vec{1}} \label{eq:matCc}
\end{align}
Consequently, we can reformulate the problem of estimating an optimal permutation matrix as the problem of finding an optimal binary vector, namely
\begin{equation}
\label{eq:qubo-step3}
\begin{aligned}
\vec{z}_* = \amin{\vec{z} \in \{0,1\}^{n^2}} \, & \, -\trn{\vec{x}} \mat{N} \, \vec{z} \\
\operatorname{s.\!t.} \quad & \;\;
\begin{aligned}
\mat{C}_r \, \vec{z} & = \vec{1} \\
\mat{C}_c \, \vec{z} & = \vec{1}
\end{aligned}
\end{aligned}
\end{equation}
Given this problem, we note that, once it has been solved for $\vec{z}_*$, the actually sought after permutation matrix can be recovered as $\mat{P} = \operatorname{mat}(\vec{z}_*)$ (since $\vec{z} = \operatorname*{vec}(\mat{Z})$), which then yields the sorted version $\vec{y} = \mat{P} \vec{x}$ of $\vec{x}$.
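As a complementary sanity check (again ours), one can verify numerically that $\mat{C}_r$ and $\mat{C}_c$ indeed extract the row and column sums of $\mat{Z}$, and that matricizing inverts the vectorization:

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
Z = (rng.random((n, n)) < 0.5).astype(float)   # arbitrary binary matrix

z = Z.flatten(order="F")                       # vec(Z): stack the columns
I, one = np.eye(n), np.ones(n)
Cr = np.kron(one[None, :], I)                  # C_r = 1^T (x) I
Cc = np.kron(I, one[None, :])                  # C_c = I (x) 1^T

assert np.allclose(Cr @ z, Z.sum(axis=1))      # row sums    (Z 1)
assert np.allclose(Cc @ z, Z.sum(axis=0))      # column sums (Z^T 1)
assert np.allclose(z.reshape(n, n, order="F"), Z)   # mat(vec(Z)) = Z
```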
In order to turn the intermediate linearly constrained binary optimization problem in \eqref{eq:qubo-step3} into a quadratic unconstrained binary optimization problem, we note the equivalences
\begin{alignat}{3}
\mat{C}_r \, \vec{z} & = \vec{1} && \;\;\Leftrightarrow\;\; \dsq{\,\mat{C}_r \, \vec{z}}{\vec{1}} & \; = 0 \\
\mat{C}_c \, \vec{z} & = \vec{1} && \;\;\Leftrightarrow\;\; \dsq{\,\mat{C}_c \, \vec{z}}{\vec{1}} & \; = 0
\end{alignat}
and expand the squared Euclidean distances as
\begin{align}
\dsq{\,\mat{C}_r \vec{z}}{\vec{1}} & = \trn{\vec{z}} \trn{\mat{C}_r} \mat{C}_r \vec{z} - 2 \, \trn{\vec{1}} \mat{C}_r \vec{z} + \ipt{\vec{1}}{\vec{1}} \\
\dsq{\,\mat{C}_c \vec{z}}{\vec{1}} & = \trn{\vec{z}} \trn{\mat{C}_c} \mat{C}_c \vec{z} - 2 \, \trn{\vec{1}} \mat{C}_c \vec{z} + \ipt{\vec{1}}{\vec{1}}
\end{align}
Since $\ipt{\vec{1}}{\vec{1}} = n$ is a constant independent of $\vec{z}$, we therefore have the following Lagrangian for the problem in \eqref{eq:qubo-step3}
\begin{align}
L \bigl( \vec{z}, \lambda_r, \lambda_c \bigr)
= & -\trn{\vec{x}} \mat{N} \, \vec{z} \notag \\
& + \lambda_r \bigl[ \trn{\vec{z}} \trn{\mat{C}_r} \mat{C}_r \vec{z} - 2 \, \trn{\vec{1}} \mat{C}_r \vec{z} \bigr] \notag \\
& + \lambda_c \,\bigl[ \trn{\vec{z}} \trn{\mat{C}_c} \mat{C}_c \vec{z} - 2 \, \trn{\vec{1}} \mat{C}_c \vec{z} \bigr] \\[2ex]
= & \; \trn{\vec{z}} \bigl[ \lambda_r \, \trn{\mat{C}_r} \mat{C}_r + \lambda_c \, \trn{\mat{C}_c} \mat{C}_c \bigr] \vec{z} \notag \\
& \, - \Bigl[ \trn{\vec{x}} \mat{N} + 2 \, \trn{\vec{1}} \bigl[ \lambda_r \, \mat{C}_r + \lambda_c \, \mat{C}_c \bigr] \Bigr] \, \vec{z}
\end{align}
Here, $\lambda_r$ and $\lambda_c$ are Lagrange multipliers which we henceforth treat as parameters that have to be set manually. (In section~\ref{sec:examples}, we suggest a simple, problem independent mechanism for this purpose.)
Finally, upon introducing the following $n^2 \times n^2$ matrix and the following $n^2$-dimensional vector
\begin{align}
\mat{R} & = \lambda_r \, \trn{\mat{C}_r} \mat{C}_r + \lambda_c \, \trn{\mat{C}_c} \mat{C}_c \label{eq:matR} \\
\vec{r} & = -\trn{\mat{N}} \vec{x} - 2 \, \trn{\bigl[ \lambda_r \, \mat{C}_r + \lambda_c \, \mat{C}_c \bigr]} \vec{1} \label{eq:vecR}
\end{align}
\end{align}
we find that \eqref{eq:qubo-step3} is equivalent to the following QUBO
\begin{equation}
\label{eq:qubo-step4}
\vec{z}_* = \amin{\vec{z} \in \{ 0, 1 \}^{n^2}} \; \trn{\vec{z}} \mat{R} \, \vec{z} + \ipt{\vec{r}}{\vec{z}}
\end{equation}
Again, once \eqref{eq:qubo-step4} has been solved for $\vec{z}_*$, the actually sought after permutation matrix is $\mat{P} = \operatorname{mat}(\vec{z}_*)$ and allows us to compute the sorted version $\vec{y} = \mat{P} \vec{x}$ of $\vec{x}$.
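To see the construction end to end, the following toy example (ours, with hand-picked penalty weights standing in for the Lagrange multipliers) builds $\mat{R}$ and $\vec{r}$ according to \eqref{eq:matR} and \eqref{eq:vecR} for $n = 3$ and solves the QUBO by brute force over all $2^{9}$ binary vectors; the minimizer matricizes to a permutation matrix that sorts $\vec{x}$:

```python
import numpy as np
from itertools import product

n = 3
x = np.array([0.7, 0.1, 0.4])
nn = np.arange(1.0, n + 1)
I, one = np.eye(n), np.ones(n)

N  = np.kron(I, nn[None, :])                # N   = I (x) n^T
Cr = np.kron(one[None, :], I)               # C_r = 1^T (x) I
Cc = np.kron(I, one[None, :])               # C_c = I (x) 1^T

lam_r = lam_c = 10.0                        # hand-picked penalty weights
R = lam_r * Cr.T @ Cr + lam_c * Cc.T @ Cc
r = -N.T @ x - 2.0 * (lam_r * Cr + lam_c * Cc).T @ one

# brute-force the QUBO over all 2^(n^2) binary vectors
best = min((np.array(z) for z in product((0.0, 1.0), repeat=n * n)),
           key=lambda z: z @ R @ z + r @ z)

Z = best.reshape(n, n, order="F")           # mat(z_*): undo column stacking
y = Z @ x                                   # sorted version of x
assert np.all(np.diff(y) >= 0)              # y is ascending
```

The penalty weights are chosen large enough that every constraint violation costs more than any gain in the linear term, so the brute-force minimizer is doubly stochastic, i.e. a permutation matrix.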
\section{QUBOs for Building Trees and Heaps}
\label{sec:searching}
\begin{figure}[t]
\centering
\subfloat[binary search tree \label{fig:treexmpl-t}]{%
\includegraphics[width=0.55\columnwidth]{searchTree.pdf}}
\subfloat[serialization of the tree via breadth-first traversal\label{fig:treexmpl-l}]{%
\includegraphics[width=0.9\columnwidth]{searchTreeList.pdf}}
\caption{\label{fig:treexmpl} A binary search tree over the numbers $1, \ldots, 7$.}
\end{figure}
The fact that QUBOs can be used to determine permutation matrices $\mat{P}$ which arrange the entries of a given vector $\vec{x}$ in an order prescribed by another vector $\vec{n}$ has applications beyond conventional sorting. It does, for instance, also allow for arranging the entries of $\vec{x}$ into more abstract data structures such as binary search trees. The general idea behind this claim is best explained by means of an example.
Figure~\ref{fig:treexmpl-t} shows the numbers $1$ through $7$ stored in a binary search tree. Looking at this figure, we recall that such a tree is a labeled directed acyclic graph whose vertices have up to two successors. Moreover, vertex labels of a binary search tree are arranged in such a manner that the label of each internal vertex is greater than the label of its left successor and less than the label of its right successor.
Figure~\ref{fig:treexmpl-l} shows a serialization of the tree in Fig.~\ref{fig:treexmpl-t}. This serialization resulted from a breadth-first traversal of the tree and is structure preserving in the following sense: Letting $v_0, v_1, v_2, \ldots$ denote the serialized vertices (where we deliberately start counting at $0$), the tree can be recovered by choosing the left and right successors of vertex $v_i$ to be $v_j$ and $v_k$ with $j = b i + 1$ and $k = b i + 2$ where $b=2$ denotes the branching factor of the tree. In Fig.~\ref{fig:treexmpl-l}, these implicit successor relations are visualized by means of dashed arrows.
Overall, our example illustrates that binary search trees can be thought of as specifically ordered lists. Hence, sticking with our example, if we wanted to arrange $n=7$ arbitrary numbers $x_1, \ldots, x_7$ into a search tree, we could gather them in a vector $\vec{x} \in \mathbb{R}^7$, consider the auxiliary vector
\begin{equation}
\vec{n} = \trn{[4, 2, 6, 1, 3, 5, 7]}
\end{equation}
and set up a QUBO as in \eqref{eq:qubo-step4} to determine a permutation matrix that arranges the $x_i$ into the desired tree order.
Of course this approach extends to settings with arbitrary many numbers as well as to trees with branching factors greater than two. It also extends to different labeling schemes and thus to different abstract data structures. We once again illustrate this claim by means of an example.
\begin{figure}[t]
\centering
\subfloat[maximum heap \label{fig:heapxmpl-t}]{%
\includegraphics[width=0.55\columnwidth]{heapTree.pdf}}
\subfloat[serialization of the heap via breadth-first traversal \label{fig:heapxmpl-l}]{%
\includegraphics[width=0.9\columnwidth]{heapTreeList.pdf}}
\caption{\label{fig:heapxmpl} A maximum heap over the numbers $1, \ldots, 7$.}
\end{figure}
Figure~\ref{fig:heapxmpl-t} shows the numbers $1$ through $7$ arranged in form of a maximum heap. Here, we recall that a maximum heap is a labeled binary tree where the labels of internal vertices are greater than or equal to the labels of their successors. Figure~\ref{fig:heapxmpl-l} shows a serialization of the heap in Fig.~\ref{fig:heapxmpl-t} again obtained from breadth-first traversal. Since this serialization is once again structure preserving, the above arguments reapply.
Hence, if we wanted to arrange $n=7$ arbitrary numbers $x_1, \ldots, x_7$ into a maximum heap, we could gather them in a vector $\vec{x} \in \mathbb{R}^7$, consider the auxiliary vector
\begin{equation}
\vec{n} = \trn{[7, 3, 6, 1, 2, 4, 5]}
\end{equation}
and set up a QUBO as in \eqref{eq:qubo-step4} to determine a permutation matrix that arranges the $x_i$ into the desired heap order.
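The heap property can be checked on the serialized list with the same implicit indexing. This small sketch (ours, for illustration) confirms the auxiliary vector above:

```python
# In a serialized max heap, v_i must be greater than or equal to its
# implicit successors v_{2i+1} and v_{2i+2}.
def is_max_heap_serialized(v):
    return all(v[i] >= v[c]
               for i in range(len(v))
               for c in (2 * i + 1, 2 * i + 2) if c < len(v))

assert is_max_heap_serialized([7, 3, 6, 1, 2, 4, 5])  # the auxiliary vector n
```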
The two tree building examples in this section thus reveal the sorting QUBO in \eqref{eq:qubo-step4} to be a general purpose tool for ordering or structuring problems. In a sense, it can actually be understood as a \emph{programmable machine}. The input variables it processes are contained in vector $\vec{x}$ and the program it executes is given by vector $\vec{n}$. Different programs, i.e.~different choices of $\vec{n}$, will cause the \emph{programmable} QUBO to produce different results of different utility.
\section{Practical Examples}
\label{sec:examples}
In this section, we present simple experiments which verify that the above theory can be put into practice.
We first address the open question of how to choose the two free parameters in equations \eqref{eq:matR} and \eqref{eq:vecR}. We then recap our construction of sorting or reordering QUBOs over binary variables and recall how to turn those into QUBOs over bipolar variables. These can be solved using (adiabatic) quantum computers or Hopfield nets and we apply the latter for sorting, tree building, and heap building.
\begin{figure*}[t!]
\centering
\subfloat[state evolution of a Hopfield net that solves a sorting QUBO]{%
\scriptsize
\begin{tabular}{rcr}
$t$ & $\vec{s}_t$ & $E \bigl( \vec{s}_t \bigr)$ \\
\midrule
$ 0$ & $- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -$ & $-673.5$ \\
$ 1$ & $- - - - - - - - - - - - - + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -$ & $-689.1$ \\
$ 2$ & $- - - - - - - - - - - - - + - - - - - - - - - - - - - - - - - - - - - - - - - - + - - - - - - - -$ & $-704.4$ \\
$ 3$ & $- - - - + - - - - - - - - + - - - - - - - - - - - - - - - - - - - - - - - - - - + - - - - - - - -$ & $-719.4$ \\
$ 4$ & $- - - - + - - - - - - - - + - - - - - - - - - - + - - - - - - - - - - - - - - - + - - - - - - - -$ & $-734.0$ \\
$ 5$ & $- - - - + - - - - - - - - + - - - - - - - - - - + - - - - - - - - - - - - - - - + - - - + - - - -$ & $-748.3$ \\
$ 6$ & $- - - - + - - - - - - - - + - - - - - - - - - - + - - - - + - - - - - - - - - - + - - - + - - - -$ & $-762.4$ \\
$ 7$ & $- - - - + - - - - - - - - + + - - - - - - - - - + - - - - + - - - - - - - - - - + - - - + - - - -$ & $-776.4$ \\
$ 8$ & $- - - - + - - - - - - - - + + - - - - - - - - - + - - - - + - - - - - - - - - - + - - - + - - - -$ & $-776.4$ \\
\end{tabular}}
\subfloat[state evolution of a Hopfield net that solves a tree building QUBO]{%
\scriptsize
\begin{tabular}{rcr}
$t$ & $\vec{s}_t$ & $E \bigl( \vec{s}_t \bigr)$ \\
\midrule
$ 0$ & $- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -$ & $-673.5$ \\
$ 1$ & $- - - - - - - - - - - - - + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -$ & $-689.1$ \\
$ 2$ & $- - - - - - - - - - - - - + - - - - - - - - - - - - - - - - - - - - - - - + - - - - - - - - - - -$ & $-704.4$ \\
$ 3$ & $- - - - - + - - - - - - - + - - - - - - - - - - - - - - - - - - - - - - - + - - - - - - - - - - -$ & $-719.4$ \\
$ 4$ & $- - - - - + - - - - - - - + - - - - - - - + - - - - - - - - - - - - - - - + - - - - - - - - - - -$ & $-734.0$ \\
$ 5$ & $- - - - - + - - - - - - - + - - - - - - - + - - - - - - - - - - - - - - - + - - - - - - - - + - -$ & $-748.3$ \\
$ 6$ & $- - - - - + - - - - - - - + - - - - - - - + - - - - - - - + - - - - - - - + - - - - - - - - + - -$ & $-762.4$ \\
$ 7$ & $- - - - - + - - - - - - - + - - - + - - - + - - - - - - - + - - - - - - - + - - - - - - - - + - -$ & $-776.4$ \\
$ 8$ & $- - - - - + - - - - - - - + - - - + - - - + - - - - - - - + - - - - - - - + - - - - - - - - + - -$ & $-776.4$ \\
\end{tabular}}
\subfloat[state evolution of a Hopfield net that solves a heap building QUBO]{%
\scriptsize
\begin{tabular}{rcr}
$t$ & $\vec{s}_t$ & $E \bigl( \vec{s}_t \bigr)$ \\
\midrule
$ 0$ & $- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -$ & $-673.5$ \\
$ 1$ & $- - - - - - - + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -$ & $-689.1$ \\
$ 2$ & $- - - - - - - + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + - - - - - - - - - - -$ & $-704.4$ \\
$ 3$ & $- - - - - - + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + - - - - - - - - - - -$ & $-719.4$ \\
$ 4$ & $- - - - - - + + - - - - - - - - - - - - - - - - - - + - - - - - - - - - - + - - - - - - - - - - -$ & $-734.0$ \\
$ 5$ & $- - - - - - + + - - - - - - - - - - - - - - - - - - + - - - - - - - - - - + - - - - - + - - - - -$ & $-748.3$ \\
$ 6$ & $- - - - - - + + - - - - - - - - - - - - - - - - - - + - - - - - + - - - - + - - - - - + - - - - -$ & $-762.4$ \\
$ 7$ & $- - - - - - + + - - - - - - - - - + - - - - - - - - + - - - - - + - - - - + - - - - - + - - - - -$ & $-776.4$ \\
$ 8$ & $- - - - - - + + - - - - - - - - - + - - - - - - - - + - - - - - + - - - - + - - - - - + - - - - -$ & $-776.4$ \\
\end{tabular}}
\caption{\label{fig:runs} Visualizations of the evolution of states and energies of Hopfield nets of $N = n^2$ neurons which solve sorting or reordering QUBOs for $\vec{x} \in \mathbb{R}^n$. In each example, $n=7$ and all $N=49$ neurons are initially inactive. Over time $t$, the state $\vec{s}_t$ of the networks changes, i.e.~neurons switch from inactive ($-$) to active ($+$), and the energy $E(\vec{s}_t)$ decreases. Each process converges within only $O(n)$ steps to a stable state which encodes the solution to the respective problem.}
\end{figure*}
\subsection{On Choosing $\lambda_r$ and $\lambda_c$}
Working with parameterized QUBOs requires experience or guidelines as to suitable parameterizations. With regard to the Lagrange parameters $\lambda_r$ and $\lambda_c$ in \eqref{eq:matR} and \eqref{eq:vecR}, we therefore note that they should be chosen such that the objective function in \eqref{eq:qubo-step4} is balanced. This is to say that the contribution of the term $-\trn{\vec{x}} \mat{N} \, \vec{z}$ should not outweigh the sum-to-one constraints $\mat{C}_r \, \vec{z} = \vec{1}$ and $\mat{C}_c \, \vec{z} = \vec{1}$. Since this may happen whenever the sum of the absolute values of the entries of $\vec{x}$ is larger than $n$, a simple idea is to work with an $L_1$-normalized version $\vec{x}'$ of $\vec{x}$ where
\begin{equation}
\label{eq:normalization}
x_i' = \frac{x_i}{\sum_j \lvert x_j \rvert}
\end{equation}
Using this normalization of the problem input, we may choose
\begin{equation}
\label{eq:parameters}
\lambda_r = \lambda_c = n
\end{equation}
to appropriately weigh the individual components of the overall minimization objective.
\subsection{Setting Up Sorting / Ordering QUBOs}
Given an input vector $\vec{x}$ whose entries are to be rearranged into a certain order, we require an auxiliary vector $\vec{n}$ whose entries reflect that order. We then normalize $\vec{x}$ using \eqref{eq:normalization} and set $\lambda_r$ and $\lambda_c$ according to \eqref{eq:parameters}. Given these preparations, we compute $\mat{N}$, $\mat{C}_r$, and $\mat{C}_c$ using \eqref{eq:matN}, \eqref{eq:matCr}, and \eqref{eq:matCc}. This finally allows for computing $\mat{R}$ and $\vec{r}$ in \eqref{eq:matR} and \eqref{eq:vecR} which parameterize the QUBO in \eqref{eq:qubo-step4}. Note, however, that $\vec{r}$ is now computed with respect to $\vec{x}'$ instead of $\vec{x}$.
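The exact matrices $\mat{N}$, $\mat{C}_r$, and $\mat{C}_c$ are defined in equations outside this excerpt, so the following Python sketch is a plausible reconstruction rather than the paper's exact construction: we assume $\vec{z} = \operatorname{vec}(\mat{P})$ in row-major order, realize the rearrangement objective as a linear term, and fold the row/column sum-to-one constraints into quadratic penalties with weight $\lambda_r = \lambda_c = n$:

```python
import numpy as np
from itertools import product

def sorting_qubo(x, n_vec, lam=None):
    """Hedged sketch of a reordering QUBO: minimize z^T R z + r^T z over
    z in {0,1}^(n^2), assuming z = vec(P) row-major.  The paper's matrices
    N, C_r, C_c may differ in layout."""
    n = len(x)
    lam = n if lam is None else lam                   # lambda_r = lambda_c = n
    xp = np.asarray(x, float) / np.abs(x).sum()       # L1 normalization of x
    lin = -np.outer(xp, n_vec).ravel()                # maximize x'^T P n
    Cr = np.kron(np.eye(n), np.ones((1, n)))          # row sums of P
    Cc = np.kron(np.ones((1, n)), np.eye(n))          # column sums of P
    C = np.vstack([Cr, Cc])
    R = lam * (C.T @ C)                               # quadratic part
    r = lin - 2 * lam * (C.T @ np.ones(2 * n))        # linear part
    return R, r

# brute-force check for n = 3: the minimizer is a sorting permutation matrix
x, n_vec = np.array([3.0, 1.0, 2.0]), np.array([1.0, 2.0, 3.0])
R, r = sorting_qubo(x, n_vec)
z_best = min((np.array(z, float) for z in product([0, 1], repeat=9)),
             key=lambda z: z @ R @ z + r @ z)
P = z_best.reshape(3, 3)
assert (P.sum(0) == 1).all() and (P.sum(1) == 1).all()
assert (P.T @ x == np.sort(x)).all()                  # sorted ascending
```

Up to an additive constant, the penalty term equals $\lambda \lVert \mat{C}\vec{z} - \vec{1} \rVert^2$, so the minimizer of this sketch is forced to be a permutation matrix.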
\subsection{From Binary to Bipolar QUBOs}
Recall that binary and bipolar vectors are affine isomorphic. That is, if $\vec{z} \in \{0,1\}^N$ is binary, then
$\vec{s} = 2 \, \vec{z} - \vec{1}$ is bipolar. Likewise, if $\vec{s} \in \{-1,+1\}^N$ is bipolar, then $\vec{z} = (\vec{s} + \vec{1}) / 2$ is binary. QUBOs as in \eqref{eq:qubo-step4} can therefore also be expressed as
\begin{align}
\label{eq:Ising}
\vec{s}_* & = \amin{\vec{s} \in \{ \pm 1 \}^N} \, \trn{\vec{s}} \mat{Q} \, \vec{s} + \ipt{\vec{q}}{\vec{s}}
\end{align}
where $\mat{Q} = \tfrac{1}{4} \, \mat{R}$ and $\vec{q} = \tfrac{1}{2} \, \mat{R} \, \vec{1} + \tfrac{1}{2} \, \vec{r}$. This is of considerable interest because \eqref{eq:Ising} is an Ising energy minimization problem that can be solved on adiabatic quantum computers or, using the quantum approximate optimization algorithm, on quantum gate computers.
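The conversion is a two-line computation. The following sketch applies the stated formulas for symmetric $\mat{R}$; the two energies then differ only by the constant $\tfrac{1}{4}\trn{\vec{1}}\mat{R}\,\vec{1} + \tfrac{1}{2}\trn{\vec{r}}\vec{1}$:

```python
import numpy as np

def binary_to_bipolar(R, r):
    """Map the binary QUBO z^T R z + r^T z (z in {0,1}^N) to the bipolar
    form s^T Q s + q^T s (s in {-1,+1}^N) via z = (s + 1)/2.  For symmetric
    R this gives Q = R/4 and q = R 1 / 2 + r / 2."""
    Q = R / 4.0
    q = R.sum(axis=1) / 2.0 + r / 2.0
    return Q, q

# sanity check on random symmetric data
rng = np.random.default_rng(0)
N = 6
R = rng.normal(size=(N, N)); R = (R + R.T) / 2
r = rng.normal(size=N)
Q, q = binary_to_bipolar(R, r)
const = R.sum() / 4 + r.sum() / 2      # the constant energy offset
z = rng.integers(0, 2, size=N).astype(float)
s = 2 * z - 1
assert np.isclose(z @ R @ z + r @ z, s @ Q @ s + q @ s + const)
```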
\subsection{Solving Sorting / Ordering QUBOs with Hopfield Nets}
Substituting $\mat{W} = -2 \, \mat{Q}$ and $\vec{\theta} = \vec{q}$, we may alternatively consider
\begin{align}
\label{eq:Hopfield}
\vec{s}_* & = \amin{\vec{s} \in \{ \pm 1 \}^N} -\tfrac{1}{2} \, \trn{\vec{s}} \mat{W} \vec{s} + \ipt{\vec{\theta}}{\vec{s}}
\end{align}
where $\mat{W}$ and $\vec{\theta}$ can be thought of as the weight matrix and bias vector of a Hopfield network. Hence, the minimization objective in \eqref{eq:Hopfield} now defines the energy of a Hopfield network in state $\vec{s}$.
As Hopfield energies go, the energy landscape of a Hopfield net for sorting or reordering is fairly well behaved. That is, it does not suffer from (spurious) local minima, which reflects the fact that sorting and reordering do not constitute hard problems.
When running a Hopfield net for sorting or reordering, we may thus consider a steepest energy descent mechanism
to update the network state $\vec{s}_t$ in iteration $t$ \cite{Bauckhage2021-HnetSort}. Our results below indicate that this yields fast convergence to a state which represents the solution to the problem at hand.
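A minimal sketch of such a steepest-descent update, written by us under the assumption of single-neuron flips (the cited mechanism may differ in detail): in each iteration, flip the neuron whose flip lowers the energy $E(\vec{s}) = -\tfrac{1}{2}\trn{\vec{s}}\mat{W}\vec{s} + \ipt{\vec{\theta}}{\vec{s}}$ the most, and stop at a stable state.

```python
import numpy as np

def hopfield_energy(W, theta, s):
    """Energy E(s) = -1/2 s^T W s + theta^T s of a state s in {-1,+1}^N."""
    return -0.5 * s @ W @ s + theta @ s

def steepest_descent(W, theta, s0, max_iter=10_000):
    """Greedy single-flip steepest energy descent to a stable state."""
    s = np.asarray(s0, float).copy()
    for _ in range(max_iter):
        h = W @ s - np.diag(W) * s    # local field without self-coupling
        dE = 2 * s * (h - theta)      # energy change if neuron i is flipped
        i = int(np.argmin(dE))
        if dE[i] >= 0:                # no flip lowers the energy: stable
            break
        s[i] = -s[i]
    return s

# demo on a random symmetric weight matrix, starting from s_0 = -1
rng = np.random.default_rng(1)
N = 8
W = rng.normal(size=(N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
theta = rng.normal(size=N)
s = steepest_descent(W, theta, -np.ones(N))
```

Since every accepted flip strictly decreases the energy and the state space is finite, the procedure always terminates in a state where no single flip helps.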
\subsection{Exemplary Results}
Just as in the previous section, we focus on simple settings and experiment with $n=7$ numbers. In particular, we consider
\begin{equation}
\vec{x} = \trn{[46, 52, -12, 33, 10, 51, 24]}
\end{equation}
In order to sort these numbers in ascending order and to arrange them into a binary tree or maximum heap, we work with the auxiliary vectors
\begin{align}
\vec{n}_S & = \trn{[1, 2, 3, 4, 5, 6, 7]} \\
\vec{n}_T & = \trn{[4, 2, 6, 1, 3, 5, 7]} \\
\vec{n}_H & = \trn{[7, 3, 6, 1, 2, 4, 5]}
\end{align}
and prepare corresponding Hopfield nets. In each experiment, we set the initial network state to $\vec{s}_0 = - \vec{1}$ and then run the steepest energy descent update mechanism.
Figure~\ref{fig:runs} visualizes the evolution of the respective Hopfield nets. It shows that each network rapidly converges to a stable state $\vec{s}_\infty$. In fact, all three networks converge within $O(n)$ updates which once again indicates that neither sorting nor reordering is a difficult problem.
Moreover, the stable states the networks converge to encode the optimal solution $\vec{s}_*$ to the respective sorting, tree-, or heap building QUBO. That is, all three stable states encode a permutation matrix $\mat{P} = \trn{\operatorname{mat}\bigl( (\vec{s}_* + \vec{1}) / 2 \bigr)}$ which permutes $\vec{x}$ into the desired order.
To be specific, we obtain permutation matrices $\mat{P}_S$, $\mat{P}_T$, and $\mat{P}_H$ which produce
\begin{alignat}{2}
\vec{y}_S & = \mat{P}_S \, \vec{x} && = \trn{[-12, 10, 24, 33, 46, 51, 52]} \\
\vec{y}_T & = \mat{P}_T \, \vec{x} && = \trn{[33, 10, 51, -12, 24, 46, 52]} \\
\vec{y}_H & = \mat{P}_H \, \vec{x} && = \trn{[52, 24, 51, -12, 10, 33, 46]}
\end{alignat}
and thus correctly solve the problems we considered.
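As a classical cross-check (ours, not part of the quantum/neural pipeline), the optimal permutation simply places the $k$-th smallest entry of $\vec{x}$ at the position holding the $k$-th smallest entry of the program vector $\vec{n}$, which reproduces all three results:

```python
import numpy as np

def reorder(x, n_vec):
    """Classical reference solution of the reordering objective."""
    y = np.empty_like(x)
    y[np.argsort(n_vec)] = np.sort(x)
    return y

x = np.array([46, 52, -12, 33, 10, 51, 24])
assert (reorder(x, np.array([1, 2, 3, 4, 5, 6, 7]))
        == np.array([-12, 10, 24, 33, 46, 51, 52])).all()   # y_S
assert (reorder(x, np.array([4, 2, 6, 1, 3, 5, 7]))
        == np.array([33, 10, 51, -12, 24, 46, 52])).all()   # y_T
assert (reorder(x, np.array([7, 3, 6, 1, 2, 4, 5]))
        == np.array([52, 24, 51, -12, 10, 33, 46])).all()   # y_H
```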
\section{Conclusion}
\label{sec:conclusion}
This paper resulted from ongoing research in which we ask: How universal is quadratic unconstrained binary optimization?
Examining the use of QUBOs for problems not commonly seen as combinatorial, we looked at the fundamental tasks of sorting lists and building trees. While there exist numerous established and efficient algorithms for these problems, our interest was in re-conceptualizing them because, if they can be modeled as QUBOs, it is clear that they can be solved using neurocomputing techniques or quantum computing.
Our key contribution was to show that sorting and tree building can indeed be cast as QUBOs.
The basic idea was to simply understand these problems as permutation problems and to devise objective functions whose minimization results in appropriate permutation matrices. We first appealed to the rearrangement inequality and expressed sorting or reordering as linear programming problems over binary matrices. Using linear algebraic arguments and standard tools from optimization theory, we then showed how to rewrite them as a linear program over binary vectors and, subsequently, as QUBOs.
Experiments demonstrated that the QUBO we derived in \eqref{eq:qubo-step4} provides a general purpose model for ordering or structuring problems. In a sense, it can be seen as a \emph{programmable machine}. The input this machine processes consists of a vector of numbers to be reordered; the program it executes is given by another vector which dictates the order to be produced. As such orders may represent trees or heaps or other kinds of data types, QUBOs and, consequently, neuromorphic- or quantum computers can manipulate abstract data structures.
| {
"timestamp": "2022-03-18T01:00:22",
"yymm": "2203",
"arxiv_id": "2203.08815",
"language": "en",
"url": "https://arxiv.org/abs/2203.08815",
    "abstract": "We show that the fundamental tasks of sorting lists and building search trees or heaps can be modeled as quadratic unconstrained binary optimization problems (QUBOs). The idea is to understand these tasks as permutation problems and to devise QUBOs whose solutions represent appropriate permutation matrices. We discuss how to construct such QUBOs and how to solve them using Hopfield nets or (adiabatic) quantum computing. In short, we show that neurocomputing methods or quantum computers can solve problems usually associated with abstract data structures.",
"subjects": "Data Structures and Algorithms (cs.DS); Machine Learning (cs.LG); Quantum Physics (quant-ph)",
    "title": "QUBOs for Sorting Lists and Building Trees"
} |
https://arxiv.org/abs/1202.4656 | Scoring Play Combinatorial Games Under Different Operators | Scoring play games were first studied by Fraser Stewart for his PhD thesis. He showed that under the disjunctive sum, scoring play games are partially ordered, but do not have the same "nice" structure of normal play games. In this paper I will be considering scoring play games under three different operators given by John Conway and William Stromquist and David Ullman, namely the conjunctive sum, selective sum and sequential join. | \section{Introduction}
Until very recently scoring play games have not received the kind of treatment or analysis that normal and mis\`ere play games have. The general definition of a scoring play game is given below, for further reading on the general structure of scoring play games see \cite{FS} and \cite{FSP}.
In this paper we will be examining scoring play games under three different operators: the conjunctive sum, where the players must move on all available components on their turn; the selective sum, where the players may move on any components they wish on their turn; and finally the sequential join, where the components are given a pre-arranged order and the players must play on them in that order. These operators were first defined by John Conway in On Numbers and Games \cite{ONAG}, and by William Stromquist and David Ullman \cite{SU}.
We will also be looking at the Sprague-Grundy values of scoring play octal games under these three different operators. We will give evidence for, and conjecture, that the scoring play Sprague-Grundy function is eventually periodic, with the same period for finite octal games as under the disjunctive sum, for all three operators.
\subsection{Scoring Play Theory}
Intuitively a scoring play game is one that has the following four properties;
\begin{enumerate}
\item{The rules of the game clearly define what points are and how players either gain or lose them.}
\item{When the game ends the player with the most points wins.}
\item{For any two games $G$ and $H$, $a$ points in $G$ are equal to $a$ points in $H$, where $a\in \mathbb{R}$.}
\item{At any stage in a game $G$ if Left has $L$ points and Right has $R$ points, then the score of $G$ is $L-R$, where $L,R\in \mathbb{R}$.}
\end{enumerate}
Mathematically the definition is given as follows \cite{FS};
\begin{definition} A scoring play game $G=\{G^L|G^S|G^R\}$, where $G^L$ and $G^R$ are sets of games and $G^S\in\mathbb{R}$, the base case for the recursion is any game $G$ where $G^L=G^R=\emptyset$.\\
\noindent $G^L=\{\hbox{All games that Left can move to from } G\}$\\
\noindent $G^R=\{\hbox{All games that Right can move to from } G\}$,\\
and for all $G$ there is an $S=(P,Q)$ where $P$ and $Q$ are the number of points that Left and Right have on $G$ respectively. Then $G^S=P-Q$, and for all $g^L\in G^L$, $g^R\in G^R$, there is a $p^L,p^R\in\mathbb{R}$ such that $g^{LS}=G^S+p^L$ and $g^{RS}=G^S+p^R$.
$G_F^{SL}$ and $G_F^{SR}$ are called the final scores of $G$ and are the largest scores that Left and Right can achieve when $G$ ends, moving first respectively, if both players play their optimal strategy on $G$.
\end{definition}
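The final scores can be computed by a straightforward minimax recursion. The following Python sketch is our illustration, representing a game as a triple (list of Left options, current score, list of Right options) and assuming that a player with no options ends the game at the current score:

```python
def final_scores(G):
    """Return (G_F^SL, G_F^SR), the final scores of a scoring game with
    Left resp. Right moving first, by minimax over the game tree."""
    GL, GS, GR = G
    # Left maximizes the score; after Left's move it is Right's turn
    sl = GS if not GL else max(final_scores(g)[1] for g in GL)
    # Right minimizes the score; after Right's move it is Left's turn
    sr = GS if not GR else min(final_scores(g)[0] for g in GR)
    return sl, sr

# the game {4|3|2} in the shorthand of this paper
G = ([([], 4, [])], 3, [([], 2, [])])
assert final_scores(G) == (4, 2)   # Left wins moving first or second
```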
For scoring play the disjunctive sum needs to be defined a little differently, because in scoring games when we combine them together we have to sum the games and the scores separately. For this reason we will be using two symbols $+_{\ell}$ and $+$. The $\ell$ in the subscript stands for ``long rule'', this comes from \cite{ONAG}, and means that the game ends when a player cannot move on any component on his turn.
\begin{definition} The disjunctive sum is defined as follows:
$$G+_{\ell} H=\{G^L+_{\ell} H,G+_{\ell} H^L|G^S+H^S|G^R+_{\ell} H,G+_{\ell} H^R\},$$
\noindent where $G^S+H^S$ is the normal addition of two real numbers.
\end{definition}
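This definition translates directly into a recursion on game trees. The sketch below is our illustration, representing a game as a triple (list of Left options, current score, list of Right options):

```python
def dsum(G, H):
    """Disjunctive sum G +_l H under the long rule: on each turn a player
    moves in one component of their choice, and the scores add."""
    GL, GS, GR = G
    HL, HS, HR = H
    L = [dsum(g, H) for g in GL] + [dsum(G, h) for h in HL]
    R = [dsum(g, H) for g in GR] + [dsum(G, h) for h in HR]
    return (L, GS + HS, R)

# {0|1|2} +_l 5 = {0 +_l 5 | 6 | 2 +_l 5}
G = ([([], 0, [])], 1, [([], 2, [])])
assert dsum(G, ([], 5, [])) == ([([], 5, [])], 6, [([], 7, [])])
```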
The outcome classes also need to be redefined to take into account the fact that a game can end with a tied score. So we have the following two definitions.
\begin{definition}
\begin{itemize}
\item{$L_>=\{G|G_F^{SL}>0\}$, $L_<=\{G|G_F^{SL}<0\}$, $L_= =\{G|G_F^{SL}=0\}$.}
\item{$R_>=\{G|G_F^{SR}>0\}$, $R_<=\{G|G_F^{SR}<0\}$, $R_= =\{G|G_F^{SR}=0\}$.}
\item{$L_\geq=L_>\cup L_=$, $L_\leq = L_<\cup L_=$.}
\item{$R_\geq=R_>\cup R_=$, $R_\leq = R_<\cup R_=$.}
\end{itemize}
\end{definition}
\begin{definition}
The outcome classes of scoring games are defined as follows:
\begin{itemize}
\item{$\mathcal{L}=(L_>\cap R_>)\cup(L_>\cap R_=)\cup(L_=\cap R_>)$}
\item{$\mathcal{R}=(L_<\cap R_<)\cup(L_<\cap R_=)\cup(L_=\cap R_<)$}
\item{$\mathcal{N}=L_>\cap R_<$}
\item{$\mathcal{P}=L_<\cap R_>$}
\item{$\ti=L_=\cap R_=$}
\end{itemize}
\end{definition}
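Since the outcome class is determined entirely by the pair of final scores, the case analysis can be written out mechanically. The following sketch is our illustration (recall that the score convention is Left minus Right, so a positive final score favors Left):

```python
def outcome(sl, sr):
    """Outcome class of a scoring game from its final scores
    sl = G_F^SL and sr = G_F^SR."""
    if sl > 0 and sr < 0:
        return "N"      # whoever moves first wins
    if sl < 0 and sr > 0:
        return "P"      # whoever moves second wins
    if sl == 0 and sr == 0:
        return "tie"
    if sl >= 0 and sr >= 0:
        return "L"      # Left wins at least one way and never loses
    return "R"          # symmetrically, Right

assert [outcome(*p) for p in [(1, -1), (-1, 1), (0, 0), (1, 0), (-1, -1)]] \
       == ["N", "P", "tie", "L", "R"]
```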
We will also be using two conventions throughout this paper. The first is that the initial score of a game will be $0$ unless stated otherwise. The second is that for a game $G$ if $G^L=G^R=\emptyset$, then we will write $G$ as $G^S$ rather than $\{.|G^S|.\}$. For example the game $G = \{\{.|0|.\}|1|\{.|2|.\}\}$, will be written as $\{0|1|2\}$. The game $\{.|n|.\}$, will be written as $n$, and so on. This is simply for convenience and ease of reading.
\subsection{Impartial Games}
The definition of an impartial scoring play game is less intuitive than for normal and mis\`ere play games. The reason for this is because we have to take into account the score, for example, consider the game $G=\{4|3|2\}$. On the surface the game does not appear to fall into the category of an impartial game, since Left wins moving first or second, however this game is impartial since both players move and gain a single point, i.e. they both have the same options.
So we will use the following definition for an impartial game;
\begin{definition}
A scoring game $G$ is impartial if it satisfies the following;
\begin{enumerate}
\item{$G^L=\emptyset$ if and only if $G^R=\emptyset$.}
\item{If $G^L\neq \emptyset$ then for all $g^L\in G^L$ there is a $g^R\in G^R$ such that $g^L+_{\ell} -G^S=-(g^R+_{\ell} -G^S)$.}
\end{enumerate}
\end{definition}
We will also be looking at octal games in this paper, and for scoring play games we use the following definition of an octal game.
\begin{definition}
A scoring play octal game $O=(n_1n_2\dots n_k, p_1p_2\dots p_k)$, is a set of rules for playing nim where if a player removes $i$ beans from a heap of size $n$ he gets $p_i$ points, $p_i\in \mathbb{R}$, and he must leave $a,b,c\dots$ or $j$ heaps, where $n_i=2^a+2^b+2^c+\dots+2^j$.
\end{definition}
By convention we will also say that $n\in O$ means that the nim heap $n$ is played under the rule set $O$. In \cite{FSP} and \cite{FSI}, the following definition and conjecture were given.
\begin{definition}
Let $n\in O=(t_1t_2\dots t_f, p_1p_2\dots p_t)$ and $m\in P=(s_1s_2\dots s_e, q_1q_2\dots q_t)$;
\begin{itemize}
\item{$\mathcal{G}_s(0)=0$.}
\item{$\mathcal{G}_s(n)=\max_{k,i}\{p_k-\mathcal{G}_s(n_1+_{\ell} n_2+_{\ell} \dots+_{\ell} n_{i})\}$, where $n_1+n_2+\dots +n_{i}=n-k$, $t_k=\Sigma_{i\in S_k}2^i$.}
\item{$\mathcal{G}_s(n+_{\ell} m)=\max_{k,i,l,j}\{p_k-\mathcal{G}_s(n_1+_{\ell} n_2+_{\ell} \dots+_{\ell} n_i+_{\ell} m),q_l-\mathcal{G}_s(n+_{\ell} m_1+_{\ell} m_2+_{\ell}\dots+_{\ell} m_j)\}$, where $n_1+n_2+\dots +n_i=n-k$, $t_k=\Sigma_{i\in S_k}2^i$, $m_1+m_2+\dots +m_j=m-l$ and $s_l=\Sigma_{j\in R_l}2^j$.}
\end{itemize}
\end{definition}
\begin{conjecture}\label{period}
Let $O=(n_1n_2\dots n_t, p_1p_2\dots p_t)$ and $P=(m_1m_2\dots m_l, q_1q_2\dots q_l)$ be two finite taking-no-breaking octal games such that there is at least one $n_s\neq 0$ or $1$, and if $n_i$ and $m_j=1,2$ or $3$ then $p_i=i$ and $q_j=j$, and $p_i=q_j=0$ otherwise; then for all $m$ there exists an $N$ such that;
$$\mathcal{G}_s(n+2k+_{\ell} m)=\mathcal{G}_s(n+_{\ell} m)$$
\noindent for all $n\geq N$, where $k$ is the largest index in $O$ such that $n_k\neq 0,1$.
\end{conjecture}
There was a lot of evidence given that the conjecture is true. In this paper we will be examining the same function under all three operators. I also conjecture that if Conjecture \ref{period} is true then the function settles down to the same period for all three operators. If this is true it would be a very interesting result, since one would expect that changing the operator would change the period of the function, but in this case it appears that this does not happen.
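To illustrate the kind of eventual periodicity the conjecture describes, here is a small computational sketch for single heaps of a taking-no-breaking game. The reduction $\mathcal{G}_s(n)=\max_k\{p_k-\mathcal{G}_s(n-k)\}$ is our simplification of the definition above for games without heap splitting; a heap with no legal move is assumed to contribute $0$:

```python
from functools import lru_cache

def scoring_grundy(points):
    """Single-heap scoring Sprague-Grundy values: points maps a legal
    removal size k to the points p_k it scores, and
    G_s(n) = max_k { p_k - G_s(n - k) } with G_s(0) = 0."""
    @lru_cache(maxsize=None)
    def g(n):
        opts = [p - g(n - k) for k, p in points.items() if k <= n]
        return max(opts) if opts else 0
    return g

# scoring nim where removing i beans scores i points (p_i = i, as in the
# conditions of the conjecture): the values are periodic from the start
g = scoring_grundy({1: 1, 2: 2})
assert [g(n) for n in range(9)] == [0, 1, 2, 1, 0, 1, 2, 1, 0]
assert all(g(n + 4) == g(n) for n in range(1, 50))
```

Here the period is $4 = 2k$ with $k = 2$ the largest removal, which is consistent with the period $2k$ appearing in the conjecture.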
\section{The Conjunctive Sum}
The first operator that we will be looking at is the conjunctive sum. Under this operator, players must move on all components on their turn. Mathematically it is defined as follows;
\begin{definition} The conjunctive sum is:
$$G\bigtriangleup H=\{G^L\bigtriangleup H^L|G^S+H^S|G^R\bigtriangleup H^R\}$$
\noindent where $G^S + H^S$ is the normal addition of two real numbers.
\end{definition}
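The product structure of the options is easy to see in code. The sketch below is our illustration, representing a game as a triple (list of Left options, current score, list of Right options); note that the compound has no options as soon as one component has none:

```python
def csum(G, H):
    """Conjunctive sum: a move in the compound is a simultaneous move in
    both components, and the scores add."""
    GL, GS, GR = G
    HL, HS, HR = H
    L = [csum(g, h) for g in GL for h in HL]
    R = [csum(g, h) for g in GR for h in HR]
    return (L, GS + HS, R)

# {0|1|2} conj {3|4|5} = {0 conj 3 | 5 | 2 conj 5}
G = ([([], 0, [])], 1, [([], 2, [])])
H = ([([], 3, [])], 4, [([], 5, [])])
assert csum(G, H) == ([([], 3, [])], 5, [([], 7, [])])
```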
\begin{theorem}
If $G\not\cong 0$ then $G\neq 0$.
\end{theorem}
\begin{proof} First consider the game $G^L=G^R=\emptyset$, then clearly if $G^S\neq 0$ then $G\neq 0$.
Next consider the case where $G^L\neq \emptyset$, since the case $G^R\neq \emptyset$ follows by symmetry. Let $P=\{.|a|b\}$, where $a=P^{SL}_F>0$. Since $G$ is a combinatorial game, its game tree has finite depth and finite width, so we can choose $b<0$ such that $|b|$ is greater than any number on $G$. On Left's first turn he must move to $G^L\bigtriangleup P$; regardless of whether Right can play on $G$ or not, he will have to move on $P$ on his next turn.
Thus $(G\bigtriangleup P)^{SL}_F<0$, and therefore $G\bigtriangleup P\not\approx P$, and the theorem is proven.
\end{proof}
\begin{theorem}
For any outcome classes $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{Z}$, there is a game $G\in \mathcal{X}$ and $H\in \mathcal{Y}$ such that $G\bigtriangleup H\in \mathcal{Z}$.
\end{theorem}
\begin{proof} To prove this consider the following game $G\bigtriangleup H$, where $G=\{\{.|b|\{.|c|\{e|d|f\}\}\}|a|.\}$ and $H=\{.|g|\{\{\{k|j|.\}|i|.\}|h|.\}\}$, as shown in figure \ref{cout}.
\begin{figure}[htb]
\begin{center}
\begin{graph}(4.5,4)
\roundnode{1}(0.5,4)\roundnode{2}(0,3)\roundnode{3}(0.5,2)\roundnode{4}(1,1)\roundnode{5}(1.5,0)
\roundnode{6}(0.5,0)\roundnode{7}(3,0)\roundnode{8}(3.5,1)\roundnode{9}(4,2)\roundnode{10}(4.5,3)
\roundnode{11}(4,4)
\edge{1}{2}\edge{2}{3}\edge{3}{4}\edge{4}{5}\edge{4}{6}
\edge{7}{8}\edge{8}{9}\edge{9}{10}\edge{10}{11}
\freetext(2.25,2){$\bigtriangleup$}
\nodetext{1}{$a$}\nodetext{2}{$b$}\nodetext{3}{$c$}\nodetext{4}{$d$}\nodetext{5}{$f$}
\nodetext{6}{$e$}\nodetext{7}{$k$}\nodetext{8}{$j$}\nodetext{9}{$i$}\nodetext{10}{$h$}
\nodetext{11}{$g$}
\end{graph}
\end{center}
\caption{$\{\{.|b|\{.|c|\{e|d|f\}\}\}|a|.\}\bigtriangleup \{.|g|\{\{\{k|j|.\}|i|.\}|h|.\}\}$}\label{cout}
\end{figure}
In these games $G^{SL}_F=c$ and $G^{SR}_F=a$, $H^{SL}_F=g$ and $H^{SR}_F=i$, however $(G\bigtriangleup H)^{SL}_F=e+j$ and $(G\bigtriangleup H)^{SR}_F=e+k$. Since the outcome classes of $G$ and $H$ depend on $a$, $c$, $g$ and $i$, and the outcome class of $G\bigtriangleup H$ depends on $e+j$ and $e+k$, then clearly we can choose $a$, $c$, $g$, $i$, $e$, $j$ and $k$, so that $G$ and $H$ can be in any outcome class and $G\bigtriangleup H$ can be in any outcome class and the theorem is proven.
\end{proof}
\subsection{Impartial Games}
\begin{theorem}
Impartial games form an abelian group under the conjunctive sum.
\end{theorem}
\begin{proof} To prove this we only need to show that there is an identity set $I$ that contains more than one element, and that for any impartial game $G$, there is a $G^{-1}$ such that $G\bigtriangleup G^{-1}\in I$.
Let $I=\{G|G \hbox{ is impartial and }G\in \ti\}$, then we wish to show that for all $G\in I$, $G\bigtriangleup P\approx P$ for all impartial games $P$. There are three cases to consider, since the remaining follow by symmetry, $P\in L_>$, $P\in L_<$ or $P\in L_=$.
So first let $P\in L_>$, and consider the game $G\bigtriangleup P$. Since Left can achieve a score of $0$ on $G$, then all Left has to do is play his winning strategy on $P$, and $G\bigtriangleup P\in L_>$.
Next let $P\in L_<$, and consider the game $G\bigtriangleup P$. $G\in L_=$, and since both $G$ and $P$ are impartial, neither player can change the parity of either game, since they must both play both games on every turn. So all Right has to do is play his winning strategy on $P$ and $G\bigtriangleup P\in L_<$.
Finally let $P\in L_=$, and consider the game $G\bigtriangleup P$. If both players always make their best moves on $G$ and $P$ then the final score of $G\bigtriangleup P$ will be 0, since $G\in \ti$ and $P\in L_=$. Since $G\in \ti$ and $P\in L_=$, this implies that if Left chooses a different move other than his best move either $G$ or $P$, then the final score will be $\leq 0$, and similarly for Right. This means that as long as Right keeps playing his best strategy, if Left chooses anything else Right can potentially win and similarly for Left. In other words the best thing for both players to do is to play their best strategy on both $G$ and $P$ and the final score will be a tie, i.e. $G\bigtriangleup P\in L_=$.
The cases for $R_>$, $R_<$ and $R_=$ follow by symmetry.
For the inverse of a game $G$, where $G^{SL}_F=n$ and $G^{SR}_F=p$, we let $H$ be a game where $H^{SL}_F=-n$ and $H^{SR}_F=-p$. Note that $G\bigtriangleup H\in I$ if and only if $G\bigtriangleup H\in \ti$.
So consider the game $G\bigtriangleup H$ with Left moving first, since the case where Right moves first follows by symmetry. If $G^L=\emptyset$, then this implies that $G^R=\emptyset$ since $G$ is impartial, which implies that $G^{SL}_F=G^{SR}_F=n$, so for the inverse let $H$ be a game such that $H^{SL}_F=H^{SR}_F=-n$. However since $H$ is impartial the only game that satisfies that condition is the game $H=\{.|-n|.\}$, which is clearly the inverse of $G$.
If $G^L\neq \emptyset$, and Left and Right make their best move at every stage on both $G$ and $H$, then the final score of $G\bigtriangleup H$ will be $G^{SL}_F+H^{SL}_F=n-n=0$. Using the same argument as the identity proof if Left or Right try a different strategy then the final score will be either $\leq 0$ or $\geq 0$ respectively, therefore $(G\bigtriangleup H)^{SL}_F=(G\bigtriangleup H)^{SR}_F=0$ and $H$ is the inverse of $G$.
It is clear that the set is closed, since if $G$ and $H$ are impartial then $G\bigtriangleup H$ must also be impartial. It is also clear that we have commutativity and associativity, since we must play on every component on every turn, then the order of the components is irrelevant.
\end{proof}
\subsection{Sprague-Grundy Theory}
First we define the following;
\begin{definition}
Let $n\in O=(t_1t_2\dots t_f, p_1p_2\dots p_f)$ and $m\in P=(s_1s_2\dots s_e, q_1q_2\dots q_e)$;
\begin{itemize}
\item{$\mathcal{G}_s(0)=0$.}
\item{$\mathcal{G}_s(n)=\max_{k,i}\{p_k-\mathcal{G}_s(n_1\bigtriangleup n_2\bigtriangleup \dots\bigtriangleup n_{i})\}$, where $n_1+n_2+\dots +n_{i}=n-k$ and $t_k=\Sigma_{i\in S_k}2^i$.}
\item{$\mathcal{G}_s(n\bigtriangleup m)=\max_{k,i,l,j}\{p_k+q_l-\mathcal{G}_s(n_1\bigtriangleup n_2\bigtriangleup \dots\bigtriangleup n_i\bigtriangleup m_1\bigtriangleup m_2\bigtriangleup\dots\bigtriangleup m_j)\}$, where $n_1+n_2+\dots +n_i=n-k$, $t_k=\Sigma_{i\in S_k}2^i$, $m_1+m_2+\dots +m_j=m-l$ and $s_l=\Sigma_{j\in R_l}2^j$.}
\end{itemize}
\end{definition}
The fact that impartial games form a group under the conjunctive sum means that we can easily solve any octal game simply by knowing each heap's $\mathcal{G}_s(n)$ value. So we have the following theorem;
\begin{theorem}
$$\mathcal{G}_s(n\bigtriangleup m)=\mathcal{G}_s(n)+\mathcal{G}_s(m)$$
\end{theorem}
\begin{proof} We will prove this by induction. The base case is trivial, since $\mathcal{G}_s(0\bigtriangleup 0)=\mathcal{G}_s(0)+\mathcal{G}_s(0)=0$.
So assume that the theorem holds for all values up to $\mathcal{G}_s(n\bigtriangleup m)$, and consider $\mathcal{G}_s((n+1)\bigtriangleup m)$, since the case $\mathcal{G}_s(n\bigtriangleup (m+1))$ follows by symmetry. $\mathcal{G}_s((n+1)\bigtriangleup m)=\max_{k,i,l,j}\{p_k+q_l-\mathcal{G}_s(n_1\bigtriangleup n_2\bigtriangleup \dots\bigtriangleup n_i\bigtriangleup m_1\bigtriangleup m_2\bigtriangleup\dots\bigtriangleup m_j)\}$, where $n_1+n_2+\dots +n_i=n+1-k$, $t_k=\Sigma_{i\in S_k}2^i$, $m_1+m_2+\dots +m_j=m-l$ and $s_l=\Sigma_{j\in R_l}2^j$. However each $n_i<n+1$ and $m_j<m$, therefore $\max_{k,i,l,j}\{p_k+q_l-\mathcal{G}_s(n_1\bigtriangleup n_2\bigtriangleup \dots\bigtriangleup n_i\bigtriangleup m_1\bigtriangleup m_2\bigtriangleup\dots\bigtriangleup m_j)\}=\max_{k,i,l,j}\{p_k+q_l-(\mathcal{G}_s(n_1)+\mathcal{G}_s(n_2)+\dots+\mathcal{G}_s(n_i)+\mathcal{G}_s(m_1)+\mathcal{G}_s(m_2)+\dots+\mathcal{G}_s(m_j))\}$, by induction.
Therefore $\max_{k,i,l,j}\{p_k+q_l-(\mathcal{G}_s(n_1)+\dots+\mathcal{G}_s(n_i)+\mathcal{G}_s(m_1)+\dots+\mathcal{G}_s(m_j))\}=\max_{k,i}\{p_k-(\mathcal{G}_s(n_1)+\dots+\mathcal{G}_s(n_i))\}+\max_{l,j}\{q_l-(\mathcal{G}_s(m_1)+\dots+\mathcal{G}_s(m_j))\}=\max_{k,i}\{p_k-\mathcal{G}_s(n_1\bigtriangleup n_2\bigtriangleup\dots\bigtriangleup n_i)\}+\max_{l,j}\{q_l-\mathcal{G}_s(m_1\bigtriangleup m_2\bigtriangleup\dots\bigtriangleup m_j)\}=\mathcal{G}_s(n+1)+\mathcal{G}_s(m)$, and the proof is finished.
\end{proof}
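The additivity theorem is easy to check numerically for a concrete rule set. The sketch below is purely illustrative (the particular game, in which a move removes 1 or 2 tokens from a heap and scores that many points, and all function names are my own assumptions, not fixed by the paper); it computes $\mathcal{G}_s$ for single heaps and for conjunctive sums, where the mover must move in every non-empty component.

```python
from functools import lru_cache
from itertools import product

# Illustrative scoring taking game (an assumption for this sketch):
# a move removes 1 or 2 tokens from a heap and scores that many points.
TAKES = {1: 1, 2: 2}  # take-size -> points scored

@lru_cache(maxsize=None)
def gs(n):
    """Single-heap value: G_s(n) = max_k {p_k - G_s(n - k)}."""
    moves = [p - gs(n - k) for k, p in TAKES.items() if k <= n]
    return max(moves) if moves else 0

@lru_cache(maxsize=None)
def gs_conj(heaps):
    """Conjunctive sum: the mover must move in every non-empty heap."""
    heaps = tuple(h for h in heaps if h > 0)
    if not heaps:
        return 0
    best = float("-inf")
    # pick one legal take in each non-empty heap (take 1 is always legal here)
    for takes in product(*[[k for k in TAKES if k <= h] for h in heaps]):
        score = sum(TAKES[k] for k in takes)
        rest = tuple(h - k for h, k in zip(heaps, takes))
        best = max(best, score - gs_conj(rest))
    return best

# The theorem's prediction: G_s of a conjunctive sum is additive.
for n in range(7):
    for m in range(7):
        assert gs_conj((n, m)) == gs(n) + gs(m)
```

For this rule set the single-heap values follow the period-4 pattern $0,1,2,1,\dots$, and every assertion in the loop holds.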
\section{The Selective Sum}
The selective sum is a more general version of the disjunctive sum. Rather than choosing a single component on each turn and playing that one only, the player can select any components he wishes and play each of those components on his turn. It is defined as follows:
\begin{definition} The selective sum is:
$$G\triangledown H=\{G^L\triangledown H, G\triangledown H^L, G^L\triangledown H^L|G^S+H^S|G^R\triangledown H, G\triangledown H^R, G^R\triangledown H^R\}$$
\noindent where $G^S +H^S$ is the normal addition of two real numbers.
\end{definition}
\begin{theorem}
If $G\not\cong 0$ then $G\neq 0$.
\end{theorem}
\begin{proof} The proof of this is very similar to the same theorem for the conjunctive sum. First consider the case $G^L=G^R=\emptyset$; then clearly if $G^S\neq 0$ then $G\neq 0$.
Next consider the case where $G^L\neq \emptyset$, since the case $G^R\neq \emptyset$ follows by symmetry. Let $P=\{.|a|b\}$, where $a=P^{SL}_F>0$. Since $G$ is a combinatorial game, its game tree has finite depth and finite width, so we can choose $b$ to be more negative than any number on $G$. On Left's first turn he must move to $G^L\triangledown P$; Right can then win by simply moving to $G^L\triangledown b$ on his turn, since the final score will be less than $0$ regardless of what Left does.
Thus $(G\triangledown P)^{SL}_F<0$, and therefore $G\triangledown P\not\approx P$, and the theorem is proven.
\end{proof}
\begin{theorem}
For any outcome classes $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{Z}$, there is a game $G\in \mathcal{X}$ and $H\in \mathcal{Y}$ such that $G\triangledown H\in \mathcal{Z}$.
\end{theorem}
\begin{proof} To prove this consider the following game $G\triangledown H$, where $G= \{\{c|b|.\}|a|.\}$ and $H=\{.|d|\{.|e|\{.|f|g\}\}\}$, as shown in the following diagram.
\begin{figure}[htb]
\begin{center}
\begin{graph}(3.5,3)
\roundnode{1}(0,1)\roundnode{2}(0.5,2)\roundnode{3}(1,3)\roundnode{4}(2,3)
\roundnode{5}(2.5,2)\roundnode{6}(3,1)\roundnode{7}(3.5,0)
\edge{1}{2}\edge{2}{3}\edge{4}{5}\edge{5}{6}\edge{6}{7}
\freetext(1.5,1.5){$\triangledown$}
\nodetext{1}{$a$}\nodetext{2}{$b$}\nodetext{3}{$c$}\nodetext{4}{$d$}
\nodetext{5}{$e$}\nodetext{6}{$f$}\nodetext{7}{$g$}
\end{graph}
\end{center}
\caption{$\{\{c|b|.\}|a|.\}\triangledown \{.|d|\{.|e|\{.|f|g\}\}\}$}
\end{figure}
In these games $G^{SL}_F=b$ and $G^{SR}_F=a$, $H^{SL}_F=d$ and $H^{SR}_F=e$; however, $(G\triangledown H)^{SL}_F=c+f$ and $(G\triangledown H)^{SR}_F=c+g$. Since the outcome classes of $G$ and $H$ depend on $a$, $b$, $d$ and $e$, and the outcome class of $G\triangledown H$ depends on $c+f$ and $c+g$, we can clearly choose $a$, $b$, $c$, $d$, $e$, $f$ and $g$ so that $G$ and $H$ can be in any outcome class and $G\triangledown H$ can be in any outcome class, and the theorem is proven.
\end{proof}
\subsection{Impartial Games}
\begin{theorem}
Impartial games form a non-trivial monoid under the selective sum.
\end{theorem}
\begin{proof} To prove that we have a non-trivial monoid we simply need to exhibit an identity set that contains more than the game $\{.|0|.\}$.
First I will define a subset of the impartial games as follows:
$$I=\{i|G\triangledown i\approx G, \hbox{ for all impartial games }G\}$$
Again, in order to show that we have a non-trivial monoid we have to show that $I$ contains more than one element. So consider the following impartial game, $i=\{\{0|0|0\}|0|\{0|0|0\}\}$.
\begin{figure}[htb]
\begin{center}
\begin{graph}(3,2)
\roundnode{1}(0,0)\roundnode{2}(0.5,1)\roundnode{3}(1,0)\roundnode{4}(1.5,2)
\roundnode{5}(2,0)\roundnode{6}(2.5,1)\roundnode{7}(3,0)
\edge{1}{2}\edge{2}{3}\edge{4}{2}\edge{4}{6}\edge{5}{6}\edge{6}{7}
\nodetext{1}{0}\nodetext{2}{0}\nodetext{3}{0}\nodetext{4}{0}
\nodetext{5}{0}\nodetext{6}{0}\nodetext{7}{0}
\end{graph}
\end{center}
\caption{The game $\{\{0|0|0\}|0|\{0|0|0\}\}$}
\end{figure}
To show that $i\triangledown G\approx G$ for all impartial games $G$, there are 3 cases to consider: $G_F^{SL}>0$, $G_F^{SL}<0$ and $G_F^{SL}=0$; the cases for Right follow by symmetry. First let $G_F^{SL}>0$. If Left has no move on $G$, then neither does Right, since $G$ is impartial, i.e. $G=G^S$, so they will play $i$ and the final score will still be $G^S$.
So let Left have a move on $G$, if Left chooses his best move on $G$, then if Right plays $i$, then Left will respond in $i$ and Right must play $G$, which Left wins. If Right tries to play on both $G$ and $i$, then either Right moves to a game where $G^L=G^R=\emptyset$, in which case Left moves on $i$ only and wins, or $G^L\neq \emptyset$, and Left also plays both $G$ and $i$ in order to maintain parity on $G$ and still wins. Clearly if Right chooses to play $G$, then he will still lose, since Left also plays $G$ until it is finished and neither player can gain points on $i$.
Next let $G_F^{SL}<0$. This means that no matter what Left does, he will lose playing only $G$ on $G\triangledown i$, since Right will simply respond in $G$ until $G$ is finished, and then they will play $i$, which does not change the final score of $G$. Again, if Left tries to change the parity of $G$ by playing $i$, Right will also play $i$, and it will be Left's turn to move on $G$ again. If Left chooses to move on both $G$ and $i$, then as before Right will also move on $G$ and $i$ if $G^R\neq\emptyset$, and on $i$ if $G^R=\emptyset$, but either way Right will win.
Finally let $G_F^{SL}=0$. This means that Left's best move will be a move that eventually ties $G$. So consider the game $G\triangledown i$. Left's best move will be to move either on $G$ or on both $G$ and $i$; if Left moves on $i$ alone then this will give Right an opportunity to move first on $G$ and potentially win. If Left moves on $G$ then Right can either play $G$, $i$, or both $G$ and $i$. If Right chooses to play $G$ then Left will simply respond in $G$ to force a tie; if Right plays $i$ then Left can either respond in $i$ and still tie, or play $G$ and potentially win. If Right plays both $G$ and $i$, again Left can respond in both and tie, or play $i$ only and potentially win. So therefore $(G\triangledown i)^{SL}_F=0$.
Therefore the set of impartial games is a non-trivial monoid under the selective sum and the theorem is proven.
\end{proof}
\begin{conjecture}\label{cs1}
Not every impartial game is invertible under the selective sum.
\end{conjecture}
To prove this we would have to exhibit an impartial game $G$ for which there is no game $Y$ such that $G\triangledown Y\triangledown P\approx P$ for all impartial games $P$. Finding such a game and proving that it has no inverse is going to be rather difficult, but nevertheless the conjecture is likely to be true.
\subsection{Sprague-Grundy Theory}
As with the other operators I will define the function in the most general possible sense.
\begin{definition}
Let $n\in O=(t_1t_2\dots t_f, p_1,\dots p_f)$ and $m\in P=(s_1s_2\dots s_e, q_1\dots q_e)$;
\begin{itemize}
\item{$\mathcal{G}_s(0)=0$.}
\item{$\mathcal{G}_s(n)=\max_{k,i}\{p_k-\mathcal{G}_s(n_1\triangledown n_2\triangledown \dots\triangledown n_{i})\}$, where $n_1+n_2+\dots +n_{i}=n-k$ and $t_k=\Sigma_{i\in S_k}2^i$.}
\item{$\mathcal{G}_s(n\triangledown m)=\max_{k,i,l,j}\{p_k - \mathcal{G}_s(n_1\triangledown n_2\triangledown\dots\triangledown n_i\triangledown m), q_l-\mathcal{G}_s(n\triangledown m_1\triangledown m_2\triangledown \dots\triangledown m_j), p_k+q_l-\mathcal{G}_s(n_1\triangledown n_2\triangledown \dots\triangledown n_i\triangledown m_1\triangledown m_2\triangledown\dots\triangledown m_j)\}$, where $n_1+n_2+\dots +n_i=n-k$, $t_k=\Sigma_{i\in S_k}2^i$, $m_1+m_2+\dots +m_j=m-l$ and $s_l=\Sigma_{j\in R_l}2^j$.}
\end{itemize}
\end{definition}
\begin{theorem}
Suppose $O_1,\dots,O_v$ are octal games, and there are natural numbers $N_1,\dots,N_v$ such that for each $i=1,\dots,v$, $\mathcal{G}_s(n)\geq 0$ for all $n\in O_i$ with $n\leq N_i$. Then if $n_i\in O_i$ and $n_i\leq N_i$ for each $i=1,\dots,v$, $\mathcal{G}_s(n_1\triangledown\dots\triangledown n_v)=\Sigma_{i=1}^v\mathcal{G}_s(n_i)$.
\end{theorem}
\begin{proof} I will prove this by induction on $n_1+\dots+n_j$ for some $j$. The base case is clearly trivial, since $\mathcal{G}_s(0\triangledown\dots\triangledown 0)=0$ regardless of how many $0$'s there are.
So for the inductive step assume that the result holds for all $n_1+\dots+n_j\leq K$, and choose $n$ and $m$ such that $n+m=K+1$ and $\mathcal{G}_s(n),\mathcal{G}_s(m)\geq 0$. The reason I only choose two games $n$ and $m$ is because it makes the proof easier, and it will also be clear that the same argument can be extended to any number of games.
$\mathcal{G}_s(n\triangledown m)=\max_{k,i,l,j}\{p_k - \mathcal{G}_s(n_1\triangledown\dots\triangledown n_i\triangledown m), q_l-\mathcal{G}_s(n\triangledown m_1\triangledown \dots\triangledown m_j), p_k+q_l-\mathcal{G}_s(n_1\triangledown \dots\triangledown n_i\triangledown m_1\triangledown\dots\triangledown m_j)\}$, and since $n_1+\dots+n_i+m$, $m_1+\dots+m_j+n$ and $n_1+\dots+n_i+m_1+\dots+m_j\leq K$, then by induction, $\max_{k,i,l,j}\{p_k - \mathcal{G}_s(n_1\triangledown\dots\triangledown n_i\triangledown m), q_l-\mathcal{G}_s(n\triangledown m_1\triangledown \dots\triangledown m_j), p_k+q_l-\mathcal{G}_s(n_1\triangledown \dots\triangledown n_i\triangledown m_1\triangledown\dots\triangledown m_j)\}=\max\{p_k - \mathcal{G}_s(n_1\triangledown\dots\triangledown n_i)-\mathcal{G}_s(m), q_l-\mathcal{G}_s(n)-\mathcal{G}_s(m_1\triangledown \dots\triangledown m_j), p_k+q_l-\mathcal{G}_s(n_1\triangledown \dots\triangledown n_i) -\mathcal{G}_s(m_1\triangledown\dots\triangledown m_j)\}=\max\{\mathcal{G}_s(n)-\mathcal{G}_s(m), \mathcal{G}_s(m)-\mathcal{G}_s(n), \mathcal{G}_s(n)+\mathcal{G}_s(m)\}$.
However since we know that both $\mathcal{G}_s(n)$ and $\mathcal{G}_s(m)\geq 0$, we have $\max\{\mathcal{G}_s(n)-\mathcal{G}_s(m), \mathcal{G}_s(m)-\mathcal{G}_s(n), \mathcal{G}_s(n)+\mathcal{G}_s(m)\}=\mathcal{G}_s(n)+\mathcal{G}_s(m)$. As previously stated, it is clear that exactly the same argument can be used for any number of games, and so the theorem is proven.
\end{proof}
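The same kind of numerical check works here. The sketch below is illustrative only (the take-1-or-2, score-that-many rule set and all names are my own assumptions, not from the paper); since every single-heap value is non-negative under these rules, the theorem predicts that selective-sum values are additive.

```python
from functools import lru_cache
from itertools import product

# Illustrative rule set (an assumption for this sketch): a move removes
# 1 or 2 tokens from a heap and scores that many points.  Every
# single-heap value is then >= 0, so the theorem's hypothesis holds.
TAKES = {1: 1, 2: 2}

@lru_cache(maxsize=None)
def gs_sel(heaps):
    """Selective sum: the mover picks any non-empty subset of the heaps
    and makes one legal move in each chosen heap."""
    heaps = tuple(h for h in heaps if h > 0)
    if not heaps:
        return 0
    # for each heap: either skip it (None) or take 1 or 2 tokens
    options = [[None] + [k for k in TAKES if k <= h] for h in heaps]
    best = float("-inf")
    for takes in product(*options):
        if all(t is None for t in takes):  # must move in at least one heap
            continue
        score = sum(TAKES[t] for t in takes if t is not None)
        rest = tuple(h - (t or 0) for h, t in zip(heaps, takes))
        best = max(best, score - gs_sel(rest))
    return best

# G_s(n) + G_s(m) equals the selective-sum value, as the theorem predicts
for n in range(6):
    for m in range(6):
        assert gs_sel((n, m)) == gs_sel((n,)) + gs_sel((m,))
```

Note that this check relies on all single-heap values being non-negative; as remarked after the theorem, additivity can fail once negative values appear, since it may then be better to move on one component but not both.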
Note that this theorem will not hold if either $\mathcal{G}_s(n)$ or $\mathcal{G}_s(m)<0$, since in that case it might be better to move on $n$ or on $m$ but not on both; but this is still quite a strong result and tells us quite a lot about nim variants played under the selective sum. In the general case I make the following conjecture.
\begin{conjecture}
Let $O=(n_1n_2\dots n_t, p_1p_2\dots p_t)$ and $P=(m_1m_2\dots m_l, q_1q_2\dots q_l)$ be two finite taking-no-breaking octal games such that there is at least one $n_s\neq 0,1$, and such that if $n_i$ or $m_j=1,2$ or $3$ then $p_i=i$ and $q_j=j$, and $p_i=q_j=0$ otherwise. Then for all $m$ there exists an $N$ such that
$$\mathcal{G}_s(n+2k\triangledown m)=\mathcal{G}_s(n\triangledown m)$$
\noindent for all $n\geq N$, where $k$ is the largest entry in $O$ such that $n_k\neq 0,1$.
\end{conjecture}
What is interesting about this is that changing the operator does not appear to change the period, and in fact I make an even stronger conjecture:
\begin{conjecture}
Let $O=(n_1n_2\dots n_t, p_1p_2\dots p_t)$ and $P=(m_1m_2\dots m_l, q_1q_2\dots q_l)$ be two finite octal games, then if $\mathcal{G}_s(n+_{\ell} m)$ eventually has period $p$, $\mathcal{G}_s(n\triangledown m)$ also eventually has period $p$.
\end{conjecture}
In other words, this conjecture says that if these values are eventually periodic under the disjunctive sum, then not only are they eventually periodic under the selective sum, but they also have the same period.
\section{The Sequential Join}
The sequential join was first defined by Stromquist and Ullman \cite{SU} and then studied further by Stewart \cite{FS2}. With this operator we give all the components of a game a pre-determined order, and then play them in that order. It is an interesting operator to look at because the structure of mis\`ere and normal play games is very similar under this operator.
\begin{definition} The sequential join of two games $G$ and $H$ is
defined as follows:
$$G\rhd H=\begin{cases}
\{G^L\rhd H|G^S+H^S|G^R\rhd H\} & \text{if $G\neq \{.|G^S|.\}$,}\\
\{G^S\rhd H^L|G^S+H^S|G^S\rhd H^R\} & \text{otherwise.}
\end{cases}$$
\end{definition}
\begin{theorem}\label{seqid}
Scoring play games form a non-trivial monoid under the sequential join.
\end{theorem}
\begin{proof} To prove this we first define a set $I=\{i|i\rhd G\approx G\rhd i\approx G\hbox{ for all games }G\}$, and show that $I$ contains more than the single element $\{.|0|.\}$. So consider the game $i=\{\{0|0|0\}|0|\{0|0|0\}\}$, as shown in the figure.
\begin{figure}[htb]
\begin{center}
\begin{graph}(3,2)
\roundnode{1}(0,0)\roundnode{2}(0.5,1)\roundnode{3}(1,0)\roundnode{4}(1.5,2)
\roundnode{5}(2,0)\roundnode{6}(2.5,1)\roundnode{7}(3,0)
\edge{1}{2}\edge{2}{3}\edge{4}{2}\edge{4}{6}\edge{5}{6}\edge{6}{7}
\nodetext{1}{0}\nodetext{2}{0}\nodetext{3}{0}\nodetext{4}{0}
\nodetext{5}{0}\nodetext{6}{0}\nodetext{7}{0}
\end{graph}
\end{center}
\caption{The game $i=\{\{0|0|0\}|0|\{0|0|0\}\}$}
\end{figure}
So first consider the game $i\rhd G$. If Left moves first on $i\rhd G$, then Right will move last on $i$, which means that Left will move first on $G$; since the final score of $i$ is always $0$, $(i\rhd G)^{SL}_F=G^{SL}_F$. Similarly for the game $G\rhd i$: the players will simply play through $G$, and regardless of what happens the game $i$ cannot change the score of $G$, therefore $(G\rhd i)^{SL}_F=G^{SL}_F$.
To show that the set is a monoid and not a group we need to demonstrate that not all games are invertible, so consider the game $Y=\{\{c|b|.\}|a|.\}$ and the game $G=\{e|d|f\}$. If $Y$ is invertible this means that there exists a game $Y^{-1}$ such that $Y\rhd Y^{-1}\rhd G\approx G$ for all games $G$. But $G^{SR}_F=f$, while $(Y\rhd Y^{-1}\rhd G)^{SR}_F=a+a'+d\neq f$ in general, and so the theorem is proven.
\end{proof}
\begin{theorem}
For any outcome classes $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{Z}$, there is a game $G\in \mathcal{X}$ and $H\in \mathcal{Y}$ such that $G\rhd H\in \mathcal{Z}$.
\end{theorem}
\begin{proof} To prove this let $G=\{\{c|b|.\}|a|.\}$ and $H=\{H^L|d|H^R\}$, where $H^L$ and $H^R\neq \emptyset$. Then $G^{SL}_F=a$, $G^{SR}_F=b$, $(G\rhd H)^{SL}_F=a+d$ and $(G\rhd H)^{SR}_F=b+d$. Since $d$ is not dependent on $H^{SL}_F$ and $H^{SR}_F$, and can be any real number, we can pick $a$, $b$ and $d$ so that $G$ and $H$ are in any outcome class and $G\rhd H$ is in any outcome class. Therefore the theorem is proven.
\end{proof}
\subsection{Impartial Games}
\begin{theorem}
Impartial games form a non-trivial monoid under the sequential join.
\end{theorem}
\begin{proof} From the proof of theorem \ref{seqid} we know that there is a non-trivial identity set, so to prove this we simply need to show that there is a game $G$ that is not invertible. So consider the game $G=\{1,\{0|0|0\}|0|\{0|0|0\},-1\}$. Let $Y$ be the inverse of $G$, then this implies that $G\rhd Y\rhd P\approx P$ for all impartial games $P$.
So let $P=\{.|0|.\}$, and consider the game $G\rhd Y\rhd P$. If Left moves first and moves to the game $1\rhd Y\rhd P$, then this implies that $-1$ is one of the Right options of $Y$, since if Right moves to $-1$ on $Y$ then Left will move first on $P$ and $G\rhd Y$ will not change the final score of $P$. But $Y$ is impartial, so this implies that $1$ is a Left option of $Y$. So therefore if Left moves to the game $\{0|0|0\}\rhd Y\rhd P$, then this means that Right must move to the game $0\rhd Y\rhd P$, and Left will move first on $Y$, and Left can choose the option $1$ and hence win $G\rhd Y\rhd P$, i.e. $G\rhd Y\rhd P\not\approx P$, which is a contradiction.
So this means that $G$ is not invertible, and therefore the set of impartial games form a non-trivial monoid under the sequential join and the theorem is proven.
\end{proof}
\subsection{Sprague-Grundy Theory}
When considering the sequential join it doesn't really make sense to look at taking-and-breaking games, because once a heap is broken into two or more smaller heaps we would have to define the order in which the new heaps are played. Since this is a rather difficult issue to resolve I will not be considering it in this paper.
\begin{definition}
Let $n\in O=(t_1t_2\dots t_f, p_1\dots p_f)$ and $m\in P=(s_1s_2\dots s_e, q_1\dots q_e)$ be two taking-no-breaking games:
\begin{itemize}
\item{$\mathcal{G}_s(0)=0$.}
\item{$\mathcal{G}_s(n\rhd m)=\begin{cases}
\max_{k}\{p_k-\mathcal{G}_s(n-k\rhd m)\} & \text{if $n\neq 0$,}\\
\max_{l}\{q_l-\mathcal{G}_s(n\rhd m-l)\} & \text{otherwise.}
\end{cases}$}.
\end{itemize}
\end{definition}
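This recursion is straightforward to compute. The sketch below is my own illustration (the rule set, with moves that remove 1 or 2 tokens and score that many points in both games, is an assumption, not taken from the paper); the single-heap values of this rule set have eventual period 4 under the disjunctive sum, and the sequential-join values appear to settle into the same period.

```python
from functools import lru_cache

# Illustrative rule set (an assumption for this sketch): in both games
# a move removes 1 or 2 tokens and scores that many points.
P = {1: 1, 2: 2}  # takes and scores for the first game
Q = {1: 1, 2: 2}  # takes and scores for the second game

@lru_cache(maxsize=None)
def gs_seq(n, m):
    """Sequential join: play heap n to completion, then heap m."""
    if n > 0:
        return max(p - gs_seq(n - k, m) for k, p in P.items() if k <= n)
    if m > 0:
        return max(q - gs_seq(0, m - l) for l, q in Q.items() if l <= m)
    return 0

print([gs_seq(n, 3) for n in range(9)])  # → [1, 0, 1, 2, 1, 0, 1, 2, 1]
```

For $m=3$ the values fall into the period-4 pattern $0,1,2,1$ from $n=1$ onwards, which is the same eventual period these heaps have under the disjunctive sum.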
There is not really a lot to say about this operator, other than to make the following conjecture:
\begin{conjecture}
Let $O=(n_1n_2\dots n_t, p_1p_2\dots p_t)$ and $P=(m_1m_2\dots m_l, q_1q_2\dots q_l)$ be two finite octal games, then if $\mathcal{G}_s(n+_{\ell} m)$ eventually has period $p$, $\mathcal{G}_s(n\rhd m)$ also eventually has period $p$.
\end{conjecture}
This conjecture seems quite a reasonable one due to the nature of the operator. By playing the heaps in order, it means that $m$ cannot change the period of $n$. However since it is very hard to even prove that $\mathcal{G}_s(n+p)=\mathcal{G}_s(n)$ for all $n$ large enough, a proof of this conjecture will also be very difficult.
\section{Conclusion}
In this paper I only examined the basic structure of the three operators. There are of course plenty of other questions that could have been considered. I feel that the most interesting aspect was the behaviour of the function $\mathcal{G}_s(n)$ under each of the different operators, as it appears that the function settles down into the same period regardless of the operator being used.
I hope to be able to prove all of the conjectures made in this paper, and I feel that a proof of them would tell us a lot about the function $\mathcal{G}_s(n)$ and the nature of octal games under scoring play rules.
\section{The Symbol Font}
\combgames{} defines the following symbols.
\begin{table}[H]
\centering
\begin{tabular}{clclcl}
\ds{\cgup} & \ds{\cgdown} & \ds{\cgstar} \bigstrut\\
\ds{\cgdoubleup} & \ds{\cgdoubledown} & \ds{\cgneg} \bigstrut\\
\ds{\cgtripleup} & \ds{\cgtripledown} & \ds{\cgfarstar} \bigstrut\\
\ds{\cgquadup} & \ds{\cgquaddown} & \ds{\cgsunny} \bigstrut\\
\ds{\cgtiny} & \ds{\cgminy} & \ds{\textrm\Moon} \bigstrut\\
\ds{\cgko} & \ds{\cgKo} \bigstrut\\
\ds{\cgkobar} & \ds{\cgKobar} \bigstrut
\end{tabular}
\caption{\combgames{} ordinary symbols.}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{clclcl}
\ds{\cglfuz} & \ds{\cggfuz} & \ds{\cgfuzzy} \bigstrut\\
\ds{\cgupsum} & \ds{\cgdownsum} & \ds{\cgnmultiply} \bigstrut
\end{tabular}
\caption{\combgames{} binary relations.}
\end{table}
A few notes.
\begin{itemize}
\item Usage of \cn{\cgnmultiply} for Norton multiplication has been largely supplanted by a simple \cn{\cdot}. Compare:
\[G \cgnmultiply \cgup \qquad\qquad \textrm{versus} \qquad\qquad G \cdot \cgup\]
\item This author dislikes the \cn{\cgfuzzy} symbol, precisely because there are already too many slashes in CGT. A reasonable alternative is \cn{\not}\cn{\gtrless}, which is available in the \textsf{amssymb} package. Compare:
\[G \cgfuzzy H \qquad\qquad \textrm{versus} \qquad\qquad G \not\gtrless H\]
\item The \textsf{marvosym} package provides a \cn{\Moon} symbol that can be used as an alternative to \cn{\textrm\Moon}:
\[\textrm\Moon \qquad\qquad \textrm{versus} \qquad\qquad \textrm{\Moon}\]
\end{itemize}
\section{Braces-and-Slashes Notation}
\combgames{} also has a powerful facility for typesetting games using braces-and-slashes notation. The basic command is \cn{\combgame}. All braces and slashes used within the command will be properly spaced, and in displayed equations they'll be sized to fit the surrounding material. Some examples:
\begin{table}[H]
{
\centering
\begin{tabular}{@{}l@{\hspace{0.3in}}c@{}}
\begin{verb}
\combgame{2||1|0}
\end{verb}
&
$\displaystyle \combgame{2||1|0}$
\bigskip \\
\begin{verb}
\combgame{\{3,\{4||2|1\}|||0||||-8\}}
\end{verb} &
$\displaystyle \combgame{\{3,\{4||2|1\}|||0||||-8\}}$
\bigskip
\end{tabular}
}
\begin{verb}
\int^T G = \combgame{\{T + \int^T G^L | -T + \int^T G^R\}}
\end{verb}
\bigskip
\hfill$\displaystyle \int^T G = \combgame{\{T + \int^T G^L | -T + \int^T G^R\}}$
\caption{Example usage of \cn{\combgame}.}
\end{table}
\begin{itemize}
\item There is also a starred form, \cn{\combgame*}, that suppresses growth of the vertical bars:
\begin{center}
\begin{tabular}{@{}l@{\hspace{0.3in}}c@{}}
\begin{verb}
\combgame{2|1||0|||-1||||-2}
\end{verb}
&
$\displaystyle \combgame{2|1||0|||-1||||-2}$
\bigskip \\
\begin{verb}
\combgame*{2|1||0|||-1||||-2}
\end{verb} &
$\displaystyle \combgame*{2|1||0|||-1||||-2}$
\bigskip
\end{tabular}
\end{center}
\item You should use \cn{\combgame} whenever appropriate, even for simple expressions, since it will typeset the result much more cleanly than otherwise. Compare the following two examples.
\begin{center}
\begin{tabular}{l@{\hspace{1cm}}l@{\hspace{1cm}}l}
\begin{verb}
\combgame*{\{2||-1|-3\}}
\end{verb}
&
$\combgame*{\{2||-1|-3\}}$
&
Beautiful!
\bigskip \\
\begin{verb}
\{2||-1|-3\}
\end{verb}
&
$\{2||-1|-3\}$
&
Hideous!
\end{tabular}
\end{center}
\item Within the \texttt{combgame} argument, the brace commands \cn{\{} and \cn{\}} are redefined to mean \cn{\left}\cn{\{} and \cn{\right}\cn{\}}. The old commands are still available as \cn{\lbrace} and \cn{\rbrace}.
\item Due to the way \TeX{} processes command inputs, the slash notation will not function correctly if used inside a macro. If you wish to define macros that refer to \cn{\combgame}, there is an alternative command \verb|\cgslashes{n}| for an $n$-tuple slash. The commands \cn{\cgslash} and \cn{\cgsslash} are shorthand for $n = 1$ and $2$, respectively. Here's an example:
\begin{verbatim}
\newcommand\threeswitch[3]
{\combgame{\{#1 \cgsslash #2 \cgslash #3\}}}
\end{verbatim}
Then \verb|\threeswitch{2}{1}{0}| would typeset
\[\combgame{\{2||1|0\}}\]
\item You can control the growth of the slashes by setting \cn{\cgslashextension}. The default is \texttt{1.5pt}. For example:
\begin{center}
\begin{tabular}{@{}l@{\hspace{0.3in}}c@{}}
\begin{verb}
\combgame{2|1||0|||-1||||-2}
\end{verb}
&
$\displaystyle \combgame{2|1||0|||-1||||-2}$
\bigskip \\
\begin{minipage}{2.75in}
\begin{verbatim}
\setlength\cgslashextension{3pt}
\combgame{2|1||0|||-1||||-2}
\end{verbatim}
\end{minipage} &
$\displaystyle \setlength\cgslashextension{3pt} \combgame{2|1||0|||-1||||-2}$
\bigskip
\end{tabular}
\end{center}
\end{itemize}
\section{Game Trees}
You can typeset game trees easily with the powerful \cn{\cgtree} command. Trees should be placed inside a \texttt{pspicture} environment and can coexist with other pstricks objects. Here's a simple example:
\begin{VExample}
\begin{pspicture}(5,4)
\put(1,4){\cgtree{
{\cgup^2} (0 | {\cgdown\cgstar} (0 | 0 \cgstar))
}}
\end{pspicture}
\end{VExample}
The general syntax of a \cn{\cgtree} argument is \verb|\cgtree{node}| where \texttt{node} has the following specification:
\begin{verbatim}
node ::= label (subtree)?
| special
subtree ::= '(' (node)* '|' (node)* ')'
\end{verbatim}
\texttt{label} may be either a single token, or a more complicated expression delineated by braces. The two nodelists in the subtree expression are typeset as left and right options of the parent node. Left options are laid out right-to-left; Right options left-to-right; always starting at the parent node. If the label is \texttt{.} then a node will be created with no label. \texttt{special} may be one of the following:
\begin{itemize}
\item \texttt{+} increases the space between options
\item \texttt{-} decreases the space between options
\item \texttt{:} creates a symbolic link (described below)
\end{itemize}
The \texttt{cgtree} command takes several options:
\begin{itemize}
\item \texttt{arrows} specifies the arrowheads used for tree edges, as in pstricks. Example: \texttt{arrows=->}
\item \texttt{unit} specifies the scale for drawing the tree, e.g., \texttt{unit=.5cm} (default: 1cm)
\item \texttt{nodesep} specifies the separation between edges and node boundaries. Default is \texttt{nodesep=.75ex}
\end{itemize}
In addition, each \emph{node} may have several options. Node options should be written in brackets immediately following the node label (and before the associated subtree, if one exists).
\begin{itemize}
\item \texttt{arrow} specifies the arrowhead for the edge \emph{to} this node
\item \texttt{sep} specifies the separation for this node
\item \texttt{name} gives a name for this node. \emph{Names must consist only of letters and numbers}. They can be used in symbolic links (see below) or elsewhere in the \texttt{pspicture} environment.
\item \texttt{ko} draws the edge \emph{to} this node as a ko.
\end{itemize}
If a \texttt{:} is specified instead of a node, a symbolic link is created. The \texttt{:} must be followed by a list of options in brackets, which \emph{must} include the \texttt{name} option. An edge will then be drawn to the previously named node. Be careful, as this will \emph{not} take into account whether the edge points left or right!
\begin{VExample}
\begin{pspicture}(6,6)
\put(1,6){\cgtree{
G ( . ( | {K_1} [name=koroot] (1 | {K_2} [ko,arrow=<->] (|0) ) )
| . (:[name=koroot] | )
)}}
\end{pspicture}
\end{VExample}
\bigskip
One last example: the following mess typesets a pretty figure from Bill Fraser's thesis.
\begin{VExample}
\psset{unit=.6cm}
\begin{pspicture}(19,6)
\put(5,6){\cgtree[unit=.6cm]{
{D_1}[name=D1] ( U (T[ko,name=T] (4|) | 3[name=three])
{D^L}[ko] (:[name=T] | +++++++{D_2}[ko] (2 |
W(.(2|-.[ko,name=WLR](|1)) | -.[ko] (:[name=WLR,sep=0pt]|{-10}))
+++{D^R}[ko](X :[ko,name=D1]|{-11})
)) | V (:[name=three] | .[ko](.(-.[ko](2|)|1)|{-10})))
}}
\end{pspicture}
\end{VExample}
\section{Game Boards}
There is also an extensible facility for typesetting grid-based
diagrams. The following examples illustrate usage for several of the
built-in games.
\begin{VExample}
\Domineering{ooo\\oo\\} = \combgame{
\{ \Domineering{^oo\\vo\\},
\Domineering{o^o\\ov\\}
| \Domineering{<>o\\oo\\},
\Domineering{o<>\\oo\\},
\Domineering{ooo\\<>\\} \}}
\end{VExample}
\bigskip
\begin{HExample}
\Clobber{xxxo\\o.ox\\xoxo}
\end{HExample}
\medskip
\begin{HExample}
\Clobberset{unit=.3cm,square=}
\Clobber{xxxo\\o.ox\\xoxo}
\end{HExample}
\medskip
\begin{HExample}
\DotsAndBoxes[unit=.5cm]{
o+o.o.o-o-o \\
|.|.|.|.*.| \\
o.o.o.o*o*o \\
|.|.|.|.|.| \\
o-o.o.o*o*o \\
|A|.|.|.|.| \\
o-o.o.o*o*o}
\end{HExample}
\medskip
\begin{HExample}
\Amazons[unit=.5cm]{..x\\.L.\\x.R}
\end{HExample}
\medskip
\begin{HExample}
\Hex{
*
L X * R
* X *
* O O *
* O X * *
* * X X * *
* * O O *
* X O *
* X *
R * O L
X
}
\end{HExample}
\bigskip
You can declare new games easily with the \cn{\newboard}
(equivalently, \cn{\cgnewboard}) command. The syntax is very simple.
The command argument consists of a list of allowed grid characters
together with instructions for typesetting them. For example, the
\cn{\clobber} command is declared as follows.
\begin{verbatim}
\newgridgame[unit=.3cm]{clobber}{
X {\Clobber@Base{X}{c}}
O {\Clobber@Base{O}{c}}
x {\Clobber@Base{X}{c}}
o {\Clobber@Base{O}{c}}
}
\end{verbatim}
Here \cn{\Clobber@Base{X}{c}} and \cn{\Clobber@Base{O}{c}} are primitive commands for
rendering clobber symbols. See the source file for more examples and
a complete list of options. The file \texttt{board.tex} contains more
examples.
\section{Thermographs}
\combgames{} provides a \texttt{thermoplot} environment that can be used to draw an arbitrary number of thermographs together on the same plot. For example:
\begin{VExample}
\begin{thermoplot}[left=5/2,right=-5/2]
\thermograph{0}(0,3)(2,1)(2,-1);(0,3)(-1,2)(-1,1)(-3,-1)
\end{thermoplot}
\end{VExample}
\bigskip
Coordinates must be specified as \emph{rational numbers in the usual (CGT) coordinate space}. The \cn{\thermograph} command is followed by an argument specifying the mast value. This is followed by any number of semicolon-separated trajectories, listing the $(\textit{v},\textit{t})$-coordinates of each critical point on the trajectory.
Multiple thermographs can be placed on the same plot:
\begin{VExample}
\begin{thermoplot}[left=6,right=-19,top=12,scale=0.45cm]
\thermograph[linecolor=blue]{1/2}
(1/2,13/2)(2,5)(2,2)(8,-1);
(1/2,13/2)(2,5)(2,2)(-1,-1)
\thermograph[linecolor=red]{-19/4}
(-19/4,19/4)(1,-1);
(-19/4,19/4)(-21,-1)
\end{thermoplot}
\end{VExample}
\bigskip
Note how, in these examples, each trajectory is carried down to $-1$ instead of~$0$. This generates the ``hooks'' that extend below $t = 0$ on the thermographs.
You can also place individual trajectories instead of full thermographs. This can be useful if the mast has nonzero slope. For example:
\begin{VExample}
\begin{thermoplot}[left=1,right=-4]
\trajectory[linecolor=lightgray](1,4)(-1,2)(-1,1)(-3,-1)
\thermograph{-3}(-3,2)(-2,1)(-2,-1);(-3,2)(-4,1)(-4,-1)
\end{thermoplot}
\end{VExample}
\end{document}
\section{Gridgame Examples}
\newgridgame[printname=bar]{foo}{}
\[\foo{XO#.}\]
\[\TopplingDominoes{xxox.xo}\]
\[
\Domineering{ooo\\oo\\} = \left\{
\Domineering{^oo\\vo\\},
\Domineering{o^o\\ov\\}
\ |\ %
\Domineering{<>o\\oo\\},
\Domineering{o<>\\oo\\},
\Domineering{ooo\\<>\\}
\right\}
\]
\begin{center}
\Clobber{xxxo\\o.ox\\xoxo\\}\hspace{.25in}
\clobber{xxxo\\o.ox\\xoxo}\hspace{.25in}
\Clobber{xxxo\\o.ox\\xoxo}
\end{center}
\[
\DotsAndBoxes[unit=.5cm]{
o+o.o.o-o-o \\
|.|.|.|.*.| \\
o.o.o.o*o*o \\
|.|.|.|.|.| \\
o-o.o.o*o*o \\
|A|.|.|.|.| \\
o-o.o.o*o*o}
\]
\[
\DotsAndBoxes[unit=.5cm,dottedline={{{linewidth=1mm,linecolor=blue}}}]{
o+o.o.o-o-o \\
|.|.|.|.*.| \\
o.o.o.o*o*o \\
|.|.|.|.|.| \\
o-o.o.o*o*o \\
|A|.|.|.|.| \\
o-o.o.o*o*o}
\]
\[
\StringsAndCoins{
:.|.| \\
o.o.o.o*o \\
|.|.|.!.! \\
o.o.o.o.o \\
..|.|.!.! \\
A.o.o.o.o \\
..|.|.!.! \\
}
\]
\[
\Chomp{OO\\OOO\\OOO\\OOO\\XOO\\}
\]
\[
\Bridgit{
.x.x.x.x \\
o.o.o.o.o \\
.x.x.x.x \\
o-o|o.o.o \\
.x.x.x.x \\
o|o.o.o.o \\
.x-x|x.x \\
o.o.o.o.o \\
.x.x.x.x \\
}
\]
\begin{center}
\ \hfill
\Amazons{oox\\oL.\\xoo\\}
\hfill
\Amazons{oox\\oLR\\xoo\\}
\hfill\ \ %
\end{center}
\[
\Shove{GLG.#R#RGR\\}
\]
\[\Hackenbushstring{BBRBR\\RGG\\BR}\]
\[\Hackenbushstring[colour]{BBRBR\\RGG\\BR}\]
\[
\Maze{
........o \\
......./.| \\
......o...o \\
...../.....| \\
....o...*...o \\
.../.........| \\
..o...o...*...o \\
./.....|.......| \\
o...*...o...*...o \\
.|...............| \\
..o...*...*...o...o \\
...|........./.../ \\
....o...*...o...o \\
.....|.......|./ \\
......o...o...o \\
.......|./.../ \\
........o...o \\
.........|./ \\
..........o \\
}
\]
\section{BUGS}
\typeout{****** Demonstrating bugs. Hit return. *****}
\typeout{****** In domineering, must use ^# instead of ^^ (bug) *****}
\[
\Domineeringset{unit=.3cm,frame}
\Domineering{oo\\oo\\} \ \Domineering{ox\\oo\\}
\rightarrow
\Domineering{^o\\vo\\} \ \Domineering{ox\\oo\\}
\rightarrow
\Domineering{^o\\vo\\} \ \Domineering{ox\\<>\\}
\rightarrow
\Domineering{^o\\vo\\} \ \Domineering{ox\\<>\\}
\rightarrow
\Domineering{^^\\vv\\} \ \Domineering{ox\\<>\\}
\]
The following fails:
\Chompset{unit=.5cm,code={AABBCCDDEEZ{$\scriptstyle C\!/\!D$}}}
\[
\Chomp{
CB\\
AD\\
XABZ\\}
\]
But the following works:
\def\mathrm#1{\makebox(0,0){#1}}
\Chompset{unit=.5cm,code={A {\mathrm{A}} B {\mathrm{B}} C {\mathrm{C}} D {\mathrm{D}}
E {\mathrm{E}} Z {\mathrm{$\scriptstyle C\!/\!D$}}}}
\[
\Chomp{
CB\\
AD\\
XABZ\\}
\]
\end{document}
https://arxiv.org/abs/2212.10942 | Strain topological metamaterials | Topological physics has revolutionised materials science, introducing topological insulators and superconductors with applications from smart materials to quantum computing. Bulk-boundary correspondence (BBC) is a core concept therein, where the non-trivial topology of a material's bulk predicts localized topological states at its boundaries. However, edge states also exist in systems where BBC is seemingly violated, leaving any topological origin unknown. For finite-frequency mechanical metamaterials, BBC has hitherto been described in terms of displacements, necessitating fixed boundaries to identify topologically protected edge modes. Herein, we introduce a new family of finite-frequency mechanical metamaterials whose topological properties emerge in strain coordinates for free boundaries. We show two examples, the first being the canonical mass-dimer, where BBC in strain coordinates reveals the previously unknown topological origin of its edge modes. Second, we introduce a new mechanical analog of the Majorana-supporting Kitaev chain. We theoretically and experimentally show that this Kitaev chain supports edge states for both free and fixed boundaries, wherein BBC is established in strains and displacements, respectively. Our findings suggest a previously undiscovered class of topological edge modes may exist, including within other settings such as electrical circuits and optics, and for more complex, tailored boundaries with coordinates other than strain. | \subsection*{Mass Dimer}
We begin with the mass dimer shown in Fig.~\ref{fig1}\textbf{(b)}. This system is a periodic 1D mass-spring chain with two alternating masses, $m_{1}$ and $m_{2}$, connected by identical springs of stiffness $k$. The equations of motion of the particle displacements ($u_{A|B,n}$) in the $n$th unit cell are given by:
\begin{align}
m_{1}\Ddot{u}_{A,n} = k(u_{B,n}-u_{A,n}) - k(u_{A,n} - u_{B,n-1}
)\label{eom1}\\
m_{2}\Ddot{u}_{B,n} = k(u_{A,n+1}-u_{B,n}) - k(u_{B,n} - u_{A,n}
),\label{eom2}
\end{align}
where the first subscript denotes the sublattice within the unit cell and the second subscript $n$ denotes the unit cell number. We seek plane wave solutions of the form $\bm{\psi}_{n}(t) = \bm{u}(q) e^{i \Omega t - i q n}$, where $q$ is the normalized wavenumber and $\Omega$ the angular frequency. This results in the eigenvalue problem $D_{u,\mathrm{bulk}}(q) \bm{u}(q) = \omega^2 \bm{u}(q)$, where $\bm{u}(q) = [u_A(q), u_B(q)]^T$, $D_{u, \mathrm{bulk}}(q)$ is the Bloch dynamical matrix in displacement coordinates, and $\omega = \Omega/\Omega_0$ the normalized frequency with respect to the mid-gap frequency $\Omega^{2}_{0} = k(1/m_{1}+1/m_{2})$.
It is well known that the edge states of the mass dimer appear for free edges when the ratio $P:=m_1/m_2$ is varied~\cite{Allen2000}. However, their topological nature has been hitherto unknown, since the dynamical matrix $D_{u,\mathrm{bulk}}(q)$ lacks the necessary symmetries for a topological classification \cite{SI}.
We argue that the edge states in this model have a topological origin that can be revealed using strain coordinates. The strain coordinates for the $n$th unit cell are $s_{A,n} = u_{B,n} - u_{A,n}$ and $s_{B,n} = u_{A,n+1} - u_{B,n}$. Assuming plane wave solutions, we arrive at the following eigenvalue problem: $D_{s,\mathrm{bulk}}(q) \bm{s}(q) = \omega^2 \bm{s}(q)$, where $\bm{s}(q) = [s_A(q), s_B(q)]^T$, and $D_{s,\mathrm{bulk}}(q)$ is the Bloch dynamical matrix in \textit{strain} coordinates:
\begin{align}\label{general_sm_st}
D_{s,\mathrm{bulk}}(q) = \frac{1}{(1+P)}
\begin{pmatrix}
1 + P & -(P+ e^{-iq})\\
-(P + e^{iq}) & 1+P
\end{pmatrix}.
\end{align}
The matrix $D_{s,\mathrm{bulk}}(q)$ can be written in terms of Pauli matrices $\sigma_x$, $\sigma_y$ and $\sigma_z$, such that ${D_{s,\mathrm{bulk}}}(q) = \bm{I} + d_x \sigma_x + d_y \sigma_y $ with $d_x = -(P+\cos{q})/(1+P)$ and $d_y = -\sin{q}/(1+P)$. As a result, the matrix anti-commutes with $\sigma_z$ after a constant shift in the diagonal: $\sigma_{z}(D_{s,\mathrm{bulk}}(q)-\bm{I})\sigma_{z}^{-1} = -(D_{s,\mathrm{bulk}}(q)-\bm{I})$. In other words, the shifted $D_{s,\mathrm{bulk}}(q)$ is chiral. Thus, the system has a well-defined winding number on the $\sigma_x$--$\sigma_y$ plane, as is shown in Fig.~\ref{fig2}\textbf{(a)}. The winding number predicts a topological phase transition at $P=1$, with $P>1$ and $P<1$ corresponding to trivial and non-trivial phases, respectively. Therefore, we expect BBC for the mass dimer -- but in strain coordinates.
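Both claims are easy to verify numerically. The following sketch (ours, not part of the paper; only NumPy is assumed) builds $D_{s,\mathrm{bulk}}(q)$ from the matrix above, checks the anti-commutation with $\sigma_z$ after the constant shift, and evaluates the winding number on either side of the transition at $P=1$:

```python
import numpy as np

def D_s_bulk(q, P):
    # Bloch dynamical matrix in strain coordinates, as given above
    return np.array([[1 + P, -(P + np.exp(-1j*q))],
                     [-(P + np.exp(1j*q)), 1 + P]]) / (1 + P)

def winding(P, n=4001):
    # Winding of the Bloch vector (d_x, d_y), read off from the
    # off-diagonal entry D_s[0, 1] = d_x - i d_y
    q = np.linspace(0.0, 2*np.pi, n)
    h = np.array([D_s_bulk(qi, P)[0, 1] for qi in q])
    theta = np.unwrap(np.angle(np.conj(h)))      # arg(d_x + i d_y)
    return round((theta[-1] - theta[0]) / (2*np.pi))

# Chiral symmetry: sigma_z (D_s - I) sigma_z^{-1} = -(D_s - I)
sz = np.diag([1.0, -1.0])
D = D_s_bulk(0.7, 0.5)
assert np.allclose(sz @ (D - np.eye(2)) @ sz, -(D - np.eye(2)))

print(winding(0.25), winding(4.0))   # 1 0 -> non-trivial only for P < 1
```

The overall sign and normalization of the Bloch vector do not affect the magnitude of the winding, so the check is insensitive to the chosen phase convention.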
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\columnwidth]{Figure2.pdf}
\end{center}
\caption{\label{fig2} \textbf{Mass dimer.} {
\textbf{(a)} The chiral symmetry of mass dimer -- revealed on strain coordinates -- leads to a well-defined winding number. Nonzero winding makes the configurations with $P<1$ topologically nontrivial.
\textbf{(b)} Evolution of the spectrum of a finite chain with an odd number of particles (21) and free boundaries as we change the parameter $P$. Edge states emerge for $P<1$. Colormap confirms localization of states inside the band gap.
\textbf{(c)} Spectrum of a finite chain in displacement and strain coordinates at $P = 0.25$. Except for the zero mode, the spectrum is the same in both coordinates and shows chiral symmetry about the midgap frequency $\omega^2 = 1$.
\textbf{(d)} Profiles of the edge states in (c). Their chiral nature is revealed in strain coordinates.
}}
\end{figure}
Figure~\ref{fig2}\textbf{(b)} shows the spectrum of a finite chain with \textit{free} boundaries and an odd number of particles (which means an even number of bonds). We witness the emergence of edge states inside the band gap for $P<1$ (corresponding to the lighter mass on the boundaries), as expected by the strain winding number. We note that BBC dictates that the finite dynamical matrix, $D_{s}$, should also preserve the underlying chiral symmetry.
This preservation is validated by the chiral operator for the finite matrix, which is defined as $\Gamma = \sigma_{z}\oplus\sigma_{z}\oplus...\oplus \sigma_{z}$ (see Methods).
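The finite-chain statement can also be checked directly. Writing the equations of motion of a free chain in strain coordinates yields an SSH-like tridiagonal matrix with alternating couplings $-P/(1+P)$ (intra-cell) and $-1/(1+P)$ (inter-cell); this explicit form is our own sketch derived from the bulk equations, not the paper's matrix. The sketch below verifies the chiral symmetry under $\Gamma$ and counts the mid-gap states:

```python
import numpy as np

def D_s_free(P, n_bonds):
    # Strain dynamical matrix of a finite free chain (n_bonds = even number of
    # bonds, i.e. an odd number of particles), normalized by the mid-gap
    # frequency. SSH-like tridiagonal form derived from the equations of
    # motion -- an assumption of this sketch.
    t_intra, t_inter = P/(1 + P), 1/(1 + P)
    D = np.eye(n_bonds)
    for i in range(n_bonds - 1):
        D[i, i+1] = D[i+1, i] = -(t_intra if i % 2 == 0 else t_inter)
    return D

n_bonds = 20                                    # 21 particles, free-free
Gamma = np.diag([(-1.0)**i for i in range(n_bonds)])

for P in (0.25, 4.0):
    D = D_s_free(P, n_bonds)
    # The finite matrix inherits the bulk chiral symmetry after the shift
    assert np.allclose(Gamma @ (D - np.eye(n_bonds)) @ Gamma,
                       -(D - np.eye(n_bonds)))
    w2 = np.linalg.eigvalsh(D)
    half_gap = abs(1 - P) / (1 + P)             # bulk gap: |omega^2 - 1| < half_gap
    in_gap = int(np.sum(np.abs(w2 - 1) < half_gap - 0.05))
    print(P, in_gap)                            # two mid-gap states only for P < 1
```

For $P<1$ the chain terminates on weak bonds and two states sit exponentially close to the mid-gap frequency $\omega^2=1$; for $P>1$ no in-gap states appear.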
\begin{figure*}[!]
\begin{center}
\includegraphics[width=1\textwidth]{Figure3.pdf}
\end{center}
\caption{\label{fig3} \textbf{Mechanical Kitaev chain.}
\textbf{(a)} A mechanical monomer chain with transverse and rotational degrees of freedom maps to the Kitaev chain after fine-tuning.
\textbf{(b)} Topological phase diagram of the Kitaev chain. The path of the fine-tuned mechanical chain follows the curved solid yellow line. Two cases experimentally tested herein ($P=1.5$ and $P=2.5$) are marked with triangles.
\textbf{(c)} Dispersion diagrams for $P=1.5$ and $P=2.5$ are obtained in two ways: Analytically, via the lumped-mass model, and numerically, using the finite element method. $H$ and $s$ are the varying dimensions. The colorbar denotes modal dominance.
\textbf{(d)} The winding number of $\tilde{D}_{u,\mathrm{bulk}}$ suggests a topologically non-trivial phase for $P<2$.
\textbf{(e)} Evolution of the spectrum of finite chain with an even number of particles (200) and fixed boundaries as we change $P$. Edge states emerge for $P < 2$.
\textbf{(f)} Profiles of the edge states in (e) for $P=1.5$. Due to particle-hole symmetry, the profiles of effective particles $u$ and holes $\Phi/\sqrt{P}$ are either identical or differ by a phase.
}
\end{figure*}
We contrast these findings with the interpretation of the chain in the typical displacement coordinates. In displacement coordinates, the chain has a zero-frequency mode, which corresponds to the rigid body motion of the free chain and breaks chirality. The strain description predicts all the nonzero eigenvalues of the system as is shown in Fig.~\ref{fig2}\textbf{(c)}. We witness the chiral symmetry of these non-zero eigenfrequencies with respect to the mid-gap frequency $\omega^2=1$. Furthermore, in Fig.~\ref{fig2}\textbf{(d)}, we show the profiles of the edge states at $P=0.25$ in both displacement and strain coordinates. Once again, strain coordinates reveal the chiral nature of the chain, where the vanishing amplitude of the topological edge states at alternating bonds is akin to the mechanical SSH (stiffness dimer)~\cite{Yang2021}.
Building on the idea of BBC in strain coordinates and its ability to reveal new topological modes, we construct a mechanical analog of the Kitaev chain (the prototypical model for a topological superconductor) with two degrees of freedom per site. These degrees of freedom, specifically particle displacement and rotation, lead us to choose generalized strain coordinates involving both DOFs and probe the topological nature of the Kitaev chain. For the first time, we demonstrate that this design not only obeys BBC for fixed boundaries (right column of Fig.~\ref{fig1}\textbf{(c)}), but also shows a topological edge mode for free boundaries that can be explained by BBC in strain coordinates (left column of Fig.~\ref{fig1}\textbf{(c)}).
\subsection*{Mechanical Kitaev chain}
In Fig.~\ref{fig3}\textbf{(a)}, we show a mechanical structure whose dynamics are governed by two in-plane degrees of freedom (DOFs) at each site (transverse displacement $u$ and rotation $\Phi$). Each site is connected with the next via two bonds corresponding to bending and shear stiffness ($K_{B}$ and $K_{S}$, respectively). We set $P=md^{2}/I$, the ratio of the generalized masses (particle mass $m$, lattice constant $d$, and particle mass moment of inertia $I$), and $\eta = \frac{K_{B}}{K_{S}}$, the ratio of generalized stiffnesses.
In Methods, we analytically show that if we impose the fine-tuning $\eta = 1 - 1/P$, the dynamical matrix in displacement coordinates $\tilde{D}_{u}$ maps to a Kitaev chain \cite{kitaev2001unpaired} (the ``$\sim$'' sign refers to the fine-tuned system). The parameter $P$ is mapped to the chemical potential $\mu$, the coupling $\tau$ and the superconducting constant $\Delta$ in the following manner: $\mu = 2(P-1)$, $\tau=1$, and $\Delta = \sqrt{P}$. As a result, transverse displacements $u$ and the normalized rotations $\Phi/\sqrt{P}$ can be seen as particle and hole DOFs (Fig. \ref{fig3}\textbf{(a)}).
This mapping allows us to switch between trivial and non-trivial topological phases by continuously altering the value of $P$ while retaining the fine-tuning. Since we vary $P$ in our design, we trace a 1D path in the phase space of the Kitaev chain, as is shown in Fig.~\ref{fig3}\textbf{(b)}, wherein transitions between topologically trivial and nontrivial phases are possible.
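Since the mapping is stated through $(\mu, \tau, \Delta)$, the predicted phase boundary can be checked against the textbook BdG form of the Kitaev Bloch Hamiltonian, $h(q) = -(\mu + 2\tau\cos q)\,\sigma_z + 2\Delta\sin q\,\sigma_y$ (an assumed standard form; the exact mechanical matrix is given in Methods). A short Python sketch of ours:

```python
import numpy as np

def kitaev_winding(P, n=4001):
    # Mapped Kitaev parameters of the fine-tuned mechanical chain
    mu, tau, delta = 2*(P - 1), 1.0, np.sqrt(P)
    q = np.linspace(0.0, 2*np.pi, n)
    # Bloch vector of the assumed textbook BdG Hamiltonian in the
    # sigma_z / sigma_y plane
    dz = -(mu + 2*tau*np.cos(q))
    dy = 2*delta*np.sin(q)
    theta = np.unwrap(np.arctan2(dy, dz))
    return abs(round((theta[-1] - theta[0]) / (2*np.pi)))

print(kitaev_winding(1.5), kitaev_winding(2.5))   # 1 0 -> non-trivial only for P < 2
```

The winding is non-trivial precisely for $|\mu| < 2\tau$, i.e. $0 < P < 2$, consistent with the two experimentally tested cases $P=1.5$ and $P=2.5$ lying on opposite sides of the transition.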
In Fig.~\ref{fig3}\textbf{(c)}, we show the dispersion curves obtained for values of $P$ corresponding to systems in different topological phases. We observe two branches in the dispersion diagram as a result of the lumped-mass model having two DOFs, i.e., $u$ and $\Phi$, per mass. We also observe that the entire spectrum ($\omega^2$) is symmetric about a mid-axis, which is $\omega^2=1$. This is due to the particle-hole symmetry, such that
$\sigma_x \left( \tilde{D}_{u,\mathrm{bulk}}(q) - \bm{I} \right) \sigma_x^T = - \left( \tilde{D}_{u,\mathrm{bulk}}(q) - \bm{I} \right)$. Since $\tilde{D}_{u,\mathrm{bulk}}(q)$ maps to the Kitaev chain BdG Hamiltonian, a finite chain with boundaries that preserve the symmetry of the bulk (i.e., a chain with fixed boundaries) will exhibit topological edge states.
In Fig.~\ref{fig3}\textbf{(d)}, we plot the winding of the Bloch vector of the shifted $\tilde{D}_{u,\mathrm{bulk}}(q)$ in the $\sigma_y$-$\sigma_z$~plane. This suggests the existence of edge states for $P<2$. Indeed, for a fixed chain consisting of 200 particles, two localized states emerge in the band gap for $P<2$ as one can see in Fig.~\ref{fig3}\textbf{(e)}. In Fig.~\ref{fig3}\textbf{(f)}, we plot these two eigenstates, which are localized on the left and the right end of the chain. The particle-hole symmetry of the model dictates that the particle and hole DOFs of the edge mode eigenstates either exactly match (symmetric) or have opposite phase (antisymmetric) \cite{leumer2020exact}. In contrast to the edge states appearing in the SSH model~\cite{TheocharisPRB2021}, these topologically-protected edge states have mixed polarization in terms of displacement and rotation.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=1\columnwidth]{Figure4.pdf}
\end{center}
\caption{\label{fig4}
\textbf{Analysis of the Kitaev chain with free boundaries. (a)} The winding number of $\tilde{D}_{s,\mathrm{bulk}}$ predicts a non-trivial phase for $P>2$, contrary to the winding of $\tilde{D}_{u,\mathrm{bulk}}$, which predicted a non-trivial phase for $P<2$ (see Fig. \ref{fig3}\textbf{(d)} for comparison). \textbf{(b)} Spectrum of a finite chain ($N=200$) with both boundaries free as a function of $P$. Localized edge states emerge in the band gap for $P>2$.
\textbf{(c)} Profiles of the localized states on the free edges on strain coordinates. The bending ($b$) and shear ($s$) strain coordinates follow the pattern dictated by particle-hole symmetry.
\textbf{(d)} Profiles of the localized states on the free edges on displacement coordinates. Their profiles appear distorted.
}
\end{figure}
We now investigate the Kitaev system in strain coordinates, which have a more complex form due to the coupling of rotational degrees of freedom with the transverse displacements. By applying the strain coordinate transformation, we obtain the bulk strain dynamical matrix $\tilde{D}_{s,\mathrm{bulk}}(q)$, which surprisingly maps \emph{again} to a Kitaev chain (as long as the fine-tuning is preserved) but with a different parameter dependence. In strain coordinates, $P$ is now replaced by $P/(P-1)$ (see Methods). In Fig.~\ref{fig4}\textbf{(a)}, we show that the winding of $\tilde{D}_{s,\mathrm{bulk}}(q)$ predicts the inverse topological phases from those predicted by $\tilde{D}_{u,\mathrm{bulk}}(q)$. While a finite chain with fixed boundaries preserves particle-hole symmetry in displacement coordinates, we need a finite chain with \textit{free} boundaries in order to preserve particle-hole symmetry in strain coordinates~\cite{SI}.
As a result, we expect the emergence of edge states for a Kitaev chain with free boundaries but for the opposite parameter regimes compared to the system with fixed boundaries (topologically nontrivial regimes become trivial and \textit{vice versa}).
In Fig.~\ref{fig4}\textbf{(b)}, we show the spectrum of the Kitaev chain with free boundaries as $P$ is varied. Remarkably, we witness the emergence of two edge states inside the band gap for $P>2$, as predicted by the winding of the strain dynamical matrix $\tilde{D}_{s,\mathrm{bulk}}(q)$. These hidden topological edge states exhibit the profile dictated by particle-hole symmetry when expressed in the strain coordinate system (Fig. \ref{fig4}\textbf{(c)}), while their form appears distorted when expressed in displacement coordinates (Fig. \ref{fig4}\textbf{(d)}). Building on our unique definition of generalized strain in the Kitaev chain may also open the door for establishing symmetries based on other coordinates paired with the appropriate boundaries.
\subsection*{Experimental results}
To experimentally verify our predictions, we prepare a test setup to probe the Kitaev system with fixed-free boundary conditions so that both types of edge states can be observed in the system without changing the mounting. For a large chain with a negligible interaction between two boundaries, we expect the emergence of an edge state at the fixed end, as dictated by the BBC of the fixed-fixed chain. Similarly, we expect an edge state at the free end as well, albeit for different $P$ values than the fixed chain.
As such, the fixed-free chain should always have an edge state at one edge for all values of $P$ except $P=2$ (where the band gap closes). Systems with $P<2$ and $P>2$ support an edge state on the fixed and free ends, respectively.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{Figure5.pdf}
\end{center}
\caption{\label{fig5}
\textbf{Experimental observation of edge states in the Kitaev chain.}
(a) Schematic of the experimental setup, suspended vertically by fixing particle $\sharp$1.
Three points are probed on each particle to characterize the transverse displacement and rotation.
(b) Measured frequency response at particle $\sharp$7 when the chains with $P=1.5$ and $P=2.5$ are excited at the fixed end (at particle $\sharp$2) or at the free end (particle $\sharp$13). The blue area corresponds to the band gap.
Measured amplitudes of the edge state (displayed in displacement coordinates) (c) localized at the fixed boundary for $P=1.5$, and (d) at the free boundary for $P=2.5$.}
\end{figure}
We fabricate chains of 13 masses (large cuboids) through additive manufacturing and suspend them vertically by mounting the particle $\sharp1$ as shown in Fig.~\ref{fig5}\textbf{(a)}. Therefore, the system represents a fixed-free chain. We consider two chains with different $P$ and excite them with an automatic modal hammer by striking the particle $\sharp2$ or $\sharp13$ corresponding to the fixed or free sides. By using a laser Doppler vibrometer, we then measure the velocity at multiple points along the chain. See Methods and Supplementary Information~\cite{SI} for more details on fabrication, experimental setup, and data acquisition.
Figure~\ref{fig5}\textbf{(b)} shows the experimentally measured frequency response at particle $\sharp7$ when the chains with $P=2.5$ and $P=1.5$ are excited from different ends. We witness a band gap (highlighted region) and a peak inside it, which appears for a given chain and side of excitation, corresponding to the edge state. The state inside the band gap exists at the fixed end for $P=1.5$ and at the free end for $P=2.5$, as theoretically predicted.
To verify that these modes are indeed localized at different edges, we reconstruct the mode shapes from the experimental data in Figs.~\ref{fig5}\textbf{(c,d)}. We observe excellent agreement between predictions and experiments, where amplitude decay can be seen as one goes away from the boundaries. We also note that the edge state localized at the free end [Fig.~\ref{fig5}\textbf{(d)}] is different in its shape compared to its counterpart for the fixed edge, as discussed earlier, corroborating the inversion of topological phases predicted for our mechanical system.
\def\bibsection{\section*{}}
| {
"timestamp": "2022-12-22T02:12:05",
"yymm": "2212",
"arxiv_id": "2212.10942",
"language": "en",
"url": "https://arxiv.org/abs/2212.10942",
"subjects": "Mesoscale and Nanoscale Physics (cond-mat.mes-hall); Materials Science (cond-mat.mtrl-sci)",
"title": "Strain topological metamaterials"
} |
https://arxiv.org/abs/2106.01313 | A modal description of paraxial structured light propagation | Here we outline a description of paraxial light propagation from a modal perspective. By decomposing the initial transverse field into a spatial basis whose elements have known and analytical propagation characteristics, we are able to analytically propagate any desired field, making the calculation fast and easy. By selecting a basis other than that of planes waves, we overcome the problem of numerical artefacts in the angular spectrum approach and at the same time are able to offer an intuitive understanding for why certain classes of fields propagate as they do. We outline the concept theoretically, compare it to the numerical angular spectrum approach, and confirm its veracity experimentally using a range of instructive examples. We believe that this modal approach to propagating light will be a useful addition to toolbox for propagating optical fields. | \section{Introduction}
\noindent Our understanding of the propagation of light has been refined over the centuries, starting with geometric approaches that have their foundation in concepts outlined more than 400 years ago, through to a wave description that was given a firm theoretical footing nearly 200 years later \cite{young:light}, the two married through notions of stationary phase, least action and path interference \cite{gitin2013huygens}. From Maxwell onwards we have been able to calculate the propagation of arbitrary optical fields directly from the wave equation, finding exotic scalar solutions that include accelerating light \cite{efremidis2019airy}, non-diffracting light \cite{mazilu2010light}, the eigenmodes of free-space in various co-ordinate systems \cite{gutierrez2005helmholtz} and vectorial light \cite{zhan2009cylindrical,rosales2018review}, generally referred to today as structured light \cite{roadmap,forbes2020structured,forbes2021structured}. The classes and their properties are more commonly grouped and understood using geometrical \cite{padgett1999poincare,Holleczek2011,Milione2012higher,alonso2017ray,gutierrez2019modal} and operator \cite{Stoler:81,dennis2019gaussian} perspectives, offering a deeper understanding of their commonality. In the context of laser beams, statistical tools applied to scalar fields have revealed the commonality in behaviour of all classes of beams \cite{siegman1991defining}, for example, that their second moment widths and divergences follow the same propagation rule as Gaussian beams but with adjusted beam quality factors \cite{siegman1993defining}, while a quantum toolkit has been successfully applied to vectorial light to quantify its degree of vectorness \cite{McLaren2015,Ndagano2016} and many other parameters \cite{qian2015shifting} by entanglement measures, exploiting parallels between non-separability in vector beams and quantum entangled states \cite{forbes2019classically,konrad2019quantum,Eberly2016}.
But how does one calculate the propagation of these exotic fields? The standard textbook approach is to use the angular spectrum method by decomposing the fields into a basis of plane waves \cite{goodman2005introduction}. Its numerical nature means that it offers little physical insight into the nature of the propagation, although it does lend itself to easy implementation both on a computer and in the laboratory for ``digital'' propagation of paraxial light \cite{Schulze2012C}.
In this work we outline a modal approach to the propagation of arbitrary optical fields, shown graphically in Fig.~\ref{fig:concept}. We decompose an initial field at the plane $z=0$ into an appropriate basis with a known $z$-dependent propagation function. Because each basis element in the decomposition can be propagated analytically, so can the entire initial field which may not have any known analytical propagation rule. We use our approach to offer an intuitive explanation for some well-known propagation properties, including the propagation invariance of certain classes of modes, why lenses focus light, and why there is a ``far field'' where the light's propagation remains shape invariant. To illustrate the ease of implementation and accuracy of the approach, we compare it to the numerical angular spectrum approach, itself a type of modal analysis, showing excellent agreement, and then validate the method by experiment. Although we have restricted our examples to the ubiquitous scalar paraxial transverse spatial modes of light for brevity, it should be clear that it works equally well for vectorial light fields by applying the decomposition twice, once for each vector component. Similarly, it can be adapted to the time domain by using a one dimensional basis with known dispersion in certain media. We believe that this approach is powerful, intuitive and will be a useful resource in both research and teaching laboratories alike.
\begin{figure*}[ht!]
\centering\includegraphics[width=\linewidth]{Figure1.png}
\caption{(a) An arbitrary mode, shown on the LHS, can be decomposed into a sum of eigenmodes with complex weightings, shown on the RHS. (b) Each eigenmode on the RHS has an analytical $z$-dependent propagation, whose sum returns the propagation of the arbitrary mode. Two examples of the decomposition are shown in (c) and (d), where the desired fields are expressed as $\ell$ and $p$ Laguerre-Gaussian modes with the amplitudes $|\rho_{p,\ell}|^2$ (top rows) and phases $\theta_{p,\ell}$ (bottom rows) shown as false colour weights.}
\label{fig:concept}
\end{figure*}
\section{Spatial modes as a basis}
Consider that we wish to perform a modal expansion of an initial field, $u(x,y,z=0)$, into some orthonormal basis $\psi_{i}(x,y,z=0)$ where
\begin{equation}
u (x,y,z=0) = \sum_{i} c_{i} \psi_{i} (x,y,z=0)\,.
\label{modal0}
\end{equation}
To find the unknown coefficients $c_{i}$ one performs a modal decomposition, which can be done numerically or optically \cite{pinnell2020modal}, to return
\begin{equation}
c_{i} = \iint u (x,y,0)\psi^*_{i} (x,y,z=0) dA\,.
\label{modaldecomp}
\end{equation}
If the basis has a known propagation rule, i.e., $\psi_{i}(x,y,z=0) \overset{z}{\longrightarrow} \psi_{i}(x,y,z)$, then since each basis element on the RHS has a known $z$ dependence, we can find the propagation of the LHS by
\begin{equation}
u (x,y,z) = \sum_{i} c_{i} \psi_{i} (x,y,z)\,.
\label{modalprop}
\end{equation}
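The decomposition and propagation equations above translate directly into code. The following sketch (our own illustration; the grid, waist and test field are arbitrary choices, not the paper's) builds an orthonormal HG basis at $z=0$, recovers the coefficients of a known superposition via the overlap integrals, and confirms that the modal powers sum to the total power:

```python
import numpy as np
from math import factorial, pi, sqrt

# Grid and waist in arbitrary units (assumptions of this sketch)
w0, N, L = 1.0, 256, 10.0
x = np.linspace(-L/2, L/2, N)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0])**2

def HG(n, m):
    """Orthonormal Hermite-Gaussian mode at z = 0 (constant phase factors omitted)."""
    Hn = np.polynomial.hermite.hermval(sqrt(2)*X/w0, [0]*n + [1])
    Hm = np.polynomial.hermite.hermval(sqrt(2)*Y/w0, [0]*m + [1])
    norm = sqrt(2/pi) / (w0 * sqrt(2.0**(n + m) * factorial(n) * factorial(m)))
    return norm * Hn * Hm * np.exp(-(X**2 + Y**2)/w0**2)

# Initial field: a known superposition with unit power
u0 = (HG(0, 0) + 1j*HG(2, 1)) / sqrt(2)

# Modal decomposition via the overlap integrals c_i = \int u psi_i^* dA
c = {(n, m): np.sum(u0 * np.conj(HG(n, m))) * dA
     for n in range(4) for m in range(4)}

print(abs(c[(0, 0)])**2, abs(c[(2, 1)])**2)    # each ~ 0.5
print(sum(abs(v)**2 for v in c.values()))      # ~ 1, i.e. Parseval holds
```

With the coefficients in hand, the field at any $z$ follows by evaluating each basis element at that $z$ and re-summing, exactly as in the last equation above.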
We will show that this simple expansion is powerful both computationally and intuitively. Since the transverse plane is two-dimensional, examples of appropriate bases would be the Hermite-Gaussian (HG) or Laguerre-Gaussian (LG) beams (to name but two), given by
\begin{equation}
u (x,y,z) = \sum_{n,m} c_{n,m} \text{HG}_{n,m}(x,y,z) = \sum_{p,\ell} c_{p,\ell} \text{LG}_{p,\ell}(r,\phi,z)\,.
\label{modalz}
\end{equation}
We will use these two bases in the remainder of this report although any orthonormal basis will do. For example, in the Hermite-Gaussian basis we have
\begin{multline}
\text{HG}_{n,m}(x,y,z) = \sqrt{I_0} \text{H}_{n} \left( \frac{\sqrt{2} x}{w(z)} \right) \text{H}_{m} \left( \frac{\sqrt{2} y}{w(z)} \right) \\
\times \exp \left( -\frac{x^2 +y^2}{w^2(z)} \right) \exp ( i \eta(x,y,z) )\,,
\end{multline}
\noindent where
\begin{align}
\eta (x,y,z) &= -kz - k \frac{x^2 + y^2}{2R(z)} + (m+n+1) \arctan \left( \frac{z}{z_R}\right)\,, \nonumber \\
z_R &= \frac{\pi w_0^2}{\lambda}\,, \nonumber \\
w(z) &= w_0 \sqrt{1 + \frac{z^2}{z^2_R}}\,, \nonumber \\
R(z) &= z \left( 1 + \frac{z^2_R}{z^2} \right)\,, \nonumber
\end{align}
\noindent $w_0$ is the embedded Gaussian beam size at $z=0$ and $I_0$ is found by normalising the power to $P$, to return
\begin{equation}
I_0 = 2 \mu_0c \frac{w^2_0}{w^2(z)} \frac{P}{\pi w^2(z) n! m! 2^{m+n-1}}\,.
\end{equation}
We can change to the Laguerre-Gaussian basis
\begin{multline}\label{LG}
\text{LG}_{p,\ell}(r,\phi,z) = \sqrt{I_0} \text{L}_p^{\ell} \left( \frac{2r^2}{w^2(z)}\right) \left( \frac{\sqrt{2}r}{w(z)}\right)^{|\ell|}
\\ \times \exp \left( -\frac{r^2}{w^2(z)} \right) \exp ( i \eta(r,\phi,z) )\,,
\end{multline}
\noindent where
\begin{align}
\eta (r,\phi,z) &= -kz - \frac{kr^2}{2R(z)} -\ell \phi + (2p+|\ell|+1) \arctan \left( \frac{z}{z_R}\right)\,, \nonumber \\
I_0 &= 2 \mu_0c\frac{2P p!}{\pi w^2(z) (p + |\ell|)!}\,, \nonumber
\end{align}
\noindent and all other terms share the functional form of the HG modes. These two examples are pertinent since they encompass both Cartesian and cylindrical symmetries.
\section{Modal propagation}
To understand what influences the propagation dynamics, we note that the modal expansion is considered complete, so that the modal powers add up to the total power of the field, which we will set to $P=1$ for convenience. Using the HG modes as an example, this means that $\sum_{n,m} |c_{n,m}|^2 = 1$. However, the modal weightings are in general complex numbers, $c_{n,m} = \rho_{n,m} \exp(i \theta_{n,m})$, with modal amplitudes ($\rho_{n,m}$) and phases ($\theta_{n,m}$). The initial field can be viewed as the interference of many HG modes. But in free-space there is no coupling between the HG modes, that is, there is no power exchange where one HG mode gains power at the expense of another, so the modal powers remain invariant. Likewise, the initial modal phases are constants that do not change with distance, but they appear to be altered by the Gouy phase change that is both mode and distance dependent, $\eta_{n,m}(z) \propto (m+n+1) \arctan( \frac{z}{z_R})$. The phase change $\Theta_{n,m}(z) = \theta_{n,m} + \eta_{n,m}(z)$ therefore holds all the information on how the propagation of any arbitrary mode (on the LHS of Eq.~\ref{modalz}) will evolve, since this term causes the various HG modes to interfere either constructively or destructively, altering with distance. In the modal perspective it is this interference that determines how optical modes propagate. In the case of the HG and LG bases, all modes have exactly the same radius of curvature, independent of basis indices. This crucial fact eliminates all spatial dependence from the relative phase, making it only a function of $z$.
What do we gain from such an expansion? There are at least two advantages: (1) the propagation becomes computationally simple since one need only perform a modal decomposition once, on the initial field, and thereafter only analytical propagation of each basis element is performed; (2) the propagation becomes more intuitive, which we illustrate with the examples to follow.
\\
\\
\noindent \textbf{Eigenmodes of free space I:} Why are some optical fields propagation invariant in free-space? For example, the HG and LG modes do not change their intensity profile (shape) during propagation and instead only slowly diverge in size; we will call this ``propagation invariant''. In the modal propagation scenario, if the optical field in question is $u(x,y) = \text{HG}_{n,m}(x,y)$ then an expansion following Eq.~\ref{modalz} becomes
\begin{equation}
u (x,y,z) = \sum_{n,m} c_{n,m} \text{HG}_{n,m}(x,y,z) = \text{HG}_{n,m}(x,y,z)\,.
\end{equation}
\noindent This holds since $|c_{n,m}|^2 = 1$. As there is only one eigenmode on the RHS, there can be no interference and hence no change to the optical field during propagation. Thus, if the initial field can be written in some coordinate system where it appears as a basis element, then in that symmetry it will be propagation invariant.
\\
\\
\noindent \textbf{Eigenmodes of free space II:} if the interference between terms in the expansion never changes, then the field must likewise be propagation invariant. This happens when every basis element in the superposition has the same mode number, $N_{n,m} = n + m + 1$ or $N_{p,\ell} = 2p + |\ell| + 1$, since all terms then acquire the same Gouy phase. The modal propagation approach therefore predicts an infinite set of propagation invariant modes formed from appropriate superpositions of basis elements, not just the trivial single-mode case shown above. For example, an initial six-petal mode with azimuthal structure $u(r,\phi) \propto \cos (3 \phi)$ can be written as
\begin{equation}
u(r,\phi) \propto \cos (3\phi) \overset{z}{\longrightarrow} \text{LG}_{0,3}(r,\phi,z) + \text{LG}_{0,-3}(r,\phi,z)\,.
\end{equation}
Here the phase change of each mode is the same with $z$, so the interference pattern, and hence the field itself, remains unaltered with distance. The same construction works in Cartesian coordinates: the superposition
\begin{equation}
u(x,y) = \frac{1}{\sqrt{2}} \text{HG}_{5,3}(x,y,0) + \frac{1}{\sqrt{2}} \text{HG}_{1,7}(x,y,0)\,,
\end{equation}
\noindent which pairs HG modes of equal mode number ($5+3+1 = 1+7+1 = 9$), is likewise propagation invariant, and so on.
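The invariance criterion can be checked in one line using the mode-number scaling of the Gouy phase quoted earlier, $\eta \propto (2p+|\ell|+1)\arctan(z/z_R)$ (a sketch we add here; not code from this work):

```python
import numpy as np

def gouy(p, ell, z, zR=1.0):
    """Gouy phase of LG_{p,ell} at distance z (up to the common prefactor)."""
    return (2 * p + abs(ell) + 1) * np.arctan(z / zR)

z = np.linspace(0.0, 50.0, 501)
# LG_{0,3} and LG_{0,-3} share mode number N = 4: zero relative phase at all z,
# so the petal interference pattern never changes
assert np.allclose(gouy(0, 3, z) - gouy(0, -3, z), 0.0)
# whereas, e.g., LG_{0,3} against LG_{1,3} (N = 6) dephases with distance
assert not np.allclose(gouy(0, 3, z) - gouy(1, 3, z), 0.0)
```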
\\
\\
\noindent \textbf{General modes in free space:} in contrast, consider the general case where the field to be propagated is arbitrary. Now the evolving phase of each basis element brings it in and out of constructive interference with the others, so that the field itself must change during propagation. An example of this is a flat-top beam, expressed as
\begin{eqnarray}
u(x,y) = \exp\bigg(-\frac{\big(x^{2n} + y^{2n}\big)}{w_0^{2n}}\bigg),
\label{eq:ft}
\end{eqnarray}
\noindent where $n$ is the order of the super-Gaussian. Such beams change dramatically in shape during propagation \cite{gori1994flattened}.
\\
\\
\noindent \textbf{The far-field:} why do all optical fields converge to a ``far-field'' pattern after several Rayleigh ranges ($z_R$) and thereafter no longer alter in structural form with distance (other than a scale change)? In the modal propagation interpretation it is because the modal phases converge to the constant $\Theta_{n,m}(z) \rightarrow \pi (n+m+1)/2 + \theta_{n,m}$ for $z \gg z_R$. The relative phase of each basis element then remains unchanged, and therefore so does the interference between the modes. If the interference does not change, then neither can the propagating field. This is why all fields eventually become ``propagation invariant''. The converse is equally true: if $z \ll z_R$ then the $z$-dependence of the Gouy phases is negligible, so that $\Theta_{n,m}(z) \rightarrow \theta_{n,m}$. Here again there is no change in interference with distance, so the field must be close to ``propagation invariant'', by virtue of a large Rayleigh range.
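The rate of this convergence follows from the exact identity $\pi/2 - \arctan(z/z_R) = \arctan(z_R/z)$: beyond a few Rayleigh ranges the residual phase of every mode shrinks like $z_R/z$, freezing the inter-modal interference. A quick numerical confirmation (our own sketch):

```python
import numpy as np

zR = 1.0
z = np.array([5.0, 50.0, 500.0]) * zR
psi = np.arctan(z / zR)
# residual distance to the far-field value pi/2 equals arctan(zR/z) ~ zR/z
assert np.allclose(np.pi / 2 - psi, np.arctan(zR / z))
assert np.all(np.diff(np.pi / 2 - psi) < 0)   # monotone convergence
```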
\\
\\
\begin{figure}
\centering\includegraphics[width=\linewidth]{Figure2.png}
\caption{(a) When a Gaussian beam with a planar wavefront is decomposed into the LG basis at $z=0$, only the $p=0$ (Gaussian) term has a non-zero weighting. But if a Gaussian beam with a curvature phase is decomposed into the same basis, many radial ($p$) modes appear, each with a modal phase that changes linearly with $p$, shown in (b) and (c), respectively. The insets show the Gaussian intensity and phase as illustrative false-colour plots.}
\label{fig:gauss}
\end{figure}
\noindent \textbf{Why lenses focus:} an intriguing aspect of a modal decomposition is that the choice of basis and basis parameters determines the number of modes in the expansion, as shown graphically in Fig.~\ref{fig:gauss}. For example, a Gaussian beam with curvature, $\text{LG}_{0,0} (z=0) \exp(ikr^2/2f)$, when expanded in the LG basis with no curvature (planar wavefronts), returns a superposition of many radial modes.
However, a Gaussian beam with curvature is equivalent to a planar-wavefront Gaussian passed through a lens. This suggests that the focussing action of a lens can be given a modal explanation: the interference of many radial modes gives rise to the lensing action, predicting that the beam should converge to a spot (or diverge if the sign is reversed). How can we make sense of this in the mathematics? The complex expansion coefficients for this special case can be calculated analytically and found to be
\begin{equation}
c_{p} = |c_p| \exp \left( i p \arctan (k w_0^2/4f) \right)\,,
\end{equation}
\noindent with
\begin{equation}
|c_p| = 4f(k w_0^2)^p \left(16 f^2 + k^2 w_0^4\right)^{-\frac{p+1}{2}}\,,
\end{equation}
\noindent where a constant modal phase of $\exp ( i \arctan (-k w_0^2/4f))$ common to all modes has been dropped (since it is the relative phase that matters). We see that the phase scales \textit{linearly} with mode order: $\theta (p) = p \arctan (k w_0^2/4f)$. However, we know that a lens phase function scales quadratically with the radial coordinate. To see the connection, we approximate the Laguerre-Gaussian beam as an oscillating cosine function (by linking the Laguerre-Gaussian beam to its Bessel equivalent \cite{mendoza2015laguerre} and the Bessel equivalent to the cosine function \cite{litvin2008bessel}), finding $\cos ( 2 \sqrt{2p+1}\, r/w_0 - \pi/4)$, so that for large $p$ the $p^\text{th}$ ring will be approximately located at $r_p^2 \approx p (w_0 \pi/16) ^2$. Since $r^2 \propto p$ and $\theta \propto p$, we have $\theta \propto r^2$, as needed for a lensing action. It is instructive to compare this situation to the binary ring construction of a lens, obtained by finding those radii where the light arrives in phase at the desired focal spot a distance $f$ from the $r=0$ position. This is easily calculated to be $r_n^2 \approx 2 n \lambda f$: the zones scale linearly with ring number (diffraction order) $n$. In the modal case, these rings are constructed from carefully crafted phase variations, done automatically by the modal decomposition, while the amplitudes account for the distribution of the light.
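The linear scaling of the modal phase with $p$ is easy to verify numerically. The sketch below (our own construction; the waist and focal length are illustrative choices) expands a Gaussian carrying the curvature $\exp(ikr^2/2f)$ into planar-wavefront $\text{LG}_{p,0}$ modes via radial overlap integrals, then checks both completeness and the constant phase step per $p$.

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

w0, lam, f = 1.0e-3, 633e-9, 5.0        # waist (m), wavelength (m), focal length (m)
k = 2.0 * np.pi / lam
r = np.linspace(0.0, 6.0 * w0, 20001)
dr = r[1] - r[0]

def lg_p0(p):
    """Orthonormal planar-wavefront LG_{p,0} radial profile at the waist."""
    coef = np.zeros(p + 1)
    coef[p] = 1.0
    t = 2.0 * r ** 2 / w0 ** 2
    return np.sqrt(2.0 / np.pi) / w0 * lagval(t, coef) * np.exp(-t / 2.0)

# Gaussian with lens curvature, expanded in the planar LG_{p,0} basis
u_in = lg_p0(0) * np.exp(1j * k * r ** 2 / (2.0 * f))
c = np.array([np.sum(u_in * lg_p0(p) * 2.0 * np.pi * r) * dr for p in range(12)])

assert abs(np.sum(np.abs(c) ** 2) - 1.0) < 1e-3   # completeness: powers sum to 1
steps = np.diff(np.unwrap(np.angle(c)))           # phase step per p
assert np.allclose(steps, steps[0], atol=1e-3)    # linear phase in p
```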
With the intuitive benefits now highlighted, we move on to demonstrate the ease of implementation, both numerically and experimentally.
\section{Experimental and Numerical Validation}
\begin{figure}[t]
\centering\includegraphics[width=\linewidth]{setupV1.png}
\caption{Illustration of the experimental set-up, where a spatial light modulator (SLM) was used to digitally create the desired fields to be tested.}
\label{fig:setup}
\end{figure}
\noindent To validate this concept, we perform an experiment and a numerical comparison to the common angular spectrum approach. Our experimental set-up, shown in Fig.~\ref{fig:setup}, makes use of a visible laser beam and a spatial light modulator (SLM). A Gaussian beam from a HeNe laser ($\lambda=633$ nm) was passed through a polarizer (Pol), oriented for horizontal polarization, before being expanded by a 10$\times$ objective lens (OL) and then collimated by an $f$ = 300 mm lens to overfill the SLM (HoloEye, PLUTO-VIS, with 1920 $\times$ 1080 pixels of pitch 8 $\mu$m, calibrated for a 2$\pi$ phase shift at $\lambda = 633$ nm). The SLM was encoded with an appropriate computer-generated hologram to create the desired field to be tested, often requiring complex amplitude modulation \cite{Forbes2016, SPIEbook}. The desired mode was imaged by lenses $f_2$ = $f_3$ = 300 mm, with the aperture (ap) at the Fourier plane used to remove unwanted diffraction orders. A Point Grey Firefly camera was used to measure the beam profiles from the image plane ($z=0$) as a function of $z$ by moving the camera on a rail. The second-moment width of the beam at each position was calculated from the captured images. To measure the far field and to observe the beams passing through their waist planes, we employed a digital lens of focal length $f$ programmed on the SLM rather than a physical lens.
\begin{figure}[h!]
\centering\includegraphics[width=\linewidth]{Figure4.png}
\caption{Near-field (NF) and far-field (FF) results for the four test cases using the modal propagation approach, showing intensity and phase. An additional curvature $\exp(ikr^2/2f)$ with $f=200$ mm was added to the total phase of each test case in order to measure the far-field plane. Each row shows, from left to right: the reconstructed phase of the beam at $z = 0$; a comparison of the numerically calculated (Th) and experimentally measured (Me) intensity profiles at $z=0$; the reconstructed phase at $z = f$; and a comparison of the calculated (Th) and measured (Me) intensity profiles at $z = f$.}
\label{fig:NF-FF}
\end{figure}
We selected four test cases to cover a range of possibilities: (1) a Gaussian beam, (2) a superposition of LG$_{0,\pm1}$ beams, (3) a flat-top beam with $n=10$ and (4) an exotic mode similar to that of Fig.~\ref{fig:concept}(a), given by
\begin{multline}
u_{\text{Ex}}= 0.5e^{i\pi}\text{LG}_{0,1}+0.25e^{i\frac{\pi}{4}}\text{LG}_{1,-1}+e^{-i\frac{\pi}{2}}\text{LG}_{1,2} \\ +0.5\text{LG}_{0,0}+0.25e^{i\frac{\pi}{4}}\text{LG}_{1,1}+e^{-i\frac{\pi}{2}}\text{LG}_{2,-1}\,.
\end{multline}
The results of these tests are shown in Figs.~\ref{fig:NF-FF} and \ref{fig:width}.
Fig.~\ref{fig:NF-FF} shows good agreement between the simulated beam profiles using our modal propagation approach (Th) and the measured profiles (Me), both in the near field ($z=0$) and the far field ($z=f$). The results are consistent with the predictions: some modes are propagation invariant and others are not. The agreement in both the near and far fields indicates that the approach holds across the full range of propagation distances.
\begin{figure}[h!]
\centering\includegraphics[width=\linewidth]{Figure5.png}
\caption{Second moment beam widths in the $x-$ and $y-$axis as a function of propagation distance ($z$) for the (a) flat-top and (b) exotic beam, comparing experimentally measured widths (Exp) to those calculated by the angular spectrum (AS) and modal (Modal) approaches. The insets show the measured intensities (Me) and the theoretical (Th) intensities from the modal approach.}
\label{fig:width}
\end{figure}
To quantify the agreement, we measured beam images from $z=0$ to $z = 400$ mm and calculated the second-moment beam radius along the two orthogonal axes, with the results for the flat-top and exotic beams shown in Fig.~\ref{fig:width}. We overlay on the results a calculation from the traditional angular spectrum (AS) approach, implemented here by taking the 2D fast Fourier transform (FFT) of the optical field, propagating each plane-wave component analytically, and then taking the inverse 2D FFT to obtain the superposition of all propagated plane-wave components in the observation plane. It is evident that there is excellent agreement between the calculated (AS and modal) and measured (Exp) results.
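For completeness, the angular spectrum benchmark can be sketched in a few lines (our own minimal implementation; the grid size, wavelength and waist are illustrative choices): Fourier transform the field, multiply each plane-wave component by its propagation phase, and inverse transform. As a sanity check, a Gaussian beam propagated one Rayleigh range should widen by $\sqrt{2}$ while conserving power.

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate a sampled 2D field u0 over a distance z via the angular spectrum."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)      # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u0) * H)

lam, w0 = 633e-9, 1.0e-3                     # wavelength and waist (m)
zR = np.pi * w0 ** 2 / lam                   # Rayleigh range
n, dx = 512, 40e-6
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u0 = np.exp(-(X ** 2 + Y ** 2) / w0 ** 2)    # Gaussian beam at its waist

uz = angular_spectrum(u0, lam, dx, zR)       # propagate one Rayleigh range
I = np.abs(uz) ** 2
sigma = np.sqrt(np.sum(X ** 2 * I) / np.sum(I))
# second-moment width w = 2*sigma should follow w(z) = w0*sqrt(1 + (z/zR)^2)
assert abs(2.0 * sigma - w0 * np.sqrt(2.0)) / w0 < 0.02
assert np.isclose(np.sum(I), np.sum(np.abs(u0) ** 2), rtol=1e-6)  # power conserved
```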
\section{Conclusion}
The modal approach to propagating arbitrary forms of structured light has the advantages of being analytic and computationally simple, and it offers physical insight into the propagation dynamics of classes of modes. Here we have outlined the approach, used it to offer an intuitive understanding of paraxial light propagation, validated it against the traditional angular spectrum method and confirmed it experimentally. Although this test was limited to scalar light, the approach extends directly to vectorial light by applying it to each polarization component of the field.
\section*{Funding}
The authors thank the National Science Foundation of China (NSFC), grant agreement no.~92050202, the China Post-doctoral Science Foundation, grant agreement no.~238691, and the CSIR-IBS (South Africa) bursary scheme.
\section*{Disclosures}
The authors declare no conflicts of interest.
\chapter*{}
\par~
\begin{center}
\textbf{{\fontsize{16}{1em}\selectfont DECLARATION}} \\
\textit{By the Ph.D. Research Scholar}
\end{center}
\par I hereby declare that the research thesis entitled
\textbf{METRIC, SCHAUDER AND OPERATOR-VALUED
FRAMES} which is being
submitted to the \textbf{National Institute of Technology Karnataka,
Surathkal} in partial fulfillment of the requirements for the award
of the Degree of \textbf{Doctor of Philosophy} in
\textbf{Mathematical and Computational Sciences} is a
\textit{bonafide report of the research work carried out by me}. The material contained in this research thesis has not been submitted to any University or Institution for the award of any degree.
\par~
\par~
\par~
\par~
\par~
\par~
\begin{flushright}
Place: NITK, Surathkal \hfill MAHESH KRISHNA K\\
Date: $\quad$ February 2022 \hfill 165106MA16F02\\
Department of MACS\\
NITK, Surathkal
\end{flushright}
\leavevmode\newpage
\thispagestyle{empty}
\leavevmode\newpage
\thispagestyle{empty}
\par~
\begin{center}
\textbf{{\fontsize{16}{1em}\selectfont CERTIFICATE}}
\end{center}
\par This is to \textit{certify} that the research thesis entitled
\textbf{METRIC, SCHAUDER AND OPERATOR-VALUED
FRAMES} submitted by
\textbf{Mr. \; MAHESH \; KRISHNA \; K} (Register Number : 165106MA16F02) as the record of the research work carried out by him is \textit{accepted as the research thesis submission} in partial fulfillment of the requirements for the award of degree of \textbf{Doctor of Philosophy}.
\par~
\par~
\par~
\par~
\begin{flushright}
Place: NITK, Surathkal \hfill \textbf{Dr. P. Sam Johnson}\\
Date: $\quad$ February 2022 \hfill Research Guide\\
\hfill Department of MACS\\
\hfill NITK Surathkal, Karnataka, India\\
\par~
\hspace{2cm}
\end{flushright}
\par~
\par~
\par~
\par~
\begin{flushright}
\textbf{Chairman - DRPC}\\
\hspace{3cm}
(Signature with Date and Seal)
\end{flushright}
\leavevmode\newpage
\thispagestyle{empty}
\leavevmode\newpage
\thispagestyle{empty}
\begin{center}
{\onehalfspacing \section*{\LARGE{ACKNOWLEDGEMENTS}}}
\end{center}
I begin by remembering my guide Dr. P. Sam Johnson. Whatever epsilon I am today is because of him.\\
\indent The kind reply by Prof. Victor Kaftal, University of Cincinnati, Ohio, developed confidence in me to do research, and I am very thankful to him. Also, I thank Prof. P. G. Casazza, Missouri University, USA, Prof. Radu Balan, University of Maryland, USA, Prof. D. Freeman, St. Louis University, USA, Prof. Orr Moshe Shalit, Technion, Israel, Prof. Terence Tao, University of California, Los Angeles, USA and Prof. B. V. Rajarama Bhat, ISI Bangalore for their replies to my mails.\\
I am happy to thank my friends Ramu G., Vinoth A., Kanagaraj K., Palanivel R., Niranjan P.K., Sumukha S., Saumya Y. M., Shivarama K. N., Sachin M., Mahesh Mayya, Chaitanya G. K., Manasa M., Rashmi K., Megha P. for their help in my life at NITK.\\
\indent My Research Progress Assessment Committee (RPAC) members Prof. Subhash C. Yaragal, Department of Civil Engineering and Prof. B. R. Shankar, Department of MACS helped me at various stages of research. I thank the RPAC members.\\
\indent I would like to thank former heads of MACS department Prof. Santhosh George, Prof. B. R. Shankar, Prof. S. S. Kamath and the present head Dr. R. Madhusudhan for their help at different times. Further, I thank all faculty members of MACS department and office staff.\\
\indent It was in MTTS where I started thinking seriously about results. I am thankful to Prof. S. Kumaresan, Central University of Hyderabad for the initiation of MTTS which transformed Mathematics of India to a greater extent. \\
\indent The concept of Advanced Training in Mathematics Schools (ATM Schools) is a very useful initiative of National Centre for Mathematics (NCM) India which helps a lot for Ph.D. Scholars. I am fortunate to attend a dozen of these schools. Thanks to initiators and organizers of various ATM schools.\\
\indent Ph.D. course work papers of mine were given by Dr. Murugan V., Mrs. Sujatha D. Achar, Prof. B. R. Shankar and Prof. A. H. Sequeira. I am thankful to them.\\
\indent When I was teaching at the Centre for Post-Graduate Studies and Research, St. Philomena College at Puttur, my classmates and colleagues Vaishnavi C. and Prasad H. M. helped me at various times. My students also helped me a lot to understand the subject better. It is necessary to remember all of them.\\
\indent Prof. M. S. Balasubramani, Prof. S. Parameshwara Bhatta, Mohana K. S., Dr. Chandru Hegde taught me at Mangalore University. I particularly remember them for their teaching.\\
\indent I end the acknowledgements by remembering my father Narasimha D. K., my mother Sundari V., my sisters Shraddha K., Pavithra K., Pavana K., my brother late Ganesh K. R. and my primary teacher Nataliya K.
\vspace{3cm}
\noindent Place: NITK, Surathkal \hfill MAHESH KRISHNA K
\\Date: 17 February 2022
\thispagestyle{empty}
\bigskip\medskip
\leavevmode\newpage
\pagenumbering{roman}
\addcontentsline{toc}{section}{ABSTRACT}
\begin{center}
{\onehalfspacing \section*{\LARGE{ABSTRACT}}}
\end{center}
The notion of frames and Bessel sequences for metric spaces is introduced. This notion is related to the notion of Lipschitz-free Banach spaces. It is proved that every separable metric space admits a metric $\mathcal{M}_d$-frame. Through Lipschitz-free Banach spaces it is shown that there is a correspondence
between frames for metric spaces and frames for subsets of Banach spaces. Several characterizations of metric frames are obtained. Stability results are also presented. Nonlinear multipliers are introduced and studied. This notion is connected with the notion of Lipschitz compact operators. Continuity properties of multipliers are discussed.
For a subclass of approximate Schauder frames for Banach spaces, a characterization result is derived using standard Schauder bases for standard sequence spaces. Duals of a subclass of approximate Schauder frames are completely described. Similarity for this class is characterized and an interpolation result is derived using orthogonality. A dilation result is obtained. A new identity is derived for Banach spaces which admit a homogeneous semi-inner product. Some stability results are obtained for this class.
A generalization of operator-valued frames for Hilbert spaces is introduced which unifies all the known generalizations of frames for Hilbert spaces. This notion is studied in depth by imposing a factorization property on the frame operator. Its duality, similarity and orthogonality are addressed. Connections between this notion and unitary representations of groups and group-like unitary systems are derived. A Paley-Wiener theorem for this class is derived.
\vspace{1cm}
\noindent {\small \textbf{Keywords: Frame, Riesz basis, Bessel sequence, Lipschitz function, multiplier, operator-valued frame, metric space. } }
\restoregeometry
\leavevmode\newpage
\tableofcontents
\newpage
\pagenumbering{arabic}
{\onehalfspacing \chapter{INTRODUCTION}\label{chap1} }
\vspace{0.5cm}
{\onehalfspacing \section{GENERAL INTRODUCTION}
A vector in a vector space is usually obtained as a linear combination of elements of a basis for the vector space. Thus a vector is fully known if we know the coefficients in its representation with respect to the basis elements. However, as the dimension of the space increases, it becomes difficult to compute the coefficients. Hence we look for nice spaces and certain bases which give the coefficients of a given vector easily. For this purpose, Hilbert spaces and orthonormal bases become very handy tools to obtain the representation of a vector. \\
Orthonormal bases for Hilbert spaces have practical disadvantages. Since each coefficient in the expansion is very important, a small error in one of the coefficients leads to a significant difference between the resultant vector and the actual vector. Thus we seek a collection in a Hilbert space which gives a representation and for which a small change in the coefficients does not affect the original vector much. This is where the theory of frames becomes important.\\
Historically, it was \cite{GABOR} who first studied representation of functions using translations and modulations of a single function (\cite{FEICHTINGERSTROHMERBOOK, FEICHTINGERSTROHMERBOOK2, GROCHENIGBOOK, FEICHTINGERLUEFWERTHER, FEICHTINGERKOZEKLUEF, SONDERGAARD}). In 1947, Sz.-Nagy studied sequences which are close to orthonormal bases using Paley-Wiener type results (\cite{NAGYEXP}). The modern definition of frames was given by \cite{DUFFIN1} in the study of sequences of the type $ \{e^{i\lambda_nx}\}_{n\in \mathbb{Z}}, \lambda_n \in \mathbb{C}, x\in \left(-r, r \right), r>0$. After this work, \cite{YOUNG} gave an account of frames in his book `An introduction to nonharmonic Fourier series'.\\
The paper of Daubechies, Grossmann and Meyer (\cite{MEYER1}) triggered the area of frames for Hilbert spaces. Later, the paper of \cite{BENEDETTOFICKUS} influenced the development of frame theory for finite dimensional Hilbert spaces. Today the theory of frames finds uses in many areas such as wireless communication (\cite{STROHMERAPP}), signal processing (\cite{MALLAT}), image processing (\cite{DONOHOELADOPTIMAL}), sampling theory (\cite{BSAMPLING}), filter banks (\cite{FICKUSMASSAR}), psychoacoustics (\cite{BALAZSPSYC}), quantum design (\cite{BODMANNHAASDESIGN}), quantum channels (\cite{HANJUSTE}), quantum optics (\cite{JAMIOIKOWSKI}), quantum measurement (\cite{ELDARFORNEYMEASUREMENT}), numerical approximation (\cite{ADCOCKHUYBRECHS}), Sigma-Delta quantization (\cite{BENEDETTOPOWELL}), coding (\cite{STROHMERHEATHCODING}) and graph theory (\cite{BODMANNPAULSEN}). For a comprehensive account of the theory of frames, we refer to (\cite{CHRISTENSEN}, \cite{HANLARSON}, \cite{HANKORNELSONLARSON}, \cite{CASAZZABOOK}, \cite{WALDRONBOOK}, \cite{HEILBOOK}, \cite{OKOUDJOUBOOK}, \cite{PESENSON}).
Since many spaces appearing in both theory and practice are Banach spaces which need not be Hilbert spaces, there is a need to extend the notion of frames to Banach spaces. This was first done by
\cite{GROCHENIG}. After the study of several function spaces (\cite{FEICHTINGERCHOOSE}), Gr\"{o}chenig first studied the notion of an atomic decomposition for Banach spaces and then defined the notion of a Banach frame.
\cite{FEICHTINGGERGROCHENIG1, FEICHTINGGERGROCHENIG2, FEICHTINGGERGROCHENIG3} in the 1990s developed a theory of atomic decompositions and frames for a large class of function spaces such as modulation spaces (\cite{FEICHTINGERLOOKING}) and coorbit spaces (\cite{BERGE}), via group representations and projective representations.\\
Abstract study of atomic decompositions and frames for Banach spaces started from the fundamental paper (\cite{CASAZZAHANLARSONFRAMEBANACH}). Further study and variations of frames for Banach spaces are carried out in \cite{CARANDOLASSALLESCHMIDBERG}, \cite{TEREKHIN2010}, \cite{TEREKHIN2009}, \cite{TEREKHIN2004}, \cite{FORNASIERALPHA}, \cite{CASAZZACHRISTENSENSTOEVA}, \cite{STOEVA2009}, \cite{STOEVA2012}, \cite{ALDROUBISLANTED}, \cite{GROCHENIGLOCALIZATION} and so on.
{\onehalfspacing \section{ORTHONORMAL BASES, RIESZ BASES, FRAMES AND\\ BESSEL SEQUENCES FOR HILBERT SPACES}
In the study of integral equations, Hilbert studied the space of square summable sequences (\cite{BLANCHARD}). Later, John \cite{VONNEUMANN} formulated the notion of Hilbert spaces.
\begin{definition}(cf. \cite{LIMAYE})
A vector space $\mathcal{H}$ over $\mathbb{K}$ ($\mathbb{R}$ or $\mathbb{C}$) is said to be a \textbf{Hilbert space} if there exists a map $\langle \cdot, \cdot \rangle:\mathcal{H} \times \mathcal{H}\to \mathbb{K}$ such that the following axioms hold.
\begin{enumerate}[label=(\roman*)]
\item $\langle h, h \rangle \geq 0$, $ \forall h \in \mathcal{H}$.
\item If $h \in \mathcal{H}$ is such that $\langle h, h \rangle = 0$, then $h=0$.
\item $\langle h, h_1 \rangle =\overline{\langle h_1, h \rangle}$, $ \forall h, h_1 \in \mathcal{H}$.
\item $\langle \alpha h+ h_1, h_2 \rangle =\alpha \langle h, h_2 \rangle+\langle h_1, h_2 \rangle$, $ \forall h, h_1, h_2 \in \mathcal{H}$, $\forall \alpha \in \mathbb{K}$.
\item $\mathcal{H}$ is complete with respect to the norm $\|h\|\coloneqq \sqrt{\langle h, h \rangle}$.
\end{enumerate}
\end{definition}
We now mention two important examples of Hilbert spaces.
\begin{example}(cf. \cite{LIMAYE})
\begin{enumerate}[label=(\roman*)]
\item Let $n\in \mathbb{N}$. The space $\mathbb{K}^n$ equipped with the inner product
\begin{align*}
\langle (a_k)_{k=1}^n, (b_k)_{k=1}^n\rangle \coloneqq \sum_{k=1}^na_k\overline{b_k}, \quad \forall(a_k)_{k=1}^n, (b_k)_{k=1}^n \in \mathbb{K}^n
\end{align*}
is a finite dimensional separable Hilbert space.
\item The space $\ell^2(\mathbb{N}) \coloneqq \{\{a_n\}_n: a_n \in \mathbb{K}, \forall n \in \mathbb{N}, \sum_{n=1}^{\infty}|a_n|^2<\infty\}$ equipped with the inner product
\begin{align*}
\langle \{a_n\}_n, \{b_n\}_n\rangle \coloneqq \sum_{n=1}^{\infty}a_n\overline{b_n}, \quad \forall \{a_n\}_n, \{b_n\}_n \in \ell^2(\mathbb{N})
\end{align*}
is an infinite dimensional separable Hilbert space. The space $\ell^2(\mathbb{N})$ is known as the standard separable Hilbert space.
\end{enumerate}
\end{example}
Throughout this thesis, we assume that all Hilbert spaces are separable.
Among all kinds of subsets of a Hilbert space, orthonormal sets are the easiest to handle; their definition reads as follows.
\begin{definition}(cf. \cite{LIMAYE})
A collection $\{\tau_n\}_{n}$ in a Hilbert space $\mathcal{H}$ is called an \textbf{orthonormal set} in $\mathcal{H}$ if $\langle \tau_j, \tau_k\rangle =\delta_{j,k}, \forall j,k \in \mathbb{N}$.
\end{definition}
The following theorem, known as Gram-Schmidt orthonormalization (cf. \cite{LEON100}), shows that a linearly independent sequence of vectors can be converted into an orthonormal set such that at each stage of the conversion the spaces spanned by the original set and the transformed set are the same.
\begin{theorem}(cf. \cite{LIMAYE}) (\textbf{Gram-Schmidt orthonormalization})
Let $\{\tau_n\}_{n}$ be a linearly independent subset of $\mathcal{H}$. Define $\omega_1\coloneqq \tau_1$, $\rho_1\coloneqq \omega_1/{\|\omega_1\|}$ and
\begin{align*}
\omega_n\coloneqq \tau_n-\sum_{k=1}^{n-1}\langle \tau_n, \rho_k\rangle \rho_k, \quad \rho_n\coloneqq \frac{\omega_n}{\|\omega_n\|},\quad \forall n \geq 2.
\end{align*}
Then $\{\rho_n\}_{n}$ is orthonormal and
\begin{align*}
\operatorname{span}\{\rho_k\}_{k=1}^n=\operatorname{span}\{\tau_k\}_{k=1}^n,\quad \forall n \geq 1.
\end{align*}
\end{theorem}
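As a quick worked illustration of the procedure (an example we add for concreteness; it is not taken from \cite{LIMAYE}), apply it to $\tau_1=(1,1,0)$ and $\tau_2=(1,0,1)$ in $\mathbb{K}^3$:

```latex
\begin{align*}
\omega_1 &= \tau_1 = (1,1,0), \qquad \rho_1 = \frac{\omega_1}{\|\omega_1\|} = \tfrac{1}{\sqrt{2}}(1,1,0),\\
\omega_2 &= \tau_2 - \langle \tau_2, \rho_1\rangle \rho_1 = (1,0,1) - \tfrac{1}{2}(1,1,0) = \left(\tfrac{1}{2}, -\tfrac{1}{2}, 1\right),\\
\rho_2 &= \frac{\omega_2}{\|\omega_2\|} = \tfrac{1}{\sqrt{6}}(1,-1,2),
\end{align*}
```

so that $\langle \rho_1, \rho_2\rangle = 0$, $\|\rho_1\|=\|\rho_2\|=1$ and $\operatorname{span}\{\rho_1,\rho_2\}=\operatorname{span}\{\tau_1,\tau_2\}$.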
One of the most important inequalities associated with an orthonormal sequence is Bessel's inequality, a generalization of the Cauchy-Schwarz inequality.
\begin{theorem}(cf. \cite{LIMAYE}) (\textbf{Bessel's inequality})
If $\{\tau_n\}_{n}$ is an orthonormal set in $\mathcal{H}$, then the series $\sum_{n=1}^\infty|\langle h, \tau_n\rangle |^2$ converges for all $h \in \mathcal{H}$ and
\begin{align*}
\sum_{n=1}^\infty|\langle h, \tau_n\rangle |^2\leq \|h\|^2, \quad \forall h \in \mathcal{H}.
\end{align*}
\end{theorem}
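Bessel's inequality is easy to test numerically. The following sketch (our own, with an illustrative test function) takes the orthonormal exponentials $e_n(x)=e^{2\pi i n x}$ in $\mathcal{L}^2[0,1]$ and $h(x)=x$, for which $\|h\|^2=\tfrac{1}{3}$, and checks that every partial sum of $|\langle h, e_n\rangle|^2$ stays below $\|h\|^2$ (and, here, nearly attains it, as Parseval's identity predicts for a complete system):

```python
import numpy as np

M = 200000
x = (np.arange(M) + 0.5) / M          # midpoint rule on [0, 1]
h = x                                  # test function, ||h||^2 = 1/3

norm_sq = np.sum(np.abs(h) ** 2) / M
coeffs = [np.sum(h * np.exp(-2j * np.pi * n * x)) / M for n in range(-50, 51)]
bessel_sum = np.cumsum(np.abs(np.array(coeffs)) ** 2)

assert np.all(bessel_sum <= norm_sq + 1e-9)   # Bessel's inequality
assert norm_sq - bessel_sum[-1] < 2e-3        # near equality with 101 terms
```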
The next theorem characterizes convergence of a series in a Hilbert space in terms of convergence of a series of scalars.
\begin{theorem}(cf. \cite{LIMAYE}) (\textbf{Riesz-Fischer theorem})
Let $\{a_n\}_{n}$ be a sequence of scalars and $ \{\tau_n\}_{n}$ be an orthonormal set in $\mathcal{H}$. Then
\begin{align*}
\sum_{n=1}^\infty a_n\tau_n \text{ converges in } \mathcal{H} \text{ if and only if } \sum_{n=1}^\infty|a_n|^2 \text{ converges in } \mathbb{R}.
\end{align*}
\end{theorem}
The next theorem shows that, given an orthonormal set and an element of a Hilbert space, the inner product of the element with the members of the orthonormal set is non-zero for at most countably many members.
\begin{theorem}(cf. \cite{LIMAYE})
Let $ \{\tau_n\}_{n}$ be an orthonormal set in $\mathcal{H}$ and $h \in \mathcal{H}$. Then the set $E_h\coloneqq \{\tau_n: \langle h, \tau_n\rangle \neq 0, n \in \mathbb{N}\}$ is either finite or countable.
\end{theorem}
A natural analogue, for infinite dimensional Hilbert spaces, of a basis for a finite dimensional vector space is the notion of a Schauder basis or an orthonormal basis.
\begin{definition}(cf. \cite{CHRISTENSEN})\label{ONBDEFINITIONOLE}
A collection $ \{\tau_n\}_{n}$ in $\mathcal{H}$ is called
\begin{enumerate}[label=(\roman*)]
\item a \textbf{Schauder basis} for $\mathcal{H}$ if for each $h \in \mathcal{H}$, there exists a unique collection $\{a_n(h)\}_{n }$ of scalars such that $\sum_{n=1}^\infty a_n(h)\tau_n$ converges in $\mathcal{H}$ and $h=\sum_{n=1}^\infty a_n(h)\tau_n$.
\item an \textbf{orthonormal basis} for $\mathcal{H}$ if it is a Schauder basis for $\mathcal{H}$ and it is orthonormal.
\end{enumerate}
\end{definition}
We now give various examples of orthonormal bases for Hilbert spaces.
\begin{example}(cf. \cite{CHRISTENSEN})
\begin{enumerate}[label=(\roman*)]
\item Define $e_n\coloneqq\{\delta_{n,k}\}_{k}$, where $\delta_{\cdot,\cdot}$ is the Kronecker delta. Then $\{e_n\}_{n }$ is an orthonormal basis for $\ell^2(\mathbb{N})$. This is known as the standard orthonormal basis for $\ell^2(\mathbb{N})$.
\item Define
\begin{align*}
\mathcal{L}^2[0,1]\coloneqq \left\{f:[0,1]\to \mathbb{C} \text{ is measurable and } \int_{0}^{1}|f(x)|^2\,dx<\infty\right\}
\end{align*}
equipped with the inner product
\begin{align*}
\langle f, g \rangle \coloneqq \int_{0}^{1}f(x)\overline{g(x)}\,dx.
\end{align*}
Let $n \in \mathbb{Z}$. Define $e_n: [0,1] \ni x \mapsto e^{2\pi inx}\in \mathbb{C}$. Then $\{e_n\}_{n=-\infty}^\infty$ is an orthonormal basis for $\mathcal{L}^2[0,1]$.
\item (\textbf{Gabor basis}) Define
\begin{align*}
\mathcal{L}^2(\mathbb{R})\coloneqq \left\{f:\mathbb{R}\to \mathbb{C} \text{ is measurable and } \int_{-\infty}^{\infty}|f(x)|^2\,dx<\infty\right\}
\end{align*}
equipped with the inner product
\begin{align*}
\langle f, g \rangle \coloneqq \int_{-\infty}^{\infty}f(x)\overline{g(x)}\,dx.
\end{align*}
Let $ \chi_{[0,1]}$ be the characteristic function on $[0,1]$. For $j, k \in \mathbb{Z}$, define $f_{j,k}(x)\coloneqq e^{2\pi i jx} \chi_{[0,1]}(x-k), \forall x \in \mathbb{R}$. Then $\{f_{j,k}\}_{j, k \in \mathbb{Z}}$ is an orthonormal basis for $\mathcal{L}^2(\mathbb{R})$.
\item (\textbf{Haar system}) Let $\psi$ be the Haar function defined on $\mathbb{R}$ by
\begin{align*}
\psi(x) \coloneqq\left\{
\begin{array}{ll}
1 & \operatorname{ if } ~0 \leq x<\frac{1}{2} \\
-1 & \operatorname{ if } ~\frac{1}{2}\leq x \leq 1 \\
0 & \operatorname{ otherwise} .
\end{array}
\right.
\end{align*}
For $j, k \in \mathbb{Z}$, let $\psi_{j,k}(x)\coloneqq 2^{j/2}\psi(2^jx-k), \forall x \in \mathbb{R}$. Then $\{\psi_{j,k}\}_{j, k \in \mathbb{Z}}$ is an orthonormal basis for $\mathcal{L}^2(\mathbb{R})$.
\item (cf. \cite{CHRISTENSEN}) For $a, b >0$, define
\begin{align*}
T_a:\mathcal{L}^2(\mathbb{R}) \ni f \mapsto T_af \in \mathcal{L}^2(\mathbb{R}), \quad T_af:\mathbb{R} \ni x \mapsto (T_af)(x)\coloneqq f(x-a) \in \mathbb{C}
\end{align*}
and
\begin{align*}
E_b:\mathcal{L}^2(\mathbb{R}) \ni f \mapsto E_bf \in \mathcal{L}^2(\mathbb{R}), \quad E_bf:\mathbb{R} \ni x \mapsto (E_bf)(x)\coloneqq e^{2 \pi i bx} f(x)\in \mathbb{C}.
\end{align*}
Let $g:\mathbb{R}\to \mathbb{C}$ be a continuous function with compact support. Then, for any $a,b>0$, $ \{E_{mb}T_{na}g\}_{n,m \in \mathbb{Z}}$ is not an orthonormal basis for $\mathcal{L}^2(\mathbb{R})$.
\end{enumerate}
\end{example}
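As a quick numerical sketch (in Python; illustrative only and not part of the cited sources), the orthonormality of the exponentials $e_n(x)=e^{2\pi inx}$ in example (ii) can be checked with a uniform Riemann sum, which is exact for these integrands whenever $n-m$ is not a multiple of the number $N$ of sample points, since $\frac{1}{N}\sum_{j=0}^{N-1}e^{2\pi ikj/N}=0$ for $k\not\equiv 0 \pmod N$.

```python
import cmath

# Approximate <e_n, e_m> = int_0^1 e^{2 pi i (n-m) x} dx by a uniform
# Riemann sum with N sample points; for these exponentials the sum is
# exact (up to rounding) when |n - m| is not a multiple of N.
def inner(n, m, N=64):
    return sum(cmath.exp(2j * cmath.pi * (n - m) * j / N)
               for j in range(N)) / N

print(abs(inner(3, 3)))   # |<e_3, e_3>| = 1
print(abs(inner(3, 5)))   # |<e_3, e_5>| = 0 up to rounding
```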
The following theorem shows that, given an orthonormal set, we can decide whether it is an orthonormal basis by checking one of several equivalent conditions, rather than appealing to Definition \ref{ONBDEFINITIONOLE}, which is difficult to verify directly in many cases.
\begin{theorem}(cf. \cite{CHRISTENSEN})\label{CHARORTHONORMALINTRO}
Let $ \{\tau_n\}_{n}$ be an orthonormal set in $\mathcal{H}$. The following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $ \{\tau_n\}_{n}$ is an orthonormal basis for $\mathcal{H}$.
\item (\textbf{Fourier expansion}) $h= \sum_{n=1}^\infty\langle h, \tau_n\rangle \tau_n, \forall h \in \mathcal{H}$.
\item (\textbf{Parseval identity for the inner product}) $\langle h, g\rangle= \sum_{n=1}^\infty\langle h, \tau_n\rangle \langle \tau_n, g\rangle , \forall h, g \in \mathcal{H}$.
\item (\textbf{Parseval identity for the norm}) $\|h\|^2=\sum_{n=1}^\infty|\langle h, \tau_n\rangle |^2, \forall h \in \mathcal{H}.$
\item $\overline{\operatorname{span}} \{\tau_n\}_{n}=\mathcal{H}$.
\item If $h \in \mathcal{H}$ is such that $\langle h, \tau_n\rangle = 0, \forall n \in \mathbb{N}$, then $h=0$.
\end{enumerate}
\end{theorem}
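The Parseval identity (iv) above can be sketched numerically in a two-dimensional stand-in for $\mathcal{H}$ (Python; the basis below is chosen for illustration only): for the orthonormal basis $\tau_1=\frac{1}{\sqrt{2}}(1,1)$, $\tau_2=\frac{1}{\sqrt{2}}(1,-1)$ of $\mathbb{C}^2$, the coefficient sum reproduces $\|h\|^2$ exactly.

```python
import math

def ip(u, v):
    # <u, v> with conjugation in the second slot
    return sum(a * b.conjugate() for a, b in zip(u, v))

s = 1 / math.sqrt(2)
tau = [(s, s), (s, -s)]          # orthonormal basis of C^2
h = (2 + 1j, -3 + 0j)

lhs = ip(h, h).real                          # ||h||^2 = 14
rhs = sum(abs(ip(h, t)) ** 2 for t in tau)   # Parseval sum
print(lhs, rhs)
```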
\begin{theorem}(cf. \cite{CHRISTENSEN})
A Hilbert space is separable if and only if it has a countable orthonormal basis.
\end{theorem}
Hilbert spaces have the remarkable property that every bounded linear functional from the space to the scalar field is given by the inner product with a unique element in the space.
\begin{theorem}(cf. \cite{LIMAYE}) (\textbf{Riesz representation theorem})
Let $f: \mathcal{H}\to \mathbb{K}$ be a bounded linear functional. Then there exists a unique $\tau_f\in \mathcal{H}$ such that
\begin{align*}
f(h)=\langle h, \tau_f \rangle , \quad \forall h \in \mathcal{H}\quad \text{ and } \quad \|f\|=\|\tau_f\|.
\end{align*}
\end{theorem}
The Riesz representation theorem opens the door to the following definition.
\begin{definition}(cf. \cite{LIMAYE})
Let $T:\mathcal{H} \to \mathcal{H}_0$ be a bounded linear operator. The unique bounded linear operator $T^*:\mathcal{H}_0 \to \mathcal{H}$ such that
\begin{align*}
\langle T h, h_0 \rangle =\langle h, T^*h_0\rangle , \quad \forall h \in \mathcal{H}, ~\forall h_0 \in \mathcal{H}_0
\end{align*}
is called the \textbf{adjoint} of $T$.
\end{definition}
Hilbert spaces are studied along with various kinds of operators. These are defined as follows.
\begin{definition}(cf. \cite{LIMAYE})
Let $T:\mathcal{H} \to \mathcal{H}_0$ be a bounded linear operator. The operator $T$ is said to be
\begin{enumerate}[label=(\roman*)]
\item \textbf{invertible} if there exists a bounded linear operator $S:\mathcal{H}_0 \to \mathcal{H}$ such that $ST=I_\mathcal{H}$ and $TS=I_{\mathcal{H}_0}$.
\item an \textbf{isometry} if $\|Th\|=\|h\|, $ $ \forall h \in \mathcal{H}$.
\item \textbf{unitary} if $TT^*=I_{\mathcal{H}_0}$, $T^*T=I_\mathcal{H}$.
\end{enumerate}
\end{definition}
\begin{definition}(cf. \cite{LIMAYE})
Let $T:\mathcal{H} \to \mathcal{H}$ be a bounded linear operator.
\begin{enumerate}[label=(\roman*)]
\item Operator $T$ is said to be a \textbf{normal} operator if $TT^*=T^*T$.
\item Operator $T$ is said to be a \textbf{projection} if $T^2=T=T^*$.
\item Operator $T$ is said to be a \textbf{self-adjoint} operator if $T=T^*$.
\item Operator $T$ is said to be a \textbf{positive} operator if $T=T^*$ and $\langle Th, h \rangle \geq 0$, $ \forall h \in \mathcal{H}$.
\end{enumerate}
\end{definition}
\begin{theorem}(cf. \cite{LIMAYE})
Let $\mathcal{H}$ be a separable Hilbert space.
\begin{enumerate}[label=(\roman*)]
\item If $\mathcal{H}$ is finite dimensional, then $\mathcal{H}$ is isometrically isomorphic to $\mathbb{K}^n$, for some $n$.
\item If $\mathcal{H}$ is infinite dimensional, then $\mathcal{H}$ is isometrically isomorphic to $\ell^2(\mathbb{N})$.
\end{enumerate}
\end{theorem}
Orthonormal bases have the nice property that, given a single orthonormal basis, we can generate all of them by applying unitary operators.
\begin{theorem}(cf. \cite{CHRISTENSEN})
Let $ \{\tau_n\}_{n}$ be an orthonormal basis for $\mathcal{H}$. Then the orthonormal bases for $\mathcal{H}$ are precisely the families $\{U\tau_n\}_{n}$, where $U\in \mathcal{B}(\mathcal{H})$ is unitary.
\end{theorem}
Hilbert spaces have another nice property that closed subspaces decompose the original space.
\begin{theorem}(cf. \cite{LIMAYE}) (\textbf{Orthogonal complement theorem})
If $\mathcal{W}$ is a closed subspace of $\mathcal{H}$, then $\mathcal{H}=\mathcal{W}\oplus \mathcal{W}^\perp$, where $\mathcal{W}^\perp$ is the orthogonal complement of $\mathcal{W}$ in $\mathcal{H}$.
\end{theorem}
The first level of generalization of an orthonormal basis is that of a Riesz basis. These are defined as follows.
\begin{definition}(cf. \cite{CHRISTENSEN})\label{RIESZBASISDEFINITION}
A collection $\{\omega_n\}_{n}$ in $\mathcal{H}$ is called a \textbf{Riesz basis} for $\mathcal{H}$ if there exist an orthonormal basis $ \{\tau_n\}_{n}$ for $\mathcal{H}$ and an invertible $T\in \mathcal{B}(\mathcal{H})$ such that $\omega_n=T\tau_n, \forall n$.
\end{definition}
As noted in \cite{SIMONBOOK}, the origin of the term ``Riesz basis'' is unknown.
By taking $T$ as the identity operator, we easily see that every orthonormal basis is a Riesz basis.
\begin{example}
\begin{enumerate}[label=(\roman*)]
\item Let $\{\lambda_n\}_{n=1}^\infty$ be a sequence of scalars such that there exist $a,b>0$ with $a\leq |\lambda_n|\leq b, \forall n \in \mathbb{N}$. Then $\{\lambda_ne_n\}_{n=1}^\infty$ is a Riesz basis for $\ell^2(\mathbb{N})$, since it is the image of the standard orthonormal basis $\{e_n\}_{n=1}^\infty$ under the invertible operator $T:\ell^2(\mathbb{N}) \ni \{x_n\}_{n=1}^\infty \mapsto \{\lambda_nx_n\}_{n=1}^\infty \in \ell^2(\mathbb{N})$. We note further that if $|\lambda_n|\neq1$ for at least one $n$, then $\{\lambda_ne_n\}_{n=1}^\infty$ is not an orthonormal basis for $\ell^2(\mathbb{N})$.
\item (\cite{KADEC}) (\textbf{Kadec 1/4 theorem}) Let $ \{\lambda_n\}_{n\in \mathbb{Z}}$ be a sequence of reals such that
\begin{align*}
\sup_{n \in \mathbb{Z}}\left|\lambda_n-n\right|< \frac{1}{4}.
\end{align*}
Define $f_n:\left(-\pi, \pi \right) \ni x \mapsto e^{i\lambda_nx} \in \mathbb{C}$, $\forall n\in \mathbb{Z}$. Then
$ \{f_n\}_{n\in \mathbb{Z}}$ is a Riesz basis for $\mathcal{L}^2(-\pi, \pi)$. It was shown that 1/4 is the optimal constant (\cite{LEVINSON}, cf. \cite{CHRISTENSENBULLETIN}).
\item (\cite{CASSAZAEVERYSUM}) (\textbf{Kalton-Casazza theorem})
A linear combination of two orthonormal bases is a Riesz basis.
\item (cf. \cite{CHRISTENSEN}) For $a, b >0$, define
\begin{align*}
T_a:\mathcal{L}^2(\mathbb{R}) \ni f \mapsto T_af \in \mathcal{L}^2(\mathbb{R}), \quad T_af:\mathbb{R} \mapsto (T_af) (x)\coloneqq f(x-a) \in \mathbb{C}
\end{align*}
and
\begin{align*}
E_b:\mathcal{L}^2(\mathbb{R}) \ni f \mapsto E_bf \in \mathcal{L}^2(\mathbb{R}), \quad E_bf:\mathbb{R} \mapsto (E_bf) (x)\coloneqq e^{2 \pi i bx} f(x)\in \mathbb{C}.
\end{align*}
Let $g:\mathbb{R}\to \mathbb{C}$ be a continuous function with compact support. Then, for any $a,b>0$, $ \{E_{mb}T_{na}g\}_{n,m \in \mathbb{Z}}$ is not a Riesz basis for $\mathcal{L}^2(\mathbb{R})$.
\end{enumerate}
\end{example}
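For the diagonal Riesz basis $\{\lambda_ne_n\}_{n}$ in example (i), the Riesz inequality can be seen directly: orthonormality of the $e_n$ gives $\|\sum c_n\lambda_ne_n\|^2=\sum|\lambda_n|^2|c_n|^2$, which is trapped between $a^2\sum|c_n|^2$ and $b^2\sum|c_n|^2$. A small Python sketch over a finite truncation (illustrative values only):

```python
# Finite truncation of the Riesz basis {lambda_n e_n}: by orthonormality,
#   || sum c_n lambda_n e_n ||^2 = sum |lambda_n|^2 |c_n|^2,
# which lies between a^2 sum|c_n|^2 and b^2 sum|c_n|^2.
lam = [1.0, 0.5, 2.0, 1.5]        # here a = 0.5, b = 2.0
c = [3.0, -1.0, 0.25, 2.0]        # arbitrary coefficients

norm_sq = sum((l * x) ** 2 for l, x in zip(lam, c))
coef_sq = sum(x ** 2 for x in c)
a, b = min(abs(l) for l in lam), max(abs(l) for l in lam)
print(a ** 2 * coef_sq <= norm_sq <= b ** 2 * coef_sq)
```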
\begin{remark}
Let $ \{\tau_n\}_{n}$ be an orthonormal basis for $\mathcal{H}$. Since an invertible map preserves cardinality, it follows that for each $n\in \mathbb{N}$, the finite set $\{\tau_j\}_{j=1}^n$ cannot be a Riesz basis for $\mathcal{H}$.
\end{remark}
Like an orthonormal basis, a Riesz basis also gives a series representation of every element of a Hilbert space.
\begin{theorem}(cf. \cite{CHRISTENSEN}) \label{RIESZISAFRAME}
Let $ \{\tau_n\}_{n}$ be a Riesz basis for $\mathcal{H}$.
\begin{enumerate}[label=(\roman*)]
\item There exists a unique collection $\{\omega_n\}_{n}$ in $\mathcal{H}$ such that
\begin{align}\label{RBEXPANSION}
h=\sum_{n=1}^\infty\langle h, \omega_n\rangle \tau_n, \quad \forall h \in \mathcal{H}.
\end{align}
Moreover, $\{\omega_n\}_{n}$ is a Riesz basis for $\mathcal{H}$ and the series in Eq. (\ref{RBEXPANSION}) converges unconditionally for all $h \in \mathcal{H}$.
\item There exist $a,b>0$ such that $a \|h\|^2\leq \sum_{n=1}^\infty|\langle h, \tau_n\rangle |^2 \leq b\|h\|^2, \forall h \in \mathcal{H}.$
\end{enumerate}
\end{theorem}
The next result gives a characterization of Riesz bases that makes no reference to an orthonormal basis; it also provides a tool to check whether a given collection is a Riesz basis for a Hilbert space. To state the result, we need two definitions.
\begin{definition}(cf. \cite{CHRISTENSEN})
A sequence $ \{\tau_n\}_{n}$ in a Hilbert space $\mathcal{H}$ is said to be \textbf{complete} if $\overline{\operatorname{span}}\{\tau_n\}_{n}=\mathcal{H}$.
\end{definition}
\begin{definition}(cf. \cite{CHRISTENSEN})
A sequence $ \{\omega_n\}_{n}$ in a Hilbert space $\mathcal{H}$ is said to be \textbf{biorthogonal} to a sequence $ \{\tau_n\}_{n}$ in $\mathcal{H}$ if $\langle \omega_n, \tau_m\rangle =\delta_{n,m}$ for all $n,m$.
\end{definition}
\begin{theorem}(cf. \cite{CHRISTENSEN, HEILBOOK, STOEVACHARAFOURIER}) \label{RIESZBASISTHM}
For a sequence $ \{\tau_n\}_{n}$ in $\mathcal{H}$, the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $ \{\tau_n\}_{n}$ is a Riesz basis for $\mathcal{H}$.
\item $\overline{\operatorname{span}} \{\tau_n\}_{n}=\mathcal{H}$ and there exist $a,b>0$ such that for every finite subset $\mathbb{S}$ of $\mathbb{N}$,
\begin{align}\label{USEFULRESULT}
a\sum_{n \in \mathbb{S}}|c_n|^2\leq \left\|\sum_{n \in \mathbb{S}}c_n\tau_n\right\|^2 \leq b \sum_{n \in \mathbb{S}}|c_n|^2, \quad\forall c_n \in \mathbb{K}.
\end{align}
\item $ \{\tau_n\}_{n}$ is complete in $\mathcal{H}$ and the operator given by the \textbf{infinite Gram matrix} $[\langle \tau_m, \tau_n\rangle ]_{1\leq n, m <\infty}$ defined by
\begin{align*}
\ell^2(\mathbb{N})\ni \{c_m\}_{m} \mapsto \left\{\sum_{m=1}^\infty \langle \tau_n, \tau_m \rangle c_m\right\}_n\in \ell^2(\mathbb{N})
\end{align*}
is a bounded invertible operator on $\ell^2(\mathbb{N})$.
\item $\{\tau_n\}_{n}$ is a bounded unconditional Schauder basis for $\mathcal{H}$.
\item $\{\tau_n\}_{n}$ is a Schauder basis for $\mathcal{H}$ such that $\sum_{n=1}^{\infty}c_n\tau_n$ converges in $\mathcal{H}$ if and only if $\sum_{n=1}^{\infty}|c_n|^2<\infty$.
\end{enumerate}
\end{theorem}
\begin{remark}\label{COUNTERRIESZ}
Using Inequality (\ref{USEFULRESULT}), we can show that certain collections of vectors are not Riesz bases. As an illustration, let $\{e_n\}_{n=1}^\infty$ be the standard orthonormal basis for $\ell^2(\mathbb{N})$. We claim that the sequence $\{e_1\}\cup\{e_n\}_{n=1}^\infty$ (with $e_1$ repeated) is not a Riesz basis for $\ell^2(\mathbb{N})$. Suppose the claim fails; then we get an $a>0$ such that the first inequality in Inequality (\ref{USEFULRESULT}) holds. By taking the coefficients $1$ and $-1$ for the two copies of $e_1$, we see that
\begin{align*}
a(|1|^2+|-1|^2)\leq \|1\cdot e_1+(-1)\cdot e_1\|^2=0 ~\Rightarrow a=0,
\end{align*}
which is a contradiction. Hence $\{e_1\}\cup\{e_n\}_{n=1}^\infty$ cannot be a Riesz basis for $\ell^2(\mathbb{N})$.
\end{remark}
Theorem \ref{RIESZBASISTHM} leads to the following definition.
\begin{definition}(cf. \cite{CHRISTENSEN})
A sequence $ \{\omega_n\}_{n}$ in a Hilbert space $\mathcal{H}$ is said to be a \textbf{Riesz sequence} in $\mathcal{H}$ if there exist $a,b>0$ such that for every finite subset $\mathbb{S}$ of $\mathbb{N}$,
\begin{align*}
a\sum_{n \in \mathbb{S}}|c_n|^2\leq \left\|\sum_{n \in \mathbb{S}}c_n\omega_n\right\|^2 \leq b \sum_{n \in \mathbb{S}}|c_n|^2, \quad\forall c_n \in \mathbb{K}.
\end{align*}
\end{definition}
Theorem \ref{RIESZBASISTHM} shows that every Riesz basis is a Riesz sequence; it is easy to see that a Riesz sequence need not be a Riesz basis. The following theorem gives another characterization of Riesz bases that is also free from orthonormal bases and likewise helps to check whether a collection is a Riesz basis.
\begin{theorem}(cf. \cite{GOHBERGKREIN})\label{LORCH} (\textbf{K\"{o}the-Lorch theorem})
A sequence $\{\tau_n\}_{n}$ is a Riesz basis for $\mathcal{H}$ if and only if the following three conditions hold.
\begin{enumerate}[label=(\roman*)]
\item $\{\tau_n\}_{n}$ is a Schauder basis for $\mathcal{H}$.
\item Every convergent series $\sum_{n=1}^{\infty}c_n\tau_n$ (with $c_n \in \mathbb{K}$) converges unconditionally.
\item $0<\operatorname{inf}_{n \in \mathbb{N}}\|\tau_n\|\leq \operatorname{sup}\limits_{n \in \mathbb{N}}\|\tau_n\|<\infty. $
\end{enumerate}
\end{theorem}
\begin{remark}
Since the collection $\{e_1\}\cup\{e_n\}_{n=1}^\infty$ in Remark \ref{COUNTERRIESZ} is not a Schauder basis for $\ell^2(\mathbb{N})$, using Theorem \ref{LORCH}, we again conclude that it is not a Riesz basis. However, the collection $\{e_1\}\cup\{e_n\}_{n=1}^\infty$ satisfies conditions (ii) and (iii) in Theorem \ref{LORCH}.
\end{remark}
Perturbing an orthonormal basis need not produce an orthonormal basis. However, a classical theorem of Paley and Wiener says that a sufficiently small perturbation of an orthonormal basis is a Riesz basis.
\begin{theorem}(\cite{PALEYWIENER}, cf. \citealp{YOUNG}) \label{PALEYWIENERTHEOREM} (\textbf{Paley-Wiener theorem})
Let $ \{\tau_n\}_{n}$ be an orthonormal basis for $\mathcal{H} $. If $ \{\omega_n\}_{n}$ in $\mathcal{H} $ is such that there exists $ 0< \alpha <1 $ with, for every $m=1,2,\dots, $
\begin{align*}
\left\|\sum\limits_{n=1}^mc_n(\tau_n-\omega_n) \right\|\leq \alpha \left(\sum\limits_{n=1}^m|c_n|^2\right)^\frac{1}{2},\quad \forall c_n \in \mathbb{K},
\end{align*}
then $ \{\omega_n\}_{n}$ is a Riesz basis for $\mathcal{H} $.
\end{theorem}
The next level of generalization of a Riesz basis for Hilbert spaces is the notion of a frame.
\begin{definition}(\cite{DUFFIN1})\label{OLE}
A collection $ \{\tau_n\}_{n}$ in a Hilbert space $ \mathcal{H}$ is said to be a \textbf{frame} for $\mathcal{H}$ if there exist $ a, b >0$ such that
\begin{equation}\label{SEQUENTIALEQUATION1}
\text{ (\textbf{Frame inequalities}) } \quad a\|h\|^2\leq\sum_{n=1}^\infty|\langle h, \tau_n \rangle|^2 \leq b\|h\|^2 ,\quad \forall h \in \mathcal{H}.
\end{equation}
Constants $ a$ and $ b$ are called a \textbf{lower frame bound} and an \textbf{upper frame bound}, respectively. The supremum (resp. infimum) of the set of all lower (resp. upper) frame bounds is called the \textbf{optimal lower frame bound} (resp. \textbf{optimal upper frame bound}). If the optimal frame bounds are equal, then the frame is called a \textbf{tight frame}. A tight frame whose optimal frame bound is one is termed a \textbf{Parseval frame}.
\end{definition}
As recorded by \cite{CHEBIRA} and \cite{HEIL}, the reason for using the term ``frame'' is unknown.
Note that in Definition \ref{OLE} we indexed the frame by the natural numbers. Since the convergence of the series in Definition \ref{OLE} is unconditional, any rearrangement of a frame is again a frame. We also note that Definition \ref{OLE} can be formulated for an arbitrary indexing set $\mathbb{J}$; in that case, by the convergence of the series we mean the convergence of the net indexed, by set inclusion, over the collection of all finite subsets of $\mathbb{J}$.
From (ii) in Theorem \ref{RIESZISAFRAME} we conclude that every Riesz basis is a frame. On the other hand, every frame can be written as a finite union of Riesz sequences; this statement, formerly known as the \textbf{Feichtinger conjecture} (\cite{CASAZZACHRISTENSENLINDNERVERSHYNIN}), is now the \textbf{Marcus-Spielman-Srivastava theorem} (cf. \cite{CASAZZATREMAIN2016, CASAZZAEDIDIN}). We now give various examples of frames.
\begin{example}\label{FRAMEEXAMPLES}
\begin{enumerate}[label=(\roman*)]
\item Let $\{e_n\}_{n=1}^\infty$ be an orthonormal basis for $\mathcal{H}$ and let $m \in \mathbb{N}$. Then the collection $\{e_1, \dots, e_m\}\cup\{e_n\}_{n=1}^\infty$ is a frame for $\mathcal{H}$ with bounds 1 and 2. In fact, for all $h \in
\mathcal{H}$,
\begin{align*}
1 \cdot\|h\|^2=\sum_{n=1}^\infty|\langle h, e_n \rangle|^2\leq\sum_{k=1}^m|\langle h, e_k \rangle|^2 + \sum_{n=1}^\infty|\langle h, e_n \rangle|^2 \leq 2\sum_{n=1}^\infty|\langle h, e_n \rangle|^2=2\|h\|^2.
\end{align*}
In particular, the collection in Remark \ref{COUNTERRIESZ} is a frame for $\ell^2(\mathbb{N})$.
\item (cf. \cite{CHRISTENSEN}) (\textbf{Harmonic frame}) Let $m, n \in \mathbb{N}$, $n\leq m$ and $\omega_1,\dots, \omega_m$ be the distinct $m^\text{th}$ roots of unity. Define
\begin{align*}
\eta_k\coloneqq\frac{1}{\sqrt{m}}(\omega_1^k, \dots, \omega_n^k), \quad 1\leq k \leq m.
\end{align*}
Then $\{\eta_k\}_{k=1}^m$ is a Parseval frame for $\mathbb{C}^n$.
\item (cf. \cite{CHEBIRA}, \cite{SHOR}) (\textbf{Mercedes-Benz frame} or \textbf{Peres-Wootters states}) $\{(0,1), (-\sqrt{3}/2, -1/2),(\sqrt{3}/2, -1/2) \}$ is a tight frame for $\mathbb{R}^2$ with bound $3/2$.
\item (cf. \cite{HANKORNELSONLARSON}) For $n\geq3$, the collection $\{(\cos(2 \pi j/n), \sin (2 \pi j/n))\}_{j=0}^{n-1}$ is a tight frame for $\mathbb{R}^2$. It has to be noted that we cannot take $n=2$, because $\{(1,0), (-1,0)\}$ is not a frame for $\mathbb{R}^2$.
\item (cf. \cite{CHRISTENSEN}) (\textbf{Gabor frame} or \textbf{Weyl-Heisenberg frame}) Let $g$ be the Gaussian defined by $g:\mathbb{R}\ni x \mapsto g(x)\coloneqq e^{-x^2} \in \mathbb{R}$ and let $a,b>0$. Define $f_{n,m}:\mathbb{R}\ni x \mapsto e^{2\pi imbx}g(x-na)\in \mathbb{R},$ $\forall n, m\in \mathbb{Z}$. Then
$ \{f_{n,m}\}_{n,m \in \mathbb{Z}}$ is a (Gabor) frame for $\mathcal{L}^2(\mathbb{R})$ if and only if $ab<1$. Moreover, if $ \{f_{n,m}\}_{n,m \in \mathbb{Z}}$ is a frame for $\mathcal{L}^2(\mathbb{R})$, then $ab=1$ if and only if $ \{f_{n,m}\}_{n,m \in \mathbb{Z}}$ is a Riesz basis for $\mathcal{L}^2(\mathbb{R})$.
\item (\cite{DUFFIN1}) Let $ \{\lambda_n\}_{n\in \mathbb{Z}}$ be a sequence of scalars such that there are constants $d, L,\delta>0$ satisfying
\begin{align*}
\left|\lambda_n-\frac{n}{d}\right|\leq L,\quad \forall n\in \mathbb{Z}\quad \text{ and } \quad |\lambda_n-\lambda_m|\geq \delta,\quad \forall n, m\in \mathbb{Z}, n\neq m.
\end{align*}
Let $0<r<d \pi$ and define $f_n:\left(-r, r \right) \ni x \mapsto e^{i\lambda_nx} \in \mathbb{C}$, $\forall n\in \mathbb{Z}$. Then
$ \{f_n\}_{n\in \mathbb{Z}}$ is a frame for $\mathcal{L}^2(-r, r)$.
\item (cf. \cite{FEICHTINGERSTROHMERBOOK2}) For $a, b >0$, define
\begin{align*}
T_a:\mathcal{L}^2(\mathbb{R}) \ni f \mapsto T_af \in \mathcal{L}^2(\mathbb{R}), \quad T_af:\mathbb{R} \ni x \mapsto (T_af)(x)\coloneqq f(x-a) \in \mathbb{C}
\end{align*}
and
\begin{align*}
E_b:\mathcal{L}^2(\mathbb{R}) \ni f \mapsto E_bf \in \mathcal{L}^2(\mathbb{R}), \quad E_bf:\mathbb{R} \ni x \mapsto (E_bf)(x)\coloneqq e^{2 \pi i bx} f(x)\in \mathbb{C}.
\end{align*}
For $c>0$, let $ \chi_{[0,c]}$ be the characteristic function on $[0,c]$. Then for $a\leq c \leq 1$, $ \{E_mT_{na} \chi_{[0,c]}\}_{n,m \in \mathbb{Z}}$ is a (Gabor) frame for $\mathcal{L}^2(\mathbb{R})$ (this is a particular case of the celebrated $abc$-problem for Gabor systems (\cite{DAISUN})).
\item (\cite{JANSSENSTROHMER2002}) Let $T_a$ and $E_b$ be the operators in (vii). Let $g(x)\coloneqq (\cosh (\pi x))^{-1} =\frac{2}{e^{\pi x}+e^{-\pi x}}$, $\forall x \in \mathbb{R}$. Then, for $ab<1$, $ \{E_{mb}T_{na}g\}_{n,m \in \mathbb{Z}}$ is a (Gabor) frame for $\mathcal{L}^2(\mathbb{R})$.
\item (\cite{JANSSEN2003}) Let $T_a$ and $E_b$ be the operators in (vii). Let $g(x)\coloneqq e^{-|x|}$, $\forall x \in \mathbb{R}$. Then, for $ab<1$, $ \{E_{mb}T_{na}g\}_{n,m \in \mathbb{Z}}$ is a (Gabor) frame for $\mathcal{L}^2(\mathbb{R})$.
\item (\cite{JANSSEN1996}) Let $T_a$ and $E_b$ be the operators in (vii). Let $g(x)\coloneqq e^{-x} \chi_{[0,\infty)}(x)$, $\forall x \in \mathbb{R}$. Then $ \{E_{mb}T_{na}g\}_{n,m \in \mathbb{Z}}$ is a (Gabor) frame for $\mathcal{L}^2(\mathbb{R})$ if and only if $ab\leq 1$.
\item (\cite{CASSAZAEVERYSUM}) (\textbf{Kalton-Casazza theorem})
Every frame is a sum of three orthonormal bases.
\item (cf. \cite{CHRISTENSEN}) (\textbf{Wavelet frame}) Let $0<b<0.0084$. Define the \textbf{Mexican hat} function
\begin{align*}
\psi(x)\coloneqq \frac{2}{\sqrt{3}}\pi^{\frac{-1}{4}}(1-x^2)e^{\frac{-x^2}{2}}, \quad \forall x \in \mathbb{R}.
\end{align*}
For $j,k \in \mathbb{Z}$, define
\begin{align*}
\psi_{j,k}(x)\coloneqq \psi(2^jx-kb), \quad \forall x \in \mathbb{R}.
\end{align*}
Then $\{2^\frac{j}{2}\psi_{j,k}\}_{j,k\in \mathbb{Z}}$ is a (wavelet) frame for $\mathcal{L}^2(\mathbb{R})$.
\item (\cite{STROHMERHEATHCODING, BENEDETTOKOLESAR}) (\textbf{Grassmannian frame}) The $n$ equally spaced lines in $\mathbb{R}^2$, namely $\{(\cos( \pi j/n), \sin ( \pi j/n))\}_{j=0}^{n-1}$, form a (Grassmannian) frame for $\mathbb{R}^2$.
\item (\cite{BENEDETTOFICKUS}) (\textbf{Group frame}) The vertices of each of the five \textbf{Platonic solids} form a tight frame for $\mathbb{R}^3$.
\item (cf. \cite{WALDRONBOOK}) (\textbf{Equiangular frame}) For each $d\in \mathbb{N}$, the $d+1$ vertices of the regular simplex in $\mathbb{R}^d$ form an (equiangular) frame for $\mathbb{R}^d$.
\end{enumerate}
\end{example}
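The tightness of the Mercedes-Benz frame in example (iii) can be verified by hand, and also numerically. The following Python sketch (illustrative only) checks that $\sum_{k=1}^{3}\langle h,\tau_k\rangle^2=\frac{3}{2}\|h\|^2$ for several choices of $h$:

```python
import math

# Mercedes-Benz frame for R^2: tight with frame bound 3/2, i.e.
#   sum_k <h, tau_k>^2 = (3/2) ||h||^2  for every h in R^2.
taus = [(0.0, 1.0),
        (-math.sqrt(3) / 2, -0.5),
        (math.sqrt(3) / 2, -0.5)]

def frame_sum(h):
    return sum((h[0] * t[0] + h[1] * t[1]) ** 2 for t in taus)

for h in [(1.0, 0.0), (0.3, -2.0), (5.0, 4.0)]:
    print(frame_sum(h) / (h[0] ** 2 + h[1] ** 2))   # always 3/2
```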
We now give various examples which are not frames.
\begin{example}
\begin{enumerate}[label=(\roman*)]
\item (cf. \cite{CHRISTENSEN}) If $\{\tau_n\}_{n=1}^\infty$ is an orthonormal basis for $\mathcal{H}$, then $\{\tau_n+\tau_{n+1}\}_{n=1}^\infty$ is not a frame for $\mathcal{H}$.
\item (cf. \cite{BACHMANNARICIBECKENSTEIN}) If $\{\tau_n\}_{n=1}^\infty$ is an orthonormal basis for $\mathcal{H}$, then $\{\frac{\tau_n}{n}\}_{n=1}^\infty$ is not a frame for $\mathcal{H}$.
\item (cf. \cite{CHRISTENSEN}) If $\{\tau_n\}_{n=-\infty}^\infty$ is a Riesz basis for $\mathcal{H}$, then $\{\tau_n+\tau_{n+1}\}_{n=-\infty}^\infty$ is not a frame for $\mathcal{H}$.
\item (cf. \cite{CHRISTENSEN}) For $n\in \mathbb{Z}$, define $T_n:\mathcal{L}^2(\mathbb{R}) \ni f \mapsto T_nf \in \mathcal{L}^2(\mathbb{R})$, $ T_nf:\mathbb{R} \ni x \mapsto f(x-n) \in \mathbb{C}$. Then for any $\phi \in \mathcal{L}^2(\mathbb{R})$, $ \{T_n\phi\}_{n \in \mathbb{Z}}$ is not a frame for $ \mathcal{L}^2(\mathbb{R})$.
\item (\cite{CHRISTENSENHASANNASABRASHIDI, ALDROUBI2017}) Let $\mathcal{H} $ be an infinite dimensional Hilbert space and $T: \mathcal{H}\to \mathcal{H}$ be a bounded linear operator which is unitary or compact. Then for every $\tau \in \mathcal{H}$, $\{T^n\tau\}_{n=0}^\infty$ is not a frame for $\mathcal{H}$ (this is a particular case of dynamical sampling (\cite{ALDROUBICABRELLIMOLTERTANG, ALDROUBICABRELLICAKMAK})).
\item (cf. \cite{FEICHTINGERSTROHMERBOOK2}) Let $T_a$ and $E_b$ be the operators in (vii) of Example \ref{FRAMEEXAMPLES}. For $c>0$, let $ \chi_{[0,c]}$ be the characteristic function on $[0,c]$. Then for $c< a $ or $a>1$, $ \{E_mT_{na} \chi_{[0,c]}\}_{n,m \in \mathbb{Z}}$ is not a frame for $\mathcal{L}^2(\mathbb{R})$.
\end{enumerate}
\end{example}
There is a simple criterion for checking whether a finite set of vectors is a frame for a finite dimensional Hilbert space. It reads as follows.
\begin{theorem}(cf. \cite{HANKORNELSONLARSON})\label{FINITEDIMESIONALCHARAC}
A finite set of vectors for a finite dimensional Hilbert space is a frame if and only if it spans the space.
\end{theorem}
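In $\mathbb{R}^2$ the spanning criterion can be tested algebraically: a finite family spans if and only if its frame operator $S=\sum_k \tau_k\tau_k^{T}$ is invertible, i.e.\ $\det S\neq 0$. A small Python sketch (the particular vectors are chosen only for illustration):

```python
# A finite family in R^2 is a frame iff it spans, iff the 2x2 frame
# operator S = sum_k tau_k tau_k^T is invertible (det S != 0).
def frame_det(taus):
    s00 = sum(t[0] * t[0] for t in taus)
    s01 = sum(t[0] * t[1] for t in taus)
    s11 = sum(t[1] * t[1] for t in taus)
    return s00 * s11 - s01 * s01        # det of the frame operator

print(frame_det([(1.0, 0.0), (1.0, 1.0)]))    # nonzero: a frame for R^2
print(frame_det([(1.0, 0.0), (-1.0, 0.0)]))   # zero: spans a line only
```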
\begin{remark}
\begin{enumerate}[label=(\roman*)]
\item Theorem \ref{FINITEDIMESIONALCHARAC} gives a very useful algebraic criterion for checking whether a finite set of vectors is a frame for a finite dimensional space, rather than verifying the analytic condition (the frame inequalities), which is harder in many cases.
\item Theorem \ref{FINITEDIMESIONALCHARAC} does not say that a frame for a finite dimensional Hilbert space must be finite. A finite dimensional Hilbert space can have a frame with infinitely many elements. For example, $\{\frac{1}{n}\}_{n=1}^\infty$ is a tight frame for $\mathbb{C}$ (as a vector space over itself), because
\begin{align*}
\sum_{n=1}^\infty\left|\left\langle h, \frac{1}{n} \right\rangle\right|^2=\sum_{n=1}^\infty\left|\frac{h}{n}\right|^2=\frac{\pi^2}{6}|h|^2,\quad \forall h \in \mathbb{C}.
\end{align*}
\item Suppose $\operatorname{dim}(\mathcal{H})=n$. From Theorem \ref{FINITEDIMESIONALCHARAC} we see that a spanning set having at least $n+1$ elements is a frame for $\mathcal{H}$ but not a Riesz basis for $\mathcal{H}$.
\item A spanning set need not be a frame. For instance, $\{n\}_{n=1}^\infty$ spans $\mathbb{C}$ but $\sum_{n=1}^\infty|\left\langle 1, n \right\rangle|^2$ $=\sum_{n=1}^\infty n^2=\infty.$ Hence $\{n\}_{n=1}^\infty$ is not a frame for $\mathbb{C}$.
\end{enumerate}
\end{remark}
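The tight frame bound $\pi^2/6$ for $\{\frac{1}{n}\}_{n=1}^\infty$ in (ii) above can be sketched numerically (Python; a finite truncation, whose tail is smaller than $1/N$ by the integral bound):

```python
import math

# For the tight frame {1/n} for C, the frame sum equals
#   sum_n |<h, 1/n>|^2 = |h|^2 * sum_n 1/n^2 = (pi^2 / 6) |h|^2.
# A truncation at N approximates the bound with error < 1/N.
N = 100000
partial = sum(1.0 / n ** 2 for n in range(1, N + 1))
print(partial, math.pi ** 2 / 6)
```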
The following theorem is the most important result in the theory of frames.
\begin{theorem}(\cite{DUFFIN1}, \cite{CHRISTENSEN}, \cite{HANLARSON})\label{MOSTIMPORTANT}
Let $ \{\tau_n\}_{n}$ be a frame for $\mathcal{H}$ with bounds $a$ and $b$. Then
\begin{enumerate}[label=(\roman*)]
\item $\overline{\operatorname{span}} \{\tau_n\}_{n}=\mathcal{H}$.
\item The map $\theta_\tau:\mathcal{H}\ni h \mapsto \theta_\tau h \coloneqq \{\langle h, \tau_n\rangle\}_{n}\in \ell^2(\mathbb{N}) $ is a well-defined bounded linear operator. Further, $\sqrt{a}\|h\|\leq \|\theta_\tau h\|\leq \sqrt{b}\|h\|, \forall h \in \mathcal{H}$. In particular, $\theta_\tau$ is injective and its range is closed.
\item The map $ S_\tau:\mathcal{H} \ni h \mapsto S_\tau h\coloneqq \sum_{n=1}^\infty\langle h, \tau_n\rangle \tau_n \in \mathcal{H}$ is a well-defined bounded linear positive invertible operator. Further,
\begin{align*}
a\|h\|^2\leq \langle S_\tau h, h\rangle\leq b\|h\|^2, \quad \forall h \in \mathcal{H}, \quad \quad a\|h\|\leq \|S_\tau h\|\leq b\|h\|, \quad \forall h \in \mathcal{H}.
\end{align*}
\item (\textbf{General Fourier expansion} or \textbf{frame decomposition})
\begin{align}\label{GFE}
h=\sum_{n=1}^\infty\langle h, \tau_n\rangle S_\tau^{-1}\tau_n=\sum_{n=1}^\infty\langle h, S_\tau^{-1}\tau_n\rangle \tau_n, \quad \forall h \in \mathcal{H}.
\end{align}
\item $\theta_\tau^* ( \{a_n\}_{n})=\sum_{n}a_n \tau_n, \forall \{a_n\}_{n} \in \ell^2(\mathbb{N})$. In particular, $\theta_\tau^*e_n=\tau_n, \forall n \in \mathbb{N}$.
\item $ S_\tau$ factors as $S_\tau=\theta_\tau^*\theta_\tau$.
\item $\theta_\tau^*$ is surjective.
\item $\|S_\tau^{-1}\|^{-1}$ is the optimal lower frame bound and $\|S_\tau\|=\|\theta_\tau\|^2$ is the optimal upper frame bound.
\item $P_\tau\coloneqq\theta_\tau S_\tau^{-1}\theta_\tau^*$ is an orthogonal projection onto $\theta_\tau(\mathcal{H}).$
\item $ \{\tau_n\}_{n}$ is Parseval if and only if $\theta_\tau$ is an isometry if and only if $\theta_\tau\theta_\tau^*$ is a projection.
\item $ \{S_\tau^{-1}\tau_n\}_{n}$ is a frame for $\mathcal{H}$ with bounds $b^{-1}$ and $a^{-1}$.
\item $ \{S_\tau^{-1/2}\tau_n\}_{n}$ is a Parseval frame for $\mathcal{H}$.
\item (\textbf{Best approximation}) If $ h \in \mathcal{H}$ has representation $ h=\sum_{n=1}^\infty c_n\tau_n,$ for some scalar sequence $ \{c_n\}_{n}\in \ell^2(\mathbb{N})$, then
$$ \sum\limits_{n=1}^\infty |c_n|^2 =\sum\limits_{n=1}^\infty |\langle h, S_\tau^{-1}\tau_n\rangle|^2+\sum\limits_{n=1}^\infty | c_n-\langle h, S_\tau^{-1}\tau_n\rangle|^2. $$
\end{enumerate}
\end{theorem}
Theorem \ref{MOSTIMPORTANT} says several things. First, every vector in the Hilbert
space admits an expansion, called the general Fourier expansion, analogous to the
Fourier expansion coming from an orthonormal basis. Second, the coefficients in the
expansion of a vector need not be unique; this is particularly important in
applications, since less information about a vector is lost when some of the
coefficients are missing. Third, a given frame naturally generates other frames.
Fourth, a frame gives a bounded linear injective operator from the Hilbert space
$\mathcal{H}$, with its possibly little-understood inner product, to the standard
separable Hilbert space $\ell^2(\mathbb{N})$ with its well-understood inner product;
the frame inequalities say precisely that the norms of $\mathcal{H}$ and
$\ell^2(\mathbb{N})$ are comparable under this map. Fifth, a frame embeds
$\mathcal{H}$ in $\ell^2(\mathbb{N})$ through the bounded linear operator
$\theta_\tau$. Sixth, whenever a Hilbert space admits a frame, it is the image of
$\ell^2(\mathbb{N})$ under the surjective operator $\theta_\tau^*$.
An easy observation from Theorem \ref{MOSTIMPORTANT} is that a finite collection of vectors cannot be a frame for an infinite dimensional Hilbert space.
The operators $\theta_\tau$, $\theta_\tau^*$ and $S_\tau$ in Theorem \ref{MOSTIMPORTANT} are called the \textbf{analysis operator}, the \textbf{synthesis operator} and the \textbf{frame operator}, respectively (cf. \cite{CHRISTENSEN}).
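These operators, the canonical dual frame $\{S_\tau^{-1}\tau_n\}_n$, and the general Fourier expansion can be sketched concretely for the Mercedes-Benz frame of $\mathbb{R}^2$ (Python; illustrative only). Since its frame operator is $S=\frac{3}{2}I$, the dual frame is $\frac{2}{3}\tau_k$, and $h=\sum_k\langle h,\tau_k\rangle S^{-1}\tau_k$ reconstructs every $h$ exactly:

```python
import math

# Mercedes-Benz frame of R^2: frame operator S = (3/2) I, so the
# canonical dual frame is S^{-1} tau_k = (2/3) tau_k, and the general
# Fourier expansion h = sum_k <h, tau_k> S^{-1} tau_k is exact.
taus = [(0.0, 1.0),
        (-math.sqrt(3) / 2, -0.5),
        (math.sqrt(3) / 2, -0.5)]

def reconstruct(h):
    out = [0.0, 0.0]
    for t in taus:
        c = h[0] * t[0] + h[1] * t[1]      # analysis: <h, tau_k>
        out[0] += c * (2 / 3) * t[0]       # synthesis against the dual
        out[1] += c * (2 / 3) * t[1]
    return tuple(out)

print(reconstruct((4.0, -7.0)))            # (4.0, -7.0) up to rounding
```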
Dilation theory seeks to extend an operator on a Hilbert space to a larger Hilbert space on which it is easier to handle and better understood, and to study the original operator as a slice of the extension (\cite{LEVYSHALIT, ARVESON, NAGY}). In frame theory for Hilbert spaces, the following theorem is known as the Naimark-Han-Larson dilation theorem. It was proved independently by \cite{HANLARSON} and by \cite{KASHINKULIKOVA}. The history of this theorem is nicely presented in \cite{CZAJA}.
\begin{theorem}(\cite{HANLARSON, KASHINKULIKOVA}) \label{DILATIONTHEOREMHILBERTSPACE} (\textbf{Naimark-Han-Larson dilation theorem})
A collection $ \{\tau_n\}_{n}$ in $\mathcal{H}$ is a
\begin{enumerate}[label=(\roman*)]
\item frame for $\mathcal{H}$ if and only if there exist a Hilbert space $\mathcal{H}_1 \supseteq \mathcal{H}$, a Riesz basis $ \{\omega_n\}_{n}$ for $\mathcal{H}_1 $ and a projection $P:\mathcal{H}_1 \to \mathcal{H}$ such that $\tau_n=P\omega_n, \forall n \in \mathbb{N}$.
\item Parseval frame for $\mathcal{H}$ if and only if there exist a Hilbert space $\mathcal{H}_1 \supseteq \mathcal{H}$, an orthonormal basis $ \{\omega_n\}_{n}$ for $\mathcal{H}_1 $ and an orthogonal projection $P:\mathcal{H}_1 \to \mathcal{H}$ such that $\tau_n=P\omega_n, \forall n \in \mathbb{N}$.
\end{enumerate}
\end{theorem}
In order to reconstruct an element of the Hilbert space from a frame using Equation (\ref{GFE}), we first have to determine the inverse of the frame operator, which is difficult in general. Thus we seek a way to approximate an element by a sequence whose computation does not involve inverting the frame operator. This is given in the following result.
\begin{proposition}(\cite{DUFFIN1}) (\textbf{Frame algorithm})
Let $ \{\tau_n\}_{n=1}^\infty$ be a frame for $ \mathcal{H}$ with bounds $a$ and $b$. For $ h \in \mathcal{H}$ define
$$ h_0\coloneqq0, \quad h_n\coloneqq h_{n-1}+\frac{2}{a+b}S_{\tau}(h-h_{n-1}), \quad\forall n \geq1.$$
Then
$$ \|h_n-h\|\leq \left(\frac{b-a}{b+a}\right)^n\|h\|, \quad\forall n \geq1.$$
In particular, $h_n\to h$ as $n\to \infty$.
\end{proposition}
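The frame algorithm is easy to try numerically in finite dimensions. In the sketch below (the frame, the target vector and the iteration count are our own choices), the optimal bounds $a$ and $b$ are obtained as the extreme eigenvalues of the frame operator, and the iterates converge geometrically at the rate $\left(\frac{b-a}{b+a}\right)^n$:

```python
import numpy as np

# A (non-tight) frame of R^2 given by the columns of Phi.
Phi = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])
S = Phi @ Phi.T                     # frame operator
a, b = np.linalg.eigvalsh(S)        # optimal frame bounds a <= b

h = np.array([2.0, -1.0])           # element to reconstruct
h_n = np.zeros(2)                   # h_0 = 0
for _ in range(30):
    h_n = h_n + (2 / (a + b)) * (S @ (h - h_n))

# Theoretical error bound after 30 steps: ((b-a)/(b+a))^30 * ||h||.
print("error after 30 steps:", np.linalg.norm(h_n - h))
assert np.linalg.norm(h_n - h) < 1e-6
```

For this frame $(b-a)/(b+a)=1/2$, so thirty iterations already reconstruct $h$ to roughly nine digits without ever forming $S_\tau^{-1}$.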
Given a collection $\{\tau_n\}_n$, it is in general difficult to find $a$ and $b$ such that the two inequalities in (\ref{SEQUENTIALEQUATION1}) hold. Therefore, it is natural to ask whether there is a characterization of frames that does not use frame bounds. Orthonormal bases are the simplest sequences we can handle in a Hilbert space, so one can attempt to obtain a characterization using orthonormal bases. Since every separable Hilbert space is isometrically isomorphic to the standard Hilbert space $\ell^2(\mathbb{N})$ and the standard unit vectors $\{e_n\}_n$ form an orthonormal basis for $\ell^2(\mathbb{N})$, one can further ask whether frames can be characterized using $\{e_n\}_n$. This question was answered affirmatively by \cite{HOLUB} as follows.
\begin{theorem}(\cite{HOLUB})\label{HOLUBTHEOREM} (\textbf{Holub's theorem})
A sequence $\{\tau_n\}_n$ in $\mathcal{H}$ is a
frame for $\mathcal{H}$ if and only if there exists a surjective bounded linear operator $T:\ell^2(\mathbb{N}) \to \mathcal{H}$ such that $Te_n=\tau_n$, for all $n \in \mathbb{N}$.
\end{theorem}
There is a slight variation of Theorem \ref{HOLUBTHEOREM} given by \cite{CHRISTENSEN}.
\begin{theorem}(\cite{CHRISTENSEN})\label{OLECHA}
Let $\{\omega_n\}_n$ be an orthonormal basis for $\mathcal{H}$. Then a sequence $\{\tau_n\}_n$ in $\mathcal{H}$ is a
frame for $\mathcal{H}$ if and only if there exists a surjective bounded linear operator $T:\mathcal{H} \to \mathcal{H}$ such that $T\omega_n=\tau_n$, for all $n \in \mathbb{N}$.
\end{theorem}
Given a frame $\{\tau_n\}_n$ for $\mathcal{H}$, we now consider the frame $\{S_\tau^{-1}\tau_n\}_n$. This frame satisfies Equation (\ref{GFE}). However, in general there may be frames other than $\{S_\tau^{-1}\tau_n\}_n$ satisfying Equation (\ref{GFE}). This leads to the notion of dual frames.
\begin{definition}(cf. \cite{CHRISTENSEN})
Let $\{\tau_n\}_n$ be a frame for $\mathcal{H}$. A frame $\{\omega_n\}_n$ for $ \mathcal{H}$ is said to be a \textbf{dual} frame for $\{\tau_n\}_n$ if
\begin{align*}
h=\sum_{n=1}^\infty \langle h, \omega_n\rangle \tau_n=\sum_{n=1}^\infty
\langle h, \tau_n\rangle \omega_n, \quad \forall h \in
\mathcal{H}.
\end{align*}
\end{definition}
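The frame $\{S_\tau^{-1}\tau_n\}_n$, called the canonical dual, is itself a dual frame, and in finite dimensions both reconstruction formulas in the definition can be verified directly. The small frame of $\mathbb{R}^2$ below is our own choice:

```python
import numpy as np

Phi = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])      # columns tau_n, a frame for R^2
S = Phi @ Phi.T                        # frame operator
Dual = np.linalg.solve(S, Phi)         # columns S^{-1} tau_n (canonical dual)

h = np.array([0.7, -2.3])
recon1 = Phi @ (Dual.T @ h)            # sum_n <h, S^{-1} tau_n> tau_n
recon2 = Dual @ (Phi.T @ h)            # sum_n <h, tau_n> S^{-1} tau_n
assert np.allclose(recon1, h) and np.allclose(recon2, h)
print("canonical dual reconstructs h exactly")
```

Both sums return $h$ because $\Phi\Phi^{\mathsf T}S^{-1}=S^{-1}\Phi\Phi^{\mathsf T}=I$, which is the matrix form of the duality relation.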
As with the characterization of frames, given a frame we seek a description of all of its dual frames. This problem was solved by \cite{LI} in the following two lemmas and a theorem.
\begin{lemma}(\cite{LI})\label{LILEMMA1}
Let $\{\tau_n\}_n$ be a frame for $\mathcal{H}$ and $\{e_n\}_n$ be the standard orthonormal basis for $\ell^2(\mathbb{N})$. Then a frame $\{\omega_n\}_n$ is a dual frame for $\{\tau_n\}_n$ if and only if
\begin{align*}
\omega_n=U e_n, \quad \forall n \in \mathbb{N},
\end{align*}
where $U:\ell^2(\mathbb{N})\to \mathcal{H}$ is a bounded left-inverse of $\theta_\tau$.
\end{lemma}
\begin{lemma}(\cite{LI})\label{LILEMMA2}
Let $\{\tau_n\}_n$ be a frame for $\mathcal{H}$. Then $L:\ell^2(\mathbb{N})\to \mathcal{H}$ is a bounded left-inverse of $\theta_\tau$ if and only if
\begin{align*}
L=S_\tau^{-1}\theta_\tau^*+V(I_{\ell^2(\mathbb{N})}-\theta_\tau S_\tau^{-1} \theta_\tau^*),
\end{align*}
where $V:\ell^2(\mathbb{N})\to \mathcal{H}$ is a bounded operator.
\end{lemma}
\begin{theorem}(\cite{LI})\label{LITHM}
Let $\{\tau_n\}_n$ be a frame for $\mathcal{H}$. Then a frame $\{\omega_n\}_n$ is a dual frame for $\{\tau_n\}_n$ if and only if
\begin{align*}
\omega_n=S_\tau^{-1} \tau_n+\rho_n-\sum_{k=1}^{\infty}\langle S_\tau^{-1} \tau_n, \tau_k\rangle \rho_k, \quad \forall n \in \mathbb{N},
\end{align*}
where $\{\rho_n\}_n$ is a sequence in $\mathcal{H}$ such that there exists $b>0$ satisfying
\begin{align*}
\sum_{n=1}^{\infty}|\langle h, \rho_n\rangle |^2\leq b \|h\|^2, \quad \forall h \in \mathcal{H}.
\end{align*}
\end{theorem}
We again consider the frame $\{S_\tau^{-1}\tau_n\}_n$. Note that this frame is obtained by applying the invertible operator $S_\tau^{-1}$ to the original frame $\{\tau_n\}_n$. This leads to the question: what are all the frames obtained by applying an invertible operator to a given frame? This naturally brings us to the following definition.
\begin{definition}(\cite{BALAN})\label{SIMILARDEFHILBERT}
Two frames $\{\tau_n\}_n$ and $\{\omega_n\}_n$ for $ \mathcal{H}$ are said to be \textbf{similar} or \textbf{equivalent} if there exists a bounded invertible operator $T:\mathcal{H} \to \mathcal{H}$ such that
\begin{align}\label{SIMILARITYEQUATION}
\omega_n=T \tau_n, \quad\forall n \in \mathbb{N}.
\end{align}
\end{definition}
Given frames $\{\tau_n\}_n$ and $\{\omega_n\}_n$, it is rather difficult to check whether they are similar, because one has to produce an invertible operator and verify Equation (\ref{SIMILARITYEQUATION}) for every natural number. It is therefore desirable to have a characterization that involves only operators and no pointwise verification. Further, it is natural to ask whether there is a formula for the operator $T$ implementing the similarity. This was done by \cite{BALAN} and independently by \cite{HANLARSON}, as follows.
\begin{theorem}(\cite{BALAN, HANLARSON})\label{BALANCHARSIM}
For two frames $\{\tau_n\}_n$ and $\{\omega_n\}_n$ for $ \mathcal{H}$, the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $\{\tau_n\}_n$ and $\{\omega_n\}_n$ are similar, i.e., there exists a bounded invertible operator $T:\mathcal{H} \to \mathcal{H}$ such that $\omega_n=T \tau_n$, $\forall n \in \mathbb{N}$.
\item $\theta_\omega=\theta_\tau T$, for some bounded invertible operator $T:\mathcal{H} \to \mathcal{H}$.
\item $ P_\omega=P_\tau$.
\end{enumerate}
If one of the above conditions is satisfied, then the invertible operator in (i) and (ii) is unique and is given by $T=S_\tau^{-1}\theta_\tau^*\theta_\omega$.
\end{theorem}
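The formula for $T$ can be checked numerically. In the sketch below (our own example) we build $\omega_n=Q\tau_n$ from a self-adjoint invertible $Q$, chosen self-adjoint to sidestep adjoint conventions in the matrix realization, and recover $Q$ from $T=S_\tau^{-1}\theta_\tau^*\theta_\omega$:

```python
import numpy as np

Phi = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])      # columns tau_n, a frame for R^2
Q = np.array([[2.0, 1.0],
              [1.0, 1.0]])             # invertible and symmetric
Psi = Q @ Phi                          # columns omega_n = Q tau_n

S = Phi @ Phi.T                        # frame operator S_tau
T = np.linalg.solve(S, Phi @ Psi.T)    # T = S_tau^{-1} theta_tau^* theta_omega
assert np.allclose(T, Q)               # the similarity operator is recovered
assert np.allclose(Psi, T @ Phi)       # omega_n = T tau_n
print("recovered similarity operator:\n", T)
```

Here $\theta_\tau=\Phi^{\mathsf T}$ and $\theta_\tau^*=\Phi$, so the formula becomes $T=S^{-1}\Phi\Psi^{\mathsf T}$, which collapses to $Q$ exactly as the theorem predicts.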
For a given subset $\mathbb{M} $ of $ \mathbb{N}$, set $
S_\mathbb{M} :\mathcal{H} \ni h \mapsto \sum_{n\in \mathbb{M}} \langle h, \tau_n\rangle\tau_n\in
\mathcal{H}$. Because of Inequalities (\ref{SEQUENTIALEQUATION1}), $
S_\mathbb{M} $
is a well-defined bounded positive operator (which may not be invertible). Let $\mathbb{M}^\text{c}$ denote the complement of $\mathbb{M}$ in $\mathbb{N}$.
Balan, Casazza, Edidin, and Kutyniok derived the following identities for frames for Hilbert spaces (\cite{BALACASAZZAEDIDINKUTYNIOKFIRST, BALANSIGNAL}).
\begin{theorem}(\cite{BALACASAZZAEDIDINKUTYNIOK, BALACASAZZAEDIDINKUTYNIOKFIRST}) (\textbf{Frame identity}) \label{CASAZZAGENERAL}
Let $\{\tau_n\}_n$ be a frame for $\mathcal{H}$ with canonical dual frame $\{\tilde{\tau}_n\coloneqq S_\tau^{-1}\tau_n\}_n$. Then for every $\mathbb{M} \subseteq \mathbb{N}$,
\begin{align*}
\sum_{n\in \mathbb{M}}|\langle h, \tau_n\rangle|^2-\sum_{n=1}^\infty|\langle S_\mathbb{M}h, \tilde{\tau}_n\rangle|^2=\sum_{n\in \mathbb{M}^\text{c}}|\langle h, \tau_n\rangle|^2-\sum_{n=1}^\infty|\langle S_{\mathbb{M}^\text{c}}h, \tilde{\tau}_n\rangle|^2,\quad \forall h \in \mathcal{H}.
\end{align*}
\end{theorem}
\begin{theorem}(\cite{BALACASAZZAEDIDINKUTYNIOK, BALACASAZZAEDIDINKUTYNIOKFIRST}) (\textbf{Parseval frame identity}) \label{SECOND}
Let $\{\tau_n\}_n$ be a Parseval frame for $\mathcal{H}$. Then for every $\mathbb{M} \subseteq \mathbb{N}$,
\begin{align*}
\sum_{n\in \mathbb{M}}|\langle h, \tau_n\rangle|^2-\left\|\sum_{n\in \mathbb{M}}\langle h, \tau_n\rangle \tau_n\right\|^2=\sum_{n\in \mathbb{M}^\text{c}}|\langle h, \tau_n\rangle|^2-\left\|\sum_{n\in \mathbb{M}^\text{c}}\langle h, \tau_n\rangle \tau_n\right\|^2,\quad \forall h \in \mathcal{H}.
\end{align*}
\end{theorem}
Theorem \ref{SECOND} has applications; it was used to obtain the following remarkable lower estimate for Parseval frames.
\begin{theorem}(\cite{BALACASAZZAEDIDINKUTYNIOK, GAVRUTA})\label{THIRD}
Let $\{\tau_n\}_n$ be a Parseval frame for $\mathcal{H}$. Then for every $\mathbb{M} \subseteq \mathbb{N}$,
\begin{align*}
\sum_{n\in \mathbb{M}}|\langle h, \tau_n\rangle|^2+\left\|\sum_{n\in \mathbb{M}^\text{c}}\langle h, \tau_n\rangle \tau_n\right\|^2&=\sum_{n\in \mathbb{M}^\text{c}}|\langle h, \tau_n\rangle|^2+\left\|\sum_{n\in \mathbb{M}}\langle h, \tau_n\rangle \tau_n\right\|^2\\
&\geq \frac{3}{4}\|h\|^2,\quad \forall h \in \mathcal{H}.
\end{align*}
Further, the bound 3/4 is optimal.
\end{theorem}
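Both the Parseval frame identity (Theorem \ref{SECOND}) and the $3/4$ bound above can be checked numerically in finite dimensions. The Parseval frame, the index set $\mathbb{M}$ and the random vector in the sketch below are our own choices:

```python
import numpy as np

# A Parseval frame of three vectors for R^2.
angles = 2 * np.pi * np.arange(3) / 3
Phi = np.sqrt(2 / 3) * np.vstack([np.cos(angles), np.sin(angles)])
assert np.allclose(Phi @ Phi.T, np.eye(2))          # Parseval

rng = np.random.default_rng(0)
h = rng.standard_normal(2)
coeffs = Phi.T @ h                                   # <h, tau_n>
M, Mc = [0], [1, 2]                                  # index set and complement

# Parseval frame identity: both sides agree.
lhs = np.sum(coeffs[M] ** 2) - np.linalg.norm(Phi[:, M] @ coeffs[M]) ** 2
rhs = np.sum(coeffs[Mc] ** 2) - np.linalg.norm(Phi[:, Mc] @ coeffs[Mc]) ** 2
assert np.isclose(lhs, rhs)

# The 3/4 lower bound.
val = np.sum(coeffs[M] ** 2) + np.linalg.norm(Phi[:, Mc] @ coeffs[Mc]) ** 2
assert val >= 0.75 * np.linalg.norm(h) ** 2 - 1e-12
print("identity and 3/4 bound verified")
```

Repeating the check over many random $h$ and all subsets $\mathbb{M}$ gives a quick numerical sanity test of the optimality of the constant $3/4$.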
As another application, Theorem \ref{THIRD} was used in the study of Parseval frames with finite excess (\cite{BAKICBERIC, BALANCASAZZAHEIL12}). \\
Like duality, there is another notion for frames for Hilbert spaces, called orthogonality. It was first introduced by Balan in his Ph.D. thesis (\cite{BALANTHESIS}) and further studied in \cite{HANLARSON}.
\begin{definition}(\cite{BALANTHESIS, HANLARSON})
Let $\{\tau_n\}_n$ be a frame for $\mathcal{H}$. A frame $\{\omega_n\}_n$ for $ \mathcal{H}$ is said to be an \textbf{orthogonal} frame for $\{\tau_n\}_n$ if
\begin{align*}
0=\sum_{n=1}^\infty \langle h, \omega_n\rangle \tau_n=\sum_{n=1}^\infty
\langle h, \tau_n\rangle \omega_n, \quad \forall h \in
\mathcal{H}.
\end{align*}
\end{definition}
A remarkable property of orthogonal frames is that we can interpolate them, as well as take their direct sums, to obtain new frames. This is illustrated in the following two results.
\begin{proposition}(\cite{HANKORNELSONLARSON, HANLARSON})
Let $ \{\tau_n\}_n $ and $ \{\omega_n\}_n$ be two Parseval frames for $\mathcal{H}$ which are orthogonal. If $C,D \in \mathcal{B}(\mathcal{H})$ are such that $ C^*C+D^*D=I_\mathcal{H}$, then $\{C\tau_n+D\omega_n\}_{n}$ is a Parseval frame for $\mathcal{H}$. In particular, if scalars $ c,d,e,f$ satisfy $|c|^2+|d|^2 =1$, then $ \{c\tau_n+d\omega_n\}_{n} $ is a Parseval frame.
\end{proposition}
\begin{proposition}(\cite{HANKORNELSONLARSON, HANLARSON})
If $ \{\tau_n\}_n $ and $ \{\omega_n\}_n$ are orthogonal frames for $\mathcal{H}$, then $\{\tau_n\oplus \omega_n\}_{n}$ is a frame for $ \mathcal{H}\oplus \mathcal{H}.$ Further, if both $ \{\tau_n\}_n $ and $ \{\omega_n\}_n$ are Parseval, then $\{\tau_n\oplus \omega_n\}_{n}$ is Parseval.
\end{proposition}
Recall that the Paley-Wiener theorem (Theorem \ref{PALEYWIENERTHEOREM}) says that sequences which are close to orthonormal bases are Riesz bases. Since a frame also gives a series representation, it is natural to ask whether a sequence close to a frame is a frame. This was first shown in \cite{PALEY1}, where it was proved that sequences which are quadratically close to frames are again frames.
\begin{theorem}(\cite{PALEY1})\label{FIRSTPER} (\textbf{Christensen's quadratic perturbation})
Let $ \{\tau_n\}_{n=1}^\infty$ be a frame for $\mathcal{H} $ with bounds $ a$ and $b$. If $ \{\omega_n\}_{n=1}^\infty$ in $\mathcal{H} $ satisfies
$$ c \coloneqq\sum_{n=1}^{\infty}\|\tau_n-\omega_n\|^2<a,$$
then $ \{\omega_n\}_{n=1}^\infty$ is a frame for $\mathcal{H} $ with bounds $a\left(1-\sqrt{\frac{c}{a}}\right)^2 $ and $b\left(1+\sqrt{\frac{c}{b}}\right)^2.$
\end{theorem}
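The hypothesis and the conclusion of Theorem \ref{FIRSTPER} can be tested numerically in finite dimensions. In the sketch below (the frame and the perturbation are our own choices), the optimal frame bounds are computed as the extreme eigenvalues of the frame operators:

```python
import numpy as np

Phi = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])              # columns tau_n, a frame for R^2
a, b = np.linalg.eigvalsh(Phi @ Phi.T)         # optimal bounds a <= b

Delta = 0.1 * np.array([[1.0, -1.0, 0.0],
                        [0.0,  1.0, 1.0]])     # perturbation
Psi = Phi + Delta                              # perturbed sequence omega_n
c = np.sum(np.linalg.norm(Delta, axis=0) ** 2) # sum ||tau_n - omega_n||^2
assert c < a                                   # hypothesis of the theorem

a_new, b_new = np.linalg.eigvalsh(Psi @ Psi.T) # optimal bounds of omega
assert a * (1 - np.sqrt(c / a)) ** 2 <= a_new + 1e-12
assert b_new <= b * (1 + np.sqrt(c / b)) ** 2 + 1e-12
print("perturbed frame bounds within predicted range")
```

The predicted bounds are of course not optimal for this particular perturbation; the theorem only guarantees that the perturbed sequence remains a frame with bounds no worse than these.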
Three months later, Christensen generalized Theorem \ref{FIRSTPER}.
\begin{theorem}(\cite{PALEY2})\label{SECONDPER} (\textbf{Christensen perturbation})
Let $ \{\tau_n\}_{n=1}^\infty$ be a frame for $\mathcal{H} $ with bounds $ a$ and $b$. If $ \{\omega_n\}_{n=1}^\infty$ in $\mathcal{H} $ is such that there exist $ \alpha, \gamma \geq0$ with $\alpha+\frac{\gamma}{\sqrt{a}}< 1 $ and
$$\left\|\sum_{n=1}^{m}c_n(\tau_n-\omega_n) \right\|\leq \alpha\left\|\sum_{n=1}^{m}c_n\tau_n\right \|+\gamma \left(\sum_{n=1}^{m}|c_n|^2\right)^\frac{1}{2}, \quad\forall c_1, \dots, c_m \in \mathbb{K}, m=1, 2,\dots, $$
then $ \{\omega_n\}_{n=1}^\infty$ is a frame for $\mathcal{H} $ with bounds $a\left(1-(\alpha+\frac{\gamma}{\sqrt{a}})\right)^2 $ and $b\left(1+(\alpha+\frac{\gamma}{\sqrt{b}})\right)^2.$
\end{theorem}
Two years later, Casazza and Christensen further extended Theorem \ref{SECONDPER}.
\begin{theorem}(\cite{CASAZZACHRISTENSTENPERTURBATION})\label{OLECAZASSA} (\textbf{Casazza-Christensen perturbation})
Let $ \{\tau_n\}_{n=1}^\infty$ be a frame for $\mathcal{H} $ with bounds $ a$ and $b$. If $ \{\omega_n\}_{n=1}^\infty$ in $\mathcal{H} $ is such that there exist $ \alpha, \beta, \gamma \geq0$ with $ \max\{\alpha+\frac{\gamma}{\sqrt{a}}, \beta\}<1$ and
\begin{align*}
\left\|\sum_{n=1}^{m}c_n(\tau_n-\omega_n) \right\|&\leq \alpha\left\|\sum_{n=1}^{m}c_n\tau_n\right \|+\gamma \left(\sum_{n=1}^{m}|c_n|^2\right)^\frac{1}{2}+\beta\left\|\sum_{n=1}^{m}c_n\omega_n\right \|, \\
& \quad \quad\forall c_1, \dots, c_m \in \mathbb{K}, m=1, 2, \dots,
\end{align*}
then $ \{\omega_n\}_{n=1}^\infty$ is a frame for $\mathcal{H} $ with bounds $a\left(1-\frac{\alpha+\beta+\frac{\gamma}{\sqrt{a}}}{1+\beta}\right)^2 $ and $b\left(1+\frac{\alpha+\beta+\frac{\gamma}{\sqrt{b}}}{1-\beta}\right)^2.$
\end{theorem}
We next consider Bessel sequences, which are obtained from frames by dropping the lower frame inequality.
\begin{definition}(cf. \cite{CHRISTENSEN})
A collection $ \{\tau_n\}_{n}$ in a Hilbert space $ \mathcal{H}$ is said to be a \textbf{Bessel sequence} for $\mathcal{H}$ if there exists a real constant $ b >0$ such that
\begin{equation}\label{BESSEL INEQUALITY}
\text{ (\textbf{General Bessel's inequality}) } \quad \sum_{n=1}^\infty|\langle h, \tau_n \rangle|^2 \leq b\|h\|^2 ,\quad \forall h \in \mathcal{H}.
\end{equation}
The constant $ b$ is called a Bessel bound for $ \{\tau_n\}_{n}$.
\end{definition}
Inequality (\ref{SEQUENTIALEQUATION1}) is stronger than Inequality (\ref{BESSEL INEQUALITY}); hence every frame is a Bessel sequence. Using the Cauchy-Schwarz inequality, it follows that every finite set of vectors is a Bessel sequence (cf. \cite{CHRISTENSEN}). Now using Theorem \ref{FINITEDIMESIONALCHARAC}, we get plenty of Bessel sequences which are not frames (in finite dimensions). As an example in infinite dimensions, we claim that $\{e_2,e_3, \dots\}$ is a Bessel sequence for $\ell^2(\mathbb{N})$ but not a frame. Clearly $\{e_2,e_3, \dots\}$ satisfies Inequality (\ref{BESSEL INEQUALITY}). If it were a frame, let $a>0$ be such that the first inequality in (\ref{SEQUENTIALEQUATION1}) holds. Then taking $h=e_1$ gives $a\|e_1\|^2\leq \sum_{n=2}^{\infty}|\langle e_1, e_n \rangle|^2=0$, which forces $a=0$, a contradiction.
\begin{example}
\begin{enumerate}[label=(\roman*)]
\item If $ \{\tau_n\}_{n} $ is a frame for $\mathcal{H}$, then for every subset $\mathbb{S}$ of $\mathbb{N}$, $\{\tau_n\}_{n \in \mathbb{S}} $ is a Bessel sequence for $\mathcal{H}$, because $\sum_{n \in \mathbb{S}}|\langle h, \tau_n \rangle|^2 \leq \sum_{n=1}^\infty|\langle h, \tau_n \rangle|^2 , \forall h \in \mathcal{H}.$
\item (cf. \cite{CHRISTENSEN}) Let $g\in \mathcal{L}^2(\mathbb{R})$ be a bounded, compactly supported function and $a,b>0$. Define $f_{n,m}:\mathbb{R}\ni x \mapsto e^{2\pi imbx}g(x-na)\in \mathbb{C},$ $\forall n, m\in \mathbb{Z}$. Then
$ \{f_{n,m}\}_{n,m \in \mathbb{Z}}$ is a Bessel sequence for $\mathcal{L}^2(\mathbb{R})$.
\item (cf. \cite{CHRISTENSEN}) If $\{\tau_n\}_{n=1}^\infty$ is an orthonormal basis for $\mathcal{H}$, then $\{\tau_n+\tau_{n+1}\}_{n=1}^\infty$ is a Bessel sequence for $\mathcal{H}$ (but not a frame for $\mathcal{H}$).
\end{enumerate}
\end{example}
\begin{theorem}(cf. \cite{CHRISTENSEN})\label{OLEBESSELCHARACTERIZATION12}
A collection $ \{\tau_n\}_{n}$ is a Bessel sequence for $\mathcal{H}$ if and only if the map $\ell^2(\mathbb{N}) \ni \{a_n\}_{n} \mapsto \sum_{n=1}^\infty a_n\tau_n \in \mathcal{H}$ is a well-defined bounded linear operator. Moreover, if $ \{\tau_n\}_{n}$ is a Bessel sequence for $\mathcal{H}$, then the operator $ \mathcal{H} \ni h \mapsto \sum_{n=1}^\infty\langle h, \tau_n\rangle \tau_n \in \mathcal{H}$ is positive.
\end{theorem}
Since a Bessel sequence need not be a frame, it is natural to ask the following question: given a Bessel sequence, can we add extra elements to it so that the resulting sequence is a frame? The answer is positive. This result was obtained in \cite{LISUN}.
\begin{theorem}(\cite{LISUN})\label{BESSELEXPANSIONHILBERT}
Every Bessel sequence in a Hilbert space can be expanded to a tight frame. Moreover, we can expand a Bessel sequence in infinitely many ways to tight frames.
\end{theorem}
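In finite dimensions, one standard way to carry out such an expansion (our own illustrative construction, not necessarily the one used in \cite{LISUN}) is to adjoin the columns of $(\lambda I-S_\tau)^{1/2}$ for any $\lambda\geq\|S_\tau\|$: the expanded system then has frame operator $S_\tau+(\lambda I-S_\tau)=\lambda I$, hence is a $\lambda$-tight frame.

```python
import numpy as np

# A Bessel sequence in R^2 that is not a frame (both vectors lie on a line).
Phi = np.array([[1.0, 2.0],
                [0.0, 0.0]])
S = Phi @ Phi.T                                # frame operator (singular here)
lam = np.linalg.eigvalsh(S)[-1] + 1.0          # any lambda >= ||S|| works

# Positive square root of lam*I - S via its eigendecomposition.
vals, vecs = np.linalg.eigh(lam * np.eye(2) - S)
Root = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
Full = np.hstack([Phi, Root])                  # expanded system

assert np.allclose(Full @ Full.T, lam * np.eye(2))   # lambda-tight frame
print("expanded to a", lam, "tight frame using", Root.shape[1], "new vectors")
```

Varying $\lambda$ over $[\|S_\tau\|,\infty)$ already produces infinitely many tight-frame expansions of the same Bessel sequence.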
\cite{LISUN} further observed that if a Bessel sequence for a Hilbert space can be expanded by finitely many elements to a tight frame, then the number of elements added cannot be small. The precise statement reads as follows.
\begin{theorem}(\cite{LISUN})\label{BESSELNUMBERHILBERT}
Let $ \{\tau_n\}_{n}$ be a Bessel sequence for $\mathcal{H}$. If $ \{\tau_n \}_{n}\cup \{\omega_k\}_{k=1}^N $ is a $\lambda$-tight frame for $\mathcal{H}$, then
\begin{align}\label{LISUNNUMBER}
N\geq \dim (\lambda I_\mathcal{H}-S_{\tau}) (\mathcal{H}).
\end{align}
Further, Inequality (\ref{LISUNNUMBER}) cannot be improved.
\end{theorem}
\section{RIESZ BASES, FRAMES AND BESSEL SEQUENCES FOR BANACH SPACES}
The definition of a Riesz basis, as given in Definition \ref{RIESZBASISDEFINITION}, requires the notion of an inner product. Due to the lack of an inner product in a Banach space, Definition \ref{RIESZBASISDEFINITION} cannot be carried over to Banach spaces. However, Theorem \ref{RIESZBASISTHM} allows us to define Riesz bases for Banach spaces as follows.
\begin{definition}(\cite{ALDROUBISUNTANG})\label{RIESZBASISDEFINITIONBANACHSPACE}
Let $1<q<\infty$ and $\mathcal{X}$ be a Banach space. A collection $\{\tau_n\}_{n}$ in $\mathcal{X}$ is said to be a
\begin{enumerate}[label=(\roman*)]
\item \textbf{q-Riesz sequence} for $\mathcal{X}$ if there exist $a,b>0$ such that for every finite subset $\mathbb{S}$ of $\mathbb{N}$,
\begin{align}\label{RIESZSEQUENCEINEQUALITY}
a \left(\sum_{n \in \mathbb{S}}|c_n|^q\right)^\frac{1}{q}\leq \left\|\sum_{n \in \mathbb{S}}c_n\tau_n\right\|\leq b \left(\sum_{n \in \mathbb{S}}|c_n|^q\right)^\frac{1}{q}, \quad \forall c_n \in \mathbb{K}.
\end{align}
\item \textbf{q-Riesz basis} for $\mathcal{X}$ if it is a q-Riesz sequence for
$\mathcal{X}$ and $\overline{\operatorname{span}}\{\tau_n\}_{n}=\mathcal{X}$.
\end{enumerate}
\end{definition}
\begin{example}
Let $\{e_n\}_{n}$ be the standard Schauder basis for $\ell^p(\mathbb{N})$ and $A:\ell^p(\mathbb{N})\to \ell^p(\mathbb{N})$ be a bounded linear invertible operator. Then it follows that $\{Ae_n\}_{n}$ is a p-Riesz basis for $\ell^p(\mathbb{N})$.
\end{example}
Like Theorem \ref{RIESZISAFRAME}, we have a similar result for q-Riesz bases.
\begin{theorem}(\cite{CHRISTENSENSTOEVA})
Let $\{f_n\}_{n}$ be a q-Riesz basis for $\mathcal{X}^*$ and let $p$ be the conjugate index of $q$. Then there exists a unique p-Riesz basis $\{\tau_n\}_{n}$ for $\mathcal{X}$ such that
\begin{align*}
x=\sum_{n=1}^{\infty}f_n(x)\tau_n, \quad \forall x \in \mathcal{X} \quad \text{ and } \quad f=\sum_{n=1}^{\infty}f(\tau_n)f_n, \quad \forall f \in \mathcal{X}^*.
\end{align*}
\end{theorem}
By observing that the functional $\mathcal{H} \ni h \mapsto \langle h, \tau_n\rangle \in \mathbb{K}$ is bounded linear, Definition \ref{OLE} leads to the following notion in Banach spaces.
\begin{definition}(\cite{ALDROUBISUNTANG, CHRISTENSENSTOEVA})\label{FRAMEDEFINITIONBANACH}
Let $1<p<\infty$ and $\mathcal{X}$ be a Banach space.
\begin{enumerate}[label=(\roman*)]
\item A collection $\{f_n\}_{n}$ of bounded linear functionals in $\mathcal{X}^*$ is said to be a \textbf{p-frame} for $\mathcal{X}$ if there exist $a,b>0$ such that
\begin{align*}
a\|x\|\leq \left(\sum_{n=1}^{\infty}|f_n(x)|^p\right)^\frac{1}{p}\leq b\|x\|,\quad \forall x \in \mathcal{X}.
\end{align*}
If only the upper inequality is required (i.e., $a$ may be taken as 0),
then we say $\{f_n\}_{n}$ is a p-Bessel sequence for $\mathcal{X}$.
\item A collection $\{\tau_n\}_{n}$ in $\mathcal{X}$ is said to
be a \textbf{p-frame} for $\mathcal{X}^*$ if there exist $a,b>0$ such that
\begin{align*}
a\|f\|\leq \left(\sum_{n=1}^{\infty}|f(\tau_n)|^p\right)^\frac{1}{p}\leq b\|f\|,\quad \forall f \in \mathcal{X}^*.
\end{align*}
\end{enumerate}
\end{definition}
\begin{example}
\begin{enumerate}[label=(\roman*)]
\item Let $\{e_n\}_{n}$ be the standard Schauder basis for $\ell^p(\mathbb{N})$, $\{\zeta_n\}_{n}$ be the coordinate functionals associated to $\{e_n\}_{n}$ and $A:\ell^p(\mathbb{N})\to \ell^p(\mathbb{N})$ be a bounded linear invertible operator. Then it follows that $\{\zeta_nA\}_{n}$ is a p-frame for $\ell^p(\mathbb{N})$.
\item (\cite{ALDROUBISUNTANG}) Let $1\leq p <\infty$ and $a \in \mathbb{R}$. Define
\begin{align*}
T_a:\mathcal{L}^p(\mathbb{R}) \ni f \mapsto T_af \in \mathcal{L}^p(\mathbb{R}), \quad (T_af)(x)\coloneqq f(x-a), \quad \forall x \in \mathbb{R}.
\end{align*}
Define
\begin{align*}
W\coloneqq\left\{f:\mathbb{R} \to \mathbb{C}\bigg| \sup_{x \in \mathbb{R}}\sum_{k\in \mathbb{Z}} |T_kf(x)|<\infty\right\}
\end{align*}
and for $1<p<\infty$, $\phi \in W$,
\begin{align*}
S_p\coloneqq \left\{\sum_{k \in \mathbb{Z}}c_kT_k\phi \bigg|\{c_k\}_{k \in \mathbb{Z}}\in \ell^p(\mathbb{Z})\right\}.
\end{align*}
Then $S_p$ is a closed subspace of $\mathcal{L}^p(\mathbb{R})$ and $\{T_k\phi\}_{k \in \mathbb{Z}}$ is a p-frame for $S_p$.
\end{enumerate}
\end{example}
Like Theorem \ref{OLEBESSELCHARACTERIZATION12}, we have a similar result for Banach spaces.
\begin{theorem}(\cite{CHRISTENSENSTOEVA})\label{pFRAMECHAR}
Let $\mathcal{X}$ be a Banach space and $\{f_n\}_{n}$ be a sequence in
$\mathcal{X}^*$.
\begin{enumerate}[label=(\roman*)]
\item $\{f_n\}_{n}$ is a p-Bessel sequence for
$\mathcal{X}$ with bound $b$ if and only if
\begin{align}\label{BASSELOPERATORCHARACTERIZATION}
T: \ell^q (\mathbb{N}) \ni \{a_n\}_{n} \mapsto \sum_{n=1}^\infty a_nf_n \in \mathcal{X}^*
\end{align}
is a well-defined (hence bounded) linear operator and $\|T\|\leq b$ (where $q$ is the conjugate
index of $p$).
\item If $\mathcal{X}$ is reflexive, then $\{f_n\}_{n}$ is a p-frame
for $\mathcal{X}$ if and only if the operator $T$ in
(\ref{BASSELOPERATORCHARACTERIZATION}) is surjective.
\end{enumerate}
\end{theorem}
Rather than working with p-frames, one can consider a more general notion of frames based on sequence spaces which generalize the $\ell^p (\mathbb{N})$ spaces. For this, we need the notion of a BK-space (Banach scalar-valued sequence
space, or Banach coordinate space).
\begin{definition} (cf. \cite{BANASMURSALEEN})
A sequence space $\mathcal{X}_d$ is said to be a \textbf{BK-space} if it is a Banach space and all the coordinate functionals are continuous, i.e.,
whenever a sequence $\{x_n\}_n$ in $\mathcal{X}_d$ converges to $x \in \mathcal{X}_d$, each coordinate of
$x_n$ converges to the corresponding coordinate of $x$.
\end{definition}
Familiar sequence spaces like $\ell^p(\mathbb{N})$, $c(\mathbb{N})$ (space of convergent sequences) and $c_0(\mathbb{N})$ (space of sequences converging to zero) are examples of BK-spaces. We now recall an example of a sequence space which is not a BK-space.
\begin{example}
The space $\mathcal{X}_d\coloneqq\{\{x_n\}_{n=0}^\infty: x_n \in \mathbb{K}, \forall n \in \mathbb{N}\cup \{0\}\}$ equipped with the metric
\begin{align*}
d(\{x_n\}_{n=0}^\infty, \{y_n\}_{n=0}^\infty)\coloneqq\sum_{n=0}^{\infty} \frac{1}{2^n}\frac{|x_n-y_n|}{1+|x_n-y_n|}, \quad \forall \{x_n\}_{n=0}^\infty, \{y_n\}_{n=0}^\infty \in \mathcal{X}_d
\end{align*}
is not a BK-space.
\end{example}
\begin{definition}(\cite{CASAZZACHRISTENSENSTOEVA})\label{XDFRAME}
Let $\mathcal{X}$ be a Banach space and $\mathcal{X}_d$ be an associated BK-space. A collection $\{f_n\}_n$ in $\mathcal{X}^*$ is said to be an \textbf{$\mathcal{X}_d$-frame}
for $\mathcal{X}$ if the following holds.
\begin{enumerate}[label=(\roman*)]
\item $\{f_n(x)\}_n \in \mathcal{X}_d$, for each $x \in \mathcal{X}$.
\item There exist $a,b>0$ such that $ a\|x\|\leq \|\{f_n(x)\}_n\|\leq b\|x\|, \forall x \in \mathcal{X}.$
\end{enumerate}
The constants $a$ and $b$ are called $\mathcal{X}_d$-frame bounds.
\end{definition}
\begin{definition}(\cite{CASAZZACHRISTENSENSTOEVA})
Let $\mathcal{X}$ be a Banach space and $\mathcal{X}_d$ be an associated BK-space. A collection $\{\tau_n\}_n$ in $\mathcal{X}$ is said to be an \textbf{$\mathcal{X}_d$-frame}
for $\mathcal{X}$ if the following holds.
\begin{enumerate}[label=(\roman*)]
\item $\{f(\tau_n)\}_n \in \mathcal{X}_d$, for each $f \in \mathcal{X}^*$.
\item There exist $a,b>0$ such that $ a\|f\|\leq \|\{f(\tau_n)\}_n\|\leq b\|f\|, \forall f \in \mathcal{X}^*.$
\end{enumerate}
\end{definition}
\begin{definition}(\cite{GROCHENIG})\label{BANACHFRAMEDEF}
Let $\mathcal{X}$ be a Banach space and $\mathcal{X}_d$ be an associated BK-space. Let $\{f_n\}_n$ be a collection in $\mathcal{X}^*$ and $S:\mathcal{X}_d \to \mathcal{X}$
be a bounded linear operator.
The pair $(\{f_n\}_n, S)$ is said to be a \textbf{Banach frame} for $\mathcal{X}$
if the following holds.
\begin{enumerate}[label=(\roman*)]
\item $\{f_n(x)\}_n \in \mathcal{X}_d$, for each $x \in \mathcal{X}$.
\item There exist $a,b>0$ such that
$
a\|x\|\leq \|\{f_n(x)\}_n\|\leq b\|x\|, \forall x \in \mathcal{X}.
$
\item $S(\{f_n(x)\}_n)=x$, for each $x \in \mathcal{X}$.
\end{enumerate}
The constants $a$ and $b$ are called the \textbf{lower Banach frame bound} and the \textbf{upper Banach frame bound}, respectively. The operator $S$ is called the \textbf{reconstruction operator} and the operator $\theta_f:\mathcal{X} \ni x \mapsto \theta_f(x)\coloneqq \{f_n(x)\}_n\in \mathcal{X}_d$ is called the \textbf{analysis operator}.
\end{definition}
\begin{example}
\begin{enumerate}[label=(\roman*)]
\item Let $\{\tau_n\}_n$ be a frame for a Hilbert space $\mathcal{H}$ with bounds $a$ and $b$. Let $f \in \mathcal{H}^* $. Let $h_f \in \mathcal{H}$ be such that $f(h) =\langle h, h_f\rangle $, $\forall h \in \mathcal{H}$ and $\|f\|=\|h_f\|$. Then
\begin{align*}
a\|f\|^2=a\|h_f\|^2\leq \sum_{n=1}^\infty |\langle h_f, \tau_n\rangle |^2= \sum_{n=1}^\infty|f(\tau_n)|^2\leq b\|h_f\|^2=b\|f\|^2.
\end{align*}Therefore $\{\tau_n\}_n$ is an $\ell^2(\mathbb{N})$-frame for $\mathcal{H}$.
\item Let $\{\tau_n\}_n$ be a frame for Hilbert space $\mathcal{H}$. We define $f_n (h)\coloneqq \langle h, \tau_n\rangle $, $ \forall h \in \mathcal{H}$, $ \forall n$ and $S\coloneqq S_\tau^{-1}\theta_\tau^*$. Then $(\{f_n\}_n, S)$ is a Banach frame for $\mathcal{H}$.
\item Let $\{\tau_n\}_n$ be a frame for $\mathcal{H}$. We define $f_n (h)\coloneqq \langle h, S_\tau^{-1}\tau_n\rangle $, $ \forall h \in \mathcal{H}$, $ \forall n$ and $S\coloneqq \theta_\tau^*$. Then $(\{f_n\}_n, S)$ is a Banach frame for $\mathcal{H}$.
\item (\cite{CASAZZACHRISTENSENSTOEVA}) Let $\{\tau_n\}_n$ be an orthonormal basis for a Hilbert space $\mathcal{H}$. We define
\begin{align*}
\mathcal{X}_d\coloneqq \{\{\langle h, \tau_n+\tau_{n+1}\rangle\}_n:h \in \mathcal{H} \}=\{\{ a_n+a_{n+1}\}_n:\{a_n\}_n \in \ell^2(\mathbb{N}) \}
\end{align*}
equipped with the norm
\begin{align*}
\|\{ a_n+a_{n+1}\}_n\|\coloneqq \left(\sum_{n=1}^{\infty}|a_n|^2\right)^\frac{1}{2}.
\end{align*}
Then $\mathcal{X}_d$ is a BK-space. Define $f_n:\mathcal{H} \ni h \mapsto \langle h, \tau_n+\tau_{n+1}\rangle \in \mathbb{K}$, $\forall n$, $T: \mathcal{H} \ni h \mapsto \{f_n(h)\}_n \in \mathcal{X}_d$ and set $S\coloneqq T^{-1}$. Then $(\{f_n\}_n, S)$ is a Banach frame for $\mathcal{H}$. However, $\{\tau_n+\tau_{n+1}\}_n$ is not a frame for $\mathcal{H}$.
\end{enumerate}
\end{example}
For Hilbert spaces, the existence of frames is immediate: every separable Hilbert space has an
orthonormal basis, and an orthonormal basis is a frame.
Using the Hahn-Banach theorem, the following result was proved in \cite{CASAZZAHANLARSONFRAMEBANACH}.
\begin{theorem}(\cite{CASAZZAHANLARSONFRAMEBANACH})\label{BANACHFRAMEEXISTSSEPARABLE}
Every separable Banach space admits a Banach frame.
\end{theorem}
The notion of atomic decomposition is studied along with that of frames for Banach spaces. It is defined as follows.
\begin{definition}(\cite{GROCHENIG})\label{ATOMICDECOMPODEFI}
Let $\mathcal{X}$ be a Banach space and $\mathcal{X}_d$ be an associated BK-space. Let $\{f_n\}_n$ be a collection in $\mathcal{X}^*$ and
$\{\tau_n\}_n$ be a collection in $\mathcal{X}$. The pair $(\{f_n\}_n, \{\tau_n\}_n)$ is said to be an \textbf{atomic decomposition} for $\mathcal{X}$
if the following holds.
\begin{enumerate}[label=(\roman*)]
\item $\{f_n(x)\}_n \in \mathcal{X}_d$, for each $x \in \mathcal{X}$.
\item There exist $a,b>0$ such that
$
a\|x\|\leq \|\{f_n(x)\}_n\|\leq b\|x\|, \forall x \in \mathcal{X}.
$
\item $x=\sum_{n=1}^\infty f_n(x)\tau_n$, for each $x \in \mathcal{X}$.
\end{enumerate}
The constants $a$ and $b$ are called the \textbf{lower atomic bound} and the \textbf{upper atomic bound}, respectively.
\end{definition}
\begin{example}
\begin{enumerate}[label=(\roman*)]
\item Let $\{\tau_n\}_n$ be a frame for $\mathcal{H}$. By defining $f_n (h)\coloneqq \langle h, S_\tau^{-1} \tau_n\rangle $, $ \forall h \in \mathcal{H}$, $ \forall n$, the pair $(\{f_n\}_n, \{\tau_n\}_n)$ satisfies all the conditions of Definition \ref{ATOMICDECOMPODEFI} and hence it is an atomic decomposition for $\mathcal{H}$.
\item Let $\{\tau_n\}_n$ be a frame for $\mathcal{H}$. By defining $f_n (h)\coloneqq \langle h, \tau_n\rangle $, $ \forall h \in \mathcal{H}$, and $\omega_n \coloneqq S_\tau^{-1}\tau_n$, $ \forall n$, the pair $(\{f_n\}_n, \{\omega_n\}_n)$ satisfies all the conditions of Definition \ref{ATOMICDECOMPODEFI} and hence it is an atomic decomposition for $\mathcal{H}$.
\end{enumerate}
\end{example}
Let us observe in detail that the notions of atomic decomposition and Banach frame are quite different. Definition \ref{ATOMICDECOMPODEFI} of an atomic decomposition demands that every element of a Banach space be expressed as a convergent series in the same Banach space, using a collection of bounded linear functionals on the space and a collection of elements of the space. On the other hand, Definition \ref{BANACHFRAMEDEF} of a Banach frame demands that every element of a Banach space be recovered using a bounded linear operator from the BK-space to the Banach space and a collection of bounded linear functionals on the space, with no requirement of a convergent series expansion. Theorem \ref{BANACHFRAMEEXISTSSEPARABLE} guarantees the existence of a Banach frame for every separable Banach space. In contrast, the following remarkable result characterizes the Banach spaces which admit an atomic decomposition.
\begin{theorem}(\cite{CASAZZAHANLARSONFRAMEBANACH,PELCZYNSKI, JOHNSONROSENTHALZIPPIN})
For a Banach space $\mathcal{X}$, the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $\mathcal{X}$ has an atomic decomposition.
\item $\mathcal{X}$ has a finite dimensional expansion of the identity.
\item $\mathcal{X}$ is complemented in a Banach space with a Schauder basis.
\item $\mathcal{X}$ has the bounded approximation property.
\end{enumerate}
\end{theorem}
There is a close relationship between atomic decomposition and Banach frames for certain classes of Banach spaces. These are exhibited in the following theorems.
\begin{theorem}(\cite{CASAZZAHANLARSONFRAMEBANACH})
Let $\mathcal{X}$ be a Banach space, $\mathcal{X}_d$ be a BK-space, $\{f_n\}_n$ be a collection in $\mathcal{X}^*$ and $S:\mathcal{X}_d \to \mathcal{X}$ be a bounded linear operator. If the canonical unit vectors $\{e_n\}_n$ are in $\mathcal{X}_d$, then the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $(\{f_n\}_n, S)$ is a Banach frame for $\mathcal{X}$ and $\{e_n\}_n$ is a Schauder basis for $\mathcal{X}_d$.
\item $(\{f_n\}_n, \{Se_n\}_n)$ is an atomic decomposition for $\mathcal{X}$.
\end{enumerate}
\end{theorem}
\begin{theorem}(\cite{CASAZZAHANLARSONFRAMEBANACH, PELCZYNSKI, JOHNSONROSENTHALZIPPIN})
Let $\mathcal{X}$ be a Banach space, $\{\tau_n\}_n$ be a collection in $\mathcal{X}$ and $\{f_n\}_n$ be a collection in $\mathcal{X}^*$. Then the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item There exists a BK-space $\mathcal{X}_d$ such that $(\{f_n\}_n, \{\tau_n\}_n)$ is an atomic decomposition for $\mathcal{X}$.
\item There exists a BK-space $\mathcal{Y}_d$ which has the canonical unit vectors $\{e_n\}_n$ as a Schauder basis and a bounded linear operator $S:\mathcal{Y}_d \to \mathcal{X}$ such that $(\{f_n\}_n, S)$ is a Banach frame for $\mathcal{X}$. Further, $S$ can be taken to be a projection with $Se_n=\tau_n$ for all $n \in \mathbb{N}$.
\end{enumerate}
\end{theorem}
The next theorem shows that there is a relation between atomic decompositions and projections.
\begin{theorem}(\cite{CASAZZAHANLARSONFRAMEBANACH})
Let $\mathcal{X}$ be a Banach space, $\{\tau_n\}_n$ be a collection in $\mathcal{X}$ and $\{f_n\}_n$ be a collection in $\mathcal{X}^*$. Then the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item There is a BK-space $\mathcal{X}_d$ such that $(\{f_n\}_n, \{\tau_n\}_n)$ is an atomic decomposition for $\mathcal{X}$.
\item There is a Banach space $\mathcal{Z}$ with a Schauder basis $\{\omega_n\}_n$ such that $\mathcal{X} \subseteq \mathcal{Z}$ and there is a bounded linear projection $P:\mathcal{Z}\to \mathcal{X} $ such that $P\omega_n=\tau_n,$ $ \forall n \in \mathbb{N}$.
\end{enumerate}
\end{theorem}
Before passing on, we mention that it is known that one can simultaneously construct Banach frames and atomic decompositions for certain classes of Banach spaces such as coorbit spaces (\cite{GROCHENIG}), $\alpha$-modulation spaces (\cite{FORNASIERALPHA}), decomposition spaces (\cite{BORUPNIELSEN}), homogeneous spaces (\cite{DAHLKESTEIDLTESCHKE}), weighted coorbit spaces (\cite{DAHLKESTEIDLTESCHKE1}), generalized coorbit spaces (\cite{DAHLKEFORNASIERRAUHUTTESCHKE}), inhomogeneous function spaces (\cite{RAUHUTULLRICH}) and Bergman spaces (\cite{CHRISTENSENGROCHENIGOLAFSSON}).
Using the approximation property for Banach spaces, \cite{CASAZZARECONSTRUCTION} proved that an $\mathcal{X}_d$-frame for a Banach space need not admit a series representation of every element of the Banach space. In this regard, the following result gives conditions under which such series expansions are possible.
\begin{theorem}(\cite{CASAZZACHRISTENSENSTOEVA})\label{CASAZZASEPARABLECHARACTERIZATION}
Let $\mathcal{X}_d$ be a BK-space and
$\{f_n\}_n$ be an $\mathcal{X}_d$-frame for $\mathcal{X}$. Let $ \theta_f:\mathcal{X} \ni x \mapsto \{f_n(x)\}_n \in \mathcal{X}_d$ (this map is a well-defined bounded linear operator which is bounded below).
The following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $ \theta_f(\mathcal{X})$ is complemented in $\mathcal{X}_d$.
\item The operator $\theta_f^{-1}:\theta_f(\mathcal{X}) \rightarrow \mathcal{X}$ can be extended to a bounded linear operator $T_f: \mathcal{X}_d \rightarrow \mathcal{X}.$
\item There exists a bounded linear operator $S: \mathcal{X}_d \rightarrow \mathcal{X}$ such that $(\{f_n\}_n, S)$ is a Banach frame for $\mathcal{X}$.
\end{enumerate}
Also, the condition
\begin{enumerate}[label=(\roman*)]\addtocounter{enumi}{3}
\item There exists a sequence $\{\tau_n\}_n$ in $\mathcal{X}$ such that $\sum_{n=1}^\infty a_n \tau_n$ is convergent in $\mathcal{X}$
for all $\{a_n\}_n$ in $\mathcal{X}_d$ and $x=\sum_{n=1}^\infty f_n(x) \tau_n, \forall x \in \mathcal{X}.$
\end{enumerate}
implies each of (i)-(iii). If we also assume that the canonical unit vectors $\{e_n\}_n$ form a Schauder basis for $\mathcal{X}_d$,
then (iv) is equivalent to (i)-(iii) above and to the following condition (v).
\begin{enumerate}[label=(\roman*)]\addtocounter{enumi}{4}
\item There exists an $\mathcal{X}_d^*$-Bessel sequence $\{\tau_n\}_n\subseteq \mathcal{X}\subseteq \mathcal{X}^{**}$ for
$\mathcal{X}^*$ such that $x=\sum_{n=1}^\infty f_n(x) \tau_n, $ $ \forall x \in \mathcal{X}.$
\end{enumerate}
If the canonical unit vectors $\{e_n\}_n$ form a Schauder basis for both $\mathcal{X}_d$ and $\mathcal{X}_d^*$, then (i)-(v) are equivalent
to the following condition (vi).
\begin{enumerate}[label=(\roman*)]\addtocounter{enumi}{5}
\item There exists an $\mathcal{X}_d^*$-Bessel sequence $\{\tau_n\}_n\subseteq \mathcal{X}\subseteq \mathcal{X}^{**}$ for
$\mathcal{X}^*$ such that $f=\sum_{n=1}^\infty f(\tau_n) f_n, $ $ \forall f \in \mathcal{X}^*.$
\end{enumerate}
In each of the cases (v) and (vi), $\{\tau_n\}_n$ is actually an $\mathcal{X}_d^*$-frame for $\mathcal{X}^*$.
\end{theorem}
We end this section by mentioning a perturbation theorem for Banach frames.
\begin{theorem}(\cite{CHRISTENSENHEIL})\label{PERTURBATIONBANACH12} (\textbf{Christensen-Heil perturbation})
Let $(\{f_n\}_{n}, S)$ be a
Banach frame for a Banach space $\mathcal{X}$. Let $\{g_n\}_{n}$ be a collection in
$\mathcal{X}^*$ satisfying the following.
\begin{enumerate}[label=(\roman*)]
\item There exist $\alpha, \gamma\geq 0$ such that
\begin{align*}
\|\{(f_n-g_n)(x)\}_n\|\leq \alpha \|\{f_n(x)\}_n\|+\gamma \|x\|, \quad \forall x \in \mathcal{X}.
\end{align*}
\item $\alpha \|\theta_f\|+\gamma< \|S\|^{-1}.$
\end{enumerate}
Then there exists a reconstruction operator $T$ such that $(\{g_n\}_{n}, T)$ is a
Banach frame for $\mathcal{X}$ with bounds
$
\|S\|^{-1}-(\alpha\|\theta_f\|+\gamma) $ and $
\|\theta_f\|+(\alpha\|\theta_f\|+\gamma).$
\end{theorem}
\section{MULTIPLIERS FOR BANACH SPACES}
Let $\{\lambda_n\}_n \in \ell^\infty(\mathbb{N})$ and $\{\tau_n\}_n$, $\{\omega_n\}_n$ be
sequences in a Hilbert space $\mathcal{H}$. For $x,y \in \mathcal{H}$, the
operator defined by $\mathcal{H}\ni h \mapsto \langle h, y\rangle x\in \mathcal{H}$ is denoted by $x\otimes \overline{y}$.
The study of operators of the form
\begin{align}\label{FIRST EQUATION}
\sum_{n=1}^{\infty}\lambda_n (\tau_n\otimes \overline{\omega_n})
\end{align}
began with \cite{SCHATTEN}, in connection with the study of compact
operators. Schatten showed that if $\{\lambda_n\}_n \in
\ell^\infty(\mathbb{N})$ and $\{\tau_n\}_n$, $\{\omega_n\}_n$ are orthonormal sequences
in a Hilbert space $\mathcal{H}$, then the map in (\ref{FIRST EQUATION}) is a well-defined bounded linear
operator. Later, the operators in (\ref{FIRST EQUATION}) were studied mainly in
connection with Gabor analysis (\cite{FEICHTINGERNOWAK}, \cite{BENEDETTOPFANDER},
\cite{DORFLERTORRESANI}, \cite{GIBSONLAMOUREUXMARGRAVE}, \cite{CORDEROGROCHENIG}, \cite{SKRETTINGLAND}). Schatten's result was generalized by
\cite{BALAZSBASIC}, who replaced orthonormal sequences by Bessel sequences.
Let $\{f_n\}_n$
be a sequence in the dual space $\mathcal{X}^*$ of a Banach
space $\mathcal{X}$ and $\{\tau_n\}_n$ be a sequence in a Banach space
$\mathcal{Y}$. The operator $\tau \otimes f$ is defined by $\tau
\otimes f:\mathcal{X}\ni x \mapsto f(x)\tau\in \mathcal{Y}$. It was \cite{RAHIMIBALAZSMUL} who extended the operator in (\ref{FIRST
EQUATION}) from Hilbert spaces to Banach spaces. For a Banach space
$\mathcal{X}$ with dual space $\mathcal{X}^*$, they considered operators (called \textbf{multipliers}) of the form
\begin{align}\label{SECOND EQUATION}
M_{\lambda,f, \tau} \coloneqq\sum_{n=1}^{\infty}\lambda_n (\tau_n\otimes f_n).
\end{align}
Rahimi and Balazs studied the operator in (\ref{SECOND EQUATION}) whenever
$\{f_n\}_n$ is a p-Bessel sequence for $\mathcal{X}$ and $\{\tau_n\}_n$ is a q-Bessel sequence for the dual of $\mathcal{Y}$ ($q$ being the
conjugate index of $p$). Besides their theoretical importance, multipliers also play
an important role in physics, signal processing and acoustics (\cite{BALAZSSTOEVAMULAPP}).
A fundamental result obtained in \cite{RAHIMIBALAZSMUL} is the
following. Throughout this section, $q$ denotes the
conjugate index of $p$.
\begin{theorem}(\cite{RAHIMIBALAZSMUL})\label{RAHIMIBALAZS}
Let $\{f_n\}_{n}$ be a
p-Bessel sequence for a Banach space $\mathcal{X}$ with bound $b$ and
$\{\tau_n\}_{n}$ be a
q-Bessel sequence for the dual of a Banach space $\mathcal{Y}$ with bound
$d$. If
$\{\lambda_n\}_n \in \ell^\infty(\mathbb{N})$, then the map
\begin{align*}
T: \mathcal{X} \ni x \mapsto \sum_{n=1}^{\infty}\lambda_n (\tau_n\otimes
f_n) x \in \mathcal{Y}
\end{align*}
is a well-defined bounded linear operator and
$\|T\|\leq bd\|\{\lambda_n\}_n\|_\infty.$
\end{theorem}
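To make the norm estimate above concrete, the following is a minimal numerical sketch. All dimensions, random data and the choice $p=q=2$ are illustrative assumptions; in the Euclidean case the Bessel bounds are the largest singular values of the analysis matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data with p = q = 2: f_n(x) = F[n] @ x are functionals on
# X = R^4, tau_n = Tau[:, n] are vectors in Y = R^3, lam is the symbol.
N = 6
F = rng.standard_normal((N, 4))
Tau = rng.standard_normal((3, N))
lam = rng.uniform(-1.0, 1.0, N)

# M_{lam,f,tau} x = sum_n lam_n f_n(x) tau_n, i.e. the matrix Tau diag(lam) F.
M = Tau @ np.diag(lam) @ F

# 2-Bessel bounds b, d are the largest singular values of F and Tau.
b = np.linalg.norm(F, 2)
d = np.linalg.norm(Tau, 2)
bound = b * d * np.abs(lam).max()  # the estimate ||T|| <= b d ||lam||_inf
```

The inequality $\|M\|\leq bd\|\{\lambda_n\}_n\|_\infty$ then holds term by term, since $\|{\rm diag}(\lambda)\|=\max_n|\lambda_n|$.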
By varying only the symbol of a multiplier, we obtain a map from symbols to bounded linear operators; this map has the nice properties stated in the following proposition.
\begin{proposition}(\cite{RAHIMIBALAZSMUL})
Let $\{f_n\}_{n}$ be a
p-Bessel sequence for a Banach space $\mathcal{X}$ with non-zero
elements, $\{\tau_n\}_{n}$ be a q-Riesz sequence for
$\mathcal{Y}$ and let $\{\lambda_n\}_n \in
\ell^\infty(\mathbb{N})$. Then the mapping
\begin{align*}
T:\ell^\infty(\mathbb{N})\ni \{\lambda_n\}_n \mapsto M_{\lambda,f, \tau}
\in \mathcal{B}(\mathcal{X}, \mathcal{Y})
\end{align*}
is a well-defined injective bounded linear operator.
\end{proposition}
From the spectral theory of compact operators on Hilbert spaces, it follows easily that the symbol of a compact diagonal operator converges to zero. The following is a result in the converse direction for Banach spaces.
\begin{proposition}(\cite{RAHIMIBALAZSMUL})\label{RAHIMIBALAZSMULTIPLIERCOMPACT}
Let $\{f_n\}_{n}$ be
a p-Bessel sequence for a Banach space $\mathcal{X}$ with bound $b$
and $\{\tau_n\}_{n}$ be a q-Bessel sequence
for $\mathcal{Y}$ with bound $d$. If
$\{\lambda_n\}_n \in c_0(\mathbb{N})$, then $M_{\lambda,f, \tau}$ is a
compact operator.
\end{proposition}
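The mechanism behind this proposition can be illustrated numerically: truncating a symbol that tends to zero yields finite-rank multipliers converging in operator norm, and a norm limit of finite-rank operators is compact. The sketch below (illustrative dimensions, $p=q=2$, Bessel data normalized so that $b=d=1$) checks the tail estimate $\|M_{\lambda,f,\tau}-M_{\lambda^{(N)},f,\tau}\|\leq bd\sup_{n>N}|\lambda_n|$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 2-Bessel data, normalized so both Bessel bounds equal 1:
# f_n(x) = F[n] @ x on X = R^5, tau_n = Tau[:, n] in Y = R^5.
K = 40
F = rng.standard_normal((K, 5))
F /= np.linalg.norm(F, 2)
Tau = rng.standard_normal((5, K))
Tau /= np.linalg.norm(Tau, 2)

lam = 1.0 / np.arange(1, K + 1)   # symbol tending to zero
M = Tau @ np.diag(lam) @ F        # the multiplier M_{lam,f,tau}

def truncate(N):
    """Finite-rank multiplier obtained by cutting the symbol at index N."""
    lam_N = np.where(np.arange(K) < N, lam, 0.0)
    return Tau @ np.diag(lam_N) @ F

# Operator-norm distance to the rank-N truncation; here sup_{n>N}|lam_n|
# equals 1/(N+1), so the tails are forced to zero.
tails = [np.linalg.norm(M - truncate(N), 2) for N in (5, 10, 20)]
```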
The following theorem shows that multipliers behave nicely with respect to changes in their parameters. These results are known in the literature as continuity properties of multipliers.
\begin{theorem}(\cite{RAHIMIBALAZSMUL})\label{RAHIMIBALAZSMULTIPLIERCONTINUITY}
Let $\{f_n\}_{n}$ be a
p-Bessel sequence for $\mathcal{X}$ with bound $b$,
$\{\tau_n\}_{n}$ be a q-Bessel sequence for
$\mathcal{Y}$ with bound $d$ and
$\{\lambda_n\}_n \in \ell^\infty(\mathbb{N})$. Let $k \in \mathbb{N}$ and let
$\lambda^{(k)}=\{\lambda_1^{(k)},\lambda_2^{(k)}, \dots \}$,
$\lambda=\{\lambda_1,\lambda_2, \dots \}$,
$\tau^{(k)}=\{\tau_1^{(k)}, \tau_2^{(k)}, \dots\}$,
$\tau_n^{(k)} \in \mathcal{Y}$, $\tau=\{\tau_1, \tau_2, \dots\}$. Assume that for
each $k$, $\lambda^{(k)}\in \ell^\infty(\mathbb{N})$ and
$\tau^{(k)}$ is a q-Bessel sequence for
$\mathcal{Y}$.
\begin{enumerate}[label=(\roman*)]
\item If $\lambda^{(k)} \to \lambda $ as $k \rightarrow \infty $ in p-norm, then
\begin{align*}
\|M_{\lambda^{(k)},f, \tau}-M_{\lambda,f, \tau}\| \to 0 \quad \text{ as } \quad k \to \infty.
\end{align*}
\item If $\{\lambda_n\}_n \in \ell^p(\mathbb{N})$ and $\sum_{n=1}^{\infty}\|\tau_n^{(k)}-\tau_n\|^q \to 0 \text{ as } k \to \infty $, then
\begin{align*}
\|M_{\lambda, f, \tau^{(k)}}-M_{\lambda,f, \tau}\| \to 0 \quad \text{ as } \quad k \to \infty.
\end{align*}
\end{enumerate}
\end{theorem}
\section{LIPSCHITZ OPERATORS AND LIPSCHITZ COMPACT \\
OPERATORS}
We first recall the definition of a Lipschitz function.
\begin{definition}(cf. \cite{WEAVER})
Let $\mathcal{M}$,
$\mathcal{N}$ be metric spaces. A function $f:\mathcal{M} \rightarrow
\mathcal{N}$ is said to be \textbf{Lipschitz} if there exists $b> 0$ such that
\begin{align*}
d(f(x), f(y)) \leq b\, d(x,y), \quad \forall x, y \in \mathcal{M}.
\end{align*}
\end{definition}
One of the most important results in the study of Lipschitz functions is the following. It is analogous to the Banach-Mazur theorem for Banach spaces, which shows that every separable Banach space embeds isometrically into the Banach space of continuous functions on $[0,1]$ (cf. \cite{ALBIACKALTON}).
\begin{theorem} (cf. \cite{KALTONLANCIEN, AHARONI}) (\textbf{Aharoni's theorem}) \label{AHARONITHEOREM}
If $\mathcal{M}$ is a separable metric space, then there exists a function $f: \mathcal{M} \to c_0(\mathbb{N})$ and a constant $b>0$ such that
\begin{align*}
d(x,y)\leq \|f(x)-f(y)\|\leq b \, d(x,y), \quad \forall x, y \in \mathcal{M}.
\end{align*}
\end{theorem}
\begin{definition}(cf. \cite{WEAVER})
A metric space $\mathcal{M}$ with a distinguished reference point, usually denoted by $0$, is called a \textbf{pointed metric space}. In this case, we write that $(\mathcal{M}, 0)$ is a pointed metric space.
\end{definition}
Note that every metric space becomes a pointed metric space by fixing any point of the space.
Just as for the norm of a linear operator, a reasonable measure of the size of a Lipschitz function can be defined. This is exhibited in the next definition.
\begin{definition}(cf. \cite{WEAVER})
Let $\mathcal{X}$ be a Banach space.
\begin{enumerate}[label=(\roman*)]
\item Let $\mathcal{M}$ be a metric space. The collection $\operatorname{Lip}(\mathcal{M}, \mathcal{X})$
is defined as $\operatorname{Lip}(\mathcal{M}, \mathcal{X})\coloneqq \{f:\mathcal{M}
\rightarrow \mathcal{X} \mid f \text{ is Lipschitz} \}.$ For $f \in \operatorname{Lip}(\mathcal{M}, \mathcal{X})$, the \textbf{Lipschitz number}
is defined as
is defined as
\begin{align*}
\operatorname{Lip}(f)\coloneqq \sup_{x, y \in \mathcal{M}, ~x\neq
y} \frac{\|f(x)-f(y)\|}{d(x,y)}.
\end{align*}
\item Let $(\mathcal{M}, 0)$ be a pointed metric space. The collection $\operatorname{Lip}_0(\mathcal{M}, \mathcal{X})$
is defined as $\operatorname{Lip}_0(\mathcal{M}, \mathcal{X})\coloneqq \{f:\mathcal{M}
\rightarrow \mathcal{X} \mid f \text{ is Lipschitz and } f(0)=0\}.$
For $f \in \operatorname{Lip}_0(\mathcal{M}, \mathcal{X})$, the \textbf{Lipschitz norm}
is defined as
\begin{align*}
\|f\|_{\operatorname{Lip}_0}\coloneqq \sup_{x, y \in \mathcal{M}, x\neq
y} \frac{\|f(x)-f(y)\|}{d(x,y)}.
\end{align*}
\end{enumerate}
\end{definition}
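As a small numerical illustration of the Lipschitz number, consider the following sketch. It is only an estimate: the supremum is taken over a finite sample of pairs, and the map $f(x)=(\sin x,\, x/2)$ is an assumed example with $f(0)=0$, whose Lipschitz number is $\sqrt{5}/2$ (the supremum of $\|(\cos x, 1/2)\|$).

```python
import numpy as np

# Hypothetical example map f : R -> R^2 with f(0) = 0.
def f(x):
    return np.array([np.sin(x), 0.5 * x])

# Estimate Lip(f) as the largest difference quotient over a finite sample;
# the true Lipschitz number is the supremum over all pairs of distinct points.
xs = np.linspace(-3.0, 3.0, 120)
lip_estimate = max(
    np.linalg.norm(f(x) - f(y)) / abs(x - y)
    for i, x in enumerate(xs)
    for y in xs[i + 1:]
)
```

On this sample the estimate is close to $\sqrt{5}/2\approx 1.118$, attained by pairs near the origin.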
It is well known that, given two Banach spaces $\mathcal{X}$ and $\mathcal{Y}$, the collection of all bounded linear maps from $\mathcal{X}$ to $\mathcal{Y}$ is a Banach space with respect to the operator norm. A similar result holds for base-point-preserving Lipschitz maps from pointed metric spaces to Banach spaces.
\begin{theorem}(cf. \cite{WEAVER})\label{LIPISABANACHALGEBRA}
Let $\mathcal{X}$ be a Banach space.
\begin{enumerate}[label=(\roman*)]
\item If $\mathcal{M}$ is a metric space, then $\operatorname{Lip}(\mathcal{M},
\mathcal{X})$ is a semi-normed vector space with respect to the semi-norm $\operatorname{Lip}(\cdot)$.
\item If $(\mathcal{M}, 0)$ is a pointed metric space, then $\operatorname{Lip}_0(\mathcal{M},
\mathcal{X})$ is a Banach space with respect to the norm
$\|\cdot\|_{\operatorname{Lip}_0}$. Further, $\operatorname{Lip}_0(\mathcal{X})\coloneqq\operatorname{Lip}_0(\mathcal{X},
\mathcal{X})$ is a unital Banach algebra. In particular, if $T \in \operatorname{Lip}_0(\mathcal{X})$ satisfies $
\|T-I_\mathcal{X}\|_{\operatorname{Lip}_0}<1,$
then $T $ is invertible and $T^{-1} \in \operatorname{Lip}_0(\mathcal{X})$.
\end{enumerate}
\end{theorem}
In the study of Lipschitz functions it is natural to pass from metric spaces to the setting of Banach spaces, where the tools of functional analysis become available. This is achieved through the following theorem.
\begin{theorem}(cf. \cite{WEAVER, KALTON, ARENSEELLS})\label{POINTEDSPLITS}
Let $(\mathcal{M},0)$ be a pointed metric space. Then there exists a Banach space $\mathcal{F}(\mathcal{M})$ and an isometric embedding $e:\mathcal{M} \to \mathcal{F}(\mathcal{M})$ satisfying
the following universal property: for each Banach space $\mathcal{X}$ and each $f \in \operatorname{Lip}_0(\mathcal{M}, \mathcal{X})$, there is a unique bounded linear operator
$T_f :\mathcal{F}(\mathcal{M})\to \mathcal{X} $ such that $T_fe=f$, i.e., the following diagram commutes.
\begin{center}
\[
\begin{tikzcd}
\mathcal{M} \arrow[d,"e"] \arrow[dr,"f"]\\
\mathcal{F}(\mathcal{M}) \arrow[r,"T_f"] & \mathcal{X}
\end{tikzcd}
\]
\end{center}
Further, $\|T_f\|=\|f\|_{\operatorname{Lip}_0}$. This property characterizes the pair $(\mathcal{F}(\mathcal{M}),e)$ uniquely up to isometric isomorphism. Moreover, the map
$\operatorname{Lip}_0(\mathcal{M}, \mathcal{X})\ni f \mapsto T_f \in \mathcal{B}(\mathcal{F}(\mathcal{M}),\mathcal{X}) $ is an isometric isomorphism.
\end{theorem}
The space $\mathcal{F}(\mathcal{M})$ is known as the \textbf{Arens-Eells space} or the \textbf{Lipschitz-free Banach space} over $\mathcal{M}$ (\cite{GODEFROYSURVEY}). Theorem \ref{POINTEDSPLITS} tells us that in order to `find' the space $\operatorname{Lip}_0(\mathcal{M}, \mathcal{X})$, we can first find $\mathcal{F}(\mathcal{M})$ and then $\mathcal{B}(\mathcal{F}(\mathcal{M}),\mathcal{X})$. In particular, $\operatorname{Lip}_0(\mathcal{M}, \mathbb{K})$ is isometrically isomorphic to $\mathcal{F}(\mathcal{M})^*$. For this reason $\mathcal{F}(\mathcal{M})$ is also called the \textbf{predual} of the metric space $\mathcal{M}$. The bounded linear operator $T_f$ is called the \textbf{linearization} of $f$. We now mention some examples of Lipschitz-free spaces for certain metric spaces.
\begin{example}(cf. \cite{WEAVER}, \cite{DUBEITYMCHATYN})
\begin{enumerate}[label=(\roman*)]
\item If $\mathbb{R}$ is considered with the usual metric, then $\mathcal{F}(\mathbb{R}) \cong \mathcal{L}^1(\mathbb{R})$.
\item If $\mathcal{M}$ is any separable metric tree, then $\mathcal{F}(\mathcal{M})\cong \mathcal{L}^1([0,1])$.
\item If $\mathcal{M}$ is any set equipped with the metric $d(x,y)\coloneqq 2$ whenever $x,y \in \mathcal{M}$ with $x\neq y$ and $d(x,x)\coloneqq 0$, $\forall x \in \mathcal{M}$, then $\mathcal{F}(\mathcal{M})\cong \ell^1(\mathcal{M})$.
\item If $\mathbb{N}$ is considered with the usual metric, then $\mathcal{F}(\mathbb{N})\cong \ell^1(\mathbb{N})$.
\item $\mathcal{F}(\ell^1(\mathbb{N}))\cong \mathcal{L}^1(\mathbb{R})$.
\end{enumerate}
\end{example}
In the theory of bounded linear operators between Banach spaces, an operator is
said to be compact if the image of the unit ball under the operator is
precompact (\cite{FABIAN}). Linearity of the operator then yields various
characterizations of compactness, and compact operators play an important role in rich theories such as the theory of integral equations, spectral
theory, the theory of Fredholm operators, operator algebras (C*-algebras),
K-theory, the Calkin algebra, (operator) ideal theory, approximation
properties of Banach spaces and Schauder basis theory. Lack of linearity is a
hurdle when one tries to define compactness of non-linear maps. This hurdle was
successfully crossed in \cite{JIMENEZSEPUILCREMOISES}, which began the study of Lipschitz compact
operators.
\begin{definition}(\cite{JIMENEZSEPUILCREMOISES})
If $\mathcal{M}$ is a metric space and $\mathcal{X}$ is a Banach space, then the \textbf{Lipschitz image} of a Lipschitz map (also called a \textbf{Lipschitz operator}) $f:\mathcal{M}\rightarrow \mathcal{X}$ is defined as the set
\begin{align}\label{LIPSCHITZIMAGE}
\left\{\frac{f(x)-f(y)}{d(x,y)}:x, y \in \mathcal{M}, x\neq y\right\}.
\end{align}
\end{definition}
We observe that whenever an operator is linear, the set in (\ref{LIPSCHITZIMAGE}) is simply the image
of the unit sphere.
\begin{definition}(\cite{JIMENEZSEPUILCREMOISES})\label{LIPSCHITZCOMPACTDEFINITION}
If $(\mathcal{M}, 0)$ is a pointed metric space and $\mathcal{X}$ is a
Banach space, then a Lipschitz map $f:\mathcal{M}\rightarrow \mathcal{X}$ such
that $f(0)=0$ is said to be \textbf{Lipschitz compact} if its Lipschitz image is
relatively compact in $\mathcal{X}$, i.e., the closure of the set in
(\ref{LIPSCHITZIMAGE}) is compact in $\mathcal{X}$.
\end{definition}
As shown in \cite{JIMENEZSEPUILCREMOISES}, there is a large collection of Lipschitz compact operators. To state this, we first need a definition.
\begin{definition}(\cite{CHENZHENG})
Let $(\mathcal{M}, 0)$ be a pointed metric space and $\mathcal{X}$ be a Banach space. A Lipschitz operator $f:\mathcal{M}\rightarrow \mathcal{X}$ such that $f(0)=0$ is said to be \textbf{strongly Lipschitz p-nuclear} ($1\leq p <\infty$) if there exist operators $A \in \mathcal{B}(\ell^p(\mathbb{N}), \mathcal{X})$, $g \in \operatorname{Lip}_0(\mathcal{M},
\ell^\infty(\mathbb{N}))$ and a diagonal operator $M_\lambda \in \mathcal{B}(\ell^\infty(\mathbb{N}), \ell^p(\mathbb{N}))$ induced by a sequence $\lambda \in \ell^p(\mathbb{N})$ such that $f=AM_\lambda g$, i.e., the following diagram commutes.
\begin{center}
\[
\begin{tikzcd}
\mathcal{M} \arrow[r, "f"]\arrow[d, "g"]& \mathcal{X} \\
\ell^\infty(\mathbb{N})\arrow[r, "M_\lambda" ]& \ell^p(\mathbb{N})\arrow[u, "A"
]
\end{tikzcd}
\]
\end{center}
\end{definition}
\begin{proposition}(\cite{JIMENEZSEPUILCREMOISES})
Every strongly Lipschitz p-nuclear operator from a pointed metric space to a Banach space is Lipschitz compact.
\end{proposition}
Since the image of a linear operator is a subspace, the natural definition of a finite rank operator is that its image is a finite dimensional subspace. The image of a Lipschitz map, however, need not be a subspace. Thus care has to be taken when defining the rank of such maps.
\begin{definition}(\cite{JIMENEZSEPUILCREMOISES})\label{LFR}
If $(\mathcal{M}, 0)$ is a pointed metric space and $\mathcal{X}$ is a
Banach space, then a Lipschitz function $f:\mathcal{M}\rightarrow \mathcal{X}$
such that $f(0)=0$ is said to have \textbf{Lipschitz finite dimensional rank} if the linear
hull of its Lipschitz image is a finite dimensional subspace of $\mathcal{X}$.
\end{definition}
\begin{definition}(\cite{JIMENEZSEPUILCREMOISES})\label{FR}
If $\mathcal{M}$ is a metric space and $\mathcal{X}$ is a Banach space, then a Lipschitz function $f:\mathcal{M}\rightarrow \mathcal{X}$ is said to have \textbf{finite dimensional rank} if the linear hull of its image is a finite dimensional subspace of $\mathcal{X}$.
\end{definition}
The next theorem shows that for pointed metric spaces, Definitions \ref{LFR} and \ref{FR} are
equivalent.
\begin{theorem}(\cite{JIMENEZSEPUILCREMOISES, ACHOUR})\label{LIPSCHITCOMPACTIFFLINEAR}
Let $(\mathcal{M}, 0)$ be a pointed metric space and $\mathcal{X}$ be a Banach space. For a Lipschitz function $f:\mathcal{M}\rightarrow \mathcal{X}$ such that $f(0)=0$, the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $f$ has Lipschitz finite dimensional rank.
\item $f$ has finite dimensional rank.
\item There exist $ f_1, \dots, f_n$ in $\operatorname{Lip}_0(\mathcal{M}, \mathbb{K})$ and $\tau_1, \dots, \tau_n$ in $\mathcal{X}$ such that
\begin{align*}
f(x)=\sum_{k=1}^{n}f_k(x)\tau_k, \quad \forall x \in \mathcal{M}.
\end{align*}
\end{enumerate}
\end{theorem}
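Condition (iii) is easy to visualize numerically. The following sketch is an assumed example (not from the cited sources): $\mathcal{M}=\mathbb{R}$, $\mathcal{X}=\mathbb{R}^3$, with scalar Lipschitz functions $f_1=\sin$ and $f_2=|\cdot|$, both vanishing at $0$, so the resulting map has (Lipschitz) rank at most $2$.

```python
import numpy as np

# Fixed vectors tau_1, tau_2; the linear hull of the image of f lies in
# their (2-dimensional) span.
tau_1 = np.array([1.0, 0.0, 0.0])
tau_2 = np.array([0.0, 1.0, 1.0])

def f(x):
    # f(x) = f_1(x) tau_1 + f_2(x) tau_2 with f_1 = sin, f_2 = abs;
    # both scalar functions are Lipschitz and vanish at 0.
    return np.sin(x) * tau_1 + abs(x) * tau_2

# Sample the image: the rank of the sample matrix is at most 2.
samples = np.array([f(x) for x in np.linspace(-2.0, 2.0, 50)])
rank = np.linalg.matrix_rank(samples)
```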
In Hilbert spaces (but not in general Banach spaces), every compact operator can be approximated in the operator norm by finite rank
operators (cf. \cite{FABIAN}). The following is the corresponding definition of approximability
for Lipschitz maps.
\begin{definition}(\cite{JIMENEZSEPUILCREMOISES})
If $(\mathcal{M}, 0)$ is a pointed metric space and $\mathcal{X}$ is a Banach space, then a Lipschitz function $f:\mathcal{M}\rightarrow \mathcal{X}$ such that $f(0)=0$ is said to be \textbf{Lipschitz approximable} if it is the limit in the Lipschitz norm of a sequence of Lipschitz finite rank operators from $\mathcal{M}$ to $\mathcal{X}$.
\end{definition}
\begin{theorem}(\cite{JIMENEZSEPUILCREMOISES})\label{LIPSCHITZAPPROMABLEISCOMPACT}
Every Lipschitz approximable operator from a pointed metric space $(\mathcal{M}, 0)$ to a Banach space $\mathcal{X}$ is Lipschitz compact.
\end{theorem}
\vspace{0.5cm}
{\onehalfspacing \section{OPERATOR-VALUED ORTHONORMAL BASES, RIESZ \\BASES, FRAMES AND BESSEL SEQUENCES IN HILBERT SPACES}
Through a decade-long research effort, the frame theory for Hilbert spaces was extended to a large extent by
Kaftal, Larson and Zhang with the introduction of the notion of an operator-valued frame (OVF) in 2009. In the theory of operator-valued frames, the sequence $\{L_n\}_{n}$ of operators plays an important role. These operators are defined as follows.
\begin{definition}(\cite{KAFTALLARSONZHANG})\label{LJDEFINITION}
Given $n \in \mathbb{N}$, we define
\begin{align*}
L_n : \mathcal{H}_0 \ni h \mapsto L_nh\coloneqq e_n\otimes h \in \ell^2(\mathbb{N}) \otimes \mathcal{H}_0,
\end{align*} where $\{e_n\}_{n} $ is the standard orthonormal basis for $\ell^2(\mathbb{N})$.
\end{definition}
The following proposition records the basic properties of the operators $L_n$.
\begin{proposition}(\cite{KAFTALLARSONZHANG})\label{LJORTHO}
The operators $L_n$ in Definition \ref{LJDEFINITION} satisfy the following.
\begin{enumerate}[label=(\roman*)]
\item Each $L_n$ is an isometry from $\mathcal{H}_0 $ to $ \ell^2(\mathbb{N}) \otimes \mathcal{H}_0$, and for
$ n,m \in \mathbb{N}$ we have
\begin{align}\label{LEQUATION}
L_n^*L_m =
\left\{
\begin{array}{ll}
I_{\mathcal{H}_0 } & \mbox{if } n=m \\
0 & \mbox{if } n\neq m
\end{array}
\right.
~\text{and} \quad
\sum\limits_{n=1}^\infty L_nL_n^*=I_{\ell^2(\mathbb{N})}\otimes I_{\mathcal{H}_0}
\end{align}
where the convergence is in the strong-operator topology.
\item $L_m^*(\{a_n\}_{n}\otimes y) =a_my,$ $ \forall \{a_n\}_{n} \in \ell^2(\mathbb{N}), \forall y \in \mathcal{H}_0$, for each $ m $ in $ \mathbb{N}.$
\end{enumerate}
\end{proposition}
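In a finite-dimensional truncation (replacing $\ell^2(\mathbb{N})$ by $\mathbb{R}^N$ and $\mathcal{H}_0$ by $\mathbb{R}^d$; a sketch with $N$ and $d$ chosen arbitrarily), the operators $L_n$ become Kronecker products and the relations of the proposition become matrix identities.

```python
import numpy as np

# Finite model: l^2(N) ~ R^N, H_0 ~ R^d, and L_n h = e_n (x) h realized
# as the Kronecker product kron(e_n, I_d), an (N*d) x d isometry.
N, d = 4, 3
I_d = np.eye(d)

def L(n):
    e_n = np.zeros((N, 1))
    e_n[n] = 1.0
    return np.kron(e_n, I_d)

# L_n* L_m = delta_{nm} I_{H_0} and sum_n L_n L_n* = identity on R^{N*d}.
gram = {(n, m): L(n).T @ L(m) for n in range(N) for m in range(N)}
resolution = sum(L(n) @ L(n).T for n in range(N))
```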
Orthonormal bases for Hilbert spaces are defined in Definition \ref{ONBDEFINITIONOLE}. Motivated by Definition \ref{ONBDEFINITIONOLE} and the Parseval identity ((iv) in Theorem \ref{CHARORTHONORMALINTRO}) for orthonormal bases, \cite{SUN1} defined the notion of an orthonormal basis of operators.
\begin{definition}(\cite{SUN1})\label{ONBDEFINITIONOVHS}
A collection $ \{F_n \}_{n}$ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is said to be an \textbf{orthonormal basis} or \textbf{a G-basis} in $ \mathcal{B}(\mathcal{H},\mathcal{H}_0)$ if
$$\langle F_n ^*y, F_k^*z\rangle=\delta_{n,k}\langle y, z\rangle , \quad \forall y, z \in \mathcal{H}_0, ~\forall n , k \in \mathbb{N}$$
and
$$ \sum\limits_{n=1}^\infty\|F_n h\|^2=\|h\|^2, \quad \forall h \in \mathcal{H}.$$
\end{definition}
We observe that $\langle F_n ^*y, F_k^*z\rangle=\delta_{n ,k}\langle y, z\rangle$, $\forall y, z \in \mathcal{H}_0$, $\forall n , k \in \mathbb{N}$ if and only if $F_n F_k^*=\delta_{n ,k}I_{\mathcal{H}_0}$, $\forall n , k \in \mathbb{N}$. Hence if $ \{F_n \}_{n}$ is an orthonormal basis, then $\|F_n \|^2=\|F_n F_n ^*\|=1, \forall n \in \mathbb{N} $ and $ \sum_{n=1}^\infty F_n ^*F_n =I_\mathcal{H}$. Further, using Proposition \ref{LJORTHO} we have
\begin{align*}
\left (\sum_{n=1}^\infty F_n^*L^*_n\right)\left(\sum_{k=1}^\infty L_kF_k\right)=\sum_{n=1}^\infty F_n^*F_n=I_\mathcal{H}.
\end{align*}
Consider the case $\mathcal{H}_0=\mathbb{K}$. For each $n\in \mathbb{N}$, by the Riesz representation theorem (cf. \cite{LIMAYE}), there exists a unique $\tau_n\in \mathcal{H}$ such that $F_nh=\langle h, \tau_n\rangle, \forall h \in \mathcal{H}$. Now the first condition in Definition \ref{ONBDEFINITIONOVHS} says $\langle F_n^*y, F_k^*z\rangle=y\overline{z}\langle \tau_n, \tau_k\rangle=y\overline{z}\delta_{n,k}, \forall n,k\in \mathbb{N},\forall y, z \in \mathbb{K}$, which shows that $ \{\tau_n\}_{n}$ is orthonormal. The second condition in Definition \ref{ONBDEFINITIONOVHS} says that $\sum_{n=1}^\infty|\langle h, \tau_n\rangle|^2=\|h\|^2, \forall h \in \mathcal{H}$. Theorem \ref{CHARORTHONORMALINTRO} now shows that $ \{\tau_n\}_{n}$ is an orthonormal basis for $\mathcal{H}$. Hence Definition \ref{ONBDEFINITIONOVHS} generalizes the definition of orthonormal basis.
\begin{example}\label{ONBOVF}
\begin{enumerate}[label=(\roman*)]
\item (\cite{SUN1}) Let $ \{\tau_n\}_{n}$ be an orthonormal basis for $\mathcal{H}$. Define $F_n : \mathcal{H} \ni h \mapsto \langle h, \tau_n\rangle \in \mathbb{K} $, for each $ n \in \mathbb{N}$. Then $ \{F_n\}_{n } $ is an operator-valued orthonormal basis in $ \mathcal{B}(\mathcal{H}, \mathbb{K})$.
\item If $ U: \mathcal{H}\rightarrow \mathcal{H}_0$ is unitary, then $\{U\}$ is an orthonormal basis in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$.
\item \label{CUNTZ} Let $n\geq 2 $ and $A_1, \dots, A_n$ be $n$ isometries on $\ell^2(\mathbb{N})$ such that $A_1A_1^*+\cdots +A_nA_n^*=I_{\ell^2(\mathbb{N})}$ (these are Cuntz algebra generators (\cite{CUNTZ})). We then have
\begin{align*}
(A_jA_j^*)^2=A_j(A_j^*A_j)A_j^*=A_jI_{\ell^2(\mathbb{N})}A_j^*=A_jA_j^*.
\end{align*}
Hence $A_jA_j^*$'s are projections. Further $A_j^*A_k=0$, $ \forall j\neq k$.
Therefore $ \{A_j^*\}_{j=1}^n $ is an operator-valued orthonormal basis in $ \mathcal{B}(\ell^2(\mathbb{N}))$.
\item Equation (\ref{LEQUATION}) says that $ \{L^*_n\}_{n}$ is an orthonormal basis in $ \mathcal{B}(\ell^2(\mathbb{N})\otimes\mathcal{H}_0, \mathcal{H}_0)$.
\end{enumerate}
\end{example}
Using Theorem \ref{RIESZBASISTHM}, \cite{SUN1} defined the notion of Riesz basis for operators.
\begin{definition}(\cite{SUN1})
A collection $\{A_n\}_{n}$ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is said to be an \textbf{operator-valued Riesz basis} or \textbf{G-Riesz basis} in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ if it satisfies the following.
\begin{enumerate}[label=(\roman*)]
\item $\{h \in \mathcal{H}:A_nh=0, \forall n \in \mathbb{N}\}=\{0\}$.
\item There exist $a,b>0$ such that for every finite subset $\mathbb{S}$ of $\mathbb{N}$,
\begin{align*}
a\sum_{n \in \mathbb{S}}\|h_n\|^2\leq \left\|\sum_{n \in \mathbb{S}}A_n^*h_n\right\|^2 \leq b \sum_{n \in \mathbb{S}}\|h_n\|^2, \quad\forall h_n \in \mathcal{H}_0.
\end{align*}
\end{enumerate}
\end{definition}
\begin{example}
\begin{enumerate}[label=(\roman*)]
\item (\cite{SUN1}) Let $ \{\tau_n\}_{n}$ be a Riesz basis for $\mathcal{H}$. Define $A_n : \mathcal{H} \ni h \mapsto \langle h, \tau_n\rangle \in \mathbb{K} $, for each $ n \in \mathbb{N}$. Then $A_n^*y=y\tau_n, \forall y \in \mathbb{K}$. Now from Theorem \ref{RIESZBASISTHM}, $ \{A_n\}_{n} $ is an operator-valued Riesz basis in $ \mathcal{B}(\mathcal{H}, \mathbb{K})$.
\item If $ U: \mathcal{H}\rightarrow \mathcal{H}_0$ is invertible, then $\{U\}$ is an operator-valued Riesz basis in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$.
\item Let $A_1,\dots, A_n$ be as in \ref{CUNTZ} of Example \ref{ONBOVF} and let $ A,B \in \mathcal{B}(\mathcal{H})$ be invertible. Then $ \{AA_j^*B\}_{j=1}^n $ is an operator-valued Riesz basis in $ \mathcal{B}(\mathcal{H})$. In fact, if $AA_j^*Bh=0, \forall j$, then $A_j^*Bh=0, \forall j $ which gives
\begin{align*}
h=B^{-1}Bh=B^{-1}\left(\sum_{k=1}^nA_kA_k^*Bh\right)=0
\end{align*}
and for every subset $ \mathbb{S}$ of $ \{1, \dots, n\}$,
\begin{align*}
(\|B^{-1}\|\|A^{-1}\|)^{-1}\sum\limits_{j\in \mathbb{S}}\|h_j\|^2\leq \left\| \sum\limits_{j\in\mathbb{S}}B^*A_jA^*h_j \right \|^2 \leq \|B\|\|A\|\sum\limits_{j\in \mathbb{S}}\|h_j\|^2,
\end{align*}
for all $h_j \in \mathcal{H}$.
\end{enumerate}
\end{example}
\cite{SUN1} derived the following result, which shows that the notion of operator-valued Riesz basis can be defined in a manner similar to Definition \ref{RIESZBASISDEFINITION}.
\begin{theorem}(\cite{SUN1})
A collection $\{A_n\}_{n}$ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is an operator-valued Riesz basis if and only if there exist an operator-valued orthonormal basis $ \{F_n\}_{n}$ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ and an invertible $U\in \mathcal{B}(\mathcal{H})$ such that $A_n=F_nU, \forall n$.
\end{theorem}
Historically, many generalizations of frames for Hilbert spaces have been proposed, such as frames for subspaces (\cite{CASAZZASUBSPACE}), fusion frames (\cite{CASAZZAFUSION}), outer frames (\cite{OUTER}), oblique frames (\cite{CHRISTENSENOBLIQUE}), pseudo frames (\cite{LIPSEUDO}) and quasi-projectors (\cite{FORNASIERQUASI}). It was in 2006 that Sun gave the definition of a G-frame, which unified all these notions of frames for Hilbert spaces.
\begin{definition}(\cite{SUN1})\label{SUNDEF}
A collection $ \{A_n\}_{n} $ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is said to be a \textbf{G-frame} in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ if there exist $ a, b >0$ such that
\begin{align*}
a\|h\|^2\leq\sum_{n=1}^\infty\|A_nh\|^2 \leq b\|h\|^2 ,\quad \forall h \in \mathcal{H}.
\end{align*}
\end{definition}
The basic idea behind the notion of OVF is the following. Definition \ref{OLE} can be written in an equivalent form as
\begin{align}\label{SEQUENTIALEQUATION2}
&\text{the map}~ S_\tau: \mathcal{H} \ni h \mapsto \sum_{n=1}^\infty\langle h, \tau_n\rangle \tau_n \in \mathcal{H} ~\text{is a well-defined bounded} \nonumber\\
&\text{positive invertible operator}.
\end{align}
If we now define $ A_n : \mathcal{H} \ni h \mapsto \langle h, \tau_n\rangle \in \mathbb{K} $, for each $ n \in \mathbb{N}$, then Statement (\ref{SEQUENTIALEQUATION2}) can be rewritten as
\begin{align}\label{SEQUENTIALEQUATION3}
&\sum_{n=1}^\infty A_n^*A_n ~\text{converges in the strong-operator topology on } \mathcal{B}(\mathcal{H}) \text{ to a} \nonumber\\
&\text{ bounded positive invertible operator.}
\end{align}
Now Statement (\ref{SEQUENTIALEQUATION3}) leads to
\begin{definition}(\cite{KAFTALLARSONZHANG})\label{KAFTAL}
A collection $ \{A_n\}_{n} $ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is said to be an \textbf{operator-valued frame} (OVF) in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ if the series
\begin{align*}
\text{(\textbf{Operator-valued frame operator})}\quad S_A\coloneqq \sum_{n=1}^\infty A_n^*A_n
\end{align*}
converges in the strong-operator topology on $ \mathcal{B}(\mathcal{H})$ to a bounded invertible operator.\\
Constants $ a, b>0$ satisfying $aI_\mathcal{H}\leq S_A\leq bI_\mathcal{H}$ are called lower and upper frame bounds, respectively. The supremum (resp. infimum) of the set of all lower (resp. upper) frame bounds is called the optimal lower (resp. upper) frame bound. If the optimal frame bounds are equal, then the frame is called a tight operator-valued frame. A tight operator-valued frame whose optimal frame bound is one is termed a Parseval operator-valued frame.
\end{definition}
Before proceeding, we first show that Definitions \ref{SUNDEF} and \ref{KAFTAL} are equivalent. For this, we first need a result.
\begin{theorem}(\cite{SUN1})\label{SUNIMPO}
If $ \{A_n\}_{n} $ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is a G-frame, then the map $S_A:\mathcal{H}\ni h \mapsto \sum_{n=1}^\infty A^*_nA_nh \in \mathcal{H}$ is a well-defined bounded linear invertible operator.
\end{theorem}
The following theorem establishes the equivalence of the notions of OVF and G-frame.
\begin{theorem}\label{OVFIFANDONLYIFGFRAME}
A collection $ \{A_n\}_{n} $ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is a G-frame in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ if and only if $ \{A_n\}_{n}$ is an operator-valued frame in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$.
\end{theorem}
\begin{proof}
$(\Rightarrow)$ Theorem \ref{SUNIMPO} says that $S_A$ is a bounded linear invertible operator. Since $A_n^*A_n\geq0, \forall n \in \mathbb{N}$, $S_A$ is positive. Hence $ \{A_n\}_{n}$ is an OVF.\\
$(\Leftarrow)$ Since $\sum_{n=1}^\infty A^*_nA_n$ is positive invertible, there are $a,b>0$ such that $aI_\mathcal{H}\leq\sum_{n=1}^\infty A^*_nA_n $ $\leq b I_\mathcal{H}$. This implies $a\|h\|^2\leq \langle\sum_{n=1}^\infty A^*_nA_nh, h\rangle =\sum_{n=1}^\infty\|A_nh\|^2\leq b \|h\|^2, \forall h \in \mathcal{H}$. Hence $ \{A_n\}_{n}$ is a G-frame.
\end{proof}
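The equivalence just proved can be illustrated numerically in finite dimensions. The following sketch is only an illustration, with randomly generated matrices standing in for the operators $A_n$ (all names and dimensions are our choices): it checks that $S_A=\sum_n A_n^*A_n$ is positive invertible and that its extreme eigenvalues serve as G-frame bounds.

```python
# Finite-dimensional illustration of the OVF <=> G-frame equivalence:
# random matrices A_n : R^4 -> R^2 play the role of the operators A_n.
import numpy as np

rng = np.random.default_rng(0)
d, k, N = 4, 2, 6
A = [rng.standard_normal((k, d)) for _ in range(N)]

S = sum(An.T @ An for An in A)          # frame operator S_A = sum A_n^* A_n
eigs = np.linalg.eigvalsh(S)
a, b = eigs[0], eigs[-1]                # candidate G-frame bounds
assert a > 0                            # S_A positive invertible => {A_n} is an OVF

# G-frame inequality: a||h||^2 <= sum_n ||A_n h||^2 <= b||h||^2
for _ in range(100):
    h = rng.standard_normal(d)
    s = sum(np.linalg.norm(An @ h) ** 2 for An in A)
    assert a * np.dot(h, h) - 1e-9 <= s <= b * np.dot(h, h) + 1e-9
```

Since $\sum_n \|A_nh\|^2=\langle S_Ah,h\rangle$, the inequality holds with exactly these eigenvalue bounds, mirroring the proof of the theorem.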
\begin{remark}
Even though Sun's paper (\cite{SUN1}) was published earlier than \cite{KAFTALLARSONZHANG}, the authors of \cite{KAFTALLARSONZHANG} mention in their introduction that they started that work in January 1999.
\end{remark}
\begin{example}
\begin{enumerate}[label=(\roman*)]
\item (\cite{SUN1}) Let $ \{\tau_n\}_{n}$ be a frame for $\mathcal{H}$. Define $A_n : \mathcal{H} \ni h \mapsto \langle h, \tau_n\rangle \in \mathbb{K} $, for each $ n \in \mathbb{N}$. Then $A_n^*y=y\tau_n, \forall y \in \mathbb{K}$. Now from Theorem \ref{MOSTIMPORTANT}, the map $\mathcal{H}\ni h \mapsto \sum_{n=1}^\infty A_n^*A_nh=\sum_{n=1}^\infty \langle h, \tau_n\rangle \tau_n\in \mathcal{H}$ is a well-defined positive invertible operator. Hence $ \{A_n\}_{n} $ is an operator-valued frame in $ \mathcal{B}(\mathcal{H}, \mathbb{K})$.
\item If $ A: \mathcal{H}\rightarrow \mathcal{H}_0$ is a bounded below linear operator, then $\{A\}$ is an operator-valued frame in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$.
\item Let $A_1,\dots, A_n$ be as in \ref{CUNTZ} of Example \ref{ONBOVF} and let $ A,B \in \mathcal{B}(\mathcal{H})$ be bounded below. Then $ \{AA_j^*B\}_{j=1}^n $ is an operator-valued frame in $ \mathcal{B}(\mathcal{H})$.
\end{enumerate}
\end{example}
The fundamental tool in the study of OVFs is the factorization of the frame operator $S_A$. This and other important properties of OVFs are stated in the following theorem.
\begin{theorem}(\cite{KAFTALLARSONZHANG}, \cite{SUN1})
Let $ \{A_n\}_{n} $ be an OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$. Then
\begin{enumerate}[label=(\roman*)]
\item $\overline{\operatorname{span}} \cup_{n=1}^\infty A_n^*(\mathcal{H}_0)=\mathcal{H}$.
\item The analysis operator
\begin{align*}
\theta_A:\mathcal{H} \ni h \mapsto \theta_A h\coloneqq\sum_{n=1}^\infty L_nA_n h \in \ell^2(\mathbb{N}) \otimes \mathcal{H}_0
\end{align*}
is a well-defined bounded linear operator. Further, $\sqrt{a}\|h\|\leq \|\theta_A h\|\leq \sqrt{b}\|h\|, \forall h \in \mathcal{H}$. In particular, $\theta_A$ is injective and its range is closed.
\item We have
\begin{align*}
a\|h\|^2\leq \langle S_A h, h\rangle\leq b\|h\|^2, \forall h \in \mathcal{H}\text{ and } a\|h\|\leq \|S_A h\|\leq b\|h\|, \forall h \in \mathcal{H}.
\end{align*}
\item $h=\sum_{n=1}^\infty(A_nS_A^{-1})^*A_nh=\sum_{n=1}^\infty A_n^*(A_nS_A^{-1})h, \forall h \in \mathcal{H}$.
\item $\theta_A^*z= \sum\limits_{n=1}^\infty A_n^*L_n^*z $, $ \forall z \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0$.
\item The frame operator factors as $S_A=\theta_A^*\theta_A.$
\item $\theta_A^*$ is surjective.
\item $\|S_A^{-1}\|^{-1}$ is the optimal lower frame bound and $\|S_A\|=\|\theta_A\|^2$ is the optimal upper frame bound.
\item $ P_A \coloneqq \theta_A S_A^{-1} \theta_A^*:\ell^2(\mathbb{N})\otimes \mathcal{H}_0 \to \ell^2(\mathbb{N})\otimes \mathcal{H}_0$ is an orthogonal projection onto $ \theta_A(\mathcal{H})$.
\item $ \{A_n\}_{n}$ is Parseval if and only if $\theta_A$ is an isometry if and only if $\theta_A\theta_A^*$ is a projection.
\item $ \{A_nS_A^{-1}\}_{n}$ is an OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ with bounds $b^{-1}$ and $a^{-1}$.
\item $ \{A_nS_A^{-1/2}\}_{n}$ is a Parseval OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$.
\item (\textbf{Best approximation}) If $ h \in \mathcal{H}$ has representation $ h=\sum_{n=1}^\infty A_n^*y_n,$ for some sequence $ \{y_n\}_{n}$ in $\mathcal{H}_0$, then
$$ \sum\limits_{n=1}^\infty \|y_n\|^2 =\sum\limits_{n=1}^\infty \|A_nS_A^{-1}h\|^2+\sum\limits_{n=1}^\infty \| y_n-A_nS_A^{-1}h\|^2. $$
\end{enumerate}
\end{theorem}
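Several items of the theorem, in particular the factorization $S_A=\theta_A^*\theta_A$, the reconstruction via the canonical dual $\{A_nS_A^{-1}\}_n$, and the projection $P_A$, can be verified numerically in finite dimensions. The sketch below (random matrices standing in for the $A_n$; our choice of dimensions) is only an illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, N = 4, 2, 6
A = [rng.standard_normal((k, d)) for _ in range(N)]

theta = np.vstack(A)                    # analysis operator theta_A (stacked)
S = theta.T @ theta                     # factorization S_A = theta_A^* theta_A
assert np.allclose(S, sum(An.T @ An for An in A))

Sinv = np.linalg.inv(S)
h = rng.standard_normal(d)
# reconstruction h = sum_n A_n^* (A_n S_A^{-1} h)
h_rec = sum(An.T @ (An @ (Sinv @ h)) for An in A)
assert np.allclose(h_rec, h)

# P_A = theta_A S_A^{-1} theta_A^* is an orthogonal projection onto ran(theta_A)
P = theta @ Sinv @ theta.T
assert np.allclose(P @ P, P) and np.allclose(P, P.T)
```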
The notions of duality, orthogonality, and similarity for frames in Hilbert spaces have analogues for operator-valued frames. We now recall these notions and mention some results.
\begin{definition}(\cite{KAFTALLARSONZHANG})
An OVF $ \{B_n\}_{n}$ in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is said to be a \textbf{dual} for an OVF $ \{A_n\}_{n}$ in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ if $ \sum_{n=1}^\infty B^*_nA_n=I_{\mathcal{H}}.$
\end{definition}
\begin{definition}(\cite{KAFTALLARSONZHANG})
An OVF $ \{B_n\}_{n}$ in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is said to be \textbf{orthogonal} to an OVF $ \{A_n\}_{n} $ in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ if $ \sum_{n=1}^\infty B^*_nA_n=0. $
\end{definition}
\begin{proposition}(\cite{KAFTALLARSONZHANG})
Let $ \{A_n\}_{n}$ and $ \{B_n\}_{n}$ be two Parseval OVFs in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ which are orthogonal. If $C,D \in \mathcal{B}(\mathcal{H})$ are such that $ C^*C+D^*D=I_\mathcal{H}$, then $ \{A_nC+B_nD\}_{n} $ is a Parseval OVF in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$. In particular, if scalars $ c,d$ satisfy $|c|^2+|d|^2 =1$, then $ \{cA_n+dB_n\}_{n} $ is a Parseval OVF.
\end{proposition}
\begin{proposition}(\cite{KAFTALLARSONZHANG})
If $ \{A_n\}_{n},$ and $ \{B_n\}_{n} $ are orthogonal OVFs in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$, then $\{A_n\oplus B_n\}_{n}$ is an OVF in $ \mathcal{B}(\mathcal{H}\oplus \mathcal{H}, \mathcal{H}_0).$ Further, if both $ \{A_n\}_{n} $ and $ \{B_n\}_{n} $ are Parseval, then $\{A_n\oplus B_n\}_{n}$ is Parseval.
\end{proposition}
\begin{definition}(\cite{KAFTALLARSONZHANG})\label{SIMILARITYOVFKAFTALLARSONZHANG}
An OVF $ \{B_n\}_{n} $ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is said to be \textbf{similar} or \textbf{equivalent} to an OVF $ \{A_n\}_{n} $ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ if there exists a bounded invertible $ R_{A,B} \in \mathcal{B}(\mathcal{H})$ such that $ B_n=A_nR_{A,B} , \forall n \in \mathbb{N}.$
\end{definition}
Similar frames are closely related: knowing the analysis, synthesis, and frame operators of one determines those of the other.
\begin{lemma}(\cite{KAFTALLARSONZHANG})
Let $ \{A_n\}_{n} $ and $ \{B_n\}_{n}$ be similar OVFs and $B_n=A_nR_{A,B} , \forall n \in \mathbb{N}$, for some invertible $ R_{A,B} \in \mathcal{B}(\mathcal{H}).$ Then
\begin{enumerate}[label=(\roman*)]
\item $ \theta_B=\theta_A R_{A,B}$.
\item $S_{B}=R_{A,B}^*S_{A}R_{A,B}$.
\item $P_{B}=P_{A}.$
\end{enumerate}
\end{lemma}
There is a complete classification of similarity using operators.
\begin{theorem}(\cite{KAFTALLARSONZHANG})
For two OVFs $ \{A_n\}_{n} $ and $ \{B_n\}_{n} $, the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $B_n=A_nR_{A,B} , \forall n \in \mathbb{N},$ for some invertible $ R_{A,B} \in \mathcal{B}(\mathcal{H}). $
\item $\theta_B=\theta_AR_{A,B} $ for some invertible $ R_{A,B} \in \mathcal{B}(\mathcal{H}). $
\item $P_{B}=P_{A}.$
\end{enumerate}
If one of the above conditions is satisfied, then the invertible operators in $ \operatorname{(i)}$ and $ \operatorname{(ii)}$ are unique and are given by $R_{A,B}=S_{A}^{-1}\theta_A^*\theta_B$. In the case that $ \{A_n\}_{n} $ is Parseval, $ \{B_n\}_{n}$ is Parseval if and only if $R_{A,B}$ is unitary.
\end{theorem}
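The formula $R_{A,B}=S_A^{-1}\theta_A^*\theta_B$ and the invariance $P_B=P_A$ can be checked numerically in finite dimensions. The following sketch (random matrices, our choice of a small perturbation of the identity to guarantee invertibility of $R$) is only an illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, N = 4, 2, 6
A = [rng.standard_normal((k, d)) for _ in range(N)]
R = np.eye(d) + 0.1 * rng.standard_normal((d, d))   # invertible (small perturbation of I)
B = [An @ R for An in A]                            # OVF similar to {A_n}

thA, thB = np.vstack(A), np.vstack(B)
SA, SB = thA.T @ thA, thB.T @ thB

# the similarity is recovered by R_{A,B} = S_A^{-1} theta_A^* theta_B
R_rec = np.linalg.solve(SA, thA.T @ thB)
assert np.allclose(R_rec, R)

# and the projections coincide: P_B = P_A
PA = thA @ np.linalg.solve(SA, thA.T)
PB = thB @ np.linalg.solve(SB, thB.T)
assert np.allclose(PA, PB)
```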
In the study of frames, rather than indexing by natural numbers or other index sets, it is in some cases more useful to study frames indexed by groups and generated by a single operator.
Let $ G$ be a discrete topological group and $ \{\chi_g\}_{g\in G}$ be the standard orthonormal basis for $\ell^2(G) $. Let $\lambda $ be the left regular representation of $ G$ defined by $ \lambda_g\chi_q(r)=\chi_q(g^{-1}r), $ $ \forall g, q, r \in G$ and $\rho $ be the right regular representation of $ G$ defined by $ \rho_g\chi_q(r)=\chi_q(rg), $ $\forall g, q, r \in G.$ By $\mathscr{L}(G) $, we mean the von Neumann algebra generated by unitaries $\{\lambda_g\}_{g\in G} $ in $ \mathcal{B}(\ell^2(G))$. Similarly $\mathscr{R}(G) $ denotes the von Neumann algebra generated by $\{\rho_g\}_{g\in G} $ in $ \mathcal{B}(\ell^2(G))$. We recall that $\mathscr{L}(G)'=\mathscr{R}(G)$, $ \mathscr{R}(G)'=\mathscr{L}(G) $ (cf. \cite{CONWAY}), where $\mathcal{A}'$ denotes the commutant of $\mathcal{A}\subseteq \mathcal{B}(\mathcal{H})$.
\begin{definition}(\cite{KAFTALLARSONZHANG})
Let $ \pi$ be a unitary representation of a discrete
group $ G$ on a Hilbert space $ \mathcal{H}.$ An operator $ A$ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is called an \textbf{operator frame generator} (resp. a \textbf{Parseval frame generator}) for $\pi$ if $\{A_g\coloneqq A \pi_{g^{-1}}\}_{g\in G}$ is an OVF (resp. a Parseval OVF) in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$.
\end{definition}
Frames generated by groups have the remarkable property that the frame operator belongs to the commutant of $\pi(G)$. These and other properties are given in the following proposition.
\begin{proposition}(\cite{KAFTALLARSONZHANG})
Let $ A$ and $ B$ be operator frame generators in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ for a unitary representation $ \pi$ of $G$ on $ \mathcal{H}.$ Then
\begin{enumerate}[label=(\roman*)]
\item $ \theta_A\pi_g=(\lambda_g\otimes I_{\mathcal{H}_0})\theta_A, \theta_B \pi_g=(\lambda_g\otimes I_{\mathcal{H}_0})\theta_B, \forall g \in G.$
\item $ \theta_A^*\theta_B$ is in the commutant $ \pi(G)'$ of $ \pi(G).$ Further, $ S_{A} \in \pi(G)'$.
\item $ \theta_AT\theta_A^*, \theta_AT\theta_B^* \in \mathscr{R}(G)\otimes \mathcal{B}(\mathcal{H}_0), \forall T \in \pi(G)'.$ In particular, $ P_{A} \in \mathscr{R}(G)$ $\otimes \mathcal{B}(\mathcal{H}_0). $
\end{enumerate}
\end{proposition}
The following theorem characterizes frames generated by a unitary representation without referring to the representation itself.
\begin{theorem}(\cite{KAFTALLARSONZHANG})
Let $ G$ be a discrete group, $ e$ be the identity of $G$ and $ \{A_g\}_{g\in G}$ be a Parseval OVF in $ \mathcal{B}(\mathcal{H},\mathcal{H}_0).$ Then there is a unitary representation $ \pi$ of $ G$ on $ \mathcal{H}$ for which
$$ A_g=A_e\pi_{g^{-1}}, \quad\forall g \in G$$
if and only if
$$A_{gp}A_{gq}^*=A_pA_q^* ,\quad \forall g,p,q \in G.$$
\end{theorem}
One of the most important results obtained in \cite{KAFTALLARSONZHANG} is the path-connectedness of the set of all generators of operator-valued frames generated by groups.
\begin{theorem}(\cite{KAFTALLARSONZHANG})\label{KAFTALPATHCONNECTED}
Let $\pi$ be a unitary representation of a discrete group $G$
on $ \mathcal{H}$. Suppose
\begin{align*}
\emptyset \neq \mathscr{F}_{G}\coloneqq\{&A \in \mathcal{B}(\mathcal{H},\mathcal{H}_0): \{A\pi_{g^{-1}}\}_{g \in G} ~\text{is an operator-valued frame in}\\
& \mathcal{B}(\mathcal{H},\mathcal{H}_0)\}.
\end{align*}
\begin{enumerate}[label=(\roman*)]
\item If $ \dim \mathcal{H}_0<\infty$, then $ \mathscr{F}_{G}$ is \textbf{path-connected} in the \textbf{operator-norm topology} on $\mathcal{B}(\mathcal{H},\mathcal{H}_0)$.
\item If $ \dim \mathcal{H}_0=\infty$, then $ \mathscr{F}_{G}$ is path-connected in the operator-norm topology on $\mathcal{B}(\mathcal{H},\mathcal{H}_0)$ if and only if the von Neumann algebra $\mathscr{R}(G)$ generated by the right regular representations of $G$ is \textbf{diffuse} (i.e., $\mathscr{R}(G)$ has no nonzero minimal projections).
\end{enumerate}
\end{theorem}
The following stability result for OVFs is due to \cite{SUNSTABILITY}.
\begin{theorem}(\cite{SUNSTABILITY})
Let $ \{A_n\}_{n} $ be an OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0) $ with frame bounds $ a$ and $b$. Suppose $ \{B_n\}_{n } $ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0) $ is such that there exist $\alpha, \beta, \gamma \geq 0 $ with $ \max\{\alpha+\frac{\gamma}{\sqrt{a}}, \beta\}<1$
and for all $m=1,2, \dots, $
\begin{align*}
\left\|\sum\limits_{n=1}^m(A_n^*-B_n^*)y_n\right\|\leq \alpha\left\|\sum\limits_{n=1}^mA_n^*y_n\right\|+\beta\left\|\sum\limits_{n=1}^mB_n^*y_n\right\|+\gamma \left(\sum\limits_{n=1}^m\|y_n\|^2\right)^\frac{1}{2},\nonumber
\quad \forall y_n \in \mathcal{H}_0.
\end{align*}
Then $\{B_n\}_{n} $ is an operator-valued frame in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0) $ with frame bounds
\begin{align*}
a\left(1-\frac{\alpha+\beta+\frac{\gamma}{\sqrt{a}}}{1+\beta}\right)^2 \text{~and~} b\left(1+\frac{\alpha+\beta+\frac{\gamma}{\sqrt{b}}}{1-\beta}\right)^2.
\end{align*}
\end{theorem}
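In finite dimensions the theorem can be tested numerically. With $B_n=A_n+E_n$ for small $E_n$, the hypothesis holds with $\alpha=\beta=0$ and $\gamma=(\sum_n\|E_n\|^2)^{1/2}$ (operator norms, by the Cauchy--Schwarz inequality), and the predicted bounds simplify to $(\sqrt{a}-\gamma)^2$ and $(\sqrt{b}+\gamma)^2$. The sketch below (random matrices, perturbation size of our choosing) checks that the eigenvalues of the perturbed frame operator fall inside the predicted range.

```python
import numpy as np

rng = np.random.default_rng(4)
d, k, N = 4, 2, 6
A = [rng.standard_normal((k, d)) for _ in range(N)]
a, b = np.linalg.eigvalsh(sum(An.T @ An for An in A))[[0, -1]]

# perturb: B_n = A_n + E_n; alpha = beta = 0, gamma = sqrt(sum ||E_n||^2)
E = [1e-2 * rng.standard_normal((k, d)) for _ in range(N)]
B = [An + En for An, En in zip(A, E)]
gamma = np.sqrt(sum(np.linalg.norm(En, 2) ** 2 for En in E))
assert gamma / np.sqrt(a) < 1           # hypothesis of the theorem

eb = np.linalg.eigvalsh(sum(Bn.T @ Bn for Bn in B))
lower = a * (1 - gamma / np.sqrt(a)) ** 2   # = (sqrt(a) - gamma)^2
upper = b * (1 + gamma / np.sqrt(b)) ** 2   # = (sqrt(b) + gamma)^2
assert lower - 1e-12 <= eb[0] and eb[-1] <= upper + 1e-12
```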
Like Bessel sequences for Hilbert spaces, there is an analogous notion for operators.
\begin{definition}(\cite{SUN1})
A collection $\{A_n\}_{n} $ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0) $ is said to be an \textbf{operator-valued Bessel sequence} (or \textbf{G-Bessel sequence}) if there exists $b>0$ such that
\begin{align*}
\text{ (\textbf{Operator-valued Bessel's inequality}) }\quad \sum\limits_{n=1}^\infty \|A_nh\|^2\leq b\|h\|^2,\quad \forall h \in \mathcal{H}.
\end{align*}
The constant $b$ is called a Bessel bound for $ \{A_n\}_{n }$.
\end{definition}
Similar to Theorem \ref{OVFIFANDONLYIFGFRAME} we have the following result.
\begin{theorem}
A collection $\{A_n\}_{n} $ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0) $ is an operator-valued Bessel sequence if and only if the series $ \sum_{n} A_n^*A_n$ converges in the strong-operator topology on $ \mathcal{B}(\mathcal{H})$ to a bounded operator.
\end{theorem}
Following are some examples of operator-valued Bessel sequences.
\begin{example}
\begin{enumerate}[label=(\roman*)]
\item (\cite{SUN1}) Let $ \{\tau_n\}_{n}$ be a Bessel sequence for $\mathcal{H}$. Define $A_n : \mathcal{H} \ni h \mapsto \langle h, \tau_n\rangle \in \mathbb{K} $, for each $ n \in \mathbb{N}$. Then $A_n^*y=y\tau_n, \forall y \in \mathbb{K}$. Now from Theorem \ref{OLEBESSELCHARACTERIZATION12}, the map $\mathcal{H}\ni h \mapsto \sum_{n=1}^\infty A_n^*A_nh=\sum_{n=1}^\infty \langle h, \tau_n\rangle \tau_n\in \mathcal{H}$ is a well-defined positive operator. Hence $ \{A_n\}_{n} $ is an operator-valued Bessel sequence in $ \mathcal{B}(\mathcal{H}, \mathbb{K})$.
\item From the operator-norm inequality, we see that any finite collection of operators is an operator-valued Bessel sequence.
\end{enumerate}
\end{example}
{\onehalfspacing \chapter{FRAMES FOR METRIC SPACES}\label{chap2} }
\vspace{0.5cm}
{\onehalfspacing \section{BASIC PROPERTIES}
In this chapter, we define frames for metric spaces and derive several fundamental properties.
\begin{definition}\label{PFRAMEFORMETRIC}(\textbf{p-frame for metric space})
Let $\mathcal{M}$ be a metric space and $p \in [1,\infty)$. A collection $\{f_n\}_{n}$ of Lipschitz functions in $ \operatorname{Lip}(\mathcal{M}, \mathbb{K})$ is said to be a \textbf{metric p-frame} or \textbf{Lipschitz p-frame} for $\mathcal{M}$ if there exist $a,b>0$ such that
\begin{align*}
a\,d(x,y)\leq \left(\sum_{n=1}^{\infty}|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}\leq b\,d(x,y),\quad \forall x, y \in \mathcal{M}.
\end{align*}
If we do not demand the first inequality, then we say $\{f_n\}_{n}$ is a metric p-Bessel sequence for $\mathcal{M}$.
\end{definition}
We note that whenever $\mathcal{M}$ is a Banach space and the $f_n$ are linear functionals, Definition \ref{PFRAMEFORMETRIC} reduces to Definition \ref{FRAMEDEFINITIONBANACH}. We now give various examples.
\begin{example}
Let $\{f_n\}_{n}$ be a p-frame for a Banach space $\mathcal{X}$. Choose any bi-Lipschitz function $A:\mathcal{X}\to \mathcal{X}$. Then it follows that $\{f_nA\}_{n}$ is a metric p-frame for $\mathcal{X}$.
\end{example}
\begin{example}\label{1FRAMEFIRST}
Let $1<a<\infty.$ Let us take $\mathcal{M}\coloneqq[a,\infty)$ and define $f_n:\mathcal{M}\to \mathbb{R}$ by
\begin{align*}
f_0(x)&\coloneqq 1, \quad \forall x \in \mathcal{M}\\
f_n(x)&\coloneqq \frac{(\log x)^n}{n!}, \quad \forall x \in \mathcal{M}, \forall n\geq 1.
\end{align*}
Then $f_n'(x)=\frac{(\log x)^{(n-1)}}{(n-1)!x}$, $\forall x \in \mathcal{M}, \forall n\geq1.$ Since $f_n'$ is bounded on $\mathcal{M}$, $\forall n\geq1$, it follows that $f_n$ is Lipschitz on $\mathcal{M}$, $\forall n\geq1$. For $x, y \in \mathcal{M}$ with $x<y$, we now see that
\begin{align*}
\sum_{n=0}^{\infty}|f_n(x)-f_n(y)|&=\sum_{n=0}^{\infty}\left|\frac{(\log
x)^n}{n!}-\frac{(\log y)^n}{n!}\right|=\sum_{n=0}^{\infty}\frac{(\log
y)^n}{n!}-\sum_{n=0}^{\infty}\frac{(\log x)^n}{n!}
\\
&=e^{\log y}-e^{\log x}=y-x=|x-y|.
\end{align*}
Hence $\{f_n\}_n$ is a metric 1-frame for $\mathcal{M}$.
\end{example}
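The series identity in the example can be checked numerically; the sketch below is only a sanity check (the truncation level is our choice, and suffices since $(\log x)^n/n!$ decays superexponentially).

```python
# Check sum_{n>=0} |f_n(x) - f_n(y)| = |x - y| for f_0 = 1,
# f_n(x) = (log x)^n / n!  on [a, infinity) with a > 1.
import math

def total_variation(x, y, terms=80):
    # the n = 0 terms cancel, so start the sum at n = 1
    return sum(abs(math.log(x) ** n / math.factorial(n)
                   - math.log(y) ** n / math.factorial(n))
               for n in range(1, terms))

for x, y in [(2.0, 5.0), (3.5, 3.6), (10.0, 2.0)]:
    assert abs(total_variation(x, y) - abs(x - y)) < 1e-9
```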
\begin{example}\label{1FRAMESECOND}
Let $0<a<b<1.$ We take $\mathcal{M}\coloneqq[\frac{1}{1-a},\frac{1}{1-b}]$ and define $f_n:\mathcal{M}\to \mathbb{R}$ by
\begin{align*}
f_n(x)&\coloneqq\left(1-\frac{1}{x}\right)^n, \quad \forall x \in \mathcal{M}, \forall n \geq 0.
\end{align*}
Then $f_n'(x)=\frac{n}{x^2}\left(1-\frac{1}{x}\right)^{n-1}$, $\forall x \in \mathcal{M}, \forall n\geq1.$ Therefore $f_n$ is a Lipschitz function, for each $n\geq1.$ We now see that $\{f_n\}_n$ is a metric 1-frame for $\mathcal{M}$. In fact, for $x, y \in \mathcal{M},$ with $x<y$, we have
\begin{align*}
\sum_{n=0}^{\infty}|f_n(x)-f_n(y)|&=\sum_{n=0}^{\infty}\left|\left(1-\frac{1}{x}\right)^n-\left(1-\frac{1}{y}\right)^n\right|=\sum_{n=0}^{\infty}\left(1-\frac{1}{y}\right)^n-\sum_{n=0}^{\infty}\left(1-\frac{1}{x}\right)^n
\\
&=y-x=|x-y|.
\end{align*}
\end{example}
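Again the identity can be checked numerically. Note that for $x\in\mathcal{M}$ one has $1-\frac{1}{x}\in(0,1)$, so the geometric series converge; we pick sample points with this property (our choice), and a truncation level that makes the geometric tail negligible.

```python
# Check sum_{n>=0} |f_n(x) - f_n(y)| = |x - y| for f_n(x) = (1 - 1/x)^n
# at points x, y > 1, where 1 - 1/x lies in (0, 1).
def total_variation(x, y, terms=400):
    return sum(abs((1 - 1/x) ** n - (1 - 1/y) ** n) for n in range(terms))

for x, y in [(1.25, 5.0), (2.0, 3.0), (4.0, 1.5)]:
    assert abs(total_variation(x, y) - abs(x - y)) < 1e-6
```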
\begin{example}\label{LINEARGOOD}
Let $\{f_n\}_{n}$ be a p-frame for a Banach space $\mathcal{X}$. Let $\phi: \mathbb{K}\to \mathbb{K}$ be bi-Lipschitz and define $g_n (x)\coloneqq \phi (f_n(x)), $ $\forall x \in \mathcal{X}, $ $\forall n \in \mathbb{N}$. It then follows that $\{g_n\}_n $ is a metric p-frame for $\mathcal{X}$.
\end{example}
Looking at Theorem \ref{pFRAMECHAR}, we can ask whether there is a similar result for metric p-frames. We answer this partially through the following theorem.
\begin{theorem}\label{PBESSELCHAR}
Let $(\mathcal{M},0)$ be a pointed metric space and $\{f_n\}_{n}$ be a sequence in $\operatorname{Lip}_0(\mathcal{M},
\mathbb{K})$. Then $\{f_n\}_{n}$ is a metric p-Bessel sequence for
$\mathcal{M}$ with bound $b$ if and only if
\begin{align}\label{LIPBASSELOPERATORCHARACTERIZATION}
&T: \ell^q (\mathbb{N})\ni \{a_n\}_{n} \mapsto T\{a_n\}_{n} \in \operatorname{Lip}_0(\mathcal{M}\times \mathcal{M},
\mathbb{K}),\\
&T\{a_n\}_{n}: \mathcal{M}\times \mathcal{M} \ni (x,y) \mapsto \sum_{n=1}^\infty a_n(f_n(x)-f_n(y)) \in \mathbb{K} \nonumber
\end{align}
is a well-defined (hence bounded) linear operator and $\|T\|\leq b$ (where $q$ is the conjugate
index of $p$).
\end{theorem}
\begin{proof}
$(\Rightarrow)$ Let $\{a_n\}_{n} \in \ell^q (\mathbb{N})$ and $n, m\in \mathbb{N}$ with $n<m$. First we have to show that the series in (\ref{LIPBASSELOPERATORCHARACTERIZATION}) is convergent. For all $x, y \in \mathcal{M}$,
\begin{align*}
\left|\sum_{k=n}^{m}a_k(f_k(x)-f_k(y))\right|&\leq \left(\sum_{k=n}^{m}|a_k|^q\right)^\frac{1}{q}\left(\sum_{k=n}^{m}|f_k(x)-f_k(y)|^p\right)^\frac{1}{p}\\
&\leq b \left(\sum_{k=n}^{m}|a_k|^q\right)^\frac{1}{q}\, d(x,y).
\end{align*}
Therefore the series in (\ref{LIPBASSELOPERATORCHARACTERIZATION}) converges. We next show that the map $T\{a_n\}_{n}$ is Lipschitz. Consider
\begin{align*}
&\left\|T\{a_n\}_{n}\right\|_{\operatorname{Lip}_0} =\sup_{(x, y), (u,v) \in \mathcal{M}\times \mathcal{M}, (x, y)\neq (u,v)}\frac{|T\{a_n\}_{n}(x,y)-T\{a_n\}_{n}(u,v)|}{d(x,u)+d(y,v)}\\
&\quad=\sup_{(x, y), (u,v) \in \mathcal{M}\times \mathcal{M}, (x, y)\neq (u,v)}\frac{|\sum_{n=1}^{\infty}a_n(f_n(x)-f_n(u))-\sum_{n=1}^{\infty}a_n(f_n(y)-f_n(v))|}{d(x,u)+d(y,v)}\\
&\quad\leq \sup_{(x, y), (u,v) \in \mathcal{M}\times \mathcal{M}, (x, y)\neq (u,v)}\frac{|\sum_{n=1}^{\infty}a_n(f_n(x)-f_n(u))|+|\sum_{n=1}^{\infty}a_n(f_n(y)-f_n(v))|}{d(x,u)+d(y,v)}\\
&\quad\leq b\sup_{(x, y), (u,v) \in \mathcal{M}\times \mathcal{M}, (x, y)\neq (u,v)}\frac{\left(\sum_{n=1}^{\infty}|a_n|^q\right)^\frac{1}{q}\, d(x,u)+\left(\sum_{n=1}^{\infty}|a_n|^q\right)^\frac{1}{q}\, d(y,v)}{d(x,u)+d(y,v)}\\
&\quad=b\left(\sum_{n=1}^{\infty}|a_n|^q\right)^\frac{1}{q}.
\end{align*}
Hence $T$ is well-defined. Clearly $T$ is linear. Boundedness of $T$ with bound $b$ follows from the previous calculation.\\
$(\Leftarrow)$ From the definition of $T$, it is bounded by Banach-Steinhaus. Given $x, y \in \mathcal{M}$, we define a map
\begin{align*}
\Phi_{x,y}: \ell^q (\mathbb{N}) \ni \{a_n\}_{n} \mapsto \Phi_{x,y}\{a_n\}_{n}\coloneqq \sum_{n=1}^{\infty}a_n(f_n(x)-f_n(y))\in \mathbb{K}
\end{align*}
which is a bounded linear functional. Hence $\{f_n(x)-f_n(y)\}_{n}\in \ell^p (\mathbb{N})$. Let $\{e_n\}_{n}$ be the standard Schauder basis for $ \ell^q (\mathbb{N})$. Then
\begin{align*}
\|\Phi_{x,y}\|=\left(\sum_{n=1}^{\infty}|\Phi_{x,y}(e_n)|^p\right)^\frac{1}{p}= \left(\sum_{n=1}^{\infty}|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}.
\end{align*}
Now
\begin{align*}
b\left(\sum_{n=1}^{\infty}|a_n|^q\right)^\frac{1}{q}&=b\|\{a_n\}_{n}\|\geq \|T\{a_n\}_{n}\|_{\operatorname{Lip}_0}\\
&\geq \sup_{(x, 0), (y,0) \in \mathcal{M}\times \mathcal{M}, (x, 0)\neq (y,0)}\frac{|T\{a_n\}_{n}(x,0)-T\{a_n\}_{n}(y,0)|}{d(x,y)}\\
&=\sup_{(x, 0), (y,0) \in \mathcal{M}\times \mathcal{M}, (x, 0)\neq (y,0)}\frac{|\sum_{n=1}^{\infty}a_n(f_n(x)-f_n(y))|}{d(x,y)}\\
&=\sup_{(x, 0), (y,0) \in \mathcal{M}\times \mathcal{M}, (x, 0)\neq (y,0)}\frac{|\Phi_{x,y}\{a_n\}_{n}|}{d(x,y)}
\end{align*}
which implies
\begin{align*}
|\Phi_{x,y}\{a_n\}_{n}|\leq b \left(\sum_{n=1}^{\infty}|a_n|^q\right)^\frac{1}{q}\,d(x,y), \quad \forall x, y \in \mathcal{M}.
\end{align*}
Using all these, we finally get
\begin{align*}
\left(\sum_{n=1}^{\infty}|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}=\|\Phi_{x,y}\|\leq b\, d(x,y), \quad \forall x, y \in \mathcal{M}.
\end{align*}
Hence $\{f_n\}_{n}$ is a metric p-Bessel sequence for
$\mathcal{M}$ with bound $b$.
\end{proof}
In the spirit of the definition of $\mathcal{X}_d$-frame, Definition \ref{PFRAMEFORMETRIC} can be generalized.
\begin{definition}\label{XDMETRICFRAME}
Let $\mathcal{M}$ be a metric space and $\mathcal{M}_d$ be an associated BK-space. Let
$\{f_n\}_{n}$ be a sequence in $\operatorname{Lip}(\mathcal{M}, \mathbb{K})$. We say that $\{f_n\}_{n}$ is a \textbf{metric $\mathcal{M}_d$-frame} (or \textbf{Lipschitz $\mathcal{M}_d$-frame}) for $\mathcal{M}$ if the following conditions hold.
\begin{enumerate}[label=(\roman*)]
\item $\{f_n(x)\}_{n} \in \mathcal{M}_d$, for each $x \in \mathcal{M}$,
\item There exist positive $a, b$ such that
$
a\, d(x,y) \leq \|\{f_n(x)-f_n(y)\}_n\|_{\mathcal{M}_d} \leq b\, d(x,y), $ $ \forall x
, y\in \mathcal{M}.
$
\end{enumerate}
We call the constant $a$ the \textbf{lower metric $\mathcal{M}_d$-frame bound}
and the constant $b$ the \textbf{upper metric $\mathcal{M}_d$-frame bound}. If we do not demand the first inequality, then we say $\{f_n\}_{n}$ is a \textbf{metric $\mathcal{M}_d$-Bessel sequence}.
\end{definition}
An easier way of producing a metric $\mathcal{M}_d$-frame is the following. Let $\mathcal{M}_d$ be a BK-space which admits a Schauder basis $\{\tau_n\}_{n}$. Let $\{f_n\}_{n}$ be the coefficient functionals associated with $\{\tau_n\}_{n}$. Let $\mathcal{M}$ be a metric space and $A:\mathcal{M} \rightarrow \mathcal{M}_d$ be bi-Lipschitz with bounds $a$ and $b$. Define $g_n\coloneqq f_n A, \forall n$. Then each $g_n$ is a Lipschitz function. Now
\begin{align*}
a\, d(x,y) &\leq \|Ax-Ay\|_{\mathcal{M}_d}=\|\{f_n(Ax-Ay)\}_n\|_{\mathcal{M}_d}\\
&=\|\{f_n(Ax)-f_n(Ay)\}_n\|_{\mathcal{M}_d}
=\|\{g_n(x)-g_n(y)\}_n\|_{\mathcal{M}_d} \leq b\, d(x,y), \quad \forall x
, y\in \mathcal{M}.
\end{align*}
Hence $\{g_n\}_{n}$ is a metric $\mathcal{M}_d$-frame for $\mathcal{M}$.\\
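A finite-dimensional instance of this construction can be sketched as follows (all choices are ours: $\mathcal{M}\subset\mathbb{R}^m$ with the Euclidean metric, $\mathcal{M}_d$ the space of $m$ coordinates with the Euclidean norm and coordinate functionals $f_n$, and a linear bi-Lipschitz map $A(x)=Qx$ whose bounds are the extreme singular values of $Q$).

```python
import numpy as np

rng = np.random.default_rng(3)
m = 5
Q = np.eye(m) + 0.1 * rng.standard_normal((m, m))  # invertible => A(x) = Qx bi-Lipschitz
sv = np.linalg.svd(Q, compute_uv=False)
a, b = sv[-1], sv[0]                    # bi-Lipschitz bounds of A

pts = rng.uniform(-1, 1, size=(30, m))  # sample points of M
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        x, y = pts[i], pts[j]
        dxy = np.linalg.norm(x - y)
        # {g_n(x) - g_n(y)}_n is the coordinate sequence of Q(x - y)
        mid = np.linalg.norm(Q @ (x - y))
        assert a * dxy - 1e-9 <= mid <= b * dxy + 1e-9
```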
The following result shows that metric frames exist for every separable metric space.
\begin{theorem}\label{METRICFRAMEEXISTS3}
Every separable metric space $ \mathcal{M}$ admits a metric $\mathcal{M}_d$-frame; in fact, one can take $\mathcal{M}_d=c_0(\mathbb{N})$.
\end{theorem}
\begin{proof}
From Theorem \ref{AHARONITHEOREM} it follows that there exists a bi-Lipschitz function $f: \mathcal{M} \to c_0(\mathbb{N})$. Let $ p_n: c_0(\mathbb{N}) \to \mathbb{K}$ be the coordinate projection, for each $n$. If we now set $f_n\coloneqq p_nf$, for each $n$, then $\{f_n\}_{n}$ is a metric $c_0(\mathbb{N})$-frame for $\mathcal{M}$.
\end{proof}
Given metric $\mathcal{M}_d$-frames $\{f_n\}_{n}$ and $\{g_n\}_{n}$ and a nonzero scalar $\lambda$, one can naturally ask whether we can scale and add them to get a new frame, i.e., whether $\{f_n+\lambda g_n\}_{n}$ is a frame. In the case of Hilbert spaces, a use of Minkowski's inequality shows that whenever $\{\tau_n\}_{n}$ and $\{\omega_n\}_{n}$ are frames for a Hilbert space $\mathcal{H}$, $\{\tau_n+\lambda \omega_n\}_{n}$ is a Bessel sequence for $\mathcal{H}$. In general, this sequence need not be a frame for $\mathcal{H}$. Thus we have to impose extra conditions to ensure the existence of a lower frame bound. For Hilbert spaces this was done by \cite{FAVIERZALIK}. We now obtain similar results for metric spaces.
\begin{theorem}
Let $\{f_n\}_{n}$ be a metric $\mathcal{M}_d$-frame for metric space $\mathcal{M}$ with bounds $a$ and $b$. Let $\lambda$ be a non-zero scalar. Then
\begin{enumerate}[label=(\roman*)]
\item $\{\lambda f_n\}_{n}$ is a metric $\mathcal{M}_d$-frame for $\mathcal{M}$ with bounds $a|\lambda|$ and $b|\lambda|$.
\item If $A:\mathcal{M} \rightarrow \mathcal{M}$ is bi-Lipschitz with bounds $c$ and $d$, then $\{f_nA\}_{n}$ is a metric $\mathcal{M}_d$-frame for $\mathcal{M}$ with bounds $ac$ and $bd$.
\item If $\{g_n\}_{n}$ is a metric $\mathcal{M}_d$-Bessel sequence for $\mathcal{M}$ with bound $d$ and $|\lambda|<\frac{a}{d}$, then $\{f_n+\lambda g_n\}_{n}$ is a metric $\mathcal{M}_d$-frame for $\mathcal{M}$ with bounds $a-|\lambda|d$ and $b+|\lambda|d$.
\end{enumerate}
\end{theorem}
\begin{proof}
The first two conclusions follow easily. For the upper frame bound of $\{f_n+\lambda g_n\}_{n}$ we use the triangle inequality. For the lower frame bound, using the reverse triangle inequality, we get
\begin{align*}
&\|\{(f_n+\lambda g_n)(x)-(f_n+\lambda g_n)(y)\}_n\|_{\mathcal{M}_d}=\|\{f_n(x)-f_n(y)+ \lambda( g_n(x)- g_n(y))\}_n\|_{\mathcal{M}_d}\\
&\quad\geq \|\{f_n(x)-f_n(y)\}_n\|_{\mathcal{M}_d}-\|\{ \lambda( g_n(x)- g_n(y))\}_n\|_{\mathcal{M}_d}\\
&\quad \geq a\, d(x,y)- |\lambda| d \, d(x,y)
=(a-|\lambda| d)\, d(x,y), \quad \forall x
, y\in \mathcal{M}.
\end{align*}
\end{proof}
We next define a ``metric frame'', which is stronger than Definition \ref{XDMETRICFRAME}, in light of the definition of Banach frame.
\begin{definition}\label{METRICBANACHFRAME}
Let $\mathcal{M}$ be a metric space and $\mathcal{M}_d$ be an associated BK-space. Let
$\{f_n\}_{n}$ be a sequence in $\operatorname{Lip}(\mathcal{M}, \mathbb{K})$
and $S: \mathcal{M}_d \rightarrow \mathcal{M}$. We say that $(\{f_n\}_{n}, S)$ is a \textbf{metric frame} or \textbf{Lipschitz metric
frame} for $\mathcal{M}$ if the following conditions hold.
\begin{enumerate}[label=(\roman*)]
\item $\{f_n(x)\}_{n} \in \mathcal{M}_d$, for each $x \in \mathcal{M}$,
\item There exist positive $a, b$ such that
$
a\, d(x,y) \leq \|\{f_n(x)-f_n(y)\}_n\|_{\mathcal{M}_d} \leq b\, d(x,y), $ $ \forall x
, y\in \mathcal{M},
$
\item $S$ is Lipschitz and $S(\{f_n(x)\}_{n})=x$, for each $x \in \mathcal{M}$.
\end{enumerate}
The mapping $S$ is called the
Lipschitz reconstruction operator. We call the constant $a$ the \textbf{lower frame bound}
and the constant $b$ the \textbf{upper frame bound}. If we do not demand the first inequality, then we say $(\{f_n\}_{n}, S)$ is a \textbf{metric Bessel sequence}.
\end{definition}
We observe that if $(\{f_n\}_{n}, S)$ is a metric frame for $\mathcal{M}$, then condition (i) in Definition \ref{METRICBANACHFRAME} tells us
that the mapping (which we call the analysis map)
\begin{align*}
\theta_f:\mathcal{M} \ni x \mapsto \theta_f x\coloneqq \{f_n(x)\}_{n} \in \mathcal{M}_d
\end{align*}
is well-defined, and condition (ii) in Definition \ref{METRICBANACHFRAME} tells us that $\theta_f$ satisfies
\begin{align*}
a\, d(x,y)\leq \|\theta_f x -\theta_fy \|\leq b\, d(x,y), \quad \forall x
, y\in \mathcal{M}.
\end{align*}
Hence $\theta_f$ is bi-Lipschitz and injective. Thus a metric frame embeds the space
into $\mathcal{M}_d$ via $\theta_f$ and reconstructs
every element using the reconstruction operator $S$. Now note that $S\theta_f =I_\mathcal{M}$. This operator description helps us derive the following propositions easily.
\begin{proposition}
If $(\{f_n\}_{n}, S)$ is a metric frame for $\mathcal{M}$, then $P_{f, S}\coloneqq \theta_f S: \mathcal{M}_d \to \mathcal{M}_d$ is idempotent and $P_{f, S}(\mathcal{M}_d)=\theta_f(\mathcal{M}).$
\end{proposition}
\begin{proposition}
Let $\{f_n\}_{n}$ be a metric $\mathcal{M}_d$-frame for $\mathcal{M}$ and $S: \mathcal{M}_d \rightarrow \mathcal{M}$ be Lipschitz. Then $(\{f_n\}_{n}, S)$ is a metric frame for $\mathcal{M}$ if and only if $S$ is a left-Lipschitz inverse of $\theta_f$ if and only if $\theta_f$ is a right-Lipschitz inverse of $S$.
\end{proposition}
We now give some explicit examples of metric frames.
\begin{example}
Let $\mathcal{M}$, $\{f_n\}_{n}$ be as in Example \ref{1FRAMEFIRST}, now with $a=1$ (the computations there remain valid for $a=1$). Take $\mathcal{M}_d \coloneqq \ell^1(\{0\}\cup \mathbb{N})$ and define
\begin{align*}
S:\mathcal{M}_d \ni \{a_n\}_{n} \mapsto S \{a_n\}_{n} \coloneqq 1+\left| \sum_{n=1}^{\infty} a_n\right| \in \mathcal{M}.
\end{align*}
Then
\begin{align*}
|S \{a_n\}_{n}-S \{b_n\}_{n}|&=\left|\,\left| \sum_{n=1}^{\infty} a_n\right|-\left| \sum_{n=1}^{\infty} b_n\right| \,\right|\leq \left| \sum_{n=1}^{\infty} a_n- \sum_{n=1}^{\infty} b_n \right|\\
&=\left| \sum_{n=1}^{\infty} (a_n-b_n)\right|\leq \sum_{n=1}^{\infty} |a_n-b_n|\leq \sum_{n=0}^{\infty} |a_n-b_n|\\
&=\|\{a_n\}_{n}-\{b_n\}_{n}\|,\quad \forall \{a_n\}_{n}, \{b_n\}_{n} \in \ell^1(\{0\}\cup \mathbb{N}).
\end{align*}
Thus $S$ is Lipschitz. Further,
\begin{align*}
S(\{f_n(x)\}_{n})=1+\left| \sum_{n=1}^{\infty} f_n(x)\right|=1+\sum_{n=1}^{\infty}\frac{(\log x)^n}{n!}=x,\quad \forall x \in \mathcal{M}.
\end{align*}
Hence $(\{f_n\}_{n}, S)$ is a metric frame for $\mathcal{M}$.
Note that if we define \begin{align*}
T:\mathcal{M}_d \ni \{a_n\}_{n} \mapsto T \{a_n\}_{n} \coloneqq 1+ \sum_{n=1}^{\infty} |a_n| \in \mathcal{M},
\end{align*}
then
\begin{align*}
|T \{a_n\}_{n}-T \{b_n\}_{n}|&=\left| \sum_{n=1}^{\infty} |a_n|- \sum_{n=1}^{\infty} |b_n| \right|= \left| \sum_{n=1}^{\infty} (|a_n|-|b_n|) \right|\\
&\leq \sum_{n=1}^{\infty} \bigg||a_n|-|b_n|\bigg|\leq \sum_{n=1}^{\infty} |a_n-b_n|\leq \sum_{n=0}^{\infty} |a_n-b_n|\\
&=\|\{a_n\}_{n}-\{b_n\}_{n}\|,\quad \forall \{a_n\}_{n}, \{b_n\}_{n} \in \ell^1(\{0\}\cup \mathbb{N}).
\end{align*}
Thus $T$ is Lipschitz. Hence $(\{f_n\}_{n}, T)$ is also a metric frame for $\mathcal{M}$.
\end{example}
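The reconstruction in this example can be checked numerically. The sketch below assumes, as our reading of Example \ref{1FRAMEFIRST}, that $\mathcal{M}=[1,\infty)$ with the usual metric and $f_n(x)=(\log x)^n/n!$; the series is truncated, so this is an illustrative check rather than a proof.

```python
import math

# Assumed data from Example 1FRAMEFIRST: M = [1, oo), f_n(x) = (log x)^n / n!
def f(n, x):
    return math.log(x) ** n / math.factorial(n)

def S(coeffs):
    # the reconstruction operator S({a_n}) = 1 + |sum_n a_n|
    return 1 + abs(sum(coeffs))

for x in [1.0, 2.5, 10.0]:
    coeffs = [f(n, x) for n in range(1, 60)]  # truncated analysis sequence
    assert abs(S(coeffs) - x) < 1e-9          # S reconstructs x
```

Indeed, $1+\sum_{n\geq 1}(\log x)^n/n! = 1 + (e^{\log x}-1) = x$ for $x\geq 1$, which the truncated sum reproduces to high accuracy.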
\begin{example}
Let $f_1: \mathbb{K}\to \mathbb{K}$ be bi-Lipschitz and let $f_2, \dots, f_m: \mathbb{K}\to \mathbb{K}$ be Lipschitz maps such that
\begin{align*}
f_1(x)+\dots+ f_m(x)=x, \quad \forall x \in \mathbb{K}.
\end{align*}
We now define $S: \mathbb{K}^m \ni (x_1, \dots, x_m)\mapsto \sum_{j=1}^{m}x_j \in \mathbb{K}$. Then $(\{f_n\}_{n}, S)$ is a metric frame for $\mathbb{K}$. Note that the operator $S$ is linear.
\end{example}
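A minimal numerical instance of this example, with a hypothetical choice of the maps on $\mathbb{K}=\mathbb{R}$ and $m=2$ (these particular maps are our illustration, not from the text):

```python
import math

# Hypothetical maps on K = R: f1 is bi-Lipschitz (1/2 <= f1' <= 3/2),
# f2 is (1/2)-Lipschitz, and f1 + f2 = identity.
def f1(x):
    return x + 0.5 * math.sin(x)

def f2(x):
    return -0.5 * math.sin(x)

def S(coords):
    # the linear reconstruction operator S(x_1, ..., x_m) = x_1 + ... + x_m
    return sum(coords)

for x in [-3.0, 0.0, 1.7]:
    assert abs(S((f1(x), f2(x))) - x) < 1e-12   # S recovers x exactly
```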
Having defined metric frames, the first question that arises is that of existence. Theorem \ref{BANACHFRAMEEXISTSSEPARABLE} shows that every separable Banach
space admits a Banach frame. Although no such result is known in the metric space setting, we derive two results: one comes close to the definition of a metric frame, and the other gives existence under certain assumptions. To do this,
we first record a consequence of the McShane extension theorem.
\begin{theorem}\label{MCSHANE}(\textbf{McShane extension theorem}) (cf. \cite{WEAVER})
Let $\mathcal{M}$ be a metric space and $\mathcal{M}_0$ be a nonempty subset
of $\mathcal{M}$. If $f_0:\mathcal{M}_0 \rightarrow \mathbb{R} $ is Lipschitz,
then there exists a Lipschitz function $f:\mathcal{M} \rightarrow \mathbb{R}
$ such that $f|_{\mathcal{M}_0}=f_0$ and
$\operatorname{Lip}(f)=\operatorname{Lip}(f_0)$.
\end{theorem}
Using Theorem \ref{MCSHANE} we derive the following.
\begin{corollary}\label{MCSHANECORO}
If $(\mathcal{M}, 0)$ is a pointed metric space, then for every $x
\in \mathcal{M}$, there exists a Lipschitz function $f:\mathcal{M} \rightarrow
\mathbb{R}$ such that $f(x)=d(x,0)$, $f(0)=0$ and $\operatorname{Lip}(f)=1$.
\end{corollary}
\begin{proof}
Case (i) : $x\neq0$.
Define $\mathcal{M}_0\coloneqq \{0,x\}$ and $f_0(0)=0$, $f_0(x)=d(x,0)$. Then
$|f_0(x)-f_0(0)|=d(x,0)$, so $f_0$ is Lipschitz with $\operatorname{Lip}(f_0)=1$. An application of Theorem \ref{MCSHANE} now
completes the proof.\\
Case (ii) : $x=0$.
Take a non-zero point $y \in \mathcal{M}$ and apply the argument of case (i) with $y$ in place of $x$; the resulting function $f$ satisfies $f(0)=0=d(0,0)$ and $\operatorname{Lip}(f)=1$.
\end{proof}
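The extension in Theorem \ref{MCSHANE} can be realized by the standard McShane formula $f(x)=\inf_{y\in\mathcal{M}_0}\left(f_0(y)+\operatorname{Lip}(f_0)\,d(x,y)\right)$. The sketch below applies it to the two-point set from case (i) of the corollary, with $\mathcal{M}=\mathbb{R}^2$ and a finite grid standing in for the whole space (an illustrative setup, not from the text):

```python
import itertools
import math

def mcshane_extend(M0, f0, L, d):
    # McShane formula: f(x) = min_{y in M0} (f0(y) + L * d(x, y))
    return lambda x: min(f0[y] + L * d(x, y) for y in M0)

d = lambda u, v: math.dist(u, v)            # Euclidean metric on R^2
M0 = [(0.0, 0.0), (3.0, 4.0)]               # the two-point set {0, x} from case (i)
f0 = {M0[0]: 0.0, M0[1]: d(M0[1], M0[0])}   # f0(0) = 0, f0(x) = d(x, 0)
f = mcshane_extend(M0, f0, 1.0, d)

assert f((0.0, 0.0)) == 0.0 and f((3.0, 4.0)) == 5.0
# the extension is still 1-Lipschitz on a sample grid
pts = [(i * 0.7, j * 0.7) for i in range(-3, 4) for j in range(-3, 4)]
assert all(abs(f(u) - f(v)) <= d(u, v) + 1e-12
           for u, v in itertools.product(pts, pts))
```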
\begin{theorem}\label{METRICFRAMEEXISTS}
Let $ \mathcal{M}$ be a separable metric space. Then there exist a BK-space $ \mathcal{M}_d$, a sequence $\{f_n\}_{n}$ in $\operatorname{Lip}_0(\mathcal{M}, \mathbb{R})$
and a function $S: \mathcal{M}_d \rightarrow \mathcal{M}$ such that
\begin{enumerate}[label=(\roman*)]
\item $\{f_n(x)\}_{n} \in \mathcal{M}_d$, for each $x \in \mathcal{M}$,
\item
$
\|\{f_n(x)-f_n(y)\}_n\|_{\mathcal{M}_d} \leq \, d(x,y), \forall x
, y\in \mathcal{M},
$
\item $S(\{f_n(x)\}_{n})=x$, for each $x \in \mathcal{M}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $\{x_n\}_{n}$ be a dense set in $ \mathcal{M}$. Then for each $n \in \mathbb{N}$, from Corollary \ref{MCSHANECORO}
there exists a Lipschitz function $f_n:\mathcal{M} \rightarrow
\mathbb{R}$ such that $f_n(x_n)=d(x_n,0)$, $f_n(0)=0$ and
$\operatorname{Lip}(f_n)=1$. Let $ x \in \mathcal{M}$ be fixed. Now for each $n
\in\mathbb{N}$,
\begin{align*}
|f_n(x)|=|f_n(x)-f_n(0)|\leq \|f_n\|_{\operatorname{Lip}_0}\, d (x,0)=d (x,0)
\end{align*}
which gives $\sup_{n \in\mathbb{N}}|f_n(x)|\leq d (x,0)$. Since $\{x_n\}_{n}$
is dense, there exists a subsequence $\{x_{n_k}\}_{k}$ of $\{x_n\}_{n}$ such
that $x_{n_k} \rightarrow x$ as $k \to \infty.$ From the inequality
\begin{align*}
|d(y,z)-d(y,w)|\leq d(z,w), \quad \forall y,z,w \in \mathcal{M}
\end{align*}
we see then that $d(x_{n_k}, 0) \rightarrow d(x,0)$ as $k \to \infty.$ Consider
\begin{align*}
d(x_{n_k}, 0)&=f_{n_k}(x_{n_k})\leq
|f_{n_k}(x_{n_k})-f_{n_k}(x)|+|f_{n_k}(x)|\\
&\leq d(x_{n_k},
x)+|f_{n_k}(x)|,\quad \forall k \in\mathbb{N},
\end{align*}
so that
\begin{align*}
\lim_{k \to \infty}(d(x_{n_k}, 0)-d(x_{n_k},
x))\leq\sup_{k \in \mathbb{N} }(d(x_{n_k}, 0)-d(x_{n_k},
x)) \leq \sup_{k \in\mathbb{N}}|f_{n_k}(x)|.
\end{align*}
Therefore
\begin{align*}
\sup_{n \in\mathbb{N}}|f_n(x)|&\leq d (x,0)=\lim_{k \to \infty}d(x_{n_k},
0)=\lim_{k \to \infty}(d(x_{n_k}, 0)-d(x_{n_k},x))\\
&\leq \sup_{k
\in\mathbb{N}}|f_{n_k}(x)|\leq\sup_{n \in\mathbb{N}}|f_n(x)|.
\end{align*}
So we proved that
\begin{align}\label{ALMOST}
d(x,0)=\sup_{n \in\mathbb{N}}|f_n(x)|, \quad \forall x \in \mathcal{M}.
\end{align}
Define
$
\mathcal{M}^0_d\coloneqq \{\{f_n(x)\}_n: x \in \mathcal{M}\}.
$
Equality (\ref{ALMOST}) then shows that $\mathcal{M}^0_d$ is a subset of $\ell^\infty(\mathbb{N}).$ Now we define
$S_0:\mathcal{M}_d^0 \ni \{f_n(x)\}_n \mapsto x \in \mathcal{M}$. Then from
Equality (\ref{ALMOST}),
\begin{align*}
d(S_0(\{f_n(x)\}_n),S_0(\{f_n(y)\}_n))&=d(x,y)
\leq d(x,0)+d(0,y)\\
&=\sup_{n \in\mathbb{N}}|f_n(x)|+\sup_{n \in\mathbb{N}}|f_n(y)|\\
&=\|\{f_n(x)\}_n\|+\|\{f_n(y)\}_n\|, \quad \forall x, y \in
\mathcal{M}.
\end{align*}
We also have
\begin{align*}
\|\{f_n(x)-f_n(y)\}_n\|_{\mathcal{M}_d}&=\sup_{n \in\mathbb{N}}|f_n(x)-f_n(y)| \\
&\leq \sup_{n \in\mathbb{N}}\|f_n\|_{\operatorname{Lip}_0}\, d (x,y)=d (x,y), \quad \forall x
, y\in \mathcal{M}.
\end{align*}
We can now take $\mathcal{M}_d=\ell^\infty(\mathbb{N})$ and $S$ to be a Lipschitz extension
of $S_0$ to $\ell^\infty(\mathbb{N})$, which completes the proof.
\end{proof}
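One concrete realization of the functions in this proof (an assumption on our part, consistent with Corollary \ref{MCSHANECORO}) is $f_n(x)=d(x_n,0)-d(x,x_n)$: each such $f_n$ is $1$-Lipschitz with $f_n(x_n)=d(x_n,0)$ and $f_n(0)=0$. The sketch below checks condition (ii) and Equality (\ref{ALMOST}) numerically, with $\mathcal{M}$ the unit square and a finite random sample standing in for the dense sequence:

```python
import math
import random

random.seed(0)
d = lambda u, v: math.dist(u, v)
origin = (0.0, 0.0)
# finite surrogate for the dense sequence {x_n} in M = [0,1]^2
xs = [(random.random(), random.random()) for _ in range(5000)]

def theta(x):
    # assumed realization: f_n(x) = d(x_n, 0) - d(x, x_n)  (1-Lipschitz, f_n(0) = 0)
    return [d(xn, origin) - d(x, xn) for xn in xs]

x, y = (0.3, 0.8), (0.9, 0.2)
# condition (ii): the analysis map is non-expansive in the sup norm
assert max(abs(a - b) for a, b in zip(theta(x), theta(y))) <= d(x, y) + 1e-12
# Equality (ALMOST): sup_n |f_n(x)| recovers d(x, 0), up to sample density
assert abs(max(abs(a) for a in theta(x)) - d(x, origin)) < 0.05
```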
\begin{theorem}\label{METRICFRAMEEXISTS2}
If $A:\mathcal{M} \to \mathcal{M}_d$ is bi-Lipschitz and there is a Lipschitz projection $P:\mathcal{M}_d \to A(\mathcal{M})$, then $\mathcal{M}$ admits a metric frame.
\end{theorem}
\begin{proof}
Let $\{h_n\}_n$ be the sequence of coordinate functionals associated with $\mathcal{M}_d$. Define $f_n\coloneqq h_nA$ and $S \coloneqq A^{-1}P$. Then
\begin{align*}
S(\{f_n(x)\}_n)=A^{-1}P(\{h_n(Ax)\}_n)=A^{-1}PAx=A^{-1}Ax=x, \quad \forall x \in \mathcal{M}.
\end{align*}
Hence $(\{f_n\}_{n}, S)$ is a metric frame for $\mathcal{M}$.
\end{proof}
It is well known that the McShane extension theorem fails for complex-valued Lipschitz functions. Thus we may ask whether we can take a complex
sequence space in Theorem \ref{METRICFRAMEEXISTS}. This is possible for certain metric spaces, owing to the following theorem.
\begin{theorem}(\textbf{Kirszbraun extension theorem}) (cf. \cite{VALENTINEL})
Let $\mathcal{H}$ be a Hilbert space and $\mathcal{M}_0$ be a nonempty subset
of $\mathcal{H}$. If $f_0:\mathcal{M}_0 \rightarrow \mathbb{K} $ is Lipschitz,
then there exists a Lipschitz function $f:\mathcal{H} \rightarrow \mathbb{K}
$ such that $f|_{\mathcal{M}_0}=f_0$ and
$\operatorname{Lip}(f)=\operatorname{Lip}(f_0)$.
\end{theorem}
The following proposition shows that, given a metric frame, we can generate further metric frames.
\begin{proposition}
Let $(\{f_n\}_{n}, S)$ be a metric frame for $\mathcal{M}$. If $A, B :\mathcal{M} \to \mathcal{M}$ are maps such that $A$ is
bi-Lipschitz, $B$ is Lipschitz and $BA=I_\mathcal{M}$, then $(\{f_nA\}_{n}, BS)$ is a metric frame for $\mathcal{M}$. In particular, if $A :\mathcal{M} \to \mathcal{M}$ is
bi-Lipschitz invertible, then $(\{f_nA\}_{n}, A^{-1}S)$ is a metric frame for $\mathcal{M}$.
\end{proposition}
\begin{proof}
Since $A$ is bi-Lipschitz, condition (ii) in Definition \ref{METRICBANACHFRAME} holds. Now, using $BA=I_\mathcal{M}$, we get $BS (\{f_n(Ax)\}_{n})=BAx=x$ for all $x \in \mathcal{M}.$
\end{proof}
The previous proposition helps to generate metric frames not only from metric frames but also from Banach frames. Since there is a large supply of Banach frames for a variety of Banach spaces, composing with bi-Lipschitz invertible maps on subsets produces metric frames for those subsets. Next we characterize metric frames using Lipschitz functions. The following theorem says precisely when a metric $\mathcal{M}_d$-frame can be converted into a metric frame.
\begin{theorem}\label{CHARLIPMETRIC}
Let $\{f_n\}_{n}$ be a metric $\mathcal{M}_d$-frame for $\mathcal{M}$. Then the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item There exists a Lipschitz projection $P:\mathcal{M}_d \to \theta_f(\mathcal{M})$.
\item There exists a Lipschitz map $V:\mathcal{M}_d \to \mathcal{M}$ such that $V|_{\theta_f(\mathcal{M})}=\theta_f^{-1}$.
\item There exists a Lipschitz map $S:\mathcal{M}_d \to \mathcal{M}$ such that $(\{f_n\}_{n}, S)$ is a metric frame for $\mathcal{M}$.
\end{enumerate}
\end{theorem}
\begin{proof}
(i) $\Rightarrow$ (ii) Define $V\coloneqq \theta_f^{-1} P$. Then for $y=\theta_f(x), x \in \mathcal{M}$ we get $Vy=V\theta_f(x)=\theta_f^{-1} P\theta_f(x)=\theta_f^{-1}\theta_f(x)=\theta_f^{-1} y$.\\
(ii) $\Rightarrow$ (i) Set $P\coloneqq \theta_fV$. Now $P^2=\theta_fV\theta_fV=\theta_fI_\mathcal{M}V=P$.\\
(ii) $\Rightarrow$ (iii) Define $S\coloneqq V$. Then $S\{f_n(x)\}_n=S\theta_f (x)=V\theta_f (x)=x$, for all $x \in \mathcal{M}$. Hence $(\{f_n\}_{n}, S)$ is a metric frame for $\mathcal{M}$.\\
(iii) $\Rightarrow$ (ii) Define $V\coloneqq S$. Then $V\theta_f (x)=S\theta_f (x)=S\{f_n(x)\}_n=x$, for all $x \in \mathcal{M}$.
\end{proof}
\section{METRIC FRAMES FOR BANACH SPACES}
We now turn to the representation of elements using metric frames. Naturally, to deal with sums we must work within a Banach space structure. The following theorem can be compared with Theorem \ref{CASAZZASEPARABLECHARACTERIZATION}.
\begin{theorem}\label{PALL}
Let $\{f_n\}_{n}$ be a metric p-frame for a Banach space $\mathcal{X}$. Assume that $f_n(0)=0$ for all $n$. Then the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item There exists a bounded linear map $V:\mathcal{M}_d \to \mathcal{X}$ such that $V|_{\theta_f(\mathcal{X})}=\theta_f^{-1}$.
\item There exists a bounded linear map $S:\mathcal{M}_d \to \mathcal{X}$ such that $(\{f_n\}_{n}, S)$ is a metric p-frame for $\mathcal{X}$.
\item There exists a sequence $\{\tau_n\}_{n}$ in $\mathcal{X}$ such that $\sum_{n=1}^{\infty}c_n\tau_n$ converges for all $\{c_n\}_{n}\in \ell^p(\mathbb{N})$ and
$
x=\sum_{n=1}^{\infty}f_n(x)\tau_n, \forall x \in \mathcal{X}.
$
\item There exists a q-Bessel sequence $\{\tau_n\}_{n}$ in $\mathcal{X}\subseteq \mathcal{X}^{**}$ such that
$ x=\sum_{n=1}^{\infty}f_n(x)\tau_n, $ $ \forall x \in \mathcal{X}.
$
\item There exists a q-Bessel sequence $\{\tau_n\}_{n}$ in $\mathcal{X}\subseteq \mathcal{X}^{**}$ such that
$
f=\sum_{n=1}^{\infty}f(\tau_n)f_n, $ $ \forall f \in \mathcal{X}^*.
$
\end{enumerate}
In each of the cases (iv) and (v), $\{\tau_n\}_n$ is actually a q-frame for $\mathcal{X}^*$.
\end{theorem}
\begin{proof}
Proof of (i) $\iff$ (ii) is similar to the proof of (ii) $\iff$ (iii) in Theorem \ref{CHARLIPMETRIC}.\\
(iii) $\Rightarrow$ (i) Given information tells that the map
\begin{align*}
V: \ell^p(\mathbb{N}) \ni \{c_n\}_{n} \to \sum_{n=1}^{\infty}c_n\tau_n \in \mathcal{X}
\end{align*}
is well-defined. Banach-Steinhaus theorem now asserts that $V$ is bounded. Now for $y=\theta_f(x), $ $ x \in \mathcal{X}$ we get
\begin{align*}
Vy=V\theta_f(x)=V(\{f_n(x)\}_{n})=\sum_{n=1}^{\infty}f_n(x)\tau_n=x=\theta_f^{-1} \theta_f(x)=\theta_f^{-1} y.
\end{align*}
(i) $\Rightarrow$ (iii) Let $\{e_n\}_{n}$ be the standard Schauder basis for $\ell^p(\mathbb{N})$ and define $\tau_n \coloneqq Ve_n$, for all $n$. Since $V$ is bounded linear and $\sum_{n=1}^{\infty}c_ne_n$ converges for all $\{c_n\}_{n}\in \ell^p(\mathbb{N})$, it follows that $\sum_{n=1}^{\infty}c_n\tau_n$ converges for all $\{c_n\}_{n}\in \ell^p(\mathbb{N})$. Moreover,
\begin{align*}
x=V\theta_f(x)=V(\{f_n(x)\}_{n})=\sum_{n=1}^{\infty}f_n(x)\tau_n, \quad \forall x \in \mathcal{X}.
\end{align*}
(iii) $\iff$ (iv) By considering $\tau_n$ in $\mathcal{X}^{**}$ through James embedding and using Theorem \ref{CASAZZASEPARABLECHARACTERIZATION} we get that $\{\tau_n\}_{n}$ is a q-Bessel sequence in $\mathcal{X}$ if and only if $\sum_{n=1}^{\infty}c_n\tau_n$ converges for all $\{c_n\}_{n}\in \ell^p(\mathbb{N})$. \\
(iv) $\Rightarrow$ (v) Let $b$ be a Bessel bound for $\{\tau_n\}_{n}$. Then for all $f \in \mathcal{X}^*$ and $n\in \mathbb{N}$,
\begin{align*}
&\left\|f-\sum_{k=1}^{n}f(\tau_k)f_k\right\|_{\operatorname{Lip}_0}=\sup_{x, y \in \mathcal{X},~ x\neq y} \frac{\left|\left(f-\sum_{k=1}^{n}f(\tau_k)f_k\right)(x)-\left(f-\sum_{k=1}^{n}f(\tau_k)f_k\right)(y)\right|}{\|x-y\|}\\
&\quad=\sup_{x, y \in \mathcal{X},~ x\neq y} \frac{\left|f\left(\sum_{k=1}^{\infty}f_k(x)\tau_k\right)-f\left(\sum_{k=1}^{\infty}f_k(y)\tau_k\right)-\sum_{k=1}^{n}f(\tau_k)(f_k(x)-f_k(y))\right|}{\|x-y\|}\\
&\quad=\sup_{x, y \in \mathcal{X}, ~x\neq y} \frac{\left|\sum_{k=1}^{n}f(\tau_k)(f_k(x)-f_k(y))-\sum_{k=1}^{\infty}f(\tau_k)(f_k(x)-f_k(y))\right|}{\|x-y\|}\\
&\quad=\sup_{x, y \in \mathcal{X}, ~x\neq y} \frac{\left|\sum_{k=n+1}^{\infty}f(\tau_k)(f_k(x)-f_k(y))\right|}{\|x-y\|}\\
&\quad\leq \sup_{x, y \in \mathcal{X}, ~x\neq y} \frac{\left(\sum_{k=n+1}^{\infty}|f(\tau_k)|^q\right)^\frac{1}{q}\left(\sum_{k=n+1}^{\infty}|f_k(x)-f_k(y)|^p\right)^\frac{1}{p}}{\|x-y\|}\\
&\quad\leq b' \left(\sum_{k=n+1}^{\infty}|f(\tau_k)|^q\right)^\frac{1}{q} \to 0 \text{ as } n \to \infty,
\end{align*}
where $b'$ is the upper frame bound of the metric p-frame $\{f_n\}_{n}$ and the tail tends to zero because $\{\tau_n\}_{n}$ is a q-Bessel sequence, so that $\sum_{k=1}^{\infty}|f(\tau_k)|^q<\infty$.
(v) $\Rightarrow$ (iv) Let $b$ be a Bessel bound for $\{\tau_n\}_{n}$. Let $x \in \mathcal{X}$ and $n\in \mathbb{N}$. Then
\begin{align*}
\left\|x-\sum_{k=1}^{n}f_k(x)\tau_k\right\|&=\sup_{f\in \mathcal{X}^*, \|f\|=1} \left|f(x)-\sum_{k=1}^{n}f_k(x)f(\tau_k)\right|\\
&=\sup_{f\in \mathcal{X}^*, \|f\|=1} \left|\left(\sum_{k=1}^{\infty}f(\tau_k)f_k\right)(x)-\sum_{k=1}^{n}f_k(x)f(\tau_k)\right|\\
&=\sup_{f\in \mathcal{X}^*, \|f\|=1} \left|\sum_{k=n+1}^{\infty}f_k(x)f(\tau_k)\right|\\
&\leq \left(\sum_{k=n+1}^{\infty}|f(\tau_k)|^q\right)^\frac{1}{q}\left(\sum_{k=n+1}^{\infty}|f_k(x)-f_k(0)|^p\right)^\frac{1}{p}\\
&\leq b \left(\sum_{k=n+1}^{\infty}|f_k(x)|^p\right)^\frac{1}{p} \to 0 \text{ as } n \to \infty.
\end{align*}
Now we are left with proving that $\{\tau_n\}_{n}$ is a q-frame for $\mathcal{X}^*$. Assume (iv). Let $f \in \mathcal{X}^*.$ Then, by H\"{o}lder's inequality and the upper frame bound $b'$ of $\{f_n\}_{n}$ (recall $f_n(0)=0$),
\begin{align*}
\|f\|&=\sup_{x\in \mathcal{X}, \|x\|=1} \left|f(x)\right|=\sup_{x\in \mathcal{X}, \|x\|=1} \left|f\left(\sum_{n=1}^{\infty}f_n(x)\tau_n\right)\right|\\
&=\sup_{x\in \mathcal{X}, \|x\|=1} \left|\sum_{n=1}^{\infty}f_n(x)f(\tau_n)\right|\leq \sup_{x\in \mathcal{X}, \|x\|=1}\left(\sum_{n=1}^{\infty}|f_n(x)|^p\right)^\frac{1}{p}\left(\sum_{n=1}^{\infty}|f(\tau_n)|^q\right)^\frac{1}{q}\\
&\leq b'\left(\sum_{n=1}^{\infty}|f(\tau_n)|^q\right)^\frac{1}{q}.
\end{align*}
Since $f$ was arbitrary, this is the lower q-frame inequality for $\{\tau_n\}_{n}$; combined with the q-Bessel condition, the conclusion follows.
\end{proof}
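A concrete instance of the expansion in (iii), for the illustrative case $\mathcal{X}=\mathbb{R}$ and $p=q=2$ (our choice, not from the text): take $f_n(x)=cx/n$ and $\tau_n=c/n$ with $c=\sqrt{6}/\pi$, so that $\{\tau_n\}_n\in\ell^2$ and $\sum_n f_n(x)\tau_n=x\,c^2\sum_n n^{-2}=x$.

```python
import math

# Illustrative instance of (iii) on X = R with p = q = 2:
# f_n(x) = c*x/n and tau_n = c/n, c = sqrt(6)/pi, so sum_n f_n(x)*tau_n = x
# because c^2 * sum 1/n^2 = (6/pi^2) * (pi^2/6) = 1.
c = math.sqrt(6) / math.pi
N = 200_000                      # truncation of the series
x = 3.7
approx = sum((c * x / n) * (c / n) for n in range(1, N + 1))
assert abs(approx - x) < 1e-3    # partial sums converge to x
```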
Theorem \ref{CASAZZASEPARABLECHARACTERIZATION} and Theorem \ref{PALL} suggest the following question:
for which metric spaces and BK-spaces does Theorem \ref{PALL} hold? We next present a result which demands only the reconstruction of elements using Lipschitz functions on a Banach space, without frame conditions. We first record a lemma for this purpose.
\begin{lemma}
(cf. \cite{CASAZZACHRISTENSENSTOEVA})\label{ANOTHER}
Given a Banach space $\mathcal{X}$ and a sequence $\{\tau_n\}_{n}$ of non-zero elements in $\mathcal{X}$, let
\begin{align*}
\mathcal{Y}_d \coloneqq \left\{\{a_n\}_{n}:\sum_{n=1}^\infty a_n \tau_n \text{ converges in } \mathcal{X}\right\}.
\end{align*}
Then $\mathcal{Y}_d$ is a Banach space with respect to the norm
\begin{align*}
\| \{a_n\}_{n}\|\coloneqq \sup_{m }\left\|\sum_{n=1}^m a_n \tau_n\right\|.
\end{align*}
Further, the canonical unit vectors form a Schauder basis for $\mathcal{Y}_d$.
\end{lemma}
\begin{theorem}
Let $\mathcal{X}$ be a Banach space and $\{f_n\}_{n}$ be a sequence in $\operatorname{Lip}_0(\mathcal{X}, \mathbb{K})$. Then the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item There exists a sequence $\{\tau_n\}_{n}$ in $\mathcal{X}$ such that
$
x=\sum_{n=1}^{\infty}f_n(x)\tau_n, \forall x \in \mathcal{X}.
$
\item There exists a sequence $\{\tau_n\}_{n}$ in $\mathcal{X}$ such that, with $S_n(x)\coloneqq \sum_{k=1}^{n}f_k(x)\tau_k$ for all $x \in \mathcal{X}$ and $n \in \mathbb{N}$, we have $\sup_{n \in\mathbb{N}}\|S_n\|_{\operatorname{Lip}_0} <\infty$, and there exist a BK-space $\mathcal{M}_d$ and a bounded linear map $S:\mathcal{M}_d \to \mathcal{X}$ such that $(\{f_n\}_{n}, S)$ is a metric frame for $\mathcal{X}$.
\end{enumerate}
Further, a choice for $\tau_n$ is $\tau_n=Se_n$ for each $n \in \mathbb{N}$, where $\{e_n\}_{n}$ is the sequence of canonical unit vectors in $\mathcal{M}_d$.
\end{theorem}
\begin{proof}
(ii) $\Rightarrow$ (i) This follows from Theorem \ref{PALL}.\\
(i) $\Rightarrow$ (ii) We give an argument similar to the arguments in \cite{CASAZZACHRISTENSENSTOEVA}. Define $A\coloneqq \{n \in \mathbb{N}: \tau_n=0\}$ and $B\coloneqq \mathbb{N} \setminus A$. Let $c_0(A)$ be the space of sequences indexed by $A$ converging to zero, equipped with the sup-norm, and let $\{e_n\}_{n\in A}$ be its canonical Schauder basis. Since the norm is the sup-norm, it easily follows that $\{\frac{1}{n(\|f_n\|_{\operatorname{Lip}_0}+1)}e_n\}_{n\in A}$ is also a Schauder basis for $c_0(A)$. Define
\begin{align*}
\mathcal{Z}_d\coloneqq \left\{\{c_n\}_{n\in A}: \sum_{n\in A}\frac{c_n}{n(\|f_n\|_{\operatorname{Lip}_0}+1)}e_n \text{ converges in } c_0(A)\right\}.
\end{align*}
We equip $\mathcal{Z}_d $ with the norm
\begin{align*}
\|\{c_n\}_{n\in A}\|_{\mathcal{Z}_d}\coloneqq \left\|\left\{\frac{c_n}{n(\|f_n\|_{\operatorname{Lip}_0}+1)}\right\}_{n\in A}\right\|_{c_0(A)}=\sup _{n\in A}\left|\frac{c_n}{n(\|f_n\|_{\operatorname{Lip}_0}+1)}\right|.
\end{align*}
Then $\{e_n\}_{n\in A}$ is a Schauder basis for $\mathcal{Z}_d $. Clearly $\mathcal{Z}_d $ is a BK-space. Let $\mathcal{Y}_d $ be as defined in Lemma \ref{ANOTHER}, for the index set $B$. Now set $\mathcal{M}_d \coloneqq \mathcal{Y}_d \oplus \mathcal{Z}_d $ equipped with norm $\|y \oplus z \|_{\mathcal{M}_d}\coloneqq \|y\|_{\mathcal{Y}_d} +\|z\|_{\mathcal{Z}_d}$. It then follows that, for each $x \in \mathcal{X}$, $\{f_n(x)\}_{n\in B}\oplus \{f_n(x)\}_{n\in A} \in \mathcal{M}_d$. We next show that $\{f_n\}_{n}$ is a metric $\mathcal{M}_d$-frame for $\mathcal{X}$. Let $x, y \in \mathcal{X}$. Then
\begin{align*}
\|x-y\|&=\left\|\sum_{n=1}^{\infty}(f_n(x)-f_n(y))\tau_n\right\|=\lim_{n\to\infty}\left\|\sum_{k=1}^{n}(f_k(x)-f_k(y))\tau_k\right\|\\
&\leq \sup _{n\in \mathbb{N}}\left\|\sum_{k=1}^{n}(f_k(x)-f_k(y))\tau_k\right\|=\sup _{n\in B}\left\|\sum_{k=1}^{n}(f_k(x)-f_k(y))\tau_k\right\|\\
&=\|\{f_n(x)-f_n(y)\}_{n\in B}\|_{\mathcal{Y}_d}\\
&\leq \|\{f_n(x)-f_n(y)\}_{n\in B}\|_{\mathcal{Y}_d}+\|\{f_n(x)-f_n(y)\}_{n\in A}\|_{\mathcal{Z}_d}\\
&= \|\{f_n(x)-f_n(y)\}_{n\in B}\oplus \{f_n(x)-f_n(y)\}_{n\in A}\|_{\mathcal{M}_d}
\end{align*}
and
\begin{align*}
&\|\{f_n(x)-f_n(y)\}_{n\in B}\oplus \{f_n(x)-f_n(y)\}_{n\in A}\|_{\mathcal{M}_d}\\
&=\|\{f_n(x)-f_n(y)\}_{n\in B}\|_{\mathcal{Y}_d}+\|\{f_n(x)-f_n(y)\}_{n\in A}\|_{\mathcal{Z}_d}\\
&=\sup _{n\in B}\left\|\sum_{k=1}^{n}(f_k(x)-f_k(y))\tau_k\right\|+\sup _{n\in A}\left|\frac{f_n(x)-f_n(y)}{n(\|f_n\|_{\operatorname{Lip}_0}+1)}\right|\\
&=\sup _{n\in B}\left\|S_n(x)-S_n(y)\right\|+\sup _{n\in A}\left|\frac{f_n(x)-f_n(y)}{n(\|f_n\|_{\operatorname{Lip}_0}+1)}\right|\\
&\leq \sup_{n \in B}\|S_n\|_{\operatorname{Lip}_0}\|x-y\|+\sup _{n\in A}\frac{\|f_n\|_{\operatorname{Lip}_0}\|x-y\|}{n(\|f_n\|_{\operatorname{Lip}_0}+1)}\\
&\leq\left(\sup_{n \in B}\|S_n\|_{\operatorname{Lip}_0}+1\right)\|x-y\|.
\end{align*}
We now define
\begin{align*}
S:\mathcal{M}_d \ni \{a_n\}_{n\in B}\oplus \{b_n\}_{n\in A}\mapsto \sum_{n\in B}a_n\tau_n \in \mathcal{X}.
\end{align*}
Clearly $S$ is linear. Boundedness of $S$ follows from the following calculation.
\begin{align*}
\|S(\{a_n\}_{n\in B}\oplus \{b_n\}_{n\in A})\|&=\left\|\sum_{n\in B}a_n\tau_n\right\|\leq \sup _{n\in B}\left\|\sum_{k=1}^n a_k\tau_k\right\|\\
&=\|\{a_n\}_{n\in B}\|_{\mathcal{Y}_d} \leq \|\{a_n\}_{n\in B}\oplus \{b_n\}_{n\in A}\|_{\mathcal{M}_d}.
\end{align*}
\end{proof}
Using Theorem \ref{POINTEDSPLITS} we derive the following result, which shows that from a metric frame for a metric space we obtain a metric frame, consisting of linear functionals, for a subset of a Banach space.
\begin{theorem}\label{LIPIFFLINEAR}
Let $\{f_n\}_{n}$ be a sequence in $ \operatorname{Lip}_0(\mathcal{M}, \mathbb{K})$. For each $n\in \mathbb{N}$, let $T_{f_n}$ be the linearization of $f_n$. Let $e$ and $\mathcal{F}(\mathcal{M})$
be as in Theorem \ref{POINTEDSPLITS}. Then $\{f_n\}_{n}$ is a
metric frame for $\mathcal{M}$ with bounds $a$ and $b$ if and only if $\{T_{f_n}\}_{n}$ is a
metric frame for $e(\mathcal{M})$ with bounds $a$ and $b$. In particular, $(\{f_n\}_{n}, S)$ is a metric frame for $\mathcal{M}$ if and only if $(\{T_{f_n}\}_{n}, eS)$ is a metric frame for $e(\mathcal{M})$.
\end{theorem}
\begin{proof}
$(\Rightarrow)$ Let $u,v \in e(\mathcal{M})$. Then $u=e(x), v=e(y)$, for some $x, y \in \mathcal{M}$. Now using the fact that $e$ is an isometry,
\begin{align*}
a\|u-v\|&=a\|e(x)-e(y)\|=a\,d(x,y)\leq \|\{f_n(x)-f_n(y)\}_n\|\\
&=\|\{(T_{f_n}e)(x)-(T_{f_n}e)(y)\}_n\|=\|\{T_{f_n}(e(x))-T_{f_n}(e(y))\}_n\|\\
&=\|\{T_{f_n}(u)-T_{f_n}(v)\}_n\|\leq b \,d(x,y)=b\|e(x)-e(y)\|=b\|u-v\|.
\end{align*}
$(\Leftarrow)$ Let $x, y \in \mathcal{M}$. Then $e(x), e(y) \in e(\mathcal{M})$. Hence
\begin{align*}
a\,d(x,y)&=a\|e(x)-e(y)\|\leq \|\{T_{f_n}(e(x))-T_{f_n}(e(y))\}_n\|\\
&=\|\{f_n(x)-f_n(y)\}_n\|\leq b\|e(x)-e(y)\|=b\,d(x,y).
\end{align*}
Since $x,y$ were arbitrary, the result follows.
\end{proof}
\begin{remark}
We cannot use Theorem \ref{LIPIFFLINEAR} to view metric frames as Banach frames. The reason is that $e(\mathcal{M})$ is merely a subset of $\mathcal{F}(\mathcal{M})$ and need not be a vector space. Moreover, the map $eS$ is Lipschitz but need not be linear.
\end{remark}
{\onehalfspacing \section{PERTURBATIONS}
Here we present some stability results. These are important because they show that sequences close to metric frames are again metric frames; in other words, perturbing a metric frame yields another metric frame.
\begin{theorem}\label{FIRSTPERTURB}
Let $\{f_n\}_{n}$ be a
p-metric frame for $\mathcal{M}$ with bounds $a$ and $b$. Let $\{g_n\}_{n}$ be a sequence in $\operatorname{Lip}(\mathcal{M}, \mathbb{K})$ satisfying the following.
\begin{enumerate}[label=(\roman*)]
\item There exist $\alpha, \beta, \gamma \geq 0$ such that $\beta<1$, $\alpha<1$, $\gamma<(1-\alpha)a$.
\item For all $x, y \in \mathcal{M}$, and $ m=1, 2,\dots ,$
\begin{align}\label{PERINEQUA}
\left(\sum_{n=1}^m|(f_n-g_n)(x)-(f_n-g_n)(y)|^p\right)^\frac{1}{p}&\leq \alpha \left(\sum_{n=1}^m|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}\nonumber\\
&+
\beta \left(\sum_{n=1}^m|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}+\gamma \,d(x,y).
\end{align}
\end{enumerate}
Then $\{g_n\}_{n}$ is a
p-metric frame for $\mathcal{M}$ with bounds
$
\frac{((1-\alpha)a-\gamma)}{1+\beta}$ and $ \frac{((1+\alpha)b+\gamma)}{1-\beta}.
$
\end{theorem}
\begin{proof}
Using Minkowski's inequality and Inequality (\ref{PERINEQUA}), we get, for all $x, y \in \mathcal{M}$ and $m\in \mathbb{N}$,
\begin{align*}
&\left(\sum_{n=1}^m|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}\\
&\leq \left(\sum_{n=1}^m|(f_n-g_n)(x)-(f_n-g_n)(y)|^p\right)^\frac{1}{p}+ \left(\sum_{n=1}^m|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}\\
&\leq (1+\alpha) \left(\sum_{n=1}^m|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}+\beta \left(\sum_{n=1}^m|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}+\gamma \,d(x,y)
\end{align*}
which implies
\begin{align*}
(1-\beta)\left(\sum_{n=1}^m|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}\leq (1+\alpha)\left(\sum_{n=1}^m|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}+\gamma \,d(x,y),
\end{align*}
for all $x, y \in \mathcal{M}$. Since the sum $\sum_{n=1}^\infty|f_n(x)-f_n(y)|^p$ converges, $\sum_{n=1}^\infty|g_n(x)-g_n(y)|^p$ will also converge. Inequality (\ref{PERINEQUA}) now gives
\begin{align}\label{PERINEQUA2}
\left(\sum_{n=1}^\infty|(f_n-g_n)(x)-(f_n-g_n)(y)|^p\right)^\frac{1}{p}&\leq \alpha \left(\sum_{n=1}^\infty|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}\nonumber\\
&+
\beta \left(\sum_{n=1}^\infty|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}+\gamma \,d(x,y) .
\end{align}
By doing a similar calculation and using Inequality (\ref{PERINEQUA2}) we get for all $x, y \in \mathcal{M}$,
\begin{align*}
&\left(\sum_{n=1}^\infty|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}\\
&\quad \leq \left(\sum_{n=1}^\infty|(f_n-g_n)(x)-(f_n-g_n)(y)|^p\right)^\frac{1}{p}+ \left(\sum_{n=1}^\infty|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}\\
&\quad\leq (1+\alpha) \left(\sum_{n=1}^\infty|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}+\beta \left(\sum_{n=1}^\infty|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}+\gamma \,d(x,y)\\
&\quad\leq (1+\alpha)b \,d(x,y)+\beta \left(\sum_{n=1}^\infty|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}+\gamma \,d(x,y)\\
&\quad=((1+\alpha)b+\gamma) \,d(x,y)+\beta \left(\sum_{n=1}^\infty|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}
\end{align*}
which gives
\begin{align*}
(1-\beta)\left(\sum_{n=1}^\infty|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}\leq ((1+\alpha)b+\gamma) \,d(x,y), \quad \forall x, y \in \mathcal{M}\\
\text{i.e.,} \left(\sum_{n=1}^\infty|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}\leq \frac{((1+\alpha)b+\gamma)}{1-\beta}\,d(x,y), \quad \forall x, y \in \mathcal{M}.
\end{align*}
Hence we obtain the upper frame bound for $\{g_n\}_n$. For the lower frame bound, let $x, y$ $ \in \mathcal{M}$. Then
\begin{align*}
&\left(\sum_{n=1}^\infty|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}\\
&\quad \leq \left(\sum_{n=1}^\infty|(f_n-g_n)(x)-(f_n-g_n)(y)|^p\right)^\frac{1}{p}+ \left(\sum_{n=1}^\infty|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}\\
&\quad \leq \alpha \left(\sum_{n=1}^\infty|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}+(1+\beta)\left(\sum_{n=1}^\infty|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}+\gamma \,d(x,y)
\end{align*}
which implies
\begin{align*}
(1-\alpha)a\,d(x,y)&\leq (1-\alpha)\left(\sum_{n=1}^\infty|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}\\
&\leq (1+\beta)\left(\sum_{n=1}^\infty|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}+\gamma \,d(x,y), \quad \forall x, y \in \mathcal{M}\\
\text{i.e.,} ~\frac{((1-\alpha)a-\gamma)}{1+\beta}\,d(x,y) &\leq \left(\sum_{n=1}^\infty|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}, \quad \forall x, y \in \mathcal{M}.
\end{align*}
\end{proof}
Using Theorem \ref{FIRSTPERTURB} we obtain the following result.
\begin{corollary}
Let $\{f_n\}_{n}$ be a
p-metric frame for $\mathcal{M}$ with bounds $a$ and $b$. Let $\{g_n\}_{n}$ be a sequence in $\operatorname{Lip}(\mathcal{M}, \mathbb{K})$ such that
\begin{align*}
r\coloneqq \left(\sum_{n=1}^\infty \operatorname{Lip}(f_n-g_n)^p\right)^\frac{1}{p} <a.
\end{align*}
Then $\{g_n\}_{n}$ is a
p-metric frame for $\mathcal{M}$ with bounds $a-r$ and $b+r$.
\end{corollary}
\begin{proof}
Define $\alpha\coloneqq 0$, $\beta\coloneqq 0$ and $\gamma\coloneqq r$. Then
for all $x, y \in \mathcal{M}$,
\begin{align*}
& \left(\sum_{n=1}^\infty|(f_n-g_n)(x)-(f_n-g_n)(y)|^p\right)^\frac{1}{p}\leq \left(\sum_{n=1}^\infty \operatorname{Lip}(f_n-g_n)^p\,d(x,y)^p\right)^\frac{1}{p}\\
&\quad=\left(\sum_{n=1}^\infty \operatorname{Lip}(f_n-g_n)^p\right)^\frac{1}{p}\,d(x,y)=r\,d(x,y)\\
&\quad=\alpha \left(\sum_{n=1}^\infty|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}\nonumber
+
\beta \left(\sum_{n=1}^\infty|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}+\gamma \,d(x,y).
\end{align*}
Thus the hypothesis of Theorem \ref{FIRSTPERTURB} holds, and the corollary follows.
\end{proof}
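The bounds in this corollary can be checked on a toy example. Below (an illustrative choice, not from the text) $f_n(x)=x/2^n$ is a 2-metric frame for $\mathbb{R}$ with $a=b=(\sum_n 4^{-n})^{1/2}$, and we perturb it by $g_n(x)=f_n(x)+\frac{0.1}{2^n}\sin x$, so that $\operatorname{Lip}(f_n-g_n)=0.1/2^n$ and $r=0.1\,a<a$:

```python
import math

p, N = 2, 40                                # exponent and series truncation
C = sum(4.0 ** -n for n in range(1, N + 1)) ** 0.5
a = b = C                                   # frame bounds of f_n(x) = x / 2^n
f = lambda n, x: x / 2 ** n
g = lambda n, x: x / 2 ** n + 0.1 / 2 ** n * math.sin(x)   # perturbed family
r = 0.1 * C                                 # (sum_n Lip(f_n - g_n)^p)^(1/p)
assert r < a

for x, y in [(0.0, 1.0), (-2.0, 3.5), (10.0, 10.1)]:
    ratio = sum(abs(g(n, x) - g(n, y)) ** p
                for n in range(1, N + 1)) ** (1 / p) / abs(x - y)
    assert a - r - 1e-9 <= ratio <= b + r + 1e-9   # perturbed bounds hold
```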
\begin{corollary}
Let $\{f_n\}_{n}$ be a
p-metric Bessel sequence for $\mathcal{M}$ with bound $b$. Let $\{g_n\}_{n}$ be a sequence in $\operatorname{Lip}(\mathcal{M}, \mathbb{K})$ satisfying the following.
\begin{enumerate}[label=(\roman*)]
\item There exist $\alpha, \beta, \gamma \geq 0$ such that $\beta<1$.
\item For all $x, y \in \mathcal{M}$, and $m=1,2, \dots ,$
\begin{align*}
\left(\sum_{n=1}^m|(f_n-g_n)(x)-(f_n-g_n)(y)|^p\right)^\frac{1}{p}&\leq \alpha \left(\sum_{n=1}^m|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}\\
&+
\beta \left(\sum_{n=1}^m|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}+\gamma \,d(x,y).
\end{align*}
\end{enumerate}
Then $\{g_n\}_{n}$ is a
p-metric Bessel sequence for $\mathcal{M}$ with bound $ \frac{((1+\alpha)b+\gamma)}{1-\beta}.$
\end{corollary}
We next derive a stability result in which we perturb the Lipschitz functions and then establish the existence of a reconstruction operator. This is motivated by Theorem \ref{PERTURBATIONBANACH12}.
\begin{theorem}\label{STABILITYMA}
Let $(\{f_n\}_{n}, S)$ be a
metric frame for a Banach space $\mathcal{X}$. Assume that
$f_n(0)=0$ for all $n \in \mathbb{N}$, and $S(0)=0$. Let $\{g_n\}_{n}$ be a sequence in
$\operatorname{Lip}_0(\mathcal{X}, \mathbb{K})$ satisfying the following.
\begin{enumerate}[label=(\roman*)]
\item There exist $\alpha, \gamma\geq 0$ such that
\begin{align}\label{MFPER}
\|\{(f_n-g_n)(x)-(f_n-g_n)(y)\}_n\|\leq \alpha \|\{f_n(x)-f_n(y)\}_n\|+\gamma \|x-y\|, ~ \forall x, y \in \mathcal{X}.
\end{align}
\item $\alpha \|\theta_f\|_{\operatorname{Lip}_0}+\gamma< \|S\|_{\operatorname{Lip}_0}^{-1}.$
\end{enumerate}
Then there exists a reconstruction Lipschitz operator $T$ such that $(\{f_n\}_{n}, T)$ is a
metric frame for $\mathcal{X}$ with bounds
$
\|S\|_{\operatorname{Lip}_0}^{-1}-(\alpha\|\theta_f\|_{\operatorname{Lip}_0}+\gamma) $ and $
\|\theta_f\|_{\operatorname{Lip}_0}+(\alpha\|\theta_f\|_{\operatorname{Lip}_0}+\gamma).$
\end{theorem}
\begin{proof}
Let $ x\in \mathcal{X}$. Since $g_n(0)=0$ and $f_n(0)=0$ for all $n \in \mathbb{N}$, using Inequality (\ref{MFPER}),
\begin{align*}
\|\{g_n(x)\}_n\| &\leq \|\{(f_n-g_n)(x)\}_n\|+\|\{f_n(x)\}_n\|\\
&\leq (\alpha+1)\|\{f_n(x)\}_n\|+\gamma \|x\|.
\end{align*}
Therefore if we define $\theta_g:\mathcal{X} \ni x \mapsto \{g_n(x)\}_{n} \in\mathcal{M}_d$, then this map is well-defined. Again
using Inequality (\ref{MFPER}), we show that $\theta_g$ is Lipschitz. For $x, y \in \mathcal{X}$,
\begin{align*}
\|\theta_gx-\theta_gy\|&=\|\{g_n(x)-g_n(y)\}_n\|=\|\{-g_n(x)+g_n(y)\}_n\| \\
&\leq \|\{(f_n-g_n)(x)-(f_n-g_n)(y)\}_n\|+\|\{f_n(x)-f_n(y)\}_n\|\\
&\leq (1+\alpha)\|\{f_n(x)-f_n(y)\}_n\|+\gamma \|x-y\|=(1+\alpha)\|\theta_fx-\theta_fy\|+\gamma \|x-y\|\\
&\leq (1+\alpha) \|\theta_f\|_{\operatorname{Lip}_0}\|x-y\|+\gamma \|x-y\|
=((1+\alpha) \|\theta_f\|_{\operatorname{Lip}_0}+\gamma)\|x-y\|.
\end{align*}
Thus $\|\theta_g\|_{\operatorname{Lip}_0}\leq (1+\alpha) \|\theta_f\|_{\operatorname{Lip}_0}+\gamma$. The preceding calculation
also shows that the upper frame bound is $(1+\alpha) \|\theta_f\|_{\operatorname{Lip}_0}+\gamma$. We see further that
Inequality (\ref{MFPER}) can be written as
\begin{align}\label{THETAFTHETAG}
\|(\theta_f-\theta_g)x-(\theta_f-\theta_g)y\|&\leq \alpha \|\theta_fx-\theta_fy\|+\gamma \|x-y\|\nonumber\\
&\leq (\alpha \|\theta_f\|_{\operatorname{Lip}_0}+\gamma)\|x-y\|, \quad \forall x, y \in \mathcal{X}.
\end{align}
Now noting
$S\theta_f=I_\mathcal{X}$ and using Inequality (\ref{THETAFTHETAG}) we see that
\begin{align*}
\|I_\mathcal{X}-S\theta_g\|_{\operatorname{Lip}_0}&=\|S\theta_f-S\theta_g\|_{\operatorname{Lip}_0}\\
&\leq \|S\|_{\operatorname{Lip}_0}\|\theta_f-\theta_g\|_{\operatorname{Lip}_0}\\
&\leq \|S\|_{\operatorname{Lip}_0}(\alpha \|\theta_f\|_{\operatorname{Lip}_0}+\gamma)<1.
\end{align*}
Since $\operatorname{Lip}_0(\mathcal{X})$ is a unital Banach algebra (Theorem \ref{LIPISABANACHALGEBRA}), the last inequality shows that $S\theta_g$ is invertible, its inverse
is also a Lipschitz operator, and
\begin{align*}
\|(S\theta_g)^{-1}\|_{\operatorname{Lip}_0}\leq
\frac{1}{1-\|S\|_{\operatorname{Lip}_0}(\alpha \|\theta_f\|_{\operatorname{Lip}_0}+\gamma)}.
\end{align*}
Define $T\coloneqq (S\theta_g)^{-1} S$. Then $T\theta_g=I_\mathcal{X}$ and
\begin{align*}
\|x-y\|&=\|T\theta_gx-T\theta_gy\|\leq \|T\|_{\operatorname{Lip}_0}\|\theta_gx-\theta_gy\|\\
&\leq \frac{1}{1-\|S\|_{\operatorname{Lip}_0}(\alpha \|\theta_f\|_{\operatorname{Lip}_0}+\gamma)}\|\theta_gx-\theta_gy\|,
\quad \forall x, y \in \mathcal{X}
\end{align*}
which gives the lower bound stated in the theorem.
\end{proof}
\begin{corollary}
Let $(\{f_n\}_{n}, S)$ be a
metric Bessel sequence for a Banach space $\mathcal{X}$. Assume that
$f_n(0)=0,$ for all $n \in \mathbb{N}$, and $S(0)=0$. Let $\{g_n\}_{n}$ be a collection in
$\operatorname{Lip}_0(\mathcal{X}, \mathbb{K})$ satisfying the following condition:
there exist $\alpha, \gamma\geq 0$ such that
\begin{align*}
\|\{(f_n-g_n)(x)-(f_n-g_n)(y)\}_n\|\leq \alpha \|\{f_n(x)-f_n(y)\}_n\|+\gamma \|x-y\|, \quad \forall x, y \in \mathcal{X}.
\end{align*}
Then there exists a reconstruction Lipschitz operator $T$ such that $(\{g_n\}_{n}, T)$ is a
metric Bessel sequence for $\mathcal{X}$ with bound
$\|\theta_f\|_{\operatorname{Lip}_0}+(\alpha\|\theta_f\|_{\operatorname{Lip}_0}+\gamma)$.
\end{corollary}
{\onehalfspacing \chapter{MULTIPLIERS FOR METRIC SPACES}\label{chap3} }
\section{DEFINITION AND BASIC PROPERTIES OF MULTIPLIERS}
In this chapter, we introduce and study multipliers for metric spaces.
We use the following notation in this chapter. Let $\mathcal{M}$ be a metric
space and $\mathcal{X}$ be a Banach space. Given $f \in
\operatorname{Lip}(\mathcal{M}, \mathbb{K})$ and $\tau \in \mathcal{X}$,
define
\begin{align*}
\tau\otimes f:\mathcal{M} \ni x \mapsto (\tau\otimes f)(x)\coloneqq f(x)\tau \in \mathcal{X}.
\end{align*}
Then it follows that $\tau\otimes f$ is a Lipschitz operator and $\operatorname{Lip}(\tau\otimes f)=\|\tau\|\operatorname{Lip}(f)$.
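For completeness, the identity for the Lipschitz number can be verified directly: for all $x, y \in \mathcal{M}$,

```latex
\begin{align*}
\|(\tau\otimes f)(x)-(\tau\otimes f)(y)\|=\|(f(x)-f(y))\tau\|
=|f(x)-f(y)|\,\|\tau\|\leq \|\tau\|\operatorname{Lip}(f)\,d(x,y),
\end{align*}
```

and dividing the middle expression by $d(x,y)$ and taking the supremum over $x\neq y$ gives $\operatorname{Lip}(\tau\otimes f)=\|\tau\|\operatorname{Lip}(f)$.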
We first derive a result which allows us to define multipliers for metric spaces. In the sequel, $1<p<\infty$ and $q$ denotes the conjugate index of $p$.
\begin{theorem}\label{DEFINITIONEXISTENCE}
Let $\{f_n\}_{n}$ in $\operatorname{Lip}_0(\mathcal{M}, \mathbb{K})$ be a
Lipschitz p-Bessel sequence for a pointed metric space $(\mathcal{M},
0)$ with bound $b$ and $\{\tau_n\}_{n}$ in a Banach space $\mathcal{X}$ be a
Lipschitz q-Bessel sequence for $\operatorname{Lip}_0(\mathcal{X},
\mathbb{K})$ with bound $d$. If
$\{\lambda_n\}_n \in \ell^\infty(\mathbb{N})$, then the map
\begin{align*}
T: \mathcal{M} \ni x \mapsto \sum_{n=1}^{\infty}\lambda_n (\tau_n\otimes
f_n) x \in \mathcal{X}
\end{align*}
is a well-defined Lipschitz operator such that $T(0)=0$ with Lipschitz
norm at most $bd\|\{\lambda_n\}_n\|_\infty.$
\end{theorem}
\begin{proof}
Let $n , m \in \mathbb{N}$ with $n \leq m$. Then for each $x \in
\mathcal{M}$, using H\"{o}lder's inequality,
\begin{align*}
\left\|\sum_{k=n}^{m}\lambda_k(\tau_k\otimes f_k)(x)\right\|&=\left\|\sum_{k=n}^{m}\lambda_k f_k(x)\tau_k\right\|=\sup_{\phi \in \mathcal{X}^*,\|\phi\|\leq 1}\left|\phi\left(\sum_{k=n}^{m}\lambda_k f_k(x)\tau_k\right)\right|\\
&=\sup_{\phi \in \mathcal{X}^*,\|\phi\|\leq 1}\left|\sum_{k=n}^{m}\lambda_k f_k(x)\phi(\tau_k)\right|\\
&\leq\sup_{\phi \in \mathcal{X}^*,\|\phi\|\leq 1}\sum_{k=n}^{m}|\lambda_k| |f_k(x)||\phi(\tau_k)|\\
&\leq \sup_{j \in \mathbb{N}}|\lambda_j|\sup_{\phi \in \mathcal{X}^*,\|\phi\|\leq 1}\sum_{k=n}^{m} |f_k(x)||\phi(\tau_k)|\\
&\leq \sup_{j \in \mathbb{N}}|\lambda_j|\sup_{\phi \in \mathcal{X}^*,\|\phi\|\leq 1}\left(\sum_{k=n}^{m}|f_k(x)|^p\right)^\frac{1}{p}\left(\sum_{k=n}^{m}|\phi(\tau_k)|^q\right)^\frac{1}{q}\\
&\leq \sup_{j \in \mathbb{N}}|\lambda_j|\sup_{\phi \in
\mathcal{X}^*,\|\phi\|\leq
1}\left(\sum_{k=n}^{m}|f_k(x)|^p\right)^\frac{1}{p}d\|\phi\|\\
&=d\sup_{j \in \mathbb{N}}|\lambda_j|\left(\sum_{k=n}^{m}|f_k(x)|^p\right)^\frac{1}{p}.
\end{align*}
Since the series $\sum_{k=1}^{\infty}|f_k(x)|^p$ converges, its tails tend to zero, so the partial sums of
$\sum_{k=1}^{\infty}\lambda_k(\tau_k\otimes f_k)(x)$ form a Cauchy sequence and the series converges in $\mathcal{X}$.
Now for all $x,y \in \mathcal{M}$,
\begin{align*}
\|Tx-Ty\|&=\left\|\sum_{n=1}^{\infty}\lambda_n f_n(x)\tau_n-\sum_{n=1}^{\infty}\lambda_n f_n(y)\tau_n\right\|=\left\|\sum_{n=1}^{\infty}\lambda_n (f_n(x)-f_n(y))\tau_n\right\|\\
&=\sup_{\phi \in \mathcal{X}^*,\|\phi\|\leq 1}\left|\phi\left(\sum_{n=1}^{\infty}\lambda_n (f_n(x)-f_n(y))\tau_n\right)\right|\\
&=\sup_{\phi \in \mathcal{X}^*,\|\phi\|\leq 1}\left|\sum_{n=1}^{\infty}\lambda_n (f_n(x)-f_n(y))\phi(\tau_n)\right|\\
&\leq \sup_{n \in \mathbb{N}}|\lambda_n|\sup_{\phi \in \mathcal{X}^*,\|\phi\|\leq 1}\left(\sum_{n=1}^{\infty}|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}\left(\sum_{n=1}^{\infty}|\phi(\tau_n)|^q\right)^\frac{1}{q}\\
&\leq \sup_{n \in \mathbb{N}}|\lambda_n|\sup_{\phi \in \mathcal{X}^*,\|\phi\|\leq 1}\left(\sum_{n=1}^{\infty}|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}d\|\phi\|\\
&=d\sup_{n \in
\mathbb{N}}|\lambda_n|\left(\sum_{n=1}^{\infty}|f_n(x)-f_n(y)|^p\right)^\frac{1}
{p}\leq bd\sup_{n \in \mathbb{N}}|\lambda_n|d(x,y).
\end{align*}
Hence
\begin{align*}
\|T\|_{\operatorname{Lip}_0}=\sup_{x, y \in \mathcal{M},~ x\neq y}
\frac{\|Tx-Ty\|}{d(x,y)}\leq bd\sup_{n \in \mathbb{N}}|\lambda_n|.
\end{align*}
\end{proof}
\begin{corollary}
Let $\{f_n\}_{n}$ in $\operatorname{Lip}(\mathcal{M}, \mathbb{K})$ be a
Lipschitz p-Bessel sequence for a metric space $\mathcal{M}$ with
bound $b$ and $\{\tau_n\}_{n}$ in a Banach space $\mathcal{X}$ be a
Lipschitz q-Bessel sequence for $\operatorname{Lip}_0(\mathcal{X},
\mathbb{K})$ with bound $d$. If
$\{\lambda_n\}_n \in \ell^\infty(\mathbb{N})$, then for fixed $z \in
\mathcal{M}$, the map
\begin{align*}
T: \mathcal{M} \ni x \mapsto \sum_{n=1}^{\infty}\lambda_n (\tau_n\otimes
(f_n-f_n(z)) )x \in \mathcal{X}
\end{align*}
is a well-defined Lipschitz operator with Lipschitz
number at most $bd\|\{\lambda_n\}_n\|_\infty.$
\end{corollary}
\begin{proof}
Define $g_n\coloneqq f_n-f_n(z), \forall n \in \mathbb{N}$. Then for all $x, y
\in \mathcal{M}$,
\begin{align*}
\left(\sum_{n=1}^{\infty}|g_n(x)-g_n(y)|^p\right)^\frac{1}{p}=\left(\sum_{n=1}^
{\infty}|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}\leq b\,d(x,y).
\end{align*}
Hence $\{g_n\}_{n}$
is a
Lipschitz p-Bessel sequence for the pointed metric space $(\mathcal{M},
z)$ and we apply Theorem \ref{DEFINITIONEXISTENCE} to $\{g_n\}_{n}$ which gives the result.
\end{proof}
\begin{definition}\label{DEFINITION}
Let $\{f_n\}_{n}$ in $\operatorname{Lip}_0(\mathcal{M}, \mathbb{K})$ be
a Lipschitz p-Bessel sequence for a pointed metric space
$(\mathcal{M},
0)$ and $\{\tau_n\}_{n}$ in a Banach space $\mathcal{X}$ be a Lipschitz
q-Bessel sequence for $\operatorname{Lip}_0(\mathcal{X}, \mathbb{K})$. Let
$\{\lambda_n\}_n \in \ell^\infty(\mathbb{N})$. The Lipschitz operator
\begin{align*}
M_{\lambda,f, \tau}\coloneqq \sum_{n=1}^{\infty}\lambda_n (\tau_n\otimes f_n)
\end{align*}
is called the \textbf{Lipschitz $(p,q)$-Bessel multiplier}. The sequence $\{\lambda_n\}_n$ is called the \textbf{symbol} for $ M_{\lambda,f, \tau}.$
\end{definition}
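As a simple illustration (this particular example is ours and is included only to fix ideas), take $\mathcal{M}=\mathcal{X}=\ell^p(\mathbb{N})$ with base point $0$, let $f_n=\zeta_n$ be the coordinate functionals and $\tau_n=e_n$ the standard basis vectors. Assuming the Bessel conditions of Definition \ref{DEFINITION} hold in this setting, the multiplier acts diagonally:

```latex
\begin{align*}
M_{\lambda,f,\tau}\{a_k\}_k=\sum_{n=1}^{\infty}\lambda_n\,\zeta_n(\{a_k\}_k)\,e_n
=\{\lambda_n a_n\}_n, \quad \forall \{a_k\}_k \in \ell^p(\mathbb{N}),
\end{align*}
```

a linear operator with norm $\sup_{n\in\mathbb{N}}|\lambda_n|$, which matches the form of the bound in Theorem \ref{DEFINITIONEXISTENCE}.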
We easily see that Definition \ref{DEFINITION} generalizes Definition 3.2
in (\cite{RAHIMIBALAZSMUL}). By varying the symbol and fixing the other parameters of
the multiplier, we obtain a map from $\ell^\infty(\mathbb{N})$ to $\operatorname{Lip}_0(\mathcal{M}, \mathcal{X})$. Properties of
this map were derived for Hilbert spaces by Balazs (Lemma 7.1 in (\cite{BALAZSBASIC}))
and for Banach spaces by Rahimi and Balazs (Proposition 3.3 in
(\cite{RAHIMIBALAZSMUL})). In the next proposition we study it in the context of
metric spaces.
\begin{proposition}\label{INJECTIVE}
Let $\{f_n\}_{n}$ in $\operatorname{Lip}_0(\mathcal{M}, \mathbb{K})$ be
a Lipschitz p-Bessel sequence for $(\mathcal{M},0)$ with non-zero
elements, $\{\tau_n\}_{n}$ in $\mathcal{X}$ be a q-Riesz sequence for
$\operatorname{Lip}_0(\mathcal{X}, \mathbb{K})$ and $\{\lambda_n\}_n \in
\ell^\infty(\mathbb{N})$. Then the mapping
\begin{align*}
T:\ell^\infty(\mathbb{N})\ni \{\lambda_n\}_n \mapsto M_{\lambda,f, \tau}
\in \operatorname{Lip}_0(\mathcal{M}, \mathcal{X})
\end{align*}
is a well-defined injective bounded linear operator.
\end{proposition}
\begin{proof}
From the norm estimate of $M_{\lambda,f, \tau}$, we see that $T$ is a
well-defined bounded linear operator. Let $\{\lambda_n\}_n, \{\mu_n\}_n \in
\ell^\infty(\mathbb{N})$ be such that $M_{\lambda,f, \tau}
=T\{\lambda_n\}_n=T\{\mu_n\}_n=M_{\mu,f, \tau} $. Then
$\sum_{n=1}^{\infty}\lambda_n f_n (x)\tau_n =M_{\lambda,f, \tau}x =M_{\mu,f,
\tau}x=\sum_{n=1}^{\infty}\mu_n f_n (x)\tau_n $ for all $x \in
\mathcal{M}$, which gives $\sum_{n=1}^{\infty}(\lambda_n-\mu_n) f_n
(x)\tau_n=0$, $\forall x \in \mathcal{M}$. Now using Inequality
(\ref{RIESZSEQUENCEINEQUALITY}),
\begin{align*}
&a \left(\sum_{n=1 }^\infty|(\lambda_n-\mu_n) f_n
(x)|^q\right)^\frac{1}{q}\leq \left\|\sum_{n=1}^\infty (\lambda_n-\mu_n) f_n
(x)\tau_n\right\|=0, \quad \forall x \in \mathcal{M}\\
&\implies (\lambda_n-\mu_n) f_n (x)=0, \quad \forall n \in \mathbb{N},
\forall x \in \mathcal{M}.
\end{align*}
Let $n \in \mathbb{N}$ be fixed. Since $f_n\neq 0$, there exists $x \in
\mathcal{M}$ such that $f_n(x)\neq 0$. Therefore we get $\lambda_n-\mu_n=0$. By
varying $n \in \mathbb{N}$ we arrive at $\lambda_n=\mu_n$, $\forall n \in
\mathbb{N}$. Hence $T$ is injective.
\end{proof}
\section{CONTINUITY PROPERTIES OF MULTIPLIERS}
In Proposition \ref{RAHIMIBALAZSMULTIPLIERCOMPACT}, it was shown that whenever the symbol is in $c_0(\mathbb{N})$, the multiplier is compact. Using the notion of a Lipschitz compact operator (Definition \ref{LIPSCHITZCOMPACTDEFINITION}), we derive a nonlinear analogue of Proposition \ref{RAHIMIBALAZSMULTIPLIERCOMPACT}.
\begin{proposition}\label{MULTIPLIERISCOMPACT}
Let $\{f_n\}_{n}$ in $\operatorname{Lip}_0(\mathcal{M}, \mathbb{K})$ be
a Lipschitz p-Bessel sequence for $(\mathcal{M}, 0)$ with bound $b$
and $\{\tau_n\}_{n}$ in $\mathcal{X}$ be a Lipschitz q-Bessel sequence
for $\operatorname{Lip}_0(\mathcal{X}, \mathbb{K})$ with bound $d$. If
$\{\lambda_n\}_n \in c_0(\mathbb{N})$, then $M_{\lambda,f, \tau}$ is a Lipschitz
compact operator.
\end{proposition}
\begin{proof}
For each $m \in \mathbb{N}$, define $ M_{\lambda_m,f, \tau}\coloneqq \sum_{n=1}^{m}\lambda_n (\tau_n\otimes f_n )$. Then $ M_{\lambda_m,f, \tau}$ is a Lipschitz finite rank operator (Theorem \ref{LIPSCHITCOMPACTIFFLINEAR}). Now
\begin{align*}
\|M_{\lambda_m,f, \tau}-M_{\lambda,f,
\tau}\|_{\operatorname{Lip}_0}&=\sup_{x, y \in \mathcal{M}, ~x\neq y}
\frac{\|(M_{\lambda_m,f, \tau}-M_{\lambda,f, \tau})x-(M_{\lambda_m,f,
\tau}-M_{\lambda,f, \tau})y\|}{d(x,y)}\\
&=\sup_{x, y \in \mathcal{M}, ~x\neq y}
\frac{\left\|\sum_{n=m+1}^{\infty}\lambda_n f_n
(x)\tau_n-\sum_{n=m+1}^{\infty}\lambda_n f_n (y)\tau_n\right\|}{d(x,y)}\\
&=\sup_{x, y \in \mathcal{M}, ~x\neq y}
\frac{\left\|\sum_{n=m+1}^{\infty}\lambda_n (f_n
(x)-f_n(y))\tau_n\right\|}{d(x,y)}\\
&\leq bd\sup_{m+1\leq n<\infty }|\lambda_n| \to 0 \text{ as } m \to \infty.
\end{align*}
Hence $M_{\lambda,f, \tau}$ is the limit of a sequence
of Lipschitz finite rank operators $\{M_{\lambda_m,f, \tau}\}_{m=1}^\infty$ with respect to the Lipschitz norm. Thus $M_{\lambda,f, \tau}$ is Lipschitz approximable and from Theorem
\ref{LIPSCHITZAPPROMABLEISCOMPACT} it follows that $M_{\lambda,f, \tau}$ is
Lipschitz compact.
\end{proof}
We now study how the multiplier behaves when its parameters are changed.
The following result extends Theorem \ref{RAHIMIBALAZSMULTIPLIERCONTINUITY}.
\begin{theorem}\label{MULTIPLIERISWELL}
Let $\{f_n\}_{n}$ in $\operatorname{Lip}_0(\mathcal{M}, \mathbb{K})$ be a
Lipschitz p-Bessel sequence for $\mathcal{M}$ with bound $b$ and
$\{\tau_n\}_{n}$ in $\mathcal{X}$ be a Lipschitz q-Bessel sequence for
$\operatorname{Lip}_0(\mathcal{X}, \mathbb{K})$ with bound $d$ and
$\{\lambda_n\}_n \in \ell^\infty(\mathbb{N})$. Let $k \in \mathbb{N}$ and let
$\lambda^{(k)}=\{\lambda_1^{(k)},\lambda_2^{(k)}, \dots \}$,
$\lambda=\{\lambda_1,\lambda_2, \dots \}$,
$\tau^{(k)}=\{\tau_1^{(k)}, \tau_2^{(k)}, \dots\}$,
$\tau_n^{(k)} \in \mathcal{X}$, $\tau=\{\tau_1, \tau_2, \dots\}$. Assume that for
each $k$, $\lambda^{(k)}\in \ell^\infty(\mathbb{N})$ and
$\tau^{(k)}$ is a pointed Lipschitz q-Bessel sequence for
$\operatorname{Lip}_0(\mathcal{X}, \mathbb{K})$.
\begin{enumerate}[label=(\roman*)]
\item If $\lambda^{(k)} \to \lambda $ as $k \rightarrow \infty $ in p-norm, then
\begin{align*}
\|M_{\lambda^{(k)},f, \tau}-M_{\lambda,f, \tau}\|_{\operatorname{Lip}_0} \to 0 \text{ as } k \to \infty.
\end{align*}
\item If $\{\lambda_n\}_n \in \ell^p(\mathbb{N})$ and $\sum_{n=1}^{\infty}\|\tau_n^{(k)}-\tau_n\|^q \to 0 \text{ as } k \to \infty $, then
\begin{align*}
\|M_{\lambda, f, \tau^{(k)}}-M_{\lambda,f, \tau}\|_{\operatorname{Lip}_0} \to 0 \text{ as } k \to \infty.
\end{align*}
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[label=(\roman*)]
\item Using Theorem \ref{DEFINITIONEXISTENCE},
\begin{align*}
&\|M_{\lambda^{(k)},f, \tau}-M_{\lambda,f,
\tau}\|_{\operatorname{Lip}_0}\\&=\sup_{x, y \in \mathcal{M}, ~x\neq y}
\frac{\|(M_{\lambda^{(k)},f, \tau}-M_{\lambda,f, \tau})x-(M_{\lambda^{(k)},f,
\tau}-M_{\lambda,f, \tau})y\|}{d(x,y)}\\
&=\sup_{x, y \in \mathcal{M}, ~x\neq y}
\frac{\left\|\sum_{n=1}^{\infty}(\lambda_n^{(k)}-\lambda_n)f_n(x)\tau_n-\sum_{
n=1}^{\infty}(\lambda_n^{(k)}-\lambda_n)f_n(y)\tau_n\right\|}{d(x,y)}\\
&=\sup_{x, y \in \mathcal{M}, ~x\neq y}
\frac{\left\|\sum_{n=1}^{\infty}(\lambda_n^{(k)}
-\lambda_n)(f_n(x)-f_n(y))\tau_n\right\|}{d(x,y)}\\
&\leq bd \sup_{n\in \mathbb{N}}|\lambda_n^{(k)}-\lambda_n|=bd \|\{\lambda_n^{(k)}-\lambda_n\}_n\|_\infty \\
& \leq bd \|\{\lambda_n^{(k)}-\lambda_n\}_n\|_p \to 0 \text{ as } k \to \infty.
\end{align*}
\item Using H\"{o}lder's inequality,
\begin{align*}
& \|M_{\lambda,f, \tau^{(k)}}-M_{\lambda,f, \tau}\|_{\operatorname{Lip}_0}\\
&=\sup_{x, y \in \mathcal{M}, ~x\neq y} \frac{\|(M_{\lambda,f,
\tau^{(k)}}-M_{\lambda,f, \tau})x-(M_{\lambda,f, \tau^{(k)}}-M_{\lambda,f,
\tau})y\|}{d(x,y)}\\
&=\sup_{x, y \in \mathcal{M}, ~x\neq y}
\frac{\left\|\sum_{n=1}^{\infty}\lambda_nf_n(x)(\tau_n^{(k)}-\tau_n)-\sum_{n=1}^
{\infty}\lambda_nf_n(y)(\tau_n^{(k)}-\tau_n)\right\|}{d(x,y)}\\
&=\sup_{x, y \in \mathcal{M}, ~x\neq y}
\frac{\left\|\sum_{n=1}^{\infty}\lambda_n(f_n(x)-f_n(y))(\tau_n^{(k)}
-\tau_n)\right\|}{d(x,y)}\\
&=\sup_{x, y \in \mathcal{M}, ~x\neq y} \ \ \sup _{\phi \in
\mathcal{X}^*,\|\phi\|\leq 1}
\frac{\left|\sum_{n=1}^{\infty}\lambda_n(f_n(x)-f_n(y))\phi(\tau_n^{(k)}
-\tau_n)\right|}{d(x,y)}\\
&\leq \sup_{x, y \in \mathcal{M}, ~x\neq y} \ \ \sup _{\phi
\in \mathcal{X}^*,\|\phi\|\leq 1}
\frac{\left(\sum_{n=1}^{\infty}|\lambda_n(f_n(x)-f_n(y))|^p\right)^\frac{1}{p}
\left(\sum_{n=1}^{\infty}|\phi(\tau_n^{(k)}-\tau_n)|^q\right)^\frac{1}{q}}{d(x,
y)}\\
&\leq \sup_{x, y \in \mathcal{M}, ~x\neq y} \ \ \sup _{\phi
\in \mathcal{X}^*,\|\phi\|\leq 1}\\
&\quad
\left\{\frac{\left(\sum_{n=1}^{\infty}|\lambda_n|^p\right)^\frac{1}{p}\left(\sum_{n=1}^
{\infty}|f_n(x)-f_n(y)|^p\right)^\frac{1}{p}\left(\sum_{n=1}^{\infty}
|\phi(\tau_n^{(k)}-\tau_n)|^q\right)^\frac{1}{q}}{d(x,y)}\right\}\\
&\leq b \|\{\lambda_n\}_n\|_p\left(\sum_{n=1}^{\infty}\|\tau_n^{(k)}-\tau_n\|^q\right)^\frac{1}{q} \to 0 \text{ as } k \to \infty.
\end{align*}
\end{enumerate}
\end{proof}
{\onehalfspacing \chapter{p-APPROXIMATE SCHAUDER FRAMES FOR BANACH SPACES}\label{chap4} }
\section{p-APPROXIMATE SCHAUDER FRAMES}
Let $\mathcal{X}$ be a separable Banach space and $\mathcal{X}^*$ be its dual. Equation (\ref{GFE}) motivated Casazza, Dilworth, Odell, Schlumprecht, and Zsak to define the notion of a Schauder frame for $\mathcal{X}$ in 2008.
\begin{definition}(\cite{CASAZZA})\label{FRAMING}
Let $\{\tau_n\}_n$ be a sequence in $\mathcal{X}$ and $\{f_n\}_n$ be a sequence in $\mathcal{X}^*.$ The pair $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ is said to be a \textbf{Schauder frame} for $\mathcal{X}$ if
\begin{align}\label{SFEQUA}
x=\sum_{n=1}^\infty
f_n(x)\tau_n, \quad \forall x \in
\mathcal{X}.
\end{align}
\end{definition}
Definition \ref{FRAMING} was generalized to $\mathbb{R}^n$ by Thomas in her Master's thesis and later to Banach spaces by Freeman, Odell, Schlumprecht, and Zsak.
\begin{definition}(\cite{FREEMANODELL, THOMAS})\label{ASFDEF}
Let $\{\tau_n\}_n$ be a sequence in $\mathcal{X}$ and $\{f_n\}_n$ be a sequence in $\mathcal{X}^*.$ The pair $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ is said to be an \textbf{approximate Schauder frame} (ASF) for $\mathcal{X}$ if
\begin{align}\label{ASFEQUA}
\text {(\textbf{Frame operator})}\quad S_{f, \tau}:\mathcal{X}\ni x \mapsto S_{f, \tau}x\coloneqq \sum_{n=1}^\infty
f_n(x)\tau_n \in
\mathcal{X}
\end{align}
is a well-defined, bounded, invertible linear operator.
\end{definition}
Note that whenever $S_{f, \tau}=I_\mathcal{X}$, the identity operator on $\mathcal{X}$, Definition \ref{ASFDEF} reduces to Definition \ref{FRAMING}. Since $S_{f, \tau}$ is invertible, it follows that there are $a,b>0$ such that
\begin{align*}
a\|x\|\leq \left\|\sum_{n=1}^\infty
f_n(x)\tau_n \right\|\leq b\|x\|, \quad \forall x \in \mathcal{X}.
\end{align*}
We call $a$ a \textbf{lower ASF bound} and $b$ an \textbf{upper ASF bound}. The supremum (resp. infimum) of the set of all lower (resp. upper) ASF bounds is called the \textbf{optimal lower} (resp. \textbf{optimal upper}) ASF bound. From the theory of bounded linear operators between Banach spaces, one sees that the optimal lower frame bound is $ \|S_{f, \tau}^{-1}\|^{-1}$ and the optimal upper frame bound is $ \|S_{f,\tau}\|$. The advantage of an ASF over a Schauder frame is that it is easier to make the operator in (\ref{ASFEQUA}) invertible than to obtain Equation (\ref{SFEQUA}) exactly.
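For completeness, here is the standard derivation of the two-sided bound from the invertibility of $S_{f,\tau}$:

```latex
\begin{align*}
\|x\|=\|S_{f,\tau}^{-1}S_{f,\tau}x\|\leq \|S_{f,\tau}^{-1}\|\,\|S_{f,\tau}x\|
\implies \|S_{f,\tau}x\|\geq \frac{\|x\|}{\|S_{f,\tau}^{-1}\|}, \quad \forall x \in \mathcal{X},
\end{align*}
```

while $\|S_{f,\tau}x\|\leq \|S_{f,\tau}\|\,\|x\|$ is just boundedness; hence one may take $a=\|S_{f,\tau}^{-1}\|^{-1}$ and $b=\|S_{f,\tau}\|$.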
\begin{example}(\cite{FREEMANODELL})
Let $2<p<\infty$ and $\{\lambda_n\}_n$ be an unbounded sequence of scalars. For $a\in \mathbb{R}$, define
\begin{align*}
T_a: \mathcal{L}^p(\mathbb{R})\ni f \mapsto T_af \in \mathcal{L}^p(\mathbb{R});\quad T_af: \mathbb{R} \ni x \mapsto ( T_af)(x)\coloneqq f(x-a)\in \mathbb{C}.
\end{align*}
Then there exist $\phi \in \mathcal{L}^p(\mathbb{R})$ and a sequence $\{f_n \}_{n}$, $f_n \in ( \mathcal{L}^p(\mathbb{R}))^*$, $\forall n \in \mathbb{N}$ such that $ (\{f_n \}_{n}, \{T_{\lambda_n}\phi \}_{n}) $ is an ASF for $ \mathcal{L}^p(\mathbb{R})$.
\end{example}
\begin{definition}\label{PASFDEF}
An ASF $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ for $\mathcal{X}$ is said to be a \textbf{p-approximate Schauder frame} (p-ASF), $p \in [1, \infty)$ if both the maps
\begin{align}\label{ALIGNP}
&\text{(\textbf{Analysis operator})} \quad \theta_f: \mathcal{X}\ni x \mapsto \theta_f x\coloneqq \{f_n(x)\}_n \in \ell^p(\mathbb{N}) \text{ and } \\
&\text{(\textbf{Synthesis operator})} \quad\theta_\tau : \ell^p(\mathbb{N}) \ni \{a_n\}_n \mapsto \theta_\tau \{a_n\}_n\coloneqq \sum_{n=1}^\infty a_n\tau_n \in \mathcal{X}\label{AAA}
\end{align}
are well-defined bounded linear operators. A Schauder frame which is a p-ASF is called a \textbf{simple p-ASF} or a \textbf{Parseval p-ASF}.
\end{definition}
It is easily observed that a p-approximate Schauder frame is an approximate Schauder frame and that a Schauder frame is an approximate Schauder frame. We now give an example showing that the set of all p-approximate Schauder frames is strictly smaller than the set of all approximate Schauder frames. Let $\mathcal{X}=\mathbb{K}$. Define $\tau_n\coloneqq \frac{1}{n^2}$ and $f_n(x)\coloneqq x, \forall x \in \mathbb{K}$, $\forall n \in \mathbb{N}$. Then $\sum_{n=1}^{\infty}f_n(x)\tau_n=\frac{\pi^2}{6}x$, $\forall x \in \mathbb{K}$. Therefore $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ is an approximate Schauder frame for $\mathcal{X}$. Let $ x \in \mathbb{K}$ be nonzero. Then for every $p\in[1,\infty)$,
\begin{align*}
\sum_{n=1}^{m}|f_n(x)|^p=m|x|^p \to \infty \quad \text{as} \quad m \to \infty.
\end{align*}
Thus $\{f_n(x)\}_n\notin \ell^p(\mathbb{N})$ for any $p\in[1,\infty)$ and hence $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ is not a p-ASF for any $p\in[1,\infty)$. We next note that there is a bijection between the set of approximate Schauder frames and the set of all Schauder frames (Lemma 3.1 in (\cite{FREEMANODELL})).
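A small modification of the above pair does give a p-ASF (this computation is ours and is included only for contrast): keep $\tau_n=\frac{1}{n^2}$ but take $f_n(x)\coloneqq \frac{x}{n^2}$. Then

```latex
\begin{align*}
\sum_{n=1}^{\infty}f_n(x)\tau_n=x\sum_{n=1}^{\infty}\frac{1}{n^4}=\frac{\pi^4}{90}\,x
\quad \text{and} \quad \theta_fx=\left\{\frac{x}{n^2}\right\}_n\in \ell^p(\mathbb{N}), \quad \forall x \in \mathbb{K},
\end{align*}
```

so the frame operator is invertible, the analysis operator is bounded, and (using H\"{o}lder's inequality for the synthesis operator) $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ is a p-ASF for every $p\in[1,\infty)$.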
We observe that, in terms of inequalities, (\ref{ALIGNP}) and (\ref{AAA}) say that there exist $c,d>0$ such that
\begin{align}
&\left(\sum_{n=1}^\infty
|f_n(x)|^p\right)^\frac{1}{p}\leq c \|x\|, \quad \forall x \in \mathcal{X} \text{ and }\label{FIRSTINEQUALITYPASF} \\
&\left\|\sum_{n=1}^\infty a_n\tau_n\right\|\leq d \left(\sum_{n=1}^\infty
|a_n|^p\right)^\frac{1}{p}, \quad \forall \{a_n\}_n \in \ell^p(\mathbb{N})\label{SECONDINEQUALITYPASF}.
\end{align}
We now give various examples of p-ASFs.
\begin{example}
Let $p\in[1,\infty)$ and $U:\mathcal{X} \rightarrow\ell^p(\mathbb{N})$, $ V: \ell^p(\mathbb{N})\to \mathcal{X}$ be bounded linear operators such that $VU$ is bounded invertible. Let $\{e_n\}_n$ denote the standard Schauder basis for $\ell^p(\mathbb{N})$ and let $\{\zeta_n\}_n$ denote the coordinate functionals associated with $\{e_n\}_n$. Define
\begin{align*}
f_n\coloneqq \zeta_n U, \quad \tau_n\coloneqq Ve_n, \quad \forall n \in \mathbb{N}.
\end{align*}
Then $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ is a p-ASF for $\mathcal{X}$. In particular, if $U:\ell^p(\mathbb{N}) \rightarrow\ell^p(\mathbb{N})$ is bounded invertible, then $(\{\zeta_nU\}_{n}, \{U^{-1}e_n\}_{n}) $ is a p-ASF for $\ell^p(\mathbb{N})$.
\end{example}
\begin{example}
Let $p\in[1,\infty)$ and $\{\tau_n\}_{n=1}^m $ be a basis for a finite dimensional Banach space $\mathcal{X}$. Choose any basis $\{f_n\}_{n=1}^m $ for $\mathcal{X}^*$. We claim that $(\{f_n\}_{n=1}^m, \{\tau_n\}_{n=1}^m) $ is a p-ASF for $\mathcal{X}$. To prove this, since $\mathcal{X}$ is finite dimensional, it suffices to prove that the map $\mathcal{X}\ni x \mapsto \sum_{n=1}^{m}f_n(x)\tau_n\in \mathcal{X}$ is injective. Let $x \in \mathcal{X}$ be such that $\sum_{n=1}^{m}f_n(x)\tau_n=0$. Since $\{\tau_n\}_{n=1}^m $ is a basis for $\mathcal{X}$, we then have $f_1(x)=\cdots=f_m(x)=0$. Since $\{f_n\}_{n=1}^m $ is a basis for $\mathcal{X}^*$, it follows that $f(x)=0, \forall f \in \mathcal{X}^*$. The Hahn-Banach theorem now gives $x=0$. Hence the claim holds and consequently $(\{f_n\}_{n=1}^m, \{\tau_n\}_{n=1}^m) $ is a p-ASF for $\mathcal{X}$.
\end{example}
\begin{example}
Recall that a spanning set is a frame for a finite dimensional Hilbert space (\cite{HANKORNELSONLARSON}). We now generalize this to p-ASFs. Let $p\in[1,\infty)$, $\mathcal{X}$ be a finite dimensional Banach space and $\{\tau_n\}_{n=1}^m $ be a spanning set for $\mathcal{X}$. We claim that there exists a collection $\{f_n\}_{n=1}^m $ in $\mathcal{X}^*$ such that $ (\{f_n\}_{n=1}^m, \{\tau_n\}_{n=1}^m) $ is a p-ASF for $\mathcal{X}$. Since $\{\tau_n\}_{n=1}^m $ spans $\mathcal{X}$, the collection $\{\tau_n\}_{n=1}^m $ contains a basis. By rearranging, if necessary, we may assume that $\{\tau_n\}_{n=1}^r $ is a basis for $\mathcal{X}$. Let $\{f_n\}_{n=1}^r $ be the dual basis for $\{\tau_n\}_{n=1}^r $. Choose linear operators $U,V:\mathcal{X}\to \mathcal{X}$ such that $VU$ is injective (equivalently, invertible, since $\mathcal{X}$ is finite dimensional). If we now set $f_{r+1}=\cdots=f_m=0$, it then follows that $(\{f_nU\}_{n=1}^m, \{V\tau_n\}_{n=1}^m) $ is a p-ASF for $\mathcal{X}$.
\end{example}
\begin{example}
Let $\mathcal{X}$ be a Banach space which admits a Schauder basis $\{\omega_n\}_{n} $ and let $\{g_n\}_{n}$ be the coordinate functionals associated with $\{\omega_n\}_n$. Let $U, V:\mathcal{X} \rightarrow \mathcal{X}$ be bounded linear operators such that $VU$ is invertible. Define
\begin{align*}
f_n\coloneqq g_n U, \quad \tau_n\coloneqq V\omega_n, \quad \forall n \in \mathbb{N}.
\end{align*}
Then $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ is an approximate Schauder frame for $\mathcal{X}$. If $VU=I_\mathcal{X}$, then $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ is a Schauder frame for $\mathcal{X}$.
\end{example}
\begin{example}\label{EXAMPLE2}
Let $p\in[1,\infty)$ and $U:\mathcal{X} \rightarrow\ell^p(\mathbb{N})$, $ V: \ell^p(\mathbb{N})\to \mathcal{X}$ be bounded linear operators such that $VU$ is invertible. Let $\{e_n\}_n$ denote the canonical Schauder basis for $\ell^p(\mathbb{N})$ and let $\{\zeta_n\}_n$ denote the coordinate functionals associated with $\{e_n\}_n$. Define
\begin{align*}
f_n\coloneqq \zeta_n U, \quad \tau_n\coloneqq Ve_n, \quad \forall n \in \mathbb{N}.
\end{align*}
Then $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ is a p-ASF for $\mathcal{X}$.
\end{example}
We now have a Banach space analogue of Theorem \ref{MOSTIMPORTANT}.
\begin{theorem}\label{OURS}
Let $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ be a p-ASF for $\mathcal{X}$. Then
\begin{enumerate}[label=(\roman*)]
\item We have
\begin{align}\label{REPBANACH}
x=\sum_{n=1}^\infty (f_nS_{f, \tau}^{-1})(x) \tau_n=\sum_{n=1}^\infty
f_n(x) S_{f, \tau}^{-1}\tau_n, \quad \forall x \in
\mathcal{X}.
\end{align}
\item $ (\{f_nS_{f, \tau}^{-1} \}_{n}, \{S_{f, \tau}^{-1} \tau_n \}_{n}) $ is a p-ASF for $\mathcal{X}$.
\item The analysis operator
$
\theta_f: \mathcal{X} \ni x \mapsto \{f_n(x) \}_n \in \ell^p(\mathbb{N})
$
is injective.
\item
The synthesis operator
$
\theta_\tau: \ell^p(\mathbb{N}) \ni \{a_n \}_n \mapsto \sum_{n=1}^\infty a_n\tau_n \in \mathcal{X}
$
is surjective.
\item The frame operator
splits as $S_{f, \tau}=\theta_\tau\theta_f.$
\item $P_{f, \tau}\coloneqq\theta_fS_{f,\tau}^{-1}\theta_\tau:\ell^p(\mathbb{N})\to \ell^p(\mathbb{N})$ is a projection onto $\theta_f(\mathcal{X})$.
\end{enumerate}
\end{theorem}
\begin{proof}
(i) follows from the continuity and linearity of $S_{f, \tau}^{-1}$. Since $S_{f, \tau}$ is invertible, we get (ii). Invertibility of $S_{f, \tau}=\theta_\tau\theta_f$ also makes $\theta_f$ injective and $\theta_\tau$ surjective, which gives (iii) and (iv). (v) and (vi) are routine calculations.
\end{proof}
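For instance, the routine calculation behind (vi) can be spelled out using (v):

```latex
\begin{align*}
P_{f, \tau}^2&=\theta_fS_{f,\tau}^{-1}(\theta_\tau\theta_f)S_{f,\tau}^{-1}\theta_\tau
=\theta_fS_{f,\tau}^{-1}S_{f,\tau}S_{f,\tau}^{-1}\theta_\tau=P_{f, \tau},\\
P_{f, \tau}\theta_fx&=\theta_fS_{f,\tau}^{-1}(\theta_\tau\theta_f)x=\theta_fx, \quad \forall x \in \mathcal{X},
\end{align*}
```

so $P_{f, \tau}$ is idempotent and fixes $\theta_f(\mathcal{X})$; since its range is clearly contained in $\theta_f(\mathcal{X})$, it is a projection onto $\theta_f(\mathcal{X})$.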
Now we can derive a generalization of Theorem \ref{HOLUBTHEOREM} for Banach spaces.
\begin{theorem}\label{THAFSCHAR}
Let $\{e_n\}_n$ denote the standard Schauder basis for $\ell^p(\mathbb{N})$ and let $\{\zeta_n\}_n$ denote the coordinate functionals associated with $\{e_n\}_n$. A pair $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ is a p-ASF for $\mathcal{X}$
if and only if
\begin{align*}
f_n=\zeta_n U, \quad \tau_n=Ve_n, \quad \forall n \in \mathbb{N},
\end{align*} where $U:\mathcal{X} \rightarrow\ell^p(\mathbb{N})$, $ V: \ell^p(\mathbb{N})\to \mathcal{X}$ are bounded linear operators such that $VU$ is bounded invertible.
\end{theorem}
\begin{proof}
$(\Leftarrow)$ Clearly $\theta_f$ and $\theta_\tau$ are bounded linear operators. Now let $x\in \mathcal{X}$. Then
\begin{align}\label{PASFFIRSTTHEOREMEQUATION}
S_{f, \tau}x= \sum_{n=1}^\infty
f_n(x)\tau_n=\sum_{n=1}^\infty \zeta_n(Ux)Ve_n=V\left(\sum_{n=1}^\infty \zeta_n(Ux)e_n\right)=VUx.
\end{align}
Hence $S_{f, \tau}$ is bounded invertible. \\
$(\Rightarrow)$ Define $U\coloneqq \theta_f$, $V\coloneqq \theta_\tau$. Then $\zeta_nUx=\zeta_n\theta_fx=\zeta_n(\{f_k(x)\}_k)=f_n(x)$, $\forall x \in \mathcal{X}$, $Ve_n=\theta_\tau e_n=\tau_n$, $\forall n \in \mathbb{N}$ and $VU=\theta_\tau \theta_f=S_{f, \tau}$ which is bounded invertible.
\end{proof}
Note that Theorem \ref{THAFSCHAR} generalizes Theorem \ref{HOLUBTHEOREM}. In fact, in the case of Hilbert spaces, Theorem \ref{THAFSCHAR} reads as ``A sequence $\{\tau_n\}_n$ in $\mathcal{H}$ is a
frame for $\mathcal{H}$ if and only if there exists a bounded linear operator $T:\ell^2(\mathbb{N}) \to \mathcal{H}$ such that $Te_n=\tau_n$, for all $n \in \mathbb{N}$, and $TT^*$ is invertible''. Now we know that $TT^*$ is invertible if and only if $T$ is surjective.\\
Since every separable Hilbert space admits an orthonormal basis, the existence of an orthonormal basis in Theorem \ref{OLECHA} is automatic. On the other hand, Enflo showed that there are separable Banach spaces which do not have a Schauder basis (\cite{JAMES, ENFLO}). Thus, to obtain an analogue of Theorem \ref{OLECHA} for Banach spaces, we need to impose a condition on $\mathcal{X}$.
\begin{theorem}\label{SCHFRS}
Assume that $\mathcal{X}$ admits a Schauder basis $\{\omega_n\}_n$. Let $\{g_n\}_n$ denote the coordinate functionals associated with $\{\omega_n\}_n$. Assume that
\begin{align}\label{ASSUMPTIONNEEDED}
\{g_n(x)\}_n \in \ell^p(\mathbb{N}), \quad \forall x \in \mathcal{X}.
\end{align}
Then a pair $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ is a p-ASF for $\mathcal{X}$
if and only if
\begin{align*}
f_n=g_n U, \quad \tau_n=V\omega_n, \quad \forall n \in \mathbb{N},
\end{align*} where $U, V:\mathcal{X} \rightarrow \mathcal{X}$ are bounded linear operators such that $VU$ is bounded invertible.
\end{theorem}
\begin{proof}
$(\Leftarrow)$ This is similar to the calculation done in (\ref{PASFFIRSTTHEOREMEQUATION}).\\
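Spelled out, with $f_n=g_nU$ and $\tau_n=V\omega_n$, the calculation reads:

```latex
\begin{align*}
S_{f, \tau}x=\sum_{n=1}^\infty f_n(x)\tau_n=\sum_{n=1}^\infty g_n(Ux)V\omega_n
=V\left(\sum_{n=1}^\infty g_n(Ux)\omega_n\right)=V(Ux)=VUx, \quad \forall x \in \mathcal{X},
\end{align*}
```

so $S_{f, \tau}=VU$ is bounded invertible.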
$(\Rightarrow)$ Let $T$ be the map defined by
\begin{align*}
T:\mathcal{X}\ni \sum_{n=1}^\infty a_n\omega_n\mapsto \sum_{n=1}^\infty a_ne_n\in \ell^p(\mathbb{N}).
\end{align*}
Assumption (\ref{ASSUMPTIONNEEDED}) then says that $T$ is a bounded invertible operator with inverse $ T^{-1} :\ell^p(\mathbb{N}) \ni \sum_{n=1}^\infty b_ne_n \mapsto \sum_{n=1}^\infty b_n\omega_n \in \mathcal{X}$. Define $ U\coloneqq T^{-1}\theta_f$ and $V\coloneqq\theta_\tau T$. Then $ U,V$ are bounded, $ VU=(\theta_\tau T)(T^{-1}\theta_f)=\theta_\tau\theta_f=S_{f,\tau}$ is invertible, and for $ x \in \mathcal{X}$ we have
\begin{align*}
(g_nU)(x)&= g_n(T^{-1}\theta_fx)=g_n(T^{-1}(\{f_k(x)\}_{k}))
= g_n\left(\sum_{k=1}^\infty f_k(x)T^{-1}e_k\right)
\\&=g_n\left(\sum_{k=1}^\infty f_k(x)\omega_k\right)
=\sum_{k=1}^\infty f_k(x)g_n(\omega_k)=f_n(x), \quad \forall x \in \mathcal{X}
\end{align*}
and $ V\omega_n=\theta_\tau T\omega_n=\theta_\tau e_n=\tau_n, \forall n \in \mathbb{N}$.
\end{proof}
\section{DUAL FRAMES FOR p-APPROXIMATE SCHAUDER FRAMES}
Equation (\ref{REPBANACH}) motivates us to define the notion of dual frame as follows.
\begin{definition}\label{SIMILARITYMINE}
Let $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ be a p-ASF for $\mathcal{X}$. A p-ASF $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ for $\mathcal{X}$ is a \textbf{dual p-ASF} for $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ if
\begin{align*}
x=\sum_{n=1}^\infty g_n(x) \tau_n=\sum_{n=1}^\infty
f_n(x) \omega_n, \quad \forall x \in
\mathcal{X}.
\end{align*}
\end{definition}
Note that dual frames always exist. In fact, Equation (\ref{REPBANACH}) shows that the frame $ (\{f_nS_{f, \tau}^{-1} \}_{n}, \{S_{f, \tau}^{-1} \tau_n \}_{n}) $ is a dual for $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $. We call the frame $ (\{f_nS_{f, \tau}^{-1} \}_{n}, \{S_{f, \tau}^{-1} \tau_n \}_{n}) $ the \textbf{canonical dual} for $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $. With this notion, the following theorem follows easily.
\begin{theorem}
Let $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ be a p-ASF for $ \mathcal{X}$ with frame bounds $ a$ and $ b.$ Then
\begin{enumerate}[label=(\roman*)]
\item The canonical dual p-ASF of the canonical dual p-ASF of $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ is $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ itself.
\item$ \frac{1}{b}, \frac{1}{a}$ are frame bounds for the canonical dual for $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $.
\item If $ a, b $ are optimal frame bounds for $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $, then $ \frac{1}{b}, \frac{1}{a}$ are optimal frame bounds for its canonical dual.
\end{enumerate}
\end{theorem}
One can naturally ask: when does a p-ASF have a unique dual? A sufficient condition is given in the following result.
\begin{proposition}
Let $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ be a p-ASF for $ \mathcal{X}$. If $\{\tau_n\}_{n}$ is a Schauder basis for $\mathcal{X}$ and $ f_k(\tau_n)=\delta_{k,n},\forall k,n \in \mathbb{N}$, then $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ has unique dual.
\end{proposition}
\begin{proof}
Let $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ and $ (\{u_n \}_{n}, \{\rho_n \}_{n}) $ be two dual p-ASFs for $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $. Then
\begin{align*}
\sum_{n=1}^\infty(g_n(x)-u_n(x))\tau_n=0= \sum_{n=1}^\infty f_n(x)(\omega_n-\rho_n), \quad \forall x \in \mathcal{X}.
\end{align*}
Since $\{\tau_n\}_{n}$ is a Schauder basis, the first equality gives $ g_n=u_n, \forall n \in \mathbb{N}$, and evaluating the second equality at a fixed $ \tau_k$ gives $ \omega_k=\rho_k$. Since $k$ was arbitrary, the proposition follows.
\end{proof}
We now characterize dual frames by using analysis and synthesis operators.
\begin{proposition}\label{ORTHOGONALPRO}
For two p-ASFs $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ and $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ for $\mathcal{X}$, the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ is a dual for $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $.
\item $\theta_\tau\theta_g =\theta_\omega\theta_f =I_\mathcal{X}$.
\end{enumerate}
\end{proposition}
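The equivalence is a direct translation of definitions: $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ being a dual for $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ means
\begin{align*}
\sum_{n=1}^\infty g_n(x)\tau_n=x=\sum_{n=1}^\infty f_n(x)\omega_n, \quad \forall x \in \mathcal{X},
\end{align*}
and the two sums are precisely $\theta_\tau\theta_gx$ and $\theta_\omega\theta_fx$.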
In the spirit of Lemmas \ref{LILEMMA1}, \ref{LILEMMA2} and Theorem \ref{LITHM}, we now characterize dual frames using the standard Schauder basis for $\ell^p(\mathbb{N})$.
\begin{lemma}\label{ASFLEMMA1}
Let $\{e_n\}_n$ denote the standard Schauder basis for $\ell^p(\mathbb{N})$ and let $\{\zeta_n\}_n$ denote the coordinate functionals associated with $\{e_n\}_n$. Let $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ be a p-ASF for $\mathcal{X}$. Then a p-ASF $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ for $\mathcal{X}$ is a dual for $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ if and only if
\begin{align*}
g_n=\zeta_n U, \quad \omega_n=Ve_n, \quad \forall n \in \mathbb{N},
\end{align*}
where $ U:\mathcal{X} \rightarrow\ell^p(\mathbb{N})$ is a bounded right-inverse of $ \theta_\tau$, and $V: \ell^p(\mathbb{N}) \rightarrow \mathcal{X}$ is a bounded left-inverse of $ \theta_f$ such that $ VU$ is bounded invertible.
\end{lemma}
\begin{proof}
$(\Leftarrow)$ From the `if' part of the proof of Theorem \ref{THAFSCHAR}, we get that $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ is a p-ASF for $\mathcal{X}$. It remains to verify duality: $\theta_\tau\theta_g=\theta_\tau U=I_\mathcal{X} $ and $ \theta_\omega\theta_f=V\theta_f =I_\mathcal{X}$.\\
$(\Rightarrow)$ Let $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ be a dual p-ASF for $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $. Then $\theta_\tau\theta_g =I_\mathcal{X} $, $ \theta_\omega\theta_f =I_\mathcal{X}$. Define $ U\coloneqq\theta_g, V\coloneqq\theta_\omega.$ Then $ U:\mathcal{X} \rightarrow\ell^p(\mathbb{N})$ is a bounded right-inverse of $ \theta_\tau$, and $V: \ell^p(\mathbb{N}) \rightarrow \mathcal{X}$ is a bounded left-inverse of $ \theta_f$ such that the operator $ VU=\theta_\omega\theta_g=S_{g,\omega}$ is invertible. Further,
\begin{align*}
(\zeta_nU)x=\zeta_n\left(\sum_{k=1}^\infty g_k(x)e_k\right)=\sum_{k=1}^\infty g_k(x)\zeta_n(e_k)=g_n(x), \quad \forall x \in \mathcal{X}
\end{align*}
and $Ve_n=\theta_\omega e_n=\omega_n, \forall n \in \mathbb{N} $.
\end{proof}
\begin{lemma}\label{ASFLEMMA2}
Let $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ be a p-ASF for $\mathcal{X}$. Then
\begin{enumerate}[label=(\roman*)]
\item $R: \mathcal{X} \rightarrow \ell^p(\mathbb{N})$ is a bounded right-inverse of $ \theta_\tau$ if and only if
\begin{align*}
R=\theta_fS_{f,\tau}^{-1}+(I_{\ell^p(\mathbb{N})}-\theta_fS_{f,\tau}^{-1}\theta_\tau)U
\end{align*} where $U:\mathcal{X} \to \ell^p(\mathbb{N})$ is a bounded linear operator.
\item $ L:\ell^p(\mathbb{N})\rightarrow \mathcal{X}$ is a bounded left-inverse of $ \theta_f$ if and only if
\begin{align*}
L=S_{f,\tau}^{-1}\theta_\tau+V(I_{\ell^p(\mathbb{N})}-\theta_fS_{f,\tau}^{-1}\theta_\tau),
\end{align*}
where $V:\ell^p(\mathbb{N}) \to \mathcal{X}$ is a bounded linear operator.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}[label=(\roman*)]
\item $(\Leftarrow)$ $\theta_\tau(\theta_fS_{f,\tau}^{-1}+(I_{\ell^p(\mathbb{N})}-\theta_fS_{f,\tau}^{-1}\theta_\tau)U)=I_\mathcal{X}+\theta_\tau U-I_\mathcal{X}\theta_\tau U=I_\mathcal{X}$. Therefore $\theta_fS_{f,\tau}^{-1}+(I_{\ell^p(\mathbb{N})}-\theta_fS_{f,\tau}^{-1}\theta_\tau)U$ is a bounded right-inverse of $ \theta_\tau$.
$(\Rightarrow)$ Define $U\coloneqq R $. Then $\theta_fS_{f,\tau}^{-1}+(I_{\ell^p(\mathbb{N})}-\theta_fS_{f,\tau}^{-1}\theta_\tau)U=\theta_fS_{f,\tau}^{-1}+(I_{\ell^p(\mathbb{N})}-\theta_fS_{f,\tau}^{-1}\theta_\tau)R=\theta_fS_{f,\tau}^{-1}+R-\theta_fS_{f,\tau}^{-1}=R$.
\item
$(\Leftarrow)$ $(S_{f,\tau}^{-1}\theta_\tau+V(I_{\ell^p(\mathbb{N})}-\theta_fS_{f,\tau}^{-1}\theta_\tau))\theta_f=I_\mathcal{X}+V\theta_f-V\theta_fI_\mathcal{X}=I_\mathcal{X}$. Therefore $S_{f,\tau}^{-1}\theta_\tau+V(I_{\ell^p(\mathbb{N})}-\theta_fS_{f,\tau}^{-1}\theta_\tau)$ is a bounded left-inverse of $\theta_f$.
$(\Rightarrow)$ Define $V\coloneqq L$. Then $S_{f,\tau}^{-1}\theta_\tau+V(I_{\ell^p(\mathbb{N})}-\theta_fS_{f,\tau}^{-1}\theta_\tau) =S_{f,\tau}^{-1}\theta_\tau+L(I_{\ell^p(\mathbb{N})}-\theta_fS_{f,\tau}^{-1}\theta_\tau)=S_{f,\tau}^{-1}\theta_\tau+L-S_{f,\tau}^{-1}\theta_\tau= L$.
\end{enumerate}
\end{proof}
\begin{theorem}\label{ALLDUAL}
Let $\{e_n\}_n$ denote the standard Schauder basis for $\ell^p(\mathbb{N})$ and let $\{\zeta_n\}_n$ denote the coordinate functionals associated with $\{e_n\}_n$. Let $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ be a p-ASF for $\mathcal{X}$. Then a p-ASF $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ for $\mathcal{X}$ is a dual for $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ if and only if
\begin{align*}
&g_n=f_nS_{f,\tau}^{-1}+\zeta_nU-f_nS_{f,\tau}^{-1}\theta_\tau U,\\
&\omega_n=S_{f,\tau}^{-1}\tau_n+Ve_n-V\theta_fS_{f,\tau}^{-1}\tau_n, \quad \forall n \in \mathbb{N}
\end{align*}
such that the operator
\begin{align*}
S_{f,\tau}^{-1}+VU-V\theta_fS_{f,\tau}^{-1}\theta_\tau U
\end{align*}
is bounded invertible, where $U:\mathcal{X} \to \ell^p(\mathbb{N})$ and $ V:\ell^p(\mathbb{N})\to \mathcal{X}$ are bounded linear operators.
\end{theorem}
\begin{proof}
Lemmas \ref{ASFLEMMA1} and \ref{ASFLEMMA2} together characterize the dual frames as
\begin{align*}
&g_n=\zeta_n\theta_fS_{f,\tau}^{-1}+\zeta_nU-\zeta_n\theta_fS_{f,\tau}^{-1}\theta_\tau U=f_nS_{f,\tau}^{-1}+\zeta_nU-f_nS_{f,\tau}^{-1}\theta_\tau U,\\
&\omega_n=S_{f,\tau}^{-1}\theta_\tau e_n+Ve_n-V\theta_fS_{f,\tau}^{-1}\theta_\tau e_n=S_{f,\tau}^{-1}\tau_n+Ve_n-V\theta_fS_{f,\tau}^{-1}\tau_n, \quad \forall n \in \mathbb{N}
\end{align*}
such that the operator
$$(S_{f,\tau}^{-1}\theta_\tau+V(I_{\ell^p(\mathbb{N})}-\theta_fS_{f,\tau}^{-1}\theta_\tau))(\theta_fS_{f,\tau}^{-1}+(I_{\ell^p(\mathbb{N})}-\theta_fS_{f,\tau}^{-1}\theta_\tau)U) $$
is bounded invertible, where $U:\mathcal{X} \to \ell^p(\mathbb{N})$ and $ V:\ell^p(\mathbb{N})\to \mathcal{X}$ are bounded linear operators. By a direct expansion and simplification we get
\begin{align*}
&(S_{f,\tau}^{-1}\theta_\tau+V(I_{\ell^p(\mathbb{N})}-\theta_fS_{f,\tau}^{-1}\theta_\tau))(\theta_fS_{f,\tau}^{-1}+(I_{\ell^p(\mathbb{N})}-\theta_fS_{f,\tau}^{-1}\theta_\tau)U)
\\
&\quad=S_{f,\tau}^{-1}+VU-V\theta_fS_{f,\tau}^{-1}\theta_\tau U.
\end{align*}
\end{proof}
We know that a bounded linear operator from $\ell^2(\mathbb{N}) $ to $\mathcal{H} $ is given by a Bessel sequence (Theorem \ref{OLEBESSELCHARACTERIZATION12}). Thus, for Hilbert spaces, Theorem \ref{ALLDUAL} becomes Theorem \ref{LITHM}.
\section{SIMILARITY FOR p-APPROXIMATE SCHAUDER FRAMES}
We extend Definition \ref{SIMILARDEFHILBERT} to Banach spaces as follows.
\begin{definition}
Two p-ASFs $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ and $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ for $\mathcal{X}$ are said to be \textbf{similar} or \textbf{equivalent} if there exist bounded invertible operators $T_{f,g}, T_{\tau,\omega} :\mathcal{X} \to \mathcal{X}$ such that
\begin{align*}
g_n=f_nT_{f,g},\quad \omega_n= T_{\tau,\omega}\tau_n, \quad \forall n \in \mathbb{N}.
\end{align*}
\end{definition}
Since the operators giving similarity are bounded invertible, the notion of similarity is symmetric. Further, a routine calculation shows that it is an equivalence relation (hence the name equivalent) on the set
\begin{align*}
\{(\{f_n\}_{n}, \{\tau_n\}_{n}): (\{f_n\}_{n}, \{\tau_n\}_{n}) \text{ is a p-ASF for } \mathcal{X}\}.
\end{align*}
We now characterize similarity using just operators. In the sequel, given a p-ASF $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $, we set $P_{f, \tau}\coloneqq \theta_fS_{f,\tau}^{-1}\theta_\tau$.
\begin{theorem}\label{SEQUENTIALSIMILARITY}
For two p-ASFs $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ and $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ for $\mathcal{X}$, the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $g_n=f_nT_{f, g} , \omega_n=T_{\tau,\omega}\tau_n, \forall n \in \mathbb{N}$, for some bounded invertible operators $T_{f,g}, T_{\tau,\omega}:\mathcal{X} \to \mathcal{X}.$
\item $\theta_g=\theta_f T_{f,g}, \theta_\omega=T_{\tau,\omega}\theta_\tau$, for some bounded invertible operators $T_{f,g}, T_{\tau,\omega}:\mathcal{X} \to \mathcal{X}.$
\item $P_{g,\omega}=P_{f, \tau}.$
\end{enumerate}
If one of the above conditions is satisfied, then the invertible operators in $\operatorname{(i)}$ and $\operatorname{(ii)}$ are unique and are given by $T_{f,g}= S_{f,\tau}^{-1}\theta_\tau\theta_g, T_{\tau, \omega}=\theta_\omega\theta_fS_{f,\tau}^{-1}.$ If $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ is a simple p-ASF, then $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ is a simple p-ASF if and only if $T_{\tau, \omega}T_{f,g} =I_\mathcal{X}$, if and only if $ T_{f,g}T_{\tau, \omega} =I_\mathcal{X}$.
\end{theorem}
\begin{proof}
(i) $\Rightarrow $ (ii) $ \theta_gx=\{g_n(x)\}_{n}=\{f_n(T_{f,g}x)\}_{n}=\theta_f(T_{f,g}x), \forall x \in \mathcal{X}$, $ \theta_\omega(\{a_n\}_{n})=\sum_{n=1}^\infty a_n\omega_n=\sum_{n=1}^\infty a_nT_{\tau,\omega}\tau_n=T_{\tau,\omega}( \theta_\tau(\{a_n\}_{n})) , \forall \{a_n\}_{n} \in \ell^p(\mathbb{N})$.\\
(ii) $\Rightarrow $ (iii) $ S_{g,\omega}= \theta_\omega\theta_g=T_{\tau,\omega} \theta_\tau\theta_f T_{f,g} =T_{\tau,\omega} S_{f, \tau}T_{f,g}$ and
\begin{align*}
P_{g,\omega}=\theta_g S_{g,\omega}^{-1} \theta_\omega=(\theta_f T_{f,g})(T_{\tau,\omega} S_{f, \tau}T_{f,g})^{-1}(T_{\tau,\omega} \theta_\tau)= P_{f, \tau}.
\end{align*}
(ii) $\Rightarrow $ (i) $ \sum_{n=1}^\infty g_n(x)e_n=\theta_g(x)=\theta_f(T_{f,g}x)=\sum_{n=1}^\infty f_n(T_{f,g}x)e_n, \forall x \in \mathcal{X}.$ This clearly gives (i).\\
(iii) $\Rightarrow $ (ii) $\theta_g=P_{g,\omega} \theta_g= P_{f,\tau}\theta_g=\theta_f(S_{f,\tau}^{-1}\theta_{\tau}\theta_g)$, and $$\theta_\omega=\theta_\omega P_{g,\omega}=\theta_\omega P_{f,\tau}=(\theta_\omega\theta_fS_{f,\tau}^{-1})\theta_\tau .$$ We show that $S_{f,\tau}^{-1}\theta_{\tau}\theta_g$ and $\theta_\omega\theta_fS_{f,\tau}^{-1} $ are invertible. Indeed,
\begin{align*}
&(S_{f,\tau}^{-1}\theta_{\tau}\theta_g)(S_{g,\omega}^{-1}\theta_{\omega}\theta_f)=S_{f,\tau}^{-1}\theta_{\tau}P_{g,\omega}\theta_f=S_{f,\tau}^{-1}\theta_{\tau} P_{f,\tau}\theta_f=I_\mathcal{X},\\
&(S_{g,\omega}^{-1}\theta_{\omega}\theta_f)(S_{f,\tau}^{-1}\theta_{\tau}\theta_g)=S_{g,\omega}^{-1}\theta_{\omega} P_{f,\tau}\theta_g=S_{g,\omega}^{-1}\theta_{\omega}P_{g,\omega}\theta_g=I_\mathcal{X}
\end{align*}
and
\begin{align*}
&(\theta_\omega\theta_fS_{f,\tau}^{-1})(\theta_\tau\theta_gS_{g,\omega}^{-1})=\theta_\omega P_{f,\tau}\theta_gS_{g,\omega}^{-1}=\theta_\omega P_{g,\omega}\theta_gS_{g,\omega}^{-1}=I_\mathcal{X},\\
&(\theta_\tau\theta_gS_{g,\omega}^{-1})(\theta_\omega\theta_fS_{f,\tau}^{-1})=\theta_\tau P_{g,\omega}\theta_fS_{f,\tau}^{-1}=\theta_\tau P_{f,\tau}\theta_fS_{f,\tau}^{-1}=I_\mathcal{X}.
\end{align*}
For the uniqueness statement, let $T_{f,g}, T_{\tau,\omega}:\mathcal{X} \to \mathcal{X}$ be bounded invertible with $g_n=f_nT_{f, g}, \omega_n=T_{\tau,\omega}\tau_n, \forall n \in \mathbb{N}$. Then $\theta_g=\theta_fT_{f, g} $ gives $\theta_\tau\theta_g=\theta_\tau\theta_fT_{f, g}=S_{f,\tau}T_{f, g} $, which implies $ T_{f, g} =S_{f,\tau}^{-1}\theta_\tau\theta_g$; similarly, $\theta_\omega=T_{\tau,\omega}\theta_\tau $ gives $\theta_\omega\theta_f=T_{\tau,\omega}\theta_\tau\theta_f=T_{\tau,\omega}S_{f,\tau} $, hence $T_{\tau,\omega}=\theta_\omega\theta_fS_{f,\tau}^{-1} $.
\end{proof}
It is easy to see that for Hilbert spaces, Theorem \ref{SEQUENTIALSIMILARITY} reduces to Theorem \ref{BALANCHARSIM}. \\
Definition \ref{SIMILARITYMINE} introduced the notion of dual frames. A twin notion is that of orthogonality.
\begin{definition}\label{ORTHOGONALDEF}
Let $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ be a p-ASF for $\mathcal{X}$. A p-ASF $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ for $\mathcal{X}$ is \textbf{orthogonal} for $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ if
\begin{align*}
0=\sum_{n=1}^\infty g_n(x) \tau_n=\sum_{n=1}^\infty
f_n(x) \omega_n, \quad \forall x \in
\mathcal{X}.
\end{align*}
\end{definition}
Unlike duality, the notion of orthogonality is symmetric but not reflexive. Further, dual p-ASFs cannot be orthogonal to each other, and orthogonal p-ASFs cannot be dual to each other. Moreover, if $ (\{g_n\}_{n}, \{\omega_n\}_n)$ is orthogonal for $ (\{f_n\}_{n}, \{\tau_n\}_n)$, then neither $ (\{f_n\}_{n}, \{\omega_n\}_n)$ nor $ (\{g_n\}_{n}, \{\tau_n\}_n)$ is a p-ASF. Similar to Proposition \ref{ORTHOGONALPRO} we have the following proposition.
\begin{proposition}
For two p-ASFs $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ and $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ for $\mathcal{X}$, the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ is orthogonal for $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $.
\item $\theta_\tau\theta_g =\theta_\omega\theta_f =0$.
\end{enumerate}
\end{proposition}
Orthogonal frames are useful because of the following interpolation result: such frames can be stitched along certain curves (in particular, along the unit circle centered at the origin) to obtain new frames.
\begin{theorem}\label{INTERPOLATION}
Let $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ and $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ be two Parseval p-ASFs for $\mathcal{X}$ which are orthogonal. If $A,B,C,D :\mathcal{X}\to \mathcal{X}$ are bounded linear operators and $ CA+DB=I_\mathcal{X}$, then
\begin{align*}
(\{f_nA+g_nB\}_{n}, \{C\tau_n+D\omega_n\}_{n})
\end{align*}
is a simple p-ASF for $\mathcal{X}$. In particular, if scalars $ a,b,c,d$ satisfy $ca+db =1$, then
$ (\{af_n+bg_n\}_{n}, \{c\tau_n+d\omega_n\}_{n}) $ is a simple p-ASF for $\mathcal{X}$.
\end{theorem}
\begin{proof}
By a calculation we find
\begin{align*}
\theta_{fA+gB} x = \{(f_nA+g_nB)(x) \}_{n}=\{f_n(Ax) \}_{n}+\{g_n(Bx) \}_{n}=\theta_f(Ax)+\theta_g(Bx), \quad \forall x \in \mathcal{X}
\end{align*}
and
\begin{align*}
\theta_{C\tau+D\omega}(\{a_n \}_{n})=\sum_{n=1}^\infty a_n(C\tau_n+D\omega_n)=C\theta_\tau(\{a_n \}_{n})+D\theta_\omega(\{a_n \}_{n}), \quad \forall \{a_n\}_n \in \ell^p(\mathbb{N}).
\end{align*}
So
\begin{align*}
S_{fA+gB,C\tau+D\omega} &=\theta_{C\tau+D\omega} \theta_{fA+gB}= ( C\theta_\tau+ D\theta_\omega)(\theta_fA+\theta_gB)\\
&=C\theta_\tau\theta_fA+C\theta_\tau\theta_gB+D\theta_\omega\theta_fA+D\theta_\omega\theta_gB\\
&=CS_{f,\tau}A+0+0+DS_{g,\omega}B
=CI_\mathcal{X}A+DI_\mathcal{X}B=I_\mathcal{X}.
\end{align*}
\end{proof}
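To illustrate the scalar case, the choice $a=b=c=d=\frac{1}{\sqrt{2}}$ satisfies $ca+db=1$, so
\begin{align*}
\left(\left\{\tfrac{1}{\sqrt{2}}(f_n+g_n)\right\}_{n}, \left\{\tfrac{1}{\sqrt{2}}(\tau_n+\omega_n)\right\}_{n}\right)
\end{align*}
is a simple p-ASF for $\mathcal{X}$ whenever $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ and $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ are orthogonal Parseval p-ASFs.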
Using Theorem \ref{SEQUENTIALSIMILARITY} we finally relate the three notions of duality, similarity and orthogonality.
\begin{proposition}\label{LASTONE}
For every p-ASF $(\{f_n\}_{n}, \{\tau_n\}_{n})$, the canonical dual for $(\{f_n\}_{n}, \{\tau_n\}_{n})$ is the only dual p-ASF that is similar to $(\{f_n\}_{n}, \{\tau_n\}_{n})$.
\end{proposition}
\begin{proof}
Let us suppose that two p-ASFs $(\{f_n\}_{n}, \{\tau_n\}_{n})$ and $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ are similar and dual to each other. Then there exist bounded invertible operators $T_{f,g}, T_{\tau,\omega} :\mathcal{X}\to \mathcal{X}$ such that $ g_n=f_nT_{f,g},\omega_n=T_{\tau,\omega}\tau_n ,\forall n \in \mathbb{N}$. Theorem \ref{SEQUENTIALSIMILARITY} then gives
\begin{align*}
T_{f,g}=S_{f,\tau}^{-1}\theta_\tau\theta_g=S_{f,\tau}^{-1}I_\mathcal{X}=S_{f,\tau}^{-1}\text{ and }T_{\tau, \omega}=\theta_\omega\theta_fS_{f,\tau}^{-1}=I_\mathcal{X}S_{f,\tau}^{-1}=S_{f,\tau}^{-1}.
\end{align*}
Hence $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ is the canonical dual for $(\{f_n\}_{n}, \{\tau_n\}_{n})$.
\end{proof}
\begin{proposition}\label{LASTTWO}
Two similar p-ASFs cannot be orthogonal.
\end{proposition}
\begin{proof}
Let $(\{f_n\}_{n}, \{\tau_n\}_{n})$ and $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ be two p-ASFs which are similar. Then there exist bounded invertible operators $T_{f,g}, T_{\tau,\omega} :\mathcal{X}\to \mathcal{X}$ such that $ g_n=f_nT_{f,g},\omega_n=T_{\tau,\omega}\tau_n ,\forall n \in \mathbb{N}$. Theorem \ref{SEQUENTIALSIMILARITY} then says $\theta_g=\theta_f T_{f,g}, \theta_\omega=T_{\tau,\omega}\theta_\tau $. Therefore
\begin{align*}
\theta_\tau \theta_g=\theta_\tau\theta_f T_{f,g}=S_{f,\tau}T_{f,g}\neq0.
\end{align*}
\end{proof}
\begin{remark}
For every p-ASF $(\{f_n\}_{n}, \{\tau_n\}_{n}),$ both p-ASFs
\begin{align*}
(\{f_n{S}_{f, \tau}^{-1}\}_{n}, \{\tau_n\}_{n}) \text{ and } (\{f_n\}_{n}, \{{S}_{f, \tau}^{-1}\tau_n\}_{n})
\end{align*}
are simple p-ASFs and are similar to $(\{f_n\}_{n}, \{\tau_n\}_{n})$. Therefore each p-ASF is similar to simple p-ASFs.
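Indeed, a quick verification for the first pair (the second is analogous): its frame operator is
\begin{align*}
\theta_\tau\theta_{fS_{f,\tau}^{-1}}=\theta_\tau\theta_fS_{f,\tau}^{-1}=S_{f,\tau}S_{f,\tau}^{-1}=I_\mathcal{X},
\end{align*}
and the similarity is witnessed by $T_{f,g}=S_{f,\tau}^{-1}$ and $T_{\tau,\omega}=I_\mathcal{X}$.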
\end{remark}
\section{DILATION THEOREM FOR p-APPROXIMATE SCHAUDER FRAMES}
Here we derive a generalization of Theorem \ref{DILATIONTHEOREMHILBERTSPACE} (the Naimark-Han-Larson dilation theorem) for frames in Hilbert spaces to p-ASFs for Banach spaces. In order to derive the dilation result we must have a notion of Riesz basis for Banach spaces. Theorem \ref{RIESZBASISTHM} gives various characterizations of Riesz bases for Hilbert spaces, but all of them use (implicitly or explicitly) inner product structures and orthonormal bases. These characterizations lead to the notion of p-Riesz basis for Banach spaces using a single sequence in the Banach space (Definition \ref{RIESZBASISDEFINITIONBANACHSPACE}), but we consider a different notion in this chapter. \\
To define a notion of Riesz basis that is compatible with the Hilbert space situation, we first derive an operator-theoretic characterization of Riesz bases in Hilbert spaces which does not use the inner product. To do so, we need a result from Hilbert space frame theory.
\begin{theorem}\label{RIESZBASISCHAROURS}
For a sequence $\{\tau_n\}_n$ in $\mathcal{H}$, the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $\{\tau_n\}_n$ is a Riesz basis for $ \mathcal{H}$.
\item $\{\tau_n\}_n$ is a frame for $ \mathcal{H}$ and
\begin{align}\label{RIESZEQUATIONTHEOREM}
\theta_\tau S_\tau^{-1} \theta_\tau^*=I_{\ell^2(\mathbb{N})}.
\end{align}
\end{enumerate}
\end{theorem}
\begin{proof}
(i) $\implies$ (ii) From Theorem \ref{RIESZISAFRAME}, a Riesz basis is a frame. Now there exist an orthonormal basis $\{\omega_n\}_n$ for $\mathcal{H}$ and a bounded invertible operator $T: \mathcal{H}\to \mathcal{H}$ such that $T\omega_n=\tau_n$, for all $n \in \mathbb{N}$. We then have
\begin{align*}
S_\tau h&= \sum_{n=1}^\infty \langle h, \tau_n\rangle\tau_n= \sum_{n=1}^\infty \langle h, T\omega_n\rangle T \omega_n\\
&=T\left(\sum_{n=1}^\infty \langle T^*h, \omega_n\rangle \omega_n\right)=TT^*h, \quad \forall h \in \mathcal{H}.
\end{align*}
Therefore
\begin{align*}
\theta_\tau S_\tau^{-1} \theta_\tau^*\{a_n\}_n&=\theta_\tau (TT^*)^{-1} \theta_\tau^*\{a_n\}_n=\theta_\tau (T^*)^{-1}T^{-1} \theta_\tau^*\{a_n\}_n\\
&=\theta_\tau (T^*)^{-1}T^{-1}\left(\sum_{n=1}^\infty a_n\tau_n\right)=\theta_\tau (T^*)^{-1}T^{-1}\left(\sum_{n=1}^\infty a_nT\omega_n\right)\\
&=\theta_\tau \left(\sum_{n=1}^\infty a_n(T^*)^{-1}\omega_n\right)=\sum_{k=1}^{\infty}\left\langle \sum_{n=1}^\infty a_n(T^*)^{-1}\omega_n, \tau_k\right\rangle e_k \\
&=\sum_{k=1}^{\infty}\left\langle \sum_{n=1}^\infty a_n(T^*)^{-1}\omega_n, T\omega_k\right\rangle e_k\\
&=\sum_{k=1}^{\infty}\left\langle \sum_{n=1}^\infty a_n\omega_n, \omega_k\right\rangle e_k=\{a_k\}_k, \quad\forall\{a_n\}_n \in \ell^2(\mathbb{N}).
\end{align*}
(ii) $\implies$ (i) From Holub's theorem (Theorem \ref{HOLUBTHEOREM}), there exists a surjective bounded linear operator $T:\ell^2(\mathbb{N}) \to \mathcal{H}$ such that $Te_n=\tau_n$, for all $n \in \mathbb{N}$. Since all separable Hilbert spaces are isometrically isomorphic to one another and orthonormal bases map onto orthonormal bases, without loss of generality we may assume that $\{e_n\}_n$ is an orthonormal basis for $\mathcal{H}$ and that the domain of $T$ is $\mathcal{H}$. It now suffices to show that $T$ is invertible; since $T$ is already surjective and bounded, it suffices to show that it is injective. Let $\{a_n\}_n \in \ell^2(\mathbb{N}).$ Then $\{a_n\}_n=\theta_\tau (S_\tau^{-1} \theta_\tau^*\{a_n\}_n)$ by Equation (\ref{RIESZEQUATIONTHEOREM}). Hence $\theta_\tau$ is surjective. We now find
\begin{align*}
\theta_\tau h=\sum_{n=1}^{\infty}\langle h, \tau_n\rangle e_n=\sum_{n=1}^{\infty}\langle h, Te_n\rangle e_n=T^*h, \quad \forall h \in \mathcal{H}.
\end{align*}
Therefore
\begin{align*}
\operatorname{Kernel} (T)=T^*(\mathcal{H})^\perp=\theta_\tau(\mathcal{H})^\perp=\mathcal{H}^\perp=\{0\}.
\end{align*}
Hence $T$ is injective.
\end{proof}
Theorem \ref{RIESZBASISCHAROURS} leads to the following definition of p-approximate Riesz basis.
\begin{definition}
A pair $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ is said to be a \textbf{p-approximate Riesz basis} for $\mathcal{X}$ if it is a p-ASF for $ \mathcal{X}$ and $\theta_fS_{f,\tau}^{-1}\theta_\tau=I_{\ell^p(\mathbb{N})}$.
\end{definition}
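An immediate consequence of the definition: for a p-approximate Riesz basis, the synthesis operator $\theta_\tau$ is bounded invertible with $\theta_\tau^{-1}=\theta_fS_{f,\tau}^{-1}$. Indeed, the defining identity says that $\theta_fS_{f,\tau}^{-1}$ is a left-inverse of $\theta_\tau$, and
\begin{align*}
\theta_\tau\theta_fS_{f,\tau}^{-1}=S_{f,\tau}S_{f,\tau}^{-1}=I_\mathcal{X}
\end{align*}
says that it is also a right-inverse.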
\begin{example}\label{EXAMPLE3}
Let $p\in[1,\infty)$ and $U:\mathcal{X} \rightarrow\ell^p(\mathbb{N})$, $ V: \ell^p(\mathbb{N})\to \mathcal{X}$ be bounded invertible linear operators. Let $\{e_n\}_n$, $\{\zeta_n\}_n$, $ \{f_n\}_{n}$, and $ \{\tau_n\}_{n}$ be as in Example \ref{EXAMPLE2}. Then $ (\{f_n\}_{n}, \{\tau_n\}_{n}) $ is a p-approximate Riesz basis for $\mathcal{X}$.
\end{example}
We now derive the dilation theorem.
\begin{theorem}\label{DILATIONTHEOREMPASF}
(\textbf{Dilation theorem for p-approximate Schauder frames}) Let $ (\{f_n \}_{n}$, $\{\tau_n \}_{n}) $ be a p-ASF for $\mathcal{X}$. Then there exist a Banach space $\mathcal{X}_1$ which contains $\mathcal{X}$ isometrically and a p-approximate Riesz basis $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ for $\mathcal{X}_1$ such that
\begin{align*}
f_n=g_nP_{|\mathcal{X}}, \quad\tau_n=P\omega_n, \quad \forall n \in \mathbb{N},
\end{align*}
where $P:\mathcal{X}_1\rightarrow \mathcal{X}$ is a projection onto $\mathcal{X}$.
\end{theorem}
\begin{proof}
Let $\{e_n\}_n$ denote the standard Schauder basis for $\ell^p(\mathbb{N})$ and let $\{\zeta_n\}_n$ denote the coordinate functionals associated with $\{e_n\}_n$. Define
\begin{align*}
\mathcal{X}_1\coloneqq\mathcal{X}\oplus(I_{\ell^p(\mathbb{N})}-P_{f, \tau})(\ell^p(\mathbb{N})), \quad P:\mathcal{X}_1 \ni x\oplus y\mapsto x\oplus 0 \in \mathcal{X}_1
\end{align*}
and
\begin{align*}
\omega_n\coloneqq \tau_n\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})e_n \in \mathcal{X}_1, \quad \quad g_n\coloneqq f_n \oplus \zeta_n (I_{\ell^p(\mathbb{N})}-P_{f, \tau})\in \mathcal{X}_1^*, \quad \forall n \in \mathbb{N}.
\end{align*}
Then clearly $\mathcal{X}_1$ contains $\mathcal{X}$ isometrically, $P:\mathcal{X}_1\rightarrow \mathcal{X}$ is a projection onto $\mathcal{X}$, and
\begin{align*}
&(g_nP_{|\mathcal{X}})(x)=g_n(P_{|\mathcal{X}}x)=g_n(x)=(f_n \oplus \zeta_n (I_{\ell^p(\mathbb{N})}-P_{f, \tau}))(x\oplus 0)=f_n(x),\quad \forall x \in \mathcal{X},\\
& P\omega_n=P(\tau_n\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})e_n)=\tau_n, \quad \forall n \in \mathbb{N}.
\end{align*}
Since the operator $I_{\ell^p(\mathbb{N})}-P_{f, \tau}$ is idempotent, it follows that $(I_{\ell^p(\mathbb{N})}-P_{f, \tau})(\ell^p(\mathbb{N}))$ is a closed subspace of $\ell^p(\mathbb{N})$ and hence a Banach space. Therefore $\mathcal{X}_1$ is a Banach space. Let $x\oplus y \in \mathcal{X}_1$ and write $y=\{a_n\}_n \in \ell^p(\mathbb{N})$. We then see that
\begin{align*}
&\sum_{n=1}^{\infty}(\zeta_n (I_{\ell^p(\mathbb{N})}-P_{f, \tau}))(y)\tau_n=\sum_{n=1}^{\infty}\zeta_n(y)\tau_n-\sum_{n=1}^{\infty}\zeta_n(P_{f, \tau}(y))\tau_n\\
&\quad=\sum_{n=1}^{\infty}\zeta_n(\{a_k\}_{k})\tau_n-\sum_{n=1}^{\infty}\zeta_n(\theta_fS_{f, \tau}^{-1}\theta_\tau(\{a_k\}_{k}))\tau_n\\
&\quad=\sum_{n=1}^{\infty}a_n\tau_n-\sum_{n=1}^{\infty}\zeta_n\left(\theta_fS_{f, \tau}^{-1}\left(\sum_{k=1}^{\infty}a_k\tau_k\right)\right)\tau_n\\
&\quad
=\sum_{n=1}^{\infty}a_n\tau_n-\sum_{n=1}^{\infty}\zeta_n\left(\sum_{k=1}^{\infty}a_k\theta_fS_{f, \tau}^{-1}\tau_k\right)\tau_n\\
&\quad=\sum_{n=1}^{\infty}a_n\tau_n-\sum_{n=1}^{\infty}\zeta_n\left(\sum_{k=1}^{\infty}a_k\sum_{r=1}^{\infty}f_r(S_{f, \tau}^{-1}\tau_k)e_r\right)\tau_n\\
&\quad
=\sum_{n=1}^{\infty}a_n\tau_n-\sum_{n=1}^{\infty}\sum_{k=1}^{\infty}a_k\sum_{r=1}^{\infty}f_r(S_{f, \tau}^{-1}\tau_k)\zeta_n(e_r)\tau_n\\
&\quad=\sum_{n=1}^{\infty}a_n\tau_n-\sum_{n=1}^{\infty}\sum_{k=1}^{\infty}a_kf_n(S_{f, \tau}^{-1}\tau_k)\tau_n\\
&\quad=\sum_{n=1}^{\infty}a_n\tau_n-\sum_{k=1}^{\infty}a_k\sum_{n=1}^{\infty}f_n(S_{f, \tau}^{-1}\tau_k)\tau_n\\
&\quad=\sum_{n=1}^{\infty}a_n\tau_n-\sum_{k=1}^{\infty}a_k\tau_k=0
\end{align*}
and
\begin{align*}
& \sum_{n=1}^{\infty}f_n(x)(I_{\ell^p(\mathbb{N})}-P_{f, \tau})e_n=\sum_{n=1}^{\infty}f_n(x)e_n-\sum_{n=1}^{\infty}f_n(x)P_{f, \tau}e_n\\
&=\sum_{n=1}^{\infty}f_n(x)e_n-\sum_{n=1}^{\infty}f_n(x)\theta_fS_{f, \tau}^{-1}\theta_\tau e_n\\
&=\sum_{n=1}^{\infty}f_n(x)e_n-\sum_{n=1}^{\infty}f_n(x)\theta_fS_{f, \tau}^{-1}\tau_n\\
&=\sum_{n=1}^{\infty}f_n(x)e_n-\sum_{n=1}^{\infty}f_n(x)\sum_{k=1}^{\infty}f_k(S_{f, \tau}^{-1}\tau_n)e_k
\\
&=\sum_{n=1}^{\infty}f_n(x)e_n-\sum_{n=1}^{\infty}\sum_{k=1}^{\infty}f_n(x)f_k(S_{f, \tau}^{-1}\tau_n)e_k\\
&=\sum_{n=1}^{\infty}f_n(x)e_n-\sum_{k=1}^{\infty}\sum_{n=1}^{\infty}f_n(x)f_k(S_{f, \tau}^{-1}\tau_n)e_k\\
&=\sum_{n=1}^{\infty}f_n(x)e_n-\sum_{k=1}^{\infty}f_k\left(\sum_{n=1}^{\infty}f_n(x)S_{f, \tau}^{-1}\tau_n\right)e_k\\
&=\sum_{n=1}^{\infty}f_n(x)e_n-\sum_{k=1}^{\infty}f_k(x)e_k=0.
\end{align*}
By using previous two calculations, we get
\begin{align*}
& S_{g, \omega}(x\oplus y) =\sum_{n=1}^{\infty}g_n(x\oplus y)\omega_n\\
&=\sum_{n=1}^{\infty}(f_n \oplus \zeta_n (I_{\ell^p(\mathbb{N})}-P_{f, \tau}))(x\oplus y)(\tau_n\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})e_n)\\
&=\sum_{n=1}^{\infty}(f_n(x) + (\zeta_n (I_{\ell^p(\mathbb{N})}-P_{f, \tau}))(y))(\tau_n\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})e_n)\\
&=\left(\sum_{n=1}^{\infty}f_n(x)\tau_n+\sum_{n=1}^{\infty}(\zeta_n (I_{\ell^p(\mathbb{N})}-P_{f, \tau}))(y)\tau_n\right)\oplus\\
&\quad \left(\sum_{n=1}^{\infty}f_n(x)(I_{\ell^p(\mathbb{N})}-P_{f, \tau})e_n+\sum_{n=1}^{\infty}(\zeta_n (I_{\ell^p(\mathbb{N})}-P_{f, \tau}))(y)(I_{\ell^p(\mathbb{N})}-P_{f, \tau})e_n\right)\\
&=(S_{f, \tau}x+0)\oplus \left(0+(I_{\ell^p(\mathbb{N})}-P_{f, \tau})\sum_{n=1}^{\infty}\zeta_n ((I_{\ell^p(\mathbb{N})}-P_{f, \tau})y)e_n\right)\\
&=S_{f, \tau}x\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})(I_{\ell^p(\mathbb{N})}-P_{f, \tau})y=S_{f, \tau}x\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})y\\
&=(S_{f, \tau}\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau}))(x\oplus y).
\end{align*}
Since the operator $I_{\ell^p(\mathbb{N})}-P_{f, \tau}$ is idempotent, $I_{\ell^p(\mathbb{N})}-P_{f, \tau}$ becomes the identity operator on the space $(I_{\ell^p(\mathbb{N})}-P_{f, \tau})(\ell^p(\mathbb{N}))$. Hence we get that the operator $S_{g, \omega}=S_{f, \tau}\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})$ is bounded invertible from $\mathcal{X}_1$ onto itself. We next show that $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ is a p-approximate Riesz basis for $\mathcal{X}_1$. For this, first we find $\theta_g$ and $\theta_\omega$. Consider
\begin{align*}
\theta_g(x\oplus y)&=\{g_n(x\oplus y)\}_{n}=\{(f_n \oplus \zeta_n (I_{\ell^p(\mathbb{N})}-P_{f, \tau}))(x\oplus y)\}_{n}\\
&=\{f_n (x) +\zeta_n ((I_{\ell^p(\mathbb{N})}-P_{f, \tau}) y)\}_{n}=\{f_n (x)\}_{n} +\{\zeta_n ((I_{\ell^p(\mathbb{N})}-P_{f, \tau}) y)\}_{n}\\
&=\theta_fx+\sum_{n=1}^{\infty}\zeta_n ((I_{\ell^p(\mathbb{N})}-P_{f, \tau}) y )e_n=\theta_fx+(I_{\ell^p(\mathbb{N})}-P_{f, \tau}) y , \quad\forall x\oplus y \in \mathcal{X}_1
\end{align*}
and
\begin{align*}
\theta_\omega\{a_n\}_n&=\sum_{n=1}^{\infty}a_n\omega_n=\sum_{n=1}^{\infty}a_n(\tau_n\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})e_n)\\
&= \left(\sum_{n=1}^{\infty}a_n\tau_n\right) \oplus \left(\sum_{n=1}^{\infty}a_n(I_{\ell^p(\mathbb{N})}-P_{f, \tau})e_n\right)\\
&=\theta_\tau\{a_n\}_n\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})\left(\sum_{n=1}^{\infty}a_ne_n\right)\\
&=\theta_\tau\{a_n\}_n\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})\{a_n\}_n, \quad\forall \{a_n\}_n \in \ell^p(\mathbb{N}).
\end{align*}
Therefore
\begin{align*}
P_{g, \omega}\{a_n\}_n&=\theta_g S_{g, \omega}^{-1}\theta_\omega\{a_n\}_n=\theta_gS_{g, \omega}^{-1}(\theta_\tau\{a_n\}_n\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})\{a_n\}_n)\\
&=\theta_g(S_{f, \tau}^{-1}\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau}) )(\theta_\tau\{a_n\}_n\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})\{a_n\}_n)\\
&=\theta_g(S_{f, \tau}^{-1} \theta_\tau\{a_n\}_n\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})^2\{a_n\}_n)\\
&=\theta_g(S_{f, \tau}^{-1} \theta_\tau\{a_n\}_n\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})\{a_n\}_n)\\
&=\theta_f(S_{f, \tau}^{-1} \theta_\tau\{a_n\}_n)+(I_{\ell^p(\mathbb{N})}-P_{f, \tau})(I_{\ell^p(\mathbb{N})}-P_{f, \tau})\{a_n\}_n\\
&=P_{f, \tau}\{a_n\}_n+(I_{\ell^p(\mathbb{N})}-P_{f, \tau})\{a_n\}_n=\{a_n\}_n, \quad \forall \{a_n\}_n \in \ell^p(\mathbb{N}).
\end{align*}
\end{proof}
\begin{corollary}(\cite{HANLARSON, KASHINKULIKOVA})
Let $\{\tau_n\}_n$ be a frame for $ \mathcal{H}$. Then there exist a Hilbert space $ \mathcal{H}_1 $ which contains $ \mathcal{H}$ isometrically and a Riesz basis $\{\omega_n\}_n$ for $ \mathcal{H}_1$ such that
\begin{align*}
\tau_n=P\omega_n, \quad\forall n \in \mathbb{N},
\end{align*}
where $P$ is the orthogonal projection from $\mathcal{H}_1$ onto $\mathcal{H}$.
\end{corollary}
\begin{proof}
Let $\{\tau_n\}_n$ be a frame for $\mathcal{H}$. Define
\begin{align*}
f_n:\mathcal{H} \ni h \mapsto f_n(h)\coloneqq \langle h, \tau_n\rangle \in \mathbb{K}, \quad \forall n \in \mathbb{N}.
\end{align*}
Then $\theta_f=\theta_\tau$. Note that now $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ is a 2-ASF for $\mathcal{H}$. Theorem \ref{DILATIONTHEOREMPASF} now says that there exist a Banach space $\mathcal{X}_1$ which contains $\mathcal{H}$ isometrically and a 2-approximate Riesz basis $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ for $\mathcal{X}_1=\mathcal{H}\oplus(I_{\ell^2(\mathbb{N})}-P_{\tau})(\ell^2(\mathbb{N}))$ such that
\begin{align*}
f_n=g_nP_{|\mathcal{H}}, \quad\tau_n=P\omega_n, \quad \forall n \in \mathbb{N},
\end{align*}
where $P:\mathcal{X}_1\rightarrow \mathcal{H}$ is a projection onto $\mathcal{H}$. Since $(I_{\ell^2(\mathbb{N})}-P_{\tau})(\ell^2(\mathbb{N}))$ is a closed subspace of the Hilbert space $\ell^2(\mathbb{N})$, $\mathcal{X}_1$ now becomes a Hilbert space. From the definition of $P$ we get that it is an orthogonal projection. To prove that $\{\omega_n\}_n$ is a Riesz basis for $\mathcal{X}_1$, we use Theorem \ref{RIESZBASISCHAROURS}. Since $\{\tau_n\}_n$ is a
frame for $\mathcal{H}$, there exist $a,b>0$ such that
\begin{align*}
a\|h\|^2 \leq \sum_{n=1}^\infty |\langle h, \tau_n\rangle|^2\leq b\|h\|^2, \quad \forall h \in \mathcal{H}.
\end{align*}
Let $h\oplus (I_{\ell^2(\mathbb{N})}-P_{\tau})\{a_k\}_k\in \mathcal{X}_1$. Then, assuming without loss of generality that $b\geq1$ (any larger upper frame bound also works), we get
\begin{align*}
&\sum_{n=1}^{\infty}|\langle h\oplus (I_{\ell^2(\mathbb{N})}-P_{ \tau})\{a_k\}_k, \omega_n\rangle|^2\\
&\quad=\sum_{n=1}^{\infty}|\langle h\oplus (I_{\ell^2(\mathbb{N})}-P_{ \tau})\{a_k\}_k, \tau_n\oplus (I_{\ell^2(\mathbb{N})}-P_{ \tau})e_n\rangle|^2\\
&\quad=\sum_{n=1}^{\infty}|\langle h, \tau_n\rangle|^2+\sum_{n=1}^{\infty}|\langle (I_{\ell^2(\mathbb{N})}-P_{ \tau})\{a_k\}_k, (I_{\ell^2(\mathbb{N})}-P_{ \tau})e_n\rangle|^2\\
&\quad=\sum_{n=1}^{\infty}|\langle h, \tau_n\rangle|^2+\sum_{n=1}^{\infty}|\langle (I_{\ell^2(\mathbb{N})}-P_{ \tau})(I_{\ell^2(\mathbb{N})}-P_{ \tau})\{a_k\}_k, e_n\rangle|^2\\
&\quad=\sum_{n=1}^{\infty}|\langle h, \tau_n\rangle|^2+\sum_{n=1}^{\infty}|\langle (I_{\ell^2(\mathbb{N})}-P_{ \tau})\{a_k\}_k, e_n\rangle|^2\\
&\quad=\sum_{n=1}^{\infty}|\langle h, \tau_n\rangle|^2+\|(I_{\ell^2(\mathbb{N})}-P_{ \tau})\{a_k\}_k\|^2\\
&\quad\leq b\|h\|^2+\|(I_{\ell^2(\mathbb{N})}-P_{\tau})\{a_k\}_k\|^2\\
&\quad\leq b(\|h\|^2+\|(I_{\ell^2(\mathbb{N})}-P_{ \tau})\{a_k\}_k\|^2)\\
&\quad=b\| h\oplus (I_{\ell^2(\mathbb{N})}-P_{\tau})\{a_k\}_k\|^2.
\end{align*}
The previous calculation shows that $\{\omega_n\}_n$ is a Bessel sequence
for $\mathcal{X}_1$. Hence $S_\omega:\mathcal{X}_1 \ni x\oplus\{a_k\}_k\mapsto \sum_{n=1}^{\infty} \langle x\oplus\{a_k\}_k, \omega_n\rangle \omega_n\in \mathcal{X}_1$ is a well-defined bounded linear operator. Next we claim that
\begin{align}\label{CLAIM}
g_n(x\oplus\{a_k\}_k)=\langle x\oplus\{a_k\}_k, \omega_n\rangle, \quad \forall x\oplus\{a_k\}_k \in \mathcal{X}_1, \forall n \in \mathbb{N}.
\end{align}
Consider
\begin{align*}
&g_n(x\oplus\{a_k\}_k)=(f_n \oplus \zeta_n (I_{\ell^2(\mathbb{N})}-P_{ \tau}))(x\oplus\{a_k\}_k)\\
&=f_n(x)+\zeta_n ((I_{\ell^2(\mathbb{N})}-P_{ \tau})\{a_k\}_k)
=f_n(x)+\zeta_n\left(\{a_k\}_k\right)-\zeta_n (P_{ \tau}\{a_k\}_k)\\
&=f_n(x)+\zeta_n\left(\{a_k\}_k\right)-\zeta_n (\theta_\tau S_{ \tau}^{-1}\theta_\tau^*\{a_k\}_k)
=f_n(x)+a_n-\zeta_n\left(\theta_\tau S_{ \tau}^{-1}\left(\sum_{k=1}^{\infty}a_k\tau_k\right)\right)\\
&=f_n(x)+a_n-\zeta_n\left(\sum_{k=1}^{\infty}a_k\theta_\tau S_{ \tau}^{-1}\tau_k\right)
=f_n(x)+a_n-\zeta_n\left(\sum_{k=1}^{\infty}a_k\sum_{r=1}^{\infty}\langle S_{ \tau}^{-1}\tau_k, \tau_r \rangle e_r\right)\\
&=f_n(x)+a_n-\sum_{k=1}^{\infty}a_k\langle S_{ \tau}^{-1}\tau_k, \tau_n \rangle= \langle x, \tau_n\rangle+a_n-\sum_{k=1}^{\infty}a_k\langle S_{ \tau}^{-1}\tau_k, \tau_n \rangle \quad \text{ and }
\end{align*}
\begin{align*}
&\langle x\oplus\{a_k\}_k, \omega_n\rangle=\langle x\oplus\{a_k\}_k, \tau_n\oplus (I_{\ell^2(\mathbb{N})}-P_{ \tau})e_n\rangle\\
&\quad=\langle x, \tau_n\rangle+\langle \{a_k\}_k, (I_{\ell^2(\mathbb{N})}-P_{ \tau})e_n\rangle
=\langle x, \tau_n\rangle+\langle \{a_k\}_k, e_n\rangle-\langle \{a_k\}_k, P_{ \tau}e_n\rangle\\
&\quad=\langle x, \tau_n\rangle+a_n-\left \langle\{a_k\}_k, \theta_\tau S_{ \tau}^{-1}\theta_\tau^*e_n\right\rangle=\langle x, \tau_n\rangle+a_n-\left \langle\{a_k\}_k, \theta_\tau S_{ \tau}^{-1}\tau_n\right\rangle\\
&\quad=\langle x, \tau_n\rangle+a_n- \langle\{a_k\}_k, \{ \langle S_{ \tau}^{-1}\tau_n, \tau_k \rangle\}_k \rangle=\langle x, \tau_n\rangle+a_n-\sum_{k=1}^{\infty}a_k\overline{\langle S_{ \tau}^{-1}\tau_n, \tau_k \rangle}\\
&\quad=\langle x, \tau_n\rangle+a_n-\sum_{k=1}^{\infty}a_k\langle\tau_k,S_{ \tau}^{-1}\tau_n \rangle=\langle x, \tau_n\rangle+a_n-\sum_{k=1}^{\infty}a_k\langle S_{ \tau}^{-1}\tau_k, \tau_n \rangle.
\end{align*}
Thus Equation (\ref{CLAIM}) holds. Therefore for all $x\oplus\{a_k\}_k\in \mathcal{X}_1$,
\begin{align*}
S_{g,\omega}(x\oplus\{a_k\}_k)=\sum_{n=1}^{\infty}g_n(x\oplus\{a_k\}_k)\omega_n=\sum_{n=1}^{\infty}\langle x\oplus\{a_k\}_k, \omega_n\rangle\omega_n=S_{\omega}(x\oplus\{a_k\}_k).
\end{align*}
Since $ S_{g,\omega}$ is invertible, $S_{\omega}$ is invertible. Clearly $S_{\omega}$ is positive. Therefore
\begin{align*}
\frac{1}{\|S_{\omega}^{-1}\|}\|g\|^2\leq \langle S_{\omega}g, g\rangle\leq \|S_\omega\| \|g\|^2, \quad \forall g \in \mathcal{X}_1.
\end{align*}
Hence
\begin{align*}
\frac{1}{\|S_{\omega}^{-1}\|}\|g\|^2\leq \sum_{n=1}^{\infty}|\langle g, \omega_n\rangle|^2\leq \|S_\omega\| \|g\|^2, \quad \forall g \in \mathcal{X}_1,
\end{align*}
so $\{\omega_n\}_n$ is a frame
for $\mathcal{X}_1$.
Finally we show Equation (\ref{RIESZEQUATIONTHEOREM}) in Theorem \ref{RIESZBASISCHAROURS} for the frame $\{\omega_n\}_n$. Consider
\begin{align*}
&\theta_\omega S_\omega^{-1} \theta_\omega^*\{a_n\}_n=\theta_\omega S_\omega^{-1}\left(\sum_{n=1}^{\infty}a_n\omega_n\right)=\theta_\omega \left(\sum_{n=1}^{\infty}a_nS_\omega^{-1}\omega_n\right)\\
&=\sum_{k=1}^{\infty} \left\langle \sum_{n=1}^{\infty}a_nS_\omega^{-1}\omega_n, \omega_k\right \rangle e_k=\sum_{k=1}^{\infty} \sum_{n=1}^{\infty}a_n\langle S_\omega^{-1}\omega_n, \omega_k \rangle e_k\\
&=\sum_{k=1}^{\infty} \sum_{n=1}^{\infty}a_n\langle (S_\tau^{-1}\oplus(I_{\ell^2(\mathbb{N})}-P_{ \tau}) )(\tau_n\oplus (I_{\ell^2(\mathbb{N})}-P_{ \tau})e_n), \tau_k\oplus (I_{\ell^2(\mathbb{N})}-P_{ \tau})e_k \rangle e_k\\
&=\sum_{k=1}^{\infty} \sum_{n=1}^{\infty}a_n\langle S_\tau^{-1}\tau_n\oplus(I_{\ell^2(\mathbb{N})}-P_{ \tau}) ^2e_n, \tau_k\oplus (I_{\ell^2(\mathbb{N})}-P_{ \tau})e_k \rangle e_k\\
&=\sum_{k=1}^{\infty} \sum_{n=1}^{\infty}a_n(\langle S_\tau^{-1}\tau_n, \tau_k \rangle+\langle (I_{\ell^2(\mathbb{N})}-P_{ \tau}) e_n, (I_{\ell^2(\mathbb{N})}-P_{ \tau})e_k \rangle)e_k\\
&=\sum_{k=1}^{\infty} \left\langle \sum_{n=1}^{\infty}a_nS_\tau^{-1}\tau_n, \tau_k\right \rangle e_k+\sum_{k=1}^{\infty} \sum_{n=1}^{\infty}a_n\langle (I_{\ell^2(\mathbb{N})}-P_{ \tau}) e_n, (I_{\ell^2(\mathbb{N})}-P_{ \tau})e_k \rangle e_k\\
&=P_\tau\{a_n\}_n+\sum_{k=1}^{\infty} \sum_{n=1}^{\infty}a_n\langle (I_{\ell^2(\mathbb{N})}-P_{ \tau}) e_n,e_k \rangle e_k\\
&=P_\tau\{a_n\}_n+\sum_{k=1}^{\infty}\sum_{n=1}^{\infty}a_n\langle e_n,e_k \rangle e_k-\sum_{k=1}^{\infty}\sum_{n=1}^{\infty}a_n\langle P_\tau e_n,e_k \rangle e_k\\
&=P_\tau\{a_n\}_n+\sum_{k=1}^{\infty}a_ke_k-\sum_{k=1}^{\infty}\sum_{n=1}^{\infty}a_n\langle \theta_\tau S^{-1}_\tau \theta_\tau ^*e_n,e_k \rangle e_k\\
&=P_\tau\{a_n\}_n+\sum_{k=1}^{\infty}a_ke_k-\sum_{k=1}^{\infty}\sum_{n=1}^{\infty}a_n\langle S^{-1}_\tau \tau_n,\theta_\tau^*e_k \rangle e_k\\
&=P_\tau\{a_n\}_n+\sum_{k=1}^{\infty}a_ke_k-\sum_{k=1}^{\infty}\sum_{n=1}^{\infty}a_n\langle S^{-1}_\tau \tau_n,\tau_k \rangle e_k\\
&=P_\tau\{a_n\}_n+\sum_{k=1}^{\infty}a_ke_k-P_\tau\{a_n\}_n= \{a_n\}_n,\quad\forall \{a_n\}_n \in \ell^2(\mathbb{N}).
\end{align*}
Thus $\{\omega_n\}_n$ is a Riesz basis
for $\mathcal{X}_1$ which completes the proof.
\end{proof}
We now illustrate Theorem \ref{DILATIONTHEOREMPASF} with an example.
\begin{example}
Let $p\in[1,\infty)$. Let $\{e_n\}_n$ denote the canonical Schauder basis for $\ell^p(\mathbb{N})$ and let $\{\zeta_n\}_n$ denote the associated coordinate functionals. Define
\begin{align*}
R: \ell^p(\mathbb{N}) \ni (x_n)_{n=1}^\infty\mapsto (0,x_1,x_2, \dots)\in \ell^p(\mathbb{N}),\\
L: \ell^p(\mathbb{N}) \ni (x_n)_{n=1}^\infty\mapsto (x_2,x_3,x_4, \dots)\in \ell^p(\mathbb{N}).
\end{align*}
Then $LR=I_{\ell^p(\mathbb{N})}$. Example \ref{EXAMPLE3} says that $ (\{f_n\coloneqq \zeta_nR\}_{n}, \{\tau_n\coloneqq Le_n\}_{n}) $ is a p-ASF for $\ell^p(\mathbb{N})$. Note that $\theta_f=R$ and $\theta_\tau=L$. Therefore $S_{f, \tau}=LR=I_{\ell^p(\mathbb{N})}$ and $P_{f, \tau}=RL.$ Then
\begin{align*}
(I_{\ell^p(\mathbb{N})}-P_{f, \tau})(x_n)_{n=1}^\infty&=(x_n)_{n=1}^\infty-
RL(x_n)_{n=1}^\infty\\
&=(x_n)_{n=1}^\infty-(0,x_2,x_3, \dots)=(x_1, 0, 0, \dots),\quad \forall (x_n)_{n=1}^\infty\in \ell^p(\mathbb{N})
\end{align*}
which says that $(I_{\ell^p(\mathbb{N})}-P_{f, \tau})(\ell^p(\mathbb{N}))\cong \mathbb{K}$. Using Theorem \ref{DILATIONTHEOREMPASF},
\begin{align*}
&\mathcal{X}_1=\ell^p(\mathbb{N})\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})(\ell^p(\mathbb{N}))\cong\ell^p(\mathbb{N})\oplus\mathbb{K}\cong \ell^p(\mathbb{N}\cup \{0\})\\
&P:\ell^p(\mathbb{N}\cup \{0\})\ni (x_n)_{n=0}^\infty \mapsto (x_n)_{n=1}^\infty \in \ell^p(\mathbb{N}),
\end{align*}
\begin{align*}
\omega_1&=\tau_1\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})\tau_1=Le_1\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})Le_1
=0\oplus 0,\\
\omega_2&=\tau_2\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})\tau_2=Le_2\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})Le_2\\
&=e_{1}\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})e_{1}=e_{1}\oplus (e_1-RLe_{1})=e_{1}\oplus e_1,\\
\omega_n&=\tau_n\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})\tau_n=Le_n\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})Le_n\\
&=e_{n-1}\oplus (I_{\ell^p(\mathbb{N})}-P_{f, \tau})e_{n-1}=e_{n-1}\oplus (e_{n-1}-RLe_{n-1})=e_{n-1}\oplus 0, \quad \forall n \geq 3,\\
g_n&=f_n\oplus \zeta_n(I_{\ell^p(\mathbb{N})}-P_{f, \tau})=\zeta_nR\oplus \zeta_n(I_{\ell^p(\mathbb{N})}-RL), \quad \forall n \in \mathbb{N},
\end{align*}
i.e., $g_1=0\oplus \zeta_1$ and $g_n=\zeta_nR\oplus 0$ for all $n\geq 2$,
and $ (\{g_n\}_{n}, \{\omega_n\}_{n}) $ is a p-approximate Riesz basis for $\mathcal{X}_1$.
\end{example}
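As a numerical aside (not part of the example), the algebra of the shifts $R$ and $L$ can be checked on a finite truncation. The sketch below, in Python with real scalars, is an illustration only; the truncation (with $R$ mapping into one extra coordinate) is chosen so that $LR=I$ holds exactly:

```python
import numpy as np

# Finite truncation of ell^p: R maps R^N into R^{N+1}, L maps back,
# so LR = I holds exactly and I - RL is the rank-one defect.

def R(x):
    """Right shift: (x_1, x_2, ...) -> (0, x_1, x_2, ...)."""
    return np.concatenate(([0.0], x))

def L(x):
    """Left shift: (x_1, x_2, ...) -> (x_2, x_3, ...)."""
    return np.asarray(x)[1:]

x = np.array([3.0, -1.0, 4.0, 1.0, 5.0])

# L R = I, i.e. S_{f,tau} = LR is the identity
assert np.allclose(L(R(x)), x)

# (I - RL)x = (x_1, 0, 0, ...): a one-dimensional defect space
assert np.allclose(x - R(L(x)), [x[0], 0.0, 0.0, 0.0, 0.0])

# Reconstruction sum_n f_n(x) tau_n with f_n = zeta_n R and tau_n = L e_n
E = np.eye(len(x) + 1)                 # e_1, ..., e_{N+1} in the larger space
recon = sum(R(x)[n] * L(E[n]) for n in range(len(x) + 1))
assert np.allclose(recon, x)
```

The last assertion is the p-ASF reconstruction formula $\sum_n f_n(x)\tau_n = x$ for this pair.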
\section{NEW IDENTITY FOR PARSEVAL p-APPROXIMATE SCHAUDER FRAMES}
Certain classes of Banach spaces, known as homogeneous semi-inner product spaces, admit a kind of inner product and can be studied in close analogy with Hilbert spaces. These spaces were introduced in \cite{LUMER} and studied extensively in \cite{GILES}. We now recall the fundamentals of semi-inner products. Let $\mathcal{X}$ be a vector space over $\mathbb{K}$. A map $[\cdot, \cdot]:\mathcal{X}\times \mathcal{X} \to \mathbb{K}$ is said to be a \textbf{homogeneous semi-inner product} if it satisfies the following.
\begin{enumerate}[label=(\roman*)]
\item $[x,x]>0$, for all $x \in \mathcal{X}, x\neq 0$.
\item $[\lambda x,y]=\lambda [x,y]$, for all $x,y \in \mathcal{X}$, for all $\lambda \in \mathbb{K}$.
\item $[x, \lambda y]=\overline{\lambda} [x,y]$, for all $x,y \in \mathcal{X}$, for all $\lambda \in \mathbb{K}$.
\item $[x+y, z]=[x, z]+[y, z]$, for all $x,y,z \in \mathcal{X}$.
\item $| [x,y]|^2\leq [x,x][y,y]$, for all $x,y \in \mathcal{X}$.
\end{enumerate}
A homogeneous semi-inner product $[\cdot, \cdot]$ induces a \textbf{norm} which is defined as $\|x\|\coloneqq \sqrt{[x,x]}$. A prototypical example of homogeneous semi-inner product spaces is the standard $\ell^p(\mathbb{N})$ space, $1<p<\infty$, equipped with semi-inner product defined as follows. For $x=\{x_n\}_n,$ $y=\{y_n\}_n\in \ell^p(\mathbb{N})$, define
\begin{align*}
[x,y]\coloneqq \begin{cases*}
\frac{\sum\limits_{n=1}^{\infty}x_n\overline{y_n}|y_n|^{p-2}}{\|y\|_p^{p-2}} \quad& if $ y \neq 0 $ \\
0 & if $ y=0.$ \\
\end{cases*}
\end{align*}
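The formula above can be illustrated numerically; the following Python sketch (an illustration with real coefficients and a finite truncation of $\ell^p(\mathbb{N})$, and with $p=3$ so that $|y_n|^{p-2}$ is well defined at $y_n=0$) checks the induced norm and properties (ii), (iv) and (v):

```python
import numpy as np

p = 3.0  # any 1 < p < infinity

def sip(x, y):
    """Giles semi-inner product [x, y] on a truncation of ell^p (real scalars)."""
    ny = np.linalg.norm(y, ord=p)
    if ny == 0.0:
        return 0.0
    return float(np.sum(x * y * np.abs(y) ** (p - 2))) / ny ** (p - 2)

x = np.array([1.0, -2.0, 0.5, 3.0])
y = np.array([0.5, 1.0, -1.0, 2.0])
z = np.array([2.0, 0.0, 1.0, -1.5])

assert abs(sip(x, x) - np.linalg.norm(x, ord=p) ** 2) < 1e-10  # [x,x] = ||x||_p^2
assert abs(sip(2.5 * x, y) - 2.5 * sip(x, y)) < 1e-10          # homogeneity (ii)
assert abs(sip(x + z, y) - sip(x, y) - sip(z, y)) < 1e-10      # additivity (iv)
assert sip(x, y) ** 2 <= sip(x, x) * sip(y, y) + 1e-10         # Cauchy-Schwarz (v)
```

The Cauchy-Schwarz check is an instance of Hölder's inequality, since $\|\,|y|^{p-1}\|_q=\|y\|_p^{p-1}$ for the conjugate index $q$.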
For certain classes of Banach spaces we have a Riesz representation theorem.
\begin{theorem}(\cite{GILES})\label{RIESZ} (\textbf{Riesz representation theorem for Banach spaces})
Let $\mathcal{X}$ be a complete homogeneous semi-inner product space. If $\mathcal{X}$ is continuous and uniformly convex, then for every bounded linear functional $f: \mathcal{X}\to \mathbb{K}$, there exists a unique $y \in \mathcal{X}$ such that $f(x)=[x,y]$, for all $x \in \mathcal{X}$.
\end{theorem}
Theorem \ref{RIESZ} leads to the notion of a generalized adjoint, whose existence is assured by the following theorem. To state the result we need two definitions.
\begin{definition}(cf. \cite{GILES})
Let $\mathcal{X}$ be a complete homogeneous semi-inner product space. Space $\mathcal{X}$ is said to be \textbf{continuous} if
\begin{align*}
\text{Re}([x,y+\lambda x])\to \text{Re}([x,y]) \text{ as } \mathbb{R}\ni\lambda \to 0, \quad \forall x,y \in \mathcal{X} \text{ such that } \|x\|=\|y\|=1.
\end{align*}
\end{definition}
\begin{definition}(\cite{GILES})
A Banach space $\mathcal{X}$ is said to be \textbf{uniformly convex} if for every $\epsilon >0$ there exists a $\delta>0$ such that if $ x,y \in \mathcal{X}$ satisfy $\|x\|=\|y\|=1$ and $\|x-y\|>\epsilon$, then $\|x+y\|\leq 2(1-\delta)$.
\end{definition}
\begin{theorem}(\cite{KOEHLER})\label{THEOREMK}
Let $\mathcal{X}$ be a complete homogeneous semi-inner product space. If $\mathcal{X}$ is continuous and uniformly convex, then for every bounded linear operator $A: \mathcal{X}\to \mathcal{X}$, there exists a unique map $A^\dagger:\mathcal{X}\to \mathcal{X}$, which need not be linear or continuous, called the \textbf{generalized adjoint} of $A$, such that
\begin{align*}
[Ax,y]=[x,A^\dagger y],\quad \forall x,y \in \mathcal{X}.
\end{align*}
Moreover, the following statements hold.
\begin{enumerate}[label=(\roman*)]
\item $(\lambda A)^\dagger=\overline{\lambda}A^\dagger$, for all $\lambda \in \mathbb{K}$.
\item $A^\dagger$ is injective if and only if $\overline{ A(\mathcal{X})}=\mathcal{X}$.
\item If the norm of $\mathcal{X}$ is strongly (Fr\'{e}chet) differentiable, then $A^\dagger$ is continuous.
\end{enumerate}
\end{theorem}
Throughout this section we assume that $\mathcal{X}$ is a continuous, uniformly convex, homogeneous semi-inner product space. Let $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ be a p-ASF for $\mathcal{X}$. Theorem \ref{RIESZ} then says that each $f_n$ can be identified with a unique $\omega_n \in \mathcal{X}$ which satisfies $f_n(x)=[x,\omega_n]$, for all $x \in \mathcal{X}$. Note that
\begin{align*}
\sum_{n=1}^{\infty}[x, (S_{\omega, \tau}^{-1})^\dagger\omega_n]S_{\omega, \tau}^{-1}\tau_n=S_{\omega, \tau}^{-1}\left(\sum_{n=1}^{\infty}[ S_{\omega, \tau}^{-1}x, \omega_n]\tau_n\right)=S_{\omega, \tau}^{-1}x, \quad \forall x \in \mathcal{X}.
\end{align*}
Hence $ (\{\tilde{\omega}_n\coloneqq(S_{\omega, \tau}^{-1})^\dagger\omega_n \}_{n}, \{\tilde{\tau}_n\coloneqq S_{\omega, \tau}^{-1}\tau_n \}_{n}) $ is a p-ASF for $\mathcal{X}$, which is called the canonical dual frame of $ (\{\omega_n \}_{n}, \{\tau_n \}_{n}) $.
Given $\mathbb{M} \subseteq \mathbb{N}$, we define $
S_\mathbb{M} :\mathcal{X} \ni x \mapsto \sum_{n\in \mathbb{M}} [x, \omega_n]\tau_n\in
\mathcal{X}$. Because of Inequalities (\ref{FIRSTINEQUALITYPASF}) and (\ref{SECONDINEQUALITYPASF}), the map $S_\mathbb{M}$ is a well-defined bounded linear operator. Note that the operator $S_\mathbb{M}$ need not be invertible. In Proposition 2.2 of \cite{BALACASAZZAEDIDINKUTYNIOK} it is shown that if operators $U, V:\mathcal{H} \to \mathcal{H}$ satisfy $U+V=I_\mathcal{H}$, then $U-V=U^2-V^2$. This remains valid for Banach spaces.
\begin{lemma}\label{LEMMA}
If operators $U, V:\mathcal{X} \to \mathcal{X}$ satisfy $U+V=I_\mathcal{X}$, then $U-V=U^2-V^2$.
\end{lemma}
\begin{proof}
We follow the ideas in the proof of Proposition 2.2 in \cite{BALACASAZZAEDIDINKUTYNIOK}:
\begin{align*}
U-V&=U-(I_\mathcal{X}-U)=2U-I_\mathcal{X}=U^2-(I_\mathcal{X}-2U+U^2)\\
&=U^2-(I_\mathcal{X}-U)^2=U^2-V^2.
\end{align*}
\end{proof}
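Since the lemma is purely algebraic, it can be confirmed instantly on matrices; the following Python sketch (an illustration only, with a random, non-commuting pair on a five-dimensional space) does so:

```python
import numpy as np

# Lemma: U + V = I forces U - V = U^2 - V^2, even for non-self-adjoint,
# non-commuting U and V.
rng = np.random.default_rng(0)
U = rng.standard_normal((5, 5))
V = np.eye(5) - U                      # enforce U + V = I
assert np.allclose(U - V, U @ U - V @ V)
```

The assertion mirrors the proof: $U-V=2U-I_\mathcal{X}=U^2-(I_\mathcal{X}-U)^2$.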
We now have the Banach space version of Theorem \ref{CASAZZAGENERAL}.
\begin{theorem}\label{OURGENERAL}
Let $ (\{\omega_n \}_{n}, \{\tau_n \}_{n}) $ be a p-ASF for $\mathcal{X}$. Then for every $\mathbb{M} \subseteq \mathbb{N}$, and for all $x \in \mathcal{X}$,
\begin{align*}
\sum_{n\in \mathbb{M}}[x, \omega_n][\tau_n,x]-\sum_{n=1}^{\infty}[S_\mathbb{M}x,\tilde{\omega}_n][\tilde{\tau}_n,S_\mathbb{M}^\dagger x]&=\sum_{n\in \mathbb{M}^\text{c}}[x, \omega_n][\tau_n,x]-\sum_{n=1}^{\infty}[S_{\mathbb{M}^\text{c}}x,\tilde{\omega}_n][\tilde{\tau}_n,S_{\mathbb{M}^\text{c}}^\dagger x] .
\end{align*}
\end{theorem}
\begin{proof}
For notational convenience, we denote $S_{\omega, \tau}$ by $S$. We clearly have $S_\mathbb{M}+S_{\mathbb{M}^\text{c}}=S$. Using $S^{-1}S_\mathbb{M}+S^{-1}S_{\mathbb{M}^\text{c}}=I_\mathcal{X}$ and Lemma \ref{LEMMA}, we get $S^{-1}S_\mathbb{M}-S^{-1}S_{\mathbb{M}^\text{c}}=(S^{-1}S_\mathbb{M})^2-(S^{-1}S_{\mathbb{M}^\text{c}})^2=S^{-1}S_\mathbb{M}S^{-1}S_\mathbb{M}-S^{-1}S_{\mathbb{M}^\text{c}}S^{-1}S_{\mathbb{M}^\text{c}}$ which gives
\begin{align*}
S^{-1}S_\mathbb{M}-S^{-1}S_\mathbb{M}S^{-1}S_\mathbb{M}=S^{-1}S_{\mathbb{M}^\text{c}}-S^{-1}S_{\mathbb{M}^\text{c}}S^{-1}S_{\mathbb{M}^\text{c}}.
\end{align*}
Therefore for all $x, y \in \mathcal{X}$,
\begin{align*}
[S^{-1}S_\mathbb{M}x,y]-[S^{-1}S_\mathbb{M}S^{-1}S_\mathbb{M}x, y]=[S^{-1}S_{\mathbb{M}^\text{c}}x,y]-[S^{-1}S_{\mathbb{M}^\text{c}}S^{-1}S_{\mathbb{M}^\text{c}}x,y].
\end{align*}
In particular, for all $x \in \mathcal{X}$,
\begin{align*}
[S^{-1}S_\mathbb{M}x,S^\dagger x]-[S^{-1}S_\mathbb{M}S^{-1}S_\mathbb{M}x, S^\dagger x]&=[S^{-1}S_{\mathbb{M}^\text{c}}x,S^\dagger x]-[S^{-1}S_{\mathbb{M}^\text{c}}S^{-1}S_{\mathbb{M}^\text{c}}x,S^\dagger x]
\end{align*}
which gives
\begin{align}\label{FIRST}
[S_\mathbb{M}x,x]-[S^{-1}S_\mathbb{M}x, S_\mathbb{M}^\dagger x]=[S_{\mathbb{M}^\text{c}}x,x]-[S^{-1}S_{\mathbb{M}^\text{c}}x,S_{\mathbb{M}^\text{c}}^\dagger x], \quad \forall x \in \mathcal{X}.
\end{align}
Now note that
\begin{align*}
\sum_{n=1}^{\infty}[x,\tilde{\omega}_n][\tilde{\tau}_n,y]&=\sum_{n=1}^{\infty}[x,(S^{-1})^\dagger\omega_n][S^{-1}\tau_n,y]=\sum_{n=1}^{\infty}[S^{-1}x, \omega_n][S^{-1}\tau_n,y]\\
&=\left[\sum_{n=1}^{\infty}[S^{-1}x, \omega_n]S^{-1}\tau_n,y \right]=\left[S^{-1}\left(\sum_{n=1}^{\infty}[S^{-1}x, \omega_n]\tau_n\right),y \right]\\
&=[S^{-1}x,y],
\quad \forall x,y \in \mathcal{X}.
\end{align*}
Equation (\ref{FIRST}) now gives
\begin{align*}
\sum_{n\in \mathbb{M}}[x, \omega_n][\tau_n,x]-\sum_{n=1}^{\infty}[S_\mathbb{M}x,\tilde{\omega}_n][\tilde{\tau}_n,S_\mathbb{M}^\dagger x]&=\sum_{n\in \mathbb{M}^\text{c}}[x, \omega_n][\tau_n,x]\\
&\quad -\sum_{n=1}^{\infty}[S_{\mathbb{M}^\text{c}}x,\tilde{\omega}_n][\tilde{\tau}_n,S_{\mathbb{M}^\text{c}}^\dagger x], \quad \forall x \in \mathcal{X}.
\end{align*}
\end{proof}
A look at Theorem \ref{SECOND} suggests the following statement for Banach spaces. Let $ (\{\omega_n \}_{n}, \{\tau_n \}_{n}) $ be a Parseval p-ASF for $\mathcal{X}$. Then for every $\mathbb{M} \subseteq \mathbb{N}$,
\begin{align}\label{WRONGEQUATION}
\sum_{n\in \mathbb{M}}[x, \omega_n][\tau_n,x]-\left\|\sum_{n\in \mathbb{M}}[ x, \omega_n] \tau_n\right\|^2=\sum_{n\in \mathbb{M}^\text{c}}[x, \omega_n][\tau_n,x]-\left\|\sum_{n\in \mathbb{M}^\text{c}}[ x, \omega_n] \tau_n\right\|^2, \quad \forall x \in \mathcal{X}.
\end{align}
However, the correct Banach space version of Theorem \ref{SECOND} is not Equation (\ref{WRONGEQUATION}); it is stated in the next theorem.
\begin{theorem} (\textbf{Parseval p-ASF identity}) \label{PASFID}
Let $ (\{\omega_n \}_{n}, \{\tau_n \}_{n}) $ be a Parseval p-ASF for $\mathcal{X}$. Then for every $\mathbb{M} \subseteq \mathbb{N}$,
\begin{align*}
&\sum_{n\in \mathbb{M}}[x, \omega_n][\tau_n,x]-\sum_{n\in \mathbb{M}}\sum_{k\in \mathbb{M}}[x, \omega_n][\tau_n,\omega_k][\tau_k,x]\\
&\quad=\sum_{n\in \mathbb{M}^\text{c}}[x, \omega_n][\tau_n,x]-\sum_{n\in \mathbb{M}^\text{c}}\sum_{k\in \mathbb{M}^\text{c}}[x, \omega_n][\tau_n,\omega_k][\tau_k,x], \quad \forall x \in \mathcal{X}.
\end{align*}
\end{theorem}
\begin{proof}
Using Theorem \ref{OURGENERAL}, for all $ x \in \mathcal{X}$,
\begin{align*}
&\sum_{n\in \mathbb{M}}[x, \omega_n][\tau_n,x]-\sum_{n\in \mathbb{M}}\sum_{k\in \mathbb{M}}[x, \omega_n][\tau_n,\omega_k][\tau_k,x]\\
&= \sum_{n\in \mathbb{M}}[x, \omega_n][\tau_n,x]-\left[\sum_{n\in \mathbb{M}}[x, \omega_n]\sum_{k\in \mathbb{M}}[\tau_n,\omega_k]\tau_k, x\right]\\
&=\sum_{n\in \mathbb{M}}[x, \omega_n][\tau_n,x]-\left[\sum_{n\in \mathbb{M}}[x, \omega_n]S_\mathbb{M}\tau_n, x\right]\\
&=\sum_{n\in \mathbb{M}}[x, \omega_n][\tau_n,x]-\left[S_\mathbb{M}\left(\sum_{n\in \mathbb{M}}[x, \omega_n]\tau_n\right), x\right]\\
&=\sum_{n\in \mathbb{M}}[x, \omega_n][\tau_n,x]-\left[S_\mathbb{M} S_\mathbb{M}x, x\right]\\
&=\sum_{n\in \mathbb{M}}[x, \omega_n][\tau_n,x]-\left[ S_\mathbb{M}x, S_\mathbb{M}^\dagger x\right]\\
&=\sum_{n\in \mathbb{M}}[x, \omega_n][\tau_n,x]-\left[\sum_{n=1}^{\infty}[ S_\mathbb{M}x, \omega_n]\tau_n,S_\mathbb{M}^\dagger x\right]\\
&=\sum_{n\in \mathbb{M}}[x, \omega_n][\tau_n,x]-\sum_{n=1}^{\infty}[ S_\mathbb{M}x, \omega_n][\tau_n,S_\mathbb{M}^\dagger x]\\
&=\sum_{n\in \mathbb{M}^\text{c}}[x, \omega_n][\tau_n,x]-\sum_{n=1}^{\infty}[S_{\mathbb{M}^\text{c}}x,\omega_n][\tau_n,S_{\mathbb{M}^\text{c}}^\dagger x]
\\
&=\sum_{n\in \mathbb{M}^\text{c}}[x, \omega_n][\tau_n,x]-\sum_{n\in \mathbb{M}^\text{c}}\sum_{k\in \mathbb{M}^\text{c}}[x, \omega_n][\tau_n,\omega_k][\tau_k,x].
\end{align*}
\end{proof}
In terms of $S_\mathbb{M}$ and $S_{\mathbb{M}^\text{c}}$, Theorem \ref{PASFID} can be written as
\begin{align}\label{OPERATORDESCRPTION}
S_\mathbb{M}-S_\mathbb{M}^2=S_{\mathbb{M}^\text{c}}-S_{\mathbb{M}^\text{c}}^2\quad \text{ or } \quad S_\mathbb{M}+S_{\mathbb{M}^\text{c}}^2=S_{\mathbb{M}^\text{c}}+S_\mathbb{M}^2.
\end{align}
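In the Hilbert space special case (with $\omega_n=\tau_n$ a Parseval frame) the operator identity above can be checked concretely; the following Python sketch uses the Mercedes-Benz Parseval frame of $\mathbb{R}^2$ as an illustration:

```python
import numpy as np

# Mercedes-Benz Parseval frame for R^2: three equiangular vectors of
# squared norm 2/3, so that sum_n v_n v_n^T = I.
ang = 2.0 * np.pi * np.arange(3) / 3.0
V = np.sqrt(2.0 / 3.0) * np.column_stack((np.cos(ang), np.sin(ang)))  # rows v_n
assert np.allclose(V.T @ V, np.eye(2))        # Parseval property

SM = np.outer(V[0], V[0])                     # S_M for M = {1}
SMc = np.outer(V[1], V[1]) + np.outer(V[2], V[2])
assert np.allclose(SM + SMc, np.eye(2))       # S_M + S_{M^c} = I

# The identity S_M - S_M^2 = S_{M^c} - S_{M^c}^2 from the display above
assert np.allclose(SM - SM @ SM, SMc - SMc @ SMc)
```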
We now give an application of Theorem \ref{PASFID}. This is Banach space version of Theorem \ref{THIRD}.
\begin{theorem}\label{LOWER234}
Let $ (\{\omega_n \}_{n}, \{\tau_n \}_{n}) $ be a Parseval p-ASF for $\mathcal{X}$. Let $\mathbb{M} \subseteq \mathbb{N}$. If $x \in \mathcal{X}$ is such that $[(S_\mathbb{M}-\frac{1}{2}I_\mathcal{X})^2x, x]\geq 0$, then
\begin{align*}
&\sum_{n\in \mathbb{M}}[x, \omega_n][\tau_n,x]+\sum_{n\in \mathbb{M}^\text{c}}\sum_{k\in \mathbb{M}^\text{c}}[x, \omega_n][\tau_n,\omega_k][\tau_k,x]\\
&=\sum_{n\in \mathbb{M}^\text{c}}[x, \omega_n][\tau_n,x]+\sum_{n\in \mathbb{M}}\sum_{k\in \mathbb{M}}[x, \omega_n][\tau_n,\omega_k][\tau_k,x]
\geq \frac{3}{4}\|x\|^2.
\end{align*}
\end{theorem}
\begin{proof}
We first compute
\begin{align*}
S_\mathbb{M}^2+S_{\mathbb{M}^\text{c}}^2 &= S_\mathbb{M}^2+ (I_\mathcal{X}-S_\mathbb{M})^2=2 S_\mathbb{M}^2-2S_\mathbb{M}+I_\mathcal{X}\\
&
=2\left(S_\mathbb{M}-\frac{1}{2}I_\mathcal{X}\right)^2+\frac{1}{2}I_\mathcal{X}.
\end{align*}
Hence if $x \in \mathcal{X}$ satisfies $[(S_\mathbb{M}-\frac{1}{2}I_\mathcal{X})^2x, x]\geq 0$, then
\begin{align*}
[( S_\mathbb{M}^2+S_{\mathbb{M}^\text{c}}^2)x,x]\geq \frac{1}{2}\|x\|^2.
\end{align*}
Now using Equation (\ref{OPERATORDESCRPTION}) we get
\begin{align*}
&2\sum_{n\in \mathbb{M}}[x, \omega_n][\tau_n,x]+2\sum_{n\in \mathbb{M}^\text{c}}\sum_{k\in \mathbb{M}^\text{c}}[x, \omega_n][\tau_n,\omega_k][\tau_k,x]=2[S_\mathbb{M}x,x]+2[ S_{\mathbb{M}^\text{c}}^2x,x]\\
&=[2(S_\mathbb{M}+S_{\mathbb{M}^\text{c}}^2)x,x]=[((S_\mathbb{M}+S_{\mathbb{M}^\text{c}}^2)+(S_\mathbb{M}+S_{\mathbb{M}^\text{c}}^2))x,x]\\
&=[((S_\mathbb{M}+S_{\mathbb{M}^\text{c}}^2)+(S_{\mathbb{M}^\text{c}}+S_\mathbb{M}^2))x,x]=[(I_\mathcal{X}+S_{\mathbb{M}^\text{c}}^2+S_\mathbb{M}^2)x,x]\\
&=\|x\|^2 +[( S_\mathbb{M}^2+S_{\mathbb{M}^\text{c}}^2)x,x] \geq \frac{3}{2}\|x\|^2.
\end{align*}
\end{proof}
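In the Hilbert space model, $S_\mathbb{M}$ is self-adjoint, so $(S_\mathbb{M}-\frac{1}{2}I)^2$ is positive and the hypothesis of the theorem holds for every $x$; the $\frac{3}{4}$ bound then becomes a global eigenvalue bound. The following Python sketch (an illustration with the Mercedes-Benz frame again) verifies this:

```python
import numpy as np

ang = 2.0 * np.pi * np.arange(3) / 3.0
V = np.sqrt(2.0 / 3.0) * np.column_stack((np.cos(ang), np.sin(ang)))
SM = np.outer(V[0], V[0])            # S_M for M = {1}
SMc = np.eye(2) - SM                 # S_{M^c}

# Quadratic form from the theorem: [S_M x, x] + [S_{M^c}^2 x, x].
A = SM + SMc @ SMc

# (S_M - I/2)^2 is positive semidefinite here, so the 3/4 bound must hold;
# the smallest eigenvalue of A is 7/9 > 3/4 for this frame and this M.
assert np.min(np.linalg.eigvalsh(A)) >= 0.75 - 1e-12
```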
\section{PALEY-WIENER THEOREM FOR p-APPROXIMATE \\
SCHAUDER FRAMES}\label{SECTIONTWO}
In order to derive a Paley-Wiener theorem for p-ASFs, we need a generalization of a result of \cite{HILDING}.
\begin{theorem}(\cite{CASAZZACHRISTENSTENPERTURBATION, CASAZZAKALTON, VANEIJNDHOVEN})\label{cc1} (\textbf{Casazza-Christensen-Kalton-van Eijndhoven perturbation})
Let $ \mathcal{X}, \mathcal{Y}$ be Banach spaces and $ A: \mathcal{X}\rightarrow \mathcal{Y}$ be a bounded invertible operator. If a bounded operator $ B : \mathcal{X}\rightarrow \mathcal{Y}$ is such that there exist $ \alpha, \beta \in \left [0, 1 \right )$ with
$$ \|Ax-Bx\|\leq\alpha\|Ax\|+\beta\|Bx\|,\quad \forall x \in \mathcal{X},$$
then $ B $ is invertible and
$$ \frac{1-\alpha}{1+\beta}\|Ax\|\leq\|Bx\|\leq\frac{1+\alpha}{1-\beta} \|Ax\|, \quad\forall x \in \mathcal{X};$$
$$ \frac{1-\beta}{1+\alpha}\frac{1}{\|A\|}\|y\|\leq\|B^{-1}y\|\leq\frac{1+\beta}{1-\alpha} \|A^{-1}\|\|y\|, \quad\forall y \in \mathcal{Y}.$$
\end{theorem}
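The content of the perturbation theorem can be seen already for matrices; the following Python sketch (an illustration only, with $A=I$ on $\mathbb{R}^4$, $\beta=0$, and a perturbation of operator norm exactly $\frac{1}{2}$) checks the hypothesis and the two-sided conclusion:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = np.eye(n)
D = rng.standard_normal((n, n))
E = D / (2.0 * np.linalg.norm(D, 2))     # ||E||_2 = 1/2 exactly
B = A + E
alpha, beta = 0.5, 0.0                   # ||Ax - Bx|| = ||Ex|| <= alpha ||Ax||

x = rng.standard_normal(n)
assert np.linalg.norm(A @ x - B @ x) <= alpha * np.linalg.norm(A @ x) + 1e-12

# Conclusions: B is invertible and obeys the stated two-sided bounds.
Binv = np.linalg.inv(B)
lo = (1 - alpha) / (1 + beta) * np.linalg.norm(A @ x)
hi = (1 + alpha) / (1 - beta) * np.linalg.norm(A @ x)
assert lo - 1e-12 <= np.linalg.norm(B @ x) <= hi + 1e-12
```

Here invertibility of $B=I+E$ also follows from the Neumann series, since $\|E\|<1$.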
In the sequel, the standard Schauder basis for $\ell^p(\mathbb{N})$ is denoted by $\{e_n \}_{n}$.
\begin{theorem}\label{OURPERTURBATION}
Let $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ be a p-ASF for $\mathcal{X}$. Assume that a collection $\{\omega_n \}_{n} $ in $\mathcal{X}$ is such that there exist $\alpha, \beta, \gamma \geq 0$ with $ \max\{\alpha+\gamma\|\theta_f S_{f,\tau}^{-1}\|, \beta\}<1$ and
\begin{align}\label{PEREQUATIONA}
\left\|\sum_{n=1}^{m}c_n(\tau_n-\omega_n)\right\|\leq\nonumber& \alpha\left\|\sum_{n=1}^{m}c_n\tau_n\right \|+\gamma \left(\sum_{n=1}^{m}|c_n|^p\right)^\frac{1}{p}+\beta\left\|\sum_{n=1}^{m}c_n\omega_n\right \|, \\
&\quad\forall c_1, \dots, c_m \in \mathbb{K}, ~ m=1,2, \dots.
\end{align}
Then $ (\{f_n \}_{n}, \{\omega_n \}_{n}) $ is a p-ASF for $\mathcal{X}$ with bounds
\begin{align*}
\frac{1-(\alpha+\gamma\|\theta_f S_{f,\tau}^{-1}\|)}{(1+\beta)\|S_{f,\tau}^{-1}\|} \quad \text{and} \quad \left(\frac{1+\alpha}{1-\beta}\|\theta_\tau\|+\frac{\gamma}{1-\beta}\right)\|\theta_f\|.
\end{align*}
\end{theorem}
\begin{proof}
For $ m=1,2, \dots$ and for every $c_1, \dots, c_m \in \mathbb{K}$,
\begin{align*}
\left\|\sum_{n=1}^{m}c_n\omega_n\right\|&\leq \left\|\sum_{n=1}^{m}c_n(\tau_n-\omega_n)\right\|+\left\|\sum_{n=1}^{m}c_n\tau_n\right\|\\
&\leq (1+\alpha)\left\|\sum_{n=1}^{m}c_n\tau_n\right \|+\gamma \left(\sum_{n=1}^{m}|c_n|^p\right)^\frac{1}{p}+\beta\left\|\sum_{n=1}^{m}c_n\omega_n\right \|.
\end{align*}
Hence
\begin{align*}
\left\| \sum\limits_{n=1}^mc_n\omega_n\right\|\leq\frac{1+\alpha}{1-\beta}\left\| \sum\limits_{n=1}^mc_n\tau_n\right\|+\frac{\gamma}{1-\beta}\left( \sum\limits_{n=1}^m|c_n|^p\right)^\frac{1}{p}, \quad\forall c_1, \dots, c_m \in \mathbb{K}, m=1,2, \dots.
\end{align*}
Therefore $\theta_\omega$ is a well-defined bounded linear operator with
\begin{align*}
\|\theta_\omega\|\leq\frac{1+\alpha}{1-\beta}\|\theta_\tau\|+\frac{\gamma}{1-\beta}.
\end{align*}
Now Equation (\ref{PEREQUATIONA}) gives
\begin{align*}
\left\|\sum_{n=1}^{\infty}c_n(\tau_n-\omega_n)\right\|\leq \alpha\left\|\sum_{n=1}^{\infty}c_n\tau_n\right \|+\gamma \left(\sum_{n=1}^{\infty}|c_n|^p\right)^\frac{1}{p}+\beta\left\|\sum_{n=1}^{\infty}c_n\omega_n\right \|, \quad\forall \{c_n\}_n \in \ell^p(\mathbb{N}).
\end{align*}
That is,
\begin{align}\label{PEREQUATIONB}
\|\theta_\tau \{c_n\}_n-\theta_\omega \{c_n\}_n\|&\leq \alpha \|\theta_\tau \{c_n\}_n\|+\gamma\left( \sum\limits_{n=1}^\infty|c_n|^p\right)^\frac{1}{p}+\beta \|\theta_\omega \{c_n\}_n\|,\nonumber\\
& \quad\forall \{c_n\}_n \in \ell^p(\mathbb{N}).
\end{align}
By taking $\{c_n\}_n =\{f_n(S_{f,\tau}^{-1}x)\}_n=\theta_fS_{f,\tau}^{-1}x$ in Equation (\ref{PEREQUATIONB}), we get
\begin{align*}
\|\theta_\tau \theta_fS_{f,\tau}^{-1}x-\theta_\omega \theta_fS_{f,\tau}^{-1}x\|\leq \alpha \|\theta_\tau \theta_fS_{f,\tau}^{-1}x\|+\gamma\left( \sum\limits_{n=1}^\infty|f_n(S_{f,\tau}^{-1}x)|^p\right)^\frac{1}{p}+\beta \|\theta_\omega\theta_fS_{f,\tau}^{-1}x\|,
\end{align*}
for all $x \in \mathcal{X}.$ That is,
\begin{align*}
\|x-S_{f,\omega}S_{f,\tau}^{-1}x\|&\leq \alpha \| x\|+\gamma\|\theta_fS_{f,\tau}^{-1}x\|+\beta \|S_{f,\omega}S_{f,\tau}^{-1}x\|\\
&\leq (\alpha +\gamma\|\theta_fS_{f,\tau}^{-1}\|)\|x\|+\beta \|S_{f,\omega}S_{f,\tau}^{-1}x\|, \quad\forall x \in \mathcal{X}.
\end{align*}
Since $ \max\{\alpha+\gamma\|\theta_f S_{f,\tau}^{-1}\|, \beta\}<1$, we can use Theorem \ref{cc1} to conclude that the operator $S_{f,\omega}S_{f,\tau}^{-1}$ is invertible and
\begin{align*}
\|(S_{f,\omega} S_{f,\tau}^{-1})^{-1}\| \leq \frac{1+\beta}{1-(\alpha+\gamma\|\theta_f S_{f,\tau}^{-1}\|)}.
\end{align*}
Hence the operator $S_{f,\omega}=(S_{f,\omega}S_{f,\tau}^{-1})S_{f,\tau}$ is invertible. Therefore $ (\{f_n \}_{n}, \{\omega_n \}_{n}) $ is a p-ASF for $\mathcal{X}$. We get the frame bounds from the following calculations:
\begin{align*}
&\| S_{f,\omega}^{-1}\|\leq\|S_{f,\tau}^{-1}\|\| S_{f,\tau}S_{f,\omega}^{-1}\| \leq \frac{\|S_{f,\tau}^{-1}\|(1+\beta)}{1-(\alpha+\gamma\|\theta_f S_{f,\tau}^{-1}\|)}\quad\text{ and }\\
&\|S_{f,\omega}\|\leq \|\theta_\omega\|\|\theta_f\|\leq \left(\frac{1+\alpha}{1-\beta}\|\theta_\tau\|+\frac{\gamma}{1-\beta}\right)\|\theta_f\|.
\end{align*}
\end{proof}
\begin{remark}\label{OURCOROLLARY}
Theorem \ref{OLECAZASSA} is a corollary of Theorem \ref{OURPERTURBATION}. In particular, Theorems \ref{FIRSTPER} and \ref{SECONDPER} are corollaries of Theorem \ref{OURPERTURBATION}. Indeed,
let $\{\tau_n\}_n$ be a frame for $\mathcal{H}$. We define
\begin{align*}
f_n:\mathcal{H} \ni h \mapsto f_n(h)\coloneqq \langle h, \tau_n\rangle \in \mathbb{K}, \quad \forall n \in \mathbb{N}.
\end{align*}
Then $\theta_f=\theta_\tau$ and $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ is a 2-ASF for $\mathcal{H}$. Theorem \ref{OURPERTURBATION} now says that $ (\{f_n \}_{n}, \{\omega_n \}_{n}) $ is a 2-ASF for $\mathcal{H}$. To prove Theorem \ref{OLECAZASSA}, it now suffices to prove that $\{\omega_n\}_n$ is a frame for $\mathcal{H}$. Since $ (\{f_n \}_{n}, \{\omega_n \}_{n}) $ is a 2-ASF for $\mathcal{H}$, it follows that $\theta_\omega$ is surjective. We now use the following result to conclude that $\{\omega_n\}_n$ is a frame for $\mathcal{H}$.
\end{remark}
\begin{theorem}(\cite{OCPSEUDOINVERSE})
A collection $\{\tau_n\}_n$ is a frame for $\mathcal{H}$ if and only if the map
\begin{align*}
T:\ell^2(\mathbb{N})\ni \{c_n \}_{n}\mapsto \sum_{n=1}^{\infty}c_n\tau_n \in \mathcal{H}
\end{align*}
is a well-defined bounded linear surjective operator.
\end{theorem}
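In $\mathbb{R}^d$ the synthesis operator $T$ is simply the matrix whose columns are the frame vectors, and surjectivity means full row rank; the following Python sketch (an illustration with three vectors in $\mathbb{R}^2$) also recovers the optimal frame bounds from the extreme squared singular values of $T$:

```python
import numpy as np

V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])              # rows: tau_1, tau_2, tau_3 in R^2
T = V.T                                 # synthesis: {c_n} -> sum_n c_n tau_n
assert np.linalg.matrix_rank(T) == 2    # surjective, hence {tau_n} is a frame

s = np.linalg.svd(T, compute_uv=False)
a, b = s[-1] ** 2, s[0] ** 2            # optimal frame bounds

x = np.array([0.3, -0.7])
val = float(np.sum((V @ x) ** 2))       # sum_n |<x, tau_n>|^2
assert a * float(x @ x) - 1e-12 <= val <= b * float(x @ x) + 1e-12
```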
\begin{corollary}
Let $q$ be the conjugate index of $p$. Let $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ be a p-ASF for $\mathcal{X}$. Assume that a collection $\{\omega_n \}_{n} $ in $\mathcal{X}$ satisfies
$$ \lambda \coloneqq \sum_{n=1}^\infty\|\tau_n-\omega_n\|^q <\frac{1}{\|\theta_f S_{f,\tau}^{-1}\|^q}.$$
Then $ (\{f_n \}_{n}, \{\omega_n \}_{n}) $ is a p-ASF for $\mathcal{X}$ with bounds
\begin{align*}
\frac{1-\lambda^{1/q}\|\theta_f S_{f,\tau}^{-1}\|}{\|S_{f,\tau}^{-1}\|} \quad \text{ and } \quad (\|\theta_\tau\|+\lambda^{1/q})\|\theta_f\|.
\end{align*}
\end{corollary}
\begin{proof}
Take $ \alpha =0, \beta=0, \gamma=\lambda^{1/q}$. Then $ \max\{\alpha+\gamma\|\theta_f S_{f,\tau}^{-1}\|, \beta\}<1$ and
\begin{align*}
&\left\|\sum\limits_{n=1}^{m}c_n(\tau_n-\omega_n)\right\|\leq \left(\sum\limits_{n=1}^{m}\|\tau_n-\omega_n\|^q \right)^\frac{1}{q}\left(\sum\limits_{n=1}^{m}|c_n|^p\right)^\frac{1}{p}\leq \gamma\left(\sum\limits_{n=1}^{m}|c_n|^p\right)^\frac{1}{p}, \\
& \quad \quad \quad \quad \quad \quad\quad \quad\quad \quad\forall c_1, \dots, c_m \in \mathbb{K},~ m=1, 2,\dots.
\end{align*}
By using Theorem \ref{OURPERTURBATION} we now get the result.
\end{proof}
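The Hölder step in the proof above can be checked numerically; the following Python sketch (an illustration with vectors in $\mathbb{R}^4$ standing in for $\tau_n-\omega_n$, and $p=3$, $q=3/2$) verifies the bound with constant $\lambda^{1/q}$:

```python
import numpy as np

p, q = 3.0, 1.5                             # conjugate indices: 1/p + 1/q = 1
rng = np.random.default_rng(2)
diffs = 0.1 * rng.standard_normal((6, 4))   # rows: tau_n - omega_n
c = rng.standard_normal(6)                  # coefficients c_n

# ||sum_n c_n (tau_n - omega_n)|| <= lambda^{1/q} (sum_n |c_n|^p)^{1/p}
lhs = np.linalg.norm(c @ diffs)
lam = float(np.sum(np.linalg.norm(diffs, axis=1) ** q))
rhs = lam ** (1 / q) * float(np.sum(np.abs(c) ** p)) ** (1 / p)
assert lhs <= rhs + 1e-12
```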
We next derive a stability result which does not demand the maximum condition on the parameters $\alpha$ and $\gamma$.
\begin{theorem}
Let $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ be a p-ASF for $\mathcal{X}$. Assume that a collection $\{\omega_n \}_{n} $ in $\mathcal{X}$ and a collection $ \{g_n \}_{n} $ in $\mathcal{X}^*$ are such that there exist $r,s,t,\alpha, \beta, \gamma \geq 0$ with $ \max\{ \beta,s\}<1$ and
\begin{align*}
&\left\|\sum_{n=1}^{m}(f_n-g_n)(x)e_n\right\|\leq r\left\|\sum_{n=1}^{m}f_n(x)e_n\right \|+t \|x\|+s\left\|\sum_{n=1}^{m}g_n(x)e_n\right \|, \\
&\quad \quad \quad \quad \quad \quad\quad \quad\quad \quad \quad \forall x \in \mathcal{X}, m=1, 2, \dots,\\
&\left\|\sum_{n=1}^{m}c_n(\tau_n-\omega_n)\right\|\leq \alpha\left\|\sum_{n=1}^{m}c_n\tau_n\right \|+\gamma \left(\sum_{n=1}^{m}|c_n|^p\right)^\frac{1}{p}+\beta\left\|\sum_{n=1}^{m}c_n\omega_n\right \|, \\
& \quad \quad \quad \quad \quad \quad\quad \quad\quad \quad \quad\forall c_1, \dots, c_m \in \mathbb{K}, m=1,2, \dots.
\end{align*}
Assume that one of the following holds.
\begin{enumerate}[label=(\roman*)]
\item $\sum_{n=1}^{\infty}(\|f_n-g_n\|\|S_{f,\tau}^{-1}\tau_n\|+\|g_n\|\|S_{f,\tau}^{-1}(\tau_n-\omega_n)\|)<1.$
\item $\sum_{n=1}^{\infty}(\|f_n-g_n\|\|S_{f,\tau}^{-1}\omega_n\|+\|f_n\|\|S_{f,\tau}^{-1}(\tau_n-\omega_n)\|)<1.$
\item $\sum_{n=1}^{\infty}(\|(f_n-g_n)S_{f,\tau}^{-1}\|\|\tau_n\|+\|g_nS_{f,\tau}^{-1}\|\|\tau_n-\omega_n\|)<1.$
\item $\sum_{n=1}^{\infty}(\|(f_n-g_n)S_{f,\tau}^{-1}\|\|\omega_n\|+\|f_nS_{f,\tau}^{-1}\|\|\tau_n-\omega_n\|)<1$.
\end{enumerate}
Then $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ is a p-ASF for $\mathcal{X}$. Moreover, an upper bound is
\begin{align*}
\left(\frac{1+\alpha}{1-\beta}\|\theta_\tau\|+\frac{\gamma}{1-\beta}\right)\left(\frac{1+r}{1-s}\|\theta_f\|+\frac{t}{1-s}\right).
\end{align*}
\end{theorem}
\begin{proof}
Following the initial lines in the proof of Theorem \ref{OURPERTURBATION}, we see that $\theta_g$ and $\theta_\omega$ are well-defined bounded linear operators. We now consider four cases.\\
Assume (i). Then
\begin{align*}
&\left\|x-\sum_{n=1}^{\infty}g_n(x)S_{f,\tau}^{-1}\omega_n\right\|=\left\|\sum_{n=1}^{\infty}f_n(x)S_{f,\tau}^{-1}\tau_n-\sum_{n=1}^{\infty}g_n(x)S_{f,\tau}^{-1}\omega_n\right\|\\
&\quad\leq \sum_{n=1}^{\infty}\|f_n(x)S_{f,\tau}^{-1}\tau_n-g_n(x)S_{f,\tau}^{-1}\omega_n\|\\
&\quad\leq \sum_{n=1}^{\infty}\bigg\{\|f_n(x)S_{f,\tau}^{-1}\tau_n-g_n(x)S_{f,\tau}^{-1}\tau_n\|+\|g_n(x)S_{f,\tau}^{-1}\tau_n-g_n(x)S_{f,\tau}^{-1}\omega_n\|\bigg\}\\
&\quad=\sum_{n=1}^{\infty}\bigg\{\|(f_n-g_n)(x)S_{f,\tau}^{-1}\tau_n\|+\|g_n(x)S_{f,\tau}^{-1}(\tau_n-\omega_n)\|\bigg\}\\
&\quad\leq \left(\sum_{n=1}^{\infty}\bigg\{\|f_n-g_n\|\|S_{f,\tau}^{-1}\tau_n\|+\|g_n\|\|S_{f,\tau}^{-1}(\tau_n-\omega_n)\|\bigg\}\right)\|x\|.
\end{align*}
Therefore the operator $S_{f,\tau}^{-1}S_{g,\omega}$ is invertible.\\
Assume (ii). Then
\begin{align*}
&\left\|x-\sum_{n=1}^{\infty}g_n(x)S_{f,\tau}^{-1}\omega_n\right\|=\left\|\sum_{n=1}^{\infty}f_n(x)S_{f,\tau}^{-1}\tau_n-\sum_{n=1}^{\infty}g_n(x)S_{f,\tau}^{-1}\omega_n\right\|\\
&\quad\leq \sum_{n=1}^{\infty}\|f_n(x)S_{f,\tau}^{-1}\tau_n-g_n(x)S_{f,\tau}^{-1}\omega_n\|\\
&\quad\leq \sum_{n=1}^{\infty}\bigg\{\|f_n(x)S_{f,\tau}^{-1}\tau_n-f_n(x)S_{f,\tau}^{-1}\omega_n\|+\|f_n(x)S_{f,\tau}^{-1}\omega_n-g_n(x)S_{f,\tau}^{-1}\omega_n\|\bigg\}\\
&\quad=\sum_{n=1}^{\infty}\bigg\{\|f_n(x)S_{f,\tau}^{-1}(\tau_n-\omega_n)\|+\|(f_n-g_n)(x)S_{f,\tau}^{-1}\omega_n\|\bigg\}\\
&\quad\leq \left(\sum_{n=1}^{\infty}\bigg\{\|f_n\|\|S_{f,\tau}^{-1}(\tau_n-\omega_n)\|+\|f_n-g_n\|\|S_{f,\tau}^{-1}\omega_n\|\bigg\}\right)\|x\|.
\end{align*}
Since by assumption (ii) the last sum is strictly less than one, we get $\|I_\mathcal{X}-S_{f,\tau}^{-1}S_{g,\omega}\|<1$, and therefore the operator $S_{f,\tau}^{-1}S_{g,\omega}$ is invertible by a Neumann series argument.\\
Assume (iii). Then
\begin{align*}
&\left\|x-\sum_{n=1}^{\infty}g_n(S_{f,\tau}^{-1}x)\omega_n\right\|=\left\|\sum_{n=1}^{\infty}f_n(S_{f,\tau}^{-1}x)\tau_n-\sum_{n=1}^{\infty}g_n(S_{f,\tau}^{-1}x)\omega_n\right\|\\
&\leq\sum_{n=1}^{\infty}\|f_n(S_{f,\tau}^{-1}x)\tau_n-g_n(S_{f,\tau}^{-1}x)\omega_n\| \\
&\leq \sum_{n=1}^{\infty}\bigg\{\|f_n(S_{f,\tau}^{-1}x)\tau_n-g_n(S_{f,\tau}^{-1}x)\tau_n\|+\|g_n(S_{f,\tau}^{-1}x)\tau_n-g_n(S_{f,\tau}^{-1}x)\omega_n\|\bigg\}\\
&=\sum_{n=1}^{\infty}\bigg\{\|(f_n-g_n)(S_{f,\tau}^{-1}x)\tau_n\|+\|g_n(S_{f,\tau}^{-1}x)(\tau_n-\omega_n)\|\bigg\}\\
&\leq \left(\sum_{n=1}^{\infty}\bigg\{\|(f_n-g_n)S_{f,\tau}^{-1}\|\|\tau_n\|+\|g_nS_{f,\tau}^{-1}\|\|\tau_n-\omega_n\|\bigg\}\right)\|x\|.
\end{align*}
Since by assumption (iii) the last sum is strictly less than one, we get $\|I_\mathcal{X}-S_{g,\omega}S_{f,\tau}^{-1}\|<1$, and therefore the operator $S_{g,\omega}S_{f,\tau}^{-1}$ is invertible by a Neumann series argument.\\
Assume (iv). Then
\begin{align*}
&\left\|x-\sum_{n=1}^{\infty}g_n(S_{f,\tau}^{-1}x)\omega_n\right\|=\left\|\sum_{n=1}^{\infty}f_n(S_{f,\tau}^{-1}x)\tau_n-\sum_{n=1}^{\infty}g_n(S_{f,\tau}^{-1}x)\omega_n\right\|\\
&\quad\leq\sum_{n=1}^{\infty}\|f_n(S_{f,\tau}^{-1}x)\tau_n-g_n(S_{f,\tau}^{-1}x)\omega_n\|\\
&\quad\leq \sum_{n=1}^{\infty}\bigg\{\|f_n(S_{f,\tau}^{-1}x)\tau_n-f_n(S_{f,\tau}^{-1}x)\omega_n\|+\|f_n(S_{f,\tau}^{-1}x)\omega_n-g_n(S_{f,\tau}^{-1}x)\omega_n\|\bigg\}\\
&\quad=\sum_{n=1}^{\infty}\bigg\{\|f_n(S_{f,\tau}^{-1}x)(\tau_n-\omega_n)\|+\|(f_n-g_n)(S_{f,\tau}^{-1}x)\omega_n\|\bigg\}\\
&\quad\leq \left(\sum_{n=1}^{\infty}\bigg\{\|f_nS_{f,\tau}^{-1}\|\|\tau_n-\omega_n\|+\|(f_n-g_n)S_{f,\tau}^{-1}\|\|\omega_n\|\bigg\}\right)\|x\|.
\end{align*}
Since by assumption (iv) the last sum is strictly less than one, we get $\|I_\mathcal{X}-S_{g,\omega}S_{f,\tau}^{-1}\|<1$, and therefore the operator $S_{g,\omega}S_{f,\tau}^{-1}$ is invertible by a Neumann series argument.
Hence under each of the assumptions (i)--(iv) the operator $S_{g,\omega}$ is invertible, and so $ (\{g_n \}_{n}, \{\omega_n \}_{n}) $ is a p-ASF for $\mathcal{X}$.
\end{proof}
We end this chapter by deriving results on the expansion of sequences to approximate Schauder frames.
A routine Hilbert space argument shows that a sequence $\{\tau_n\}_n$ is a Bessel sequence for a Hilbert space $\mathcal{H}$ if and only if the map $ S_\tau :\mathcal{H} \ni h \mapsto \sum_{n=1}^\infty \langle h, \tau_n\rangle\tau_n\in
\mathcal{H} $ is a well-defined bounded linear operator. In fact, if $\{\tau_n\}_n$ is a Bessel sequence, then both maps $\theta_\tau:\mathcal{H} \ni h \mapsto \theta_\tau h \coloneqq\{\langle h, \tau_n\rangle \}_n \in \ell^2(\mathbb{N})$ and $\theta_\tau^*:\ell^2(\mathbb{N}) \ni \{a_n\}_n \mapsto \theta_\tau^*\{a_n\}_n\coloneqq \sum_{n=1}^{\infty}a_n\tau_n \in \mathcal{H}$ are well-defined bounded linear operators (Chapter 3 in \cite{CHRISTENSEN}). Since $\theta_\tau^*\theta_\tau=S_\tau$, the operator $S_\tau$ is well-defined and bounded. Conversely, let $S_\tau$ be a well-defined bounded linear operator. The definition of $S_\tau$ shows that it is positive: $\langle S_\tau h, h\rangle=\sum_{n=1}^{\infty}|\langle h, \tau_n\rangle|^2\geq 0$. Thus there exists $b>0$ such that $\langle S_\tau h, h \rangle \leq b\|h\|^2$, $\forall h \in \mathcal{H}$, and the same identity shows that $\{\tau_n\}_n$ is a Bessel sequence. This observation and Definition \ref{ASFDEF} motivate the following definition.
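The factorization $S_\tau=\theta_\tau^*\theta_\tau$ and the identity $\langle S_\tau h,h\rangle=\sum_n|\langle h,\tau_n\rangle|^2$ can be checked numerically on a finite-dimensional truncation; the following sketch (illustrative only, with three hypothetical frame vectors in $\mathbb{R}^2$) does so.

```python
import numpy as np

# Illustrative finite sketch: three hypothetical frame vectors tau_n in R^2.
# theta h = (<h, tau_n>)_n has the tau_n as rows; S_tau = theta^* theta.
tau = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
theta = tau                        # analysis operator (rows <., tau_n>)
S = theta.T @ theta                # frame operator S_tau = theta^* theta

# <S h, h> = sum |<h, tau_n>|^2, so S is positive (here even invertible).
eigs = np.linalg.eigvalsh(S)
assert np.all(eigs > 0)
b = np.linalg.norm(S, 2)           # an upper (Bessel) bound for <S h, h>/||h||^2
h = np.array([2.0, -1.0])
assert abs(h @ (S @ h) - np.sum((theta @ h) ** 2)) < 1e-12
assert h @ (S @ h) <= b * (h @ h) + 1e-12
```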
\begin{definition}
Let $\{\tau_n\}_n$ be a sequence in a Banach space $\mathcal{X}$ and $\{f_n\}_n$ be a sequence in $\mathcal{X}^*$. The pair $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ is said to be a \textbf{weak reconstruction sequence} or \textbf{approximate Bessel sequence} (ABS) for $\mathcal{X}$ if $ S_{f, \tau}:\mathcal{X}\ni x \mapsto S_{f, \tau}x\coloneqq \sum_{n=1}^\infty
f_n(x)\tau_n \in
\mathcal{X}$ is a well-defined bounded linear operator.
\end{definition}
We next recall the reconstruction property of Banach spaces.
\begin{definition}(\cite{CASAZZARECONSTRUCTION})
A Banach space $\mathcal{X}$ is said to have the \textbf{reconstruction property} if there exists a sequence $\{\tau_n\}_n$ in $\mathcal{X}$ and a sequence $\{f_n\}_n$ in $\mathcal{X}^*$ such that $x=\sum_{n=1}^\infty
f_n(x)\tau_n , \forall x \in \mathcal{X}.$
\end{definition}
Using the \textbf{approximation property of Banach spaces} (cf. \cite{CASAZZAAPPROXIMATION}), Casazza and Christensen proved the following result.
\begin{theorem}(\cite{CASAZZARECONSTRUCTION})\label{RECTHEOREM}
There exists a Banach space $\mathcal{X}$ such that $\mathcal{X}$ does not have the reconstruction property.
\end{theorem}
Now we have the following characterization, which stands in contrast with Theorem \ref{BESSELEXPANSIONHILBERT}.
\begin{theorem}\label{CHARBESSELTOFRAME}
Let $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ be a weak reconstruction sequence for $\mathcal{X}$. Then the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ can be expanded to an ASF for $\mathcal{X}$.
\item $\mathcal{X}$ has the reconstruction property.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[label=(\roman*)]
\item $\Rightarrow $ (ii) Let $\{\omega_n\}_n$ be a sequence in $\mathcal{X}$ and $\{g_n\}_n$ be a sequence in $\mathcal{X}^*$ such that $ (\{f_n \}_{n}\cup \{g_n \}_{n}, \{\tau_n \}_{n}\cup\{\omega_n \}_{n} ) $ is an ASF for $\mathcal{X}$. Let $S_{(f,g), (\tau, \omega)}$ be the frame operator for $ (\{f_n \}_{n}\cup \{g_n \}_{n}, \{\tau_n \}_{n}\cup\{\omega_n \}_{n} ) $. Then
\begin{align*}
x&=S_{(f,g), (\tau, \omega)}^{-1}S_{(f,g), (\tau, \omega)}x=S_{(f,g), (\tau, \omega)}^{-1}\left( \sum_{n=1}^{\infty}f_n(x)\tau_n+\sum_{n=1}^{\infty}g_n(x)\omega_n\right)\\
&=\sum_{n=1}^{\infty}f_n(x)S_{(f,g), (\tau, \omega)}^{-1}\tau_n+\sum_{n=1}^{\infty}g_n(x)S_{(f,g), (\tau, \omega)}^{-1}\omega_n, \quad \forall x \in \mathcal{X}
\end{align*}
which shows that $\mathcal{X}$ has the reconstruction property.
\item $\Rightarrow $ (i) Let $\{\omega_n\}_n$ be a sequence in $\mathcal{X}$ and $\{g_n\}_n$ be a sequence in $\mathcal{X}^*$ such that
$x=\sum_{n=1}^\infty
g_n(x)\omega_n $, $ \forall x \in \mathcal{X}.$ Define $h_n\coloneqq g_n $, $\rho_n \coloneqq (I_\mathcal{X}-S_{f, \tau})\omega_n $, for all $n \in \mathbb{N}$. Then
\begin{align*}
\sum_{n=1}^{\infty}f_n(x)\tau_n+\sum_{n=1}^{\infty}h_n(x)\rho_n&=\sum_{n=1}^{\infty}f_n(x)\tau_n+\sum_{n=1}^{\infty}g_n(x)(I_\mathcal{X}-S_{f, \tau})\omega_n\\
&=S_{f, \tau}x+(I_\mathcal{X}-S_{f, \tau})\left(\sum_{n=1}^{\infty}g_n(x)\omega_n\right)\\
&=S_{f, \tau}x+(I_\mathcal{X}-S_{f, \tau})x=x, \quad \forall x \in \mathcal{X}.
\end{align*}
Therefore $ (\{f_n \}_{n}\cup \{h_n \}_{n}, \{\tau_n \}_{n}\cup\{\rho_n \}_{n} ) $ is an ASF for $\mathcal{X}$.
\end{enumerate}
\end{proof}
We now show that a weak reconstruction sequence need not admit an expansion to an ASF.
\begin{corollary}\label{NOT}
There exists a Banach space $\mathcal{X}$ such that no weak reconstruction sequence $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ for $\mathcal{X}$ can be expanded to an ASF for $\mathcal{X}$.
\end{corollary}
\begin{proof}
From Theorem \ref{RECTHEOREM}, there exists a Banach space $\mathcal{X}$ which does not have the reconstruction property. Let $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ be any weak reconstruction sequence for $\mathcal{X}$. Theorem \ref{CHARBESSELTOFRAME} now says that $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ cannot be expanded to an ASF for $\mathcal{X}$.
\end{proof}
The following corollary is an easy consequence of Theorem \ref{CHARBESSELTOFRAME}.
\begin{corollary}
Let $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ be a weak reconstruction sequence for $\mathcal{X}$. If $\mathcal{X}$ admits a Schauder basis, then $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ can be expanded to an ASF for $\mathcal{X}$.
\end{corollary}
Note that the expansion in Theorem \ref{CHARBESSELTOFRAME} need not add countably many elements to a weak reconstruction sequence to obtain an ASF. In the following example, adding just one element to a weak reconstruction sequence yields an ASF.
\begin{example}\label{EXAMPLEOME}
Let $p\in[1,\infty)$ and let $\{e_n\}_n$ denote the standard Schauder basis for $\ell^p(\mathbb{N})$ and let $\{\zeta_n\}_n$ denote the coordinate functionals associated with $\{e_n\}_n$. Define
\begin{align*}
&R: \ell^p(\mathbb{N}) \ni (x_n)_{n=1}^\infty\mapsto (0,x_1,x_2, \dots)\in \ell^p(\mathbb{N}),\\
&L: \ell^p(\mathbb{N}) \ni (x_n)_{n=1}^\infty\mapsto (x_2,x_3,x_4, \dots)\in \ell^p(\mathbb{N}).
\end{align*}
Clearly $ (\{f_n\coloneqq \zeta_nL\}_{n}, \{\tau_n\coloneqq Re_n\}_{n}) $ is a weak reconstruction sequence for $\ell^p(\mathbb{N})$. Note that $S_{f, \tau}=RL$ and
\begin{align*}
& (I_{\ell^p(\mathbb{N})}-S_{f, \tau})e_1=e_1-RLe_1=e_1-0=e_1,\\
& (I_{\ell^p(\mathbb{N})}-S_{f, \tau})e_n=e_n-RLe_n=e_n-Re_{n-1}=e_n-e_n=0, \quad \forall n \geq 2.
\end{align*}
Let $g_n\coloneqq \zeta_n$ and $\omega_n\coloneqq e_n$, $\forall n \in \mathbb{N}$, so that $x=\sum_{n=1}^{\infty}g_n(x)\omega_n$, $\forall x \in \ell^p(\mathbb{N})$. With $h_n\coloneqq g_n$ and $\rho_n\coloneqq (I_{\ell^p(\mathbb{N})}-S_{f,\tau})\omega_n$ as in the proof of Theorem \ref{CHARBESSELTOFRAME}, the computation above gives $\rho_1=e_1$ and $\rho_n=0$ for $n\geq 2$. Hence
$ (\{f_n \}_{n}\cup \{h_1 \}, \{\tau_n \}_{n}\cup\{\rho_1 \} ) $ is an ASF for $\ell^p(\mathbb{N})$.
\end{example}
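On a finite truncation of $\ell^p(\mathbb{N})$ the computation of this example can be checked directly; the sketch below (illustrative, with an arbitrary truncation dimension $N=6$) verifies that $I-RL$ fixes $e_1$ and annihilates $e_n$ for $n\geq 2$.

```python
import numpy as np

# Illustrative truncation of the example: R = right shift, L = left shift on
# l^p, cut off at dimension N.  S_{f,tau} = RL, so I - RL has rank one:
# it fixes e_1 and kills every e_n with n >= 2.
N = 6
R = np.eye(N, k=-1)       # right shift: e_n -> e_{n+1} (last coordinate lost)
L = np.eye(N, k=1)        # left shift:  e_n -> e_{n-1}, e_1 -> 0
D = np.eye(N) - R @ L     # I - S_{f,tau}
e = np.eye(N)
assert np.allclose(D @ e[:, 0], e[:, 0])     # (I - RL)e_1 = e_1
assert np.allclose(D @ e[:, 1:], 0.0)        # (I - RL)e_n = 0 for n >= 2
```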
It may be possible to expand a weak reconstruction sequence to a tight ASF by adding only finitely many elements. In this case, we can estimate the number of elements that must be added to obtain a tight ASF. This is given in the following theorem, which can be compared with Theorem \ref{BESSELNUMBERHILBERT}.
\begin{theorem}\label{NUMBERINEQUALITY}
Let $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ be a weak reconstruction sequence for $\mathcal{X}$. If $ (\{f_n \}_{n}\cup \{g_k\}_{k=1}^N, \{\tau_n \}_{n}\cup \{\omega_k\}_{k=1}^N) $ is a $\lambda$-tight ASF for $\mathcal{X}$, then
\begin{align}\label{KSNUMBER}
N\geq \dim (\lambda I_\mathcal{X}-S_{f, \tau}) (\mathcal{X}).
\end{align}
Further, Inequality (\ref{KSNUMBER}) cannot be improved.
\end{theorem}
\begin{proof}
Let $S_{(f,g), (\tau, \omega)}$ be the frame operator for $ (\{f_n \}_{n}\cup \{g_k\}_{k=1}^N, \{\tau_n \}_{n}\cup \{\omega_k\}_{k=1}^N) $. Set $S_{g, \omega}(x)\coloneqq\sum_{k=1}^{N}g_k(x)\omega_k, \forall x \in \mathcal{X}$. Then
\begin{align*}
\lambda x =S_{(f,g), (\tau, \omega)}x=\sum_{n=1}^{\infty}f_n(x)\tau_n+\sum_{k=1}^{N}g_k(x)\omega_k=S_{f, \tau}x+S_{g,\omega}x,\quad \forall x \in \mathcal{X}.
\end{align*}
Since $S_{g, \omega}(\mathcal{X})\subseteq \operatorname{span}\{\omega_k\}_{k=1}^{N}$, it follows that
\begin{align*}
N \geq \dim S_{g, \omega} (\mathcal{X}) = \dim (\lambda I_\mathcal{X}-S_{f, \tau}) (\mathcal{X}).
\end{align*}
Example \ref{EXAMPLEOME} shows that the inequality in Theorem \ref{NUMBERINEQUALITY} cannot be improved.
\end{proof}
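The bound $N\geq\dim(\lambda I_\mathcal{X}-S_{f,\tau})(\mathcal{X})$ can be illustrated on a finite truncation of the shift example above (with $\lambda=1$): the defect operator $I-RL$ has rank one, matching the single element added in Example \ref{EXAMPLEOME}. A sketch, with an arbitrary truncation dimension:

```python
import numpy as np

# Illustrative: in the truncated shift example, dim (I - S_{f,tau})(X) = 1,
# so any 1-tight completion must add at least one element.
N = 6
R = np.eye(N, k=-1)                     # right shift
L = np.eye(N, k=1)                      # left shift
defect = np.linalg.matrix_rank(np.eye(N) - R @ L)
assert defect == 1
```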
We now state the definition of a p-weak reconstruction sequence and give an extension theorem for p-weak reconstruction sequences.
\begin{definition}
Let $p \in [1, \infty)$. A weak reconstruction sequence $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ for $\mathcal{X}$ is said to be a \textbf{p-weak reconstruction sequence} or \textbf{p-approximate Bessel sequence} (p-ABS) for $\mathcal{X}$ if both the maps $
\theta_f: \mathcal{X}\ni x \mapsto \theta_f x\coloneqq \{f_n(x)\}_n \in \ell^p(\mathbb{N}) $ and $
\theta_\tau : \ell^p(\mathbb{N}) \ni \{a_n\}_n \mapsto \theta_\tau \{a_n\}_n\coloneqq \sum_{n=1}^\infty a_n\tau_n \in \mathcal{X}$
are well-defined bounded linear operators.
\end{definition}
\begin{theorem}
Let $p \in [1, \infty)$. If $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ is a p-weak reconstruction sequence for $\ell^p(\mathbb{N}), $ then $ (\{f_n \}_{n}, \{\tau_n \}_{n}) $ can be expanded to a p-ASF.
\end{theorem}
\begin{proof}
Let $ \{e_n \}_{n}$ and $ \{\zeta_n \}_{n}$ be as in Example \ref{EXAMPLEOME}. Define $h_n\coloneqq \zeta_n $, $\rho_n \coloneqq (I_{\ell^p(\mathbb{N})}-S_{f, \tau})e_n $, for all $n \in \mathbb{N}$. Then it follows that $ (\{f_n \}_{n}\cup \{h_n \}_{n}, \{\tau_n \}_{n}\cup\{\rho_n \}_{n} ) $ is a p-ASF for $\ell^p(\mathbb{N}) $.
\end{proof}
{\onehalfspacing \chapter{WEAK OPERATOR-VALUED FRAMES}\label{chap5}}
\vspace{0.5cm}
\section{BASIC PROPERTIES}
Let $\mathcal{H},\mathcal{H}_0$ be Hilbert spaces and $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ be the collection of all bounded linear operators from $\mathcal{H}$ to $\mathcal{H}_0$. In this chapter, we study a generalization of the notion of operator-valued frame by studying the convergence of the series
$
\sum_{n=1}^\infty \Psi_n^*A_n
$
to a bounded invertible operator in $ \mathcal{B}(\mathcal{H})$.
\begin{definition}
Let $ \{A_n\}_{n} $ and $ \{\Psi_n\}_{n} $ be collections in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$. The pair $( \{A_n\}_{n}, $ $ \{\Psi_n\}_{n} )$ is said to be a \textbf{weak operator-valued frame} (weak OVF) in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0) $ if the series
\begin{align*}
\text{(\textbf{Operator-valued frame operator})}\quad S_{A, \Psi} \coloneqq \sum_{n=1}^\infty \Psi_n^*A_n
\end{align*}
converges in the strong-operator topology on $ \mathcal{B}(\mathcal{H})$ to a bounded invertible operator. If $ S_{A, \Psi}=I_\mathcal{H}$, then it is called a Parseval weak OVF.
\end{definition}
We now give various examples of weak OVFs.
\begin{example}
\begin{enumerate}[label=(\roman*)]
\item By taking $\Psi_n\coloneqq A_n$, for all $n \in \mathbb{N}$, it follows that an \textbf{operator-valued frame} is a weak OVF. In particular, a \textbf{G-frame} is a weak OVF.
\item Let $( \{\tau_n\}_{n}, \{f_n\}_{n} )$ be a \textbf{pseudo-frame} for $\mathcal{H}$. If we define $\Psi_n\coloneqq f_n$ and $A_nh\coloneqq \langle h, \tau_n \rangle $, for all $n \in \mathbb{N}$, for all $h \in \mathcal{H}$, then $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ is a weak OVF in $ \mathcal{B}(\mathcal{H}, \mathbb{K}) $. Similarly it follows that \textbf{frames for subspaces}, \textbf{fusion frames}, \textbf{outer frames}, \textbf{oblique frames} and \textbf{quasi-projectors} are all weak OVFs.
\item Let $ C \in \mathcal{B}(\mathcal{H})$ be invertible and $ \{\tau_n\}_{n} $ be a \textbf{C-controlled frame} for $\mathcal{H}$ (\cite{BALAZSGRYBOS}). If we define $\Psi_n h \coloneqq \langle h, C\tau_n \rangle$ and $A_nh\coloneqq \langle h, \tau_n \rangle$, for all $n \in \mathbb{N}$, for all $h \in \mathcal{H}$, then $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ is a weak OVF in $ \mathcal{B}(\mathcal{H}, \mathbb{K}) $. In particular, every \textbf{weighted frame} is a weak OVF.
\item Let $( \{\tau_n\}_{n}, \{f_n\}_{n} )$ be an \textbf{approximate Schauder frame} for $\mathcal{H}$ (\cite{FREEMANODELL}). Note that it is possible for Hilbert spaces to have approximate Schauder frames which are not frames. If we define $\Psi_n\coloneqq f_n$ and $A_nh\coloneqq \langle h, \tau_n \rangle $, for all $n \in \mathbb{N}$, for all $h \in \mathcal{H}$, then $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ is a weak OVF in $ \mathcal{B}(\mathcal{H}, \mathbb{K}) $. In particular, \textbf{atomic decompositions} (\cite{CASAZZAHANLARSONFRAMEBANACH}), \textbf{framings} (\cite{CASAZZAHANLARSONFRAMEBANACH}), \textbf{cb-frames} (\cite{LIURUAN}) and \textbf{Schauder frames} (\cite{CASAZZA}) for Hilbert spaces are all weak OVFs.
\item Let $ \{\tau_n\}_{n}$ be a \textbf{signed frame} for $\mathcal{H}$ with signs $\{\sigma_n\}_{n}$ (\cite{PENGWALDRON}). If we define $\Psi_nh\coloneqq \langle h, \sigma_n\tau_n \rangle$ and $A_nh\coloneqq \langle h, \tau_n \rangle $, for all $n \in \mathbb{N}$, for all $h \in \mathcal{H}$, then $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ is a weak OVF in $ \mathcal{B}(\mathcal{H}, \mathbb{K}) $.
\end{enumerate}
\end{example}
Unlike in the case of OVFs, the frame operator $S_{A, \Psi}$ need not be positive. Since $S_{A, \Psi}$ is invertible, there are $a,b>0$ such that
\begin{align*}
a\|h\|\leq \|S_{A, \Psi}h\|\leq b \|h\|, \quad \forall h \in \mathcal{H}.
\end{align*}
We call such $a$ and $b$ lower and upper frame bounds, respectively. The supremum of the set of all lower frame bounds is called the optimal lower frame bound and the infimum of the set of all upper frame bounds is called the optimal upper frame bound. We easily get that
\begin{align*}
&\text{ optimal lower frame bound }=\|S_{A,\Psi}^{-1}\|^{-1},\\
& \text{ optimal upper frame bound } = \|S_{A,\Psi}\|.
\end{align*}
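Since $S_{A,\Psi}$ need not be positive, the optimal bounds are phrased through operator norms. On a finite-dimensional stand-in they are the extreme singular values of $S_{A,\Psi}$; the sketch below (illustrative, with a hypothetical non-self-adjoint $S$) checks the two formulas.

```python
import numpy as np

# Illustrative sketch: for an invertible frame operator S (not necessarily
# positive), the optimal bounds in a||h|| <= ||S h|| <= b||h|| are
# a = ||S^{-1}||^{-1} and b = ||S|| (the extreme singular values).
S = np.array([[2.0, 1.0], [0.0, 1.0]])              # hypothetical, non-self-adjoint
svals = np.linalg.svd(S, compute_uv=False)
a_opt = 1.0 / np.linalg.norm(np.linalg.inv(S), 2)   # optimal lower bound
b_opt = np.linalg.norm(S, 2)                        # optimal upper bound
assert np.isclose(a_opt, svals.min())
assert np.isclose(b_opt, svals.max())
# sanity: the two-sided bound holds for a sample vector
h = np.array([0.3, -1.2])
assert a_opt * np.linalg.norm(h) <= np.linalg.norm(S @ h) + 1e-12
assert np.linalg.norm(S @ h) <= b_opt * np.linalg.norm(h) + 1e-12
```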
We now define the notion of dual weak OVFs.
\begin{definition}
A weak OVF $ (\{B_n\}_{n} , \{\Phi_n\}_{n} )$ in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is said to be a \textbf{dual} for a weak OVF $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ if
\begin{align*}
\sum_{n=1}^\infty \Psi_n^*B_n= \sum_{n=1}^\infty\Phi^*_nA_n=I_{\mathcal{H}}.
\end{align*}
\end{definition}
Note that a dual always exists for a given weak OVF. In fact, a direct calculation shows that
\begin{align*}
( \{\widetilde{A}_n\coloneqq A_nS_{A,\Psi}^{-1}\}_{n},\{\widetilde{\Psi}_n\coloneqq\Psi_n(S_{A,\Psi}^{-1})^*\}_{n})
\end{align*}
is a weak OVF and is a dual for $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} )$. This weak OVF is called the \textbf{canonical dual} for $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} )$. The canonical dual has two nice properties, established in the following two results.
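That the canonical dual is indeed a dual can be verified numerically in a finite-dimensional model; the sketch below (illustrative, with $\mathcal{H}=\mathbb{R}^2$, $\mathcal{H}_0=\mathbb{R}$ and hypothetical vectors defining $A_n$ and $\Psi_n$) checks both duality identities.

```python
import numpy as np

# Illustrative model: H = R^2, H_0 = R, A_n h = <a_n, h>, Psi_n h = <p_n, h>.
# Canonical dual: A~_n = A_n S^{-1}, Psi~_n = Psi_n (S^{-1})^*.
a = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
p = [np.array([1.0, 1.0]), np.array([1.0, 2.0]), np.array([0.0, 1.0])]
S = sum(np.outer(pn, an) for an, pn in zip(a, p))   # S = sum Psi_n^* A_n
Sinv = np.linalg.inv(S)

dual_a = [an @ Sinv for an in a]                    # A~_n = A_n S^{-1}
dual_p = [pn @ Sinv.T for pn in p]                  # Psi~_n = Psi_n (S^{-1})^*
I2 = np.eye(2)
# sum Psi_n^* A~_n = I  and  sum Psi~_n^* A_n = I:
assert np.allclose(sum(np.outer(pn, ta) for ta, pn in zip(dual_a, p)), I2)
assert np.allclose(sum(np.outer(tp, an) for an, tp in zip(a, dual_p)), I2)
```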
\begin{proposition}
Let $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ be a weak OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0).$ If $ h \in \mathcal{H}$ has representation $ h=\sum_{n=1}^\infty A_n^*y_n= \sum_{n=1}^\infty\Psi_n^*z_n, $ for some sequences $ \{y_n\}_{n},\{z_n\}_{n}$ in $ \mathcal{H}_0$, then
$$ \sum_{n=1}^\infty\langle y_n,z_n\rangle =\sum_{n=1}^\infty\langle \widetilde{\Psi}_nh,\widetilde{A}_nh\rangle+\sum_{n=1}^\infty\langle y_n-\widetilde{\Psi}_nh,z_n-\widetilde{A}_nh\rangle. $$
\end{proposition}
\begin{proof}
We start from the right side and see
\begin{align*}
&\sum\limits_{n=1}^\infty\langle \widetilde{\Psi}_nh,\widetilde{A}_nh\rangle+\sum\limits_{n=1}^\infty\langle y_n, z_n\rangle -\sum\limits_{n=1}^\infty\langle y_n, \widetilde{A}_nh\rangle-\sum\limits_{n=1}^\infty\langle \widetilde{\Psi}_nh, z_n\rangle +\sum\limits_{n=1}^\infty\langle \widetilde{\Psi}_nh,\widetilde{A}_nh\rangle\\
&=2\sum\limits_{n=1}^\infty\langle \widetilde{\Psi}_nh,\widetilde{A}_nh\rangle+ \sum\limits_{n=1}^\infty\langle y_n, z_n\rangle-\sum\limits_{n=1}^\infty\langle y_n,A_nS_{A,\Psi}^{-1}h\rangle-\sum\limits_{n=1}^\infty\langle \Psi_n(S_{A,\Psi}^{-1})^*h, z_n\rangle\\
&= 2\left\langle\sum\limits_{n=1}^\infty(S_{A,\Psi}^{-1})^*A_n^*\Psi_n(S_{A,\Psi}^{-1})^*h, h \right\rangle+ \sum\limits_{n=1}^\infty\langle y_n, z_n\rangle-\left\langle \sum\limits_{n=1}^\infty A_n^*y_n,S_{A,\Psi}^{-1}h\right \rangle \\
&\quad -\left\langle (S_{A,\Psi}^{-1})^*h , \sum\limits_{n=1}^\infty\Psi_n^*z_n\right \rangle\\
&=2 \langle (S_{A,\Psi}^{-1})^*h,h \rangle + \sum\limits_{n=1}^\infty\langle y_n, z_n\rangle -\langle h, S_{A,\Psi}^{-1}h\rangle-\langle (S_{A,\Psi}^{-1})^*h, h\rangle
\end{align*}
which gives the left side.
\end{proof}
\begin{theorem}\label{CANONICALDUALFRAMEPROPERTYOPERATORVERSIONWEAK}
Let $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ be a weak OVF with frame bounds $ a$ and $ b.$
\begin{enumerate}[label=(\roman*)]
\item The canonical dual weak OVF of the canonical dual weak OVF of $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ is $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ itself.
\item$ \frac{1}{b}, \frac{1}{a}$ are frame bounds for the canonical dual of $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$.
\item If $ a, b $ are optimal frame bounds for $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$, then $ \frac{1}{b}, \frac{1}{a}$ are optimal frame bounds for its canonical dual.
\end{enumerate}
\end{theorem}
\begin{proof}
Since (ii) and (iii) follow from standard properties of invertible operators on Banach spaces, it remains to prove (i): the frame operator for $(\{A_nS_{A,\Psi}^{-1}\}_{n}, \{\Psi_n(S_{A,\Psi}^{-1})^*\}_{n} )$ is
$$ \sum\limits_{n=1}^\infty(\Psi_n(S_{A,\Psi}^{-1})^*)^* (A_nS_{A,\Psi}^{-1}) =S_{A,\Psi}^{-1}\left(\sum\limits_{n=1}^\infty\Psi_n ^*A_n\right)S_{A,\Psi}^{-1} =S_{A,\Psi}^{-1}S_{A,\Psi}S_{A,\Psi}^{-1}= S_{A,\Psi}^{-1}.$$
Therefore, its canonical dual is $(\{(A_nS_{A,\Psi}^{-1})S_{A,\Psi}\}_{n} , \{(\Psi_n(S_{A,\Psi}^{-1})^*)S_{A,\Psi}^*\}_{n})$ which is the original frame.
\end{proof}
For the further study of weak OVFs, we impose conditions under which the frame operator factors.
\begin{definition}
A weak OVF $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ is said to be \textbf{factorable} if both the maps (called \textbf{analysis operators})
\begin{align*}
\theta_A:\mathcal{H} \ni h \mapsto \theta_A h\coloneqq\sum_{n=1}^\infty L_nA_n h \in \ell^2(\mathbb{N}) \otimes \mathcal{H}_0\\
\theta_\Psi:\mathcal{H}\ni h \mapsto \theta_\Psi h\coloneqq \sum_{n=1}^\infty L_n\Psi_n h \in \ell^2(\mathbb{N}) \otimes \mathcal{H}_0
\end{align*}
are well-defined bounded linear operators.
\end{definition}
We next give an example which shows that a weak OVF need not be factorable.
\begin{example}
On $ \mathbb{C},$ define $ A_nx\coloneqq\frac{x}{\sqrt{n}}, \forall x \in \mathbb{C}, \forall n \in \mathbb{N}$, and $\Psi_1x\coloneqq x, \Psi_nx\coloneqq0, \forall x \in \mathbb{C}, \forall n \in \mathbb{N}\setminus\{1\} $. Then the series $ \sum_{n=1}^\infty\Psi_n^*A_n$ converges to the identity operator, which is invertible, but $ \sum_{n=1}^\infty L_nA_n1$ does not converge. In fact, using Equation (\ref{LEQUATION}),
\begin{align*}
\left\| \sum_{n=1}^mL_nA_n1\right\|^2=\sum_{n=1}^m\|A_n1\|^2=\sum_{n=1}^m\frac{1}{n} \to \infty \quad \text{ as } \quad m \to \infty.
\end{align*}
\end{example}
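The two claims of the example can be checked numerically: the frame-operator series is constantly equal to $1$ on $x=1$, while the analysis series grows like the harmonic series. A sketch (illustrative truncation at $m=10^4$):

```python
import math

# Illustrative check: A_n x = x/sqrt(n), Psi_1 x = x, Psi_n x = 0 for n >= 2.
m = 10_000
# Partial sums of sum Psi_n^* A_n applied to x = 1: only the n = 1 term survives.
S_partial = sum((1.0 if n == 1 else 0.0) * (1.0 / math.sqrt(n))
                for n in range(1, m + 1))
assert S_partial == 1.0          # S_{A,Psi} = identity on C, hence invertible

# ||sum_{n<=m} L_n A_n 1||^2 = sum_{n<=m} 1/n: the harmonic series, unbounded.
harmonic = sum(1.0 / n for n in range(1, m + 1))
assert harmonic > 9.0            # grows like log m
```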
Equation (\ref{LEQUATION}) gives the following theorem easily.
\begin{theorem}
Let $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ be a factorable weak OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$.
\begin{enumerate}[label=(\roman*)]
\item Analysis operator
\begin{align*}
\theta_A:\mathcal{H} \ni h \mapsto \theta_A h\coloneqq\sum_{n=1}^\infty L_nA_n h \in \ell^2(\mathbb{N}) \otimes \mathcal{H}_0
\end{align*}
is a well-defined bounded linear injective operator.
\item Synthesis operator
\begin{align*}
\theta_\Psi^*:\ell^2(\mathbb{N})\otimes \mathcal{H}_0 \ni z\mapsto\sum\limits_{n=1}^\infty \Psi_n^*L_n^*z \in \mathcal{H}
\end{align*}
is a well-defined bounded linear surjective operator.
\item Frame operator
factors as $S_{A,\Psi}=\theta_\Psi^*\theta_A.$
\item $ P_{A,\Psi} \coloneqq \theta_A S_{A,\Psi}^{-1} \theta_\Psi^*:\ell^2(\mathbb{N})\otimes \mathcal{H}_0 \to \ell^2(\mathbb{N})\otimes \mathcal{H}_0$ is an idempotent onto $ \theta_A(\mathcal{H})$.
\end{enumerate}
\end{theorem}
We next define the notions of Riesz and orthonormal factorable weak OVFs.
\begin{definition}\label{RIESZOVF}
A factorable weak OVF $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is said to be a \textbf{Riesz OVF} if $ P_{A,\Psi}= I_{\ell^2(\mathbb{N})}\otimes I_{\mathcal{H}_0}$. A Parseval and Riesz OVF, i.e., one with $\theta_\Psi^*\theta_A=I_\mathcal{H} $ and $\theta_A\theta_\Psi^*=I_{\ell^2(\mathbb{N})}\otimes I_{\mathcal{H}_0} $, is called an \textbf{orthonormal OVF}.
\end{definition}
\begin{proposition}\label{ORTHORESULT}
A factorable weak OVF $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is an orthonormal OVF if and only if it is a Parseval OVF and $ A_n\Psi_m^*=\delta_{n,m}I_{\mathcal{H}_0},\forall n,m \in \mathbb{N}$.
\end{proposition}
\begin{proof}
$(\Rightarrow)$
We have $\theta_A\theta_\Psi^*=I_{\ell^2(\mathbb{N})}\otimes I_{\mathcal{H}_0}.$ Hence
\begin{align*}
e_m\otimes y&=\theta_A\theta_\Psi^*(e_m\otimes y)=\sum_{n=1}^\infty L_nA_n\left(\sum_{k=1}^\infty\Psi^*_kL_k^*(e_m\otimes y)\right)\\
&=\sum_{n=1}^\infty L_nA_n\Psi^*_my=\sum_{n=1}^\infty(e_n\otimes A_n\Psi^*_my)\\
&= e_m\otimes( A_m\Psi^*_m y)+\sum_{n=1, n\neq m}^\infty(e_n\otimes A_n\Psi^*_my),\forall m \in \mathbb{N}, \quad y \in\mathcal{H}_0 .
\end{align*}
Comparing components gives $A_n\Psi^*_my=\delta_{n,m}y$, $\forall y \in \mathcal{H}_0$, $\forall n,m \in \mathbb{N}$.
$(\Leftarrow)$ $ \theta_A\theta_\Psi^*=\sum_{n=1}^\infty L_nA_n(\sum_{k=1}^\infty\Psi_k^*L_k^*)=\sum_{n=1}^\infty L_nL_n^*=I_{\ell^2(\mathbb{N})}\otimes I_{\mathcal{H}_0}.$
\end{proof}
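Proposition \ref{ORTHORESULT} can be illustrated in the simplest finite model, $\mathcal{H}=\mathbb{R}^N$, $\mathcal{H}_0=\mathbb{R}$, $A_n=\Psi_n=\langle\cdot,e_n\rangle$ (an illustrative stand-in; the choice of $N$ is arbitrary):

```python
import numpy as np

# Illustrative finite model: A_n = Psi_n = <., e_n> on R^N with H_0 = R.
# Then S_{A,Psi} = I (Parseval) and A_n Psi_m^* = delta_{nm} I_{H_0},
# so the pair is an orthonormal OVF.
N = 5
E = np.eye(N)
S = sum(np.outer(E[n], E[n]) for n in range(N))   # sum Psi_n^* A_n
assert np.allclose(S, np.eye(N))                  # Parseval
for n in range(N):
    for m in range(N):
        # A_n Psi_m^* acts on R as multiplication by <e_m, e_n>
        assert np.isclose(E[n] @ E[m], 1.0 if n == m else 0.0)
```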
We now derive a dilation result for factorable weak OVFs. First we need a lemma for this.
\begin{lemma}\label{DILATIONLEMMA}
Let $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ be a factorable weak OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$. Then the range of $\theta_A $ is closed.
\end{lemma}
\begin{proof}
Let $ \{h_n\}_{n=1}^\infty$ in $\mathcal{H} $ be such that $ \{\theta _Ah_n\}_{n=1}^\infty$ converges to $ y \in \ell^2(\mathbb{N})\otimes\mathcal{H}_0$. Applying $\theta_\Psi^*$ gives $ S_{A,\Psi}h_n \rightarrow \theta_\Psi^*y$ as $ n \rightarrow \infty$, and this in turn gives $h_n \rightarrow S_{A,\Psi}^{-1} \theta_\Psi^*y $ as $ n \rightarrow \infty.$ An application of $ \theta_A$ gives $\theta_Ah_n \rightarrow \theta_AS_{A,\Psi}^{-1} \theta_\Psi^*y $ as $ n \rightarrow \infty.$ Therefore $ y=\theta_A(S_{A,\Psi}^{-1} \theta_\Psi^*y)\in\theta_A(\mathcal{H}).$
\end{proof}
\begin{theorem}\label{OPERATORDILATION}
Let $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ be a Parseval factorable weak OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ such that $ \theta_A(\mathcal{H})=\theta_\Psi(\mathcal{H})$ and $ P_{A,\Psi}$ is a projection. Then there exist a Hilbert space $ \mathcal{H}_1 $ which contains $ \mathcal{H}$ isometrically and bounded linear operators $B_n,\Phi_n:\mathcal{H}_1\rightarrow \mathcal{H}_0$, $\forall n \in \mathbb{N}$, such that $(\{B_n\}_{n} ,\{\Phi_n\}_{n})$ is an orthonormal OVF in $ \mathcal{B}(\mathcal{H}_1, \mathcal{H}_0)$ and $B_n|_{\mathcal{H}}= A_n,\Phi_n|_{\mathcal{H}}=\Psi_n, \forall n \in \mathbb{N}$.
\end{theorem}
\begin{proof}
We first see that $P_{A,\Psi}$ is the orthogonal projection from $ \ell^2(\mathbb{N})\otimes \mathcal{H}_0$ onto $\theta_A(\mathcal{H})=\theta_\Psi(\mathcal{H})$. Define $ \mathcal{H}_1\coloneqq\mathcal{H}\oplus \theta_A(\mathcal{H})^\perp$. From Lemma \ref{DILATIONLEMMA}, $\mathcal{H}_1$ becomes a Hilbert space. Then $\mathcal{H} \ni h \mapsto h\oplus 0 \in \mathcal{H}_1 $ is an isometry. Set $P_{A,\Psi}^\perp\coloneqq I_{\ell^2(\mathbb{N})\otimes \mathcal{H}_0}-P_{A,\Psi}$ and define
\begin{align*}
&B_n:\mathcal{H}_1\ni h\oplus g\mapsto A_nh+L_n^*P_{A,\Psi}^\perp g \in \mathcal{H}_0, \\
& \Phi_n:\mathcal{H}_1\ni h\oplus g\mapsto \Psi_nh+L_n^*P_{A,\Psi}^\perp g \in \mathcal{H}_0 , \quad \forall n \in \mathbb{N}.
\end{align*}
Then clearly $B_n|_{\mathcal{H}}= A_n,\Phi_n|_{\mathcal{H}}=\Psi_n, \forall n \in \mathbb{N}$. Now
\begin{align*}
\theta_B(h\oplus g)=\sum_{n=1}^\infty L_nA_nh+\sum_{n=1}^\infty L_nL_n^*P_{A,\Psi}^\perp g=\theta_Ah+P_{A,\Psi}^\perp g, \quad \forall h\oplus g \in \mathcal{H}_1.
\end{align*}
Similarly $\theta_\Phi(h\oplus g)=\theta_\Psi h+P_{A,\Psi}^\perp g, \forall h\oplus g\in \mathcal{H}_1 $. Also
\begin{align*}
\langle \theta_B^*z,h\oplus g \rangle&= \langle z, \theta_B(h\oplus g) \rangle = \langle \theta_A^*z, h\rangle+\langle P_{A,\Psi}^\perp z, g\rangle \\
&= \langle \theta_A^*z\oplus P_{A,\Psi}^\perp z, h\oplus g\rangle , \quad \forall z \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0 , \forall h\oplus g \in \mathcal{H}_1.
\end{align*}
Hence $\theta_B^*z=\theta_A^*z\oplus P_{A,\Psi}^\perp z, \forall z \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0 $ and similarly $\theta_\Phi^*z=\theta_\Psi^*z\oplus P_{A,\Psi}^\perp z $, $\forall z \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0$.
By using $\theta_A(\mathcal{H})=\theta_\Psi(\mathcal{H}) $ and $\theta_\Psi^*P_{A,\Psi}^\perp=0=P_{A,\Psi}^\perp\theta_A ,$ we get
\begin{align*}
S_{B,\Phi}(h\oplus g)&= \theta_\Phi^*(\theta_Ah+ P_{A,\Psi}^\perp g)=\theta_\Psi^*(\theta_Ah+P_{A,\Psi}^\perp g)\oplus P_{A,\Psi}^\perp(\theta_Ah+P_{A,\Psi}^\perp g)\\
&=(S_{A,\Psi}h+0)\oplus(0+P_{A,\Psi}^\perp g)=S_{A,\Psi}h\oplus P_{A,\Psi}^\perp g\\
&=I_\mathcal{H}h\oplus I_{\theta_A(\mathcal{H})^\perp}g, \quad \forall h\oplus g\in \mathcal{H}_1.
\end{align*}
Hence $(\{B_n\}_{n},\{\Phi_n\}_{n} )$ is a Parseval weak OVF in $ \mathcal{B}(\mathcal{H}_1, \mathcal{H}_0)$. We further find
\begin{align*}
P_{B,\Phi}z&=\theta_BS_{B,\Phi}^{-1}\theta_\Phi^*z=\theta_B\theta_\Phi^*z=\theta_B(\theta_\Psi^*z\oplus P_{A,\Psi}^\perp z)\\
&=\theta_A(\theta_\Psi^*z)+ P_{A,\Psi}^\perp(P_{A,\Psi}^\perp z)=P_{A,\Psi} z+P_{A,\Psi}^\perp z\\
&=P_{A,\Psi} z+((I_{\ell^2(\mathbb{N})}\otimes I_{\mathcal{H}_0})-P_{A,\Psi})z =(I_{\ell^2(\mathbb{N})}\otimes I_{\mathcal{H}_0} )z, \quad\forall z \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0.
\end{align*}
Therefore $(\{B_n\}_{n} ,\{\Phi_n\}_{n})$ is a Riesz weak OVF in $ \mathcal{B}(\mathcal{H}_1, \mathcal{H}_0)$. Thus $(\{B_n\}_{n} ,\{\Phi_n\}_{n})$ is an orthonormal weak OVF in $ \mathcal{B}(\mathcal{H}_1, \mathcal{H}_0)$.
\end{proof}
\begin{theorem}\label{THAFSCHAROVF}
A pair $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ is a factorable weak OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$
if and only if
\begin{align*}
A_n=L_n^* U, \quad \Psi_n=L_n^*V, \quad \forall n \in \mathbb{N},
\end{align*}
where $U, V:\mathcal{H} \rightarrow \ell^2(\mathbb{N}) \otimes \mathcal{H}_0$ are bounded linear operators such that $V^*U$ is bounded invertible.
\end{theorem}
\begin{proof}
$(\Leftarrow)$ Clearly $\theta_A$ and $\theta_\Psi$ are well-defined bounded linear operators. Let $h\in \mathcal{H}$. Then using Equation (\ref{LEQUATION}), we have
\begin{align}\label{ORIGINALEQA}
S_{A, \Psi}h= \sum_{n=1}^\infty
(L_n^*V)^*L_n^*Uh=V^*\left(\sum_{n=1}^\infty L_nL_n^*\right)Uh=V^*Uh.
\end{align}
Hence $S_{A, \Psi}$ is bounded invertible. \\
$(\Rightarrow)$ Define $U\coloneqq \sum_{n=1}^\infty L_nA_n$, $V\coloneqq \sum_{n=1}^\infty L_n\Psi_n$. Then
\begin{align*}
&L_n^* U=L_n^* \left(\sum_{k=1}^\infty L_kA_k\right)=\sum_{k=1}^\infty L_n^*L_kA_k=A_n,\\
&L_n^* V=L_n^* \left(\sum_{k=1}^\infty L_k\Psi_k\right)=\sum_{k=1}^\infty L_n^*L_k\Psi_k=\Psi_n, \quad \forall n \in \mathbb{N}
\end{align*}
and
\begin{align*}
V^*U=\left(\sum_{n=1}^\infty \Psi_n^*L_n^*\right)\left(\sum_{k=1}^\infty L_kA_k\right) = \sum_{n=1}^\infty \Psi_n^*A_n=S_{A, \Psi}
\end{align*}
which is bounded invertible.
\end{proof}
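The factorization of Theorem \ref{THAFSCHAROVF} can be checked on a block-truncated model of $\ell^2(\mathbb{N})\otimes\mathcal{H}_0$, in which $L_n^*$ extracts the $n$-th block; the sketch below (illustrative, with hypothetical matrices $U,V$ chosen so that $V^*U=I$) verifies $S_{A,\Psi}=V^*U$.

```python
import numpy as np

# Illustrative block truncation: l^2(N) (x) H_0 cut to N = 3 blocks of size d = 2.
# A_n = L_n^* U and Psi_n = L_n^* V are the n-th block rows of U and V.
N, d = 3, 2
U = np.array([[1., 0.], [0., 1.], [1., 1.], [1., -1.], [2., 0.], [0., 3.]])
V = np.array([[1., 0.], [0., 1.], [0., 0.], [0., 0.], [0., 0.], [0., 0.]])
A = [U[n * d:(n + 1) * d, :] for n in range(N)]     # A_n  = L_n^* U
Psi = [V[n * d:(n + 1) * d, :] for n in range(N)]   # Psi_n = L_n^* V
S = sum(Pn.T @ An for An, Pn in zip(A, Psi))        # S_{A,Psi} = sum Psi_n^* A_n
assert np.allclose(S, V.T @ U)                      # frame operator = V^* U
assert np.allclose(S, np.eye(d))                    # here V^* U = I, invertible
```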
Using Theorem \ref{THAFSCHAROVF} we can characterize Riesz and orthonormal factorable weak OVFs.
\begin{corollary}\label{FIRSTCOROLLARY}
A pair $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ is a Riesz factorable weak OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$
if and only if
\begin{align*}
A_n=L_n^* U, \quad \Psi_n=L_n^*V, \quad \forall n \in \mathbb{N},
\end{align*}
where $U, V:\mathcal{H} \rightarrow \ell^2(\mathbb{N}) \otimes \mathcal{H}_0$ are bounded linear operators such that $V^*U$ is bounded invertible and $ U(V^*U)^{-1}V^* =I_{\ell^2(\mathbb{N}) \otimes \mathcal{H}_0}$.
\end{corollary}
\begin{proof}
$(\Leftarrow)$ By Theorem \ref{THAFSCHAROVF}, $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ is a factorable weak OVF with $\theta_A=U$, $\theta_\Psi=V$ and $S_{A,\Psi}=V^*U$, so $P_{A,\Psi}= \theta_AS_{A,\Psi}^{-1}\theta_\Psi^*=U(V^*U)^{-1}V^* =I_{\ell^2(\mathbb{N}) \otimes \mathcal{H}_0}$.\\
$(\Rightarrow)$ Let $U$ and $V$ be as in Theorem \ref{THAFSCHAROVF}. Then
$U(V^*U)^{-1}V^*=\theta_AS_{A,\Psi}^{-1}\theta_\Psi^*=P_{A,\Psi}=I_{\ell^2(\mathbb{N}) \otimes \mathcal{H}_0}$.
\end{proof}
\begin{corollary}
A pair $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ is an orthonormal factorable weak OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$
if and only if
\begin{align*}
A_n=L_n^* U, \quad \Psi_n=L_n^*V, \quad \forall n \in \mathbb{N},
\end{align*}
where $U, V:\mathcal{H} \rightarrow \ell^2(\mathbb{N}) \otimes \mathcal{H}_0$ are bounded linear operators such that $V^*U$ is bounded invertible and $ V^*U=I_\mathcal{H}$, $I_{\ell^2(\mathbb{N}) \otimes \mathcal{H}_0}= UV^*$.
\end{corollary}
\begin{proof}
We use Corollary \ref{FIRSTCOROLLARY}.
$ (\Leftarrow)$ $S_{A,\Psi}=V^*U=I_\mathcal{H}$, and since $\theta_A=U$ and $\theta_\Psi=V$, $P_{A,\Psi}=\theta_AS_{A,\Psi}^{-1}\theta_\Psi^*=\theta_A\theta_\Psi^*= UV^*=I_{\ell^2(\mathbb{N})}\otimes I_{\mathcal{H}_0}.$
$(\Rightarrow)$ $V^*U=S_{A,\Psi}=I_\mathcal{H},$ and by using Proposition \ref{ORTHORESULT},
\begin{align*}
UV^*&= \left(\sum_{n=1}^\infty L_nA_n\right)\left( \sum_{k=1}^\infty\Psi_k^*L_k^*\right) = \sum_{n=1}^\infty L_nA_n\Psi_n^*L_n^*=
\sum_{n=1}^\infty L_nL_n^*=I_{\ell^2(\mathbb{N}) \otimes \mathcal{H}_0}.
\end{align*}
\end{proof}
\begin{theorem}\label{SECONDCHAROV}
Let $ \{F_n\}_{n}$ be an orthonormal basis in $ \mathcal{B}(\mathcal{H},\mathcal{H}_0).$ Then a pair $( \{A_n\}_{n}, $ $ \{\Psi_n\}_{n} )$ is a factorable weak OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$
if and only if
\begin{align*}
A_n=F_n U, \quad \Psi_n=F_nV, \quad \forall n \in \mathbb{N},
\end{align*}
where $ U,V :\mathcal{H}\to \mathcal{H} $ are bounded linear operators such that $ V^*U$ is bounded invertible.
\end{theorem}
\begin{proof}
$(\Leftarrow)$ $ \sum_{n=1}^\infty L_n(F_nU)= (\sum_{n=1}^\infty L_nF_n)U,$ $ \sum_{n=1}^\infty L_n(F_nV)= (\sum_{n=1}^\infty L_nF_n)V.$ These show that the analysis operators for $ (\{F_nU\}_{n},\{F_nV\}_{n})$ are well-defined bounded linear operators, and the equality $$\sum_{n=1}^\infty(F_nV)^*(F_nU)=V^*U$$
shows that it is a factorable weak OVF.
$(\Rightarrow)$ Let $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ be a factorable weak OVF. Note that the series $ \sum_{n=1}^\infty F_n^*A_n$ and $ \sum_{n=1}^\infty F_n^*\Psi_n$ converge. In fact, for each $h \in \mathcal{H}$,
\begin{align*}
\left\|\sum_{n=1}^mF_n ^*A_n h\right\|^2&=\left\langle\sum_{n=1}^mF_n^*A_nh, \sum_{k=1}^mF_k^*A_kh\right\rangle\\
&= \sum_{n=1}^m\left\langle A_nh, F_n\left(\sum_{k=1}^mF_k^*A_kh\right) \right\rangle=\sum_{n=1}^m\|A_nh\|^2,
\end{align*}
which converges to $\| \theta_Ah\|^2=\|\sum_{n=1}^\infty L_nA_nh\|^2=\sum_{n=1}^\infty\|A_nh\|^2$ as $m\to\infty$. Define $U\coloneqq \sum_{n=1}^\infty F_n^*A_n$ and $V\coloneqq \sum_{n=1}^\infty F_n^*\Psi_n$. Then $F_nU=A_n$, $F_nV=\Psi_n$ for all $n \in \mathbb{N}$, and
\begin{align*}
V^*U=\left(\sum_{n=1}^\infty\Psi_n^*F_n\right)\left(\sum_{k=1}^\infty F_k^*A_k\right)=\sum_{n=1}^\infty\Psi_n^*A_n=S_{A,\Psi}
\end{align*}
which is bounded invertible.
\end{proof}
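The preceding theorem can likewise be illustrated numerically. In the sketch below (our finite-dimensional stand-in, not part of the text: $\mathcal{H}=\mathbb{C}^6$, $\mathcal{H}_0=\mathbb{C}^2$, three indices, real matrices), the $F_n$ are realized as coordinate-block selectors, which satisfy the relations $F_nF_l^*=\delta_{nl}I_{\mathcal{H}_0}$ and $\sum_n F_n^*F_n=I_\mathcal{H}$ used in the proof; generic $U,V$ then yield a factorable weak OVF with $S_{A,\Psi}=V^*U$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m = 6, 2, 3          # H = C^d, H_0 = C^k, with m*k = d

# "Orthonormal basis" F_n of B(H, H_0): coordinate-block selectors, so that
# F_n F_l^* = delta_{nl} I_k and sum_n F_n^* F_n = I_d, as used in the proof.
F = [np.eye(d)[n*k:(n+1)*k, :] for n in range(m)]

U = rng.standard_normal((d, d))   # generic U, V: V^*U is then invertible
V = rng.standard_normal((d, d))

A   = [F[n] @ U for n in range(m)]    # A_n   = F_n U
Psi = [F[n] @ V for n in range(m)]    # Psi_n = F_n V

S = sum(Psi[n].T @ A[n] for n in range(m))   # frame operator S_{A,Psi}
print(np.allclose(S, V.T @ U))               # S_{A,Psi} = V^*U
```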
\begin{corollary}
Let $ \{F_n\}_{n}$ be an orthonormal basis in $ \mathcal{B}(\mathcal{H},\mathcal{H}_0).$ Then a pair $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ is
\begin{enumerate}[label=(\roman*)]
\item a Riesz factorable weak OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$
if and only if
\begin{align*}
A_n=F_n U, \quad \Psi_n=F_nV, \quad \forall n \in \mathbb{N},
\end{align*}
where $ U,V :\mathcal{H}\to \mathcal{H} $ are bounded linear operators such that $ V^*U$ is bounded invertible and $ U(V^*U)^{-1}V^* =I_{\mathcal{H}}$.
\item an orthonormal factorable weak OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$
if and only if
\begin{align*}
A_n=F_n U, \quad \Psi_n=F_nV, \quad \forall n \in \mathbb{N},
\end{align*}
where $ U,V :\mathcal{H}\to \mathcal{H} $ are bounded linear operators such that $ V^*U$ is bounded invertible and $ V^*U=I_\mathcal{H}= UV^*$.
\end{enumerate}
\end{corollary}
\begin{proof}
\begin{enumerate}[label=(\roman*)]
\item $ (\Leftarrow)$
\begin{align*}
P_{A,\Psi}&=\theta_AS_{A,\Psi}^{-1}\theta_\Psi^*=\left(\sum_{n=1}^\infty L_nF_nU\right)(V^*U)^{-1}\left(\sum_{k=1}^\infty V^*F^*_kL_k^*\right)\\
&=\theta_FU(V^*U)^{-1}V^*\theta_F^*=\theta_FI_\mathcal{H}\theta_F^* =\sum_{n=1}^\infty L_nF_n\left(\sum_{k=1}^\infty F_k^*L^*_k\right)\\
&=\sum_{n=1}^\infty L_nL_n^*=I_{\ell^2(\mathbb{N})}\otimes I_{\mathcal{H}_0}.
\end{align*}
$ (\Rightarrow)$ Let $U$ and $V$ be as in Theorem \ref{SECONDCHAROV}. Then
\begin{align*}
U(V^*U)^{-1}V^*&=\left(\sum_{k=1}^\infty F_k^*A_k\right)S_{A,\Psi}^{-1}\left(\sum_{n=1}^\infty \Psi_n^*F_n\right)\\
&=\left(\sum_{r=1}^\infty F_r^*L_r^*\right)\left(\sum_{k=1}^\infty L_kA_k\right)S_{A,\Psi}^{-1}\left(\sum_{n=1}^\infty \Psi_n^*L_n^*\right)\left(\sum_{m=1}^\infty L_mF_m\right)\\
&=\left(\sum_{r=1}^\infty F_r^*L_r^*\right)\theta_AS_{A,\Psi}^{-1}\theta_\Psi^*\left(\sum_{m=1}^\infty L_mF_m\right)\\
&=\left(\sum_{r=1}^\infty F_r^*L_r^*\right)P_{A,\Psi}\left(\sum_{m=1}^\infty L_mF_m\right) \\
&=\left(\sum_{r=1}^\infty F_r^*L_r^*\right)(I_{\ell^2(\mathbb{N})}\otimes I_{\mathcal{H}_0})\left(\sum_{m=1}^\infty L_mF_m\right) \\
&=\left(\sum_{r=1}^\infty F_r^*L_r^*\right)\left(\sum_{m=1}^\infty L_mF_m\right)
=\sum_{r=1}^\infty F_r^*F_r=I_{\mathcal{H}}.
\end{align*}
\item We use (i).
$ (\Leftarrow)$ $S_{A,\Psi}=V^*U=I_\mathcal{H}, P_{A,\Psi}=\theta_AS_{A,\Psi}^{-1}\theta_\Psi^*=\theta_A\theta_\Psi^*= \theta_FUV^*\theta_F^*=\theta_FI_\mathcal{H}\theta_F^*=I_{\ell^2(\mathbb{N})}\otimes I_{\mathcal{H}_0}.$
$(\Rightarrow)$ $V^*U=S_{A,\Psi}=I_\mathcal{H}$ and using Proposition \ref{ORTHORESULT},
\begin{align*}
UV^*= \left(\sum_{n=1}^\infty F_n^*A_n\right)\left( \sum_{k=1}^\infty\Psi_k^*F_k\right) =\sum_{n=1}^\infty F_n^*F_n=I_\mathcal{H}.
\end{align*}
\end{enumerate}
\end{proof}
We next derive another characterization, this time expressed purely in terms of operators.
\begin{theorem}\label{OPERATORCHARACTERIZATIONHILBERT2}
Let $\{A_n\}_{n},\{\Psi_n\}_{n}$ be in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0).$ Then $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ is a factorable weak OVF
\begin{enumerate}[label=(\roman*)]
\item if and only if $$U:\ell^2(\mathbb{N})\otimes \mathcal{H}_0 \ni y\mapsto\sum\limits_{n=1}^\infty A_n^*L_n^*y \in \mathcal{H}, ~\text{and} ~ V:\ell^2(\mathbb{N})\otimes \mathcal{H}_0 \ni z\mapsto\sum\limits_{n=1}^\infty \Psi_n^*L^*_nz \in \mathcal{H} $$
are well-defined bounded linear operators such that $ VU^*$ is bounded invertible.
\item if and only if $$U:\ell^2(\mathbb{N})\otimes \mathcal{H}_0 \ni y\mapsto\sum\limits_{n=1}^\infty A_n^*L_n^*y \in \mathcal{H}, ~\text{and} ~ S: \mathcal{H} \ni g\mapsto \sum\limits_{n=1}^\infty L_n\Psi_ng \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0 $$
are well-defined bounded linear operators such that $ S^*U^*$ is bounded invertible.
\item if and only if $$R: \mathcal{H} \ni h\mapsto \sum\limits_{n=1}^\infty L_nA_nh \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0, ~\text{and} ~ V: \ell^2(\mathbb{N})\otimes \mathcal{H}_0 \ni z\mapsto\sum\limits_{n=1}^\infty \Psi_n^*L_n^*z \in \mathcal{H} $$
are well-defined bounded linear operators such that $ VR$ is bounded invertible.
\item if and only if $$ R: \mathcal{H} \ni h\mapsto \sum\limits_{n=1}^\infty L_nA_nh \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0, ~\text{and} ~ S: \mathcal{H} \ni g\mapsto \sum\limits_{n=1}^\infty L_n\Psi_ng \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0 $$
are well-defined bounded linear operators such that $ S^*R $ is bounded invertible.
\end{enumerate}
\end{theorem}
\begin{proof}
We prove (i); the others are similar.
$(\Rightarrow)$ Now $U=\theta_A^*$, $V=\theta_\Psi^*$ and hence $VU^*=\theta_\Psi^*\theta_A=S_{A,\Psi}$.
$(\Leftarrow)$ Now $\theta_A=U^*$, $\theta_\Psi=V^*$ and hence $S_{A,\Psi}=\theta_\Psi^*\theta_A=VU^*$.
\end{proof}
We now characterize all duals of a given factorable weak OVF.
\begin{lemma}\label{FIRSTLEMMA}
Let $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ be a factorable weak OVF in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$. Then a factorable weak OVF $ (\{B_n\}_{n} , \{\Phi_n\}_{n} )$ in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is a dual for $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ if and only if
\begin{align*}
B_n=L_n^* U, \quad \Phi_n=L_n^*V^*, \quad \forall n \in \mathbb{N}
\end{align*}
where $U:\mathcal{H} \rightarrow \ell^2(\mathbb{N}) \otimes \mathcal{H}_0$ is a bounded right-inverse of $\theta_\Psi^* $ and $V: \ell^2(\mathbb{N}) \otimes \mathcal{H}_0\to \mathcal{H}$ is a bounded left-inverse of $\theta_A $ such that $VU$ is bounded invertible.
\end{lemma}
\begin{proof}
$(\Leftarrow)$ The ``if'' part of the proof of Theorem \ref{THAFSCHAROVF} shows that $ (\{B_n\}_{n} , \{\Phi_n\}_{n} )$ is a factorable weak OVF in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$. We now verify the duality of $ (\{B_n\}_{n} , \{\Phi_n\}_{n} )$: indeed, $\theta_\Phi^*\theta_A=V \theta_A=I_\mathcal{H} $ and $ \theta_\Psi^*\theta_B=\theta_\Psi^* U =I_\mathcal{H}$.\\
$(\Rightarrow)$ Let $ (\{B_n\}_{n} , \{\Phi_n\}_{n} )$ be a dual factorable weak OVF for $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$. Then $\theta_\Psi^*\theta_B =I_\mathcal{H}= \theta_\Phi^*\theta_A $. Define $ U\coloneqq\theta_B, V\coloneqq\theta_\Phi^*.$ Then $U:\mathcal{H} \rightarrow \ell^2(\mathbb{N}) \otimes \mathcal{H}_0$ is a bounded right-inverse of $\theta_\Psi^* $ and $V: \ell^2(\mathbb{N}) \otimes \mathcal{H}_0\to \mathcal{H}$ is a bounded left-inverse of $\theta_A $ such that $VU=\theta_\Phi^*\theta_B=S_{B,\Phi}$ is bounded invertible. We now see
\begin{align*}
L_n^* U=L_n^*\left(\sum\limits_{k=1}^\infty L_kB_k\right)=B_n, \quad L_n^*V^*=L_n^*\left(\sum\limits_{k=1}^\infty L_k\Phi_k\right)=\Phi_n, \quad \forall n \in \mathbb{N}.
\end{align*}
\end{proof}
\begin{lemma}\label{SECONDLEMMA}
Let $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ be a factorable weak OVF in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$. Then
\begin{enumerate}[label=(\roman*)]
\item $R:\mathcal{H} \to \ell^2(\mathbb{N})\otimes\mathcal{H}_0 $ is a bounded right-inverse of $ \theta_\Psi^*$ if and only if
\begin{align*}
R=\theta_AS_{A,\Psi}^{-1}+(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_AS_{A,\Psi}^{-1}\theta_\Psi^*)U,
\end{align*}
where $U :\mathcal{H} \to \ell^2(\mathbb{N})\otimes\mathcal{H}_0$ is a bounded linear operator.
\item $L:\ell^2(\mathbb{N})\otimes\mathcal{H}_0\rightarrow \mathcal{H} $ is a bounded left-inverse of $ \theta_A$ if and only if
\begin{align*}
L=S_{A,\Psi}^{-1}\theta_\Psi^*+V(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_A S_{A,\Psi}^{-1}\theta_\Psi^*),
\end{align*}
where $V:\ell^2(\mathbb{N})\otimes\mathcal{H}_0\to\mathcal{H}$ is a bounded linear operator.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}[label=(\roman*)]
\item $(\Leftarrow)$ Let $U :\mathcal{H} \to \ell^2(\mathbb{N})\otimes\mathcal{H}_0$ be a bounded linear operator. Then
\begin{align*}
\theta_\Psi^*(\theta_AS_{A,\Psi}^{-1}+(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_AS_{A,\Psi}^{-1}\theta_\Psi^*)U)=I_\mathcal{H}+\theta_\Psi^*U-\theta_\Psi^*U=I_\mathcal{H}.
\end{align*} Therefore $\theta_AS_{A,\Psi}^{-1}+(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_AS_{A,\Psi}^{-1}\theta_\Psi^*)U$ is a bounded right-inverse of $\theta_\Psi^*$.
$(\Rightarrow)$ Let $R:\mathcal{H} \to \ell^2(\mathbb{N})\otimes\mathcal{H}_0 $ be a bounded right-inverse of $ \theta_\Psi^*$. Define $U\coloneqq R$. Then $ \theta_AS_{A,\Psi}^{-1}+(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_AS_{A,\Psi}^{-1}\theta_\Psi^*)U= \theta_AS_{A,\Psi}^{-1}+(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_AS_{A,\Psi}^{-1}\theta_\Psi^*)R=\theta_AS_{A,\Psi}^{-1}+R-\theta_AS_{A,\Psi}^{-1}=R$.
\item $(\Leftarrow)$ Let $V: \ell^2(\mathbb{N})\otimes\mathcal{H}_0\rightarrow \mathcal{H}$ be a bounded linear operator. Then
\begin{align*}
(S_{A,\Psi}^{-1}\theta_\Psi^*+V(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_A S_{A,\Psi}^{-1}\theta_\Psi^*))\theta_A=I_\mathcal{H}+V\theta_A-V\theta_A I_\mathcal{H}=I_\mathcal{H}.
\end{align*} Therefore $S_{A,\Psi}^{-1}\theta_\Psi^*+V(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_A S_{A,\Psi}^{-1}\theta_\Psi^*)$ is a bounded left-inverse of $\theta_A$.
$(\Rightarrow)$ Let $ L:\ell^2(\mathbb{N})\otimes\mathcal{H}_0\rightarrow \mathcal{H}$ be a bounded left-inverse of $ \theta_A$. Define $V\coloneqq L$. Then $S_{A,\Psi}^{-1}\theta_\Psi^*+V(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_A S_{A,\Psi}^{-1}\theta_\Psi^*) =S_{A,\Psi}^{-1}\theta_\Psi^*+L(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_A S_{A,\Psi}^{-1}\theta_\Psi^*)=S_{A,\Psi}^{-1}\theta_\Psi^*+L-I_{\mathcal{H}}S_{A,\Psi}^{-1}\theta_\Psi^*= L$.
\end{enumerate}
\end{proof}
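Part (i) of Lemma \ref{SECONDLEMMA} is a one-line computation, and the parametrization is easy to check numerically. The following sketch (our illustration, assuming $\mathcal{H}=\mathbb{C}^3$, $\mathcal{H}_0=\mathbb{C}^2$, four indices, real matrices) builds $R=\theta_AS_{A,\Psi}^{-1}+(I-P_{A,\Psi})U$ for a random parameter $U$ and verifies $\theta_\Psi^*R=I_\mathcal{H}$.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, m = 3, 2, 4                      # H = C^3, H_0 = C^2, four indices; analysis space C^8

thA  = rng.standard_normal((m*k, d))   # theta_A  (stack of the A_n)
thPs = rng.standard_normal((m*k, d))   # theta_Psi
S    = thPs.T @ thA                    # S_{A,Psi}; generically invertible
Sinv = np.linalg.inv(S)
P    = thA @ Sinv @ thPs.T             # P_{A,Psi}

Uarb = rng.standard_normal((m*k, d))   # arbitrary bounded operator U
R = thA @ Sinv + (np.eye(m*k) - P) @ Uarb
print(np.allclose(thPs.T @ R, np.eye(d)))   # R is a right-inverse of theta_Psi^*
```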
\begin{theorem}
Let $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ be a factorable weak OVF in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$. Then a factorable weak OVF $ (\{B_n\}_{n} , \{\Phi_n\}_{n} )$ in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is a dual for $( \{A_n\}_{n}, \{\Psi_n\}_{n} )$ if and only if
\begin{align*}
&B_n=A_nS_{A,\Psi}^{-1}+L_n^*U-A_nS_{A,\Psi}^{-1}\theta_\Psi^*U,\\
&\Phi_n=\Psi_n(S_{A,\Psi}^{-1})^*+L_n^*V^*-\Psi_n (S_{A,\Psi}^{-1})^*\theta_A^*V^*, \quad \forall n \in \mathbb{N}
\end{align*}
such that the operator
\begin{align*}
S_{A, \Psi}^{-1}+VU-V\theta_AS_{A, \Psi}^{-1}\theta_\Psi^* U
\end{align*}
is bounded invertible, where $U :\mathcal{H} \to \ell^2(\mathbb{N})\otimes\mathcal{H}_0$ and $V:\ell^2(\mathbb{N})\otimes\mathcal{H}_0\to\mathcal{H}$ are bounded linear operators.
\end{theorem}
\begin{proof}
Lemmas \ref{FIRSTLEMMA} and \ref{SECONDLEMMA}
give the characterization of dual weak OVF as
\begin{align*}
&B_n=L_n^*(\theta_AS_{A,\Psi}^{-1}+(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_AS_{A,\Psi}^{-1}\theta_\Psi^*)U)\\
& \quad=A_nS_{A,\Psi}^{-1}+L_n^*U-A_nS_{A,\Psi}^{-1}\theta_\Psi^*U,\\
&\Phi_n=L_n^*(\theta_\Psi(S_{A,\Psi}^{-1})^*+(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_\Psi (S_{A,\Psi}^{-1})^*\theta_A^*)V^*)\\
&\quad=\Psi_n(S_{A,\Psi}^{-1})^*+L_n^*V^*-\Psi_n (S_{A,\Psi}^{-1})^*\theta_A^*V^*, \quad \forall n \in \mathbb{N}
\end{align*}
such that the operator
$$(S_{A,\Psi}^{-1}\theta_\Psi^*+V(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_A S_{A,\Psi}^{-1}\theta_\Psi^*))(\theta_AS_{A,\Psi}^{-1}+(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_AS_{A,\Psi}^{-1}\theta_\Psi^*)U) $$
is bounded invertible, where $U :\mathcal{H} \to \ell^2(\mathbb{N})\otimes\mathcal{H}_0$ and $V:\ell^2(\mathbb{N})\otimes\mathcal{H}_0\to\mathcal{H}$ are bounded linear operators. We expand and get
\begin{align*}
&(S_{A,\Psi}^{-1}\theta_\Psi^*+V(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_A S_{A,\Psi}^{-1}\theta_\Psi^*))(\theta_AS_{A,\Psi}^{-1}+(I_{\ell^2(\mathbb{N})\otimes\mathcal{H}_0}-\theta_AS_{A,\Psi}^{-1}\theta_\Psi^*)U)\\
&=S_{A, \Psi}^{-1}+VU-V\theta_AS_{A, \Psi}^{-1}\theta_\Psi^* U.
\end{align*}
\end{proof}
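The formulas of the preceding theorem can be exercised in the same finite-dimensional model (our illustration, not part of the text: $\mathcal{H}=\mathbb{C}^3$, $\mathcal{H}_0=\mathbb{C}^2$, four indices, real matrices). For random parameters $U,V$ the sketch below forms $\theta_B=\theta_AS_{A,\Psi}^{-1}+(I-P_{A,\Psi})U$ and $\theta_\Phi=\theta_\Psi(S_{A,\Psi}^{-1})^*+(I-P_{A,\Psi}^*)V^*$ and checks duality together with the stated expression for $S_{B,\Phi}$; taking $U=0$, $V=0$ recovers the canonical dual.

```python
import numpy as np

rng = np.random.default_rng(3)
d, k, m = 3, 2, 4
N = m * k

thA  = rng.standard_normal((N, d))
thPs = rng.standard_normal((N, d))
S    = thPs.T @ thA                     # S_{A,Psi}
Sinv = np.linalg.inv(S)
P    = thA @ Sinv @ thPs.T              # P_{A,Psi}

Uarb = rng.standard_normal((N, d))      # the parameters U and V of the theorem
Varb = rng.standard_normal((d, N))

thB  = thA @ Sinv + (np.eye(N) - P) @ Uarb          # theta_B
thPh = thPs @ Sinv.T + (np.eye(N) - P.T) @ Varb.T   # theta_Phi

print(np.allclose(thPs.T @ thB, np.eye(d)),         # duality relations
      np.allclose(thPh.T @ thA, np.eye(d)),
      np.allclose(thPh.T @ thB,                     # S_{B,Phi} as in the theorem
                  Sinv + Varb @ Uarb - Varb @ P @ Uarb))
```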
We now define the orthogonality for weak OVFs.
\begin{definition}
A weak OVF $ (\{B_n\}_{n} , \{\Phi_n\}_{n} )$ in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is said to be \textbf{orthogonal} to a weak OVF $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $ in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ if
\begin{align*}
\sum_{n=1}^\infty\Psi_n^*B_n= \sum_{n=1}^\infty\Phi^*_nA_n=0.
\end{align*}
\end{definition}
A remarkable property of orthogonal frames is that we can interpolate between them, as well as form their direct sum, to obtain new frames. This is illustrated in the following two results.
\begin{proposition}
Let $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $ and $ (\{B_n\}_{n} , \{\Phi_n\}_{n} )$ be two Parseval OVFs in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ which are orthogonal. If $C,D,E,F \in \mathcal{B}(\mathcal{H})$ are such that $ C^*E+D^*F=I_\mathcal{H}$, then
\begin{align*}
(\{A_nC+B_nD\}_{n}, \{\Psi_nE+\Phi_nF\}_{n})
\end{align*}
is a Parseval weak OVF in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$. In particular, if scalars $ c,d,e,f$ satisfy $\bar{c}e+\bar{d}f =1$, then $ (\{cA_n+dB_n\}_{n}, \{e\Psi_n+f\Phi_n\}_{n}) $ is a Parseval weak OVF.
\end{proposition}
\begin{proof}
We use the definition of frame operator and get
\begin{align*}
S_{AC+BD,\Psi E+\Phi F} &=\sum_{n=1}^\infty(\Psi_nE+\Phi_nF)^*(A_nC+B_nD)\\
&=E^*S_{A,\Psi}C+E^*\left(\sum_{n=1}^\infty\Psi_n^*B_n\right)D+F^*\left(\sum_{n=1}^\infty\Phi_n^*A_n\right)C+F^*S_{B,\Phi}D\\
&=E^*I_\mathcal{H}C+E^*0D+F^*0C+F^*I_\mathcal{H}D=I_\mathcal{H}.
\end{align*}
\end{proof}
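A concrete finite-dimensional instance of the interpolation (our illustration, with $\mathcal{H}=\mathbb{C}^3$, $\mathcal{H}_0=\mathbb{C}$, six indices): the sketch below takes two orthogonal Parseval OVFs with analysis operators $[I;0]$ and $[0;I]$, interpolates with scalars satisfying $\bar c e+\bar d f=1$, and confirms the resulting pair is Parseval.

```python
import numpy as np

# Two orthogonal Parseval OVFs on H = C^3 with H_0 = C (six functionals each):
# theta_A = theta_Psi = [I; 0] and theta_B = theta_Phi = [0; I].
A = [np.eye(3)[i:i+1, :] for i in range(3)] + [np.zeros((1, 3)) for _ in range(3)]
B = [np.zeros((1, 3)) for _ in range(3)] + [np.eye(3)[i:i+1, :] for i in range(3)]
Psi, Phi = A, B

c, d, e, f = 3/5, 4/5, 3/5, 4/5        # real scalars with c*e + d*f = 1

# Frame operator of the interpolated pair ({c A_n + d B_n}, {e Psi_n + f Phi_n})
S_new = sum((e*Psi[n] + f*Phi[n]).T @ (c*A[n] + d*B[n]) for n in range(6))
print(np.allclose(S_new, np.eye(3)))
```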
\begin{proposition}
If $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $ and $ (\{B_n\}_{n} , \{\Phi_n\}_{n} )$ are orthogonal weak OVFs in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$, then $(\{A_n\oplus B_n\}_{n},\{\Psi_n\oplus \Phi_n\}_{n})$ is a weak OVF in $ \mathcal{B}(\mathcal{H}\oplus \mathcal{H}, \mathcal{H}_0).$ Further, if both $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $ and $ (\{B_n\}_{n} , \{\Phi_n\}_{n} )$ are Parseval, then $(\{A_n\oplus B_n\}_{n},\{\Psi_n\oplus \Phi_n\}_{n})$ is Parseval.
\end{proposition}
\begin{proof}
Let $ h \oplus g \in \mathcal{H}\oplus \mathcal{H}$. Then
\begin{align*}
S_{A\oplus B, \Psi\oplus \Phi}(h\oplus g)&=\sum_{n=1}^\infty(\Psi_n\oplus \Phi_n)^*(A_n\oplus B_n)(h\oplus g)=\sum_{n=1}^\infty(\Psi_n\oplus \Phi_n)^*(A_nh+ B_ng)\\
&=\sum_{n=1}^\infty(\Psi_n^*(A_nh+B_ng)\oplus \Phi_n^*(A_nh+B_ng))
\\
&=\left(\sum_{n=1}^\infty\Psi_n^*A_nh+\sum_{n=1}^\infty\Psi_n^*B_ng\right)\oplus \left(\sum_{n=1}^\infty\Phi_n^*A_nh+\sum_{n=1}^\infty\Phi_n^*B_ng\right)\\
&=(S_{A,\Psi}h+0)\oplus(0+S_{B,\Phi}g) =(S_{A,\Psi}\oplus S_{B,\Phi})(h\oplus g).
\end{align*}
\end{proof}
\section{EQUIVALENCE OF WEAK OPERATOR-VALUED FRAMES}\label{SIMILARITYCOMPOSITIONANDTENSORPRODUCT}
Definition \ref{SIMILARITYOVFKAFTALLARSONZHANG} introduced similarity for OVFs. Here is the analogous notion for factorable weak OVFs.
\begin{definition}
A factorable weak OVF $( \{B_n\}_{n}, \{\Phi_n\}_{n} ) $ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is said to be \textbf{similar} to a factorable weak OVF $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ if there exist bounded invertible $ R_{A,B}, R_{\Psi, \Phi} \in \mathcal{B}(\mathcal{H})$ such that
\begin{align*}
B_n=A_nR_{A,B} , \quad \Phi_n=\Psi_nR_{\Psi, \Phi}, \quad \forall n \in \mathbb{N}.
\end{align*}
\end{definition}
Since $ R_{A,B} $ and $ R_{\Psi, \Phi}$ are bounded invertible, it easily follows that the notion of similarity is symmetric. In fact, ``similarity'' is an equivalence relation on the set
\begin{align*}
\left\{( \{A_n\}_{n}, \{\Psi_n\}_{n} ):( \{A_n\}_{n}, \{\Psi_n\}_{n} ) \text{ is a factorable weak OVF}\right\}.
\end{align*}
Similar frames have the nice property that the analysis, synthesis, and frame operators of one determine those of the other.
\begin{lemma}\label{SIM}
Let $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $ and $ ( \{B_n\}_{n}, \{\Phi_n\}_{n} ) $ be similar factorable weak OVFs and $B_n=A_nR_{A,B} ,\Phi_n=\Psi_nR_{\Psi, \Phi}, \forall n \in \mathbb{N}$, for some invertible $ R_{A,B} ,R_{\Psi, \Phi} \in \mathcal{B}(\mathcal{H}).$ Then
\begin{enumerate}[label=(\roman*)]
\item $ \theta_B=\theta_A R_{A,B}, \theta_\Phi=\theta_\Psi R_{\Psi,\Phi}$.
\item $S_{B,\Phi}=R_{\Psi,\Phi}^*S_{A, \Psi}R_{A,B}$.
\item $P_{B,\Phi}=P_{A, \Psi}.$
\end{enumerate}
\end{lemma}
\begin{proof}
$ \theta_B=\sum_{n=1}^\infty L_nB_n=\sum_{n=1}^\infty L_nA_nR_{A,B}=\theta_AR_{A,B} $. Similarly $ \theta_\Phi=\theta_\Psi R_{\Psi,\Phi}$. Now using operators $\theta_B$ and $\theta_\Phi$ we get $S_{B,\Phi}=\sum_{n=1}^\infty \Phi_n^*B_n=\sum_{n=1}^\infty(\Psi_nR_{\Psi,\Phi})^*(A_nR_{A,B})=R_{\Psi, \Phi}^*\left (\sum_{n=1}^\infty\Psi_n^*A_n\right )R_{A,B}=R_{\Psi,\Phi}^*S_{A, \Psi}R_{A,B}$. We now use (i) and (ii) to get
\begin{align*}
P_{B,\Phi}=\theta_BS_{B,\Phi}^{-1}\theta_\Phi^*=(\theta_AR_{A,B})(R_{\Psi,\Phi}^*S_{A, \Psi}R_{A,B})^{-1}(\theta_\Psi R_{\Psi,\Phi})^*=P_{A,\Psi}.
\end{align*}
\end{proof}
We now characterize similarity in terms of operators.
\begin{theorem}\label{RIGHTSIMILARITY}
For two factorable weak OVFs $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $ and $ ( \{B_n\}_{n}, \{\Phi_n\}_{n} ) $, the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $B_n=A_nR_{A,B} , \Phi_n=\Psi_nR_{\Psi, \Phi} , \forall n \in \mathbb{N},$ for some invertible $ R_{A,B} ,R_{\Psi, \Phi} \in \mathcal{B}(\mathcal{H}). $
\item $\theta_B=\theta_AR_{A,B} , \theta_\Phi=\theta_\Psi R_{\Psi, \Phi} $ for some invertible $ R_{A,B} ,R_{\Psi, \Phi} \in \mathcal{B}(\mathcal{H}). $
\item $P_{B,\Phi}=P_{A,\Psi}.$
\end{enumerate}
If one of the above conditions is satisfied, then the invertible operators in $ \operatorname{(i)}$ and $ \operatorname{(ii)}$ are unique and are given by $R_{A,B}=S_{A,\Psi}^{-1}\theta_\Psi^*\theta_B$, $R_{\Psi, \Phi}=(S_{A,\Psi}^{-1})^*\theta_A^*\theta_\Phi.$ If $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $ is Parseval, then $ ( \{B_n\}_{n}, \{\Phi_n\}_{n} ) $ is Parseval if and only if $R_{\Psi, \Phi}^*R_{A,B}=I_\mathcal{H} $, if and only if $R_{A,B}R_{\Psi, \Phi}^*=I_\mathcal{H} $.
\end{theorem}
\begin{proof}
The implications (i) $\Rightarrow$ (ii) $\Rightarrow$ (iii) follow from Lemma \ref{SIM}. Assume (ii) holds. We show that (i) holds. Using Equation (\ref{LEQUATION}), $ B_n=L_n^*\theta_B=L_n^*\theta_AR_{A,B}=A_nR_{A,B}$; the same argument gives $ \Phi_n=\Psi_nR_{\Psi,\Phi}$. Assume (iii). We note that $ \theta_B=P_{B,\Phi}\theta_B$ and $ \theta_\Phi=P_{B,\Phi}^*\theta_\Phi.$ Using these, $ \theta_B=P_{A,\Psi}\theta_B=\theta_A(S_{A,\Psi}^{-1}\theta_\Psi^*\theta_B)$ and $ \theta_\Phi=P_{A,\Psi}^*\theta_\Phi=(\theta_AS_{A,\Psi}^{-1}\theta_\Psi^*)^*\theta_\Phi=\theta_\Psi((S_{A,\Psi}^{-1})^*\theta_A^*\theta_\Phi).$ We now show that both $S_{A,\Psi}^{-1}\theta_\Psi^*\theta_B$ and $(S_{A,\Psi}^{-1})^*\theta_A^*\theta_\Phi$ are invertible. This follows from
\begin{align*}
(S_{A,\Psi}^{-1}\theta_\Psi^*\theta_B)(S_{B,\Phi}^{-1}\theta_\Phi^*\theta_A)=S_{A,\Psi}^{-1}\theta_\Psi^*P_{B,\Phi}\theta_A= S_{A,\Psi}^{-1}\theta_\Psi^*P_{A,\Psi}\theta_A= S_{A,\Psi}^{-1}\theta_\Psi^*\theta_A=I_\mathcal{H},\\
( S_{B,\Phi}^{-1}\theta_\Phi^*\theta_A)(S_{A,\Psi}^{-1}\theta_\Psi^*\theta_B)= S_{B,\Phi}^{-1}\theta_\Phi^*P_{A,\Psi}\theta_B=S_{B,\Phi}^{-1}\theta_\Phi^*P_{B,\Phi}\theta_B=S_{B,\Phi}^{-1}\theta_\Phi^*\theta_B=I_\mathcal{H}
\end{align*}
and
\begin{align*}
((S_{A,\Psi}^{-1})^*\theta_A^*\theta_\Phi)((S_{B,\Phi}^{-1})^*\theta_B^*\theta_\Psi)&=(S_{A,\Psi}^{-1})^*\theta_A^*P_{B,\Phi}^*\theta_\Psi
=(S_{A,\Psi}^{-1})^*\theta_A^*P_{A,\Psi}^*\theta_\Psi\\
&=(S_{A,\Psi}^{-1})^*\theta_A^*\theta_\Psi=I_\mathcal{H},\\
((S_{B,\Phi}^{-1})^*\theta_B^*\theta_\Psi)((S_{A,\Psi}^{-1})^*\theta_A^*\theta_\Phi)&=(S_{B,\Phi}^{-1})^*\theta_B^*P_{A,\Psi}^* \theta_\Phi=(S_{B,\Phi}^{-1})^*\theta_B^*P_{B,\Phi}^* \theta_\Phi\\
&=(S_{B,\Phi}^{-1})^*\theta_B^* \theta_\Phi= I_\mathcal{H}.
\end{align*}
Let $ R_{A,B}, R_{\Psi,\Phi} \in \mathcal{B}(\mathcal{H}) $ be invertible. From the previous arguments, $ R_{A,B}$ and $R_{\Psi,\Phi} $ satisfy (i) if and only if they satisfy (ii). Let $B_n=A_nR_{A,B}$ and $\Phi_n=\Psi_nR_{\Psi, \Phi}$ for all $n \in \mathbb{N}.$ Using (ii), $\theta_B=\theta_AR_{A,B}$ and $\theta_\Phi=\theta_\Psi R_{\Psi, \Phi}$, so $\theta_\Psi^*\theta_B=\theta_\Psi^*\theta_AR_{A,B}=S_{A,\Psi}R_{A,B}$ and $\theta_A^*\theta_\Phi=\theta_A^*\theta_\Psi R_{\Psi, \Phi}=S_{A,\Psi}^*R_{\Psi, \Phi}$. These imply the stated formulas for $R_{A,B}$ and $ R_{\Psi, \Phi}.$ For the last statement, we recall that $ S_{B,\Phi}=R_{\Psi,\Phi}^*S_{A, \Psi}R_{A,B}$.
\end{proof}
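The invariance $P_{B,\Phi}=P_{A,\Psi}$ under similarity is again a matrix identity in finite dimensions. The sketch below (our illustration, assuming $\mathcal{H}=\mathbb{C}^3$, $\mathcal{H}_0=\mathbb{C}^2$, four indices, real matrices) applies random invertible $R_{A,B}$, $R_{\Psi,\Phi}$ and checks both $S_{B,\Phi}=R_{\Psi,\Phi}^*S_{A,\Psi}R_{A,B}$ and $P_{B,\Phi}=P_{A,\Psi}$.

```python
import numpy as np

rng = np.random.default_rng(5)
d, k, m = 3, 2, 4
N = m * k

thA  = rng.standard_normal((N, d))      # theta_A
thPs = rng.standard_normal((N, d))      # theta_Psi
S    = thPs.T @ thA                     # S_{A,Psi}
P    = thA @ np.linalg.inv(S) @ thPs.T  # P_{A,Psi}

R_AB = rng.standard_normal((d, d))      # generically invertible
R_PP = rng.standard_normal((d, d))

thB  = thA @ R_AB                       # theta_B   = theta_A   R_{A,B}
thPh = thPs @ R_PP                      # theta_Phi = theta_Psi R_{Psi,Phi}
S2   = thPh.T @ thB                     # S_{B,Phi}
P2   = thB @ np.linalg.inv(S2) @ thPh.T # P_{B,Phi}

print(np.allclose(S2, R_PP.T @ S @ R_AB), np.allclose(P2, P))
```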
\begin{corollary}
For any given factorable weak OVF $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $, the canonical dual of $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $ is the only dual factorable weak OVF that is similar to $ ( \{A_n\}_{n}, $ $ \{\Psi_n\}_{n} ) $.
\end{corollary}
\begin{proof}
Let $ (\{B_n\}_{n} , \{\Phi_n\}_{n} )$ be a factorable weak OVF which is both a dual of and similar to $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $. Then we have $ \theta_B^*\theta_\Psi=I_\mathcal{H}=\theta_\Phi^*\theta_A$ and there exist invertible $ R_{A,B},R_{\Psi,\Phi}\in \mathcal{B}(\mathcal{H})$ such that $B_n=A_nR_{A,B}$ and $\Phi_n=\Psi_nR_{\Psi, \Phi}$ for all $n \in \mathbb{N} $. Theorem \ref{RIGHTSIMILARITY} gives $R_{A,B}=S_{A,\Psi}^{-1}\theta_\Psi^*\theta_B$ and $ R_{\Psi, \Phi}=(S_{A,\Psi}^{-1})^*\theta_A^*\theta_\Phi.$ But then $R_{A,B}=S_{A,\Psi}^{-1}I_\mathcal{H}=S_{A,\Psi}^{-1}$ and $ R_{\Psi, \Phi}=(S_{A,\Psi}^{-1})^*I_\mathcal{H}=(S_{A,\Psi}^{-1})^*.$ Therefore $ (\{B_n\}_{n} , \{\Phi_n\}_{n} )$ is the canonical dual of $ ( \{A_n\}_{n}, $ $ \{\Psi_n\}_{n} ) $.
\end{proof}
\begin{corollary}
Two similar factorable weak OVFs cannot be orthogonal.
\end{corollary}
\begin{proof}
Let a factorable weak OVF $ ( \{B_n\}_{n}, \{\Phi_n\}_{n} ) $ be similar to $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $. Choose invertible $ R_{A,B},R_{\Psi,\Phi}\in \mathcal{B}(\mathcal{H})$ such that $B_n=A_nR_{A,B} , \Phi_n=\Psi_nR_{\Psi, \Phi} , \forall n \in \mathbb{N} $. Using Theorem \ref{RIGHTSIMILARITY} and the invertibility of $R_{A,B}^* $ and $S_{A,\Psi}^* $, we get
\begin{align*}
\theta_B^*\theta_\Psi=(\theta_AR_{A,B})^*\theta_\Psi=R_{A,B}^*\theta_A^*\theta_\Psi=R_{A,B}^*S_{A,\Psi}^*\neq 0.
\end{align*}
\end{proof}
For every factorable weak OVF $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $, each of the pairs $( \{A_nS_{A, \Psi}^{-1}\}_{n}, \{\Psi_n\}_{n})$ and $ (\{A_n \}_{n}, \{\Psi_n(S_{A,\Psi}^{-1})^*\}_{n})$ is a Parseval factorable weak OVF which is similar to $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $. Thus every factorable weak OVF is similar to a Parseval one.
\section{WEAK OPERATOR-VALUED FRAMES GENERATED BY GROUPS AND GROUP LIKE UNITARY SYSTEMS} \label{FRAMESANDDISCRETEGROUPREPRESENTATIONS}
In this section, $G$ denotes a discrete group and $\pi$ a unitary representation of $G$; the identity element of $G$ is denoted by $e$.
\begin{definition}
Let $ \pi$ be a unitary representation of a discrete
group $ G$ on a Hilbert space $ \mathcal{H}.$ An operator $ A$ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is called a \textbf{factorable operator frame generator} (resp. a Parseval frame generator) w.r.t. an operator $ \Psi$ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ if $(\{A_g\coloneqq A \pi_{g^{-1}}\}_{g\in G}, \{\Psi_g\coloneqq \Psi \pi_{g^{-1}}\}_{g\in G})$ is a factorable weak OVF (resp. Parseval) in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$. In this case, we say that $ (A,\Psi)$ is an operator frame generator for $\pi$.
\end{definition}
\begin{proposition}\label{REPRESENATIONLEMMA}
Let $ (A,\Psi)$ and $ (B,\Phi)$ be operator frame generators in $\mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ for a unitary representation $ \pi$ of $G$ on $ \mathcal{H}.$ Then
\begin{enumerate}[label=(\roman*)]
\item $ \theta_A\pi_g=(\lambda_g\otimes I_{\mathcal{H}_0})\theta_A, \theta_\Psi \pi_g=(\lambda_g\otimes I_{\mathcal{H}_0})\theta_\Psi, \forall g \in G.$
\item $ \theta_A^*\theta_B, \theta_\Psi^*\theta_\Phi,\theta_A^*\theta_\Phi$ are in the commutant $ \pi(G)'$ of $ \pi(G)''.$ Further, $ S_{A,\Psi} \in \pi(G)'$.
\item $ \theta_AT\theta_\Psi^*, \theta_AT\theta_B^*, \theta_\Psi T\theta_\Phi^* \in \mathscr{R}(G)\otimes \mathcal{B}(\mathcal{H}_0), \forall T \in \pi(G)'.$ In particular, $ P_{A, \Psi} \in \mathscr{R}(G)$ $\otimes \mathcal{B}(\mathcal{H}_0). $
\end{enumerate}
\end{proposition}
\begin{proof} Let $ g,p,q \in G $ and $ h \in \mathcal{H}_0.$
\begin{enumerate}[label=(\roman*)]
\item From the definition of $ \lambda_g $ and $ \chi_q$, we get $ \lambda_g\chi_q=\chi_{gq}.$ Therefore $ L_{gq}h=\chi_{gq}\otimes h= \lambda_g\chi_q\otimes h= (\lambda_g\otimes I_{\mathcal{H}_0})(\chi_q\otimes h)=(\lambda_g\otimes I_{\mathcal{H}_0})L_qh.$ Using this,
\begin{align*}
\theta_A\pi_g&=\sum\limits_{p\in G} L_pA_p\pi_g=\sum\limits_{p\in G} L_pA\pi_{p^{-1}}\pi_g=\sum\limits_{p\in G} L_pA\pi_{{p^{-1}}g}\\
&=\sum\limits_{q\in G} L_{gq}A\pi_{q^{-1}}=\sum\limits_{q\in G}(\lambda_g\otimes I_{\mathcal{H}_0}) L_{q}A\pi_{q^{-1}}=(\lambda_g\otimes I_{\mathcal{H}_0})\theta_A.
\end{align*}
Similarly $ \theta_\Psi \pi_g=(\lambda_g\otimes I_{\mathcal{H}_0})\theta_\Psi.$
\item $ \theta_A^*\theta_B\pi_g=\theta_A^* (\lambda_g\otimes I_{\mathcal{H}_0})\theta_B=((\lambda_{g^{-1}}\otimes I_{\mathcal{H}_0})\theta_A)^*\theta_B=(\theta_A\pi_{g^{-1}})^*\theta_B=\pi_g\theta_A^*\theta_B.$ In the same way, $ \theta_\Psi^*\theta_\Phi, \theta_A^*\theta_\Phi\in \pi(G)'.$ By taking $ B=A$ and $ \Phi=\Psi$ we get $ S_{A,\Psi} \in \pi(G)'.$
\item Let $ T \in \pi(G)'.$ Then
\begin{align*}
\theta_AT\theta_\Psi^*(\lambda_g\otimes I_{\mathcal{H}_0})&= \theta_AT((\lambda_{g^{-1}}\otimes I_{\mathcal{H}_0})\theta_\Psi)^*=\theta_AT\pi_g\theta_\Psi^*\\
&=\theta_A\pi_gT\theta_\Psi^*=(\lambda_g\otimes I_{\mathcal{H}_0})\theta_AT\theta_\Psi^*.
\end{align*}
From the construction of $ \mathscr{L}(G),$ we now get $\theta_AT\theta_\Psi^* \in (\mathscr{L}(G)\otimes \{I_{\mathcal{H}_0}\})'=\mathscr{L}(G)'\otimes \{I_{\mathcal{H}_0}\}'=\mathscr{R}(G)\otimes \mathcal{B}(\mathcal{H}_0).$ Similarly $\theta_AT\theta_B^*, \theta_\Psi S\theta_\Phi^* \in \mathscr{R}(G)\otimes \mathcal{B}(\mathcal{H}_0), \forall S \in \pi (G)'.$ For the choice $ T=S_{A,\Psi}^{-1}$ we get $ P_{A, \Psi} \in \mathscr{R}(G)\otimes \mathcal{B}(\mathcal{H}_0). $
\end{enumerate}
\end{proof}
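Part (i) of Proposition \ref{REPRESENATIONLEMMA} can be seen concretely for a finite cyclic group. In the sketch below (our illustration, not part of the text: $G=\mathbb{Z}_4$, $\mathcal{H}=\ell^2(G)=\mathbb{C}^4$, $\mathcal{H}_0=\mathbb{C}$, and $\pi=\lambda$ the left regular representation, with a random generator $A$), the intertwining relation $\theta_A\pi_g=(\lambda_g\otimes I_{\mathcal{H}_0})\theta_A$ is checked for every $g$.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 4                                   # G = Z_4, H = l2(G) = C^4, H_0 = C
Lam = [np.roll(np.eye(N), g, axis=0) for g in range(N)]   # lambda_g chi_q = chi_{g+q}

# Take pi = lambda (the left regular representation) and a random generator A.
A   = rng.standard_normal((1, N))
Ag  = [A @ Lam[(-g) % N] for g in range(N)]   # A_g = A pi_{g^{-1}}
thA = np.vstack(Ag)                           # theta_A, rows indexed by g

# (i) of the proposition: theta_A pi_g = (lambda_g tensor I_{H_0}) theta_A
ok = all(np.allclose(thA @ Lam[g], Lam[g] @ thA) for g in range(N))
print(ok)
```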
\begin{theorem}\label{gc1}
Let $ G$ be a discrete group and $( \{A_g\}_{g\in G}, $ $ \{\Psi_g\}_{g\in G})$ be a Parseval factorable weak OVF in $ \mathcal{B}(\mathcal{H},\mathcal{H}_0).$ Then there is a unitary representation $ \pi$ of $ G$ on $ \mathcal{H}$ for which
$$ A_g=A_e\pi_{g^{-1}}, \quad\Psi_g=\Psi_e\pi_{g^{-1}}, \quad\forall g \in G$$
if and only if
$$A_{gp}A_{gq}^*=A_pA_q^* ,\quad A_{gp}\Psi_{gq}^*=A_p\Psi_q^*,\quad \Psi_{gp}\Psi_{gq}^*=\Psi_p\Psi_q^*, \quad \forall g,p,q \in G.$$
\end{theorem}
\begin{proof}
$(\Rightarrow)$
$$A_{gp}\Psi_{gq}^*= A_e \pi_{(gp)^{-1}}(\Psi_e\pi_{(gq)^{-1}})^*=A_e\pi_{p^{-1}}\pi_{g^{-1}}\pi_g\pi_q\Psi_e^*=A_p\Psi_q^*, \quad\forall g,p,q \in G.$$
Similarly we get other two equalities.
$(\Leftarrow)$ Under the assumptions, the following three equalities hold for all $ g \in G$; we derive the second, the remaining two being similar:
\begin{align*}
& (\lambda_g\otimes I_{\mathcal{H}_0})\theta_A\theta_A^*=\theta_A\theta_A^*(\lambda_g\otimes I_{\mathcal{H}_0}), ~ (\lambda_g\otimes I_{\mathcal{H}_0})\theta_A\theta_\Psi^*=\theta_A\theta_\Psi^*(\lambda_g\otimes I_{\mathcal{H}_0}),\\
& (\lambda_g\otimes I_{\mathcal{H}_0})\theta_\Psi\theta_\Psi^*=\theta_\Psi\theta_\Psi^*(\lambda_g\otimes I_{\mathcal{H}_0}).
\end{align*}
Since $ \lambda_g$ is unitary, $(\lambda_g\otimes I_{\mathcal{H}_0})^{-1}=(\lambda_g\otimes I_{\mathcal{H}_0})^*$; we also observed in the proof of Proposition \ref{REPRESENATIONLEMMA} that $(\lambda_g\otimes I_{\mathcal{H}_0})L_q=L_{gq}.$ So
\begin{align*}
(\lambda_g\otimes I_{\mathcal{H}_0})\theta_A\theta_\Psi^*(\lambda_g\otimes I_{\mathcal{H}_0})^*&=\left(\sum\limits_{p\in G}(\lambda_g\otimes I_{\mathcal{H}_0})L_pA_p\right)\left(\sum\limits_{q\in G}(\lambda_g\otimes I_{\mathcal{H}_0})L_q\Psi_q\right)^*\\
&=\sum\limits_{p\in G} L_{gp}\left(\sum\limits_{q\in G}A_p\Psi_q^*L_{gq}^*\right)
=\sum\limits_{r\in G} L_r\left(\sum\limits_{s\in G}A_{g^{-1}r}\Psi_{g^{-1}s}^*L_s^*\right)\\
& =\sum\limits_{r\in G} L_r\left(\sum\limits_{s\in G}A_r\Psi_s^*L_s^*\right)=\theta_A\theta_\Psi^*.
\end{align*}
Define $ \pi : G \ni g \mapsto \pi_g\coloneqq \theta_\Psi^*(\lambda_g\otimes I_{\mathcal{H}_0})\theta_A \in \mathcal{B}(\mathcal{H}).$ Using the Parseval property $\theta_\Psi^*\theta_A=I_\mathcal{H}$,
\begin{align*}
\pi_g\pi_h&=\theta_\Psi^*(\lambda_g\otimes I_{\mathcal{H}_0})\theta_A \theta_\Psi^*(\lambda_h\otimes I_{\mathcal{H}_0})\theta_A =\theta_\Psi^*\theta_A \theta_\Psi^*(\lambda_g\otimes I_{\mathcal{H}_0}) (\lambda_h\otimes I_{\mathcal{H}_0})\theta_A \\
&= \theta_\Psi^*(\lambda_{gh}\otimes I_{\mathcal{H}_0})\theta_A =\pi_{gh}, \quad \forall g, h \in G
\end{align*}
and
\begin{align*}
\pi_g\pi_g^*&=\theta_\Psi^*(\lambda_g\otimes I_{\mathcal{H}_0})\theta_A\theta_A^*(\lambda_{g^{-1}}\otimes I_{\mathcal{H}_0})\theta_\Psi\\
&=\theta_\Psi^*\theta_A\theta_A^*(\lambda_g\otimes I_{\mathcal{H}_0})(\lambda_{g^{-1}}\otimes I_{\mathcal{H}_0})\theta_\Psi=I_\mathcal{H}, \\ \pi_g^*\pi_g&=\theta_A^*(\lambda_{g^{-1}}\otimes I_{\mathcal{H}_0})\theta_\Psi\theta_\Psi^*(\lambda_{g}\otimes I_{\mathcal{H}_0})\theta_A\\
&=\theta_A^*(\lambda_{g^{-1}}\otimes I_{\mathcal{H}_0})(\lambda_{g}\otimes I_{\mathcal{H}_0})\theta_\Psi\theta_\Psi^*\theta_A=I_\mathcal{H}, \quad \forall g \in G.
\end{align*}
Since $ G $ carries the discrete topology, this proves that $ \pi$ is a unitary representation. It remains to prove that $ A_g=A_e\pi_{g^{-1}}$ and $\Psi_g=\Psi_e\pi_{g^{-1}} $ for all $ g \in G$. Indeed,
\begin{align*}
A_e\pi_{g^{-1}}&= L_e^*\theta_A\theta_\Psi^*(\lambda_{g^{-1}}\otimes I_{\mathcal{H}_0})\theta_A=L_e^*(\lambda_{g^{-1}}\otimes I_{\mathcal{H}_0})\theta_A\theta_\Psi^*\theta_A\\
&=((\lambda_g\otimes I_{\mathcal{H}_0})L_e)^*\theta_A=L_{ge}^*\theta_A=A_g,
\end{align*}
and
\begin{align*}
\Psi_e\pi_{g^{-1}}&=L_e^*\theta_\Psi \theta_\Psi^*(\lambda_{g^{-1}}\otimes I_{\mathcal{H}_0})\theta_A=L_e^*(\lambda_{g^{-1}}\otimes I_{\mathcal{H}_0})\theta_\Psi\theta_\Psi^*\theta_A\\
&=((\lambda_g\otimes I_{\mathcal{H}_0})L_e)^*\theta_\Psi=L_{ge}^*\theta_\Psi=\Psi_g.
\end{align*}
\end{proof}
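The direct part of Theorem \ref{gc1} can be tested exhaustively over a small group. The sketch below (our illustration, not part of the text: $G=\mathbb{Z}_5$, $\pi_g$ the cyclic shift on $\mathcal{H}=\mathbb{C}^5$, $\mathcal{H}_0=\mathbb{C}$, random generators $A,\Psi$) verifies the shift-invariance $A_{gp}\Psi_{gq}^*=A_p\Psi_q^*$ for all $g,p,q$; the other two relations follow by the same computation.

```python
import numpy as np

rng = np.random.default_rng(8)
N = 5                                    # G = Z_5, pi_g = cyclic shift by g on H = C^5
Pi = [np.roll(np.eye(N), g, axis=0) for g in range(N)]

A   = rng.standard_normal((1, N))        # H_0 = C, so A and Psi are row vectors
Psi = rng.standard_normal((1, N))
Ag   = [A   @ Pi[(-g) % N] for g in range(N)]   # A_g   = A   pi_{g^{-1}}
Psig = [Psi @ Pi[(-g) % N] for g in range(N)]   # Psi_g = Psi pi_{g^{-1}}

# A_{gp} Psi_{gq}^* = A_p Psi_q^* for all g, p, q (scalars here, since H_0 = C)
ok = all(np.allclose(Ag[(g+p) % N] @ Psig[(g+q) % N].T, Ag[p] @ Psig[q].T)
         for g in range(N) for p in range(N) for q in range(N))
print(ok)
```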
In the direct part of Theorem \ref{gc1} we can remove the word `Parseval', since it is not used in that part of the proof; the same is true in the following corollary.
\begin{corollary}
Let $ G$ be a discrete group and $( \{A_g\}_{g\in G}, \{\Psi_g\}_{g\in G})$ be a factorable weak OVF in $ \mathcal{B}(\mathcal{H},\mathcal{H}_0).$ Then there is a unitary representation $ \pi$ of $ G$ on $ \mathcal{H}$ for which
\begin{enumerate}[label=(\roman*)]
\item $ A_g=A_eS_{A,\Psi}^{-1}\pi_{g^{-1}}S_{A,\Psi}, \Psi_g=\Psi_e\pi_{g^{-1}} $ for all $ g \in G$ if and only if
\begin{align*}
& A_{gp}S_{A,\Psi}^{-1}(S_{A,\Psi}^{-1})^*A_{gq}^*=A_pS_{A,\Psi}^{-1}(S_{A,\Psi}^{-1})^*A_q^* ,\quad A_{gp}S_{A,\Psi}^{-1}\Psi_{gq}^*=A_pS_{A,\Psi}^{-1}\Psi_q^*,\\
& \Psi_{gp}\Psi_{gq}^*=\Psi_p\Psi_q^*, \quad \forall g,p,q \in G.
\end{align*}
\item $ A_g=A_e\pi_{g^{-1}}, \Psi_g=\Psi_e(S_{A,\Psi}^{-1})^*\pi_{g^{-1}}S_{A,\Psi} $ for all $ g \in G$ if and only if
\begin{align*}
&A_{gp}A_{gq}^*=A_pA_q^* ,\quad A_{gp}S_{A,\Psi}^{-1}\Psi_{gq}^*=A_pS_{A,\Psi}^{-1}\Psi_q^*,\\
& \Psi_{gp}(S_{A,\Psi}^{-1})^*S_{A,\Psi}^{-1}\Psi_{gq}^*=\Psi_p(S_{A,\Psi}^{-1})^*S_{A,\Psi}^{-1}\Psi_q^*, \quad \forall g,p,q \in G.
\end{align*}
\end{enumerate}
\end{corollary}
\begin{proof}
\begin{enumerate}[label=(\roman*)]
\item We apply Theorem \ref{gc1} to the factorable Parseval OVF $(\{A_gS_{A,\Psi}^{-1}\}_{g\in G}, $ $ \{\Psi_g\}_{g\in G})$ to get: there is a unitary representation $ \pi$ of $ G$ on $ \mathcal{H}$ for which $ A_gS_{A,\Psi}^{-1}=(A_eS_{A,\Psi}^{-1})\pi_{g^{-1}}, \Psi_g=\Psi_e\pi_{g^{-1}} $ for all $ g \in G$ if and only if
\begin{align*}
&(A_{gp}S_{A,\Psi}^{-1})(A_{gq}S_{A,\Psi}^{-1})^*=(A_pS_{A,\Psi}^{-1})(A_qS_{A,\Psi}^{-1})^*, \quad (A_{gp}S_{A,\Psi}^{-1})\Psi_{gq}^*= (A_pS_{A,\Psi}^{-1})\Psi_q^*,\\
&\Psi_{gp}\Psi_{gq}^*=\Psi_p\Psi_q^*, \quad \forall g,p,q \in G.
\end{align*}
\item We apply Theorem \ref{gc1} to the factorable Parseval OVF $( \{A_g\}_{g\in G}$, $ \{\Psi_g(S_{A,\Psi}^{-1})^*\}_{g\in G})$ to get: there is a unitary representation $ \pi$ of $ G$ on $ \mathcal{H}$ for which $ A_g=A_e\pi_{g^{-1}}$, $ \Psi_gS_{A,\Psi}^{-1}=(\Psi_e(S_{A,\Psi}^{-1})^*)\pi_{g^{-1}} $ for all $ g \in G$ if and only if
\begin{align*}
&A_{gp}A_{gq}^*=A_pA_q^* , \quad A_{gp}(\Psi_{gq}(S_{A,\Psi}^{-1})^*)^*=A_p(\Psi_q(S_{A,\Psi}^{-1})^*)^*,\\
&(\Psi_{gp}(S_{A,\Psi}^{-1})^*)(\Psi_{gq}(S_{A,\Psi}^{-1})^*)^*=(\Psi_p(S_{A,\Psi}^{-1})^*)(\Psi_q(S_{A,\Psi}^{-1})^*)^* , \quad \forall g,p,q \in G.
\end{align*}
\end{enumerate}
\end{proof}
We next address the situation of a factorable weak OVF indexed by a group-like unitary system. Group-like unitary systems arose from the study of Weyl-Heisenberg frames and were first formally defined in \cite{GABARDO}. In the sequel, $\mathbb{T}$ denotes the unit circle group centered at the origin, equipped with the usual multiplication.
\begin{definition}(\cite{GABARDO})
A collection $ \mathcal{U}\subseteq \mathcal{B}(\mathcal{H})$ of unitary operators containing $I_\mathcal{H}$ is called a \textbf{unitary system}. If the group generated by the unitary system $ \mathcal{U}$, denoted by $ \operatorname{group}(\mathcal{U})$, is such that
\begin{enumerate}[label=(\roman*)]
\item $\operatorname{group}(\mathcal{U}) \subseteq \mathbb{T}\mathcal{U}\coloneqq \{\alpha U : \alpha \in \mathbb{T}, U\in \mathcal{U} \}$, and
\item $\mathcal{U}$ is linearly independent, i.e., $\mathbb{T}U\ne\mathbb{T}V $ whenever $ U, V \in \mathcal{U}$ are such that $ U\ne V,$
\end{enumerate}
then $\mathcal{U}$ is called a \textbf{group-like unitary system}.
\end{definition}
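As an illustration (our own example, not taken from \cite{GABARDO}): in the finite-dimensional Weyl-Heisenberg setting, the translation and modulation unitaries on $\mathbb{C}^n$ commute up to an $n$-th root of unity, so the collection $\{T^aM^b : 0\le a,b<n\}$ is a group-like unitary system, since the generated group only contributes extra scalar factors from $\mathbb{T}$. A minimal numerical check of the commutation relation, sketched in Python:

```python
import numpy as np

n = 4
w = np.exp(2j * np.pi / n)                # primitive n-th root of unity in the circle group

T = np.roll(np.eye(n), 1, axis=0)         # translation: (T x)_k = x_{k-1 mod n}
M = np.diag(w ** np.arange(n))            # modulation:  (M x)_k = w^k x_k

# Commutation relation M T = w T M: every word in T and M therefore equals a
# unimodular scalar times some T^a M^b, so the generated group lies inside
# the set of scalar multiples of {T^a M^b} -- condition (i) of the definition.
print(np.allclose(M @ T, w * (T @ M)))
```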
Let $ \mathcal{U}$ be a group-like unitary system. As in \cite{GABARDOHANGROUPLIKE}, we define mappings
\begin{align*}
f:\operatorname{group}(\mathcal{U})\rightarrow \mathbb{T} \quad \text{ and } \quad \sigma:\operatorname{group}(\mathcal{U})\rightarrow \mathcal{U}
\end{align*}
in the following way. For each $ U \in \operatorname{group}(\mathcal{U}) $ there are unique $\alpha\in \mathbb{T}$ and $V \in \mathcal{U} $ such that $ U=\alpha V$; define $ f(U)\coloneqq\alpha$ and $\sigma(U)\coloneqq V $. These maps $ f, \sigma $ are well defined and satisfy
\begin{align*}
U=f(U)\sigma(U), \quad \forall U \in \operatorname{group}(\mathcal{U}).
\end{align*}
These mappings are called the \textbf{corresponding mappings} associated with $ \mathcal{U}$. They can be pictured as follows.
\begin{center}
\[
\begin{tikzcd}
\operatorname{group}(\mathcal{U}) \arrow[d,"\sigma"] \arrow[dr,"f"]\subseteq\mathbb{T}\mathcal{U}\\
\mathcal{U} & \mathbb{T}\arrow[l] \\
\end{tikzcd}
\]
\end{center}
The next result gives some fundamental properties of the corresponding mappings associated with a group-like unitary system.
\begin{proposition}(\cite{GABARDOHANGROUPLIKE})\label{PER}
For a group-like unitary system $\mathcal{U}$ and $ f, \sigma $ as above,
\begin{enumerate}[label=(\roman*)]
\item $ f(U\sigma(VW))f(VW)=f(\sigma(UV)W)f(UV), \forall U,V,W \in \operatorname{group}(\mathcal{U}).$
\item $ \sigma(U\sigma(VW))=\sigma(\sigma(UV)W), \forall U,V,W \in \operatorname{group} (\mathcal{U}).$
\item $ \sigma(U)=U$ and $ f(U)=1$ for all $ U \in \mathcal{U}.$
\item If $ V, W \in \operatorname{group} (\mathcal{U}),$ then
\begin{align*}
\mathcal{U}&=\{\sigma(UV) : U \in \mathcal{U}\}=\{\sigma(VU^{-1}) : U \in \mathcal{U}\}\\
&=\{\sigma(VU^{-1}W) : U \in \mathcal{U}\}=\{\sigma(V^{-1}U) : U \in \mathcal{U}\}.
\end{align*}
\item For fixed $ V, W \in \mathcal{U}$, the following mappings are injective from $ \mathcal{U} $ to itself:
\begin{align*}
U\mapsto \sigma(VU) \quad (\text{resp.} ~ \sigma(UV), \sigma(UV^{-1}), \sigma(V^{-1}U),\\
\sigma(VU^{-1}), \sigma(U^{-1}V), \sigma(VU^{-1}W)).
\end{align*}
\end{enumerate}
\end{proposition}
Since $\operatorname{group} (\mathcal{U}) $ is a group, we note that in (iv) of Proposition \ref{PER} we can replace $V$ by $V^{-1}$. Hence, for any summable family $\{x_U\}_{U\in\mathcal{U}}$ and any $V \in \operatorname{group} (\mathcal{U})$, we have $\sum_{U \in \mathcal{U}}x_U=\sum_{U \in \mathcal{U}}x_{\sigma(VU)}$.
\begin{definition}(\cite{GABARDOHANGROUPLIKE})
A \textbf{unitary representation} $ \pi$ of a group-like unitary system $ \mathcal{U}$ on $ \mathcal{H}$ is an injective mapping from $ \mathcal{U}$ into the set of unitary operators on $ \mathcal{H}$ such that
$$\pi(U)\pi(V)=f(UV)\pi(\sigma(UV)) , \quad {\pi(U)}^{-1}=f(U^{-1})\pi(\sigma(U^{-1})), ~ \forall U,V \in \mathcal{U}, $$
where $ f$ and $ \sigma $ are the corresponding mappings associated with $ \mathcal{U}.$
\end{definition}
Since a unitary representation $\pi $ of a group-like unitary system $ \mathcal{U}$ on $\mathcal{H}$ is injective, its image $ \pi(\mathcal{U})$ is again a group-like unitary system.
Let $ \mathcal{U}$ be a group-like unitary system and $ \{\chi_U\}_{U\in \mathcal{U}}$ be the standard orthonormal basis for $\ell^2(\mathcal{U}) $. We define $\lambda $ on $ \mathcal{U}$ by $ \lambda_U\chi_V=f(UV)\chi_{\sigma(UV)}, \forall U,V \in \mathcal{U}.$ Then $ \lambda $ is a unitary representation, which we call the left regular representation of $ \mathcal{U}$. Similarly, we define the right regular representation of $ \mathcal{U}$ by $ \rho_U\chi_V=f(VU^{-1})\chi_{\sigma(VU^{-1})}, \forall U,V \in \mathcal{U}$ (\cite{GABARDOHANGROUPLIKE}).
In analogy with frame generators for groups, we now define frame generators for group-like unitary systems.
\begin{definition}
Let $ \mathcal{U}$ be a group-like unitary system. An operator $ A$ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is called an \textbf{operator frame generator} (resp. a Parseval frame generator) w.r.t. $ \Psi$ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ if $(\{A_U\coloneqq A\pi(U)^{-1}\}_{U\in \mathcal{U}},\{\Psi_U\coloneqq \Psi\pi(U)^{-1}\}_{U\in \mathcal{U}})$ is a factorable weak OVF (resp. a factorable Parseval weak OVF) in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$. In this case we say that $ (A,\Psi)$ is an operator frame generator for $\pi$.
\end{definition}
\begin{theorem}\label{CHARACTERIZATIONGROUPLIKE}
Let $ \mathcal{U}$ be a group-like unitary system, $ I$ be the identity of $ \mathcal{U}$ and $(\{A_U\}_{U\in \mathcal{U}},\{\Psi_U\}_{U\in \mathcal{U}})$ be a factorable Parseval weak OVF in $ \mathcal{B}(\mathcal{H},\mathcal{H}_0)$ with $ \theta_A^*$ injective. Then there is a unitary representation $ \pi$ of $ \mathcal{U}$ on $ \mathcal{H}$ for which
$$ A_U=A_I\pi(U)^{-1}, \quad\Psi_U=\Psi_I\pi(U)^{-1}, \quad\forall U \in \mathcal{U}$$
if and only if
\begin{align*}
A_{\sigma(UV)}A_{\sigma(UW)}^*&=f(UV)\overline{f(UW)} A_VA_W^* ,\\
A_{\sigma(UV)}\Psi_{\sigma(UW)}^*&=f(UV)\overline{f(UW)} A_V\Psi_W^*,\\ \Psi_{\sigma(UV)}\Psi_{\sigma(UW)}^*&=f(UV)\overline{f(UW)} \Psi_V\Psi_W^*, \quad \forall U,V,W \in \mathcal{U}.
\end{align*}
\end{theorem}
\begin{proof}
$(\Rightarrow)$ For all $U,V,W \in \mathcal{U}$, we have
\begin{align*}
A_{\sigma(UV)}A_{\sigma(UW)}^*&= A_I\pi(\sigma(UV))^{-1}( A_I\pi(\sigma(UW))^{-1} )^*\\
&=A_I(\overline{f(UV)}\pi(U)\pi(V))^{-1} \overline{f(UW)}\pi(U)\pi(W)A^*_I\\
&=f(UV)\overline{f(UW)} A_I\pi(V)^{-1}(A_I\pi(W)^{-1})^*\\
&=f(UV)\overline{f(UW)} A_VA_W^*.
\end{align*}
The other two identities are proved similarly.
$(\Leftarrow)$ We have to construct a unitary representation satisfying the stated conditions. The following observation plays an important role in this part. Let $ h\in \mathcal{H}.$ Then
\begin{align*}
L_{\sigma(UV)}h&=\chi_{\sigma(UV)}\otimes h=\overline{f(UV)}\lambda_U\chi_V\otimes h=\overline{f(UV)}(\lambda_U\chi_V\otimes h)\\
&=\overline{f(UV)}(\lambda_U\otimes I_{\mathcal{H}_0})(\chi_V\otimes h)=\overline{f(UV)}(\lambda_U\otimes I_{\mathcal{H}_0})L_V h.
\end{align*}
As in the proof of Theorem \ref{gc1}, we establish the following identities, of which we now prove only the first: for all $ U \in \mathcal{U},$
\begin{align*}
&(\lambda_U\otimes I_{\mathcal{H}_0})\theta_A\theta_A^*=\theta_A\theta_A^*(\lambda_U\otimes I_{\mathcal{H}_0}), \quad (\lambda_U\otimes I_{\mathcal{H}_0})\theta_A\theta_\Psi^*=\theta_A\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0}),\\
&(\lambda_U\otimes I_{\mathcal{H}_0})\theta_\Psi\theta_\Psi^*=\theta_\Psi\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0}).
\end{align*}
Consider
\begin{align*}
(\lambda_U\otimes I_{\mathcal{H}_0})\theta_A\theta_A^*(\lambda_U\otimes I_{\mathcal{H}_0})^*&=\left(\sum\limits_{V\in \mathcal{U}}(\lambda_U\otimes I_{\mathcal{H}_0})L_VA_V\right)\left(\sum\limits_{W\in \mathcal{U}}(\lambda_U\otimes I_{\mathcal{H}_0})L_WA_W\right)^*\\ &=\left(\sum\limits_{V\in \mathcal{U}} f(UV)L_{\sigma(UV)}A_V\right)\left(\sum\limits_{W\in \mathcal{U}}f(UW)L_{\sigma(UW)}A_W\right)^*\\
&=\sum\limits_{V\in \mathcal{U}} L_{\sigma(UV)}\left(\sum\limits_{W\in \mathcal{U}}f(UV)\overline{f(UW)}A_VA_W^*L_{\sigma(UW)}^*\right)\\
&= \sum\limits_{V\in \mathcal{U}} L_{\sigma(UV)}\left(\sum\limits_{W\in \mathcal{U}}A_{\sigma(UV)}A_{\sigma(UW)}^*L_{\sigma(UW)}^*\right)\\
&=\left(\sum\limits_{V\in \mathcal{U}} L_{\sigma(UV)}A_{\sigma(UV)}\right)\left(\sum\limits_{W\in \mathcal{U}}L_{\sigma(UW)}A_{\sigma(UW)}\right)^*\\
&=\theta_A\theta_A^*
\end{align*}
where the last part of Proposition \ref{PER} is used in the last equality.
Define $ \pi : \mathcal{U} \ni U \mapsto \pi(U)\coloneqq \theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0})\theta_A \in \mathcal{B}(\mathcal{H}).$ Then
\begin{align*}
\pi(U)\pi(V)&=\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0})\theta_A \theta_\Psi^*(\lambda_V\otimes I_{\mathcal{H}_0})\theta_A \\
&=\theta_\Psi^*\theta_A \theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0}) (\lambda_V\otimes I_{\mathcal{H}_0})\theta_A \\
&= \theta_\Psi^*(\lambda_U\lambda_V\otimes I_{\mathcal{H}_0})\theta_A \\
&=\theta_\Psi^*(f(UV)\lambda_{\sigma(UV)}\otimes I_{\mathcal{H}_0})\theta_A \\
&=f(UV) \theta_\Psi^*(\lambda_{\sigma(UV)}\otimes I_{\mathcal{H}_0})\theta_A \\
&=f(UV)\pi({\sigma(UV)}), \quad \forall U, V \in \mathcal{U}
\end{align*}
and
\begin{align*}
\pi(U)\pi(U)^*&=\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0})\theta_A\theta_A^*(\lambda_U^*\otimes I_{\mathcal{H}_0})\theta_\Psi
\\
&=\theta_\Psi^*\theta_A\theta_A^*(\lambda_U\otimes I_{\mathcal{H}_0})(\lambda_U^*\otimes I_{\mathcal{H}_0})\theta_\Psi
=I_\mathcal{H},\\
\pi(U)^*\pi(U)&=\theta_A^*(\lambda_U^*\otimes I_{\mathcal{H}_0})\theta_\Psi\theta_\Psi^*(\lambda_{U}\otimes I_{\mathcal{H}_0})\theta_A\\
&=\theta_A^*(\lambda_U^*\otimes I_{\mathcal{H}_0})(\lambda_U\otimes I_{\mathcal{H}_0})\theta_\Psi\theta_\Psi^*\theta_A=I_\mathcal{H}, \quad \forall U \in \mathcal{U}.
\end{align*}
Further,
\begin{align*}
\pi(U)f(U^{-1})\pi(\sigma(U^{-1}))&=\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0})\theta_Af(U^{-1})\theta_\Psi^*(\lambda_{\sigma(U^{-1})}\otimes I_{\mathcal{H}_0})\theta_A \\
&=f(U^{-1})\theta_\Psi^*\theta_A\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0})(\lambda_{\sigma(U^{-1})}\otimes I_{\mathcal{H}_0})\theta_A \\
&=f(U^{-1})\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0})(\lambda_{\sigma(U^{-1})}\otimes I_{\mathcal{H}_0})\theta_A\\
&=f(U^{-1})\theta_\Psi^*(\lambda_U\lambda_{\sigma(U^{-1})}\otimes I_{\mathcal{H}_0})\theta_A\\
&=f(U^{-1})\theta_\Psi^*(f(U\sigma(U^{-1}))\lambda_{\sigma(U\sigma (U^{-1}))}\otimes I_{\mathcal{H}_0})\theta_A\\
&=\theta_\Psi^*(f(U\sigma(U^{-1}I))f(U^{-1}I)\lambda_{\sigma(U\sigma (U^{-1}I))}\otimes I_{\mathcal{H}_0})\theta_A\\
&=\theta_\Psi^*(f(\sigma(UU^{-1})I)f(UU^{-1})\lambda_{\sigma({\sigma(UU^{-1})I})}\otimes I_{\mathcal{H}_0})\theta_A\\
&=\theta_\Psi^*(\lambda_I\otimes I_{\mathcal{H}_0})\theta_A=I_\mathcal{H}
\end{align*}
Hence ${\pi(U)}^{-1}=f(U^{-1})\pi(\sigma(U^{-1}))$ for all $ U \in \mathcal{U}$. We shall now use the injectivity of $ \theta_A^*$ to show that $ \pi$ is injective and thereby conclude that $ \pi$ is a unitary representation. Suppose $ \pi(U)=\pi(V).$ Then
\begin{align*}
&\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0})\theta_A =\theta_\Psi^*(\lambda_V\otimes I_{\mathcal{H}_0})\theta_A \Rightarrow \theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0})\theta_A \theta_A^*=\theta_\Psi^*(\lambda_V\otimes I_{\mathcal{H}_0})\theta_A \theta_A^* \\
&\Rightarrow \theta_\Psi^*\theta_A \theta_A^*(\lambda_U\otimes I_{\mathcal{H}_0}) =\theta_\Psi^*\theta_A \theta_A^*(\lambda_V\otimes I_{\mathcal{H}_0}) \Rightarrow \lambda_U\otimes I_{\mathcal{H}_0}=\lambda_V\otimes I_{\mathcal{H}_0}.
\end{align*}
We verify equality on elementary tensors. For $ h \in \ell^2(\mathcal{U})$ and $y \in \mathcal{H}_0 $, the identity $(\lambda_U\otimes I_{\mathcal{H}_0})(h\otimes y)=(\lambda_V\otimes I_{\mathcal{H}_0})(h\otimes y)$ gives $ (\lambda_U-\lambda_V)h\otimes y=0$, so that $ 0= \langle (\lambda_U-\lambda_V)h\otimes y, (\lambda_U-\lambda_V)h\otimes y\rangle= \|(\lambda_U-\lambda_V)h\|^2 \|y\|^2 .$ We may assume $y\neq0$ (if $y=0$, then $h\otimes y=0$). But then $ (\lambda_U-\lambda_V)h=0,$ and since $ \lambda$ is a unitary representation (hence injective), $ U=V.$ We now show that $ A_U=A_I\pi(U)^{-1} $ and $ \Psi_U=\Psi_I\pi(U)^{-1} $ for all $ U \in \mathcal{U}$:
\begin{align*}
A_I\pi(U)^{-1}&= L_I^*\theta_A(\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0})\theta_A)^*
= L_I^*(\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0})\theta_A\theta_A^*)^*\\
&=L_I^*(\theta_\Psi^*\theta_A\theta_A^*(\lambda_U\otimes I_{\mathcal{H}_0}))^*
= L_I^*(\theta_A^*(\lambda_U\otimes I_{\mathcal{H}_0}))^*\\
&=(\theta_A^*(\lambda_U\otimes I_{\mathcal{H}_0})L_I)^*= (\theta_A^*\overline{f(UI)}(\lambda_U\otimes I_{\mathcal{H}_0})L_I)^*\\
&= (\theta_A^*L_{\sigma({UI})})^*=L_U^*\theta_A=A_U
\end{align*}
and
\begin{align*}
\Psi_I\pi(U)^{-1}&= L_I^*\theta_\Psi(\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0})\theta_A)^*
= L_I^*(\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0})\theta_A\theta_\Psi^*)^*\\
&=L_I^*(\theta_\Psi^*\theta_A\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0}))^*
= L_I^*(\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0}))^*\\
&=(\theta_\Psi^*(\lambda_U\otimes I_{\mathcal{H}_0})L_I)^*= (\theta_\Psi^*\overline{f(UI)}(\lambda_U\otimes I_{\mathcal{H}_0})L_I)^*\\
&= (\theta_\Psi^*L_{\sigma({UI})})^*=L_U^*\theta_\Psi=\Psi_U.
\end{align*}
\end{proof}
Note that neither the Parsevalness of the frame nor the injectivity of $ \theta_A^*$ was used in the direct part of Theorem \ref{CHARACTERIZATIONGROUPLIKE}. Since $ \theta_A$ acts between Hilbert spaces, we know that $ \overline{\theta_A(\mathcal{H})}=\operatorname{Ker}(\theta_A^*)^\perp$ and $ \operatorname{Ker}(\theta_A^*)=\theta_A(\mathcal{H})^\perp.$ From Lemma \ref{DILATIONLEMMA}, the range of $\theta_A$ is closed. Therefore $ \theta_A(\mathcal{H})=\operatorname{Ker}(\theta_A^*)^\perp.$ Thus the condition in Theorem \ref{CHARACTERIZATIONGROUPLIKE} that $ \theta_A^*$ is injective can be replaced by the condition that $ \theta_A$ is onto.
\begin{corollary}
Let $ \mathcal{U}$ be a group-like unitary system, $ I$ be the identity of $ \mathcal{U}$ and $(\{A_U\}_{U\in \mathcal{U}},\{\Psi_U\}_{U\in \mathcal{U}})$ be a factorable weak OVF in $ \mathcal{B}(\mathcal{H},\mathcal{H}_0)$ with $ \theta_A^*$ injective. Then there is a unitary representation $ \pi$ of $ \mathcal{U}$ on $ \mathcal{H}$ for which
\begin{enumerate}[label=(\roman*)]
\item $ A_U=A_IS^{-1}_{A,\Psi}\pi(U)^{-1}S_{A, \Psi}, \Psi_U=\Psi_I\pi(U)^{-1} $ for all $ U \in \mathcal{U}$ if and only if
\begin{align*}
&A_{\sigma(UV)}S^{-1}_{A, \Psi}(S_{A,\Psi}^{-1})^*A_{\sigma(UW)}^*=f(UV)\overline{f(UW)} A_VS^{-1}_{A, \Psi}(S_{A,\Psi}^{-1})^*A_W^*,\\
& A_{\sigma(UV)}S_{A, \Psi}^{-1}\Psi_{\sigma(UW)}^*=f(UV)\overline{f(UW)} A_VS^{-1}_{A, \Psi}\Psi_W^*,\\
&\Psi_{\sigma(UV)}\Psi_{\sigma(UW)}^*=f(UV)\overline{f(UW)} \Psi_V\Psi_W^*, \quad \forall U,V,W \in \mathcal{U}.
\end{align*}
\item $ A_U=A_I\pi(U)^{-1}, \Psi_U =\Psi_I(S_{A,\Psi}^{-1})^*\pi(U)^{-1}S_{A, \Psi} $ for all $ U \in \mathcal{U}$ if and only if
\begin{align*}
&A_{\sigma(UV)}A_{\sigma(UW)}^*=f(UV)\overline{f(UW)} A_VA_W^*,\\
&A_{\sigma(UV)}S^{-1}_{A, \Psi}\Psi_{\sigma(UW)}^*=f(UV)\overline{f(UW)}A_VS^{-1}_{A, \Psi}\Psi_W^*,\\
&\Psi_{\sigma(UV)}(S_{A,\Psi}^{-1})^*S^{-1}_{A, \Psi}\Psi_{\sigma(UW)} ^*
=f(UV)\overline{f(UW)} \Psi_V(S_{A,\Psi}^{-1})^*S^{-1}_{A, \Psi}\Psi_W^*, \quad \forall U,V,W \in \mathcal{U}.
\end{align*}
\end{enumerate}
\end{corollary}
\begin{proof}
\begin{enumerate}[label=(\roman*)]
\item We apply Theorem \ref{CHARACTERIZATIONGROUPLIKE} to the factorable Parseval OVF $(\{A_US_{A,\Psi}^{-1}\}_{U\in \mathcal{U}} ,$ $ \{\Psi_U\}_{U\in \mathcal{U}})$. There is a unitary representation $ \pi$ of $ \mathcal{U}$ on $ \mathcal{H}$ for which $ A_US_{A,\Psi}^{-1}=(A_IS^{-1}_{A,\Psi})\pi(U)^{-1}, \Psi_U=\Psi_I\pi(U)^{-1} $ for all $ U \in \mathcal{U}$ if and only if
\begin{align*}
&(A_{\sigma(UV)}S^{-1}_{A, \Psi})(A_{\sigma(UW)}S_{A,\Psi}^{-1})^*=f(UV)\overline{f(UW)}( A_VS^{-1}_{A, \Psi})(A_WS_{A,\Psi}^{-1})^*,\\
&(A_{\sigma(UV)}S_{A, \Psi}^{-1}) \Psi_{\sigma(UW)}^*=
f(UV)\overline{f(UW)}( A_VS^{-1}_{A, \Psi})\Psi_W^*,\\
&\Psi_{\sigma(UV)}\Psi_{\sigma(UW)}^*=f(UV)\overline{f(UW)} \Psi_V\Psi_W^*, \quad \forall U,V,W \in \mathcal{U}.
\end{align*}
\item We apply Theorem \ref{CHARACTERIZATIONGROUPLIKE} to the factorable Parseval OVF $(\{A_U\}_{U\in \mathcal{U}} , \{\Psi_U(S_{A,\Psi}^{-1})^*\}_{U\in \mathcal{U}})$. There is a unitary representation $ \pi$ of $ \mathcal{U}$ on $ \mathcal{H}$ for which $ A_U=A_I\pi(U)^{-1}, \Psi_U(S_{A,\Psi}^{-1})^*=(\Psi_I(S_{A,\Psi}^{-1})^*)\pi(U)^{-1} $ for all $ U \in \mathcal{U}$ if and only if
\begin{align*}
&A_{\sigma(UV)}A_{\sigma(UW)}^*=
f(UV)\overline{f(UW)} A_VA_W^*,\\
&A_{\sigma(UV)}(\Psi_{\sigma(UW)}(S_{A,\Psi}^{-1})^*)^*=f(UV)\overline{f(UW)}A_V(\Psi_W(S_{A,\Psi}^{-1})^*)^*,\\
&(\Psi_{\sigma(UV)}(S_{A,\Psi}^{-1})^*)(\Psi_{\sigma(UW)}(S_{A,\Psi}^{-1})^*)^*=f(UV)\overline{f(UW)} (\Psi_V(S_{A,\Psi}^{-1})^*)(\Psi_W(S_{A,\Psi}^{-1})^*)^*,\\
& \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad\forall U,V,W \in \mathcal{U}.
\end{align*}
\end{enumerate}
\end{proof}
\section{PERTURBATIONS OF WEAK OPERATOR-VALUED FRAMES}\label{PERTURBATIONS}
In this section we derive stability results for factorable weak operator-valued frames.
\begin{theorem}\label{PERTURBATION RESULT 1}
Let $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $ be a factorable weak OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$. Suppose $\{B_n\}_{n} $ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is such that there exist $\alpha, \beta, \gamma \geq 0 $ with $ \max\{\alpha+\gamma\|\theta_\Psi (S_{A,\Psi}^*)^{-1}\|, \beta\}$ $<1$ and for all $m=1,2, \dots, $
\begin{align}\label{p3}
\left\|\sum\limits_{n=1}^m(A_n^*-B_n^*)L_n^*y\right\|&\leq \alpha\left\|\sum\limits_{n=1}^mA_n^*L_n^*y\right\|+\beta\left\|\sum\limits_{n=1}^mB_n^*L_n^*y\right\|+\gamma \left(\sum\limits_{n=1}^m\|L_n^*y\|^2\right)^\frac{1}{2},\nonumber\\
&\quad \forall y \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0.
\end{align}
Then $ ( \{B_n\}_{n}, \{\Psi_n\}_{n} ) $ is a factorable weak OVF with bounds
\begin{align*}
\frac{1-(\alpha+\gamma\|\theta_\Psi (S_{A,\Psi}^*)^{-1}\|)}{(1+\beta)\|(S_{A,\Psi}^*)^{-1}\|} \quad \text{ and } \quad \frac{\|\theta_\Psi\|((1+\alpha)\|\theta_A\|+\gamma)}{1-\beta}.
\end{align*}
\end{theorem}
\begin{proof}
For $m=1,2,\dots, $ and for every $ y$ in $ \ell^2(\mathbb{N})\otimes \mathcal{H}_0$,
\begin{align*}
\left\| \sum\limits_{n=1}^mB_n^*L_n^*y\right\|&\leq \left\| \sum\limits_{n=1}^m(A_n^*-B_n^*)L_n^*y\right\|+\left\| \sum\limits_{n=1}^mA_n^*L_n^*y\right\|\\
&\leq(1+\alpha)\left\| \sum\limits_{n=1}^mA_n^*L_n^*y\right\|+\beta\left\| \sum\limits_{n=1}^mB_n^*L_n^*y\right\|+\gamma\left( \sum\limits_{n=1}^m\|L_n^*y\|^2\right)^\frac{1}{2}
\end{align*}
which implies
\begin{equation}\label{p1}
\left\| \sum\limits_{n=1}^mB_n^*L_n^*y\right\|\leq\frac{1+\alpha}{1-\beta}\left\| \sum\limits_{n=1}^mA_n^*L_n^*y\right\|+\frac{\gamma}{1-\beta}\left( \sum\limits_{n=1}^m\|L_n^*y\|^2\right)^\frac{1}{2}, \quad \forall y \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0.
\end{equation}
Since
$$ \langle y,y\rangle =\langle (I_{\ell^2(\mathbb{N})}\otimes I_{\mathcal{H}_0})y,y\rangle=\left\langle\sum\limits_{n=1}^\infty L_nL_n^* y,y\right\rangle=\sum\limits_{n=1}^\infty\|L_n^* y\|^2 , \quad \forall y \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0,$$
inequality (\ref{p1}) shows that $\sum_{n=1}^\infty B_n^*L_n^*y $ exists for all $ y \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0.$
From the continuity of the norm, Inequality (\ref{p1}) gives
\begin{align}\label{p2}
\left\| \sum\limits_{n=1}^\infty B_n^*L_n^*y\right\|&\leq\frac{1+\alpha}{1-\beta}\left\| \sum\limits_{n=1}^\infty A_n^*L_n^*y\right\|+\frac{\gamma}{1-\beta}\left( \sum\limits_{n=1}^\infty\|L_n^*y\|^2\right)^\frac{1}{2}\nonumber \\
&=\frac{1+\alpha}{1-\beta}\left\| \theta_A^*y\right\|+\frac{\gamma}{1-\beta}\|y\| , \quad \forall y \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0
\end{align}
and this shows that $ \sum_{n=1}^\infty B_n^*L_n^* $ is bounded; therefore its adjoint exists, which we denote by $ \theta_B$. Inequality (\ref{p2}) now yields $\|\theta_B^*y\|\leq \frac{1+\alpha}{1-\beta}\left\| \theta_A^*y\right\|+\frac{\gamma}{1-\beta}\|y\|$ for all $ y \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0$, and hence $\|\theta_B\|=\|\theta_B^*\|\leq \frac{1+\alpha}{1-\beta}\left\| \theta_A^*\right\|+\frac{\gamma}{1-\beta} =\frac{1+\alpha}{1-\beta}\left\| \theta_A\right\|+\frac{\gamma}{1-\beta}.$
Thus $ S_{B, \Psi}$ is a bounded linear operator.
The continuity of the norm and the boundedness of $\theta_A$ and $\theta_B$, together with Inequality (\ref{p3}), give
$$ \|\theta_A^*y-\theta_B^*y\|\leq \alpha\|\theta_A^*y\|+\beta\|\theta_B^*y\|+\gamma\|y\|, \quad \forall y \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0$$
which implies
\begin{align*}
\|\theta_A^*(\theta_\Psi (S_{A,\Psi}^*)^{-1} h)-\theta_B^*(\theta_\Psi (S_{A,\Psi}^*)^{-1}h)\|&\leq \alpha\|\theta_A^*(\theta_\Psi (S_{A,\Psi}^*)^{-1} h)\|\\
&\quad +\beta\|\theta_B^*(\theta_\Psi (S_{A,\Psi}^*)^{-1} h)\| +\gamma\|\theta_\Psi (S_{A,\Psi}^*)^{-1} h\|, \\
&\quad \forall h \in \mathcal{H}.
\end{align*}
But $ \theta_A^*\theta_\Psi (S_{A,\Psi}^*)^{-1}=I_\mathcal{H}$ and $\theta_B^*\theta_\Psi (S_{A,\Psi}^*)^{-1}= S_{B,\Psi}^* (S_{A,\Psi}^*)^{-1}.$ Therefore
\begin{align*}
\| h- S_{B,\Psi}^*(S_{A,\Psi}^*)^{-1}h\|
&\leq \alpha\| h\|+\beta\|S_{B,\Psi}^* (S_{A,\Psi}^*)^{-1} h\|+\gamma\|\theta_\Psi (S_{A,\Psi}^*)^{-1} h\|\\
&\leq(\alpha+\gamma\|\theta_\Psi (S_{A,\Psi}^*)^{-1}\|)\|h\|+\beta\|S_{B,\Psi}^* (S_{A,\Psi}^*)^{-1} h\|, \quad \forall h \in \mathcal{H}.
\end{align*}
Since $ \max\{\alpha+\gamma\|\theta_\Psi (S_{A,\Psi}^*)^{-1}\|, \beta\}<1$, Theorem \ref{cc1} shows that $S_{B,\Psi}^* (S_{A,\Psi}^*)^{-1} $ is invertible and
$\|(S_{B,\Psi}^* (S_{A,\Psi}^*)^{-1})^{-1}\| \leq \frac{1+\beta}{1-(\alpha+\gamma\|\theta_\Psi (S_{A,\Psi}^*)^{-1}\|)}.$ From these, we get that
$$S_{B,\Psi}^*=(S_{B,\Psi}^* (S_{A,\Psi}^*)^{-1})S_{A,\Psi}^* $$ is invertible and
\begin{align*}
\| S_{B,\Psi}^{-1}\|\leq\|(S_{A,\Psi}^*)^{-1}\|\| S_{A,\Psi}^*S_{B,\Psi}^{-1}\| \leq \frac{\|(S_{A,\Psi}^*)^{-1}\|(1+\beta)}{1-(\alpha+\gamma\|\theta_\Psi (S_{A,\Psi}^*)^{-1}\|)}.
\end{align*}
Therefore $ ( \{B_n\}_{n}, \{\Psi_n\}_{n} ) $ is a factorable weak OVF. Observing that
\begin{align*}
\|S_{B,\Psi}\|\leq \|\theta_\Psi\|\|\theta_B\|\leq \frac{\|\theta_\Psi\|((1+\alpha)\|\theta_A\|+\gamma)}{1-\beta}
\end{align*}
and $ \|S_{B,\Psi}^{-1}\|^{-1}$ and $\|S_{B,\Psi}\| $ are optimal lower and upper frame bounds for $ ( \{B_n\}_{n}, \{\Psi_n\}_{n} ) $, we get the frame bounds stated in the theorem.
\end{proof}
\begin{corollary}
Let $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $ be a factorable weak OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$. Suppose $\{B_n\}_{n} $ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is such that
$$ r \coloneqq \sum_{n=1}^\infty\|A_n-B_n\|^2 <\frac{1}{\|\theta_\Psi (S_{A,\Psi}^*)^{-1}\|^2}.$$
Then $ ( \{B_n\}_{n}, \{\Psi_n\}_{n} ) $ is a factorable weak OVF with bounds
\begin{align*}
\frac{1-\sqrt{r}\|\theta_\Psi (S_{A,\Psi}^*)^{-1}\|}{\|(S_{A,\Psi}^*)^{-1}\|} \quad \text{ and }\quad {\|\theta_\Psi\|(\|\theta_A\|+\sqrt{r})}.
\end{align*}
\end{corollary}
\begin{proof}
We apply Theorem \ref{PERTURBATION RESULT 1} by taking $ \alpha =0, \beta=0, \gamma=\sqrt{r}$. Then $ \max\{\alpha+\gamma\|\theta_\Psi (S_{A,\Psi}^*)^{-1}\|, \beta\}<1$ and for all $m=1,2, \dots, $
\begin{align*}
\left\|\sum\limits_{n=1}^m(A_n^*-B_n^*)L_n^*y\right\|&\leq \left(\sum\limits_{n=1}^m\|A_n^*-B_n^*\|^2 \right)^\frac{1}{2}\left(\sum\limits_{n=1}^m\|L_n^*y\|^2\right)^\frac{1}{2}\\
&\leq\gamma\left(\sum\limits_{n=1}^m\|L_n^*y\|^2\right)^\frac{1}{2}, \quad\forall y \in \ell^2(\mathbb{N})\otimes \mathcal{H}_0.
\end{align*}
\end{proof}
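To illustrate the corollary in the simplest finite-dimensional situation (an illustration of ours, with the hypothetical choice $\Psi_n = A_n$, i.e., the ordinary frame case, and with the operators $A_n:\mathbb{R}^2\to\mathbb{R}$ realized as the rows of a matrix):

```python
import numpy as np

# Rows of A are the operators A_n : R^2 -> R; three functionals forming a frame.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
Psi = A.copy()                              # hypothetical choice Psi_n = A_n

S = Psi.T @ A                               # S_{A,Psi} = sum_n Psi_n^* A_n
theta_Psi_Sinv = Psi @ np.linalg.inv(S).T   # theta_Psi (S_{A,Psi}^*)^{-1}, stacked rows
bound = 1.0 / np.linalg.norm(theta_Psi_Sinv, 2) ** 2

E = 0.05 * np.array([[1.0, -1.0],
                     [0.0,  1.0],
                     [1.0,  0.0]])          # row n is A_n - B_n
r = float(np.sum(np.linalg.norm(E, axis=1) ** 2))
print(r < bound)                            # hypothesis of the corollary holds

B = A - E
S_B = Psi.T @ B                             # S_{B,Psi} = sum_n Psi_n^* B_n
print(abs(np.linalg.det(S_B)) > 1e-9)       # S_{B,Psi} is invertible, as asserted
```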
We next derive another stability result under a different condition.
\begin{theorem}\label{OVFQUADRATICPERTURBATION}
Let $ ( \{A_n\}_{n}, \{\Psi_n\}_{n} ) $ be a factorable weak OVF in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$. Suppose $\{B_n\}_{n} $ in $ \mathcal{B}(\mathcal{H}, \mathcal{H}_0)$ is such that $ \sum_{n=1}^\infty\|A_n-B_n\|^2$ converges, and
$\sum_{n=1}^\infty\|A_n-B_n\|\|\Psi_n(S_{A,\Psi}^*)^{-1}\|<1.$
Then $ ( \{B_n\}_{n}, \{\Psi_n\}_{n} ) $ is a factorable weak OVF with bounds
\begin{align*}
\frac{1-\sum_{n=1}^\infty\|A_n-B_n\|\|\Psi_n(S_{A,\Psi}^*)^{-1}\|}{\|(S_{A,\Psi}^*)^{-1}\|}\quad \text{ and } \quad \|\theta_\Psi\|\left(\left(\sum_{n=1}^\infty\|A_n-B_n\|^2\right)^{1/2}+\|\theta_A\|\right) .
\end{align*}
\end{theorem}
\begin{proof}
Let $ \alpha =\sum_{n=1}^\infty\|A_n-B_n\|^2 $ and $\beta =\sum_{n=1}^\infty\|A_n-B_n\|\|\Psi_n(S_{A,\Psi}^*)^{-1}\|$. For $m=1,2,\dots $ and for every $ y$ in $ \ell^2(\mathbb{N})\otimes \mathcal{H}_0$,
\begin{align*}
\left\| \sum\limits_{n=1}^mB_n^*L_n^*y\right\|&\leq \left\| \sum\limits_{n=1}^m(A_n^*-B_n^*)L_n^*y\right\|+\left\| \sum\limits_{n=1}^mA_n^*L_n^*y\right\|\\
&\leq \sum\limits_{n=1}^m\|A_n-B_n\|\|L_n^*y\|+\left\| \sum\limits_{n=1}^mA_n^*L_n^*y\right\| \\
&\leq \left( \sum\limits_{n=1}^m\|A_n-B_n\|^2\right)^\frac{1}{2}\left( \sum\limits_{n=1}^m\|L_n^*y\|^2\right)^\frac{1}{2}+\left\| \sum\limits_{n=1}^mA_n^*L_n^*y\right\|\\
&\leq \alpha^\frac{1}{2} \left( \sum\limits_{n=1}^m\|L_n^*y\|^2\right)^\frac{1}{2}+\left\| \sum\limits_{n=1}^mA_n^*L_n^*y\right\|\\
&=\alpha^\frac{1}{2} \left\langle \sum\limits_{n=1}^mL_nL_n^*y, y\right\rangle ^\frac{1}{2}+\left\| \sum\limits_{n=1}^mA_n^*L_n^*y\right\|,
\end{align*}
which converges to $\sqrt{\alpha}\|y\|+\|\theta_A^*y\|$. Hence
$\theta_B$ exists and $\|\theta_B\|\leq \sqrt{\alpha}+\|\theta_A\|$. Therefore $S_{B,\Psi}=\theta_\Psi^*\theta_B=\sum_{n=1}^\infty\Psi^*_nB_n$ exists.
Now
\begin{align*}
\|I_\mathcal{H}-S_{B,\Psi}^*(S_{A,\Psi}^*)^{-1}\|&=\left\|\sum_{n=1}^\infty A_n^*\Psi_n (S_{A,\Psi}^*)^{-1}-\sum_{n=1}^\infty B_n^*\Psi_n (S_{A,\Psi}^*)^{-1}\right\|\\
&=\left\|\sum_{n=1}^\infty(A_n^*-B_n^*)\Psi_n (S_{A,\Psi}^*)^{-1}\right\|\\
&\leq \sum_{n=1}^\infty\|A_n-B_n\|\|\Psi_n (S_{A,\Psi}^*)^{-1}\| =\beta<1.
\end{align*}
Therefore $S_{B,\Psi}^*(S_{A,\Psi}^*)^{-1}$ is invertible with $ \|(S_{B,\Psi}^*(S_{A,\Psi}^*)^{-1})^{-1}\|\leq 1/(1-\beta)$, and hence $S_{B,\Psi}$ is invertible. The calculation of the frame bounds is similar to that in the proof of Theorem \ref{PERTURBATION RESULT 1}.
\end{proof}
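The triangle-inequality estimate at the heart of the proof, $\left\|\sum_{n}(A_n^*-B_n^*)\Psi_n (S_{A,\Psi}^*)^{-1}\right\| \leq \sum_{n}\|A_n-B_n\|\|\Psi_n (S_{A,\Psi}^*)^{-1}\|$, can be checked numerically in a finite-dimensional sketch (our own illustration; the rows of the matrices play the roles of the operators $A_n$, $B_n$, $\Psi_n$):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))              # rows play the roles of A_n : R^3 -> R
Psi = rng.standard_normal((5, 3))            # rows play the roles of Psi_n
B = A - 0.01 * rng.standard_normal((5, 3))   # a small perturbation of the A_n

S = Psi.T @ A                                # S_{A,Psi} = sum_n Psi_n^* A_n
Sinv_star = np.linalg.inv(S.T)               # (S_{A,Psi}^*)^{-1}

# sum_n (A_n^* - B_n^*) Psi_n (S_{A,Psi}^*)^{-1}, computed as one matrix product
lhs = np.linalg.norm((A - B).T @ Psi @ Sinv_star, 2)

# beta = sum_n ||A_n - B_n|| ||Psi_n (S_{A,Psi}^*)^{-1}||, a sum of rank-one norms
beta = sum(np.linalg.norm(A[k] - B[k]) * np.linalg.norm(Psi[k] @ Sinv_star)
           for k in range(5))

print(lhs <= beta + 1e-12)                   # the estimate used in the proof
```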
{\onehalfspacing \chapter{CONCLUSION AND FUTURE WORK}\label{chap8} }
In Chapter \ref{chap2} we initiated the study of frames for metric spaces. Since metric spaces are more general objects with less structure than Banach spaces, the study of frames for metric spaces proceeds differently from that of frames for Hilbert and Banach spaces. The Arens-Eells space is used as a tool that allows functional-analytic techniques to be applied to Lipschitz functions. However, this works well only when the codomain of the Lipschitz functions is a Banach space, and most of the results in Chapter \ref{chap2} concern this case. In the future we intend to study frames for arbitrary metric spaces.
In Chapter \ref{chap3} we defined multipliers for metric spaces and obtained some of their fundamental properties. One direction for future work is to explore multipliers for metric spaces further.
In Chapter \ref{chap4} we studied a special class of approximate Schauder frames. We characterized a class of approximate Schauder frames and their duals. We plan to obtain a description of general frames and their duals for Banach spaces.
In Chapter \ref{chap5} we initiated the study of the series $\sum_{n=1}^{\infty}\Psi_n^*A_n$. We mainly obtained results for the case in which this series factors as the product of two bounded linear operators. In the future we plan to study the series without the factorability condition. We are also interested in studying the path-connectedness of weak OVFs and in obtaining a result similar to Theorem \ref{KAFTALPATHCONNECTED}.
\leavevmode\newpage
\leavevmode\newpage
\addcontentsline{toc}{chapter}{APPENDIX A: DILATIONS OF LINEAR MAPS ON VECTOR SPACES}
\par~
\begin{center}
\textbf{{\fontsize{16}{1em}\selectfont APPENDIX A: DILATIONS OF LINEAR MAPS ON VECTOR SPACES}} \\
\end{center}
{\onehalfspacing \section{DILATIONS OF FUNCTIONS ON SETS}
One of the most useful results in the study of isometries on Hilbert spaces is the Wold decomposition, which describes the structure of an isometry in terms of the notion of a shift.
\begin{definition}(cf. \cite{NAGY})\label{SHIFTDEFINITION}
Let $\mathcal{H}$ be a Hilbert space. An operator $T:\mathcal{H}\to \mathcal{H}$ is called a \textbf{shift} if $\cap _{n=0}^\infty T^n(\mathcal{H})=\{0\}$.
\end{definition}
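A toy finite-dimensional illustration of the range condition (our own sketch; the shifts occurring in the Wold decomposition are isometries on infinite-dimensional spaces, such as the unilateral shift on $\ell^2$, so this only illustrates the intersection condition of Definition \ref{SHIFTDEFINITION} itself):

```python
import numpy as np

# Nilpotent "truncated shift" N on C^3: e1 -> e2 -> e3 -> 0.
N = np.diag([1.0, 1.0], k=-1)

# The ranges N^k(C^3) strictly decrease to {0}, so the intersection of the
# N^k(C^3) over all k is {0}: the range condition of the definition holds.
ranks = [int(np.linalg.matrix_rank(np.linalg.matrix_power(N, k))) for k in range(4)]
print(ranks)   # dimensions of the ranges of N^0, N^1, N^2, N^3 -> [3, 2, 1, 0]
```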
\begin{theorem}(cf. \cite{NAGY, WOLD})
(\textbf{Wold decomposition}) Let $T$ be an isometry on a Hilbert space $\mathcal{H}$. Then $\mathcal{H}$ decomposes uniquely as $\mathcal{H}=\mathcal{H}_u\oplus \mathcal{H}_s$, where $\mathcal{H}_u$ and $\mathcal{H}_s$ are $T$-reducing subspaces of $\mathcal{H}$, $T_{|\mathcal{H}_u}:\mathcal{H}_u\to \mathcal{H}_u$ is a unitary and $T_{|\mathcal{H}_s}:\mathcal{H}_s \to \mathcal{H}_s$ is a shift.
\end{theorem}
Using the functional calculus and the Weierstrass polynomial approximation theorem, Halmos in 1950 proved the important result that every contraction on a Hilbert space can be dilated to a unitary.
\begin{theorem}(\cite{HALMOSORIGINAL}) \label{HALMOSDILATION}(\textbf{Halmos dilation})
Let $\mathcal{H}$ be a Hilbert space and $T:\mathcal{H}\to \mathcal{H}$ be a contraction. Then the operator
\begin{align*}
U\coloneqq \begin{pmatrix}
T & \sqrt{I-TT^*} \\
\sqrt{I-T^*T} & -T^* \\
\end{pmatrix}
\end{align*}is unitary on $\mathcal{H}\oplus \mathcal{H}$. In other words,
\begin{align*}
T=P_\mathcal{H}U_{|\mathcal{H}},
\end{align*}
where $P_\mathcal{H}:\mathcal{H}\oplus \mathcal{H}\to \mathcal{H}\oplus \mathcal{H}$ is the orthogonal projection onto $\mathcal{H}$.
\end{theorem}
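The Halmos dilation can be verified numerically; a small sketch of ours, using an eigendecomposition-based square root of the positive semidefinite defect operators $I-TT^*$ and $I-T^*T$:

```python
import numpy as np

def psd_sqrt(M):
    # Square root of a positive semidefinite Hermitian matrix via eigendecomposition.
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0.0, None)               # guard against tiny negative eigenvalues
    return (V * np.sqrt(w)) @ V.conj().T

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))
T = T / (2 * np.linalg.norm(T, 2))          # scale so ||T|| = 1/2 < 1: a contraction

I = np.eye(3)
DT = psd_sqrt(I - T @ T.conj().T)           # defect operator sqrt(I - TT*)
DTs = psd_sqrt(I - T.conj().T @ T)          # defect operator sqrt(I - T*T)

# Halmos block matrix on H (+) H
U = np.block([[T, DT], [DTs, -T.conj().T]])

print(np.allclose(U @ U.conj().T, np.eye(6)))   # U is unitary
print(np.allclose(U[:3, :3], T))                # compression of U to H recovers T
```

The off-diagonal blocks of $UU^*$ vanish because of the intertwining relation $T\sqrt{I-T^*T}=\sqrt{I-TT^*}\,T$, which the numerical check confirms implicitly.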
Three years later, Sz.-Nagy extended Halmos' result as follows.
\begin{theorem}(\cite{NAGYPAPER})\label{NAGYTHEOREM} (\textbf{Sz. Nagy dilation})
Let $\mathcal{H}$ be a Hilbert space and $T:\mathcal{H}\to \mathcal{H}$ be a contraction. Then there exists a Hilbert space $\mathcal{K}$ which contains $\mathcal{H}$ isometrically and a unitary $U:\mathcal{K}\to \mathcal{K}$ such that
\begin{align*}
T^n=P_\mathcal{H}U^n_{|\mathcal{H}}, \quad \forall n=1, 2,\dots,
\end{align*}
where $P_\mathcal{H}:\mathcal{K}\to \mathcal{K}$ is the orthogonal projection onto $\mathcal{H}$.
\end{theorem}
The unitary operator $U$ in Theorem \ref{NAGYTHEOREM} is known as the dilation operator and the space $\mathcal{K}$ is called the dilation space. If
\begin{align*}
\mathcal{K}=\overline{\text{span}}\{U^nh : n \in \mathbb{Z}_+, h \in \mathcal{H}\},
\end{align*}
then $(\mathcal{K},U)$ is said to be a minimal dilation. It is known that in Theorem \ref{NAGYTHEOREM}, the space $\mathcal{K}$ can be taken to be minimal.
It was \cite{SCHAFFER} who gave a proof of the Sz. Nagy dilation theorem using infinite matrices. In the following theorem, $\oplus_{n=-\infty}^{\infty} \mathcal{H}$ is the Hilbert space defined by
\begin{align*}
\oplus_{n=-\infty}^{\infty} \mathcal{H}\coloneqq \left\{ \{h_n\}_{n=-\infty}^\infty : h_n \in \mathcal{H}, \forall n \in \mathbb{Z}, \sum_{n=-\infty}^{\infty}\|h_n\|^2<\infty\right\}
\end{align*}
with respect to the inner product
\begin{align*}
\langle \{h_n\}_{n=-\infty}^\infty, \{g_n\}_{n=-\infty}^\infty\rangle \coloneqq \sum_{n=-\infty}^{\infty}\langle h_n, g_n \rangle, \quad \forall \{h_n\}_{n=-\infty}^\infty, \{g_n\}_{n=-\infty}^\infty \in \oplus_{n=-\infty}^{\infty} \mathcal{H}.
\end{align*}
\begin{theorem}(\cite{SCHAFFER})\label{SCHAFFERTHEOREM}
Let $\mathcal{H}$ be a Hilbert space and $T:\mathcal{H}\to \mathcal{H}$ be a contraction. Let $U\coloneqq [u_{n,m}]_{-\infty < n,m< \infty}$ be the \textbf{Schaffer operator} on
$\oplus_{n=-\infty}^{\infty} \mathcal{H}$ given by the infinite matrix with entries
\begin{align*}
&u_{0,0}\coloneqq T, \quad u_{0,1}\coloneqq \sqrt{I-TT^*}, \quad u_{-1,0}\coloneqq \sqrt{I-T^*T},\\
& u_{-1,1}\coloneqq -T^*, \quad u_{n,n+1}\coloneqq I, ~\forall n \in \mathbb{Z}, n\neq 0,-1, \quad u_{n,m} \coloneqq 0 \quad \text{ otherwise},
\end{align*}
i.e.,
\begin{align*}
U=\begin{pmatrix}
&\vdots &\vdots & \vdots & \vdots & \vdots & \\
\cdots & 0 & I& 0 & 0& 0& \cdots & \\
\cdots & 0 & 0& \sqrt{I-T^*T}& -T^* & 0&\cdots & \\
\cdots & 0&0&\boxed{T}&\sqrt{I-TT^*}& 0&\cdots&\\
\cdots & 0&0&0&0& I&\cdots &\\
\cdots & 0&0&0&0& 0&\cdots &\\
& \vdots &\vdots &\vdots &\vdots & \vdots & \\
\end{pmatrix}_{\infty\times \infty}
\end{align*}
where $T$ is in the $(0,0)$ position (which is in the box), is unitary on $\oplus_{n=-\infty}^{\infty} \mathcal{H}$ and
\begin{align*}
T^n=P_\mathcal{H}U_{|\mathcal{H}}^n,\quad \forall n\in \mathbb{N},
\end{align*}
where $P_\mathcal{H}:\oplus_{n=-\infty}^{\infty} \mathcal{H}\to \oplus_{n=-\infty}^{\infty} \mathcal{H}$ is the orthogonal projection onto $\mathcal{H}$.
\end{theorem}
A year after Sz. Nagy's work, it was Egervary who observed that the Halmos dilation of a contraction can be extended to finitely many coordinates so that powers of the dilation dilate the corresponding powers of the contraction.
\begin{theorem}(\cite{EGERVARY}) \label{EGERVARY}(\textbf{N-dilation})
Let $\mathcal{H}$ be a Hilbert space and $T:\mathcal{H}\to \mathcal{H}$ be a contraction. Let $N$ be a natural number. Then the operator
\begin{align*}
U\coloneqq \begin{pmatrix}
T & 0& 0 & \cdots &0 & \sqrt{I-TT^*} \\
\sqrt{I-T^*T} & 0& 0 & \cdots &0& -T^* \\
0&I&0&\cdots &0& 0\\
0&0&I&\cdots &0 & 0\\
\vdots &\vdots &\vdots & & \vdots &\vdots \\
0&0&0&\cdots &0 & 0\\
0&0&0&\cdots &I & 0\\
\end{pmatrix}_{(N+1)\times (N+1)}
\end{align*}is unitary on $\oplus_{k=1}^{N+1} \mathcal{H}$ and
\begin{align*}
T^k=P_\mathcal{H}U_{|\mathcal{H}}^k,\quad \forall k=1, \dots, N,
\end{align*}
where $P_\mathcal{H}:\oplus_{k=1}^{N+1} \mathcal{H}\to \oplus_{k=1}^{N+1} \mathcal{H}$ is the orthogonal projection onto $\mathcal{H}$.
\end{theorem}
A very useful result which can be derived using Theorem \ref{EGERVARY} is von Neumann's inequality. It was originally obtained in \cite{VONNEUMANNINEQUALITYPAPER} using the theory of analytic functions.
\begin{theorem}(\textbf{von Neumann inequality}) (\cite{VONNEUMANNINEQUALITYPAPER, ORRGUIDED, RAINONE})
Let $\mathcal{H}$ be a Hilbert space and $T:\mathcal{H}\to \mathcal{H}$ be a contraction. Then for every polynomial $p\in \mathbb{C}[z]$,
\begin{align*}
\|p(T)\|\leq \sup_{|z|=1}|p(z)|.
\end{align*}
\end{theorem}
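To sketch the derivation from Theorem \ref{EGERVARY}: given a polynomial $p$ of degree $N$, let $U$ be the unitary from Theorem \ref{EGERVARY}. Then $p(T)=P_\mathcal{H}\,p(U)_{|\mathcal{H}}$, so
\begin{align*}
\|p(T)\|\leq \|p(U)\|=\sup_{\lambda \in \sigma(U)}|p(\lambda)|\leq \sup_{|z|=1}|p(z)|,
\end{align*}
using the spectral theorem and the fact that the spectrum of a unitary lies in the unit circle.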
Sz. Nagy's dilation theorem led to the problem of simultaneously dilating several commuting operators. A decade after Sz. Nagy's work, Ando derived the following result.
\begin{theorem}(\cite{ANDO})\label{ANDOTHEOREM} (\textbf{Ando dilation})
Let $\mathcal{H}$ be a Hilbert space and $T_1, T_2:\mathcal{H}\to \mathcal{H}$ be commuting contractions. Then there exist a Hilbert space $\mathcal{K}$ which contains $\mathcal{H}$ isometrically and a pair of commuting unitaries $U_1, U_2:\mathcal{K}\to \mathcal{K}$ such that
\begin{align*}
T_1^nT_2^m=P_\mathcal{H}{(U_1^nU_2^m)}_{|\mathcal{H}}, \quad \forall n,m=1, 2, \dots,
\end{align*}
where $P_\mathcal{H}:\mathcal{K}\to \mathcal{K}$ is the orthogonal projection onto $\mathcal{H}$.
\end{theorem}
An easy consequence of Ando's dilation is the following generalization of the von Neumann inequality.
\begin{theorem}(\textbf{Ando-von Neumann inequality}) (cf. \cite{BHATTACHARYYA, ANDO})
Let $\mathcal{H}$ be a Hilbert space and $T_1,T_2:\mathcal{H}\to \mathcal{H}$ be commuting contractions. Then for every polynomial $p\in \mathbb{C}[z, w]$,
\begin{align*}
\|p(T_1, T_2)\|\leq \sup_{|z|=|w|=1}|p(z,w)|.
\end{align*}
\end{theorem}
It is known that the Ando dilation theorem cannot be extended to more than two commuting contractions (cf. \cite{BHATTACHARYYA, PARROTT, VAROPOULOS, CRABBDAVIE, DRURY}). We next consider the intertwining lifting theorem, which says that any operator intertwining two contractions can be lifted to an operator intertwining their dilations.
\begin{theorem}(\cite{NAGYLIFTING})\label{ILT} (\textbf{Intertwining lifting theorem}) Let $T_1:\mathcal{H}_1\to \mathcal{H}_1$, $T_2:\mathcal{H}_2\to \mathcal{H}_2$ be contractions, where $\mathcal{H}_1$, $\mathcal{H}_2$ are Hilbert spaces. Let $V_1:\mathcal{K}_1\to \mathcal{K}_1$, $V_2:\mathcal{K}_2\to \mathcal{K}_2$ be minimal isometric dilations of $T_1,T_2$, respectively. Assume that $S:\mathcal{H}_2\to \mathcal{H}_1$ is a bounded linear operator such that $T_1S=ST_2$. Then there exists a bounded linear operator $R:\mathcal{K}_2\to \mathcal{K}_1$ such that
\begin{align*}
V_1R=RV_2, \quad P_{\mathcal{H}_1}R_{|\mathcal{H}_2^\perp}=0, \quad P_{\mathcal{H}_1}R_{|\mathcal{H}_2}=S, \quad \|R\|=\|S\|.
\end{align*}
Conversely, if $R:\mathcal{K}_2\to \mathcal{K}_1$ is a bounded linear operator such that $V_1R=RV_2$ and $P_{\mathcal{H}_1}R_{|\mathcal{H}_2^\perp}=0$, then $S\coloneqq P_{\mathcal{H}_1}R_{|\mathcal{H}_2}$ satisfies $T_1S=ST_2$.
\end{theorem}
The next theorem characterizes when an operator on a larger space is a power dilation of its compression to a smaller space.
\begin{theorem}(\cite{SARASON}) \label{SL}(\textbf{Sarason's lemma})
Let $\mathcal{H}$ be a closed subspace of a Hilbert space $\mathcal{K}$ and $V:\mathcal{K}\to \mathcal{K}$ be a bounded linear operator. Define $T\coloneqq P_\mathcal{H} V_{|\mathcal{H}}$. Then $T^n= P_\mathcal{H} V^n_{|\mathcal{H}}$ for all $n\in \mathbb{N}$ if and only if there are closed subspaces $\mathcal{M}\subseteq \mathcal{N}\subseteq\mathcal{K}$, both invariant under $V$, such that
\begin{align*}
\mathcal{H}=\mathcal{N}\ominus\mathcal{M},
\end{align*}
where $\mathcal{N}\ominus\mathcal{M}$ denotes the orthogonal complement of $\mathcal{M}$ in $\mathcal{N}$.
\end{theorem}
Following Theorems \ref{HALMOSDILATION}, \ref{NAGYTHEOREM}, \ref{EGERVARY}, \ref{ANDOTHEOREM}, \ref{ILT} and \ref{SL}, the extension of contractions on Hilbert spaces became an active area of research, known as dilation theory (\cite{NAGY, LEVYSHALIT, ARVESON, ORRGUIDED, AMBROZIE, AGLERCARTHY, PAULSENCOMPLETELY, PISIERSIMILARITY}). This in turn motivated the study of contractions and other classes of operators not only on Hilbert spaces, but also on Banach spaces (\cite{FACKLER, STROESCU, AKCOGLU}).
Recently, Bhat, De, and Rakshit abstracted the key ingredients of the Halmos and Sz. Nagy dilation theorems and set up a set theoretic version of dilation theory. The following are the fundamental observations which led Bhat, De, and Rakshit to this set theoretic notion of dilation.
\begin{enumerate}[label=(\roman*)]
\item \textbf{There is an embedding $i$ of the given space in a larger space}.
\item \textbf{There is a nice map in the larger space}.
\item \textbf{There is an idempotent from the larger space onto the given space}.
\end{enumerate}
These observations can be depicted in the following commutative diagram.
\begin{center}
\[
\begin{tikzcd}
&\mathcal{K} \arrow[r,"U^n"]& \mathcal{K} \arrow[d,"P_{\mathcal{H}}"]\\
&\mathcal{H} \arrow[u,"i"] \arrow[r,"T^n"] & \mathcal{H}
\end{tikzcd}
\]
\end{center}
\begin{definition}(\cite{BHATDERAKSHITH})\label{BHATSET}
Let $\mathscr{A}$ be a (nonempty) set and $f:\mathscr{A}\to \mathscr{A}$ be a map. An \textbf{injective power dilation} of $f$ is a quadruple $(\mathscr{B}, i, g,p)$, where $\mathscr{B}$ is a set, $i:\mathscr{A}\to \mathscr{B}$, $g:\mathscr{B}\to \mathscr{B}$ are injective maps, and $p:\mathscr{B}\to \mathscr{B}$ is an idempotent map such that $p(\mathscr{B})=i(\mathscr{A})$ and
\begin{align}\label{SETDILATIONEQUATION}
i(f^n(a))=p(g^n(i(a))), \quad \forall a \in \mathscr{A}, \forall n \in \mathbb{Z}_+.
\end{align}
A dilation $(\mathscr{B}, i, g,p)$ of $f$ is said to be \textbf{minimal} if
\begin{align*}
\mathscr{B}=\bigcup\limits_{n=0}^\infty g^n(i(\mathscr{A})).
\end{align*}
\end{definition}
Equation (\ref{SETDILATIONEQUATION}) says that the following diagram commutes for all $n$.
\begin{center}
\[
\begin{tikzcd}
\mathscr{B} \arrow[r,"g^n"]&\mathscr{B} \arrow[r,"p"]& \mathscr{B}\\
&\mathscr{A} \arrow[ul,"i"] \arrow[r,"f^n"] & \mathscr{A}\arrow[u,"i"]
\end{tikzcd}
\]
\end{center}
Bhat, De, and Rakshit succeeded in obtaining fundamental theorems of dilations. We now recall these results.
\begin{definition}(\cite{BHATDERAKSHITH}) (\textbf{Set shifts})
Let $\mathscr{A}$ be a set. A map $f:\mathscr{A}\to \mathscr{A}$ is said to be a \textbf{shift} if $\cap_{n=0}^\infty f^n(\mathscr{A})=\emptyset$.
\end{definition}
\begin{theorem}(\cite{BHATDERAKSHITH}) (\textbf{Wold decomposition for sets})
Let $f:\mathscr{A}\to \mathscr{A}$ be an injective map. Then $\mathscr{A}$ decomposes uniquely as $\mathscr{A}=\mathscr{A}_b \sqcup \mathscr{A}_s$, where $\mathscr{A}_b$ and $\mathscr{A}_s$ are invariant under $f$, $f_{|\mathscr{A}_b}$ is a bijection of $\mathscr{A}_b$ and $f_{|\mathscr{A}_s}$ is a shift.
\end{theorem}
\begin{theorem}(\cite{BHATDERAKSHITH}) (\textbf{Halmos dilation for sets})
Let $f:\mathscr{A}\to \mathscr{A}$ be a map. Define $\mathscr{B}\coloneqq\mathscr{A}\times \{0,1\}$,
\begin{align*}
&i:\mathscr{A}\ni a \mapsto (a,0)\in \mathscr{B}\\
&g:\mathscr{B}\ni (a,m)\mapsto (a,1-m)\in \mathscr{B}
\end{align*}
and $p:\mathscr{B}\to \mathscr{B}$ by
\begin{align*}
p(a,m)\coloneqq \bigg\{\begin{array}{ll}
(a,0) & \text{ if } m=0\\
(f(a),0) & \text{ if } m=1. \\
\end{array}
\end{align*}
Then $i$ is injective, $g$ is bijective, $p$ is idempotent and
\begin{align*}
i(f(a))= p(g(i(a))),\quad \forall a \in \mathscr{A}.
\end{align*}
\end{theorem}
\begin{theorem}(\cite{BHATDERAKSHITH})\label{SZNAGYSET} (\textbf{Sz. Nagy dilation for sets})
Every map $f:\mathscr{A}\to \mathscr{A}$ admits a minimal injective power dilation.
\end{theorem}
In \cite{BHATDERAKSHITH}, a particular type of minimal injective dilation, called the \textbf{standard dilation}, was defined as follows. Let $f:\mathscr{A}\to \mathscr{A}$ be a map. Define
\begin{align*}
&\mathscr{B} \coloneqq \mathscr{A}\times \mathbb{Z}_+,\\
&i(a)\coloneqq (a,0), \quad \forall a \in \mathscr{A},\\
&g(a,m)\coloneqq (a,m+1), \quad \forall (a,m) \in \mathscr{B},\\
&p(a,m)\coloneqq (f^m(a),0), \quad \forall (a,m) \in \mathscr{B}.
\end{align*}
Then $(\mathscr{B}, i, g,p)$ is a minimal dilation of $f$.
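Indeed, with these definitions, for all $a\in\mathscr{A}$ and $n\in\mathbb{Z}_+$,
\begin{align*}
p(g^n(i(a)))=p(g^n(a,0))=p(a,n)=(f^n(a),0)=i(f^n(a)),
\end{align*}
so the dilation equation (\ref{SETDILATIONEQUATION}) holds, and minimality is clear since every $(a,n)\in\mathscr{B}$ equals $g^n(i(a))$.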
\begin{theorem}(\cite{BHATDERAKSHITH}) (\textbf{Intertwining lifting theorem for sets})
Let $f_1:\mathscr{A}_1\to \mathscr{A}_1$, $f_2:\mathscr{A}_2\to \mathscr{A}_2$ be maps and $(\mathscr{B}_1, i_1, g_1,p_1)$, $(\mathscr{B}_2, i_2, g_2,p_2)$ be their standard dilations, respectively. Suppose $s:\mathscr{A}_2\to \mathscr{A}_1$ is a function such that $sf_2=f_1s$. Then there exists a map $r:\mathscr{B}_2\to \mathscr{B}_1$ such that
\begin{align*}
rg_2=g_1r,\quad rp_2=p_1r, \quad ri_2=i_1s.
\end{align*}
Conversely, if $r:\mathscr{B}_2\to \mathscr{B}_1$ is a map such that $rg_2=g_1r$ and $rp_2=p_1r$, then there exists a map $s:\mathscr{A}_2\to \mathscr{A}_1$ such that $ri_2=i_1s$ and $sf_2=f_1s$.
\end{theorem}
\begin{theorem}(\cite{BHATDERAKSHITH}) (\textbf{Ando dilation for sets})
Let $\mathbb{J}$ be an index set and $\{f_j\}_{j\in \mathbb{J}}$ be a family of commuting functions on $\mathscr{A}$. Then there exists a quadruple $(\mathscr{B}, i, \{g_j\}_{j\in \mathbb{J}},p)$, where $\mathscr{B}$ is a set, $i:\mathscr{A}\to \mathscr{B}$ is an injective map, $\{g_j\}_{j\in \mathbb{J}}$ is a family of commuting functions on $\mathscr{B}$, and $p:\mathscr{B}\to \mathscr{B}$ is idempotent, such that
\begin{align*}
i(f_{j_1}f_{j_2}\cdots f_{j_k}(a))=p(g_{j_1}g_{j_2}\cdots g_{j_k}(i(a))),\quad \forall j_1, \dots, j_k \in \mathbb{J}, \forall a \in \mathscr{A}.
\end{align*}
\end{theorem}
\begin{theorem}(\cite{BHATDERAKSHITH})\label{SARASONLEMMASET} (\textbf{Sarason's lemma for sets})
Let $g:\mathscr{B}\to \mathscr{B}$ be an injective map and let $\mathscr{A}\subseteq\mathscr{B}$. Suppose $f:\mathscr{A}\to \mathscr{A}$ is a map such that $f(a)=g(a)$ for all $a \in \mathscr{A}$ with $g(a) \in \mathscr{A}$. Suppose $\mathscr{A}=\mathscr{A}_2\setminus \mathscr{A}_1$, where $\mathscr{A}_1$ and $\mathscr{A}_2$ are invariant under $g$. Then there exists a map $p:\mathscr{B}\to \mathscr{B}$ such that $p^2=p$, $p(\mathscr{B})=\mathscr{A}$ and
\begin{align*}
pg^n(a)=f^n(a), \quad \forall n \in \mathbb{N}, \forall a \in \mathscr{A}.
\end{align*}
\end{theorem}
\section{WOLD DECOMPOSITION, HALMOS DILATION AND N-DILATION FOR VECTOR SPACES}
In this appendix we consider vector spaces (not necessarily finite dimensional) over arbitrary fields. We note that Definition \ref{SHIFTDEFINITION} of a shift of an operator on a Hilbert space does not use the Hilbert space structure. Thus it can be formulated for vector spaces without modification.
\begin{definition}
Let $\mathcal{V}$ be a vector space and $T:\mathcal{V}\to \mathcal{V}$ be a linear map. The map $T$ is said to be a \textbf{shift} if $\cap _{n=0}^\infty T^n(\mathcal{V})=\{0\} $.
\end{definition}
\begin{theorem}(\textbf{Wold decomposition for vector spaces})
Let $T$ be an injective linear map on a vector space $\mathcal{V}$. Then $\mathcal{V}$ decomposes as $\mathcal{V}=\mathcal{V}_b\oplus \mathcal{V}_s$, where $\mathcal{V}_b$ is a $T$-invariant subspace of $\mathcal{V}$, $T_{|\mathcal{V}_b}:\mathcal{V}_b\to \mathcal{V}_b$ is a bijection and $T_{|\mathcal{V}_s}:\mathcal{V}_s \to \mathcal{V}$ is a shift.
\end{theorem}
\begin{proof}
Define $\mathcal{V}_b\coloneqq \cap _{n=0}^\infty T^n(\mathcal{V})$ and let $\mathcal{V}_s$ be a vector space complement of $\mathcal{V}_b$ in $\mathcal{V}$. We clearly have $\mathcal{V}=\mathcal{V}_b\oplus \mathcal{V}_s$. Now $T(\mathcal{V}_b)=T (\cap _{n=0}^\infty T^n(\mathcal{V}))\subseteq \cap _{n=0}^\infty T^n(\mathcal{V})=\mathcal{V}_b$. Thus $\mathcal{V}_b$ is a $T$-invariant subspace of $\mathcal{V}$. We now show that $T_{|\mathcal{V}_b}$ is a bijection. Since $T$ is already injective, it suffices to show that $T_{|\mathcal{V}_b}$ is surjective. Let $y\in \mathcal{V}_b$. Then there exists a sequence $\{x_n\}_{n=1}^\infty$ in $\mathcal{V}$ such that $y=Tx_1=T^2x_2=T^3x_3=\cdots .$ Since $T$ is injective, we then have $x_1=Tx_2=T^2x_3=\cdots $. Therefore $y=Tx_1$ and $x_1\in \mathcal{V}_b$. Thus $T_{|\mathcal{V}_b}$ is surjective. We are now left with proving that $T_{|\mathcal{V}_s}$ is a shift. Let $y\in \cap _{n=0}^\infty (T_{|\mathcal{V}_s})^n(\mathcal{V}_s)\subseteq (\cap _{n=0}^\infty T^n(\mathcal{V}))\cap \mathcal{V}_s= \mathcal{V}_b\cap \mathcal{V}_s=\{0\}$. Hence $y=0$, which completes the proof.
\end{proof}
Since vector space complements are not unique, we do not have uniqueness in Wold decomposition for vector spaces. We next derive Halmos dilation for vector spaces.
\begin{theorem}(\textbf{Halmos dilation for vector spaces})\label{HALMOSVECTORSPACE}
Let $\mathcal{V}$ be a vector space and $T: \mathcal{V} \to \mathcal{V}$ be a linear map. Then the operator
\begin{align*}
U\coloneqq \begin{pmatrix}
T & I \\
I & 0 \\
\end{pmatrix}
\end{align*}
is invertible on $\mathcal{V}\oplus \mathcal{V}$ and satisfies
\begin{align*}
T=P_\mathcal{V}U_{|\mathcal{V}},
\end{align*}
where $P_\mathcal{V}:\mathcal{V}\oplus \mathcal{V}\to \mathcal{V}\oplus \mathcal{V}$ is the first coordinate projection onto $\mathcal{V}$.
\end{theorem}
\begin{proof}
It suffices to produce an inverse for $U$. A direct calculation shows that
\begin{align*}
V\coloneqq \begin{pmatrix}
0 & I \\
I & -T \\
\end{pmatrix}
\end{align*}
is the inverse of $U$.
\end{proof}
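Explicitly, the calculation reads
\begin{align*}
UV=\begin{pmatrix}
T & I \\
I & 0 \\
\end{pmatrix}
\begin{pmatrix}
0 & I \\
I & -T \\
\end{pmatrix}
=\begin{pmatrix}
I & T-T \\
0 & I \\
\end{pmatrix}=I,
\qquad
VU=\begin{pmatrix}
0 & I \\
I & -T \\
\end{pmatrix}
\begin{pmatrix}
T & I \\
I & 0 \\
\end{pmatrix}
=\begin{pmatrix}
I & 0 \\
T-T & I \\
\end{pmatrix}=I.
\end{align*}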
In the sequel, any invertible operator of the form
\begin{align*}
\begin{pmatrix}
T & B \\
C & D \\
\end{pmatrix},
\end{align*}
where $B,C,D:\mathcal{V} \to \mathcal{V}$ are linear operators, will be called a \textbf{Halmos dilation} of $T$.
We now observe that Halmos dilations for vector spaces are not unique. Using the theory of \textbf{block matrices} (\cite{LUSHIOU}) we can produce a variety of Halmos dilations for a given operator. The following are some classes of Halmos dilations.
\begin{enumerate}[label=(\roman*)]
\item If $T: \mathcal{V} \to \mathcal{V}$ is an invertible linear map and the linear operators $B,C,D:\mathcal{V} \to \mathcal{V}$ are such that $D-CT^{-1}B$ is invertible, then
the operator
\begin{align*}
U\coloneqq \begin{pmatrix}
T & B \\
C & D \\
\end{pmatrix}
\text{ is a Halmos dilation of $T$ on $\mathcal{V}\oplus \mathcal{V}$ whose inverse is }
\end{align*}
\begin{align*}
\begin{pmatrix}
T^{-1} +T^{-1}B(D-CT^{-1}B)^{-1}CT^{-1}& -T^{-1}B(D-CT^{-1}B)^{-1} \\
-(D-CT^{-1}B)^{-1}CT^{-1} & (D-CT^{-1}B)^{-1} \\
\end{pmatrix}.
\end{align*}
\item If $D: \mathcal{V} \to \mathcal{V}$ is an invertible linear map and the linear operators $B,C:\mathcal{V} \to \mathcal{V}$ are such that $T-BD^{-1}C$ is invertible, then
the operator
\begin{align*}
\begin{pmatrix}
T & B \\
C & D \\
\end{pmatrix}
\text{ is a Halmos dilation of $T$ on $\mathcal{V}\oplus \mathcal{V}$ whose inverse is }
\end{align*}
\begin{align*}
\begin{pmatrix}
(T-BD^{-1}C)^{-1} & -(T-BD^{-1}C)^{-1}BD^{-1} \\
-D^{-1}C(T-BD^{-1}C)^{-1} & D^{-1}+D^{-1}C(T-BD^{-1}C)^{-1}BD^{-1} \\
\end{pmatrix}.
\end{align*}
\item If $B: \mathcal{V} \to \mathcal{V}$ is an invertible linear map and the linear operators $C,D:\mathcal{V} \to \mathcal{V}$ are such that $C-DB^{-1}T$ is invertible, then
the operator
\begin{align*}
\begin{pmatrix}
T & B \\
C & D \\
\end{pmatrix}
\text{ is a Halmos dilation of $T$ on $\mathcal{V}\oplus \mathcal{V}$ whose inverse is }
\end{align*}
\begin{align*}
\begin{pmatrix}
-(C-DB^{-1}T)^{-1}DB^{-1} & (C-DB^{-1}T)^{-1} \\
B^{-1}+B^{-1}T(C-DB^{-1}T)^{-1}DB^{-1} & -B^{-1}T(C-DB^{-1}T)^{-1} \\
\end{pmatrix}.
\end{align*}
\item If $C: \mathcal{V} \to \mathcal{V}$ is an invertible linear map and the linear operators $B,D:\mathcal{V} \to \mathcal{V}$ are such that $B-TC^{-1}D$ is invertible, then
the operator
\begin{align*}
\begin{pmatrix}
T & B \\
C & D \\
\end{pmatrix}
\text{ is a Halmos dilation of $T$ on $\mathcal{V}\oplus \mathcal{V}$ whose inverse is }
\end{align*}
\begin{align*}
\begin{pmatrix}
-C^{-1}D (B-TC^{-1}D)^{-1} & C^{-1}+C^{-1}D(B-TC^{-1}D)^{-1}TC^{-1} \\
(B-TC^{-1}D)^{-1} & -(B-TC^{-1}D)^{-1}TC^{-1}\\
\end{pmatrix}.
\end{align*}
\end{enumerate}
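As a simple illustration of (ii), take $\mathcal{V}=\mathbb{R}$, $T=1$, $B=3$, $C=1$, $D=2$, so that $T-BD^{-1}C=-\frac{1}{2}$ is invertible. The formula then gives
\begin{align*}
\begin{pmatrix}
1 & 3 \\
1 & 2 \\
\end{pmatrix}^{-1}
=\begin{pmatrix}
-2 & 3 \\
1 & -1 \\
\end{pmatrix},
\end{align*}
which agrees with the inverse computed directly from the determinant $1\cdot 2-3\cdot 1=-1$.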
Recently, \cite{BHATMUKHERJEE} proved that there is a certain kind of uniqueness of Halmos dilations of strict contractions on Hilbert spaces, as shown below.
\begin{theorem}(\cite{BHATMUKHERJEE})\label{BM}
Let $\mathcal{H}$ be a finite dimensional Hilbert space and $T:\mathcal{H}\to \mathcal{H}$ be a strict contraction (i.e., $\|T\|<1$). Then every Halmos dilation of $T$ on $\mathcal{H}\oplus \mathcal{H}$ is unitarily equivalent to
\begin{align*}
\begin{pmatrix}
T & -\sqrt{I-TT^*}W \\
\sqrt{I-T^*T} & T^*W \\
\end{pmatrix}, \quad \text{ for some unitary operator } W:\mathcal{H}\to \mathcal{H}.
\end{align*}
\end{theorem}
We next show that the analogue of Theorem \ref{BM} fails for Halmos dilations in vector spaces.
\begin{theorem}\label{HALMOSTHEOREMVS}
Let $\mathcal{V}$ be a finite dimensional vector space and $T:\mathcal{V} \to \mathcal{V}$ be a linear operator with nonzero trace. Then there are Halmos dilations of $T$ which are not similar.
\end{theorem}
\begin{proof}
Note that \begin{align*}
\begin{pmatrix}
T & T-I \\
T+I & T \\
\end{pmatrix}
\end{align*}
is an invertible operator and hence is a Halmos dilation of $T$. It is now enough to show that the matrices
\begin{align*}
\begin{pmatrix}
T & T-I \\
T+I & T \\
\end{pmatrix} \quad \text{ and } \quad \begin{pmatrix}
T & I \\
I & 0 \\
\end{pmatrix}
\end{align*}
are not similar. Since $\mathcal{V}$ is finite dimensional, similar operators have the same trace; the traces of the two matrices are $2\operatorname{tr}(T)$ and $\operatorname{tr}(T)$, which differ because $\operatorname{tr}(T)\neq 0$. Hence the matrices are not similar.
\end{proof}
Theorem \ref{HALMOSVECTORSPACE} can be generalized to give a vector space version of Theorem \ref{EGERVARY}.
\begin{theorem}(\textbf{N-dilation for vector spaces})\label{NDILATIONVECTOR}
Let $\mathcal{V}$ be a vector space and $T: \mathcal{V} \to \mathcal{V}$ be a linear map. Let $N$ be a natural number. Then the operator
\begin{align*}
U\coloneqq \begin{pmatrix}
T & 0& 0 & \cdots &0 & I \\
I & 0& 0 & \cdots &0& 0 \\
0&I&0&\cdots &0& 0\\
0&0&I&\cdots &0 & 0\\
\vdots &\vdots &\vdots & & \vdots &\vdots \\
0&0&0&\cdots &0 & 0\\
0&0&0&\cdots &I & 0\\
\end{pmatrix}_{(N+1)\times (N+1)}
\end{align*}is invertible on $\oplus_{k=1}^{N+1} \mathcal{V}$ and
\begin{align}\label{FINITEDILATIONEQUATION}
T^k=P_\mathcal{V}U_{|\mathcal{V}}^k,\quad \forall k=1, \dots, N,
\end{align}
where $P_\mathcal{V}:\oplus_{k=1}^{N+1} \mathcal{V}\to \oplus_{k=1}^{N+1} \mathcal{V}$ is the first coordinate projection onto $\mathcal{V}$.
\end{theorem}
\begin{proof}
A direct calculation of the powers of $U$ gives Equation (\ref{FINITEDILATIONEQUATION}). To complete the proof, we now need to show that $U$ is invertible. Define
\begin{align*}
V\coloneqq \begin{pmatrix}
0 & I& 0& 0 & \cdots &0 & 0 \\
0 & 0& I& 0 & \cdots &0& 0 \\
0&0&0& I&\cdots &0& 0\\
0&0&0& 0&\cdots &0 & 0\\
\vdots &\vdots &\vdots & \vdots& & \vdots &\vdots \\
0&0&0& 0&\cdots &0 & I\\
I&-T&0& 0&\cdots &0 & 0\\
\end{pmatrix}_{(N+1)\times (N+1)}.
\end{align*}
Then $UV=VU=I$. Thus $V$ is the inverse of $U$.
\end{proof}
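The restriction $k\leq N$ in Equation (\ref{FINITEDILATIONEQUATION}) is sharp. Indeed, for $x\in\mathcal{V}$ one computes $U^k(x,0,\dots,0)=(T^kx, T^{k-1}x, \dots, x, 0, \dots, 0)$ for $1\leq k\leq N$, while
\begin{align*}
U^{N+1}(x,0,\dots,0)=U(T^Nx, T^{N-1}x, \dots, x)=(T^{N+1}x+x, T^Nx, \dots, Tx),
\end{align*}
so that $P_\mathcal{V}U^{N+1}_{|\mathcal{V}}x=T^{N+1}x+x\neq T^{N+1}x$ in general.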
Note that Equation (\ref{FINITEDILATIONEQUATION}) holds only up to $N$, and not for $N+1$ or higher powers. We next derive the vector space version of Theorem \ref{SCHAFFERTHEOREM}. In the following theorem, $\oplus_{n=-\infty}^{\infty} \mathcal{V}$ is the vector space defined by
\begin{align*}
\oplus_{n=-\infty}^{\infty} \mathcal{V}\coloneqq \left\{ \{x_n\}_{n=-\infty}^\infty : x_n \in \mathcal{V}, \forall n \in \mathbb{Z}, x_n\neq 0
\text{ for only finitely many } n\right\}
\end{align*}
with the natural operations.
\begin{theorem}\label{SCHAFFERVECTOR}(\textbf{Sz. Nagy dilation for vector spaces})
Let $\mathcal{V}$ be a vector space and $T: \mathcal{V} \to \mathcal{V}$ be a linear map. Let $U\coloneqq[u_{n,m}]_{-\infty < n,m< \infty}$ be the operator on
$\oplus_{n=-\infty}^{\infty} \mathcal{V}$ given by the infinite matrix with entries
\begin{align*}
u_{0,0}\coloneqq T, \quad u_{n,n+1}\coloneqq I, \quad \forall n \in \mathbb{Z}, \quad u_{n,m}\coloneqq 0 \quad \text{ otherwise},
\end{align*}
i.e.,
\begin{align*}
U=\begin{pmatrix}
&\vdots &\vdots & \vdots & \vdots & \vdots & \\
\cdots & 0 & I& 0 & 0& 0& \cdots & \\
\cdots & 0 & 0& I & 0& 0&\cdots & \\
\cdots & 0&0&\underline{T}&I& 0&\cdots&\\
\cdots & 0&0&0&0& I&\cdots &\\
\cdots & 0&0&0&0& 0&\cdots &\\
& \vdots &\vdots &\vdots &\vdots & \vdots & \\
\end{pmatrix}_{\infty\times \infty}
\end{align*}
where $T$ is in the $(0,0)$ position (which is underlined). Then $U$ is invertible on $\oplus_{n=-\infty}^{\infty} \mathcal{V}$ and
\begin{align}\label{INFINITEDILATIONEQUATION}
T^n=P_\mathcal{V}U^n_{|\mathcal{V}},\quad \forall n\in \mathbb{N},
\end{align}
where $P_\mathcal{V}:\oplus_{n=-\infty}^{\infty} \mathcal{V}\to \oplus_{n=-\infty}^{\infty} \mathcal{V}$ is the first coordinate projection onto $\mathcal{V}$.
\end{theorem}
\begin{proof}
We obtain Equation (\ref{INFINITEDILATIONEQUATION}) by calculating the powers of $U$. The matrix $V\coloneqq [v_{n,m}]_{-\infty < n,m< \infty}$ defined by
\begin{align*}
v_{0,0}\coloneqq 0, \quad v_{1,-1}\coloneqq -T, \quad v_{n,n-1}\coloneqq I, \quad \forall n \in \mathbb{Z}, \quad v_{n,m}\coloneqq 0 \quad \text{ otherwise},
\end{align*}
i.e.,
\begin{align*}
V=\begin{pmatrix}
&\vdots &\vdots & \vdots & \vdots & \vdots & \\
\cdots & I & 0& 0 & 0& 0& \cdots & \\
\cdots & 0 & I& \underline{0} & 0& 0&\cdots & \\
\cdots & 0&-T&I&0& 0&\cdots&\\
\cdots & 0&0&0&I& 0&\cdots &\\
\cdots & 0&0&0&0& I&\cdots &\\
& \vdots &\vdots &\vdots &\vdots & \vdots & \\
\end{pmatrix}_{\infty\times \infty}
\end{align*}
where $0$ is in the $(0,0)$ position (which is underlined), satisfies $UV=VU=I$. Hence $U$ is invertible, which completes the proof.
\end{proof}
\section{MINIMAL DILATION, INTERTWINING LIFTING THEOREM AND VARIANT OF ANDO DILATION FOR VECTOR SPACES}
An important observation concerning Theorems \ref{HALMOSVECTORSPACE}, \ref{NDILATIONVECTOR} and \ref{SCHAFFERVECTOR} is that the dilation is not optimal: even if the given operator is invertible, $U$ does not reduce to $T$. To overcome this, we next adopt the definition of dilation given by Bhat, De, and Rakshit (\cite{BHATDERAKSHITH}). The set theoretic definition of dilation in Definition \ref{BHATSET} motivated Bhat, De, and Rakshit to introduce the dilation of linear maps on vector spaces.
\begin{definition}(\cite{BHATDERAKSHITH})
Let $\mathcal{V}$ be a vector space and $T: \mathcal{V} \to \mathcal{V}$ be a linear map. A \textbf{linear injective dilation} of $T$ is a quadruple $(\mathcal{W}, I, U,P)$, where $\mathcal{W}$ is a vector space, $I: \mathcal{V} \to \mathcal{W}$ and $U: \mathcal{W} \to \mathcal{W}$ are injective linear maps, and $P: \mathcal{W} \to \mathcal{W}$ is an idempotent linear map such that $P(\mathcal{W})=I(\mathcal{V})$ and
\begin{align*}
\text{(\textbf{Dilation equation})} \quad IT^nx=PU^nIx, \quad \forall n\in \mathbb{Z}_+, \forall x \in \mathcal{V}.
\end{align*}
A dilation $(\mathcal{W}, I, U,P)$ of $T$ is said to be \textbf{minimal} if
\begin{align*}
\mathcal{W}=\operatorname{span}\{U^nIx: n\in \mathbb{Z}_+, x \in \mathcal{V}\}.
\end{align*}
\end{definition}
An easier way to remember the dilation equation is the following commutative diagram.
\begin{center}
\[
\begin{tikzcd}
\mathcal{W} \arrow[r,"U^n"]&\mathcal{W} \arrow[r,"P"]& \mathcal{W}\\
&\mathcal{V} \arrow[ul,"I"] \arrow[r,"T^n"] & \mathcal{V}\arrow[u,"I"]
\end{tikzcd}
\]
\end{center}
The following result is the vector space version of Theorem \ref{SZNAGYSET}.
\begin{theorem}(\cite{BHATDERAKSHITH})\label{STANDARDDILATION} (\textbf{Minimal Sz. Nagy dilation for vector spaces})
Every linear map $T: \mathcal{V} \to \mathcal{V}$ admits a minimal injective linear dilation.
\end{theorem}
\begin{proof}
We reproduce the proof given in \cite{BHATDERAKSHITH} for future use. Define
\begin{align*}
\mathcal{W}\coloneqq \left\{(x_n)_{n=0}^\infty :x_n \in \mathcal{V}, \forall n \in \mathbb{Z}_+, x_n\neq 0 \text{ for only finitely many } n\right\}.
\end{align*}
Clearly $\mathcal{W}$ is a vector space with respect to the natural operations. Now define
\begin{align*}
& I:\mathcal{V} \ni x \mapsto (x, 0, \dots ) \in \mathcal{W},\\
& U: \mathcal{W} \ni (x_n)_{n=0}^\infty \mapsto (0, x_0, \dots) \in \mathcal{W},\\
& P:\mathcal{W} \ni (x_n)_{n=0}^\infty \mapsto \sum_{n=0}^{\infty}IT^nx_n\in \mathcal{W}.
\end{align*}
Then $(\mathcal{W}, I, U,P)$ is a minimal injective linear dilation of $T$.
\end{proof}
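Let us briefly verify the claim. For $x\in\mathcal{V}$ and $n\in\mathbb{Z}_+$, the element $U^nIx$ has $x$ in the $n$-th coordinate and zeros elsewhere, so
\begin{align*}
PU^nIx=IT^nx,
\end{align*}
which is the dilation equation. Moreover, $P$ is idempotent since $P_{|I(\mathcal{V})}$ is the identity, and minimality holds because every finitely supported sequence is a finite sum $\sum_{n}U^nIx_n$.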
We call the dilation $(\mathcal{W}, I, U,P)$ constructed in Theorem \ref{STANDARDDILATION} the \textbf{standard dilation} of $T$. We next consider the intertwining lifting theorem.
\begin{theorem}(\textbf{Intertwining lifting theorem for vector spaces}) Let $\mathcal{V}_1$, $\mathcal{V}_2$ be vector spaces, $T_1: \mathcal{V}_1 \to \mathcal{V}_1$, $T_2: \mathcal{V}_2 \to \mathcal{V}_2$ be linear maps. Let $(\mathcal{W}_1, I_1, U_1,P_1)$, $(\mathcal{W}_2, I_2, U_2,P_2)$ be standard dilations of $T_1$, $T_2$, respectively. If $S: \mathcal{V}_2 \to \mathcal{V}_1$ is a linear map such that $T_1S=ST_2$, then there exists a linear map $R: \mathcal{W}_2 \to \mathcal{W}_1$ such that
\begin{align}\label{INTERIN}
U_1R=RU_2, \quad RP_2=P_1R, \quad RI_2=I_1S.
\end{align}
Conversely if $R: \mathcal{W}_2 \to \mathcal{W}_1$ is a linear map such that $U_1R=RU_2, RP_2=P_1R$, then there exists a linear map $S: \mathcal{V}_2 \to \mathcal{V}_1$ such that
\begin{align}\label{INTERSECOND}
RI_2=I_1S, \quad T_1S=ST_2.
\end{align}
\end{theorem}
\begin{proof}
Define $R:\mathcal{W}_2 \ni (x_n)_{n=0}^\infty \mapsto (Sx_n)_{n=0}^\infty \in \mathcal{W}_1 $. We now verify the three equalities in Equation (\ref{INTERIN}). Let $ (x_n)_{n=0}^\infty \in \mathcal{W}_2$. Then
\begin{align*}
& U_1R(x_n)_{n=0}^\infty= U_1(Sx_n)_{n=0}^\infty=(0, S x_0, Sx_1, \dots), \\
&RU_2(x_n)_{n=0}^\infty=R(0, x_0, x_1,\dots)=(0, S x_0, Sx_1, \dots),
\end{align*}
\vspace{-1cm}
\begin{align*}
RP_2(x_n)_{n=0}^\infty&=R\left(\sum_{n=0}^{\infty}I_2T_2^nx_n\right)=\sum_{n=0}^{\infty}RI_2T_2^nx_n\\
&=\sum_{n=0}^{\infty}R(T_2^nx_n, 0, 0, \dots)=\sum_{n=0}^{\infty}(ST_2^nx_n, 0, 0, \dots),
\end{align*}
\vspace{-1cm}
\begin{align*}
P_1R(x_n)_{n=0}^\infty&= P_1(Sx_n)_{n=0}^\infty=\sum_{n=0}^{\infty}I_1T_1^nSx_n\\
&=\sum_{n=0}^{\infty}I_1ST_2^nx_n=\sum_{n=0}^{\infty}(ST_2^nx_n, 0, 0, \dots),
\end{align*}
\vspace{-1cm}
\begin{align*}
& RI_2x=R(x, 0, 0, \dots)=(Sx, 0, 0, \dots), \quad I_1Sx=(Sx, 0, 0, \dots).
\end{align*}
We now consider the converse part. For this, we first define the linear map $S$. Let $y \in \mathcal{V}_2$. Since $RP_2(y, 0, \dots)=P_1R(y, 0, \dots)\in I_1(\mathcal{V}_1)$ and $I_1$ is injective, there exists a unique $x \in \mathcal{V}_1$ such that $RP_2(y, 0, \dots)=P_1R(y, 0, \dots)=I_1(x)$. We now define $Sy\coloneqq x.$ Then $S$ is well-defined and linear. Let $y \in \mathcal{V}_2$ and $x \in \mathcal{V}_1$ be such that $Sy=x$. Then $I_1Sy=RP_2(y, 0, \dots)=RI_2y$. Thus we have verified the first equality in (\ref{INTERSECOND}). It remains to verify the second equality. We now calculate
\begin{align}\label{CONVERSE-1}
RP_2U_2(x, 0, \dots)=RP_2(0, x, 0, \dots)=RI_2T_2x
\end{align}
and
\begin{align}\label{CONVERSEZERO}
P_1U_1R(x, 0, \dots)&=P_1RU_2(x, 0, \dots)=P_1R(0,x,0, \dots)\\
&=RP_2(0,x,0, \dots)=RI_2T_2x, \quad \forall x \in \mathcal{V}_2.
\end{align}
Given conditions produce
\begin{align}\label{CONVERSE}
RP_2U_2=P_1RU_2=P_1U_1R.
\end{align}
Equation (\ref{CONVERSE}) says that the left-hand sides of (\ref{CONVERSE-1}) and (\ref{CONVERSEZERO}) are equal. Since $P_1U_1R(x, 0, \dots)=P_1U_1I_1Sx=I_1T_1Sx$ and $RI_2T_2x=I_1ST_2x$, the injectivity of $I_1$ gives $T_1S=ST_2$, which completes the proof.
\end{proof}
The following is a variant of the Ando dilation for vector spaces.
\begin{theorem}(\textbf{Ando-like dilation for vector spaces})
Let $\mathcal{V}$ be a vector space and $T, S: \mathcal{V} \to \mathcal{V}$ be commuting linear maps. Then there are dilations $(\mathcal{W}, I, U,P)$ and $(\mathcal{W}, I, V,P)$ of $T$ and $S$, respectively, such that
\begin{align*}
\begin{pmatrix}
0_c& U
\end{pmatrix}
=\begin{pmatrix}
0_r\\
V
\end{pmatrix}
\end{align*}
and
\begin{align*}
\quad IT^nS^mx=PU^nV^mIx, \quad \forall n,m\in \mathbb{Z}_+, \forall x \in \mathcal{V},
\end{align*}
where $0_c$ denotes the infinite column matrix of zero vectors and $0_r$ denotes the infinite row matrix of zero vectors.
\end{theorem}
\begin{proof}
We extend the construction in the proof of Theorem \ref{STANDARDDILATION}. Define
\begin{align*}
\mathcal{W}\coloneqq \bigg\{
\begin{pmatrix}
x_{0,0} & x_{0,1} & x_{0,2}&\cdots \\
x_{1,0} & x_{1,1} & x_{1,2}&\cdots\\
x_{2,0} & x_{2,1} & x_{2,2}&\cdots\\
\vdots &\vdots &\vdots &\ddots
\end{pmatrix}_{\infty \times \infty }
:x_{n,m} \in \mathcal{V}, \forall n,m \in \mathbb{Z}_+, x_{n,m}\neq 0 \\
\text{ only for finitely many } (n,m)'\text{s}\bigg\}.
\end{align*}
Then $\mathcal{W}$ becomes a vector space with respect to natural operations. We now define the following four linear maps:
\begin{align*}
& I: \mathcal{V}\ni x \mapsto \begin{pmatrix}
x & 0 & 0&\cdots \\
0 & 0 & 0&\cdots\\
0 & 0 & 0&\cdots\\
\vdots &\vdots &\vdots &\ddots
\end{pmatrix}
\in \mathcal{W}\\
& U: \mathcal{W} \ni \begin{pmatrix}
x_{0,0} & x_{0,1} & x_{0,2}&\cdots \\
x_{1,0} & x_{1,1} & x_{1,2}&\cdots\\
x_{2,0} & x_{2,1} & x_{2,2}&\cdots\\
\vdots &\vdots &\vdots &\ddots
\end{pmatrix} \mapsto \begin{pmatrix}
0&0&0&\cdots\\
x_{0,0} & x_{0,1} & x_{0,2}&\cdots \\
x_{1,0} & x_{1,1} & x_{1,2}&\cdots\\
\vdots &\vdots &\vdots &\ddots
\end{pmatrix} \in \mathcal{W}\\
& V: \mathcal{W} \ni \begin{pmatrix}
x_{0,0} & x_{0,1} & x_{0,2}&\cdots \\
x_{1,0} & x_{1,1} & x_{1,2}&\cdots\\
x_{2,0} & x_{2,1} & x_{2,2}&\cdots\\
\vdots &\vdots &\vdots &\ddots
\end{pmatrix} \mapsto \begin{pmatrix}
0&x_{0,0} & x_{0,1} &\cdots \\
0& x_{1,0} & x_{1,1}&\cdots\\
0& x_{2,0} & x_{2,1} &\cdots \\
\vdots & \vdots &\vdots &\ddots
\end{pmatrix} \in \mathcal{W}\\
& P :\mathcal{W} \ni \begin{pmatrix} x_{0,0} & x_{0,1} & x_{0,2}&\cdots \\
x_{1,0} & x_{1,1} & x_{1,2}&\cdots\\
x_{2,0} & x_{2,1} & x_{2,2}&\cdots\\
\vdots &\vdots &\vdots &\ddots
\end{pmatrix} \mapsto \sum_{m=0}^{\infty}\sum_{n=0}^{\infty}IT^nS^mx_{n,m} \in \mathcal{W}.
\end{align*}
We then have
\begin{align*}
\begin{pmatrix}
0_c& U
\end{pmatrix}=
\begin{pmatrix}
0& 0 & 0 & 0&\cdots \\
0& x_{0,0} & x_{0,1} & x_{0,2}&\cdots \\
0& x_{1,0} & x_{1,1} & x_{1,2}&\cdots\\
0& x_{2,0} & x_{2,1} & x_{2,2}&\cdots\\
0& \vdots &\vdots &\vdots &\ddots
\end{pmatrix}
=\begin{pmatrix}
0_r\\
V
\end{pmatrix}.
\end{align*}
Now $PU^nIx=IT^nx$ and $PV^mIx=IS^mx$, $\forall x \in \mathcal{V}$, $\forall n,m\in \mathbb{Z}_+$. Hence $(\mathcal{W}, I, U,P)$ and $(\mathcal{W}, I, V,P)$ are dilations of $T$ and $S$, respectively. A calculation now shows that $ IT^nS^mx=PU^nV^mIx, \forall n,m\in \mathbb{Z}_+, \forall x \in \mathcal{V}$.
\end{proof}
\textbf{Conclusion and future work:} In this appendix we derived some basic results on the dilation of linear maps. Since vector spaces are more general than Hilbert spaces and Hilbert-space tools are not available in this setting, we intend to explore the algebraic aspects of dilation theory further.
\leavevmode\newpage
\addcontentsline{toc}{chapter}{APPENDIX B: COMMUTATORS CLOSE TO THE IDENTITY}
\par~
\begin{center}
\textbf{{\fontsize{16}{1em}\selectfont APPENDIX B: COMMUTATORS CLOSE TO THE IDENTITY}} \\
\end{center}
\section{C*-ALGEBRAS}
Israel M. Gelfand abstractly defined the notion of a complete normed algebra (\cite{GELFAND}). These are Banach spaces in which elements can be multiplied and the multiplication is continuous.
\begin{definition}(cf. \cite{ZHUBOOK})
A Banach space $ \mathcal{A}$ over $ \mathbb{C} $ is said to be a unital \textbf{Banach algebra} if it is a unital algebra and the multiplication satisfies the following:
\begin{enumerate} [label=(\roman*)]
\item $ \|xy\|\leq \|x\|\|y\|$, $ \forall x, y \in\mathcal{A}. $
\item $\|e\|=1$, where $e$ is the multiplicative identity of $ \mathcal{A}$.
\end{enumerate}
\end{definition}
\begin{example}(cf. \cite{ZHUBOOK})
\begin{enumerate}[label=(\roman*)]
\item If $K$ is a compact Hausdorff space, then the space $\mathcal{C}(K)$ of all complex-valued continuous functions on $K$ is a commutative unital Banach algebra w.r.t. sup-norm and pointwise multiplication.
\item If $\mathcal{X} $ is a Banach space, then the collection $\mathcal{B}(\mathcal{X})$ of all bounded linear operators on $\mathcal{X}$ is a noncommutative unital Banach algebra w.r.t. operator-norm and operator composition.
\end{enumerate}
\end{example}
\begin{proposition}(cf. \cite{ALLAN})
Every unital Banach algebra $\mathcal{A}$ can be isometrically embedded in $\mathcal{B}(\mathcal{A})$.
\end{proposition}
One of the most important notions associated with the study of Banach algebras is that of the spectrum.
\begin{definition}(cf. \cite{ZHUBOOK})
Let $ \mathcal{A}$ be a unital Banach algebra with identity $ e$. The \textbf{spectrum} of an element $ x \in \mathcal{A}$ is the set of all complex numbers
$ \lambda$ such that $ \lambda e-x$ is not invertible.
\end{definition}
\begin{theorem}(cf. \cite{ZHUBOOK})
The spectrum of every element of a unital Banach algebra is a nonempty compact subset of $\mathbb{C}$.
\end{theorem}
The following is the first fundamental theorem in the study of Banach algebras; it characterizes Banach algebras in terms of spectral information.
\begin{theorem}(cf. \cite{ZHUBOOK}) (\textbf{Gelfand-Mazur theorem})
If every nonzero element of a Banach algebra is invertible, then it is isometrically isomorphic to $\mathbb{C}$.
\end{theorem}
A subclass of Banach algebras, known as C*-algebras, allows one to do most of the things that hold for complex numbers. The notion of a C*-algebra first appeared in the work of \cite{GELFANDNEUMARK}.
\begin{definition}(cf. \cite{ZHUBOOK})
A unital Banach algebra $ \mathcal{A}$ is called a unital \textbf{C*-algebra} if there exists a map $*:\mathcal{A}\ni x \mapsto x^* \in \mathcal{A}$ such that the following conditions hold.
\begin{enumerate}[label=(\roman*)]
\item $ (x^*)^*=x, \forall x \in\mathcal{A}. $
\item $ (x+y)^*=x^*+y^*, \forall x, y \in\mathcal{A}. $
\item $(\alpha x)^*=\overline{\alpha}x^*$, $\forall \alpha \in \mathbb{C}, \forall x \in\mathcal{A}$.
\item $ (xy)^*=y^*x^*, \forall x, y \in\mathcal{A}. $
\item $\|x^*x\|=\|x\|^2, \forall x \in\mathcal{A}. $
\end{enumerate}
A map $*:\mathcal{A}\ni x \mapsto x^* \in \mathcal{A}$ satisfying (i)-(iv) is called an \textbf{involution}.
\end{definition}
The term C*-algebra was coined by Segal; the letter `C' stands for uniformly closed. C*-algebras are also known as Gelfand-Naimark algebras (cf. \cite{PIETSCH}).
\begin{example}(cf. \cite{ZHUBOOK})
\begin{enumerate}[label=(\roman*)]
\item If $K$ is a compact Hausdorff space, then $\mathcal{C}(K)$ is a commutative unital C*-algebra w.r.t. involution $f^*(x)\coloneqq\overline{f(x)}, \forall x \in K$.
\item If $\mathcal{H}$ is a Hilbert space, then $\mathcal{B}(\mathcal{H})$ is a noncommutative unital C*-algebra w.r.t. operator adjoint.
\item If $\mathcal{H}$ is a Hilbert space, then the space $\mathcal{K}(\mathcal{H})$ of compact operators is a noncommutative C*-subalgebra of $\mathcal{B}(\mathcal{H})$. If $\mathcal{H}$ is infinite dimensional, then this algebra is non-unital.
\end{enumerate}
\end{example}
The following two results characterize unital C*-algebras.
\begin{theorem}(cf. \cite{ZHUBOOK}) (\textbf{Gelfand-Naimark theorem})
If $ \mathcal{A}$ is a commutative unital C*-algebra, then $ \mathcal{A}$ is isometrically $*$-isomorphic to
$\mathcal{C}(K)$ for some compact Hausdorff space $K$.
\end{theorem}
\begin{theorem}(cf. \cite{ZHUBOOK}) (\textbf{Gelfand-Naimark-Segal theorem})\label{GELFANDNAIMARKSEGAL}
Let $ \mathcal{A}$ be a unital C*-algebra. Then there exists a Hilbert space $ \mathcal{H}$ such that $ \mathcal{A}$ is isometrically $ *$-isomorphic
to a C*-subalgebra of $ \mathcal{B}(\mathcal{H})$.
\end{theorem}
\begin{example}(cf. \cite{KADISONRINGROSE})
Consider the unital C*-algebra $ \mathcal{C}[0, 1]$. The map
\begin{align*}
&\pi : \mathcal{C}[0, 1] \ni f \mapsto \pi (f) \in \mathcal{B}(\mathcal{L}^2[0, 1]);\\
& \pi (f):\mathcal{L}^2[0, 1]\ni g \mapsto (\pi (f))(g) \coloneqq fg \in \mathcal{L}^2[0, 1]
\end{align*}
is an isometric $ *$-isomorphism to a C*-subalgebra of $\mathcal{B}(\mathcal{L}^2[0, 1])$.
\end{example}
{\onehalfspacing \section{COMMUTATORS CLOSE TO THE IDENTITY IN $\mathcal{B}(\mathcal{H})$}
Let $n\in \mathbb{N}$ and $M_n(\mathbb{K})$ be the ring of $n$ by $n$ matrices over $\mathbb{K}$. Using the properties of the trace map,
we easily see that there do not exist $D, X \in M_n(\mathbb{K})$ such that $DX-XD=1_{M_n(\mathbb{K})}$ (\cite{HALMOS}). This argument does not work for bounded linear operators
on an infinite dimensional Hilbert space $\mathcal{H}$, since the trace is not defined on the whole algebra $\mathcal{B}(\mathcal{H})$ of bounded linear operators (it is defined on a proper subalgebra of $\mathcal{B}(\mathcal{H})$ known as the trace class operators (\cite{SCHATTEN})). An operator of the form $DX-XD$ is called the \textbf{commutator} of $D$ and $X$ and is denoted by $[D,X]$. An operator $T\in \mathcal{B}(\mathcal{H})$ is said to be a commutator if $T=[D,X]$ for some $D, X \in \mathcal{B}(\mathcal{H})$.\\
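For the reader's convenience, the trace argument can be displayed in one line: since the trace is linear and satisfies $\operatorname{tr}(DX)=\operatorname{tr}(XD)$, the identity $DX-XD=1_{M_n(\mathbb{K})}$ would give
\begin{align*}
0=\operatorname{tr}(DX)-\operatorname{tr}(XD)=\operatorname{tr}(DX-XD)=\operatorname{tr}(1_{M_n(\mathbb{K})})=n,
\end{align*}
a contradiction.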
Using spectral properties of bounded linear operators, Wintner proved the following result in 1947.
\begin{theorem}(\cite{WINTNER})\label{WINTNERTHEOREM}
Let $\mathcal{H}$ be an infinite dimensional Hilbert space. Then there does not exist $D, X \in \mathcal{B}(\mathcal{H})$ such that
\begin{align}\label{COMMUTATORIDENTITY}
[D,X]=1_{\mathcal{B}(\mathcal{H})}.
\end{align}
\end{theorem}
Two years later, \cite{WIELANDT} gave a
simple proof of the failure of Equation (\ref{COMMUTATORIDENTITY}). We note that the boundedness of the operators is crucial in Theorem \ref{WINTNERTHEOREM}. The following example shows that Theorem \ref{WINTNERTHEOREM} fails for unbounded operators.
\begin{example}(\cite{HALMOS})
Let $\mathcal{H}\coloneqq \mathcal{L}^2(\mathbb{R})$ and define, on suitable dense subspaces of $\mathcal{H}$,
\begin{align*}
(Df)(x)\coloneqq f'(x), \quad (Xf)(x)\coloneqq xf(x).
\end{align*}
Then $[D,X]f=f$ for every $f$ in the common domain.
\end{example}
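Indeed, the verification is a direct application of the product rule:
\begin{align*}
([D,X]f)(x)=\frac{d}{dx}\big(xf(x)\big)-xf'(x)=f(x)+xf'(x)-xf'(x)=f(x).
\end{align*}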
Theorem \ref{WINTNERTHEOREM} leads to the question: which operators on an infinite dimensional Hilbert space can be written as commutators? The first partial answer was given by Halmos.
\begin{theorem}(cf. \cite{PUTNAM})
Let $\mathcal{H}$ be an infinite dimensional Hilbert space. If $C\in \mathcal{B}(\mathcal{H})$ is compact, then $1_{\mathcal{B}(\mathcal{H})}+C$ is not a commutator.
\end{theorem}
\cite{BROWNPEARCY} characterized the set of bounded operators which can be written as commutators.
\begin{theorem}(\cite{BROWNPEARCY})
Let $\mathcal{H}$ be an infinite dimensional separable Hilbert space. Then an operator in $ \mathcal{B}(\mathcal{H})$ is a commutator if and only if it is not of the form $\lambda 1_{\mathcal{B}(\mathcal{H})}+C$, where $\lambda$ is a nonzero scalar and $C$ is a compact operator.
\end{theorem}
Following the paper \cite{BROWNPEARCY}, a series of papers has been devoted to the study of commutators
on sequence spaces, $\mathcal{L}^p$-spaces, Banach spaces, C*-algebras, von Neumann algebras, Banach *-algebras, etc.\ (\cite{SCHNEEBERGER, LAUSTSEN, DYKEMAFIGIELGARYWODZICKI, MARCOUX2006, STASINSKI, DESOVJOHNSON2010, KADISONLIUTHOM, YOOD, DESOVJOHNSONSCHECHTMAN, DYKEMASKRIPKA, DOSEV2009, KAFTALNGZHANG, MARCOU2010}).
It was \cite{POPA} who started a quantitative study of commutators close to the identity operator. He gave the
following quantitative lower bound on the product of the norms of two operators whose commutator is close to the identity.
\begin{theorem}(\cite{POPA})\label{POPAFIRST}
Let $\mathcal{H}$ be an infinite dimensional Hilbert space. Let $D,X \in \mathcal{B}(\mathcal{H})$ be such that
\begin{align*}
\|[D,X]-1_{\mathcal{B}(\mathcal{H})}\|\leq \varepsilon
\end{align*}
for some $\varepsilon>0$. Then
\begin{align*}
\|D\|\|X\|\geq\frac{1}{2}\log\frac{1}{\varepsilon}.
\end{align*}
\end{theorem}
The question left open by Theorem \ref{POPAFIRST} is whether there exist $D,X \in \mathcal{B}(\mathcal{H})$ such that the commutator $[D,X]$ is close to the identity operator. This was again settled by Popa in the following result. Given a real number $r$ and a positive number $s$, by $r=O(s)$ we mean that there is a positive constant $\gamma$ such that $|r|\leq \gamma s$.
\begin{theorem}(\cite{POPA, TAO})\label{POPASECOND}
Let $\mathcal{H}$ be an infinite dimensional Hilbert space.
Then for each $0<\varepsilon\leq 1$, there exist $D,X \in \mathcal{B}(\mathcal{H})$ with
\begin{align*}
\|[D,X]-1_{\mathcal{B}(\mathcal{H})}\|\leq \varepsilon
\end{align*}
and
\begin{align*}
\|D\|\|X\|=O(\varepsilon^{-2}).
\end{align*}
\end{theorem}
Terence Tao improved Theorem \ref{POPASECOND} and obtained the following theorem.
\begin{theorem}(\cite{TAO})\label{TAOTHEOREM}
Let $\mathcal{H}$ be an infinite dimensional Hilbert space.
Then for each $0<\varepsilon\leq 1/2$, there exist $D,X \in \mathcal{B}(\mathcal{H})$ with
\begin{align*}
\|[D,X]-1_{\mathcal{B}(\mathcal{H})}\|\leq \varepsilon
\end{align*}
such that
\begin{align*}
\|D\|\|X\|=O\left(\log^5\frac{1}{\varepsilon}\right).
\end{align*}
\end{theorem}
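Combining Theorem \ref{POPAFIRST} with Theorem \ref{TAOTHEOREM}, the optimal value of $\|D\|\|X\|$ for commutators $\varepsilon$-close to the identity is pinned down up to the power of the logarithm:
\begin{align*}
\frac{1}{2}\log\frac{1}{\varepsilon}\leq \|D\|\|X\|=O\left(\log^5\frac{1}{\varepsilon}\right),
\end{align*}
so the growth is logarithmic in $\frac{1}{\varepsilon}$.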
In \cite{POPA} there is another result about commutators. Let $\mathcal{K}(\mathcal{H})$ be the ideal of compact operators in $\mathcal{B}(\mathcal{H})$ and define
\begin{align*}
\mathbb{C}+\mathcal{K}(\mathcal{H})\coloneqq \{\lambda 1_{\mathcal{B}(\mathcal{H})}+T: \lambda \in \mathbb{C}, T \in \mathcal{K}(\mathcal{H})\}.
\end{align*}
\begin{theorem}(\cite{POPA})\label{POPACOMPACT}
If $A \in \mathcal{B}(\mathcal{H})$ is such that
\begin{align*}
\|A\|=O(1), \quad \|A\|=O(\text{dist}(A, \mathbb{C}+\mathcal{K}(\mathcal{H}))^\frac{2}{3}),
\end{align*}
then there exist $D,X \in \mathcal{B}(\mathcal{H})$ with
\begin{align*}
\|D\|\|X\|=O(1)\quad \text{ such that } \quad A=[D,X].
\end{align*}
\end{theorem}
\section{COMMUTATORS CLOSE TO THE IDENTITY IN \\
UNITAL C*-ALGEBRAS}
We recall the fundamentals of matrices over unital C*-algebras as given in \cite{MURPHY}.
Let $\mathcal{A}$ be a unital C*-algebra. For $n\in \mathbb{N}$, $M_n(\mathcal{A})$ is defined as the set of all $n$ by $n$ matrices over $\mathcal{A}$. It is clearly an algebra with respect to the natural matrix operations. We define the involution of an element $A\coloneqq [a_{i,j}]_{1\leq i,j \leq n}\in M_n(\mathcal{A})$ as $A^*\coloneqq [a_{j,i}^*]_{1\leq i,j \leq n}$. Then $M_n(\mathcal{A})$ is a *-algebra. By the Gelfand-Naimark-Segal theorem (Theorem \ref{GELFANDNAIMARKSEGAL}) there exists a faithful representation $(\mathcal{H}, \pi) $, where $\mathcal{H}$ is a Hilbert space and $\pi:M_n(\mathcal{A})\to M_n(\mathcal{B}(\mathcal{H}))$ is an injective *-homomorphism. This gives a norm on $M_n(\mathcal{A})$ defined as
\begin{align*}
\|A\|\coloneqq \|\pi(A)\|, \quad \forall A \in M_n(\mathcal{A}).
\end{align*}
This norm makes $M_n(\mathcal{A})$ a C*-algebra.\\
In the sequel, $\mathcal{A}$ is a unital C*-algebra. We first derive a lemma, followed by a corollary, for unital C*-algebras. The proof of the lemma is a direct algebraic calculation.
\begin{lemma}(\textbf{Commutator calculation})\label{COMMUTATORLEMMA}
Let $u,v, b_1, \dots, b_n \in \mathcal{A} $ and $\delta>0$. Let
\begin{align*}
X\coloneqq \begin{pmatrix}
0 & 0 & 0& \cdots & 0 &\delta b_1\\
1_\mathcal{A} & 0 & 0& \cdots& 0 &\delta b_2\\
0 & 1_\mathcal{A} & 0& \cdots& 0 &\delta b_3\\
\vdots &\vdots & \vdots& \cdots& \vdots &\vdots\\
0&0&0&\cdots& 0& \delta b_{n-1}\\
0&0&0&\cdots& 1_\mathcal{A}& \delta b_n
\end{pmatrix} \in M_n(\mathcal{A})
\end{align*}
and
\begin{align*}
D\coloneqq \begin{pmatrix}
\frac{v}{\delta} &1_\mathcal{A} & 0& \cdots& 0 & \delta b_1u\\
\frac{u}{\delta} & \frac{v}{\delta} & 2\cdot 1_\mathcal{A}& \cdots& 0 & \delta b_2u\\
0 & \frac{u}{\delta} & \frac{v}{\delta}& \cdots& 0 & \delta b_3u\\
\vdots &\vdots & \vdots& \cdots& \vdots &\vdots\\
0&0&0&\cdots& \frac{v}{\delta}& (n-1)1_\mathcal{A}+\delta b_{n-1}u\\
0&0&0&\cdots& \frac{u}{\delta}& \frac{v}{\delta}+\delta b_nu
\end{pmatrix} \in M_n(\mathcal{A}).
\end{align*}
Then
\begin{align*}
[D,X]=1_{M_n(\mathcal{A})}+\begin{pmatrix}
0 & 0 & 0& \cdots& 0 & [v,b_1]+0+\delta b_2+\delta b_1[u,b_n]\\
0 & 0 & 0& \cdots& 0 &[v,b_2]+[u,b_1]+2\delta b_3+\delta b_2[u,b_n]\\
0 & 0 & 0& \cdots& 0 &[v,b_3]+[u,b_2]+3\delta b_4+\delta b_3[u,b_n]\\
\vdots &\vdots & \vdots& \cdots& \vdots &\vdots\\
0&0&0&\cdots& 0& [v,b_{n-1}]+[u,b_{n-2}]+(n-1)\delta b_{n}+\delta b_{n-1}[u,b_n]\\
0&0&0&\cdots& 0& [v,b_n]+[u,b_{n-1}]+0+\delta b_n[u,b_n]-n\cdot 1_\mathcal{A}
\end{pmatrix}.
\end{align*}
\end{lemma}
\begin{corollary}\label{COROLLARYLEMMA}
Let $u,v, b_1, \dots, b_n \in \mathcal{A} $. Assume that for some $\delta>0$, we have equations
\begin{align}\label{CORASSUMPTION1}
[v,b_i]+[u,b_{i-1}]+i\delta b_{i+1}+\delta b_i[u,b_n]=0, \quad \forall i =2, \dots, n-1
\end{align}
and
\begin{align}\label{CORASSUMPTION2}
[v,b_n]+[u,b_{n-1}]+\delta b_n[u,b_n]=n\cdot 1_{\mathcal{A}}.
\end{align}
Then for any $\mu>0$, there exist matrices $D_\mu, X_\mu \in M_n(\mathcal{A})$ such that
\begin{align*}
&\| D_\mu\| \leq \frac{\|u\|}{\mu^2\delta}+\frac{\|v\|}{\mu\delta}+(n-1)+\delta \sum_{i=1}^{n}\mu^{n-i-1}\|b_i\|\|u\|,\\
& \| X_\mu\| \leq 1+ \delta \sum_{i=1}^{n}\mu^{n-i+1}\|b_i\| \quad \text{ and }\\
&\|[D_\mu,X_\mu]-1_{M_n(\mathcal{A})}\|\leq\mu^{n-1}\|[v,b_1]+\delta b_2+\delta b_1 [u,b_n]\|.
\end{align*}
\end{corollary}
\begin{proof}
Let $D$ and $X$ be as in Lemma \ref{COMMUTATORLEMMA}. Define
\begin{align*}
S_\mu &\coloneqq \begin{pmatrix}
\mu^{n-1}& 0 & 0& \cdots& 0 &0\\
0 & \mu^{n-2} & 0& \cdots& 0 &0\\
0 &0 & \mu^{n-3} &\cdots& 0&0\\
\vdots &\vdots & \vdots& \cdots& \vdots &\vdots\\
0&0&0&\cdots& \mu& 0\\
0&0&0&\cdots& 0& 1
\end{pmatrix} \in M_n(\mathbb{K}), \\
D_\mu &\coloneqq \frac{1}{\mu}S_\mu DS_\mu^{-1}, \quad X_\mu \coloneqq \mu S_\mu XS_\mu^{-1}.
\end{align*}
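Observe that the scalar factors $\frac{1}{\mu}$ and $\mu$ cancel in the commutator, so conjugation by $S_\mu$ preserves it:
\begin{align*}
[D_\mu,X_\mu]=\frac{1}{\mu}S_\mu DS_\mu^{-1}\, \mu S_\mu XS_\mu^{-1}-\mu S_\mu XS_\mu^{-1}\, \frac{1}{\mu}S_\mu DS_\mu^{-1}=S_\mu [D,X]S_\mu^{-1}.
\end{align*}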
Then
\begin{align*}
\|D_\mu \|&=\left\| \begin{pmatrix}
\frac{v}{\mu\delta} &1_\mathcal{A} & 0& \cdots& 0 & \mu^{n-2}\delta b_1u\\
\frac{u}{\mu^2\delta} & \frac{v}{\mu\delta} & 2.1_\mathcal{A}& \cdots& 0 & \mu^{n-3}\delta b_2u\\
0 & \frac{u}{\mu^2\delta} & \frac{v}{\mu\delta}& \cdots& 0 & \mu^{n-4}\delta b_3u\\
\vdots &\vdots & \vdots& \cdots& \vdots &\vdots\\
0&0&0&\cdots &\frac{v}{\mu\delta}& (n-1)1_\mathcal{A}+\delta b_{n-1}u\\
0&0&0&\cdots& \frac{u}{\mu^2\delta}& \frac{v}{\mu\delta}+\mu^{-1}\delta b_nu
\end{pmatrix}\right\|\\
&\leq \left\|\frac{u}{\mu^2\delta}\right\|+\left\|\frac{v}{\mu\delta}\right\|+\|(n-1)1_\mathcal{A}\|+\left\| \begin{pmatrix}
0 &0 & 0& \cdots& 0 & \mu^{n-2}\delta b_1u\\
0 & 0& 0& \cdots& 0 & \mu^{n-3}\delta b_2u\\
0 & 0 & 0& \cdots& 0 & \mu^{n-4}\delta b_3u\\
\vdots &\vdots & \vdots& \cdots& \vdots &\vdots\\
0&0&0&\cdots& 0& \delta b_{n-1}u\\
0&0&0&\cdots& 0& \mu^{-1}\delta b_nu
\end{pmatrix}\right\|\\
&\leq \frac{\|u\|}{\mu^2\delta}+\frac{\|v\|}{\mu\delta}+(n-1)+\delta \sum_{i=1}^{n}\mu^{n-i-1}\|b_i\|\|u\|
\end{align*}
and
\begin{align*}
\|X_\mu\| &=\left\|\begin{pmatrix}
0 &0 & 0& \cdots& 0 & \mu^{n}\delta b_1\\
1_\mathcal{A} & 0& 0& \cdots& 0 & \mu^{n-1}\delta b_2\\
0 & 1_\mathcal{A} & 0& \cdots& 0 & \mu^{n-2}\delta b_3\\
\vdots &\vdots & \vdots& \cdots& \vdots &\vdots\\
0&0&0&\cdots& 0& \mu^2 \delta b_{n-1}\\
0&0&0&\cdots& 1_\mathcal{A}& \mu\delta b_n
\end{pmatrix}\right\|\\
&\leq \|1_\mathcal{A}\|+\left\|\begin{pmatrix}
0 &0 & 0& \cdots& 0 & \mu^{n}\delta b_1\\
0 & 0& 0& \cdots& 0 & \mu^{n-1}\delta b_2\\
0 & 0 & 0& \cdots& 0 & \mu^{n-2}\delta b_3\\
\vdots &\vdots& \vdots& \cdots& \vdots &\vdots\\
0&0&0&\cdots& 0& \mu^2 \delta b_{n-1}\\
0&0&0&\cdots& 0& \mu\delta b_n
\end{pmatrix}\right\|\\
&\leq 1+ \delta \sum_{i=1}^{n}\mu^{n-i+1}\|b_i\|.
\end{align*}
Now using (\ref{CORASSUMPTION1}) and (\ref{CORASSUMPTION2}) we get
\begin{align*}
\|[D_\mu,X_\mu]-1_{M_n(\mathcal{A})}\|&=\left\|\begin{pmatrix}
0 &0 & 0& \cdots& 0 & \mu^{n-1}([v,b_1]+\delta b_2+\delta b_1 [u,b_n])\\
0 & 0& 0& \cdots& 0 & 0\\
0 & 0 & 0& \cdots& 0 & 0\\
\vdots &\vdots & \vdots& \cdots& \vdots &\vdots\\
0&0&0&\cdots& 0& 0\\
0&0&0&\cdots& 0& 0
\end{pmatrix}\right\|\\
&\leq\mu^{n-1}\|[v,b_1]+\delta b_2+\delta b_1 [u,b_n]\|.
\end{align*}
\end{proof}
Let $\mathcal{A}$ be a unital C*-algebra. Assume that there are isometries $u, v \in \mathcal{A}$ such that
\begin{align}\label{IMPORTANTEQUATION}
u^*u=v^*v=uu^*+vv^*=1_\mathcal{A} \quad \text{and} \quad u^*v=v^*u=0.
\end{align}
Examples of such unital C*-algebras are $\mathcal{B}(\mathcal{H})$ (where $\mathcal{H}$ is an infinite dimensional Hilbert space) as well as any unital C*-algebra containing the Cuntz algebra $\mathcal{O}_2$ (\cite{CUNTZ}). Note that whenever a unital C*-algebra admits a trace map, there are no isometries satisfying Equation (\ref{IMPORTANTEQUATION}). In particular, no finite dimensional unital C*-algebra has such elements. It is also clear that no commutative unital C*-algebra can have isometries satisfying Equation (\ref{IMPORTANTEQUATION}).
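For instance, in $\mathcal{B}(\ell^2)$ one may take the isometries $u$ and $v$ defined on the standard orthonormal basis $(e_n)_{n\geq 0}$ by $ue_n\coloneqq e_{2n}$ and $ve_n\coloneqq e_{2n+1}$; their ranges are the even- and odd-indexed coordinates, so Equation (\ref{IMPORTANTEQUATION}) holds. On the other hand, if $\mathcal{A}$ admitted a tracial state $\tau$ together with such isometries, then Equation (\ref{IMPORTANTEQUATION}) would give the contradiction
\begin{align*}
1=\tau(1_\mathcal{A})=\tau(uu^*)+\tau(vv^*)=\tau(u^*u)+\tau(v^*v)=2\tau(1_\mathcal{A})=2.
\end{align*}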
It is shown in \cite{TAO} that whenever $\mathcal{H}$ is an infinite dimensional Hilbert space, the Banach algebras $\mathcal{B}(\mathcal{H})$ and $M_2(\mathcal{B}(\mathcal{H})) $ are isometrically isomorphic. We now extend these results to C*-algebras possessing isometries satisfying Equation (\ref{IMPORTANTEQUATION}). To do so we first need a result from the theory of C*-algebras.
\begin{theorem}(cf. \cite{TAKESAKI, PEDERSEN})\label{INJECTIOVEHOMOISISO}
\begin{enumerate}[label=(\roman*)]
\item Every *-homomorphism between C*-algebras is norm decreasing.
\item If a *-homomorphism between C*-algebras is injective, then it is isometric.
\end{enumerate}
\end{theorem}
\begin{theorem}\label{ALGEBRAMATRIX}
Let $\mathcal{A}$ be a unital C*-algebra. If there are isometries $u, v \in \mathcal{A}$ such that Equation (\ref{IMPORTANTEQUATION}) holds, then the map
\begin{align}\label{FIRSTMAP}
\phi:\mathcal{A} \ni x \mapsto
\begin{pmatrix}
u^*xu & u^*xv \\
v^*xu & v^*xv
\end{pmatrix} \in M_2(\mathcal{A})
\end{align}
is a C*-algebra isomorphism with the inverse map
\begin{align}\label{SECONDMAP}
\psi:M_2(\mathcal{A})\ni \begin{pmatrix}
a & b \\
c & d
\end{pmatrix} \mapsto uau^*+ubv^*+vcu^*+vdv^* \in \mathcal{A}.
\end{align}
\end{theorem}
\begin{proof}
Using Equation (\ref{IMPORTANTEQUATION}), a direct computation gives
\begin{align*}
\phi\psi\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}&=\phi(uau^*+ubv^*+vcu^*+vdv^*)\\
&= \begin{pmatrix}
u^*(uau^*+ubv^*+vcu^*+vdv^*)u & u^*(uau^*+ubv^*+vcu^*+vdv^*)v \\
v^*(uau^*+ubv^*+vcu^*+vdv^*)u & v^*(uau^*+ubv^*+vcu^*+vdv^*)v
\end{pmatrix}\\
&=\begin{pmatrix}
1_\mathcal{A} a1_\mathcal{A}+1_\mathcal{A}b0+0c1_\mathcal{A}+0d0 & 1_\mathcal{A}a0+1_\mathcal{A}b1_\mathcal{A}+0c0+0d1_\mathcal{A} \\
0a1_\mathcal{A}+0b0+1_\mathcal{A}c1_\mathcal{A}+1_\mathcal{A}d0& 0a0+0b1_\mathcal{A}+1_\mathcal{A}c0+1_\mathcal{A}d1_\mathcal{A}
\end{pmatrix}\\
&=\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}, \quad \forall \begin{pmatrix}
a & b \\
c & d
\end{pmatrix} \in M_2(\mathcal{A})
\end{align*}
and
\begin{align*}
\psi\phi x&=\psi \begin{pmatrix}
u^*xu & u^*xv \\
v^*xu & v^*xv
\end{pmatrix}\\
&=u(u^*xu)u^*+u(u^*xv)v^*+v(v^*xu)u^*+v(v^*xv)v^*\\
&=uu^*x(uu^*+vv^*)+vv^*x(uu^*+vv^*)\\
&=uu^*x1_\mathcal{A}+vv^*x1_\mathcal{A}=(uu^*+vv^*)x=x, \quad \forall x \in \mathcal{A}.
\end{align*}
Further,
\begin{align*}
(\phi(x))^*&= \begin{pmatrix}
u^*xu & u^*xv \\
v^*xu & v^*xv
\end{pmatrix}^*=\begin{pmatrix}
u^*x^*u & (v^*xu)^* \\
(u^*xv)^* & v^*x^*v
\end{pmatrix}\\
&=\begin{pmatrix}
u^*x^*u & u^*x^*v \\
v^*x^*u & v^*x^*v
\end{pmatrix}=\phi(x^*), \quad \forall x \in \mathcal{A}.
\end{align*}
Hence $\phi$ is a *-isomorphism. Using Theorem \ref{INJECTIOVEHOMOISISO}, to show $\phi$ is a C*-algebra isomorphism (i.e., isometric isomorphism), it suffices to show that $\phi$ is injective. Let $ x \in \mathcal{A}$ be such that $\phi x=0$. Then
\begin{align*}
u^*xu=u^*xv=0, \quad v^*xv=v^*xu=0.
\end{align*}
Using the first equation we get $uu^*xuu^*=uu^*xvv^*=0$ which implies $uu^*x=uu^*x(uu^*+vv^*)=0$. Similarly using the second equation we get $vv^*x=0$. Therefore $x=(uu^*+vv^*)x=0$. Hence $\phi$ is injective which completes the proof.
\end{proof}
Along the lines of Theorem \ref{ALGEBRAMATRIX} we can easily derive the following result.
\begin{theorem}\label{ALLC}
Let $\mathcal{A}$ be a unital C*-algebra and $n \in \mathbb{N}$. If there are isometries $u, v \in \mathcal{A}$ such that Equation (\ref{IMPORTANTEQUATION}) holds, then the map
\begin{align*}
\phi:M_n(\mathcal{A}) \ni X \mapsto
\begin{pmatrix}
u^*Xu & u^*Xv \\
v^*Xu & v^*Xv
\end{pmatrix} \in M_{2n}(\mathcal{A})
\end{align*}
is a C*-algebra isomorphism with the inverse map
\begin{align*}
\psi:M_{2n}(\mathcal{A})\ni \begin{pmatrix}
A & B \\
C & D
\end{pmatrix} \mapsto uAu^*+uBv^*+vCu^*+vDv^* \in M_n(\mathcal{A}),
\end{align*}
where, if $X\coloneqq [x_{i,j}]_{i,j}$ is a matrix and $a,b\in \mathcal{A}$, by $aXb$ we mean the matrix $[ax_{i,j}b]_{i,j}$. In particular, the C*-algebras $\mathcal{A}, M_{2}(\mathcal{A}), M_{4}(\mathcal{A}), \dots, M_{2^n}(\mathcal{A}), \dots $ are all isometrically *-isomorphic.
\end{theorem}
In the rest of this chapter, we assume that the unital C*-algebra $\mathcal{A}$ has isometries $u,v$ satisfying Equation (\ref{IMPORTANTEQUATION}). In the next result we use the following notation: given a vector $x \in \mathcal{A}^n$, $x_i$ denotes its $i^{\text{th}}$ coordinate.
\begin{proposition}\label{RIGHTPROPOSI}
Let $n\geq2$ and $T:\mathcal{A}^n\to \mathcal{A}^{n-1}$ be the bounded linear operator defined by
\begin{align*}
T(b_i)_{i=1}^n\coloneqq ([v,b_i]+[u,b_{i-1}])_{i=2}^n, \quad \forall (b_i)_{i=1}^n \in \mathcal{A}^n.
\end{align*}
Then there exists a bounded linear right-inverse $R:\mathcal{A}^{n-1}\to \mathcal{A}^{n}$ for $T$ such that
\begin{align*}
\|Rb\|&= \sup_{1\leq i \leq n}\|(Rb)_i\|\leq 8 \sqrt{2}n^2\sup_{2\leq i \leq n}\|b_i\|=8 \sqrt{2}n^2\|b\|, \quad \forall b \in \mathcal{A}^{n-1}.
\end{align*}
\end{proposition}
\begin{proof}
Define
\begin{align*}
L:\mathcal{A}^{n-1} \ni (x_i)_{i=2}^n\mapsto \left(-\frac{1}{2}x_iv^*-\frac{1}{2}x_{i+1}u^*\right)_{i=1}^n\in \mathcal{A}^{n}, \quad \text{ where } x_1\coloneqq 0, x_{n+1}\coloneqq 0
\end{align*}
and
\begin{align*}
E:\mathcal{A}^{n-1}\ni (x_i)_{i=2}^n\mapsto \left(\frac{1}{2}(vx_iv^*+vx_{i+1}u^*+ux_{i-1}v^*+ux_iu^*)\right)_{i=2}^n \in \mathcal{A}^{n-1}.
\end{align*}
Then
\begin{align*}
&TL(x_i)_{i=2}^n=T\left(-\frac{1}{2}x_iv^*-\frac{1}{2}x_{i+1}u^*\right)_{i=1}^n=-\frac{1}{2}(T(x_iv^*)_{i=2}^n+T(x_{i+1}u^*)_{i=2}^n)\\
&=-\frac{1}{2}(([v, x_iv^*]+[u, x_{i-1}v^*])_{i=2}^n+([v,x_{i+1}u^*]+[u,x_{i}u^*])_{i=2}^n)\\
&=-\frac{1}{2}(vx_iv^*-x_iv^*v+ux_{i-1}v^*-x_{i-1}v^*u+vx_{i+1}u^*-x_{i+1}u^*v+ux_{i}u^*-x_{i}u^*u)_{i=2}^n\\
&=-\frac{1}{2}(vx_iv^*-x_i+ux_{i-1}v^*+vx_{i+1}u^*+ux_{i}u^*-x_{i})_{i=2}^n\\
&=(x_i)_{i=2}^n-\frac{1}{2}(vx_iv^*+vx_{i+1}u^*+ux_{i-1}v^*+ux_iu^*)_{i=2}^n \\
&=(1-E)(x_i)_{i=2}^n, \quad \forall (x_i)_{i=2}^n \in \mathcal{A}^{n-1}, \quad \text{ where } 1(x_i)_{i=2}^n\coloneqq (x_i)_{i=2}^n.
\end{align*}
We next show that the operator $1-E$ is boundedly invertible with the help of a Neumann series. The first step is to replace the norm on $\mathcal{A}^{n-1}$ by an equivalent norm; invertibility is not affected by passing to an equivalent norm. Define a new norm on $\mathcal{A}^{n-1}$ by
\begin{align*}
\|(x_i)_{i=2}^n\|'\coloneqq\sup _{2\leq i \leq n}\left(2-\frac{i^2}{n^2}\right)^\frac{-1}{2}\|x_i\|.
\end{align*}
Let $x=(x_i)_{i=2}^n\in \mathcal{A}^{n-1}$ be such that $ \|(x_i)_{i=2}^n\|'\leq1$. Then
\begin{align*}
\left(2-\frac{i^2}{n^2}\right)^\frac{-1}{2}\|x_i\|\leq \sup _{2\leq i \leq n}\left(2-\frac{i^2}{n^2}\right)^\frac{-1}{2}\|x_i\|\leq 1, \quad \forall 2\leq i \leq n.
\end{align*}
Hence $\|x_i\|\leq \left(2-\frac{i^2}{n^2}\right)^\frac{1}{2}$ for all $2\leq i \leq n$. Using Theorem \ref{ALGEBRAMATRIX} we now get
\begin{align*}
\|(Ex)_i\|&=\frac{1}{2}\|vx_iv^*+vx_{i+1}u^*+ux_{i-1}v^*+ux_iu^*\|\\
&=\frac{1}{2} \left\|\begin{pmatrix}
x_{i} & x_{i-1} \\
x_{i+1} & x_{i}
\end{pmatrix}\right\|\leq \frac{1}{2}\left\|\begin{pmatrix}
\|x_{i}\| & \|x_{i-1}\| \\
\|x_{i+1}\| & \|x_{i}\|
\end{pmatrix}\right\|\\
&\leq \frac{1}{2} \left(\|x_{i}\|^2+\|x_{i+1}\|^2+\|x_{i-1}\|^2+\|x_{i}\|^2\right)^\frac{1}{2}\\
&\leq \frac{1}{2} \left(\left(2-\frac{i^2}{n^2}\right)+\left(2-\frac{(i+1)^2}{n^2}\right)+\left(2-\frac{(i-1)^2}{n^2}\right)+\left(2-\frac{i^2}{n^2}\right)\right)^\frac{1}{2}\\
&= \left(2-\frac{i^2}{n^2}-\frac{1}{2n^2}\right)^\frac{1}{2}\leq \left(1-\frac{1}{8n^2}\right)\left(2-\frac{i^2}{n^2}\right)^\frac{1}{2}, \quad \forall 2 \leq i \leq n,
\end{align*}
where the last inequality holds because $\frac{1}{8n^2}\left(2-\frac{1}{8n^2}\right)\left(2-\frac{i^2}{n^2}\right)\leq \frac{4}{8n^2}=\frac{1}{2n^2}$.
Hence
\begin{align*}
\|Ex\|'=\sup _{2\leq i \leq n}\left(2-\frac{i^2}{n^2}\right)^\frac{-1}{2}\|(Ex)_i\|\leq \left(1-\frac{1}{8n^2}\right)\|x\|', \quad \forall x \in \mathcal{A}^{n-1}.
\end{align*}
Since $1-\frac{1}{8n^2}<1$, $1-E$ is invertible and $\|(1-E)^{-1}x\|'\leq 8n^2\|x\|'$. Now going back to the original norm, we get
\begin{align*}
\frac{1}{\sqrt{2}}\|((1-E)^{-1}x)_i\|&\leq \sup _{2\leq i \leq n}\left(2-\frac{i^2}{n^2}\right)^\frac{-1}{2}\|((1-E)^{-1}x)_i\|\\
&=\|(1-E)^{-1}x\|'\leq 8n^2\|x\|'\\
&= 8n^2\sup _{2\leq i \leq n}\left(2-\frac{i^2}{n^2}\right)^\frac{-1}{2}\|x_i\|\\
&\leq 8n^2\sup _{2\leq i \leq n}\|x_i\|=8n^2\|x\|, \quad \forall x \in \mathcal{A}^{n-1}.
\end{align*}
Define $R\coloneqq L(1-E)^{-1}$. Then $TR=TL(1-E)^{-1}=(1-E)(1-E)^{-1}=1$ and
\begin{align*}
\|Rb\|&=\sup _{1\leq i \leq n}\|(Rb)_i\|=\|L(1-E)^{-1}b\|\leq \|L\|\|(1-E)^{-1}b\|\leq \|(1-E)^{-1}b\|\\
&=\sup _{2\leq i \leq n}\|((1-E)^{-1}b)_i\|\leq 8\sqrt{2} n^2\|b\|= 8\sqrt{2} n^2\sup _{2\leq i \leq n}\|b_i\|,\quad \forall b \in \mathcal{A}^{n-1}.
\end{align*}
\end{proof}
As in \cite{TAO}, we pass from the systems of equations (\ref{CORASSUMPTION1}) and (\ref{CORASSUMPTION2}) to a single operator equation. Let $n\geq 2$. Define $a\coloneqq (0, \dots, 0, n\cdot 1_\mathcal{A})\in \mathcal{A}^{n-1}$,
\begin{align*}
F:\mathcal{A}^n \ni (b_i)_{i=1}^n \mapsto (-2b_3, \dots, -(n-1)b_n,0)\in \mathcal{A}^{n-1}
\end{align*}
and
\begin{align*}
G:\mathcal{A}^n\times \mathcal{A}^n \ni ((b_i)_{i=1}^n, (c_i)_{i=1}^n)\mapsto (-b_2[u,c_n], \dots, -b_n[u,c_n])\in \mathcal{A}^{n-1}.
\end{align*}
We then have $\|F\|\leq n-1$ and $\|G\|\leq 2$.
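These bounds follow from one-line estimates: since $u$ is an isometry, $\|u\|=1$, and hence
\begin{align*}
\|F(b)\|=\sup_{2\leq i \leq n-1}i\|b_{i+1}\|\leq (n-1)\|b\| \quad \text{and} \quad \|G(b,c)\|\leq \sup_{2\leq i \leq n}\|b_i\|\|[u,c_n]\|\leq 2\|b\|\|c\|.
\end{align*}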
\begin{proposition}
Systems (\ref{CORASSUMPTION1}) and (\ref{CORASSUMPTION2}) have a solution $b$ if and only if
\begin{align}\label{SOLUTIONEQUATION}
Tb=a+\delta F(b)+\delta G(b,b).
\end{align}
\end{proposition}
\begin{proof}
Systems (\ref{CORASSUMPTION1}) and (\ref{CORASSUMPTION2}) have a solution $b$ if and only if
\begin{align*}
[v,b_i]+[u,b_{i-1}]=-i\delta b_{i+1}-\delta b_i[u,b_n], \quad \forall i =2, \dots, n-1
\end{align*}
and
\begin{align*}
[v,b_n]+[u,b_{n-1}]=-\delta b_n[u,b_n]+n\cdot 1_{\mathcal{A}}
\end{align*}
if and only if
\begin{align*}
&([v,b_i]+[u,b_{i-1}])_{i=2}^n=\\
&\quad (0, \dots, 0, n\cdot 1_\mathcal{A})+\delta (-2b_3, \dots, -(n-1)b_n,0)+\delta (-b_2[u,b_n], \dots, -b_n[u,b_n])
\end{align*}
if and only if
\begin{align*}
Tb=a+\delta F(b)+\delta G(b,b).
\end{align*}
\end{proof}
The above proposition reduces the work of solving systems (\ref{CORASSUMPTION1}) and (\ref{CORASSUMPTION2}) to a single operator equation. To solve (\ref{SOLUTIONEQUATION}) we need an abstract lemma from \cite{TAO}.
\begin{lemma}(\cite{TAO})\label{TAOTHEOREMABST}
Let $\mathcal{X}$, $\mathcal{Y}$ be Banach spaces, $T,F:\mathcal{X}\to \mathcal{Y}$ be bounded linear operators, and let
$G:\mathcal{X}\times\mathcal{X} \to \mathcal{Y}$ be a bounded bilinear operator with bound $r>0$ and let $ a \in\mathcal{Y}$. Suppose that
$T$ has a bounded linear right inverse $R:\mathcal{Y}\to \mathcal{X}$. If $\delta>0$ is such that
\begin{align}\label{LEMMACONDITION}
\delta(2\|F\|\|R\|+4r\|R\|^2\|a\|)<1,
\end{align}
then there exists $ b \in\mathcal{X}$ with $\|b\|\leq 2 \|R\|\|a\|$ that solves the equation
\begin{align*}
Tb=a+\delta F(b)+\delta G(b,b).
\end{align*}
\end{lemma}
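Roughly, the proof of Lemma \ref{TAOTHEOREMABST} proceeds by the Picard-type iteration
\begin{align*}
b_0\coloneqq Ra, \quad b_{k+1}\coloneqq R\big(a+\delta F(b_k)+\delta G(b_k,b_k)\big), \quad k \in \mathbb{Z}_+,
\end{align*}
which condition (\ref{LEMMACONDITION}) turns into a contraction on the ball of radius $2\|R\|\|a\|$ in $\mathcal{X}$; since $TR=1$, the limit $b$ of the iterates solves the equation.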
\begin{theorem}
For each $n\geq2$, there exists a solution $b$ to Equation (\ref{SOLUTIONEQUATION}) such that $\|b\|\leq 16 \sqrt{2}n^3$.
\end{theorem}
\begin{proof}
We apply Lemma \ref{TAOTHEOREMABST} for
\begin{align*}
\delta \coloneqq \frac{1}{2000n^5}.
\end{align*}
Then using Proposition \ref{RIGHTPROPOSI}, we get
\begin{align*}
\delta(2\|F\|\|R\|+4r\|R\|^2\|a\|)&\leq \frac{1}{2000n^5}(2(n-1)\cdot 8 \sqrt{2}n^2+4\cdot 2\cdot 128\cdot n^4\cdot n)\\
&\leq \frac{1}{2000n^5}(16\sqrt{2}n^3+1024n^5)<1.
\end{align*}
Lemma \ref{TAOTHEOREMABST} now says that there exists a $b$ with $\|b\|\leq 2\|R\|\|a\|\leq 16\sqrt{2}n^3$ which satisfies (\ref{SOLUTIONEQUATION}).
\end{proof}
\begin{theorem}\label{BBERF}
For each $n\geq2$, let $b$ be an element satisfying Equation (\ref{SOLUTIONEQUATION}) with $\|b\|\leq 16 \sqrt{2}n^3$. Then for $\mu=\frac{1}{2}$ there exist $D_\mu, X_\mu \in M_n(\mathcal{A})$ such that
\begin{align*}
\| D_\mu\|=O(n^5), \quad \| X_\mu\|=O(1), \quad \|[D_\mu,X_\mu]-1_{M_n(\mathcal{A})}\|=O(n^32^{-n}).
\end{align*}
\end{theorem}
\begin{proof}
Let $D_\mu, X_\mu \in M_n(\mathcal{A})$ be as in Corollary \ref{COROLLARYLEMMA}. We then have
\begin{align*}
&\| D_\mu\| \leq 4\cdot 2000n^5\|u\|+ 2\cdot 2000n^5\|v\|+(n-1)+\frac{1}{2000n^5} \sum_{i=1}^{n}\frac{1}{2^{n-i-1}}16 \sqrt{2}n^3\|u\|\\
&\quad \quad =O(n^5),\\
& \| X_\mu\| \leq 1+ \frac{1}{2000n^5} \sum_{i=1}^{n}\frac{1}{2^{n-i-1}}16 \sqrt{2}n^3 =O(1),\\
&\|[D_\mu,X_\mu]-1_{M_n(\mathcal{A})}\|\leq 2\mu^{n-1}(\|v\|\|b_1\|+\delta \|b_2\|+\delta \|b_1 \|\|u\|\|b_n\|)\\
&\quad \leq 2\frac{1}{2^{n-1}}\left(\|v\|16 \sqrt{2}n^3+\frac{1}{2000n^5} 16 \sqrt{2}n^3+\frac{1}{2000n^5} 16 \sqrt{2}n^3 \|u\|16 \sqrt{2}n^3\right)\\
&\quad = 2\frac{n^3}{2^{n-1}}\left(\|v\|16 \sqrt{2}+\frac{1}{2000n^5} 16 \sqrt{2}+\frac{1}{2000n^5} 16 \sqrt{2}\, n^3\|u\|16 \sqrt{2}\right)=O(n^32^{-n}).
\end{align*}
\end{proof}
\begin{theorem}\label{BEFORE}
Let $0<\varepsilon\leq 1/2$. Then there exist an even integer $n$ and $D,X \in M_n(\mathcal{A})$ with
\begin{align*}
\|[D,X]-1_{M_n(\mathcal{A})}\|\leq \varepsilon
\end{align*}
such that
\begin{align*}
\|D\|\|X\|=O\left(\log^5\frac{1}{\varepsilon}\right).
\end{align*}
\end{theorem}
\begin{proof}
Let $D_\mu, X_\mu \in M_n(\mathcal{A})$ be as in Corollary \ref{COROLLARYLEMMA}. Theorem \ref{BBERF} says that there are $\alpha, \beta, \gamma>0$ such that
\begin{align*}
\| D_\mu\|\leq \alpha n^5, \quad \| X_\mu\|\leq \beta, \quad \|[D_\mu,X_\mu]-1_{M_n(\mathcal{A})}\|\leq \gamma n^32^{-n}.
\end{align*}
Since $2^n>n^4$ for all but finitely many $n$, we have $ \gamma n^32^{-n}<\varepsilon$ for all sufficiently large $n$.
We now choose a real $c>0$ such that $n=c \log\frac{1}{\varepsilon}$ is even and $ \gamma n^32^{-n}<\varepsilon$. We then have $\| D_\mu\|\|X_\mu\|=O(\log^5(\frac{1}{\varepsilon}))$ and $ \|[D_\mu,X_\mu]-1_{M_n(\mathcal{A})}\|\leq \varepsilon$.
\end{proof}
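The choice of $n$ can be made concrete (Python sketch; the constant $\gamma$ below is a hypothetical placeholder standing in for the constant from Theorem \ref{BBERF}): pick the smallest even $n$ with $\gamma n^3 2^{-n} < \varepsilon$, and observe that this $n$ grows like $\log\frac{1}{\varepsilon}$.

```python
import math

def smallest_even_n(eps, gamma=100.0):       # gamma is a hypothetical constant
    """Smallest even n with gamma * n^3 * 2^{-n} < eps."""
    n = 2
    while gamma * n**3 * 2.0**(-n) >= eps:
        n += 2
    return n

for eps in (0.5, 1e-3, 1e-9):
    n = smallest_even_n(eps)
    assert n % 2 == 0
    assert 100.0 * n**3 * 2.0**(-n) < eps
    # n = O(log(1/eps)) -- a crude explicit bound suffices here
    assert n <= 40 * math.log(1 / eps) + 40
```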
Theorem \ref{BEFORE} and Theorem \ref{ALLC} easily give the following.
\begin{theorem}\label{LASTTHEOREM}
Let $\mathcal{A}$ be a unital C*-algebra. Suppose there are isometries $u, v \in \mathcal{A}$ such that Equation (\ref{IMPORTANTEQUATION}) holds. Then
for each $0<\varepsilon\leq 1/2$, there exist $d,x \in \mathcal{A}$ with
\begin{align*}
\|[d,x]-1_{\mathcal{A}}\|\leq \varepsilon
\end{align*}
such that
\begin{align*}
\|d\|\|x\|=O\left(\log^5\frac{1}{\varepsilon}\right).
\end{align*}
\end{theorem}
\begin{remark}
Let $\mathcal{A}$ be a finite dimensional unital C*-algebra. From the structure theory (\cite{DAVIDSON}) we have
\begin{align*}
\mathcal{A}\cong M_{n_1}(\mathbb{C})\oplus \cdots \oplus M_{n_r}(\mathbb{C}),
\end{align*}
for natural numbers $n_1, \dots, n_r$ that are unique up to permutation. In particular, a normalized trace map (a trace map $\text{Tr} $ such that $\text{Tr}(1_\mathcal{A})=1$) exists on $\mathcal{A}$. Using this we make the following two observations.
\begin{enumerate}[label=(\roman*)]
\item $\mathcal{A}$ cannot have isometries satisfying Equation (\ref{IMPORTANTEQUATION}). Suppose that there were such isometries. Then
\begin{align*}
1=\text{Tr}(uu^*+vv^*)=\text{Tr}(uu^*)+\text{Tr}(vv^*)=\text{Tr}(u^*u)+\text{Tr}(v^*v)=2
\end{align*}
which is impossible.
\item In \cite{TAO}, Tao observed that if $\mathcal{H}$ is a finite dimensional Hilbert space, then there are no $D,X \in \mathcal{B}(\mathcal{H})$ satisfying $ \|[D,X]-1_{\mathcal{B}(\mathcal{H})}\|<1$. We extend this to any finite dimensional unital C*-algebra $\mathcal{A}$: there do not exist $d,x \in \mathcal{A}$
satisfying $\|[d,x]-1_{\mathcal{A}}\|<1$. In other words, Theorem \ref{LASTTHEOREM} fails for every finite dimensional unital C*-algebra. Let $d,x \in \mathcal{A}$ be arbitrary. By the structure theory, we may identify $d$ with a matrix $D$ and $x$ with a matrix $X$ in $M_n(\mathbb{C})$ for some $n$. Using the commutativity of the trace we then have $\text{Tr}([D,X])=0$. Let $\lambda_1, \dots, \lambda_n$ be the eigenvalues of $[D,X]$. Then $\sum_{j=1}^{n}\lambda_j=\text{Tr}([D,X])=0$. This gives
\begin{align*}
n=\left|\sum_{j=1}^{n}(\lambda_j-1)\right|\leq \sum_{j=1}^{n}|\lambda_j-1|.
\end{align*}
The previous inequality shows that there is at least one $j$ with $|\lambda_j-1|\geq 1$. The eigenvalues of
$[D,X]-1_{M_n(\mathbb{C})}$ are $\lambda_1-1, \dots, \lambda_n-1$. Since the operator norm dominates the spectral radius, we finally get
\begin{align*}
\|[d,x]-1_{\mathcal{A}}\|=\|[D,X]-1_{M_n(\mathbb{C})}\|\geq \sup_{1\leq j \leq n}|\lambda_j-1|\geq 1.
\end{align*}
\end{enumerate}
\end{remark}
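The trace argument in observation (ii) is easy to confirm numerically (a toy check with numpy; the matrices are random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
D = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

C = D @ X - X @ D                      # the commutator [D, X]
assert abs(np.trace(C)) < 1e-10        # commutators are traceless

# The eigenvalues of C sum to 0, so some eigenvalue of C - I has
# modulus >= 1, and the operator norm dominates the spectral radius.
gap = np.linalg.norm(C - np.eye(n), 2)
assert gap >= 1 - 1e-10
```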
\textbf{Conclusion and future work:}
In this appendix we showed that Tao's result remains valid in more general spaces. A future objective is to improve the bounds in Theorem \ref{LASTTHEOREM} and Theorem \ref{POPACOMPACT}.
\leavevmode\newpage
\bibliographystyle{apalike}
\cleardoublepage
\phantomsection
\addcontentsline{toc}{chapter}{BIBLIOGRAPHY}
| {
"timestamp": "2022-03-01T02:44:57",
"yymm": "2202",
"arxiv_id": "2202.13697",
"language": "en",
"url": "https://arxiv.org/abs/2202.13697",
"abstract": "Notion of frames and Bessel sequences for metric spaces have been introduced. This notion is related with the notion of Lipschitz free Banach spaces. \\ It is proved that every separable metric space admits a metric $\\mathcal{M}_d$-frame. Through Lipschitz-free Banach spaces it is showed that there is a correspondence between frames for metric spaces and frames for subsets of Banach spaces. Several characterizations of metric frames are obtained. Stability results are also presented. Non linear multipliers are introduced and studied. This notion is connected with the notion of Lipschitz compact operators. Continuity properties of multipliers are discussed.For a subclass of approximated Schauder frames for Banach spaces, characterization result is derived using standard Schauder basis for standard sequence spaces. Duals of a subclass of approximate Schauder frames are completely described. Similarity of this class is characterized and interpolation result is derived using orthogonality. A dilation result is obtained. A new identity is derived for Banach spaces which admit a homogeneous semi-inner product. Some stability results are obtained for this class.A generalization of operator-valued frames for Hilbert spaces are introduced which unifies all the known generalizations of frames for Hilbert spaces. This notion has been studied in depth by imposing factorization property of the frame operator. Its duality, similarity and orthogonality are addressed. Connections between this notion and unitary representations of groups and group-like unitary systems are derived. Paley-Wiener theorem for this class are derived.",
"subjects": "Functional Analysis (math.FA); Metric Geometry (math.MG); Operator Algebras (math.OA)",
"title": "Metric, Schauder and Operator-Valued Frames",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9732407191430024,
"lm_q2_score": 0.727975460709318,
"lm_q1q2_score": 0.7084953608991951
} |
https://arxiv.org/abs/2007.00167 | The Integers as a Higher Inductive Type | We consider the problem of defining the integers in Homotopy Type Theory (HoTT). We can define the type of integers as signed natural numbers (i.e., using a coproduct), but its induction principle is very inconvenient to work with, since it leads to an explosion of cases. An alternative is to use set-quotients, but here we need to use set-truncation to avoid non-trivial higher equalities. This results in a recursion principle that only allows us to define function into sets (types satisfying UIP). In this paper we consider higher inductive types using either a small universe or bi-invertible maps. These types represent integers without explicit set-truncation that are equivalent to the usual coproduct representation. This is an interesting example since it shows how some coherence problems can be handled in HoTT. We discuss some open questions triggered by this work. The proofs have been formally verified using cubical Agda. | \section{Introduction}
\label{sec:introduction}
How should one define the integers in Homotopy Type Theory? This may sound like a trivial
question. The first answer is as signed natural numbers:
\begin{definition}\label{Z-w}
Let $\mathbb{Z}_w$ be the inductive type generated by the following constructors:
\begin{itemize}
\item[--] $0 : \mathbb{Z}_w$
\item[--] $\mathsf{strpos} : \mathbb{N} \to \mathbb{Z}_w$
\item[--] $\mathsf{strneg} : \mathbb{N} \to \mathbb{Z}_w$
\end{itemize}
\end{definition}
However, this type is very inconvenient in practice because it creates
a lot of unnecessary case distinctions. Nuo \cite{nuo-phd} tried to
prove distributivity of multiplication over addition, which
resulted in a large number of cases. It is akin to working only with
normal forms when manipulating $\lambda$-terms.
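The case explosion is visible already for addition on the signed representation. In the following Python sketch (an illustration only, with $\mathsf{strpos}(n)$ meaning $n+1$ encoded as `('pos', n)` and $\mathsf{strneg}(n)$ meaning $-(n+1)$ encoded as `('neg', n)`), addition must branch on the shapes of both arguments:

```python
def add(x, y):
    """Addition on the signed representation: many case distinctions."""
    if x == 0:
        return y
    if y == 0:
        return x
    sx, nx = x
    sy, ny = y
    if sx == sy:                      # same sign: add magnitudes
        return (sx, nx + ny + 1)
    if nx == ny:                      # opposite signs, equal magnitude
        return 0
    if nx > ny:                       # opposite signs, x dominates
        return (sx, nx - ny - 1)
    return (sy, ny - nx - 1)          # opposite signs, y dominates

def to_int(x):
    if x == 0:
        return 0
    s, n = x
    return n + 1 if s == 'pos' else -(n + 1)

def of_int(k):
    if k == 0:
        return 0
    return ('pos', k - 1) if k > 0 else ('neg', -k - 1)

assert all(to_int(add(of_int(a), of_int(b))) == a + b
           for a in range(-5, 6) for b in range(-5, 6))
```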
Nuo shows that it is much better to work with a quotient type,
representing integers as differences of natural
numbers. That is, we define $\mathbb{Z}_q = \mathbb{N} \times \mathbb{N} / \sim$ where
$(x^+,x^-) \sim (y^+,y^-)$ is defined as $x^+ + y^- = y^+ + x^-$
\footnote{This is actually the definition in \cite{hottbook}.}.
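The difference representation and its equivalence relation can likewise be sketched (Python, illustration only); note that addition needs no case distinctions at all:

```python
# Integers as differences of naturals: (p, m) represents p - m.
def eq(x, y):
    (xp, xm), (yp, ym) = x, y
    return xp + ym == yp + xm          # (x+, x-) ~ (y+, y-)

def add(x, y):
    (xp, xm), (yp, ym) = x, y
    return (xp + yp, xm + ym)          # no case analysis needed

assert eq((3, 1), (7, 5))              # both represent 2
assert not eq((3, 1), (1, 3))          # 2 != -2
assert eq(add((3, 1), (0, 4)), (0, 2)) # 2 + (-4) = -2
```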
However, this is not the end of the story. Here we use
set-quotients, which can be implemented as a higher inductive type with a
set-truncation constructor \cite[Section~6.10]{hottbook}.
However, the set-truncation constructor implies that using its recursion principle we can only define
functions into sets, which seems to be an unreasonable limitation when
working in HoTT. For example, in the proof that the loop space of the
circle is isomorphic to the integers \cite{licatashulman}, we must map from the integers to the loop space
of the circle, when we do not yet know that this will end up being a set.
We would like to have a definition of the integers which is convenient
to work with (i.e., does not reduce them to normal forms) but which is
not forced to be set-truncated by a set-truncation constructor.
Paolo Capriotti suggested the following definition:
\begin{definition}\label{Z-h}
Let $\mathbb{Z}_h$ be the higher inductive type with the following constructors:
\begin{itemize}
\item[--] $0 : \mathbb{Z}_h$;
\item[--] $\mathsf{succ} : \mathbb{Z}_h \to \mathbb{Z}_h$;
\item[--] $\mathsf{pred} : \mathbb{Z}_h \to \mathbb{Z}_h$;
\item[--] $\mathsf{sec} : (z : \mathbb{Z}_h) \to \mathsf{pred}(\mathsf{succ}(z)) = z$;
\item[--] $\mathsf{ret} : (z : \mathbb{Z}_h) \to \mathsf{succ}(\mathsf{pred}(z)) = z$;
\item[--] $\mathsf{coh} : (z : \mathbb{Z}_h) \to \ap{\mathsf{succ}}(\mathsf{sec}(z)) = \mathsf{ret}(\mathsf{succ}(z))$.
\end{itemize}
\end{definition}
We add $\mathsf{succ}$ and $\mathsf{pred}$ as constructors, but then we postulate that they
are inverse to each other using $\mathsf{sec}$ and $\mathsf{ret}$. At this point we could add
a set-truncation but then we would suffer from the same shortcoming as
the definition using a set-quotient. However, we can add just one
coherence condition $\mathsf{coh}$ which should look familiar to anybody
who has read the HoTT book: indeed the constructors $\mathsf{pred}$, $\mathsf{sec}$, $\mathsf{ret}$, and $\mathsf{coh}$
exactly say that $\mathsf{succ}$ is a half-adjoint equivalence \cite[Section~4.2]{hottbook}.
More precisely, $\mathsf{sec}$ postulates that $\mathsf{succ}$ is a section,
$\mathsf{ret}$ postulates that $\mathsf{succ}$ is a retraction, and $\mathsf{coh}$ represents the
triangle identity in the definition of half-adjoint equivalence.
The question that now remains is the following. Is $\mathbb{Z}_h$ a correct definition of the
integers, in particular is it a set with decidable equality? The
strategy to prove this is to define a normalisation function into the
signed integers, $\mathbb{Z}_w$, and show that this normalisation function,
together with the obvious embedding of $\mathbb{Z}_w$ into $\mathbb{Z}_h$, forms an equivalence.
It turns out that this is actually quite hard to prove, due to the presence of
higher equalities, and nobody has so far been able to formally verify this.
In this paper, we follow the same idea but use a \emph{simpler}
definition of equivalence, namely bi-invertible maps \cite[Section~4.3]{hottbook}:
\begin{definition}\label{Z-b}
Let $\mathbb{Z}_b$ be the higher inductive type with the following constructors:
\begin{itemize}
\item[--] $0 : \mathbb{Z}_b$;
\item[--] $\mathsf{succ} : \mathbb{Z}_b \to \mathbb{Z}_b$;
\item[--] $\mathsf{pred}_1 : \mathbb{Z}_b \to \mathbb{Z}_b$;
\item[--] $\mathsf{pred}_2 : \mathbb{Z}_b \to \mathbb{Z}_b$;
\item[--] $\mathsf{sec} : (z : \mathbb{Z}_b) \to \mathsf{pred}_1(\mathsf{succ}(z)) = z$;
\item[--] $\mathsf{ret} : (z : \mathbb{Z}_b) \to \mathsf{succ}(\mathsf{pred}_2(z)) = z$;
\end{itemize}
\end{definition}
In this case we postulate that $\mathsf{succ}$ has a left inverse, given by
$\mathsf{pred}_1$ and $\mathsf{sec}$, and a right inverse, given by $\mathsf{pred}_2$ and $\mathsf{ret}$.
The reason why $\mathbb{Z}_b$ is simpler than $\mathbb{Z}_h$ is because it only has
$0$- and $1$-dimensional constructors. The higher coherence $\mathsf{coh}$ is not needed
in this case for the same reason that a $2$-dimensional constructor is not needed
in the definition of bi-invertible map: having two, a priori, unrelated
inverses makes the type of witnesses that a certain map is bi-invertible a proposition
(\cite[Theorem~4.3.2]{hottbook}).
For this definition we can give a complete
proof that $\mathbb{Z}_b$ is equivalent to $\mathbb{Z}_w$, which has been formalized in
cubical Agda. We remark that this has previously been verified by Evan Cavallo \cite{cavallo} in
RedTT \cite{redtt}. However, our approach to prove the equivalence is more general.
Our main result is \cref{enough}, which says that only the components witnessing the
preservation of $0$ and $\mathsf{succ}$ are relevant when comparing morphisms out of $\mathbb{Z}_b$.
Another presentation of the integers follows from directly
implementing the idea that the integers can be specified as the initial type
with an inhabitant and an equivalence:
\newpage
\begin{itemize}
\item[--] $0 : \mathbb{Z}_U$;
\item[--] $s : \mathbb{Z}_U = \mathbb{Z}_U$.
\end{itemize}
The problem is that this is not a standard definition of a higher inductive type
because we state an equality of the type itself. However, this can be
fixed by using a small universe:
\begin{definition}
\label{Z-u}
Define $U:\mathcal{U}$ and $\mathsf{El} : U \to \mathcal{U}$ inductively with the
constructors:
\begin{itemize}
\item[--] $z : U$;
\item[--] $q : z = z$;
\item[--] $0 : \mathsf{El}(z)$.
\end{itemize}
Now, let $\mathbb{Z}_U \defeq \mathsf{El}(z)$.
\end{definition}
While we can show that this is
a set without using set-truncation, its recursion principle isn't directly amenable to recursive
definitions of functions because even $\mathsf{succ}$ is not a constructor. On
the other hand the fact that the integers are the loop space of the
circle is a rather easy consequence of this definition.
The definition of the integers is also closely related to the free
group, indeed as suggested in \cite{free-group} we can define the free
group over a type $A$ by simply parametrizing all the
constructors but $0$:
\begin{definition}
\label{FG}
Given $A:\mathcal{U}$, define $\mathbf{F}(A)$ inductively with the
constructors:
\begin{itemize}
\item[--] $0 : \mathbf{F}(A)$;
\item[--] $\mathsf{succ} : A \to \mathbf{F}(A) \to \mathbf{F}(A)$;
\item[--] $\mathsf{pred}_1 : A \to \mathbf{F}(A) \to \mathbf{F}(A)$;
\item[--] $\mathsf{pred}_2 : A \to \mathbf{F}(A) \to \mathbf{F}(A)$;
\item[--] $\mathsf{sec} : (a : A) \to (z : \mathbf{F}(A)) \to \mathsf{pred}_1(a,\mathsf{succ}(a,z)) = z$;
\item[--] $\mathsf{ret} : (a : A) \to (z : \mathbf{F}(A))\to \mathsf{succ}(a,\mathsf{pred}_2(a,z)) = z$;
\end{itemize}
\end{definition}
The integers arise as the special case $\mathbb{Z} =
\mathbf{F}(\mathbf{1})$. However, the normal forms get a bit more
complicated because we must allow alternating sequences of $\mathsf{succ}$ and
$\mathsf{pred}$ but only for different $a:A$. This means that a normalisation
function is only definable for types $A$ with decidable
equality. The general problem of whether $\mathbf{F}(A)$ is a set, if $A$ is, is
still open --- in \cite{free-group} it is shown to be the
case, if we $1$-truncate the HIT.
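Such a normalisation function can be sketched concretely (Python, illustration only): words are lists of signed generators, and a letter cancels against its neighbour exactly when the generators are equal, which is where decidable equality on $A$ enters.

```python
def push(word, letter):
    """Append a signed letter (a, e) with e in {+1, -1}, cancelling it
    against the last letter when that one is its inverse -- this is the
    step that uses decidable equality on the generators."""
    if word and word[-1][0] == letter[0] and word[-1][1] == -letter[1]:
        return word[:-1]
    return word + [letter]

def normalise(letters):
    word = []
    for letter in letters:
        word = push(word, letter)
    return word

# succ(a, -) is encoded as ('a', +1), pred(a, -) as ('a', -1)
assert normalise([('a', 1), ('a', -1)]) == []                  # sec
assert normalise([('a', 1), ('b', -1), ('b', 1)]) == [('a', 1)]
# pred(a, succ(b, z)) does not reduce for a != b:
assert normalise([('b', 1), ('a', -1)]) == [('b', 1), ('a', -1)]
```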
The problem of defining the integers with convenient constructors, and adding only
the right coherences to make it a set, can be seen as a simple instance of a
more general class of coherence problems in HoTT. Another example that we have in mind is the intrinsic definition of the syntax of
type theory as the initial category with families as developed in
\cite{TTinTT}. If we carry out this definition in HoTT, we need to
set-truncate the syntax, but this stops us from interpreting the
syntax in the standard model formed by sets. We hope that also in this case we can
add the correct coherence laws and show that they are sufficient to deduce
that the initial algebra is a set.
\subsection{Contributions}
\label{sec:contributions}
We show that the definitions of the signed integers, $\mathbb{Z}_w$, the
definition of the integers as a higher inductive type using bi-invertible maps, $\mathbb{Z}_b$,
and the definition using a higher inductive-inductive type with a mini universe, $\mathbb{Z}_U$, are all
equivalent (\cref{Zb-equiv-Zw}).
For $\mathbb{Z}_b$ we establish some
useful principles such as a recursion principle (\cref{mapout})
which only uses one predecessor, and an induction principle which says
that to prove a predicate (i.e., a family of propositions), you
only need to prove closure under $0$, $\mathsf{succ}$, and $\mathsf{pred}_1$
(\cref{proveprop}). This is sufficient to verify all algebraic
properties of the integers, e.g., that the integers form a commutative ring. We
have formalized \cite{code} the constructions using cubical Agda \cite{cubical-agda}.
When formalizing the constructions involving $\mathbb{Z}_b$ we developed the
theory of bi-invertible
maps in cubical Agda, which wasn't available.
In particular, we prove that bi-inverti\-ble maps are equivalent to contractible-fibers maps
\cite[Section~4.4]{hottbook}, and the principle of equivalence induction for bi-invertible maps.
\subsection{Related work}
\label{sec:related-work}
The claim that $\mathbb{Z}_h$ is a set can be found in \cite{pinyo-types} but
the proof was flawed: it relies on the assumption that we can ignore
propositional parts of an algebra for a certain signature when constructing algebra morphisms,
which is not the case in general (\cref{counterexample}). Cavallo \cite{cavallo} verified that
$\mathbb{Z}_b \simeq \mathbb{Z}_w$ in RedTT. Higher inductive representations of
the integers are discussed in \cite{BasoldEtAl} and it is shown there that
$\mathbb{Z}_h$ without the last constructor is not a
set. \cite{nicolai-path-19} also discuss \cite{pinyo-types} and note
that it is a corollary of their higher Seifert-van Kampen
theorem --- however, they derive it from initiality not from the
induction principle.
\subsection{Background}
\label{background}
We use Homotopy Type Theory as presented in the book
\cite{hottbook}. We adopt the following notational conventions.
If two terms $a$ and $b$ are definitionally equal, we write $a \equiv b$,
and we reserve $a = b$ to denote the type of propositional equalities between $a$ and $b$.
Given a type $A : \mathcal{U}$ and a type family $P : A \to \mathcal{U}$, we write the corresponding
$\Pi$-type as $(a : A) \to P(a)$, and the corresponding $\Sigma$-type as
$(a : A) \times P(a)$.
Given a type $A : \mathcal{U}$, a type family $P : A \to \mathcal{U}$, an equality $e : a = b$ in $A$,
and $p : P(a)$, we denote the coercion of $p$ along $e$ by $e_*(p) : P(b)$.
This is defined by induction on $e$.
A type is contractible if it has exactly one inhabitant. That is,
given a type $A : \mathcal{U}$, we define $\mathsf{isContr}(A) \defeq (a_0 : A) \times \left((a : A) \to a = a_0\right)$.
A type is a proposition if any two inhabitants are equal. That is,
given a type $A : \mathcal{U}$, we define $\mathsf{isProp}(A) \defeq (a,b : A) \to a = b$, \cite[Definition~3.3.1]{hottbook}.
A type is a set if it satisfies UIP. That is, given a type $A : \mathcal{U}$,
we define $\mathsf{isSet}(A) \defeq (a,b : A) \to (p, q : a = b) \to p = q$, \cite[Definition~3.1.1]{hottbook}.
An equivalence between types $A$ and $B$ is a map $f : A \to B$ together
with a proof that $(b : B) \to \mathsf{isContr}( (a : A) \times f(a) = b)$.
The type of equivalences between $A$ and $B$ is denoted by $A \simeq B$.
The general syntax of Higher Inductive Inductive Types (HIITs) is
specified in \cite{ambrus-andras}, where also the types of the
eliminators are derived.
In the informal exposition, and in the formalisation, we use
the cubical approach to path algebra introduced in \cite{licata-brunerie}.
For the formalisation we use cubical Agda \cite{cubical-agda} which is
based on the cubical type theory of \cite{cubical}. The development of HIITs
in Agda is based on \cite{cubical-hits}.
\section{Representing $\mathbb{Z}$ using bi-invertible maps}
\label{sec:usingbi}
\begin{figure*}[!ht]
\[\begin{tikzcd}
f(\mathsf{succ}(\mathsf{pred}_1(x))) \arrow[dd,dash,"\ap{f}(\mathsf{sec}(x))" left]
\arrow[rr,dash,"r(\mathsf{succ}(\mathsf{pred}_1(x)))"]
\arrow[dr,dash]
& & g(\mathsf{succ}(\mathsf{pred}_1(x))) \arrow[dd,near end,dash,"\ap{g}(\mathsf{sec}(x))"]
\arrow[dr,dash]\\
& s(p(f(x))) \arrow[rr,near start,dash,crossing over,"\ap{\mathsf{succ}}(\ap{\mathsf{pred}_1}(r(x)))"]
& & s(p(g(x))) \arrow[dd,dash,"\mathsf{sec}(g(x))" right]\\
f(x) \arrow[rr, near end,dash, "r(x)" below]
\arrow[dr,equal]
&& g(x) \arrow[dr,equal]\\
& f(x) \arrow[rr,dash,"r(x)" below]
\arrow[from=uu,dash,near start,crossing over, "\mathsf{sec}(f(x))" left]
&& g(x)
\end{tikzcd}
\]
\caption{\label{cube}Cube needed for \cref{uniquenessZb}}
\end{figure*}
The type $\mathbb{N}$ of natural numbers is usually defined as the inductive type generated by an inhabitant $0 : \mathbb{N}$ and
an endomap $\mathsf{succ} : \mathbb{N} \to \mathbb{N}$. In this section, we define the integers $\mathbb{Z}_b$ in a similar way. The idea is to give
constructors that guarantee that we have $0 : \mathbb{Z}_b$, $\mathsf{succ} : \mathbb{Z}_b
\to \mathbb{Z}_b$, and that $\mathsf{succ}$ is an equivalence using bi-invertible
maps, see \cref{Z-b}.
To make it easy to work with this definition, we prove three theorems that let us: map out of $\mathbb{Z}_b$ (\cref{mapout}), prove properties
about $\mathbb{Z}_b$ (\cref{proveprop}), and recognise when two maps out of $\mathbb{Z}_b$ are equal (\cref{enough}).
The result about mapping out of $\mathbb{Z}_b$ is very simple, and follows immediately from the recursion principle of $\mathbb{Z}_b$.
\begin{proposition}[{\texttt{rec}$\mathbb{Z}$\texttt{simp}}]
\label{mapout}
Given a type $T$ with an inhabitant $t : T$ and two maps $f : T \to T$, $g : T \to T$, such that
$g$ is a left and right inverse of $f$, we get a map $r : \mathbb{Z}_b \to T$ such that
$r(0) \equiv t$ and $r(\mathsf{succ}(z)) \equiv f(r(z))$, definitionally.
\end{proposition}
The next result is only slightly more involved.
\begin{proposition}[{\texttt{ind}$\mathbb{Z}$\texttt{simp}}]
\label{proveprop}
Given a type family $P : \mathbb{Z}_b \to \mathcal{U}$ such that $(z : \mathbb{Z}_b) \to \mathsf{isProp}(P(z))$,
if we have $P(0)$, $(z : \mathbb{Z}_b) \to P(z) \to P(\mathsf{succ}(z))$, and $(z : \mathbb{Z}_b) \to P(z) \to P(\mathsf{pred}_1(z))$,
then it follows that $(z : \mathbb{Z}_b) \to P(z)$.
\end{proposition}
\begin{proof}
We use the induction principle of $\mathbb{Z}_b$.
The main idea is that we do not have to check any coherences, since
we are proving a proposition. Concretely, this means that we only have to provide inhabitants
for the following types:
$P(0)$, $(z : \mathbb{Z}_b) \to P(z) \to P(\mathsf{succ}(z))$, $(z : \mathbb{Z}_b) \to P(z) \to P(\mathsf{pred}_1(z))$,
and $(z : \mathbb{Z}_b) \to P(z) \to P(\mathsf{pred}_2(z))$.
For the first three we just use the assumptions.
For the fourth one, we make use of the fact that, for every $z : \mathbb{Z}_b$, there
is an equality $\mathsf{pred}_1(z) = \mathsf{pred}_2(z)$.
This is because $\mathsf{pred}_2(z) = \mathsf{pred}_1(\mathsf{succ}(\mathsf{pred}_2(z))) = \mathsf{pred}_1(z)$ using
$\mathsf{sec}$ and then $\mathsf{ret}$.
\end{proof}
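The step $\mathsf{pred}_1(z) = \mathsf{pred}_1(\mathsf{succ}(\mathsf{pred}_2(z))) = \mathsf{pred}_2(z)$ is the usual argument that a left inverse and a right inverse of the same map agree. A Python sketch on the actual integers (illustration only, with two differently written predecessors):

```python
succ = lambda z: z + 1
pred1 = lambda z: z - 1            # a left inverse: pred1(succ(z)) == z
pred2 = lambda z: (z + 1) - 2      # a right inverse, written differently

for z in range(-10, 11):
    assert pred1(succ(z)) == z                       # sec
    assert succ(pred2(z)) == z                       # ret
    # the proof's chain of equalities:
    assert pred2(z) == pred1(succ(pred2(z))) == pred1(z)
```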
The result that allows us to compare maps out of $\mathbb{Z}_b$ is considerably more complicated to prove.
In order to explain its proof, we need to talk about bi-invertible maps.
\begin{definition}
A map between types $f : A \to B$ is a bi-invertible map if
there exist $g,h : B \to A$, and homotopies $s : g\circ f = \idfunc{A}$ and $r : f\circ h = \idfunc{B}$.
The type of bi-invertible structures on such a map $f$ is denoted by $\mathsf{isBiInv}(f)$.
The type $(f : A \to B) \times \mathsf{isBiInv}(f)$ is denoted by $A \simeq_b B$.
\end{definition}
Whenever we have $f : A \simeq_b B$, we will abuse notation and write $f : A \to B$ for the underlying
function of the bi-invertible map $f$.
Notice that the constructors $\mathsf{succ}$, $\mathsf{pred}_1$, $\mathsf{pred}_2$, $\mathsf{sec}$, and $\mathsf{ret}$ form a bi-invertible map.
Suppose given a type $T$ with an inhabitant $t : T$ and a bi-invertible map $s : T \simeq_b T$.
The recursion principle of $\mathbb{Z}_b$ gives us $\mathsf{rec}_{\mathbb{Z}_b}(T,t,s) : \mathbb{Z}_b \to T$.
Now, assume given another map $f : \mathbb{Z}_b \to T$.
What do we have to check to be able to conclude that $f = \mathsf{rec}_{\mathbb{Z}_b}(T,t,s)$?
The following theorem gives a simple answer to the question and is the main
focus of this section.
\begin{theorem}[\texttt{uniqueness}$\mathbb{Z}$]
\label{enough}
Given a type $T$, an inhabitant $t : T$, a bi-invertible map $s : T \simeq_b T$, and a map $f : \mathbb{Z}_b \to T$,
if $f(0) = t$ and $s \circ f = f \circ \mathsf{succ}$
then $f = \mathsf{rec}_{\mathbb{Z}_b}(T,t,s)$.
\end{theorem}
In order to prove \cref{enough} we must study the preservation of bi-invertible maps,
which we introduce next.
Fix types $A,B,A',B' : \mathcal{U}$, bi-invertible maps $e : A \simeq_b B$ and $e' : A' \simeq_b B'$, and
maps $\alpha : A \to A'$ and $\beta : B \to B'$:
\[
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=2em,column sep=2em,minimum width=2em,nodes={text height=1.75ex,text depth=0.25ex}]
{ A & B \\
A' & B'. \\};
\path[-stealth]
(m-1-1) edge node [above] {$e$} (m-1-2)
edge node [left] {$\alpha$} (m-2-1)
(m-2-1) edge node [above] {$e'$} (m-2-2)
(m-1-2) edge node [right]{$\beta$} (m-2-2)
;
\end{tikzpicture}
\]
We now define what it means for $\alpha$ and $\beta$ to respect $e$ and $e'$.
By a slight abuse of notation, let the bi-invertible maps $e$ and $e'$ be given
by $(e,g,h,s,r)$ and $(e',g',h',s',r')$.
\begin{definition}
We define the type $\mathsf{prBiInv}(e,e',\alpha,\beta)$ as the iterated $\Sigma$-type with the following fields:
\begin{itemize}
\item[--] (preservation of $e$) $p_e : e' \circ \alpha = \beta \circ e$;
\item[--] (preservation of $g$) $p_g : g' \circ \beta = \alpha \circ g$;
\item[--] (preservation of $h$) $p_h : h' \circ \beta = \alpha \circ h$;
\item[--] (preservation of $s$) $p_s : (a : A) \to s'( \alpha(a)) = \ap{g'}(p_e a) \sq p_g(e(a)) \sq \ap{\alpha}(s(a))$;
\item[--] (preservation of $r$) $p_r : (b : B) \to r'( \beta(b)) = \ap{e'}(p_h b) \sq p_e(h(b)) \sq \ap{\beta}(r(b))$.
\end{itemize}
\end{definition}
The next proposition follows from the initiality of $\mathbb{Z}_b$, although it is a bit involved to prove formally using
the constructors and the induction principle.
\begin{proposition}[\texttt{uniqueness}]
\label{uniquenessZb}
Suppose given a type $T$ with an inhabitant $t : T$, a bi-invertible map $s : T \simeq_b T$, and a map $f : \mathbb{Z}_b \to T$.
If $f(0) = t$ and $\mathsf{prBiInv}(\mathsf{succ},s,f,f)$, then $f = \mathsf{rec}_{\mathbb{Z}_b}(T,t,s)$.
\end{proposition}
\begin{proof}
We write $g$ for $\mathsf{rec}_{\mathbb{Z}_b}(T,t,s)$.
By function extensionality, it is enough to construct a term $r :
(x : \mathbb{Z}_b) \to f(x) = g(x)$. We do this using the induction principle.
The case for $0$
follows directly from the assumption $f(0) = t$; the equality $r(\mathsf{succ}(x))$
and the corresponding equalities for $\mathsf{pred}_1$ and $\mathsf{pred}_2$
follow directly from the assumption that $f$ respects the bi-invertible
maps $\mathsf{succ}$ and $s$.
It remains to check the cases of $\mathsf{sec}$ and $\mathsf{ret}$.
Since these are symmetric, we only describe the case of $\mathsf{sec}$.
In this case, we have to provide a filler for the following
square of equalities:
\[\begin{tikzcd}[column sep=huge]
f(\mathsf{succ}(\mathsf{pred}_1(x))) \arrow[d,dash,"\ap{f}(\mathsf{sec}(x))" left]
\arrow[r,dash,"r(\mathsf{succ}(\mathsf{pred}_1(x)))"]
& g(\mathsf{succ}(\mathsf{pred}_1(x))) \arrow[d,dash,"\ap{g}(\mathsf{sec}(x))"]\\
f(x) \arrow[r,dash,"r(x)" below] & g(x).
\end{tikzcd}\]
This filler can be obtained by filling the cube in Figure~\ref{cube}, as follows.
All the sides apart from the square in question can be filled
using the fact that $f$ preserves the bi-invertible maps, and simple
path algebra, so we can conclude the proof
using the Kan filling property of cubes: any open box can be filled.
\end{proof}
Given a type $A : \mathcal{U}$, let $\idfunc{A} : A \to A$ be the identity function. We have $\idb{A} : \mathsf{isBiInv}(\idfunc{A})$,
so we can define a map $\mathsf{toBiInv} : A = B \to A\simeq_b B$ by path induction, sending
$\refl{} : A = A$ to $\idb{A}$.
By \cite[Corollary~4.3.3]{hottbook} and the univalence axiom, the map $\mathsf{toBiInv}$ is an equivalence.
Let $\mathsf{toEq} : A \simeq_b B \to A = B$ be its inverse.
From this we can derive the principle of (based) equivalence induction,
which we now state.
\begin{lemma}[\texttt{BiInduction}]\label{BiInduction}
Fix a type $A : \mathcal{U}$ and a type family $P : (B : \mathcal{U}) \to A \simeq_b B \to \mathcal{U}$.
If we have $P_0 : P(A,\idfunc{A},\idb{A})$, then we have
$(B : \mathcal{U}) \to (e : A \simeq_b B) \to P(B,e)$.
\end{lemma}
\begin{proof}
This is proven by path induction, after translating bi-invertible maps to equalities, using $\mathsf{toEq}$ and $\mathsf{toBiInv}$.
\end{proof}
Using equivalence induction, and singleton elimination, one can finally prove that a map between types together with bi-invertible maps
that respects the maps, automatically respects the bi-invertible structure.
\begin{lemma}
\label{inductiontoalgebra}
The type $\mathsf{prBiInv}(e,e',\alpha,\beta)$ is equivalent to the type $e' \circ \alpha = \beta \circ e$.
\end{lemma}
\begin{proof}
We use equivalence induction (\cref{BiInduction}) for $e$ and $e'$ and then observe that
the type
\[
\mathsf{prBiInv}((\idfunc{},\idb{}),(\idfunc{},\idb{}),\alpha,\beta)
\]
is equivalent to the type of equalities $\alpha = \beta$.
\end{proof}
\begin{proof}[Proof of \cref{enough}]
The theorem is a corollary of \cref{uniquenessZb} and \cref{inductiontoalgebra}.
\end{proof}
One should notice that \cref{inductiontoalgebra} can be proven directly, avoiding the use of
the univalence axiom (which was used to prove that $\mathsf{toBiInv}$ is an equivalence). We do not
do this because the path algebra involved in a direct proof of \cref{inductiontoalgebra}
is non-trivial.
\section{$\mathbb{Z}$ is a set}
\label{sec:isset}
In this section we relate $\mathbb{Z}_b$ with the usual definition of the integers as signed natural numbers,
which we call $\mathbb{Z}_w$.
We show that $\mathbb{Z}_b \simeq \mathbb{Z}_w$, and since we already know that $\mathbb{Z}_w$ is a set,
we deduce that $\mathbb{Z}_b$ is a set too.
\begin{definition}
Let $\mathbb{Z}_w$ be the inductive type with the following constructors:
\begin{itemize}
\item[--] $0 : \mathbb{Z}_w$
\item[--] $\mathsf{strpos} : \mathbb{N} \to \mathbb{Z}_w$
\item[--] $\mathsf{strneg} : \mathbb{N} \to \mathbb{Z}_w$
\end{itemize}
\end{definition}
\begin{theorem}[$\mathbb{Z}$\texttt{is}$\mathbb{Z}$]
We have an equivalence $\mathbb{Z}_b \simeq \mathbb{Z}_w$.
\end{theorem}
\begin{proof}
On the one hand, one can define $\mathsf{succ}_w : \mathbb{Z}_w \to \mathbb{Z}_w $ by induction, by mapping:
\begin{itemize}
\item[--] $0 \mapsto \mathsf{strpos}(0)$;
\item[--] $\mathsf{strpos}(n) \mapsto \mathsf{strpos}(\mathsf{succ}(n))$;
\item[--] $\mathsf{strneg}(0) \mapsto 0$;
\item[--] $\mathsf{strneg}(\mathsf{succ}(n)) \mapsto \mathsf{strneg}(n)$.
\end{itemize}
Similarly one defines $\mathsf{pred}_w$.
The fact that $\mathsf{pred}_w$ provides a left and right inverse for $\mathsf{succ}_w$ is straightforward.
So, by \cref{mapout} we get a map $\mathsf{nf} : \mathbb{Z}_b \to \mathbb{Z}_w$.
On the other hand, it is easy to construct a map $i : \mathbb{Z}_w \to \mathbb{Z}_b$ by induction.
Induction on $\mathbb{Z}_w$ shows that $\mathsf{nf} \circ i = \idfunc{\mathbb{Z}_w}$.
The hard part is to show that $i \circ \mathsf{nf} = \idfunc{\mathbb{Z}_b}$.
This is where \cref{enough} comes in handy. \cref{enough} implies that it is enough to check that
$(i \circ \mathsf{nf})(0) = 0$ and that $\mathsf{succ} \circ (i \circ \mathsf{nf}) = (i \circ \mathsf{nf}) \circ \mathsf{succ}$,
and this follows directly by construction.
\end{proof}
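To make the clauses of $\mathsf{succ}_w$ and $\mathsf{pred}_w$ concrete, here is a small illustrative sketch outside type theory, representing $\mathbb{Z}_w$ as tagged tuples in ordinary Python (the representation and names are ours, not part of the formalization):

```python
# Illustrative sketch of Z_w as signed naturals, mirroring the clauses of
# succ_w and pred_w above. Representation: ("zero",) for 0,
# ("strpos", n) for n+1, ("strneg", n) for -(n+1).

ZERO = ("zero",)

def succ_w(z):
    if z == ZERO:
        return ("strpos", 0)
    tag, n = z
    if tag == "strpos":
        return ("strpos", n + 1)
    # tag == "strneg": successor of -(n+1)
    return ZERO if n == 0 else ("strneg", n - 1)

def pred_w(z):
    if z == ZERO:
        return ("strneg", 0)
    tag, n = z
    if tag == "strneg":
        return ("strneg", n + 1)
    # tag == "strpos": predecessor of n+1
    return ZERO if n == 0 else ("strpos", n - 1)

# pred_w is a two-sided inverse of succ_w on these representatives
samples = [ZERO, ("strpos", 0), ("strpos", 3), ("strneg", 0), ("strneg", 2)]
all(pred_w(succ_w(z)) == z and succ_w(pred_w(z)) == z for z in samples)  # True
```

This mirrors the claim in the proof that $\mathsf{pred}_w$ provides a left and right inverse for $\mathsf{succ}_w$.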
\section{Representing $\mathbb{Z}$ using a universe}
\label{sec:usinguniverse}
In this section we give another definition of the integers, denoted by $\mathbb{Z}_U$, which allows one to easily prove that they are the initial
type together with an inhabitant and an equality from the type to itself.
To make sense of initiality, we first define the type of $\mathbb{Z}$-algebras and of $\mathbb{Z}$-algebra morphisms.
\begin{definition}
A $\mathbb{Z}$-algebra is a type $T : \mathcal{U}$ together with an inhabitant $t : T$,
and an equality $e : T = T$.
We denote such a $\mathbb{Z}$-algebra as $(T,t,e)$, or $T$ if the rest of the structure
can be inferred from the context.
\end{definition}
\begin{definition}
A morphism of $\mathbb{Z}$-algebras from $(T,t,e)$ to $(T',t',e')$
is given by a map $f : T \to T'$, together with an equality $f(t) = t'$, and a proof that
$e_*(f) = e'_*(f)$.
We denote the type of morphisms of $\mathbb{Z}$-algebras between $T$ and $T'$ by $T \to_{\mathbb{Z}} T'$.
\end{definition}
We are interested in initial $\mathbb{Z}$-algebras.
\begin{definition}
An initial $\mathbb{Z}$-algebra is a $\mathbb{Z}$-algebra $(T,t,e)$ such that
for any other $\mathbb{Z}$-algebra $(T',t',e')$ the type
$T\to_{\mathbb{Z}} T'$ is contractible.
\end{definition}
See \cref{Z-u} for the definition of the initial
$\mathbb{Z}$-algebra using a mini universe%
\footnote{This is inspired by Zongpu Xie's proposal how to represent
HIITs in Agda \cite{ambrus-agda}.}.
Then define an interpretation function $\mathsf{El} : U \to \mathcal{U}$, as the higher inductive family with
only one constructor $0 : \mathsf{El}(z)$.
Define the type $\mathbb{Z}_U \defeq \mathsf{El}(z)$. The type $\mathbb{Z}_U$ has the structure of a $\mathbb{Z}$-algebra, since we have
$0 : \mathbb{Z}_U$ and $s \defeq \ap{\mathsf{El}}(q) : \mathbb{Z}_U = \mathbb{Z}_U$.
The following result follows by a routine application of the induction
principle of $\mathbb{Z}_U$.
\begin{theorem}[$\mathbb{Z}$\texttt{isInitial}]
The $\mathbb{Z}$-algebra $\mathbb{Z}_U$ is initial.\qed
\end{theorem}
In particular, we have the following.
\begin{proposition}
Given a type $T$ with an inhabitant $t : T$ and an equality $e : T = T$, we get a morphism of $\mathbb{Z}$-algebras $\mathbb{Z}_U \to T$.\qed
\end{proposition}
Again, comparing maps out of $\mathbb{Z}_U$ is easy, thanks to the following theorem.
\begin{theorem}
Given a type $T$, an inhabitant $t : T$, an equality $e : T = T$, and a map $f : \mathbb{Z}_U \to T$,
if $f(0) = t$ and $e_* \circ f = f \circ s_*$ then $f = \mathsf{rec}_{\mathbb{Z}_U}(T,t,e)$.\qed
\end{theorem}
Analogously to the case of $\mathbb{Z}_b$, this is proven by combining the initiality of $\mathbb{Z}_U$ with
the fact that to preserve an equality in the universe $e : T = T$, it is enough to commute with
its corresponding coercion function $e_* : T \to T$.
Following the argument given in \cref{sec:isset}, one deduces the following.
\begin{theorem}
\label{Zb-equiv-Zw}
There is an equivalence $\mathbb{Z}_U \simeq \mathbb{Z}_w$.\qed
\end{theorem}
We omit the proof since it is basically the same as the construction
presented in \cite{licatashulman} when proving that the integers are
the loop space of the circle.
Indeed, the mini universe $U$ is nothing but the higher inductive type
presentation of the circle $S^1$ of \cite[Section~6.1]{hottbook}, so
that $(z = z) \equiv \Omega S^1$.
Moreover, the type family $\mathsf{El}$ is equivalent to the path space fibration of the circle, in the following sense.
\begin{theorem}[\texttt{ElisPath}]
For every $u : U$ we have $\mathsf{ed}(u) : \mathsf{El}(u) \simeq (z = u)$.
\end{theorem}
\begin{proof}
We construct a map $\mathsf{ed}(u) : \mathsf{El}(u) \to (z=u)$
using induction on $U$ and mapping $0 : \mathsf{El}(z)$ to $\refl{z}$.
To construct a map going the other way,
we use path induction and map $\refl{z}$ to $0$.
It is then straightforward to see that these maps
give an equivalence as in the statement.
\end{proof}
As a corollary, we obtain the well-known equivalence between the loop space of the circle and the integers.
\begin{corollary}
We have an equivalence $\Omega S^1 \simeq \mathbb{Z}_w$.\qed
\end{corollary}
This suggests that one could alternatively view the representation of the integers via a
universe as an inductive-inductive presentation of the circle, equipped
with a family that has a point in the fiber over the base point.
\section{Formalization in cubical Agda}
We formally checked the results of this paper \cite{code} using cubical Agda \cite{cubical-agda}.
There are two differences between the informal presentation in the paper and the formalization.
The first one is that the presentation in the paper is done using book-HoTT \cite{hottbook}, whereas
the formalization is done using a cubical type theory. In this case, this difference is not important, since
it is easy to translate the formalized arguments to book-HoTT.
The real difference is in the definition of higher inductive types. In the paper we define higher inductive types
as initial algebras for a certain signature (\cref{background}). In the formalization, we use higher inductive types
as implemented in cubical Agda, which are based on \cite{cubical-hits}.
Although it is natural to assume that the Agda higher inductive type should be initial in the sense of \cref{background},
proving this fact is actually one of the main difficulties in the formalization (\cref{uniquenessZb}).
In proving the results of \cref{sec:usingbi}, we developed the theory of bi-invertible maps in cubical Agda, which
was not previously available. We prove that the type of bi-invertible maps between $A$ and $B$ is equivalent to the type of equivalences
between $A$ and $B$, as well as the principle of bi-invertible induction.
\section{Open questions}
\label{sec:concl-open-quest}
\subsection*{Preservation of properties}
The key result in the above discussion is \cref{inductiontoalgebra}, which can be reformulated as follows.
Let $T,T' : \mathcal{U}$, $s : T \to T$, $s' : T' \to T'$, $\phi : \mathsf{isBiInv}(s)$, and $\phi' : \mathsf{isBiInv}(s')$.
We can define the following two types of morphisms between $T$ and $T'$:
\begin{align*}
\mathsf{Map}_{\mathsf{end}}(T,T') &\defeq (f : T \to T') \times (s' \circ f = f \circ s)\\
\mathsf{Map}_{\mathsf{biInv}}(T,T') &\defeq (f : T \to T') \times \mathsf{prBiInv}(s,s',f,f).
\end{align*}
Informally, $\mathsf{Map}_{\mathsf{end}}(T,T')$ is the type of maps that respect the endomorphism, and $\mathsf{Map}_{\mathsf{biInv}}(T,T')$ is
the type of maps that respect the endomorphism and the proof that the endomorphism is a bi-invertible map.
We have a forgetful map $\mathsf{Map}_{\mathsf{biInv}}(T,T') \to \mathsf{Map}_{\mathsf{end}}(T,T')$, and what \cref{inductiontoalgebra} says is that
this map is an equivalence.
There is something special about the type family $\mathsf{isBiInv} : (A \to B) \to \mathcal{U}$, namely that it is valued
in propositions.
One might wonder if \cref{inductiontoalgebra} is a general principle, in the following sense.
Say that we have a signature $S$ for a type of algebras, and we extend it to a signature $S'$, such that
the fields we added take values in propositions. In the above example $S$ corresponds to $(T : \mathcal{U}) (f : T \to T)$
and $S'$ corresponds to the extension $(T : \mathcal{U}) (f : T \to T) (\phi : \mathsf{isBiInv}(f))$.
As usual, given $S'$-algebras $T,T'$, we have a forgetful map $\mathsf{Map}_{S'}(T,T') \to \mathsf{Map}_S(T,T')$.
Is this map an equivalence in general?
The following example, suggested by Paolo Capriotti, shows that this is not necessarily the case.
\begin{example}
\label{counterexample}
Consider $S$, the signature $(T : \mathcal{U}) (o : T \to T \to T)$, and $S'$, the extension
\begin{align*}
&(T : \mathcal{U}) (o : T \to T \to T) (tr : \mathsf{isSet}(T))\\
&(e : T) \left(u : (t : T) \to o(t,e) = t \times o(e,t) = t\right).
\end{align*}
The $S$-algebras are the types with a binary operation, and the $S'$-algebras are the sets with
a binary operation with a distinguished element that is a left and right unit.
The extension $S'$ is propositional. This is because being a set is a proposition (\cite[Theorem~7.1.10]{hottbook}),
so $tr$ inhabits a proposition, two left and right units must necessarily coincide,
so $e$ inhabits a proposition (assuming $u$),
and the identity types of a set are propositions and these are closed under pi-types (\cite[Theorem~7.1.9]{hottbook}),
so $u$ inhabits a proposition (assuming $tr$).
Let us see that for $S'$-algebras $T,T'$ the forgetful map $\mathsf{Map}_{S'}(T,T') \to \mathsf{Map}_S(T,T')$ is not an
equivalence in general. Let $T$ be $(\mathbb{N},+,\phi,0,\psi)$, where $\phi$ is a proof that the natural numbers form
a set, and $\psi$ is a proof that $0$ is a left and right unit for $+$.
Let $T'$ be $(\mathsf{bool},\vee,\phi',\bot,\psi')$, where $\phi'$ is a proof that the booleans form a set,
and $\psi'$ is a proof that $\bot$ is a left and right unit for $\vee$.
Then we have $\lambda n. \top : \mathbb{N} \to \mathsf{bool}$. This map clearly respects the operations, so we get an inhabitant
of $\mathsf{Map}_S(T,T')$. But this morphism does not respect the units, so it cannot come from a morphism
in $\mathsf{Map}_{S'}(T,T')$.
\end{example}
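As a quick sanity check of this example (in ordinary code, not in type theory), one can verify that the constant map to $\top$ respects the binary operations but fails to preserve the units; the encoding below is our own illustration:

```python
# The S-algebra morphism of the counterexample: the constant-True map
# from (N, +) to (bool, or). It respects the binary operation...
def f(n):
    return True  # lambda n . top

def respects_operation(samples):
    # f(a + b) == f(a) or f(b) holds for all sampled a, b
    return all(f(a + b) == (f(a) or f(b)) for a in samples for b in samples)

respects_operation(range(10))  # True: f inhabits Map_S(T, T')
f(0) == False                  # False: the unit 0 is not sent to the unit bot
```

So the forgetful map cannot hit this morphism, exactly as argued above.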
This discussion leaves open an interesting question.
\begin{question}
Given a signature $S$ and a propositional extension $S'$, are there useful necessary and sufficient conditions for
the forgetful map $\mathsf{Map}_{S'}(T,T') \to \mathsf{Map}_S(T,T')$ to be an equivalence for every pair of $S'$-algebras
$T$ and $T'$?
\end{question}
\subsection*{Initiality of HIITs}
\label{sec:intiality-hiits}
Our original goal was to complete the conjectured result from
\cite{pinyo-types} and formally verify that $\mathbb{Z}_h$ is a set. Using the
strategy from this paper this is fairly straightforward: we can show
that the natural notion of morphism of $\mathbb{Z}_h$-algebras satisfies a
principle analogous to \cref{inductiontoalgebra}, and hence that
$\mathbb{Z}_h$ is a set. When attempting to formalize this
construction we hit an unexpected problem: it turns out that it is rather
difficult to verify that the higher inductive type defining $\mathbb{Z}_h$ is initial in its corresponding
wild category of algebras. Specifically, the proof seems to require the construction of a
filler for a $4$-dimensional cube which is rather laborious. In \cite{qiits-popl} it
is shown that for QIITs (i.e., set-truncated HIITs) elimination and
initiality are equivalent, but the extension to higher dimensional HIITs
seems non-trivial. In particular it may require developing the higher
order categorical structure of the category of algebras.
\paragraph{Acknowledgements}
The first author would like to thank Paolo Capriotti, Nicolai Kraus
and Gun Pinyo for many interesting discussions on the subject of
this paper. Both authors would like to thank Christian Sattler for
comments and useful discussions. The work by and with Ambrus Kaposi
and Andr\'as Kov\'acs plays an important role in particular in
connection with the open questions triggered by this paper.
\bibliographystyle{alpha}
\section{Introduction}
A classical series of problems in elementary probability
theory is about the gender combinations
($m$-$m$, $m$-$f$, $f$-$m$ and $f$-$f$)
in a family of two children. Since this is an academic exercise
(in the bad sense of the term), one usually does not
attempt to assess how much one believes that these combinations
occur in a real family. This means that the well known male over
female birth asymmetry is neglected, as are gender
correlations within a family, like those
induced by genetic factors, or by the possibility of
monovular twins.
Once the conditions are properly defined, the
usual questions, besides the trivial one of male/female, are
\begin{enumerate}
\item[Q$_1$)] What is the probability of two boys?
\item[Q$_2$)] What is the probability of two boys, if the eldest child
is a boy?
\item[Q$_3$)] What is the probability of two boys, if at least
one child is a boy?
\end{enumerate}
These questions can be promptly answered by looking at
contingency table 1, which lists the space of the four
equiprobable elementary cases.
\begin{table}
\begin{center}
\begin{tabular}{c|cc|c}
Eldest & \multicolumn{3}{c}{Youngest} \\
& $m$ & $f$ & $m\cup f$ \\
\hline
$m$ & $1/4$ & $1/4$ & $1/2$ \\
$f$ & $1/4$ & $1/4$ & $1/2$ \\
\hline
$m\cup f$ & $1/2$ & $1/2$ & $1$
\end{tabular}
\caption{{\sl Table of equiprobable cases of the
four possible gender sequences of the two children. The symbol
`$\cup$' stands for `OR'.}}
\end{center}
\end{table}
\begin{enumerate}
\item[A$_1$)] The probability of two boys is 1/4, or 25\%,
since it is just the probability of each
elementary event, that all
together have to sum up to unity, or 100\%.
\item[A$_2$)] If the eldest child is a boy, the space of
possibilities is squeezed to the first row of the
table. We remain with two equiprobable cases,
each of which gets probability 1/2. In formulae:
\begin{eqnarray}
P(Em\cap Y\!m\,|\,Em,I_0) &=& \frac{P(Em\cap Y\!m\,|\,I_0)}
{P(Em\,|\,I_0)}
\label{eq:A2} \\
& =& \frac{1/4}{1/2} = \frac{1}{2}\,.
\end{eqnarray}
[The symbol `$\cap$' stands for a logical `AND'; `$|$' stands for `given',
or `conditioned by'; `$Em$' and `$Y\!m$' are short forms
for ``the eldest is male'' and ``the youngest is male'';
the condition $I_0$ is the
{\it background status of information}
under which the probabilities are evaluated, that includes
the simplifying hypotheses stated above; when
there is a further condition, like $Em$ and $I_0$
in the l.h.s. of Eq.~(\ref{eq:A2}),
they are both indicated after the conditional symbol `$|$',
separated by a comma.]
\item[A$_3$)] Finally,
the information that there is at least one boy in the family
reduces the space of possibilities
to three equiprobable cases, of which only one is
that of our interest, thus getting 1/3 (the symbol `$\cup$'
in the following formulae indicates a logical `OR').
Formally
\begin{eqnarray}
P(Em\cap Y\!m\,|\,Em\cup Y\!m,I_0) &=& \frac{P[(Em\cap Y\!m)\cap(Em\cup Y\!m)\,|\,I_0]}
{P(Em\cup Y\!m\,|\,I_0)} \\
&=& \frac{P(Em\cap Y\!m\,|\,I_0)}
{P(Em\cup Y\!m\,|\,I_0)} \\
& =& \frac{1/4}{3/4} = \frac{1}{3}
\end{eqnarray}
\end{enumerate}
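The three answers can be checked mechanically by enumerating the equiprobable cases of table 1 and counting; a minimal sketch:

```python
from itertools import product

# The four equiprobable elementary cases of table 1: (eldest, youngest).
space = list(product("mf", repeat=2))

def prob(event, given=lambda s: True):
    # conditional probability by counting over the equiprobable space
    sel = [s for s in space if given(s)]
    return sum(event(s) for s in sel) / len(sel)

two_boys = lambda s: s == ("m", "m")

prob(two_boys)                               # A1: 0.25
prob(two_boys, given=lambda s: s[0] == "m")  # A2: 0.5
prob(two_boys, given=lambda s: "m" in s)     # A3: 1/3
```

Conditioning simply restricts the space, exactly as the squeezing of table 1 described above.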
Obviously, the problem can be turned into the probability of girl-girl
by symmetry.
The `complication'
(mainly induced confusion) comes when the information about
the name of one child is provided (`Florida' in
{\it The Drunkard's Walk}~\cite{Drunkard}):
\begin{description}
\item[Q$_4$)] What is the probability of two boys, if one
of the children is called Mark?\footnote{This question can be turned
into ``what is the probability that Mark has a brother or a sister?''
and any normal and sane person might wonder about the sense
of this {\it madness}, as said a friend of mine with zero math skill,
when he saw a first draft of this paper on my desk, because
-- he explained --
``it is absolutely equally likely that he has either a brother or a sister''.}
\end{description}
\section{How the child's name changes the probabilities}
At this point the question is how table 1 is changed by the
information that one child is known by gender and name
(the latter usually implying the former).
The way the problem is often solved is to assume that the same
name can be given to two different children in the same family.
Frankly, I never heard of this possibility before.
And, anyhow, if such a strange behavior occurs
in some very rare cases, it seems to me of an importance
much lower than all other questions that
have been neglected (male/female asymmetry, genetic biases, etc.).
What I found annoying is that this peculiar solution is not
reported as a mathematical curiosity
(see e.g. the Drunkard's Walk or the
Wikipedia page dedicated to
the so called `paradox'~\cite{Wikipedia} -- a puzzle one is
unable to solve is not necessarily a paradox), but
as if it were `the solution'.
Nevertheless, let us first see what happens when this possibility is
allowed.
(By the way, one might think of families with children
coming from previous marriages, in which case identical names
might occur, but this possibility is excluded by the
usually implicit assumptions of this kind of puzzle,
often formulated as ``a lady has two children, \ldots''.)
\subsection{Allowing identical names for two children of a family}\label{ss:id_names}
Just to stay close to the formulation of
the problem in
the recent disputes
that have triggered this paper, let us focus on girl probabilities,
assuming we know that one of the children is a girl of a given
name. Splitting the female category into `female of that given name'
($fN$) and `female with any other name' ($f\overline N$),
there are now three possible cases for each child,
$\{m, fN, f\overline N\}$, no longer equiprobable.
Calling $r$ the fraction of girls owning that name
in the population, we get the following probabilities,
under the new background condition $I_1$:
$P(m\,|\,I_1)=1/2$, $P(fN\,|\,I_1)=r\times 1/2=r/2$ and
$P(f\overline N\,|\,I_1)=(1-r)/2$.
The nine possibilities and their probabilities,
calculated using the product rule
(justified by elder/youngest name independence),
are reported in table 2.
\begin{table}
\begin{center}
\begin{tabular}{cc||ccc|cc}
\multicolumn{2}{c||}{Eldest} & \multicolumn{4}{c}{Youngest} & \\
\hline\hline
& & $m$ & \multicolumn{2}{c|}{$f$} & $m\cup f$ &\\
& & & $fN$ & $f\overline N$ & & \\
\hline
$m$ & & $1/4$ & $\mathbf{r/4}$ & $(1-r)/4$ & $1/2$ & \\
\multirow{2}{*}{$f$} & $fN$ & $\mathbf{r/4}$ & $\mathbf{r^2/4} $ & $\mathbf{r (1-r)/4} $ & $r/2$ &\multirow{2}{*}{$1/2$}\\
& $f\overline N$ & $(1-r)/4$ & $\mathbf{ r (1-r)/4}$ & $(1-r)^2/4$ & $(1-r)/2$ & \\
\hline
$m\cup f$ & & $1/2$ & $r/2$ & $(1-r)/2$ & 1 &\\
& & & \multicolumn{2}{c|}{$1/2$} & &
\end{tabular}
\caption{{\sl Table of probabilities of the possible cases
assuming that
eldest and youngest children can have the same name (see text).}}
\end{center}
\end{table}
From the table we can calculate the probability of both females,
if we know one girl by name:
\begin{eqnarray}
P[(Ef\cap Y\!f)\,|\,(EfN\cup Y\!fN)\ ,I_1] &=&
\frac{P[(Ef\cap Y\!f)\cap(EfN\cup Y\!fN)\,|\,I_1]}
{P[(EfN\cup Y\!fN)\,|\,I_1]}\,.
\end{eqnarray}
The denominator is given by the five elements emphasized in boldface
in table 2, whose probabilities sum up to $r-r^2/4$.
The numerator is given by the three elements
that have $m$ neither in the rows nor in the columns,
whose probabilities are $r^2/4$, $r(1-r)/4$ and $r(1-r)/4$,
adding up to $(2r - r^2)/4$. We get then
\begin{eqnarray}
P[(Ef\cap Y\!f)\,|\,(EfN\cup Y\!fN)\ ,I_1]
& =& \frac{(2 r-r^2)/4}{r-r^2/4} \\
& =& \frac{1}{2}\left[\frac{1-r/2}{1-r/4}\right]
\label{eq:p_r_2}\\
& \approx& \frac{1}{2}- \frac{r}{8}
\hspace{1.3cm}\mbox{(for }r\ll 1\mbox{)}
\end{eqnarray}
As we can see,
the probability does depend on $r$, but it
tends rapidly to 1/2 for small values of $r$, as
also shown in table 3 for some numerical values of this parameter.\footnote{
The value $r=0.02=1/50$ is that used in Ref.~\cite{Wikipedia}, for which
the probabilities of table 2 acquire the following values
\begin{center}
\begin{tabular}{lll|l}
0.2500 & 0.0050 & 0.2450 & 0.5000 \\
0.0050 & 0.0001 & 0.0049 & 0.0100 \\
0.2450 & 0.0049 & 0.2401 & 0.4900 \\
\hline
0.5000 & 0.0100 & 0.4900 & 1.0000
\end{tabular}
\end{center}
from which we get the following table of {\it expected values}
in 10000 families
\begin{center}
\begin{tabular}{rrr|r}
2500 & 50 & 2450 & 5000 \\
50 & 1 & 49 & 100 \\
2450 & 49 & 2401 & 4900 \\
\hline
5000 & 100 & 4900 & 10000
\end{tabular}
\end{center}
[By the way, I would like to point out that
quoting expected values is a way to state \ldots what we expect
in a probabilistic sense -- and probability theory teaches how
to calculate standard expectation uncertainties
(the $\sigma$'s) --
and has little to do with a `frequentistic approach',
since the probabilities have not been evaluated by past frequencies
(statistical data).]
}
\begin{table}[h]
\begin{center}
\begin{tabular}{l|c}
\multicolumn{1}{c|}{$r$} & $P(\mbox{two girls}\,|\,fN,I_1)$ \\
\hline
0.3 & 0.45946 \\
0.2 & 0.47368 \\
0.1 & 0.48718 \\
0.02 & 0.49749 \\
0.01 & 0.49875 \\
0.001 & 0.49988 \\
0.0001& 0.49999
\end{tabular}
\caption{{\sl Probability of two girls in family,
if we know by name a daughter, calculated
as a function of the prevalence of that name within
the girls of that population. [Note, just for mathematical curiosity,
that if $r=1$ (all girls have the same name), Eq.~(\ref{eq:p_r_2})
gives a probability
of 1/3, thus recovering the answer to $Q_3$.
In fact, in this case telling the name adds no more information
to ``at least one is female''.]}}
\end{center}
\end{table}
\newpage
\subsection{Unique names of children within a family}\label{eq:unique_name}
Let us now see what happens if we require that, as it
normally happens, children names are unique.
The central element of table 2 goes to zero, but the sums
along rows and columns have to be preserved\footnote{If no further
information is provided,
the probability
that any child is a female with the special name $N$ has to be
$r/2$, no matter if the child in question is the eldest or the youngest.
Similarly, the probability of girl with a name different from $N$
has to be $(1-r)/2$.}
[for example the probability of $EfN\cap Y\!f\overline{N}$
becomes $r(1-r)/4+r^2/4$, that is the same as
$r/2-r/4$, i.e. $r/4$].
The result is shown in table 4
(we label the central value of the table by `-' to
remark that this case is impossible by assumption). [Note
that the impossibility of identical children names
constrains $r$ to be smaller than $1/2$, well above any
reasonable value. Remember also that the probabilities
of table 4
reflect the several simplifying
assumptions of the problem.]
\begin{table}[h]
\begin{center}
\begin{tabular}{cc||ccc|cc}
\multicolumn{2}{c||}{Eldest} & \multicolumn{5}{c}{Youngest} \\
\hline\hline
& & $m$ & \multicolumn{2}{c|}{$f$} & $m\cup f$ &\\
& & & $fN$ & $f\overline N$ & & \\
\hline
$m$ & & $1/4$ & $\mathbf{r/4}$ & $(1-r)/4$ & $1/2$ & \\
\multirow{2}{*}{$f$} & $fN$ & $\mathbf{r/4}$ & - & $\mathbf{r/4} $
& $r/2$ &\multirow{2}{*}{$1/2$}\\
& $f\overline N$ & $(1-r)/4$ & $\mathbf{r/4}$ & $(1-2r)/4$ & $(1-r)/2$ & \\
\hline
$m\cup f$ & & $1/2$ & $r/2$ & $(1-r)/2$ & 1 &\\
& & & \multicolumn{2}{c|}{$1/2$} & &
\end{tabular}
\caption{{\sl Same as table 2, but not allowing the identical names
of the children.
}}
\end{center}
\end{table}
\newpage
Contrary to table 2, the four cases that involve $fN$ are now
{\it equiprobable} (each with probability $r/4$). It follows that the probability that the other
child is a boy or a girl is 50\%, {\it independently} of the
rarity of the name. In formulae (note the new background condition $I_2$):
\begin{eqnarray}
P[(Ef\cap Y\!f)\,|\,(EfN\cup Y\!fN)\ ,I_2] &=&
\frac{P[(Ef\cap Y\!f)\cap(EfN\cup Y\!fN)\,|\,I_2]}
{P(EfN\cup Y\!fN\,|\,I_2)}
\label{eq:PEfYf|fN} \\
& =& \frac{2\times r/4}{4\times r/4} \\
& =& \frac{1}{2}\,.\label{eq:PEfYf|fN=1/2}
\end{eqnarray}
\section{Does the name really matter?}
At this point it is easy to understand that we could replace
$fN$ in the table by $fID$, where `$ID$' stands now for
`uniquely identified within the family', thus getting table 5.
\begin{table}[b]
\begin{center}
\begin{tabular}{cc||ccc|cc}
\multicolumn{2}{c||}{Eldest} & \multicolumn{5}{c}{Youngest} \\
\hline\hline
& & $m$ & \multicolumn{2}{c|}{$f$} & $m\cup f$ &\\
& & & $fID$ & $f\overline{ID}$ & & \\
\hline
$m$ & & $1/4$ & $\mathbf{r/4}$ & $(1-r)/4$ & $1/2$ & \\
\multirow{2}{*}{$f$} & $fID$ & $\mathbf{r/4}$ & - & $\mathbf{r/4} $
& $r/2$ &\multirow{2}{*}{$1/2$}\\
& $f\overline{ID}$ & $(1-r)/4$ & $\mathbf{r/4}$ & $(1-2r)/4$ & $(1-r)/2$ & \\
\hline
$m\cup f$ & & $1/2$ & $r/2$ & $(1-r)/2$ & 1 &\\
& & & \multicolumn{2}{c|}{$1/2$} & &
\end{tabular}
\caption{{\sl Same as table 4, but based on `identification' of a girl.}}
\end{center}
\end{table}
Think, for example to the following statements
\begin{itemize}
\item ``the secretary of the department X of hospital Y in Rome is
daughter of my aunt B who has also another child'';
\item ``the parents of the actress starring in the last movie I have seen
have two children'';
\item ``the mother of that lady has got two children'';
\item and so on\ldots
\end{itemize}
In all these cases the probability that
the female in question has a sister is 50\%,
as everybody that is not fooled by probability theory will
promptly tell us (see footnote 1).
It is not just a question of knowing her
name, or knowing that she is the eldest or the youngest
(that's the reason we recover the answer to $Q_2$!).
{\it What matters is that this person is somehow
uniquely `identified' in the family}, where `identified' is within
quote marks because it is not requested we know her passport number,
but just that we are able to point to her as {\it that one}.
If this is not the case, and two children could correspond to
the same description, then table 2 holds, assuming no correlation
between the descriptions (if one is blond, there is high change
that the other is blond too, and so on). Therefore we recover it
as table 6,
\begin{table}
\begin{center}
\begin{tabular}{cc||ccc|cc}
\multicolumn{2}{c||}{Eldest} & \multicolumn{5}{c}{Youngest} \\
\hline\hline
& & $m$ & \multicolumn{2}{c|}{$f$} & $m\cup f$ &\\
& & & $f{\cal ID}$ & $f\overline{{\cal ID}}$ & & \\
\hline
$m$ & & $1/4$ & $\mathbf{r/4}$ & $(1-r)/4$ & $1/2$ & \\
\multirow{2}{*}{$f$} & $f{\cal ID}$ & $\mathbf{r/4}$ & $\mathbf{r^2/4} $ & $\mathbf{r (1-r)/4} $ & $r/2$ &\multirow{2}{*}{$1/2$}\\
& $f\overline{{\cal ID}}$ & $(1-r)/4$ & $\mathbf{ r (1-r)/4}$ & $(1-r)^2/4$ & $(1-r)/2$ & \\
\hline
$m\cup f$ & & $1/2$ & $r/2$ & $(1-r)/2$ & 1 &\\
& & & \multicolumn{2}{c|}{$1/2$} & &
\end{tabular}
\caption{{\sl Same as table 2, but with reference to {\it non unique
identification} (note the symbol `${\cal ID}$' instead of `$ID$'
of table 5)
rather then name.}}
\end{center}
\end{table}
but in terms of {\it non unique identification} (`${\cal ID}$') rather
than of name. Now it makes sense.
In fact, since we are referring here to
`identification' in a loose sense, it might really occur
that two daughters correspond to the same description
(`goes to college', or `play tennis', and so on).
Finally, the name can be considered a generic identification,
in order to include
the possibility of identical names in a family (for example
in the cases of second marriages).
\section{Some Bayesian flavor}
Someone asks me about {\it the} Bayesian solution of the problem
(because I am supposed to be {\it a Bayesian}).
But, besides the clarification that
``I am not a Bayesian''~\cite{MaxEnt98},
such a kind of `alternative' solution of the problem
does not exist. The solution is already that provided
by Eq.~(\ref{eq:PEfYf|fN}), because `Bayesians'
just make use of probability theory to state
the relative beliefs of several hypotheses given some
well stated assumptions. In particular, the so called
Bayes' rule for this problem is essentially
Eq.~(\ref{eq:PEfYf|fN}), that can be possibly written
in other convenient forms using the rules of probability.
\subsection{Reconditioning the probability of an hypothesis on the
light of a new status of information}
To make the point clearer, and calling $A=Ef\cap Y\!f$
(``both children are female'') and
$B=EfN\cup Y\!fN$ (``one child is a female of a given particular name'')
to simplify the notation, we can rewrite
Eq.~(\ref{eq:PEfYf|fN}) as
\begin{eqnarray}
P(A\,|\,B,I_2) &=& \frac{P(A\cap B\,|\,I_2)}
{P(B\,|\,I_2)} \label{eq:P_A_B_I2}\\
&=& \frac{P(B\,|\,A,I_2)\,P(A\,|\,I_2)}
{P(B\,|\,I_2)}\,.
\end{eqnarray}
The latter expression
shows explicitly how the probability of $A$ is
updated, by the extra condition $B$, via the factor
$P(B\,|\,A,I_2)/P(B\,|\,I_2)$, i.e.
\begin{eqnarray}
P(A\,|\,B,I_2) &=& \frac{P(B\,|\,A,I_2)}
{P(B\,|\,I_2)} \times P(A\,|\,I_2)\,.
\end{eqnarray}
The three ingredients we need to evaluate $P(A\,|\,B,I_2)$
can be easily read from table 4,
\begin{eqnarray}
P(A\,|\,I_2) &=& \frac{1}{4} \\
P(B\,|\,I_2) &=& r \\
P(B\,|\,A,I_2) &=& 2 r \,,
\end{eqnarray}
from which we get
\begin{eqnarray}
P(A\,|\,B,I_2) &=& \frac{2 r}{r}\times\frac{1}{4} = 2\times\frac{1}{4} = \frac{1}{2}\,,
\end{eqnarray}
recovering the result of section \ref{eq:unique_name}
(note that it must be so because we are strictly using the probabilities
of table 4).
\subsection{Updating the odds}
We can do it in a different way, comparing the probability
of ``two girls'' ($A$) with that of ``only one girl''
[let us indicate the latter hypothesis as
$C = (Ef \cap Y\!m) \cup (Em \cap Y\!f)$].
The probability of $C$ conditioned on $B$, i.e.
$P(C\,|\,B,I_2)$, could be obtained in analogy to
Eq.~(\ref{eq:P_A_B_I2}), reading $P(C\cap B\,|\,I_2)$
from table 4. But it can be more instructive to get it
{\it the Bayesian way}, using the formula that
shows how relative probabilities are updated by the
{\it Bayes factor} to take into account the new
piece of information
(this second approach also has the advantage of
getting rid of $r$ from the very beginning):
\begin{eqnarray}
\underbrace{\frac{P(A\,|\,B,I_2)}{P(C\,|\,B,I_2)}}_{\mbox{`updated odds'}}
&=&
\underbrace{\frac{P(B\,|\,A,I_2)}{P(B\,|\,C,I_2)}
}_{\begin{array}{c} \mbox{`updating factor'} \\ \mbox{({\it Bayes factor})} \end{array}}
\times
\underbrace{\frac{P(A\,|\,I_2)}{P(C\,|\,I_2)}}_{\mbox{`initial odds'}}\,.
\end{eqnarray}
The initial probability of two girls is one half that of a single
girl, i.e.
\begin{eqnarray}
\frac{P(A\,|\,I_2)}{P(C\,|\,I_2)} &=& \frac{1}{2}\,,
\end{eqnarray}
while the probability that there is a girl with a precise name
is proportional to the number of girls in the family
(remember that the condition `$I_2$' does not allow the same name),
namely
\begin{eqnarray}
\frac{P(B\,|\,A,I_2)}{P(B\,|\,C,I_2)} &=& 2\,.
\end{eqnarray}
It follows
\begin{eqnarray}
\frac{P(A\,|\,B,I_2)}{P(C\,|\,B,I_2)} & = &
2 \times \frac{1}{2} = 1\,:
\end{eqnarray}
the girl whose name we know is equally likely
to have a sister ($A$) or a brother ($C$), that is
\begin{eqnarray}
P(A\,|\,B,I_2) = P(C\,|\,B,I_2) & = & \frac{1}{2}\,.
\end{eqnarray}
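As a further sanity check (ours, not part of the original argument), the same answer can be obtained by exact enumeration over genders and name assignments, assuming a finite pool of $K$ equally likely girl names with no repetition inside a family (condition $I_2$); the function and the parameter $K$ are illustrative:

```python
from fractions import Fraction as Fr
from itertools import product

def posterior_by_enumeration(K, target=0):
    """Exact P(both girls | a girl named `target` is in the family),
    with K equally likely girl names and no repeated names (I2)."""
    p_B = Fr(0)        # P(B): some girl bears the target name
    p_A_and_B = Fr(0)  # P(A and B): both girls, one named target
    for elder, younger in product("FM", repeat=2):
        p_gender = Fr(1, 4)
        n_girls = (elder == "F") + (younger == "F")
        if n_girls == 2:
            # two girls get distinct names: K*(K-1) ordered pairs
            for n1, n2 in product(range(K), repeat=2):
                if n1 == n2:
                    continue
                p = p_gender / (K * (K - 1))
                if target in (n1, n2):
                    p_B += p
                    p_A_and_B += p
        elif n_girls == 1:
            for n1 in range(K):  # exactly one girl to name
                p = p_gender / K
                if n1 == target:
                    p_B += p
        # n_girls == 0: event B is impossible
    return p_A_and_B / p_B

# the posterior is exactly 1/2 whatever the size of the name pool
assert posterior_by_enumeration(2) == Fr(1, 2)
assert posterior_by_enumeration(50) == Fr(1, 2)
```

Exact rational arithmetic (`fractions.Fraction`) makes the independence from $K$, i.e. from the rarity $r = 1/K$ of the name, visible without floating-point caveats.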
\section{Conclusions}
The probability that, knowing the name of
one child in a family of two, the other child
is of the same gender has nothing to do with the rarity
of the name, unless the crazy possibility of identical
names in a family is assumed (and if somebody insists
that this can happen, he/she is invited to
calculate more realistic probabilities that take into
account male/female asymmetry and genetic correlations;
the possibility of identical names of children coming
from previous marriages is also implicitly excluded in this kind of
puzzle, which usually speaks of ``a lady having two children\ldots'').
Moreover, what matters is not the knowledge of the name,
but rather something that allows us to point to him/her
as `that one'. For this reason $Q_2$ and $Q_4$ have the same solution.
I would like to end with some comments on the
last of the three textbook questions recalled in the introduction.
It seems to me that the reason for the
quite broad tendency to confuse $Q_3$ with $Q_4$
(or similar questions involving the identification of a child, including $Q_2$)
is that {\it in normal life the information about boy/girl is
acquired simultaneously with other attributes that make
the identification unique} (``my daughter Claudia'').
People do not express themselves as in math
textbooks,
stating that ``I have two children, and at least one of them
is a boy'', or
``my children are not both boys''. We usually gain
this information in an indirect way. For this reason
several people have some initial difficulty grasping
that ``that lady has Claudia and another child''
is not the same as ``that lady has two children, at least one being a girl''.
Moreover, even if a mother says ``if I had two boys'', we may
understand from the context (already knowing she has two children)
that she has two girls, because we perceived that she
emphasized `boys'
instead of `two' (in the latter case we could
think she has already a boy). Instead,
if she said ``if my children were both boys'',
we usually understand that she is expressing this way
because she has a boy and a girl.
Therefore, beyond stereotyped
recreational puzzles,
the evaluation of probabilities,
in the sense of how much we should rationally believe
the several hypotheses, can be nontrivial. We
need to take properly
into account all contextual information,
``when the bare facts won't do''~\cite{Pearl}.
Indeed, in probability evaluations not only the `facts' play a role,
but also
the words, their sound and the expression
of the person who says them, and (too
often ignored) the question to which they reply~\cite{Pearl}.
\vspace{0.5cm}
It is a pleasure to thank Dino Esposito,
Enrico Franco, Paolo Agnoli, Serena Cenatiempo
and Stefano Testa for the several interactions
on the issues discussed here and related ones.
The paper has benefitted from comments by Dino and Paolo.
\vspace{0.5cm}
% End of arXiv:1001.0708, ``On the so-called Boy or Girl Paradox''
% (https://arxiv.org/abs/1001.0708; History and Overview (math.HO);
% Probability (math.PR); Data Analysis, Statistics and Probability (physics.data-an))
% arXiv:1711.04965 (https://arxiv.org/abs/1711.04965)
% Title: Near-optimal sample complexity for convex tensor completion
\begin{abstract}
We analyze low rank tensor completion (TC) using noisy measurements of a subset of the tensor. Assuming a rank-$r$, order-$d$, $N \times N \times \cdots \times N$ tensor where $r=O(1)$, the best sampling complexity that was achieved is $O(N^{\frac{d}{2}})$, which is obtained by solving a tensor nuclear-norm minimization problem. However, this bound is significantly larger than the number of free variables in a low rank tensor, which is $O(dN)$. In this paper, we show that by using an atomic norm whose atoms are rank-$1$ sign tensors, one can obtain a sample complexity of $O(dN)$. Moreover, we generalize the matrix max-norm definition to tensors, which results in a max-quasi-norm (max-qnorm) whose unit ball has small Rademacher complexity. We prove that solving a constrained least squares estimation using either the convex atomic norm or the nonconvex max-qnorm results in optimal sample complexity for the problem of low-rank tensor completion. Furthermore, we show that these bounds are nearly minimax rate-optimal. We also provide promising numerical results for max-qnorm constrained tensor completion, showing improved recovery results compared to matricization and alternating least squares.
\end{abstract}
\section{Introduction}\label{introduction}
Representing data as multi-dimensional arrays, i.e., tensors, arises naturally in many modern applications such as interpolating large scale seismic data \cite{kreimer2013tensor,da2015optimization}, medical images \cite{mocks1988topographic}, data mining \cite{acar2005modeling}, image compression \cite{shashua2001linear,liu2013tensor}, hyper-spectral image
analysis \cite{li2010tensor}, and radar signal processing \cite{nion2010tensor}. A more extensive list of such applications can be found in \cite{kolda2009tensor}. There are many reasons why one may want to work with a subset of the tensor entries: (\RNum{1}) often, these data sets are large and we wish to store only a small number of the entries (compression); (\RNum{2}) in some applications, the acquisition of each entry can be expensive, e.g., each entry may be obtained by solving a large PDE \cite{van2013fast}; (\RNum{3}) some of the entries might get lost due to physical constraints while gathering them. These restrictions result in situations where one has access only to a subset of the tensor entries. The problem of tensor completion entails recovering a tensor from a subset of its entries. Without assuming further structure on the underlying tensor, there is no hope of recovering the missing entries, as they are independent of the observed ones. Therefore, here (and in many applications) the tensors of interest are those that can be expressed approximately as a lower-dimensional object, compared to the ambient dimension of the tensor. In particular, in this paper we consider tensors that have low CP-rank \cite{carroll1970analysis,harshman1970foundations}. The low-rank assumption makes tensor completion a feasible problem. For example, an order-$d$, rank-$r$ tensor of size $N_1 \times N_2 \times \cdots \times N_d$ where $N_i=O(N)$ has $O(rNd)$ free variables, which is much smaller than $N^d$, the ambient dimension of the tensor.\newline
The tensor completion problem has two important goals. Given a low-rank tensor: (\RNum{1}) identify the number of entries that must be observed in order to recover a good approximation of the tensor, as a function of the size parameters $N_i$, the (CP) rank
$r$, and the order $d$; and (\RNum{2}) design stable and tractable methods that recover the tensor from a subset of its entries.\newline
The order-$2$ case, known as \emph{matrix completion}, has been extensively studied in the literature \cite{fazel2002matrix,srebro2005maximum, candes2009exact,keshavan2009matrix,davenport2016overview}. The most basic idea is finding the matrix with lowest rank that is consistent with the measurements. However, rank minimization is NP-hard; therefore, extensive research has been done to find tractable alternatives. The most common approach is using nuclear norm, also known as trace-norm, which is the convex relaxation of the rank function \cite{fazel2002matrix}. It was shown in \cite{candes2010power} that solving a nuclear-norm minimization problem would recover a rank-$r$, $N \times N$ matrix from only $O(rN\text{polylog}(N))$ samples under mild incoherence conditions on the matrix. The nuclear norm is the sum of the singular values of the matrix and it is also the dual of the spectral norm. Extensive research has been done in analyzing variants of nuclear-norm minimization and designing efficient algorithms to solve it as shown in \cite{cai2010singular, candes2010power,candes2010matrix,xu2012alternating}.\newline
An alternative interpretation of the rank and the nuclear-norm of a matrix is based on the minimum number of columns of its factorizations. In particular, the rank of a matrix $M$ is the minimum number of columns of the factors $U,\ V$ where $M=UV'$; the nuclear norm is the minimum product of the Frobenius norms of the factors, i.e., $\|M\|_{\ast} := \text{min} \|U\|_F \|V\|_F\ \text{subject to } M=UV'$ \cite{srebro2005maximum}. An alternative proxy for the rank of a matrix is its max-norm defined as $\|M\|_{\text{max}}:=\text{min} \|U\|_{2,\infty}\|V\|_{2,\infty}$ $\text{subject to}\ M=UV'$ \cite{srebro2005maximum}. The max-norm bounds the norm of the rows of the factors $U$ and $V$ and was used for matrix completion in \cite{foygel2011concentration}. There, the authors studied both max-norm and trace-norm matrix completion by analyzing the Rademacher complexity of the unit balls of these norms. They proved that under uniformly random sampling, either with or without replacement, $m=O(\frac{rN}{\epsilon}\log^3(\frac{1}{\epsilon}))$ samples are sufficient for achieving mean squared recovery error $\epsilon$ using max-norm constrained estimation and $m=O(\frac{rN\log(N)}{\epsilon}\log^3(\frac{1}{\epsilon}))$ samples are sufficient for achieving mean squared recovery error $\epsilon$ using nuclear-norm constrained estimation.\newline
Despite all the powerful tools and algorithms developed for matrix completion, the tensor completion problem is still fairly open and not as well understood. For instance, there is a large gap between the theoretical guarantees and what is observed in numerical simulations. This is mainly due to the lack of efficient orthogonal decompositions and low-rank approximations, and to the limited knowledge of the structure of low-rank tensors compared to matrices. This large gap has motivated much research connecting the general tensor completion problem to matrix completion by rearranging the tensor as a matrix, including the sum of nuclear-norms (SNN) model that minimizes the sum of the nuclear norms of the matricizations of the tensor along all its dimensions, leading to sufficient recovery with $m=O(r N^{d-1})$ samples \cite{liu2013tensor,gandy2011tensor}. More balanced matricizations, such as the one introduced in \cite{mu2014square}, can result in a better bound of $m=O(rN^{\ceil{\frac{d}{2}}})$ samples.\newline
Once we move from matrices to higher order tensors, many of the well-known facts of matrix algebra cease to be true. For example, even a best rank-$k$ approximation may not exist for some tensors, as illustrated in \cite[Section 3.3]{kolda2009tensor}, which shows that the space of tensors with rank at most $2$ is not closed. Interestingly, there is a paper titled ``\emph{Most tensor problems are NP-hard}'' \cite{hillar2013most}, which proves that many common algebraic tasks are NP-hard for tensors with $d \geq 3$, including computing the rank, the spectral norm, and the nuclear norm. The computational complexity of directly solving tensor completion and the inferior results of matricization make tensor completion challenging.\newline
Having all these complications in mind, on the theoretical side, a low-rank tensor has $O(rdN)$ free variables, but the best known upper bound on the sample complexity is $O(rN^{\ceil{\frac{d}{2}}})$. When $d>2$, the polynomial dependence on $N$ leaves a lot of room for improvement. Moreover, it is well known that empirical recovery results are much better when the tensor is not rearranged as a matrix, even though the corresponding algorithms attempt to solve an NP-hard problem. This has resulted in efforts to narrow this gap, including heuristic algorithms \cite{liu2013tensor,bazerque2013rank}. In spite of good empirical results and reasonable justifications, these works did not provide a theoretical study filling in the gap.\newline
The nuclear norm of a tensor, defined as the dual of the spectral norm, was originally formulated in \cite{schatten1985theory,grothendieck1955produits} and has been revisited more in depth in the past few years, e.g., in \cite{derksen2013nuclear,hu2015relations}. Recently, \cite{yuan2016tensor} studied tensor completion using nuclear-norm minimization and proved that under mild conditions on the tensor, $m=O(\sqrt{r}N^{\frac{d}{2}} \log(N))$ measurements are sufficient for successful recovery, but this is still far away from the number of free variables.\newline
In an effort to obtain linear dependence on $N$, we analyze tensor completion using a max-qnorm (max-quasi-norm) constrained algorithm, where the max-qnorm is a direct generalization of the matrix max-norm to tensors. Unfortunately, the max-qnorm is non-convex. However, analyzing the unit ball of the bidual (the dual of the dual) of the max-qnorm, which is a convex norm, led us to define and analyze a convex atomic-norm (which we call the M-norm) constrained least squares problem, for which we obtain recovery bounds with optimal dependence on the size of the tensor. The main contributions of this paper are as follows. Consider an order-$d$ tensor $T \in \mathbb{R}^{N_1 \times \cdots \times N_d}$ where $N_i = O(N)$ for $1\leq i \leq d$.
\begin{itemize}
\item We define the M-norm and max-qnorm of tensors as robust proxies for the rank of a tensor. We prove that both the M-norm and the max-qnorm of a bounded low-rank tensor are upper-bounded by a quantity that depends only on its rank and its infinity norm, and is independent of $N$.\medskip
\item We use a generalization of Grothendieck's theorem to connect the max-qnorm of a tensor to its nuclear decomposition with unit infinity-norm factors. Using this, we bound the Rademacher complexity of the set of bounded tensors with low max-qnorm. This also establishes a theoretical framework for further investigation of low max-qnorm tensors.\medskip
\item We prove that, with high probability, $m=O(r^{\frac{3d}{2}} d N)$ (or $m=O(R^2 N)$ if the M-norm is bounded by $R$) samples are sufficient to estimate a rank-$r$ bounded tensor using a convex least squares algorithm. Moreover, we derive an information-theoretic lower bound showing that $m=\Omega(R^2N)$ measurements are necessary for recovery of tensors with M-norm less than $R$. This proves that our bound is optimal both in its dependence on $N$ and on the M-norm bound $R$. It is worth mentioning, though, that the bound we prove in this paper is not necessarily optimal in $r$, the rank of the tensor.\medskip
\item Through synthetic numerical examples, we illustrate the advantage of using algorithms designed for max-qnorm constrained tensor completion; these significantly outperform algorithms based on matricization and alternating least squares (ALS). It is worth mentioning that computing the nuclear norm of a general tensor is known to be NP-hard. Although it is not known whether computing the M-norm or max-qnorm of a tensor is NP-hard, our numerical experiments with max-qnorm constrained least squares, using a simple projected quasi-Newton algorithm, give promising results.
\end{itemize}
\subsection{Notations and basics on tensors}
We adopt the notation of Kolda and Bader's review on tensor decompositions \cite{kolda2009tensor}. Below, $\lambda$, $\sigma$, and $\alpha$ are used to denote scalars, and $C$ and $c$ are used to denote universal constants. Vectors are denoted by lower case letters, e.g., $u$ and $v$. Both matrices and tensors are represented by upper case letters, usually $A$ and $M$ for matrices, and $T$ and $X$ for tensors. Tensors are a generalization of matrices to higher order, also called multi-dimensional arrays. For example, a first order tensor is a vector and a second order tensor is a matrix. $X \in \bigotimes_{i=1}^{d} \mathbb{R}^{N_i}$ is a $d$-th order tensor whose $i$-th size is $N_i$. We also denote $\bigotimes_{i=1}^{d} \mathbb{R}^{N}$ by $\mathbb{R}^{N^d}$. Elements of a tensor are specified either as $X_{i_1, i_2, \cdots, i_d}$ or $X(i_1, i_2, \cdots, i_d)$, where $1 \leq i_j \leq N_j$ for $1 \leq j \leq d$. We also use $X_{\omega}$ as a shorthand for the entry of a tensor at index $\omega$, where $\omega=(i_1, i_2, \cdots, i_d)$ is a $d$-tuple determining the index $X(i_1, i_2, \cdots, i_d)$.\newline
Inner products are denoted by $\langle \cdot , \cdot \rangle$. The symbol $\circ$ represents both matrix and vector outer products where $T=U_1 \circ U_2 \circ \cdots \circ U_d$ means $T(i_1,i_2,\cdots,i_d)=\sum_{k} U_1(i_1,k)U_2(i_2,k)$ $\cdots U_d(i_d,k)$, where $k$ ranges over the columns of the factors. In the special case of vectors, $T=u_1 \circ u_2 \circ \cdots \circ u_d$ means $T(i_1,i_2,\cdots,i_d)=u_1(i_1)u_2(i_2)\cdots u_d(i_d)$. Finally $[N]:=\{1, \cdots, N\}$ is the shorthand notation we use for the set of integers from $1$ to $N$.
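To make the outer-product notation concrete, a rank-1 tensor can be built entry by entry from its component vectors; the helper below and its dictionary storage format (index tuples as keys) are our own illustrative choices, not the paper's:

```python
from itertools import product

def rank_one(*vectors):
    """Rank-1 tensor u1 ∘ u2 ∘ ... ∘ ud, stored as {index tuple: value}:
    T(i1, ..., id) = u1(i1) * u2(i2) * ... * ud(id)."""
    T = {}
    for idx in product(*(range(len(v)) for v in vectors)):
        val = 1.0
        for v, i in zip(vectors, idx):
            val *= v[i]
        T[idx] = val
    return T

# order-3 example: T(i1, i2, i3) = u(i1) * v(i2) * w(i3)
T = rank_one([1.0, 2.0], [3.0, 4.0], [5.0])
assert T[(1, 0, 0)] == 2.0 * 3.0 * 5.0
assert len(T) == 2 * 2 * 1  # one entry per index tuple
```

A sum of $r$ such terms, weighted by scalars $\lambda_i$, is exactly the CP decomposition discussed next.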
\subsubsection{Rank of a tensor}
A unit tensor is a tensor $U \in \bigotimes_{j=1}^{d} \mathbb{R}^{N_j}$ that can be written as
\begin{equation}\label{rank_one}
U=u^{(1)} \circ u^{(2)} \circ \cdots \circ u^{(d)},
\end{equation}
where $u^{(j)} \in \mathbb{R}^{N_j}$ is a unit-norm vector. The vectors $u^{(j)}$ are called the components of $U$. Define $\onet_d$ to be the set of unit tensors of order $d$. A rank-$1$ tensor is a scalar multiple of a unit tensor.\newline
The rank of a tensor $T$, denoted by rank($T$) is defined as the smallest number of rank-$1$ tensors that generate $T$ as their sum, i.e.,
\begin{equation*}
T = \sum_{i=1}^r \lambda_i U_i = \sum_{i=1}^r \lambda_i u_i^{(1)} \circ u_i^{(2)} \circ \cdots \circ u_i^{(d)},
\end{equation*}
where $U_i \in \onet_d$ is a unit tensor. This low-rank decomposition is also known as CANDECOMP/PARAFAC (CP) decomposition \cite{harshman1970foundations,carroll1970analysis}. In this paper we use CP decompositions; however, we note that there are other decompositions that are used in the literature such as Tucker decomposition \cite{tucker1966some}. For a detailed overview of alternate decompositions, refer to \cite{kolda2009tensor}.
\subsubsection{Tensor norms}
Define $\Tau_d$ to be the set of all order-$d$ tensors of size $N_1 \times N_2 \times \cdots \times N_d$. For $X, T \in \Tau_d$, the inner product of $X$ and $T$ is defined as:
\begin{equation*}
\langle X, T\rangle = \sum_{i_1=1}^{N_1} \sum_{i_2=1}^{N_2} \cdots \sum_{i_d=1}^{N_d} X_{i_1, i_2, \cdots, i_d} T_{i_1, i_2, \cdots, i_d}.
\end{equation*}
Consequently the Frobenius norm of a tensor is defined as
\begin{equation}
\|T\|_F^2 := \sum_{i_1=1}^{N_1} \sum_{i_2=1}^{N_2} \cdots \sum_{i_d=1}^{N_d} T_{i_1, i_2, \cdots, i_d}^2 = \langle T,T \rangle.
\end{equation}
Using the definition of unit tensors one can define the spectral norm of tensors as
\begin{equation}
\|T\| := \underset{U \in \onet_d}{\text{max}} \langle T, U\rangle.
\end{equation}
Similarly, nuclear-norm was also generalized for tensors (see \cite{lim2010multiarray,friedland1982variation}, although the original idea dates back to Grothendieck \cite{grothendieck1955produits}) as
\begin{equation}\label{nuclear_norm}
\|T\|_{\ast} := \underset{\|X\| \leq 1}{\text{max}} \langle T, X\rangle.
\end{equation}
Finally we generalize the definition of max-norm to tensors as
\begin{equation}\label{max_norm_tensor_definition}
\|T\|_{\text{max}}:= \underset{T=U^{(1)} \circ U^{(2)} \circ \cdots \circ U^{(d)}}{\text{min}}\lbrace \prod_{j=1}^{d} \|U^{(j)}\|_{2,\infty}\rbrace,
\end{equation}
where, $\|U\|_{2,\infty} = \underset{\|x\|_2=1}{\text{sup}} \|Ux\|_{\infty}$. In Section \ref{max_norm_section}, we prove that for $d>2$ this generalization does not satisfy the triangle inequality and is a quasi-norm (which we call max-qnorm). We analyze the max-qnorm thoroughly in Section \ref{max_norm_section}.
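The Frobenius norm and the $\|\cdot\|_{2,\infty}$ norm appearing in the max-norm definition are straightforward to compute; a minimal sketch with our own helper names, using the dictionary-of-entries storage for tensors (for a matrix, $\|U\|_{2,\infty}$ reduces to the largest row $\ell_2$-norm):

```python
from math import sqrt

def frobenius(T):
    """Frobenius norm of a tensor stored as {index tuple: value}."""
    return sqrt(sum(v * v for v in T.values()))

def inner(X, T):
    """Inner product <X, T> of two tensors stored as dicts
    (missing entries are treated as zero)."""
    return sum(X.get(idx, 0.0) * val for idx, val in T.items())

def two_inf_norm(U):
    """||U||_{2,inf} of a matrix given as a list of rows: the largest
    row l2-norm, which coincides with sup_{||x||_2 = 1} ||U x||_inf."""
    return max(sqrt(sum(x * x for x in row)) for row in U)

T = {(0, 0): 3.0, (1, 1): 4.0}
assert frobenius(T) == 5.0  # sqrt(9 + 16)
assert inner(T, T) == 25.0  # <T, T> = ||T||_F^2
assert two_inf_norm([[3.0, 4.0], [1.0, 0.0]]) == 5.0
```

The spectral and nuclear norms are omitted on purpose: as noted below, computing them is NP-hard for general tensors of order $d \geq 3$.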
\subsection{Simplified upper bound on tensor completion recovery error}\label{main_result_section}
Without going into details, we briefly state the upper bounds we establish (in Section \ref{section_Max_norm_constrained_LS_estimation}) on the recovery errors associated with M-norm and max-qnorm constrained tensor completion. For ease of comparison, we assume $N_1=N_2=\cdots=N_d=N$. Given a rank-$r$, order-$d$ tensor $T^{\sharp} \in \bigotimes_{i=1}^{d} \mathbb{R}^N$, and a random subset of its entries with indices in $S=\{\omega_1,\omega_2,\cdots,\omega_m\},\ \omega_i \in [N] \times [N] \times \cdots \times [N]$, we observe $m$ noisy entries $\{Y_{\omega_t}\}_{t=1}^{m}$ of $\{T^{\sharp}(\omega_t)\}_{t=1}^{m}$, where each observation is perturbed by i.i.d. noise with mean zero and variance $\sigma^2$. To state a simple version of the result, we assume that the indices in $S$ are drawn independently with the same probability for each observation, i.e., uniform sampling. We provide the general observation model in Section \ref{section_observation} and a general version of the theorem (which covers both uniform and non-uniform sampling) in Section \ref{TC_maxnorm_section}, and prove it in Section \ref{proof_theorem_atomic_TC}. The purpose of tensor completion is to recover $T^{\sharp}$ from $m$ random samples of $T^{\sharp}$ when $m \ll N^d$.
\begin{thm}\label{theorem_simplified}
Consider a rank-$r$, order-$d$ tensor $T^{\sharp} \in \bigotimes_{i=1}^{d} \mathbb{R}^N$ with $\|T^{\sharp}\|_{\infty} \leq \alpha$. Assume that we are given a collection of noisy observations
$$Y_{\omega_t}= T^{\sharp}(\omega_t) + \sigma \xi_t\ , \ \ t=1,\cdots,m,$$
where the noise variables $\xi_t$ are i.i.d. standard normal random variables and each index $\omega_t$ is chosen uniformly at random over all the indices of the tensor. Then if $m>dN$, there exists a constant $C<20$ such that the solution of
\begin{equation}
\hat{T}_{M} = \underset{X}{\text{arg min }} \frac{1}{m}\sum_{t=1}^{m} (X_{\omega_t}-Y_{\omega_t})^2 \ \ \ \ \text{subject to}\ \ \ \ \|X\|_{\infty} \leq \alpha,\ \|X\|_{M} \leq (r\sqrt{r})^{d-1} \alpha,
\end{equation}
satisfies
$$\frac{\|T^{\sharp}-\hat{T}_{M}\|_F^2}{N^d} \leq C (\alpha + \sigma) \alpha (r\sqrt{r})^{d-1}\sqrt{\frac{d N}{m}},$$
with probability greater than $1-e^{\frac{-N}{\text{ln}(N)}}-e^{-dN}$. Moreover, the solution of
\begin{equation}
\hat{T}_{\text{max}} = \underset{X}{\text{arg min }} \frac{1}{m}\sum_{t=1}^{m} (X_{\omega_t}-Y_{\omega_t})^2 \ \ \ \ \text{subject to}\ \ \ \ \|X\|_{\infty} \leq \alpha,\ \|X\|_{\text{max}} \leq (\sqrt{r^{d^2-d}}) \alpha,
\end{equation}
satisfies
$$\frac{\|T^{\sharp}-\hat{T}_{\text{max}}\|_F^2}{N^d} \leq C_d (\alpha + \sigma) \alpha \sqrt{r^{d^2-d}}\sqrt{\frac{d N}{m}},$$
with probability greater than $1-e^{\frac{-N}{\text{ln}(N)}}-e^{-dN}$.
\end{thm}
\begin{rem}
Above, $\|X\|_M$ is the M-norm of the tensor $X$, an atomic norm whose atoms are rank-$1$ sign tensors, defined in Section \ref{section_tensor_maxnorm_Mnorm}, \eqref{atomoc_norm_definition}, and $\|X\|_{\text{max}}$ is the max-qnorm of the tensor $X$, a generalization of the matrix max-norm to tensors, defined in Section \ref{section_tensor_maxnorm_Mnorm}, \eqref{maxnorm_tensor}.
\end{rem}
\begin{rem}[{\bf{theoretical contributions}}]
The general framework for establishing these upper bounds is already available; the key is to control the Rademacher complexity of the set of interest. The methods for the matrix case are available in, e.g., \cite{srebro2004learning,cai2016matrix}. To move to tensor completion, we study the interplay of the max-qnorm, the M-norm, and the rank of a tensor in Section \ref{max_norm_section}. The tools given in Section \ref{max_norm_section} allow us to generalize matrix completion results to tensor completion.
\end{rem}
\subsection{Organization}
In Section \ref{related_works}, we briefly overview recent results on tensor completion and max-norm constrained matrix completion. In Section \ref{max_norm_section}, we introduce the generalized tensor max-qnorm and characterize the max-qnorm unit ball, which is crucial in our analysis. This also leads to the definition of a certain convex atomic norm which gives similar bounds for the constrained tensor completion problem. We also prove that both the M-norm and the max-qnorm of a bounded rank-$r$ tensor $T$ can be bounded by a function of $\|T\|_{\infty}$ and $r$, independently of $N$. In Section \ref{TC_maxnorm_section}, we formulate the tensor completion problem and state the main results on recovering low-rank bounded tensors. We also compare our results with previous results on tensor completion and max-norm constrained matrix completion. In Section \ref{lower_bound_section}, we state a lower bound on the performance of M-norm constrained tensor completion, which proves optimal dependence on the size. In Section \ref{experiments and algorithms}, we present numerical results on the performance of max-qnorm constrained tensor completion and compare it with applying matrix completion to the matricized version of the tensor. Finally, Section \ref{section_proofs} contains all the proofs.
\section{Related work}\label{related_works}
\subsection{Tensor matricization}\label{tensor_matricization_section}
The process of reordering the elements of a tensor into a matrix is called matricization, also known as unfolding or flattening. For a tensor $X \in \bigotimes_{i=1}^{d} \mathbb{R}^{N_i}$, \emph{mode-}$i$ fibers of the tensor are $\Pi_{j \neq i} N_j$ vectors obtained by fixing all indices of $X$ except for the $i$-th one. The \emph{mode-}$i$ matricization of $X$, denoted by $X_{(i)} \in \mathbb{R}^{N_i \times \Pi_{j \neq i} N_j}$ is obtained by arranging all the \emph{mode-}$i$ fibers of $X$ along columns of the matrix $X_{(i)}$. More precisely, $X_{(i)}(i_i,j)=X(i_1, i_2, \cdots, i_d)$, where
$$j=1+\sum_{k=1,k \neq i}^d(i_k-1)J_k\ \ \ \ \text{with}\ \ \ \ J_k=\Pi_{m=1,m\neq i}^{k-1} N_m.$$
A detailed illustration of these definitions can be found in \cite{kolda2006multilinear,kolda2009tensor}.
A generalization of these unfoldings was proposed by \cite{mu2014square} that rearranges $X_{(1)}$ into a more balanced matrix: for $j \in \{1,\cdots,d\}$, $X_{[j]}$ is obtained by arranging the first $j$ dimensions along the rows and the rest along the columns. In particular, using Matlab notation, $X_{[j]}=\text{reshape}(X_{(1)},\Pi_{i=1}^{i=j} N_i,\Pi_{i=j+1}^{i=d} N_i)$. More importantly, for a rank-$r$ tensor $T = \sum_{i=1}^r \lambda_i u_i^{(1)} \circ u_i^{(2)} \circ \cdots \circ u_i^{(d)}$, we have $T_{[j]} = \sum_{i=1}^r \lambda_i (u_i^{(1)} \otimes u_i^{(2)} \otimes \cdots \otimes u_i^{(j)}) \circ (u_i^{(j+1)} \otimes \cdots \otimes u_i^{(d)})$, which is a matrix of rank at most $r$. Here, the symbol $\otimes$ represents the Kronecker product. Similarly, the ranks of all the matricizations defined above are less than or equal to the rank of the tensor.
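The mode-$i$ matricization and its column-index formula can be sketched directly; the code below is a 0-based variant of the 1-based formula $j=1+\sum_{k \neq i}(i_k-1)J_k$ in the text, with helper names of our own choosing and tensors stored as dictionaries keyed by index tuples:

```python
from itertools import product

def mode_i_unfold(T, sizes, i):
    """Mode-i matricization X_(i) of a tensor stored as
    {index tuple: value}; rows are indexed by i_i, columns by
    j = sum_{k != i} i_k * J_k with J_k the product of earlier sizes."""
    d = len(sizes)
    n_cols = 1
    for k in range(d):
        if k != i:
            n_cols *= sizes[k]
    M = [[0.0] * n_cols for _ in range(sizes[i])]
    for idx, val in T.items():
        j, J = 0, 1
        for k in range(d):
            if k == i:
                continue
            j += idx[k] * J  # accumulate the column index
            J *= sizes[k]    # stride for the next mode
        M[idx[i]][j] = val
    return M

# rank-1 order-3 tensor T(i1,i2,i3) = u(i1)*v(i2)*w(i3)
u, v, w = [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]
T = {(a, b, c): u[a] * v[b] * w[c]
     for a, b, c in product(range(2), range(2), range(2))}
M = mode_i_unfold(T, (2, 2, 2), 0)
# every unfolding of a rank-1 tensor has rank 1: row 1 is twice row 0
assert all(M[1][j] == 2.0 * M[0][j] for j in range(4))
```

This also illustrates the closing remark of the subsection: the rank of any matricization never exceeds the rank of the tensor.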
\subsection{Past results}\label{section_past_results}
Using the max-norm for learning low-rank matrices was pioneered in \cite{srebro2005maximum}, where the max-norm was used for collaborative prediction. In this paper we use the max-qnorm for tensor completion, generalizing a recent result on matrix completion via max-norm constrained optimization \cite{cai2016matrix}. In this section, we review some of the results related to M-norm and max-qnorm tensor completion. In particular, we first go over some matrix completion results, including those using the nuclear norm and the max-norm, and then review some results on tensor completion.\newline
Inspired by the result of \cite{fazel2002matrix}, which proved that the nuclear norm is the convex envelope of the rank function, most of the research on matrix completion has focused on nuclear-norm minimization. Assuming $M$ to be a rank-$r$, $N \times N$ matrix and $M_{\Omega}$ to be the set of $m$ independent samples of this matrix, it was proved in \cite{candes2009exact,srebro2004learning} that solving
\begin{equation}\label{optimizatin_nuclear_matrix}
\hat{M}:= \text{argmin}\ \|X\|_{\ast} \ \ \ \ \text{subject to} \ \ M_{\Omega}=X_{\Omega},
\end{equation}
recovers the matrix $M$ exactly if $|\Omega| > C N^{1.2} r \log(N)$, provided that the row and column space of the matrix is ``incoherent". This result was later improved in \cite{keshavan2009matrix} to $|\Omega|=O(Nr\log(N))$. There has been significant research in this area since then, either in sharpening the theoretical bound, e.g., \cite{bhojanapalli2014universal,candes2010power,recht2011simpler} or designing efficient algorithms to solve \eqref{optimizatin_nuclear_matrix}, e.g., \cite{jain2013low,cai2010singular}.\newline
More relevant to noisy tensor completion are the results of \cite{candes2010matrix,keshavan2010matrix,cai2016matrix} which consider recovering $M^{\sharp}$ from measurements $Y_{\Omega}$, where $Y=M^{\sharp}+Z$, and $|\Omega|=m$; here $Z$ is a noise matrix. It was proved in \cite{candes2010matrix} that if $\|Z_{\Omega}\|_F \leq \delta$, by solving the nuclear-norm minimization problem
$$\hat{M} := \underset{X}{\text{argmin}} \ \|X\|_{\ast}\ \ \text{subject to}\ \|(X-Y)_{\Omega}\|_F \leq \delta,$$
we can recover $\hat{M}$ where,
$$\frac{1}{N}\|M^{\sharp}-\hat{M}\|_F \leq C\sqrt{\frac{N}{m}} \delta + 2\frac{\delta}{N},$$
provided that there are sufficiently many measurements for perfect recovery in the noiseless case.\newline
Another approach was taken in \cite{keshavan2010matrix}, where the authors assume that $\|M^{\sharp}\|_{\infty} \leq \alpha$ and that $Z$ is a zero-mean random matrix whose entries are i.i.d. with subgaussian norm $\sigma$. They initialize the left and right singular vectors ($L$ and $R$) from the observations $Y_{\Omega}$ and prove that by solving
$$\underset{L,S,R} {\text{min}}\ \frac{1}{2}\|M^{\sharp}-LSR'\|_F^2\ \ \text{subject to}\ L'L=\mathbb{I}_r,R'R=\mathbb{I}_r,$$
one can recover a rank-$r$ matrix $\hat{M}$ where
$$\frac{1}{N}\|M^{\sharp} - \hat{M}\|_F \leq C \alpha\sqrt{\frac{Nr}{m}} + C'\sigma \sqrt{\frac{Nr\alpha\log(N)}{m}}.$$
Inspired by promising results on the use of the max-norm for collaborative filtering \cite{srebro2005maximum}, a max-norm constrained optimization was employed in \cite{foygel2011concentration} to solve the noisy matrix completion problem under the uniform sampling assumption. Nuclear-norm minimization has been proven to be rate-optimal for matrix completion, but it is not entirely clear whether it is the best approach under non-uniform sampling. In many applications, such as collaborative filtering, uniform sampling is not a reasonable assumption. For example, in the Netflix problem, some movies get much more attention and therefore have a higher chance of being rated than others. To tackle the issue of non-uniform samples, \cite{negahban2012restricted} suggested using a weighted nuclear norm, imposing probability distributions on samples belonging to each row or column. Motivated by similar considerations, \cite{cai2016matrix} generalized max-norm matrix completion to the case of non-uniform sampling and proved that, with high probability, $m=O(\frac{Nr}{\epsilon}\log^3(\frac{1}{\epsilon}))$ samples are sufficient to achieve mean squared recovery error $\epsilon$, where the mean squared error depends on the distribution of the observations. More precisely, in their error bound, entries that have a higher probability of being observed are recovered more accurately than entries with a lower probability of being observed. In particular, \cite{cai2016matrix} assumed the general sampling distribution explained in Section \ref{main_result_section} (with $d=2$), which includes both uniform and non-uniform sampling. Assuming that each entry of the noise matrix is a zero-mean Gaussian random variable with noise level $\sigma$, and $\|M^{\sharp}\|_{\infty} \leq \alpha$, they proved that the solution $\hat{M}_{\text{max}}$ of
$$\underset{\|M\|_{\text{max}}\leq \sqrt{r}\alpha} {\text{min}}\ \|(M^{\sharp}-M)_{\Omega}\|_F^2,$$
assuming $\pi_{\omega} \geq \frac{1}{\mu N^2},\forall \omega \in [N] \times [N]$, satisfies
$$\frac{1}{N^2}\|\hat{M}_{\text{max}}-M^{\sharp}\|_F^2 \leq C \mu (\alpha + \sigma) \alpha \sqrt{\frac{r N}{m}},$$
with probability greater than $1-2e^{-dN}$. This paper is a generalization of the above result to tensor completion.\newline
Finally, we briefly review some past results on tensor completion. To our knowledge, this paper provides the first result proving linear dependence of the sufficient number of random samples on $N$. It is worth mentioning, though, that \cite{krishnamurthy2013low} proves that $O(N r^{d-0.5} d\log(r))$ adaptively chosen samples are sufficient for exact recovery of tensors. However, that result depends heavily on the samples being adaptive.\newline
There is a long list of heuristic algorithms that attempt to solve the tensor completion problem using various decompositions or matricizations. In spite of showing good empirical results, these are not backed by a theoretical explanation of the superiority of using the tensor structure instead of matricization; e.g., see \cite{liu2013tensor,grasedyck2013literature}. The most popular approach is minimizing the sum of the nuclear norms of all the matricizations of the tensor along its modes. To be precise, one solves
\begin{equation}\label{SNN}
\underset{X}{\text{min}} \sum_{i=1}^d \beta_i \|X_{(i)}\|_{\ast}\ \text{subject to } X_{\Omega}=T^{\sharp}_{\Omega},
\end{equation}
where $X_{(i)}$ is the mode-$i$ matricization of the tensor (see \cite{liu2013tensor,signoretto2010nuclear,tomioka2010estimation}). The result obtained by solving \eqref{SNN} is highly sensitive to the choice of the weights $\beta_{i}$, and an exact recovery guarantee is not available. At least in the special case of tensor sensing, where the measurements of the tensor are its inner products with random Gaussian tensors, \cite{mu2014square} proves that $m=O(rN^{d-1})$ measurements are necessary for \eqref{SNN}, whereas a more balanced matricization such as $X_{[\floor{\frac{d}{2}}]}$ (as explained in Section \ref{tensor_matricization_section}) can achieve successful recovery with $m=O(r^{\floor{\frac{d}{2}}}N^{\ceil{\frac{d}{2}}})$ Gaussian measurements.\newline
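For readers less familiar with matricization, the following short numpy sketch computes a mode-$i$ unfolding under one common convention (mode-$i$ fibers become the columns; the exact column ordering differs across references, so this is one illustrative choice):

```python
import numpy as np

def unfold(T, i):
    """Mode-i matricization: mode-i fibers of T become the columns."""
    return np.moveaxis(T, i, 0).reshape(T.shape[i], -1)

T = np.arange(24).reshape(2, 3, 4)   # a small order-3 tensor
for i in range(3):
    print(unfold(T, i).shape)        # (2, 12), (3, 8), (4, 6)

# A mode-1 fiber of T appears as a column of the mode-1 unfolding:
col = unfold(T, 1)[:, 0]
assert np.array_equal(col, T[0, :, 0])
```

Each unfolding has $N_i$ rows and $\prod_{j \neq i} N_j$ columns, which is why nuclear-norm bounds based on unfoldings scale with $N^{d-1}$ rather than $N$.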
Assuming $T^{\sharp}$ is symmetric and has an orthogonal decomposition, \cite{jain2014provable} proves that when $d=3$, an alternating minimization algorithm achieves exact recovery from $O(r^5 N^{\frac{3}{2}} \log(N)^4)$ random samples. Moreover, the empirical results of that work are good for non-symmetric tensors as well, provided a good initial point can be found.\newline
In \cite{zhang2015exact}, a generalization of the singular value decomposition for tensors, called t-SVD, is used to prove that a third order tensor ($d=3$) can be recovered from $O(r N^2 \log(N)^2)$ measurements, provided that the tensor satisfies some incoherence conditions, called tensor incoherence conditions.\newline
The last related result we mention is a theoretical result that generalizes the nuclear norm to tensors as the dual of the spectral norm, avoiding any kind of matricization in the proof \cite{yuan2015tensor}. It shows that the sample size requirement for a tensor with low coherence is $m=O(\sqrt{rN^d}\log(N))$ when using the tensor nuclear norm. Comparing our result with that of \cite{yuan2015tensor}, an important question that needs to be investigated is whether the max-qnorm is a better measure of complexity of low-rank tensors than the nuclear norm, or whether the difference is just an artifact of the proofs. While we introduce the framework for the max-qnorm in this paper, an extensive comparison of these two norms is beyond its scope. Another difficulty of using the tensor nuclear norm is the lack of sophisticated, or even approximate, algorithms for minimizing the nuclear norm of a tensor.\newline
We compare our results with some of the above-mentioned results in Sections \ref{comparison_past_section} and \ref{experiments and algorithms}.
\section{Max-qnorm and atomic M-norm}\label{max_norm_section}
In this section, we introduce the max-qnorm and M-norm of tensors and characterize the unit ball of these norms as tensors that have a specific decomposition with bounded factors. This then helps us to prove a bound on the max-qnorm and M-norm of low-rank tensors that is independent of $N$. The results in this section might be of independent interest and therefore, to give an overview of properties of max-qnorm and M-norm, we state the theorems and some remarks about the theorems and postpone the proofs to Section \ref{section_proofs}.
\subsection{Matrix max-norm}
First, we define the max-norm of matrices, which was first studied in \cite{linial2007complexity} under the name $\gamma_2$ norm. We also mention some of the properties of the matrix max-norm that we generalize later in this section. Recall that the max-norm of a matrix is defined as
\begin{equation}\label{matrix_maxnorm}
\|M\|_{\text{max}}=\underset{M=U \circ V}{\text{min}}\lbrace \|U\|_{2,\infty}\|V\|_{2,\infty}\rbrace,
\end{equation}
where $\|U\|_{2,\infty} = \underset{\|x\|_2=1}{\sup} \|Ux\|_{\infty}$ is the maximum $\ell_2$-norm of the rows of $U$ \cite{linial2007complexity,srebro2005rank}.\newline
Consider all possible factorizations of a matrix $M=U\circ V$: the rank of $M$ is the minimum number of columns in the factors, and the nuclear norm of $M$ is the minimum product of the Frobenius norms of the factors. The max-norm, on the other hand, finds the factors with the smallest maximum row norm, as $\|U\|_{2,\infty}$ is the maximum $\ell_2$-norm of the rows of the matrix $U$. Furthermore, it was noticed in \cite{lee2010practical} that the max-norm is comparable with the nuclear norm in the following sense:
\begin{equation}\label{maxnorm_sum}
\|M\|_{\text{max}} \approx \text{inf}\lbrace\sum_j |\sigma_j|: M=\sum_j \sigma_j u_j v_j^T , \|u_j\|_{\infty}=\|v_j\|_{\infty}=1\rbrace.
\end{equation}
Here, the factor of equivalence is the Grothendieck's constant $K_G \in (1.67,1.79)$. To be precise, $\frac{\text{inf}\sum_j |\sigma_j|}{K_G} \leq \|M\|_{\text{max}} \leq \text{inf}\sum_j |\sigma_j|$, where the infimum is taken over all nuclear decompositions $M=\sum_j \sigma_j u_j v_j^T , \|u_j\|_{\infty}=\|v_j\|_{\infty}=1$.
Moreover, in connection with the element-wise $\ell_{\infty}$ norm we have:
\begin{equation}\label{linial}
\|M\|_{\infty} \leq \|M\|_{\text{max}} \leq \sqrt{\text{rank}(M)} \|M\|_{1,\infty} \leq \sqrt{\text{rank}(M)}\|M\|_{\infty}.
\end{equation}
This is an interesting result that shows that we can bound the max-norm of a low-rank matrix by an upper bound that is independent of $N$.
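The left inequality of \eqref{linial} can be checked numerically: any explicit factorization $M=UV^T$ certifies the upper bound $\|U\|_{2,\infty}\|V\|_{2,\infty} \geq \|M\|_{\text{max}}$, which in turn dominates $\|M\|_{\infty}$. The following numpy sketch (with hypothetical dimensions) builds a balanced factorization from the SVD and verifies the sandwich:

```python
import numpy as np

rng = np.random.default_rng(1)
N, r = 50, 3
# A rank-<=r matrix built from sign factors, so ||M||_inf <= r
M = rng.choice([-1.0, 1.0], size=(N, r)) @ rng.choice([-1.0, 1.0], size=(r, N))

# Balanced factorization M = U V^T from the truncated SVD
U_, s, Vt = np.linalg.svd(M, full_matrices=False)
U = U_[:, :r] * np.sqrt(s[:r])       # U diag(sqrt(s))
V = Vt[:r].T * np.sqrt(s[:r])        # V diag(sqrt(s))
assert np.allclose(U @ V.T, M)

row_norm = lambda X: np.linalg.norm(X, axis=1).max()   # ||.||_{2,inf}
upper = row_norm(U) * row_norm(V)    # certified upper bound on ||M||_max

inf_norm = np.abs(M).max()
print(inf_norm, upper)               # ||M||_inf <= ||M||_max <= upper
assert inf_norm <= upper + 1e-9
```

Note that this particular factorization need not attain the max-norm; computing $\|M\|_{\text{max}}$ exactly requires optimizing over all factorizations.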
\subsection{Tensor max-qnorm and atomic M-norm}\label{section_tensor_maxnorm_Mnorm}
We generalize the definition of max-norm to tensors as follows. Let $T$ be an order-$d$ tensor. Then
\begin{equation}\label{maxnorm_tensor}
\|T\|_{\text{max}}:= \underset{T=U^{(1)} \circ U^{(2)} \circ \cdots \circ U^{(d)}}{\text{min}}\lbrace \prod_{j=1}^{d} \|U^{(j)}\|_{2,\infty}\rbrace.
\end{equation}
Notice that this definition agrees with the definition of max-norm for matrices when $d=2$. As in the matrix case, the rank of the tensor is the minimum possible number of columns in the low-rank factorization of $T=U^{(1)} \circ U^{(2)} \circ \cdots \circ U^{(d)}$ and the max-qnorm is the minimum row norm of the factors over all such decompositions.\newline
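As a sanity check on definition \eqref{maxnorm_tensor}, for a rank-$1$ sign tensor $T = u_1 \circ u_2 \circ u_3$ with $u_j \in \{\pm 1\}^N$, the natural one-column factorization has $\|U^{(j)}\|_{2,\infty}=1$ for every $j$, so the max-qnorm objective equals $1 = \|T\|_{\infty}$. The following numpy sketch (illustrative sizes) verifies this:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 10, 3
# One-column factors with +-1 entries: a rank-1 sign tensor
u = [rng.choice([-1.0, 1.0], size=(N, 1)) for _ in range(d)]

# Build T = u1 o u2 o u3 via iterated outer products
T = u[0].ravel()
for j in range(1, d):
    T = np.multiply.outer(T, u[j].ravel())

row_norm = lambda X: np.linalg.norm(X, axis=1).max()     # ||.||_{2,inf}
objective = np.prod([row_norm(uj) for uj in u])          # prod of factor norms
print(objective)   # 1.0: every row of every factor is a single +-1 entry

# Consistent with alpha <= ||T||_max <= objective at alpha = ||T||_inf = 1
assert np.abs(T).max() == 1.0 and objective == 1.0
```

This is the sense in which the sign tensors $T_{\pm}$ introduced below are the "unit atoms" of the max-qnorm and M-norm.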
\begin{thm}\label{theorem_max-qnorm_quasi}
For $d \geq 3$, the max-qnorm \eqref{maxnorm_tensor} does not satisfy the triangle inequality. However, it satisfies a quasi-triangle inequality
$$\|X+T\|_{\text{max}} \leq 2^{\frac{d}{2}-1} (\|X\|_{\text{max}}+\|T\|_{\text{max}}),$$
and, therefore, is a quasi-norm.
\end{thm}
The proof of this theorem is in Section \ref{section_max-qnorm_quasi}. Later, in Section \ref{section_Max_norm_constrained_LS_estimation}, we prove that max-qnorm constrained least squares estimation, with the max-qnorm as in \eqref{max_norm_tensor_definition}, breaks the $O(N^{\frac{d}{2}})$ limitation on the number of measurements, mainly because of two properties:
\begin{itemize}
\item Max-qnorm of a bounded low rank tensor does not depend on the size of the tensor.\medskip
\item Defining $T_{\pm}:=\lbrace T \in \{\pm 1\}^{N_1\times N_2 \times \cdots \times N_d}\ |\ \text{rank}(T)=1\rbrace$, the unit ball of the tensor max-qnorm is a subset of $C_d\, \text{conv}(T_{\pm})$, a scaled convex hull of fewer than $2^{Nd}$ rank-$1$ sign tensors. Here $C_d$ is a constant that depends only on $d$, and $\text{conv}(S)$ denotes the convex hull of the set $S$.
\end{itemize}
However, the max-qnorm is non-convex. To obtain a convex alternative that still satisfies the properties mentioned above, we consider the norm induced by the set $T_{\pm}$ directly; this is an atomic norm as discussed in \cite{chandrasekaran2012convex}. The atomic M-norm of a tensor $T$ is then defined as the gauge of $T_{\pm}$ \cite{rockafellar2015convex}, given by
\begin{equation}\label{atomoc_norm_definition}
\|T\|_{M} := \text{inf}\{t>0:T \in t\ \text{conv}(T_{\pm})\}.
\end{equation}
As $T_{\pm}$ is centrally symmetric around the origin and spans $\bigotimes_{j=1}^{d} \mathbb{R}^{N_j}$, this atomic norm is a convex norm and the gauge function can be rewritten as
\begin{equation}
\|T\|_{M} = \text{inf}\{ \sum_{X \in T_{\pm}} c_X :\ T=\sum_{X \in T_{\pm}} c_X X, c_X \geq 0,X \in T_{\pm}\}.
\end{equation}
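For very small instances, this gauge can be computed exactly as a linear program over the atoms $T_{\pm}$. The following sketch (a brute-force illustration, feasible only because $d=2$ and $N=2$ give just $16$ atoms; it uses \texttt{scipy.optimize.linprog}) computes the M-norm of a small diagonal matrix:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

N = 2
# Atoms of T_pm for d=2: rank-1 sign matrices u v^T with u, v in {+-1}^N
atoms = [np.outer(u, v) for u in product([-1, 1], repeat=N)
                        for v in product([-1, 1], repeat=N)]
A_eq = np.stack([a.ravel() for a in atoms], axis=1)   # columns = vec(atoms)

T = np.array([[2.0, 0.0], [0.0, 2.0]])
# ||T||_M = min 1^T c  subject to  sum_X c_X X = T,  c >= 0
res = linprog(c=np.ones(len(atoms)), A_eq=A_eq, b_eq=T.ravel(),
              bounds=(0, None))
print(res.fun)   # 2.0, attained e.g. by [[1,1],[1,1]] + [[1,-1],[-1,1]]
assert abs(res.fun - 2.0) < 1e-6
```

The value $2$ also matches the lower bound $\|T\|_{M} \geq \|T\|_{\infty}$, since every atom has entries in $\{\pm 1\}$. Of course, the number of atoms grows exponentially in $N$ and $d$, which is why this LP is only a conceptual device.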
\subsection{Unit max-qnorm ball of tensors}
In the next lemma, we prove that, similar to the matrix case, the tensor unit max-qnorm ball is comparable to the set $T_{\pm}$. First define $\mathbb{B}_{\text{max}}^T(1) := \lbrace T \in \mathbb{R}^{N_1 \times \cdots \times N_d}\ |\ \|T\|_{\text{max}} \leq 1\rbrace$ and $\mathbb{B}_{M}(1) := \{ T : \|T\|_{M}\leq 1\}$.
\begin{lem}\label{max_tensor_ball}
The unit ball of the max-qnorm, unit ball of atomic M-norm, and $\text{conv}(T_{\pm})$ satisfy the following:
\begin{enumerate}
\item $\mathbb{B}_{M}(1)=\text{conv}(T_{\pm})$, \medskip
\item $\mathbb{B}_{\text{max}}^T(1) \subset c_1 c_2^d\, \text{conv}(T_{\pm})$.
\end{enumerate}
\end{lem}
Here $c_1$ and $c_2$ are derived from the generalized Grothendieck theorem \cite{tonge1978neumann,blei1979multidimensional} which is explained thoroughly in Section \ref{section_max-qnorm_quasi}.\newline
Using Lemma \ref{max_tensor_ball}, it is easy to analyze the Rademacher complexity of the unit balls of these two norms. In fact, noticing that $T_{\pm}$ is a finite class with $|T_{\pm}|<2^{dN}$, together with some basic properties of Rademacher complexity, we can prove the following lemma. Below, $\hat{R}_S(X)$ denotes the empirical Rademacher complexity of $X$. To keep this section simple, we refer to Section \ref{rademacher_complexity} for the definition of Rademacher complexity and the proof of Lemma \ref{lemma_rademacher}.
\begin{lem}\label{lemma_rademacher}
The Rademacher complexity of unit balls of M-norm and max-qnorm is bounded by
\begin{enumerate}
\item $\underset{S:|S|=m}{\sup}\ \hat{R}_S(\mathbb{B}_{M}(1)) < 6 \sqrt{\frac{dN}{m}}$,\medskip
\item $\underset{S:|S|=m}{\sup}\ \hat{R}_S(\mathbb{B}_{\text{max}}^T(1)) < 6 c_1 c_2^d \sqrt{\frac{dN}{m}}$.
\end{enumerate}
\end{lem}
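Since $\mathbb{B}_{M}(1)=\text{conv}(T_{\pm})$ and the supremum of a linear functional over a convex hull is attained at an atom, the empirical Rademacher complexity can be estimated by Monte Carlo for tiny instances. The sketch below (illustrative parameters; the Monte Carlo average is a rough estimate, not a proof) checks the bound of the lemma for $N=2$, $d=3$:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
N, d, m = 2, 3, 5
# All rank-1 sign tensors for d=3, N=2 (8^... = 64 atoms here)
atoms = [np.einsum('i,j,k->ijk', np.array(u), np.array(v), np.array(w))
         for u in product([-1, 1], repeat=N)
         for v in product([-1, 1], repeat=N)
         for w in product([-1, 1], repeat=N)]
S = [tuple(rng.integers(0, N, size=d)) for _ in range(m)]   # sample points

# Monte Carlo estimate of E_eps sup_{X in T_pm} (1/m) sum_t eps_t X(w_t)
vals = np.array([[X[w] for w in S] for X in atoms])         # atoms at S
est = np.mean([np.max(vals @ eps) / m
               for eps in rng.choice([-1, 1], size=(500, m))])

print(est, 6 * np.sqrt(d * N / m))   # estimate vs. the bound of the lemma
assert est <= 6 * np.sqrt(d * N / m)
```

The check is loose here (any class with $\pm 1$-valued entries trivially has complexity at most $1$), but it makes the object being bounded concrete.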
\subsection{Max-qnorm and M-norm of bounded low-rank tensors}
Next, we bound the max-qnorm and M-norm of a rank-$r$ tensor whose (entry-wise) infinity norm is at most $\alpha$. We first bound the max-qnorm; a similar proof yields a bound on the M-norm, as explained in Section \ref{section_proof_atomicbound}. As mentioned before, for $d=2$, i.e., the matrix case, an inequality that does not depend on the size of the matrix has been proved, namely $\|M\|_{\text{max}} \leq \sqrt{\text{rank}(M)}\ \alpha$. In what follows, we bound the max-qnorm and M-norm of a rank-$r$ tensor $T$ with $\|T\|_{\infty} \leq \alpha$. \newline
\begin{thm}\label{theorem_atomicnorm_bound}
Assume $T \in \mathbb{R}^{N_1 \times \cdots \times N_d}$ is a rank-$r$ tensor with $\|T\|_{\infty} = \alpha$. Then
\begin{itemize}
\item $\alpha \leq \|T\|_{M} \leq (r\sqrt{r})^{d-1} \alpha.$\medskip
\item $\alpha \leq \|T\|_{\text{max}} \leq \sqrt{r^{d^2-d}} \alpha.$
\end{itemize}
\end{thm}
\noindent
The proofs of these two bounds are similar, and both can be found in Section \ref{section_proof_atomicbound}. Notice the discrepancy between Theorem \ref{theorem_atomicnorm_bound} and the matrix bound when $d=2$. This is an artifact of the proof, which hints that Theorem \ref{theorem_atomicnorm_bound} might not be optimal in $r$ for general $d$ either.\newline
\section{M-norm constrained tensor completion}\label{TC_maxnorm_section}
In this section, we consider the problem of tensor completion from noisy measurements of a subset of the tensor entries. As explained before, we assume that the indices of the measured entries are drawn independently at random with replacement, and that the tensor of interest is low-rank with bounded entries. Instead of constraining the problem to the set of low-rank bounded tensors, we consider a more general setting: the set of bounded tensors with bounded M-norm, which includes the set of low-rank bounded tensors. We minimize the constrained least squares (LS) problem given in \eqref{optimization_TC_atomicnorm} below. Similar results can be obtained for max-qnorm constrained LS; we only provide the final result of the max-qnorm constrained problem in Theorem \ref{theorem_maxqnorm_TC}, as the steps are essentially identical to the M-norm constrained case. When $d=2$, i.e., the matrix case, max-norm constrained matrix completion has been thoroughly studied in \cite{cai2016matrix}, so we will not discuss the lemmas and theorems that can be directly used in the tensor case; see \cite{cai2016matrix} for more details.
\subsection{Observation model}\label{section_observation}
Given an order-$d$ tensor $T^{\sharp} \in \mathbb{R}^{N^d}$ and a random subset of indices $S=\{\omega_1,\omega_2,\cdots,\omega_m\},$ $\omega_i \in [N] \times [N] \times \cdots \times [N]$, we observe $m$ noisy entries $\{Y_{\omega_t}\}_{t=1}^{m}$:
\begin{equation}\label{noisy_measurements}
Y_{\omega_t}= T^{\sharp}(\omega_t) + \sigma \xi_t\ , \ \ t=1,\cdots,m,
\end{equation}
for some $\sigma > 0$. The variables $\xi_t$ are zero-mean i.i.d. random variables with $\mathbb{E}(\xi_t^2)=1$. The indices in $S$ are drawn randomly with replacement from a predefined probability distribution $\Pi=\{\pi_\omega\}$, $\omega \in [N]\times[N]\times \cdots\times [N]$, with $\sum_\omega \pi_\omega=1$. Obviously $\max_\omega \pi_\omega\geq \frac{1}{N^d}$. Although it is not a necessary condition for our proof, it is natural to assume that there exists $\mu\geq 1$ such that
\begin{equation*}
\pi_{\omega} \geq \frac{1}{\mu N^d}\ \ \forall \omega \in [N]^d,
\end{equation*}
which ensures that each entry is observed with some positive probability. This observation model includes both uniform and non-uniform sampling and is a better fit than uniform sampling in many practical applications.
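The observation model \eqref{noisy_measurements} is easy to simulate; the following numpy sketch (with a placeholder tensor and an arbitrary non-uniform $\Pi$, both hypothetical) draws $m$ indices i.i.d. with replacement from $\Pi$ and produces the corresponding noisy observations:

```python
import numpy as np

rng = np.random.default_rng(4)
N, d, m, sigma = 4, 3, 30, 0.1
shape = (N,) * d

# A hypothetical non-uniform sampling distribution Pi on [N]^d
pi = rng.random(shape)
pi /= pi.sum()

T_true = rng.choice([-1.0, 1.0], size=shape)   # placeholder for T^sharp

# Draw m indices i.i.d. from Pi *with replacement*, then observe noisy entries
flat_idx = rng.choice(N ** d, size=m, p=pi.ravel())
omegas = [np.unravel_index(t, shape) for t in flat_idx]
Y = np.array([T_true[w] + sigma * rng.standard_normal() for w in omegas])

assert len(Y) == m and abs(pi.sum() - 1.0) < 1e-12
```

Sampling with replacement means repeated indices can occur, which is exactly the setting the error bounds below are stated in.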
\subsection{M-norm constrained least squares estimation}\label{section_Max_norm_constrained_LS_estimation}
Given a collection of noisy observations $\{Y_{\omega_t}\}_{t=1}^{m}$ of a low-rank tensor $T^{\sharp}$, following the observation model \eqref{noisy_measurements}, we solve a least squares problem to find an estimate of $T^{\sharp}$. Consider the set of bounded M-norm tensors with bounded infinity norm
$$K^T_{M}(\alpha,R):=\lbrace T \in \mathbb{R}^{N_1 \times N_2 \times \cdots \times N_d}: \|T\|_{\infty} \leq \alpha, \|T\|_{M} \leq R \rbrace.$$
Notice that if $T^{\sharp}$ has rank $r$ and $\|T^{\sharp}\|_{\infty} \leq \alpha$, Theorem \ref{theorem_atomicnorm_bound} ensures that the choice $R = (r\sqrt{r})^{d-1} \alpha$ suffices to include $T^{\sharp}$ in $K^T_{M}(\alpha,R)$. Defining
\begin{equation}\label{L_TC}
\mathcal{L}_{m}(X,Y):=\frac{1}{m}\sum_{t=1}^{m} (X_{\omega_t}-Y_{\omega_t})^2,
\end{equation}
we bound the recovery error for the estimate $\hat{T}_{M}$ obtained by solving the optimization problem
\begin{equation}\label{optimization_TC_atomicnorm}
\hat{T}_{M} = \underset{X}{\text{arg min }} \mathcal{L}_{m}(X,Y) \ \ \ \ \text{subject to}\ \ \ \ X \in K^T_{M}(\alpha,R),\ R\geq \alpha.
\end{equation}
In words, $\hat{T}_{M}$ is a tensor with entries bounded by $\alpha$ and M-norm at most $R$ that best fits the observed entries in the least squares sense. \newline
We now state the main result on the performance of M-norm constrained tensor completion as in \eqref{optimization_TC_atomicnorm} for recovering a bounded low-rank tensor.
\begin{thm}\label{theorem_atomic_TC}
Consider an order-$d$ tensor $T^{\sharp} \in \bigotimes_{i=1}^{d} \mathbb{R}^N$ with $\|T^{\sharp}\|_{\infty} \leq \alpha$ and $\|T^{\sharp}\|_{M} \leq R$. Given a collection of noisy observations $\{Y_{\omega_t}\}_{t=1}^{m}$ following the observation model \eqref{noisy_measurements}, where the noise variables $\xi_t$ are i.i.d. standard normal, there exists a constant $C<20$ such that the minimizer $\hat{T}_{M}$ of \eqref{optimization_TC_atomicnorm} satisfies:
\begin{equation}\label{TC_error}
\|\hat{T}_{M}-T^{\sharp}\|_{\Pi}^2:=\sum_\omega \pi_\omega (\hat{T}_{M}(\omega)-T^{\sharp}(\omega))^2 \leq C \left( \sigma(R+\alpha) + R\alpha \right) \sqrt{\frac{dN}{m}},
\end{equation}
with probability greater than $1-e^{\frac{-N}{\text{ln}(N)}}-e^{-dN}$.
\end{thm}
\begin{cor}
If we assume each entry of the tensor is sampled with some positive probability, i.e., $\pi_{\omega} \geq \frac{1}{\mu N^d}$ for all $\omega \in [N]^d$, then for a sample size $m>dN$ we get
\begin{equation}\label{TC_frobenious_error}
\frac{1}{N^d}\|\hat{T}_{M}-T^{\sharp}\|_{F}^2 \leq C \mu (\alpha + \sigma) R \sqrt{\frac{dN}{m}},
\end{equation}
with probability greater than $1- e^{\frac{-N}{\text{ln}(N)}} -e^{-dN}$.
\end{cor}
\begin{rem}
In Section \ref{main_result_section}, we presented a simplified version of the above theorem when $\mu=1$ and $T^{\sharp}$ is a rank-$r$ tensor which uses the bound $\|T^{\sharp}\|_{M} < (r\sqrt{r})^{d-1} \alpha$ proved in Theorem \ref{theorem_atomicnorm_bound}.
\end{rem}
\begin{rem}
The upper bound \eqref{TC_error} is general and does not impose any restrictions on the sampling distribution $\pi$. However, the recovery error depends on the distribution. In particular, the entries that have a bigger probability of being sampled have a better recovery guarantee compared to the ones that are sampled with smaller probability.
\end{rem}
\begin{cor}
Under the same assumptions as in Theorem \ref{theorem_atomic_TC}, but assuming instead that the $\xi_t$ are independent sub-exponential random variables with sub-exponential norm $K$, i.e.,
$$\underset{t=1,\cdots,m}{\max}\ \mathbb{E} [\exp(\frac{|\xi_t|}{K})] \leq e,$$
then for a sample size $m>dN$ we get
\begin{equation}\label{TC_frobenious_error_subexp}
\frac{1}{N^d}\|\hat{T}_{M}-T^{\sharp}\|_{F}^2 \leq C \mu (\alpha + \sigma K) R \sqrt{\frac{dN}{m}},
\end{equation}
with probability greater than $1- 2e^{\frac{-N}{\text{ln}(N)}}$.
\end{cor}
Although equation \eqref{TC_frobenious_error} proves linear dependence of the sample complexity on $N$, we are not aware of a polynomial-time method for estimating (or even attempting to estimate) the solution of \eqref{optimization_TC_atomicnorm}. However, we later propose an algorithm inspired by max-qnorm constrained tensor completion and illustrate its efficiency numerically. We therefore also analyze the error bound of max-qnorm constrained tensor completion, which is very similar to the error bound of \eqref{optimization_TC_atomicnorm}. To this end, we define the set of low max-qnorm tensors as
$$K^T_{\text{max}}(\alpha,R):=\lbrace T \in \mathbb{R}^{N_1 \times N_2 \times \cdots \times N_d}: \|T\|_{\infty} \leq \alpha, \|T\|_{\text{max}} \leq R \rbrace.$$
Note that Theorem \ref{theorem_atomicnorm_bound} ensures that the choice $R = \sqrt{r^{d^2-d}}\alpha$ is sufficient to include $T^{\sharp}$ in $K^T_{\text{max}}(\alpha,R)$. The following theorem provides the bound for max-qnorm constrained LS estimation.
\begin{thm}\label{theorem_maxqnorm_TC}
Consider an order-$d$ tensor $T^{\sharp} \in \bigotimes_{i=1}^{d} \mathbb{R}^N$ with $\|T^{\sharp}\|_{\infty} \leq \alpha$ and $\|T^{\sharp}\|_{\text{max}} \leq R$. Given a collection of noisy observations $\{Y_{\omega_t}\}_{t=1}^{m}$ following the observation model \eqref{noisy_measurements}, where the noise variables $\xi_t$ are i.i.d. standard normal, define
\begin{equation}\label{optimization_TC_maxnorm}
\hat{T}_{\text{max}} = \underset{X}{\text{arg min }} \mathcal{L}_{m}(X,Y) \ \ \ \ \text{subject to}\ \ \ \ X \in K^T_{\text{max}}(\alpha,R),\ R\geq \alpha.
\end{equation}
Then there exists a constant $C_d$ such that the minimizer $\hat{T}_{\text{max}}$ of \eqref{optimization_TC_maxnorm} satisfies:
\begin{equation}\label{TC_qmax_error}
\|\hat{T}_{\text{max}}-T^{\sharp}\|_{\Pi}^2=\sum_\omega \pi_\omega (\hat{T}_{\text{max}}(\omega)-T^{\sharp}(\omega))^2 \leq C_d \left( \sigma(R+\alpha) + R\alpha \right) \sqrt{\frac{dN}{m}},
\end{equation}
with probability greater than $1-e^{\frac{-N}{\text{ln}(N)}}-e^{-dN}$.
\end{thm}
\begin{cor}
If we assume, moreover, that each entry of the tensor is sampled with some positive probability, i.e., $\pi_{\omega} \geq \frac{1}{\mu N^d}$ for all $\omega \in [N]^d$, then for a sample size $m>dN$ we get
\begin{equation}\label{TC_qfrobenious_error}
\frac{1}{N^d}\|\hat{T}_{\text{max}}-T^{\sharp}\|_{F}^2 \leq C_d \mu (\alpha + \sigma) R \sqrt{\frac{dN}{m}},
\end{equation}
with probability greater than $1-e^{\frac{-N}{\text{ln}(N)}}-e^{-dN}$.
\end{cor}
\begin{rem}
In Section \ref{main_result_section}, we presented a simplified version of the above theorem when $\mu=1$ and $T^{\sharp}$ is a rank-$r$ tensor which uses the bound $\|T^{\sharp}\|_{\text{max}} < \alpha \sqrt{r^{d^2-d}}$ proved in Theorem \ref{theorem_atomicnorm_bound}.
\end{rem}
The proof of this theorem is very similar to the proof of Theorem \ref{theorem_atomic_TC}. The only differences are: (\RNum{1}) the max-qnorm is a quasi-norm and therefore the max-qnorm of the error tensor ($\hat{T}_{\text{max}} - T^{\sharp}$) is bounded by $2^{d-1} R$; (\RNum{2}) the unit ball of max-qnorm is larger than the unit ball of M-norm. The details of these differences are provided in Remark \ref{remark_proof_maxnorm} in Section \ref{proof_theorem_atomic_TC}.
\section{Comparison to past results}\label{comparison_past_section}
As discussed in Section \ref{section_past_results}, several works have considered the max-norm for matrix completion \cite{srebro2005maximum,lee2010practical,foygel2012matrix, shen2014online,fang2015max}. However, the closest work to ours is \cite{cai2016matrix}, where the authors study max-norm constrained matrix completion, which is the special case of max-qnorm constrained tensor completion with $d=2$. Here, we have generalized the framework of \cite{cai2016matrix} to the problem of tensor completion. Although the main ideas of the proof are similar, the new ingredients include building the machinery for analyzing the max-qnorm and M-norm of low-rank tensors, as explained in Section \ref{max_norm_section}. As expected, our result reduces to the one in \cite{cai2016matrix} when $d=2$. More interestingly, when $d>2$, compared to the matrix error bound, the only quantities in the upper bound \eqref{TC_error} that change are the upper bound on the max-qnorm of the $d$-th order tensor (which is independent of $N$) and the order $d$, which changes the constants slightly.\newline
As can be seen from Theorem \ref{theorem_atomicnorm_bound}, for a rank-$r$ tensor $T$ with $\|T\|_{\infty} \leq \alpha$, we have $\|T\|_{M} \leq (r\sqrt{r})^{d-1} \alpha$. Therefore, assuming $\alpha=O(1)$, to obtain an error bound $\frac{1}{N^d}\|\hat{T}_{M}-T^{\sharp}\|_F^2 \leq \epsilon$ it is sufficient to have $m > C \frac{(r\sqrt{r})^{d-1} d N} {\epsilon^2}$ samples. Similarly, using the max-qnorm, for an approximation error bounded by $\epsilon$ it is sufficient to have $m > C_d \frac{r^{d^2-d} d N}{\epsilon^2}$ samples. In contrast, the sufficient number of measurements with the best possible matricization is $m> C \frac{r N^{\ceil{\frac{d}{2}}}}{\epsilon^2}$, which is significantly larger for higher-order tensors.\newline
Tensor completion using the nuclear norm gives significantly inferior bounds as well. In particular, fixing $r$ and $d$, compared to the latest results on tensor completion using the nuclear norm \cite{yuan2015tensor}, using the M-norm lowers the theoretical sufficient number of measurements from $O(N^{\frac{d}{2}})$ to $O(dN)$.
\section{Information theoretic lower bound}\label{lower_bound_section}
To prove a lower bound on the performance of \eqref{optimization_TC_atomicnorm}, we employ a classical information theoretic technique to establish a minimax lower bound for non-uniform sampling of random tensor completion on the M-norm ball. A similar strategy in the matrix case was used in \cite{davenport20141,cai2016matrix}. In order to derive a lower bound on the performance of \eqref{optimization_TC_atomicnorm}, we find a set of tensors in $K^T_{M}$ that are sufficiently far away from each other. Fano's inequality implies that, with the finite amount of information available, no method can differentiate between all the elements of a set that is too large, and therefore any method will fail to recover at least one of them with large probability. The main ideas and techniques closely follow \cite[Section 6.2]{cai2016matrix}; therefore, we only explain the main steps we take to generalize this approach from matrices to tensors.\newline
For simplicity, we assume $N_1=N_2=\cdots=N_d=N$. As in the upper bound case, we analyze a general restriction on the M-norm of the tensors instead of concentrating on low-rank tensors. Plugging in the upper bound on the M-norm of low-rank tensors as a special case provides a lower bound for low-rank tensors as well.\newline
Recalling the set of bounded M-norm tensors given by
\begin{equation}\label{K}
K_{M}^T(\alpha,R):=\lbrace T \in \mathbb{R}^{N \times N \times \cdots \times N}: \|T\|_{\infty} \leq \alpha, \|T\|_{M} \leq R \rbrace,
\end{equation}
we will find a lower bound on the recovery error of any method that takes $\{Y_{\omega_t}\}_{t=1}^{m}$ as input and outputs an estimate $\hat{T}$. This includes $\hat{T}_{M}$, which is obtained by
\begin{equation}\label{lower_ptimization_maxnorm}
\hat{T}_{M} = \underset{X}{\text{arg min }} \mathcal{L}_{m}(X,Y) \ \ \ \ \text{subject to}\ \ \ \ X \in K_{M}^T(\alpha,R).
\end{equation}
In particular, we show that when the sampling distribution satisfies
$$\frac{\mu}{N^d} \leq \text{min}_{\omega} \pi_{\omega} \leq \max_{\omega} \pi_{\omega} \leq \frac{L}{N^d},$$
the M-norm constrained least squares estimator is rate optimal on $K_{M}^T(\alpha,R)$.
\begin{thm}\label{theorem_atomic_TC_lowerbound}
Assume that the noise variables $\xi_t$ are i.i.d. standard normal and that the sampling distribution $\Pi$ satisfies $\max_{\omega} \pi_{\omega} \leq \frac{L}{N^d}$. Fix $\alpha$, $R$, $N$, and $m$ such that
\begin{equation}\label{conditions_lowerbound_theorem}
R^2 \geq \frac{48\alpha^2 K_G^2}{N},
\end{equation}
Then the minimax recovery error is lower bounded by
\begin{equation}\label{TC_lowerbound}
\underset{\hat{T}_M}{\inf} \underset{T \in K_{M}^T(\alpha,R)}{\sup} \frac{1}{N^d} \mathbb{E}\|\hat{T}_{M}-T\|_F^2 \geq \text{min} \{\frac{\alpha^2}{16},\frac{\sigma R}{128\sqrt{2}K_G}\sqrt{\frac{N}{mL}}\}.
\end{equation}
\end{thm}
\begin{rem}
Comparing the above theorem with \eqref{TC_frobenious_error}, we observe that as long as $\frac{\sigma R}{128\sqrt{2}K_G}\sqrt{\frac{N}{mL}} < \frac{\alpha^2}{16}$, M-norm constrained tensor completion is optimal in both $N$ and $R$.
\end{rem}
\section{Experiments}\label{experiments and algorithms}
In this section, we present the algorithms that we use to solve \eqref{optimization_TC_maxnorm}, together with experiments concerning the max-qnorm of specific classes of tensors and max-qnorm constrained tensor completion. As mentioned before, most of the relevant quantities, such as the nuclear norm or even the rank of a tensor, are NP-hard to compute. The situation seems even more hopeless in view of the results of \cite{barak2015noisy}, which connect $3$-dimensional tensor completion with refuting $3$-SAT, a problem with a long line of research behind it. In short, if either the max-qnorm or the M-norm were computable in polynomial time, a conjecture of \cite{daniely2013more} on refuting $3$-SAT would be disproved. All that being said, the current paper is the first to consider the max-qnorm for tensor completion, and the preliminary results we show in this section are promising, outperforming matricization in every experiment we ran, and even outperforming the TenALS algorithm of \cite{jain2014provable}.\newline
In this section, we concentrate on \eqref{optimization_TC_maxnorm} instead of \eqref{optimization_TC_atomicnorm}, as we are not aware of any algorithm that can even attempt to solve \eqref{optimization_TC_atomicnorm}, while simple heuristic algorithms that we designed for \eqref{optimization_TC_maxnorm} give promising results, even though, due to the non-convexity of \eqref{optimization_TC_maxnorm}, we do not know of any algorithm that is guaranteed to converge.\newline
There are two questions that need to be answered when solving \eqref{optimization_TC_maxnorm}: first, how to choose the max-qnorm bound $R$, and second, how to solve the least squares problem once $R$ is fixed. We address both questions in the next sections. We also run experiments to estimate the max-qnorm of some specific classes of tensors, to get an idea of how the max-qnorm of a tensor depends on its size and rank. Finally, we compare the results of max-qnorm constrained tensor completion with TenALS and with matricization.
\subsection{Algorithms for max-qnorm constrained least squares estimation}\label{section_algorithms_maxnormTC}
In this section, we introduce a few algorithms that attempt to solve (or approximate the solution of) \eqref{optimization_TC_maxnorm}. Defining $f(V_1, \cdots, V_d,Y):=\mathcal{L}_m((V_1 \circ \cdots \circ V_d),Y)$, we minimize
\begin{equation}\label{optimization_maxnorm_algorithm}
\min_{V_1, \cdots, V_d}\ f(V_1, \cdots, V_d,Y) \text{ subject to } \max_i (\|V_i\|_{2,\infty}) \leq \sqrt[d]{R},
\end{equation}
where $R$ is the max-qnorm constraint. In the definition of the max-qnorm, there is no limitation on the number of columns of the factors $V_i$. In the experiments in this section, we limit the factor sizes to $N \times 2N$. Although this is an arbitrary choice and we have not derived an error bound for the max-qnorm of tensors under this restriction, we believe (and our experiments confirm) that this choice is large enough when $r \ll N$. We defer the exact analysis of the effect of this choice on the error bounds to future work.\newline
All the algorithms mentioned in this section are first-order methods that are scalable to higher dimensions and require access only to the first derivative of the loss function.
\subsubsection{Projected gradient}
The first algorithm is projected gradient descent, which, for each factor, fixes all the other factors and takes a step along the negative gradient of the loss function; all the factors are then projected back onto the set $C:=\{X \mid \|X\|_{2,\infty} \leq \sqrt[d]{R}\}$. To be precise, for each factor $V_i$, write the matricization of $T=V_1 \circ \cdots \circ V_d$ along the $i$-th dimension as $T_i=V_i \circ R_i$ and define $f_i(X):=\mathcal{L}((X \circ R_i),Y_i)$, where $Y_i$ is the matricization of $Y$ along its $i$-th dimension. Fixing a step size $\gamma$, the algorithm updates all the factors in parallel via
\begin{equation}
V_i \leftarrow \mathbb{P}_{C}\left(V_i - \gamma\, \nabla(f_i)\, R_i\right),
\end{equation}
where $\mathbb{P}_C$ projects its argument onto the set of matrices with $\ell_{2,\infty}$ norm at most $\sqrt[d]{R}$: the projection examines each row of the matrix and, if the norm of a row exceeds $\sqrt[d]{R}$, rescales that row to have norm exactly $\sqrt[d]{R}$, leaving the other rows unchanged.\newline
This is a well-known algorithm with many efficient implementations and modifications. Furthermore, using the Armijo line search rule to guarantee sufficient decrease of the loss function, it is guaranteed to find a stationary point of \eqref{optimization_maxnorm_algorithm}.
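As an illustration, the following is a minimal NumPy sketch of this update for a $3$rd-order tensor. It is our own simplification rather than the exact implementation: the gradients are formed directly with \texttt{einsum} instead of through the explicit matricizations $T_i = V_i \circ R_i$, and all names are ours.

```python
import numpy as np

def project_rows(V, bound):
    # P_C: rescale every row whose Euclidean norm exceeds `bound`
    # down to norm `bound`; leave the remaining rows unchanged.
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    return V * np.minimum(1.0, bound / np.maximum(norms, 1e-12))

def projected_gradient_step(factors, Y, mask, gamma, R):
    # One parallel update for the 3rd-order CP model
    # T(i,j,k) = sum_r V1(i,r) V2(j,r) V3(k,r), with squared loss on
    # the observed entries (encoded by the 0/1 array `mask`).
    V1, V2, V3 = factors
    T = np.einsum('ir,jr,kr->ijk', V1, V2, V3)
    E = mask * (T - Y)                       # residual on observed entries
    G1 = np.einsum('ijk,jr,kr->ir', E, V2, V3)
    G2 = np.einsum('ijk,ir,kr->jr', E, V1, V3)
    G3 = np.einsum('ijk,ir,jr->kr', E, V1, V2)
    bound = R ** (1.0 / 3.0)                 # d-th root of R, here d = 3
    return [project_rows(V - gamma * G, bound)
            for V, G in zip((V1, V2, V3), (G1, G2, G3))]
```

A single step with a small enough $\gamma$ decreases the loss while keeping every factor row inside the constraint set $C$.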
\subsubsection{Projected quasi-Newton}
Stacking all the factors in a matrix $X$,
$$
X =\left[
\begin{array}{cc}
V_1\\
V_2\\
\vdots\\
V_d
\end{array}\right]
$$
and defining $f(X):=\mathcal{L}_m((V_1 \circ \cdots \circ V_d),Y)$, this algorithm uses the BFGS quasi-Newton method to form a quadratic approximation of the function at the current estimate and then uses the spectral projected gradient (SPG) method to minimize this quadratic model subject to $X \in C$. We use the implementation of \cite{schmidt2009optimizing}, which uses limited-memory BFGS with a Barzilai-Borwein scaling of the gradient and a non-monotone Armijo line search along the feasible direction to find the next iterate in the SPG step.
\subsubsection{Stochastic gradient}
The loss function
$$\mathcal{L}_{m}(X,Y)=\frac{1}{m}\sum_{t=1}^{m} (X_{\omega_t}-Y_{\omega_t})^2,$$
is decomposable into a sum of $m$ loss functions, each involving a single observed entry. This makes it easy to use stochastic gradient methods, which at each iteration use one or more of the entries and find the feasible direction according to this subset of observations. In particular, at each iteration we take a subset $S \subset \Omega$ of the $m$ entries and minimize the loss function
$$
\mathcal{L}_{S}(X,Y)=\frac{1}{|S|}\sum_{\omega_t \in S} (X_{\omega_t}-Y_{\omega_t})^2.
$$
This approach is useful when the dimensions are very large and accessing all the measurements at once is impossible or very costly. There has been plenty of research on the efficiency of this method and its recovery guarantees \cite{kushner2012stochastic}. The projection is done as before, with the advantage that we only need to project the rows of the factors that correspond to the subset of entries chosen in the current iteration, which saves time in large applications.
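A NumPy sketch of this entry-sampled update is given below (our own naming conventions: the observed entries are passed as an index array \texttt{idx} and a value vector \texttt{vals}, and only the rows touched by the minibatch are projected).

```python
import numpy as np

def sgd_step(factors, idx, vals, gamma, R, batch, rng):
    # One stochastic update: draw a minibatch S of observed entries,
    # take a gradient step for L_S, and project back only the rows of
    # each factor that the minibatch actually touched.
    V1, V2, V3 = factors
    pick = rng.choice(len(vals), size=min(batch, len(vals)), replace=False)
    i, j, k = idx[pick].T
    pred = np.sum(V1[i] * V2[j] * V3[k], axis=1)      # X_{omega_t}, t in S
    e = (pred - vals[pick])[:, None] / len(pick)
    bound = R ** (1.0 / 3.0)
    # The tuple below is built eagerly, so all three partial products
    # use the pre-update factors (a simultaneous gradient step).
    updates = ((V1, i, V2[j] * V3[k]),
               (V2, j, V1[i] * V3[k]),
               (V3, k, V1[i] * V2[j]))
    for V, rows, other in updates:
        np.subtract.at(V, rows, gamma * e * other)    # in-place step
        nr = np.linalg.norm(V[rows], axis=1, keepdims=True)
        V[rows] = V[rows] * np.minimum(1.0, bound / np.maximum(nr, 1e-12))
    return V1, V2, V3
```

With \texttt{batch} equal to the number of observations this reduces to a deterministic full-gradient step; smaller batches give the stochastic variant described above.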
\subsection{Experiment on max-qnorm of tensors}\label{section_experiments_maxqnorm_tensors}
In this section, we run an experiment to examine how the max-qnorm of a tensor depends on its rank and size. To this end, we consider tensors whose low-rank factors are drawn from a Gaussian distribution; we also report the results for tensors with random sign factors. Although, in comparison with other ways of generating low-rank tensors, these specific classes do not necessarily represent the tensors with the highest possible max-qnorm, they give us an idea of how the max-qnorm scales with size and rank.\newline
In order to estimate the max-qnorm of a tensor, we employ max-qnorm constrained tensor completion with access to all the entries of the tensor and find the smallest constraint that successfully recovers it. We use the bisection method: starting from a lower bound and an upper bound on the max-qnorm of the tensor, at each step we check whether the tensor can be recovered with the max-qnorm bound equal to the average of the two bounds; we increase the lower bound if this constraint is too small for full recovery, and decrease the upper bound otherwise. Algorithm \ref{algorithm_MNfinding} explains this procedure in more detail. For small ranks we reach the approximate max-qnorm quickly; for example, in fewer than $\log_2 (\text{rank}^{d-1})+k$ iterations we can estimate the max-qnorm with an error less than $2^{-k}$. We declare recovery successful once the root mean squared error (RMSE) falls below a small predefined threshold. The algorithm becomes faster after the first iteration, as we use the factors found in the previous iteration as a warm start for the next one.\newline
\begin{algorithm}\caption{Estimating max-qnorm of a tensor $T$}
\begin{algorithmic}[1]\label{algorithm_MNfinding}
\STATE \textbf{Input} $T$, $\Omega=[N_1] \times [N_2] \times \cdots \times [N_d]$, $lowerbound$, $upperbound$
\STATE \textbf{Output} $\|T\|_{\text{max}}$ with an estimation error of at most $0.01$
\FOR{iteration =1 to $\ceil{\log_2 (upperbound-lowerbound)}+6$}
\STATE $\hat{T}=\underset{X \in \mathbb{R}^{N_1 \times \cdots \times N_d}}{\text{argmin}}$ $\|X_{\Omega}-T_{\Omega}\|_F^2 \text{ subject to } \|X\|_{\text{max}} \leq \frac{lowerbound+upperbound}{2}$
\STATE $RMSE=\frac{\|\hat{T}-T\|_F}{\sqrt{\prod_{i=1}^{d}N_i}}$
\IF{$RMSE \leq 10^{-3}$}
\STATE $upperbound=\frac{lowerbound+upperbound}{2}$
\ELSE
\STATE $lowerbound=\frac{lowerbound+upperbound}{2}$
\ENDIF
\ENDFOR
\RETURN $\frac{lowerbound+upperbound}{2}$
\end{algorithmic}
\end{algorithm}\vspace{-0.0in}
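A compact sketch of this bisection is shown below. The constrained solver \texttt{solve\_constrained} is a hypothetical stand-in for the max-qnorm constrained least squares routine (which we do not specify here); warm-starting it across iterations gives the speed-up mentioned above.

```python
import numpy as np

def estimate_max_qnorm(T, solve_constrained, lo, hi, tol_rmse=1e-3, extra=6):
    # Bisection of Algorithm 1: `solve_constrained(T, bound)` is assumed to
    # return the constrained least squares fit of T (all entries observed)
    # subject to ||X||_max <= bound.
    n_iter = int(np.ceil(np.log2(max(hi - lo, 1e-9)))) + extra
    for _ in range(n_iter):
        mid = (lo + hi) / 2.0
        X = solve_constrained(T, mid)
        rmse = np.linalg.norm(X - T) / np.sqrt(T.size)
        if rmse <= tol_rmse:
            hi = mid        # bound was large enough: recovery succeeded
        else:
            lo = mid        # bound too small for full recovery
    return (lo + hi) / 2.0
```

Each iteration halves the interval, so the final estimate is within $(hi-lo)\,2^{-n_{\text{iter}}}$ of the smallest bound that permits full recovery.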
Figure \ref{maxnorm_results} shows the results for both $3$- and $4$-dimensional tensors when the low-rank factors are drawn either from a Gaussian distribution (Figures \ref{maxnorm_results}.a and \ref{maxnorm_results}.b) or from a Bernoulli distribution (Figure \ref{maxnorm_results}.c). In both cases we consider $r \in \{1, 2, \cdots, 10\}$, with $N \in \{5,10,15,20\}$ in the $3$-dimensional case and $N \in \{5,10\}$ in the $4$-dimensional case. In all cases, the average max-qnorm is similar for different values of $N$ when the rank and the order are fixed. These results support the claim that the max-qnorm of a tensor depends only on its rank and order and is independent of its size.\newline
The results are averaged over 15 experiments. Because the infinity norm of a tensor affects its max-qnorm linearly, we rescale all tensors to have $\|T\|_{\infty}=1$ before estimating their max-qnorm. Comparing Figures \ref{maxnorm_results}.a and \ref{maxnorm_results}.b shows that the max-qnorm is around $\sqrt{r}$ for $d=3$ and around $r$ for $d=4$; these values are attained exactly when the factors are drawn from Bernoulli random variables. Moreover, the dependence is constant when $d=2$. This suggests a multiplicative increase of $\sqrt{r}$ each time the order is increased by one. However, whether the actual bound in the general case is $O(\sqrt{r^{d^2-d}})$ remains an interesting open question.\newline
\begin{figure}[h]
\centering
\subfloat[][$3$-dimensional, Gaussian factors]{
\includegraphics[width=0.33\textwidth]{gaussian_ranks.eps}
\label{3dgaussian}}
\subfloat[][$4$-dimensional, Gaussian factors]{
\includegraphics[width=0.33\textwidth]{gaussian_ranks4d.eps}
\label{4dgaussian}}
\subfloat[][$3$ and $4$ dimensional, sign factors]{
\includegraphics[width=0.33\textwidth]{sign_ranks.eps}
\label{34dsign}}
\caption{Log-log plot of average max-qnorm of $3$ and $4$ dimensional low-rank tensors obtained by Algorithm \ref{algorithm_MNfinding}, for various rank and sizes, averaged over 15 draws for each rank and size.}
\label{maxnorm_results}
\end{figure}
\subsection{Automatic algorithm for choosing optimal max-qnorm bound $R$}
As explained before, besides designing algorithms for solving the constrained max-qnorm minimization, we need a procedure for choosing a good max-qnorm bound. The theoretical bounds derived in this paper might not be tight, and even if they were, such upper bounds usually capture worst-case scenarios that need not be optimal for a given problem. This issue is important because the result of tensor completion depends strongly on choosing the right upper bound. Moreover, in many practical applications we do not have access to the actual rank of the underlying tensor, which makes it important to find the upper bound automatically rather than taking it as an input to the optimization problem.\newline
\begin{algorithm}\caption{Tensor completion, with cross validation}
\begin{algorithmic}[1]\label{algorithm_TC}
\STATE \textbf{Input} possibly noisy measurements $Y_{\Omega}=T_{\Omega} + \sigma \mathbb{N}(0,1)$, observed entries $\Omega$, $lowerbound$, $upperbound$
\STATE \textbf{Output} $\hat{T}$
\STATE Divide the observations into $\Omega_{\text{train}}$, and $\Omega_{\text{validate}}$
\FOR{iteration =1 to $\ceil{\log_2 (upperbound-lowerbound)}+6$}
\FOR{$iter_{check}$=0 to 4}
\STATE $bound(iter_{check})= \frac{iter_{check}}{4}\, upperbound + \frac{4-iter_{check}}{4}\, lowerbound$
\STATE $\hat{T}(iter_{check})=\underset{X \in \mathbb{R}^{N_1 \times \cdots \times N_d}}{\text{argmin}}$ $\|X_{\Omega_{\text{train}}}-Y_{\Omega_{\text{train}}}\|_F^2 \text{ subject to } \|X\|_{\text{max}} \leq bound(iter_{check})$
\STATE $RMSE(iter_{check})=\frac{\|\hat{T}(iter_{check})_{\Omega_{\text{validate}}}-Y_{\Omega_{\text{validate}}}\|_F}{\sqrt{|\Omega_{\text{validate}}|}}$
\ENDFOR
\STATE $min_{index} = \underset{iter_{check}}{\text{argmin}}\ RMSE(iter_{check})$
\STATE $lowerbound=bound(\max(min_{index}-1,0))$
\STATE $upperbound=bound(\min(min_{index}+1,4))$
\STATE $\hat{T}=\hat{T}(min_{index})$
\ENDFOR
\RETURN $\hat{T}$
\end{algorithmic}
\end{algorithm}\vspace{-0.0in}
The first approach is to modify Algorithm \ref{algorithm_MNfinding} to find a good upper bound. There are two complications in generalizing this approach. First, in tensor completion we do not have access to the full tensor, so we have to estimate the recovery error on the indices we have not observed; otherwise, choosing a large upper bound can result in overfitting, i.e., fitting the observations exactly while losing the low-rank (and low max-qnorm) structure of the tensor. The second complication is that Algorithm \ref{algorithm_MNfinding} uses $\text{RMSE}<10^{-3}$ as a proxy for full recovery. This is not possible in a noisy problem, and choosing an RMSE threshold from the noise level is a separate problem that depends on the noise type and is usually not optimal in general. Moreover, when we have access to the full tensor, any upper bound larger than some optimal value yields $\text{RMSE}<10^{-3}$, which is not the case in tensor completion because of overfitting; in other words, when we observe the full tensor, overfitting is meaningless.\newline
To address these two issues, we propose algorithm \ref{algorithm_TC} for tensor completion that uses cross validation for estimating the RMSE and uses a five-point search for the optimal max-qnorm bound.\newline
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{tuning_fig.eps}
\caption{All the possible situations in the five-point search algorithm. The leftmost and the rightmost red dots are the previous lower and upper bounds on the max-qnorm bound $R$, respectively. The green intervals show the new lower and upper bounds based on the RMSE.}
\label{parameter_tuning}
\end{figure}
As explained above, we need a way to estimate the true RMSE, both to avoid overfitting and as a measure of the quality of the approximation. To do this, before starting the optimization process we randomly divide the observed samples $\Omega$ into two sets, $\Omega_{\text{train}}$ (80\% of the samples) and $\Omega_{\text{validate}}$ (the remaining 20\%). We solve each max-qnorm constrained sub-problem using only the samples in $\Omega_{\text{train}}$ and use the reserved samples in $\Omega_{\text{validate}}$ to estimate the RMSE. We are aware of more sophisticated cross-validation schemes that attempt to remove the bias in the samples as much as possible; however, considering that each max-qnorm constrained tensor completion problem is expensive, our numerical results show that the simple cross validation used in Algorithm \ref{algorithm_TC} suffices for finding an approximately optimal upper bound.\newline
Now we explain Algorithm \ref{algorithm_TC} more thoroughly. To find the optimal upper bound, we input an initial interval whose endpoints are, respectively, smaller and larger than the optimal max-qnorm bound, and we iteratively refine this interval until its endpoints are close to each other. To determine the next interval, checking the midpoint is not enough: unlike in Algorithm \ref{algorithm_MNfinding}, the RMSE is not zero for bounds larger than the optimal one. Therefore, in addition to the lower and upper bounds, we compute the RMSE at three interior points of the interval and center the new interval at the best of these bounds; this procedure is given in Algorithm \ref{algorithm_TC}. A heuristic justification is as follows: assume there is an optimal bound $R^{\sharp}$ that gives the best RMSE; any bound larger than $R^{\sharp}$ results in overfitting and any bound smaller than $R^{\sharp}$ results in underfitting, and the effect becomes more severe the further the bound is from $R^{\sharp}$. The problem then becomes minimizing a unimodal function whose derivative we cannot evaluate. Deriving a provably exact algorithm for finding the optimal upper bound in this setting is an interesting question that we postpone to future work. Under the assumptions above, Figure \ref{parameter_tuning} illustrates that the five-point method of Algorithm \ref{algorithm_TC} finds the optimal upper bound: it plots the RMSE against the max-qnorm upper bound $R$ and shows all the possible configurations of the five points (red stars) on such a curve. The green lines show the new interval, bounded by the new lower and upper bounds.
Notice that the optimal value always stays between these two bounds. Our numerical experiments in the next section show that, although this justification is not mathematically rigorous, the final outcome is very promising. However, providing a better explanation or designing a more rigorous method is interesting future work.
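The refinement loop of Algorithm \ref{algorithm_TC} can be sketched as follows (hypothetical names; \texttt{fit\_and\_rmse} stands for solving the max-qnorm constrained problem on $\Omega_{\text{train}}$ for a given bound and returning the fit together with its validation RMSE):

```python
import numpy as np

def five_point_search(fit_and_rmse, lo, hi, n_rounds):
    # Five-point refinement: sample five equally spaced bounds in [lo, hi],
    # keep the one with the smallest validation RMSE, and recenter the
    # interval on it (clamping at the endpoints).
    best_T = None
    for _ in range(n_rounds):
        bounds = [lo + q * (hi - lo) / 4.0 for q in range(5)]
        fits = [fit_and_rmse(b) for b in bounds]
        k = int(np.argmin([rmse for _, rmse in fits]))
        best_T = fits[k][0]
        lo, hi = bounds[max(k - 1, 0)], bounds[min(k + 1, 4)]
    return best_T, (lo + hi) / 2.0
```

For a unimodal RMSE curve the minimizer stays inside the interval, whose width at least halves every round.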
\subsection{Numerical results of low-rank tensor completion}
In this section we present the results of max-qnorm constrained tensor completion and compare them with those of matricization and alternating least squares (TenALS) \cite{jain2014provable}. As explained at the beginning of Section \ref{experiments and algorithms}, we pick the low-rank factors to have size $N\times 2N$. Although the choice of $2N$ is arbitrary, we believe it is large enough for small ranks and does not result in large errors. This also has the additional benefit of not requiring knowledge of the exact rank of the tensor. As explained above, we only assume knowledge of an upper and a lower bound on the max-qnorm of the tensor and use cross validation to find the optimal max-qnorm bound. Naturally, the algorithm becomes faster when these bounds are closer to the actual max-qnorm of the tensor.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{3d_nonoise.eps}
\caption{{\bf{3-dimensions, no noise }}Average relative recovery error $\frac{\|T_{\text{recovered}} - T^{\sharp}\|_F^2}{\|T^{\sharp}\|_F^2}$ for 3-dimensional tensor $T \in \mathbb{R}^{50\times 50 \times 50}$ and different ranks and samples.}
\label{TC_3d_nonoise}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{3d_noisy.eps}
\caption{{\bf{3-dimensions, 10-db noise }}Average relative recovery error $\frac{\|T_{\text{recovered}} - T^{\sharp}\|_F^2}{\|T^{\sharp}\|_F^2}$ for 3-dimensional tensor $T \in \mathbb{R}^{50\times 50 \times 50}$ and different ranks and samples.}
\label{TC_3d_noisy}
\end{figure}
Figures \ref{TC_3d_nonoise} and \ref{TC_3d_noisy} show the results of completing a $50 \times 50 \times 50$ tensor from $m$ entries sampled uniformly at random (without noise and with 10-db noise, respectively). The results are averaged over $15$ experiments, with ranks ranging from $3$ to $30$ and sampling rates ranging from $0.01$ to $0.1$. Row $i$ and column $j$ of each subplot shows the average squared relative recovery error $\frac{\|T-\hat{T}\|_F^2}{\|T\|_F^2}$ for a random tensor with rank $i$, where different columns represent different numbers of samples observed according to \eqref{noisy_measurements}. We rescale the true tensors to have infinity norm equal to $1$ before adding the noise, and the RMSE of the noise is around $0.1$. As expected, in all experiments we get better results with a larger number of measurements $m$ and a smaller rank $r$. The matricized results are obtained by applying the Fixed Point Continuation with Approximate SVD (FPCA) algorithm of \cite{ma2011fixed} to the flattened $2500 \times 50$ matrices (this algorithm gave the best results in our experiments among the noisy matrix completion algorithms we tried). Next, for a fairer comparison, we include the results of tensor completion using alternating least squares (ALS), using the code provided online by \cite{jain2014provable}. The two plots in the second row show the max-qnorm constrained (MNC) results in two scenarios: first with exact low-rank factors and second, as the theory suggests, with factors with a larger number of columns. Notice that the exact max-qnorm formulation does not put any limitation on the number of columns of the factors; we chose $2N$ to balance the computational cost and the accuracy of the computed max-qnorms. The results unanimously show the advantage of using $N\times 2N$ factors instead of $N \times r$ factors.
Although we are dealing with larger factors, which makes the algorithm a little slower, using them has the additional benefit of avoiding local minima that trap the exact low-rank factorization. The ALS algorithm shows some discrepancies in the results, i.e., there are cases where the average error increases with smaller rank or higher sampling rate; we believe this is due to the high non-convexity of that algorithm. We expected similar discrepancies in the max-qnorm results as well, but, at least for these dimensions and this setup, our algorithm seems able to escape local minima, which is surprising. The importance of this issue becomes visible when comparing the results of $N\times r$ factors with those of $N \times 2N$ factors. It is worth mentioning that the ALS algorithm uses a number of initial vectors in its initialization step; a larger number gives a better estimate at the expense of longer processing time. We use 50 vectors so that the running times of the algorithms are comparable; the results can be slightly improved by using more initial vectors.\newline
The results of matricization are always inferior to those of tensor completion with both ALS and MNC, which is expected, especially since these tensors have odd order. The difference between matrix completion and MNC with $N \times 2N$ factors is significant: for example, when $\frac{m}{N^d}=0.1$, max-qnorm constrained TC (MNC) recovers all tensors with rank less than 10, whereas matrix completion starts to fail for ranks bigger than $1$.\newline
In Figures \ref{TC_4d_nonoise} and \ref{TC_4d_noisy} we include the results of completing a $4$-dimensional $20\times 20 \times 20 \times 20$ tensor. The results are similar to the $3$-dimensional case, and therefore we do not include the exact low-rank factor results, which are inferior to those obtained with $N \times 2N$ factors. The available online code for the ALS algorithm is not suitable for $4$-dimensional tensors either. As in the $3$-dimensional case, MNC tensor completion always beats matrix completion. Notice that, because the order is even, the matricization can be done in a more balanced way, so the matrix completion results are a little better than in the $3$-dimensional case; still, we can see the large advantage of using tensor completion instead of matricizing.\newline
It is worth mentioning that, although the algorithms take reasonable time for the small dimensions used here (less than 15 seconds in each case), they are still slower than matricizing. In other words, tensor completion enjoys better sample complexity and can be used to handle very large data, but at the cost of computational complexity. Deriving scalable and efficient algorithms is an interesting and necessary direction of future work for practical applications.
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{4d_nonoise.eps}
\caption{{\bf{4-dimensions, no noise }}Average relative recovery error $\frac{\|T_{\text{recovered}} - T^{\sharp}\|_F^2}{\|T^{\sharp}\|_F^2}$ for 4-dimensional tensor $T \in \mathbb{R}^{20\times 20 \times 20 \times 20}$ and different ranks and samples. The plot on the left shows the results for the $400 \times 400$ matricized case.}
\label{TC_4d_nonoise}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{4d_noisy.eps}
\caption{{\bf{4-dimensions, 10-db noise }}Average relative recovery error $\frac{\|T_{\text{recovered}} - T^{\sharp}\|_F^2}{\|T^{\sharp}\|_F^2}$ for 4-dimensional tensor $T \in \mathbb{R}^{20\times 20 \times 20 \times 20}$ and different ranks and samples. The plot on the left shows the results for the $400 \times 400$ matricized case.}
\label{TC_4d_noisy}
\end{figure}
\section{Proofs}\label{section_proofs}
\subsection{Proof of Theorem \ref{theorem_max-qnorm_quasi}}\label{section_max-qnorm_quasi}
Notice that for any tensors $T$ and $X$, there exist decompositions $T=U^{(1)} \circ U^{(2)} \circ \cdots \circ U^{(d)}$ and $X=V^{(1)} \circ V^{(2)} \circ \cdots \circ V^{(d)}$, where $\|U^{(j)}\|_{2,\infty} \leq (\|T\|_{\text{max}})^{\frac{1}{d}}$ and $\|V^{(j)}\|_{2,\infty} \leq (\|X\|_{\text{max}})^{\frac{1}{d}}$. Moreover, one way to factorize the tensor $X+T$ is by concatenating the factors of $X$ and $T$ as $X+T= [U^{(1)}, V^{(1)}] \circ [U^{(2)}, V^{(2)}] \circ \cdots \circ [U^{(d)}, V^{(d)}]$ and therefore,
\begin{equation*}
\begin{aligned}
\|X+T\|_{\text{max}} &\leq \Pi_{j=1}^d \|[U^{(j)}, V^{(j)}]\|_{2,\infty} \leq \big( \sqrt{\|X\|_{\text{max}}^{\frac{2}{d}}+\|T\|_{\text{max}}^{\frac{2}{d}}} \big)^{d} \leq 2^{\frac{d}{2}-1} (\|X\|_{\text{max}} + \|T\|_{\text{max}}) ,
\end{aligned}
\end{equation*}
which proves that the max-qnorm is a quasi-norm. The last inequality follows from $|a+b|^p \leq 2^{p-1} (|a|^p + |b|^p)$ for $p \geq 1$, applied with $p=\frac{d}{2}$. It is easy to check that the max-qnorm satisfies the triangle inequality when $d=2$; however, this is not true for $d>2$. Next, we prove this for $d=3$; higher-order cases can be proven similarly.\newline
The main challenge in proving that the max-qnorm does not satisfy the triangle inequality when $d=3$ is that the size of the factors is not fixed. However, it can be seen from the following simple counterexample. Let $T=T_1 + T_2$, where $T_1=\begin{bmatrix} 1\\0 \end{bmatrix} \circ \begin{bmatrix} 1\\0 \end{bmatrix} \circ \begin{bmatrix} 1\\0 \end{bmatrix}$ and $T_2 =\begin{bmatrix} 1\\1 \end{bmatrix} \circ \begin{bmatrix} 1\\1 \end{bmatrix} \circ \begin{bmatrix} 1\\1 \end{bmatrix}$, and note that $T$ is a rank-$2$, $2\times 2 \times 2$ tensor. Here, $T_1$ and $T_2$ are rank-$1$ tensors with $\|T_1\|_{\text{max}}=1$ and $\|T_2\|_{\text{max}}=1$ (notice that for any $T$, $\|T\|_{\text{max}} \geq \|T\|_{\infty}$). Therefore, if the max-qnorm satisfied the triangle inequality, then $\|T\|_{\text{max}}$ could not exceed $2$. In what follows we prove that this is impossible. If $\|T\|_{\text{max}} \leq 2$, then there exists a decomposition $T=U^{(1)} \circ U^{(2)} \circ U^{(3)}$ such that $\|T\|_{\text{max}}=\prod_{j=1}^{3} \|U^{(j)}\|_{2,\infty} \leq 2$, and with a simple rescaling of the factors,
\begin{equation}\label{counterexample_decomposition}
\|U^{(1)}\|_{2,\infty} \leq \sqrt{2},\ \|U^{(2)}\|_{2,\infty} \leq \sqrt{2},\ \|U^{(3)}\|_{2,\infty} \leq 1.
\end{equation}
First, notice that $T$ is an all-ones tensor except for one entry where $T(1,1,1)=2$. Defining the generalized inner product as
\begin{equation}\label{generalized_inner_product}
\langle x_1, \cdots, x_d \rangle := \sum_{i=1}^{k} \prod_{j=1}^{d} x_j(i),
\end{equation}
this means that
\begin{equation}\label{index_11111}
\langle U^{(1)}(1,:), U^{(2)}(1,:), U^{(3)}(1,:)\rangle =2.
\end{equation}
Using Cauchy-Schwarz
\begin{equation}\label{tensor_inner_product_inequality}
\langle U^{(1)}(1,:), U^{(2)}(1,:), U^{(3)}(1,:)\rangle \leq \|U^{(1)}(1,:)\| \ \|U^{(2)}(1,:)\| \ \|U^{(3)}(1,:)\|_{\infty}.
\end{equation}
Combining \eqref{counterexample_decomposition}, \eqref{index_11111}, and \eqref{tensor_inner_product_inequality}, we get
$$2 \leq \|U^{(1)}(1,:)\| \ \|U^{(2)}(1,:)\| \leq \|U^{(1)}\|_{2,\infty} \ \|U^{(2)}\|_{2,\infty} \leq 2,$$
which together with \eqref{counterexample_decomposition} proves that
\begin{equation}\label{U1U2norm}
\|U^{(1)}(1,:)\|=\sqrt{2},\ \text{and}\ \|U^{(2)}(1,:)\|=\sqrt{2}.
\end{equation}
Moreover, similarly
$$2 \leq 2 \|U^{(3)}(1,:)\|_{\infty}\leq 2\ \Rightarrow\ \|U^{(3)}(1,:)\|_{\infty}=1.$$
Notice that $\|U^{(3)}(1,:)\| \leq 1$ and $\|U^{(3)}(1,:)\|_{\infty}=1$, which proves that $U^{(3)}(1,:)$ is zero except for a single entry with absolute value one. Remember that the number of columns of $U^{(3)}$ is arbitrary. Without loss of generality, we can assume
\begin{equation}
U^{(3)}(1,:)=(1,0,\cdots,0).
\end{equation}
Combining this with \eqref{index_11111}, and \eqref{U1U2norm} we can also prove that
\begin{equation}
U^{(1)}(1,:)=U^{(2)}(1,:)=(\sqrt{2},0,\cdots,0).
\end{equation}
Now, from $T(1,1,2)=1$ and the above two equations, we must have $U^{(3)}(2,1)=\frac{1}{2}$, and similarly $T(1,2,1)=1$ forces $U^{(2)}(2,1)=\frac{1}{\sqrt{2}}$. Finally, $T(1,2,2)=U^{(1)}(1,1)\, U^{(2)}(2,1)\, U^{(3)}(2,1)=\sqrt{2} \cdot \frac{1}{\sqrt{2}} \cdot \frac{1}{2} = \frac{1}{2} \neq 1$, which is a contradiction.\qed
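As a sanity check (our own numerical illustration, not part of the formal proof), the entry arithmetic of this counterexample can be verified directly:

```python
import numpy as np

# Rank-1 pieces of the counterexample (0-based indexing below).
e = np.array([1.0, 0.0])
o = np.array([1.0, 1.0])
T = np.einsum('i,j,k->ijk', e, e, e) + np.einsum('i,j,k->ijk', o, o, o)

# T is the all-ones tensor except T(1,1,1) = 2.
expected = np.ones((2, 2, 2))
expected[0, 0, 0] = 2.0
assert np.array_equal(T, expected)

# Rows forced by the argument: U1(1,:) = U2(1,:) = (sqrt(2), 0) and
# U3(1,:) = (1, 0); then T(1,1,2) = 1 forces U3(2,1) = 1/2 and
# T(1,2,1) = 1 forces U2(2,1) = 1/sqrt(2).
u1_row1 = np.array([np.sqrt(2.0), 0.0])
u2_row2 = np.array([1.0 / np.sqrt(2.0), 0.0])  # later coords multiply U1(1,2)=0
u3_row2 = np.array([0.5, 0.0])

# The forced decomposition predicts T(1,2,2) = 1/2, but T(1,2,2) = 1.
pred = np.sum(u1_row1 * u2_row2 * u3_row2)
assert np.isclose(pred, 0.5) and T[0, 1, 1] == 1.0
```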
\subsection{Proof of Lemma \ref{max_tensor_ball}}\label{section_max_tensor_ball}
Characterization of the unit ball of the atomic M-norm follows directly from \eqref{atomoc_norm_definition}. By definition, any tensor $T$ with $\|T\|_{M} \leq 1$ is a convex combination of the atoms in $T_{\pm}$, i.e., $T=\sum_{X \in T_{\pm}} c_X X$ with $c_X \geq 0$ and $\sum_{X \in T_{\pm}} c_X=1$. This proves that $\mathbb{B}_{M}(1)=\text{conv}(T_{\pm})$.\newline
To characterize the unit ball of the max-qnorm, we use a generalization of Grothendieck's theorem to higher-order tensors \cite{tonge1978neumann,blei1979multidimensional}. First, we generalize the matrix norm $\|M\|_{\infty,1}:=\sup_{\|x\|_{\infty}=1}\|Mx\|_1$ as follows:
\begin{dfn}\label{definition_inf_one}
$\|T\|_{\infty,1}:=\sup_{\|x_1\|_{\infty},\cdots,\|x_d\|_{\infty} \leq 1} \Big| \sum_{i_1=1}^{N_1} \cdots \sum_{i_d=1}^{N_d} T(i_1,\cdots,i_d)\,x_1(i_1)\cdots x_d(i_d)\Big|.$
\end{dfn}
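For intuition, this norm can be evaluated by brute force for tiny tensors (our own illustration): the objective is multilinear in each $x_j$, so the supremum is attained at sign vectors $x_j(i_j)=\pm 1$, and it suffices to enumerate them.

```python
import numpy as np
from itertools import product

def inf_one_norm(T):
    # ||T||_{inf,1} for a small 3rd-order tensor, by enumerating the
    # sign vectors x_j in {-1,+1}^{N_j} at which the supremum is attained.
    N1, N2, N3 = T.shape
    best = 0.0
    for x1 in product([-1.0, 1.0], repeat=N1):
        for x2 in product([-1.0, 1.0], repeat=N2):
            for x3 in product([-1.0, 1.0], repeat=N3):
                best = max(best, abs(np.einsum('ijk,i,j,k->', T, x1, x2, x3)))
    return best
```

For a rank-$1$ tensor $u \circ v \circ w$ this yields $\|u\|_1\|v\|_1\|w\|_1$, since the multilinear form factors as $(u^\top x_1)(v^\top x_2)(w^\top x_3)$.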
\begin{thm}[{\bf{Generalized Grothendieck theorem}}]\label{bleitong}
Let $T$ be an order-$d$ tensor such that
$$\|T\|_{\infty,1} = \sup_{\|x_1\|_{\infty},\cdots,\|x_d\|_{\infty} \leq 1} \Big| \sum_{i_1=1}^{N_1} \cdots \sum_{i_d=1}^{N_d} T(i_1,\cdots,i_d)\,x_1(i_1)\cdots x_d(i_d)\Big| \leq 1,$$
and let $u_{i_j}^j \in \mathbb{R}^k$, $1 \leq j \leq d$, $1 \leq i_j \leq N_j$, be $\sum_{j} N_j$ vectors such that $\|u_{i_j}^j\| \leq 1$. Then
\begin{equation}\label{tensorinf1norm}
| \sum_{i_1=1}^{N_1} \cdots \sum_{i_d=1}^{N_d} T(i_1,\cdots,i_d)\langle u_{i_1}^1, u_{i_2}^2, \cdots, u_{i_d}^d \rangle | \leq c_1 c_2^d,
\end{equation}
where $\langle u_{i_1}^1, u_{i_2}^2, \cdots, u_{i_d}^d \rangle$ is the generalized inner product of $u_{i_1}^1, u_{i_2}^2, \cdots, u_{i_d}^d$ as defined in \eqref{generalized_inner_product}. Here, $c_1 \leq \frac{K_G}{5}$ and $c_2 \leq 2.83$.
\end{thm}
\noindent
Now we use Theorem \ref{bleitong} to prove Lemma \ref{max_tensor_ball}.\newline
\noindent
\textbf{Proof of Lemma \ref{max_tensor_ball}}: The dual norm of the max-qnorm is
\begin{equation}\label{max_dual}
\|T\|_{\text{max}}^{\ast} = \underset {\|U\|_{\text{max}} \leq 1}{\text{max}} \langle T,U \rangle = \underset{\|u_{i_1}^1\|,\cdots,\|u_{i_d}^d\| \leq 1}{\text{max}}\sum_{i_1=1}^{N_1} \cdots \sum_{i_d=1}^{N_d} T(i_1,\cdots,i_d)\langle u_{i_1}^1, u_{i_2}^2, \cdots, u_{i_d}^d \rangle.
\end{equation}
Above, the lengths of the vectors $u_{i_1}^1,\cdots,u_{i_d}^d$ are not constrained. Using Theorem \ref{bleitong}, $\|T\|_{\text{max}}^{\ast} \leq c_1 c_2^d \|T\|_{\infty,1}$. On the other hand, restricting to scalars $u_{i_1}^1,\cdots,u_{i_d}^d \in \mathbb{R}$, the right-hand side of \eqref{max_dual} equals $\|T\|_{\infty,1}$, and therefore $\|T\|_{\infty,1} \leq \|T\|_{\text{max}}^{\ast}$. Taking duals,
\begin{equation}
\frac{\|T\|_{\infty,1}^{\ast}}{c_1 c_2^d} \leq (\|T\|_{\text{max}}^{\ast})^{\ast} \leq \|T\|_{\infty,1}^{\ast}.
\end{equation}
Notice that the max-qnorm, defined in \eqref{max_norm_tensor_definition}, is a quasi-norm, and therefore $(\|T\|_{\text{max}}^{\ast})^{\ast}$ is not necessarily equal to $\|T\|_{\text{max}}$. However, the max-qnorm is absolutely homogeneous, and therefore
$$(\|T\|_{\text{max}}^{\ast})^{\ast} = \underset{\|Z\|_{\text{max}}^{\ast}\leq 1}{\text{max}} \langle T,Z \rangle \leq \|T\|_{\text{max}},$$
which implies that
\begin{equation}\label{max_inequality}
\frac{\|T\|_{\infty,1}^{\ast}}{c_1 c_2^d} \leq \|T\|_{\text{max}}.
\end{equation}
To calculate the unit ball of $\|.\|_{\infty,1}^{\ast}$, notice that the argument of the supremum in Definition \ref{definition_inf_one} is linear in each variable $x_j(i_j)$; since $-1 \leq x_j(i_j) \leq 1$, the supremum is achieved when $x_j(i_j)=\pm 1$, which means that $\|T\|_{\infty,1}=\underset{U \in T_{\pm}}{\sup} |\langle T,U \rangle|$. Therefore, $\text{conv}(T_{\pm})$ is the unit ball of $\|.\|_{\infty,1}^{\ast}$, and Lemma \ref{max_tensor_ball} (\RNum{2}) follows from \eqref{max_inequality}.
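This vertex argument can be checked numerically on a small tensor: the objective $\langle T, x^1\circ\cdots\circ x^d\rangle$ is multilinear in the $x_j$, so its maximum over factors from the box $[-1,1]^N$ is attained at sign vectors. The sketch below (Python with NumPy; sizes are arbitrary, chosen only so that brute force is feasible) computes the sign-vector maximum and checks that random interior points never exceed it.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, d = 3, 3  # small order-3 tensor so brute force is feasible
T = rng.standard_normal((N,) * d)

def box_objective(xs):
    # <T, x1 ∘ x2 ∘ ... ∘ xd>: contract one mode per vector
    out = T
    for x in xs:
        out = np.tensordot(out, np.asarray(x), axes=([0], [0]))
    return abs(float(out))

# Maximum over sign vectors x_j in {-1,+1}^N (the set T_pm)
signs = list(itertools.product([-1.0, 1.0], repeat=N))
inf1_norm = max(box_objective(xs) for xs in itertools.product(signs, repeat=d))

# Multilinearity: random interior points of the box [-1,1]^N
# never exceed the sign-vector maximum.
for _ in range(200):
    xs = [rng.uniform(-1, 1, N) for _ in range(d)]
    assert box_objective(xs) <= inf1_norm + 1e-9
```

For larger $N$ or $d$ the $2^{Nd}$ enumeration is infeasible; this is purely a sanity check of the vertex-attainment step.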
\subsection{Proof of Theorem \ref{theorem_atomicnorm_bound}}\label{section_proof_atomicbound}
In order to prove the tensor max-qnorm bound, we first sketch the proof of \cite{rashtchian2016bounded} for the matrix case. That is, assuming that $M$ is a matrix with $\text{rank}(M)=r$ and $\|M\|_{\infty}\leq \alpha$, we show that there exists a decomposition $M=U \circ V$, where $U \in \mathbb{R}^{N_1 \times r}$, $V \in \mathbb{R}^{N_2 \times r}$, $\|U\|_{2,\infty} \leq \sqrt{r}$, and $\|V\|_{2,\infty} \leq \alpha$. To prove this, we first state a version of John's theorem \cite{rashtchian2016bounded}.
\begin{thm}[John's theorem \cite{john2014extremum}]
For any full-dimensional symmetric convex set $K \subseteq \mathbb{R}^r$ and any ellipsoid $E \subseteq \mathbb{R}^r$
that is centered at the origin, there exists an invertible linear map $S$ so that $E \subseteq S(K) \subseteq \sqrt{r} E$.
\end{thm}
\begin{thm}\label{john_matrix}\cite[Corollary 2.2]{rashtchian2016bounded}
For any rank-$r$ matrix $M \in \mathbb{R}^{N_1 \times N_2}$ with $\|M\|_{\infty}\leq \alpha$ there exist vectors $u_1,\cdots,u_{N_1},v_1,\cdots,v_{N_2} \in \mathbb{R}^r$ such that $\langle u_i,v_j\rangle =M_{i,j}$ and $\|u_i\|\leq \sqrt{r}$ and $\|v_j\|\leq \alpha$.
\end{thm}
The proof is based on considering any rank-$r$ decomposition $M=X\circ Y$, where $X \in \mathbb{R}^{N_1 \times r}$, $Y \in \mathbb{R}^{N_2 \times r}$, and $M_{i,j}=\langle x_i,y_j\rangle$. Define $K$ to be the convex hull of the set $\{ \pm x_i : i \in [N_1]\}$. Then, using the linear map $S$ given by John's theorem for the set $K$ with the ellipsoid $E=\mathbb{B}_r:=\{x \in \mathbb{R}^r : \|x\|_2 \leq 1\}$, the decomposition $M=(XS)\circ (YS^{-1})$ satisfies the conditions of Theorem \ref{john_matrix} \cite{rashtchian2016bounded}.\newline
The following lemma proves the existence of a nuclear decomposition for bounded rank-$r$ tensors, which can be used directly to bound the M-norm of a bounded rank-$r$ tensor.
\begin{lem}\label{inductive_atomicnorm_decom}
Any order-$d$, rank-$r$ tensor $T$, with $\|T\|_{\infty} \leq \alpha$ can be decomposed into $r^{d-1}$ rank-one tensors whose components have unit infinity norm such that
\begin{equation}
T=\sum_{j=1}^{r^{d-1}} \sigma_j u_j^1 \circ u_j^2 \circ \cdots \circ u_j^d ,\ \|u_j^1\|_{\infty},\cdots,\|u_j^d\|_{\infty} \leq 1, \ \text{with } \sum |\sigma_j| \leq (r\sqrt{r})^{d-1} \alpha.
\end{equation}
\end{lem}
{\bf{Proof}}: We prove this lemma by induction. The case $d=2$ follows directly from applying John's theorem to a rank-$r$ decomposition of $T$, i.e., $T=XS \circ YS^{-1}$ where $T=X \circ Y$. Now consider an order-$d$ tensor that can be written as $T=\sum_{j=1}^{r} \lambda_j v_j^1 \circ v_j^2 \circ \cdots \circ v_j^d$ with $\|T\|_{\infty} \leq \alpha$. Matricizing along the first dimension results in $T_{[1]} = \sum_{i=1}^r (\lambda_i v_i^{1}) \circ (v_i^{2} \otimes \cdots \otimes v_i^{d})$. Using MATLAB notation, we can write $T_{[1]}=U \circ V$, where $U(:,i)=\lambda_i v_i^{1} \in \mathbb{R}^{N_1}$ and $V(:,i)=v_i^{2} \otimes \cdots \otimes v_i^{d} \in \mathbb{R}^{\prod_{k=2}^{d}N_k}$.\newline
Using John's theorem, there exists $S \in \mathbb{R}^{r \times r}$ such that $T_{[1]}=X \circ Y$, where $X=US$, $Y=VS^{-1}$, $\|X\|_{\infty} \leq \|X\|_{2,\infty} \leq \sqrt{r}$, and $\|Y\|_{\infty} \leq \|Y\|_{2,\infty} \leq \alpha$. Furthermore, each column of $Y$ is a linear combination of the columns of $V$, i.e., there exist $\zeta_{i,1}, \cdots, \zeta_{i,r}$ such that $Y(:,i)=\sum_{j=1}^r \zeta_{i,j} (v_j^{2} \otimes \cdots \otimes v_j^{d})$. Therefore, unfolding the $i$-th column of $Y$ into a $(d-1)$-dimensional tensor $E_i \in \mathbb{R}^{N_2 \times \cdots \times N_d}$ results in a rank-$r$, $(d-1)$-dimensional tensor with $\|E_i\|_{\infty} \leq \|Y\|_{\infty} \leq \alpha$. By induction, $E_i$ can be decomposed into $r^{d-2}$ rank-one tensors with bounded factors, i.e., $E_i =\sum_{j=1}^{r^{d-2}} \sigma_{i,j} v_{i,j}^2 \circ v_{i,j}^3 \circ \cdots \circ v_{i,j}^d$, where $\|v_{i,j}^k\|_{\infty} \leq 1$ and $\sum_j |\sigma_{i,j}| \leq (r\sqrt{r})^{d-2} \alpha$.\newline
Going back to the original tensor, since $T_{[1]}= X \circ Y$, we also have $T= \sum_{i=1}^r X(:,i) \circ (\sum_{j=1}^{r^{d-2}} \sigma_{i,j} v_{i,j}^2 \circ v_{i,j}^3 \circ \cdots \circ v_{i,j}^d)$. Notice that $\|X(:,i)\|_{\infty} \leq \sqrt{r}$. Therefore, by distributing the outer product and rearranging, we get $T=\sum_{j=1}^{r^{d-1}} \sigma_j u_j^1 \circ u_j^2 \circ \cdots \circ u_j^d$ with $\|u_j^1\|_{\infty},\cdots,\|u_j^d\|_{\infty} \leq 1$ and $\sum |\sigma_j| \leq \sum_{i=1}^r \sqrt{r} \left( (r\sqrt{r})^{d-2} \alpha \right) = (r\sqrt{r})^{d-1} \alpha$, which concludes the proof of Lemma \ref{inductive_atomicnorm_decom}.\newline
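The unfolding identity used in this induction step can be verified numerically: matricizing a CP decomposition along the first mode yields $T_{[1]}=U\circ V$, where the columns of $V$ are Kronecker products of the remaining CP factors. A small NumPy sketch (sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N1, N2, N3, r = 4, 3, 2, 2  # arbitrary small sizes

# Rank-r CP decomposition T = sum_i a_i ∘ b_i ∘ c_i
A = rng.standard_normal((N1, r))
B = rng.standard_normal((N2, r))
C = rng.standard_normal((N3, r))
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Mode-1 matricization: T_[1](i, (j,k)) = T(i,j,k)
T1 = T.reshape(N1, N2 * N3)

# Claim used in the proof: T_[1] = U ∘ V with U(:,i) = a_i and
# V(:,i) = b_i ⊗ c_i (Kronecker product of the remaining factors)
V = np.stack([np.kron(B[:, i], C[:, i]) for i in range(r)], axis=1)
assert np.allclose(T1, A @ V.T)
```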
Next, we bound the max-qnorm of a bounded rank-$r$ tensor. The following lemma establishes a decomposition of bounded rank-$r$ tensors that can be used directly to bound their max-qnorm. As the max-qnorm is absolutely homogeneous, without loss of generality we assume $\|T\|_{\infty} \leq 1$.
\begin{lem}\label{inductive_maxnorm_decom}
Any order-$d$, rank-$r$ tensor $T \in \bigotimes_{i=1}^{d} \mathbb{R}^{N_i}$, with $\|T\|_{\infty} \leq 1$ can be decomposed into $r^{d-1}$ rank-one tensors, $T=\sum_{j=1}^{r^{d-1}} u_j^1 \circ u_j^2 \circ \cdots \circ u_j^d$, where:
\begin{equation}
\sum_{j=1}^{r^{d-1}} (u_j^k(t))^2 \leq r^{d-1}\ \text{for any } 1\leq k \leq d,\ 1\leq t\leq N_k.
\end{equation}
Notice that $\sqrt{\sum_{j=1}^{r^{d-1}} (u_j^k(t))^2}$ is the Euclidean norm of the $t$-th row of the $k$-th factor of $T$; i.e., $\sum_{j=1}^{r^{d-1}} (u_j^k(t))^2 \leq r^{d-1}$ means that the two-infinity norm of each factor is bounded by $\sqrt{r^{d-1}}$.
\end{lem}
\begin{rem}[A weaker bound via Lemma \ref{inductive_atomicnorm_decom}]
At the end of this subsection, we provide a proof of the lemma as stated above. However, using the decomposition obtained in Lemma \ref{inductive_atomicnorm_decom}, we can already find a decomposition with $\sum_{j=1}^{r^{d-1}} (u_j^k(t))^2 \leq r^{d}$. To see this, notice that by Lemma \ref{inductive_atomicnorm_decom}, defining $\vec{\sigma}:=(\sigma_1, \cdots,\sigma_{r^{d-1}})$, we can write
$$T=\sum_{j=1}^{r^{d-1}} \sigma_j v_j^1 \circ v_j^2 \circ \cdots \circ v_j^d ,\ \|v_j^1\|_{\infty},\cdots,\|v_j^d\|_{\infty} \leq 1, \ \text{with } \|\vec{\sigma}\|_1\leq (r\sqrt{r})^{d-1}.$$
Now define
$$u_j^k:=(\sigma_j)^{\frac{1}{d}} v_j^k\ \text{for any } 1\leq k \leq d.$$
It is easy to check that $T=\sum_{j=1}^{r^{d-1}} u_j^1 \circ u_j^2 \circ \cdots \circ u_j^d$ and
$$\sum_{j=1}^{r^{d-1}} (u_j^k(t))^2 = \sum_{j=1}^{r^{d-1}} \sigma_j^{\frac{2}{d}} (v_j^k(t))^2 \leq \sum_{j=1}^{r^{d-1}} \sigma_j^{\frac{2}{d}} = \|\vec{\sigma}\|_{\frac{2}{d}}^{\frac{2}{d}}.$$
Using H\"older's inequality, for $d \geq 2$ we have
$$\sum_{j=1}^{r^{d-1}} (u_j^k(t))^2 \leq \|\vec{\sigma}\|_{\frac{2}{d}}^{\frac{2}{d}} \leq \| \vec{\sigma} \|_1^{\frac{2}{d}} (r^{d-1})^{1-\frac{2}{d}} \leq r^{\frac{3d-3}{d}}\, r^{\frac{(d-1)(d-2)}{d}}=r^{\frac{(d-1)(d+1)}{d}} \leq r^d.$$
This proves an upper bound that is close to the one in the lemma. To obtain the sharper bound stated in Lemma \ref{inductive_maxnorm_decom}, we go through the induction steps below.
\end{rem}
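The rescaling in this remark can be checked numerically; the sketch below (with arbitrary sizes, and positive weights $\sigma_j$ to avoid fractional powers of negative numbers) verifies that absorbing $\sigma_j^{1/d}$ into each factor preserves the tensor and that the row-wise squared sums are bounded by $\|\vec{\sigma}\|_{2/d}^{2/d}$.

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, J = 4, 3, 5                 # arbitrary sizes; J rank-one terms
sigma = rng.uniform(0.5, 2.0, J)  # positive weights sigma_j
V = [rng.uniform(-1, 1, (N, J)) for _ in range(d)]  # |v_j^k(t)| <= 1

# T = sum_j sigma_j v_j^1 ∘ v_j^2 ∘ v_j^3
T_weighted = np.einsum('j,aj,bj,cj->abc', sigma, *V)

# Absorb sigma_j^(1/d) into every factor: u_j^k = sigma_j^(1/d) v_j^k
scale = sigma ** (1.0 / d)
U = [scale * Vk for Vk in V]
T_absorbed = np.einsum('aj,bj,cj->abc', *U)
assert np.allclose(T_weighted, T_absorbed)

# Row-wise squared sums are bounded by ||sigma||_{2/d}^{2/d}
bound = np.sum(sigma ** (2.0 / d))
for Uk in U:
    assert np.all(np.sum(Uk ** 2, axis=1) <= bound + 1e-12)
```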
\noindent
{\bf{Proof of Lemma \ref{inductive_maxnorm_decom}}}: We prove this lemma by induction. The case $d=2$ follows directly from applying John's theorem to a rank-$r$ decomposition of $T$, i.e., $T=XS \circ YS^{-1}$ where $T=X \circ Y$. Now consider an order-$d$ tensor that can be written as $T=\sum_{j=1}^{r} u_j^1 \circ u_j^2 \circ \cdots \circ u_j^d$ with $\|T\|_{\infty} \leq 1$. Matricizing along the first dimension results in $T_{[1]} = \sum_{i=1}^r u_i^{1} \circ (u_i^{2} \otimes \cdots \otimes u_i^{d})$. In matrix notation, we can write $T_{[1]}=U \circ V$, where $U(:,i)= u_i^{1} \in \mathbb{R}^{N_1}$ and $V(:,i)=u_i^{2} \otimes \cdots \otimes u_i^{d} \in \mathbb{R}^{\prod_{k=2}^{d}N_k}$.\newline
Using John's theorem, there exists $S \in \mathbb{R}^{r \times r}$ such that $T_{[1]}=X \circ Y$, where $X=US$, $Y=VS^{-1}$, $\|X\|_{2,\infty} \leq \sqrt{r}$, and $\|Y\|_{\infty} \leq \|Y\|_{2,\infty} \leq 1$. More importantly, each column of $Y$ is a linear combination of the columns of $V$; more precisely, there exist $\zeta_{i,1}, \cdots, \zeta_{i,r}$ such that $Y(:,i)=\sum_{j=1}^r \zeta_{i,j} (u_j^{2} \otimes \cdots \otimes u_j^{d})$. Therefore, unfolding the $i$-th column of $Y$ into a $(d-1)$-dimensional tensor $E_i \in \mathbb{R}^{N_2 \times \cdots \times N_d}$ results in a rank-$r$, $(d-1)$-dimensional tensor with $\|E_i\|_{\infty} \leq \|Y\|_{\infty} \leq 1$. By induction, $E_i$ can be decomposed into $r^{d-2}$ rank-one tensors as $E_i =\sum_{j=1}^{r^{d-2}} v_{i,j}^2 \circ v_{i,j}^3 \circ \cdots \circ v_{i,j}^d$, where $\sum_{j=1}^{r^{d-2}} (v_{i,j}^k(t))^2 \leq r^{d-2}$ for any $2\leq k\leq d$ and any $1\leq t \leq N_k$. Notice that the factor superscripts start from $2$ to emphasize that $E_i$ is generated from the indices in dimensions $2$ to $d$.\newline
Going back to the original tensor, as $T_{[1]}= X \circ Y$, we can write
$$T= \sum_{i=1}^r X(:,i) \circ (\sum_{j=1}^{r^{d-2}} v_{i,j}^2 \circ v_{i,j}^3 \circ \cdots \circ v_{i,j}^d).$$
By distributing the outer product we get $T= \sum_{i=1}^r \sum_{j=1}^{r^{d-2}} X(:,i) \circ v_{i,j}^2 \circ v_{i,j}^3 \circ \cdots \circ v_{i,j}^d$. Renaming the vectors in the factors we get
$$T = \sum_{k=1}^{r^{d-1}} u_k^1 \circ u_k^2 \circ \cdots \circ u_k^d.$$
Now we bound the max norm of $T$ using this decomposition by considering each factor separately using the information we have about $X$ and $E_i$s.\newline
Starting from the first factor, notice that $\|X\|_{2,\infty} \leq \sqrt{r}$, or more precisely $\sum_{i=1}^r X(t,i)^2 \leq r$ for any $1 \leq t \leq N_1$. By carefully examining the two decompositions of $T$ stated above, we get
$$u_k^1=X\left(:,\left\lfloor \frac{k-1}{r^{d-2}}\right\rfloor+1\right)$$
and therefore
\begin{equation}\label{square_sum_first_factor}
\sum_{k=1}^{r^{d-1}} (u_k^1(t))^2= r^{d-2} \sum_{i=1}^{r} X(t,i)^2 \leq r^{d-2} r=r^{d-1},\ \text{for any } 1 \leq t \leq N_1,
\end{equation}
which proves the lemma for the vectors in the first dimension of the decomposition.\newline
For the second dimension, define $j:=\text{mod}(k-1,r^{d-2})+1$ and $i:=\frac{k-j}{r^{d-2}}+1$. Then
$$u_k^2=v_{i,j}^2,$$
and therefore,
\begin{equation}\label{square_sum_second_factor}
\sum_{k=1}^{r^{d-1}} (u_k^2(t))^2= \sum_{i=1}^{r} \sum_{j=1}^{r^{d-2}} (v_{i,j}^2(t))^2 \leq \sum_{i=1}^{r} r^{d-2} = r^{d-1},\ \text{for any } 1 \leq t \leq N_2,
\end{equation}
which finishes the proof of the lemma for the vectors in the second dimension. All the other dimensions can be bounded in exactly the same way as the second dimension.\newline
The bound on the max-qnorm of a bounded rank-$r$ tensor follows directly from Lemma \ref{inductive_maxnorm_decom} and the definition of the tensor max-qnorm.
\begin{rem}
In both Lemmas \ref{inductive_atomicnorm_decom} and \ref{inductive_maxnorm_decom}, we start by decomposing a tensor $T=U_1 \circ U_2 \circ \cdots \circ U_d$ into $T_{[1]}=U_1 \circ V$ and generating $K$ (in John's theorem) from the rows of the factor $U_1$. Notice that John's theorem requires the set $K$ to be full-dimensional. This condition is satisfied in the matrix case, as a decomposition of a matrix with the smallest possible rank is full-dimensional. However, this is not necessarily the case for tensors; in other words, the matricization along a dimension might have smaller rank than the original tensor. To take care of this issue, consider a factor $U_{add}$ of the same size as $U_1$ such that $U_1+U_{add}$ is full-dimensional. Now the tensor $T_{\epsilon} = T + \epsilon\, U_{add} \circ U_2 \circ \cdots \circ U_d$ satisfies the conditions of John's theorem, and by taking $\epsilon$ to zero we can prove that $\|T\|_{M} = \lim_{\epsilon \to 0}\|T_{\epsilon}\|_{M}$ and $\|T\|_{\text{max}} = \lim_{\epsilon \to 0}\|T_{\epsilon}\|_{\text{max}}$. Notice that the M-norm is convex and the max-qnorm satisfies $\|X+T\|_{\text{max}} \leq \big( \sqrt{\|X\|_{\text{max}}^{\frac{2}{d}} + \|T\|_{\text{max}}^{\frac{2}{d}} } \big)^d$.
\end{rem}
\subsection{Proof of Theorem \ref{theorem_atomic_TC}}\label{proof_theorem_atomic_TC}
In this section, we prove Theorem \ref{theorem_atomic_TC}. We make use of Lemma \ref{lemma_rademacher} and Theorem \ref{theorem_atomicnorm_bound} repeatedly; some of the remaining calculations are simple adaptations of the proof in \cite[Section 6]{cai2016matrix}.\newline
For ease of notation define $\hat{T}:=\hat{T}_{M}$. Notice that $T^{\sharp}$ is feasible for \eqref{optimization_TC_atomicnorm} and therefore,
\begin{equation*}
\frac{1}{m}\sum_{t=1}^{m} (\hat{T}_{\omega_t}-Y_{\omega_t})^2 \leq \frac{1}{m}\sum_{t=1}^{m} (T^{\sharp}_{\omega_t}-Y_{\omega_t})^2.
\end{equation*}
Plugging in $Y_{\omega_t}= T^{\sharp}(\omega_t) + \sigma \xi_t$ and defining $\Delta=\hat{T}-T^{\sharp} \in K_{M}^T(2\alpha,2R)$, we get
\begin{equation}\label{delta_TC_equation}
\frac{1}{m}\sum_{t=1}^{m} \Delta(\omega_t)^2 \leq \frac{2\sigma}{m}\sum_{t=1}^{m} \xi_t \Delta(\omega_t).
\end{equation}
The proof is based on a lower bound on the left hand side of \eqref{delta_TC_equation} and an upper bound on its right hand side.
\subsubsection{Upper bound on right hand side of \eqref{delta_TC_equation}}
First, we bound $\hat{R}_m(\alpha,R) := \underset{T \in K_M^T(\alpha,R)} {\sup} |\frac{1}{m} \sum_{t=1}^{m} \xi_t T(\omega_t)|$, where $\xi_t$ is a sequence of $\mathcal{N}(0,1)$ random variables. With probability at least $1-\delta$ over $\xi=\{\xi_t\}$, we can relate this quantity to a Gaussian maximum as follows \cite{pisier1999volume}:
\begin{equation*}
\begin{aligned}
\underset{T \in K_M^T(\alpha,R)} {\sup} |\frac{1}{m} \sum_{t=1}^{m} \xi_t T(\omega_t)| &\leq \mathbb{E}_{\xi}[\underset{T \in K_M^T(\alpha,R)} {\sup} |\frac{1}{m} \sum_{t=1}^{m} \xi_t T(\omega_t)|] + \pi \alpha \sqrt{\frac{\log (\frac{1}{\delta})}{2m}} \\
&\leq R\ \mathbb{E}_{\xi}[\underset{T \in T_{\pm}} {\sup} |\frac{1}{m} \sum_{t=1}^{m} \xi_t T(\omega_t)|] + \pi \alpha \sqrt{\frac{\log (\frac{1}{\delta})}{2m}},
\end{aligned}
\end{equation*}
where $T_{\pm}$ is the set of rank-one sign tensors, with $|T_{\pm}| < 2^{Nd}$. Since for each $T$, $\sum_{t=1}^{m} \xi_t T(\omega_t)$ is a Gaussian with mean zero and variance $m$, the expected maximum is bounded by $\sqrt{2m \log(|T_{\pm}|)}$. Gathering all the above information, we end up with the following upper bound, which holds with probability at least $1-\delta$:
\begin{equation}
\underset{T \in K_M^T(\alpha,R)} {\sup} |\frac{1}{m} \sum_{t=1}^{m} \xi_t T(\omega_t)| \leq R \sqrt{\frac{2\log(2)Nd}{m}} + \pi \alpha \sqrt{\frac{\log (\frac{1}{\delta})}{2m}}.
\end{equation}
Choosing $\delta=e^{-\frac{Nd}{2}}$, we get that with probability at least $1-e^{-\frac{Nd}{2}}$
\begin{equation}\label{TC_final_upperbound}
\underset{T \in K_M^T(\alpha,R)} {\sup} |\frac{1}{m} \sum_{t=1}^{m} \xi_t T(\omega_t)| \leq 2(R+\alpha)\sqrt{\frac{Nd}{m}}.
\end{equation}
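The Gaussian maximal inequality invoked above can be checked by Monte Carlo. For $K$ centered Gaussians with variance $m$, the expected maximum of their absolute values is at most $\sqrt{2m\log(2K)}$ (the factor $2K$ inside the logarithm conservatively accounts for the absolute values); the sketch below uses arbitrary small values of $m$ and $K$.

```python
import numpy as np

rng = np.random.default_rng(3)
m, K = 50, 256  # K plays the role of |T_pm|; values are arbitrary

# For each fixed sign tensor T, sum_t xi_t T(omega_t) ~ N(0, m).
# Simulate K such Gaussians many times and average the maxima.
samples = rng.standard_normal((5000, K)) * np.sqrt(m)
emp_expected_max = np.abs(samples).max(axis=1).mean()

# Gaussian maximal inequality: E max_k |G_k| <= sqrt(2 m log(2K))
bound = np.sqrt(2 * m * np.log(2 * K))
assert emp_expected_max <= bound
```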
\subsubsection{Lower bound on left hand side of \eqref{delta_TC_equation}}
In this section, we prove that with high probability, $\frac{1}{m}\sum_{t=1}^{m} T(\omega_t)^2$ does not deviate much from its expectation $\|T\|_{\Pi}^2$. For ease of notation, define $T_S=(T(\omega_1),T(\omega_2),\cdots,T(\omega_m))$ to be the vector of chosen samples drawn according to $\Pi$, where
$$\|T\|_{\Pi}^2=\frac{1}{m}\mathbb{E}_{S\sim \Pi} \|T_S\|_2^2=\sum_{\omega} \pi_{\omega} T(\omega)^2.$$
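The quantity $\frac{1}{m}\|T_S\|_2^2$ is an unbiased estimator of $\|T\|_{\Pi}^2$, which can be checked with a quick seeded Monte Carlo simulation (sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
N, d = 3, 3
T = rng.standard_normal((N,) * d)
pi = rng.dirichlet(np.ones(N ** d))  # sampling distribution Pi over entries

# ||T||_Pi^2 = sum_omega pi_omega T(omega)^2 (exact expectation)
pi_norm_sq = float(pi @ (T.reshape(-1) ** 2))

# Empirical counterpart (1/m)||T_S||_2^2 from m i.i.d. draws from Pi
m = 200_000
idx = rng.choice(N ** d, size=m, p=pi)
empirical = float(np.mean(T.reshape(-1)[idx] ** 2))
assert abs(empirical - pi_norm_sq) < 0.05
```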
We prove that with high probability over the samples,
\begin{equation}\label{T_S_deviation}
\frac{1}{m} \|T_S\|_2^2 \geq \|T\|_{\Pi}^2 - f_{\beta}(m,N,d),
\end{equation}
holds uniformly over all tensors $T \in K_M^T(1,\beta)$.
\begin{lem}\label{lem_deviation_empirical}
Defining $\Delta(S):=\underset{T \in K_M^T(1,\beta)} {\sup} |\frac{1}{m} \|T_S\|_2^2 - \|T\|_{\Pi}^2|$, and assuming $N,d >2$ and $m \leq N^d$, there exists a constant $C>20$ such that
$$\mathbb{P}\left(\Delta(S) > C\beta \sqrt{\frac{Nd}{m}}\right) \leq e^{\frac{-N}{\ln(N)}}.$$
\end{lem}
To prove this lemma, we show that we can bound the $t$-th moment of $\Delta(S)$ as
\begin{equation}\label{t-moment}
\mathbb{E}_{S \sim \Pi}[\Delta(S)^t] \leq \left( \frac{8 \beta\sqrt{Nd+t\ \text{ln}(m)}}{\sqrt{m}} \right)^t.
\end{equation}
Before stating the proof of this bound, we show how we can use it to prove Lemma \ref{lem_deviation_empirical} by using Markov's inequality.
\begin{equation}
\mathbb{P}(\Delta(S) > C \beta \sqrt{\frac{Nd}{m}}) = \mathbb{P}\left( (\Delta(S))^t > (C \beta \sqrt{\frac{Nd}{m}})^t \right) \leq \frac{\mathbb{E}_{S \sim \Pi}[\Delta(S)^t]}{(C \beta \sqrt{\frac{Nd}{m}})^t}.
\end{equation}
Using \eqref{t-moment} and simplifying we get
$$\mathbb{P}\left(\Delta(S) > C \beta \sqrt{\frac{Nd}{m}}\right) \leq \left( \frac{4 \sqrt{Nd + t \ln(m)}}{C \sqrt{Nd}} \right)^t.$$
Taking $t = \frac{Nd}{\ln(m)}$, for $C>12$ we obtain
$$\mathbb{P}\left(\Delta(S) > C \beta \sqrt{\frac{Nd}{m}}\right) \leq e^{\frac{-Nd}{\ln(m)}} \leq e^{\frac{-N}{\ln(N)}}.$$ \qed
Now we prove \eqref{t-moment} using some basic techniques of probability in Banach spaces, including symmetrization and the contraction inequality \cite{ledoux2013probability,davenport20141}. Regarding the tensor $T \in \bigotimes_{i=1}^d \mathbb{R}^N$ as a function from $[N] \times [N] \times \cdots \times [N]$ to $\mathbb{R}$, we define $f_T(\omega) := T(\omega)^2$. We are interested in bounding $\Delta(S)=\underset{f_T : T \in K_M^T(1,\beta)} {\sup} |\frac{1}{m} \sum_{i=1}^{m} f_T(\omega_i) - \mathbb{E}(f_T(\omega_i)) |$. A standard symmetrization argument together with the contraction principle yields
$$\mathbb{E}_{S \sim \Pi}[\Delta(S)^t] \leq \mathbb{E}_{S \sim \Pi} \lbrace 2 \mathbb{E}_{\epsilon} [\underset{T \in K_M^T(1,\beta)} {\sup} |\frac{1}{m} \sum_{i=1}^m \epsilon_i T(\omega_i)^2| ] \rbrace^t \leq \mathbb{E}_{S \sim \Pi} \lbrace 4 \mathbb{E}_{\epsilon} [\underset{T \in K_M^T(1,\beta)} {\sup} |\frac{1}{m} \sum_{i=1}^m \epsilon_i T(\omega_i)| ] \rbrace^t.$$
Notice that if $\|T\|_{M} \leq \beta$, then $T \in \beta\, \text{conv}(T_{\pm})$ and therefore
$$\mathbb{E}_{S \sim \Pi}[\Delta(S)^t] \leq \mathbb{E}_{S \sim \Pi} \lbrace 4 \beta \mathbb{E}_{\epsilon} [\underset{T \in T_{\pm}} {\sup} |\frac{1}{m} \sum_{i=1}^m \epsilon_i T(\omega_i)| ] \rbrace^t = \beta^t \mathbb{E}_{S \sim \Pi} \lbrace \mathbb{E}_{\epsilon} [\underset{T \in T_{\pm}} {\sup} |\frac{4}{m} \sum_{i=1}^m \epsilon_i T(\omega_i) | ] \rbrace^t.$$
To bound the right-hand side above, notice that for any $\alpha>0$ \cite[Theorem 36]{srebro2004learning},
$$\mathbb{P}_{\epsilon} ( \frac{4}{m} \sum_{i=1}^m \epsilon_i \geq \frac{\alpha}{\sqrt{m}}) = \mathbb{P} \left (\text{Binom}(m,\frac{1}{2}) \geq \frac{m}{2} + \frac{\alpha \sqrt{m}}{8} \right) \leq e^{\frac{-\alpha^2}{16}}.$$
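This binomial tail bound can be sanity-checked by exact computation. The sketch below verifies the conservative Hoeffding exponent $e^{-\alpha^2/32}$ (the sharper constant in the display depends on the exact formulation of the cited theorem):

```python
import math

def binom_tail(m, k):
    # Exact P(Binom(m, 1/2) >= k)
    return sum(math.comb(m, j) for j in range(k, m + 1)) / 2 ** m

m = 100
for alpha in [4.0, 8.0, 12.0, 16.0]:
    k = math.ceil(m / 2 + alpha * math.sqrt(m) / 8)
    # Conservative Hoeffding-type exponent
    assert binom_tail(m, k) <= math.exp(-alpha ** 2 / 32)
```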
Taking a union bound over $T_{\pm}$, where $|T_{\pm}| \leq 2^{Nd}$, we get
$$\mathbb{P}_{\epsilon} [\underset{T \in T_{\pm}} {\sup} ( |\frac{4}{m} \sum_{i=1}^m \epsilon_i T(\omega_i)|) \geq \frac{\alpha}{\sqrt{m}}] \leq 2^{Nd+1} e^{\frac{-\alpha^2}{16}}.$$
Combining the above results and using Jensen's inequality, for $t>1$ we get
$$\beta^t \mathbb{E}_{S \sim \Pi} \lbrace \mathbb{E}_{\epsilon} [\underset{T \in T_{\pm}} {\sup} |\frac{4}{m} \sum_{i=1}^m \epsilon_i T(\omega_i) | ] \rbrace^t \leq \beta^t \mathbb{E}_{S \sim \Pi} \lbrace \mathbb{E}_{\epsilon} [\underset{T \in T_{\pm}} {\sup} |\frac{4}{m} \sum_{i=1}^m \epsilon_i T(\omega_i) | ]^t \rbrace \leq \beta^t \left( (\frac{\alpha}{\sqrt{m}})^t + 4^t 2^{Nd+1} e^{\frac{-\alpha^2}{16}} \right).$$
Choosing $\alpha=\sqrt{16 \text{ln} (4 \times 2^{Nd+1}) + 4t\text{ln}(m)}$ and simplifying proves \eqref{t-moment}.\qed
\subsubsection{Gathering the results of \eqref{delta_TC_equation}, \eqref{TC_final_upperbound}, and \eqref{T_S_deviation}}
Now we combine the upper and lower bounds of the last two sections to prove Theorem \ref{theorem_atomic_TC}. On one hand, from \eqref{TC_final_upperbound}, as $\Delta \in K_{M}^T(2\alpha,2R)$, we get
$$ \frac{1}{m}\|\Delta_S\|_2^2 \leq 8\sigma (R+\alpha) \sqrt{\frac{Nd}{m}},$$
with probability greater than $1-e^{-\frac{Nd}{2}}$. On the other hand, using Lemma \ref{lem_deviation_empirical} and rescaling, we get
$$\|\Delta\|_{\Pi}^2 \leq \frac{1}{m} \|\Delta_S\|_2^2 + C R \alpha \sqrt{\frac{Nd}{m}},$$
with probability greater than $1-e^{\frac{-N}{\ln(N)}}$. The above two inequalities finish the proof of Theorem \ref{theorem_atomic_TC}.
\begin{rem}\label{remark_proof_maxnorm}
There are only two differences between the proof of Theorem \ref{theorem_atomic_TC} and that of Theorem \ref{theorem_maxqnorm_TC}. The first is an extra constant, $c_1 c_2^d$, which shows up in the Rademacher complexity of the unit max-qnorm ball and changes the constant $C$ in Theorem \ref{theorem_atomic_TC} to $C c_1 c_2^d$ in Theorem \ref{theorem_maxqnorm_TC}. The second is the max-qnorm of the error tensor $\Delta=\hat{T}-T^{\sharp}$ (refer to equation \eqref{delta_TC_equation}), which belongs to $K_{\text{max}}^T(2\alpha, 2^{d-1} R)$ instead of $K_{\text{max}}^T(2\alpha, 2R)$.
\end{rem}
\subsection{Proof of Theorem \ref{theorem_atomic_TC_lowerbound}}\label{proof_TC_lowerbound}
\subsubsection*{Packing set construction}
In this section we construct a packing for the set $K_{M}^T(\alpha,R)$.
\begin{lem}\label{packing}
Let $r=\floor{(\frac{R}{\alpha K_G})^2}$, let $K_{M}^T(\alpha,R)$ be defined as in \eqref{K}, and let $\gamma \leq 1$ be such that $\frac{r}{\gamma^2}$ is an integer with $\frac{r}{\gamma^2} \leq N$. Then there exists a set $\chi^T \subset K_{M}^T(\alpha,R)$ with
$$|\chi^T| \geq \exp\left(\frac{rN}{16\gamma^2}\right)$$
such that
\begin{enumerate}
\item For $T \in \chi^T$, $|T(\omega)|=\alpha \gamma$ for all $\omega \in [N] \times [N] \times \cdots \times [N]$.
\item For any $T^{(i)},T^{(j)} \in \chi^T$ with $T^{(i)} \neq T^{(j)}$,
$$\|T^{(i)}-T^{(j)}\|_F^2 \geq \frac{\alpha^2 \gamma^2 N^d}{2}.$$
\end{enumerate}
\end{lem}
\noindent
{\bf{Proof:}} This packing is a tensor version of the packing set generated in \cite{davenport20141}, with similar properties; our construction is based on the packing set generated there for low-rank matrices with bounded entries. In particular, we know that there exists a set $\chi \subset \lbrace M \in \mathbb{R}^{N \times N}: \|M\|_{\infty} \leq \alpha, \text{rank}(M)=r \rbrace$ with $|\chi| \geq \exp(\frac{rN}{16\gamma^2})$ such that for any $M^{(i)},M^{(j)} \in \chi$ with $i \neq j$, $\|M^{(i)}-M^{(j)}\|_F^2 \geq \frac{\alpha^2 \gamma^2 N^2}{2}$. Take any $M^{(k)} \in \chi$. $M^{(k)}$ is a rank-$r$ matrix with $\|M^{(k)}\|_{\infty} \leq \alpha$ and therefore $\|M^{(k)}\|_{\text{max}} \leq \sqrt{r} \alpha$, which means there exists a nuclear decomposition of $M^{(k)}$ with bounded factors, $M^{(k)}=\sum_{i} \sigma_i u_i \circ v_i$ with $\|u_i\|_{\infty},\|v_i\|_{\infty} \leq 1$, such that $\sum_{i} |\sigma_i| \leq K_G \sqrt{r} \alpha$. Define $T^{(k)} = \sum_{i} \sigma_i u_i \circ v_i \circ \vec{\mathbf{1}} \circ \cdots \circ \vec{\mathbf{1}}$, where $\vec{\mathbf{1}} \in \mathbb{R}^N$ is the vector of all ones. Notice that $\|u_i\|_{\infty}, \|v_i\|_{\infty}, \|\vec{\mathbf{1}}\|_{\infty} \leq 1$, and therefore, by Lemma \ref{max_tensor_ball}, $\|T^{(k)}\|_{M} \leq K_G \sqrt{r} \alpha \leq R$. The tensor $T^{(k)}$ is generated by stacking the matrix $M^{(k)}$ along all the other $d-2$ dimensions, and therefore $|T^{(k)}(\omega)|=\alpha \gamma$ for all $\omega \in [N] \times [N] \times \cdots \times [N]$, and $\|T^{(k)}\|_{\infty} \leq \alpha$. Hence we build $\chi^T$ from $\chi$ by taking the outer product of the matrices in $\chi$ with the vector $\vec{\mathbf{1}}$ along all the other dimensions. Obviously, $|\chi^T|=|\chi| \geq \exp(\frac{rN}{16\gamma^2})$. It remains to prove that $\|T^{(i)}-T^{(j)}\|_F^2 \geq \frac{\alpha^2 \gamma^2 N^d}{2}$.
Assuming that $T^{(i)}$ is generated from $M^{(i)}$ and $T^{(j)}$ from $M^{(j)}$, since $T^{(i)}(i_1,i_2, \cdots,i_d)=M^{(i)}(i_1,i_2)$, we have
\begin{align*}
\|T^{(i)}-T^{(j)}\|_F^2 &= \sum_{i_1=1}^{N} \sum_{i_2=1}^{N} \cdots \sum_{i_d=1}^{N} \left(T^{(i)}(i_1,i_2, \cdots,i_d) - T^{(j)}(i_1,i_2, \cdots,i_d)\right)^2\\
&= N^{d-2} \sum_{i_1=1}^{N} \sum_{i_2=1}^{N} \left(M^{(i)}(i_1,i_2)-M^{(j)}(i_1,i_2)\right)^2 = N^{d-2} \|M^{(i)}-M^{(j)}\|_F^2 \geq \frac{\alpha^2 \gamma^2 N^d}{2},
\end{align*}
which concludes the proof of the lemma.
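The scaling identity at the heart of this computation, namely that stacking a matrix along the remaining $d-2$ dimensions multiplies squared Frobenius distances by $N^{d-2}$, is easy to verify numerically:

```python
import numpy as np

rng = np.random.default_rng(5)
N, d = 4, 4  # arbitrary small sizes
alpha_gamma = 0.5

# Two "matrix packing" elements with entries ±(alpha·gamma)
M1 = alpha_gamma * rng.choice([-1.0, 1.0], size=(N, N))
M2 = alpha_gamma * rng.choice([-1.0, 1.0], size=(N, N))

ones = np.ones(N)

def stack(M):
    # T(i1,...,id) = M(i1,i2): outer product with all-ones vectors
    T = M
    for _ in range(d - 2):
        T = np.multiply.outer(T, ones)
    return T

T1, T2 = stack(M1), stack(M2)
lhs = np.sum((T1 - T2) ** 2)                 # ||T1 - T2||_F^2
rhs = N ** (d - 2) * np.sum((M1 - M2) ** 2)  # N^{d-2} ||M1 - M2||_F^2
assert np.isclose(lhs, rhs)
```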
\subsubsection*{Proof of Theorem \ref{theorem_atomic_TC_lowerbound}}
Now we use the construction in Lemma \ref{packing} to obtain a $\delta$-packing set $\chi^T$ of $K_{M}^T$ with $\delta=\alpha \gamma \sqrt{\frac{N^d}{2}}$. For the lower bound, we assume that the sampling distribution satisfies
\begin{equation}\label{upperbound_on_pi}
\max_{\omega} \pi_{\omega} \leq \frac{L}{N^d}.
\end{equation}
The proof is based on the proof in \cite[Section 6.2]{cai2016matrix}; we reproduce the main steps and refer to \cite{cai2016matrix} for more details. The argument has two main ingredients. The first is a lower bound on the $\|\cdot\|_F$-risk in terms of the error in a multi-way hypothesis testing problem:
$$\underset{\hat{T}}{\inf} \underset{T \in K_{M}^T(\alpha,R)}{\sup} \mathbb{E}\|\hat{T}-T\|_F^2 \geq \frac{\delta^2}{4} \underset{\tilde{T}}{\text{min}} \mathbb{P} (\tilde{T} \neq T^{\sharp}),$$
where $T^{\sharp}$ is uniformly distributed over the packing set $\chi^T$. The second ingredient is a variant of Fano's inequality, which, conditioned on the observations $S=\{\omega_1, \cdots, \omega_m\}$, gives the lower bound
\begin{equation}\label{Fano_upper_bound}
\mathbb{P}(\tilde{T} \neq T^{\sharp} | S) \geq 1-\frac{(\binom{|\chi^T|}{2})^{-1} \sum_{k\neq j} K(T^k || T^j)+\log(2)}{\log|\chi^T|},
\end{equation}
where $K(T^k || T^j)$ is the Kullback-Leibler divergence between distributions $(Y_S|T^k)$ and $(Y_S|T^j)$. For our observation model with i.i.d. Gaussian noise, we have
$$K(T^k || T^j) = \frac{1}{2\sigma^2}\sum_{t=1}^{m} (T^k(\omega_t) - T^j(\omega_t))^2,$$
and therefore,
$$\mathbb{E}_S [K(T^k || T^j)] = \frac{m}{2\sigma^2} \|T^k - T^j\|_{\Pi}^2.$$
From the first property of the packing set generated in Lemma \ref{packing}, $\|T^k-T^j\|_F^2 \leq 4 \alpha^2 \gamma^2 N^d$. Combined with \eqref{upperbound_on_pi} and \eqref{Fano_upper_bound}, this yields
$$\mathbb{P}(\tilde{T} \neq T^{\sharp}) \geq 1-\frac{\frac{32L\gamma^4\alpha^2m}{\sigma^2}+12\gamma^2}{rN}\geq 1-\frac{32L\gamma^4\alpha^2m}{\sigma^2rN}-\frac{12\gamma^2}{rN}\geq \frac{3}{4}-\frac{32L\gamma^4\alpha^2m}{\sigma^2rN},$$
provided $rN>48$. Now if $\gamma^4 \leq \frac{\sigma^2rN}{128L\alpha^2m}$,
$$\underset{\hat{T}}{\inf} \underset{T \in K_{M}^T(\alpha,R)}{\sup} \frac{1}{N^d} \mathbb{E}\|\hat{T}-T\|_F^2 \geq \frac{\alpha^2 \gamma^2}{16}.$$
Therefore, if $\frac{\sigma^2rN}{128L\alpha^2m} \geq 1$, choosing $\gamma=1$ finishes the proof. Otherwise, choosing $\gamma^2 = \frac{\sigma}{8\sqrt{2}\alpha}\sqrt{\frac{rN}{Lm}}$ results in
$$\underset{\hat{T}}{\inf} \underset{T \in K_{M}^T(\alpha,R)}{\sup} \frac{1}{N^d} \mathbb{E}\|\hat{T}-T\|_F^2 \geq \frac{\sigma\alpha}{128\sqrt{2}}\sqrt{\frac{rN}{Lm}} \geq \frac{\sigma R}{128\sqrt{2}K_G}\sqrt{\frac{N}{Lm}}.$$
\section{Future directions and open problems}
In this work we considered max-qnorm constrained least squares for tensor completion and showed that, theoretically, the number of required measurements is linear in the maximum size of the tensor. To the best of our knowledge, this is the first work that reduces the required number of measurements from $N^{\frac{d}{2}}$ to $N$. However, several open problems and complications remain; we list a few of them below.
\begin{itemize}
\item The difference between the upper bound on the nuclear-norm and that on the max-qnorm of a bounded low-rank tensor is significant, and it is a main reason for the theoretical superiority of the max-qnorm over the nuclear-norm. In our proof, the main theoretical steps for bounding the least squares estimation error, constrained by an arbitrary norm, are bounding the Rademacher complexity of the unit-norm ball and finding a tight bound on the norm of low-rank tensors. In the case of the max-qnorm, we achieve an upper bound of $\sqrt{r^{d^2-d}}\, \alpha$ and a Rademacher complexity of $O(\sqrt{\frac{dN}{m}})$. A careful calculation of these quantities for the nuclear-norm still needs to be done. However, a generalization of current results gives an upper bound of $O(\sqrt{r^{d-1} N^d}\, \alpha)$ for the nuclear-norm of rank-$r$ tensors. Considering the tensor $\vec{\mathbf{1}} \circ \cdots \circ \vec{\mathbf{1}}$, we can see that this bound is tight. We leave the exact answer to this question to future work.
\item We know that the dependency of the upper bound of the low-rank max-qnorm tensor found in Theorem \ref{theorem_atomicnorm_bound} is optimal in $N$. However, we believe the dependency on $r$ can be improved. We saw in Section \ref{experiments and algorithms} that this is definitely the case for some specific class of tensors.
\item Other than the open problems concerning algorithms for calculating the max-qnorm of tensors and projecting onto max-qnorm balls, an interesting question is analyzing exact tensor recovery using the max-qnorm. Most of the evidence points to this being NP-hard, including \cite{hillar2013most}, which proves that many similar tensor problems are NP-hard, and the connection between noisy tensor completion and the 3-SAT problem \cite{barak2015noisy}, which shows that if exact tensor completion is doable in polynomial time, the conjecture in \cite{daniely2013more} will be disproved. However, a precise study of NP-hardness, or of the availability of polynomial-time estimates, remains to be done.
\item The preliminary results of Algorithm \ref{algorithm_TC} show significant improvements over previous algorithms. This highlights the need for a more sophisticated (and provable) algorithm that utilizes the max-qnorm for tensor completion. As a first step, in the matrix case, the max-norm can be reformulated as a semidefinite program, and together with \cite{burer2006computational}, this proves that once we solve the problem using its factors, any local minimum is a global minimum, which establishes the correctness of the algorithm. However, this is not the case for tensors, and in our experiments we saw that the results are sensitive to the size of the low-rank factors. Analyzing this behavior is an interesting future direction.
\end{itemize}
| {
"timestamp": "2017-11-15T02:40:32",
"yymm": "1711",
"arxiv_id": "1711.04965",
"language": "en",
"url": "https://arxiv.org/abs/1711.04965",
"abstract": "We analyze low rank tensor completion (TC) using noisy measurements of a subset of the tensor. Assuming a rank-$r$, order-$d$, $N \\times N \\times \\cdots \\times N$ tensor where $r=O(1)$, the best sampling complexity that was achieved is $O(N^{\\frac{d}{2}})$, which is obtained by solving a tensor nuclear-norm minimization problem. However, this bound is significantly larger than the number of free variables in a low rank tensor which is $O(dN)$. In this paper, we show that by using an atomic-norm whose atoms are rank-$1$ sign tensors, one can obtain a sample complexity of $O(dN)$. Moreover, we generalize the matrix max-norm definition to tensors, which results in a max-quasi-norm (max-qnorm) whose unit ball has small Rademacher complexity. We prove that solving a constrained least squares estimation using either the convex atomic-norm or the nonconvex max-qnorm results in optimal sample complexity for the problem of low-rank tensor completion. Furthermore, we show that these bounds are nearly minimax rate-optimal. We also provide promising numerical results for max-qnorm constrained tensor completion, showing improved recovery results compared to matricization and alternating least squares.",
"subjects": "Machine Learning (cs.LG); Optimization and Control (math.OC); Machine Learning (stat.ML)",
"title": "Near-optimal sample complexity for convex tensor completion"
} |
https://arxiv.org/abs/1708.00502 | Estimation of the covariance structure of heavy-tailed distributions | We propose and analyze a new estimator of the covariance matrix that admits strong theoretical guarantees under weak assumptions on the underlying distribution, such as existence of moments of only low order. While estimation of covariance matrices corresponding to sub-Gaussian distributions is well-understood, much less in known in the case of heavy-tailed data. As K. Balasubramanian and M. Yuan write, "data from real-world experiments oftentimes tend to be corrupted with outliers and/or exhibit heavy tails. In such cases, it is not clear that those covariance matrix estimators .. remain optimal" and "..what are the other possible strategies to deal with heavy tailed distributions warrant further studies." We make a step towards answering this question and prove tight deviation inequalities for the proposed estimator that depend only on the parameters controlling the "intrinsic dimension" associated to the covariance matrix (as opposed to the dimension of the ambient space); in particular, our results are applicable in the case of high-dimensional observations. | \section{Introduction}
Estimation of the covariance matrix is one of the fundamental problems in data analysis: many important statistical tools, such as Principal Component Analysis (PCA) \cite{hotelling1933analysis} and regression analysis, involve covariance estimation as a crucial step.
For instance, PCA has immediate applications to nonlinear dimension reduction and manifold learning techniques \cite{allard2012multi}, genetics \cite{novembre2008genes}, computational biology \cite{alter2000singular}, among many others.
However, assumptions underlying the theoretical analysis of most existing estimators, such as various modifications of the sample covariance matrix, are often restrictive and do not hold for real-world scenarios.
Usually, such estimators rely on heuristic (and often bias-producing) data preprocessing, such as outlier removal.
To remove this preprocessing step from the equation, one has to develop a class of new statistical estimators that admit strong performance guarantees, such as exponentially tight concentration around the unknown parameter of interest, under weak assumptions on the underlying distribution, such as the existence of moments of only low order.
In particular, such heavy-tailed distributions serve as a viable model for data corrupted with outliers -- an almost inevitable scenario for applications.
We make a step towards solving this problem: using tools from random matrix theory, we develop a class of \textit{robust} estimators that are numerically tractable and are supported by strong theoretical guarantees under much weaker conditions than currently available analogues. The term ``robustness'' refers to the fact that our estimators admit provably good performance even when the underlying distribution is heavy-tailed.
\subsection{Notation and organization of the paper}
Given $A\in \mathbb R^{d_1\times d_2}$, let $A^T\in \mathbb R^{d_2\times d_1}$ be the transpose of $A$.
If $A$ is symmetric, we will write $\lambda_{\mbox{\footnotesize{max}\,}}(A)$ and $\lambda_{\mbox{\footnotesize{min}\,}}(A)$ for the largest and smallest eigenvalues of $A$.
Next, we will introduce the matrix norms used in the paper.
Everywhere below, $\|\cdot\|$ stands for the operator norm $\|A\|:=\sqrt{\lambda_{\mbox{\footnotesize{max}\,}}(A^T A)}$.
If $d_1=d_2=d$, we denote by $\mbox{tr} A$ the trace of $A$.
For $A\in \mathbb R^{d_1\times d_2}$, the nuclear norm $\|\cdot\|_1$ is defined as
$\|A\|_1=\mbox{tr}(\sqrt{A^T A})$, where $\sqrt{A^T A}$ is a nonnegative definite matrix such that $(\sqrt{A^T A})^2=A^T A$.
The Frobenius (or Hilbert-Schmidt) norm is $\|A\|_{\mathrm{F}}=\sqrt{\mbox{tr}(A^T A)}$, and the associated inner product is
$\dotp{A_1}{A_2}=\mbox{tr}(A_1^T A_2)$.
For $z\in \mathbb R^d$, $\left\| z \right\|_2$ stands for the usual Euclidean norm of $z$.
Let $A$, $B$ be two self-adjoint matrices. We will write $A\succeq B \ (\text{or }A\succ B)$ iff $A-B$ is nonnegative (or positive) definite.
For $a,b\in \mathbb R$, we set $a\vee b:=\max(a,b)$ and $a\wedge b:=\min(a,b)$.
We will also use the standard Big-O and little-o notation when necessary.
Finally, we give a definition of a matrix function.
Let $f$ be a real-valued function defined on an interval $\mathbb T\subseteq \mathbb R$, and let $A\in \mathbb R^{d\times d}$ be a symmetric matrix with the eigenvalue decomposition
$A=U\Lambda U^\ast$ such that $\lambda_j(A)\in \mathbb T,\ j=1,\ldots,d$.
We define $f(A)$ as
$f(A)=Uf(\Lambda) U^\ast$, where
\[
f(\Lambda)=f\left( \begin{pmatrix}
\lambda_1 & \, & \,\\
\, & \ddots & \, \\
\, & \, & \lambda_d
\end{pmatrix} \right)
:=\begin{pmatrix}
f(\lambda_1) & \, & \,\\
\, & \ddots & \, \\
\, & \, & f(\lambda_d)
\end{pmatrix}.
\]
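For concreteness, this spectral calculus can be sketched numerically. The following is a minimal NumPy illustration (our own, not part of the paper; the helper name \texttt{matrix\_function} is ours):

```python
import numpy as np

def matrix_function(f, A):
    # f(A) = U f(Lambda) U^T for a symmetric matrix A = U Lambda U^T
    lam, U = np.linalg.eigh(A)
    return (U * f(lam)) @ U.T

# Sanity check: applying x -> x^2 spectrally agrees with the matrix product A @ A.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
sq = matrix_function(np.square, A)
```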
A few comments on the organization of the rest of the paper: Section \ref{sec:literature} provides an overview of related work, and Section \ref{sec:main} contains the main results of the paper.
The proofs are outlined in section \ref{sec:proofs}; longer technical arguments can be found in the supplementary material.
\subsection{Problem formulation and overview of the existing work}
\label{sec:literature}
Let $X\in \mathbb R^d$ be a random vector with
mean $\mathbb EX = \mu_0$, covariance matrix $\Sigma_0 = \mathbb E\left[ (X - \mu_0)(X - \mu_0)^T \right]$, and assume $\mathbb E \|X - \mu_0\|_2^4<\infty$.
Let $X_1,\ldots, X_{m}$ be i.i.d. copies of $X$.
Our goal is to estimate the covariance matrix $\Sigma_0$ from the observations $X_j, \ j\leq m$.
This problem and its variations have previously received significant attention by the research community: excellent expository papers by \cite{cai2016} and \cite{fan2016overview} discuss the topic in detail.
However, strong guarantees for the best known estimators hold (with a few exceptions mentioned below) only under the restrictive assumption that $X$ is either bounded with probability 1 or has a sub-Gaussian distribution, meaning that there exists $\sigma>0$ such that for any $v\in \mathbb R^d$ of unit Euclidean norm,
\[
\Pr\left(\left|\dotp{v}{X-\mu_0}\right|\geq t \right)\leq 2 e^{-\frac{t^2 \sigma^2}{2}}.
\]
In the discussion accompanying the paper by \cite{cai2016}, \cite{balasubramanian2016discussion} write that ``data from real-world experiments oftentimes tend to be corrupted with outliers and/or exhibit heavy tails.
In such cases, it is not clear that those covariance matrix estimators described in this article remain optimal'' and ``..what are the other possible strategies to deal with heavy tailed distributions warrant further studies.''
This motivates our main goal: develop new estimators of the covariance matrix that (i) are computationally tractable and perform well when applied to heavy-tailed data and (ii) admit strong theoretical guarantees (such as exponentially tight concentration around the unknown covariance matrix) under weak assumptions on the underlying distribution.
Note that, unlike the majority of existing literature, we do not impose \textit{any further conditions} on the moments of $X$, or on the ``shape'' of its distribution, such as elliptical symmetry.
Robust estimators of covariance and scatter have been studied extensively during the past few decades.
However, the majority of rigorous theoretical results have been obtained for the class of elliptically symmetric distributions, which is a natural generalization of the Gaussian distribution; we mention just a small sample from among the thousands of published works.
Notable examples include the Minimum Covariance Determinant estimator and the Minimum Volume Ellipsoid estimator, which are discussed in \cite{hubert2008high}, as well as Tyler's M-estimator of scatter \cite{tyler1987distribution}.
Works by \cite{fan2016overview,wegkamp2016adaptive, han2016eca} exploit the connection between Kendall's tau and Pearson's correlation coefficient \cite{fang1990symmetric} in the context of elliptical distributions to obtain robust estimators of correlation matrices.
Interesting results for shrinkage-type estimators have been obtained by \cite{ledoit2004well,ledoit2012nonlinear}.
In a recent work, \cite{chen2015robust} study Huber's $\varepsilon$-contamination model, which assumes that the data is generated from a distribution of the form $(1-\varepsilon)F + \varepsilon Q$, where $Q$ is an arbitrary distribution of ``outliers'' and $F$ is an elliptical distribution of ``inliers'', and propose a novel estimator based on the notion of ``matrix depth'' which is related to Tukey's depth function \cite{tukey1975mathematics}; a related class of problems has been studied by \cite{diakonikolas2016robust}.
The main difference of the approach investigated in this paper is the ability to handle a much wider class of distributions that are not elliptically symmetric and only satisfy weak moment assumptions.
Recent papers by \cite{catoni2016pac}, \cite{giulini2015pac}, \cite{fan2016eigenvector,fan2017estimation,fan2017robust} and \cite{minsker2016sub} are closest in spirit to this direction.
For instance, \cite{catoni2016pac} constructs a robust estimator of the Gram matrix of a random vector $Z\in \mathbb R^d$ (as well as its covariance matrix) via estimating the quadratic form $\mathbb E \dotp{Z}{u}^2$ uniformly over all $\|u\|_2=1$.
However, the bounds are obtained under conditions more stringent than those required by our framework, and resulting estimators are difficult to evaluate in applications even for data of moderate dimension.
\cite{fan2016eigenvector} obtain bounds in norms other than the operator norm, which is the focus of the present paper.
\cite{minsker2016sub} and \cite{fan2016robust} use adaptive truncation arguments to construct robust estimators of the covariance matrix.
However, their results are only applicable to the situation when the data is centered (that is, $\mu_0=0$).
In the robust estimation framework, rigorous extension of the arguments to the case of non-centered high-dimensional observations is non-trivial and requires new tools, especially if one wants to avoid statistically inefficient procedures such as sample splitting.
We formulate and prove such extensions in this paper.
\section{Main results}
\label{sec:main}
Definition of our estimator has its roots in the technique proposed by \cite{catoni2012challenging}.
Let
\begin{align}
\label{eq:psi2}
\psi(x) = \left( |x|\wedge 1 \right)\mbox{sign}(x)
\end{align}
be the usual truncation function.
As before, let $X_1,\ldots,X_m$ be i.i.d. copies of $X$, and assume that $\widehat \mu$ is a suitable estimator of the mean $\mu_0$ from these samples, to be specified later.
We define $\widehat\Sigma$ as
\begin{align}
\label{eq:rob-cov}
\widehat\Sigma := \frac{1}{m\theta}\sum_{i=1}^m \psi\left( \theta(X_i - \widehat\mu)(X_i - \widehat\mu)^T \right),
\end{align}
where $\theta\simeq m^{-1/2}$ is small (the exact value will be given later).
It easily follows from the definition of the matrix function that
\[
\widehat\Sigma = \frac{1}{m\theta}\sum_{i=1}^m \frac{(X_i - \widehat\mu)(X_i - \widehat\mu)^T}{\left\| X_i - \widehat\mu\right\|_2^2} \psi\left( \theta \left\| X_i - \widehat\mu \right\|_2^2 \right),
\]
hence it is easily computable.
Note that $\psi(x)= x$ in a neighborhood of $0$; this implies that whenever all the random variables $\theta \left\| X_i - \widehat\mu \right\|_2^2, \ 1\leq i\leq m$ are ``small'' (say, bounded above by $1$) and $\widehat\mu$ is the sample mean, $\widehat \Sigma$ is close to the usual sample covariance estimator.
On the other hand, $\psi$ ``truncates'' $\left\| X_i - \widehat\mu \right\|_2^2$ at level $\simeq \sqrt{m}$, thus limiting the effect of outliers.
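In code, the computable form of $\widehat\Sigma$ above can be sketched as follows (a minimal NumPy illustration with our own naming, not the authors' implementation):

```python
import numpy as np

def psi(x):
    # Truncation function psi(x) = min(|x|, 1) * sign(x)
    return np.sign(x) * np.minimum(np.abs(x), 1.0)

def robust_cov(X, mu_hat, theta):
    # Truncated covariance estimator: for the rank-one matrices
    # Y_i = (X_i - mu_hat)(X_i - mu_hat)^T, psi(theta * Y_i) amounts to
    # rescaling Y_i by psi(theta * ||X_i - mu_hat||^2) / ||X_i - mu_hat||^2.
    m = X.shape[0]
    Z = X - mu_hat
    norms2 = np.sum(Z ** 2, axis=1)
    w = psi(theta * norms2) / np.maximum(norms2, 1e-300)
    return (Z * w[:, None]).T @ Z / (m * theta)
```

When all $\theta\|X_i-\widehat\mu\|_2^2\leq 1$, the weights equal $\theta$ and the sketch reduces to the ordinary sample covariance, as noted above; each summand's operator norm is at most $1/(m\theta)$, which caps the influence of any single outlier.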
Our results (formally stated below, see Theorem \ref{th:lepski}) imply that for an appropriate choice of $\theta=\theta(t,m,\sigma)$,
\[
\left\| \widehat\Sigma - \Sigma_0 \right\| \leq C_0\sigma_0\sqrt{\frac{\beta}{m}}
\]
with probability $\geq 1 - d e^{-\beta}$ for some positive constant $C_0$, where
\[
\sigma_0^2 := \left\| \mathbb E \left\| X - \mu_0\right\|_2^2 (X - \mu_0)(X - \mu_0)^T \right\|
\]
is the ``matrix variance''.
\subsection{Robust mean estimation}
\label{ssec:mean}
There are several ways to construct a suitable estimator of the mean $\mu_0$.
We present the one obtained via the ``median-of-means'' approach.
Let $x_1,\ldots,x_k\in \mathbb R^d$. Recall that the \textit{geometric median} of $x_1,\ldots,x_k$ is defined as
\[
\med{x_1,\ldots,x_k}:=\argmin\limits_{z\in \mathbb R^d}\sum_{j=1}^k \left\|z- x_j \right\|_2.
\]
Let $1<\beta<\infty$ be the confidence parameter, and set $k=\Big\lfloor 3.5 \beta\Big\rfloor+1$; we will assume that $k\leq \frac{m}{2}$.
Divide the sample $X_1,\ldots, X_m$ into $k$ disjoint groups $G_1,\ldots, G_k$ of size $\Big\lfloor \frac{m}{k}\Big\rfloor$ each, and define
\begin{align}
\label{eq:median_mean}
\nonumber
\hat\mu_j&:=\frac{1}{|G_j|}\sum_{i\in G_j}X_i, \ j=1\ldots k,\\
\hat\mu&:=\med{\hat\mu_1,\ldots,\hat\mu_k}.
\end{align}
It then follows from Corollary 4.1 in \cite{minsker2013geometric} that
\begin{align}
\label{eq:deviation1}
&
\Pr\Big(\left\| \hat\mu-\mu_0 \right\|_2 \geq 11\sqrt{\frac{\mbox{tr}(\Sigma_0)(\beta+1)}{m}}\Big)\leq e^{-\beta}.
\end{align}
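The construction \eqref{eq:median_mean} can be sketched as follows (a minimal NumPy illustration; we use Weiszfeld's iteration for the geometric median, and \texttt{np.array\_split} in place of the exact equal-size grouping, so this is a simplified sketch rather than the authors' procedure):

```python
import numpy as np

def geometric_median(points, n_iter=200, tol=1e-10):
    # Weiszfeld iteration for med(x_1, ..., x_k) = argmin_z sum_j ||z - x_j||_2
    z = points.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(points - z, axis=1)
        w = 1.0 / np.maximum(d, 1e-12)   # guard against division by zero
        z_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

def median_of_means(X, beta):
    # k = floor(3.5 * beta) + 1 groups; average within each group, then take
    # the geometric median of the k group means.
    k = int(np.floor(3.5 * beta)) + 1
    assert k <= X.shape[0] // 2, "need k <= m/2"
    group_means = np.array([G.mean(axis=0) for G in np.array_split(X, k)])
    return geometric_median(group_means)
```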
\begin{comment}
\begin{theorem}
\label{th:main}
Let $Y_1,\ldots,Y_n\in \mathbb R^{d\times d}$ be a sequence of independent self-adjoint random matrices, and
$\sigma_n^2\geq\left\| \sum_{j=1}^n \mathbb EY_j^2 \right\|$.
Then
\[
\Pr\left( \left\| \sum_{j=1}^n \left(\frac{1}{\theta}\psi\left( \theta Y_j \right) - \mathbb EY_j \right) \right\| \geq t\sqrt n \right)
\leq 2d \exp\left( -\theta t\sqrt{n} + \frac{\theta^2\sigma_n^2}{2} \right).
\]
In particular, setting $\theta=\frac{t\sqrt{n}}{\sigma_n^2}$, we get the ``sub-Gaussian'' tail bound $2d \exp\left( -\frac{t^2}{2\sigma_n^2/n} \right)$, for a given $t>0$.
Alternatively, setting $\theta=\frac{\sqrt{n}}{\sigma_n^2}$ (independent of $t$), we obtain sub-exponential concentration with tail
$2d \exp\left( -\frac{2t-1}{2\sigma_n^2/n} \right)$ for all $t>1/2$.
\end{theorem}
\begin{remark}
\label{remark:iid}
In the important special case when $Y_j, \ j=1,\ldots,n$ are i.i.d. copies of $Y$, we will often use the following equivalent form of of the bound: assume that $\sigma^2\geq\| \mathbb EY^2 \|$, then replacing $t$ by $\sigma \sqrt{s}$ and setting $\theta=\sqrt{\frac{s}{n}}\frac{1}{\sigma}$ implies that
\begin{align}
\label{th:iid}
&
\Pr\left( \left\| \frac{1}{n\theta}\sum_{j=1}^n \psi\left( \theta Y_j \right) - \mathbb EY \right\| \geq \sigma \sqrt{\frac{s}{n}} \right)
\leq 2d \exp\left( -s/2 \right).
\end{align}
\end{remark}
\end{comment}
\subsection{Robust covariance estimation}
Let $\widehat\Sigma$ be the estimator defined in \eqref{eq:rob-cov} with $\widehat\mu$ being the ``median-of-means'' estimator \eqref{eq:median_mean}.
Then $\widehat \Sigma$ admits the following performance guarantees:
\begin{lemma}
\label{lemma:main}
Assume that $\sigma \geq \sigma_0$, and set $\theta=\frac{1}{\sigma}\sqrt{\frac{\beta}{m}}$.
Moreover, let $\overline{d}:=\sigma_0^2/\|\Sigma_0\|^2$, and suppose that $m\geq C\overline{d}\beta$, where $C>0$ is an absolute constant. Then
\begin{equation}
\label{simple-bound}
\left\| \widehat\Sigma - \Sigma_0\right\| \leq 3\sigma\sqrt{\frac{\beta}{m}}
\end{equation}
with probability at least $1-5d e^{-\beta}$.
\end{lemma}
\begin{remark}
The quantity $\bar d$ is a measure of ``intrinsic dimension'' akin to the ``effective rank'' $r=\frac{\mbox{tr}\left(\Sigma_0\right)}{\|\Sigma_0\|}$; see Lemma \ref{effective-rank-bound} below for more details.
Moreover, note that the claim of Lemma \ref{lemma:main} holds for any $\sigma\geq\sigma_0$, rather than just for $\sigma=\sigma_0$; this ``degree of freedom'' allows the construction of adaptive estimators, as shown below.
\end{remark}
The statement above suggests that one has to know the value of (or a tight upper bound on) the ``matrix variance''
$\sigma_0^2$ in order to obtain a good estimator $\widehat\Sigma$.
More often than not, such information is unavailable.
To make the estimator completely data-dependent, we will use Lepski's method \cite{lepskii1992asymptotically}.
To this end, assume that $\sigma_{\mbox{\footnotesize{min}\,}}, \ \sigma_{\mbox{\footnotesize{max}\,}}$ are ``crude'' preliminary bounds such that
\[
\sigma_{\mbox{\footnotesize{min}\,}}\leq \sigma_0 \leq \sigma_{\mbox{\footnotesize{max}\,}}.
\]
Usually, $\sigma_{\mbox{\footnotesize{min}\,}}$ and $\sigma_{\mbox{\footnotesize{max}\,}}$ do not need to be precise, and can potentially differ from $\sigma_0$ by several orders of magnitude. Set
\[
\sigma_j := \sigma_{\mbox{\footnotesize{min}\,}} 2^j \text{ and }
\mathcal J=\left\{ j\in \mathbb Z: \ \sigma_{\mbox{\footnotesize{min}\,}} \leq \sigma_j < 2\sigma_{\mbox{\footnotesize{max}\,}} \right\}.
\]
Note that the cardinality of $\mathcal J$ satisfies $\mathrm{card}(\mathcal J)\leq 1+\log_2(\sigma_{\mbox{\footnotesize{max}\,}}/\sigma_{\mbox{\footnotesize{min}\,}})$.
For each $j\in \mathcal J$, define $\theta_j:=\theta(j,\beta) = \frac{1}{\sigma_j} \sqrt{\frac{\beta}{m}}$.
Define
\[
\widehat\Sigma_{m,j}=\frac{1}{m\theta_j}\sum_{i=1}^m \psi\left( \theta_j (X_i-\widehat\mu)(X_i-\widehat\mu)^T \right).
\]
Finally, set
\begin{align}
\label{eq:lepski}
j_\ast:=\min\left\{ j\in \mathcal J: \forall k>j \text{ s.t. } k\in \mathcal J,\ \left\| \widehat\Sigma_{m,k} - \widehat\Sigma_{m,j} \right\|\leq 6 \sigma_{k} \sqrt{\frac{\beta}{m}} \right\}
\end{align}
and $\widehat\Sigma_\ast:=\widehat\Sigma_{m,j_\ast}$.
Note that the estimator $\widehat\Sigma_\ast$ depends only on $X_1,\ldots,X_m$, as well as $\sigma_{\mbox{\footnotesize{min}\,}}, \ \sigma_{\mbox{\footnotesize{max}\,}}$.
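The selection rule \eqref{eq:lepski} admits a direct implementation. The following is a minimal NumPy sketch (our own naming), assuming the estimators $\widehat\Sigma_{m,j}$ have been precomputed on the grid:

```python
import numpy as np

def lepski_select(Sigma_hats, sigmas, beta, m):
    # Pick the smallest j such that every coarser-grid estimate (k > j) is
    # within 6 * sigma_k * sqrt(beta / m) of Sigma_hats[j] in operator norm.
    J = len(Sigma_hats)
    for j in range(J):
        if all(np.linalg.norm(Sigma_hats[k] - Sigma_hats[j], ord=2)
               <= 6.0 * sigmas[k] * np.sqrt(beta / m)
               for k in range(j + 1, J)):
            return j
    return J - 1
```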
Our main result is the following statement regarding the performance of the data-dependent estimator $\widehat\Sigma_\ast$:
\begin{theorem}
\label{th:lepski}
Suppose that $m\geq C\overline{d}\beta$. Then the following inequality holds with probability at least $1 - 5d \log_2\left(\frac{2\sigma_{\mbox{\footnotesize{max}\,}}}{\sigma_{\mbox{\footnotesize{min}\,}}}\right) e^{-\beta}$:
\[
\left\| \widehat\Sigma_\ast - \Sigma_0 \right\| \leq 18\sigma_0 \sqrt{\frac{\beta}{m}}.
\]
\end{theorem}
An immediate corollary of Theorem \ref{th:lepski} is the quantitative result for the performance of PCA based on the estimator
$\widehat\Sigma_\ast$.
Let $\mbox{{\rm Proj}}_k$ be the orthogonal projector onto the subspace corresponding to the $k$ largest positive eigenvalues $\lambda_1,\ldots,\lambda_k$ of $\Sigma_0$ (here, we assume for simplicity that all the eigenvalues are distinct), and let $\widehat{\mbox{{\rm Proj}}_k}$ be the orthogonal projector of the same rank as $\mbox{{\rm Proj}}_k$ corresponding to the $k$ largest eigenvalues of $\widehat\Sigma_\ast$.
The following bound follows from the Davis-Kahan perturbation theorem \cite{davis1970rotation}, more specifically, its version due to \cite[Theorem 3]{Zwald2006On-the-Converge00}.
\begin{corollary}
\label{cor:PCA}
Let $\Delta_k=\lambda_k - \lambda_{k+1}$, and assume that $\Delta_k\geq 72\sigma_0 \sqrt{\frac{\beta}{m}}$.
Then
\[
\big\|\widehat{\mbox{{\rm Proj}}_k}-\mbox{{\rm Proj}}_k\big\| \leq \frac{36}{\Delta_k}\sigma_0 \sqrt{\frac{\beta}{m}}
\]
with probability $\geq 1 - 5d \log_2\left(\frac{2\sigma_{\mbox{\footnotesize{max}\,}}}{\sigma_{\mbox{\footnotesize{min}\,}}}\right) e^{-\beta}$.
\end{corollary}
\begin{comment}
\begin{align}
\label{eq:gap}
&
\Delta_k > 44\sqrt{\frac{\left(\mathbb E\|X\|^4-\mbox{tr}(\Sigma^2)\right)\log(1.4/\delta)}{n}}.
\end{align}
Then
\[
\Pr\Bigg(\big\|\widehat{\mbox{{\rm Proj}}_m}-\mbox{{\rm Proj}}_m\big\|_{{\rm F}}\geq
\frac{22}{\Delta_m}\sqrt{\frac{\left(\mathbb E\|X\|^4-\mbox{tr}(\Sigma^2)\right)\log(1.4/\delta)}{n}}\Bigg)\leq \delta.
\]
\end{comment}
It is worth comparing the bound of Lemma \ref{lemma:main} and Theorem \ref{th:lepski} above to results of the paper by \cite{fan2016robust}, which constructs a covariance estimator $\widehat{\Sigma}_m'$ under the assumption that the random vector $X$ is centered, and $\sup_{\mathbf v\in \mathbb R^d: \|\mathbf{v}\|_2\leq1}\expect{|\langle\mathbf{v},X\rangle|^4}=B<\infty$.
More specifically, $\widehat{\Sigma}_m'$ satisfies the inequality
\begin{align}
\label{eq:fan}
\pr{\left\| \widehat{\Sigma}_m'-\Sigma_0 \right\| \geq \sqrt{\frac{C_1\beta Bd}{m}} } \leq de^{-\beta},
\end{align}
where $C_1>0$ is an absolute constant.
The main difference between \eqref{eq:fan} and the bounds of Lemma \ref{lemma:main} and Theorem \ref{th:lepski} is that the latter are expressed in terms of $\sigma_0^2$, while the former is in terms of $B$.
The following lemma demonstrates that our bounds are at least as good:
\begin{lemma}
\label{bound-on-sigma}
Suppose that $\mathbb E X = 0$ and $\sup_{\mathbf v\in \mathbb R^d:\|\mathbf{v}\|_2\leq 1}\expect{|\langle\mathbf{v},X\rangle|^4}=B<\infty$.
Then $Bd\geq\sigma_0^2$.
\end{lemma}
It follows from the above lemma that $\overline{d}=\sigma_0^2/\|\Sigma_0\|^2\lesssim d$.
Hence, by Theorem \ref{th:lepski}, the error rate of the estimator $\widehat \Sigma_\ast$ is bounded above by $\mathcal{O}(\sqrt{d/m})$ if $m\gtrsim d$.
It has been shown (for example, see \cite{lounici2014high}) that the minimax lower bound of covariance estimation is of order
$\Omega(\sqrt{d/m})$.
Hence, the bounds of \cite{fan2016robust}, as well as our results, yield the correct order of the error.
That being said, the ``intrinsic dimension'' $\bar d$ reflects the structure of the covariance matrix and can potentially be much smaller than $d$, as shown in the next section.
\subsection{Bounds in terms of intrinsic dimension}
In this section, we show that under a slightly stronger assumption on the fourth moment of the random vector $X$, the bound $\mathcal{O}(\sqrt{d/m})$ is suboptimal, while our estimator can achieve a much better rate in terms of the ``intrinsic dimension'' associated to the covariance matrix.
This makes our estimator useful in applications involving high-dimensional covariance estimation, such as PCA.
Assume the following uniform bound on the \textit{kurtosis} of the linear forms $\dotp{Z}{\mathbf{v}}$, where $Z:=X-\mu_0$:
\begin{equation}
\label{kurtosis}
\sup_{\|\mathbf{v}\|_2\leq1}\frac{\sqrt{\mathbb E \dotp{Z}{\mathbf{v}}^4}}{\mathbb E \dotp{Z}{\mathbf{v}}^2}=R<\infty.
\end{equation}
The intrinsic dimension of the covariance matrix $\Sigma_0$ can be measured by the \textit{effective rank} defined as
\[
\mathbf{r}(\Sigma_0)=\frac{\mbox{tr}(\Sigma_0)}{\|\Sigma_0\|}.
\]
Note that we always have $\mathbf{r}(\Sigma_0)\leq \text{rank}(\Sigma_0)\leq d$, and in some situations
$\mathbf{r}(\Sigma_0)\ll \text{rank}(\Sigma_0)$, for instance if the covariance matrix is ``approximately low-rank'', meaning that it has many small eigenvalues.
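The following minimal NumPy sketch (our own illustration) contrasts the effective rank with the usual rank for such an approximately low-rank matrix:

```python
import numpy as np

def effective_rank(Sigma):
    # r(Sigma) = tr(Sigma) / ||Sigma||, with ||.|| the operator norm
    return np.trace(Sigma) / np.linalg.norm(Sigma, ord=2)

# One large eigenvalue and many small ones: full rank, tiny effective rank.
Sigma = np.diag(np.concatenate(([1.0], np.full(99, 0.01))))
r_eff = effective_rank(Sigma)   # ~ 1.99, while rank(Sigma) = 100
```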
The constant $\sigma_0^2$ is closely related to the effective rank as is shown in the following lemma (the proof of which is included in the supplementary material):
\begin{lemma}
\label{effective-rank-bound}
Suppose that \eqref{kurtosis} holds.
Then,
\[
\mathbf{r}(\Sigma_0)\|\Sigma_0\|^2\leq \sigma_0^2\leq R^2\mathbf{r}(\Sigma_0)\|\Sigma_0\|^2.
\]
\end{lemma}
As a result, we have $\mathbf{r}(\Sigma_0)\leq\overline{d}\leq R^2\mathbf{r}(\Sigma_0)$.
The following corollary immediately follows from Theorem \ref{th:lepski} and Lemma \ref{effective-rank-bound}:
\begin{corollary}
Suppose that $m\geq C\beta \mathbf{r}(\Sigma_0)$ for an absolute constant $C>0$ and that \eqref{kurtosis} holds.
Then
\[
\left\| \widehat\Sigma_\ast - \Sigma_0 \right\| \leq 18R\|\Sigma_0\| \sqrt{\frac{\mathbf{r}(\Sigma_0)\beta}{m}}
\]
with probability at least
$1 - 5d \log_2\left(\frac{2\sigma_{\mbox{\footnotesize{max}\,}}}{\sigma_{\mbox{\footnotesize{min}\,}}}\right) e^{-\beta}$.
\end{corollary}
\section{Applications: low-rank covariance estimation}
In many data sets encountered in modern applications (for instance, gene expression profiles \cite{saal2007poor}), the dimension of the observations, and hence of the corresponding covariance matrix, is larger than the available sample size.
However, it is often possible, and natural, to assume that the unknown matrix possesses special structure, such as low rank, thus reducing the ``effective dimension'' of the problem.
The goal of this section is to present an estimator of the covariance matrix that is ``adaptive'' to a possible low-rank structure; such estimators are well known and have previously been studied for bounded and sub-Gaussian observations \cite{lounici2014high}.
We extend these results to the case of heavy-tailed observations; in particular, we show that the estimator obtained via soft-thresholding applied to the eigenvalues of $\widehat\Sigma_\ast$ admits optimal guarantees in the Frobenius (as well as operator) norm.
Let $\widehat\Sigma_\ast$ be the estimator defined in the previous section, see equation \eqref{eq:lepski}, and set
\begin{align}
&
\widehat \Sigma_\ast^{\tau}=\argmin_{A\in \mathbb R^{d\times d}}
\left[ \left\| A - \widehat \Sigma_\ast \right\|^2_{\mathrm{F}} +\tau \left\| A \right\|_1\right],
\end{align}
where $\tau>0$ controls the amount of penalty.
It is well-known (e.g., see the proof of Theorem 1 in \cite{lounici2014high}) that
$\widehat \Sigma_{\ast}^\tau$ can be written explicitly as
\[
\widehat \Sigma_{\ast}^\tau = \sum_{i=1}^d \max\left(\lambda_i\left(\widehat \Sigma_{\ast}\right) -\tau/2, 0\right) v_i(\widehat \Sigma_{\ast}) v_i(\widehat \Sigma_{\ast})^T,
\]
where $\lambda_i(\widehat \Sigma_{\ast})$ and $v_i(\widehat \Sigma_{\ast})$ are the eigenvalues and corresponding eigenvectors of $\widehat \Sigma_{\ast}$.
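This closed-form soft-thresholding can be sketched as follows (a minimal NumPy illustration, not the authors' code):

```python
import numpy as np

def soft_threshold_cov(Sigma_hat, tau):
    # Closed-form minimizer of ||A - Sigma_hat||_F^2 + tau * ||A||_1:
    # soft-threshold the eigenvalues at tau / 2, keeping the eigenvectors.
    lam, U = np.linalg.eigh(Sigma_hat)
    lam_shrunk = np.maximum(lam - tau / 2.0, 0.0)
    return (U * lam_shrunk) @ U.T
```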
We are ready to state the main result of this section.
\begin{theorem}
\label{th:covariance}
For any
$
\tau \geq 36 \sigma_0 \sqrt{\frac{\beta}{m}},
$
\begin{align}
&\label{eq:ex70}
\left\| \widehat \Sigma_\ast^\tau - \Sigma_0 \right\|_{\mathrm{F}}^2\leq \inf_{A\in \mathbb R^{d\times d}} \left[ \left\| A - \Sigma_0 \right\|_{\mathrm{F}}^2 + \frac{(1+\sqrt{2})^2}{8}\tau^2\mathrm{rank}(A) \right].
\end{align}
with probability $\geq 1 - 5d \log_2\left(\frac{2\sigma_{\mbox{\footnotesize{max}\,}}}{\sigma_{\mbox{\footnotesize{min}\,}}}\right) e^{-\beta}$.
\end{theorem}
In particular, if $\mathrm{\,rank}(\Sigma_0) = r$ and $\tau = 36 \sigma_0 \sqrt{\frac{\beta}{m}}$, we obtain that
\[
\left\| \widehat \Sigma_\ast^\tau - \Sigma_0 \right\|_{\mathrm{F}}^2 \leq 162\,\sigma_0^2 \left(1+\sqrt{2}\right)^2 \frac{\beta r}{m}
\]
with probability $\geq 1 - 5d \log_2\left(\frac{2\sigma_{\mbox{\footnotesize{max}\,}}}{\sigma_{\mbox{\footnotesize{min}\,}}}\right) e^{-\beta}$.
\section{Proofs}
\label{sec:proofs}
\subsection{Proof of Lemma \ref{lemma:main}}
\label{ssec:mainproof}
The result is a simple corollary of the following statement.
\begin{lemma}
\label{main:lemma-2}
Set $\theta=\frac{1}{\sigma}\sqrt{\frac{\beta}{m}}$, where $\sigma \geq \sigma_0$ and $m\geq\beta$.
Let $\overline{d}:=\sigma_0^2/\|\Sigma_0\|^2$.
Then, with probability at least $1-5de^{-\beta}$,
\begin{multline*}
\left\| \widehat\Sigma - \Sigma_0\right\|
\leq 2\sigma\sqrt{\frac{\beta}{m}} \\
+C'\|\Sigma_0\| \left( \sqrt{\frac{\overline{d}\sigma}{\|\Sigma_0\|}}\left(\frac{\beta}{m}\right)^{\frac34} + \frac{\sqrt{\overline{d}}\sigma}{\|\Sigma_0\|}\frac{\beta}{m}
+ \sqrt{\frac{\overline{d}\sigma}{\|\Sigma_0\|}}\left(\frac{\beta}{m}\right)^{\frac54}
+\overline{d}
\left(\frac{\beta}{m}\right)^{\frac32} + \frac{\overline{d}\beta^2}{m^2} + \overline{d}^{\frac54}\left(\frac{\beta}{m}\right)^{\frac94} \right),
\end{multline*}
where $C'>1$ is an absolute constant.
\end{lemma}
Now, by Corollary \ref{FKG-bound} in the supplement, it follows that
$\overline{d} = \sigma_0^2/\|\Sigma_0\|^2\geq\mbox{tr}(\Sigma_0)/\|\Sigma_0\|\geq1$. Thus,
if the sample size satisfies $m\geq(6C')^4\overline{d}\beta$, then
$\overline{d}\beta/m\leq1/(6C')^4<1$, and after some algebraic manipulation
we obtain
\begin{equation}\label{need-steps}
\left\| \widehat\Sigma - \Sigma_0\right\|
\leq 2\sigma\sqrt{\frac{\beta}{m}} + \sigma\sqrt{\frac{\beta}{m}}=3\sigma\sqrt{\frac{\beta}{m}}.
\end{equation}
For completeness, a detailed computation is given in the supplement. This
finishes the proof.
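As an illustration of these manipulations (a sketch only; the complete computation is in the supplement), the first remainder term of Lemma \ref{main:lemma-2} can be absorbed as follows, using $\sigma\geq\sigma_0$, the identity $\overline{d}\|\Sigma_0\|/\sigma_0=\sqrt{\overline{d}}$, and $\overline{d}\beta/m\leq(6C')^{-4}$:
\[
C'\|\Sigma_0\| \sqrt{\frac{\overline{d}\sigma}{\|\Sigma_0\|}}\left(\frac{\beta}{m}\right)^{\frac34}
= C'\sigma\sqrt{\frac{\beta}{m}}\,\sqrt{\frac{\overline{d}\|\Sigma_0\|}{\sigma}}\left(\frac{\beta}{m}\right)^{\frac14}
\leq C'\sigma\sqrt{\frac{\beta}{m}}\left(\frac{\overline{d}\beta}{m}\right)^{\frac14}
\leq \frac{\sigma}{6}\sqrt{\frac{\beta}{m}}.
\]
The remaining five terms admit similar bounds, and summing them yields the extra factor $\sigma\sqrt{\beta/m}$ in \eqref{need-steps}.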
\subsection{Proof of Lemma \ref{main:lemma-2}}
Let $B_\beta = 11\sqrt{2\mbox{tr}(\Sigma_0)\beta/m}$ be the error bound of the robust mean estimator $\widehat\mu$ defined in \eqref{eq:median_mean}.
Let $Z_i = X_i - \mu_0$,
$\Sigma_\mu = \expect{(Z_i-\mu)(Z_i-\mu)^T}$ for all $i=1,\ldots,m$, and
\[
\hat{\Sigma}_\mu = \frac{1}{m\theta}\sum_{i=1}^m \frac{(X_i - \mu)(X_i - \mu)^T}{\left\| X_i - \mu\right\|_2^2} \psi\left( \theta \left\| X_i - \mu \right\|_2^2 \right),
\]
for any $\|\mu\|_2\leq B_\beta$.
We begin by noting that the error can be bounded by the supremum of an empirical process indexed by $\mu$, i.e.
\begin{equation}\label{triangle-inequality}
\left\| \hat{\Sigma} - \Sigma_0 \right\|
\leq \sup_{\|\mu\|_2\leq B_\beta}\left\| \hat{\Sigma}_\mu - \Sigma_0 \right\|
\leq \sup_{\|\mu\|_2\leq B_\beta}\left\| \hat{\Sigma}_\mu - \Sigma_\mu \right\|
+ \left\| \Sigma_\mu - \Sigma_0 \right\|
\end{equation}
with probability at least $1-e^{-\beta}$.
We first estimate the second term $\left\| \Sigma_\mu - \Sigma_0 \right\|$.
For any $\|\mu\|_2\leq B_\beta$,
\begin{multline*}
\left\| \Sigma_\mu - \Sigma_0 \right\|
= \left\| \expect{(Z_i-\mu)(Z_i-\mu)^T - Z_iZ_i^T} \right\|
= \sup_{\mathbf{v}\in \mathbb R^d:\|\mathbf{v}\|_2 \leq 1} \left| \expect{\dotp{Z_i-\mu}{\mathbf{v}}^2 - \dotp{Z_i}{\mathbf{v}}^2 } \right| \\
= \sup_{\mathbf{v}\in \mathbb R^d:\|\mathbf{v}\|_2 \leq 1}(\mu^T\mathbf{v})^2 = \|\mu\|_2^2 \leq B_\beta^2 =242 \frac{\mbox{tr}(\Sigma_0)\beta}{m},
\end{multline*}
with probability at least $1-e^{-\beta}$.
It follows from Corollary \ref{FKG-bound} in the supplement that with the same probability
\begin{equation}
\label{mean-bound}
\left\| \Sigma_\mu - \Sigma_0 \right\| \leq 242\frac{\sigma_0^2\beta}{\|\Sigma_0\|m}
\leq 242\frac{\sigma^2\beta}{\|\Sigma_0\|m} = 242\|\Sigma_0\|\frac{\overline{d}\beta}{m}.
\end{equation}
Our main task is then to bound the first term in \eqref{triangle-inequality}.
To this end, we rewrite it as a double supremum of an empirical process:
\[
\sup_{\|\mu\|_2\leq B_\beta}\left\| \hat{\Sigma}_\mu - \Sigma_\mu \right\|
= \sup_{\|\mu\|_2\leq B_\beta,\|\mathbf{v}\|_2\leq1} \left|\mathbf{v}^T\left(\hat{\Sigma}_\mu - \Sigma_\mu\right)\mathbf{v}\right|
\]
It remains to estimate the supremum above.
\begin{lemma}
\label{key-lemma}
Set $\theta=\frac{1}{\sigma}\sqrt{\frac{\beta}{m}}$, where $\sigma \geq \sigma_0$ and $m\geq\beta$.
Let $\overline{d}:=\sigma_0^2/\|\Sigma_0\|^2$.
Then, with probability at least $1-4de^{-\beta}$,
\begin{multline*}
\sup_{\|\mu\|_2\leq B_\beta,\|\mathbf{v}\|_2\leq1} \left|\mathbf{v}^T\left(\hat{\Sigma}_\mu - \Sigma_\mu\right)\mathbf{v}\right|
\leq 2\sigma\sqrt{\frac{\beta}{m}} \\
+C''\|\Sigma_0\| \left( \sqrt{\frac{\overline{d}\sigma}{\|\Sigma_0\|}}\left(\frac{\beta}{m}\right)^{\frac34} + \frac{\sqrt{\overline{d}}\sigma}{\|\Sigma_0\|}\frac{\beta}{m}
+ \sqrt{\frac{\overline{d}\sigma}{\|\Sigma_0\|}}\left(\frac{\beta}{m}\right)^{\frac54}
+\overline{d}
\left(\frac{\beta}{m}\right)^{\frac32} + \frac{\overline{d}\beta^2}{m^2} + \overline{d}^{\frac54}\left(\frac{\beta}{m}\right)^{\frac94} \right),
\end{multline*}
where $C''>1$ is an absolute constant.
\end{lemma}
Note that $\sigma\geq\sigma_0$ by definition; thus, $\overline{d}\leq\sigma^2/\|\Sigma_0\|^2$.
Combining the above lemma with \eqref{triangle-inequality} and \eqref{mean-bound} finishes the proof.
\subsection{Proof of Theorem \ref{th:lepski}}
\label{ssec:lepskiproof}
Define $\bar j:=\min\left\{ j\in \mathcal J: \ \sigma_j \geq \sigma_0\right\}$, and note that $\sigma_{\bar j}\leq 2\sigma_0$.
We will demonstrate that $j_\ast \leq \bar j$ with high probability.
Observe that
\begin{align*}
\Pr\left( j_\ast > \bar j\right)&\leq \Pr\left( \bigcup_{k\in \mathcal J: k>\bar j} \left\{ \left\| \widehat \Sigma_{m,k} - \Sigma_{m,\bar j} \right\| > 6\sigma_k \sqrt{\frac{\beta}{n}} \right\} \right)\\
&
\leq \Pr\left( \left\| \widehat\Sigma_{m,\bar j} - \Sigma_0 \right\| > 3\sigma_{\bar j} \sqrt{\frac{\beta}{m}} \right) +
\sum_{k\in \mathcal J: \ k>\bar j}\Pr\left( \left\| \widehat\Sigma_{m,k} - \Sigma_0 \right\| > 3\sigma_k \sqrt{\frac{\beta}{m}} \right) \\
&
\leq 5de^{-\beta} + 5d \log_2\left(\frac{\sigma_{\mbox{\footnotesize{max}\,}}}{\sigma_{\mbox{\footnotesize{min}\,}}}\right) e^{-\beta},
\end{align*}
where we applied \eqref{simple-bound} to estimate each of the probabilities in the sum, using the assumptions that the number of samples satisfies $m\geq C\overline{d}\beta$ and that $\sigma_k\geq\sigma_{\bar j}\geq\sigma_0$.
It is now easy to see that the event
\[
\mathcal B = \bigcap_{k\in \mathcal J: k\geq \bar j}
\left\{ \left\| \widehat\Sigma_{m,k} - \Sigma_0 \right\|\leq 3\sigma_k\sqrt{\frac{\beta}{m}} \right\}
\]
of probability $\geq 1 - 5d \log_2\left(\frac{2\sigma_{\mbox{\footnotesize{max}\,}}}{\sigma_{\mbox{\footnotesize{min}\,}}}\right) e^{-\beta}$ is contained in
$\mathcal E=\left\{ j_\ast\leq \bar j \right\}$.
Hence, on $\mathcal B$
\begin{align*}
\left\| \widehat\Sigma_\ast - \Sigma_0 \right\|&
\leq \| \widehat\Sigma_\ast - \widehat\Sigma_{m,\bar j} \| + \| \widehat\Sigma_{m,\bar j} - \Sigma_0 \| \leq
6 \sigma_{\bar j}\sqrt{\frac{\beta}{m}} + 3\sigma_{\bar j}\sqrt{\frac{\beta}{m}} \\
&\leq 12\sigma_0\sqrt{\frac{\beta}{m}} + 6\sigma_0\sqrt{\frac{\beta}{m}} = 18\sigma_0 \sqrt{\frac{\beta}{m}},
\end{align*}
and the claim follows.
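The comparisons in this argument follow a Lepski-type selection rule. As a purely illustrative sketch (a toy scalar analogue with made-up numbers, not the matrix estimator of the theorem), the rule picks the smallest index whose estimate agrees with every later estimate within the prescribed radius:

```python
import math

def lepski_select(estimates, sigmas, beta, m, c=6.0):
    """Toy 1-D Lepski rule: smallest index j such that
    |est_k - est_j| <= c * sigma_k * sqrt(beta/m) for all k > j."""
    J = len(estimates)
    for j in range(J):
        ok = all(
            abs(estimates[k] - estimates[j]) <= c * sigmas[k] * math.sqrt(beta / m)
            for k in range(j + 1, J)
        )
        if ok:
            return j
    return J - 1

# Geometric grid of candidate noise levels, as in the theorem.
sigmas = [0.25, 0.5, 1.0, 2.0, 4.0]
beta, m = 4.0, 100.0
# Estimates stabilise once sigma_j exceeds the "true" level:
estimates = [3.0, 1.2, 1.0, 1.02, 0.99]
j_star = lepski_select(estimates, sigmas, beta, m)  # -> 1
```

Index $0$ is rejected because its estimate disagrees with the later, stabilised estimates; index $1$ is already admissible.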
\subsection{Proof of Theorem \ref{th:covariance}}
The proof is based on the following lemma:
\begin{lemma}
Inequality (\ref{eq:ex70}) holds on the event $\mathcal E=\left\{ \tau\geq 2\left\| \widehat \Sigma_{\ast} - \Sigma_0 \right\| \right\}$.
\end{lemma}
To verify this statement, it is enough to repeat the steps of the proof of Theorem 1 in \cite{lounici2014high}, replacing each occurrence of the sample covariance matrix by its ``robust analogue'' $\widehat \Sigma_\ast$. \\
It then follows from Theorem \ref{th:lepski} that $\Pr(\mathcal E)\geq 1 - 5d \log_2\left(\frac{2\sigma_{\mbox{\footnotesize{max}\,}}}{\sigma_{\mbox{\footnotesize{min}\,}}}\right) e^{-\beta}$ whenever $\tau \geq 36 \sigma_0 \sqrt{\frac{\beta}{m}}$.
\bibliographystyle{imsart-number}
% arXiv:1106.3109 -- "Stretched exponential behavior and random walks on diluted hypercubic lattices"
\section*{Introduction}
In 1854 R. Kohlrausch used a phenomenological expression
\begin{equation}
\label{kohl}
q_{K}(t)=\exp(-(t/\tau)^\beta)
\end{equation}
to parametrize the non-exponential decay of the electric polarization
of Leyden jars (primitive capacitors)\cite{RK}; his son F. Kohlrausch
later used the same expression to analyse creep in galvanometer
suspensions \cite{FK}. A century later, in 1951 Weibull introduced
\cite{weibull} the closely related Weibull function; this survival
probability function \cite{eliazar} which is widely used in the
engineering literature is strictly of the Kohlrausch form,
Eqn. (\ref{kohl}). In 1970 Williams and Watts re-discovered the
Kohlrausch function in the context of dielectric
relaxation\cite{WW}. Under the name of ``stretched exponential''
\cite{chamberlin} the KWW (Kohlrausch-Williams-Watts) function has
become ubiquitous in phenomenological analyses of non-exponential
relaxation data, experimental or numerical. In particular the KWW
form was used by Ogielski in a phenomenological fit to the decay of
the autocorrelation function at equilibrium for a $3d$ Ising spin
glass model \cite{ogi1985}.
Many arguments have been given as to why under certain assumptions,
specific systems should show KWW relaxation
\cite{phil,havl,rasa,dons1,dons2,gras,gotz,ian1,ian2,ian3}, but there
have always been lingering suspicions that for most cases the KWW
expression is nothing more than a convenient fitting function of no
fundamental significance.
It was conjectured \cite{ian1} that KWW relaxation is the signature of
a complex configuration space. Thus from the argument which follows it
was suggested that random walks on a diluted hypercube (a hypercube
with a fraction $p$ of vertices occupied at random) near the critical
concentration for percolation $p_{c}$ \cite{erdos1979} would lead to
an autocorrelation function decay of the form $q(t) \sim
\exp[-(t/\tau)^{\beta}]$, with a specific value of the exponent,
$\beta = 1/3$.
For random walks at percolation threshold in a randomly occupied
Euclidean (flat) space of dimension $d$ such as $\mathbb{Z}^{d}$, the
familiar Fickian diffusion law $\langle R^2\rangle \sim t$ is replaced
by a sub-linear diffusion $\langle R^2\rangle \sim t^{\beta_{d}}$,
with $\beta_{d} \equiv 1/3$ for $d\geq 6$ \cite{alexander:82}. Random
walks on the surface of a full [hyper]sphere $\mathbb{S}_{d-1}$ in any
dimension $d$ are characterized by the generic law $\langle
\cos(\theta)\rangle = \exp(-(t/\tau))$ where $\theta$ denotes the
generalized angular displacement of the walker
\cite{debye,caillol}. It was argued \cite{ian1} that random walks on
percolation clusters at threshold inscribed on [hyper]spheres would be
characterized by relaxation of the form $\langle \cos(\theta)\rangle =
\exp(-(t/\tau)^{\beta_{d}})$ with the same exponents $\beta_{d}$ as in
the corresponding Euclidean space. This was demonstrated numerically
for $d = 3$ to $8$ \cite{jund}. A hypercube being topologically
equivalent to a hypersphere, for random walks on a diluted hypercube
at threshold one then expects stretched exponential relaxation with
exponent $\beta = 1/3$.
The diluted hypercube at threshold can alternatively be considered as
a specific example of a sparse graph. Remarkably, analytic expressions
for diffusion on general sparse graphs \cite{bray1988,samukhin2008} derived
from a quite different line of argument also lead to stretched
exponential relaxation expressions with the same specific value
$\beta=1/3$ for the exponent.
Here we present numerical data for random walks on the diluted
hypercube at threshold up to dimension $N=28$ which are consistent
with these conclusions.
We argue that the KWW relaxation observed phenomenologically in
numerous complex systems just above their respective critical
temperatures is not an artifact, but is the signature of a universal
form of coarse grained configuration space morphology which precedes a
glass transition.
\section*{Laplace transforms and random networks}
Quite generally, any relaxation function $q(t)$ can equivalently be
characterized by a relaxation mode density (or eigenvalue density)
function $\rho(s)$, of which $q(t)$ is the Laplace transform:
\begin{equation}
\label{eq:relaxationfunction}
q(t) \equiv \int_0^\infty \rho(s)e^{-s t}ds
\end{equation}
with the normalization condition
\begin{equation}
\int_0^\infty \rho(s)ds =1
\end{equation}
In model systems it can be possible to establish analytically or
numerically the distribution $\rho(s)$ which can then be
inverted to obtain $q(t)$. The inverse Laplace transform of a
numerical or experimental $q_{K}(t)$ to obtain $\rho(s)$ is much more
difficult unless $q(t)$ is known to very high precision over a wide
range of $t$. This is an ill-conditioned problem as different
$\rho(s)$ distributions can lead to almost indistinguishable $q(t)$.
Pollard \cite{pollard} (see Berberan-Santos \cite{berberan2008})
provided an exact inversion of the pure stretched exponential
relaxation function $q_K(t)$, Eqn.~(\ref{kohl}):
\begin{equation}
\label{eq:laplace}
\rho_{K,\beta}(s)=\frac{\tau}{\pi}\int_0^\infty \exp\left[ -u^\beta \cos\left(
\frac{\beta \pi}{2}
\right)
\right] \cos\left[u^\beta\sin\left(
\frac{\beta \pi}{2}
\right)
\right]
\cos(s\tau u)
\;du
\end{equation}
For $\beta < 1$, $\rho_{K,\beta}(s)$ can be expressed in terms
of elementary functions only for $\beta = 1/2$ \cite{pollard}; in that
case
\begin{equation}
\label{eq:half}
\rho_{K,1/2}(s)= \frac{\tau}{2\sqrt{\pi}\,(s\tau)^{3/2}}\,\exp\left(-\frac{1}{4s\tau}\right)
\end{equation}
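Eqn.~(\ref{eq:half}) can be checked numerically: integrating $\rho_{K,1/2}(s)e^{-st}$ over $s$ must reproduce $\exp(-\sqrt{t/\tau})$. A short sketch of this check (our own illustration, $\tau=1$, trapezoidal rule on a logarithmic grid):

```python
import numpy as np

tau = 1.0

def rho_half(s):
    # Pollard's inversion of exp(-sqrt(t/tau)), i.e. beta = 1/2.
    return tau / (2.0 * np.sqrt(np.pi) * (s * tau) ** 1.5) \
        * np.exp(-1.0 / (4.0 * s * tau))

# Trapezoidal integration of rho(s) exp(-s t) ds on a log grid (s = e^u).
u = np.linspace(-20.0, 10.0, 6000)
s = np.exp(u)
for t in (0.5, 1.0, 4.0):
    f = rho_half(s) * np.exp(-s * t) * s  # extra s is the Jacobian ds = s du
    q = float(np.sum(0.5 * (f[1:] + f[:-1])) * (u[1] - u[0]))
    # recovered q(t) matches the stretched exponential exp(-sqrt(t/tau))
    assert abs(q - np.exp(-np.sqrt(t / tau))) < 1e-4
```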
To a good approximation, for general $\beta$ the large $s$ (short
time) limit takes the form $\rho_{K,\beta}(s) \sim s^{-(1+\beta)}$
and the small $s$ (long time) limit the form $\rho_{K,\beta}(s) \sim
\exp[-c\,s^{-\beta/(1-\beta)}]$.
It should be kept in mind that at short times observed relaxation
functions usually deviate from the ``asymptotic'' form. Also at very
long times for finite sized systems the relaxation is controlled by
the smallest non-zero value of $s$, $s_{1}$. For time $t >
s_{1}^{-1}$ the relaxation will tend to a pure exponential, $q(t)
\sim \exp[-t s_{1}]$, but for large systems this condition corresponds
to extremely long times and we will not consider it. What we are
interested in is to establish the form of the relaxation in the regime
where the mode distribution is no longer affected by short time
effects and where $\rho(s)$ can be considered continuous.
\section*{Random networks}
Random walks on the diluted $N$-simplex or hypertetrahedron, which is
an Erd\"os-R\'enyi graph having dead ends and vertices with two
connections, were studied theoretically by Bray and Rodgers
\cite{bray1988} using Replica theory. They showed that
in this model the return function $p_{ret}(t)$, the probability that
the walker will have returned to the origin after $t$ steps, behaves
like a stretched exponential with exponent $1/3$.
Samukhin {\it et al} \cite{samukhin2008} have made analytic studies of
random walks and relaxation processes on uncorrelated Random
Networks. They considered a stochastic process governed by the
Laplacian operator occurring on a random graph with $N^{*}$ nodes,
taking the limit as $N^{*} \to \infty$. They find that the determining
parameter in this problem is the minimum degree $q_{m}$ of vertices
(i.e. the minimum number of neighbors to any given vertex). For $q_m =
2$, meaning that the network is ``sparse'', the graph tends to a
random Bethe lattice in which almost all finite subgraphs are trees,
i.e., they contain almost no closed loops. In the present context the
essential statement of Samukhin {\it et al} \cite{samukhin2008} is
that when $q_m = 2$ the mode density function $\rho_{S}(s)$ for
this very general model can be approximated by
\begin{equation}
\label{eq:laplacian}
\rho_{S}(s) = s^{-4/3}\exp(-a/\sqrt{s})
\end{equation}
where
\begin{equation}
\label{eq:defa}
a=\sqrt{\frac{4\tau^{-1}}{3}}
\end{equation}
with a similar expression for $q_{m} = 1$ (graphs with dead
ends). Then for a graph with $N^{*}$ vertices the asymptotics at $t >
\ln N^{*}$ for the probability of return to the starting point at time
$t$ during a random walk on the network (the ``autocorrelator''
\cite{samukhin2008}) will be
\begin{equation}
\label{pretnet}
p_{ret,S}(t) \sim t^{\eta}\exp[-3(a/2)^{2/3}t^{1/3}],
\end{equation}
a stretched exponential having exponent $1/3$, multiplied by a mildly
time dependent prefactor ($\eta$ is small). This limit should be
observable if the network size satisfies $(\ln N^{*})^{2/3} \gg 1$.
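The exponent in (\ref{pretnet}) comes from a saddle-point evaluation of the Laplace transform of (\ref{eq:laplacian}): minimizing $a/\sqrt{s}+st$ over $s$ gives $3(a/2)^{2/3}t^{1/3}$. A numerical sketch (our own, with $a=1$) confirms that the slope of $\ln p_{ret}$ against $t^{1/3}$ approaches $-3(a/2)^{2/3}$:

```python
import numpy as np

a = 1.0

def p_ret(t, n=20000):
    # Laplace transform of rho_S(s) = s^(-4/3) exp(-a/sqrt(s)), log grid.
    u = np.linspace(-25.0, 5.0, n)
    s = np.exp(u)
    f = s ** (-4.0 / 3.0) * np.exp(-a / np.sqrt(s) - s * t) * s
    return float(np.sum(0.5 * (f[1:] + f[:-1])) * (u[1] - u[0]))

t1, t2 = 1.0e3, 1.0e5
slope = (np.log(p_ret(t2)) - np.log(p_ret(t1))) / (t2 ** (1/3) - t1 ** (1/3))
target = -3.0 * (a / 2.0) ** (2.0 / 3.0)
# finite-t slope agrees with the saddle-point prediction to a few percent
assert abs(slope / target - 1.0) < 0.05
```

The small residual discrepancy is the mildly time dependent prefactor $t^{\eta}$ of (\ref{pretnet}).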
\section*{Hypercube model}
We have already addressed the hypercube problem numerically through
Monte Carlo techniques \cite{lemke1996} and through the explicit
solution of Master equations \cite{lemke2000,almeida2000}. In this
paper we extend these results by investigating the time evolution for
the autocorrelation function $q(N,t)$, the return probability
$p_{ret}(N,t)$, and the eigenvalue spectrum $\rho(N,s)$ for
diffusion on diluted hypercubes of dimension $N$ near the critical
occupation probability $p_c(N)$, for $N$ up to $28$.
Consider a hypercube (or n-cube) in [high] dimension $N$,
$\mathbb{Q}_{N}$, with a fraction $p$ of its $2^N$ vertices occupied
at random. It is well established \cite{erdos1979,bollobas,borgs} that
there is a critical threshold at $p_{c}(N) \sim 1/N$. For $p > p_c(N)$
the occupied vertices having one or more occupied vertices as
neighbors make up a giant spanning cluster; for $p<p_c$ there exist
only small clusters (each with less than $N$ elements). By analogy
with the equivalent situation in randomly occupied Euclidean space we
will refer to $p_{c}$ as the ``percolation'' threshold.
Gaunt and Brak \cite{gaunt1984} predict that the dependency of the
critical site percolation concentration $p_c$ on a hypercubic lattice
of dimension $d$, $\mathbb{Z}^d$, or on a hypercube of dimension $N$,
$\mathbb{Q}_N$, is given to order $4$ by:
\begin{equation}
p_c(\sigma) =\sigma+\frac{3}{2}\sigma^2+\frac{15}{14}\sigma^3+\frac{83}{4}\sigma^4\ldots
\label{eq:pc}
\end{equation}
where $\sigma(d)=1/(2d -1)$ for the hypercubic lattice and
$\sigma(N)=1/(N-1)$ for the hypercube \cite{gaunt1976}. Although the
terms in this expression are expected to be exact, the demonstration
is not entirely rigorous \cite{gaunt1984}, and the series is obviously
truncated. Grassberger \cite{grassberger2003} tested the equation
(\ref{eq:pc}) through large scale Monte Carlo simulations on
$\mathbb{Z}^d$ and verified that for $d > 10$ it represents the
numerically determined $p_{c}(d)$ to within a small correction term.
We will work with samples having vertex concentrations $p(N)$ equal to
the values $p_c(N)$ given by the truncated series equation
(\ref{eq:pc}). For different samples $k$ the individual critical values
$p_c(k)$ will in fact be distributed about the average value
\cite{borgs}.
For $p > p_c(N)$ we can define a random walk along edges on the giant
cluster. Start at any vertex $i$ on the giant cluster. Choose at
random a vertex $j$ on the hypercube, near neighbor to $i$. If the
vertex $j$ is also on the giant cluster and so accessible, move to
$j$; otherwise the walker remains one time step longer at the vertex
$i$. This evolution rule is chosen to mimic Monte Carlo simulations
using Metropolis dynamics.
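The walk rule is straightforward to implement. The sketch below (our own illustration with a small dimension and a fixed seed, not the production code of this study) dilutes $\mathbb{Q}_{10}$, extracts the largest cluster by breadth-first search, and runs the lazy walk in which blocked moves leave the walker in place:

```python
import random
from collections import deque

N = 10
p = 0.15  # above the threshold p_c ~ 1/N, so a sizeable cluster exists
rng = random.Random(1)

# Dilute the hypercube Q_N: keep each of the 2^N vertices with probability p.
occupied = {v for v in range(1 << N) if rng.random() < p}

def neighbors(v):
    # Hypercube neighbors differ in exactly one bit.
    return [v ^ (1 << k) for k in range(N)]

def cluster_of(start):
    # Breadth-first search restricted to occupied vertices.
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in neighbors(v):
            if w in occupied and w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

# Identify the largest ("giant") cluster.
remaining, best = set(occupied), set()
while remaining:
    c = cluster_of(next(iter(remaining)))
    remaining -= c
    if len(c) > len(best):
        best = c
assert len(best) >= 2

# Lazy walk: pick one of the N directions at random; a blocked move
# leaves the walker in place for that time step (Metropolis-like rule).
v = next(iter(best))
for _ in range(1000):
    w = v ^ (1 << rng.randrange(N))
    if w in best:
        v = w
assert v in best  # the walker never leaves the giant cluster
```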
We can compare the autocorrelation function $q(N,t)$ obtained from
this procedure, ($q(N,t)$ is defined in Eq. (\ref{eq:correlation})
below), to the time dependent autocorrelation $\langle
S_i(t).S_i(0)\rangle$ measured in thermodynamic models for systems of
Ising spins $S_{i}$ \cite{ogi1985} and even to experimental
magnetization decay results. From a theoretical point of view it is
often more convenient to investigate the ``return probability''
$p_{ret}(t)$ that is basically the probability of finding the walker
at the origin of the system after $t$ steps ($p_{ret}(N,t)$ is defined
in Eq. (\ref{eq:pret}) below). For any network $p_{ret}(t)$ can be
defined, while $q(t)$ can be defined conveniently only on models such
as the hypercube which have a suitable metric.
The numerical data near criticality show that the long time
relaxations of the autocorrelation parameter $q(N,t)$ and of the
return probability $p_{ret}(N,t)$ are consistent with stretched
exponentials having an exponent $\beta = 1/3$ over many orders of
magnitude in time.
\section*{Algorithm}
The time evolution of the entire probability distribution for the
walker after $t$ steps, $\vec{\Pi}(t)$, can be described by a Master
Equation. At $t=0$ the walker is localized on a single vertex $i_o$ on
the hypercube; the probability distribution then diffuses over the
system at each time step following the equation:
\begin{equation}
\label{eq:master}
\Pi_i(t) =\Pi_i(t-1) +
\sum_{j} \left[
\Pi_j(t-1)W(j\to i)-
\Pi_i(t-1)W(i\to j) \right]
\end{equation}
where $W(i\to j)$ represents the transition probability that is given
by:
\begin{equation}
\label{eq:transition}
W(i\to j)=\left\{
\begin{array}{l l}
\frac{1}{N} & \mbox{if vertices $i$ and $j$ are both allowed} \\
0 & \mbox{otherwise}
\end{array}
\right.
\end{equation}
The equation (\ref{eq:master}) can be rephrased as:
\begin{equation}
\label{eq:operator}
\vec{\Pi}(t)=F\vec{\Pi} (t-1)
\end{equation}
where $F$ is the linear evolution operator.
Since this process is Markovian we can diagonalize $F$; the largest
eigenvalue, corresponding to the infinite time equilibrium limit (where
all sites become equally populated), is 1. We can determine $U$ and $D$
satisfying:
\begin{equation}
\label{eq:diag}
F=U^TDU
\end{equation}
where $D$ is a diagonal matrix. For practical reasons it is convenient
to diagonalize $F$ so as to investigate the temporal evolution of the
relevant quantities. We use:
\begin{equation}
\label{eq:time-evol}
\Pi(t)=F^t\Pi(0)=U^TD^tU\Pi(0)
\end{equation}
We choose the initial condition as:
\begin{equation}
\label{eq:initial}
\Pi_i(0)=\delta_{ii_o}
\end{equation}
where $i_o$ is a vertex on the giant cluster.
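On a toy graph the spectral evolution (\ref{eq:diag})--(\ref{eq:time-evol}) can be verified directly. The sketch below (our own illustration, a 4-vertex path with the lazy rule, not the hypercube code) checks that the spectral reconstruction of $\Pi(t)$ agrees with repeated application of $F$, that probability is conserved, and that the top eigenvalue is $1$:

```python
import numpy as np

# Lazy walk on a 4-vertex path: each step picks one of N = 3 "directions";
# a blocked direction leaves the walker in place, so F is symmetric.
Nd = 3
F = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    F[i, j] = F[j, i] = 1.0 / Nd
np.fill_diagonal(F, 1.0 - F.sum(axis=0))  # waiting probabilities

# numpy's eigh returns F = U diag(eigvals) U^T (transposed convention
# relative to F = U^T D U in the text).
eigvals, U = np.linalg.eigh(F)
pi0 = np.array([1.0, 0.0, 0.0, 0.0])  # walker starts at vertex 0

t = 7
pi_direct = np.linalg.matrix_power(F, t) @ pi0
pi_spectral = U @ np.diag(eigvals ** t) @ U.T @ pi0

assert np.allclose(pi_direct, pi_spectral)
assert np.isclose(pi_direct.sum(), 1.0)  # probability is conserved
assert np.isclose(eigvals.max(), 1.0)    # equilibrium (uniform) mode
```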
The value of the normalized autocorrelation function $q(t)$ after time
$t$ for a given walk starting from $i_o$ and arriving at $i$ after
time $t$ can be defined by:
\begin{equation}
\label{eq:correlation}
q(t) =\left\langle \frac{1}{N_G}\sum_{i_o} \sum_{i} \Pi_i(t)
\frac{N-2d_H(i,i_o)}{N}- q_\infty \right\rangle
\end{equation}
where $d_H(i,i_o)$ is the Hamming distance between vertex $i$ and the
initial state, $N_G$ is the number of vertices on the giant cluster, and
$q_\infty$ for a given realization is given by:
\begin{equation}
\label{eq:correlation2}
q_\infty =\frac{1}{N_G^2}\sum_{i,i_o}
\frac{N-2d_H(i,i_o)}{N}
\end{equation}
and the averages are over different realizations of the diluted
hypercube.
We also calculated $p_{ret}$ defined by:
\begin{equation}
\label{eq:pret}
p_{ret}(t)=\left\langle \frac{1}{N_G}\sum_{i_o} \Pi_{i_o}(t)-\frac{1}{N_G}\right\rangle
\end{equation}
We can show that:
\begin{equation}
\label{eq:pret2}
p_{ret}(t)=\frac{1}{N_G}\left\langle \sum_{j}\lambda_j^t
-1 \right\rangle
\end{equation}
This quantity is easier to calculate theoretically than $q(t)$, but it
is not useful to compare with results on model spin systems or
experiments. We can write this equation in a more convenient form:
\begin{equation}
\label{eq:pret3}
p_{ret}(t)=\frac{1}{N_G}\left\langle \sum_{i:\,s_i\neq 0}e^{-s_it}\right\rangle
\end{equation}
where $s_i=-\ln \lambda_i$ and we have excluded the $\lambda =1$ eigenvalue. Another
convenient form for investigating $p_{ret}$ is:
\begin{equation}
\label{eq:dens-exp}
p_{ret}(t)=\int_0^\infty ds\rho(s) e^{-ts}
\end{equation}
where the density $\rho$ is defined by:
\begin{equation}
\label{eq:density}
\rho(s)=\left\langle \frac{1}{N_G-1} \sum_i \delta(s-s_i )
\right\rangle
\end{equation}
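Eqns.~(\ref{eq:pret}) and (\ref{eq:pret2}) are related through the identity $\sum_{i_o}\Pi_{i_o}(t)=\operatorname{Tr}F^t=\sum_j\lambda_j^t$ for a symmetric $F$. A toy numerical check of this identity (our own illustration on a small graph, not the hypercube operator):

```python
import numpy as np

# Toy symmetric lazy-walk operator on a 4-vertex path (illustration only).
F = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    F[i, j] = F[j, i] = 1.0 / 3.0
np.fill_diagonal(F, 1.0 - F.sum(axis=0))

NG = 4
lam = np.linalg.eigvalsh(F)
for t in range(1, 8):
    # average return probability minus its equilibrium value 1/N_G ...
    p_ret = np.linalg.matrix_power(F, t).trace() / NG - 1.0 / NG
    # ... equals the same quantity computed from the spectrum
    p_spec = (lam ** t).sum() / NG - 1.0 / NG
    assert np.isclose(p_ret, p_spec)
```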
Our numerical workflow can be summarized as follows:
\begin{enumerate}
\item generation of a diluted hypercube
\item determination of the giant cluster
\item determination of the eigenvalues and eigenvectors of $F$
\item calculation of $\rho(s)$, $q(t)$ and $p_{ret}(t)$
\end{enumerate}
The algorithm was implemented in Mathematica 8.0 and the simulations
were performed on an Intel Xeon 2.27 GHz machine with 24 GB of RAM.
A single simulation for $N=28$ took 12 hours and required the full
24 GB of memory.
Calculations were made with hypercubes of dimension $N=10, 12, 14, 16,
18, 20, 22, 24, 26$ and $28$. All the calculations were performed at
$p_c(N)$ values given by equation (\ref{eq:pc}); this condition is
important since it allows us to scale conveniently data for systems
having different dimensions $N$. It is useful to be able to include
data for smaller $N$ in the global analysis as in these samples we
deal with much smaller matrices which is simpler computationally.
All vertices on the giant cluster were used as starting points, except
for the largest systems $N=26$ and $28$ where we have not used all
possible initial states $i_o$. For these sizes we approximated $q(t)$
and $p_{ret}(t)$ by using only $1000$ randomly chosen initial states
for each realization. We have tested the accuracy of this
approximation and we concluded that the error was very small (even for
the smaller sizes). We studied $1000$ different realizations of the
hypercube for all sizes $N$ except for $N=28$, for which we studied
$100$.
\section*{Numerical data}
Figure \ref{fig:net} shows a graphical representation of a diluted
$\mathbb{Q}_{24}$; for this particular sample the graph is a tree,
supporting the validity of the approximation proposed in \cite{samukhin2008}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{fig1.pdf}
\end{center}
\caption{ A graphical representation of a diluted $\mathbb{Q}_{24}$
exactly at $p_c$. The picture shows that the network presents no
loops.}
\label{fig:net}
\end{figure}
The time evolution for the autocorrelation functions $q(N,t)$
(\ref{eq:correlation}) is depicted in Figure \ref{fig:corrlog} against
$\log(t)$. On Figure \ref{fig:pretlog} we show the equivalent
results for the return probability $p_{ret}(N,t)$.
In all cases we have fitted the long time part of the curves using
the expression:
\begin{equation}
f(t)=A\exp\left[- \left( \frac{t}{\tau}\right)^{1/3} \right]
\end{equation}
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{fig2.pdf}
\end{center}
\caption{ The relaxation of the autocorrelation function $\log
  q(N,t)$, Eqn.~(\ref{eq:correlation}), against $\log(t)$ for
  $N$ from $10$ to $28$.
}
\label{fig:corrlog}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{fig3}
\end{center}
\caption{ The decay of the return probability $\log p_{ret}(N,t)$,
Eqn.(\ref{eq:pret}), against $\log t$ for $N$ from $10$ to $28$.
}
\label{fig:pretlog}
\end{figure}
In Figures \ref{fig:corrt} and \ref{fig:prett} we present the same
results in a different manner so as to demonstrate the stretched
exponential long time behavior. On the $x$ axis the time scale is
normalized with $x(t) = (t/\tau(N))^{1/3}$ and on the $y$ axis the
measured $q(N,t)$ or $p_{ret}(N,t)$ are normalized so $y(N,t) = \ln
(q(N,t)/A_{q}(N))$ and $y(N,t) = \ln (p_{ret}(N,t)/A_{ret}(N))$
respectively. In these plots a stretched exponential with exponent
$1/3$ is a straight line as observed; we have chosen the normalization
factors $\tau(N)$ and $A_{q}(N), A_{ret}(N)$ so that data for
different hypercube dimensions $N$ collapse. This form of plot allows
one to distinguish clearly between the short time regime and the
stretched exponential regime; the latter can be seen to extend over a
wide time range until measurements are limited by the statistical
noise. The effective exponent $\beta = 1/3$ is independent of $N$ to
within the statistics.
\begin{figure}
\begin{center}
\includegraphics[scale=0.8]{fig4}
\end{center}
\caption{ The decay of the normalized autocorrelation function $\ln
(q(N,t)/A_{q}(N))$ against $(t/\tau)^{1/3}$. For stretched
exponentials with exponent $\beta=1/3$ in the long time regime the
data should lie on a straight line in this form of plot, as
observed. }
\label{fig:corrt}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=1.0]{fig5.pdf}
\end{center}
\caption{ The decay of the normalized return probability $\log
(p_{ret}(N,t)/A_{ret}(N))$ against $(t/\tau(N))^{1/3}$. For
stretched exponentials with exponent $\beta=1/3$ in the long time
regime the data should lie on a straight line on this form of
plot, as observed. }
\label{fig:prett}
\end{figure}
On Figure \ref{fig:tau} we show the size dependence of the time scale
parameter $\tau(N)$ from the fits of the autocorrelation $q(t,N)$ and
the return probability $p_{ret}(t,N)$ data. The data can be fitted by
\begin{equation}
\tau(N)=B 10^{\gamma N}
\label{eq:tau}
\end{equation}
with the fit parameters $\gamma = 0.24\pm 0.1$ and $B= 1.5\pm 0.1$ for
the autocorrelation function, and $\gamma = 0.24\pm 0.05$ and $B= 1.7\pm 0.2$
for the return probability. The values of the time scaling parameters
$\tau(N)$ for the two
different observables are identical within the precision of the
measurements.
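Since $\log_{10}\tau(N)$ in (\ref{eq:tau}) is linear in $N$, the fit reduces to least squares on a semi-logarithmic scale. A sketch with synthetic, noise-free data at the quoted parameter values (our own illustration, not the measured $\tau(N)$):

```python
import numpy as np

gamma_true, B_true = 0.24, 1.5
Ns = np.arange(10, 30, 2)
tau = B_true * 10.0 ** (gamma_true * Ns)  # synthetic tau(N)

# log10(tau) = log10(B) + gamma * N is a straight line in N.
gamma_fit, logB_fit = np.polyfit(Ns, np.log10(tau), 1)

assert abs(gamma_fit - gamma_true) < 1e-10
assert abs(10.0 ** logB_fit - B_true) < 1e-8
```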
\begin{figure}
\begin{center}
\includegraphics[scale=0.25]{fig6}
\end{center}
\caption{ The dependence of the time scale $\tau(N)$ with dimension
for the return probability $p_{ret}(N,t)$ (in red) and
autocorrelation $q(N,t)$ (in blue). }
\label{fig:tau}
\end{figure}
The most fundamental way to understand the system dynamics is through
investigating the eigenvalue spectra; the stretched exponential long
time behavior depends exclusively on the density of the eigenvalues
above the smallest eigenvalue, in the region where the distribution
for a finite size sample can still be considered to be continuous. A
given spectrum leads unambiguously to a unique relaxation function,
while it is much more difficult to determine the precise form of a
mode spectrum from a relaxation function.
On Figure \ref{fig:rho} we compare the mode density $\rho(s)$ obtained
through the present simulations with the theoretical expressions. All
the numerical results were obtained using $1000$ different
realizations of the diluted hypercube at each dimension
$N$. Unfortunately in practice the calculations of $\rho(s)$ are
numerically demanding because of strong sample to sample
fluctuations. The spectra were first binned in the form of histograms.
We defined a cut-off $\lambda_{min}(N)$ or equivalently
$s_{max}(N)=-\ln \lambda_{min}(N)$ to eliminate the short time effects
and selected the eigenvalues on the interval $s\in (0,
s_{max}(N))$. We chose $s_{max}=2/\tau(N)$ for all dimensions.
We divided this interval in bins equally spaced on a
logarithmic scale and then calculated the densities for each interval,
normalizing the frequencies by the length of the intervals.
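This binning procedure amounts to a histogram over logarithmically spaced bins with counts divided by the bin widths. A numpy sketch with a synthetic eigenvalue sample (our own illustration, not the measured spectra):

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.uniform(1e-4, 2.0, size=5000)  # stand-in for the eigenvalues s_i
s_max = 2.0
kept = s[s <= s_max]                   # drop modes beyond the cut-off

bins = np.logspace(np.log10(kept.min()), np.log10(s_max), 25)
counts, edges = np.histogram(kept, bins=bins)
widths = np.diff(edges)
rho = counts / (widths * counts.sum())  # frequencies / interval lengths

# The normalized density integrates to one over the binned range.
assert np.isclose((rho * widths).sum(), 1.0)
```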
The continuous curves were calculated from the expression (\ref{eq:laplace})
for $\rho_{K,1/3}(s)$ and from the approximate
analytic expression (\ref{eq:laplacian}) for $\rho_{S}(s)$ using
$\tau(N)$ estimated from equation (\ref{eq:tau}). To compare with
simulation results we normalized the $\rho(s)$ functions using:
\begin{equation}
C^{-1}=\int_{0}^{s_{max}} \rho(s)ds
\end{equation}
and
\begin{equation}
\rho^\prime(s)=C\rho(s)
\end{equation}
Over the ranges for which reliable data points have been obtained the
measured mode spectrum densities $\rho(N,s)$ closely resemble the
corresponding parts of the calculated spectra from the Laplace
transform $\rho_{K,1/3}(s)$, (\ref{eq:laplace}) or the analytic
$\rho_{S}(s)$ spectrum (\ref{eq:laplacian}) \cite{samukhin2008} (which
are in fact very similar to each other). The numerical spectra for the
hypercube model are indeed consistent with the mode density spectral
form derived analytically for the more general random network model
\cite{samukhin2008}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{fig7}
\end{center}
\caption{Spectral density data $\rho(N,s)$ from the hypercube
    evaluations together with the exact Laplace transforms
    $\rho_{K,1/3}(s)$ (\ref{eq:laplace}) for stretched exponentials with
    $\beta=1/3$ and $\tau(N)$ values equal to the numerical estimates
    (\ref{eq:tau}) (dashed lines), and the analytic sparse network
    expression (\ref{eq:laplacian}) \cite{samukhin2008} (full
    lines). The values are normalized (see text). }
\label{fig:rho}
\end{figure}
\section*{Discussion and conclusions}
We have studied numerically relaxation through random walks along near
neighbor edges on the giant cluster of vertices in randomly diluted
hypercubes of dimensions up to $N=28$ near the percolation threshold
for the cluster. The data show clearly that at the percolation
threshold concentration $p_c(N)$, the relaxation mode spectrum, the
time dependence of the autocorrelation $q(N,t)$, and the return
probability $p_{ret}(N,t)$, are all consistent with asymptotic
stretched exponential relaxation $\exp[-(t/\tau(N))^\beta]$ having
exponent $\beta = 1/3$. The time scale $\tau(N)$ increases
exponentially with dimension $N$, Eqn. (\ref{eq:tau}). The observed
eigenvalue spectra demonstrate that the dynamical $q(N,t)$ behavior
previously obtained from Monte Carlo simulations and from numerical
solutions of the master equation
\cite{lemke1996,lemke2000,almeida2000} does not represent a crossover
between different exponential regimes, but that it is the consequence
of a specific wide eigenvalue spectrum.
A final long time crossover to a pure exponential (which would
correspond to a regime where the effective relaxation mode spectrum is
reduced to a gap between the ground state and the lowest mode) is not
visible in the data.
This diluted hypercube model at threshold can be considered as the
limiting high dimensional case of percolation on sphere-like
spaces. Alternatively it can be considered as a specific explicit
example of a generic sparse random network. The observed stretched
exponential behavior with exponent $\beta =1/3$ on the dilute
hypercube at the percolation threshold is consistent with the
predictions of the sphere-like percolation approach \cite{ian1} and
with studies of random walks on sparse random networks
\cite{bray1988,samukhin2008}, where the same stretched exponential
relaxation with the same exponent $\beta = 1/3$ has been derived
analytically.
For a physical system, configuration space can be imagined as a very
high dimensional graph. The system's dynamics is equivalent to a
random walk of the point representing the instantaneous state of the
system among those vertices of the graph which are thermodynamically
accessible. We suggest that when the stretched exponential
$\exp[-(t/\tau)^{1/3}]$ form of limiting relaxation with diverging
$\tau$ is observed numerically or experimentally for the
autocorrelation function relaxation $q(t)$ in complex physical systems
(which is often the case; see for instance
\cite{ogi1985,angelani,billoire}), it is the signature of a
configuration space tending to a percolation threshold and having a
sparse random network topology.
\section*{Acknowledgements}
This work was supported by FAPESP grant no. 09/10382-2. This research
was supported by resources supplied by the Center for Scientific
Computing (NCC/GridUNESP) of the S\~ao Paulo State University (UNESP).
% "Stretched exponential behavior and random walks on diluted hypercubic lattices" (https://arxiv.org/abs/1106.3109), submitted 2011-06-17.
% Subjects: Statistical Mechanics (cond-mat.stat-mech); Disordered Systems and Neural Networks (cond-mat.dis-nn); Soft Condensed Matter (cond-mat.soft)
% Abstract: Diffusion on a diluted hypercube has been proposed as a model for glassy relaxation and is an example of the more general class of stochastic processes on graphs. In this article we determine numerically through large scale simulations the eigenvalue spectra for this stochastic process and calculate explicitly the time evolution for the autocorrelation function and for the return probability, all at criticality, with hypercube dimensions $N$ up to $N=28$. We show that at long times both relaxation functions can be described by stretched exponentials with exponent $1/3$ and a characteristic relaxation time which grows exponentially with dimension $N$. The numerical eigenvalue spectra are consistent with analytic predictions for a generic sparse network model.
% "Recent Progress on Definability of Henselian Valuations" (https://arxiv.org/abs/1608.02342)
% Abstract: Although the study of the definability of henselian valuations has a long history starting with J. Robinson, most of the results in this area were proven during the last few years. We survey these results which address the definability of concrete henselian valuations, the existence of definable henselian valuations on a given field, and questions of uniformity and quantifier complexity.
\section{Definability of a given henselian valuation}
The first definability results for henselian valuations were obtained 50 years ago. \label{sec:1}
The main aim was to reduce the decidability of some field to the decidability
of a subring or subfield. Julia Robinson showed the following theorem which implies
that the $\mathcal{L}_\textrm{ring}$-theory of $\mathbb{Q}_p$ is decidable if
and only if the $\mathcal{L}_\textrm{ring}$-theory of
$\mathbb{Z}_p$ is decidable:
\begin{Theorem}[J.\ Robinson, {\cite[\S2]{Robinson}}]\label{Robinson}
The $p$-adic valuation on $\mathbb{Q}_p$ is defined by the formula
$$
\exists y\;(y^2=1+kx^2)
$$
where $k=p$ if $p$ is odd, and $k=8$ if $p=2$.
\end{Theorem}
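The arithmetic behind Robinson's formula can be checked directly (our own sketch, not part of the survey): for odd $p$, $1+px^2$ is a square in $\mathbb{Q}_p$ exactly when $v_p(x)\geq 0$, since a unit congruent to $1$ mod $p$ is a square by Hensel's lemma, while $v_p(x)<0$ forces $1+px^2$ to have odd valuation. The helper functions below are illustrative.

```python
from fractions import Fraction

def vp(q, p):
    """p-adic valuation of a nonzero rational q."""
    q = Fraction(q)
    num, den, v = q.numerator, q.denominator, 0
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def is_square_in_Qp(q, p):
    """For odd p: q = p^v * u is a square in Q_p iff v is even and the
    unit u is a quadratic residue mod p (Hensel's lemma lifts the root)."""
    q = Fraction(q)
    if q == 0:
        return True
    v = vp(q, p)
    if v % 2 != 0:
        return False
    u = q / Fraction(p) ** v
    a = u.numerator * pow(u.denominator, -1, p) % p  # the unit reduced mod p
    return pow(a, (p - 1) // 2, p) == 1              # Euler's criterion

def robinson(x, p):
    """Robinson's formula: does y^2 = 1 + p*x^2 have a solution in Q_p?"""
    return is_square_in_Qp(1 + p * Fraction(x) ** 2, p)

p = 5
for x in [Fraction(0), Fraction(3), Fraction(10, 7), Fraction(1, 5), Fraction(2, 25)]:
    in_Zp = (x == 0) or vp(x, p) >= 0
    assert robinson(x, p) == in_Zp
```

The loop confirms that the formula picks out exactly the elements of $\mathbb{Z}_p$ on a few sample rationals.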
Similarly, Ax gave a definition of the ring $F[[t]]$ in $F((t))$ to conclude
that, in the case ${\rm char}(F)=0$,
the $\mathcal{L}_\textrm{ring}$-theory of $F((t))$ is decidable if and only if
the $\mathcal{L}_\textrm{ring}$-theory of $F$ is decidable.
\begin{Theorem}[Ax, \cite{Ax}]\label{Ax}
For any field $F$, the power series valuation on the field of Laurent series $F((t))$
is defined by
\begin{align*}
\exists w, y \forall u, x_1, x_2 \exists z \forall y_1, y_2\, [&(z^m=1+wx_1^mx_2^m\vee y_1^m\neq 1+wx_1^m \vee y_2^m\neq 1+wx_2^m)
\\&\wedge u^m\neq w\wedge y^m=1+wx^m]
\end{align*}
where $m>1$ and ${\rm char}(F)\nmid m$.
\end{Theorem}
Moreover, Ax observed that both Julia Robinson's and his formulae work for arbitrary henselian valued fields with value group a $\mathbb{Z}$-group (i.e., when
$vK \equiv \mathbb{Z}$ as ordered abelian groups).
A first generalization of these results was proven by Koenigsmann (\cite[Lemma 3.6]{Koe04}) who showed\footnote{Note that
his proof relies on a result from \cite{Koe94} which does not quite work.
However, the citation in question can be replaced by Theorem \ref{uniformp} below.}
that any henselian valuation with an archimedean non-divisible value group is $\emptyset$-definable.
In \cite[Corollary 2]{Hong}, Hong points out that Julia Robinson's formula can be generalized to henselian valued fields with discrete value group (i.e., when the value group has a least positive element).
Improving on this further and incorporating Ax's idea, Hong finally proves:
\begin{Theorem}[Hong, {\cite[Theorem 4]{Hong}}]
Let $(K,v)$ be a henselian valued field. \label{Hong}
If $vK$ is regular and non-divisible, then $v$ is $\emptyset$-definable.
\end{Theorem}
Here, an ordered abelian group is called \emph{regular} if every quotient by a non-zero convex subgroup is divisible.
In particular, all $\mathbb{Z}$-groups and all archimedean ordered abelian groups are regular.
All the above definitions of henselian valuations
were obtained using properties of the value group of said valuation.
On the other hand, one can also define a henselian valuation using specifics
about its residue field, as asserted by the next theorem.
\begin{Theorem}[Jahnke--Koenigsmann, {\cite[Proposition 3.1, Corollaries 3.3 and 3.8]{JK12}}] \label{JK12}
Let $(K,v)$ be a henselian valued field.
Then $v$ is $\emptyset$-definable if the residue field $Kv$ satisfies one of the following conditions:
\begin{enumerate}
\item for some prime $p>2$, $Kv$ is not $p$-henselian and not $p$-closed.
\item $Kv$ is hilbertian.
\item $Kv$ is pseudo-algebraically closed and not separably closed.
\end{enumerate}
\end{Theorem}
See \cite[Chapters 11 and 12]{FJ} for definitions of pseudo-algebraically closed and hilbertian fields.
In particular, condition (1) applies to the case where $Kv$ is {\em finite},
and condition (2) applies in particular to the case where $Kv$ is a {\em number field}.
As discussed in section \ref{sec:2} below, the canonical henselian valuation is not necessarily definable. The canonical $p$-henselian valuation
is however usually definable. More precisely,
correcting a mistake in \cite{Koe95}, Jahnke and Koenigsmann show:
\begin{Theorem}[Jahnke--Koenigsmann, {\cite[Theorem 3.1]{JK15b}}]\label{uniformp}
For every prime $p>2$ there exists a $\emptyset$-formula
which defines the canonical $p$-henselian valuation
on any field $K$ with either ${\rm char}(K)=p$ or $\mu_p\subseteq K$, where $\mu_p$ denotes the set of $p$th roots
of unity.
\end{Theorem}
Note that there is also a version for $p=2$, see \cite{JK15b} for more details.
\section{Definable henselian valuations on a given henselian field}
A field that admits a henselian valuation may in fact admit a multitude of them,
some of which might be definable, while others are not.
This section focuses on the question of when a henselian field admits any definable henselian valuation at all. \label{sec:2}
A first observation in this direction is that not every henselian field admits a nontrivial definable valuation:
Every nontrivial valuation on an algebraically closed, or, more generally, separably closed, field is henselian, although never definable\footnote{This follows from the stability of separably closed fields, see \cite{Wood}.}.
Also every non-archimedean real closed field carries nontrivial henselian valuations, none of which are definable.
There are, however, more henselian fields without definable henselian valuations:
\begin{Theorem}[Prestel--Ziegler, {\cite[p.~338]{PZ}}]\label{PZ}
There exists a henselian valued field $K$ of characteristic zero which is neither algebraically closed nor real closed
and does not admit any nontrivial $\emptyset$-definable henselian valuation.
\end{Theorem}
However, building on ideas from \cite{AEJ},
Koenigsmann sketched a proof in \cite{Koe94} that a henselian valued field which is neither separably closed nor real closed
always admits a definable valuation that at least induces the same topology as a henselian valuation.
There is no published article containing these
results, and the preprint has some flaws.
However, it has been very influential, and has paved the way for a number of the current advances we discuss in this survey
(in particular those in \cite{Hong}, \cite{JK15b}, \cite{JK16} and \cite{Krupinski}).
Koenigsmann's approach can be made rigorous by using results from \cite{JK15b}, see \cite{Dupont}.
Refinements of the Prestel--Ziegler construction have recently been given in \cite{FJ15}, \cite{JK16} and \cite{AJ}.
In particular, Theorem \ref{PZ} can in fact be strengthened from ``$\emptyset$-definable'' to ``definable'', see \cite[Example 6.2]{JK16}.
The relations between admitting $\emptyset$-definable and definable henselian valuations have been explored further in \cite{AJ}.
They also show:
\begin{Theorem}[Anscombe--Jahnke, {\cite[Proposition 3.4]{AJ}}]
Let $K$ be a field which is not algebraically closed and whose canonical henselian valuation has residue characteristic zero.
If every field elementarily equivalent to $K$ is henselian, then $K$ admits a nontrivial $\emptyset$-definable henselian valuation.
\end{Theorem}
While in the previous section we listed
results that give sufficient conditions on the residue field or the value group
of a henselian valuation
to imply that this particular valuation is definable,
the following two results
give sufficient conditions on the residue field or value group
of a henselian valuation
to imply the existence of some nontrivial definable henselian valuation on that field.
The first one is about so-called {\em almost real closed} fields,
which is expressed as a condition on the residue field:
\begin{Theorem}[Delon--Farr\'e, {\cite[\S2]{DF}}]
Let $K$ be a field which is not real closed.
If $K$ admits a henselian valuation with real closed residue field,
then $K$ admits a nontrivial $\emptyset$-definable henselian valuation.
\footnote{Proof: There is a coarsest valuation $v_0$ on $K$ with real closed residue field \cite[Prop.~2.1]{DF},
whose valuation ring is the intersection of a family of valuation rings of valuations $v_{S}$, where $S$ runs over all
finite sets of primes \cite[Proof of Lemma 2.7]{DF}.
By assumption, $v_0$ is nontrivial, hence also one of the $v_S$ must be nontrivial,
and it is $\emptyset$-definable by \cite[Prop.~2.6]{DF}.
}
\end{Theorem}
An analogous result holds when one replaces \emph{real closed} by \emph{separably closed}, see \cite{JK12}.
The second result in this direction uses a condition on the value group:
\begin{Theorem}[Jahnke--Koenigsmann, {\cite[Proposition 4.2]{JK16}}]
Let $K$ be a field.
If $K$ admits a henselian valuation with non-divisible value group,
then $K$ admits a nontrivial definable henselian valuation.
\end{Theorem}
Moreover, \cite[Theorems A and B]{JK16} give sufficient conditions on a henselian field $K$,
in terms of the value group and residue field of the canonical henselian valuation,
for $K$ to admit nontrivial definable resp.~$\emptyset$-definable henselian valuations.
In case the residue field of the canonical henselian valuation on $K$ has characteristic zero,
this is even a full characterization
for the existence of nontrivial definable henselian valuations, cf.~\cite[Corollary 6.1]{JK16}.
As a concrete example they deduce that
every henselian field which is neither separably nor real closed and
has finite transcendence degree over its prime field
admits a nontrivial $\emptyset$-definable henselian valuation.
In a different paper they
give conditions on the absolute Galois group of a henselian field
that imply the existence of a nontrivial definable henselian valuation:
\begin{Theorem}[Jahnke--Koenigsmann, {\cite[Theorem 3.15]{JK12}}]
Let $K$ be a hensel\-ian field which is neither separably nor real closed.
If there exists a finite group $G$ such that no finite extension of $K$ has a Galois extension with Galois group $G$,
then $K$ admits a nontrivial $\emptyset$-definable henselian valuation.
\end{Theorem}
This result applies in particular to henselian valued {\em NIP} fields of positive characteristic that are not separably closed (see \cite[Corollary 3.18]{JK12}), which subsequently was generalized from NIP to $n$-NIP in \cite[Proposition 7.4]{Hempel}. For the definition of NIP and more details on NIP fields as well as on related model-theoretic concepts, see \cite{Sim15}.
Similarly, Johnson in \cite{Johnson} uses Theorem
\ref{uniformp} to give a classification of {\em dp-minimal} fields which involves showing that
every dp-minimal field which is neither algebraically closed nor real closed admits a nontrivial definable henselian valuation.
Also \cite{Krupinski} gives some results towards the Shelah--Hasson conjecture:
using \cite{Koe94}, he shows that every radically bounded field that admits a valuation with non-divisible value group
admits a nontrivial definable valuation (not necessarily henselian).
As an application, he deduces that every valuation on a
field which is {\em superrosy} or {\em minimal} has divisible value group.
\section{Quantifier complexity and questions of uniformity}
\label{sec:3}
When defining sets in any structure, we are often interested in the quantifier complexity of the formulae involved and in determining
in which elementary classes a given definition uniformly yields the desired set. The classical results by J. Robinson (Theorem \ref{Robinson})
and Ax
(Theorem \ref{Ax}) discussed in
the first section give an $\exists$-$K$-formula and an $\exists\forall\exists\forall$-$\emptyset$-formula defining the valuation
ring $\mathcal{O}_v$ on any henselian valued field $(K,v)$ for which the value group is a $\mathbb{Z}$-group.
Moreover, one can check that the formulae worked out by Hong for $(K,v)$ henselian with a
regular, non-discrete and non-divisible value group (as
given in the proof of Theorem \ref{Hong} above) are in fact
$\exists\forall$-$\emptyset$-formulae.
A class of henselian fields of particular interest is formed by the $p$-adics, their algebraic extensions and related fields
(e.g., $\mathbb{F}_p((t))$ and ultraproducts of the $p$-adics). Here,
we have the following results which were the starting point to many other recent works:
\begin{Theorem}[Cluckers--Derakhshan--Leenknegt--Macintyre, {\cite[Theorems 2, 5 and 6]{CDLM}}] \label{CDLM} \mbox{}
\begin{enumerate}
\item There is an $\exists\forall$-$\emptyset$-formula defining $\mathcal{O}_v$
in any henselian valued field $(K,v)$ with $Kv$ finite or pseudofinite.
\item There is neither a $\forall$-$\emptyset$-formula nor an $\exists$-$\emptyset$-formula
which uniformly defines $\mathbb{Z}_p$ in $\mathbb{Q}_p$ for all primes $p$.
However, for any fixed finite extension $K$ of $\mathbb{Q}_p$, the
unique prolongation of $v_p$ to $K$ is both $\forall$-$\emptyset$-definable and
$\exists$-$\emptyset$-definable.
\end{enumerate}
\end{Theorem}
Here, a field is called \emph{pseudofinite} if it is an infinite model of the common $\mathcal{L}_\mathrm{ring}$-theory
of all finite fields.
In particular,
all pseudofinite fields are pseudo-algebraically closed but not separably closed. See \cite[\S20.10]{FJ} for more details
on pseudofinite fields.
In fact, the statement of Theorem \ref{CDLM}(1) given in \cite{CDLM} is stronger than the version given above, since they actually
prove $\exists$-$\emptyset$-definability in a modification of the Macintyre language $\mathcal{L}_\mathrm{Mac}$,
see the introduction of \cite{CDLM} for more details.
Note that if $(K,v)$ is henselian with finite (respectively pseudofinite)
residue field and $L$ is a finite extension of $K$,
then the residue field of the unique prolongation of $v$ to $L$ is again finite
(respectively pseudofinite).
Thus, since for a fixed henselian valued field $(K,v)$ there is neither a
$\forall$-$\emptyset$-formula nor an $\exists$-$\emptyset$-formula
which uniformly defines the unique prolongation of $v$ to $L$ for all finite extensions $L$ of $K$ (cf.~\cite[Theorem 4]{CDLM}),
the first statement of Theorem \ref{CDLM} cannot be improved to either
a uniform $\forall$-$\emptyset$-definition or a uniform $\exists$-$\emptyset$-definition.
The positive characteristic analogue of Theorem \ref{CDLM}(2) was shown by Anscombe and Koenigsmann. Again,
the definition cannot work uniformly for all power series fields over finite fields.
\begin{Theorem}[Anscombe--Koenigsmann, {\cite[Theorem 1.1]{AK}}] Let $q$ be a prime power. Then
$\mathbb{F}_q[[t]]$ is $\exists$-$\emptyset$-definable in $\mathbb{F}_q((t))$.
\end{Theorem}
Fehm generalizes the methods employed by Anscombe and Koenigsmann in \cite{F15} to work for all henselian valuations with
finite residue field. Moreover, he shows that although
uniformity for all primes is impossible to achieve, one can always find $\exists$-$\emptyset$-formulae which work
for large (infinite) families of finite residue fields:
\begin{Theorem}[Fehm, {\cite[Theorems 1.1 and 1.2]{F15}}] \label{Fehm} \mbox{}
\begin{enumerate}
\item Let $(K,v)$ be henselian. If $Kv$ is finite or pseudo-algebraically closed and the algebraic part of $Kv$ is not algebraically
closed, then $\mathcal{O}_v$ is $\exists$-$\emptyset$-definable.
\item For every $\varepsilon >0$ there is an $\exists$-$\emptyset$-formula $\phi$ and a set $P$ of primes of
Dirichlet density at least $1- \varepsilon$ such that for any henselian $(K,v)$ with $|Kv| \in P$, the formula
$\phi$ defines $\mathcal{O}_v$ in $K$.
\end{enumerate}
\end{Theorem}
Note that the assumption on the algebraic part of $Kv$ being non-algebraically closed is necessary: The power series valuation
on $\mathbb{C}((t))$ is not $\exists$-$\emptyset$-definable (\cite[Appendix A]{AK}),
and $\mathbb{C}$ is of course pseudo-algebraically closed.
All of the above results give explicit formulae. After these results had emerged, Prestel proved a Beth-like Characterization Theorem
in \cite{Prestel} which implies the existence of low-quantifier definitions (without a method to explicitly construct them).
Applying this, he shows the following (partial) improvement of Theorems \ref{JK12} and \ref{CDLM}:
\begin{Theorem}[Prestel, {\cite[Theorem 1]{Prestel}}]
There is an $\exists\forall$-$\emptyset$-formula defining uniformly the henselian valuations whose
residue field is either finite, pseudofinite or hilbertian.
\end{Theorem}
In \cite{FehmPrestel}, the authors apply Prestel's Characterization Theorem to obtain the existence of
uniform $\exists$-$\emptyset$- and
$\forall$-$\emptyset$-definitions for $\mathbb{Z}_p$ in $\mathbb{Q}_p$ and for $\mathbb{F}_p[[t]]$ in $\mathbb{F}_p((t))$
for odd primes $p$ in the Macintyre language $\mathcal{L}_\textrm{Mac}$. Moreover, they build on Hong's work (Theorem \ref{Hong}) to show
\begin{Theorem}[Fehm--Prestel, {\cite[Corollary 3.8]{FehmPrestel}}]
Let $(K,v)$ be a henselian valued field.
If $vK$ is regular and non-divisible, then $v$ is $\exists\forall$-$\emptyset$-definable.
\end{Theorem}
Prestel's Characterization Theorem led him to ask whether,
whenever a henselian valuation is $\emptyset$-definable,
it is
already definable by a formula of low quantifier complexity (that is, by an $\exists\forall$-$\emptyset$-formula or an $\forall\exists$-$\emptyset$-formula). For canonical ($p$-)henselian valuations, this question is addressed
by Fehm and Jahnke in \cite{FJ15}. In the simpler case of the canonical $p$-henselian valuation and the setting of Theorem \ref{uniformp},
the canonical $p$-henselian
valuation is either $\forall\exists$-$\emptyset$-definable or $\exists\forall$-$\emptyset$-definable (depending on whether the residue
field is $p$-closed or not, see \cite[Propositions 3.6 and 3.7]{FJ15}).
For the canonical henselian valuation, the analogous result is only obtained if the absolute Galois group of the field is small
(\cite[Theorem 1.1]{FJ15}).
On the other hand, Halupczok and Jahnke construct an $\emptyset$-definable henselian valuation in \cite[Theorem 1.3]{HJ15}
which is neither definable by an
$\exists\forall$-$\emptyset$-formula nor an $\forall\exists$-$\emptyset$-formula.
Very recently, Anscombe and Fehm gave a characterization of $\exists$-$\emptyset$-definable as well as $\forall$-$\emptyset$-definable henselian valuation rings
(\cite{AF2}). In particular, they show that the question of whether a given equicharacteristic henselian valuation
on a field is $\exists$-$\emptyset$-definable
(respectively $\forall$-$\emptyset$-definable)
depends only on the residue field of that valuation. The existential case of their theorem reads as follows.
\begin{Theorem}[Anscombe--Fehm, {\cite[Theorem 1.1]{AF2}}] Let $F$ be a field. Then the following are equivalent: \label{AF}
\begin{enumerate}
\item There is an $\exists$-$\emptyset$-formula that defines $\mathcal{O}_v$ in $K$ for \emph{some}
equicharacteristic henselian nontrivially valued field $(K,v)$ with residue field $F$.
\item There is an $\exists$-$\emptyset$-formula that defines $\mathcal{O}_v$ in $K$ for \emph{every}
henselian valued field $(K,v)$ with residue field elementarily equivalent to $F$.
\item There is no elementary extension $F^* \succ F$ with a nontrivial valuation $v$ on $F^*$ for which the
residue field $F^*v$ embeds into $F^*$.
\end{enumerate}
\end{Theorem}
For the universal version of Theorem \ref{AF}, the quantifier $\exists$ is replaced by $\forall$ in conditions
(1) and (2), and condition (3)
is replaced by
\begin{enumerate}
\item[(3)'] There is no elementary extension $F^* \succ F$ with a nontrivial henselian valuation $v$ on a subfield $E \subseteq F^*$
with $Ev \cong F^*$.
\end{enumerate}
It is easy to see that both conditions (3) and (3)' hold for example in the setting of Theorem \ref{Fehm}(1).
Anscombe and Fehm also apply their results to show (\cite[Corollary 6.12]{AF2})
that for any field $F$, the valuation ring $F[[t]]$ is $\forall$-$F$-definable
on $F((t))$ if and only if $F$ is not a large field.
(\emph{Large} is a property
of fields which is a common generalization of henselian and pseudo-algebraically closed,
see \cite[\S6.2]{AF2} for more details and references on large fields.)
\section*{Acknowledgements}
The authors would like to thank Martin Bays, Jochen Koenigsmann and Alexander Prestel for many helpful discussions and
their comments on this survey.
\bibliographystyle{amsplain}
% arXiv:1608.02342, submitted 2016-08-09. Subjects: Logic (math.LO)
% "A direct proof for Lovett's bound on the communication complexity of low rank matrices" (https://arxiv.org/abs/1409.6366)
% Abstract: The log-rank conjecture in communication complexity suggests that the deterministic communication complexity of any Boolean rank-$r$ function is bounded by $\mathrm{polylog}(r)$. Recently, major progress was made by Lovett who proved that the communication complexity is bounded by $O(r^{1/2} \log r)$. Lovett's proof is based on known estimates on the discrepancy of low-rank matrices. We give a simple, direct proof based on a hyperplane rounding argument that in our opinion sheds more light on the reason why a root factor suffices and what is necessary to improve on this factor.
\section{Introduction}
In the classical \emph{communication complexity} setting, we imagine two players, Alice and Bob,
and a function $f : X \times Y \to \{ \pm 1\}$. The players agree on a communication protocol beforehand; then
Alice is given an input $x \in X$ and Bob is given an input $y \in Y$. The players then exchange messages
to figure out the function value $f(x,y)$ of their common input. The cost of the protocol is the number
of exchanged bits for the worst-case input. Moreover, we denote the cost of the most efficient protocol by $CC^{\det}(f)$.
It is common to view the function $f$ as a matrix $M \in \{ \pm 1\}^{X \times Y}$ with entries $M_{xy} = f(x,y)$ --- we will interchangeably use the function $f$ and the matrix $M$ and we abbreviate $\textrm{rank}(f) := \textrm{rank}(M)$.
A \emph{monochromatic rectangle} for $f$ is a subset $R = X' \times Y'$ with $X' \subseteq X$ and $Y' \subseteq Y$ on which the function is constant. In particular, the leaves of the optimal deterministic protocol tree correspond to a partition
of $M$ into $2^{CC^{\textrm{det}}(f)}$ many monochromatic rectangles.
Observe that this partition can be used to write $M$ as the sum of $2^{CC^{\textrm{det}}(f)}$ many rank-1 matrices,
which implies that
$CC^{\textrm{det}}(f) \geq \log \textrm{rank}(f)$. On the other hand it is also known that $CC^{\textrm{det}}(f) \leq \textrm{rank}(f)$.
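As a small worked instance of this decomposition argument (our own toy example, not from the text), the $4\times4$ equality-type sign matrix can be partitioned into $10$ monochromatic rectangles, each of which contributes a rank-1 matrix, so its rank is at most the number of rectangles:

```python
from fractions import Fraction

def rank(M):
    """Rank over the rationals by Gaussian elimination."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(A), len(A[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# equality-type sign matrix: M[x][y] = 1 if x == y else -1
M = [[1 if x == y else -1 for y in range(4)] for x in range(4)]

# a hand-made partition of M into monochromatic rectangles (rows, cols)
rects = [([x], [x]) for x in range(4)] + [
    ([0], [1, 2, 3]), ([1, 2, 3], [0]),
    ([1], [2, 3]), ([2, 3], [1]),
    ([2], [3]), ([3], [2]),
]

# each rectangle is monochromatic, the rectangles tile the matrix, and
# each contributes a rank-1 matrix, so rank(M) <= number of rectangles
assert all(len({M[i][j] for i in R for j in C}) == 1 for R, C in rects)
assert sorted((i, j) for R, C in rects for i in R for j in C) == \
       sorted((i, j) for i in range(4) for j in range(4))
assert rank(M) <= len(rects)
```

Here $\textrm{rank}(M)=4$ while the partition has $10$ parts, so any protocol tree inducing such a partition must have at least $\log_2 4 = 2$ bits of communication.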
In fact, Lov{\'a}sz and Saks~\cite{LatticesMoebiusCommComplexity-LovaszSaks-FOCS88} even conjectured that
the rank lower bound is tight up to a polynomial factor, that is, $CC^{\textrm{det}}(f) \leq (\log \textrm{rank}(f))^{O(1)}$.
The exponent in this \emph{log-rank conjecture} needs to be at least $\log_3(6) \approx 1.63$ (unpublished result of Kushilevitz, cf.~\cite{RankVsCommunicationComplexity-NisanWigderson95}).
Small improvements have been made by Kotlov~\cite{Rank-and-chromatic-number-of-a-graph-Kotlov97},
who showed that $CC^{\textrm{det}}(f) \leq \log(4/3) \textrm{rank}(f)$
and Ben-Sasson, Ron-Zewi and Lovett~\cite{AddCombinatorics-approach-comm-complexity-BenSassonLovettRonZewi-FOCS12}
who gave an asymptotic improvement of $CC^{\textrm{det}}(f) \leq O(\frac{\textrm{rank}(f)}{\log \textrm{rank}(f)})$,
but had to assume the polynomial Freiman--Ruzsa conjecture.
In a recent breakthrough, Lovett~\cite{CommunicationBoundedByRootRank-Lovett-STOC2014} showed
an unconditional bound of $CC^{\textrm{det}}(f) \leq O(\sqrt{r} \log r)$.
The key ingredient for his result is a lower bound on the \emph{discrepancy} of a function/matrix.
\begin{theorem}[\cite{CommunicationComplexity-KushilevitzNisan1995,Complexity-measures-of-sign-matrices-LMSS-Combinatorica2007}]
For any rank-$r$ Boolean matrix $M$ and any measure $\mu$ on its entries,
there exists a rectangle $R$ so that
$
| \mu(f^{-1}(1) \cap R) - \mu(f^{-1}(-1) \cap R)| \geq \frac{1}{8\sqrt{r}}
$.
\end{theorem}
Formally, the discrepancy of a function is the minimum such quantity over all possible measures,
\[
\textrm{disc}(f) = \min_{\textrm{measure }\mu} \; \max_{\textrm{rectangle } R} | \mu(f^{-1}(1) \cap R) - \mu(f^{-1}(-1) \cap R)|
\]
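As a toy illustration of these quantities (our own example, using the uniform measure rather than the minimizing one), one can brute-force the rectangle maximum for the $4\times 4$ Hadamard sign matrix, which has rank $4$, and confirm the $\frac{1}{8\sqrt{r}}$ bound:

```python
from itertools import product

# 4x4 Hadamard matrix: entries +-1, orthogonal rows, hence rank 4
M = [[1, 1, 1, 1],
     [1, -1, 1, -1],
     [1, 1, -1, -1],
     [1, -1, -1, 1]]
n, r = 4, 4

# uniform measure mu (weight 1/n^2 per entry); brute-force all rectangles S x T
best = 0.0
for S, T in product(range(1, 1 << n), repeat=2):
    rows = [i for i in range(n) if S >> i & 1]
    cols = [j for j in range(n) if T >> j & 1]
    # mu(f^{-1}(1) cap R) - mu(f^{-1}(-1) cap R) = (sum of entries in R) / n^2
    d = abs(sum(M[i][j] for i in rows for j in cols)) / n ** 2
    best = max(best, d)

assert best >= 1 / (8 * r ** 0.5)   # the guaranteed 1/(8*sqrt(r)) = 1/16
print(best)
```

For this matrix the maximizing rectangle is a $3\times3$ block of imbalance $5/16$, comfortably above the guaranteed $1/16$; the theorem's strength is that some such rectangle survives for \emph{every} measure $\mu$.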
The discrepancy lower bound is tight in general, that is, there are indeed functions $f$ with
$\textrm{disc}(f) \leq O(\frac{1}{\sqrt{\textrm{rank}(f)}})$, which suggests that using discrepancy as a black box might not be enough for
a better bound. Recently Shraibman~\cite{CorruptionBoundCommComplexityShraibman-ARXIV2014}
obtained
Lovett's bound in terms of the corruption lower bound. However, for expressing the bound in
terms of the rank of the matrix, the result still relies on the black-box bound on the discrepancy.
In this write-up, we give a direct proof that bypasses the discrepancy lower bound and
makes it clearer what needs to be done in order to break the $\sqrt{\textrm{rank}(f)}$ barrier.
For more details on communication complexity we refer to the book of Kushilevitz and
Nisan~\cite{CommunicationComplexity-KushilevitzNisan1995}.
\section{Preliminaries}
The main technical result of Lovett~\cite{CommunicationBoundedByRootRank-Lovett-STOC2014} in a
slightly paraphrased form says that we can always find a large
rectangle $R$ that is \emph{almost monochromatic}.
\begin{theorem} \label{thm:FindingAlmostMonochromaticRectangle}
Let $f : X \times Y \to \{ \pm 1\}$ be a Boolean function
with rank $r$ and let $\mu$ be a measure on $X \times Y$
with $\mu(f^{-1}(1)) \geq \delta>0$. Then there exists a rectangle $R \subseteq X \times Y$
with $\mu(R) \geq 2^{-\Theta(\sqrt{r} \log \frac{1}{\delta})}$ so that
$\mathop{\mathbb{E}}_{(x,y) \sim R}[ f(x,y) ] \geq 1- \delta$.
\end{theorem}
In particular one can use Theorem~\ref{thm:FindingAlmostMonochromaticRectangle} to find a rectangle $R$ of size $|R| \geq 2^{-\Theta(\sqrt{r} \log r)}|X \times Y|$ which has a $(1-\frac{1}{8r})$-fraction of 1-entries (assuming by symmetry
that at least half of the entries of $f$ are 1).
By arguments of Gavinsky and Lovett~\cite{EnRouteToLogRankConj-Gavinsky-Lovett-ICALP2014} such an almost
monochromatic rectangle always contains a sub-rectangle $R' \subseteq R$ with $|R'| \geq \frac{1}{8}|R|$
that is \emph{fully} monochromatic.
The guarantee of having large monochromatic rectangles in any sub-matrix can then be turned into a
protocol using arguments of Nisan and Wigderson:
\begin{theorem}[\cite{RankVsCommunicationComplexity-NisanWigderson95}] \label{thm:NisanWigdersonProtocoll}
Assume that any rank-$r$ function $f : X \times Y \to \{ \pm 1\}$ has
a monochromatic rectangle of size $2^{-c(r)}$. Then any Boolean function $g$
has
\[
CC^{\textrm{det}}(g) \leq O(\log^2 \textrm{rank}(g)) + \sum_{i=0}^{\log \textrm{rank}(g)} O(c(\textrm{rank}(g)/2^i)).
\]
\end{theorem}
In particular, we will apply Theorem~\ref{thm:NisanWigdersonProtocoll} with $c(r) = \Theta(\sqrt{r} \log(r))$
and obtain a protocol of cost $O(\sqrt{r} \log r)$ for any rank $r$ function.
We want to emphasize that the whole construction only has a $\textrm{polylog}(r)$
overhead; that is, the log-rank conjecture is actually \emph{equivalent} to being able to find rectangles
of size $2^{-\textrm{polylog}(r)}|X \times Y|$ that have at least a $1-\frac{1}{8r}$ fraction of entries $+1$ (or $-1$, respectively).
The reader can find a more detailed explanation of Theorem~\ref{thm:NisanWigdersonProtocoll} in Lovett's paper~\cite{CommunicationBoundedByRootRank-Lovett-STOC2014}.
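As a quick numerical sanity check of this last step (constants illustrative, our own), plugging $c(r)=\sqrt{r}\log r$ into the sum of Theorem~\ref{thm:NisanWigdersonProtocoll} shows that the first term dominates, since $\sqrt{r/2^i}$ decays geometrically:

```python
import math

def c(r):
    # the monochromatic-rectangle cost c(r) = sqrt(r) * log r from Lovett's bound
    return math.sqrt(r) * math.log2(r)

for k in range(4, 21):          # r = 2^k
    r = 2 ** k
    total = sum(c(r / 2 ** i) for i in range(k))   # terms down to r/2^{k-1} = 2
    # geometric decay of sqrt(r/2^i) keeps the whole sum within a
    # constant factor (at most 1/(1 - 2^{-1/2}) < 4) of the first term
    assert total <= 4 * c(r)

print("sum / first term at r = 2^20:", round(total / c(2 ** 20), 2))
```

So the recursion contributes only a constant-factor overhead on top of $c(\textrm{rank}(g))$, which is how the protocol cost $O(\sqrt{r}\log r)$ above falls out.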
\section{Proof of the main theorem}
In this section, we want to reprove Lovett's main technical result, Theorem~\ref{thm:FindingAlmostMonochromaticRectangle}.
Fix a matrix $M \in \{ \pm 1\}^{X \times Y}$ and denote its rank by $r$.
First, what does it actually mean that the matrix has rank $r$? By definition it means that there are
$r$-dimensional vectors $u_x,v_y$ for all $x \in X$ and $y \in Y$ so that $\left<u_x,v_y\right> = M_{xy}$. But what can we actually
say about the \emph{length} of those vectors?
To quote Linial, Mendelson, Schechtman and Shraibman, it is ``well known to Banach space theorists''
that length $r^{1/4}$ suffices
(see \cite{Complexity-measures-of-sign-matrices-LMSS-Combinatorica2007}, Lemma 4.2).
For the case that the Banach space knowledge of the reader got a bit rusty, we include
a more or less self-contained proof. In the exposition we follow closely \cite{FawziGouveiaParriloRobinsonThomas-PSD-Rank-Arxiv14}.
\begin{lemma}
Any rank-$r$ matrix $M \in \{ \pm 1\}^{X \times Y}$
has a factorization $M_{xy} = \left<u_x,v_y\right>$
so that $u_x,v_y \in \setR^r$
are vectors with $\|u_x\|_2,\|v_y\|_2 \leq r^{1/4}$ for $x \in X$, $y \in Y$.
\end{lemma}
\begin{proof}
First of all, by the definition of rank, there are \emph{some} vectors $u_x,v_y \in \setR^r$ so that
$\left<u_x,v_y\right> = M_{xy}$ with $\textrm{span}\{ u_x : x \in X\} = \setR^r$ --- just that we have no a priori guarantee on their length.
Observe that this choice of vectors is far from being unique. For example
we could choose any regular matrix $T \in \setR^{r \times r}$ and rescale
$u_x' := Tu_x$ and $v_y'=(T^{-1})^Tv_y$. The inner product would remain invariant
as $\big<u_x',v_y'\big> = u_x^TT^T(T^{-1})^Tv_y = u_x^Tv_y = M_{xy}$.
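This invariance is easy to confirm with a toy example (matrix and vectors of our own choosing):

```python
from fractions import Fraction as F

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

T = [[F(2), F(1)], [F(1), F(1)]]           # invertible, det = 1
Tinv = [[F(1), F(-1)], [F(-1), F(2)]]      # T^{-1}
TinvT = [list(col) for col in zip(*Tinv)]  # (T^{-1})^T

# <T u, (T^{-1})^T v> = u^T T^T (T^{-1})^T v = <u, v> for any u, v
for u, v in [([F(3), F(-2)], [F(1), F(5)]), ([F(0), F(7)], [F(-4), F(2)])]:
    assert dot(matvec(T, u), matvec(TinvT, v)) == dot(u, v)
```

Because of this freedom, the factorization can be reshaped at will, and the whole job of the lemma is to pick the $T$ (via John's theorem below) that balances the lengths on both sides.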
To find a suitable linear map $T$, we will make use of \emph{John's Theorem} (\cite{JohnsTheorem1948}, see also the excellent survey of \cite{IntroToModernConvexGeometry-Ball97}):
\begin{theorem}[John '48]
For any full-dimensional symmetric convex set $K \subseteq \setR^r$ and any ellipsoid $E \subseteq \setR^r$
that is centered at the origin, there exists an invertible linear map $T$ so that $E \subseteq T(K) \subseteq \sqrt{r} E$.
\end{theorem}
We want to apply John's Theorem to $K = \textrm{conv}\{ \pm u_x \mid x \in X\}$ (which indeed is a symmetric convex set)
and the ellipsoid $E := r^{-1/4}B$ with $B := \{ x \in \setR^r \mid \|x\|_2 \leq 1\}$ being the unit ball.
First, John's Theorem provides us with a linear map $T$ so that $r^{-1/4}B \subseteq \textrm{conv}\{ \pm Tu_x : x \in X\} \subseteq r^{1/4}B$.
Now, we can rescale the vectors by letting $u_x' := Tu_x$ and $v_y' := (T^{-1})^Tv_y$.
For the sake of a simpler notation, let us start all over and assume that the original vectors $u_x$ and $v_y$
satisfied $r^{-1/4}B \subseteq K \subseteq r^{1/4}B$ for $K = \textrm{conv}\{ \pm u_x \mid x \in X\}$ from the beginning.
Then by this assumption we immediately see that $\|u_x\|_2 \leq r^{1/4}$
and it just remains to argue that also $\|v_y\|_2 \leq r^{1/4}$ for a fixed $y \in Y$.
To see this, take the vector $w := \frac{v_y}{r^{1/4}\|v_y\|_2}$ and observe that $w \in r^{-1/4}B$
and hence $w \in K$. By standard linear optimization reasoning, there must be a \emph{vertex}
$\pm u_x$ of $K$ so that $\left|\left<u_x,v_y\right>\right| \geq \left|\left<w,v_y\right>\right|$.
\begin{center}
\psset{unit=0.8cm}
\begin{pspicture}(-4,-2.0)(4,2.5)
\pscircle[linewidth=1pt](0,0){2.7}
\pspolygon[fillstyle=solid,fillcolor=lightgray](2,1.1)(1.1,-1.1)(-2,-1.1)(-1.1,1.1)
\pscircle[linewidth=1pt,fillstyle=vlines,hatchcolor=gray](0,0){1}
\cnode*(0,0){2.5pt}{origin} \nput[labelsep=2pt]{90}{origin}{$\bm{0}$}
\cnode*(2,1.1){2.5pt}{u}
\cnode*(-2,-1.1){2.5pt}{u2}
\nput[labelsep=2pt]{110}{u}{$u_x$}
\rput[l](-1.2,1.3){$K$}
\cnode*(3,0){2.5pt}{v} \nput[labelsep=2pt]{0}{v}{$v_y$}
\ncline[linestyle=dashed]{origin}{v}
\pnode(0,-1){A} \pnode(0,-1.5){B} \ncline[arrowsize=5pt]{->}{B}{A} \nput{-90}{B}{$r^{-1/4}B$}
\rput[r](-2.8,0){$r^{1/4}B$}
\cnode*(1,0){2.5pt}{w} \nput[labelsep=2pt]{45}{w}{$w$}
\end{pspicture}
\end{center}
This implies that
\[
r^{-1/4}\|v_y\|_2 = \big|\big<w,v_y\big>\big| \leq \big|\big<u_x,v_y\big>\big| = 1
\]
and the claim is proven.
\end{proof}
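The rescaling step used in the proof is easy to sanity-check numerically. The following sketch (with an arbitrary random factorization, not the John-position one) verifies that replacing $u_x' := Tu_x$ and $v_y' := (T^{-1})^Tv_y$ leaves every inner product, and hence the matrix, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
r, nx, ny = 4, 6, 5

# An arbitrary rank-r factorization M_{xy} = <u_x, v_y>; the actual matrix
# values are irrelevant here -- we only check invariance under rescaling.
U = rng.standard_normal((nx, r))   # rows are the vectors u_x
V = rng.standard_normal((ny, r))   # rows are the vectors v_y
M = U @ V.T

# Any invertible T gives an equally valid factorization:
# u_x' = T u_x and v_y' = (T^{-1})^T v_y.
T = rng.standard_normal((r, r)) + 5 * np.eye(r)   # well-conditioned, hence invertible
U2 = U @ T.T                  # rows: T u_x
V2 = V @ np.linalg.inv(T)     # rows: (T^{-1})^T v_y

# All inner products are unchanged: U2 @ V2.T = U T^T T^{-T} V^T = U V^T.
assert np.allclose(U2 @ V2.T, M)
```
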
\subsection*{The hyperplane rounding argument}
Eventually we are ready to prove Lovett's claim. Let $M \in \{ \pm 1\}^{X \times Y}$
be the matrix with rank-$r$ factorization $M_{xy} = \big<u_x,v_y\big>$ so that $\|u_x\|_2,\|v_y\|_2 \leq r^{1/4}$.
We abbreviate $Q_i = \{ (x,y) \in X \times Y : M_{xy} = i\}$ as the $i$-entries of the matrix.
We assume that we are given a measure $\mu$ on $X \times Y$ with $\mu(Q_1) \geq \delta$ for some $\delta > 0$, and we aim at
finding a large rectangle that contains mostly 1-entries.
It will be convenient to normalize the vectors to $\bar{u}_x := \frac{u_x}{\|u_x\|_2}$ and $\bar{v}_y := \frac{v_y}{\|v_y\|_2}$.
We can make the following observation about their inner products:
\[
\left<\bar{u}_x,\bar{v}_y\right> = \frac{\left<u_x,v_y\right>}{\|u_x\|_2 \cdot \|v_y\|_2}
\; \begin{cases} \geq \frac{1}{\sqrt{r}} & \textrm{if } M_{xy} = 1 \\ \leq -\frac{1}{\sqrt{r}} & \textrm{if } M_{xy} = -1
\end{cases}
\]
In other words, the \emph{angle} between $u_x$ and $v_y$ for a 1-entry $(x,y)$ is a tiny bit smaller than the
angle for a $-1$-entry. It is a standard argument, used many times e.g.\ in approximation algorithms,
that if we take a \emph{random hyperplane}, then the chance that a pair of vectors ends up on the same side is larger
if their angle is smaller.
Formally, let $N^r(0,1)$ be the distribution of an \emph{$r$-dimensional standard Gaussian random variable}.
Then in a slightly modified form, \emph{Sheppard's Formula} tells us:
\begin{lemma}
For any unit vectors $u,v \in \setR^r$ with $\left<u,v\right> = \alpha$ we have
\[
\Pr_{g \sim N^r(0,1)}[\left<g,u\right> \geq 0\textrm{ and }\left<g,v\right> \geq 0]
= \frac{1}{2}\Big(1 - \frac{\textrm{arccos}(\alpha)}{\pi}\Big)
\]
\end{lemma}
In particular, the quantity $\frac{1}{2}(1-\frac{1}{\pi} \textrm{arccos}(\alpha))$ is monotonically increasing
in $\alpha$ with $\frac{1}{2}(1-\frac{1}{\pi} \textrm{arccos}(\alpha)) \geq \frac{1}{4}$ for all $\alpha \geq 0$
and $\frac{1}{2}(1-\frac{1}{\pi} \textrm{arccos}(\alpha)) \leq \frac{1}{4} - \frac{|\alpha|}{7}$
for $\alpha \leq 0$.
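Both Sheppard's formula and the elementary estimates just stated can be confirmed with a quick Monte Carlo experiment; the sketch below uses an arbitrary dimension $r$ and sample size purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
r, n_samples = 3, 200_000

# Two unit vectors with prescribed inner product alpha.
alpha = -0.6
u = np.array([1.0, 0.0, 0.0])
v = np.array([alpha, np.sqrt(1 - alpha**2), 0.0])

# Empirical probability that a random Gaussian g has <g,u> >= 0 and <g,v> >= 0.
G = rng.standard_normal((n_samples, r))
empirical = np.mean((G @ u >= 0) & (G @ v >= 0))

sheppard = 0.5 * (1 - np.arccos(alpha) / np.pi)   # closed form
assert abs(empirical - sheppard) < 0.01

# The elementary estimates used in the text:
for a in np.linspace(-1, 1, 201):
    p = 0.5 * (1 - np.arccos(a) / np.pi)
    if a >= 0:
        assert p >= 0.25 - 1e-12                  # at least 1/4 for alpha >= 0
    else:
        assert p <= 0.25 - abs(a) / 7 + 1e-12     # at most 1/4 - |alpha|/7
```
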
Next, we want to take $T := 7\ln(\frac{2}{\delta}) \cdot \sqrt{r}$ many random hyperplanes and define $R$ as those vectors $u_x$
and $v_y$ that always ended up on the positive side. Formally, we will take
independent random Gaussian vectors $g_1,\ldots,g_T \sim N^r(0,1)$ and define
rectangles
\[
R_t := \{ x \in X : \left<\bar{u}_x,g_t\right> \geq 0 \} \times \{ y \in Y : \left<\bar{v}_y,g_t\right> \geq 0\}
\]
and $R := R_1 \cap \ldots \cap R_T$. It remains to argue that in expectation $R$ satisfies the claim of Theorem~\ref{thm:FindingAlmostMonochromaticRectangle}.
First, using Sheppard's Formula, we know that for an entry
$(x,y) \in Q_1$ one has $\Pr[(x,y) \in R_t] \geq \frac{1}{4}$, while for an entry
$(x,y) \in Q_{-1}$ one has $\Pr[(x,y) \in R_t] \leq \frac{1}{4} - \frac{1}{7\sqrt{r}}$.
Since we take the Gaussians independently,
\[
\mathop{\mathbb{E}}[\mu(R \cap Q_1)] \geq \mu(Q_1) \cdot \left(\frac{1}{4}\right)^T
\;\textrm{ and }\quad \mathop{\mathbb{E}}[\mu(R \cap Q_{-1})] \leq \mu(Q_{-1}) \cdot \left(\frac{1}{4} - \frac{1}{7\sqrt{r}}\right)^T
\]
In particular, the ratio of the two expectations can be bounded as
\[
\frac{\mathop{\mathbb{E}}[\mu(R \cap Q_{-1})]}{\mathop{\mathbb{E}}[\mu(R \cap Q_1)]}
\leq \frac{ (1/4 - \frac{1}{7\sqrt{r}})^T}{\delta \cdot (1/4)^T }
= \frac{1}{\delta} \cdot \Big(1-\frac{4}{7\sqrt{r}}\Big)^T \leq \frac{1}{\delta} \exp\Big(- T \cdot \frac{4}{7\sqrt{r} } \Big)
\leq \frac{\delta}{2}
\]
for our choice of $T = 7\ln(\frac{2}{\delta}) \cdot \sqrt{r}$.
On the other hand, $\mathop{\mathbb{E}}[\mu(R)] \geq \mathop{\mathbb{E}}[\mu(R \cap Q_1)] \geq \delta \cdot (\frac{1}{4})^T \geq 2^{-\Theta(\sqrt{r} \log \frac{1}{\delta})}$.
We can combine those estimates and consider a single expectation
\[
\mathop{\mathbb{E}}\Big[\mu(R \cap Q_1) - \frac{1}{\delta} \cdot \mu(R \cap Q_{-1})\Big] \geq 2^{-\Theta(\sqrt{r} \log \frac{1}{\delta})}
\]
Taking any realization of $R$ for which the bracketed quantity is at least its expectation, we must in particular have
$\mu(R) \geq 2^{-\Theta(\sqrt{r} \log \frac{1}{\delta})}$ and $\mu(R \cap Q_{-1}) \leq \delta \cdot \mu(R)$.
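To see the rounding mechanics in action, here is a deliberately simple toy sketch: a rank-one sign matrix, for which the normalized vectors $\bar{u}_x, \bar{v}_y$ are just signed copies of a single unit vector, so one hyperplane already produces a fully monochromatic rectangle (the proof above uses $T$ hyperplanes and general rank $r$; the sign patterns below are illustrative).

```python
import numpy as np

rng = np.random.default_rng(7)

# Rank-one sign matrix M_{xy} = s_x * t_y, factored by the one-dimensional
# unit "vectors" u_x = s_x and v_y = t_y.
s = np.array([+1, -1, +1, +1, -1])   # illustrative row signs
t = np.array([-1, +1, +1, -1])       # illustrative column signs
M = np.outer(s, t)

# One random hyperplane (a one-dimensional standard Gaussian) and the
# induced rectangle: keep exactly the vectors on the positive side.
g = rng.standard_normal()
rows = np.where(s * g >= 0)[0]
cols = np.where(t * g >= 0)[0]
R = M[np.ix_(rows, cols)]

# Every surviving pair (x, y) has s_x and t_y pointing to the same side
# as g, so s_x * t_y = +1: the rectangle is monochromatic.
assert R.size > 0 and np.all(R == 1)
```
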
\section{Remarks}
We want to conclude this paper with a couple of remarks:
\begin{itemize}
\item Instead of taking $T$ random Gaussians, one can also find
an almost monochromatic rectangle using a \emph{single} Gaussian.
Sample $g \sim N^r(0,1)$ and define
\[
R := \{ x \in X : \left<\bar{u}_x,g\right> \geq s \} \times \{ y \in Y : \left<\bar{v}_y,g\right> \geq s\},
\]
where $s = \Theta(r^{1/4}\sqrt{\log r})$ is a suitable threshold.
The rectangle $R$ will satisfy the same guarantee as before (up to constant factors). Geometrically, this approach might
be more intuitive, as it means that one can take all vectors in a \emph{random cap} of the unit ball.
\item The approach can also be used to get the discrepancy lower bound $\textrm{disc}(f) \geq \Omega(1/\sqrt{\textrm{rank}(f)})$
as a corollary. Take any measure $\mu$ and assume that $\mu(Q_1) \geq 1/2$.
Then sample a single Gaussian $g \sim N^r(0,1)$ and let $R := \{ x \in X : \left<g,u_x\right> \geq 0 \} \times \{ y \in Y : \left<g,v_y\right> \geq 0\}$.
Then
\[
\mathop{\mathbb{E}}[\mu(R \cap Q_1) - \mu(R \cap Q_{-1})] \geq \frac{1}{2} \cdot \frac{1}{4} - \frac{1}{2} \cdot \Big(\frac{1}{4} - \frac{1}{7\sqrt{r}}\Big)= \frac{1}{14\sqrt{r}}.
\]
\item Note that John's theorem and also the bounds on $\|u_x\|_2,\|v_y\|_2$ are tight in general. However, it seems plausible that
one can modify the hyperplane rounding in order to improve the bounds, possibly depending on the geometric arrangement of the vectors.
\item We do hesitate to call our proof ``new'', since all its ingredients have been contained already
either in Lovett's paper~\cite{CommunicationBoundedByRootRank-Lovett-STOC2014} or in the paper of Linial et al.~\cite{Complexity-measures-of-sign-matrices-LMSS-Combinatorica2007}. For example the bound on the factorization norm~\cite{Complexity-measures-of-sign-matrices-LMSS-Combinatorica2007} is based on John's theorem; the hyperplane rounding is also used to prove
Grothendieck's inequality that is another ingredient of \cite{Complexity-measures-of-sign-matrices-LMSS-Combinatorica2007}.
Also \cite{CommunicationBoundedByRootRank-Lovett-STOC2014} used an amplification procedure as we did by sampling $T$ hyperplanes.
\item There are indeed rank-$r$ sign matrices $M \in \{\pm 1\}^{X \times Y}$ that have $2^{\Omega(r)}$ many different rows and columns. The construction is due to Kotlov and Lov\'asz~\cite{RankOfGraphs-KotlovLovasz96}.
An explicit construction is as follows:
Take $8r$ symbols, partitioned into four disjoint sets $A \dot{\cup} A' \dot{\cup} B \dot{\cup} B'$ with
$|A| = |A'| = |B| = |B'| = 2r$.
Then define set families
\begin{eqnarray*}
\mathcal{A} &:=& \{ a \subseteq A \cup B : |a \cap A| = r\textrm{ and }|a \cap B| = 1 \}
\cup \{ a \subseteq A' \cup B' : |a \cap A'| = r\textrm{ and }|a \cap B'| = 1\} \\
\mathcal{B} &:=& \{ b \subseteq A' \cup B : |b \cap B| = r\textrm{ and }|b \cap A'| = 1\} \cup \{ b \subseteq A \cup B' : |b \cap B'| = r\textrm{ and }|b \cap A| = 1\}
\end{eqnarray*}
It is not difficult to check that for all $a \in \mathcal{A}$ and $b \in \mathcal{B}$ one has $|a \cap b| \in \{ 0,1\}$. Moreover, for distinct $a,a' \in \mathcal{A}$ there is
always a $b \in \mathcal{B}$ with $|a \cap b| = 0$ and $|a' \cap b| = 1$, and the analogous statement holds for distinct $b,b' \in \mathcal{B}$.
The matrix $M \in \{ \pm 1\}^{\mathcal{A} \times \mathcal{B}}$ defined by
$M_{ab} = \left<(\sqrt{2} \cdot \bm{1}_a,-1),(\sqrt{2} \cdot \bm{1}_b,1)\right> = 2|a \cap b| - 1$ has then rank at most $8r+1$
and $|\mathcal{A}| = |\mathcal{B}| = 2^{\Theta(r)}$ many different rows and columns.
\end{itemize}
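The Kotlov--Lov\'asz construction above is small enough to verify directly. The sketch below builds the families for $r=2$, taking the entries as $2|a \cap b| - 1$ (which places them in $\{\pm 1\}$ and corresponds to scaling the indicator vectors by $\sqrt{2}$ in the inner-product representation), and checks the intersection property, the rank bound $8r+1$, and that all rows are distinct.

```python
import itertools
import numpy as np

r = 2
# 8r symbols, split into four disjoint alphabets of size 2r each.
A  = range(0, 2 * r)
Ap = range(2 * r, 4 * r)
B  = range(4 * r, 6 * r)
Bp = range(6 * r, 8 * r)

def family(big, small):
    # all sets consisting of an r-subset of `big` plus one element of `small`
    return [frozenset(c) | {x}
            for c in itertools.combinations(big, r) for x in small]

cal_A = family(A, B) + family(Ap, Bp)
cal_B = family(B, Ap) + family(Bp, A)

# Intersection property: |a ∩ b| is always 0 or 1.
assert all(len(a & b) in (0, 1) for a in cal_A for b in cal_B)

# Sign matrix M_ab = 2|a ∩ b| - 1; it factors through vectors in R^{8r+1},
# so its rank is at most 8r + 1.
M = np.array([[2 * len(a & b) - 1 for b in cal_B] for a in cal_A])
assert set(np.unique(M)) == {-1, 1}
assert np.linalg.matrix_rank(M) <= 8 * r + 1
assert len({tuple(row) for row in M}) == len(cal_A)   # all rows distinct
```
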
\paragraph{Acknowledgments.}
The author is very grateful to Paul Beame, James Lee and Anup Rao for helpful discussions.
\bibliographystyle{alpha}
% ----------------------------------------------------------------------
% arXiv:1409.6366 (2014-09-24): "A direct proof for Lovett's bound on the
% communication complexity of low rank matrices"
% Subjects: Computational Complexity (cs.CC); Discrete Mathematics (cs.DM)
% https://arxiv.org/abs/1409.6366
% Abstract: The log-rank conjecture in communication complexity suggests
% that the deterministic communication complexity of any Boolean rank-r
% function is bounded by polylog(r). Recently, major progress was made by
% Lovett who proved that the communication complexity is bounded by
% O(r^{1/2} * log r). Lovett's proof is based on known estimates on the
% discrepancy of low-rank matrices. We give a simple, direct proof based
% on a hyperplane rounding argument that in our opinion sheds more light
% on the reason why a root factor suffices and what is necessary to
% improve on this factor.
% ----------------------------------------------------------------------
% ----------------------------------------------------------------------
% arXiv:2206.01795: "Robust Topological Inference in the Presence of Outliers"
% https://arxiv.org/abs/2206.01795
% Abstract: The distance function to a compact set plays a crucial role in
% the paradigm of topological data analysis. In particular, the sublevel
% sets of the distance function are used in the computation of persistent
% homology -- a backbone of the topological data analysis pipeline. Despite
% its stability to perturbations in the Hausdorff distance, persistent
% homology is highly sensitive to outliers. In this work, we develop a
% framework of statistical inference for persistent homology in the
% presence of outliers. Drawing inspiration from recent developments in
% robust statistics, we propose a median-of-means variant of the distance
% function (MoM Dist), and establish its statistical properties. In
% particular, we show that, even in the presence of outliers, the sublevel
% filtrations and weighted filtrations induced by MoM Dist are both
% consistent estimators of the true underlying population counterpart, and
% their rates of convergence in the bottleneck metric are controlled by
% the fraction of outliers in the data. Finally, we demonstrate the
% advantages of the proposed methodology through simulations and
% applications.
% ----------------------------------------------------------------------
BKS is supported by National Science Foundation (NSF) CAREER Award DMS-1945396. SK is partially supported by JSPS KAKENHI Grant Number 21H03403. SV, KF, and SK were supported by JST, CREST Grant Number JPMJCR15D3, Japan.
\section{Conclusion \& Discussion}
\label{sec:discussion}
In this paper, we introduce a methodology for constructing filtrations which are computationally efficient, provably robust, and statistically consistent even in the presence of outliers. To our knowledge, our results are the first of this type.
To elaborate, we introduced \md{}, $\dnq$, as a computationally efficient and outlier-robust variant of the distance function based on the median-of-means principle, and established some of its theoretical properties. In particular, when the samples contain outliers in the adversarial contamination setting, we (i) showed that the $\dnq$-weighted filtrations are statistically consistent estimators of the true (uncontaminated) population counterpart, (ii) characterized their convergence rate in the bottleneck metric, and (iii) provided uniform confidence bands in the space of persistence diagrams. Furthermore, we used an empirical influence analysis framework to quantify the robustness of the $\dnq$-filtrations, and provided a framework for selecting the parameter $Q$.
Topological inference in the presence of outliers is a topic which has received considerable attention in recent years, and with good reason. We would like to highlight that the objective in this paper has been to develop a framework of topological inference in which the population target is the persistence diagram $\dgm\pa{\bbv[\Xb]}$. Therefore, the proposed methodology disregards, to a large extent, the distribution of mass on the support. As a future direction, we would like to explore a framework of inference which incorporates information from both the geometry of the underlying space and the structure of the probability measure generating the data. As noted in \citet[Section~5]{anai2019dtm}, their results follow only from a few simple properties of the distance-to-measure. We build on their foundation to provide some generalizations which we hope will be useful in the analysis of other estimators using this framework.
\section{Main Results}
\label{sec:main}
In the following, we present a MoM estimator to obtain outlier-robust persistence diagrams in Section~\ref{sec:proposal}, and its statistical properties along with the influence analysis are presented in Sections~\ref{sec:statistical}--\ref{sec:influence}. In Section~\ref{sec:lepski} we present a method for adaptively calibrating the MoM tuning parameter using a data-driven procedure. The proofs for all results are deferred to Section~\ref{sec:proofs}.
\subsection{Empirical distance function using the Median-of-Means principle}
\label{sec:proposal}
Let $\Xn = \pb{\Xv_1, \Xv_2, \dots, \Xv_n} \subset \R^d$ be a sample of $n$ observations. We assume that the samples are obtained under sampling setting \samp{}. We emphasize that this setting encompasses the following~scenarios:
\begin{enumerate}[label=(\alph*)]
\item The samples $\Xn$ are obtained i.i.d.~from $\pr \in \mathcal{P}(\Xb, a, b)$ for compact $\Xb \subset \R^d$.
\item The samples are obtained from a distribution $\pr = (1-\pi)\pr_{signal} + \pi\pr_{noise}$, where $\pi \in (0, 1/2)$ and $\pr_{signal} \in \mathcal{P}(\Xb, a, b)$.
\item $\qty{\widetilde{\Xv}_1,\widetilde{\Xv}_2, \dots, \widetilde{\Xv}_n}$ is first sampled i.i.d.~from $\pr \in \mathcal{P}(\Xb, a, b)$, and then handed over to an adversary. The adversary is then free to examine the $n$ points, and replace any $m < n/2$ of them with some points of their choice. The modified dataset, $\Xn$, is then shuffled and handed to the topologist for inference, who has no prior knowledge of the original $\qty{\widetilde{\Xv}_1,\widetilde{\Xv}_2, \dots, \widetilde{\Xv}_n}$.
\end{enumerate}
The central objective is to derive a statistically consistent and computationally efficient estimator of $\dgm\pa{\bbv[\Xb]}$ which is robust to the misspecification scenarios detailed above, using the samples $\Xn$. To this end, the MoM Distance (\md{}) function $\dnq$ is defined as follows.
\begin{definition}[\md{}]
Given a collection of points $\Xn \subset \R^d$ and $1 \le Q \le n$, let $\pB{S_1, S_2, \dots, S_Q}$ be a partition of $\Xn$ into $Q$ disjoint blocks, such that each subset $S_q \subset \Xn$ consists of $|{S_q}| = \floor{n/Q}$ samples\footnote{Without loss of generality, we may assume that $n$ is divisible by $Q$, so that $n/Q \in \Z_+$.}. The MoM distance function $\dnq: \R^d \rightarrow \R_{\ge 0}$ is defined to be
\eq{
\dnq(\yv) \defeq \med\qty\Big{ \dsf_{n, S_q}(\yv) : q \in [Q] } = \med\qty\Big{ \inf_{\xv \in S_q} \norm{\xv -\yv} : q \in [Q]}.
\label{eq:momdist}
}
The proposed outlier robust persistence diagram $\dgm\pa{ \bbv[{\Xn, \dnq}] }$ is then obtained using $\dnq$-weighted filtration $V[\Xn, \dnq]$.
\label{def:mom}
\end{definition}
Note that we recover the usual empirical distance function, i.e., $\dsf_{n, 1} \equiv \dsf_n$ when $Q = 1$.
\begin{remark} For each block $S_q$, the distance function $\dsf_{n,S_q} \in L_\infty({\R^d})$ can be viewed as the Kuratowski embedding of $S_q$. The most natural generalization of the multivariate median-of-means estimators proposed by \cite{minsker2015geometric} and \cite{lerasle2019monk} would suggest the following estimator as a candidate for \md{}:
\eq{
\widetilde{\dsf}_{n,Q} = \arginf_{f \in L_{\infty}(\R^d)} \sum_{q=1}^{Q} \norminf{f - \dsf_{n,S_q}},\nonumber
}
where the median under consideration corresponds to the geometric median in $L_{\infty}(\R^d)$. Although $\widetilde{\dsf}_{n,Q}$ has its appeal from a theoretical perspective, the computation of $\widetilde{\dsf}_{n,Q}$ involves an infinite-dimensional optimization problem, making it infeasible in practice. In contrast, the proposed estimator in Definition~\ref{def:mom}, is a pointwise median-of-means estimator with a tractable computational cost. This has the promise of being highly modular, and widely applicable in many practical settings. The technical difficulty arises in showing that the pointwise estimator $\dnq$ achieves an exponential concentration bound around $\dx$ in the $L_{\infty}(\R^d)$ metric.
\end{remark}
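The pointwise estimator of Definition~\ref{def:mom} is straightforward to implement with one $k$-d tree per block. The following sketch (with an illustrative circle-plus-outliers sample; the block count $Q$ and all sizes are arbitrary choices, not prescriptions) contrasts \md{} with the plain empirical distance function at an outlier location.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

def momdist(blocks, y):
    """MoM distance at query points y: median over blocks of the
    distance from y to the nearest sample in that block."""
    d = np.stack([tree.query(y)[0] for tree in blocks])
    return np.median(d, axis=0)

# n points, of which m are adversarial outliers far from the unit circle.
n, m, Q = 300, 10, 25
theta = rng.uniform(0, 2 * np.pi, n - m)
signal = np.c_[np.cos(theta), np.sin(theta)]
outliers = rng.normal(10.0, 0.1, size=(m, 2))     # cluster near (10, 10)
X = rng.permutation(np.r_[signal, outliers])

# Partition the shuffled sample into Q blocks, one k-d tree per block.
blocks = [cKDTree(S) for S in np.array_split(X, Q)]

query = np.array([[10.0, 10.0]])                  # at the outlier cluster
plain = cKDTree(X).query(query)[0]                # empirical distance function
robust = momdist(blocks, query)

# The plain distance function is fooled by the outliers; MoM Dist is not:
# at most m of the Q blocks contain an outlier, so the median block
# distance is roughly the distance to the circle (about 13.1 here).
assert plain[0] < 1.0
assert robust[0] > 5.0
```
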
Similar to the proposed methodology in Definition~\ref{def:mom}, the procedure of partitioning the data $\Xn$ into smaller subsets, and then aggregating them as an estimator of persistent homology has been shown to satisfy several favorable properties by \cite{solomon2021geometry} and \cite{gomez2021curvature}, albeit in a different context. We argue that a similar principle, in our setting, also leads to provably robust estimators.
\textbf{Computational considerations.} Given a weighting function $f$, the first step in constructing the $f$-weighted filtration begins with estimating the weights associated with the sample points, i.e., $w_i = f(\Xv_i)$ for all $i \in [n]$. After this step, the computational complexity of constructing the $f$-weighted filtration $V[\Xn, f]$ is independent of the choice of the weighting function $f$. Table~\ref{tab:comparison} compares the computational complexity for three robust filtrations: (i) the \md{} $\dnq$, (ii) the distance-to-measure $\delta_{n,k}$ (DTM, \citealp{anai2019dtm}), and (iii) the robust kernel density estimator $\fns$ (RKDE, \citealp{vishwanath2020robust}). Given a test point $\xv \in \R^d$, the distance from $\xv$ to each block $S_q$ is optimally computed using a $k$-d tree. The pre-processing step, which involves the construction of the $k$-d tree \citep{wald2006building}, typically has time complexity $O(\abs{S_q} \log \abs{S_q})$ for each block $q \in [Q]$ with $\abs{S_q} = n/Q$. Thereafter $O(\log \abs{S_q})$ time is needed for a single query \citep[Chapter~10]{cormen2009introduction}. The results for each block $q\in[Q]$ are then aggregated to compute the median, which takes an additional $O(Q)$ time per query. This results in a total evaluation time of $O(n \cdot (Q + \log n/Q))$ for $n$ samples.
The distance-to-measure with parameter $m$ requires the evaluation of the distance to the $k$th nearest neighbor for $k=\floor{mn}$. This is, again, optimally computed using a $k$-d tree; however, unlike $\dnq$, the $k$-d tree needs to be constructed for all $n$ samples, resulting in a time complexity of $O(n\log n)$ for pre-processing. Thereafter, the evaluation time takes $O(k\log n)$ for each query point, resulting in $O(n \cdot k\log n)$ for evaluation over $n$ samples. The robust KDE $\fns$, on the other hand, requires $O(n^2)$ time to compute the Gram matrix in each iteration of the KIRWLS algorithm, and takes $O(n^2\ell)$ for $\ell$ outer loops. After this pre-processing step, the coefficients of $\fns$ may be used to evaluate each query in $O(n)$ time. The three weighted filtrations $V[\Xn, \dnq]$, $V[\Xn, \delta_{n, k}]$ and $V[\Xn, \fns]$ are illustrated in Figure~\ref{fig:robust-examples}.
\begin{table}\caption{Comparison of computational complexity for robust weighted filtrations.}
\centering
\resizebox{\textwidth}{!}{\begin{tabular}{llll}
\toprule
Method & Pre-processing & Evaluation & Provably robust? \\
\midrule
$V[\Xn, \dnq]$ (\md{}--filtration) & $O\qty\Big( \f nQ \log(n/Q))$ & $O\qty\Big( n \cdot (Q + \log n/Q) )$ & Yes \\
$V[\Xn, \delta_{n, k}]$ \citep[DTM--filtration]{anai2019dtm} & $O( n \log n )$ & $O( kn \log n )$ & No \\
$V[\Xn, \fns]$ \citep[RKDE--filtration]{vishwanath2020robust} & $O(n^2 \ell)$ & $O(n^2)$ & No \\
\bottomrule
\end{tabular}}
\medskip
\scriptsize $n=\#$samples, $Q=\#$blocks, $k=\floor{mn}=$ DTM parameter, $\sigma=$RKDE bandwidth, and $\ell=\#$iterations of KIRWLS algorithm
\label{tab:comparison}
\end{table}
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/plots/v_momdist.pdf}
\caption{$V^t[\Xn, \dnq]$, \md{}}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/plots/v_dtm.pdf}
\caption{$V^t[\Xn, \delta_{n, k}]$, DTM}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/plots/v_rkde.pdf}
\caption{$V^t[\Xn, \fns]$, RKDE}
\end{subfigure}
\caption{Comparison of $V^t[\Xn, f]$ for the median filtration value $t=\median\pb{w_1, w_2, \dots, w_n}$ and $p=\infty$.}
\label{fig:robust-examples}
\end{figure}
We conclude this section with the following result, which establishes that \md{} is $1-$Lipschitz.
\begin{lemma}
Given samples $\Xn = \pb{\Xv_1, \Xv_2, \dots, \Xv_n}$ and $Q < n$,
\eq{
\abs{ \dnq(\xv) - \dnq(\yv) } \le \norm{ \xv - \yv }, \qq{for all $\xv, \yv \in \R^d$.}\nn
}
\label{lemma:lipschitz}
\end{lemma}
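The Lipschitz property is easy to probe numerically; the sketch below (with arbitrary random data) checks the inequality of Lemma~\ref{lemma:lipschitz} on sampled pairs of query points. This is of course a spot check, not a proof.

```python
import numpy as np

rng = np.random.default_rng(3)

def momdist(X, Q, y):
    # pointwise MoM distance: median over Q blocks of the min distance to y
    dists = [np.min(np.linalg.norm(S - y, axis=1)) for S in np.array_split(X, Q)]
    return np.median(dists)

X = rng.standard_normal((120, 2))     # arbitrary sample
Q = 8

# Each block's min-distance is 1-Lipschitz, and so is (a median of) order
# statistics of 1-Lipschitz functions; verify |dnq(x) - dnq(y)| <= ||x - y||.
for _ in range(500):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    lhs = abs(momdist(X, Q, x) - momdist(X, Q, y))
    assert lhs <= np.linalg.norm(x - y) + 1e-9
```
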
\subsection{Statistical properties of $\bbv[\dnq]$}
\label{sec:statistical}
We begin our analysis by characterizing the persistence diagrams obtained using the sublevel filtration of $\dnq$. The following result (proved in Section~\ref{proof:theorem:momdist-sublevel}), establishes that $\dgm\pa{\bbv[\dnq]}$ is a statistically consistent estimator of target population quantity $\dgm\pa{\bbv[\Xb]}$ under sampling setting~\samp{}, and establishes its rate of convergence in the $\Winf$ metric.
\begin{theorem}[Sublevel filtration]
Suppose $\pr \in \mathcal{P}(\bX, a, b)$ is a probability distribution with support~$\bX$ satisfying the $(a, b)-$standard condition, and $\Xn$ is obtained under sampling condition \samp{}. For $2m < Q < n$ and for all $\delta < e^{-(1+b)Q}$,
\eq{
\pr\qty\Bigg{ \Winf\qty\bigg( \dgm\pa{\bbv[\dnq]}, \dgm\pa{\bbv[\Xb]} ) \le \mathfrak{g}(n, Q, a, b) } \ge 1 - \delta,
\label{eq:mom-confidence-band}
}
where
\eq{
\mathfrak{g}(n, Q, a, b) = \qty\bigg( \frac{Q\log(n / Q)}{a n} + \frac{4Q \log(1/\delta)}{a(Q-2m)n} )^{1/b}.\nn
}
Furthermore, if the number of outliers grows with $n$ as $m = cn^\epsilon$ for some $c > 0$ and $\epsilon \in [0, 1)$, then
\eq{
\E\qty\Bigg[ \Winf\qty\bigg( \dgm\pa{\bbv[\dnq]}, \dgm\pa{\bbv[\Xb]} ) ] \lesssim \pa{\f{\log n}{n^{1-\e}}}^{1/b}.
\label{eq:rate-with-noise}
}
\label{theorem:momdist-sublevel}
\end{theorem}
\begin{remark}
The following salient observations can be made from Theorem~\ref{theorem:momdist-sublevel}.
\begin{enumerate}[label=\textup{\textbf{(\roman*)}}]
\item In addition to characterizing the uniform rate of convergence of $\dnq$, \eref{eq:mom-confidence-band} also provides a uniform confidence band for $\dgm\pa{\bbv[{\Xb}]}$ in the presence of outliers. The two terms appearing in $\mathfrak{g}(n, Q, a, b)$ may be interpreted as follows: The first term is similar to the term appearing in \citet[Theorem~2]{chazal2015convergence} with an effective sample size of $n/Q$ instead of $n$, which is a consequence of the Median-of-Means procedure. The second term incorporates the desired confidence level $\delta$ adaptive to the volume dimension $b>0$, with an effective sample size of $n/Q$. Notably, as the number of outliers $m$ increases, the number of blocks $Q$ must also increase; thereby widening the resulting confidence band.
\item The complex inter-dependence of the parameters $m, Q$ and $\delta$ in \eref{eq:mom-confidence-band} is simplified in \eref{eq:rate-with-noise}. In the absence of outliers, i.e., when $m=0$ and $Q=1$, we recover the same convergence rate as in \citet[Theorem~4]{chazal2015convergence},
\eq{
\E\qty\Bigg[ \Winf\qty\bigg( \dgm\pa{\bbv[\dnq]}, \dgm\pa{\bbv[\Xb]} ) ] \lesssim \pa{\f{\log n}{n}}^{1/b}.
\label{eq:dnq-rate}
}
Specifically, it becomes apparent that accommodating for more adverse noise conditions comes at the price of an attenuated rate of convergence.
\item The admissible confidence level $\delta$ for constructing the confidence band is implicitly dependent on the parameter $Q$. This phenomenon is unavoidable with estimators based on the median-of-means principle. We refer the reader to \citet[Section~2.4]{lugosi2019mean} for a comprehensive discussion on how robustness must come at the price of the confidence level $\delta$ being restricted.
\end{enumerate}
\end{remark}
The proof of Theorem~\ref{theorem:momdist-sublevel} relies on the following lemma, which allows us to control the deviation of a pointwise median-of-means estimator from its uncontaminated population counterpart in terms of a Binomial tail probability.
\begin{lemma}
Suppose $\pr \in \mathcal{P}(\bX)$ for $\bX \subset \R^d$ and $\Xn = \Xnm \cup \Ym$ is obtained under sampling condition \samp{} with $\Xnm$ observed i.i.d. from $\pr$. Let $\pr_n$ denote the empirical measure associated with $\Xn$ and for $2m < Q < n$, let $\pr_q$ be the empirical measure associated with the block $S_q$ for all $q \in [Q]$. Given a statistical functional $T : \mathcal{P}(\R^d) \rightarrow L_{\infty}(\R^d)$, let $T_Q(\pr_n) \in L_{\infty}(\R^d)$ be the pointwise MoM estimator given by
\eq{
T_Q(\pr_n)(\xv) = \med\qty\Big{ T(\pr_q)(\xv) : q \in [Q] }, \ \ \textup{ for all } \xv \in \R^d.\nn
}
Then, for $t > 0$
\eq{
\pr\qty\bigg({ \norminf{ T_Q(\pr_n) - T(\pr) } > t }) \le \pr\qty({ \sum_{q \in A} \xi_{q}(t; n, Q) > \f{Q-2m}{2} }),\nn
}
where $A = \qty{ q \in [Q] : S_q \cap \Y_m = \varnothing}$ are the indices for the blocks containing no outliers, and
\eq{
\xi_{q}(t; n, Q) \defeq \mathbbm{1}\qty\Big( \norminf{ T(\pr_q) - T(\pr) } > t ) \ \ \textup{ for all } q \in A. \nn
}
\label{lemma:mom}
\end{lemma}
The statement of Lemma~\ref{lemma:mom} holds for empirical processes arising from general classes of pointwise median-of-means estimators. In particular, by taking $T(\pr_q) = \dsf_{n,S_q}$ to be the distance function w.r.t. block $S_q$, the estimator $\dnq$ satisfies the conditions of Lemma~\ref{lemma:mom}. We also point out that the exponential concentration bound in Theorem~\ref{theorem:momdist-sublevel} is strictly better than similar bounds appearing in other pointwise MoM estimators, e.g., \citet[Theorem~2]{humbert2020robust}. This is due to the Chernoff bound (instead of a Hoeffding bound) used for bounding the Binomial tail probability appearing in Lemma~\ref{lemma:mom}. This provides a significant gain for Binomial random variables with shrinking probability \citep{Hagerup1990AGT}.
\subsection{Statistical properties of $V[\Xn, \dnq]$}
\label{sec:statistical-2}
In practice, the sublevel filtration $V[\dnq]$ cannot be computed exactly, and one must rely on approximations using cubical homology. To this end, we now turn our attention to $\dnq$-weighted filtrations computed on the sample points directly. Before we study the statistical properties of the $\dnq$--weighted filtration, we provide a useful characterization of the persistence diagram obtained using the sublevel sets of $\dnq$.
\begin{lemma}
Given samples $\Xn$ and $Q<n$, $V[\dnq]$ and $V{[\R^d, \dnq]}$ are $(\id, \alpha)-$interleaved for $\alpha: t \mapsto 2^{\f{p-1}{p}} t$ for all $p \ge 1$. In particular, $V[\dnq] = V{[\R^d, \dnq]}$ when $p=1$.
\label{lemma:sublevel-equivalence}
\end{lemma}
We now turn our attention to the $\dnq$-weighted filtration $V[\Xn, \dnq]$. The following result establishes that the persistence module $\bbv[{\Xn, \dnq}]$ is sufficiently regular.
\begin{lemma}[Regularity]
For $\Xn$ obtained under sampling setting \samp{} and $\dnq$ defined in \eref{eq:momdist}, the persistence module $\bbv[{\Xn, \dnq}]$ is $q-$tame and pointwise finite-dimensional.
\label{lemma:momdist-regularity}
\end{lemma}
The proof of Lemma~\ref{lemma:momdist-regularity} is a direct consequence of \citet[Proposition~{3.1}]{anai2019dtm}, and ensures that the persistence diagram $\dgm\pa{\bbv[\Xn, \dnq]}$ is well-defined.
Next, in order to establish that $\dgm\pa{\bbv[{\Xn, \dnq}]}$ is a consistent estimator of $\dgm\pa{\bbv[\bX]}$ and to construct uniform confidence bands in the space of persistence diagrams $(\Omega, \winf)$, we need a tighter control for how the two persistence modules are interleaved. To this end, Lemmas \ref{lemma:ab-filtration} and \ref{lemma:ab-module} will be of assistance, and serve as generalizations of \citet[Lemma~4.8 \& Proposition~4.9]{anai2019dtm}. The following result, which holds for a general metric space $(\M, \rho)$ and an arbitrary weight function $f$, provides a handle for the interleavings between $f$-weighted filtrations computed on two nested sets using the same function $f$.
\begin{lemma}
Given a metric space $(\M, \rho)$, two compact subsets $\bX, \bY$ of $\M$ such that $\bX \subseteq \bY$, and a weight function $f: \mathcal{M} \rightarrow \R_{\ge 0}$, let $\Vt[ ][\bX,f][\rho]$ and $\Vt[ ][\bY, f][\rho]$ be their respective $f$--weighted filtrations. If $f$ satisfies the property that
\eq{
\inf_{\xv \in \bX}\rho(\xv, \yv) \le f(\yv) + a,\nn
}
for $a > 0$ and for all $\yv \in \bY$, then the filtrations are $(\id,\alpha)$--interleaved, i.e.,
\eq{
\Vt[][\bX, f] \subseteq \Vt[][\bY, f] \subseteq \Vt[\alpha(t)][\bX, f],\nn
}
for $\alpha: t \mapsto 2^{1 - \f 1 p} t + a + \sup_{\xv \in \bX}f(\xv)$.
\label{lemma:ab-filtration}
\end{lemma}
Since the map $\alpha$ appearing in Lemma~\ref{lemma:ab-filtration} is not purely a translation map, it does not lead to a bound in the interleaving metric as per \eref{eq:interleaving-filtration}, and, therefore, a bound in the $\winf$ metric cannot be characterized using Lemma~\ref{lemma:ab-filtration} alone. The next result, which is stated only for the Euclidean space $(\R^d, \norm{\cdot})$, establishes that for sufficiently large values of $t$, the map $\alpha$ may be replaced by a translation map.
\begin{lemma}
Let $(\mathcal{M},\rho) = (\R^d, \norm{\cdot})$. Suppose $\bX$, $\bY \subset \R^d$ are compact sets such that $\bX \subseteq \bY$, and $f$ satisfies the same conditions as in Lemma~\ref{lemma:ab-filtration} for $a>0$. Let $t({\bX})$ be the filtration value for the simplex corresponding to $\bX$ in $\textup{nerve}\pb{\VVt[ ][\bX, f][\rho]}$, i.e.,
\eq{
t({\bX}) \defeq \inf \qty\Big{t>0: {\textstyle \bigcap\limits_{\xv \in \bX} } B_{f, \rho}(\xv, t) \neq \varnothing},\nn
}
and $\beta: t \mapsto t + c(\bX)$ be a non-decreasing map with
\eq{
c(\bX) \defeq a + \sup_{\xv \in \bX}f(\xv) + \pa{1 - \f 1 p}t(\bX).\nn
}
Then for all $t \ge t(\bX)$, the homomorphisms $\phi_{t}^{\beta(t)}: \bVt[t][\bX, f][\rho] \rightarrow \bVt[\beta(t)][\bX, f][\rho]$ are trivial, i.e.,
$$
{\textup{Im}\qty\big(\phi_{t}^{\beta(t)})} \cong \begin{cases}
\mathbf{F} & \text{ \ \ if \ \ } \bVt[t][\bX, f][\rho] = \textup{H}_0\pa{\Vt[t][\bX, f][\rho]}\nn \\
\pb{\mathsf{0}} & \text{\ \ \ if \ \ } \bVt[t][\bX, f][\rho] = \textup{H}_k\pa{\Vt[t][\bX, f][\rho]}, \ \ k>0
\end{cases}.
$$
Furthermore, the bottleneck distance between the resulting $f$--weighted persistence diagrams is bounded above as
\eq{
\Winf\qty\Big( \dgm\pa{\bVt[ ][\bX, f][\rho]}, \dgm\pa{\bVt[ ][\bY, f][\rho]} ) \le c(\bX).\nn
}
\label{lemma:ab-module}
\end{lemma}
\begin{remark}
Unlike Lemma~\ref{lemma:ab-filtration}, which is stated for general metric spaces, restricting ourselves to the Euclidean space $(\R^d, \norm{\cdot})$ in Lemma~\ref{lemma:ab-module} is sufficient for the objective of this work. However, as outlined in the proof, the only issue arises when \citet[Lemma~B.1]{anai2019dtm} is invoked. While \citet[Lemma~B.1]{anai2019dtm} (which holds for affine spaces satisfying the parallelogram identity) extends naturally to Banach spaces, the extension to general metric spaces will require some care on a case-by-case basis.
\end{remark}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figures/plots/interleaving2.pdf}
\caption{Illustration of Lemmas~\ref{lemma:ab-filtration} and \ref{lemma:ab-module} for $p=2$, $t(\bX)=3$ and $a+\sup_{\xv\in \bX}f(\xv)=1$. The interleaving maps $\alpha$ and $\beta$ are illustrated in blue and red, respectively. When $t < t(\bX)$, the interleaving map is $\alpha$ from Lemma~\ref{lemma:ab-filtration}. For $t \ge t(\bX)$, the map $\beta$ is obtained using Lemma~\ref{lemma:ab-module}. Extending $\beta$ along the black line yields the interleaving bound.}
\label{fig:ab-interleaving}
\end{figure}
In essence, the preceding two results enable us to control the filtrations in two separate stages, and, then, ``stitch'' the results together. See Figure~\ref{fig:ab-interleaving} for an illustration. This forms the crux of the next result, which establishes an analogue of the stability result for $\dnq$-weighted filtrations, but unlike the stability for the usual distance function $\dn$, it is also robust to outliers.
\begin{theorem}[Stability \& robustness of $\dnq$-weighted filtrations]
Let $\Xn = \Xnm \cup \Ym$ be a collection of points obtained under sampling condition \samp{}. For $Q > 2m$ let $\dnq$ be the \textup{\md{}} function computed on the contaminated points $\Xn$ and let $\dsf_{n-m}$ be the distance function w.r.t.~the inliers $\Xnm$. Then
\eq{
\Winf\qty\bigg( \bVt[ ][\Xn, \dnq], \bVt[ ][\Xnm, \dsf_{n-m}] ) \le \sup_{\xv \in \Xnm}\dnq(\xv) + \norminf{\dnq - \dsf_{n-m}} + \qty(1 - \f1p)t(\Xnm), \nn
}
where $t(\Xnm)$ is the filtration value of the simplex associated with the inliers $\Xnm$ in the filtration $V[\Xnm,\dnq]$. In particular, when $p=1$ we have
\eq{
\Winf\qty\bigg( \bVt[ ][\Xn, \dnq], \bVt[ ][\Xnm, \dsf_{n-m}] ) \le \sup_{\xv \in \Xnm}\dnq(\xv) + \norminf{\dnq - \dsf_{n-m}}.
\label{eq:stability-p1}
}
\label{theorem:momdist-stability}
\end{theorem}
\begin{remark}The following observations follow from Theorem~\ref{theorem:momdist-stability}.
\begin{enumerate}[label=\textup{\textbf{(\roman*)}}]
\item In contrast to what would follow from Lemma~\ref{lemma:anai-et-al}~(ii) for the standard unweighted filtration, the term appearing in the r.h.s.~of \eref{eq:stability-p1} completely eliminates the dependence on the Hausdorff distance between $\Xn$ and $\Xnm$ in the $\dnq-$filtration. More generally, the same bound in Theorem~\ref{theorem:momdist-stability} holds even when $V[\Xn,\dnq]$ is replaced by $V[\mathbb{M}, \dnq]$ for any set $\mathbb{M} \supseteq \Xnm$.
\item Notably, $V[\Xn, \dnq]$ remains resilient to outliers. To see this, observe that the first term appearing in the r.h.s. of \eref{eq:stability-p1} may be bounded as
\eq{
\sup_{\xv \in \Xnm}\dnq(\xv) = \sup_{\xv \in \Xnm}\abs{ \dnq(\xv) - \dx(\xv) } \le \norminf{ \dsf_{n, Q} - \dx },\nn
}
where the first equality follows from the fact that $\dx(\xv)=0$ for all $\xv \in \Xnm$. Therefore, from the proof of Theorem~\ref{theorem:momdist-sublevel}, the r.h.s.~of \eref{eq:stability-p1} vanishes with high probability for sufficiently large sample sizes.
\item For $p=1$, a similar analysis for the DTM-filtrations appears in \citet[Theorem~{4.5}]{anai2019dtm} and the bottleneck distance is bounded above as
\eq{
\winf\qty\bigg( \dgm\pa{\bbv[\Xn, \delta_{n, k}]}, \dgm\pa{\bbv[\Xnm, \delta_{n-m, k}]} ) \le \sqrt{\f nk} W_2\pa{\Xnm, \Xn} + \sup_{\xv \in \Xnm}\delta_{n-m, k}.\nn
}
While the last term on the r.h.s. converges to the uncontaminated population analogue with high probability, the first term involving the Wasserstein distance $W_2(\Xnm, \Xn)$ can be large even for a few extreme outliers. In contrast, the r.h.s. of \eref{eq:stability-p1} converges to zero with high probability with no assumptions on the outliers $\Ym$.
\end{enumerate}
\label{remark:stability}
\end{remark}
With this background we are now in a position to state our main result, which characterizes the rate of convergence for the $\dnq$--weighted filtration on the contaminated sample points, $V[\Xn, \dnq]$, to the counterfactual population analogue $V[\bX]$ in the $\winf$ metric.
\begin{theorem}[$\dnq$-weighted filtration]
Let $p=1$. Suppose $\pr \in \mathcal{P}(\bX, a, b)$ is a probability distribution with support $\bX$ satisfying the $(a, b)-$standard condition, and $\Xn = \Xnm \cup \Ym$ is obtained under sampling condition \samp{}. Then, for $2m < Q < n$ and for all $\delta \in (0, 1)$,
\eq{
\pr\qty\Bigg{\Winf\qty\bigg( \bVt[ ][\Xnm \cup \Ym, \dnq], \bvt{}[\bX] ) \le \mathfrak{f}(n, m, Q, \delta_1, \delta_2)} \ge 1 - \delta,\nn
}
where
\eq{
\mathfrak{f}(n, m, Q, \delta_1, \delta_2) \defeq \qty\bigg( \frac{Q\log(n / Q)}{a (n / Q)} + \frac{4Q \log(1/\delta_1)}{a(Q-2m)n} )^{1/b} + \qty\bigg( \frac{\log (n-m)}{a (n-m)} + \frac{4 \log(1/\delta_2)}{a (n-m)} )^{1/b},\nn
}
for $\delta_1, \delta_2 \in (0, 1)$ such that $\delta_1 \le e^{-(1+b)Q}$ and $\delta_1 + \delta_2 = \delta$. In particular, if $m_n = cn^\e$ for $0 \le \e < 1$, then
\eq{
\E\qty\bigg[ \Winf\qty\bigg( \bVt[ ][\Xnm \cup \Ym, \dnq], \bvt{} [\bX] ) ] \lesssim \qty\Bigg( \f{\log n}{n^{1-\e}} )^{1/b}.
\label{eq:dnq-rate-1}
}
\label{theorem:momdist-consistency}
\end{theorem}
\begin{remark} We make the following observations from Theorem~\ref{theorem:momdist-consistency}.
\begin{enumerate}[label=\textup{\textbf{(\roman*)}}]
\item The term appearing in the r.h.s.~of \eref{eq:dnq-rate-1} is identical to the term appearing in the r.h.s.~of \eref{eq:dnq-rate} in Theorem~\ref{theorem:momdist-sublevel}. Therefore, the $\dnq$--weighted filtration and the $\dnq$ sublevel filtration converge to the same population limit with identical convergence rates. They both differ from the minimax rate without outliers \citep[Theorem~4]{chazal2015convergence} by a factor of $n^{-\e/b}$.
\item The uniform confidence band we obtain from Theorem~\ref{theorem:momdist-consistency} can, in principle, be computed for any confidence level $\delta \in (0, 1)$. However, the restriction on $\delta_1$ makes the confidence band obtained using $V[\Xn, \dnq]$ wider than that obtained using Theorem~\ref{theorem:momdist-sublevel}. This is, ultimately, the price we have to pay for choosing the computationally tractable $\dnq$-weighted filtration as the estimator as opposed to the $\dnq$ sublevel filtration.
\end{enumerate}
\end{remark}
We conclude this section with the following result, which relates the sublevel filtration $V[\dnq]$ to $V[\Xn, \dnq]$.
\begin{proposition}
Given samples $\Xn = \pb{\Xv_1, \Xv_2, \dots, \Xv_n}$ and $Q < n$, the filtrations $V[\dnq]$ and $V[\Xn, \dnq]$ are $(\eta, \xi)-$interleaved, where
\eq{
\eta: t \mapsto 2^{\ipfac}t + \sup_{\xv \in \Xnm}\dnq(\xv), \qq{} \xi: t \mapsto 2^{\ipfac}\eta(t),\nn
}
and $p \ge 1$. Specifically, when $p=1$,
\eq{
\winf\qty\bigg( \dgm\pa{\bbv[\dnq]}, \dgm\pa{\bbv[\Xn, \dnq]} ) \le \sup_{\xv \in \Xnm}\dnq(\xv).\nn
}
\label{prop:sublevel-2}
\end{proposition}
The above result characterizes the error incurred when using $V[\Xn, \dnq]$ to approximate the sublevel filtration $V[\dnq]$. In light of Remark~\ref{remark:stability}~(ii), this error vanishes with increasing sample size. In contrast, the approximation error for the DTM-filtration is non-vanishing \cite[Proposition~4.6]{anai2019dtm}.
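For readers who wish to experiment, the median-of-means distance function $\dnq$ discussed above can be sketched in a few lines of Python (a minimal illustration, not the implementation used for our experiments; the random block partition and pointwise median follow the construction of $\dnq$, `numpy` is assumed, and all names are ours):

```python
import numpy as np

def mom_distance(X, query, Q, rng=None):
    """Median-of-means distance: partition the rows of X into Q blocks,
    compute the distance from each query point to every block, and
    return the pointwise median over the blocks."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(X))
    blocks = np.array_split(idx, Q)  # Q nearly equal blocks
    # distance from each query point to each block: shape (Q, n_query)
    dists = np.stack([
        np.min(np.linalg.norm(query[:, None, :] - X[S][None, :, :], axis=-1), axis=1)
        for S in blocks
    ])
    return np.median(dists, axis=0)
```

With $Q = 1$ this reduces to the ordinary distance function $\dn$, which vanishes at every sample point including the outliers, whereas for $Q > 2m$ the median discards the contaminated blocks.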
\begingroup
\subsection{Influence analysis}
\label{sec:influence}
The statistical analysis in the previous sections establishes that, even in the presence of outliers, the effect of the outliers can eventually be mitigated as the number of samples increases. In this section, we provide a more precise characterization of the influence the outliers have on the resulting $\dnq$--weighted filtrations, in contrast to their non-robust counterpart---the $\dn$--weighted filtrations.
Given a probability measure $\pr \in \mathcal{P}(\bX, a, b)$, \citet[Definition~4.1]{vishwanath2020robust} characterized the influence an outlier at $\xv \in \R^d$ has on a persistence diagram $\dgm\pa{\bbv[f_\pr]}$---obtained using the sublevel sets of $f_\pr$---using the \textit{persistence influence} function
\eq{
\boldsymbol{\Psi}(f_\pr; \xv) \defeq \lim_{\e \rightarrow 0} \winf\qty\Big( \dgm\pa{\bbv[{f_{\pr^{\e}_{\xv}}}]}, \dgm\pa{\bbv[f_\pr]} ),
\label{eq:persinf}
}
where $\pr^{\e}_{\xv} = (1-\e)\pr + \e\delta_{\xv}$ is the perturbation curve w.r.t.~$\xv$ in the space of probability measures. The persistence influence is a generalization of the influence function in robust statistics \citep{hampel2011robust} to general metric spaces. The analysis in this section is similar in spirit to the analysis based on the persistence influence, but differs in two important aspects. First, the $\dnq$--weighted filtration is computed purely on the sample points---by partitioning the samples into $Q$ disjoint blocks---and, therefore, the notion of persistence influence is adapted to the samples, in contrast to \eref{eq:persinf}, which is based on the data-generating distribution $\pr$. Additionally, unlike the case of the persistence influence function---where the influence of outliers in the resulting persistence diagram is quantified in terms of the bottleneck distance---here we directly examine the influence the outlying point has on the resulting persistence diagram itself. This provides a more tractable interpretation for how outliers impact the resulting topological inference.
The discussion in the previous section focused on the weighted filtrations, which can be approximated using the weighted-\cech{} complex. Here, we will explicitly restrict ourselves to the case of the weighted Rips filtrations, for two reasons. First, a majority of the computational applications of persistent homology are performed using the Rips complex, with several optimized implementations widely available, e.g., Ripser, Gudhi, GiottoTDA. Second, since the Rips complex $\mathcal{R}^t[\bX, f]$ is defined to be the flag complex associated with the $1$--skeleton of $\check{C}^t[\bX, f]$, the weighted Rips persistence diagram is entirely characterized by its $0$- and $1$-simplices.
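Since the weighted Rips filtration is determined by its $0$- and $1$-simplices, its filtration values are cheap to tabulate. The sketch below does so for $p = 1$, assuming the standard convention of the DTM-filtration literature (a vertex $\xv$ enters at $f(\xv)$, and an edge $\{\xv, \yv\}$ enters at $\max\{f(\xv), f(\yv), (\norm{\xv - \yv} + f(\xv) + f(\yv))/2\}$); `numpy` is assumed and the function name is illustrative:

```python
import numpy as np

def weighted_rips_values(X, f):
    """Filtration values of the p = 1 f-weighted Rips complex:
    vertex i enters at f_i; edge {i, j} enters at
    max(f_i, f_j, (||x_i - x_j|| + f_i + f_j) / 2)."""
    w = np.asarray(f, dtype=float)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    E = np.maximum((D + w[:, None] + w[None, :]) / 2.0,
                   np.maximum(w[:, None], w[None, :]))
    np.fill_diagonal(E, w)  # store the vertex values on the diagonal
    return E
```

Some Rips implementations accept such a matrix directly, interpreting a nonzero diagonal as vertex birth times; since the Rips complex is the flag complex of this $1$-skeleton, the matrix determines the entire filtration.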
\begingroup
\renewcommand{\Xnm}{\bX[n+m]}
With this background, we now introduce the empirical persistence influence framework. Suppose we are given a collection of observations $\Xn$, which is sampled i.i.d.~from a probability distribution $\pr$ of interest. Let $\dgm\pa{\bVt[ ][\Xn, f_n][]}$ be its weighted--Rips persistence diagram, where the weight function $f_n$ is constructed using the samples $\Xn$. Suppose $\Xn$ is contaminated with $m < \f n2$ outliers to obtain the contaminated dataset $\Xnm$. In particular, we may assume that the $m$ points are placed at an outlying location $\xvo$, i.e.,
\eq{
\Xnm = \Xn \bigcup \pb{\mathop{\medcup}\limits_{j=1}^m\pb{\xvo}},\nn
}
such that the factor $m$ and the location $\xvo$ together control the relative influence the outliers have. This is similar to the role played by the factor $\e$ in the perturbation curve associated with the persistence influence. Note that when $m=0$, the influence of the outliers is non-existent in the dataset.
Let $\dgm\pa{\bVt[ ][\Xnm, f_{n+m}]}$ be the weighted--Rips persistence diagram constructed on $\Xnm$ using the weight function $f_{n+m}$. This gives rise to a collection of spurious topological features in the resulting persistence diagram. If $\bn$ is the birth time associated with a hypothetical topological feature with mass $0$ at $\xvo$ (i.e., $0\delta_{\xvo}$) in $\dgm\pa{\bVt[ ][\Xn, f_n]}$, and $\bnm$ is the birth time of the observed topological feature associated with the $m$ points at $\xvo$ (i.e., $m\delta_{\xvo}$), then the \textit{empirical persistence influence} of $\xvo$ can be characterized by
\eq{
\text{influence}\pa{b; \Xn, f_n, m, \xvo} = \Delta b_{n, m}(\pb{\xvo}) = \bn - \bnm.
\label{eq:birth-influence}
}
Indeed, when $\bn - \bnm$ is small, the resulting weighted-Rips persistence diagram $\dgm\pa{\bVt[ ][\Xnm, f_{n+m}]}$ is more robust, and vice versa.
In a similar vein as \citet[Definition~4.1]{vishwanath2020robust} we may also characterize the influence the outliers have on the persistence diagrams resulting from the sublevel filtrations as
\eq{
\text{influence}\pa{\winf; \Xn, f_n, m, \xvo} = \winf\qty\bigg( \dgm\pa{\bbv[f_{m+n}]}, \dgm\pa{\bbv[f_n]} ) \le \norminf{f_{n+m} - f_{n}}.
\label{eq:winf-influence}
}
The following result establishes that, under some mild conditions and with high probability, the $\dnq$--weighted Rips persistence diagrams are more robust than their non-robust counterpart.
\endgroup
\begin{theorem}[Influence analysis of $\dnq$-weighted filtrations]
For $\Xn$ observed i.i.d. from $\pr \in \mathcal{P}(\bX, a, b)$ and $\xvo \in \R^d$, let $\Xmn$ be given by
\eq{
\Xmn = \Xn \bigcup \pb{\mathop{\medcup}\limits_{j=1}^m\pb{\xvo}}. \nn
}
For $2m < Q < n+m$, let $\dsf_{n+m}$ and $\dsf_{n+m, Q}$ denote the distance and MoM distance function w.r.t. $\Xmn$, and let $\db$ and $\dbq$ be as defined in \eref{eq:birth-influence} for $\dsf_{n+m}$ and $\dsf_{n+m, Q}$ respectively. Then
\eq{
\dbq \le \db \ \ \ \textup{a.s.}\nn
}
Furthermore, for $\nQ = (n+m)/Q$ and $c = \min\qty{a2^{-(1+b)}, a2^{-2b}}$, if
\eq{
\vp \defeq c\ \!\dx(\xvo)^b > \f{\log\nQ}{\nQ} + \f{4(1+b)^2Q^3}{\nQ} , \tag{I}
}
then, for all $\delta \in (0,1)$ satisfying
\eq{
(1+b)^2Q^2 \le \log(2/\delta) \le \f{\nQ\vp - \log\nQ}{4Q}, \tag{II}
}
with probability greater than $1-\delta$,
\eq{
\norminf{ \dsf_{n+m} - \dsf_n } - \norminf{\dsf_{n+m, Q} - \dsf_n} \ge %
\qty({ \f{2\log\nQ}{a\nQ} } + { \f{8 \log(2/\delta)}{a\nQ} })^{1/b}.\nn%
}
\label{theorem:momdist-influence}
\end{theorem}
\begin{remark} The result from Theorem~\ref{theorem:momdist-influence} may be interpreted as follows.
\begin{enumerate}[label=\textup{\textbf{(\roman*)}}]
\item The first part guarantees that the $\dnq$-weighted persistence diagram always has a smaller influence on the birth time in comparison to the non-robust counterpart. Since the Rips persistence diagram is entirely determined by the filtration values associated with the $0-$ and $1-$ simplices, this provides a partial picture for the influence $\xvo$ has on the resulting persistence diagrams. Characterizing the influence on the $1-$simplices is far more challenging owing to the combinatorial complexity in characterizing their lifetimes.
\item The second part compares the upper bounds on the empirical persistence influence from \eref{eq:winf-influence}. When conditions (I) and (II) hold, then with high probability, persistence diagrams obtained using $\dnq$ are closer to the truth than those obtained using $\dsf_n$. Therefore, the interplay between $n$, $m$ and $\xvo$ is better understood by characterizing when conditions (I) and (II) hold.
\item For fixed $n$, observe that (I) is satisfied whenever $\dx(\xvo)$ is sufficiently large, i.e., $\xvo$ is sufficiently far away from the support. On the other hand, if $\xvo$ is fixed, then (I) is satisfied when $\log \nQ / \nQ$ is sufficiently small, i.e., $n$ is sufficiently large. Together, this implies that for condition (I) to be satisfied, either (a) the outliers must be sufficiently well-separated from the support $\Xb$ so that the outliers $\xvo$ can be distinguished from the inliers $\Xn$, or (b) for outliers placed very close to the support $\Xb$, sufficiently many inliers $n$ are needed to tell the two apart. On the other hand, note that if $n$ and $m$ are fixed, then the r.h.s. of (I) is directly proportional to $Q$. Although $Q$ can take any value in the range $2m < Q < n+m$, choosing a value of $Q$ much larger than $2m+1$ will likely violate condition (I) for a fixed $\xvo$. Equivalently, for a suboptimal choice of $Q$, the outliers need to be sufficiently far away from the inliers in order to be distinguishable.
\item The l.h.s. of (II) is equivalent to the constraint $\delta \le e^{-(1+b)Q}$, which appears in Theorems~\ref{theorem:momdist-sublevel}~and~\ref{theorem:momdist-consistency}. The r.h.s. of (II) specifies a lower bound on the confidence level $\delta$. Condition (I) guarantees that the set of admissible values $\delta \in (0,1)$ satisfying (II) is nonempty. For fixed $m, Q$ and $\xvo$, the r.h.s. of (II) is directly proportional to $n$, i.e., the lower bound vanishes as $n \rightarrow \infty$.
\item When conditions (I) and (II) are satisfied, we have the following lower bound from the l.h.s. of (II):
\eq{
\norminf{ \dsf_{n+m} - \dsf_n } - \norminf{\dsf_{n+m, Q} - \dsf_n} \gtrsim \qty({ \f{\log\pa{(n+m)/Q}}{a(n+m)/Q}} + { \f{Q^2}{(n+m)/Q} })^{1/b}.\label{eq:inf-lb}
}
In the regime when $n,m \rightarrow \infty$, and for the optimal choice of $Q$, i.e., $Q=km$ for $k > 2$, the r.h.s. of \eref{eq:inf-lb} is non-trivial when $m = \Omega(n^{1/3})$. Therefore, under conditions (I) and (II), when there are sufficiently many outliers, there is greater evidence to support the robustness of $\dnq$.
\end{enumerate}
\end{remark}
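The interplay between $n$, $m$, $Q$ and $\dx(\xvo)$ described in the remark above is straightforward to probe numerically. The following sketch simply evaluates both sides of conditions (I) and (II) for given problem parameters (a toy check; all names are ours, and `dist_to_support` stands for $\dx(\xvo)$):

```python
import math

def check_conditions(n, m, Q, dist_to_support, a, b, delta):
    """Evaluate conditions (I) and (II) of the influence analysis."""
    nQ = (n + m) / Q
    c = min(a * 2.0 ** (-(1 + b)), a * 2.0 ** (-2 * b))
    vp = c * dist_to_support ** b                          # l.h.s. of (I)
    cond_I = vp > math.log(nQ) / nQ + 4 * (1 + b) ** 2 * Q ** 3 / nQ
    cond_II = ((1 + b) ** 2 * Q ** 2
               <= math.log(2 / delta)
               <= (nQ * vp - math.log(nQ)) / (4 * Q))
    return cond_I, cond_II
```

For instance, with $n = 10^5$, $m = 2$, $Q = 5$ and $a = b = 1$, condition (I) holds for an outlier at distance $10$ from the support but fails at distance $0.1$, in line with observation (iii).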
\endgroup
\begingroup
\providecommand{\hQ}{\widehat{Q}}
\providecommand{\hm}{\widehat{m}}
\renewcommand{\ms}{m^*}
\providecommand{\h}{\mathfrak{h}}
\subsection{Auto-tuning the parameter $Q$}
\label{sec:lepski}
The result in Theorem~\ref{theorem:momdist-consistency} relies on the crucial assumption that the number of outliers $\ms$ is known \textit{a priori}. While this assumption may hold in certain adversarial settings, in general, this information may be unavailable. In order to make Theorem~\ref{theorem:momdist-consistency} more useful in practical settings, we discuss two solutions for calibrating the parameter $Q$. The first procedure is based on Lepski's method \citep{lepskii1991problem}, which is a powerful data-driven method for adaptive parameter selection. In this case, we also provide theoretical guarantees for the adaptively tuned estimator. The second procedure---which is based on some heuristic observations regarding the sample estimator $\bbv{}[\Xn, \dnq]$--- works well in practice, and may be used as a precursor to Lepski's method.
When the number of outliers $\ms$ is known, choosing $Q^*=2\ms+1$ recovers the rate of convergence in Theorem~\ref{theorem:momdist-consistency}. However, without access to $\ms$, Lepski's method provides a systematic procedure for selecting a parameter $\hQ$ which enjoys the same error guarantees as $Q^*$ \citep{birge2001alternative}. The procedure is as follows. Let $m_{\min}$ and $m_{\max}$ be two coarse bounds on the (unknown) $\ms$ such that ${m_{\min} \le \ms \le m_{\max}}$. For a choice of $\theta>1$, let $m(j) = \theta^{j}\mmin$ and define
$$
\J \defeq \qty\Big{ j \ge 1 : \mmin \le m(j) < \theta\mmax }.
$$
For $\pr \in \mathcal{P}\pa{\bX, a, b}$ and $\Xn$ obtained under sampling condition \samp{}, let $\bbv_n(j) = \bbv[\Xn, \dsf_{n, Q(j)}]$ be the persistence module obtained using the \md{-weighted} filtration with ${Q(j) = 2m(j)+1}$.
For $\delta \in (0,1)$ and $\delta_{\max} = \delta - e^{-(1+b)(2\mmax+1)}$, let $\mathfrak{h}(n, m, \delta)$ be defined as follows:
\eq{
\mathfrak{h}(n, m, \delta) = 2\qty\Bigg( \f{2m+1}{an} \wo\qty( \f{ne^{ 4(1+b)(2\mmax+1) }}{2m+1} ) )^{1/b} + \qty\Bigg( \f{1}{a(n-m)} \wo\qty( (n-m)e^{4\log(1/\delta_{\max})} ) )^{1/b},\nn
}
where for $z>0$, $\wo(z)$ is the Lambert $\wo$ function given by the identity $\wo(z)e^{\wo(z)} = z$. With this background, let $\hj$ be the output of the following procedure:
\eq{
\hj \defeq \min \qty\Big{ j \in \J : \winf\pa{ \bbv_n(j), \bbv_n({j'}) } \le 2\mathfrak{h}(n, m(j'), \delta) \qq{for all} j' \in \J, j' > j }.
\label{eq:lepski-j}
}
The resulting weighted persistence module $\widehat{\bbv}_n = \bbv_n({\hj}) = \bbv[\Xn, \dsf_{n, Q(\hj)}]$ is the Lepski estimator for $\bbv[\bX]$. The following result establishes that the adaptive selection of $Q$ results in an estimator with the same convergence guarantees as in Theorem~\ref{theorem:momdist-consistency}.
\begin{theorem}[Adaptive $\dnq$-weighted filtration]
Suppose $\Xn$ is obtained under sampling condition \samp{} for $\pr \in \mathcal{P}(\bX, a, b)$, and suppose $\mmin$ and $\mmax$ are known such that the unknown number of outliers satisfies ${\ms \in [\mmin, \mmax]}$ and $\ms < n/2$. For a chosen $\theta > 1$, let $\hj$ be the output of the data-driven procedure in \eref{eq:lepski-j} and let $\widehat{\bbv}_n = \bbv_n{(\hj)}$. Then, for all $\delta \in (0, 1)$,
\eq{
\pr\qty\bigg( \winf\qty\Big( \dgm\qty\big(\widehat{\bbv}_n), \dgm\qty\big(\bbv[\bX]) ) \le 3\mathfrak{h}(n, \theta m^*, \delta) ) \ge 1 - \delta \log_\theta\qty( \f{\theta \mmax}{\mmin} ).\nn
}
\label{theorem:lepski}
\end{theorem}
\begin{remark}
We make the following useful observations from Theorem~\ref{theorem:lepski}.
\begin{enumerate}[label=\textup{\textbf{(\roman*)}}]
\item We emphasize that the output $\widehat\bbv_n$ of Lepski's method does not necessarily correspond to the optimal choice $\bbv_n^*$ if $\ms$ were known. Instead, Theorem~\ref{theorem:lepski} guarantees that the error associated with $\widehat\bbv_n$ is of the same order (up to constants) as that of $\bbv_n^*$.
\item While Lepski's method guarantees optimal errors for the adaptive estimator without any knowledge of the true $\ms$, in practice the empirical performance depends on several factors. Since the procedure in Theorem~\ref{theorem:lepski} is designed to match the guarantee of Theorem~\ref{theorem:momdist-consistency}, the success of the procedure crucially depends on the tightness of the bound $\mathfrak{f}(n, m, Q, \delta_1, \delta_2)$ in Theorem~\ref{theorem:momdist-consistency}. Furthermore, the implementation described in \eref{eq:lepski-j} requires knowledge of the parameters $a, b > 0$ arising from the $(a,b)-$standard condition. While the calibration of $a$ and $b$ in practice is more of an art and beyond the scope of this paper, we emphasize that it is possible to construct a statistically consistent estimator of the true population quantity $\bbv[\bX]$ in a purely data-adaptive fashion, even in the presence of adversarial contamination.
\item Unlike a standard grid search, Lepski's method adapts to the true noise level $m^*$ in an efficient manner. Given a reasonable estimate for $\mmin$ and $\mmax$, Lepski's method has a computational cost of $O( \log^2_\theta(\mmax/\mmin ) )$. However, the choice of $\theta > 1$ must also be made judiciously, e.g., replacing $\theta$ with $\sqrt{\theta}$ for the procedure in \eref{eq:lepski-j} will require $\sim4$ times more computational time.
\item In the worst case, when there are no reasonable estimates for $\mmin$ and $\mmax$, choosing $\mmin=1$ and $\mmax=n/2$ requires $O(\log^2_\theta(n))$ computational time. Beyond the additional computational price, a suboptimal choice of $\mmin$ and $\mmax$ also leads to poor statistical performance. To see this, note that the term $\h(n, m, \delta)$ is a lower bound for the term $\mathfrak{f}(n, m, Q, \delta_1, \delta_2)$ in Theorem~\ref{theorem:momdist-consistency} when $Q=2m+1$ and $\delta_1 = e^{-(1+b)(2\mmax+1)} \le e^{-(1+b)Q}$. Therefore, when the number of outliers grows with $n$ as $\ms = cn^\e$ for $c>0$ and $\e \in [0, 1)$, a similar analysis to that in Theorems~\ref{theorem:momdist-sublevel}~and~\ref{theorem:momdist-consistency} yields that
\eq{
\E\qty\bigg[ \winf\qty\Big( \dgm\qty\big(\widehat{\bbv}_n), \dgm\qty\big(\bbv[\bX]) ) ] \lesssim \pa{\f{\log n}{n/\mmax}}^{1/b}.\nn
}
Therefore, if the bound $\mmax$ is not tight, i.e., $\mmax = Cn^\beta$ for $\e < \beta$, then, asymptotically, the output of Lepski's method is not adaptive to the true noise level $\ms$, and, instead, reflects the suboptimal choice of $\mmax$.
\end{enumerate}
\label{remark:lepski}
\end{remark}
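For concreteness, the selection rule in \eref{eq:lepski-j} reduces to a short generic routine. The sketch below is schematic rather than the implementation used in our experiments: `winf` and `h_bound` are user-supplied callables, standing respectively for the bottleneck distance between the candidate diagrams (e.g.\ computed via Gudhi) and for the bound $\mathfrak{h}(n, m(j), \delta)$:

```python
def lepski_select(grid, winf, h_bound):
    """Lepski's rule: return the smallest grid point j whose candidate
    module is within 2 * h_bound(j') of every coarser candidate j' > j."""
    grid = sorted(grid)
    for j in grid:
        if all(winf(j, j2) <= 2 * h_bound(j2) for j2 in grid if j2 > j):
            return j
    return grid[-1]  # unreachable: the largest j passes the test vacuously
```

Note that the loop always terminates at the largest grid point, since the condition is vacuous there; in practice the selected $j$ is the first index at which the candidate modules stabilize relative to the bound.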
In a similar vein, Lepski's method may be used to adaptively select the parameter $Q$ to obtain a statistically consistent sublevel set persistence module. The following result outlines a data-driven procedure to obtain ${\cj \in \J}$ such that the resulting sublevel persistence module $\overline{\bbv}_n = \bbv_n(\cj) = \bbv[\dsf_{n, Q(\cj)}]$ has the same convergence guarantee as Theorem~\ref{theorem:momdist-sublevel}.
\begin{theorem}[Adaptive sublevel filtration]
For $\pr \in \mathcal{P}(\bX, a, b)$, suppose $\Xn$ is obtained under sampling condition \samp{}, and suppose $\mmin$ and $\mmax$ are known such that the unknown number of outliers satisfies $\ms \in [\mmin, \mmax]$ and $\ms < n/2$. Let $\bbw_n(j) = \bbv[\dsf_{n, Q(j)}]$ be the sublevel persistence module obtained using $\dsf_{n, Q(j)}$ with ${Q(j) = 2m(j)+1}$ for all $j \in \J$. For a chosen $\theta > 1$, let $\cj$ be the output of the data-driven procedure
\eq{
\cj = \min \qty\Big{ j \in \J : \winf\pa{ \bbw_n(j), \bbw_n({j'}) } \le 2\mathfrak{p}(n, m(j'), \delta) \qq{for all} j' \in \J, j' > j },\nn
}
where
\eq{
\mathfrak{p}(n, m, \delta) = \qty\Bigg( \f{2m+1}{an} \wo\qty( \f{ne^{ (1+b)\log(1/\delta) }}{2m+1} ) )^{1/b}.\nn
}
Then, for all $\delta \le e^{-(1+b)(2\mmax+1)}$ and $\overline{\bbv}_n = \bbw_n{(\cj)}$,
\eq{
\pr\qty\bigg( \winf\qty\Big( \dgm\qty\big(\overline{\bbv}_n), \dgm\qty\big(\bbv[\bX]) ) \le 3\mathfrak{p}(n, \theta m^*, \delta) ) \ge 1 - \delta \log_\theta\qty( \f{\theta \mmax}{\mmin} ).\nn
}
\label{corollary:lepski}
\end{theorem}
\endgroup
The proof is identical to that of Theorem~\ref{theorem:lepski} and is therefore omitted. The success of Lepski's method depends on the tightness of the probabilistic bounds, knowledge of the nuisance parameters $a$ and $b$ appearing in these bounds, and a prudent choice of $\mmin$ and $\mmax$. While the calibration of $a$ is beyond the scope of this paper, in $\R^d$ a conservative choice for $b$ is the dimension $d$ of the ambient space. We refer the reader to \cite[Section~4]{chazal2015convergence} for further details.
To address the last bottleneck in Lepski's method, we describe a heuristic method to select the parameter $Q$, which may be used to obtain reasonable choices for $\mmin$ and $\mmax$. %
The method is based on the observation that the blocks $\pb{S_q : q \in [Q]}$ may be resampled by shuffling the sample points $\Xn$ prior to partitioning them. The resulting estimator $\bbv[\Xn, \dnq]$ is an unbiased estimator of the same population quantity whenever ${2m < Q < n}$. Therefore, we may choose the smallest value of $Q$ for which the pairwise bottleneck distance over permutations of the data is minimized. Specifically, if $\Xn^\sigma = \pb{ \Xv_{\s(1)}, \Xv_{\s(2)}, \dots, \Xv_{\s(n)} }$ is a permutation of $\Xn$, then
\eq{
\widehat{Q}_{R} = \argmin_{Q \ge 1} \sum_{1 \le i < j \le N} \winf\qty\Big({ \bbv[{\Xn^{\sigma_i}}, \dnq], \bbv[{ \Xn^{\sigma_j} }, \dnq] }),\nn
}
where, for a chosen number of replicates $N$, $\sigma_i, \sigma_j$ are permutations of $[n]$ for each $i, j \in [N]$. Furthermore, for $\widehat{m}_{R} = \floor{\widehat{Q}_{R}/2}$ and for a constant $C > 1$, the bounds $\mmin$ and $\mmax$ may be taken to be $C\inv\widehat{m}_{R}$ and $C\widehat{m}_{R}$, respectively.
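The resampling heuristic above amounts to a short loop. In the sketch below (Python; all names are ours), `diagram_fn` computes the $\dnq$-weighted persistence diagram for a given shuffle and `winf` is a bottleneck-distance routine, both user-supplied (e.g.\ via Gudhi):

```python
import itertools
import numpy as np

def select_Q_by_resampling(X, candidate_Qs, diagram_fn, winf, n_reps=5, seed=0):
    """For each candidate Q, recompute the MoM-weighted diagram on n_reps
    random shuffles of the data and return the Q minimising the total
    pairwise bottleneck distance between the replicates."""
    rng = np.random.default_rng(seed)
    scores = {}
    for Q in candidate_Qs:
        dgms = [diagram_fn(rng.permutation(X), Q) for _ in range(n_reps)]
        scores[Q] = sum(winf(d1, d2) for d1, d2 in itertools.combinations(dgms, 2))
    return min(scores, key=scores.get)
```

The bounds for Lepski's method can then be taken as $\mmin = C^{-1}\widehat{m}_{R}$ and $\mmax = C\widehat{m}_{R}$ with $\widehat{m}_{R} = \floor{\widehat{Q}_{R}/2}$, as described above.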
\section{Proofs}
\label{sec:proofs}
In this section, we present the proofs for the results in Section~\ref{sec:main}.
\allowdisplaybreaks
\providecommand{\ut}[1]{U^{#1}}
\providecommand{\vt}[1]{V^{#1}}
\providecommand{\wt}[1]{W^{#1}}
\providecommand{\but}[1]{\mathbb{U}^{#1}}
\providecommand{\bvt}[1]{\mathbb{V}^{#1}}
\providecommand{\bwt}[1]{\mathbb{W}^{#1}}
\providecommand{\ball}[1]{B_{f\!, \rho}\pa{#1}}
\providecommand{\xvy}{\xv^*_{\yv}}
\subsection{Proof for Lemma~\ref{lemma:lipschitz}}
\label{proof:lemma:lipschitz}
We begin by noting that for each $q \in [Q]$, the distance function $\dsf_{n, S_q}$ associated with the block $S_q$ is $1-$Lipschitz \cite[Chapter~9.1]{boissonnat2018geometric}. Thus, for each $q \in [Q]$ and for all $\xv, \yv \in \R^d$ we have that
\eq{
0 \le \dsf_{n,q}(\xv) \le \dsf_{n,q}(\yv) + \norm{\xv-\yv},\nn
}
and, therefore, it follows that
\eq{
\median\pb{ \dsf_{n,q}(\xv) : q \in [Q] } \le \median\pb{ \dsf_{n,q}(\yv) : q \in [Q] } + \norm{\xv-\yv}.\nn
}
As a result, we obtain that $\dnq(\xv) \le \dnq(\yv) + \norm{\xv-\yv}$. Exchanging $\xv$ and $\yv$ in the steps above yields the desired result. \null\nobreak\hfill\qedsymbol{}
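The argument above — each blockwise distance function is $1$-Lipschitz, and the pointwise median (like any order statistic) preserves this — can be sanity-checked numerically. The toy sketch below (Python with `numpy`; the point set and block partition are arbitrary) verifies the Lipschitz inequality on random pairs:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 2))                 # sample points
blocks = np.array_split(rng.permutation(30), 5)  # Q = 5 blocks

def d_mom(y):
    """Pointwise median of the blockwise distance functions."""
    return np.median([np.min(np.linalg.norm(X[S] - y, axis=1)) for S in blocks])

# check |d(x) - d(y)| <= ||x - y|| on random pairs
for u, v in rng.standard_normal((200, 2, 2)):
    assert abs(d_mom(u) - d_mom(v)) <= np.linalg.norm(u - v) + 1e-12
```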
\subsection{Proof for Lemma~\ref{lemma:mom}}
\label{proof:lemma:mom}
For $t > 0$, define two events
\eq{
E_1 = \pb{\norminf{T_Q(\pr_n) - T(\pr)} \le t}, \qq{and}
E_2 = \pb{ \#\qty\Big{q \in [Q]: \norminf{ T(\pr_q) - T(\pr) } > t} \le \f{Q}{2} }.\nn
}
First, we show that $E_2 \subseteq E_1$. To this end, for any $\omega \in E_2$ we have
\eq{
\omega \in E_2 &\Longrightarrow \omega \in \pb{\#\qty\Big{q \in [Q]: \norminf{T(\pr_q) - T(\pr)} > t} \le \f{Q}{2}}\n
&\Longrightarrow \omega \in \pb{\#\qty\Big{q \in [Q]: \norminf{T(\pr_q) - T(\pr)} \le t} > Q - \f{Q}{2}}\n
&\Longrightarrow \omega \in \pb{\#\qty\Big{q \in [Q]: \forall \xv \in \R^d, T(\pr)(\xv) - t \le T(\pr_q)(\xv) \le T(\pr)(\xv) + t} > \f{Q}{2}}\n
&\Longrightarrow \omega \in \pb{\forall \xv \in \R^d, T(\pr)(\xv) - t \le \med\qty\big{T(\pr_q)(\xv) : q \in [Q]} \le T(\pr)(\xv) + t}\n
&\Longrightarrow \omega \in \pb{\forall \xv \in \R^d, T(\pr)(\xv) - t \le T_Q(\pr_n)(\xv) \le T(\pr)(\xv) + t}\n
&\Longrightarrow \omega \in \pb{\norminf{T_Q(\pr_n) - T(\pr)} \le t}\n
&\Longrightarrow \omega \in E_1.\nn
}
Therefore, we have $E_2 \subseteq E_1$. Next, note that $E_2$ can be written as
\eq{
E_2 = \pb{\sum_{q=1}^Q \rqt \le \f Q 2},\nn
}
where, for each $q \in [Q]$,
\eq{
\rqt \defeq \mathbbm{1}\qty\Big( \norminf{ T(\pr_q) - T(\pr) } > t ).\nn
}
Since $0 \le \rqt \le 1$ a.s., we have that
\eq{
\sum_{q=1}^Q \rqt &= \sum_{q \in A}\rqt + \sum_{q\in A^c}\rqt \le \sum_{q \in A}\rqt + \abs{A^c} \le \sum_{q \in A}\rqt + m.\nn
}
As a result, we can further bound the probability of $E_2$ from below as
\eq{
\pr(E_2) \ge \pr\pa{\sum_{q \in A}\rqt \le \f Q 2 - m}.
\label{mom-lemma-1}
}
Combining \eref{mom-lemma-1} with the fact that $E_2 \subseteq E_1$, we obtain
\eq{
\pr\qty\bigg({ \norminf{ T_Q(\pr_n) - T(\pr) } > t }) = \pr(E_1^c) \le \pr(E_2^c) \le \pr\pa{\sum_{q \in A}\rqt > \f Q 2 - m},\nn
}
which gives us the desired result. \null\nobreak\hfill\qedsymbol{}
\subsection{Proof of Theorem~\ref{theorem:momdist-sublevel}}
\label{proof:theorem:momdist-sublevel}
First, we note from the stability of persistence diagrams that,
\eq{
\pr\qty\bigg{\Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) > t} \le \pr\pb{\norminf{\dnq - \dx} > t}.
\label{mom-concentration-stability}
}
Therefore, it suffices to control the probability of the event $\pb{\norminf{\dnq - \dx} > t}$. To this end, let $A = \pb{q\in [Q]: S_q \cap \Ym = \varnothing}$ be the set of blocks which contain no outliers. Since each of the $m$ outliers can intersect at most one block, the assumption on $Q$, i.e., $2m < Q < n$, implies that $\abs{A} \ge Q - m > Q/2$. For $q \in [Q]$, let $\rqt$ be given by
\eq{
\rqt = \mathbbm{1}\qty\Big( { \norminf{\mathsf{d}_{n, q} - \dx} } > t ).\nn
}
On application of Lemma~\ref{lemma:mom} to the estimator $\dnq$, it follows that
\eq{
\pr\qty\bigg{{ \norminf{\mathsf{d}_{n, Q} - \dx} } > t} \le \pr\pa{\sum_{q \in A}\rqt > \f Q 2 - m}.
\label{eq:momdist-sublevel1}
}
Since $S_q \subseteq \Xnm$ for all $q \in A$, it follows that $\qty{\rqt: q \in A}$ are i.i.d.~$\textup{Bernoulli}\qty\big(p(t; n, Q))$ random variables, where
\eq{
p(t; n, Q) = \E\qty(\rqt) = \pr\qty\Big( { \norminf{\mathsf{d}_{n, q} - \dx} } > t ).\nn
}
For the remainder of the proof we need two key ingredients: (i) we need an upper bound for $\E(\rqt)$, and (ii) we need a tight bound for the binomial tail probability in \eref{eq:momdist-sublevel1}.
\textbf{Bound for $p(t; n, Q)$.} From \citet[Theorem~2]{chazal2015convergence}, under the $(a, b)-$standard condition it follows that
\eq{
p(t; n, Q) \le \f{2^b}{at^b}\exp({ -\nq at^b }) = \exp({ -\nq at^b - \log\qty\big(at^b) + b\log2 }).
\label{eq:momdist-pt}
}
\textbf{Binomial tail probability bound.} For $0 < \e < 1$, using the Chernoff-Hoeffding bound from Lemma~\ref{lemma:chernoff-hoeffding} yields,
\eq{
\pr\pa{\f{1}{\abs{A}}\sum_{q \in A}\rqt > \e} \le \exp\qty\Bigg( \abs{A} \pa{\f{2}{e} + \e \log p(t; n, Q)}).\nn
}
Using the bound for $p(t; n, Q)$ from \eref{eq:momdist-pt}, we obtain
\eq{
\pr\pa{\f{1}{\abs{A}}\sum_{q \in A}\rqt > \e} &\le \exp\qty\Bigg( \abs{A} \qty\Big( \f{2}{e} + b\e\log2 - \e \nq at^b - \e \log\!\qty\big(at^b)) )\n
&\le \exp\qty\Bigg( \abs{A} \qty\Big( 1 + b\e - \e \nq at^b - \e \log\!\qty\big(at^b)) )\n
&\le \exp\qty\bigg( \abs{A} \qty\Big( 1 + b\e - \e\Ot) ),\nn
}
where, in the last line, we use $\Ot \defeq (n/Q)at^b + \log(at^b)$ for brevity. When $t$ satisfies the condition that
\eq{
\Ot \ge \f{2(1+b\e)}{\e}
\label{eq:Ot-condition1}
}
it follows that
\eq{
1 + b\e - \e\Ot \le -\f\e2 \Ot,\nonumber
}
and we get
\eq{
\pr\pa{\f{1}{\abs{A}}\sum_{q \in A}\rqt > \e} \le \exp\qty\Bigg( -\f{\abs{A}\e}{2} \Ot ).\nn
}
By setting $\delta$ equal to the r.h.s. of the inequality above, we obtain
\eq{
\Ot = \f{2\log(1/\delta)}{\abs{A}\e}.
\label{eq:Ot-Ae}
}
When $\delta \le e^{-(1+b)Q}$, using the fact that $Q > \abs{A}$ and $0 < \e < 1$, it follows that
\eq{
\Ot = \f{2\log(1/\delta)}{\abs{A}\e} \ge \f{2(1+b)Q}{\abs{A}\e} \ge \f{2(1+b\e)}{\e},\nn
}
and, therefore, the condition in \eref{eq:Ot-condition1} is satisfied. Consequently, for $\delta \le e^{-(1+b)Q}$, on rearranging the terms in \eref{eq:Ot-Ae} we obtain
\eq{
\pr\pa{\sum_{q \in A}\rqt > \f{2\log(1/\delta)}{\Ot}} \le \delta.
\label{eq:momdist-sublevel-bound1}
}
Comparing \eref{eq:momdist-sublevel1} with \eref{eq:momdist-sublevel-bound1} we conclude that
\eq{
\pr\pa{\sum_{q \in A}\rqt > \f{Q-2m}{2}} = \pr\pa{\sum_{q \in A}\rqt > \f{2\log(1/\delta)}{\Ot}} \le \delta,\nn
}
by setting
\eq{
\f{2\log(1/\delta)}{\Ot} = \f{Q-2m}{2} \Longleftrightarrow \Ot = \f{4{\log(1/\delta)}}{Q-2m}.\nn
}
Since $\Ot = \nq at^b + \log(at^b)$, this is equivalent to
\eq{
\exp( \nq a t^b ) \nq at^b = \nq \exp({ \f{4{\log(1/\delta)}}{Q-2m} }).\nn
}
Moreover, using the fact that the Lambert $\wo$ function is given by the identity $\wo(x)e^{\wo(x)} = x$ \citep{hoorfar2008inequalities}, we obtain that
\eq{
t = \qty\Bigg( \f{Q}{an} \wo\qty( \nq \exp{ \f{4{\log(1/\delta)}}{Q-2m} })) ^{1/b}.
\label{eq:momdist-t-constraint}
}
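As a sanity check on this inversion (a restatement of the step just taken, not a new one), set $u \defeq \nq a t^b$, so that the previous display reads $u e^u = \nq \exp\qty\big(4\log(1/\delta)/(Q-2m))$. The defining identity of $\wo$ then gives
```latex
\eq{
u = \wo\qty( \nq \exp{ \f{4\log(1/\delta)}{Q-2m} } )
\quad\Longrightarrow\quad
t = \qty( \f{u}{\nq a} )^{1/b}
= \qty\Bigg( \f{Q}{an} \wo\qty( \nq \exp{ \f{4\log(1/\delta)}{Q-2m} } ) )^{1/b},\nn
}
```
recovering \eref{eq:momdist-t-constraint}, since $\nq = n/Q$.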
Therefore, from \eref{mom-concentration-stability} and \eref{eq:momdist-sublevel1}, for $t$ satisfying \eref{eq:momdist-t-constraint} and for all $\tau \ge t$ we have that
\eq{
\pr\qty\bigg{\Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) > \tau} \le \pr\qty\bigg{\Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) > t} \le \delta.
\label{eq:momdist-sublevel2}
}
Since $\delta \le e^{-(1+b)Q}$, observe that
\eq{
\f{4\log(1/\delta)}{Q-2m} \ge \f{4(1+b)Q}{Q-2m} \ge 4(1+b) > 1.\nonumber
}
Furthermore, using the fact that $\wo(z) \le \log(z)$ for $z > e$ \cite[Eq.~1.1]{hoorfar2008inequalities}, we may take $\tau$ to be
\eq{
t = \qty\Bigg( \f{Q}{an} \wo\qty( \nq \exp{ \f{4{\log(1/\delta)}}{Q-2m} })) ^{1/b} &\le \qty\Bigg( \f{Q}{an} \log\qty( \nq \exp{ \f{4{\log(1/\delta)}}{Q-2m} })) ^{1/b}\n
&= \qty\Bigg( \f{Q \log(n/Q)}{an} + \f{4Q{\log(1/\delta)}}{a(Q-2m)n} ) ^{1/b} \defeq \tau.\nn
}
Plugging this into \eref{eq:momdist-sublevel2}, we obtain the desired result.
For the second claim in the theorem, by inverting the relationship between $t$ and $\delta$ in \eref{eq:momdist-t-constraint} and using the fact that $\wo(z)$ is an increasing function for $z > 0$, observe that the constraint on $\delta$ equivalently specifies a constraint on $t$, i.e.,
\eq{
\delta \le e^{-(1+b)Q} \Longleftrightarrow t \ge \qty\Bigg( \f{Q}{an} \wo\qty( \nq \exp{ \f{4{(1+b)Q}}{Q-2m} })) ^{1/b}.\nn
}
A sufficient condition for this to hold is that
\eq{
t \ge t(n, Q) \defeq \qty\Bigg( \f{Q \log(n/Q)}{an} + \f{4(1+b)Q^2}{a(Q-2m)n} )^{1/b}.\nonumber
}
Therefore, from \eref{eq:momdist-sublevel2} we have that for all $t \ge t(n, Q)$
\eq{
\pr\qty\bigg{\Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) > t} \le \exp( - \qty(\f{Q-2m}{4})\Ot ).\nn
}
Bounding $\pr\qty\big{\Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) > t}$ by its maximum value of $1$ on the interval $[0,t(n, Q)]$, we have
\eq{
\E\qty[ \Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) ] &= \int\limits_0^\infty\pr\qty\bigg{\Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) > t} dt\n
&\le t(n, Q) + \int\limits_{t(n, Q)}^\infty \exp( - \qty(\f{Q-2m}{4})\Ot ) dt.\nonumber
}
By taking $w = \Ot$ and setting $r_n = 4(1+b)Q/(Q-2m)$, we further obtain
\eq{\label{eq:momdist-ebound1}
&\E\qty\Big[ \Winf\qty\big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) ] \n
&\qq{} \qq{} \lesssim t(n, Q) + \pa{\f{Q}{n}}^{1/b} \int\limits_{r_n}^{\infty} \f{e^{-w/4}\pa{\wo\pa{ \f{n}{Q} e^w }}^{1/b}}{w+1}dw\nn\\[5pt]
&\qq{} \qq{} \num{\lesssim}{ii} t(n, Q)%
+ \qty( \f{\log(n/Q)}{n/Q})^{1/b} \underbrace{\int\limits_{r_n}^\infty\f{e^{-w/4}}{w+1}dw}_{\circled{a}} %
+ \pa{\f{Q}{n}}^{1/b} \underbrace{\int\limits_{r_n}^\infty\f{e^{-w/4}w^{1/b}}{w+1}dw}_{\circled{b}},
}
where (ii) follows from the fact that $\wo(z) \le \log(z)$ for $z > e$, together with an application of Lemma~\ref{lemma:useful-inequalities}~(iii) when $b\ge 1$, or of Lemma~\ref{lemma:useful-inequalities}~(i) with the additional factor $2^{1/b - 1}$ absorbed into the symbol $\lesssim$ when $b<1$. The term $\circled{a}$ can be bounded above using the incomplete $\Gamma$ function as,
\eq{
\circled{a} = \int\limits_{r_n}^\infty\f{e^{-w/4}}{w+1}dw = e^{1/4} \int_{(r_n+1)/4}^\infty v^{-1}e^{-v}dv = e^{1/4} \Gamma\qty\big(0, (r_n+1)/4) < \infty, \nn
}
Similarly, using the fact that $w+1 > 1$, the term $\circled{b}$ may be bounded above as,
\eq{
\circled{b} = \int\limits_{r_n}^\infty\f{e^{-w/4}w^{1/b}}{w+1}dw \le \int\limits_{r_n}^\infty{e^{-w/4}w^{1/b}}dw = 4^{1+1/b}\,\Gamma(1 + b\inv) \int_{r_n}^{\infty}\pi(v)dv \le 4^{1+1/b}\,\Gamma(1 + b\inv) < \infty,\nn
}
where $\pi$ is the probability density function of the $\Gamma\qty\big(1+b\inv, 1/4)$ distribution with shape $1+b\inv$ and rate $1/4$. Therefore, the inequality in \eref{eq:momdist-ebound1} becomes
\eq{
\E\qty[ \Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) ] \lesssim \qty\bigg(\f{\log(n/Q)}{n/Q} + \f{Q^2}{(Q-2m)n} )^{1/b} + \qty\bigg(\f{Q}{n})^{1/b}.\nn
}
When the number of outliers grows with $n$ as $m_n = cn^\e$ where $0 \le \e < 1$, let the number of blocks be $Q_n = 3c n^\beta$, where $\e \le \beta < 1$. Therefore,
\eq{
\E\qty[ \Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) ] &\lesssim \inf_{\e \le \beta < 1} \qty\bigg(\f{\log(n/n^\beta)}{n/n^\beta} + \f{n^{2\beta}}{(3n^\beta-2n^\e)n} )^{1/b} + \qty\bigg(\f{n^\beta}{n})^{1/b}\n[5pt]
&\lesssim \qty( \f{\log n}{n^{1-\e}} )^{1/b},\nn
}
which gives us the desired result. \null\nobreak\hfill\qedsymbol{}
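To illustrate the resulting rate (the parameter choices here are ours, for illustration only), take $b = 1$ and $m_n = c\sqrt{n}$, i.e., $\e = 1/2$, and choose $\beta = \e$, so that $Q_n = 3c\sqrt{n}$. Then, absorbing constants depending on $c$ into $\lesssim$,
```latex
\eq{
\E\qty[ \Winf\qty\Big({\dgm\pa{\dnq}, \dgm\pa{\dx}}) ]
\lesssim \f{\log\qty(n/\sqrt{n})}{n/\sqrt{n}}
+ \f{n}{\qty(3\sqrt{n} - 2\sqrt{n})n}
+ \f{\sqrt{n}}{n}
\lesssim \f{\log n}{\sqrt{n}},\nn
}
```
matching the general bound $\qty(\log n / n^{1-\e})^{1/b}$ in this case.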
\subsection{Proof of Lemma~\ref{lemma:sublevel-equivalence}}
\label{proof:lemma:sublevel-equivalence}
\begingroup
\providecommand{\rd}{\R^d}
\renewcommand{\a}{\alpha}
For simplicity, let $f=\dnq$ denote the \md{} function. By definition, $V[f]$ and $V[\rd,f]$ are $( \id, \a )-$interleaved if the following relationship holds
\eq{
V^t[f] \subseteq V^t[\rd, f] \subseteq V^{\a(t)}[f].\nn
}
The first inclusion is straightforward since
\eq{
V^t[f] \subseteq \bigcup_{\xv \in V^t[f]}B_f(\xv, t) = \bigcup_{\xv \in \rd}B_f(\xv, t) = V^t[\rd,f].\nn
}
For the second inclusion, suppose $\xv \in V^t[\rd, f]$, i.e., there exists $\yv \in \rd$ such that $\norm{\xv-\yv} \le r_{f,\yv}(t)$. It suffices to show that $\xv \in V^{\a(t)}[f]$. To this end, note that since $\dnq$ is $1-$Lipschitz by Lemma~\ref{lemma:lipschitz} it follows that
\eq{
f(\xv) &\le f(\yv) + \norm{\xv-\yv}\n
&\le f(\yv) + r_{f, \yv}(t)\n
&= f(\yv) + (t^p - f(\yv)^p)^{\f 1p}\n
&\num{\le}{i} 2^{\f{p-1}{p}}\qty( f(\yv)^p + (t^p - f(\yv)^p) )^{\f 1p} = 2^{\f{p-1}{p}}t,\nn
}
where (i) follows from an application of Lemma~\ref{lemma:useful-inequalities}~(iii). Since $f(\xv) \le 2^{\f{p-1}{p}}t = \a(t)$, it follows that $\xv \in V^{\a(t)}[f]$, and the result follows. When $p=1$, note that $\a(t) = t$, and therefore $V[f] = V[\rd, f]$.
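For instance, in the case $p = 2$ the interleaving function is $\a(t) = \sqrt{2}\,t$, and the chain of inequalities above specializes to
```latex
\eq{
f(\xv) \le f(\yv) + \qty\big( t^2 - f(\yv)^2 )^{\f12}
\le \sqrt{2}\,\qty\big( f(\yv)^2 + t^2 - f(\yv)^2 )^{\f12}
= \sqrt{2}\,t,\nn
}
```
so that $V^t[\rd, f] \subseteq V^{\sqrt{2}t}[f]$, while for $p = 1$ the two filtrations coincide.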
\endgroup
\null\nobreak\hfill\qedsymbol{}
\subsection{Proof of Lemma~\ref{lemma:ab-filtration}}
\label{proof:lemma:ab-filtration}
Since $\bX \subseteq \bY$, the inclusion $\Vt[][\bX, f][\rho] \subseteq \Vt[][\bY, f][\rho]$ holds trivially. For the next part, let ${\vt{t} = \Vt[][\bX, f][\rho]}$ and ${\ut{t} = \Vt[][\bY, f][\rho]}$ denote the respective $f$--weighted filtrations, to avoid notational overload. In order to show the second inclusion, i.e., $\ut{t} \subseteq \vt{\alpha(t)}$, consider $\zv \in \ut{t}$. Then, there exists $\yv \in \bY$ such that $\zv \in \ball{\yv, t}$. If $\yv \in \bX \subset \bY$, then it immediately follows that ${\zv \in \vt{t} \subseteq \vt{\alpha(t)}}$. For the remaining case $\yv \in \bY \setminus \bX$, it is sufficient to show that there exists $\xv \in \bX$ such that ${\zv \in \ball{\xv, \alpha(t)}}$.
To this end, let $\xvy = \arginf_{\xv \in \bX}\rho(\xv, \yv)$ be the projection of $\yv$ onto $\bX$ via $\rho$. Then the following two cases arise: (I) $\rho(\xvy, \zv) \le \rho(\xvy, \yv)$, and (II) $\rho(\xvy, \zv) \ge \rho(\xvy, \yv)$ (see Figure~\ref{fig:proof:ab-filtration1}).
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./figures/illustrator/fig4_1.jpg}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\bigskip
\includegraphics[width=\textwidth]{./figures/illustrator/fig4_2.jpg}
\end{subfigure}
\caption{Illustration of Case I (Left) and Case II (Right).}
\label{fig:proof:ab-filtration1}
\end{figure}
\textbf{Case I.} The distance between $\xvy$ and $\zv$ satisfies
\eq{
\rho(\xvy, \zv) \le \rho(\xvy, \yv) &\num{\le}{i} f(\yv) + a\n
&\num{\le}{ii} \qty\big({t^p - \rho(\yv, \zv)^p})^{\fpp} + a\n
&\le t + a,\nn
}
where (i) follows from the assumption on $f$, and (ii) follows from the fact that if $\zv \in \ball{\yv, t}$, then $\rho(\yv, \zv) \le \rfx[f][\yv][t] = \pa{t^p - f(\yv)^p}^{1/p}$. Furthermore, from Lemma~\ref{lemma:useful-inequalities}~(vi) we obtain
\eq{
\rho(\xvy, \zv) &\le \qty\Big( (t+a + f(\xvy))^p - f(\xvy)^p )^{\fpp}\n
&\le \qty\Big( \qty\big(t+a + \sup_{\xv \in \bX}f(\xv))^p - f(\xvy)^p )^{\fpp}\n
&\le \qty\Big( \qty\big(2^{1-\fpp}t+a + \sup_{\xv \in \bX}f(\xv))^p - f(\xvy)^p )^{\fpp}\n
&= \qty\big( \alpha(t)^p - f(\xvy)^p )^{\fpp} = \rfx[f][\xvy][\alpha(t)],\nn
}
where the last inequality holds because $2^{1-\fpp} \ge 1$. The last line implies that ${\zv \in \ball{\xvy, \alpha(t)} \subseteq \vt{\alpha(t)}}$.
\textbf{Case II.} For $r = \rho(\xvy, \yv)$ let $\yv'$ be the projection of $\zv$ onto $\partial B\pa{\xvy, r}$, i.e.,
$$
\yv' = \arginf_{\xv' \in \partial B\pa{\xvy, r}}\rho(\xv',\zv).
$$
The point $\yv'$ satisfies the following three properties: (PI)~${\rho(\xvy, \yv') = \rho(\xvy, \yv)}$, since $\yv' \in \partial\ball{\xvy, r}$; (PII)~$\rho(\zv, \yv') \le \rho(\zv, \yv)$ by definition of $\yv'$; and (PIII)~$\rho(\xvy, \yv') + \rho(\yv', \zv) \ge \rho(\xvy, \zv)$ from the triangle inequality.
Since $\zv \in \ball{\yv, t}$, when $\rho(\xvy, \yv) \le a$ we may use the triangle inequality to obtain
\eq{
\rho(\xvy, \zv) \le \rho(\xvy, \yv) + \rho(\zv, \yv) \le a + \qty( t^p - f(\yv)^p )^{\fpp} \le a + t \le a + 2^{1-\fpp}t.\label{eq:rho-1}
}
Alternatively, when $\rho(\xvy, \yv) > a$ we obtain the following inequality,
\eq{
t^p &\ge \rho(\yv, \zv)^p + f(\yv)^p \n
&\num{\ge}{iii} \rho(\zv, \yv)^p + \pa{\rho(\xvy, \yv) - a}^p\n
&\num{=}{iv} \rho(\zv, \yv)^p + \pa{\rho(\xvy, \yv') - a}^p\n
&\num{\ge}{v} \rho(\zv, \yv')^p + \pa{\rho(\xvy, \yv') - a}^p\n
&\num{\ge}{vi} \qty\Big(\rho(\xvy, \zv) - \rho(\xvy, \yv'))^p + \qty\Big(\rho(\xvy, \yv') - a)^p\n
&\num{\ge}{vii} 2^{1-p}\qty\Big( \rho(\xvy, \zv)- a)^p,
\label{eq:proof:ab-filtration1}
}
where (iii) holds from the assumption on $f$, (iv--vi) follow from (PI--PIII) respectively, and (vii) uses Lemma~\ref{lemma:useful-inequalities}\,(i). Rearranging the terms of \eref{eq:proof:ab-filtration1} we get $\rho(\xvy, \zv) \le a + 2^{1-\fpp}t$. Therefore, from \eref{eq:rho-1} and \eref{eq:proof:ab-filtration1}, in case (II) we have that
\eq{
\rho(\xvy, \zv) &\le 2^{1-\fpp}t + a\n
&\num{\le}{viii} \qty\Big( \qty\big(2^{1-\fpp}t + a + \sup_{\xv \in \bX}f(\xv))^p - f(\xvy)^p )^{\fpp}\n
&= \rfx[f][\xvy][\alpha(t)],\nn
}
where (viii) uses Lemma~\ref{lemma:useful-inequalities}~(vi). Similar to case (I), we obtain $\zv \in \ball{\xvy, \alpha(t)} \subseteq \vt{\alpha(t)}$. \null\nobreak\hfill\qedsymbol{}
\subsection{Proof of Lemma~\ref{lemma:ab-module}}
\label{proof:lemma:ab-module}
\providecommand{\xvo}{{\xv_{o}}}
Let $t(\bX) = \inf\pb{t > 0: \bigcap_{\xv \in \bX}\ball{\xv, t} \neq \varnothing}$, and let $\xvo \in \bigcap_{\xv \in \bX}\ball{\xv, t(\bX)}$. To ease the notation, let $\ut{t} = \Vt[t][\bY, f][\rho]$ denote the usual $f$-weighted filtration, and let $\wt{t}$ be defined as
\eq{
\wt{t} = \pb{\bigcup_{\xv \in \bX}\ball{\xv, \beta(t)}} \cup \pb{\bigcup_{\yv \in \bY \setminus \bX}\ball{\yv, t}},\nn
}
such that $\ut{t} \subset \wt{t} \subset \ut{\beta(t)}$. With this background, the proof closely follows that of \citet[Proposition~4.8]{anai2019dtm}. Specifically, the proof is based on the following outline:
\begin{enumerate}[label=\protect\circled{\arabic*}]
\item We first establish that for any $\yv \in \bY \setminus \bX$, there exists $\xv = \xvy \in \bX$ such that for all $t \ge t(\bX)$, $\ball{\yv, t} \cup \ball{\xv, \beta(t)}$ is star-shaped around $\xvo$. Since this holds for all $\yv \in \bY \setminus \bX$, it also holds for $\bigcup_{\yv \in \bY \setminus \bX}\ball{\yv, t}$, and, therefore, $\wt{t}$ is star-shaped and contractible to $\xvo$.
\item The inclusion map $\iota_t: \ut{t} \hookrightarrow \ut{\beta(t)}$ can be decomposed as $\iota_t = j_t \circ \kappa_t$ where ${j_t: \ut{t} \hookrightarrow \wt{t}}$ and ${\kappa_{t}: \wt{t} \hookrightarrow \ut{\beta(t)}}$. Since $\wt{t}$ is star-shaped and contractible, i.e., $\wt{t} \sim \pb{\xvo}$, the linear map between the homology groups induced by $\kappa_{t}$, i.e., $w_t: \bwt{t} \rightarrow \but{\beta(t)}$, will be trivial.
\item The interleavings $\alpha(t)$ (Lemma~\ref{lemma:ab-filtration}) and $\beta(t)$ are combined to provide the bound in $\Winf$.
\end{enumerate}
\textbf{Claim \circled{1}.} Let $\yv \in \bY \setminus \bX$. We need to show that there exists $\xv \in \bX$ such that ${\ball{\xv, \beta(t)}\cup\ball{\yv, t}}$ is star-shaped around $\xvo$, i.e., for any $\zv \in \ball{\yv, t}$ the curve $\Gamma[\xvo, \zv]$ is contained inside the set ${\ball{\xv, \beta(t)}\cup\ball{\yv, t}}$. See Figure~\ref{fig:lemma:ab-module-claim1}.
\begin{figure}
\includegraphics[width=\textwidth]{./figures/illustrator/fig5_4.pdf}
\caption{Illustration of Claim \protect\circled{1}.}
\label{fig:lemma:ab-module-claim1}
\end{figure}
To this end, let $\xv = \arginf_{\zv \in \bX}\rho(\zv, \yv)$ be the projection of $\yv$ onto $\bX$. Note that, from the definition of $\xvo$, $\xvo \in \ball{\xv, t}$ for all $t \ge t(\bX)$. For simplicity, let $S^t = {\ball{\xv, \beta(t)}\cup\ball{\yv, t}}$. Additionally, let $\pi(\xv)$ and $\pi(\yv)$ be the projections of $\xv$ and $\yv$ onto $\Gamma[\xvo, \zv]$, respectively, i.e.,
\eq{
\pi(\xv) = \arginf_{\xv' \in \Gamma[\xvo, \zv]}\rho(\xv', \xv),\nn
}
and similarly for $\pi(\yv)$. By definition, $\rho(\xv, \pi(\xv)) \le \rho(\xv, \xvo)$ and ${\rho(\yv, \pi(\yv)) \le \rho(\yv, \zv)}$, and consequently, $\pi(\yv) \in \ball{\yv, t}$. This implies that $\Gamma[\pi(\yv), \zv] \subseteq S^t$. What remains to be established is that $\Gamma[\xvo, \pi(\yv)] \subseteq S^t$. In order to show this, note that it is sufficient to show that $\pi(\yv) \in \ball{\xv, \beta(t)}$. Indeed, if this holds, then $\Gamma[\xvo, \pi(\yv)] \subseteq \ball{\xv, \beta(t)} \subseteq S^t$, and it will follow that ${\Gamma[\xvo, \pi(\yv)] \cup \Gamma[\pi(\yv), \zv] = \Gamma[\xvo, \zv] \subseteq S^t}$.
Let $\tau = \rho(\yv, \pi(\yv))$. Since $\pi(\yv) \in \ball{\yv, t}$, when $\rho(\xv, \yv) > a$ it follows that
\eq{
\tau &\le \rfx[f][\yv][t]\n
&\le \qty\Big(t^p - f(\yv)^p)^{\fpp}\n
&\le \qty\Big(t^p - \qty\big(\rho(\xv, \yv) - a)^p)^{\fpp},\nn
}
where the last inequality follows from the assumption on $f$. Thus, we have
\eq{
\rho(\xv, \yv) &\le \qty\Big(t^p - \tau^p)^{\fpp} + a.
\label{eq:lemma:ab-module-eq2}
}
Alternatively, when $\rho(\xv, \yv) \le a$, \eref{eq:lemma:ab-module-eq2} holds trivially. Since $\pi(\xv) \in \ball{\xv, t}$ and $\rho(\xv, \pi(\xv)) \le \rho(\xv, \xvo)$, it follows that
\eq{
\rho(\xv, \pi(\xv)) \le t(\bX).
\label{eq:lemma:ab-module-eq3}
}
Since $\rho=\norm{\cdot}$, \citet[Lemma~B.2]{anai2019dtm} applies, which, combined with Eqs.~\eqref{eq:lemma:ab-module-eq2} and~\eqref{eq:lemma:ab-module-eq3}, yields
\eq{
\rho(\xv, \pi(\yv))^2 &\num{\le}{i} \qty\Big(\qty\Big(t^p - \tau^p)^{\fpp} + a)^2 + \tau(2t(\bX) - \tau)\n
&\le \qty\Big(t^p - \tau^p)^{\f2p} + \tau(2t(\bX) - \tau) + a^2 + 2a\qty\Big(t^p - \tau^p)^{\fpp}\n
&\num{\le}{ii} (t + \kappa t(\bX))^2 + a^2 + 2at\n
&\le (t + \kappa t(\bX))^2 + a^2 + 2a(t+\kappa t(\bX))\n
&= \qty\big(t+a+\kappa t(\bX))^2,\nn
}
where (i) is a consequence of \citet[Lemma~B.2]{anai2019dtm}, and (ii) follows from \citet[Lemma~B.3]{anai2019dtm}, noting that $t^p - \tau^p \le t^p$ since $\tau \le t$, with $\kappa \defeq 1-\fpp$. Additionally, from Lemma~\ref{lemma:useful-inequalities}~(vi) we obtain
\eq{
\rho(\xv, \pi(\yv)) &\le t+a+\kappa t(\bX) \le \qty\Big( \qty\big(t+a+\kappa t(\bX) + \sup_{\xv \in \bX}f(\xv))^p - f(\xv)^p )^{\fpp} = \rfx[f][\xv][\beta(t)].\nn
}
This implies that $\pi(\yv) \in \ball{\xv, \beta(t)}$, and establishes claim \circled{1}.
For claim \circled{2}, note that since $\wt{t} \sim \pb{\xvo}$, for the $k$th homology group $\bwt{t}$, we have that $\bwt{t} \simeq \mathbf{F}$ for $k=0$, and $\bwt{t} \simeq \pb{\mathsf{0}}$ for $k > 0$. Therefore, the map $w_t: \bwt{t} \rightarrow \but{\beta(t)}$ is trivial, and consequently, so is the linear map $\but{t} \rightarrow \but{\beta(t)}$.
In order to show claim \circled{3}, observe that the persistence modules $\but{ }$ and $\bvt{ }$ are
\eq{
\begin{cases}
\text{ $(\id,\alpha)$--interleaved } & \text{ for all $t$ and for $\alpha: t \mapsto 2^{1-\fpp}t + a + \sup_{\xv \in \bX}f(\xv)$}\nn \\
\text{ $(\id,\beta)$--interleaved } & \text{ for $t \ge t(\bX)$ and for $\beta: t \mapsto t+a+\kappa t(\bX) + \sup_{\xv \in \bX}f(\xv)$}\nn
\end{cases}.
}
When $t\le t(\bX)$, from \citet[Lemma~B.1]{anai2019dtm},
\eq{
\alpha(t) &= t + \qty\Big(2^{1-\fpp}-1)t + a + \sup_{\xv \in \bX}f(\xv) \le t + \kappa t(\bX) + a + \sup_{\xv \in \bX}f(\xv) = \beta(t).\nn
}
Thus, $\alpha(t) \le \beta(t)$ for $t \le t(\bX)$. Since $\beta: t \mapsto t + c(\bX)$ is an additive interleaving for $c(\bX) = \kappa t(\bX) + a + \sup_{\xv \in \bX}f(\xv)$, this implies that
\eq{
\Winf\qty\big(\dgm\pa{\but{ }}, \dgm\pa{\bvt{ }}) \le c(\bX),\nn
}
which establishes claim \circled{3}. \null\nobreak\hfill\qedsymbol{}
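As a special case worth recording: when $p = 1$, we have $\kappa = 1 - \fpp = 0$, so the additive constant simplifies to $c(\bX) = a + \sup_{\xv \in \bX}f(\xv)$ and the bound above reads
```latex
\eq{
\Winf\qty\big(\dgm\pa{\but{ }}, \dgm\pa{\bvt{ }}) \le a + \sup_{\xv \in \bX}f(\xv),\nn
}
```
in particular, for $p = 1$ the bound no longer depends on the radius $t(\bX)$.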
\subsection{Proof of Theorem~\ref{theorem:momdist-stability}}
\label{proof:theorem:momdist-stability}
We begin by establishing the following result:
\eq{
\Winf\qty\bigg( \bVt[ ][\Xn, \dnq], \bVt[ ][\Xnm, \dnq] ) \le \sup_{\xv \in \Xnm}\dnq(\xv) + \qty( 1 - \f1p )t(\Xnm).\nn
}
Observe that from Lemma~\ref{lemma:ab-filtration} and Lemma~\ref{lemma:ab-module}, it suffices to show that for every $\yv \in \Ym$ the MoM-Dist function $\dnq$ satisfies the property that
\eq{
\inf_{\xv \in \Xnm}\norm{\xv - \yv} \le \dnq(\yv).\nn
}
To this end, let $A = \pb{ q \in [Q] : S_q \cap \Ym = \varnothing }$ be the blocks containing no outliers. For $\yv \in \Ym$ and every $q \in A$, we have that $S_q \subseteq \Xnm$, and therefore
\eq{
\inf_{\xv \in \Xnm} \norm{\xv - \yv} \le \inf_{\xv \in S_q}\norm{\xv - \yv} = \mathsf{d}_{n,q}(\yv).\nn
}
Since this holds for every $q \in A$, and since $\abs{A} > Q/2$ by the assumption $2m < Q$, strictly more than half of the values $\qty\big{\mathsf{d}_{n,q}(\yv) : q \in [Q]}$ are bounded below by $\inf_{\xv \in \Xnm}\norm{\xv - \yv}$. By the pigeonhole principle, the same lower bound holds for their median, i.e.,
\eq{
\inf_{\xv \in \Xnm} \norm{\xv - \yv} \le \med\qty\Big{ \mathsf{d}_{n, q}(\yv) : q \in [Q] },\nn
}
which implies that $\inf_{\xv \in \Xnm}\norm{\xv - \yv} \le \dnq(\yv)$ for every $\yv \in \Ym$. Therefore, taking $a=0$ in Lemma~\ref{lemma:ab-filtration} and Lemma~\ref{lemma:ab-module} we obtain
\eq{
\Winf\qty\bigg( \bVt[ ][\Xn, \dnq], \bVt[ ][\Xnm, \dnq] ) \le \sup_{\xv \in \Xnm}\dnq(\xv) + \qty( 1 - \f1p )t(\Xnm).
\label{eq:stability-1}
}
Turning our attention to the quantity appearing in the statement of the theorem, note that an application of the triangle inequality yields
\eq{
\Winf\qty\bigg( \bVt[ ][\Xn, \dnq], \bVt[ ][\Xnm, \dsf_{n-m}] ) &\le \Winf\qty\bigg( \bVt[ ][\Xn, \dnq], \bVt[ ][\Xnm, \dnq] ) \n
&\quad\quad+ \Winf\qty\bigg( \bVt[ ][\Xnm, \dnq], \bVt[ ][\Xnm, \dsf_{n-m}] )\n
&\num{\le}{$\star$} \sup_{\xv \in \Xnm}\dnq(\xv) + \qty( 1 - \f1p )t(\Xnm) + \norminf{\dnq - \dsf_{n-m}},\nn
}
where the first two terms in ($\star$) follow from \eref{eq:stability-1} and the last term follows from Lemma~\ref{lemma:anai-et-al}~(i). This gives us the desired result. Furthermore, when $p=1$, note that $1 - 1/p = 0$, which gives the tighter bound in this case. \null\nobreak\hfill\qedsymbol{}
\subsection{Proof of Theorem~\ref{theorem:momdist-consistency}}
\label{proof:theorem:momdist-consistency}
We begin by noting that $\bvt{}[\bX] = \bvt{}[\bX, \dx]$; indeed, the distance function satisfies $\dx(\xv) = 0$ for all $\xv \in \bX$. We may further conclude that
\eq{
\sup_{\xv \in \bX}\dx(\xv) = 0.
\label{eq:supdist=0}
}
The bottleneck distance between $\bVt[ ][\Xnm \cup \Ym, \dnq]$ and $\bvt{}[\bX]$ may be bounded above as
\eq{
\Winf\qty\bigg( \bVt[ ][\Xnm \cup \Ym, \dnq], \bvt{}[\bX] ) &\le \Winf\qty\bigg( \bVt[ ][\Xnm \cup \Ym, \dnq], \bVt[ ][\Xnm, \dnq] ) && =:\circled{a} \n
&\quad+ \Winf\qty\bigg( \bVt[ ][\Xnm, \dnq], \bVt[ ][\Xnm, \dx]) && =:\circled{b} \n
&\quad+ \Winf\qty\bigg( \bVt[ ][\Xnm, \dx], \bVt[ ][\bX, \dx] ). && =:\circled{c} \nn
}
When $p=1$, Lemma~\ref{lemma:anai-et-al}~(i) gives $\circled{b} \le \norminf{\dnq - \dx}$, and Lemma~\ref{lemma:anai-et-al}~(ii) gives $\circled{c} \le \mathsf{H}(\Xnm, \bX)$. The term $\circled{a}$ is bounded above by taking $p=1$ in \eref{eq:stability-1} (from the proof of Theorem~\ref{theorem:momdist-stability}) to give
\eq{
\circled{a} &= \sup_{\xv \in \Xnm}\dnq(\xv) \n
&\num{\le}{$\star$} \sup_{\xv \in \bX}\dnq(\xv)\n
&\num{\le}{$\dagger$} \norminf{\dnq - \dx} + \sup_{\xv \in \bX}\dx(\xv)\n
&\num{=}{$\ddagger$} \norminf{\dnq - \dx},\nn
}
where ($\star$) follows from the fact $\Xnm \subset \bX$, ($\dagger$) uses the identity $f(\xv) \le \norminf{f - g} + g(\xv)$ for all $\xv \in \bX$, and ($\ddagger$) follows from \eref{eq:supdist=0}. Plugging in the bounds for the bottleneck distance we obtain
\eq{
\Winf\qty\bigg( \bVt[ ][\Xnm \cup \Ym, \dnq], \bvt{}[\bX] ) \le 2\norminf{\dnq - \dx} + \mathsf{H}(\Xnm, \bX).\nn
}
By noting that the Hausdorff distance $\mathsf{H}(\Xnm, \bX) = \norminf{\dsf_{n-m} - \dx}$, for $t_1, t_2$ such that $t_1 + t_2 = t$ we may bound the tail probability for the bottleneck distance as follows.
\eq{
\pr\qty\Bigg{\Winf\qty\Big( \bVt[ ][\Xnm \cup \Ym, \dnq], \bvt{}[\bX] ) > t} &\le \pr\qty\Big({ 2\norminf{\dnq - \dx} > t_1 }) + \pr\qty\Big({ \norminf{\dsf_{n-m} - \dx} > t_2 })\n
&\le \delta_1 + \delta_2 = \delta,
\label{eq:momdist-filt}
}
where the relationship between $\delta_1, \delta_2$ and $t_1, t_2$ is given by \eref{eq:momdist-t-constraint}; specifically, $\delta_1 \le e^{-(1+b)Q}$ from the condition in Theorem~\ref{theorem:momdist-sublevel}, $\delta_2 = \delta - \delta_1$, and
\eq{
t_1 = 2\qty\Bigg( \f{Q}{an} \wo\qty( \nq \exp{ \f{4{\log(1/\delta_1)}}{Q-2m} })) ^{1/b}, \ \ \text{ and } \ \ \ t_2 = \qty\Bigg( \f{1}{a(n-m)} \wo\qty\Big( (n-m) e^{ 4{\log(1/\delta_2)} })) ^{1/b}.\nn
}
Furthermore, using the bound for the Lambert $\wo$ function ${\wo(z) \le \log z}$ for $z > e$, we have
\eq{
t_1 \le 2\qty\bigg( \frac{Q\log(n / Q)}{a (n / Q)} + \frac{4Q \log(1/\delta_1)}{a(Q-2m)n} )^{1/b}, \ \ \text{ and } \ \ \ t_2 \le \qty\bigg( \frac{\log (n-m)}{a (n-m)} + \frac{4 \log(1/\delta_2)}{a (n-m)} )^{1/b},\nonumber
}
and $t = t_1 + t_2 \le \mathfrak{f}(n, m, Q, \delta_1, \delta_2)$. Therefore, the bound in \eref{eq:momdist-filt} yields
\eq{
\pr\Bigg\{\Winf\qty\Big( \bVt[ ][\Xnm \cup \Ym, \dnq], \bvt{}[\bX] )& \le \mathfrak{f}(n, m, Q, \delta_1, \delta_2)\Bigg\} \n
&\ge \pr\qty\Bigg{\Winf\qty\Big( \bVt[ ][\Xnm \cup \Ym, \dnq], \bvt{}[\bX] ) \le t_1 + t_2} \ge 1 - \delta,\nn
}
which gives the desired result. The second part of the theorem follows directly using the identical procedure as that used in the proof of Theorem~\ref{theorem:momdist-sublevel} in Section~\ref{proof:theorem:momdist-sublevel}. \null\nobreak\hfill\qedsymbol{}
\subsection{Proof of Proposition~\ref{prop:sublevel-2}}
\label{proof:prop:sublevel-2}
We begin by noting from Lemma~\ref{lemma:sublevel-equivalence} that $V[\dnq]$ and $V[\R^d, \dnq]$ are $(\id, \alpha)-$interleaved for ${\alpha : t \mapsto \tp t}$. Furthermore, consider the intermediate filtration $V[\Xnm, \dnq]$. From Theorem~\ref{theorem:momdist-stability} and Lemma~\ref{lemma:ab-filtration} we have that $V[\R^d, \dnq]$ and $V[\Xnm, \dnq]$ are $(\eta, \id)-$interleaved for
$$\eta: t \mapsto \tp t + \sup_{\xv \in \Xnm}\dnq(\xv).
$$
Using an identical argument, but reversing the order, we have that $V[\Xnm, \dnq]$ and $V[\Xn, \dnq]$ are $(\id,\eta)-$interleaved. We can now apply the ``triangle inequality'' for generalized interleavings \cite[Proposition~3.11]{bubenik2015metrics} to obtain that $V[\dnq]$ and $V[\Xn, \dnq]$ are ${(\id \circ \eta \circ \id, \alpha \circ \id \circ \eta)-}$interleaved. On simplifying the interleaving maps, we obtain that the two filtrations are $(\eta, \xi)-$interleaved for ${\xi(t) = \alpha \circ \eta(t) = \tp\eta(t)}$. \null\nobreak\hfill\qedsymbol{}
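To make the composed interleaving explicit, consider for illustration the case $p = 2$, so that $\tp = \sqrt{2}$. The interleaving maps then become
```latex
\eq{
\eta(t) = \sqrt{2}\,t + \sup_{\xv \in \Xnm}\dnq(\xv),
\qquad
\xi(t) = \sqrt{2}\,\eta(t) = 2t + \sqrt{2}\sup_{\xv \in \Xnm}\dnq(\xv),\nn
}
```
i.e., the two filtrations differ by a factor of $2$ in the filtration scale, together with an additive shift controlled by $\sup_{\xv \in \Xnm}\dnq(\xv)$.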
\subsection{Proof of Theorem~\ref{theorem:momdist-influence}}
\label{proof:theorem:momdist-influence}
The birth time of a connected component at $\xvo$ in $\Vt[ ][\bX, f]$ is given by $b_f(\pb{\xvo}) = f(\xvo)$.
Therefore, $\db$ is given by
\eq{
\db = b_{n}(\pb{\xvo}) - b_{n+m}(\pb{\xvo}) = \dsf_n(\xvo) - \dsf_{n+m}(\xvo) = \dsf_n(\xvo),\nn
}
where the last equality follows from the fact that $\dsf_{n+m}(\xvo) = 0$, since $\xvo \in \Xmn$. On the other hand, from the proof of Theorem~\ref{theorem:momdist-stability},
\eq{
\dsf_n(\xvo) = \inf_{\xv \in \Xn}\norm{\xv - \xvo} \le \dsf_{n+m, Q}(\xvo).\nn
}
Therefore, we have that $\dbq = \dsf_n(\xvo) - \dsf_{n+m, Q}(\xvo) \le \db$, and the result follows.
For the second part, we begin by observing that $\norminf{ \dsf_{n+m} - \dsf_n }$ can be bounded from below as follows:
\eq{
\norminf{ \dsf_{n+m} - \dsf_n } &\ge \dsf_n(\xvo) - \dsf_{n+m}(\xvo)\n
&= \dsf_n(\xvo) - 0 \n
&=\inf_{\xv \in \Xn}\norm{\xv - \xvo} \n
&\ge\inf_{\xv \in \Xb}\norm{\xv - \xvo} = \dx(\xvo). \nn
}
Furthermore, for $\delta \le e^{-(1+b)Q}$ and $k \defeq \max\qty\big{1, {2^{\f{b-1}{b}}}}$, with probability greater than $1-\delta$,
\eq{
&\norminf{\dsf_{n+m,Q} - \dsf_n} \n
&\qq{} \num{\le}{i} \norminf{\dsf_{n+m,Q} - \dx} + \norminf{\dsf_n - \dx}\n
&\qq{} \num{\le}{ii} \qty\Bigg[ \f{1}{a\nQ}\wo\qty( \nQ \exp\qty{ \f{4Q\log(2/\delta)}{Q-2m} } ) ]^{1/b} + \qty\Bigg[ \f{1}{an}\wo\qty( n \exp\qty{ 4\log(2/\delta) } ) ]^{1/b}\n
&\qq{} \num{\le}{iii} \f{k}{a^{1/b}}\qty[{ \f{1}{\nQ} \wo\qty( \nQ \exp\qty{ \f{4Q\log(2/\delta)}{Q-2m} } ) + \f{1}{n} \wo\qty( n \exp\qty{ 4\log(2/\delta) } )}]^{1/b}\n
&\qq{} \num{\le}{iv} \f{k}{a^{1/b}}\qty[{\f{\log\nQ}{\nQ}} + \f{\log n}{n} + 4\log(2/\delta)\qty( \f{Q}{\nQ(Q-2m)} + \f{1}{n} ) ]^{1/b}\n
&\qq{} \num{\le}{v} \f{k2^{1/b}}{a^{1/b}}\qty(\f{\log\nQ + 4Q\log(2/\delta)}{\nQ})^{1/b} \defeq \et,\nn
}
where, for $\nQ = (n+m)/Q$, (i) is a consequence of the triangle inequality, (ii) follows from the proofs of Theorem~\ref{theorem:momdist-sublevel} and Theorem~\ref{theorem:momdist-consistency}, (iii) uses Lemma~\ref{lemma:useful-inequalities}, (iv) follows from the fact that $\wo(z) < \log(z)$ for $z > e$, and (v) uses the fact that $\nQ < n$ and $(Q-2m)\inv \le 1$ for $2m < Q$.
Observe that if $2\et \le \dx(\xvo)$, then with probability greater than $1-\delta$,
\eq{
\norminf{ \dsf_{n+m} - \dsf_n } - \norminf{\dsf_{n+m,Q} - \dsf_n} \ge \dx(\xvo) - \et \ge \et,
\label{eq:momdist-influence-triangle}
}
and the result follows. Therefore, in order to establish the claim for the second part it suffices to check that ${2\et \le \dx(\xvo)}$ under conditions (I) and (II). To this end, note that
\eq{
\dx(\xvo) \ge 2\et \quad \Longleftrightarrow \quad \vp \ge \f{\log\nQ + 4Q\log(2/\delta)}{\nQ},\nn
}
which is satisfied whenever $\delta$ satisfies the r.h.s. of condition (II), i.e.,
\eq{
\log(2/\delta) \le \f{\nQ\vp - \log\nQ}{4Q}.\nn
}
Furthermore, the l.h.s. of condition (II), i.e., $\delta \le e^{-(1+b)Q}$, is satisfied only when
\eq{
(1+b)^2 Q^2 \le \f{\nQ\vp - \log\nQ}{4Q},\nn
}
or, equivalently, when condition (I) is satisfied:
\eq{
\vp \ge \f{\log\nQ}{\nQ} + \f{4(1+b)^2Q^3}{\nQ}.\nn
}
The result now follows from \eref{eq:momdist-influence-triangle}. \null\nobreak\hfill\qedsymbol{}
\begingroup
\providecommand{\hQ}{\widehat{Q}}
\providecommand{\hm}{\widehat{m}}
\renewcommand{\ms}{m^*}
\providecommand{\h}{\mathfrak{h}}
\subsection{Proof of Theorem~\ref{theorem:lepski}}
\label{proof:theorem:lepski}
Let $\js = \min\pb{ j \in \J : m(j) > \ms }$. By definition of $\J$ we have that $\abs{\J} \le 1 + \log_\theta(\mmax/\mmin)$ and $m(\js) < \theta\ms$ for $\theta > 1$. The outline of the proof is as follows. First, we show that $\mathfrak{h}(n,m,\delta)$ is non-decreasing in $m$, from which it follows that $\mathfrak{h}(n,m(j),\delta) \le \mathfrak{h}(n,m(j+1),\delta)$. Next, we show that the event $\pb{ \jh \le \js }$ contains the event $\mathcal{E}$ given by
\eq{
\mathcal{E} = \bigcap_{\qty{j \in \J : j \ge \js}}\qty\Big{ \winf\qty\big( \bbv_n(j), \bbv[\bX] ) \le \h(n, m(j), \delta) }.\nn
}
Then, using a standard procedure for obtaining the Lepski bound (e.g., Theorem~5.1 of \citealt{minsker2018sub} and Theorem~3.1 of \citealt{chen2020robust}), we show that the event $\mathcal{E}$, and, therefore, the event $\pb{\jh \le \js}$, holds with probability at least $1 - \delta \log_\theta(\mmax/\mmin)$. Lastly, we use the bound on the event $\pb{\jh \le \js}$ to obtain the desired result.
\textbf{1. Monotonicity of $\h(n, m, \delta)$ in $m$.} Consider the function $f(z; \alpha, \beta) = \alpha\wo(\beta z)/ z$ for fixed constants $\alpha, \beta > 0$. The derivative of $f$ is given by
\eq{
f'(z; \alpha, \beta) = \f{d}{dz} \qty(\f{\alpha}{z}\wo(\beta z)) &= \alpha\pa{ \f{\beta}{z}\wo'(\beta z) - \f{1}{z^2}\wo(\beta z) }\nn\\[10pt]
&\num{=}{i}\alpha\pa{ \f{\beta}{z}\pb{\f{\wo(\beta z)}{\beta z (1 + \wo(\beta z))}} - \f{1}{z^2}\wo(\beta z) }\nn\\[10pt]
&= - \f{\alpha \wo(\beta z)^2}{z^2(1+\wo(\beta z))} < 0 \qq{for all} z > 0.\nn
}
Note that in (i) we have used the fact that the derivative of the Lambert $\wo$ function is given by $\wo'(z) = {\wo(z)}/{z(1+\wo(z))}$. Therefore, it follows that $f$ is non-increasing in $z$. The claim follows by noting that the function $\mathfrak{h}$ is given by
\eq{
\h(n, m, \delta) = f( n/(2m+1); \alpha_1, \beta_1 )^{1/b} + f(n-m; \alpha_2, \beta_2)^{1/b},\nn
}
for constants $\alpha_1, \beta_1, \alpha_2, \beta_2 > 0$ not depending on $n$ or $m$. Since both arguments $n/(2m+1)$ and $n-m$ are decreasing in $m$, and $f$ is decreasing in its first argument, it follows that $\h(n, m, \delta)$ is non-decreasing in $m$.
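As a quick numerical sanity check of the sign computation above (not part of the proof), one can verify on a grid that $f(z; \alpha, \beta) = \alpha\wo(\beta z)/z$ is decreasing, using SciPy's implementation of the Lambert $W$ function and arbitrary illustrative constants:

```python
# Numerical sanity check (illustrative constants alpha=2, beta=3):
# f(z) = alpha * W0(beta * z) / z should be strictly decreasing for z > 0.
import numpy as np
from scipy.special import lambertw

def f(z, alpha=2.0, beta=3.0):
    # lambertw returns the complex principal branch; it is real for beta*z > 0
    return alpha * lambertw(beta * z).real / z

z = np.linspace(0.1, 50.0, 1000)
vals = f(z)
assert np.all(np.diff(vals) < 0)  # decreasing on the whole grid
```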
\textbf{2. $\mathcal{E}$ is a subset of $\pb{\jh \le \js}$.} We begin by noting that since $\ms \le m(\js)$, we have $2\ms < Q(j) = 2 m(j) + 1$ for all $j \ge \js$, so the first condition of Theorem~\ref{theorem:momdist-consistency} is satisfied. By taking
$$
\delta_1 = e^{-2(1+b)(2\mmax + 1)} \le e^{-2(1+b)Q(j)},
$$
and $\delta_2 = \delta - \delta_1$, note that $\h\qty\big(n, m(j), \delta) = \mathfrak{f}\qty\big(n, m(j), Q(j), \delta_1, \delta_2)$. Therefore, we may use Theorem~\ref{theorem:momdist-consistency} to obtain
\eq{
\pr\qty\bigg( \winf\qty\Big( \bbv_n(j), \bbv[\bX]) > \h(n, m(j), \delta) ) < \delta \qq{for all} j \ge \js.
\label{eq:j-condition}
}
Furthermore, by definition of $\jh$, it follows that for all $j < \jh$, there exists at least one $i > j$ such that ${\winf\pa{ \bbv_n(i), \bbv_n(j) } > 2\h(n, m(i), \delta)}$. Therefore,
\eq{
\pb{ \jh > \js } &\subseteq \bigcup_{\pb{ j \in \J : j > \js }} \qty\Big{ \winf\qty\big( \bbv_n(j), \bbv_n(\js) ) > 2\h( n, m(j), \delta ) }\nn\\
&\num{\subseteq}{ii} \bigcup_{\pb{ j \in \J : j > \js }} \qty\Big{ \winf\qty\big( \bbv_n(j), \bbv[\bX]) > \h( n, m(j), \delta ) } \cup \qty\Big{ \winf\qty\big( \bbv_n(\js), \bbv[\bX]) > \h( n, m(\js), \delta ) }\nn\\
&= \bigcup_{\pb{ j \in \J : j \ge \js }} \qty\Big{ \winf\qty\big( \bbv_n(j), \bbv[\bX]) > \h( n, m(j), \delta ) } \defeq \mathcal{E}^c,
\label{eq:Ec}
}
where, in (ii) we have used the fact that $\h(n, m(\js), \delta) \le \h(n, m(j), \delta)$ for all $j > \js$, and, therefore
\eq{
\qty\Big{ \winf\qty\big( \bbv_n(j), \bbv[\bX]) \le \h( n, m(j), \delta ) } \cap \qty\Big{ \winf\qty\big( \bbv_n(\js), \bbv[\bX]) \le \h( n, m(\js), \delta ) } \nn\\
\subseteq \qty\Big{ \winf\qty\big( \bbv_n(j), \bbv_n(\js) ) \le 2\h( n, m(j), \delta ) }. \qq{} \qq{} \nn
}
Taking complements of the above inclusion yields the inclusion in (ii). Therefore, we obtain that $\mathcal{E} \subseteq \pb{ \jh \le \js }$.
\textbf{3. Tail bound for the event $\mathcal{E}$.} Applying a union bound to \eref{eq:Ec}, we obtain
\eq{
\pr( \mathcal{E}^c ) &= \pr\qty( \bigcup_{\pb{ j \in \J : j \ge \js }} \qty\Big{ \winf\qty\big( \bbv_n(j), \bbv[\bX]) > \h( n, m(j), \delta ) } )\nn\\
&\le \sum_{\pb{ j \in \J : j \ge \js }} \pr\qty\bigg( \winf\qty\big( \bbv_n(j), \bbv[\bX]) > \h( n, m(j), \delta ) )\nn\\
&\num{\le}{iv} \sum_{\pb{ j \in \J : j \ge \js }} \!\!\!\delta\nn\\
&\num{\le}{v} \delta \log_\theta\qty( \f{\theta\mmax}{\mmin} ),\nn
}
where (iv) follows from \eref{eq:j-condition} and (v) uses the fact that $\abs{\J} \le 1 + \log_\theta(\mmax/\mmin)$.
\textbf{4. Bound for $\winf(\bbv_n(\jh), \bbv[\bX])$.} We begin by noting that when the event $\mathcal{E}$ holds, we have that
\eq{
\winf\qty(\bbv_n(\jh), \bbv[\bX]) &\le \winf\qty(\bbv_n(\jh), \bbv_n(\js)) + \winf\qty(\bbv_n(\js), \bbv[\bX])\nn\\
&\num{\le}{vi} 2\h( n, m(\js), \delta ) + \h(n, m(\js), \delta)\nn\\
&\num{\le}{vii} 3\h( n, \theta\ms, \delta ),\nn
}
where the first term in (vi) follows from the definition of $\jh$, which applies here because $\mathcal{E} \subseteq \pb{\jh \le \js}$, and the second term in (vi) follows from the definition of $\mathcal{E}$. The inequality in (vii) uses the fact that $m(\js) < \theta\ms$ and the fact that $\h(n, m, \delta)$ is non-decreasing in $m$. Therefore, we have the inclusion
\eq{
\mathcal{E} \subseteq \qty\Big{ \winf\qty(\bbv_n(\jh), \bbv[\bX]) \le 3\h( n, \theta\ms, \delta ) }.\nn
}
Using the tail bound on $\mathcal{E}$ we obtain
\eq{
\pr\qty\Big( \winf\qty\big(\bbv_n(\jh), \bbv[\bX]) \le 3\h( n, \theta\ms, \delta ) ) \ge \pr( \mathcal{E} ) \ge 1 - \delta\log_\theta\qty( \f{\theta\mmax}{\mmin} ),\nn
}
which is the desired result. \null\nobreak\hfill\qedsymbol{}
\endgroup
\section{Introduction}
\label{sec:intro}
Given a compact set $\bX \subset \R^d$, its persistence diagram encodes the subtle geometric and topological features which underlie $\bX$ as a multiscale summary, and forms a cornerstone of topological data analysis. Persistent homology serves as the backbone for computing persistence diagrams, and encodes the homological features underlying $\bX$ at different resolutions. The computation of persistent homology is typically achieved by constructing a \textit{filtration} $V_{\bX}$, i.e., a nested sequence of topological spaces, which captures the evolution of geometric and topological features as the resolution varies. The persistent homology of $V_{\bX}$, encoded in its persistence module $\bbv_{\bX}$, extracts the homological information from the filtration. This information is then summarized in a persistence diagram $\dgm\pa{\bbv_{\bX}}$.
Broadly speaking, there are two different methods for obtaining filtrations. The first, and arguably more classical, method examines the union of balls of radius $r$ centered on the points of $\bX$, called the $r$--\textit{offset} of $\bX$ and denoted ${\bX}(r)$, for each resolution $r > 0$. The resulting filtration $V[\bX] = \pb{\Xb(r): r > 0}$ depends only on the metric properties of $\bX$. The second, and more general, approach is based on constructing a \textit{filter function} $f_{\Xb}$ which reflects the topological features underlying $\bX$. The resulting filtration $V[f_{\bX}]$ is obtained by probing the sublevel sets $f\inv_{\Xb}\left( (-\infty, r] \right)$ or the superlevel sets $f\inv_{\Xb}\qty( [r, \infty))$ associated with $f_\Xb$. While these two methods are vastly different, in principle they both attempt to explore the topological features underlying $\bX$.
In this context, the distance function $\dx$ to the set $\Xb$ plays a special role in topological data analysis, and satisfies the property that $V[\bX] = V[\dx]$. That is, the sublevel sets of the distance function encode the same topological information as the filtration from its offsets. The appeal of using the distance function in the computation of persistence diagrams comes from the celebrated stability of persistence diagrams \citep{chazal2016structure}. In a nutshell, the stability result for persistence diagrams guarantees that (i) the persistence diagrams resulting from two compact sets $\Xb$ and $\Yb$ are close whenever the sets themselves are close in the Hausdorff distance, and, (ii)
the functional persistence diagrams resulting from two filter functions $f$ and $g$ are close whenever $f$ and $g$ are close w.r.t. the $\norminf{\cdot}$ metric.
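In the empirical setting, the identity $V[\bX] = V[\dx]$ is elementary: a point lies in the $r$-offset exactly when its nearest-neighbor distance to the set is at most $r$. A minimal sketch, with a finite sample standing in for $\bX$:

```python
import numpy as np

def dist_fn(x, X):
    """Empirical distance function: d_X(x) = min_i ||x - X_i||."""
    return np.min(np.linalg.norm(X - x, axis=1))

def in_offset(x, X, r):
    """Membership of x in the r-offset of X (union of closed r-balls)."""
    return bool(np.any(np.linalg.norm(X - x, axis=1) <= r))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
x = np.array([0.25, -0.4])
# The r-sublevel set of the distance function is exactly the r-offset.
for r in (0.05, 0.1, 0.5):
    assert in_offset(x, X, r) == (dist_fn(x, X) <= r)
```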
In the statistical setting, one has access to $\bX$ only through samples $\Xn = \pb{\Xv_1, \Xv_2, \dots, \Xv_n}$ obtained using a probability distribution $\pr$ which is supported on the (unknown) set $\bX$. The objective, in a statistical inference framework, is to use the samples $\Xn$ to infer the true population persistence diagram $\dgm\pa{\bbv_\Xb}$. The offset $\Xn(r)$ and filter function $f_n$, constructed using the sample points, are themselves random quantities associated with their population counterparts $\Xb(r)$ and $f_\Xb$, respectively, and these may be used to construct a sample estimator $\dgm\pa{\bbv_{\Xn}}$. To this end, several existing works have studied the statistical properties of $\dgm\pa{\bbv_{\Xn}}$, e.g.,~constructing confidence bands and characterizing the convergence rate of $\dgm\pa{\bbv_{\Xn}}$ to $\dgm\pa{\bbv_{\Xb}}$ in the space of persistence diagrams \citep{fasy2014confidence,chazal2015subsampling,chazal2015convergence,chazal2017robust,vishwanath2020robust}.
\subsection{Contributions}
In practical settings, real-world data is likely subject to measurement errors and the presence of outliers. While some assumptions may be imposed on the noise and the outliers, in the most adverse settings the given data may be subject to adversarial contamination. In this setting, for $m<n/2$, we assume that the samples $\Xn$ to which we have access contain only ${n-m}$ points obtained from the probability distribution $\pr$ with $\supp\pa{\pr} = \bX$, and we make no further assumptions on the remaining $m$ points. In principle, the $m$ outliers may be carefully chosen by an adversary after examining the remaining $n-m$ points. The overarching objective of this paper is to construct an estimator of the (unknown) population quantity $\dgm\pa{\bbv[\bX]}$ from the corrupted sample points $\Xn$ which is both statistically consistent and computationally efficient.
While the stability of persistence diagrams guarantees that small perturbations of the sample points induce only small changes in the resulting persistence diagrams, even a few outliers in the samples can have deleterious effects. This issue is further exacerbated in the adversarial setting, where the adversary is free to place the $m$ points where they most drastically impact the resulting topological inference.
In this paper, we introduce \md{}, denoted by $\dnq$, as an outlier-robust variant of the empirical distance function which is constructed using the median-of-means principle, and we establish its theoretical properties. Notably, the \md{} relies on a tuning parameter $Q$ which is easy to interpret. While the persistence diagram resulting from the sublevel filtration of $\dnq$ is a valid candidate for statistical inference, it can be expensive to compute in practice. To overcome this, we use the weighted filtrations introduced by \cite{buchet2016efficient} and \cite{anai2019dtm} to construct $\dnq$-weighted filtrations, $V[\Xn, \dnq]$, as computationally efficient estimators of $\dgm\pa{\bbv[\bX]}$. Our main contributions are the following:
\begin{enumerate}[label=\textup{(\Roman*)}]
\item We show that sublevel set persistence diagrams of $\dnq$ are consistent estimators of the sublevel set persistence diagram of the true population counterpart $\dx$ even in the presence of outliers (Theorem~\ref{theorem:momdist-sublevel}).
\item We establish a stability result for the $\dnq$-weighted filtrations, $V[\Xn, \dnq]$, and we show that they are stable w.r.t.~adversarial contamination (Theorem~\ref{theorem:momdist-stability}).
\item Furthermore, we show that the persistence diagram $\dgm\pa{\bbv[\Xn, \dnq]}$ is both a computationally efficient and statistically consistent estimator of $\dgm\pa{\bbv[\bX]}$, and we establish its convergence rate (Theorem~\ref{theorem:momdist-consistency}).
\item Next, in a sensitivity analysis framework, we quantify the gain in robustness achieved when using the $\dnq$-weighted filtrations vis-\`{a}-vis its non-robust $\dsf_n$-weighted counterpart (Theorem~\ref{theorem:momdist-influence}).
\item Lastly, we propose a data-driven procedure for adaptively selecting the tuning parameter $Q$ using Lepski's method. For the data-driven choice $\widehat Q$, we show that the resulting estimator $\dgm\qty\big{\bbv[\Xn, \dsf_{n, \widehat Q}]}$ is statistically consistent and establish its convergence rate (Theorem~\ref{theorem:lepski}).
\end{enumerate}
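To fix ideas, the following is a minimal sketch of a pointwise median-of-means distance function in the spirit of \md{}. This is an illustrative reading of the construction, not the paper's exact definition of $\dnq$: partition the sample into $Q$ blocks and take the pointwise median of the blockwise distance functions.

```python
import numpy as np

def momdist(x, X, Q, rng=None):
    """Illustrative median-of-means distance: split the n points of X into Q
    disjoint blocks and return the median, over blocks, of the distance from x
    to the nearest point of each block.  (Sketch; not the paper's exact dnQ.)"""
    if rng is None:
        rng = np.random.default_rng(0)
    blocks = np.array_split(rng.permutation(len(X)), Q)
    return np.median([np.min(np.linalg.norm(X[b] - x, axis=1)) for b in blocks])

# One extreme outlier cannot drag the value down: the median ignores the
# minority of blocks that the outlier contaminates.
rng = np.random.default_rng(1)
X = np.vstack([0.1 * rng.standard_normal((100, 2)), [[100.0, 100.0]]])
x = np.array([100.0, 100.0])
assert np.min(np.linalg.norm(X - x, axis=1)) == 0.0  # plain distance fooled
assert momdist(x, X, Q=5) > 50.0                     # MoM distance is not
```

The robustness intuition is the same as in the paper: as long as fewer than $Q/2$ blocks contain outliers, the blockwise median is controlled by the clean blocks.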
\subsection{Related Work}
Several approaches have been proposed in existing literature to overcome the sensitivity of persistence diagrams to noise. The prevailing ideas in these approaches rely on constructing a filter function, $f_\pr$, which reflects both the topological information and the distribution of mass underlying the support $\supp\pa{\pr} = \bX$. Replacing the population probability measure $\pr$ with the empirical measure $\pr_n$ associated with the samples $\Xn$ results in an empirical estimator $f_{\pr_n}$. Some notable examples include the distance-to-measure \citep{chazal2011geometric}, the kernel distance \citep{phillips2015geometric}, and kernel density estimators \citep{fasy2014confidence}.
While these approaches mitigate, to some extent, the influence of noise on the resulting persistence diagrams, they are not without their drawbacks. For starters, while it may be argued that $\dgm\pa{\bbv[f_{\pr_n}]}$ is more resilient to noise, ultimately this sample estimator corresponds to the population quantity $\dgm\pa{\bbv[f_{\pr}]}$, which may nevertheless omit some subtle geometric and topological features present in $\dgm\pa{\bbv[\bX]}$. Furthermore, from a statistical perspective, if $\Xn$ comprises only $n-m$ points from $\pr$ and the remaining $m$ points constitute outliers, then the sample estimator $\dgm\pa{\bbv[f_{\pr_n}]}$, obtained using $\Xn$, will no longer be a valid estimator of the population quantity $\dgm\pa{\bbv[f_{\pr}]}$ which we wish to infer.
Lastly, the exact computation of these estimators can be prohibitively expensive, if not impossible in practice. For instance, the exact computation of the distance-to-measure requires computing an order-$k$ Voronoi diagram. Moreover, in the general setting, the sublevel/superlevel filtrations arising from these approaches are computed using cubical homology, which relies on a (nuisance) grid resolution \mbox{parameter}. If this resolution is too coarse, then some subtle topological features are affected. On the flipside, if the resolution is too fine, then the accuracy is still impacted, as noted in \cite{fasy2014confidence}. In the high-dimensional setting, cubical homology also falls victim to the curse of dimensionality, i.e., for a fixed grid resolution, the number of simplices in the resulting cubical complex grows exponentially with the dimension of the ambient space.
In order to overcome these computational drawbacks, \cite{buchet2016efficient} and \cite{anai2019dtm} propose weighted filtrations, $V[\Xn, f_{\Xn}]$, using power distances. While the weighted filtrations circumvent the need for constructing grid-based approximations, they come at the expense of exact inference, i.e., the weighted filtrations $V[\Xn, f_{\Xn}]$ only approximate $V[f_{\Xn}]$ and do not provide valid statistical inference, even in the absence of outliers.
More recently, \cite{vishwanath2020robust} propose robust persistence diagrams which are resilient to outliers using kernel density estimators (KDE), and also develop a principled framework for characterizing the sensitivity to outliers using an analogue of influence functions. Although \citet[Theorem 1]{vishwanath2020robust} describes the gain in robustness by considering the robust KDE $\fn$ using the persistence influence function, \citet[Theorems 2 \& 3]{vishwanath2020robust} together establish that as $n \rightarrow \infty$ and $\sigma \rightarrow 0$, the persistence diagram $\dgm\pa{\fn}$ recovers the same information which underlies the sample points $\Xn$. However, if the underlying distribution is contaminated, e.g., ${\pr^* = (1-\pi) \pr_{signal} + \pi \pr_{noise}}$, then the topological inference we hope to target is that of $\pr_{signal}$ and not that of $\pr^*$.
Finally, with a similar objective of mitigating the impact of noise in topological inference, recent approaches have considered multi-parameter persistent homology as a robust tool for inferring the topological features underlying $\Xn$ \citep{carlsson2009theory}. While some recent results have demonstrated promise (e.g., \citealt{vipond2021multiparameter}), they are, nevertheless, computationally infeasible for most applications, in addition to being hard to interpret \citep{otter2017roadmap,bjerkevik2020computing}.
On the statistical front, robust statistics was founded on the seminal works of \cite{tukey1960survey} and \cite{huber1964robust} with the objective of developing a framework of statistical inference stable to model misspecification and the presence of extraneous errors. Over the past few decades, robust counterparts for several inference tasks have been explored in the literature \citep{huber2004robust,hampel2011robust}. More recently, in the landscape of big-data and high-dimensional statistics, the field of robust statistics has witnessed a renewed interest in the statistics and computer science literature \citep{diakonikolas2017being}. In particular, the classical problem of mean and covariance estimation has been revisited in several works \citep{audibert2011robust,minsker2015geometric,devroye2016sub,joly2016robust} with the objective of easing model assumptions on either the regularity of the data-generating mechanism or the presence of outliers. See \cite{lugosi2019mean} for a recent survey. A common theme underlying these works is the constant struggle to achieve a Goldilocks equilibrium: the right balance of statistical optimality, computational efficiency and robustness to model misspecification.
In this regard, the median-of-means estimator, and, more broadly, the median-of-means principle \citep{lecue2020robust}, has emerged as a powerful tool for ``robustifying'' an existing estimator in near linear time. Although this comes at a slight expense of statistical optimality, median-of-means estimators are nevertheless easier to compute than statistically optimal \textit{and} robust methods such as the tournament estimators introduced by \cite{lugosi2019risk}. However, computing the median in high dimensions is not a well-defined task, and can be computationally burdensome. To make matters worse, robust topological summaries naïvely employing the median-of-means principle require estimating a median in an infinite-dimensional space, which can be hopeless to achieve in a computationally tractable fashion. Our work overcomes this limitation by proposing a pointwise median-of-means estimator which, although computationally tractable, exhibits a concentration of measure phenomenon around the true population counterpart in the $\norminf{\cdot}$ metric.
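For readers unfamiliar with the median-of-means principle, the following is its classical univariate form, which the pointwise construction in this paper parallels (an illustrative sketch, not the paper's estimator):

```python
import numpy as np

def mom_mean(x, k, rng=None):
    """Classical median-of-means estimate of a univariate mean: shuffle the
    data, split it into k blocks, average each block, and take the median of
    the k block averages."""
    if rng is None:
        rng = np.random.default_rng(0)
    return np.median([b.mean() for b in np.array_split(rng.permutation(x), k)])

# A single gross outlier ruins the empirical mean but not the MoM estimate.
data = np.concatenate([np.random.default_rng(2).standard_normal(1000), [1e6]])
assert abs(data.mean()) > 100.0       # empirical mean dragged far from 0
assert abs(mom_mean(data, 11)) < 1.0  # MoM estimate stays near the true mean 0
```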
\textbf{Organization.} The remainder of this paper is organized as follows. In Section~\ref{sec:preliminaries} we present the necessary background on persistent homology and robust statistics. We first introduce the proposed methodology in Section~\ref{sec:proposal}, and then present the main results in the remainder of the section. We establish the statistical properties of the proposed estimator in Section~\ref{sec:statistical}, and we present the influence analysis in Section~\ref{sec:influence}. Numerical results supporting the theory are provided in Section~\ref{sec:experiments}. The proofs of all the results are collected in Section~\ref{sec:proofs}.
\section{Experiments}
\label{sec:experiments}
In this section, we supplement the theory by illustrating the performance of the robust filtrations $\bbv[\dnq]$ and $\bbv[\Xn, \dnq]$ in synthetic experiments. The tools for data-adaptive construction of $\dnq$-weighted filtrations, in addition to the code for all experiments, are made publicly available in the \href{https://www.github.com/sidv23/RobustTDA.jl}{\textsf{RobustTDA.jl}} Julia package\footnote{\url{https://www.github.com/sidv23/RobustTDA.jl/}}. In all experiments, the persistence diagrams are computed using the \textsf{Ripserer.jl} backend \citep{cufar2020ripserer}, and we set the parameter $p=1$ for the weighted filtrations.
\subsection{Adaptive calibration of $Q$}
\label{exp:adaptive}
For $n=500$ and $K=30$ replicates, and for each $i \in [K]$, point clouds $\Xn^{(i)}$ are generated on a circle, and ${m^{(i)} \sim \text{Unif}\qty(\qty[50, 150])}$ outliers are added from a Matérn cluster process. This is illustrated in Figure~\ref{fig:lepski}\,(a). Taking $m_{\min} = 20$, $m_{\max}=200$ and $\theta=1.07$, the adaptive estimate $\widehat{m}^{(i)}$ is computed using Lepski's method, and $\widehat{m}_R^{(i)}$ is computed using the heuristic method described in Section~\ref{sec:lepski} with $N=50$. For a single replicate ${i \in [K]}$, Figure~\ref{fig:lepski}\,(b) plots $\sum_{1 \le i < j \le N} \winf\qty\big({ \bbv[{\Xn^{\sigma_i}}, \dnq], \bbv[{ \Xn^{\sigma_j} }, \dnq] })$ vs.~$Q$. In most cases, we have observed that the resampled bottleneck distance criterion stabilizes shortly before the optimal value of $m$. Figure~\ref{fig:lepski}\,(c) shows a boxplot of the relative errors $\qty\big{ \qty(\widehat{m}^{(i)} - {m}^{(i)})/{m}^{(i)} : i \in [K] }$ and $\qty\big{ \qty(\widehat{m}_R^{(i)} - {m}^{(i)})/{m}^{(i)} : i \in [K] }$ for Lepski's method and the heuristic procedure, respectively. Lepski's method is fairly robust to the choice of the hyperparameters, and consistently selects $\widehat m^{(i)} \ge m^{(i)}$. In contrast, since the resampled bottleneck distance from the heuristic procedure often stabilizes before $m^{(i)}$, we observe that $\widehat{m}^{(i)}_R < m^{(i)}$.
\begin{figure}[t]
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/calibration/scatterplot.pdf}
\caption{Scatterplot of $\Xn$}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/calibration/resampled.pdf}
\caption{Heuristic procedure}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[height=1.05\textwidth]{./figures/experiments/calibration/Box.pdf}
\caption{Relative error: Lepski vs. Resampling}
\end{subfigure}
\caption{Comparison of Lepski's method and the heuristic procedure for selecting the parameter $Q$.}
\label{fig:lepski}
\end{figure}
\subsection{Comparison of $\bbv[\dnq]$ and $\bbv[\Xn, \dnq]$}
\label{exp:sublevel}
The objective of this experiment is to illustrate that the $\dnq$-weighted filtration $V[\Xn, \dnq]$ reasonably approximates the sublevel filtration $V[\dnq]$. For the same setup as in Section~\ref{exp:adaptive}, {$\Xn$ comprises $n=550$ points obtained by sampling $500$ points on a circle with additive Gaussian noise ($\s=0.01$) and $m=50$ outliers added from a Matérn cluster process.} For $Q=\widehat Q$ selected using Lepski's method, Figure~\ref{fig:sublevel}\,(a) depicts the \md{} function $\dnq$. Figure~\ref{fig:sublevel}\,(b) illustrates the scatter plot for $\Xn$ with the points colored by the weights $\dnq(\xv_i)$ for each $\xv_i \in \Xn$. The shaded regions show the $\dnq$-weighted offsets $V^t[\Xn, \dnq]$ for $t \in \qty{1.5, 1.75, 2, 2.25}$ colored from white to blue. Figure~\ref{fig:sublevel}\,(c) depicts the sublevel persistence diagram $\dgm\pa{ \bbv[\dnq] }$ computed using cubical homology on a grid of resolution $0.5$. As expected by the result of Proposition~\ref{prop:sublevel-2}, the $\dnq$-weighted persistence diagram $\dgm\pa{\bbv[\Xn, \dnq]}$ in Figure~\ref{fig:sublevel}\,(d) captures the essential topological information in $\dgm\pa{ \bbv[\dnq] }$.
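For intuition, membership in a weighted offset at level $t$ reduces to a union-of-balls test. The sketch below assumes the standard $p=1$ radius $r_x(t) = t - f(x)$ for $f(x) \le t$ (and an empty ball otherwise), in the spirit of the weighted filtrations of \cite{anai2019dtm}; the weight values are hypothetical placeholders:

```python
import numpy as np

def in_weighted_offset(y, X, f_vals, t):
    """Membership of y in the level-t weighted offset of (X, f) for p = 1:
    the union of balls B(x_i, t - f(x_i)) over points with f(x_i) <= t.
    (Assumed standard p = 1 radius; a sketch, not the package's routine.)"""
    radii = t - f_vals
    d = np.linalg.norm(X - y, axis=1)
    mask = radii >= 0  # balls with f(x_i) > t are empty
    return bool(np.any(d[mask] <= radii[mask]))

X = np.array([[0.0, 0.0], [10.0, 0.0]])
f_vals = np.array([1.0, 5.0])  # hypothetical weights, e.g. dnQ values
assert in_weighted_offset(np.array([0.5, 0.0]), X, f_vals, t=2.0)       # inside
assert not in_weighted_offset(np.array([10.0, 0.0]), X, f_vals, t=2.0)  # f > t
```

Points with large weights (such as outliers flagged by $\dnq$) enter the filtration only at late levels $t$, which is what makes the weighted filtration robust.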
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[height=\textwidth]{./figures/experiments/sublevel/sublevel.pdf}
\caption{\md{} function $\dnq$}
\end{subfigure}
\quad\quad
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[height=\textwidth]{./figures/experiments/sublevel/filtrations.pdf}
\caption{$V^t[\Xn, \dnq]$}
\end{subfigure}
\vspace{5mm}\\
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{./figures/experiments/sublevel/sublevel_dgm1.pdf}
\caption{$\dgm\pa{\bbv[\dnq]}$}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\centering
\includegraphics[width=\textwidth]{./figures/experiments/sublevel/wrips_dgm1.pdf}
\caption{$\dgm\pa{\bbv[\Xn, \dnq]}$}
\end{subfigure}
\caption{Comparison of sublevel filtrations with the $\dnq$-weighted filtration.}
\label{fig:sublevel}
\end{figure}
\subsection{High dimensional topological inference}
\label{exp:highdim}
In this experiment, we illustrate the advantage of using $\dnq$-weighted filtrations for high dimensional topological inference. Points are uniformly sampled in $\R^3$ from two interlocked circles. Using a random rotation matrix $Q \in {SO}(100)$, the points are transformed to an arbitrary configuration in $\R^{100}$. The samples $\Xn \subset \R^{100}$ are obtained by replacing $12.5\%$ of the points in $\R^{100}$ with outliers sampled from $\textup{Uniform}\pa{ [-0.2, 0.2]^{100} }$. A scatterplot for $\Xn$ projected to $3$ arbitrary coordinates is shown in Figure~\ref{fig:highdim}\,(a). Since the point cloud is embedded in $\R^{100}$, computing sublevel filtrations using cubical homology with the same resolution as earlier would require $(10/0.5)^{100} = 20^{100} \approx 10^{130}$ simplices to be stored in memory. In contrast, computing the $\dnq$-weighted filtration is far less demanding. Figure~\ref{fig:highdim}\,(b) shows the persistence diagram $\dgm(\widehat{\bbv}_n)$ obtained using $\dnq$-weighted filtrations, where the parameter $Q$ is adaptively selected using Lepski's method. The two first-order homological features underlying the interlocked circles are recovered. Figure~\ref{fig:highdim}\,(c) illustrates the persistence diagram $\dgm(\bbv [{\Xn, \delta_{n,k}}])$ obtained using DTM-weighted filtrations. Since the DTM parameter $k \in [1, n]$ results in a smoothing similar to the parameter $Q \in [1, n]$ for the \md{}, the parameter $k$ is set to the value of $Q$ obtained using Lepski's method.
\begin{figure}[H]
\begin{subfigure}[b]{0.34\textwidth}
\centering
\includegraphics[width=\textwidth]{./figures/experiments/highdim/interlocked-noisy.pdf}
\caption{{$\Xn$ projected to coordinates $11, 53 \ \& \ 91$}}
\end{subfigure}
\begin{subfigure}[b]{0.31\textwidth}
\centering
\includegraphics[height=\textwidth]{./figures/experiments/highdim/interlocked-MOM.pdf}
\caption{MOM diagram $\dgm({\widehat{\bbv}_n})$}
\end{subfigure}
\begin{subfigure}[b]{0.31\textwidth}
\centering
\includegraphics[height=\textwidth]{./figures/experiments/highdim/interlocked-DTM.pdf}
\caption{{DTM} diagram $\dgm\pa{\bbv[\Xn, \delta_{n, k}]}$}
\end{subfigure}
\caption{Robust persistence diagrams for interlocked circles in $\R^{100}$ using $\dnq$ and $\delta_{n, k}$ weighted filtrations.}
\label{fig:highdim}
\end{figure}
\subsection{Recovering the true signal under adversarial contamination}
\label{exp:mnist}
In this experiment, we illustrate how $\bbv[{\Xn, \dnq}]$ can be used to recover the true topological features in the presence of adversarial contamination. In Figure~\ref{fig:mnist}\,(a), we consider a $28 \times 28$ image for the digit ``6'' from the MNIST database \citep{deng2012mnist}. We consider the setting in which an adversary is allowed to manipulate $10\%$ of the image by modifying the pixel intensities. Figure~\ref{fig:mnist}\,(b) depicts the adversarially contaminated version of the image by transforming the ``6'' to an ``8''.
For each pixel $p$ with pixel intensity $\iota(p)$, we convert the image to a point cloud $\Xn \subset \R^2$ by sampling $10\,\iota(p)$ points uniformly from the region enclosed by the pixel. \mbox{Figures~\ref{fig:mnist}(d, e)} illustrate the point clouds obtained from the true and contaminated images with $n-m \approx 1100$ and $n \approx 1300$, respectively. The persistence diagrams constructed using the distance function $\dsf_n$ for the two point clouds are reported in Figures~\ref{fig:mnist}(g, h). The persistence diagram in Figure~\ref{fig:mnist}\,(h) indicates the presence of the additional loop introduced by the adversary. To account for the adversarial contamination, we compute the \md{} function $\dnq$ with the parameter $Q$ selected using the contamination budget, i.e., $Q = 1 + 2(1100 \times 10\%) = 221$. Figure~\ref{fig:mnist}\,(f) shows the adversarially contaminated point cloud with each point $\xv_i \in \Xn$ colored by the value of $\dnq(\xv_i)$. The resulting $\dnq$-weighted persistence diagram $\dgm(\bbv[\Xn, \dnq])$ is reported in Figure~\ref{fig:mnist}\,(i). We note that $\dgm(\bbv[\Xn, \dnq])$ recovers the prominent features of Figure~\ref{fig:mnist}\,(g) up to a rescaling. Additionally, for each pixel $p$ we compute a rescaled version of $\dnq$, given by
\eq{
f_{n, Q}(p) = \f{\max\limits_x\dnq(x) - \dnq(p)}{\max\limits_x\dnq(x)},\nn
}
as a proxy for the pixel intensity obtained using $\dnq$. In Figure~\ref{fig:mnist}\,(c), we plot the level sets $\pb{p: f_{n,Q} = t}$ on the original image for $t \ge 0.8$.
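The rescaling above is a simple normalization mapping small distances to high intensities; a minimal sketch:

```python
import numpy as np

def rescaled_intensity(d_vals):
    """Map distance values to [0, 1] intensity proxies:
    f(p) = (max d - d(p)) / max d, so small distances -> high intensity."""
    m = np.max(d_vals)
    return (m - d_vals) / m

d = np.array([0.0, 1.0, 2.0, 4.0])
assert np.allclose(rescaled_intensity(d), [1.0, 0.75, 0.5, 0.0])
```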
\begingroup
\renewcommand{\Xnm}{\mathbb{X}_{n\minus m}}
\begin{figure}
\begin{subfigure}[c]{0.3\textwidth}
\includegraphics[width=\textwidth]{./figures/experiments/mnist/6.png}
\caption{Signal}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.3\textwidth}
\includegraphics[width=\textwidth]{./figures/experiments/mnist/8.png}
\caption{Adversarial contamination}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.305\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/mnist/heatmap2.pdf}
\caption{Recovered}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/mnist/scatter-6.pdf}
\caption{$\Xnm$}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/mnist/scatter-8.pdf}
\caption{$\Xn$}
\end{subfigure}
\quad
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/mnist/scatter-8-heat.pdf}
\caption{$\Xn$ colored by $\dnq$}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/mnist/dgm6.pdf}
\caption{$\dgm\qty( \bbv[\Xnm] )$}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/mnist/dgm8.pdf}
\caption{$\dgm\qty( \bbv[\Xn] )$}
\end{subfigure}
\begin{subfigure}[c]{0.32\textwidth}
\includegraphics[height=\textwidth]{./figures/experiments/mnist/rdgm8.pdf}
\caption{$\dgm\qty( \bbv[\Xn, \dnq] )$}
\end{subfigure}\hspace{10pt}
\caption{Recovering the topological information underlying the signal in the presence of adversarial contamination.}
\label{fig:mnist}
\end{figure}
\endgroup
\subsection{Empirical influence analysis}
\label{exp:influence}
\providecommand{\fnms}{\ensuremath{f^{n+m}_{\rho,\s}}}
\providecommand{\Dnms}{\ensuremath{D^{RKDE}_{{n+m}, \rho,\s}}}
\providecommand{\dnms}{\ensuremath{\dsf_{n+m, \rho,\s}}}
In this experiment, we examine the influence of outliers on $\dnq$-weighted filtrations. For $n=500$, points $\Xn$ are sampled uniformly from a circle. We compute the unweighted persistence diagram $D_n = \dgm(\bbv[\Xn])$. In a small neighborhood around the center of the circle, outliers $\Ym$ are sampled uniformly from $[-0.1,0.1]^2$. For the composite sample $\Xn \cup \Ym$ and fixed values $Q=100$ and $k = 50$, we compute the \md{} weighted persistence diagram $D^{MoM}_{n+m,Q} = \dgm(\bbv[\Xn \cup \Ym, \dsf_{n+m,Q}])$, the DTM weighted persistence diagram $D^{DTM}_{n+m,k} = \dgm(\bbv[\Xn \cup \Ym, \delta_{n+m, k}])$, and the RKDE weighted persistence diagram ${\Dnms}$ from the RKDE $\fnms$ using the Hampel loss $\rho$ and a Gaussian kernel $K_\s$. Since the RKDE ${\fnms \defeq \sum_{i=1}^{n+m}w_i K_\s(\cdot, \Xv_i)}$ does not behave like a distance function, we convert $\fnms$ to a distance-like function $\dnms$ using an approach similar to that of \cite{phillips2015geometric} to obtain
\eq{
\dnms(\xv) \defeq \normh{K_\s(\cdot, \xv) - \fnms} = \sqrt{\mathop{\sum\sum}\limits_{1 \le i,j \le n+m} w_i w_j K_\s(\Xv_i, \Xv_j) + K_\s(\xv, \xv) - 2 \fnms(\xv)}.\nn
}
The RKDE-weighted persistence diagram $\Dnms = \dgm(\bbv[\Xn \cup \Ym, \dnms])$ is then computed using the $\dnms$-weighted filtration on the composite sample. The bandwidth of the kernel and the parameters for the Hampel loss function are selected using the same approach as in \cite{vishwanath2020robust}. For each diagram, we compute the birth time $b(\qty{\xvo})$ for the first outlier $\xvo \in \Ym$, and the bottleneck influence $\winf(D_{n+m}, D_n)$, as described in Section~\ref{sec:influence}. We generate $10$ such samples for each value of $m$, and report the average in Figure~\ref{fig:influence}.
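The conversion above is the usual kernel trick for the RKHS norm; the following is a self-contained sketch with an (unnormalized) Gaussian kernel, where the weights \texttt{w} stand in for the RKDE weights $w_i$ (here chosen arbitrarily for illustration):

```python
import numpy as np

def gauss_k(a, b, s):
    """Unnormalized Gaussian kernel; note K(x, x) = 1."""
    return float(np.exp(-np.sum((a - b) ** 2) / (2 * s ** 2)))

def rkhs_dist(x, X, w, s):
    """|| K_s(., x) - sum_i w_i K_s(., X_i) ||_H via the kernel trick, i.e.
    sqrt( sum_ij w_i w_j K(X_i, X_j) + K(x, x) - 2 sum_i w_i K(x, X_i) )."""
    gram = np.array([[gauss_k(xi, xj, s) for xj in X] for xi in X])
    cross = np.array([gauss_k(x, xi, s) for xi in X])
    val = w @ gram @ w + gauss_k(x, x, s) - 2 * w @ cross
    return float(np.sqrt(max(val, 0.0)))  # clip tiny negative round-off

X = np.zeros((1, 2))
w = np.array([1.0])
assert abs(rkhs_dist(np.zeros(2), X, w, 1.0)) < 1e-8  # zero at the mass point
assert abs(rkhs_dist(np.full(2, 100.0), X, w, 1.0) - np.sqrt(2)) < 1e-6  # far away
```

Far from the data the value saturates at $\sqrt{\sum_{i,j} w_i w_j K_\s(\Xv_i,\Xv_j) + K_\s(\xv,\xv)}$, which is the boundedness property exploited in the discussion of $\Dnms$ below.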
From Figure~\ref{fig:influence}\,(a), we note that $D^{MoM}_{n+m,Q}$ and $D^{DTM}_{n+m,k}$ show similar behavior, although the outliers consistently appear earlier in the {DTM persistence diagram $D^{DTM}_{n+m,k}$.} Since the birth time $b(\qty{\xvo})$ alone does not fully characterize the impact an outlier has on inferring the topological feature underlying the circle, we also compute the maximum persistence for the first order persistence diagram in Figure~\ref{fig:influence}\,(b). We point out that the behavior of $b(\qty{\xvo})$ w.r.t.~$m$ largely reflects the influence an outlier has on the relevant topological signal. Furthermore, for $D^{MoM}_{n+m,Q}$, we observe a sharp transition between $m=50$ and $m=80$, which is due to the fact that the theoretical guarantees for $\dnq$ from Theorem~\ref{theorem:momdist-influence} are valid only when $2m < Q = 100$. Similarly, from Theorem~\ref{theorem:momdist-consistency}, the outliers are guaranteed to have little influence on $D^{MoM}_{n+m,Q}$ whenever $m \le 50$, as seen in Figure~\ref{fig:influence}\,(c).
On the other hand, while the RKDE remains resilient to uniform outliers, we note that $\Dnms$ is significantly impacted by the outliers placed at a single point in the center of the circle. This is evidenced by the sharp transitions for $\Dnms$ in Figures~\ref{fig:influence}\,(b, c). However, unlike $\dsf_{n+m, Q}$ and $\delta_{n+m, k}$, by construction ${\norminf{\dnms} \le \sup_{\xv} \sqrt{2 K_\s(\xv, \xv)} < \infty}$. Therefore, although $\Dnms$ is more sensitive to the outliers, the influence they exert on it in Figures~\ref{fig:influence}\,(a, b, c) remains bounded.
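For concreteness, the \md{} function $\dnq$ driving this experiment can be sketched in a few lines of pure Python. This is a hypothetical illustration, not the code used for the experiments: the block partition, the random seed, and the toy sample below are our own assumptions, and we only rely on the definition $\dnq(\xv) = \median\pb{\inf_{\yv \in S_q}\norm{\xv-\yv} : q \in [Q]}$.

```python
import math
import random
import statistics

def mom_dist(points, x, Q, seed=0):
    """Sketch of the MoM Dist function d_{n,Q}: partition the sample into
    Q blocks, take the distance from x to each block, return the median."""
    rng = random.Random(seed)
    idx = list(range(len(points)))
    rng.shuffle(idx)
    blocks = [idx[q::Q] for q in range(Q)]  # Q roughly equal blocks
    def dist_to_block(block):
        return min(math.dist(x, points[i]) for i in block)
    return statistics.median(dist_to_block(b) for b in blocks)

# n points on the unit circle plus m outliers at the center
n, m, Q = 500, 40, 100
circle = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
          for i in range(n)]
sample = circle + [(0.0, 0.0)] * m

# At the center, the vanilla distance function is ruined by the outliers,
# while the MoM variant still reports a distance close to the circle radius.
d_vanilla = min(math.dist((0.0, 0.0), p) for p in sample)
d_mom = mom_dist(sample, (0.0, 0.0), Q)
```

With $2m < Q$, a majority of blocks contain no outlier, so the median block-distance ignores them; this mirrors the sharp transition observed around $2m = Q$ in the experiment.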
\begin{figure}[H]
\begin{subfigure}[c]{0.48\textwidth}
\includegraphics[width=\textwidth]{./figures/experiments/influence5/influence-birth.pdf}
\caption{$\text{influence}\pa{b; \Xn, f_n, m, \xvo}$}
\end{subfigure}
\begin{subfigure}[c]{0.48\textwidth}
\includegraphics[width=\textwidth]{./figures/experiments/influence5/influence-rpers.pdf}
\caption{max Persistence for the first order diagram}
\end{subfigure}
\begin{subfigure}[c]{0.48\textwidth}
\includegraphics[width=\textwidth]{./figures/experiments/influence5/influence-bottleneck0.pdf}
\caption{$\text{influence}\pa{\winf; \Xn, f_n, m, \xvo}$}
\end{subfigure}
\caption{Influence analysis for $\dnq$-weighted filtrations vis-à-vis DTM-based filtrations and unweighted filtrations.}
\label{fig:influence}
\end{figure}
\section{Preliminaries}
\label{sec:preliminaries}
The following subsections introduce the essential ingredients used for the remainder of the paper.
\textbf{Definitions and Notations.} For two sets $A$ and $B \subseteq A$, $\id: B \rightarrow A$ given by $b \mapsto b$ denotes the identity map. For $n \in \Z_+$, we use the notation $[n] = \pb{1, 2, \dots, n}$, and for real-valued functions~$f$~and~$g$ we employ the notation $f(n) \lesssim g(n)$ if $f(n) = O\qty\big(g(n))$. Given a metric space $(\M, \rho)$ with metric ${\rho: \M \times \M \rightarrow \R_{\ge 0}}$, the ball of radius $r$ centered at $\xv \in \M$ is denoted $\Bfx[\rho][][r]$\footnote{When $r<0$ we explicitly define $B_\rho(\xv, r) = \varnothing$.}.
For a compact set $\bX \subset \M$, the $r$--offset of $\bX$ w.r.t.\ the metric $\rho$ is given by
$$
\bX[\rho](r) = \bigcup_{\xv \in \bX}\Bfx[\rho][][r].
$$
The distance function w.r.t. the compact set $\Xb$ plays a central role in extracting the geometric and topological features underlying $\Xb$.
\begin{definition}[Distance function]
For a metric space $(\M, \rho)$ and a compact set $\bX \subseteq \M$, the distance function to the set $\bX$, denoted as $\dx$, is given by
\eq{
\dx(\yv) \defeq \inf_{\xv \in \bX}\rho\pa{\xv, \yv}, \ \ \ \text{for all } \yv \in \M.\nn
}
\end{definition}
For a finite collection of points $\Xn$, the distance function $d_{\Xn}$ is simply denoted as $\dsf_n$. For two compact sets $\bX, \bY \subset (\M, \rho)$ the \textit{Hausdorff distance} {between $\bX$ and $\bY$} is given by
$$
{\haus{\bX, \bY}[\rho] \defeq \max\pb{ \sup_{\xv \in \Xb}\dsf_{\Yb}(\xv), \ \sup_{\yv \in \Yb}\dx(\yv) } = \inf\pB{\e > 0: \bX \subseteq \bY[\rho](\e), \bY \subseteq \bX[\rho](\e)}},
$$
and metrizes the space of all compact subsets of $(\M, \rho)$. Throughout the paper we assume that $(\M, \rho) = (\R^d, \norm{\cdot})$ is the usual Euclidean space with the $\ell_2$ metric, and omit the subscript $\rho$. However, the results here should extend to general metric spaces $(\M,\rho)$ with simple modifications along the lines of \cite{chazal2015convergence} and \cite{buchet2016efficient}.
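For intuition, both quantities are straightforward to evaluate for finite point sets. The following minimal sketch, our own illustration using only the Python standard library, computes the distance function and the Hausdorff distance in $(\R^2, \norm{\cdot})$.

```python
import math

def dist_fn(X, y):
    """Distance function d_X(y) = inf over x in X of ||x - y||, for finite X."""
    return min(math.dist(x, y) for x in X)

def hausdorff(X, Y):
    """Hausdorff distance between two finite subsets of R^d:
    the larger of sup_{x in X} d_Y(x) and sup_{y in Y} d_X(y)."""
    return max(max(dist_fn(Y, x) for x in X),
               max(dist_fn(X, y) for y in Y))

X = [(0.0, 0.0), (1.0, 0.0)]
Y = [(0.0, 0.0), (1.0, 1.0)]
d = dist_fn(X, (2.0, 0.0))   # nearest point of X is (1, 0)
h = hausdorff(X, Y)          # both one-sided suprema equal 1.0
```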
We use $\mathcal{P}(\bX)$ to denote the set of Borel probability measures defined on $\R^d$ with support $\bX \subseteq \R^d$, and for $\xv \in \R^d$, $\dir{\xv}$ is used to denote a Dirac measure at $\xv$. A key assumption used throughout the paper is a regularity condition for the data generating mechanism. For $a,b>0$, the probability measure satisfies the $(a,b)-$standard condition if
\eq{\label{eq:ab-standard}
\pr\qty\Big( B(\xv, r) ) \ge 1 \wedge a r^b \quad \text{for all } \xv \in \Xb \text{ and } r>0.
}
We denote by $\mathcal{P}(\Xb, a, b)$ the subset of $\mathcal{P}(\Xb)$ which satisfies the $(a, b)-$standard condition in \eref{eq:ab-standard} for $a, b >0$. This regularity assumption is standard in the domain of statistical shape analysis (e.g., \citealt{cuevas2004boundary,chazal2015convergence,chazal2015subsampling,chazal2017robust}). Throughout the paper, we assume that the samples $\Xn$ are obtained in an adversarial contamination setting $(\scr{S})$, as defined below.
\begin{description}[labelindent=1cm]
\item[\textsc{Sampling Setting ($\scr{S}$).}] The data consist of $n$ samples $\Xn = \rangeb{\Xv}{n}$, where $m < {n}/{2}$ samples are contaminated with unknown outliers. No distributional assumption is made on these outliers. The remaining $n\!-\!m$ samples are observed i.i.d. from a distribution $\pr \in \mathcal{P}(\Xb, a, b)$, for compact $\Xb \subset \R^d$ and $a,b > 0$.
\end{description}
A glossary of notations for additional definitions and notations introduced in the subsequent sections is provided in Appendix~\ref{sec:glossary}.
\subsection{Background on Persistent Homology}
\label{sec:persistent}
In this section we provide the necessary background on persistent homology arising from single parameter filtrations. We refer the reader to \cite{chazal2017introduction,edelsbrunner2010computational} for a detailed introduction.
Given a compact set $\bX$, any topological data analysis pipeline for extracting meaningful information from $\bX$ begins with a nested sequence of topological spaces called a filtration, simply denoted by $V$. The sequence of spaces is parametrized by a resolution parameter $t$. There are several approaches for constructing a filtration using $\Xb$. One approach is to consider the collection of offsets built on top of $\Xb$, i.e., $V^t = V^t[\bX] = \Xb(t)$.
For $s < t$, the offsets are nested $V^s \subseteq V^t$, and $V[\bX] \defeq \pb{ V^t[\Xb] : t \in \R }$ is a nested sequence of topological spaces and defines the filtration built using the offsets of $\Xb$.
The second approach to constructing a filtration is using a filter function $f_{\Xb}: \R^d \rightarrow \R$ which carries the topological information underlying $\Xb$. In this scenario, one typically constructs the filtration from the sublevel sets associated with $f_\Xb$, given by $V^t = f\inv_{\Xb}\qty\big({ (-\infty, t] })$ for each resolution $t$. Again, for $s<t$, $V^s \subset V^t$ and the sequence $V[f_\Xb] = \pb{ V^t[f_{\Xb}] : t \in \R }$ constitutes the sublevel filtration from $f_{\Xb}$. Mutatis mutandis a similar notion holds for the superlevel filtration.
In general, the filtration $V[\Xb]$ can be very different from $V[f_{\Xb}]$, although the prevailing objective is for $V[f_{\Xb}]$ to encode the same information as in $V[\Xb]$. In this context, the distance function $\dx$ plays a special role owing to the fact that its sublevel filtration is the same filtration associated with the offsets, i.e., $V[\dx] = V[\bX]$. This fact plays an important role in motivating the \md{} estimator introduced in Section~\ref{sec:proposal}, and follows by noting that for every resolution $t > 0$,
\eq{
\dsf_{\Xb}\inv\qty\Big({ (-\infty, t] }) = \pb{\xv \in \R^d : \dx(\xv) \le t} = \bigcup_{\xv \in \Xb}B(\xv, t).\nn
}
Let $V = \pb{V^t : t \in \R}$ denote a generic filtration and let $\iota_{s}^t: V^s \hookrightarrow V^t$ denote the inclusion map between the filtered spaces at resolutions $s < t$. For each resolution $t$, let $\bbv^t = \textup{H}_*\pa{V^t; \bF}$ be the homology\footnote{Where, as per convention, the order of homology, denoted by $*$, is an arbitrary non-negative integer.} of $V^t$ with coefficients in a field $\bF$. As the resolution $t$ varies, the evolution of topological features is captured by $V$. Roughly speaking, new cycles (i.e., connected components, loops, holes, and higher dimensional analogues) are born, or existing cycles can merge and disappear. The collection of cycles in $V^t$ at each resolution $t$ is encoded as a vector space in $\bbv^t$. The inclusion maps $\iota_{s}^t: V^s \hookrightarrow V^t$ induce linear maps $\phi_s^t: \bbv^s \rightarrow \bbv^t$ between the vector spaces $\bbv^s$ and $\bbv^t$.
As such, the collection $V$ can be described more succinctly as the \emph{category} $V = \qty{V^t, \iota_s^t : s \le t}$ with the inclusion maps $\iota_s^t$ \mbox{representing the morphisms for ${s\le t}$}. The image of $V$ under the \emph{homology functor} $\mathbf{Hom}_* : V \mapsto \bbv$, gives us the \emph{persistence module}
\eq{
\bbv \defeq \qty\Big{\bbv^t, \phi_s^t: {s\le t}},\nonumber
}
where the induced maps $\phi_s^t: \bbv^s \rightarrow \bbv^t$ are homomorphisms between two vector spaces. For $r < s < t$, the persistence module can equivalently be represented as
\begin{figure}[H]
\includegraphics[]{./tikz/v-module.pdf}
\vspace{-10pt}
\end{figure}
Informally, a new topological feature is born at resolution $b \in \R$ if the cycle associated with that feature is not present in $\bbv^{b-\e}$ for all $\e > 0$. The same feature is said to die at resolution $d>b$ if the cycle associated with this feature disappears from $\bbv^{d+\e}$ for all $\e > 0$, resulting in the (ordered) persistence pair $(b,d)$. By collecting all the persistence pairs, the persistence module $\bbv$ may be succinctly represented by a \textit{persistence diagram},
\eq{
\dgm\pa{\bbv} \defeq \pb{ (b,d) \in \R^2: b \le d \le \infty }.\nn
}
\subsection{Interleaving of Persistence Modules}
\label{sec:interleaving}
Two persistence modules $\bV = \pb{\bV[][t], \phi_s^t}_{s\le t}$ and $\bW = \pb{\bW[][t], \psi_s^t}_{s \le t}$ are said to be equivalent (or isomorphic) if there exists a family of maps $\pb{\xi^t}_{t \in \R}$ such that each $\xi^t: \bV[][t] \rightarrow \bW[][t]$ is an isomorphism. This notion can be extended to define two collections of maps $\pb{\alpha_t : t \in \R}$ and $\pb{\beta_t : t \in \R}$ which weave the two persistence modules together.
\bigskip
\begin{definition}[Interleaving of persistence modules] Given two persistence modules $\bV$ and $\bW$, and two monotone increasing maps ${\alpha,\beta: \R \rightarrow \R}$, $\bV$ and $\bW$ are said to be $\pa{\alpha,\beta}$--interleaved if the following diagrams commute for all $s \le t$
\begin{figure}[H]
\includegraphics[]{./tikz/commutative.pdf}
\end{figure}
\label{def:interleaving}
\end{definition}
\begin{remark}
The persistence modules $\bbv$ and $\bbw$ are purely algebraic objects, and their underlying filtrations $V$ and $W$ are not necessarily compatible. However, when the filtrations $V$ and $W$ arise as filtered subsets of the same underlying space (e.g., $\R^d$), we can similarly define an $(\alpha,\beta)-$interleaving between the filtrations $V$ and $W$ by replacing all linear maps in Definition~\ref{def:interleaving} by inclusion maps.
\end{remark}
The resulting persistence diagrams $\dgm\pa{\bbv}$ and $\dgm\pa{\bbw}$ are elements of the space of persistence diagrams $\Omega = \pb{ (x,y) : x \le y }$, which is endowed with the family of $q-$Wasserstein metrics $W_q(\cdot, \cdot)$ for $1 \le q \le \infty$. We refer the reader to \cite{edelsbrunner2010computational,mileyko2011probability} for more details. In the special case $q = \infty$, the resulting metric $\winf$ is commonly referred to as the \textit{bottleneck distance}, and is given as follows.
\begin{definition}[Bottleneck distance]
Given two persistence diagrams $D_1, D_2 \in \Omega$, the bottleneck distance is given by
\eq{
\winf\pa{D_1, D_2} \defeq \inf_{\gamma \in \Gamma}\sup_{p \in D_1 \cup \Delta} \norminf{ p - \gamma(p) },\nonumber
}
where $\Gamma = \pb{\gamma : D_1 \cup \Delta \rightarrow D_2 \cup \Delta}$ is the set of all multi-bijections from $D_1$ to $D_2$ including the diagonal $\Delta = \pb{(x,y) : x=y}$ with infinite multiplicity.
\end{definition}
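For very small diagrams, the bottleneck distance can be computed by brute force directly from the definition, enumerating matchings that may also send points to the diagonal $\Delta$. The sketch below is our own illustration (exponential in the diagram size, and assuming distinct points, so it is only for intuition; practical implementations use geometric matching algorithms).

```python
import math
from itertools import permutations

def bottleneck(D1, D2):
    """Brute-force bottleneck distance for tiny diagrams with distinct points:
    try every way of matching D1 to points of D2 or to the diagonal."""
    def diag_cost(p):
        # l_inf distance from p = (b, d) to the diagonal {x = y}
        return (p[1] - p[0]) / 2.0

    def linf(p, q):
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    # each point of D1 is matched to a distinct point of D2 or a diagonal slot
    slots = list(D2) + ["diag"] * len(D1)
    best = math.inf
    for perm in permutations(slots, len(D1)):
        cost = 0.0
        for p, s in zip(D1, perm):
            cost = max(cost, diag_cost(p) if s == "diag" else linf(p, s))
        used = [s for s in perm if s != "diag"]
        for q in D2:                  # unmatched points of D2 go to the diagonal
            if q not in used:
                cost = max(cost, diag_cost(q))
        best = min(best, cost)
    return best

D1 = [(0.0, 2.0)]
D2 = [(0.0, 2.5), (1.0, 1.2)]
# optimal matching: (0,2) -> (0,2.5) with cost 0.5; (1,1.2) -> diagonal, cost 0.1
b = bottleneck(D1, D2)
```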
Although the space of persistence diagrams $(\Omega, W_q)$, together with the $q-$Wasserstein distance, presents a challenging mathematical structure for refined statistical analyses \citep{mileyko2011probability,turner2014frechet}, the stability of persistence diagrams provides a handle on this space by allowing us to directly work on the space generating the filtrations.
\begin{lemma}[Stability of persistence diagrams; \citealt{cohen2007stability,chazal2016structure}]
Given two compact sets $\bX, \bY \subset \R^d$,
\eq{
\winf\qty\Big({ \dgm\pa{ \bbv[\Xb] }, \dgm\pa{ \bbv[\Yb]} }) \le \haus{\Xb, \Yb}[].\nn
}
Alternatively, for two filter functions $f,g : \R^d \rightarrow \R$,
\eq{
\winf\qty\Big({ \dgm\pa{ \bbv[f] }, \dgm\pa{ \bbv[g]} }) \le \norminf{f-g}.\nn
}
\end{lemma}
\bigskip
\begin{remark} \null \hfill
\begin{enumerate}[label=\textup{\textbf{(\roman*)}}]
\item When the interleaving maps $\pa{\alpha,\beta}$ are additive, i.e., of the form ${\alpha: t \mapsto t + \epsilon}$ and ${\beta: t \mapsto t+\delta}$, then the persistence diagrams $\dgm\pa{\bV}$ and $\dgm\pa{\bW}$ obtained from the persistence modules satisfy the following relationships:
\eq{
\dgm\pa{\bV} \in \dgm\pa{\bW} \oplus \pc{-\delta, \e}^2 \hspace{1em} \text{ and } \hspace{1em} \dgm\pa{\bW} \in \dgm\pa{\bV} \oplus \pc{-\e, \delta}^2,\nn
}
where $\oplus$ denotes the Minkowski sum in $\R^2$. A \emph{coarser} bound is obtained from the stability theorem \citep{cohen2007stability} which guarantees that
\eq{
\Winf\qty\big(\dgm\pa{\bV}, \dgm\pa{\bW}) \le \max\pb{\e,\delta}.\nn
}
\item Furthermore, when the interleaving maps are identical, i.e., $\alpha \equiv \beta: t \mapsto t + \epsilon$, this notion can be extended to define an \textit{interleaving pseudo-distance} between persistence modules,
\eq{
d_{\mathcal{I}}\pa{\bV, \bW} \defeq \inf\qty\Big{ \e > 0 : \bV \text{ and } \bW \text{ are } (\alpha,\alpha)-\text{interleaved for } \alpha: t \mapsto t + \e }.\nn
}
From the isometry theorem \citep{chazal2016structure}, the \emph{interleaving distance} is identical to the \emph{bottleneck distance}, i.e., $\Winf\qty\big(\dgm\pa{\bV}, \dgm\pa{\bW}) = d_{\mathcal{I}}\pa{\bV, \bW}$. In such cases, it is equivalent to say that $\bV$ and $\bW$ are $\pa{\alpha,\alpha}$--interleaved or that ${d_{\mathcal{I}}\pa{\bV, \bW} \le \e}$. Similarly, for filtrations $V$ and $W$ consisting of subsets of $\R^d$,
\eq{
d_{\mathcal{I}}\pa{V, W} \defeq \inf\qty\Big{ \e > 0 : V^t \subseteq W^{t+\e} \quad \text{and} \quad W^t \subseteq V^{t+\e} \ \text{ for all } t \in \R }.
\label{eq:interleaving-filtration}
}
By functoriality, $d_{\mathcal{I}}\pa{V, W} \le \e \Longrightarrow d_{\mathcal{I}}\pa{\bV, \bW} \le \e \Longrightarrow \Winf\qty\big(\dgm\pa{\bV}, \dgm\pa{\bW}) \le \e$.
\end{enumerate}
\label{remark:interleaving-1}
\end{remark}
\subsection{Weighted Rips Filtrations}
\label{sec:weightedrips}
In practice, given a compact set $\Xb \subset \R^d$ or a filter function $f$, the persistence modules $\bbv[\Xb]$ and $\bbv[f]$ are computed using simplicial complexes. In particular:
\begin{enumerate}[label=(\roman*)]
\item For each $t \in \R$, one may use the \cech{} or Alpha complex to compute the nerve of the cover, $\text{nerve}\pb{ B(\xv, t) : \xv \in \Xb }$. Since the Nerve lemma \citep{edelsbrunner2010computational} guarantees that ${V^t[\Xb] \cong \text{nerve}\pb{ B(\xv, t) : \xv \in \Xb }}$, the resulting persistence module $\bbv[\bX]$ may be computed exactly using simplicial homology.
\item In the case of $\bbv\pc{f}$, this is typically achieved by choosing a grid resolution parameter $\e$, and constructing a cubical complex $\k_\e$ on the underlying space. The function $f:\R^d \rightarrow \R$ may be extended to define $f: \k_\e \rightarrow \R$, and at each resolution $t \in \R$, the sublevel sets $V^t[f_\Xb]$ can be approximated using the lower-star filtration $\k_\e^t = \pb{ \sigma \in \k_\e : \max_{\xv \in \sigma}f(\xv) \le t }$. Therefore, the sublevel filtration $V[f_\Xb]$ can be approximated by the filtration $\pb{\k_\e^t : t \in \R}$, and the resulting persistence module is computed using cubical homology.
\end{enumerate}
Note that (i) is able to compute the exact persistence module in practice, but is unable to weight points according to $f$. On the other hand, (ii) is only an approximate computation and depends on the nuisance parameter $\e$. Furthermore, the size of the cubical complex is $\abs{\k_\e} = O(\e^{-d})$, making it scale poorly in high dimensions. To overcome this limitation, \cite{buchet2016efficient} proposed the $f$-weighted filtrations, which were subsequently generalized by \cite{anai2019dtm}.
Given a non-negative \emph{weight function} $f: \R^d \rightarrow \R_{\ge 0}$ and \textit{power} $1 \le p \le \infty$, the \emph{weighted radius function} of resolution $t>0$ at $\xv$ is given by
\begin{equation}
\rfx \defeq \begin{cases}
\pa{t^p - f(\xv)^p}^{1/p} & \text{ if } t \ge f(\xv) \\
-\infty & \text{ if } t < f(\xv).
\end{cases}\nonumber
\end{equation}
Consequently, $\Bfx[f,\rho][][][]$ is the \emph{weighted ball of resolution $t$ at $\xv$} w.r.t.~the metric $\rho$, which is illustrated in Figure~\ref{fig:ball}, and is given by
\eq{
\Bfx[f,\rho][][][] \defeq B_{\rho}\pa{\xv, \rfx} = \pb{\yv \in \R^d: \rho(\xv,\yv) \le \rfx}.\nn
}
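The weighted radius, and hence the weighted ball, is simple to evaluate. The sketch below is our own illustration of the displayed definition, including the convention that the ball is empty while $t < f(\xv)$, and taking the radius to be $t$ once $t \ge f(\xv)$ in the $p=\infty$ case of Figure~\ref{fig:ball}.

```python
def weighted_radius(f_x, t, p):
    """r_f(x; t, p) = (t^p - f(x)^p)^(1/p) when t >= f(x), else -inf
    (so the weighted ball is empty). For p = inf, the radius is t."""
    if t < f_x:
        return float("-inf")
    if p == float("inf"):
        return t
    return (t ** p - f_x ** p) ** (1.0 / p)

# A point with weight f(x) = 0.3 enters the filtration only at t = 0.3,
# and its ball then grows with t.
r_before = weighted_radius(0.3, 0.2, 2)   # -inf: ball is still empty
r_at = weighted_radius(0.3, 0.3, 2)       # 0.0: ball appears
r_after = weighted_radius(0.3, 0.5, 2)    # (0.25 - 0.09)^(1/2) = 0.4
```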
\begin{figure}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/plots/v1.pdf}
\caption{$V^t[\Xn]$ unweighted}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/plots/v2.pdf}
\caption{$V^t[\Xn, f]$ for $p=1$}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/plots/v3.pdf}
\caption{$V^t[\Xn, f]$ for $p=\infty$}
\end{subfigure}
\caption{Illustration of offsets for $t=0.5$ and $f(\xv) = \inf_{\yv \in \mathbb{S}^1}\norm{\xv-\yv}$.}
\label{fig:ball}
\end{figure}
Given $\bX \subseteq \R^d$, the collection of weighted balls ${\VVt[][\bX, f][] = \pB{\Bfx[f][\xv][]: \xv \in \bX}}$ is called the \emph{weighted cover} of $\bX$. The $f$-weighted offset at resolution $t$ is given by the union of balls in $\VVt[][\bX, f][]$,
\eq{
\Vt[][\bX,f][] \defeq \bigcup_{\xv \in \bX} \Bfx[f][\xv][].\nonumber
}
Together with the inclusion maps $\iota_s^t:\Vt[s][\bX,f][] \hookrightarrow \Vt[][\bX,f][]$, the $f-$\emph{weighted filtration} is given by
\eq{
V[\bX,f] \defeq \pB{\Vt[][\bX,f][], \iota_s^t : {s \le t}}.\nonumber
}
The image of $V[\bX,f]$ under the \emph{homology functor} $\mathbf{Hom}_* : V[\bX,f] \mapsto \bV[][][\bX,f]$, results in the \emph{weighted persistence module} $\bV[][][\bX,f] \defeq \qty{\bVt[][\bX,f][], \phi_s^t : {s\le t}}$, where the induced maps ${\phi_s^t: \bVt[s][\bX,f][] \rightarrow \bVt[][\bX,f][]}$ are linear maps between vector spaces. The weighted-simplicial complexes
\eq{
\Ct[][\bX,f][] = \text{nerve}\pb{\VVt[][\bX,f][]} \hspace{2mm} \text{ and } \hspace{2mm} \Rt[][\bX,f][] = \text{Rips}\pb{\VVt[][\bX,f][]}\nn
}
denote the weighted-\cech{} complex and weighted-Rips complex associated with the weighted cover $\VVt[][][]$, respectively. Without loss of generality, $\bVt[][\bX,f][] = \textup{H}_*\pa{\Vt[][\bX,f][]}$ is the homology of the offset $\Vt[][\bX,f][]$, which, by the nerve lemma, is the same as the homology of the weighted-\cech{} complex. Furthermore, if $f(\xv) \equiv 0$ for all $\xv \in \R^d$ then the resulting filtrations are the usual unweighted filtrations. In particular, $V[\Xn] \cong \Ct[ ][\Xn,f][ ]$ and $\Rt[ ][\Xn, f][ ]$ correspond to the \cech{} and Rips filtrations, respectively. The following structural results appear in \cite{anai2019dtm}, and serve as analogues of the stability result for $f$-weighted filtrations.
\bigskip
\begin{lemma}[{\citealp[Propositions 3.2 \& 3.3]{anai2019dtm}}]
Given $\bX \subset \R^d$ and $f,g : \bX \rightarrow \R_+$
\begin{enumerate}[label=\textup{(\roman*)}]
\item $\bbv[\bX,f]$ and $\bbv[\bX, g]$ are $(\alpha,\alpha)$--interleaved for $\alpha: t \mapsto t + \norminf{f-g}$.
\end{enumerate}
Additionally, given $\bY \subset \R^d$ and $h: \bX \cup \bY \rightarrow \R_+$, if $h$ is $L$--Lipschitz and $\haus{\bX,\bY} \le \e$, then
\begin{enumerate}[label=\textup{(\roman*)}, resume]
\item $\bbv[\bX,h]$ and $\bbv[\bY, h]$ are $(\beta,\beta)$--interleaved for $\beta: t \mapsto t + \e \pa{1+L^p}^{1/p}$.
\end{enumerate}
\label{lemma:anai-et-al}
\end{lemma}
\subsection{Median-of-means Estimators}
\label{sec:mom}
Median-of-means (MoM) estimators have gained popularity in robust machine learning owing to their recent success, both theoretical and experimental. See, for example, \cite{devroye2016sub,lugosi2019mean,lecue2020robust}. The background for MoM estimators in the context of mean estimation is as follows: samples $\Xn = \pb{\Xv_1, \Xv_2, \dots, \Xv_n}$ are observed and we wish to construct an estimator for the population mean $\theta$. The sample mean $\hat\theta = \overline{\bX}_n$ is known to achieve sub-Gaussian estimation error only when the samples $\Xn$ themselves are observed from a sub-Gaussian distribution.
Robust statistics deals with two important relaxations to this model: (i) the samples $\Xn$ are observed iid from $\pr$, but $\pr$ is no longer sub-Gaussian and is assumed to have heavy tails; and (ii) a fraction $\pi < \half$ of the samples are assumed to be contaminated with outliers, and the remaining $(1-\pi)n$ samples are observed from a well-behaved distribution $\pr$.
The median-of-means estimator $\hat\theta_{\text{MOM}}$, originally introduced by \cite{nemirovskij1983problem}, addresses these relaxations by constructing a robust estimator of location as follows: for $1 \le Q \le n$, the index set is partitioned into subsets $\pB{S_1, S_2, \dots, S_Q}$ such that each $S_q \subset \pb{1,2,\dots,n}$ satisfies $|{S_q}| = \floor{n/Q}$. The \emph{MoM estimator} $\thetamom$ is then defined as
\eq{
\thetamom \defeq \text{median} \pb{\hat\theta_1, \hat\theta_2, \dots, \hat\theta_Q},\nonumber
}
where $\{\hat\theta_q: q \in [Q]\}$ are the sample means computed for each subset $\{S_q: q \in [Q]\}$. \cite{audibert2011robust} showed that, in the univariate setting, $\thetamom$ achieves sub-Gaussian rates of convergence for heavy tailed data. \cite{minsker2015geometric} and \cite{devroye2016sub} extend these results to the multivariate setting by considering the geometric median. The MoM idea has subsequently been extended in several other directions, e.g., U-statistics \citep{joly2016robust}, kernel mean embeddings \citep{lerasle2019monk} and general M-estimators \citep{lecue2020robust} among others. Most importantly, these extensions move away from the heavy-tailed framework and provide significant insights on how $\thetamom$ can overcome the second relaxation, i.e., estimation in the presence of outlying contamination. While the MoM estimators are not unique in their ability to recover the signal under heavy tailed noise, or in the presence of contamination, they are very simple to construct in most cases, and provide a clear characterization of the effect of noise on the estimation error.
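The construction is short enough to state in code. The sketch below is our own illustration, with an arbitrary random partition into blocks; it contrasts the sample mean with $\thetamom$ on data contaminated by a few wild outliers.

```python
import random
import statistics

def mom_mean(xs, Q, seed=0):
    """Median-of-means: shuffle, split into Q blocks, average each block,
    and return the median of the Q block means."""
    rng = random.Random(seed)
    xs = list(xs)
    rng.shuffle(xs)
    blocks = [xs[q::Q] for q in range(Q)]
    return statistics.median(statistics.fmean(b) for b in blocks)

data_rng = random.Random(1)
clean = [data_rng.gauss(0.0, 1.0) for _ in range(1000)]
corrupted = clean + [1e6] * 20           # 20 wild outliers

naive = statistics.fmean(corrupted)      # dragged far from 0 by the outliers
robust = mom_mean(corrupted, Q=50)       # outliers hit at most 20 of 50 blocks
```

Since the 20 outliers can contaminate at most 20 of the 50 blocks, the median block mean is computed over a majority of clean blocks and stays near the true mean.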
\section{Glossary of Notations}
\label{sec:glossary}
\begin{center}
\begin{tabular}{r!{:}l}
$\mathsf{H}_\rho(\bX, \bY)$ & Hausdorff distance between $\bX \subseteq \M$ and $\bY \subseteq \M$ measured w.r.t. metric $\rho$.\\
$V^t[f]$ & Sublevel set of $f$ at level $t$ given by $\pb{\xv \in \R^d : f(\xv) \le t}$\\
$V[f]$ and $\bbv[f]$ & Sublevel filtration $\pb{V^t[f]: t \in \R}$ and its persistence module\\
$\rfx$ & The $f$--weighted radius function of resolution $t$ at $\xv$. $\rfx=\pa{t^p - f(\xv)^p}^{\f1p}$\\
$\Bfx[f,\rho][\xv][t]$ & $f$--weighted ball at $\xv$ with radius $\rfx$ w.r.t.\ the metric $\rho$.\\
$\Vt[][][]$ & The $f$--weighted offset of $\bX$ at resolution $t$ given by $\Vt[][][] = \bigcup_{\xv\in\bX}\Bfx[f][\xv][t]$\\
$\Vt[ ][][]$ & $f$--weighted filtration, i.e., $\dots \rightarrow \Vt[t_1] \rightarrow \Vt[t_2] \rightarrow \dots \rightarrow \Vt[t_n] \rightarrow \dots$\\
$\bVt[ ][][]$ & $f$--weighted persistence module, i.e., $\bVt[ ][][] = \textbf{Hom}\pa{\Vt[ ][][]}$\\
$\dgm\pa{\bbv}$ & Persistence diagram associated with the persistence module $\bbv$\\
$\hat\theta_{n,Q}$ & MoM-estimator, $\median\{\hat\theta_{1},\dots, \hat\theta_Q\}$, where $\hat\theta_q$ is the estimator from block $S_q$.\\
$\dnq$ & \md{} function given by $\dnq(\xv) = \median\pb{\inf_{\yv \in S_q}\norm{\xv-\yv} : q \in [Q]}$\\
$\dx$ & Distance function to a compact set $\Xb$ given by $\dx(\yv) = \inf_{\xv \in \Xb}\norm{\xv-\yv}$
\end{tabular}
\end{center}
\section{Supplementary Results}
\label{supplementary-results}
The following lemma is a collection of well-known inequalities (and their slight variants). We state them here for reference, as they are used frequently in the proofs.
\begin{lemma}
For $0 < y \le x$ and $p \ge 1$, the following inequalities hold:
\begin{enumerate}[label=\textup{(\roman*)}, itemsep=7pt]
\item $x^p + y^p \le (x+y)^p \le 2^{p-1}(x^p + y^p)$;
\item $2^{1-p}x^{p} - y^p \le (x-y)^p \le x^p - y^p$;
\item $(x+y)^{\f 1 p} \le x^{\f 1 p} + y^{\f 1 p} \le 2^{\f{p-1}{p}}(x+y)^{\f 1 p}$;
\item $x^{\f 1 p} - y^{\f 1 p} \le (x-y)^{\f 1 p} \le 2^{\f{p-1}{p}}x^{\f 1 p} - y^{\f 1 p}$;
\item $y^{1 - \f1p}x^{\f1p} \le x \le y^{1-p}x^p$;
\item $x \le \qty\Big(\pa{x+y}^p - y^p)^{\fpp}$.
\end{enumerate}
\label{lemma:useful-inequalities}
\end{lemma}
\begin{proof}
\emph{Part (i).} Let $f(y) = (x+y)^p - x^p - y^p$ on the interval $0 < y \le x$. The derivative,
$$
f'(y) = p(x+y)^{p-1} - py^{p-1} \ge 0
$$
for all $0 < y \le x$ and $p \ge 1$. Therefore $f$ is non-decreasing, and $f(y) \ge f(0) = 0$. This gives us the first inequality. For the second inequality, note that $g(z) = z^p$ is convex for $z\ge 0$. This follows from the fact that $g''(z) = p(p-1)z^{p-2} \ge 0$ for all $z \ge 0$ and $p\ge 1$. By convexity, we obtain
\eq{
2^{-p} \pa{x+y}^p = \pa{\half x + \half y}^p \le \f{x^p + y^p}{2},\nn
}
which leads to the second inequality.
\emph{Part (ii).} Let $z = (x-y)$. Applying the first inequality from the preceding part to $z$ and $y$ we get $z^p \le (y+z)^p - y^p$, i.e., $(x-y)^p \le x^p - y^p$. Similarly, from the second inequality, $(z+y)^p \le 2^{p-1}(z^p + y^p)$, which is the same as $2^{1-p}x^p - y^p \le (x-y)^p$.
\emph{Part (iii).} Note that $f(z) = z^{\f 1 p}$ is concave for all $z\ge 0$ and $p \ge 1$, since
\eq{
f''(z) = -\pa{\f{p-1}{p^2}} z^{\f{1-2p}{p}} \le 0,\nn
}
for all $z \ge 0$, $p \ge 1$. Therefore, by concavity,
\eq{
2^{-\f{1}{p}}(x+y)^{\f 1 p} \ge \f{x^{\f 1 p} + y^{\f 1 p}}{2},\nn
}
which leads to the right hand side inequality, i.e., $x^{\f 1 p} + y^{\f 1 p} \le 2^{1 - \f{1}{p}}(x + y)^{\f 1 p}$. For the left hand side inequality, let $f(y) = x^{\f 1 p} + y^{\f 1 p} - (x+y)^{\f 1 p}$ on the interval $0 < y \le x$. The derivative is given by
\eq{
f'(y) = \f{1}{p}\pa{y^{\f1p - 1}-(x+y)^{\f1p-1}} \ge 0,\nn
}
since $0 < 1/p \le 1$ and $0 < y \le x$. Thus, $f$ is non-decreasing on the interval $[0,x]$, and, therefore, $f(y) \ge f(0) = 0$. This leads to the desired result.
\emph{Part (iv).} The proof is identical to the proof in Part (ii). The inequalities are obtained by taking $z=(x-y)$, and applying the results of Part (iii).
\emph{Part (v).} Since $y \le x$, it follows that $1 \le \pa{x/y}^{\f1p} \le x/y \le \pa{x/y}^p$ for $p\ge 1$. By rearranging the terms, we get $x \le y^{1-p}x^p$ and $x \ge y^{1 - \f1p}x^{\f1p}$.
\emph{Part (vi).} We have $x = (x + y - y) = \qty\big((x+y-y)^p)^{\fpp}$. From Part (ii) we have
$$
{(x+y-y)^p \le (x+y)^p - y^p},
$$
which, on rearrangement, yields $x \le \qty\big((x+y)^p - y^p)^{\fpp}$.
\end{proof}
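The inequalities of Lemma~\ref{lemma:useful-inequalities} are easy to sanity-check numerically. The following sketch, our own illustration with small floating-point tolerances, verifies (i)--(vi) over a grid of admissible $(x, y, p)$.

```python
import itertools

TOL = 1e-12

def check(x, y, p):
    """Numerically verify inequalities (i)-(vi) for 0 < y <= x and p >= 1."""
    assert x ** p + y ** p <= (x + y) ** p + TOL                        # (i) left
    assert (x + y) ** p <= 2 ** (p - 1) * (x ** p + y ** p) + TOL       # (i) right
    assert 2 ** (1 - p) * x ** p - y ** p <= (x - y) ** p + TOL         # (ii) left
    assert (x - y) ** p <= x ** p - y ** p + TOL                        # (ii) right
    assert (x + y) ** (1 / p) <= x ** (1 / p) + y ** (1 / p) + TOL      # (iii) left
    assert (x ** (1 / p) + y ** (1 / p)
            <= 2 ** ((p - 1) / p) * (x + y) ** (1 / p) + TOL)           # (iii) right
    assert x ** (1 / p) - y ** (1 / p) <= (x - y) ** (1 / p) + TOL      # (iv) left
    assert ((x - y) ** (1 / p)
            <= 2 ** ((p - 1) / p) * x ** (1 / p) - y ** (1 / p) + TOL)  # (iv) right
    assert y ** (1 - 1 / p) * x ** (1 / p) <= x + TOL                   # (v) left
    assert x <= y ** (1 - p) * x ** p + TOL                             # (v) right
    assert x <= ((x + y) ** p - y ** p) ** (1 / p) + TOL                # (vi)

for x, y, p in itertools.product([0.5, 1.0, 2.0, 5.0], repeat=3):
    if 0 < y <= x and p >= 1:
        check(x, y, p)
ok = True
```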
\begin{lemma}[Chernoff-Hoeffding bound simplified]
Suppose $Z_1, Z_2, \dots, Z_N$ are i.i.d. $\textup{Bernoulli}(p)$ random variables. Then, for $0 < \e < 1$,
\eq{
\pr\pa{\f{1}{N}\sum_{1 \le i \le N}Z_i > \e} \le \exp\qty\Bigg( N \pa{\f{2}{e} + \e \log(p)}).\nn
}
\label{lemma:chernoff-hoeffding}
\end{lemma}
\providecommand{\kl}{\mathsf{KL}}
\begin{proof}
For $0 < \e < 1$, using the Chernoff-Hoeffding bound for binomial random variables \citep[Theorem~1]{hoeffding1963probability} we have
\eq{
\pr\pa{\f{1}{N}\sum_{1 \le i \le N}Z_i > \e} \le \exp\qty\Bigg(-N \cdot \kl\qty\Big(\textup{Ber}(\e) || \textup{Ber}(p))),
\label{mom-concentration-4}
}
where $\textup{Ber}(\e)$ and $\textup{Ber}(p)$ are Bernoulli distributions with parameters $\e$ and $p$ respectively, and $\kl(\pr || \qr)$ is the Kullback-Leibler divergence of $\qr$ w.r.t $\pr$. Simplifying the quantity in the exponent, we get
\eq{
\kl\qty\big(\textup{Ber}(\e) || \textup{Ber}(p)) &= \e\log\pa{\f{\e}{p}} + (1-\e)\log\pa{\f{1-\e}{1-p}}\n
&= \underbrace{\e \log(\e) + (1-\e)\log(1-\e)}_{\ge -2/e} - \e\log(p) - (1-\e)\log(1-p)\n
&\ge -\f{2}{e} - \e \log(p),\nn
}
where the last inequality uses the fact that $x\log(x) \ge -1/e$ for all $0 \le x \le 1$, and ${-(1-\e)\log(1-p) \ge 0}$ for all ${0 \le \e, p \le 1}$. Substituting this in \eref{mom-concentration-4} yields the result.
\end{proof}
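The simplified bound of Lemma~\ref{lemma:chernoff-hoeffding} can likewise be checked numerically against the exact binomial tail probability; the parameter values below are our own illustrative choices.

```python
import math

def binom_tail(N, p, eps):
    """Exact P( (1/N) * sum Z_i > eps ) for i.i.d. Z_i ~ Bernoulli(p)."""
    k0 = math.floor(eps * N) + 1          # smallest k with k/N > eps
    return sum(math.comb(N, k) * p ** k * (1 - p) ** (N - k)
               for k in range(k0, N + 1))

def chernoff_bound(N, p, eps):
    """Simplified upper bound exp( N * (2/e + eps * log p) ) from the lemma."""
    return math.exp(N * (2.0 / math.e + eps * math.log(p)))

N, p, eps = 50, 0.1, 0.5
tail = binom_tail(N, p, eps)              # exact tail, tiny but positive
bound = chernoff_bound(N, p, eps)         # the lemma's bound dominates it
```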
\let\sh\undefined
\let\len\undefined
| {
"timestamp": "2022-06-07T02:02:54",
"yymm": "2206",
"arxiv_id": "2206.01795",
"language": "en",
"url": "https://arxiv.org/abs/2206.01795",
"abstract": "The distance function to a compact set plays a crucial role in the paradigm of topological data analysis. In particular, the sublevel sets of the distance function are used in the computation of persistent homology -- a backbone of the topological data analysis pipeline. Despite its stability to perturbations in the Hausdorff distance, persistent homology is highly sensitive to outliers. In this work, we develop a framework of statistical inference for persistent homology in the presence of outliers. Drawing inspiration from recent developments in robust statistics, we propose a $\\textit{median-of-means}$ variant of the distance function ($\\textsf{MoM Dist}$), and establish its statistical properties. In particular, we show that, even in the presence of outliers, the sublevel filtrations and weighted filtrations induced by $\\textsf{MoM Dist}$ are both consistent estimators of the true underlying population counterpart, and their rates of convergence in the bottleneck metric are controlled by the fraction of outliers in the data. Finally, we demonstrate the advantages of the proposed methodology through simulations and applications.",
"subjects": "Statistics Theory (math.ST); Computational Geometry (cs.CG); Machine Learning (cs.LG); Algebraic Topology (math.AT); Machine Learning (stat.ML)",
"title": "Robust Topological Inference in the Presence of Outliers",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9732407175907055,
"lm_q2_score": 0.7279754489059775,
"lm_q1q2_score": 0.7084953482816696
} |
https://arxiv.org/abs/1512.06547 | Kumjian-Pask algebras of finitely-aligned higher-rank graphs | We extend the definition of Kumjian-Pask algebras to include algebras associated to finitely aligned higher-rank graphs. We show that these Kumjian-Pask algebras are universally defined and have a graded uniqueness theorem. We also prove the Cuntz-Krieger uniqueness theorem; to do this, we use a groupoid approach. As a consequence of the graded uniqueness theorem, we show that every Kumjian-Pask algebra is isomorphic to the Steinberg algebra associated to its boundary path groupoid. We then use Steinberg algebra results to prove the Cuntz-Krieger uniqueness theorem and also to characterize simplicity and basic simplicity. | \section{Introduction}
In the 1990s, $C^{\ast }$-algebras of row-finite directed graphs were
introduced in \cite{BPRS00,KPR98,KPRR97}. Since their first appearance,
these $C^{\ast }$-algebras have been intensively studied (for example, see
\cite{R08}). Some of the earliest results about these algebras include the
existence of a universal family, the gauge-invariant uniqueness theorem, and
the Cuntz-Krieger uniqueness theorem.
Higher-rank graph $C^{\ast}$-algebras were introduced by Kumjian and Pask in
\cite{KP00} as a generalisation of the $C^{\ast }$-algebras of directed
graphs. In \cite{KP00}, Kumjian and Pask limit their focus to row-finite
higher-rank graphs with no sources. Later, Raeburn, Sims and Yeend extended
the coverage by introducing $C^{\ast}$-algebras of locally convex,
row-finite higher-rank graphs in \cite{RSY03} and then finitely aligned
higher-rank graphs in \cite{RSY04}. It is in the finitely aligned setting
where graphs that fail to be row-finite are considered. Once again Raeburn,
Sims and Yeend establish the existence of a universal family, the
gauge-invariant uniqueness theorem, and the Cuntz-Krieger uniqueness theorem.
On the other hand, Leavitt path algebras were developed independently by
Ara, Moreno, and Pardo in \cite{AMP07} and Abrams and Aranda Pino in
\cite{AA05}. A complex Leavitt path algebra is a purely algebraic structure
constructed from a directed graph that sits densely inside the graph
$C^{\ast}$-algebra. Tomforde showed in \cite{T11} that one can generalise
further and define Leavitt path $R$-algebras where $R$ is any commutative
ring with identity. Tomforde proved the existence of a universal family, the
graded uniqueness theorem (which is the algebraic analogue of the
gauge-invariant uniqueness theorem), and the Cuntz-Krieger uniqueness
theorem for Leavitt path $R$-algebras. Tomforde's proofs in \cite{T11} use
techniques that are similar to those employed by Raeburn for Leavitt path
$\mathbb{C}$-algebras in \cite{Graph Algebras} and in Tomforde's earlier
paper \cite{T07} for Leavitt path $K$-algebras where $K $ is an arbitrary
field.
Moving to higher-rank graphs, Kumjian-Pask $R$-algebras were introduced in
\cite{ACaHR13} and include the class of Leavitt path algebras. Kumjian-Pask
algebras are the algebraic analogue of the higher-rank graph
$C^{\ast}$-algebras of \cite{KP00}. As in \cite{KP00}, the authors of \cite{ACaHR13}
consider row-finite higher-rank graphs with no sources. Later, Clark, Flynn
and an Huef developed Kumjian-Pask algebras for locally convex, row-finite
higher-rank graphs in \cite{CFaH14}. To complete the final algebraic piece,
in this paper we introduce Kumjian-Pask algebras for finitely aligned
higher-rank graphs. We will establish the existence of a universal family,
the graded uniqueness theorem, and the Cuntz-Krieger uniqueness
theorem.
Our motivation to consider this class of higher-rank graphs comes from our
desire to establish an algebraic version of \cite[Theorem~4.1]{P15}: there
Pangalela shows that the Toeplitz $C^{\ast}$-algebra associated to a
row-finite graph $\Lambda$ can be realised as the graph $C^{\ast}$-algebra
associated to a higher-rank graph constructed from $\Lambda$, called
$T\Lambda$. In this setting $T\Lambda$ has sources and is not locally convex.
Let $\Lambda $ be a finitely aligned $k$-graph and let $R$ be a commutative
ring with identity. We define a Kumjian-Pask $\Lambda $-family (Definition
\ref{KP-family}) and show the existence of a universal Kumjian-Pask algebra
${\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) $ that is a
$\mathbb{Z}^{k}$-graded $R$-algebra in Proposition \ref{universal-KP-family}. We then
prove the graded uniqueness theorem in Theorem
\ref{the-graded-uniqueness-theorem}. Up to this point, our techniques mirror the
$C^{\ast }$-algebraic techniques of \cite{RSY04}. However, the proof of the
Cuntz-Krieger uniqueness theorem of \cite{RSY04} is highly analytic so we
must use an alternate approach. We have chosen a groupoid approach.
In Section \ref{Section-Steinberg-algebra}, we introduce groupoids and
\emph{Steinberg algebras}. Then, given a finitely aligned higher-rank graph
$\Lambda $, we build the associated boundary-path groupoid
$\mathcal{G}_{\Lambda }$ as in \cite{Y07}. We then use the graded uniqueness
theorem (Theorem \ref{the-graded-uniqueness-theorem}) to show that the
Kumjian-Pask algebra ${\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) $ is
isomorphic to the Steinberg algebra $A_{R}(\mathcal{G}_{\Lambda })$ in Proposition
\ref{KP-is-isomorphic-to-Steinberg-algebras}. With this isomorphism in place, we
aim to use results about Steinberg algebras to establish results about
Kumjian-Pask algebras.
First we establish how certain properties of $\Lambda$ translate to
properties of $\mathcal{G}_{\Lambda}$; we do this in Section
\ref{Section-aperiodic-effective} and Section \ref{Section-cofinal-minimal}. Of
interest in its own right, we show that a higher-rank graph $\Lambda $ is
\emph{aperiodic} if and only if the boundary-path groupoid
$\mathcal{G}_{\Lambda }$ is \emph{effective} in Proposition
\ref{aperiodic-iff-effective}. We also show in Proposition \ref{cofinal-iff-minimal}
that a higher-rank graph $\Lambda $ is \emph{cofinal} if and only if
$\mathcal{G}_{\Lambda }$ is \emph{minimal}.
In Section \ref{Section-the-CK-uniqueness-theorem}, we prove the
Cuntz-Krieger uniqueness theorem. This theorem only applies to Kumjian-Pask
algebras associated to aperiodic graphs. The proof is simply an application
of the Cuntz-Krieger uniqueness theorem for Steinberg algebras \cite[Theorem
3.2]{CE-M15} which applies to effective groupoids. Note that our technique
gives an alternate proof of the Cuntz-Krieger uniqueness theorem in the
special cases of Leavitt path algebras in \cite{T11} and the row-finite
Kumjian-Pask algebras of \cite{ACaHR13,CFaH14}.
Finally, in Section \ref{Section-Basic-Simpllicity}, we give necessary and
sufficient conditions for ${\normalsize \operatorname{KP}}_{R}\left( \Lambda \right)$
to be basically simple in Theorem \ref{basic-simplicity} and simple in
Theorem \ref{simplicity}. These two results are a consequence of the
characterisation of basic simplicity and simplicity of the Steinberg algebra
$A_{R}\left( \mathcal{G}_{\Lambda }\right) $ (see Theorem 4.1 and Corollary
4.6 of \cite{CE-M15}).
\section{Background}
Let $\mathbb{N}$ be the set of non-negative integers and let $k$ be a
positive integer. We write $n\in \mathbb{N}^{k}$ as $\left( n_{1},\ldots
,n_{k}\right) $ and for $m,n\in \mathbb{N}^{k}$, we write $m\leq n$ to
denote $m_{i}\leq n_{i}$ for $1\leq i\leq k$. We also write $m\vee n$ for
their coordinate-wise maximum and $m\wedge n$ for their coordinate-wise
minimum. We denote the usual basis in $\mathbb{N}^{k}$ by $\left\{
e_{i}\right\} $.
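For instance, if $k=2$, $m=\left( 2,0\right) $ and $n=\left( 1,3\right) $,
then $m\vee n=\left( 2,3\right) $ and $m\wedge n=\left( 1,0\right) $, while
neither $m\leq n$ nor $n\leq m$ holds, so $\leq $ is only a partial order on
$\mathbb{N}^{k}$.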
A \emph{directed graph} or $1$\emph{-graph} $E=\left( E^{0},E^{1},r,s\right)
$ consists of countable sets of vertices $E^{0}$, edges $E^{1}$ and
functions $r,s:E^{1}\rightarrow E^{0}$, which denote range and source maps,
respectively. We follow the conventions of \cite{CBMS} and write $\lambda
\mu $ to denote the composition of paths $\lambda $ and $\mu $ with $s\left(
\lambda \right) =r\left( \mu \right) $. Thus a path of length $n\in \mathbb{N}$
is a sequence $\lambda =\lambda _{1}\cdots \lambda _{n}$ of edges
$\lambda _{i}$ with $s\left( \lambda _{i}\right) =r\left( \lambda
_{i+1}\right) $ for $1\leq i\leq n-1$. We also have $s\left( \lambda \right)
:=s\left( \lambda _{n}\right) $ and $r\left( \lambda \right) :=r\left(
\lambda _{1}\right) $.
\begin{remark}
\label{convetion-of-graph}We use this convention of paths because we view
the collection of paths as a category.
\end{remark}
\subsection{Higher-rank graphs.}
For a positive integer $k$, we regard the additive semigroup $\mathbb{N}^{k}$
as a category with one object. A \emph{higher-rank graph} or $k$\emph{-graph}
$\Lambda =\left( \Lambda ^{0},\Lambda ,r,s\right) $ is a countable small
category $\Lambda $ with a functor $d:\Lambda \rightarrow \mathbb{N}^{k}$,
called the \emph{degree map}, satisfying the \emph{factorisation property}:
for every $\lambda \in \Lambda $ and $m,n\in \mathbb{N}^{k}$ with $d\left(
\lambda \right) =m+n$, there are unique elements $\mu ,\nu \in \Lambda $
such that $\lambda =\mu \nu $ and $d\left( \mu \right) =m$, $d\left(
\nu \right) =n$. We then write $\lambda \left( 0,m\right) $ for $\mu $ and
$\lambda \left( m,m+n\right) $ for $\nu $.
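For instance, if $k=2$ and $d\left( \lambda \right) =e_{1}+e_{2}$, then
applying the factorisation property to the two decompositions
$e_{1}+e_{2}=e_{2}+e_{1}$ gives
\begin{equation*}
\lambda =\lambda \left( 0,e_{1}\right) \lambda \left( e_{1},e_{1}+e_{2}\right)
=\lambda \left( 0,e_{2}\right) \lambda \left( e_{2},e_{1}+e_{2}\right) \text{,}
\end{equation*}
so $\lambda $ factors uniquely as an edge of one colour followed by an edge
of the other, in either order.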
We write $\Lambda ^{0}$ to denote the set of objects in $\Lambda $ and we
identify each object $v\in \Lambda ^{0}$ with the identity morphism at the
object, which, by the factorisation property, is the only morphism with
range and source $v$. We then regard elements of $\Lambda ^{0}$ as
\emph{vertices}. For $n\in \mathbb{N}^{k}$, we define
\begin{equation*}
\Lambda ^{n}:=\left\{ \lambda \in \Lambda :d\left( \lambda \right) =n\right\}
\end{equation*}
and call the elements $\lambda $ of $\Lambda ^{n}$ \emph{paths of degree }$n$.
For each $\lambda \in \Lambda $ we say $\lambda $ has \emph{source}
$s\left( \lambda \right) $ and \emph{range} $r\left( \lambda \right) $.
For $v\in \Lambda ^{0}$, $\lambda \in \Lambda $ and $E\subseteq \Lambda $,
we define
\begin{align*}
vE& :=\left\{ \mu \in E:r\left( \mu \right) =v\right\} \text{,} \\
\lambda E& :=\left\{ \lambda \mu \in \Lambda :\mu \in E,r\left( \mu \right)
=s\left( \lambda \right) \right\} \text{,} \\
E\lambda & :=\left\{ \mu \lambda \in \Lambda :\mu \in E,s\left( \mu \right)
=r\left( \lambda \right) \right\} \text{.}
\end{align*}
\begin{remark}
In older references, for example \cite{KP00,RSY03}, $v\Lambda $ is denoted
by $\Lambda \left( v\right) $.
\end{remark}
\begin{example}[{\protect\cite[Example 2.2.(ii)]{RSY03}}]
Let $k\in \mathbb{N}$ and $m\in \left( \mathbb{N}\cup \left\{ \infty
\right\} \right) ^{k}$. We define
\begin{equation*}
\Omega _{k,m}:=\left\{ \left( p,q\right) \in \mathbb{N}^{k}\times \mathbb{N}
^{k}:p\leq q\leq m\right\} \text{.}
\end{equation*}
This is a category with objects $\left\{ p\in \mathbb{N}^{k}:p\leq m\right\}
$, range map $r\left( p,q\right) =p$, source map $s\left( p,q\right) =q$,
and degree map $d\left( p,q\right) =q-p$. Then $\left( \Omega
_{k,m},d\right) $ is a $k$-graph.
\end{example}
One way to visualise $k$-graphs is to use coloured graphs. By choosing $k$
different colours $c_{1},\ldots ,c_{k}$, we can view paths in $\Lambda
^{e_{i}}$ as edges of colour $c_{i}$. For a $k$-graph $\Lambda $, we call
its corresponding coloured graph the \emph{skeleton }of $\Lambda $. For
further discussion about $k$-graphs and their skeletons, see \cite{HRSW13}.
Let $\Lambda $ be a $k$-graph. For $\lambda ,\mu \in \Lambda $, we say that
$\tau $ is a \emph{minimal common extension }of $\lambda $ and $\mu $ if
\begin{equation*}
d\left( \tau \right) =d\left( \lambda \right) \vee d\left( \mu \right) \text{,
}\tau \left( 0,d\left( \lambda \right) \right) =\lambda \text{ and }\tau
\left( 0,d\left( \mu \right) \right) =\mu \text{.}
\end{equation*}
Let $\operatorname{MCE}\left( \lambda ,\mu \right) $ denote the collection of all
minimal common extensions of $\lambda $ and $\mu $. Then we write
\begin{equation*}
\Lambda ^{\min }\left( \lambda ,\mu \right) :=\left\{ \left( \rho ,\tau
\right) \in \Lambda \times \Lambda :\lambda \rho =\mu \tau \in \operatorname{MCE}
\left( \lambda ,\mu \right) \right\} \text{.}
\end{equation*}
Meanwhile, for $E\subseteq \Lambda $ and $\lambda \in \Lambda $, we write
\begin{equation*}
\operatorname{Ext}\left( \lambda ;E\right) :=\bigcup_{\mu \in E}\left\{ \rho :(\rho
,\tau )\in \Lambda ^{\min }\left( \lambda ,\mu \right) \right\} \text{.}
\end{equation*}
A set $E\subseteq v\Lambda $ is \emph{exhaustive} if for every $\lambda \in
v\Lambda $, there exists $\mu \in E$ such that $\Lambda ^{\min }\left(
\lambda ,\mu \right) \neq \emptyset $. We define
\begin{equation*}
\operatorname{FE}\left( \Lambda \right) :=\bigcup_{v\in \Lambda ^{0}}\left\{
E\subseteq \left. v\Lambda \right\backslash \left\{ v\right\} :E\text{ is
finite and exhaustive}\right\} \text{.}
\end{equation*}
For $E\in \operatorname{FE}\left( \Lambda \right) $, we write $r\left( E\right) $
for the vertex $v$ which satisfies $E\subseteq v\Lambda $.
We say that $\Lambda $ is \emph{finitely aligned} if $\Lambda ^{\min }\left(
\lambda ,\mu \right) $ is finite (possibly empty) for all $\lambda ,\mu \in
\Lambda $. We see that every $1$-graph is finitely aligned. As in
\cite[Definition 1.4]{KP00}, we say that a $k$-graph $\Lambda $ is
\emph{row-finite} if $v\Lambda ^{n}$ is finite for every $v\in \Lambda ^{0}$ and
$n\in \mathbb{N}^{k}$. Note that for all $\lambda ,\mu \in \Lambda $, we have
$\left\vert \Lambda ^{\min }\left( \lambda ,\mu \right) \right\vert
=\left\vert \operatorname{MCE}\left( \lambda ,\mu \right) \right\vert \leq
\left\vert r\left( \lambda \right) \Lambda ^{d\left( \lambda \right) \vee
d\left( \mu \right) }\right\vert $. Hence, every row-finite $k$-graph
$\Lambda $ is finitely aligned. On the other hand, a finitely aligned
$k$-graph $\Lambda $ is not necessarily row-finite.
For example, consider the $2$-graph $\Lambda _{1}$ which has skeleton
\begin{equation*}
\begin{tikzpicture} \node[inner sep=1pt] (v) at (0,0) {$\bullet$};
\node[inner sep=1pt] at (0,-0.3) {$v$}; \path[->,every
loop/.style={looseness=20}] (v) edge[in=-45,out=-135,loop, red, very thick]
node[pos=0.5, below, black]{$e$} (v); \path[->,every
loop/.style={looseness=10}] (v) edge[in=-225,out=-315,loop, blue, dashed,
very thick] node[pos=0.5, above, black]{$f_1$} (v); \path[->,every
loop/.style={looseness=30}] (v) edge[in=-212.5,out=-327.5,loop, blue,
dashed, very thick] node[pos=0.5, above, black]{$f_2$} (v); \path[->,every
loop/.style={looseness=75}] (v) edge[in=-200,out=-340,loop, blue, dashed,
very thick] node[pos=0.5, above, black]{$\vdots$} (v); \end{tikzpicture}
\end{equation*}
where $ef_{i}=f_{i}e$ for all positive integers $i$, the solid edge has
degree $\left( 1,0\right) $ and dashed edges have degree $\left( 0,1\right)$.
It is clearly not row-finite because $|v\Lambda _{1}^{\left( 0,1\right)
}|=\infty $. On the other hand, for $\lambda ,\mu \in \Lambda _{1}$, $\left\vert
\Lambda _{1}^{\min }\left( \lambda ,\mu \right) \right\vert $ is either $0$
or $1$, and so $\Lambda _{1}$ is finitely aligned.
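Indeed, for the paths $e$ and $f_{i}$ of $\Lambda _{1}$, the relation
$ef_{i}=f_{i}e$ gives the unique minimal common extension, so
\begin{equation*}
\operatorname{MCE}\left( e,f_{i}\right) =\left\{ ef_{i}\right\} \text{ and }\Lambda
_{1}^{\min }\left( e,f_{i}\right) =\left\{ \left( f_{i},e\right) \right\}
\text{.}
\end{equation*}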
Following \cite[Definition 1.4]{KP00}, a $k$-graph $\Lambda $ has \emph{no
sources }if $v\Lambda ^{n}$ is nonempty for every $v\in \Lambda ^{0}$ and
$n\in \mathbb{N}^{k}$. Meanwhile, recall from \cite[Definition 3.9]{RSY03}
that a $k$-graph $\Lambda $ is \emph{locally convex }if for all $v\in \Lambda
^{0}$, $1\leq i,j\leq k$ with $i\neq j$, $\lambda \in v\Lambda ^{e_{i}}$ and
$\mu \in v\Lambda ^{e_{j}}$, the sets $s\left( \lambda \right) \Lambda
^{e_{j}}$ and $s\left( \mu \right) \Lambda ^{e_{i}}$ are nonempty.
Consider the $2$-graph $\Lambda _{2}$ with skeleton
\begin{equation*}
\begin{tikzpicture} \node[inner sep=1pt] (v_1) at (0,0) {$\bullet$};
\node[inner sep=1pt] at (0,-0.3) {$v_1$}; \node[inner sep=1pt] (v_2) at
(0,3) {$\bullet$}; \node[inner sep=1pt] at (-0.3,3.3) {$v_2$}; \node[inner
sep=1pt] (v_3) at (3,0) {$\bullet$}; \node[inner sep=1pt] at (3.3,-0.3)
{$v_3$}; \node[inner sep=1pt] (v_4) at (3,3) {$\bullet$}; \node[inner
sep=1pt] at (3.3,3.3) {$v_4$}; \node[inner sep=1pt] (v_5) at (-3,0)
{$\bullet$}; \node[inner sep=1pt] at (-3.3,0) {$v_5$}; \draw[-latex, red,
very thick] (v_2) edge[out=270, in=90] node[pos=0.5, left, black]{$e_1$}
(v_1); \draw[-latex, blue, dashed, very thick] (v_4) edge[out=180, in=0]
node[pos=0.5, above, black]{$f_1$} (v_2); \draw[-latex, red, very thick]
(v_4) edge[out=270, in=90] node[pos=0.5, right, black]{$e_2$} (v_3);
\draw[-latex, blue, dashed, very thick] (v_3) edge[out=180, in=0]
node[pos=0.5, below, black]{$f_2$} (v_1); \draw[-latex, red, very thick]
(v_5) edge[out=0, in=180] node[pos=0.5, above, black]{$e_3$} (v_1);
\end{tikzpicture}
\end{equation*}
where $e_{1}f_{1}=f_{2}e_{2}$, solid edges have degree $\left( 1,0\right) $
and dashed edges have degree $\left( 0,1\right) $. Since $v_{5}$ does not
receive edges with degree $\left( 0,1\right) $, $v_{5}$ is a source of
$\Lambda _{2}$. Furthermore, $\Lambda _{2}$ fails to be locally convex since
$e_{3}\in v_{1}\Lambda _{2}^{\left( 1,0\right) },f_{2}\in v_{1}\Lambda
_{2}^{\left( 0,1\right) }$ but $s\left( e_{3}\right) \Lambda _{2}^{\left(
0,1\right) }=\emptyset $. On the other hand, $\Lambda _{2}$ is row-finite and
thus $\Lambda _{2}$ is finitely aligned.
Next consider the $2$-graph $\Lambda _{3}$ with skeleton
\begin{equation*}
\begin{tikzpicture} \node[inner sep=1pt] (v_1) at (0,0) {$\bullet$};
\node[inner sep=1pt] at (-0.3,-0.3) {$v_1$}; \node[inner sep=1pt] (v_2) at
(0,3) {$\bullet$}; \node[inner sep=1pt] at (-0.3,3.3) {$v_2$}; \node[inner
sep=1pt] (v_3) at (3,0) {$\bullet$}; \node[inner sep=1pt] at (3.3,-0.3)
{$v_3$}; \node[inner sep=1pt] (w_1) at (1.5,1.5) {$\bullet$}; \node[inner
sep=1pt] at (1.2,1.2) {$w_1$}; \node[inner sep=1pt] (w_2) at (2.3,2.3)
{$\bullet$}; \node[inner sep=1pt] at (2.0,2.0) {$w_2$}; \node[inner sep=1pt]
at (2.6,2.6) {$\cdot$}; \node[inner sep=1pt] at (2.7,2.7) {$\cdot$};
\node[inner sep=1pt] at (2.8,2.8) {$\cdot$}; \draw[-latex, red, very thick]
(v_2) edge[out=270, in=90] node[pos=0.5, left, black]{$e$} (v_1);
\draw[-latex, blue, dashed, very thick] (v_3) edge[out=180, in=0]
node[pos=0.5, below, black]{$f$} (v_1); \draw[-latex, red, very thick] (w_1)
edge[out=315, in=135] node[pos=0.5, below, black]{$e_1$} (v_3);
\draw[-latex, blue, dashed, very thick] (w_1) edge[out=135, in=315]
node[pos=0.5, below, black]{$f_1$} (v_2); \draw[-latex, blue, dashed, very
thick] (w_2) edge[out=165, in=-15] node[pos=0.5, above, black]{$f_2$} (v_2);
\draw[-latex, red, very thick] (w_2) edge[out=285, in=105] node[pos=0.5,
right, black]{$e_2$} (v_3); \end{tikzpicture}
\end{equation*}
where $ef_{i}=fe_{i}$ for all positive integers $i$, solid edges have degree
$\left( 1,0\right) $ and dashed edges have degree $\left( 0,1\right) $.
Since $\left\vert \Lambda _{3}^{\min }\left( e,f\right) \right\vert =\infty$,
$\Lambda _{3}$ is not finitely aligned. Hence, not every $k$-graph is
finitely aligned.
To summarise, finitely aligned $k$-graphs generalise both row-finite
$k$-graphs with no sources and locally convex row-finite $k$-graphs. However,
this class of $k$-graphs does not cover all $k$-graphs. In this paper, we
focus on finitely aligned $k$-graphs. For other examples and further
discussion, see \cite{KP00,P15,RSY03,RSY04,W11}.
\subsection{Paths and boundary paths.}
Suppose that $\Lambda $ is a finitely aligned $k$-graph. Recall from
\cite[Definition 3.1]{RSY03} that for $n\in \mathbb{N}^{k}$, we define
\begin{equation*}
\Lambda ^{\leq n}:=\{\lambda \in \Lambda :d\left( \lambda \right) \leq n
\text{, and }d\left( \lambda \right) _{i}<n_{i}\text{ implies }s\left(
\lambda \right) \Lambda ^{e_{i}}=\emptyset \}\text{.}
\end{equation*}
Note that $v\Lambda ^{\leq n}\neq \emptyset $ for all $v\in \Lambda ^{0}$
and $n\in \mathbb{N}^{k}$. This is because $v$ is contained in $v\Lambda
^{\leq n}$ whenever $v\Lambda $ has no non-trivial paths of degree
less than or equal to $n$. For further discussion, see \cite[Remark 3.2]
{RSY03}.
Following \cite[Definition 5.10]{FMY05}, we say that a degree-preserving
functor $x:\Omega _{k,m}\rightarrow \Lambda $ is a \emph{boundary path }of
$\Lambda $ if for every $n\in \mathbb{N}^{k}$ with $n\leq m $ and for $E\in
x\left( n,n\right) \operatorname{FE}\left( \Lambda \right) $, there exists $\lambda
\in E$ such that $x\left( n,n+d\left( \lambda \right) \right) =\lambda $. We
write $\partial \Lambda $ for the set of all boundary paths. Note that for
$v\in \Lambda ^{0}$, $v\partial \Lambda $ is nonempty \cite[Lemma 5.15]{FMY05}.
\begin{remark}
In the locally convex setting, the set $\Lambda ^{\leq \infty }$ (as defined
in \cite[Definition 3.14]{RSY03}) is referred to as the \textquotedblleft
boundary path space\textquotedblright . Indeed, if $\Lambda $ is row-finite
and locally convex, then $\Lambda ^{\leq \infty }=\partial \Lambda $
\cite[Proposition 2.12]{W11}. However, more generally, $\Lambda ^{\leq \infty }
\subseteq \partial \Lambda $ and the two can be different (see \cite[Example
2.11]{W11}).
\end{remark}
Let $x\in \partial \Lambda $. If $n\in \mathbb{N}^{k}$ and $n\leq d\left(
x\right) $, we define $\sigma ^{n}x$ by $\sigma ^{n}x\left( 0,m\right)
=x\left( n,n+m\right) $ for all $m\leq d\left( x\right) -n$, and by
\cite[Lemma 5.13.(1)]{FMY05}, $\sigma ^{n}x$ also belongs to $\partial \Lambda $.
We also write $x\left( n\right) $ for the vertex $x\left( n,n\right) $. Then
the range of the boundary path $x$ is the vertex $r\left( x\right) :=x\left(
0\right) $. For $\lambda \in \Lambda x\left( 0\right) $, we also have
$\lambda x\in \partial \Lambda $ \cite[Lemma 5.13.(2)]{FMY05}.
\subsection{Graded rings.}
Suppose that $G$ is an additive abelian group. A ring $A$ is
$G$\emph{-graded} if there are additive subgroups $\left\{ A_{g}:g\in G\right\} $
satisfying
\begin{equation*}
A=\bigoplus {}_{g\in G}A_{g}\text{ and, for }g,h\in G\text{, }
A_{g}A_{h}\subseteq A_{g+h}\text{.}
\end{equation*}
If $A$ and $B$ are $G$-graded rings, a homomorphism $\pi :A\rightarrow B$ is
$G$\emph{-graded}\ if $\pi \left( A_{g}\right) \subseteq B_{g}$ for $g\in G$.
Let $A$ be a $G$-graded ring. We say an ideal $I$ of $A$ is a
$G$\emph{-graded ideal} if $\left\{ I\cap A_{g}:g\in G\right\} $ is a grading of $I$.
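A familiar instance: the Laurent polynomial ring $R\left[ t,t^{-1}\right] $
is $\mathbb{Z}$-graded with homogeneous components $A_{n}=Rt^{n}$ for $n\in
\mathbb{Z}$. The $\mathbb{Z}^{k}$-grading of the Kumjian-Pask algebras below
is analogous, with the spanning elements $S_{\lambda }S_{\mu ^{\ast }}$
homogeneous of degree $d\left( \lambda \right) -d\left( \mu \right) $.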
\section{Kumjian-Pask $\Lambda $-families}
Suppose that $\Lambda $ is a finitely aligned $k$-graph and $R$ is a
commutative ring with identity $1$. For $\lambda \in \Lambda $, we call
$\lambda ^{\ast }$ a \emph{ghost path }($\lambda ^{\ast }$ is a formal
symbol) and we define
\begin{equation*}
G\left( \Lambda \right) :=\left\{ \lambda ^{\ast }:\lambda \in \Lambda
\right\} \text{.}
\end{equation*}
For $v\in \Lambda ^{0}$, we define $v^{\ast }:=v$. We also extend $r$ and $s$
to be defined on $G\left( \Lambda \right) $ by
\begin{equation*}
r\left( \lambda ^{\ast }\right) =s\left( \lambda \right) \text{ and }s\left(
\lambda ^{\ast }\right) =r\left( \lambda \right) \text{.}
\end{equation*}
We then define composition on $G\left( \Lambda \right) $ by setting $\lambda
^{\ast }\mu ^{\ast }=\left( \mu \lambda \right) ^{\ast }$ for $\lambda ,\mu
\in \Lambda $; and we write $G\left( \Lambda ^{\neq 0}\right) $ for the set of
ghost paths that are not vertices. Note that the factorisation property of
$\Lambda $ induces a similar factorisation property on $G\left( \Lambda
\right) $.
\begin{definition}
\label{KP-family}A \emph{Kumjian-Pask }$\Lambda $\emph{-family} $\left\{
S_{\lambda },S_{\mu ^{\ast }}:\lambda ,\mu \in \Lambda \right\} $ in an
$R$-algebra $A$ consists of $S:\Lambda \cup G\left( \Lambda ^{\neq 0}\right)
\rightarrow A$ such that:
\begin{enumerate}
\item[(KP1)] $\left\{ S_{v}:v\in \Lambda ^{0}\right\} $ is a collection of
mutually orthogonal idempotents;
\item[(KP2)] for $\lambda ,\mu \in \Lambda $ with $s\left( \lambda \right)
=r\left( \mu \right) $, we have $S_{\lambda }S_{\mu }=S_{\lambda \mu }$ and
$S_{\mu ^{\ast }}S_{\lambda ^{\ast }}=S_{\left( \lambda \mu \right) ^{\ast }}$;
\item[(KP3)] $S_{\lambda ^{\ast }}S_{\mu }=\sum_{(\rho ,\tau )\in \Lambda
^{\min }\left( \lambda ,\mu \right) }S_{\rho }S_{\tau ^{\ast }}$ for all
$\lambda ,\mu \in \Lambda $; and
\item[(KP4)] $\prod_{\lambda \in E}\left( S_{r\left( E\right) }-S_{\lambda
}S_{\lambda ^{\ast }}\right) =0$ for all $E\in \operatorname{FE}\left( \Lambda
\right) $.
\end{enumerate}
\end{definition}
\begin{remark}
\label{KP-family-remark}A number of aspects of these relations are worth
commenting on:
\begin{enumerate}
\item[(i)] In previous references about Leavitt path algebras and
Kumjian-Pask algebras, authors usually distinguish the vertex idempotents as
\textquotedblleft $P_{v}$\textquotedblright\ (for example, see
\cite{A15,AA05,AA08,AMP07,ACaHR13,CFaH14,T07,T11}). We do not follow this
convention because we do not want to introduce additional unnecessary cases
in each proof.
\item[(ii)] (KP2) in \cite{ACaHR13,CFaH14} has more relations to check.
However, using our notational convention, those relations can be simplified
and are equivalent to our (KP2).
\item[(iii)] The restriction to finitely aligned $k$-graphs is necessary for
the sum in (KP3) to make sense (see \cite{RS05}).
\item[(iv)] In (KP3), we interpret the empty sum as $0$, so $S_{\lambda
^{\ast }}S_{\mu }=0$ whenever $\Lambda ^{\min }\left( \lambda ,\mu \right)
=\emptyset $. We also have $S_{\lambda ^{\ast }}S_{\lambda }=S_{s\left(
\lambda \right) }$.
\item[(v)] (KP3-4) have been changed from those in \cite[Definition 3.1]
{ACaHR13} and \cite[Definition 3.1]{CFaH14}. We do this because we need to
adjust the relations to deal with the situation where the $k$-graph is not
locally convex. For further discussion, see Appendix A of \cite{RSY04}.
\end{enumerate}
\end{remark}
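The last identity in Remark \ref{KP-family-remark}.(iv) is immediate from
(KP3): since $\operatorname{MCE}\left( \lambda ,\lambda \right) =\left\{ \lambda
\right\} $, we have $\Lambda ^{\min }\left( \lambda ,\lambda \right)
=\left\{ \left( s\left( \lambda \right) ,s\left( \lambda \right) \right)
\right\} $, and hence
\begin{equation*}
S_{\lambda ^{\ast }}S_{\lambda }=S_{s\left( \lambda \right) }S_{s\left(
\lambda \right) ^{\ast }}=S_{s\left( \lambda \right) }
\end{equation*}
by (KP1) and the convention $v^{\ast }=v$.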
The following proposition establishes some useful properties of a family
satisfying (KP1-3).
\begin{proposition}
\label{properties-of-KP}Let $\Lambda $ be a finitely aligned $k$-graph, $R$
be a commutative ring with $1$, and $\left\{ S_{\lambda },S_{\mu ^{\ast
}}:\lambda ,\mu \in \Lambda \right\} $ be a family satisfying (KP1-3) in an
$R$-algebra $A$. Then
\begin{enumerate}
\item[(a)] $S_{\lambda }S_{\lambda ^{\ast }}S_{\mu }S_{\mu ^{\ast
}}=\sum_{\lambda \rho \in \operatorname{MCE}\left( \lambda ,\mu \right) }S_{\lambda
\rho }S_{\left( \lambda \rho \right) ^{\ast }}$ for $\lambda ,\mu \in
\Lambda $; and $\left\{ S_{\lambda }S_{\lambda ^{\ast }}:\lambda \in \Lambda
\right\} $ is a commuting family.
\item[(b)] The subalgebra generated by $\left\{ S_{\lambda },S_{\mu ^{\ast
}}:\lambda ,\mu \in \Lambda \right\} $ is
\begin{equation*}
\operatorname{span}_{R}\{S_{\lambda }S_{\mu ^{\ast }}:\lambda ,\mu \in \Lambda ,s\left(
\lambda \right) =s\left( \mu \right) \}\text{.}
\end{equation*}
\item[(c)] For $n\in \mathbb{N}^{k}$ and $\lambda ,\mu \in \Lambda ^{\leq n}$,
we have $S_{\lambda ^{\ast }}S_{\mu }=\delta _{\lambda ,\mu }S_{s\left(
\lambda \right) }$.
\item[(d)] Suppose that $rS_{v}\neq 0$ for all $r\in \left.
R\right\backslash \left\{ 0\right\} $, $v\in \Lambda ^{0}$ and that $\lambda
,\mu \in \Lambda $ with $s\left( \lambda \right) =s\left( \mu \right) $. If
$r\in \left. R\right\backslash \left\{ 0\right\} $ and $G\subseteq s\left(
\lambda \right) \Lambda $ is finite non-exhaustive, then
\begin{equation*}
rS_{\lambda }\neq 0\text{ and }rS_{\lambda }\big(\prod_{\nu \in G}\left(
S_{s\left( \lambda \right) }-S_{\nu }S_{\nu ^{\ast }}\right) \big)S_{\mu
^{\ast }}\neq 0\text{.}
\end{equation*}
\end{enumerate}
\end{proposition}
\begin{proof}
To show (a), we take $\lambda ,\mu \in \Lambda $ and then
\begin{align*}
S_{\lambda }S_{\lambda ^{\ast }}S_{\mu }S_{\mu ^{\ast }}& =S_{\lambda }\big(
\sum_{(\rho ,\tau )\in \Lambda ^{\min }\left( \lambda ,\mu \right) }S_{\rho
}S_{\tau ^{\ast }}\big)S_{\mu ^{\ast }}=\sum_{(\rho ,\tau )\in \Lambda
^{\min }\left( \lambda ,\mu \right) }S_{\lambda \rho }S_{\left( \mu \tau
\right) ^{\ast }} \\
& =\sum_{(\rho ,\tau )\in \Lambda ^{\min }\left( \lambda ,\mu \right)
}S_{\lambda \rho }S_{\left( \lambda \rho \right) ^{\ast }}=\sum_{\lambda
\rho \in \operatorname{MCE}\left( \lambda ,\mu \right) }S_{\lambda \rho }S_{\left(
\lambda \rho \right) ^{\ast }}\text{.}
\end{align*}
Furthermore,
\begin{equation*}
S_{\lambda }S_{\lambda ^{\ast }}S_{\mu }S_{\mu ^{\ast }}=\sum_{\lambda \rho
\in \operatorname{MCE}\left( \lambda ,\mu \right) }S_{\lambda \rho }S_{\left(
\lambda \rho \right) ^{\ast }}=\sum_{\mu \tau \in \operatorname{MCE}\left( \lambda
,\mu \right) }S_{\mu \tau }S_{\left( \mu \tau \right) ^{\ast }}=S_{\mu
}S_{\mu ^{\ast }}S_{\lambda }S_{\lambda ^{\ast }}\text{,}
\end{equation*}
as required.
Next we show (b). For $\lambda ,\mu \in \Lambda $, we have $S_{\lambda }S_{\mu
^{\ast }}=S_{\lambda }S_{s\left( \lambda \right) }S_{s\left( \mu \right)
}S_{\mu ^{\ast }}$ by (KP2). Then by (KP1), $S_{\lambda }S_{\mu ^{\ast
}}\neq 0$ implies $s\left( \lambda \right) =s\left( \mu \right) $.
Therefore, the result follows from part (a), (KP2) and (KP3).
To show (c), we take $\lambda ,\mu \in \Lambda ^{\leq n}$. Suppose that
$S_{\lambda ^{\ast }}S_{\mu }\neq 0$. By (KP3), there exists $(\rho ,\tau
)\in \Lambda ^{\min }\left( \lambda ,\mu \right) $ such that $\lambda \rho
=\mu \tau $ and $d\left( \lambda \rho \right) \leq n$. Since $\lambda ,\mu
\in \Lambda ^{\leq n}$, we have $\rho =s\left( \lambda \right) =\tau $ and
hence $\lambda =\mu $.
Finally, we show (d). Take $r\in \left. R\right\backslash \left\{ 0\right\} $
and $\lambda \in \Lambda $. Suppose for contradiction that $rS_{\lambda }=0$.
Then
\begin{equation*}
0=S_{\lambda ^{\ast }}\left( rS_{\lambda }\right) =rS_{\lambda ^{\ast
}}S_{\lambda }=rS_{s\left( \lambda \right) }\text{,}
\end{equation*}
which contradicts $rS_{v}\neq 0$ for all $r\in \left. R\right\backslash
\left\{ 0\right\} $ and $v\in \Lambda ^{0}$. Hence, $rS_{\lambda }\neq 0$.
Now take $r\in \left. R\right\backslash \left\{ 0\right\} $, $\lambda ,\mu
\in \Lambda $ with $s\left( \lambda \right) =s\left( \mu \right) $ and
finite non-exhaustive $G\subseteq s\left( \lambda \right) \Lambda $. Suppose
for contradiction that
\begin{equation*}
rS_{\lambda }\big(\prod_{\nu \in G}\left( S_{s\left( \lambda \right)
}-S_{\nu }S_{\nu ^{\ast }}\right) \big)S_{\mu ^{\ast }}=0\text{.}
\end{equation*}
Since $G$ is non-exhaustive, there exists $\gamma \in s\left( \lambda
\right) \Lambda $ such that $\operatorname{Ext}\left( \gamma ;G\right) =\emptyset $.
Hence $\Lambda ^{\min }\left( \nu ,\gamma \right) =\emptyset $ for every
$\nu \in G$, and then by (KP3), $S_{\nu ^{\ast }}S_{\gamma }=0$ for $\nu \in
G $. Therefore
\begin{align*}
0& =\big(rS_{\lambda }\big(\prod_{\nu \in G}\left( S_{s\left( \lambda
\right) }-S_{\nu }S_{\nu ^{\ast }}\right) \big)S_{\mu ^{\ast }}\big)S_{\mu
\gamma } \\
& =rS_{\lambda }\big(\prod_{\nu \in G}\left( S_{s\left( \lambda \right)
}-S_{\nu }S_{\nu ^{\ast }}\right) \big)S_{\gamma } \\
& =rS_{\lambda }S_{\gamma }=rS_{\lambda \gamma }\text{,}
\end{align*}
which contradicts $rS_{\lambda \gamma }\neq 0$. Hence, $rS_{\lambda }
\big(\prod_{\nu \in G}\left( S_{s\left( \lambda \right) }-S_{\nu }S_{\nu
^{\ast }}\right) \big)S_{\mu ^{\ast }}\neq 0$.
\end{proof}
\begin{remark}
\label{properties-of-KP-additional}For $n\in \mathbb{N}^{k}$, we have
$\Lambda ^{n}\subseteq \Lambda ^{\leq n}$. Hence, Proposition
\ref{properties-of-KP}.(c) also implies that for $n\in \mathbb{N}^{k}$ and
$\lambda ,\mu \in \Lambda ^{n}$, we have $S_{\lambda ^{\ast }}S_{\mu }=\delta
_{\lambda ,\mu }S_{s\left( \lambda \right) }$.
\end{remark}
\begin{remark}
\label{properties-of-KP-additional-2}Suppose that $rS_{v}\neq 0$ for all
$r\in \left. R\right\backslash \left\{ 0\right\} $, $v\in \Lambda ^{0}$ and
that $\lambda ,\mu \in \Lambda $ with $s\left( \lambda \right) =s\left( \mu
\right) $. Then the contrapositive of Proposition \ref{properties-of-KP}.(d)
says: if $r\in R$ and $G\subseteq s\left( \lambda \right) \Lambda $ is
finite such that $rS_{\lambda }\big(\prod_{\nu \in G}\left( S_{s\left(
\lambda \right) }-S_{\nu }S_{\nu ^{\ast }}\right) \big)S_{\mu ^{\ast }}=0$,
then we have either $r=0$ or $G$ is exhaustive.
\end{remark}
Now we give an example of a Kumjian-Pask $\Lambda $-family in a particular
algebra of endomorphisms.
\begin{proposition}
\label{the-boundary-path-representation}Let $\Lambda $ be a finitely aligned
$k$-graph and $R$ be a commutative ring with $1$. Let $\mathbb{F}_{R}\left(
\partial \Lambda \right) $ be the free module with basis the boundary path
space. Then for every $v\in \Lambda ^{0}$ and $\lambda ,\mu \in \left.
\Lambda \right\backslash \Lambda ^{0}$, there exist endomorphisms
S_{v},S_{\lambda },S_{\mu ^{\ast }}:\mathbb{F}_{R}\left( \partial \Lambda
\right) \rightarrow \mathbb{F}_{R}\left( \partial \Lambda \right) $ such
that for $x\in \partial \Lambda $,
\begin{align*}
S_{v}\left( x\right) & =
\begin{cases}
x & \text{if }r\left( x\right) =v\text{;} \\
0 & \text{otherwise,}
\end{cases}
\\
S_{\lambda }\left( x\right) & =
\begin{cases}
\lambda x & \text{if }s\left( \lambda \right) =r\left( x\right) \text{;} \\
0 & \text{otherwise,}
\end{cases}
\\
S_{\mu ^{\ast }}\left( x\right) & =
\begin{cases}
\sigma ^{d\left( \mu \right) }x & \text{if }x\left( 0,d\left( \mu \right)
\right) =\mu \text{;} \\
0 & \text{otherwise.}
\end{cases}
\end{align*}
Furthermore, $\left\{ S_{\lambda },S_{\mu ^{\ast }}:\lambda ,\mu \in \Lambda
\right\} $ is a Kumjian-Pask $\Lambda $-family in the $R$-algebra
$\operatorname{End}\left( \mathbb{F}_{R}\left( \partial \Lambda \right) \right) $ with
$rS_{v}\neq 0$ for all $r\in \left. R\right\backslash \left\{ 0\right\} $ and
$v\in \Lambda ^{0}$.
\end{proposition}
\begin{proof}
Take $v\in \Lambda ^{0}$ and $\lambda ,\mu \in \left. \Lambda
\right\backslash \Lambda ^{0}$. First note that for $x\in \partial \Lambda $
and $m\leq d\left( x\right) $, we have $\sigma ^{m}x\in \partial \Lambda $.
Now define functions $f_{v}$, $f_{\lambda }$, and $f_{\mu ^{\ast }}:\partial
\Lambda \rightarrow \mathbb{F}_{R}\left( \partial \Lambda \right) $ by
\begin{align*}
f_{v}\left( x\right) &=
\begin{cases}
x & \text{if }r\left( x\right) =v\text{;} \\
0 & \text{otherwise,}
\end{cases}
\\
f_{\lambda }\left( x\right) &=
\begin{cases}
\lambda x & \text{if }s\left( \lambda \right) =r\left( x\right) \text{;} \\
0 & \text{otherwise,}
\end{cases}
\\
f_{\mu ^{\ast }}\left( x\right) &=
\begin{cases}
\sigma ^{d\left( \mu \right) }x & \text{if }x\left( 0,d\left( \mu \right)
\right) =\mu \text{;} \\
0 & \text{otherwise.}
\end{cases}
\end{align*}
The universal property of free modules gives nonzero endomorphisms
\begin{equation*}
S_{v},S_{\lambda },S_{\mu ^{\ast }}:\mathbb{F}_{R}\left( \partial \Lambda
\right) \rightarrow \mathbb{F}_{R}\left( \partial \Lambda \right)
\end{equation*}
extending $f_{v}$, $f_{\lambda }$, and $f_{\mu ^{\ast }}$, as needed.
Now we claim that $\left\{ S_{\lambda },S_{\mu ^{\ast }}:\lambda ,\mu \in
\Lambda \right\} $ is a Kumjian-Pask $\Lambda $-family. To see (KP1), take
$v\in \Lambda ^{0}$ and $x\in \partial \Lambda $. Then we have
$S_{v}^{2}\left( x\right) =x=S_{v}\left( x\right) $ if $r\left( x\right) =v$,
and $S_{v}^{2}\left( x\right) =0=S_{v}\left( x\right) $ otherwise. Hence
$S_{v}^{2}=S_{v}$. Now take $v,w\in \Lambda ^{0}$ with $v\neq w$ and $x\in
\partial \Lambda $. Since $x\in w\partial \Lambda $ implies $x\notin
v\partial \Lambda $, we have $S_{v}S_{w}\left( x\right) =0$ for $x\in
\partial \Lambda $ and $S_{v}S_{w}=0$.
Next we show (KP2). Take $\lambda ,\mu \in \Lambda $ with $s\left( \lambda
\right) =r\left( \mu \right) $. Then for $x\in s\left( \mu \right) \partial
\Lambda $, we have $\mu x\in s\left( \lambda \right) \partial \Lambda $.
Then $S_{\lambda }S_{\mu }\left( x\right) =\lambda \mu x=S_{\lambda \mu
}\left( x\right) $ if $x\in s\left( \mu \right) \partial \Lambda $, and
$S_{\lambda }S_{\mu }\left( x\right) =0=S_{\lambda \mu }\left( x\right) $
otherwise. Hence $S_{\lambda }S_{\mu }=S_{\lambda \mu }$. Meanwhile, for
$x\in r\left( \lambda \right) \partial \Lambda $ with $x\left( 0,d\left(
\lambda \mu \right) \right) =\lambda \mu $, we have $d\left( \lambda \mu
\right) \leq d\left( x\right) $ and $\sigma ^{d\left( \lambda \mu \right)
}x\in s\left( \mu \right) \partial \Lambda $. Furthermore, $x\left(
0,d\left( \lambda \mu \right) \right) =\lambda \mu $ implies $x\left(
0,d\left( \lambda \right) \right) =\lambda $ and then we have $d\left(
\lambda \right) \leq d\left( x\right) $ and $\sigma ^{d\left( \lambda
\right) }x\in s\left( \lambda \right) \partial \Lambda $. Hence,
\begin{equation*}
S_{\mu ^{\ast }}S_{\lambda ^{\ast }}\left( x\right) =S_{\mu ^{\ast }}\sigma
^{d\left( \lambda \right) }x=\sigma ^{d\left( \lambda \right) +d\left( \mu
\right) }x=\sigma ^{d\left( \lambda \mu \right) }x=S_{\left( \lambda \mu
\right) ^{\ast }}\left( x\right)
\end{equation*}
if $x\left( 0,d\left( \lambda \mu \right) \right) =\lambda \mu $, and
$S_{\mu ^{\ast }}S_{\lambda ^{\ast }}\left( x\right) =0=S_{\left( \lambda \mu
\right) ^{\ast }}\left( x\right) $ otherwise. Therefore, $S_{\mu ^{\ast
}}S_{\lambda ^{\ast }}=S_{\left( \lambda \mu \right) ^{\ast }}$.
Now we show (KP3). Take $\lambda ,\mu \in \Lambda $. If $r\left( \lambda
\right) \neq r\left( \mu \right) $, then $S_{\lambda ^{\ast }}S_{\mu }=0$
and $\Lambda ^{\min }\left( \lambda ,\mu \right) =\emptyset $, as required.
Suppose $r\left( \lambda \right) =r\left( \mu \right) $. We have
\begin{equation*}
S_{\lambda ^{\ast }}S_{\mu }\left( x\right) =
\begin{cases}
\left( \mu x\right) \left( d\left( \lambda \right) ,d\left( \mu x\right)
\right) & \text{if }x\in s\left( \mu \right) \partial \Lambda \text{ and }
\left( \mu x\right) \left( 0,d\left( \lambda \right) \right) =\lambda \text{;}
\\
0 & \text{otherwise.}
\end{cases}
\end{equation*}
Take $x\in s\left( \mu \right) \partial \Lambda $. Note that $s\left( \mu
\right) =r\left( \tau \right) $ for $(\rho ,\tau )\in \Lambda ^{\min }\left(
\lambda ,\mu \right) $. First suppose $\left( \mu x\right) \left( 0,d\left(
\lambda \right) \right) \neq \lambda $. Then for $(\rho ,\tau )\in \Lambda
^{\min }\left( \lambda ,\mu \right) $,
\begin{equation*}
\left( \mu x\right) \left( 0,d\left( \lambda \rho \right) \right) \neq
\lambda \rho \text{ and }\left( \mu x\right) \left( 0,d\left( \mu \tau
\right) \right) \neq \mu \tau \text{.}
\end{equation*}
Hence $x\left( 0,d\left( \tau \right) \right) \neq \tau $ and $S_{\rho
}S_{\tau ^{\ast }}\left( x\right) =S_{\rho }\left( 0\right) =0$. Therefore,
\begin{equation*}
\sum_{(\rho ,\tau )\in \Lambda ^{\min }\left( \lambda ,\mu \right) }S_{\rho
}S_{\tau ^{\ast }}\left( x\right) =0.
\end{equation*}
Next suppose $\left( \mu x\right) \left( 0,d\left( \lambda \right) \right)
=\lambda $. Since $\left( \mu x\right) \left( 0,d\left( \lambda \right)
\right) =\lambda $ and $\left( \mu x\right) \left( 0,d\left( \mu \right)
\right) =\mu $, there is $\tau \in s\left( \mu \right) \Lambda $ such that
$(\rho ,\tau )\in \Lambda ^{\min }\left( \lambda ,\mu \right) $ and $\left(
\mu x\right) \left( 0,d\left( \mu \tau \right) \right) =\mu \tau $.
Therefore $x\left( 0,d\left( \tau \right) \right) =\tau $. Note that this
$\tau $ is unique by the factorisation property. Hence for $(\rho ^{\prime
},\tau ^{\prime })\in \Lambda ^{\min }\left( \lambda ,\mu \right) $ such
that $(\rho ^{\prime },\tau ^{\prime })\neq (\rho ,\tau )$, we have $S_{\rho
^{\prime }}S_{\tau ^{\prime \ast }}\left( x\right) =0$. Also $x\left(
0,d\left( \tau \right) \right) =\tau $, thus
\begin{align*}
S_{\rho }S_{\tau ^{\ast }}\left( x\right) & =S_{\rho }\left( x\left( d\left(
\tau \right) ,d\left( x\right) \right) \right) =\rho \left[ x\left( d\left(
\tau \right) ,d\left( x\right) \right) \right] \\
& =\rho \left[ \left( \mu x\right) \left( d\left( \mu \tau \right) ,d\left(
\mu x\right) \right) \right] \\
& =\rho \left[ \left( \mu x\right) \left( d\left( \lambda \rho \right)
,d\left( \mu x\right) \right) \right] \text{ (since }\mu \tau =\lambda \rho
\text{)} \\
& =\left( \mu x\right) \left( d\left( \lambda \right) ,d\left( \mu x\right)
\right)
\end{align*}
and
\begin{equation*}
\sum_{(\rho ^{\prime },\tau ^{\prime })\in \Lambda ^{\min }\left( \lambda
,\mu \right) }S_{\rho ^{\prime }}S_{\tau ^{\prime \ast }}\left( x\right)
=S_{\rho }S_{\tau ^{\ast }}\left( x\right) =\left( \mu x\right) \left(
d\left( \lambda \right) ,d\left( \mu x\right) \right) =S_{\lambda ^{\ast
}}S_{\mu }\left( x\right) \text{,}
\end{equation*}
as required.
Finally, we show (KP4). Take $E\in \operatorname{FE}\left( \Lambda \right) $. Take
$x\in r\left( E\right) \partial \Lambda $. Since $E\in x\left( 0\right)
\operatorname{FE}\left( \Lambda \right) $ and $x$ is a boundary path, there exists
$\lambda \in E$ such that $x\left( 0,d\left( \lambda \right) \right) =\lambda
$. This implies
\begin{align*}
\left( S_{r\left( E\right) }-S_{\lambda }S_{\lambda ^{\ast }}\right) \left(
x\right) & =S_{r\left( E\right) }\left( x\right) -S_{\lambda }S_{\lambda
^{\ast }}\left( x\right) \\
& =x-S_{\lambda }\left( x\left( d\left( \lambda \right) ,d\left( x\right)
\right) \right) \\
& =x-x=0\text{.}
\end{align*}
Hence
\begin{equation*}
\big(\prod_{\lambda \in E}\left( S_{r\left( E\right) }-S_{\lambda
}S_{\lambda ^{\ast }}\right) \big)\left( x\right) =0
\end{equation*}
for $x\in r\left( E\right) \partial \Lambda $, and $\prod_{\lambda \in
E}\left( S_{r\left( E\right) }-S_{\lambda }S_{\lambda ^{\ast }}\right) =0$.
Thus $\left\{ S_{\lambda },S_{\mu ^{\ast }}:\lambda ,\mu \in \Lambda \right\} $
is a Kumjian-Pask $\Lambda $-family, as claimed. Now note that for $v\in
\Lambda ^{0}$, $v\partial \Lambda $ is nonempty. This implies that for all
$r\in \left. R\right\backslash \left\{ 0\right\} $ and $v\in \Lambda ^{0}$,
$rS_{v}\neq 0$.
\end{proof}
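To make the boundary path representation concrete, here is a small worked example; the graph, the two boundary paths, and the computations below are our own illustration and are not taken from the surrounding text.

```latex
\begin{remark}
Consider the $1$-graph $\Lambda $ with two vertices $v,w$, one edge $e$ with
$r\left( e\right) =v$ and $s\left( e\right) =w$, and one loop $f$ at $w$.
Then $\partial \Lambda =\left\{ x,y\right\} $, where $y=fff\cdots $ and
$x=ey$, so $\mathbb{F}_{R}\left( \partial \Lambda \right) \cong R^{2}$. The
endomorphisms of Proposition \ref{the-boundary-path-representation} act by
\begin{equation*}
S_{v}\left( x\right) =x,\quad S_{v}\left( y\right) =0,\quad
S_{e}\left( y\right) =ey=x,\quad
S_{e^{\ast }}\left( x\right) =\sigma ^{d\left( e\right) }x=y,
\end{equation*}
while $S_{e}\left( x\right) =0=S_{e^{\ast }}\left( y\right) $. In
particular, $S_{e}S_{e^{\ast }}$ fixes $x$ and kills $y$, and
$S_{e^{\ast }}S_{e}=S_{w}$, illustrating (KP3).
\end{remark}
```

One can check directly that $\Lambda ^{\min }\left( e,e\right) =\{\left( w,w\right) \}$, so the identity $S_{e^{\ast }}S_{e}=S_{w}$ above is exactly what (KP3) predicts.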
Using an alternate construction of a Kumjian-Pask $\Lambda $-family, we next
show that there is an $R$-algebra which is universal for Kumjian-Pask
$\Lambda $-families.
\begin{theorem}
\label{universal-KP-family}Let $\Lambda $ be a finitely aligned $k$-graph
and $R$ be a commutative ring with $1$.
\begin{enumerate}
\item[(a)] There is a universal $R$-algebra ${\normalsize \operatorname{KP}}
_{R}\left( \Lambda \right) $ generated by a Kumjian-Pask $\Lambda $-family
$\{s_{\lambda },s_{\mu ^{\ast }}:\lambda ,\mu \in \Lambda \}$ such that whenever
$\left\{ S_{\lambda },S_{\mu ^{\ast }}:\lambda ,\mu \in \Lambda \right\} $ is a
Kumjian-Pask $\Lambda $-family in an $R$-algebra $A$, there exists a
unique $R$-algebra homomorphism $\pi _{S}:{\normalsize \operatorname{KP}}_{R}\left(
\Lambda \right) \rightarrow A$ such that $\pi _{S}\left( s_{\lambda }\right)
=S_{\lambda }$ and $\pi _{S}\left( s_{\mu ^{\ast }}\right) =S_{\mu ^{\ast }}$
for $\lambda ,\mu \in \Lambda $.
\item[(b)] We have $rs_{v}\neq 0$ for all $r\in \left. R\right\backslash
\left\{ 0\right\} $ and $v\in \Lambda ^{0}$.
\item[(c)] The subset
\begin{equation*}
{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{n}:=\operatorname{span}
_{R}\left\{ s_{\lambda }s_{\mu ^{\ast }}:\lambda ,\mu \in \Lambda ,d\left(
\lambda \right) -d\left( \mu \right) =n\right\}
\end{equation*}
forms a $\mathbb{Z}^{k}$-grading of ${\normalsize \operatorname{KP}}_{R}\left(
\Lambda \right) $.
\end{enumerate}
\end{theorem}
\begin{proof}
We use an argument similar to \cite[Theorem 3.4]{ACaHR13} and \cite[Theorem
3.7]{CFaH14}. To show (a), first we define $X:=\Lambda \cup G\left( \Lambda
^{\neq 0}\right) $ and let $\mathbb{F}_{R}\left( w\left( X\right) \right) $ be
the free algebra on the set $w\left( X\right) $ of words on $X$. Let $I$ be
the ideal of $\mathbb{F}_{R}\left( w\left( X\right) \right) $ generated by
elements of the following sets:
\begin{enumerate}
\item[(i)] $\left\{ vw-\delta _{v,w}v:v,w\in \Lambda ^{0}\right\} $,
\item[(ii)] $\{\lambda -\mu \nu ,\lambda ^{\ast }-\nu ^{\ast }\mu ^{\ast
}:\lambda ,\mu ,\nu \in \Lambda $ and $\lambda =\mu \nu \}$,
\item[(iii)] $\{\lambda ^{\ast }\mu -\sum_{(\rho ,\tau )\in \Lambda ^{\min
}\left( \lambda ,\mu \right) }\rho \tau ^{\ast }:\lambda ,\mu \in \Lambda \}$,
and
\item[(iv)] $\{\prod_{\lambda \in E}\left( r\left( E\right) -\lambda \lambda
^{\ast }\right) :E\in \operatorname{FE}\left( \Lambda \right) \}$.
\end{enumerate}
Now define ${\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) :=
\mathbb{F}_{R}\left( w\left( X\right) \right) /I$ and let $q:\mathbb{F}_{R}\left( w\left(
X\right) \right) \rightarrow \mathbb{F}_{R}\left( w\left( X\right) \right)
/I $ be the quotient map. Define $s_{\lambda }:=q\left( \lambda \right) $
for $\lambda \in \Lambda $, and $s_{\mu ^{\ast }}:=q\left( \mu ^{\ast
}\right) $ for $\mu ^{\ast }\in G\left( \Lambda ^{\neq 0}\right) $. Then
$\{s_{\lambda },s_{\mu ^{\ast }}:\lambda \in \Lambda ,\mu ^{\ast }\in G\left(
\Lambda ^{\neq 0}\right) \}$ is a Kumjian-Pask $\Lambda $-family in
${\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) $.
Now let $\left\{ S_{\lambda },S_{\mu ^{\ast }}:\lambda ,\mu \in \Lambda
\right\} $ be a Kumjian-Pask $\Lambda $-family in an $R$-algebra $A$. Define
$f:X\rightarrow A$ by $f\left( \lambda \right) :=S_{\lambda }$ for $\lambda
\in \Lambda $, and $f\left( \mu ^{\ast }\right) :=S_{\mu ^{\ast }}$ for $\mu
^{\ast }\in G\left( \Lambda ^{\neq 0}\right) $. The universal property of
$\mathbb{F}_{R}\left( w\left( X\right) \right) $ gives a unique $R$-algebra
homomorphism $\phi :\mathbb{F}_{R}\left( w\left( X\right) \right)
\rightarrow A$ such that $\phi |_{X}=f$. Since $\left\{ S_{\lambda },S_{\mu
^{\ast }}:\lambda ,\mu \in \Lambda \right\} $ is a Kumjian-Pask $\Lambda
$-family, we have $I\subseteq \ker \left( \phi \right) $. Thus there exists an
$R$-algebra homomorphism $\pi _{S}:{\normalsize \operatorname{KP}}_{R}\left( \Lambda
\right) \rightarrow A$ such that $\pi _{S}\circ q=\phi $. The homomorphism
$\pi _{S}$ is unique since the elements in $X$ generate $\mathbb{F}_{R}\left(
w\left( X\right) \right) $ as an algebra. Furthermore, we have $\pi
_{S}\left( s_{\lambda }\right) =S_{\lambda }$ for $\lambda \in \Lambda $ and
$\pi _{S}\left( s_{\mu ^{\ast }}\right) =S_{\mu ^{\ast }}$ for $\mu ^{\ast
}\in G\left( \Lambda ^{\neq 0}\right) $, as required.
To show (b), let $\left\{ S_{\lambda },S_{\mu ^{\ast }}:\lambda ,\mu \in
\Lambda \right\} $ be the Kumjian-Pask $\Lambda $-family as in Proposition
\ref{the-boundary-path-representation}. Then $rS_{v}\neq 0$ for $v\in
\Lambda ^{0}$. Since $\pi _{S}\left( rs_{v}\right) =rS_{v}\neq 0$ for all
$r\in \left. R\right\backslash \left\{ 0\right\} $ and $v\in \Lambda ^{0}$,
we have $rs_{v}\neq 0$ for all $r\in \left. R\right\backslash \left\{
0\right\} $ and $v\in \Lambda ^{0}$.
Next we show (c). We first extend the degree map to $w\left( X\right) $ by
$d\left( w\right) :=\sum_{i=1}^{\left\vert w\right\vert }d\left(
w_{i}\right) $ for $w\in w\left( X\right) $. By \cite[Proposition 2.
]{ACaHR13}, $\mathbb{F}_{R}\left( w\left( X\right) \right) $ is $\mathbb{Z}
^{k}$-graded by the subgroups
\begin{equation*}
\mathbb{F}_{R}\left( w\left( X\right) \right) _{n}:=\left\{ \sum_{w\in
w\left( X\right) }r_{w}w:r_{w}\neq 0\text{ implies }d\left( w\right)
=n\right\} \text{.}
\end{equation*}
Now we claim that the ideal $I$ defined in (a) is a graded ideal. It
suffices to show that $I$ is generated by elements in $\mathbb{F}_{R}\left(
w\left( X\right) \right) _{n}$ for some $n\in \mathbb{Z}^{k}$. Since
$d\left( v\right) =0$ for $v\in \Lambda ^{0}$, the generators in (i)
belong to $\mathbb{F}_{R}\left( w\left( X\right) \right) _{0}$. If $\lambda
=\mu \nu $ in $\Lambda $, then $\lambda -\mu \nu $ belongs to
$\mathbb{F}_{R}\left( w\left( X\right) \right) _{d\left( \lambda \right) }$ and
$\lambda ^{\ast }-\nu ^{\ast }\mu ^{\ast }$ belongs to $\mathbb{F}_{R}\left(
w\left( X\right) \right) _{-d\left( \lambda \right) }$. For $\lambda ,\mu
\in \Lambda $ and $(\rho ,\tau )\in \Lambda ^{\min }\left( \lambda ,\mu
\right) $, we have
\begin{equation*}
d\left( \rho \right) -d\left( \tau \right) =\left( d\left( \lambda \right)
\vee d\left( \mu \right) -d\left( \lambda \right) \right) -\left( d\left(
\lambda \right) \vee d\left( \mu \right) -d\left( \mu \right) \right)
=-d\left( \lambda \right) +d\left( \mu \right)
\end{equation*}
and then the generators in (iii) belong to $\mathbb{F}_{R}\left( w\left(
X\right) \right) _{-d\left( \lambda \right) +d\left( \mu \right) }$.
Finally, a word $\lambda \lambda ^{\ast }$ has degree $0$ and then the
generators in (iv) belong to $\mathbb{F}_{R}\left( w\left( X\right) \right)
_{0}$. Thus $I$ is a graded ideal.
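As a sanity check on the grading, here is a worked degree computation with illustrative values of our own choosing (they do not come from the text).

```latex
For example, take $k=2$ and $\lambda ,\mu \in \Lambda $ with $d\left(
\lambda \right) =\left( 2,1\right) $ and $d\left( \mu \right) =\left(
1,2\right) $. Then $d\left( \lambda \right) \vee d\left( \mu \right)
=\left( 2,2\right) $, so every $(\rho ,\tau )\in \Lambda ^{\min }\left(
\lambda ,\mu \right) $ satisfies
\begin{equation*}
d\left( \rho \right) =\left( 0,1\right) ,\quad d\left( \tau \right) =\left(
1,0\right) ,\quad d\left( \rho \right) -d\left( \tau \right) =\left(
-1,1\right) =-d\left( \lambda \right) +d\left( \mu \right) \text{,}
\end{equation*}
so the generator $\lambda ^{\ast }\mu -\sum_{(\rho ,\tau )}\rho \tau ^{\ast
}$ is homogeneous of degree $\left( -1,1\right) $.
```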
Since $I$ is graded, ${\normalsize \operatorname{KP}}_{R}\left( \Lambda \right)
=\mathbb{F}_{R}\left( w\left( X\right) \right) /I$ is graded by the subgroups
\begin{equation*}
\left( \mathbb{F}_{R}\left( w\left( X\right) \right) /I\right) _{n}:=
\operatorname{span}_{R}\left\{ q\left( w\right) :w\in w\left( X\right) ,d\left( w\right)
=n\right\} \text{.}
\end{equation*}
By Proposition \ref{properties-of-KP}.(b), we have ${\normalsize \operatorname{KP}}
_{R}\left( \Lambda \right) =\operatorname{span}_{R}\left\{ s_{\lambda }s_{\mu ^{\ast
}}:\lambda ,\mu \in \Lambda ,s\left( \lambda \right) =s\left( \mu \right)
\right\} $. We have to show that
\begin{equation*}
{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{n}:=\operatorname{span}
_{R}\left\{ s_{\lambda }s_{\mu ^{\ast }}:\lambda ,\mu \in \Lambda ,d\left(
\lambda \right) -d\left( \mu \right) =n\right\} =\left( \mathbb{F}_{R}\left(
w\left( X\right) \right) /I\right) _{n}\text{.}
\end{equation*}
Take $\lambda ,\mu \in \Lambda $ with $d\left( \lambda \right) -d\left( \mu
\right) =n$. Then $s_{\lambda }s_{\mu ^{\ast }}=q\left( \lambda \right)
q\left( \mu ^{\ast }\right) =q\left( \lambda \mu ^{\ast }\right) $ and
$d\left( \lambda \mu ^{\ast }\right) =d\left( \lambda \right) -d\left( \mu
\right) =n$. Hence $s_{\lambda }s_{\mu ^{\ast }}\in \left(
\mathbb{F}_{R}\left( w\left( X\right) \right) /I\right) _{n}$ and
${\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{n}\subseteq \left( \mathbb{F}_{R}\left(
w\left( X\right) \right) /I\right) _{n}$.
To prove $\left( \mathbb{F}_{R}\left( w\left( X\right) \right) /I\right)
_{n}\subseteq {\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{n} $, we
first establish the following claim:
\begin{claim}
\label{Ff(w(X))-in-KP} Let $X:=\Lambda \cup G\left( \Lambda ^{\neq 0}\right)
$ and $q:\mathbb{F}_{R}\left( w\left( X\right) \right) \rightarrow
{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) $ be the quotient map.
Then for $w\in w\left( X\right) $, we have $q\left( w\right) \in
{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{d\left( w\right) }$.
\end{claim}
\begin{proof}[Proof of Claim~\protect\ref{Ff(w(X))-in-KP}]
We are modifying the proof of \cite[Lemma 3.5]{ACaHR13} and \cite[Lemma 3.8]
{CFaH14} using our version of (KP3). We prove the claim by induction on
$\left\vert w\right\vert $. For $\left\vert w\right\vert =0$, we have $w=v$
for some $v\in \Lambda ^{0}$. Then $q\left( w\right) =s_{v}=s_{v}s_{v^{\ast
}}$ and $d\left( v\right) -d\left( v\right) =0$. So $q\left( w\right) \in
{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{d\left( w\right) }$.
For $\left\vert w\right\vert =1$, we have two possibilities. First suppose
$w=\lambda $ for $\lambda \in \Lambda $. Then $q\left( w\right) =s_{\lambda
}=s_{\lambda }s_{s\left( \lambda \right) ^{\ast }}$ and $d\left( \lambda
\right) -d\left( s\left( \lambda \right) \right) =d\left( \lambda \right) $.
So $q\left( w\right) \in {\normalsize \operatorname{KP}}_{R}\left( \Lambda \right)
_{d\left( w\right) }$. Next suppose $w=\lambda ^{\ast }$ for $\lambda \in
\Lambda $. Then $q\left( w\right) =s_{\lambda ^{\ast }}=s_{s\left( \lambda
\right) }s_{\lambda ^{\ast }}$ and $d\left( s\left( \lambda \right) \right)
-d\left( \lambda \right) =-d\left( \lambda \right) =d\left( \lambda ^{\ast
}\right) $. So $q\left( w\right) \in {\normalsize \operatorname{KP}}_{R}\left(
\Lambda \right) _{d\left( w\right) }$.
For $\left\vert w\right\vert =2$, we have four possibilities: $w=\lambda \mu
^{\ast }$, $w=\lambda \mu $, $w=\mu ^{\ast }\lambda ^{\ast }$, or $w=\lambda
^{\ast }\mu $. For the first three cases, we have
\begin{align*}
& q\left( \lambda \mu ^{\ast }\right) =s_{\lambda }s_{\mu ^{\ast }}\text{
and }d\left( \lambda \right) -d\left( \mu \right) =d\left( \lambda \mu
^{\ast }\right) \text{,} \\
& q\left( \lambda \mu \right) =s_{\lambda \mu }s_{s\left( \mu \right) ^{\ast
}}\text{ and }d\left( \lambda \mu \right) -d\left( s\left( \mu \right)
\right) =d\left( \lambda \mu \right) \text{,} \\
& q\left( \mu ^{\ast }\lambda ^{\ast }\right) =s_{s\left( \mu \right)
}s_{\left( \lambda \mu \right) ^{\ast }}\text{ and }d\left( s\left( \mu
\right) \right) -d\left( \left( \lambda \mu \right) ^{\ast }\right) =d\left(
\mu ^{\ast }\lambda ^{\ast }\right) \text{,}
\end{align*}
as required. Suppose $w=\lambda ^{\ast }\mu $. By (KP3), we have
\begin{equation*}
q\left( \lambda ^{\ast }\mu \right) =s_{\lambda ^{\ast }}s_{\mu
}=\sum_{(\rho ,\tau )\in \Lambda ^{\min }\left( \lambda ,\mu \right)
}s_{\rho }s_{\tau ^{\ast }}\text{.}
\end{equation*}
For $(\rho ,\tau )\in \Lambda ^{\min }\left( \lambda ,\mu \right) $, we have
$\lambda \rho =\mu \tau $ and then $d\left( w\right) =d\left( \mu \right)
-d\left( \lambda \right) =d\left( \rho \right) -d\left( \tau \right) $. So
$q\left( w\right) \in {\normalsize \operatorname{KP}}_{R}\left( \Lambda \right)
_{d\left( w\right) }$.
Now suppose that $n\geq 2$ and $q\left( y\right) \in {\normalsize \operatorname{KP}}
_{R}\left( \Lambda \right) _{d\left( y\right) }$ for every word $y$ with
$\left\vert y\right\vert \leq n$. Let $w$ be a word with $\left\vert
w\right\vert =n+1$ and $q\left( w\right) \neq 0$. If $w$ contains a subword
$w_{i}w_{i+1}=\lambda \mu $, then $\lambda $ and $\mu $ are composable in
$\Lambda $ since otherwise $q\left( \lambda \mu \right) =0$. Now let
$w^{\prime }$ be the word obtained from $w$ by replacing $w_{i}w_{i+1}$ with
the single path $\lambda \mu $, and then
\begin{equation*}
q\left( w\right) =s_{w_{1}}\cdots s_{w_{i-1}}s_{\lambda }s_{\mu
}s_{w_{i+2}}\cdots s_{w_{n+1}}=s_{w_{1}}\cdots s_{w_{i-1}}s_{\lambda \mu
}s_{w_{i+2}}\cdots s_{w_{n+1}}=q\left( w^{\prime }\right) \text{.}
\end{equation*}
Since $\left\vert w^{\prime }\right\vert =n$ and $d\left( w^{\prime }\right)
=d\left( w\right) $, the inductive hypothesis implies $q\left( w\right) \in
{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{d\left( w\right) }$. A
similar argument shows $q\left( w\right) \in {\normalsize \operatorname{KP}}
_{R}\left( \Lambda \right) _{d\left( w\right) }$ whenever $w$ contains a
subword $w_{i}w_{i+1}=\mu ^{\ast }\lambda ^{\ast }$.
So suppose $w$ contains no subword of the form $\lambda \mu $ or $\mu ^{\ast
}\lambda ^{\ast }$. Since $\left\vert w\right\vert \geq 3$, either
$w_{1}w_{2}$ or $w_{2}w_{3}$ has the form $\lambda ^{\ast }\mu $. By (KP3),
we write $q\left( w\right) $ as a sum of terms $q\left( y^{i}\right) $ with
$\left\vert y^{i}\right\vert =n+1$ and $d\left( y^{i}\right) =d\left(
w\right) $. Since $\left\vert w\right\vert \geq 3$, each nonzero summand
$q\left( y^{i}\right) $ contains a factor of the form $s_{\gamma }s_{\rho }$
or one of the form $s_{\tau ^{\ast }}s_{\gamma ^{\ast }}$. Then the previous
argument shows that every $q\left( y^{i}\right) \in {\normalsize \operatorname{KP}}
_{R}\left( \Lambda \right) _{d\left( w\right) }$ and $q\left( w\right) \in
{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{d\left( w\right) }$, as
required. \hfil\penalty100\hbox{}\nobreak\hfil
\hbox{\qed\
Claim~\ref{Ff(w(X))-in-KP}} \renewcommand\qed{}
\end{proof}
Every element in $\left( \mathbb{F}_{R}\left( w\left( X\right) \right)
/I\right) _{n}$ is an $R$-linear combination of elements $q\left( w\right) $
with $w\in w\left( X\right) $ and $d\left( w\right) =n$, each of which, by
Claim \ref{Ff(w(X))-in-KP}, belongs to ${\normalsize \operatorname{KP}}_{R}\left(
\Lambda \right) _{n}$. Then $\left( \mathbb{F}_{R}\left( w\left( X\right)
\right) /I\right) _{n}\subseteq {\normalsize \operatorname{KP}}_{R}\left( \Lambda
\right) _{n}$, as required.
\end{proof}
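For orientation, it may help to record what (KP3) says in the rank-one case; the following specialization is our own remark (for $1$-graphs, the relations above recover those of Leavitt path algebras, cf. \cite{ACaHR13}).

```latex
\begin{remark}
Let $k=1$ and let $\lambda ,\mu \in \Lambda $ be edges, so $d\left( \lambda
\right) =d\left( \mu \right) =1$. Any $(\rho ,\tau )\in \Lambda ^{\min
}\left( \lambda ,\mu \right) $ then consists of vertices with $\lambda \rho
=\lambda $ and $\mu \tau =\mu $, which forces $\lambda =\mu $ and $(\rho
,\tau )=\left( s\left( \lambda \right) ,s\left( \lambda \right) \right) $.
Hence (KP3) reduces to
\begin{equation*}
s_{\lambda ^{\ast }}s_{\mu }=\delta _{\lambda ,\mu }s_{s\left( \lambda
\right) }\text{,}
\end{equation*}
which is the familiar relation from Leavitt path algebras.
\end{remark}
```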
\begin{definition}
Suppose that $\left\{ S_{\lambda },S_{\mu ^{\ast }}:\lambda ,\mu \in \Lambda
\right\} $ is the Kumjian-Pask $\Lambda $-family in the $R$-algebra
$\operatorname{End}\left( \mathbb{F}_{R}\left( \partial \Lambda \right) \right) $ as in
Proposition \ref{the-boundary-path-representation}. We call the $R$-algebra
homomorphism $\pi _{S}:{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right)
\rightarrow \operatorname{End}\left( \mathbb{F}_{R}\left( \partial \Lambda \right)
\right) $ obtained from Theorem \ref{universal-KP-family}.(a), the
\emph{boundary path representation of }${\normalsize \operatorname{KP}}_{R}\left( \Lambda
\right) $.
\end{definition}
\section{The graded-invariant uniqueness theorem}
\label{Section-the-graded-uniqueness-theorem}Throughout this section,
$\Lambda $ is a finitely aligned $k$-graph and $R$ is a commutative ring with
identity $1$.
\begin{theorem}[The graded-uniqueness theorem]
\label{the-graded-uniqueness-theorem}Let $\Lambda $ be a finitely aligned $k$
-graph, $R$ be a commutative ring with $1$, and $A$ be a $\mathbb{Z}^{k}$
-graded $R$-algebra. Suppose that $\pi :{\normalsize \operatorname{KP}}_{R}\left(
\Lambda \right) \rightarrow A$ is a $\mathbb{Z}^{k}$-graded $R$-algebra
homomorphism such that $\pi \left( rs_{v}\right) \neq 0$ for all $r\in
\left. R\right\backslash \left\{ 0\right\} $ and $v\in \Lambda ^{0}$. Then
$\pi $ is injective.
\end{theorem}
We start the proof of Theorem \ref{the-graded-uniqueness-theorem} by
adapting some $C^{\ast }$-algebra results used to prove the gauge-invariant
uniqueness theorem \cite[Theorem 4.2]{RSY04} to Kumjian-Pask algebras.
Although the argument is rather technical, the point is that most of the
argument in the $C^{\ast}$-algebra setting also works in our situation.
First we recall from \cite[Definition 2.5]{RSY04} that a \emph{Cuntz-Krieger
}$\Lambda $\emph{-family} is a collection $\left\{ T_{\lambda }:\lambda \in
\Lambda \right\} $ of partial isometries (in other words, satisfying
$T_{\lambda }=T_{\lambda }T_{\lambda }^{\ast }T_{\lambda }$ for $\lambda \in
\Lambda $, see \cite[Appendix A]{CBMS}) in a $C^{\ast }$-algebra $B$
satisfying:
\begin{enumerate}
\item[(TCK1)] $\left\{ T_{v}:v\in \Lambda ^{0}\right\} $ is a collection of
mutually orthogonal projections;
\item[(TCK2)] $T_{\lambda }T_{\mu }=T_{\lambda \mu }$ whenever $s\left(
\lambda \right) =r\left( \mu \right) $;
\item[(TCK3)] $T_{\lambda }^{\ast }T_{\mu }=\sum_{(\rho ,\tau )\in \Lambda
^{\min }\left( \lambda ,\mu \right) }T_{\rho }T_{\tau }^{\ast }$ for all
\lambda ,\mu \in \Lambda $; and
\item[(CK)] $\prod_{\lambda \in E}\left( T_{r\left( E\right) }-T_{\lambda
}T_{\lambda }^{\ast }\right) =0$ for all $E\in \operatorname{FE}\left( \Lambda
\right) $.
\end{enumerate}
For a finitely aligned $k$-graph $\Lambda $, there exists a universal
$C^{\ast }$-algebra $C^{\ast }\left( \Lambda \right) $ generated by the
universal Cuntz-Krieger $\Lambda $-family $\left\{ t_{\lambda }:\lambda \in
\Lambda \right\} $. Now suppose that $\left\{ S_{\lambda },S_{\mu ^{\ast
}}:\lambda ,\mu \in \Lambda \right\} $ is a Kumjian-Pask $\Lambda $-family in
an $R$-algebra $A$ and we define $T_{\lambda }:=S_{\lambda }$ for $\lambda
\in \Lambda $ and $T_{\mu }^{\ast }:=S_{\mu ^{\ast }}$ for $\mu ^{\ast }\in
G\left( \Lambda ^{\neq 0}\right) $. Then $\left\{ T_{\lambda }:\lambda \in \Lambda
\right\} $ is a collection satisfying $T_{\lambda }=T_{\lambda }T_{\lambda
}^{\ast }T_{\lambda }$ for $\lambda \in \Lambda $, (TCK1)--(TCK3) and (CK). (Note
that we do not say that $\left\{ T_{\lambda }:\lambda \in \Lambda \right\} $
is a Cuntz-Krieger $\Lambda $-family, since we need a $C^{\ast }$-algebra
containing $T_{\lambda },T_{\mu }^{\ast }$.) Similarly, a Cuntz-Krieger
$\Lambda $-family in a $C^{\ast }$-algebra gives a Kumjian-Pask $\Lambda
$-family. Thus one can translate proofs about Cuntz-Krieger $\Lambda
$-families to proofs about Kumjian-Pask $\Lambda $-families.
The key ingredient in the proof of Theorem \ref{the-graded-uniqueness-theorem}
is proving that the uniqueness theorem holds on the core
${\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{0}:=\operatorname{span}_{R}\left\{ s_{\lambda
}s_{\mu ^{\ast }}:d\left( \lambda \right) =d\left( \mu \right) \right\} $
(Theorem \ref{injectivity-on-the core}). First we establish some preliminary
results and notation.
Following \cite[Lemma 3.2]{RSY04}, for every finite set $E\subseteq \Lambda
$, there exists a finite set $F\subseteq \Lambda $ which contains $E$ and
satisfies
\begin{align}
\lambda ,\mu ,\rho ,\tau \in & F\text{, }d\left( \lambda \right) =d\left(
\mu \right) \text{, }d\left( \rho \right) =d\left( \tau \right) \text{, }
s\left( \lambda \right) =s\left( \mu \right) \text{, and }s\left( \rho
\right) =s\left( \tau \right) \text{ } \label{pi-E} \\
& \text{imply }\left\{ \lambda \alpha ,\tau \beta :\left( \alpha ,\beta
\right) \in \Lambda ^{\min }\left( \mu ,\rho \right) \right\} \subseteq F
\text{.} \notag
\end{align}
We then write
\begin{equation*}
\Pi E:=\bigcap \{F\subseteq \Lambda :E\subseteq F\text{ and }F\text{
satisfies \eqref{pi-E}}\}
\end{equation*}
and $\Pi E\times _{d,s}\Pi E$ for the set $\left\{ \left( \lambda ,\mu
\right) \in \Pi E\times \Pi E:d\left( \lambda \right) =d\left( \mu \right)
,s\left( \lambda \right) =s\left( \mu \right) \right\} $. Note that $\Pi E$
is finite. Now recall from Notation 3.12 of \cite{RSY04} that for $\lambda
\in \Pi E$, we write
\begin{equation*}
T\left( \lambda \right) :=\left\{ \nu \in s\left( \lambda \right) \Lambda
:d\left( \nu \right) \neq 0,\lambda \nu \in \Pi E\right\} \text{.}
\end{equation*}
Since $\lambda T\left( \lambda \right) \subseteq \Pi E$ and $\Pi E$ is
finite, $T\left( \lambda \right) $ is also finite.
Now suppose that $\left\{ S_{\lambda },S_{\mu ^{\ast}}:\lambda ,\mu \in \Lambda
\right\}$ is a Kumjian-Pask $\Lambda $-family in an $R$-algebra $A$. The
argument of Lemma 3.2 of \cite{RSY04} shows that the set
\begin{equation*}
M_{\Pi E}^{S}:=\operatorname{span}_{R}\left\{ S_{\lambda }S_{\mu ^{\ast }}:\left(
\lambda ,\mu \right) \in \Pi E\times _{d,s}\Pi E\right\}
\end{equation*}
is closed under multiplication. For $\left( \lambda ,\mu \right) \in \Pi
E\times _{d,s}\Pi E$, define
\begin{equation*}
\Theta \left( S\right) _{\lambda ,\mu }^{\Pi E}:=S_{\lambda }\big(\prod_{\nu
\in T\left( \lambda \right) }\left( S_{s\left( \lambda \right) }-S_{\lambda
\nu }S_{\left( \lambda \nu \right) ^{\ast }}\right) \big)S_{\mu ^{\ast
}}\text{.}
\end{equation*}
Applying the argument of Proposition 3.9 and Proposition 3.11 of \cite{RSY04}
gives the following.
\begin{lemma}
\label{span-of-M}Let $\left\{ S_{\lambda },S_{\mu ^{\ast }}:\lambda ,\mu \in
\Lambda \right\} $ be a Kumjian-Pask $\Lambda $-family in an $R$-algebra $A$
and $E \subseteq \Lambda$ be finite. For $\left( \lambda ,\mu \right)
,\left( \rho ,\tau \right) \in \Pi E\times _{d,s}\Pi E$, we have
\begin{equation*}
\Theta \left( S\right) _{\lambda ,\mu }^{\Pi E}\Theta \left( S\right) _{\rho
,\tau }^{\Pi E}=\delta _{\mu ,\rho }\Theta \left( S\right) _{\lambda ,\tau
}^{\Pi E}, \quad S_{\lambda }S_{\mu ^{\ast }}=\sum_{\lambda \nu \in \Pi
E}\Theta \left( S\right) _{\lambda \nu ,\mu \nu }^{\Pi E}
\end{equation*}
and $M_{\Pi E}^{S}$ is spanned by the set $\{\Theta \left( S\right)
_{\lambda ,\mu }^{\Pi E}:\left( \lambda ,\mu \right) \in \Pi E\times
_{d,s}\Pi E\}$.
\end{lemma}
\begin{lemma}
\label{injectivity-on-M}Let $\Lambda $ be a finitely aligned $k$-graph, $R $
be a commutative ring with $1$ and $E \subseteq \Lambda$ be finite. Suppose
that $\pi :{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) \rightarrow A$
is an $R$-algebra homomorphism such that $\pi \left( rs_{v}\right) \neq 0$
for all $r\in \left. R\right\backslash \left\{ 0\right\} $ and $v\in \Lambda
^{0}$. Let $\left( \lambda ,\mu \right) \in \Pi E\times _{d,s}\Pi E$. Then
the following conditions are equivalent:
\begin{enumerate}
\item[(a)] $\pi \big(\Theta \left( s\right) _{\lambda ,\mu }^{\Pi E}\big)=0$.
\item[(b)] $\Theta \left( s\right) _{\lambda ,\mu }^{\Pi E}=0$.
\item[(c)] $T\left( \lambda \right) $ is exhaustive.
\end{enumerate}
Furthermore, for $r\in \left. R\right\backslash \left\{ 0\right\} $, we have
\begin{equation*}
\pi \big(r\Theta \left( s\right) _{\lambda ,\mu }^{\Pi E}\big)=0\text{ if
and only if }r\Theta \left( s\right) _{\lambda ,\mu }^{\Pi E}=0
\end{equation*}
and $\pi $ is injective on $M_{\Pi E}^{s}$.
\end{lemma}
\begin{proof}
By following the argument of Proposition 3.13 and Corollary 3.17 of \cite
{RSY04}, we obtain the equivalence of (a)--(c). Now take $\left( \lambda
,\mu \right) \in \Pi E\times _{d,s}\Pi E$ and $r\in \left. R\right\backslash
\left\{ 0\right\} $. If $r\Theta \left( s\right) _{\lambda ,\mu }^{\Pi E}=0$,
we trivially have $\pi \big(r\Theta \left( s\right) _{\lambda ,\mu }^{\Pi
E}\big)=0$. So suppose $\pi \big(r\Theta \left( s\right) _{\lambda ,\mu
}^{\Pi E}\big)=0$. Since $\pi \left( rs_{v}\right) \neq 0$ for all $r\in
\left. R\right\backslash \left\{ 0\right\} $ and $v\in \Lambda ^{0}$,
by Remark \ref{properties-of-KP-additional-2}, $\pi \big(r\Theta \left(
s\right) _{\lambda ,\mu }^{\Pi E}\big)=0$ implies that $T\left( \lambda
\right) $ is exhaustive (since $r\neq 0$) and by (c)$\Rightarrow $(b),
$\Theta \left( s\right) _{\lambda ,\mu }^{\Pi E}=0$. So $r\Theta \left(
s\right) _{\lambda ,\mu }^{\Pi E}=0$, as required.
Next we show that $\pi $ is injective on $M_{\Pi E}^{s}$. Take $a\in M_{\Pi
E}^{s}$ such that $\pi \left( a\right) =0$. We have to show $a=0$. Since
$a\in M_{\Pi E}^{s}$ and $M_{\Pi E}^{s}=\operatorname{span}_{R}\{\Theta \left(
s\right) _{\lambda ,\mu }^{\Pi E}:\left( \lambda ,\mu \right) \in \Pi
E\times _{d,s}\Pi E\}$ (Lemma \ref{span-of-M}), we write $a=\sum_{\left(
\lambda ,\mu \right) \in F}r_{\lambda ,\mu }\Theta \left( s\right) _{\lambda
,\mu }^{\Pi E}$ where $F\subseteq \Pi E\times _{d,s}\Pi E$ is finite and for
all $\left( \lambda ,\mu \right) \in F$, we have $r_{\lambda ,\mu }\in R$
and $\Theta \left( s\right) _{\lambda ,\mu }^{\Pi E}\neq 0$. If $T\left(
\lambda \right) $ is exhaustive for some $\left( \lambda ,\mu \right) \in F$
, then by (c)$\Rightarrow $(b), $\Theta \left( s\right) _{\lambda ,\mu
}^{\Pi E}=0$, which contradicts $\Theta \left( s\right) _{\lambda ,\mu
}^{\Pi E}\neq 0$. So $T\left( \lambda \right) $ is non-exhaustive for all $
\left( \lambda ,\mu \right) \in F$. Since $\pi \left( a\right) =0$, then for
$\left( \rho ,\tau \right) \in F$, we have
\begin{align*}
0& =\pi \big(\Theta \left( s\right) _{\rho ,\rho }^{\Pi E}\big)\pi \left(
a\right) \pi \big(\Theta \left( s\right) _{\tau ,\tau }^{\Pi E}\big) \\
& =\pi \big(\Theta \left( s\right) _{\rho ,\rho }^{\Pi E}\big)\pi \big(
\sum_{\left( \lambda ,\mu \right) \in F}r_{\lambda ,\mu }\Theta \left(
s\right) _{\lambda ,\mu }^{\Pi E}\big)\pi \big(\Theta \left( s\right) _{\tau
,\tau }^{\Pi E}\big) \\
& =r_{\rho ,\tau }\pi \big(\Theta \left( s\right) _{\rho ,\tau }^{\Pi E}\big)
= r_{\rho ,\tau } \Theta
\left( \pi\left(s\right)\right) _{\rho ,\tau }^{\Pi E}
\text{ (by Lemma \ref{span-of-M}).}
\end{align*}
But now since $\pi \left( rs_{v}\right) \neq 0$ for all $r\in \left.
R\right\backslash \left\{ 0\right\} $ and $v\in \Lambda ^{0}$, then by
Remark \ref{properties-of-KP-additional-2}, $r_{\rho ,\tau } \Theta
\left( \pi\left(s\right)\right) _{\rho ,\tau }^{\Pi E}=0$ implies that $r_{\rho ,\tau
}=0$ (since $T\left( \rho \right) $ is non-exhaustive). Therefore, $a=0$ and
$\pi $ is injective on $M_{\Pi E}^{s}$.
\end{proof}
A direct consequence of Lemma \ref{injectivity-on-M} is:
\begin{theorem}
\label{injectivity-on-the core}Let $\Lambda $ be a finitely aligned $k$-graph and $R$ be a commutative ring with $1$. Suppose that $\pi :{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) \rightarrow A$ is an $R$-algebra homomorphism such that $\pi \left( rs_{v}\right) \neq 0$ for all $r\in \left. R\right\backslash \left\{ 0\right\} $ and $v\in \Lambda ^{0}$. Then $\pi $ is injective on ${\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{0}$.
\end{theorem}
\begin{proof}
Take $a\in {\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{0}$ such that
$\pi \left( a\right) =0$. We have to show $a=0$. Write $a=\sum_{\left(
\lambda ,\mu \right) \in F}r_{\lambda ,\mu }s_{\lambda }s_{\mu ^{\ast }}$
with $d\left( \lambda \right) =d\left( \mu \right) $ for $\left( \lambda
,\mu \right) \in F$. Define $E:=\left\{ \lambda ,\mu :\left( \lambda ,\mu
\right) \in F\right\} $ and then $a\in M_{\Pi E}^{s}$. Since $\pi $ is
injective on $M_{\Pi E}^{s}$ (Lemma \ref{injectivity-on-M}), $a=0$.
\end{proof}
Now we establish the last stepping stone result before proving Theorem \ref
{the-graded-uniqueness-theorem}.
\begin{lemma}
\label{generator-of-ideal}Let $I$ be a graded ideal of ${\normalsize \operatorname{KP
}}_{R}\left( \Lambda \right) $. Then $I$ is generated as an ideal by the set
$I_{0}:=I\cap {\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{0}$.
\end{lemma}
\begin{proof}
We generalise the argument of \cite[Lemma 5.1]{T11}. Take $n\in \mathbb{Z}
^{k}$ and write $n=n_{1}-n_{2}$ such that $n_{1},n_{2}\in \mathbb{N}^{k}$.
We show that $I_{n}:=I\cap {\normalsize \operatorname{KP}}_{R}\left( \Lambda \right)
_{n}$ is contained in ${\normalsize \operatorname{KP}}_{R}\left( \Lambda \right)
_{n_{1}}I_{0}{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{n_{2}}$.
Now take $a\in I_{n}$ and write $a=\sum_{\left( \lambda ,\mu \right) \in
F}r_{\lambda ,\mu }s_{\lambda }s_{\mu ^{\ast }}$. Note that $d\left( \lambda
\right) -d\left( \mu \right) =n$ for $\left( \lambda ,\mu \right) \in F$.
Since $n=n_{1}-n_{2}$ with $n_{1},n_{2}\in \mathbb{N}^{k}$, then for every $
\left( \lambda ,\mu \right) \in F$, by the factorisation property, there
exist $\lambda _{1},\lambda _{2},\mu _{1},\mu _{2}$ such that
\begin{equation*}
\lambda =\lambda _{1}\lambda _{2}\text{, }\mu =\mu _{1}\mu _{2}\text{, }d\left( \lambda _{1}\right) =n_{1}\text{, }d\left( \mu _{1}\right) =n_{2}\text{, and }d\left( \lambda _{2}\right) =d\left( \mu _{2}\right) \text{.}
\end{equation*}
Hence $a=\sum_{\left( \lambda _{1}\lambda _{2},\mu _{1}\mu _{2}\right) \in
F}r_{\lambda _{1}\lambda _{2},\mu _{1}\mu _{2}}s_{\lambda _{1}}\left(
s_{\lambda _{2}}s_{\mu _{2}^{\ast }}\right) s_{\mu _{1}^{\ast }}$. Take $\left( \alpha _{1}\alpha _{2},\beta _{1}\beta _{2}\right) \in F$. Note
that for $\nu ,\gamma \in \Lambda $ with $d\left( \nu \right) =d\left(
\gamma \right) $, by Remark \ref{properties-of-KP-additional}, we have $s_{\nu ^{\ast }}s_{\gamma }=0$ if $\nu \neq \gamma $. Then
\begin{align*}
s_{\alpha _{1}^{\ast }}as_{\beta _{1}}& =\sum_{\left( \lambda _{1}\lambda
_{2},\mu _{1}\mu _{2}\right) \in F}r_{\lambda _{1}\lambda _{2},\mu _{1}\mu
_{2}}\left( s_{\alpha _{1}^{\ast }}s_{\lambda _{1}}\right) \left( s_{\lambda
_{2}}s_{\mu _{2}^{\ast }}\right) \left( s_{\mu _{1}^{\ast }}s_{\beta
_{1}}\right) \\
& =\sum_{\left( \alpha _{1}\lambda _{2},\beta _{1}\mu _{2}\right) \in
F}r_{\alpha _{1}\lambda _{2},\beta _{1}\mu _{2}}s_{\lambda _{2}}s_{\mu
_{2}^{\ast }}
\end{align*}
since $d\left( \alpha _{1}\right) =n_{1}=d\left( \lambda _{1}\right) $ and $
d\left( \beta _{1}\right) =n_{2}=d\left( \mu _{1}\right) $ for $\left(
\lambda _{1}\lambda _{2},\mu _{1}\mu _{2}\right) \in F$. Since $a\in I$,
then $s_{\alpha _{1}^{\ast }}as_{\beta _{1}}\in I$. Furthermore, since $
d\left( \lambda _{2}\right) =d\left( \mu _{2}\right) $ for $\left( \alpha
_{1}\lambda _{2},\beta _{1}\mu _{2}\right) \in F$, then $s_{\alpha
_{1}^{\ast }}as_{\beta _{1}}\in {\normalsize \operatorname{KP}}_{R}\left( \Lambda
\right) _{0}$. Hence, for $\left( \alpha _{1}\alpha _{2},\beta _{1}\beta
_{2}\right) \in F$,
\begin{equation*}
\sum_{\left( \alpha _{1}\lambda _{2},\beta _{1}\mu _{2}\right) \in
F}r_{\alpha _{1}\lambda _{2},\beta _{1}\mu _{2}}s_{\lambda _{2}}s_{\mu
_{2}^{\ast }}=s_{\alpha _{1}^{\ast }}as_{\beta _{1}}\in I_{0}
\end{equation*}
and
\begin{equation*}
\sum_{\left( \alpha _{1}\lambda _{2},\beta _{1}\mu _{2}\right) \in
F}r_{\alpha _{1}\lambda _{2},\beta _{1}\mu _{2}}s_{\alpha _{1}\lambda
_{2}}s_{\left( \beta _{1}\mu _{2}\right) ^{\ast }}=s_{\alpha _{1}}\left(
s_{\alpha _{1}^{\ast }}as_{\beta _{1}}\right) s_{\beta _{1}^{\ast }}\in
{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{n_{1}}I_{0}{\normalsize
\operatorname{KP}}_{R}\left( \Lambda \right) _{n_{2}}\text{.}
\end{equation*}
Therefore
\begin{align*}
a& =\sum_{\left( \lambda _{1}\lambda _{2},\mu _{1}\mu _{2}\right) \in
F}r_{\lambda _{1}\lambda _{2},\mu _{1}\mu _{2}}s_{\lambda _{1}\lambda
_{2}}s_{\left( \mu _{1}\mu _{2}\right) ^{\ast }} \\
& =\sum_{\left\{ \left( \alpha _{1},\beta _{1}\right) :\left( \alpha
_{1}\alpha _{2},\beta _{1}\beta _{2}\right) \in F\right\} }\sum_{\left(
\alpha _{1}\lambda _{2},\beta _{1}\mu _{2}\right) \in F}r_{\alpha
_{1}\lambda _{2},\beta _{1}\mu _{2}}s_{\alpha _{1}\lambda _{2}}s_{\left(
\beta _{1}\mu _{2}\right) ^{\ast }}
\end{align*}
belongs to ${\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{n_{1}}I_{0}{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{n_{2}}$, and $I_{n}\subseteq {\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{n_{1}}I_{0}{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{n_{2}}$.
Now since $I$ is a graded ideal, $I=\bigoplus_{n\in \mathbb{Z}^{k}}I_{n}$ and
$I$ is generated as an ideal by $I_{0}$.
\end{proof}
\begin{proof}[Proof of Theorem \protect\ref{the-graded-uniqueness-theorem}]
Because $\pi$ is graded, we have that $\ker \pi$ is a graded ideal. By Lemma
\ref{generator-of-ideal}, the ideal $\ker \pi $ is generated by the set $
\ker \pi \cap {\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{0}$. Thus
it suffices to show $\pi |_{{\normalsize \operatorname{KP}}_{R}\left( \Lambda
\right) _{0}}:{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right)
_{0}\rightarrow A$ is injective. However, the injectivity follows from
Theorem \ref{injectivity-on-the core}.
\end{proof}
One immediate application of Theorem \ref{the-graded-uniqueness-theorem} is:
\begin{proposition}
\label{KP-is-dense-in-C}Let $\Lambda $ be a finitely aligned $k$-graph. Let $\left\{ s_{\lambda },s_{\mu ^{\ast }}:\lambda ,\mu \in \Lambda \right\} $ be the universal Kumjian-Pask $\Lambda $-family for $R=\mathbb{C}$ and $\left\{ t_{\lambda }:\lambda \in \Lambda \right\} $ be the universal Cuntz-Krieger $\Lambda $-family. Then there is an isomorphism $\pi _{t}:{\normalsize \operatorname{KP}}_{\mathbb{C}}\left( \Lambda \right) \rightarrow \operatorname{span}_{\mathbb{C}}\left\{ t_{\lambda }t_{\mu }^{\ast }:\lambda ,\mu \in \Lambda \right\} $ such that $\pi _{t}\left( s_{\lambda }\right) =t_{\lambda }$ and $\pi _{t}\left( s_{\mu ^{\ast }}\right) =t_{\mu }^{\ast }$ for $\lambda ,\mu \in \Lambda $. In particular, ${\normalsize \operatorname{KP}}_{\mathbb{C}}\left( \Lambda \right) $ is isomorphic to a dense subalgebra of $C^{\ast }\left( \Lambda \right) $.
\end{proposition}
\begin{proof}
Since $\left\{ t_{\lambda }:\lambda \in \Lambda \right\} $ satisfies
(TCK1-3) and (CK), then $\left\{ t_{\lambda },t_{\mu }^{\ast }:\lambda ,\mu
\in \Lambda \right\} $ also satisfies (KP1-4) and is a Kumjian-Pask $\Lambda
$-family in $C^{\ast }\left( \Lambda \right) $. Thus the universal property
of ${\normalsize \operatorname{KP}}_{\mathbb{C}}\left( \Lambda \right) $ gives a
homomorphism $\pi _{t}$ from ${\normalsize \operatorname{KP}}_{\mathbb{C}}\left(
\Lambda \right) $ onto the dense subalgebra
\begin{equation*}
A:=\operatorname{span}_{\mathbb{C}}\left\{ t_{\lambda }t_{\mu }^{\ast }:\lambda ,\mu
\in \Lambda \right\}
\end{equation*}
of $C^{\ast }\left( \Lambda \right) $.
Next we show the injectivity of $\pi _{t}$. By Theorem \ref
{the-graded-uniqueness-theorem}, it suffices to show that $\pi _{t}$ is a $\mathbb{Z}^{k}$-graded algebra homomorphism. We claim that $A$ is graded by
\begin{equation*}
A_{n}:=\operatorname{span}_{\mathbb{C}}\left\{ t_{\lambda }t_{\mu }^{\ast }:\lambda
,\mu \in \Lambda ,d\left( \lambda \right) -d\left( \mu \right) =n\right\}
\text{.}
\end{equation*}
Note that for $\lambda ,\mu ,\rho ,\tau \in \Lambda $ with $d\left( \lambda
\right) -d\left( \mu \right) =n$ and $d\left( \rho \right) -d\left( \tau
\right) =m$, we have
\begin{align*}
t_{\lambda }t_{\mu }^{\ast }t_{\rho }t_{\tau }^{\ast }& =t_{\lambda }\big(
\sum_{(\mu ^{\prime },\rho ^{\prime })\in \Lambda ^{\min }\left( \mu ,\rho
\right) }t_{\mu ^{\prime }}t_{\rho ^{\prime }}^{\ast }\big)t_{\tau }^{\ast
\text{ (by (TCK3))} \\
& =\sum_{(\mu ^{\prime },\rho ^{\prime })\in \Lambda ^{\min }\left( \mu
,\rho \right) }t_{\lambda \mu ^{\prime }}t_{\tau \rho ^{\prime }}^{\ast }
\end{align*}
and for $(\mu ^{\prime },\rho ^{\prime })\in \Lambda ^{\min }\left( \mu
,\rho \right) $
\begin{align*}
d\left( \lambda \mu ^{\prime }\right) -d\left( \tau \rho ^{\prime }\right) &
=d\left( \lambda \right) +d\left( \mu ^{\prime }\right) -d\left( \tau
\right) -d\left( \rho ^{\prime }\right) \\
& =d\left( \lambda \right) +\left( d\left( \mu \right) \vee d\left( \rho
\right) -d\left( \mu \right) \right) \\
& \quad -d\left( \tau \right) -\left( d\left( \mu \right) \vee d\left( \rho
\right) -d\left( \rho \right) \right) \\
& =\left( d\left( \lambda \right) -d\left( \mu \right) \right) -\left(
d\left( \tau \right) -d\left( \rho \right) \right) \\
& =n+m\text{.}
\end{align*}
Hence $A_{n}A_{m}\subseteq A_{n+m}$. Since each spanning element $t_{\lambda
}t_{\mu }^{\ast }$ belongs to $A_{d\left( \lambda \right) -d\left( \mu
\right) }$, every element $a$ of $A$ can be written as a finite sum $\sum
a_{n}$ with $a_{n}\in A_{n}$. If a finite sum $\sum a_{n}=0$ with each $a_{n}\in A_{n}$, then each $a_{n}=0$ by following the argument of \cite
[Lemma 7.4]{ACaHR13}. Thus $\left\{ A_{n}:n\in \mathbb{Z}^{k}\right\} $ is a
grading of $A$, as claimed. Then $\pi _{t}$ is $\mathbb{Z}^{k}$-graded and
by Theorem \ref{the-graded-uniqueness-theorem}, $\pi _{t}$ is injective.
\end{proof}
\section{Steinberg algebras}
\label{Section-Steinberg-algebra}Steinberg algebras were introduced by
Steinberg in \cite{St10} and are algebraic analogues of groupoid $C^{\ast}$
-algebras. In \cite{CS15}, Clark and Sims show that for every $1$-graph $E$,
its Leavitt path algebra is isomorphic to a Steinberg algebra. In this
section, we show that for every finitely aligned $k$-graph $\Lambda $, its
Kumjian-Pask algebra is isomorphic to a Steinberg algebra (Proposition \ref
{KP-is-isomorphic-to-Steinberg-algebras}). We start out with an introduction
to groupoids and Steinberg algebras in general.
A groupoid $\mathcal{G}$ is a small category in which every morphism has an
inverse. For a groupoid $\mathcal{G}$, we write $r\left( a\right) $ and $
s\left( a\right) $ to denote the \emph{range} and \emph{source} of $a\in
\mathcal{G}$. Because $r\left( a\right) =s\left( a^{-1}\right) $ for $a\in \mathcal{G}$, the maps $r$ and $s$ have the same image. We call this common image the \emph{unit space }of $\mathcal{G}$ and denote it $\mathcal{G}^{\left( 0\right) }$. A pair $\left( a,b\right) \in \mathcal{G}\times \mathcal{G}$ is said to be \emph{composable }if $s\left( a\right) =r\left( b\right) $. We then use the notation $\mathcal{G}^{\left( 2\right) }$ to denote the collection of composable pairs in $\mathcal{G}$. For $A,B\subseteq \mathcal{G}$, we write
\begin{equation*}
AB:=\left\{ ab:a\in A,b\in B,\left( a,b\right) \in \mathcal{G}^{\left(
2\right) }\right\} \text{.}
\end{equation*}
We say $\mathcal{G}$ is a \emph{topological groupoid} if $\mathcal{G}$ is
endowed with a topology such that composition and inversion on $\mathcal{G}$
are continuous. We also call an open set $U\subseteq \mathcal{G}$ an \emph{open bisection }if $s$ and $r$ restricted to $U$ are homeomorphisms into $\mathcal{G}^{(0)}$. Finally, we call $\mathcal{G}$ \emph{ample} if $\mathcal{G}$ has a basis of compact open bisections.
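To fix ideas, we record a standard special case (included only for illustration and not needed later): every discrete group is a Hausdorff ample groupoid.
\begin{example}
Let $G$ be a discrete group with identity $e$, viewed as a groupoid with a single object. Then $r\left( g\right) =s\left( g\right) =e$ for all $g\in G$, so $G^{\left( 0\right) }=\left\{ e\right\} $, every pair is composable, and composition is the group multiplication. With the discrete topology, each singleton $\left\{ g\right\} $ is a compact open bisection and these singletons form a basis, so $G$ is a Hausdorff ample groupoid.
\end{example}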
\begin{remark}
\label{remark-of-groupouid-Glambda-totally-disconnected} Note that if
\mathcal{G}$ is ample, then $\mathcal{G}$ is locally compact and \'{e}tale.
In fact, $\mathcal{G}$ is Hausdorff ample if and only if $\mathcal{G}$ is
locally compact, Hausdorff and \'etale with totally disconnected unit space.
\end{remark}
Now suppose that $\mathcal{G}$ is a Hausdorff ample groupoid and $R$ is a
commutative ring with $1$. As in \cite[Section 2.2]{CE-M15}, the Steinberg
algebra\footnote{In \cite{St10}, Steinberg writes $R\mathcal{G}$ to denote $A_{R}(\mathcal{G})$.} associated to $\mathcal{G}$ is
\begin{equation*}
A_{R}\left( \mathcal{G}\right) :=\{f:\mathcal{G}\rightarrow R:f\text{ is
locally constant and has compact support}\}
\end{equation*}
where addition and scalar multiplication are defined pointwise, and
convolution is given by
\begin{equation*}
\left( f\star g\right) \left( a\right) :=\sum_{r\left( a\right) =r\left(
b\right) }f\left( b\right) g\left( b^{-1}a\right) \text{.}
\end{equation*}
Furthermore, for compact open bisections $U$ and $V$, we have the
characteristic function $1_U \in A_R(\mathcal{G})$ and
\begin{equation*}
1_{U}\star 1_{V}=1_{UV}
\end{equation*}
\cite[Proposition 4.3]{St10}. Note that for $f\in A_{R}\left( \mathcal{G}\right) $, $\operatorname{supp}\left( f\right) $ is clopen (\cite[Remark 2.1]{CE-M15}).
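For orientation, we note a standard special case (included only for illustration): if $\mathcal{G}=G$ is a discrete group, viewed as an ample groupoid with unit space a single point, then $A_{R}\left( G\right) $ consists of the finitely supported functions $f:G\rightarrow R$, every $b\in G$ satisfies $r\left( b\right) =r\left( a\right) $, and the convolution reduces to the product of the group ring $RG$:
\begin{equation*}
\left( f\star g\right) \left( a\right) =\sum_{b\in G}f\left( b\right) g\left( b^{-1}a\right) \text{.}
\end{equation*}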
\begin{example}
\label{groupouid-Glambda}To each finitely aligned $k$-graph $\Lambda $, we
define the associated \emph{boundary-path groupoid} $\mathcal{G}_{\Lambda }$
from \cite[Definition 4.8]{Y07} as follows. Write
\begin{equation*}
\Lambda \ast _{s}\Lambda :=\left\{ \left( \lambda ,\mu \right) \in \Lambda
\times \Lambda :s\left( \lambda \right) =s\left( \mu \right) \right\} \text{.}
\end{equation*}
The objects of $\mathcal{G}_{\Lambda}$ are
\begin{equation*}
\operatorname{Obj}\left( \mathcal{G}_{\Lambda }\right) :=\partial \Lambda \text{.}
\end{equation*}
The morphisms are
\begin{align*}
\operatorname{Mor}\left( \mathcal{G}_{\Lambda }\right) & :=\{\left( \lambda
z,d\left( \lambda \right) -d\left( \mu \right) ,\mu z\right) \in \partial
\Lambda \times \mathbb{Z}^{k}\times \partial \Lambda : \\
& \quad \quad \quad \quad \quad \quad \left( \lambda ,\mu \right) \in
\Lambda \ast _{s}\Lambda ,z\in s\left( \lambda \right) \partial \Lambda \} \\
& =\{\left( x,m,y\right) \in \partial \Lambda \times \mathbb{Z}^{k}\times
\partial \Lambda :\text{there exist }p,q\in \mathbb{N}^{k}\text{ such that}
\\
& \quad \quad \quad \quad \quad \quad p\leq d\left( x\right) ,q\leq d\left(
y\right) ,p-q=m\text{ and }\sigma ^{p}x=\sigma ^{q}y\}\text{.}
\end{align*}
The range and source maps are given by $r\left( x,m,y\right) :=x$ and $s\left( x,m,y\right) :=y$, and composition is defined such that
\begin{equation*}
\left( \left( x_{1},m_{1},y_{1}\right) ,\left( y_{1},m_{2},y_{2}\right)
\right) \mapsto \left( x_{1},m_{1}+m_{2},y_{2}\right) \text{.}
\end{equation*}
Finally, inversion is given by $\left(
y,-m,x\right) $.
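For instance (a standard $1$-graph special case, included only for illustration), suppose $\Lambda $ has a single vertex $v$ and a single edge $f$ with $r\left( f\right) =s\left( f\right) =v$. Then $\partial \Lambda $ consists of the unique infinite path $x=fff\cdots $, the condition $\sigma ^{p}x=\sigma ^{q}x$ holds for all $p,q\in \mathbb{N}$, and hence
\begin{equation*}
\mathcal{G}_{\Lambda }=\left\{ \left( x,m,x\right) :m\in \mathbb{Z}\right\} \cong \mathbb{Z}\text{.}
\end{equation*}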
Next, we show how to realise $\mathcal{G}_{\Lambda}$ as a topological
groupoid. For $\left( \lambda ,\mu \right) \in \Lambda \ast _{s}\Lambda $
and a finite non-exhaustive subset $G\subseteq s\left( \lambda \right) \Lambda $, we write
\begin{equation*}
Z_{\Lambda }\left( \lambda \right) :=\lambda \partial \Lambda \text{,}
\end{equation*}
\begin{equation*}
Z_{\Lambda }\left( \left. \lambda \right\backslash G\right) :=\left.
Z_{\Lambda }\left( \lambda \right) \right\backslash \Big(\bigcup_{\nu \in
G}Z_{\Lambda }\left( \lambda \nu \right) \Big)\text{,}
\end{equation*}
\begin{align*}
Z_{\Lambda }\left( \lambda \ast _{s}\mu \right) & :=\{\left( x,d\left(
\lambda \right) -d\left( \mu \right) ,y\right) \in \mathcal{G}_{\Lambda
}:x\in Z_{\Lambda }\left( \lambda \right) ,y\in Z_{\Lambda }\left( \mu
\right) \\
& \quad \quad \quad \quad \quad \text{ and }\sigma ^{d\left( \lambda \right)
}x=\sigma ^{d\left( \mu \right) }y\}\text{,}
\end{align*}
and
\begin{equation*}
Z_{\Lambda }\left( \left. \lambda \ast _{s}\mu \right\backslash G\right)
:=\left. Z_{\Lambda }\left( \lambda \ast _{s}\mu \right) \right\backslash
\Big(\bigcup_{\nu \in G}Z_{\Lambda }\left( \lambda \nu \ast _{s}\mu \nu
\right) \Big).
\end{equation*}
The sets $Z_{\Lambda }\left( \left. \lambda \ast _{s}\mu \right\backslash
G\right) $ form a basis of compact open bisections for a second-countable,
Hausdorff topology on $\mathcal{G}_{\Lambda }$ under which it is an ample
groupoid. Further, the sets $Z_{\Lambda }\left( \left. \lambda
\right\backslash G\right) $ form a basis of compact open sets for $\mathcal{G
}_{\Lambda }^{\left( 0\right) }$.
\end{example}
\begin{remark}
\label{remark-of-groupouid-Glambda}Some notes on this example:
\begin{enumerate}
\item[(i)] We think of $\mathcal{G}_{\Lambda }^{\left( 0\right) }=\partial
\Lambda $ as a subset of $\mathcal{G}_{\Lambda }$ under the correspondence $
x\mapsto \left( x,0,x\right) $.
\item[(ii)] In \cite{Y07}, Yeend defines $Z_{\Lambda }\left( \left. \lambda
\right\backslash G\right) $ and $Z_{\Lambda }\left( \left. \lambda \ast
_{s}\mu \right\backslash G\right) $ where $G$ is finite. However, if $G$ is
exhaustive, then $Z_{\Lambda }\left( \left. \lambda \right\backslash
G\right) $ and $Z_{\Lambda }\left( \left. \lambda \ast _{s}\mu
\right\backslash G\right) $ are empty sets. Thus our definitions make sure
that both $Z_{\Lambda }\left( \left. \lambda \right\backslash G\right) $ and
$Z_{\Lambda }\left( \left. \lambda \ast _{s}\mu \right\backslash G\right) $
are non-empty.
\end{enumerate}
\end{remark}
Next we generalise \cite[Proposition 4.3]{CFST14} as follows:
\begin{proposition}
\label{KP-is-isomorphic-to-Steinberg-algebras}Let $\Lambda $ be a finitely
aligned $k$-graph and $\mathcal{G}_{\Lambda }$ be its boundary-path groupoid
as defined in Example \ref{groupouid-Glambda}. Let $R$ be a commutative ring
with $1$. Then there is an isomorphism $\pi _{T}:{\normalsize \operatorname{KP}}
_{R}\left( \Lambda \right) \rightarrow A_{R}\left( \mathcal{G}_{\Lambda
}\right) $ such that $\pi _{T}\left( s_{\lambda }\right) =1_{Z_{\Lambda
}\left( \lambda \ast _{s}s\left( \lambda \right) \right) }$ and $\pi
_{T}\left( s_{\mu ^{\ast }}\right) =1_{Z_{\Lambda }\left( s\left( \mu
\right) \ast _{s}\mu \right) }$ for $\lambda ,\mu \in \Lambda $.
\end{proposition}
The only part of the proof of Proposition~\ref
{KP-is-isomorphic-to-Steinberg-algebras} that requires much additional work
is showing the surjectivity of $\pi _{T}$. For this, we establish the
following two lemmas. These lemmas show that the characteristic function
associated to a compact open set in $\mathcal{G}_{\Lambda}$ can be written
as a sum of elements of the form $1_{Z_{\Lambda }\left( \left. \lambda \ast
_{s}\mu \right\backslash G\right) }$.
\begin{lemma}
\label{intersection-of-Z}Let $\left( \lambda ,\mu \right) ,\left( \lambda
^{\prime },\mu ^{\prime }\right) \in \Lambda \ast _{s}\Lambda $, $G\subseteq
s\left( \lambda \right) \Lambda $, and $G^{\prime }\subseteq s\left( \lambda
^{\prime }\right) \Lambda $. Define $F:=\Lambda ^{\min }\left( \lambda
,\lambda ^{\prime }\right) \cap \Lambda ^{\min }\left( \mu ,\mu ^{\prime
}\right) $. Then
\begin{equation}
Z_{\Lambda }\left( \left. \lambda \ast _{s}\mu \right\backslash G\right)
\cap Z_{\Lambda }\left( \left. \lambda ^{\prime }\ast _{s}\mu ^{\prime
}\right\backslash G^{\prime }\right) =\bigsqcup\limits_{(\gamma ,\gamma
^{\prime })\in F}Z_{\Lambda }\left( \left. \lambda \gamma \ast _{s}\mu
^{\prime }\gamma ^{\prime }\right\backslash \left[ \operatorname{Ext}\left( \gamma
;G\right) \cup \operatorname{Ext}\left( \gamma ^{\prime };G^{\prime }\right) \right]
\right) \text{.} \tag{*}
\end{equation}
\end{lemma}
\begin{proof}
We generalise the argument of \cite[Example 3.2]{CS15} for $1$-graphs. First
we show that the collection
\begin{equation*}
\left\{ Z_{\Lambda }\left( \left. \lambda \gamma \ast _{s}\mu ^{\prime
}\gamma ^{\prime }\right\backslash \left[ \operatorname{Ext}\left( \gamma ;G\right)
\cup \operatorname{Ext}\left( \gamma ^{\prime };G^{\prime }\right) \right] \right)
:(\gamma ,\gamma ^{\prime })\in F\right\}
\end{equation*}
is disjoint. It suffices to show that the collection
\begin{equation*}
\left\{ Z_{\Lambda }\left( \lambda \gamma \ast _{s}\mu ^{\prime }\gamma
^{\prime }\right) :(\gamma ,\gamma ^{\prime })\in F\right\}
\end{equation*}
is disjoint. Suppose for contradiction that there exist $(\gamma ,\gamma
^{\prime }),(\gamma ^{\prime \prime },\gamma ^{\prime \prime \prime })\in F$
such that $(\gamma ,\gamma ^{\prime })\neq (\gamma ^{\prime \prime },\gamma
^{\prime \prime \prime })$ and $V:=Z_{\Lambda }\left( \lambda \gamma \ast
_{s}\mu ^{\prime }\gamma ^{\prime }\right) \cap Z_{\Lambda }\left( \lambda
\gamma ^{\prime \prime }\ast _{s}\mu ^{\prime }\gamma ^{\prime \prime \prime
}\right) \neq \emptyset $. Note that if $\gamma =\gamma ^{\prime \prime }$,
then
\begin{align*}
\lambda ^{\prime }\gamma ^{\prime }& =\lambda \gamma \text{ (since }(\gamma
,\gamma ^{\prime })\in \Lambda ^{\min }\left( \lambda ,\lambda ^{\prime
}\right) \text{)} \\
& =\lambda \gamma ^{\prime \prime }\text{ (since }\gamma =\gamma ^{\prime
\prime }\text{)} \\
& =\lambda ^{\prime }\gamma ^{\prime \prime \prime }\text{ (since }(\gamma
^{\prime \prime },\gamma ^{\prime \prime \prime })\in \Lambda ^{\min }\left(
\lambda ,\lambda ^{\prime }\right) \text{)}
\end{align*}
and $\gamma ^{\prime }=\gamma ^{\prime \prime \prime }$ by the factorisation
property, which contradicts $(\gamma ,\gamma ^{\prime })\neq (\gamma
^{\prime \prime },\gamma ^{\prime \prime \prime })$. The same argument shows
that $\gamma ^{\prime }=\gamma ^{\prime \prime \prime }$ implies $\gamma
=\gamma ^{\prime \prime }$. Hence $\gamma \neq \gamma ^{\prime \prime }$ and
$\gamma ^{\prime }\neq \gamma ^{\prime \prime \prime }$. Meanwhile, since $
(\gamma ,\gamma ^{\prime }),(\gamma ^{\prime \prime },\gamma ^{\prime \prime
\prime })\in F$, then $d\left( \gamma \right) =d\left( \gamma ^{\prime
\prime }\right) $ and $d\left( \gamma ^{\prime }\right) =d\left( \gamma
^{\prime \prime \prime }\right) $. Take $\left( x,m,y\right) \in V$. Then $
x\in Z_{\Lambda }\left( \lambda \gamma \right) $ and $x\in Z_{\Lambda
}\left( \lambda \gamma ^{\prime \prime }\right) $. Since $d\left( \gamma
\right) =d\left( \gamma ^{\prime \prime }\right) $, then $d\left( \lambda
\gamma \right) =d\left( \lambda \gamma ^{\prime \prime }\right) $ and $
\gamma =x\left( d\left( \lambda \right) ,d\left( \lambda \gamma \right)
\right) =x\left( d\left( \lambda \right) ,d\left( \lambda \gamma ^{\prime
\prime }\right) \right) =\gamma ^{\prime \prime }$, which contradicts $
\gamma \neq \gamma ^{\prime \prime }$. Hence the collection $\left\{
Z_{\Lambda }\left( \lambda \gamma \ast _{s}\mu ^{\prime }\gamma ^{\prime
}\right) :(\gamma ,\gamma ^{\prime })\in F\right\} $ is disjoint, and so is
\begin{equation*}
\left\{ Z_{\Lambda }\left( \left. \lambda \gamma \ast _{s}\mu ^{\prime
}\gamma ^{\prime }\right\backslash \left[ \operatorname{Ext}\left( \gamma ;G\right)
\cup \operatorname{Ext}\left( \gamma ^{\prime };G^{\prime }\right) \right] \right)
:(\gamma ,\gamma ^{\prime })\in F\right\} \text{.}
\end{equation*}
Next we show the right inclusion of (*). Write
\begin{equation*}
U:=Z_{\Lambda }\left( \left. \lambda \ast _{s}\mu \right\backslash G\right)
\cap Z_{\Lambda }\left( \left. \lambda ^{\prime }\ast _{s}\mu ^{\prime
}\right\backslash G^{\prime }\right)
\end{equation*}
and take $\left( x,m,y\right) \in U$. We show $\left( x,m,y\right) \in
Z_{\Lambda }\left( \left. \lambda \gamma \ast _{s}\mu ^{\prime }\gamma
^{\prime }\right\backslash \left[ \operatorname{Ext}\left( \gamma ;G\right) \cup
\operatorname{Ext}\left( \gamma ^{\prime };G^{\prime }\right) \right] \right) $ for
some $\left( \gamma ,\gamma ^{\prime }\right) \in F$. Because $x\in
Z_{\Lambda }\left( \lambda \right) $ and $x\in Z_{\Lambda }\left( \lambda
^{\prime }\right) $, then $d\left( x\right) \geq d\left( \lambda \right)
\vee d\left( \lambda ^{\prime }\right) $ and there exists $\left( \gamma
,\gamma ^{\prime }\right) \in \Lambda ^{\min }\left( \lambda ,\lambda
^{\prime }\right) $ such that
\begin{equation}
x\in Z_{\Lambda }\left( \lambda \gamma \right) .
\label{equ-x-in-Z(lambda,gamma)}
\end{equation}
Using a similar argument, there exists $\left( \gamma ^{\prime \prime
},\gamma ^{\prime \prime \prime }\right) \in \Lambda ^{\min }\left( \mu ,\mu
^{\prime }\right) $ such that
\begin{equation}
y\in Z_{\Lambda }\left( \mu \gamma ^{\prime \prime }\right) .
\label{equ-y-in-Z(mu,gamma)}
\end{equation}
We claim that $\gamma =\gamma ^{\prime \prime }$ and $\gamma ^{\prime
}=\gamma ^{\prime \prime \prime }$. To see this, note that $m=d\left(
\lambda \right) -d\left( \mu \right) =d\left( \lambda ^{\prime }\right)
-d\left( \mu ^{\prime }\right) $ and
\begin{align*}
d\left( \gamma \right) & =d\left( \lambda \right) \vee d\left( \lambda
^{\prime }\right) -d\left( \lambda \right) =\left( d\left( \mu \right)
+m\right) \vee \left( d\left( \mu ^{\prime }\right) +m\right) -\left(
d\left( \mu \right) +m\right) \\
& =\left( d\left( \mu \right) \vee d\left( \mu ^{\prime }\right) \right)
+m-\left( d\left( \mu \right) +m\right) =d\left( \mu \right) \vee d\left(
\mu ^{\prime }\right) -d\left( \mu \right) =d\left( \gamma ^{\prime \prime
}\right) \text{.}
\end{align*}
Since $\left( x,m,y\right) \in Z_{\Lambda }\left( \left. \lambda \ast
_{s}\mu \right\backslash G\right) $, then $\sigma ^{d\left( \lambda \right)
}x=\sigma ^{d\left( \mu \right) }y$ and
\begin{equation*}
\gamma =\left( \sigma ^{d\left( \lambda \right) }x\right) \left( 0,d\left(
\gamma \right) \right) =\left( \sigma ^{d\left( \mu \right) }y\right) \left( 0,d\left( \gamma ^{\prime \prime }\right) \right) =\gamma ^{\prime \prime }\text{.}
\end{equation*}
Using a similar argument, we also get $\gamma ^{\prime }=\gamma ^{\prime
\prime \prime }$ proving the claim.
Next we show that $\left( x,m,y\right) \in Z_{\Lambda }\left( \lambda \gamma
\ast _{s}\mu ^{\prime }\gamma ^{\prime }\right) $. By
\eqref{equ-x-in-Z(lambda,gamma)} and \eqref{equ-y-in-Z(mu,gamma)}, we have $
x\in Z_{\Lambda }\left( \lambda \gamma \right) $ and $y\in Z_{\Lambda
}\left( \mu \gamma ^{\prime \prime }\right) $. Since $\gamma =\gamma
^{\prime \prime }$, $\gamma ^{\prime }=\gamma ^{\prime \prime \prime }$, and $
\left( \gamma ^{\prime \prime },\gamma ^{\prime \prime \prime }\right) \in
\Lambda ^{\min }\left( \mu ,\mu ^{\prime }\right) $, then $\mu \gamma
^{\prime \prime }=\mu \gamma =\mu ^{\prime }\gamma ^{\prime }$ and $y\in
Z_{\Lambda }\left( \mu ^{\prime }\gamma ^{\prime }\right) .$ On the other
hand, since $\left( x,m,y\right) \in Z_{\Lambda }\left( \left. \lambda \ast
_{s}\mu \right\backslash G\right) $, then $\sigma ^{d\left( \lambda \right)
}x=\sigma ^{d\left( \mu \right) }y$ and
\begin{equation*}
\sigma ^{d\left( \lambda \gamma \right) }x=\sigma ^{d\left( \mu \gamma
\right) }y=\sigma ^{d(\mu ^{\prime }\gamma ^{\prime })}y
\end{equation*}
since $\mu \gamma =\mu ^{\prime }\gamma ^{\prime }$. Since $m=d\left(
\lambda \right) -d\left( \mu \right) =d\left( \lambda \gamma \right)
-d\left( \mu ^{\prime }\gamma ^{\prime }\right) $, then $\left( x,m,y\right)
\in Z_{\Lambda }\left( \lambda \gamma \ast _{s}\mu ^{\prime }\gamma ^{\prime
}\right) $, as required.
Finally we show that $\left( x,m,y\right) \notin Z_{\Lambda }\left( \lambda
\gamma \nu \ast _{s}\mu ^{\prime }\gamma ^{\prime }\nu \right) $ for all $
\nu \in \operatorname{Ext}\left( \gamma ;G\right) \cup \operatorname{Ext}\left( \gamma
^{\prime };G^{\prime }\right) $. Suppose for a contradiction that there
exists $\nu \in \operatorname{Ext}\left( \gamma ;G\right) \cup \operatorname{Ext}\left(
\gamma ^{\prime };G^{\prime }\right) $ such that $\left( x,m,y\right) \in
Z_{\Lambda }\left( \lambda \gamma \nu \ast _{s}\mu ^{\prime }\gamma ^{\prime
}\nu \right) $. Without loss of generality, suppose $\nu \in \operatorname{Ext}
\left( \gamma ;G\right) $. Then there exists $\nu ^{\prime }\in G$ such that
$\gamma \nu \in Z_{\Lambda }\left( \nu ^{\prime }\right) $. Since $x\in
Z_{\Lambda }\left( \lambda \gamma \nu \right) $, $y\in Z_{\Lambda }\left(
\mu ^{\prime }\gamma ^{\prime }\nu \right) =Z_{\Lambda }\left( \mu \gamma
\nu \right) $, and $\gamma \nu \in Z_{\Lambda }\left( \nu ^{\prime }\right) $
, then $x\in Z_{\Lambda }\left( \lambda \nu ^{\prime }\right) $ and $y\in
Z_{\Lambda }\left( \mu \nu ^{\prime }\right) $ where $\nu ^{\prime }\in G$.
This contradicts $\left( x,m,y\right) \in Z_{\Lambda }\left( \left. \lambda
\ast _{s}\mu \right\backslash G\right) $. Hence
\begin{equation*}
\left( x,m,y\right) \in Z_{\Lambda }\left( \left. \lambda \gamma \ast
_{s}\mu ^{\prime }\gamma ^{\prime }\right\backslash \left[ \operatorname{Ext}\left(
\gamma ;G\right) \cup \operatorname{Ext}\left( \gamma ^{\prime };G^{\prime }\right)
\right] \right)
\end{equation*}
and
\begin{equation*}
U\subseteq \bigsqcup\limits_{(\gamma ,\gamma ^{\prime })\in F}Z_{\Lambda
}\left( \left. \lambda \gamma \ast _{s}\mu ^{\prime }\gamma ^{\prime
}\right\backslash \left[ \operatorname{Ext}\left( \gamma ;G\right) \cup \operatorname{Ext
\left( \gamma ^{\prime };G^{\prime }\right) \right] \right) \text{.}
\end{equation*}
Next we show the left inclusion of (*). Take $\left( \gamma ,\gamma ^{\prime
}\right) \in F$ and
\begin{equation}
\left( x,m,y\right) \in Z_{\Lambda }\left( \left. \lambda \gamma \ast
_{s}\mu ^{\prime }\gamma ^{\prime }\right\backslash \left[ \operatorname{Ext}\left(
\gamma ;G\right) \cup \operatorname{Ext}\left( \gamma ^{\prime };G^{\prime }\right)
\right] \right) \text{.} \label{equ-x-in-Z-without-ext}
\end{equation}
We show $\left( x,m,y\right) $ belongs to both $Z_{\Lambda }\left( \left.
\lambda \ast _{s}\mu \right\backslash G\right) $ and $Z_{\Lambda }\left(
\left. \lambda ^{\prime }\ast _{s}\mu ^{\prime }\right\backslash G^{\prime
}\right) $. Without loss of generality, it suffices to show $\left(
x,m,y\right) \in Z_{\Lambda }\left( \left. \lambda \ast _{s}\mu
\right\backslash G\right) $. First we show that $\left( x,m,y\right) \in
Z_{\Lambda }\left( \lambda \ast _{s}\mu \right) $. Note that we have $\mu
\gamma =\mu ^{\prime }\gamma ^{\prime }$ and $m=d\left( \lambda \gamma
\right) -d\left( \mu ^{\prime }\gamma ^{\prime }\right) =d\left( \lambda
\right) -d\left( \mu \right) $. On the other hand, $\left( x,m,y\right) \in
Z_{\Lambda }\left( \lambda \gamma \ast _{s}\mu ^{\prime }\gamma ^{\prime
}\right) $ also implies $x\in Z_{\Lambda }\left( \lambda \gamma \right) $
and $y\in Z_{\Lambda }\left( \mu ^{\prime }\gamma ^{\prime }\right)
=Z_{\Lambda }\left( \mu \gamma \right) $. Furthermore,
\begin{align*}
\sigma ^{\left( \lambda \right) }x& =\left[ x\left( d\left( \lambda \right)
,d\left( \lambda \gamma \right) \right) \right] \left[ \sigma ^{\left(
\lambda \gamma \right) }x\right] \\
& =\gamma \left[ \sigma ^{\left( \lambda \gamma \right) }x\right] \text{
(since }x\left( d\left( \lambda \right) ,d\left( \lambda \gamma \right)
\right) =\gamma \text{)} \\
& =\gamma \lbrack \sigma ^{(\mu ^{\prime }\gamma ^{\prime })}y]\text{ (since
}\sigma ^{\left( \lambda \gamma \right) }x=\sigma ^{(\mu ^{\prime }\gamma
^{\prime })}y\text{)} \\
& =\left[ y\left( d\left( \mu \right) ,d\left( \mu \gamma \right) \right)
\right] [\sigma ^{(\mu ^{\prime }\gamma ^{\prime })}y]\text{ (since }y\left(
d\left( \mu \right) ,d\left( \mu \gamma \right) \right) =\gamma \text{)} \\
& =\left[ y\left( d\left( \mu \right) ,d\left( \mu \gamma \right) \right)
\right] [\sigma ^{(\mu \gamma )}y]\text{ (since }\mu \gamma =\mu ^{\prime
}\gamma ^{\prime }) \\
& =\sigma ^{(\mu )}y
\end{align*}
and then $\left( x,m,y\right) \in Z_{\Lambda }\left( \lambda \ast _{s}\mu
\right) $, as required.
To complete the proof, we have to show $\left( x,m,y\right) \notin
Z_{\Lambda }\left( \lambda \nu \ast _{s}\mu \nu \right) $ for all $\nu \in G$.
Suppose for contradiction that there exists $\nu \in G$ such that $\left(
x,m,y\right) \in Z_{\Lambda }\left( \lambda \nu \ast _{s}\mu \nu \right) $.
In particular, $x\in Z_{\Lambda }\left( \lambda \nu \right) $. Since $x\in
Z_{\Lambda }\left( \lambda \gamma \right) $ and $x\in Z_{\Lambda }\left(
\lambda \nu \right) $, then there exists $\nu ^{\prime }\in \operatorname{Ext}\left(
\gamma ;\left\{ \nu \right\} \right) $ such that $x\in Z_{\Lambda }\left(
\lambda \gamma \nu ^{\prime }\right) $. Hence
\begin{align*}
\sigma ^{(\lambda \gamma \nu ^{\prime })}x& =\sigma ^{(\mu \gamma \nu
^{\prime })}y\text{ (since }\sigma ^{\left( \lambda \right) }x=\sigma ^{(\mu
)}y\text{)} \\
& =\sigma ^{(\mu ^{\prime }\gamma ^{\prime }\nu ^{\prime })}y\text{ (since }
\mu \gamma =\mu ^{\prime }\gamma ^{\prime }\text{),}
\end{align*}
\begin{align}
\left( \sigma ^{(\mu )}y\right) \left( 0,d\left( \gamma \nu ^{\prime
}\right) \right) & =\left( \sigma ^{\left( \lambda \right) }x\right) \left(
0,d\left( \gamma \nu ^{\prime }\right) \right) \text{ (since }\sigma
^{\left( \lambda \right) }x=\sigma ^{(\mu )}y\text{)}
\label{equ-intersection-of-Z-1} \\
& =x\left( d\left( \lambda \right) ,d\left( \lambda \gamma \nu ^{\prime
}\right) \right) \notag \\
& =\gamma \nu ^{\prime }\text{ (since }x\in Z_{\Lambda }\left( \lambda
\gamma \nu ^{\prime }\right) \text{),} \notag
\end{align}
and
\begin{align*}
y\left( 0,d\left( \mu ^{\prime }\gamma ^{\prime }\nu ^{\prime }\right)
\right) & =y\left( 0,d\left( \mu \gamma \nu ^{\prime }\right) \right) \text{
(since }\mu \gamma =\mu ^{\prime }\gamma ^{\prime }\text{)} \\
& =\mu \gamma \nu ^{\prime }\text{ (by \eqref{equ-intersection-of-Z-1})} \\
& =\mu ^{\prime }\gamma ^{\prime }\nu ^{\prime }\text{ (since }\mu \gamma
=\mu ^{\prime }\gamma ^{\prime }\text{).}
\end{align*}
Furthermore,
\begin{align*}
d\left( \lambda \gamma \nu ^{\prime }\right) -d\left( \mu ^{\prime }\gamma
^{\prime }\nu ^{\prime }\right) & =d\left( \lambda \gamma \right) -d\left(
\mu ^{\prime }\gamma ^{\prime }\right) \\
& =d\left( \lambda \gamma \right) -d\left( \mu \gamma \right) \text{ (since }
\mu \gamma =\mu ^{\prime }\gamma ^{\prime }\text{)} \\
& =d\left( \lambda \right) -d\left( \mu \right) =m\text{.}
\end{align*}
Hence $\left( x,m,y\right) \in Z_{\Lambda }\left( \lambda \gamma \nu
^{\prime }\ast _{s}\mu ^{\prime }\gamma ^{\prime }\nu ^{\prime }\right) $
for some $\nu ^{\prime }\in \operatorname{Ext}\left( \gamma ;\left\{ \nu \right\}
\right) \subseteq \operatorname{Ext}\left( \gamma ;G\right) $, which contradicts
\eqref{equ-x-in-Z-without-ext}. The conclusion follows.
\end{proof}
\begin{lemma}
\label{compact-open-bisection-U-is-in-span}Let $\left\{ Z_{\Lambda }\left(
\left. \lambda _{i}\ast _{s}\mu _{i}\right\backslash G_{i}\right) \right\}
_{i=1}^{n}$ be a finite collection of compact open bisection sets and
\begin{equation*}
U:=\bigcup_{i=1}^{n}Z_{\Lambda }\left( \left. \lambda _{i}\ast _{s}\mu
_{i}\right\backslash G_{i}\right) \text{.}
\end{equation*}
Then
\begin{equation*}
1_{U}\in \operatorname{span}_{R}\left\{ 1_{Z_{\Lambda }\left( \left. \lambda \ast
_{s}\mu \right\backslash G\right) }:\left( \lambda ,\mu \right) \in \Lambda
\ast _{s}\Lambda ,G\subseteq s\left( \lambda \right) \Lambda \right\} \text{.}
\end{equation*}
\end{lemma}
\begin{proof}
It is trivial for $n=1$. Now let $n=2$; write $\left( \lambda ,\mu ,G\right)
:=\left( \lambda _{1},\mu _{1},G_{1}\right) $ and $\left( \lambda ^{\prime
},\mu ^{\prime },G^{\prime }\right) :=\left( \lambda _{2},\mu
_{2},G_{2}\right) $, and set $F:=\Lambda ^{\min }\left( \lambda ,\lambda
^{\prime }\right) \cap \Lambda ^{\min }\left( \mu ,\mu ^{\prime }\right) $.
If $F=\emptyset $, then
\begin{equation*}
1_{U}=1_{Z_{\Lambda }\left( \left. \lambda \ast _{s}\mu \right\backslash
G\right) }+1_{Z_{\Lambda }\left( \left. \lambda ^{\prime }\ast _{s}\mu
^{\prime }\right\backslash G^{\prime }\right) }\text{.}
\end{equation*}
Otherwise, by Proposition \ref{intersection-of-Z}, we have
\begin{equation*}
1_{U}=1_{Z_{\Lambda }\left( \left. \lambda \ast _{s}\mu \right\backslash
G\right) }+1_{Z_{\Lambda }\left( \left. \lambda ^{\prime }\ast
_{s}\mu^{\prime}\right\backslash G^{\prime }\right) }-\sum\limits_{(\gamma
,\gamma ^{\prime })\in F}1_{Z_{\gamma ,\gamma ^{\prime }}}
\end{equation*}
where $Z_{\gamma ,\gamma ^{\prime }}:=Z_{\Lambda }\left( \left. \lambda
\gamma \ast _{s}\mu ^{\prime }\gamma ^{\prime }\right\backslash \left[
\operatorname{Ext}\left( \gamma ;G\right) \cup \operatorname{Ext}\left( \gamma ^{\prime
};G^{\prime }\right) \right] \right) $, as required. For $n\geq 3$, by using
the inclusion-exclusion principle and De Morgan's laws, $1_{U}$ can be
written as a sum of elements of the form $1_{Z_{\Lambda }\left( \left.
\lambda \ast _{s}\mu \right\backslash G\right) }$.
\end{proof}
\begin{proof}[Proof of Proposition \protect\ref{KP-is-isomorphic-to-Steinberg-algebras}]
Define $T_{\lambda }:=1_{Z_{\Lambda }\left( \lambda \ast _{s}s\left( \lambda
\right) \right) }$. Then by \cite[Theorem 6.13]{FMY05} (or \cite[Example
7.1]{Y07}), $\left\{ T_{\lambda },T_{\mu ^{\ast }}:\lambda ,\mu \in \Lambda
\right\} $ is a Kumjian-Pask $\Lambda $-family in $A_{R}\left( \mathcal{G}
_{\Lambda }\right) $. Hence, by Theorem~\ref{universal-KP-family}(a), there
exists a homomorphism $\pi _{T}:{\normalsize \operatorname{KP}}_{R}\left( \Lambda
\right) \rightarrow A_{R}\left( \mathcal{G}_{\Lambda }\right) $ such that
$\pi _{T}\left( s_{\lambda }\right) =T_{\lambda }$ and $\pi _{T}\left( s_{\mu
^{\ast }}\right) =T_{\mu ^{\ast }}$ for $\lambda ,\mu \in \Lambda $.
To see that $\pi _{T}$ is injective, first we show that $\pi _{T}$ is
graded. Take $\lambda ,\mu \in \Lambda $. Then $s_{\lambda }s_{\mu ^{\ast
}}\in {\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{d\left( \lambda
\right) -d\left( \mu \right) }$ and
\begin{equation*}
\pi _{T}\left( s_{\lambda }s_{\mu ^{\ast }}\right) =1_{Z_{\Lambda }\left(
\lambda \ast _{s}\mu \right) }=1_{\{\left( \lambda z,d\left( \lambda \right)
-d\left( \mu \right) ,\mu z\right) :z\in s\left( \lambda \right) \partial
\Lambda \}}\in A_{R}\left( \mathcal{G}_{\Lambda }\right) _{d\left( \lambda
\right) -d\left( \mu \right) }\text{.}
\end{equation*}
Since for every $n\in \mathbb{Z}^{k}$, ${\normalsize \operatorname{KP}}_{R}\left(
\Lambda \right) _{n}$ is spanned by elements of the form $s_{\lambda }s_{\mu
^{\ast }}$ (Theorem \ref{universal-KP-family}.(c)), we get $\pi _{T}\left(
{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) _{n}\right) \subseteq
A_{R}\left( \mathcal{G}_{\Lambda }\right) _{n}$ for every $n\in \mathbb{Z}
^{k}$, so $\pi _{T}$ is graded. Since $\pi _{T}\left( rs_{v}\right)
=r1_{Z_{\Lambda }\left( v\ast _{s}v\right) }\neq 0$ for all $r\in \left.
R\right\backslash \left\{ 0\right\} $ and $v\in \Lambda ^{0}$, and $\pi _{T}$
is graded, Theorem \ref{the-graded-uniqueness-theorem} implies that $\pi
_{T}$ is injective, as required.
Finally we show the surjectivity of $\pi _{T}$. Take $f\in A_{R}\left(
\mathcal{G}_{\Lambda }\right) $. By \cite[Lemma 2.2]{CE-M15}, $f$ can be
written as $\sum_{U\in F}a_{U}1_{U}$ where $a_{U}\in R$, each $U$ is of the
form $\bigcup_{i=1}^{n}Z_{\Lambda }\left( \left. \lambda _{i}\ast _{s}\mu
_{i}\right\backslash G_{i}\right) $ for some $n\in \mathbb{N}$, and $F$ is a
finite set of mutually disjoint elements. Hence, to show $f\in
\operatorname{im}\left( \pi _{T}\right) $, it suffices to show
\begin{equation*}
1_{U}\in \operatorname{im}\left( \pi _{T}\right)
\end{equation*}
where $U:=\bigcup_{i=1}^{n}Z_{\Lambda }\left( \left. \lambda _{i}\ast
_{s}\mu _{i}\right\backslash G_{i}\right) $ for some $n\in \mathbb{N}$ and a
collection $\left\{ Z_{\Lambda }\left( \left. \lambda _{i}\ast _{s}\mu
_{i}\right\backslash G_{i}\right) \right\} _{i=1}^{n}$. By Lemma
\ref{compact-open-bisection-U-is-in-span}, $1_{U}$ can be written as a sum
of elements of the form $1_{Z_{\Lambda }\left( \left. \lambda \ast _{s}\mu
\right\backslash G\right) }$. On the other hand, for $\left( \lambda ,\mu
\right) \in \Lambda \ast _{s}\Lambda $ and finite $G\subseteq s\left(
\lambda \right) \Lambda $, we have
\begin{align}
T_{\lambda }\big(\prod_{\nu \in G}\left( T_{s\left( \lambda \right) }-T_{\nu
}T_{\nu ^{\ast }}\right) \big)T_{\mu ^{\ast }}& =1_{Z_{\Lambda }\left(
\lambda \ast _{s}s\left( \lambda \right) \right) }\big(\prod_{\nu \in
G}\left( 1_{Z_{\Lambda }\left( s\left( \lambda \right) \ast _{s}s\left(
\lambda \right) \right) }-1_{Z_{\Lambda }\left( \nu \ast _{s}\nu \right)
}\right) \big)1_{Z_{\Lambda }\left( s\left( \mu \right) \ast _{s}\mu \right)
} \label{equ-1_Z(lambda*mu-G)} \\
& =1_{Z_{\Lambda }\left( \lambda \ast _{s}s\left( \lambda \right) \right) }
\big(\prod_{\nu \in G}\left( 1_{Z_{\Lambda }\left( \left. s\left( \lambda
\right) \ast _{s}s\left( \lambda \right) \right\backslash \left\{ \nu
\right\} \right) }\right) \big)1_{Z_{\Lambda }\left( s\left( \mu \right)
\ast _{s}\mu \right) } \notag \\
& =1_{Z_{\Lambda }\left( \lambda \ast _{s}s\left( \lambda \right) \right) }
\big(1_{\bigcap_{\nu \in G}Z_{\Lambda }\left( \left. s\left( \lambda \right)
\ast _{s}s\left( \lambda \right) \right\backslash \left\{ \nu \right\}
\right) }\big)1_{Z_{\Lambda }\left( s\left( \mu \right) \ast _{s}\mu \right)
} \notag \\
& =1_{Z_{\Lambda }\left( \lambda \ast _{s}s\left( \lambda \right) \right) }
\big(1_{Z_{\Lambda }\left( \left. s\left( \lambda \right) \ast _{s}s\left(
\lambda \right) \right\backslash G\right) }\big)1_{Z_{\Lambda }\left(
s\left( \mu \right) \ast _{s}\mu \right) } \notag \\
& =1_{Z_{\Lambda }\left( \left. \lambda \ast _{s}\mu \right\backslash
G\right) } \notag
\end{align}
since $s\left( \lambda \right) =s\left( \mu \right) $. Hence, $1_{Z_{\Lambda
}\left( \left. \lambda \ast _{s}\mu \right\backslash G\right) }$ belongs to
$\operatorname{im}\left( \pi _{T}\right) $ and then so does $1_{U}$, as required.
Therefore, $\pi _{T}$ is surjective and hence an isomorphism.
\end{proof}
\begin{remark}
\label{remark-directed-graph-and-locally-convex}Finitely aligned $k$-graphs
include $1$-graphs and row-finite $k$-graphs with no sources. Further, in
these cases, the boundary path groupoid $\mathcal{G}_{\Lambda }$ of Example
\ref{groupouid-Glambda} coincides with $\mathcal{G}_{E}$ of \cite{CS15} and
$\mathcal{G}_{\Lambda }$ of \cite{CFST14}. Thus, we have generalised Example
3.2 of \cite{CS15} and Proposition 4.3 of \cite{CFST14}. For locally convex
row-finite $k$-graphs, our construction gives a Steinberg algebra model of
the Kumjian-Pask algebras of \cite{CFaH14}.
\end{remark}
\section{Aperiodic higher-rank graphs and effective groupoids}
\label{Section-aperiodic-effective}In this section and Section
\ref{Section-cofinal-minimal}, we investigate the relationship between a
$k$-graph $\Lambda $ and its boundary-path groupoid $\mathcal{G}_{\Lambda }$ as
constructed in Example \ref{groupouid-Glambda}. We expect the Cuntz-Krieger
uniqueness theorem (Theorem \ref{the-CK-uniqueness-theorem}) to apply only
to \emph{aperiodic} finitely aligned $k$-graphs (definition below). On the
other hand, \emph{effective} groupoids (definition below) are needed in the
hypothesis of the Cuntz-Krieger uniqueness theorem for Steinberg algebras
(Theorem \ref{the-CK-uniqueness-theorem-for-Steinberg-algebras}). In this
section, our main result is Proposition \ref{aperiodic-iff-effective} which
says that a finitely aligned $k$-graph $\Lambda $ is aperiodic if and only
if the boundary-path groupoid $\mathcal{G}_{\Lambda }$ is effective.
We say a boundary path $x$ is \emph{aperiodic }if for all $\lambda ,\mu \in
\Lambda r\left( x\right) $, $\lambda \neq \mu $ implies $\lambda x\neq \mu
x$. We say a finitely aligned $k$-graph $\Lambda $ is \emph{aperiodic }if for
each $v\in \Lambda ^{0}$, there exists an aperiodic boundary path $x$ with
$r\left( x\right) =v$.
\begin{remark}
There are several equivalent ways to define the aperiodicity condition for
finitely aligned $k$-graphs (see \cite{FMY05,LS10,RSY04,Sh12}); these
definitions are equivalent by \cite[Proposition 3.6]{LS10} and
\cite[Proposition 2.11]{Sh12}. The definition we use is called Condition
(B$^{\prime }$) in \cite[Remark 7.3]{FMY05} and \cite[Definition
2.1.(ii)]{Sh12}.
\end{remark}
\begin{remark}
For $1$-graphs, the aperiodicity condition is known as Condition (L), which,
using our conventions, says that every cycle has an entry (see
\cite{A15,AA08,BPRS00,KPR98,CBMS,T07,T11}).
\end{remark}
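As a standard illustration of how Condition (L) can fail, consider the
$1$-graph $\Lambda $ with a single vertex $v$ and a single loop edge $e$;
here the cycle $e$ has no entry. The only boundary path is $x=eee\cdots $,
and $ex=vx=x$ with $e\neq v$, so no boundary path with range $v$ is
aperiodic and $\Lambda $ is not aperiodic.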
Next let $\mathcal{G}$ be a topological groupoid. Define the \emph{isotropy
groupoid}\ $\operatorname{Iso}\left( \mathcal{G}\right) $ of $\mathcal{G}$ by
\begin{equation*}
\operatorname{Iso}\left( \mathcal{G}\right) :=\left\{ a\in \mathcal{G}:s\left(
a\right) =r\left( a\right) \right\} \text{.}
\end{equation*}
We then say $\mathcal{G}$ is \emph{effective} if the interior of
$\operatorname{Iso}\left( \mathcal{G}\right) $ is $\mathcal{G}^{\left( 0\right) }$.
See \cite[Lemma 3.1]{BCFS14} for some equivalent characterisations.
\begin{proposition}
\label{aperiodic-iff-effective}Let $\Lambda $ be a finitely aligned
$k$-graph. Then $\Lambda $ is aperiodic if and only if the boundary-path
groupoid $\mathcal{G}_{\Lambda }$ is effective.
\end{proposition}
\begin{proof}
$\left( \Rightarrow \right) $ First suppose that $\Lambda $ is aperiodic.
Trivially, $\mathcal{G}_{\Lambda }^{\left( 0\right) }$ is contained in the
interior of $\operatorname{Iso}\left( \mathcal{G}_{\Lambda }\right) $. Now we show
the reverse inclusion. Take an interior point $a$ of $\operatorname{Iso}\left(
\mathcal{G}_{\Lambda }\right) $. Then there exists $Z_{\Lambda }\left( \left.
\lambda \ast _{s}\mu \right\backslash G\right) $ such that $Z_{\Lambda
}\left( \left. \lambda \ast _{s}\mu \right\backslash G\right) \subseteq
\operatorname{Iso}\left( \mathcal{G}_{\Lambda }\right) $ and $a\in Z_{\Lambda
}\left( \left. \lambda \ast _{s}\mu \right\backslash G\right) $. We show
$\lambda =\mu $.
Note that since $a\in Z_{\Lambda }\left( \left. \lambda \ast _{s}\mu
\right\backslash G\right) $, the set $Z_{\Lambda }\left( \left. \lambda \ast
_{s}\mu \right\backslash G\right) $ is not empty and, by Remark
\ref{remark-of-groupouid-Glambda}.(ii), $G$ is not exhaustive. Hence, there
exists $\nu \in s\left( \lambda \right) \Lambda $ such that $\Lambda ^{\min
}\left( \nu ,\gamma \right) =\emptyset $ for all $\gamma \in G$. Because
$\Lambda $ is aperiodic, there exists an aperiodic boundary path $x\in
s\left( \nu \right) \partial \Lambda $.
We claim that the boundary path $\nu x$ is also aperiodic. Suppose for
contradiction that there exist $\lambda ^{\prime },\mu ^{\prime }\in \Lambda
r\left( \nu x\right) $ such that $\lambda ^{\prime }\neq \mu ^{\prime }$ and
\begin{equation}
\lambda ^{\prime }\left( \nu x\right) =\mu ^{\prime }\left( \nu x\right) .
\label{equ-lambdanux-equal-munux}
\end{equation}
Since $\lambda ^{\prime },\mu ^{\prime },\nu \in \Lambda $, by the unique
factorisation property, $\lambda ^{\prime }\neq \mu ^{\prime }$ implies
$\lambda ^{\prime }\nu \neq \mu ^{\prime }\nu $. Now because $x$ is
aperiodic, $\lambda ^{\prime }\nu \neq \mu ^{\prime }\nu $ implies $\lambda
^{\prime }\nu x\neq \mu ^{\prime }\nu x$, which contradicts
\eqref{equ-lambdanux-equal-munux}. Hence, $\nu x$ is aperiodic, as claimed.
Since $\lambda \nu x\in \left. Z_{\Lambda }\left( \lambda \right)
\right\backslash Z_{\Lambda }\left( \lambda \gamma \right) $ and $\mu \nu
x\in \left. Z_{\Lambda }\left( \mu \right) \right\backslash Z_{\Lambda
}\left( \mu \gamma \right) $ for $\gamma \in G$, we have
\begin{equation*}
\left( \lambda \nu x,d\left( \lambda \right) -d\left( \mu \right) ,\mu \nu
x\right) \in Z_{\Lambda }\left( \left. \lambda \ast _{s}\mu \right\backslash
G\right) \text{.}
\end{equation*}
Since $Z_{\Lambda }\left( \left. \lambda \ast _{s}\mu \right\backslash
G\right) \subseteq \operatorname{Iso}\left( \mathcal{G}_{\Lambda }\right) $, it
follows that $\lambda \nu x=\mu \nu x$. Since $\nu x$ is aperiodic,
$\lambda \left( \nu x\right) =\mu \left( \nu x\right) $ implies $\lambda
=\mu $. Therefore, $\mathcal{G}_{\Lambda }$ is effective.
$\left( \Leftarrow \right) $ Now suppose that $\Lambda $ is not aperiodic.
Then there exists $v\in \Lambda ^{0}$ such that no boundary path $x\in
v\partial \Lambda $ is aperiodic.
\begin{claim}
\label{xGx-not-equal-x}For $x\in v\partial \Lambda $, we have $x\mathcal{G}
_{\Lambda }x\neq \left\{ x\right\} $.
\end{claim}
\begin{proof}[Proof of Claim~\protect\ref{xGx-not-equal-x}]
Take $x\in v\partial \Lambda $. Since $x$ is not aperiodic, there exist
$\lambda ,\mu \in \Lambda r\left( x\right) $ such that $\lambda \neq \mu $
and $\lambda x=\mu x$. If $d\left( \lambda \right) =d\left( \mu \right) $,
then
\begin{equation*}
\lambda =\left( \lambda x\right) \left( 0,d\left( \lambda \right) \right)
=\left( \mu x\right) \left( 0,d\left( \mu \right) \right) =\mu \text{,}
\end{equation*}
which contradicts $\lambda \neq \mu $.
So suppose $d\left( \lambda \right) \neq d\left( \mu \right) $. Note that
for $1\leq i\leq k$ such that $d\left( \lambda \right) _{i}\neq d\left( \mu
\right) _{i}$, we have $d\left( x\right) _{i}=\infty $ (since $\lambda x=\mu
x$). Hence
\begin{equation*}
\left( \left( d\left( \lambda \right) \vee d\left( \mu \right) \right)
-d\left( \lambda \right) \right) \vee \left( \left( d\left( \lambda \right)
\vee d\left( \mu \right) \right) -d\left( \mu \right) \right) \leq d\left(
x\right) \text{.}
\end{equation*}
Write $p:=\left( d\left( \lambda \right) \vee d\left( \mu \right) \right)
-d\left( \lambda \right) $ and $q:=\left( d\left( \lambda \right) \vee
d\left( \mu \right) \right) -d\left( \mu \right) $. Then
\begin{align*}
\sigma ^{p}x &=\sigma ^{p}\left( \sigma ^{d\left( \lambda \right) }\left(
\lambda x\right) \right) =\sigma ^{d\left( \lambda \right) \vee d\left( \mu
\right) }\left( \lambda x\right) \\
&=\sigma ^{d\left( \lambda \right) \vee d\left( \mu \right) }\left( \mu
x\right) \text{ (since }\lambda x=\mu x\text{)} \\
&=\sigma ^{q}\left( \sigma ^{d\left( \mu \right) }\left( \mu x\right)
\right) =\sigma ^{q}x
\end{align*}
and $p\neq q$ (since $d\left( \lambda \right) \neq d\left( \mu \right) $).
This implies $\left( x,p-q,x\right) \in \left. \mathcal{G}_{\Lambda
}\right\backslash \mathcal{G}_{\Lambda }^{\left( 0\right) }$ and $x\mathcal{G
}_{\Lambda }x\neq \left\{ x\right\} $. \hfil\penalty100\hbox{}\nobreak\hfill
\hbox{\qedsymbol\ Claim~\ref{xGx-not-equal-x}} \renewcommand\qed{}
\end{proof}
Since $x\mathcal{G}_{\Lambda }x\neq \left\{ x\right\} $ for all $x\in
v\partial \Lambda $, we have
\begin{equation*}
Z_{\Lambda }\left( v\right) \cap \{z\in \mathcal{G}_{\Lambda }^{\left(
0\right) }:z\mathcal{G}_{\Lambda }z=\left\{ z\right\} \}=\emptyset
\end{equation*}
and $\{z\in \mathcal{G}_{\Lambda }^{\left( 0\right) }:z\mathcal{G}_{\Lambda
}z=\left\{ z\right\} \}$ is not dense in $\mathcal{G}_{\Lambda }^{\left(
0\right) }$. Since $\mathcal{G}_{\Lambda }$ is locally compact,
second-countable, Hausdorff and \'{e}tale, by \cite[Proposition
3.6.(b)]{R08}, $\mathcal{G}_{\Lambda }$ is not effective, as required.
\end{proof}
\begin{remark}
In fact, for a finitely aligned $k$-graph $\Lambda $, the following five
conditions are equivalent:
\begin{enumerate}
\item[(a)] $\mathcal{G}_{\Lambda }$ is effective.
\item[(b)] $\mathcal{G}_{\Lambda }$ is \emph{topologically principal } in
that the set of units with trivial isotropy is dense in $\mathcal{G}^{(0)}$.
\item[(c)] $\mathcal{G}_{\Lambda }$ satisfies Condition (1) of Theorem 5.1
of \cite{RSWY12}.
\item[(d)] $\Lambda $ has \emph{no local periodicity} as defined in
\cite{Sh12}.
\item[(e)] $\Lambda $ is aperiodic.
\end{enumerate}
In \cite[Proposition 3.6]{R08}, Renault shows that a locally compact,
second-countable, Hausdorff, \'{e}tale groupoid $\mathcal{G}$ is effective
if and only if it is topologically principal. Since the boundary-path
groupoid $\mathcal{G}_{\Lambda }$ is locally compact, second-countable,
Hausdorff and \'{e}tale, (a)$\Leftrightarrow $(b). Meanwhile, in
\cite[Theorem 5.2]{Y07}, Yeend proves (b)$\Leftrightarrow $(c). [Note that
Yeend uses the notion \textquotedblleft \emph{essentially
free}\textquotedblright\ instead of \textquotedblleft topologically
principal\textquotedblright .] Lemma 5.6 of \cite{RSWY12} gives
(c)$\Leftrightarrow $(d). Finally, (d)$\Leftrightarrow $(e) follows from
\cite[Proposition 2.11]{Sh12}.
\end{remark}
\section{Cofinal higher-rank graphs and minimal groupoids}
\label{Section-cofinal-minimal} In this section, we show that a finitely
aligned $k$-graph $\Lambda $ is cofinal if and only if the boundary-path
groupoid $\mathcal{G}_{\Lambda }$ is minimal (Proposition
\ref{cofinal-iff-minimal}). Later, we use this relationship to study the
simplicity of Kumjian-Pask algebras in Section
\ref{Section-Basic-Simpllicity}.
Recall from \cite[Definition 8.4]{Si06(G)} that we say a $k$-graph $\Lambda $
is \emph{cofinal} if for all $v\in \Lambda ^{0}$ and $x\in \partial \Lambda
$, there exists $n\leq d\left( x\right) $ such that $v\Lambda x\left(
n\right) \neq \emptyset $.
In a groupoid $\mathcal{G}$, a subset $U\subseteq \mathcal{G}^{\left(
0\right) }$ is called \emph{invariant}\ if $s\left( a\right) \in U$ implies
$r\left( a\right) \in U$ for all $a\in \mathcal{G}$. Note that $U$ is
invariant if and only if $\mathcal{G}^{\left( 0\right) }\backslash U$ is
invariant. We then say a topological groupoid $\mathcal{G}$ is \emph{minimal}
if $\mathcal{G}^{\left( 0\right) }$ has no nontrivial open invariant
subsets. Equivalently, $\mathcal{G}$ is minimal if for each $x\in \mathcal{G}
^{\left( 0\right) }$, the orbit $\left[ x\right] :=s\left( x\mathcal{G}
\right) $ is dense in $\mathcal{G}^{\left( 0\right) }$.
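For example, if $\Lambda $ is the $1$-graph with a single vertex $v$ and a
single loop edge $e$, then $\partial \Lambda $ consists of the single path
$x=eee\cdots $, so the only open invariant subsets of $\mathcal{G}_{\Lambda
}^{\left( 0\right) }$ are $\emptyset $ and $\left\{ x\right\} $, and
$\mathcal{G}_{\Lambda }$ is trivially minimal; correspondingly, this
$\Lambda $ is cofinal (take $n=0$).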
\begin{proposition}
\label{cofinal-iff-minimal}Let $\Lambda $ be a finitely aligned $k$-graph.
Then $\Lambda $ is cofinal if and only if the boundary-path groupoid
\mathcal{G}_{\Lambda }$ is minimal.
\end{proposition}
\begin{proof}
$\left( \Rightarrow \right) $ Suppose that $\Lambda $ is cofinal. Take $x\in
\mathcal{G}_{\Lambda }^{\left( 0\right) }$. We have to show that $\left[ x
\right] $ is dense in $\mathcal{G}_{\Lambda }^{\left( 0\right) }$. Take a
nonempty open set $Z_{\Lambda }\left( \left. \lambda \right\backslash
G\right) $; we claim that $Z_{\Lambda }\left( \left. \lambda
\right\backslash G\right) \cap \left[ x\right] \neq \emptyset $. Since
Z_{\Lambda }\left( \left. \lambda \right\backslash G\right) $ is nonempty,
we have that $G$ is not exhaustive (see Remark
\ref{remark-of-groupouid-Glambda}.(i)). Then there exists $\nu \in s\left(
\lambda \right) \Lambda $ such that $\Lambda ^{\min }\left( \nu ,\gamma
\right) =\emptyset $ for $\gamma \in G$. Now consider the vertex $s\left(
\lambda \nu \right) $ and the boundary path $x$. Since $\Lambda $ is
cofinal, there exists $n\leq d\left( x\right) $ such that $s\left(
\lambda \nu \right) \Lambda x\left( n\right) \neq \emptyset $. Take $\mu \in
boundary path, so is $\sigma ^{n}x$. Hence,
\begin{equation*}
y:=\lambda \nu \mu \left[ \sigma ^{n}x\right]
\end{equation*}
is also a boundary path. It is clear that $y\in Z_{\Lambda }\left( \lambda
\right) $ and since $\Lambda ^{\min }\left( \nu ,\gamma \right) =\emptyset $
for $\gamma \in G$, we have $y\notin Z_{\Lambda }\left( \lambda \gamma
\right) $ for $\gamma \in G$. Hence, $y\in Z_{\Lambda }\left( \left. \lambda
\right\backslash G\right) $.
On the other hand, since $y=\lambda \nu \mu \left[ \sigma ^{n}x\right] $,
we have $\left( x,n-d\left( \lambda \nu \mu \right) ,y\right) \in \mathcal{G}
_{\Lambda }$ and $y\in \left[ x\right] $. Therefore, $Z_{\Lambda }\left(
\left. \lambda \right\backslash G\right) \cap \left[ x\right] \neq \emptyset
$. Thus, $\left[ x\right] $ is dense in $\mathcal{G}_{\Lambda }^{\left(
0\right) }$ and $\mathcal{G}_{\Lambda }$ is minimal.
$\left( \Leftarrow \right) $ Suppose that $\Lambda $ is not cofinal. Then
there exist $v\in \Lambda ^{0}$ and $x\in \partial \Lambda $ such that for
all $n\leq d\left( x\right) ,$ we have $v\Lambda x\left( n\right) =\emptyset
$. We claim $Z_{\Lambda }\left( v\right) \cap \left[ x\right] =\emptyset $.
Suppose for contradiction that $Z_{\Lambda }\left( v\right) \cap \left[ x
\right] \neq \emptyset $. Take $y\in Z_{\Lambda }\left( v\right) \cap \left[
x\right] $. Because $y\in \left[ x\right] $, there exist $p,q\in \mathbb{N}
^{k}$ such that $\left( x,p-q,y\right) \in \mathcal{G}_{\Lambda }$. This
implies $\sigma ^{p}x=\sigma ^{q}y$. Since $y\in Z_{\Lambda }\left(
v\right) $, we have $r\left( y\right) =v$. Hence, $\sigma ^{p}x=\sigma ^{q}y$
and $r\left( y\right) =v$ imply that $y\left( 0,q\right) $ belongs to
$v\Lambda x\left( p\right) $, which contradicts $v\Lambda x\left( n\right)
=\emptyset $ for all $n\leq d\left( x\right) $. Therefore, $Z_{\Lambda
}\left( v\right) \cap \left[ x\right] =\emptyset $, as claimed,
and $\left[ x\right] $ is not dense in $\mathcal{G}_{\Lambda }^{\left(
0\right) }$. Thus, $\mathcal{G}_{\Lambda }$ is not minimal.
\end{proof}
\section{The Cuntz-Krieger uniqueness theorem}
\label{Section-the-CK-uniqueness-theorem}Throughout this section, $\Lambda $
is a finitely aligned $k$-graph and $R$ is a commutative ring with identity
$1$.
\begin{theorem}[The Cuntz-Krieger uniqueness theorem]
\label{the-CK-uniqueness-theorem}Let $\Lambda $ be an aperiodic finitely
aligned $k$-graph and let $R$ be a commutative ring with $1$. Suppose that
$\pi :{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) \rightarrow A$ is an
$R$-algebra homomorphism such that $\pi \left( rs_{v}\right) \neq 0$ for all
$r\in \left. R\right\backslash \left\{ 0\right\} $ and $v\in \Lambda ^{0}$.
Then $\pi $ is injective.
\end{theorem}
We prove Theorem \ref{the-CK-uniqueness-theorem} using the Cuntz-Krieger
uniqueness theorem for Steinberg algebras \cite[Theorem 3.2]{CE-M15}. First
we establish an alternative formulation of that theorem which will be useful.
\begin{theorem}
\label{the-CK-uniqueness-theorem-for-Steinberg-algebras}Let $\mathcal{G}$ be
an effective, Hausdorff, ample groupoid and let $R$ be a commutative ring
with $1$. Let $\mathcal{B}$ be a basis of compact open bisections for the
topology on $\mathcal{G}$. Let $\phi :A_{R}\left( \mathcal{G}\right)
\rightarrow A$
be an $R$-algebra homomorphism. Suppose that $\ker \left( \phi \right) \neq
0 $. Then there exist $r\in \left. R\right\backslash \left\{ 0\right\} $ and
$B\in \mathcal{B}$ such that $B\subseteq \mathcal{G}^{\left( 0\right) }$ and
$\phi \left( r1_{B}\right) =0$.
\end{theorem}
\begin{proof}
Since $\ker \left( \phi \right) \neq 0$, by \cite[Theorem 3.2]{CE-M15},
there exist $r\in \left. R\right\backslash \left\{ 0\right\} $ and a
nonempty compact open subset $K\subseteq \mathcal{G}^{\left( 0\right) }$
such that $\phi \left( r1_{K}\right) =0$. Since $K$ is open, there is
$B\in \mathcal{B}$ such that $B\subseteq K$. Hence, $B\subseteq \mathcal{G}
^{\left( 0\right) }$ and
\begin{equation*}
0=\phi \left( r1_{K}\right) \phi \left( 1_{B}\right) =\phi \left(
r1_{KB}\right) =\phi \left( r1_{K\cap B}\right) =\phi \left( r1_{B}\right)
\text{.}
\end{equation*}
\end{proof}
\begin{proof}[Proof of Theorem \protect\ref{the-CK-uniqueness-theorem}]
First note that $\mathcal{G}_{\Lambda }$ is a Hausdorff and ample groupoid
that is effective by Proposition~\ref{aperiodic-iff-effective}. Thus it
satisfies the hypothesis of Theorem
\ref{the-CK-uniqueness-theorem-for-Steinberg-algebras}. Now recall the
isomorphism $\pi _{T}:{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right)
\rightarrow A_{R}\left( \mathcal{G}_{\Lambda }\right) $ as in Proposition
\ref{KP-is-isomorphic-to-Steinberg-algebras}. Then $\pi _{T}\left(
s_{\lambda }\right) =1_{Z_{\Lambda }\left( \lambda \ast _{s}s\left( \lambda
\right) \right) }$ and $\pi _{T}\left( s_{\mu ^{\ast }}\right)
=1_{Z_{\Lambda }\left( s\left( \mu \right) \ast _{s}\mu \right) }$ for
$\lambda ,\mu \in \Lambda $. Define $\phi :=\pi \circ \pi _{T}^{-1}$. To show
the injectivity of $\pi $, it suffices to show that $\phi $ is injective.
Suppose for contradiction that $\phi $ is not injective. By Theorem
\ref{the-CK-uniqueness-theorem-for-Steinberg-algebras}, there exist $r\in \left.
R\right\backslash \left\{ 0\right\} $ and $Z_{\Lambda }\left( \left. \lambda
\right\backslash G\right) $ such that $\phi \left( r1_{Z_{\Lambda }\left(
\left. \lambda \right\backslash G\right) }\right) =0$. Since $1_{Z_{\Lambda
}\left( \left. \lambda \right\backslash G\right) }$ can be identified with
$1_{Z_{\Lambda }\left( \left. \lambda \ast _{s}\lambda \right\backslash
G\right) }$ (Remark \ref{remark-of-groupouid-Glambda}.(i)), by following
the argument of \eqref{equ-1_Z(lambda*mu-G)}, we get
\begin{equation*}
\phi \left( r1_{Z_{\Lambda }\left( \left. \lambda \right\backslash G\right)
}\right) =\pi \big(rs_{\lambda }\big(\prod_{\nu \in G}\left( s_{s\left(
\lambda \right) }-s_{\nu }s_{\nu ^{\ast }}\right) \big)s_{\lambda ^{\ast }
\big)
\end{equation*}
and then
\begin{equation}
\pi \big(rs_{\lambda }\big(\prod_{\nu \in G}\left( s_{s\left( \lambda
\right) }-s_{\nu }s_{\nu ^{\ast }}\right) \big)s_{\lambda ^{\ast }}\big)=0
\text{.} \label{equ-equal-0-the-CK-uniqueness-thm}
\end{equation}
On the other hand, since $\pi \left( rs_{v}\right) \neq 0$ for all $r\in
\left. R\right\backslash \left\{ 0\right\} $ and $v\in \Lambda ^{0}$, and $G$
is finite and non-exhaustive, by Proposition \ref{properties-of-KP}.(d),
\begin{equation*}
\pi \big(rs_{\lambda }\big(\prod_{\nu \in G}\left( s_{s\left( \lambda
\right) }-s_{\nu }s_{\nu ^{\ast }}\right) \big)s_{\lambda ^{\ast }}\big)\neq
0\text{,}
\end{equation*}
which contradicts \eqref{equ-equal-0-the-CK-uniqueness-thm}. The conclusion
follows.
\end{proof}
One application of Theorem \ref{the-CK-uniqueness-theorem} is:
\begin{corollary}
\label{injectivity-of-the-boundary-path-representation}Let $\Lambda $ be a
finitely aligned $k$-graph and $R$ be a commutative ring with $1$. Then
$\Lambda $ is aperiodic if and only if the boundary-path representation $\pi
_{S}:{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) \rightarrow
\operatorname{End}\left( \mathbb{F}_{R}\left( \partial \Lambda \right) \right) $ is
injective.
\end{corollary}
To show Corollary \ref{injectivity-of-the-boundary-path-representation}, we
establish some results and notation.
Following \cite[Definition 2.3]{Sh12}, for a finitely aligned $k$-graph
$\Lambda $, we say $\Lambda $ has \emph{no local periodicity}\ if for every
$v\in \Lambda ^{0}$ and every $n\neq m\in \mathbb{N}^{k}$, there exists $x\in
v\partial \Lambda $ such that either $d\left( x\right) \ngeq n\vee m$ or
$\sigma ^{n}x\neq \sigma ^{m}x$. If this condition fails at $v\in
\Lambda ^{0}$, then there are $n\neq m\in \mathbb{N}^{k}$ such that $\sigma
^{n}x=\sigma ^{m}x$ for all $x\in v\partial \Lambda $. In this case, we say
$\Lambda $ has \emph{local periodicity}\ $n,m$ \emph{at }$v\in \Lambda ^{0}$.
\begin{lemma}[{\protect\cite[Lemma 2.9]{Sh12}}]
\label{lemma-has-local-periodicity}Let $\Lambda $ be a finitely aligned $k$-graph
which has local periodicity $n,m$ at $v\in \Lambda ^{0}$. Then $
d\left( x\right) \geq n\vee m$ and $\sigma ^{n}x=\sigma ^{m}x$ for every $
x\in v\partial \Lambda $. Fix $x\in v\partial \Lambda $ and set $\mu
=x\left( 0,m\right) $, $\alpha =x\left( m,m\vee n\right) $, and $\nu =\left(
\mu \alpha \right) \left( 0,n\right) $. Then $\mu \alpha y=\nu \alpha y$ for
every $y\in s\left( \alpha \right) \partial \Lambda $.
\end{lemma}
\begin{proof}[Proof of Corollary \protect\ref
{injectivity-of-the-boundary-path-representation}]
$\left( \Rightarrow \right) $ Suppose that $\Lambda $ is aperiodic. By
Proposition \ref{the-boundary-path-representation}, we have $\pi _{S}\left(
rs_{v}\right) \neq 0$ for all $r\in \left. R\right\backslash \left\{
0\right\} $ and $v\in \Lambda ^{0}$. Since $\Lambda $ is aperiodic, then by
Theorem \ref{the-CK-uniqueness-theorem}, $\pi _{S}$ is injective.
$\left( \Leftarrow \right) $ Suppose that $\Lambda $ is not aperiodic. We
are following the argument of \cite[Lemma 5.9]{ACaHR13}. Since $\Lambda $ is
not aperiodic, by \cite[Proposition 2.11]{Sh12}, there exist $v\in \Lambda
^{0}$ and $n\neq m\in \mathbb{N}^{k}$ such that $\Lambda $ has local
periodicity $n,m$ at $v\in \Lambda ^{0}$. Let $\mu ,\nu ,\alpha $ be as in
Lemma \ref{lemma-has-local-periodicity} and define $a:=s_{\mu \alpha
}s_{\left( \mu \alpha \right) ^{\ast }}-s_{\nu \alpha }s_{\left( \mu \alpha
\right) ^{\ast }}$. We claim that $a\in \left. \ker \left( \pi _{S}\right)
\right\backslash \left\{ 0\right\} $.
First we show that $a\neq 0$. Suppose for contradiction that $a=0$. Then
s_{\mu \alpha }s_{\left( \mu \alpha \right) ^{\ast }}=s_{\nu \alpha
}s_{\left( \mu \alpha \right) ^{\ast }}$. Note that $d\left( s_{\mu \alpha
}s_{\left( \mu \alpha \right) ^{\ast }}\right) =d\left( \mu \alpha \right)
-d\left( \mu \alpha \right) =0$ and
\begin{equation*}
d\left( s_{\nu \alpha }s_{\left( \mu \alpha \right) ^{\ast }}\right)
=d\left( \nu \alpha \right) -d\left( \mu \alpha \right) =d\left( \nu \right)
+d\left( \alpha \right) -d\left( \mu \right) -d\left( \alpha \right)
=n-m\neq 0\text{.}
\end{equation*}
Since homogeneous elements of distinct degrees can only be equal if both
vanish, $s_{\mu \alpha }s_{\left( \mu \alpha \right) ^{\ast }}=s_{\nu \alpha
}s_{\left( \mu \alpha \right) ^{\ast }}=0$. Thus, $0=s_{\left( \mu \alpha
\right) ^{\ast }}\left( s_{\mu \alpha }s_{\left( \mu \alpha \right) ^{\ast
}}\right) s_{\mu \alpha }=s_{s\left( \mu \alpha \right) }^{2}=s_{s\left( \mu
\alpha \right) }$, which contradicts Theorem \ref{universal-KP-family}.(b).
Hence $a\neq 0$.
Now we show that $a\in \ker \left( \pi _{S}\right) $. Take $y\in \partial
\Lambda $, and it suffices to show $\pi _{S}\left( a\right) \left( y\right)
=0$. Recall that $\pi _{S}\left( s_{\lambda }\right) =S_{\lambda }$ and $\pi
_{S}\left( s_{\mu ^{\ast }}\right) =S_{\mu ^{\ast }}$ where
\begin{equation*}
S_{\lambda }\left( y\right) =
\begin{cases}
\lambda y & \text{if }s\left( \lambda \right) =r\left( y\right) \text{;} \\
0 & \text{otherwise,}
\end{cases}
\text{ and }S_{\mu ^{\ast }}\left( y\right) =
\begin{cases}
\sigma ^{d\left( \mu \right) }y & \text{if }y\left( 0,d\left( \mu \right)
\right) =\mu \text{;} \\
0 & \text{otherwise.}
\end{cases}
\end{equation*}
First suppose that $y\left( 0,d\left( \mu \alpha \right) \right) \neq \mu
\alpha $. Then $S_{\left( \mu \alpha \right) ^{\ast }}\left( y\right) =0$
and $\pi _{S}\left( a\right) \left( y\right) =S_{\mu \alpha }S_{\left( \mu
\alpha \right) ^{\ast }}\left( y\right) -S_{\nu \alpha }S_{\left( \mu \alpha
\right) ^{\ast }}\left( y\right) =0$. Next suppose that $y\left( 0,d\left(
\mu \alpha \right) \right) =\mu \alpha $. Then
\begin{equation*}
\pi _{S}\left( a\right) \left( y\right) =\left( S_{\mu \alpha }-S_{\nu
\alpha }\right) \left( \sigma ^{d\left( \mu \alpha \right) }y\right) \text{.}
\end{equation*}
Since $y\in \partial \Lambda $, then $\sigma ^{d\left( \mu \alpha \right)
}y\in s\left( \alpha \right) \partial \Lambda $ and by Lemma \ref
{lemma-has-local-periodicity}, $\mu \alpha \left( \sigma ^{d\left( \mu
\alpha \right) }y\right) =\nu \alpha \left( \sigma ^{d\left( \mu \alpha
\right) }y\right) $, and hence $\pi _{S}\left( a\right) \left( y\right) =0$.
Thus, $a\in \left. \ker \left( \pi _{S}\right) \right\backslash \left\{
0\right\} $, as claimed, and $\pi _{S}$ is not injective.
\end{proof}
\section{Basic simplicity and simplicity}
\label{Section-Basic-Simpllicity}As in \cite{T11}, we say an ideal $I$ in $
{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) $ is \emph{basic} if
whenever $r\in \left. R\right\backslash \left\{ 0\right\} $ and $v\in
\Lambda ^{0}$, we have $rs_{v}\in I$ implies $s_{v}\in I$. We also say that $
{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) $ is \emph{basically
simple}\ if its only basic ideals are $\left\{ 0\right\} $ and ${\normalsize
\operatorname{KP}}_{R}\left( \Lambda \right) $.
In this section, we investigate necessary and sufficient conditions for $
{\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) $ to be basically simple
(Theorem \ref{basic-simplicity}) and to be simple (Theorem \ref{simplicity}).
We show that both results can be viewed as consequences of the basic
simplicity and simplicity characterisations of Steinberg algebras.
Therefore, we state necessary and sufficient conditions for the Steinberg
algebra $A_{R}\left( \mathcal{G}\right) $ to be basically simple and to be
simple in the following two theorems.
\begin{theorem}[{\protect\cite[Theorem 4.1]{CE-M15}}]
\label{basic-simplicity-for-Steinberg-algebras}Let $\mathcal{G}$ be a
Hausdorff, ample groupoid and $R$ be a commutative ring with $1$. Then $
A_{R}\left( \mathcal{G}\right) $ is basically simple if and only if $
\mathcal{G}$ is effective and minimal.
\end{theorem}
\begin{theorem}[{\protect\cite[Corollary 4.6]{CE-M15}}]
\label{simplicity-for-Steinberg-algebras}Let $\mathcal{G}$ be a Hausdorff,
ample groupoid and $R$ be a commutative ring with $1$. Then $A_{R}\left(
\mathcal{G}\right) $ is simple if and only if $R$ is a field and $\mathcal{G}
$ is effective and minimal.
\end{theorem}
Now we are ready to prove our results in this section.
\begin{theorem}
\label{basic-simplicity}Let $\Lambda $ be a finitely aligned $k$-graph and
let $R$ be a commutative ring with $1$. Then ${\normalsize \operatorname{KP}}
_{R}\left( \Lambda \right) $ is basically simple if and only if $\Lambda $
is aperiodic and cofinal.
\end{theorem}
\begin{proof}
$\left( \Rightarrow \right) $ First suppose that ${\normalsize \operatorname{KP}}
_{R}\left( \Lambda \right) $ is basically simple. By Proposition \ref
{KP-is-isomorphic-to-Steinberg-algebras}, $A_{R}\left( \mathcal{G}_{\Lambda
}\right) $ is also basically simple and then by Theorem \ref
{basic-simplicity-for-Steinberg-algebras}, $\mathcal{G}_{\Lambda }$ is
effective and minimal. On the other hand, since $\mathcal{G}_{\Lambda }$ is
effective, $\Lambda $ is aperiodic (Proposition \ref
{aperiodic-iff-effective}), and since $\mathcal{G}_{\Lambda }$ is minimal,
$\Lambda $ is cofinal (Proposition \ref{cofinal-iff-minimal}). The
conclusion follows.
$\left( \Leftarrow \right) $ Next suppose that $\Lambda $ is aperiodic and
cofinal. By Proposition \ref{aperiodic-iff-effective} and Proposition \ref
{cofinal-iff-minimal}, $\mathcal{G}_{\Lambda }$ is effective and minimal and
then by Theorem \ref{basic-simplicity-for-Steinberg-algebras}, $A_{R}\left(
\mathcal{G}_{\Lambda }\right) $ is basically simple. Since $A_{R}\left(
\mathcal{G}_{\Lambda }\right) $ is isomorphic to ${\normalsize \operatorname{KP}}
_{R}\left( \Lambda \right) $ (Proposition \ref
{KP-is-isomorphic-to-Steinberg-algebras}), ${\normalsize \operatorname{KP}}
_{R}\left( \Lambda \right) $ is also basically simple, as required.
\end{proof}
\begin{theorem}
\label{simplicity}Let $\Lambda $ be a finitely aligned $k$-graph and let $R$
be a commutative ring with $1$. Then ${\normalsize \operatorname{KP}}_{R}\left(
\Lambda \right) $ is simple if and only if $R$ is a field and $\Lambda $ is
aperiodic and cofinal.
\end{theorem}
\begin{proof}
$\left( \Rightarrow \right) $ First suppose that ${\normalsize \operatorname{KP}}
_{R}\left( \Lambda \right) $ is simple. Then ${\normalsize \operatorname{KP}}
_{R}\left( \Lambda \right) $ is also basically simple, and Theorem \ref
{basic-simplicity} implies that $\Lambda $ is aperiodic and cofinal. On the
other hand, since ${\normalsize \operatorname{KP}}_{R}\left( \Lambda \right) $ is
simple, then by Proposition \ref{KP-is-isomorphic-to-Steinberg-algebras}, $
A_{R}\left( \mathcal{G}_{\Lambda }\right) $ is also simple and by Theorem
\ref{simplicity-for-Steinberg-algebras}, $R$ is a field, as required.
$\left( \Leftarrow \right) $ Next suppose that $R$ is a field and $\Lambda $
is aperiodic and cofinal. By Proposition \ref{aperiodic-iff-effective} and
Proposition \ref{cofinal-iff-minimal}, $\mathcal{G}_{\Lambda }$ is effective
and minimal. Hence, by Theorem \ref{simplicity-for-Steinberg-algebras}, $
A_{R}\left( \mathcal{G}_{\Lambda }\right) $ is simple and by Proposition \ref
{KP-is-isomorphic-to-Steinberg-algebras}, so is ${\normalsize \operatorname{KP}}
_{R}\left( \Lambda \right) $.
\end{proof}
| {
"timestamp": "2015-12-22T02:19:40",
"yymm": "1512",
"arxiv_id": "1512.06547",
"language": "en",
"url": "https://arxiv.org/abs/1512.06547",
"abstract": "We extend the the definition of Kumjian-Pask algebras to include algebras associated to finitely aligned higher-rank graphs. We show that these Kumjian-Pask algebras are universally defined and have a graded uniqueness theorem. We also prove the Cuntz-Kreiger uniqueness theorem; to do this, we use a groupoid approach. As a consequence of the graded uniqueness theorem, we show that every Kumjian-Pask algebra is isomorphic to the Steinberg algebra associated to its boundary path groupoid. We then use Steinberg algebra results to prove the Cuntz-Kreiger uniqueness theorem and also to characterize simplicity and basic simplicity.",
"subjects": "Rings and Algebras (math.RA)",
"title": "Kumjian-Pask algebras of finitely-aligned higher-rank graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9732407175907054,
"lm_q2_score": 0.7279754489059774,
"lm_q1q2_score": 0.7084953482816694
} |
https://arxiv.org/abs/1512.01700 | Stabilizing the unstable output of persistent homology computations | We propose a general technique for extracting a larger set of stable information from persistent homology computations than is currently done. The persistent homology algorithm is usually viewed as a procedure which starts with a filtered complex and ends with a persistence diagram. This procedure is stable (at least to certain types of perturbations of the input). This justifies the use of the diagram as a signature of the input, and the use of features derived from it in statistics and machine learning. However, these computations also produce other information of great interest to practitioners that is unfortunately unstable. For example, each point in the diagram corresponds to a simplex whose addition in the filtration results in the birth of the corresponding persistent homology class, but this correspondence is unstable. In addition, the persistence diagram is not stable with respect to other procedures that are employed in practice, such as thresholding a point cloud by density. We recast these problems as real-valued functions which are discontinuous but measurable, and then observe that convolving such a function with a suitable function produces a Lipschitz function. The resulting stable function can be estimated by perturbing the input and averaging the output. We illustrate this approach with a number of examples, including a stable localization of a persistent homology generator from brain imaging data. | \section{Introduction} \label{sec:intro}
Persistence diagrams, also called barcodes, are one of the main tools in topological data analysis (TDA) \cite{Carlsson2009,FrosLand99,Edelsbrunner2010,ghrist:survey}. In combination with machine-learning and statistical techniques, they have been used in a wide variety of real-world applications, including the assessment of road network reconstruction \cite{Ahmed2014Roads}, neuroscience \cite{cbk:ipmi2009}, \cite{Bendich2015trees}, vehicle tracking \cite{Bendich2015tracking}, object recognition \cite{chunyuan:2014}, protein compressiblity \cite{Gameiro:2015b}, and protein structure~\cite{giseon:maltose}.
Put briefly, these persistence diagrams (PDs) are multi-sets of points in the extended plane, and they compactly describe some of the multi-scale topological and geometric information present in a high-dimensional point cloud,
or carried by a real-valued function on a domain.
Several theorems \cite{CohenSteiner2007,Chazal2009b,CohenSteiner2010} state that PDs are stable with respect to certain variations in the point-cloud or functional input, and so the conclusions drawn from them can be taken with some confidence.
On the other hand, there is additional potentially very useful but unstable information produced during the computation of PDs.
For example, a point far from the diagonal in the degree-zero PD represents a connected component with high persistence. This component first appears somewhere, and the computation that produces the PD can be used to find its location. However, this location is not stable: as we describe below, a small change in the input will cause only a small change in the persistence of this connected component, but it can radically alter the location of its birth.
Also, persistent homology computations may rely on parameters such that the output PD is not stable with respect to changes of these parameters.
\subsection{Our Contribution}
This paper introduces a method for stabilizing desirable but unstable outputs of persistent homology computations. The main idea is the following. On the front end, we think of a persistent homology computation $\mathcal{C}$ as being parametrized by a vector ${\bf a} = (a_1, \ldots, a_n)$ of real numbers. These parameters could specify the input to the computation (e.g. the values on the vertices of a simplicial complex) or they could specify other values used in the computation (e.g. threshold parameters used in de-noising or bandwidths for smoothing). For a given choice of ${\bf a}$, we get a PD. On the back end, we consider a function $p$ that extracts a real-number summary from a PD; for example, $p$ might extract the persistence of a homology class created by the addition of a specific edge in a filtered simplicial complex.
The composite function $h$ that maps the parameter vector to this real-number summary need not be continuous, but in many cases it will be \emph{measurable}.
We convolve this function with a Gaussian (or indeed any Lipschitz function) to produce a new Lipschitz function that carries the persistence-based information we desire.
Our main theoretical results (Theorems \ref{thm:stability1}, \ref{thm:stability2} and \ref{thm:stability3}) give conditions on functions $h$ and $K$ (where $K$ will usually be a kernel) that guarantee that the convolution $h * K$ is Lipschitz with specified Lipschitz constant.
From these we obtain the following, where more precise statements are given as Corollaries \ref{cor:triangular}, \ref{cor:epanechnikov} and \ref{cor:gaussian}.
\begin{theorem}
If $h$ is locally essentially bounded then for the triangular and Epanechnikov kernels, $h * K$ is locally Lipschitz.
If $h$ is essentially bounded then for the Gaussian kernel, $h * K$ is Lipschitz.
\end{theorem}
In practice, this can be translated to \textbf{a simple procedure for stabilizing unstable persistent homology computations: perturb the input by adding, for example, Gaussian noise, and redo the computation; repeat and average}. By the law of large numbers, the result converges to the desired stable value.
\begin{theorem}
Let $\bm{\epsilon}_1,\ldots,\bm{\epsilon}_M$ be drawn independently from $K$. Then
\begin{equation*}
\frac{1}{M} \sum_{i=1}^M h({\bf a} - \bm{\epsilon}_i) \to (h*K)({\bf a}).
\end{equation*}
\end{theorem}
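To make the perturb-and-average procedure concrete, here is a minimal Python sketch (our own illustration, not code from the experiments below) in which the unstable summary $h$ is replaced by a toy step function of a single parameter. For a Gaussian kernel $K$ with standard deviation $\sigma$, the smoothed value $(h*K)(a)$ equals the Gaussian distribution function $\Phi(a/\sigma)$, which the Monte Carlo average recovers.

```python
import math
import random

def h(a):
    """A discontinuous but measurable toy summary: a step function standing
    in for an unstable persistence output (e.g. 'length of the longest bar
    born in a chosen region')."""
    return 1.0 if a >= 0.0 else 0.0

def smoothed(summary, a, sigma, M, rng):
    """Monte Carlo estimate of (summary * K)(a) for a Gaussian kernel K with
    standard deviation sigma: perturb the input M times, recompute the
    summary, and average."""
    return sum(summary(a - rng.gauss(0.0, sigma)) for _ in range(M)) / M

rng = random.Random(0)
sigma = 0.2
for a in (0.0, 0.2):
    # Exact convolution: (h * K)(a) = Phi(a / sigma), the Gaussian CDF.
    exact = 0.5 * (1.0 + math.erf(a / (sigma * math.sqrt(2.0))))
    est = smoothed(h, a, sigma, 100_000, rng)
    assert abs(est - exact) < 0.02
```

Note that while $h$ jumps at $a=0$, the estimated function varies smoothly there, which is the point of the construction.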
For the reader familiar with persistent homology who wants to see how this works in practice, we provide the following example.
Consider the function $f$ on the square in Figure~\ref{truefunction}.
This induces a function $\bar{f}$ on the torus since $f(x,y)=0$ on the boundary of the square. Suppose we are only given a finite sample of this induced function and we are interested in the presence of long-lived bars which are born in the region of the torus corresponding to the second quadrant of the square.
\begin{figure}
\centering\includegraphics[width=45mm]{truefunction.png}
\caption{The graph of the function on the square $[-\pi,\pi]^2$
given by
$f(u,v) = \sin(u)\sin(v)\left(1-0.9\cdot 1_{\{u<0,\,v<0\}}\right):[-\pi,\pi]^2\to
\mathbb{R}$. It induces a function on the torus, $\bar{f}:T^2 \to \mathbb{R}$, with two global minima with value $-1$, one global maximum with value $1$ and one local maximum with value $0.1$.}
\label{truefunction}
\end{figure}
To be concrete, we start with a sample $X$ of $N$ points from the graph of $\bar{f}$, by sampling $u_i,v_i$ independently from the uniform distribution on $[-\pi,\pi]$ and letting $z_i = f(u_i,v_i)$.
We use $X$ to construct a filtered simplicial complex approximating the unknown function $\bar{f}$ as follows.
From the points $\{(u_i,v_i)\}$ we construct a Delaunay triangulation of the torus. We filter this triangulation by assigning the vertex $(u_i,v_i)$ the value $z_i$ and assigning edges and triangles the maximum value of their vertices.
\begin{figure}
\includegraphics[width=0.26\textwidth]{sample1flat.png}
\includegraphics[width=0.35\textwidth]{sample2.png}
\includegraphics[width=0.35\textwidth]{sample3.png}
\caption{Sample of 1000 points from the graph
$\{(x,\bar{f}(x)) : x\in T^2\}$, where the function values are indicated
using the same color scale as in Figure~\ref{truefunction}. The
points on the torus are used to construct a Delaunay
triangulation, which is filtered using the function values. On the
right we indicate the filtration values by moving the points in
the normal direction.}
\label{fig:sample}
\end{figure}
We compute the $0$-dimensional extended\footnote{Extended persistent homology~\cite{cseh:extendingP} pairs the global minimum with the global maximum.} persistence diagram of this filtered simplicial complex.
Let $h(X)$ be the length of the longest bar if that bar was born in the region corresponding to the second quadrant and $0$ otherwise.
This process defines a function $h:\mathbb{R}^{3N}\to\mathbb{R}$, but $h$ is unstable. Consider the sample $X$ in Figure~\ref{fig:sample}.
We have $h(X)=0$ since the global minimum, highlighted in red, is born outside the region corresponding to the second quadrant. Because of the symmetry of $f$, $h(X)$ is $0$ approximately half the time and about $2$ otherwise.
Let $K$ denote the $3N$-variate Gaussian with mean $0$ and standard deviation $0.2$.
Sample $\epsilon_1,\ldots,\epsilon_M$ independently from $K$.
Compute $\frac{1}{M} \sum_{i=1}^M h(X-\epsilon_i)$.
See Figure~\ref{fig:long-bars}.
As $M$ increases, this quantity converges to $g(X)$, where $g := h * K$ is the stabilized version of $h$.
\begin{figure}
\centering\includegraphics[width=60mm]{conv.png}
\caption{Locations and sizes of 100 long bars from the trials. Averaging the lengths of the blue bars over 1000 trials we get $1.3644$, which is consistent with the fact that $h(X)$ is $0$ or about $2$ with equal probability.
We should not expect $\lim_{M\to\infty}\frac{1}{M}\sum_{i=1}^{M}h(X-\epsilon_i)$ to converge to $1$ because, unlike $f$, a particular sample $X$ is not symmetric with respect to the second and fourth quadrants.}
\label{fig:long-bars}
\end{figure}
\subsection{Related Work}
Partial inspiration for the main idea of our work (when faced with an instability caused by a near-interchange of values, perturb the values many times and take some sort of average) comes from the trembling-hand equilibrium solution \cite{munch2015probabilistic} to the non-uniqueness problem for Frechet means of persistence diagrams.
Several recent papers have advocated principled approaches for extracting features from PDs, including persistence landscapes \cite{Bubenik2015landscapes}, the stable multi-scale kernel~\cite{Reininghaus2015kernel}, intensity functionals \cite{Chen2015intensity}, persistence images~\cite{Adams:2017}, the stable topological signature~\cite{Carriere2015signatures}, and the cover-tree entropy reduction~\cite{Smith:2017}. Our result complements these ideas: once one identifies some specific parts of the persistence diagram as having good classification power, one can then attempt to locate, in a robust way, the portions of the domain responsible for these parts.
Other papers (e.g. \cite{Chazal2011measure,bckl:nonparametric,adams:2011}) have developed sophisticated schemes for data-cleaning before persistent homology computation.
These techniques are generally fragile to certain initial parameter choices, such as the $m_0$ parameter in \cite{Chazal2011measure}. Again, we provide a complementary role: any of these schemes can be run many times for several perturbations of an initial parameter choice, and the output can then be taken with confidence.
Finally, Zomorodian and Carlsson~\cite{Zomorodian2008localizing} use Mayer-Vietoris as inspiration in their technique for localizing (relative to a cover) homology classes within a given simplicial complex. However, this works only for a fixed simplicial complex, not a simplicial complex endowed with a filtration, and the results are certainly fragile to changes in this fixed complex.
\subsection{Outline}
Persistent homology computations and stability theorems are reviewed in Section \ref{sec:PD}, although we assume the reader is already somewhat familiar with them.
Several examples of important but unstable persistence-based information are given in Section \ref{sec:IG}, and we then describe a general approach that stabilizes them in Section \ref{sec:main}. Computational experiments, and some suggested interpretations of the results, are presented in Section \ref{sec:examples}.
Potential future directions are discussed in Section \ref{sec:Disc}.
\section{Persistent Homology and Stability} \label{sec:PD}
The treatment of persistence diagrams here is adapted from \cite{Edelsbrunner2010}. For a more general discussion, see \cite{Chazal2009b}. We assume the reader is familiar with the basics of homology groups: the textbook \cite{Munkres2} is a good introduction. All homology groups are assumed to be computed over some fixed field.
For concreteness, we restrict our attention to simplicial complexes, but our results also apply to more general complexes.
\subsection{Persistent Homology}
\subsubsection{Filtered simplicial complexes}
Persistent homology is computed for a finite \emph{filtered abstract simplicial complex}. That is, we have a finite \emph{abstract simplicial complex}, a collection, $K=\{\sigma\}$, of nonempty subsets of a fixed finite set that satisfy the condition that if $\emptyset \neq \tau \subseteq \sigma \in K$ then $\tau \in K$. In addition, we have a \emph{filtration}, a function $f:K \to \mathbb{R}$ such that if $\tau \subseteq \sigma$ then $f(\tau) \leq f(\sigma)$. That is, $f$ is order preserving.
\subsubsection{Persistence Diagrams}
Fix a homological dimension $p$.
Suppose the distinct values of $f$ are $r_1 < \ldots < r_m.$
For each $1 \leq i \leq m,$ define $K^i = \{\sigma \in K \mid f(\sigma) \leq r_i\}$.
Since $f$ is order preserving, each $K^i$ is a subcomplex. Whenever $i \leq j$, there is an inclusion $K^i \hookrightarrow K^j$, which
induces a homomorphism:
\[
f_p^{i,j}: H_p(K^i) \to H_p(K^j).
\]
A homology class $\alpha \in H_p(K^i)$ is a \emph{persistent homology class} that is \emph{born} at level $i$ if $\alpha \notin \im f_p^{i-1,i}$, and that \emph{dies} entering level $j$
if $f_p^{i,j}(\alpha) \in \im f_p^{i-1,j}$ but $f_p^{i,j-1}(\alpha) \notin \im f_p^{i-1,j-1}$.
If $\alpha$ never dies, we say that it dies entering level $j=\infty$ and set $r_{\infty} = \infty$.
The \emph{persistence} of $\alpha$ is defined to be $pers(\alpha) = r_j - r_i.$ The set of classes which are born at $i$ and die entering level $j$ form a vector space, with rank denoted $\mu_p^{i,j}.$
The degree-$p$ \emph{persistence diagram} of $f$, $\Dgm_p(f),$ encodes these ranks. It is a multiset of points in the extended plane, with a point
of multiplicity $\mu_p^{i,j}$ at each point $(r_i,r_j).$
\subsubsection{Persistent homology computations}
In practice, one constructs a filtered abstract simplicial complex from some other starting data. In addition, more information can be extracted from the persistent homology algorithm than just the persistence diagram.
We define a \emph{persistent homology computation}, $\mathcal{C}$, to be a function whose input consists of real numbers $a_1,a_2,\ldots,a_n$. These may include input values and also parameter values for the computation.
Using this input, $\mathcal{C}$ constructs an abstract simplicial complex $K$ together with a filtration $f$.
The output of $\mathcal{C}$ consists of a degree-$p$ persistence diagram together with, for each $(r_i,r_j)$ in the persistence diagram (counted with multiplicity), a $p$-simplex $\sigma$ with $f(\sigma)=r_i$, and a $p$-cycle $\alpha$ containing $\sigma$ that is born at level $i$ and that dies at level $j$.
\subsubsection{Examples}
\begin{example}
\label{ex:HF}
\emph{Functions on simplicial complexes.}
A filtered abstract simplicial complex, $K$, may be obtained from a real-valued function, $F$, on the points in a finite simplicial complex, $\mathcal{K}$. As a set $K \cong \mathcal{K}$. A filtration, $f$, on $K$ is defined by $f(\sigma) = \sup_{x \in \sigma} F(x)$.
\begin{figure}
\centering
\includegraphics[scale=0.2]{ZeroDiag.pdf}
\caption{Left: The graph of a function $F$ on a simplicial complex $\mathcal{K}$. Right: the degree-zero persistence diagram $\Dgm_0(f)$ for the corresponding abstract simplicial complex $K$ and filtration $f$. The labeled points
have coordinates $u = (f(x),f(w))$ and $v = (f(y),f(z)).$ The point on the very top has infinite $y$-coordinate.}
\label{fig:ZeroDiag}
\end{figure}
For example, let $\mathcal{K}$ be the geometric line graph (i.e., an embedding in the plane of a graph consisting of vertices and edges), shown on the bottom of the left side of Figure \ref{fig:ZeroDiag}.
Above this, we have the graph of a function $F$ on the points in $\mathcal{K}$.
From this, we have a corresponding abstract simplicial complex $K$ and filtration $f$.
The persistence diagram $\Dgm_0(f)$ is on the right.
The input to $\mathcal{C}$ consists of the function values (from left to right) $a_1,a_2,\ldots,a_n$.
\end{example}
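For a filtered line graph as in this example, the degree-zero diagram can be computed with a union-find structure and the elder rule (when two components merge, the one with the larger birth value dies). The following sketch is our own illustration, assuming a path graph filtered by vertex values; it is not taken from the paper or any particular library.

```python
import math

def dgm0_line_graph(values):
    """Degree-zero persistence diagram for a path graph: vertex i enters at
    values[i] and edge (i, i+1) enters at max(values[i], values[i+1]).
    Returns (birth, death) pairs of positive persistence; the surviving
    component gets death = math.inf."""
    n = len(values)
    parent = list(range(n))
    birth = list(values)  # birth value of the component rooted here

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    dgm = []
    # Process edges in increasing order of their filtration value.
    for i in sorted(range(n - 1), key=lambda i: max(values[i], values[i + 1])):
        t = max(values[i], values[i + 1])  # edge filtration value
        a, b = find(i), find(i + 1)
        if a == b:
            continue
        if birth[a] > birth[b]:
            a, b = b, a                    # a is now the elder root
        if birth[b] < t:                   # skip zero-persistence pairs
            dgm.append((birth[b], t))
        parent[b] = a                      # elder rule: younger dies
    dgm.append((birth[find(0)], math.inf))  # essential component
    return sorted(dgm)
```

For instance, `dgm0_line_graph([0, 3, 1, 4, 2])` pairs the local minima at values $1$ and $2$ with the merge heights $3$ and $4$, and reports the global minimum at $0$ as the essential class.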
\begin{example}
\emph{Distance to a PL-Curve.}
\label{ex:FDC}
Consider the piecewise-linear curve $C$ on the left side of Figure \ref{fig:EC}.
Moving clockwise, we order its vertices $A = v_1, v_2, \ldots, v_N = D$.
Let $K$ be the full simplex on these $N$ vertices. For each vertex $v$, define $f(v) = 0$.
For each edge of the form $e = (v_i, v_{i+1}),$ define $f(e) = 0$, and for any other edge $e = (v_i, v_j),$
we set $f(e)$ to be the Euclidean distance between $v_i$ and $v_j$. Finally, for any higher simplex $\sigma$, set $f(\sigma) = \max_{e \subseteq \sigma} f(e).$ The degree-one persistence diagram $\Dgm_1(f)$ appears on the right
of Figure \ref{fig:EC}.
\begin{figure}
\centering
\includegraphics[scale=0.3]{EmbedCurve.pdf}
\caption{Left: a piecewise-linear curve in the plane. The distance between $A$ and $D$ is slightly smaller than
the distance between $B$ and $C$. Right: $\Dgm_1(f),$ where $f$ is as defined in the text. The points $u$ and $v$ correspond to one-cycles that are
created by the additions of edges $(A,D)$ and $(B,C)$, respectively.}
\label{fig:EC}
\end{figure}
\end{example}
Here the input to $\mathcal{C}$ consists of the $2N$ coordinates of the vertices.
We note this paradigm can be extended to curves $C$ in higher-dimensional ambient spaces, or even to higher-dimensional complexes.
\begin{example}
\label{ex:RF}
\emph{Point cloud -- Rips.}
Suppose that $X = \{x_1,\ldots,x_N\}$ is a set of points in some metric space $(Y,d)$.
We let $K$ be the full simplex on these vertices. Define $f(v) = 0$ for each vertex and $f(e) = d(v,w)$ for each edge $e = (v,w).$
As above, we set $f(\sigma) = \max_{e \subseteq \sigma} f(e)$ for all higher-dimensional simplices.
This is called the Rips filtration.
We denote $\Dgm_p(X) = \Dgm_p(f)$.
The input to $\mathcal{C}$ consists of the coordinates of the points in $X$ in some parametrization of $Y$.
Alternatively, it consists of the entries of the distance matrix $D = (d(x_i,x_j))$.
For example, let $X$ be the circular point cloud on the top-left of Figure \ref{fig:FuzzyOutliers}. The corresponding $\Dgm_1(X)$ appears on the top-right.
\end{example}
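Since $f(\sigma) = \max_{e \subseteq \sigma} f(e)$, the Rips filtration value of any simplex is simply the diameter of its vertex set. A short illustrative sketch (assuming points given as coordinate tuples):

```python
import itertools
import math

def rips_filtration_value(points, simplex):
    """Rips filtration value of a simplex (a tuple of vertex indices):
    0 for vertices, and otherwise the largest pairwise distance among its
    vertices, since f(sigma) = max over edges e in sigma of f(e)."""
    if len(simplex) == 1:
        return 0.0
    return max(
        math.dist(points[i], points[j])
        for i, j in itertools.combinations(simplex, 2)
    )
```

For the unit square, the edge $(0,1)$ enters at distance $1$ while the triangle $(0,1,2)$ enters at the diagonal length $\sqrt{2}$.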
\begin{example}
\label{ex:geometry}
\emph{Point cloud -- geometry.}
Often, the simplicial complex in the previous example is too large to work with.
Instead one applies some geometric ideas to construct a smaller filtered simplicial complex.
Examples include witness complexes~\cite{deSilvaCarlsson:witness}, the graph-induced complex~\cite{Dey:2013}, and the use of nudged elastic bands~\cite{adams:2011}.
These constructions typically include one or more parameters, which we append to the input to $\mathcal{C}$.
\end{example}
\begin{example}
\label{ex:statistics}
\emph{Point cloud -- statistics.}
Instead of using geometric ideas to construct a smaller filtered simplicial complex, we can use statistical ideas.
For example, one can use a kernel to smooth the point cloud to obtain a density estimator on the underlying space $Y$ and use this to filter a triangulation of $Y$~\cite{cbk:ipmi2009,bckl:nonparametric,Chen:2015b}.
Or one may use the local density to threshold the point cloud~\cite{cidsz:mumford}; we consider this in more detail in Example~\ref{ex:DTC}.
Again, these constructions include one or more parameters, which we append to the input for $\mathcal{C}$.
\end{example}
\begin{example}
\label{ex:regression}
\emph{Regression.}
Here we present a variant of Example~\ref{ex:HF} in which we are not given the simplicial complex.
Instead we sample points $X = (x_1,\ldots,x_N)$, $x_i \in \mathbb{R}^d$ from some probability distribution on $\mathbb{R}^d$. We also sample corresponding function values $y_i \in \mathbb{R}$. For example, we may have $y_i = f(x_i) + \epsilon_i$, where $\epsilon_i$ is sampled from a multivariate Gaussian.
We use $X$ to construct a Delaunay triangulation $K$.
We then use $Y = (y_1,\ldots,y_N)$ to filter $K$ as follows: $f(\sigma) = \max_{x_i\in\sigma} y_i$. This is called the lower star filtration.
Instead of the sample points lying in $\mathbb{R}^d$, they may lie on some compact Riemannian manifold.
See the torus example in Section~\ref{sec:intro}.
Instead of filtering $K$ directly using $Y$, one can instead use $(X,Y)$ to construct an estimator $\hat{f}$ of the unknown regression function $f$. We can then use $\hat{f}$ to filter $K$~\cite{bckl:nonparametric}.
\end{example}
\subsection{Stability}
The persistence diagram $\Dgm_p(f)$ is a summary of the function $f$, and it turns out to be a stable one. The discussion here is adapted from \cite{CohenSteiner2007}.
For a broader description, see \cite{Chazal2009b}.
For convenience, to each persistence diagram, we add every point $(r,r)$ on the major diagonal, each with infinite multiplicity.
Now suppose that $\phi: D \to D'$ is some bijection between two persistence diagrams; bijections exist because of the infinite-multiplicity points along the diagonal. The cost of $\phi$ is defined to be $C(\phi) = \sup_{u \in D} ||u - \phi(u)||_{\infty};$ that is, the largest box-norm distance between matched points. The \emph{bottleneck distance} $W_{\infty}(D,D')$ is defined to be the minimum cost amongst all such bijections.
For example, if $D$ and $D'$ are the black and red diagrams, respectively, on the right side of Figure \ref{fig:NoisyZeroDiag}, then the best bijection would pair $u$ with $u'$, $v$ with $v'$, the two infinite-persistence points with each other, and the other two points with the closest diagonal points. The bottleneck distance would
be the cost of this bijection.
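For tiny diagrams the bottleneck distance can be computed exactly by brute force. The padding trick below (augmenting each diagram with the diagonal projections of the other's points, with diagonal-to-diagonal matches free) is standard; the function name and the factorial-time enumeration are our own illustrative sketch, not a production algorithm:

```python
import itertools

def bottleneck_brute(D1, D2):
    """Brute-force bottleneck distance between two small persistence
    diagrams, given as lists of (birth, death) pairs.  Each diagram is
    padded with the diagonal projections of the other's points, so the
    minimum over bijections becomes a minimum over permutations."""
    def diag(p):                    # nearest point on the major diagonal
        m = (p[0] + p[1]) / 2.0
        return (m, m)
    def cost(p, q):                 # box-norm distance between matched points;
        if p[0] == p[1] and q[0] == q[1]:
            return 0.0              # diagonal-diagonal matches cost nothing
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
    A = list(D1) + [diag(p) for p in D2]
    B = list(D2) + [diag(p) for p in D1]
    if not A:
        return 0.0
    return min(max(cost(A[i], B[j]) for i, j in enumerate(perm))
               for perm in itertools.permutations(range(len(B))))

# the off-diagonal point (0, 2) is cheapest matched to the diagonal:
print(bottleneck_brute([(0, 10), (0, 2)], [(0, 10)]))   # 1.0
```

For realistic diagrams one would use a geometric matching algorithm instead of enumerating all bijections.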
\begin{figure}
\centering
\includegraphics[scale=0.2]{NoisyZeroDiag.pdf}
\caption{Left: The graphs of functions $f$ (black) and $g$ (red), both on the same domain $\mathcal{K}$. Right: the persistence diagrams $\Dgm_0(f)$ and $\Dgm_0(g)$, using the same color scheme.
}
\label{fig:NoisyZeroDiag}
\end{figure}
The Diagram Stability Theorem~\cite{CohenSteiner2007} guarantees that persistence diagrams of nearby functions are close to one another. More precisely,
we have $W_{\infty}(\Dgm_p(f), \Dgm_p(g)) \leq \norm{f - g}_{\infty}.$
This is illustrated by Figure \ref{fig:NoisyZeroDiag}.
Note that the difference between $f$ and $g$ is measured in the $L_{\infty}$ norm. In the point cloud context (Example \ref{ex:RF}), this translates into requiring that the two point cloud inputs be Hausdorff-close. However, the persistence diagram is not stable with respect to the addition of outliers.
We discuss this problem in more detail in Section \ref{subsec:PC} and propose a solution in Section \ref{sec:main}.
\section{Instability}
\label{sec:IG}
The Diagram Stability Theorem tells us that the persistence diagram produced by a persistent homology computation is stable with respect to the input used to construct the filtered abstract simplicial complex.
However, other outputs of persistent homology computations are not stable. This includes the simplices and cycles that generate persistent homology classes. These are of great interest to practitioners hoping to interpret persistence calculations more directly.
In addition, many persistence computations rely on choices of parameters and the resulting persistence diagrams may be unstable with respect to these choices.
\subsection{Instability of Generating Cycles/Simplices}
\label{subsec:GCS}
Persistence diagrams are useful and robust measures of the \emph{size} of topological features. What they are less good at, on the other hand, is robustly pinpointing the \emph{location} of important topological features.
We use Figure \ref{fig:NoisyZeroDiag} to illustrate this problem.
Suppose that we have the fixed domain $K$ and we observe the function $f$. One of the most prominent points in $\Dgm_0(f)$ is $u$, which corresponds to the pair of values
$f(x)$ and $f(w).$ We might thus be tempted to say that $f$ has an important feature, a component of high persistence, \emph{at} $x$. But consider the nearby function $g$ instead. Its diagram $\Dgm_0(g)$ has a point $u'$ that is very close to $u$, but this point corresponds to the pair of values $g(y)$ and $g(w)$.
There is still a component born at $g(x)$, but it corresponds to the much smaller persistence point $v'$.
And so while the persistence of the point $u$ is a stable summary of the function $f$, the actual location $x$ of the topological feature it corresponds to is not.
This is unfortunate. Several recent works (\cite{Bendich2015tracking}, \cite{Bendich2015trees}, among others) have shown that the presence of points in certain regions of the persistence diagram has strong correlation with covariates under study. For example, each diagram in the second cited work came from a filtration of the brain artery tree in a specific patient's brain, and it was found that the density of points in a certain middle-persistence range gave strong correlations with patient age. It would of course be tempting to hold specific locations in the brain responsible for these points with high distinguishing power.
Unsurprisingly, this problem remains for persistent homology in higher degrees. Consider Figure \ref{fig:EC} again. It is easy to see that edge $(A,D)$ creates the large loop
which corresponds to point $u \in \Dgm_1(f)$. However, a slight perturbation of the vertex configuration could render $(B,C)$ responsible for this loop instead, and so we cannot robustly locate the persistence of this loop \emph{at} $(A,D)$.
In Section \ref{sec:main}, we both rigorously define this non-robustness and suggest a method for addressing it.
\subsection{Instability of Parameter Choices}
\label{subsec:PC}
The Diagram Stability Theorem guarantees that the persistence diagrams associated to two Hausdorff-close point clouds will themselves be close. However,
it says nothing about the outlier problem. For example, consider again the point cloud $X$ (Figure \ref{fig:FuzzyOutliers}, top-left) from Example \ref{ex:RF} to which we apply the Rips construction.
Its persistence diagram $\Dgm_1(X)$ (top-right of same figure) has one high-persistence point, which corresponds to the ``circle'' that we qualitatively see when
looking at the points.
On the other hand, consider the point cloud $Y$ on the bottom-left, which consists of $X$ plus three ``outlier'' points spread across the interior of the circle.
The diagram $\Dgm_1(Y)$ (bottom-right) is not close to $\Dgm_1(X)$: there is still one point of fairly high persistence, but it is much closer to the diagonal than before.
In practice, this problem is often addressed by first de-noising the point cloud in some way. For example, Carlsson et al.~\cite{Carlsson2008Klein} first thresholded
by density before computing Rips filtrations when they discovered a Klein bottle in the space of natural images.
There are no guarantees that a different, nearby choice of density threshold parameter would not give a qualitatively different persistence diagram.
Section \ref{sec:main} addresses this by introducing a general method for handling parameter choice in persistence computations.
\begin{figure}[ht]
\begin{minipage}[b]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth,scale=0.01]{FuzzyCircle.pdf}
\label{fig:figure1}
\end{minipage}
\begin{minipage}[b]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth,scale=0.01]{FuzzyCircleOneDiag.pdf}
\end{minipage}
\begin{minipage}[b]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth,scale=0.01]{OutlierCircle.pdf}
\label{fig:figure2}
\end{minipage}
\begin{minipage}[b]{0.45\linewidth}
\centering
\includegraphics[width=\textwidth,scale=0.01]{OutlierCircleOneDiag.pdf}
\end{minipage}
\caption{Illustration of the outlier problem for the persistent homology of the Vietoris-Rips complex of a point cloud. All figures produced in MATLAB. Top left: $150$ points $X$, sampled with bounded noise from a circle.
Top right: $\Dgm_1(X).$ Bottom left: $153$ points $Y$, which is $X$ plus three outlier points. Bottom right: $\Dgm_1(Y).$
}
\label{fig:FuzzyOutliers}
\end{figure}
\section{Theory: Stability from convolutions} \label{sec:main}
In this section we show how functions may be stabilized by convolving them with a kernel. First, we give three general results with various assumptions on the function and the kernel. Next, we apply them in three particular cases: the simple triangular kernel and the commonly used Epanechnikov and Gaussian kernels.
We then outline a few specific examples, some of which will be explored via experiment in the next section.
\subsection{Lipschitz functions and convolution}
Let us start by recalling a few definitions.
For $C \geq 0$, a function $f:\mathbb{R}^n \to \mathbb{R}$ is said to be \emph{$C$-Lipschitz} if for all $u,v \in \mathbb{R}^n$, $\abs{f(u)-f(v)} \leq C \abs{u-v}$, where $\abs{x}$ denotes the Euclidean norm.
We will call a function \emph{Lipschitz} if it is $C$-Lipschitz for some $C \geq 0$.
The support of $f$, denoted $\supp(f)$, is the closure of the subset of $\mathbb{R}^n$ where $f$ is non-zero.
Let $h,g:\mathbb{R}^n \to \mathbb{R}$ be (Lebesgue) measurable functions that are defined almost everywhere.
The \emph{1-norm} of $h$ is given by $\norm{h}_1 = \int_{\mathbb{R}^n}\abs{h(t)}\,dt$, if it exists.
The \emph{essential supremum} of $h$, denoted by $\norm{h}_{\infty}$, is the smallest number $a$ such that the set $\{x \mid \abs{h(x)} > a\}$ has measure $0$.
If it exists,
the \emph{convolution product} of $h$ and $g$ is given by
\begin{equation*}
(h*g)(t) = \int_{\mathbb{R}^n} h(s)g(t-s)ds = \int_{\mathbb{R}^n} h(t-s) g(s) ds.
\end{equation*}
It exists everywhere, for example, if
one function is essentially bounded and the other is integrable;
or if
one function is bounded and compactly supported and the other is locally integrable~\cite[Section 473D]{Fremlin2000}.
\begin{assumption}
Throughout this section we assume that $h:\mathbb{R}^d \to \mathbb{R}$ is defined almost everywhere, that $K:\mathbb{R}^d \to \mathbb{R}$, and that the convolution product $h * K$ exists almost everywhere.
\end{assumption}
\subsection{Stability theorems}
\label{sec:stability}
We now give several conditions on a pair of functions which imply that their convolution product is (locally) Lipschitz.
The first result appears in \cite[473D(d)]{Fremlin2000}, but the proof is included here for completeness.
\begin{theorem} \label{thm:stability1}
If $\norm{h}_1 = C$ and $K$ is $D$-Lipschitz, then $h*K$ is $CD$-Lipschitz.
\end{theorem}
\begin{proof}
Let $g = h*K$.
First we have $g(u) - g(v) = \int_{\mathbb{R}^d} h(s) \left( K(u-s) - K(v-s) \right) ds.$ Then,
$\abs{g(u) - g(v)} \leq \int_{\mathbb{R}^d} \abs{h(s)} \abs{ K(u-s) - K(v-s) } ds \leq \int_{\mathbb{R}^d} \abs{h(s)} D \abs{u-v} ds \leq CD \abs{u-v}.$
\end{proof}
Let $B_{\alpha}(x)$ denote the closed ball of radius $\alpha$ centered at $x \in \mathbb{R}^d$, and
let $V_d$ denote the volume of the $d$-dimensional ball of radius $1$.
\begin{theorem} \label{thm:stability2}
Let $x \in \mathbb{R}^d$ and let $\alpha>0$. If $\norm{h}_{\infty} \leq M$ on $B_{2\alpha}(x)$,
$K$ is $D$-Lipschitz and
$\supp(K) \subseteq B_{\alpha}(0)$,
then $h*K$ is $2MD\alpha^dV_d$-Lipschitz in $B_{\alpha}(x)$.
\end{theorem}
\begin{proof}
Let $g = h*K$. Let $u,v \in B_{\alpha}(x)$.
As in the previous proof,
$\abs{g(u) - g(v)} \leq \int_{\mathbb{R}^d} \abs{h(s)} \abs{ K(u-s) - K(v-s) } ds \leq \int_{B_{\alpha}(u) \cup B_{\alpha}(v)} \abs{h(s)} D \abs{u-v} \, ds \leq 2MD\alpha^dV_d \abs{u-v}$.
\end{proof}
\begin{theorem} \label{thm:stability3}
If $\norm{h}_{\infty} \leq M$ and $\int \abs{K(s+t) - K(s)} \, ds \leq D\abs{t}$ for all $t \in \mathbb{R}^d$, then $h*K$ is $MD$-Lipschitz.
\end{theorem}
\begin{proof}
Let $g = h*K$.
Again,
$\abs{g(u) - g(v)} \leq \int_{\mathbb{R}^d} \abs{h(s)} \abs{ K(u-s) - K(v-s) } ds \leq M \int_{\mathbb{R}^d} \abs{K(s+u-v)-K(s)}\,ds \leq MD\abs{u-v}$.
\end{proof}
\subsection{Application to kernels}
\label{sec:kernels}
We now apply the above theorems to smooth a function $h$, obtaining a Lipschitz function.
That is, we will take $K$ to be a \emph{kernel}, a non-negative integrable real-valued function on $\mathbb{R}^d$ satisfying $\int K(x)\, dx = 1$, $\int x K(x)\, dx = 0$ and $\int \abs{x}^2 K(x)\, dx < \infty$.
For example, we can choose $K$ to be the \emph{triangular kernel}, $K(x) = c \max(1-\norm{x}, 0)$, for appropriate normalization constant $c$.
The most common choices are the Gaussian kernel and the Epanechnikov kernel, which are described below.
Notice that if $K$ is a kernel, then so is $K_{\alpha}(x) = \frac{1}{{\alpha}^d} K(\frac{x}{\alpha})$.%
\footnote{More generally,
we can choose the bandwidth to be a symmetric positive definite matrix $H$ and let $K_H(x) = \frac{1}{\sqrt{\det{H}}}K(H^{-1/2}x)$.
}
The parameter $\alpha$ is called the \emph{bandwidth} and allows one to control the amount of smoothing.
We have $g = h * K_{\alpha}$.
\subsubsection{The triangular kernel}
\label{sec:triangular}
Let $\alpha>0$. Recall that $V_d$ denotes the volume of the $d$-dimensional ball of radius $1$. For $A \subseteq \mathbb{R}^d$, let $I_A$ denote the indicator function of $A$. That is, $I_A(x) = 1$ if $x \in A$ and $0$ otherwise. The triangular kernel is given by
\begin{equation*}
K_{\alpha}(x) = \frac{d+1}{\alpha^d V_d} \left( 1 - \frac{\abs{x}}{\alpha} \right) I_{B_{\alpha}(0)}.
\end{equation*}
Note that $\supp(K_{\alpha}) = B_{\alpha}(0)$ and $K_{\alpha}$ is $\frac{d+1}{\alpha^{d+1} V_d}$-Lipschitz.
Applying Theorem~\ref{thm:stability2}, we have the following.
\begin{corollary} \label{cor:triangular}
Let $x \in \mathbb{R}^d$. If $\norm{h}_{\infty}\leq M$ on $B_{2\alpha}(x)$ then $h*K_{\alpha}$ is $\frac{2M(d+1)}{\alpha}$-Lipschitz in $B_{\alpha}(x)$.
\end{corollary}
Note that if the bound on $h$ is global, then so is the Lipschitz bound.
\subsubsection{The Epanechnikov kernel}
\label{sec:epanechnikov}
Let $\alpha>0$.
The Epanechnikov kernel is given by
\begin{equation*}
K_{\alpha}(x) = \frac{d+2}{2\alpha^d V_d} \left( 1 - \frac{\abs{x}^2}{\alpha^2} \right) I_{B_{\alpha}(0)}.
\end{equation*}
Now $\supp(K_{\alpha}) = B_{\alpha}(0)$ and $K_{\alpha}$ is $\frac{d+2}{\alpha^{d+1}V_d}$-Lipschitz.
Applying Theorem~\ref{thm:stability2}, we have the following.
\begin{corollary} \label{cor:epanechnikov}
Let $x \in \mathbb{R}^d$. If $\norm{h}_{\infty}\leq M$ on $B_{2\alpha}(x)$ then $h*K_{\alpha}$ is $\frac{2M(d+2)}{\alpha}$-Lipschitz in $B_{\alpha}(x)$.
\end{corollary}
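Both normalization constants and Lipschitz bounds are easy to sanity-check numerically. The following sketch (our own, for $d=1$, where $V_1 = 2$) verifies that the two kernels integrate to $1$ and that their empirical Lipschitz constants match $\frac{d+1}{\alpha^{d+1}V_d}$ and $\frac{d+2}{\alpha^{d+1}V_d}$:

```python
import math

d, alpha = 1, 0.5
V_d = 2.0                      # volume of the unit ball in R^1

def triangular(x):
    r = abs(x)
    return (d + 1) / (alpha**d * V_d) * (1 - r / alpha) if r <= alpha else 0.0

def epanechnikov(x):
    r = abs(x)
    return (d + 2) / (2 * alpha**d * V_d) * (1 - r**2 / alpha**2) if r <= alpha else 0.0

def integral(K, a=-1.0, b=1.0, n=100000):      # midpoint rule
    h = (b - a) / n
    return sum(K(a + (i + 0.5) * h) for i in range(n)) * h

def emp_lipschitz(K, n=20000):                 # largest finite-difference slope
    pts = [-1.0 + 2.0 * i / n for i in range(n + 1)]
    return max(abs(K(q) - K(p)) / (q - p) for p, q in zip(pts, pts[1:]))

print(integral(triangular), integral(epanechnikov))                   # both ~ 1.0
print(emp_lipschitz(triangular),   (d + 1) / (alpha**(d + 1) * V_d))  # ~ 4 vs 4
print(emp_lipschitz(epanechnikov), (d + 2) / (alpha**(d + 1) * V_d))  # ~ 6 vs 6
```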
\subsubsection{The Gaussian kernel}
\label{sec:gaussian}
Let $\alpha>0$. The Gaussian kernel is given by
\begin{equation*}
K_{\alpha}(x) = \frac{1}{\alpha^d(2\pi)^{d/2}} e^{-\abs{x}^2/(2\alpha^2)}.
\end{equation*}
\begin{lemma}
For the Gaussian kernel $K_{\alpha}$,
let $f(t) = \int \abs{K_{\alpha}(s+t)-K_{\alpha}(s)}\,ds$. Then $f(t) \leq \frac{2}{\alpha\sqrt{2\pi}} \abs{t}$ for all $t \in \mathbb{R}^d$.
\end{lemma}
\begin{proof}
It is an exercise to show that
\begin{equation*}
f(t) = \frac{4}{\sqrt{2\pi}} \int_0^{\abs{t}/2\alpha} e^{-x^2/2}\,dx.
\end{equation*}
It follows that $f(t) \leq \frac{4}{\sqrt{2\pi}} \int_0^{\abs{t}/2\alpha}\,dx = \frac{2}{\alpha\sqrt{2\pi}} \abs{t}$.
\end{proof}
Thus by Theorem~\ref{thm:stability3} we have the following.
\begin{corollary} \label{cor:gaussian}
If $\norm{h}_{\infty} \leq M$ then $h*K_{\alpha}$ is $\left(\frac{2}{\pi}\right)^{\frac{1}{2}} \frac{M}{\alpha}$-Lipschitz.
\end{corollary}
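The lemma's closed form and the resulting linear bound can be checked numerically; in one dimension the right-hand side of the lemma's formula equals $2\,\mathrm{erf}\!\left(\abs{t}/(2\sqrt{2}\,\alpha)\right)$. A sketch (ours):

```python
import math

alpha = 0.7

def K(s):                       # one-dimensional Gaussian kernel K_alpha
    return math.exp(-s * s / (2 * alpha * alpha)) / (alpha * math.sqrt(2 * math.pi))

def f_numeric(t, n=200000, L=10.0):
    # Riemann sum of  int |K_alpha(s + t) - K_alpha(s)| ds  over [-L, L]
    h = 2 * L / n
    return sum(abs(K(-L + (i + 0.5) * h + t) - K(-L + (i + 0.5) * h))
               for i in range(n)) * h

def f_closed(t):
    # (4 / sqrt(2 pi)) * int_0^{|t|/(2 alpha)} e^{-x^2/2} dx
    return 2 * math.erf(abs(t) / (2 * math.sqrt(2) * alpha))

for t in (0.1, 0.5, 2.0):
    bound = 2 * abs(t) / (alpha * math.sqrt(2 * math.pi))
    print(f_numeric(t), f_closed(t), bound)   # first two agree; both <= bound
```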
\subsection{Stable Computations in Practice}
Suppose that we can compute $h(x)$ for values of $x$ for which it is defined, we can sample from $K$, and that for a fixed $a \in \mathbb{R}^d$
we want to compute $g(a) = (h * K)(a) = \int_{\mathbb{R}^d} h(a-x)K(x) dx$. In practice, we will not be able to evaluate this integral analytically.
We approximate $g(a)$ as follows.
Let $V$ be a random variable with probability distribution given by the kernel $K$ (one writes $V \sim K$). Let $W$ be the random variable given by $h(a-V)$. Then the expected value of $W$ is given by $E[W] =
\int_{\mathbb{R}^d} h(a-x)K(x)dx = g(a)$.
We will approximate $E[W]$ by drawing a sample $\epsilon_1,\ldots,\epsilon_M$ where $\epsilon_i \sim K$ are independent.
Then $E[W]$ can be approximated by $\overline{W}_M = \frac{1}{M} \sum_{i=1}^M h(a-\epsilon_i)$.
By the law of large numbers, $\overline{W}_M \to E[W]$, where the convergence may be taken to be in probability (the weak law) or almost surely (the strong law).
This is the justification for the computations in Section~\ref{sec:examples}.
Let us record this result.
\begin{theorem}
Let $a \in \mathbb{R}^d$ and $\epsilon_1,\ldots,\epsilon_M$ be drawn independently from $K$. Then
\[
\frac{1}{M} \sum_{i=1}^M h(a-\epsilon_i) \to g(a).
\]
\label{thm:simulation}
\end{theorem}
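As an illustration of this procedure with a toy function of our own choosing (not a persistence computation), take $h$ to be a step function and $K_{\alpha}$ Gaussian; then $g(a) = \Phi(a/\alpha)$ in closed form, and the Monte Carlo average converges to it:

```python
import math, random

def h(x):
    # bounded and measurable, but discontinuous -- a stand-in for an
    # unstable persistence-derived quantity
    return 1.0 if x > 0 else 0.0

def g_approx(a, alpha, M=200000, seed=0):
    # Monte Carlo estimate of (h * K_alpha)(a) for the Gaussian kernel
    rng = random.Random(seed)
    return sum(h(a - rng.gauss(0.0, alpha)) for _ in range(M)) / M

a, alpha = 0.3, 0.5
exact = 0.5 * (1.0 + math.erf(a / (alpha * math.sqrt(2.0))))  # Phi(a / alpha)
print(g_approx(a, alpha), exact)
```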
If $M$ is large, the cost of computing $h$ a total of $M$ times may be considerable. In the case that $h$ is a persistent homology calculation we suggest the following. Consider the set of filtered complexes $L_i$ produced by $\{a-\epsilon_i\}$. If $\epsilon_i$ and $\epsilon_j$ are sufficiently close then $L_i$ and $L_j$ are isomorphic up to a change of filtration values, and one does not need to repeat the persistence computation. More generally, one may obtain the persistent homology of similar filtered complexes using vineyard updates~\cite{csem:vineyards}.
\subsection{Stability of the choice of kernel}
\label{sec:stability-kernel}
As should be clear, and as borne out by the experiments in Section~\ref{sec:examples}, the value of $(h*K)({\bf a})$,
for fixed $h$ and ${\bf a}$, will certainly depend on $K$. However, there is no fragility of output with respect to this choice, as shown by the following fact.
\begin{theorem}
Let $h: \mathbb{R}^d \to \mathbb{R}$ be an essentially bounded function. Then the map $K \mapsto h*K$ is Lipschitz from $L^1(\mathbb{R}^d)$ to $L^{\infty}(\mathbb{R}^d)$.
\label{thm:choice}
\end{theorem}
\begin{proof}
Let $\phi:L^{1}(\mathbb{R}^d) \to L^{\infty}(\mathbb{R}^d)$ be given by $\phi(K) = h*K$.
For $x \in \mathbb{R}^d$, $\abs{\left[ \phi(K) - \phi(K') \right] (x) } \leq \int \abs{h(x-t)}\,\abs{K(t)-K'(t)}\,dt \leq \norm{h}_{\infty} \norm{K-K'}_1$.
\end{proof}
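On a grid, this inequality can be observed directly. In the sketch below (ours), two Gaussian densities of different widths play the roles of $K$ and $K'$, and all integrals are Riemann sums over the same grid:

```python
import math

n = 400
xs = [-3.0 + 6.0 * i / n for i in range(n + 1)]
dx = 6.0 / n

h  = lambda x: 1.0 if abs(x) < 1.0 else 0.0                       # ||h||_inf = 1
K1 = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)  # N(0, 1)
K2 = lambda x: math.exp(-2.0 * x * x) * math.sqrt(2.0 / math.pi)  # N(0, 1/4)

def conv(f, g, t):              # Riemann-sum convolution (f * g)(t)
    return sum(f(t - s) * g(s) for s in xs) * dx

sup_diff = max(abs(conv(h, K1, t) - conv(h, K2, t)) for t in xs)
l1_diff  = sum(abs(K1(s) - K2(s)) for s in xs) * dx
print(sup_diff, l1_diff)        # sup_diff <= ||h||_inf * l1_diff
```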
\subsection{Choice of bandwidth}
\label{sec:bandwidth}
After choosing a family of kernels, such as the Gaussian kernels $K_{\alpha}$ described in Section~\ref{sec:gaussian}, the most important choice in implementing the method described here is the choice of bandwidth $\alpha$.
Choosing the amount of smoothing is a well-studied problem in nonparametric regression, where increasing the bandwidth decreases the estimation variance, but increases the squared bias. Both of these terms contribute to the error. A bandwidth which optimizes this trade-off may be estimated using cross-validation.
Our situation is somewhat different and a proper understanding of this problem requires analysis that goes beyond the scope of the present paper.
However, we offer some suggestions for the choice of bandwidth. First, it may be chosen to obtain a desired amount of smoothness of $h*K_{\alpha}$. For example, we may want $h*K_{\alpha}$ to be $1$-Lipschitz. Second, it seems reasonable to choose the bandwidth to be at least equal to the estimated noise level of the input data.
One may combine these two to find the minimum bandwidth that satisfies both requirements.
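For the Gaussian family these two suggestions combine into a one-line rule. This is a sketch under our reading of the Gaussian corollary; the function name and the numbers are illustrative:

```python
import math

def gaussian_bandwidth(M, L, noise_level):
    """Smallest Gaussian bandwidth meeting both suggestions:
    (1) h * K_alpha is L-Lipschitz once sqrt(2/pi) * M / alpha <= L,
        i.e. alpha >= sqrt(2/pi) * M / L  (corollary, Gaussian kernel);
    (2) alpha is at least the estimated noise level of the input."""
    return max(math.sqrt(2.0 / math.pi) * M / L, noise_level)

# a function bounded by M = 14, smoothed until 1-Lipschitz:
print(gaussian_bandwidth(M=14.0, L=1.0, noise_level=0.05))   # ~ 11.17
```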
\section{Application to persistent homology computations}
Now let us apply the results of the previous section to persistent homology.
Assume we have a persistent homology computation, $\mathcal{C}$,
with input the real numbers $a_1,\ldots,a_n$.
Let $\mathcal{O}$ be the set of outputs of this computation.
Let $D \subseteq \mathbb{R}^n$ be the set of all inputs for which $\mathcal{C}$ is defined.
If $D \neq \mathbb{R}^n$ then add a state $\emptyset$ to $\mathcal{O}$
and say that the computation sends all points in $\mathbb{R}^n - D$ to $\emptyset$.
Thus we can use this computation to define a function
$H: \mathbb{R}^n \to \mathcal{O}$.
Let $p$ be a real-valued function on $\mathcal{O}$ with $p(\emptyset)=0$.
Let $h = p \circ H: \mathbb{R}^{n} \to \mathbb{R}$.
We will need $h$ to be (Lebesgue) measurable.
To make this less abstract, we show how the instabilities described in Sections \ref{subsec:GCS} and \ref{subsec:PC} can be addressed by this method.
\begin{example}
\emph{Stable persistence located at a point.}
\label{ex:component}
We return to Example \ref{ex:HF}, where we have a geometric line graph $\mathcal{K}$ with $n$ vertices $v_1, \ldots, v_n$, and edges $e_i = (v_i, v_{i+1})$ for $i = 1, \ldots, n-1$.
To produce a filtration of the type used in this example, we just need to know $n$ function values.
More precisely, our persistence computation takes as input a vector ${\bf a} = (a_1,\ldots,a_n) \in \mathbb{R}^n$, from which we obtain a piecewise linear function, $F_{\bf a}$, on $\mathcal{K}$ determined by
$F_{\bf a}(v_i)=a_i$.
Next we consider the corresponding abstract simplicial complex $K$ and filtration $f_{\bf a}$.
Then we compute the persistence diagram $\Dgm_0(f_{\bf a})$.
This defines a function $H: \mathbb{R}^n \to \mathcal{O}$, where $H({\bf a}) = \Dgm_0(f_{\bf a})$.
Now fix a specific vertex $x$ in $K$. A given diagram $\Dgm_0(f_{\bf a})$ either contains a point $u(x) = (b(x),d(x))$ that represents a persistent connected component born at $x$ in the filtration,
or it does not. In the former case, we define $p_x(\Dgm_0(f_{\bf a})) = d(x) - b(x),$ and in the latter we define $p_x(\Dgm_0(f_{\bf a})) = 0$; that is, we map the diagram
to the persistence of the connected component created by this specific vertex.
The discontinuity of the function $h_x = p_x \circ H: \mathbb{R}^n \to \mathbb{R}$ expresses the instability of localizing the persistence of a connected component.
Referring to Figure \ref{fig:NoisyZeroDiag}, suppose that the vectors ${\bf a}$ and ${\bf e}$ produce the functions $f$ and $g$, respectively, and that the vertex $x$ is as marked in the figure. Then $h_x({\bf a})$ is the persistence of $u$, while $h_x({\bf e})$ is the persistence
of $v'$.
Corollaries \ref{cor:triangular} and \ref{cor:epanechnikov}
guarantee that smoothing $h_x$ with a triangular or an Epanechnikov kernel will result in a locally Lipschitz function.
To be able to convolve with the Gaussian kernel and apply Corollary~\ref{cor:gaussian} we need $h_x$ to be essentially bounded. We can arrange this by specifying that the domain $D$ of $\mathcal{C}$ be compact and that $p_x$ be bounded. This requires that all of the persistence pairs in the output of $\mathcal{C}$ be finite, which can be arranged by truncating at some value $M$ or by applying extended persistence \cite{cseh:extendingP}.
The resulting $h_x * K_{\alpha}$ is Lipschitz.
The experiments in Section \ref{subsec:PG} show how this works in practice.
\end{example}
\begin{example}
\emph{Stable persistence located at an edge.}
\label{ex:edge}
We return to Example \ref{ex:FDC}. In this case, $K$ is the full complex on $n$ vertices, and we start with $n$ ordered points in the plane which lead to a piecewise-linear curve $C$.
That is, $\mathcal{C}$ takes as input a vector ${\bf a} \in \mathbb{R}^{2n}$ and places a vertex $v_i$ at $(a_{2i-1}, a_{2i})$, thus creating a curve $C_{{\bf a}}.$ This leads to a filtration
$f_{\bf a}$ of $K$ and finally we produce $\Dgm_1(f_{\bf a}) \in \mathcal{O}$. As before, $H({\bf a}) = \Dgm_1(f_{\bf a})$ defines a function $H: \mathbb{R}^{2n} \to \mathcal{O}.$
If we fix a specific edge $\sigma$, we can proceed as in Example \ref{ex:component} by defining the function $p_{\sigma}$ and thus
$h_{\sigma} = p_{\sigma} \circ H$. For example, taking $\sigma = (A,D)$ in Figure \ref{fig:EC} and letting ${\bf a}$ be the vector which led to that specific point configuration, we have $h_{\sigma}({\bf a})$ equal to the persistence of $u$. As above, $g_{\sigma} = h_{\sigma}* K_{\alpha}$ is (locally) Lipschitz.
\end{example}
\begin{example}
\emph{Stable persistence of generating cycles.}
Instead of tracking which $j$-simplex creates a persistent homology class, a persistent homology algorithm may record a $j$-cycle $\gamma$ that represents the persistent class. In this case, we can define $p_{\gamma}: \mathcal{O} \to \mathbb{R}$ to be $d-b$ if $\gamma$ represents a persistence pair $[b,d)$ and $0$ otherwise. Let $h_{\gamma} = p_{\gamma} \circ H$; then $g_{\gamma} = h_{\gamma} * K_{\alpha}$ is (locally) Lipschitz.
\end{example}
\begin{example}
\emph{Stability in density-thresholding choice.}
\label{ex:DTC}
Let $Y$ be the point cloud on the bottom-left of Figure \ref{fig:FuzzyOutliers}, which we recall was created from the point cloud on the top-left by adding three outlier points. Consider any de-noising process parametrized by some real numbers.
For a specific example, let ${\bf k} = (\delta, \epsilon)$.
For each $y \in Y$, let $C_{\delta}(y) = \{x \in Y \mid ||x - y|| \leq \delta\}$.
Then define
$$
Y_{\epsilon}^{\delta} = \{y \in Y \mid \frac{|C_{\delta}(y)|}{|Y|} \geq \epsilon\}.
$$
One then applies the Rips construction to obtain a filtered abstract simplicial complex from $Y_{\epsilon}^{\delta}$, and then computes $\Dgm_1(Y_{\epsilon}^{\delta}).$
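The thresholding step itself is only a few lines. The following sketch (pure Python, with our own naming) computes $Y_{\epsilon}^{\delta}$ for a point cloud given as a list of coordinate tuples:

```python
def density_threshold(Y, delta, epsilon):
    """Keep the points of Y whose closed delta-ball contains at least a
    fraction epsilon of the whole cloud."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return [y for y in Y
            if sum(1 for x in Y if dist(x, y) <= delta) / len(Y) >= epsilon]

# a tight cluster plus one outlier: the outlier is removed
Y = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
print(density_threshold(Y, delta=0.5, epsilon=0.5))
```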
We may consider the input of our persistent homology computation $\mathcal{C}$ to be $a_1,\ldots,a_{2n},\delta,\epsilon$: that is, the coordinates of the vertices and the parameter values.
However, we may also take the coordinates to be fixed and only consider the parameters to be our input.
Doing this, we obtain $H: \mathbb{R}^2 \to \mathcal{O}.$
In this case, define $p(D) = \max_{u \in D} \mathrm{pers}(u)$ for any degree-one diagram $D$, where $\mathrm{pers}(u)$ denotes the persistence of the point $u$.
Then the discontinuity of the function $h: \mathbb{R}^2 \to \mathbb{R}$ given by
\[
{\bf k} = (\delta, \epsilon) \mapsto \Dgm_1(Y_{\epsilon}^{\delta}) \mapsto p(\Dgm_1(Y_{\epsilon}^{\delta}))
\]
expresses the instability of the threshold-parameter choice referred to in Section \ref{subsec:PC}.
If ${\bf k}$ is chosen so that the three outlier points are cleaned up, then $h({\bf k})$ will be the persistence of
the most prominent point on the top-right of Figure \ref{fig:FuzzyOutliers}. On the other hand, a very nearby choice of ${\bf k}$ might
fail to clean up these points, and we would get the persistence of the most prominent point on the bottom-right of Figure~\ref{fig:FuzzyOutliers}.
As above, $g = h * K_{\alpha}$ is (locally) Lipschitz.
\end{example}
\section{Experiments and Interpretations} \label{sec:examples}
This section more deeply investigates some of the examples above, both via a few proof-of-principle experiments with synthetic data and via some suggested
interpretations of the results. All experiments were run in MATLAB, using TDATools
for the persistent homology computations.
\subsection{Experiments}
\label{subsec:PG}
\subsubsection{First line graph experiment.}
First we explore Example \ref{ex:HF} (though with a different case than the one in Figure~\ref{fig:ZeroDiag}), where the input to a persistent homology computation is a choice of function-values on the vertices
of a simplicial complex. Specifically, we consider a line graph $\mathcal{K}$ with vertices $v_1, \ldots ,v_7$, and the initial input choice
${\bf a} = (10,11,12.5, 13, 9.9,20,1).$ The left side of Figure \ref{fig:PathGraph} shows the graph of the PL-function $F_{\bf a}$, and the persistence
diagram $H({\bf a})$ is in the middle. Note that the high-persistence dot $(9.9,20)$ and the medium-persistence one $(10,13)$ are created by the additions
of $v_5$ and $v_1$, respectively; that is, $h_5({\bf a}) = 10.1$ and $h_1({\bf a}) = 3.$ These values are of course unstable to perturbations
of ${\bf a}$: for instance, if we switch the first and fifth entries of ${\bf a}$, the reader can check that $h_5((9.9,11,12.5,13,10,20,1)) = 3.$
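These values can be reproduced with a short union-find sweep. The following is a self-contained sketch of $h_x$ for line graphs (our own code, not the TDATools routines used for the experiments; ties in function values are broken arbitrarily):

```python
def h_vertex(a, x):
    """Persistence of the connected component created by vertex v_x
    (1-based) in the lower-star filtration of the line graph on len(a)
    vertices.  The surviving global-min component is paired with the
    global max, following the extended persistence convention."""
    n = len(a)
    parent = list(range(n))
    creator = list(range(n))    # creator[r]: vertex that created root r's component
    death = {}                  # creator vertex -> death value
    added = [False] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    order = sorted(range(n), key=lambda i: a[i])
    for v in order:
        added[v] = True
        for u in (v - 1, v + 1):
            if 0 <= u < n and added[u]:
                ru, rv = find(u), find(v)
                if ru != rv:
                    # elder rule: the younger component dies at level a[v]
                    old, young = sorted((ru, rv), key=lambda r: a[creator[r]])
                    death[creator[young]] = a[v]
                    parent[young] = old
    death[creator[find(order[0])]] = max(a)   # extended persistence pair
    return death[x - 1] - a[x - 1]

a = (10, 11, 12.5, 13, 9.9, 20, 1)
print(h_vertex(a, 5), h_vertex(a, 1), h_vertex(a, 7))   # ~10.1, 3, 19
print(h_vertex((9.9, 11, 12.5, 13, 10, 20, 1), 5))      # 3
```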
\begin{figure}[ht]
\begin{minipage}[b]{0.28\linewidth}
\centering
\includegraphics[width=\textwidth,scale=0.1]{PathGraph.pdf}
\end{minipage}
\begin{minipage}[b]{0.28\linewidth}
\centering
\includegraphics[width=\textwidth,scale=0.05]{PathGraphZeroDiag.pdf}
\end{minipage}
\begin{minipage}[b]{0.28\linewidth}
\centering
\includegraphics[scale=0.3]{PathGraphConvolutionGraphs.pdf}
\end{minipage}
\caption{Input and results of the first line graph experiment. Left: a graph of the function $F_{\bf a}$ defined on a line graph with seven vertices. Middle: the PD
$H({\bf a}) = \Dgm_0(F_{\bf a})$. We follow the extended persistence convention and pair the global min with the global max.
Right: results of the experiment. The middle graph shows the value of $g_{5,\alpha}({\bf a})$ plotted against $1000\,\alpha$, the bottom graph shows $g_{1,\alpha}({\bf a})$ against $1000\,\alpha$, and the top graph shows their sum.}
\label{fig:PathGraph}
\end{figure}
Let $K_{\alpha}$ be a seven-dimensional Gaussian kernel with mean at the origin and bandwidth $\alpha$. For each $i = 1, \ldots, 7,$ put
$g_{i,\alpha} = h_i * K_{\alpha}$.
The right side of Figure \ref{fig:PathGraph} shows graphs of the approximate values of $g_{5,\alpha}({\bf a})$ and $g_{1,\alpha}({\bf a})$, plotted against $\alpha$, as well as a graph of their sum. There were $100$ evenly spaced values of $\alpha$ used, ranging from $\alpha = 0.001$ to $\alpha = 0.1$.
To make these graphs, we followed the approximation procedure suggested by Theorem \ref{thm:simulation}. For each fixed $\alpha$, we took $M = 1000$ independent
draws $\bm{\epsilon}_1, \ldots, \bm{\epsilon}_{1000}$ from $K_{\alpha}$, and computed
$$
g_{5,\alpha}({\bf a}) \approx \frac{1}{1000} \sum_{i=1}^{1000} h_5({\bf a} + \bm{\epsilon}_i),
$$
with an identical procedure for $g_{1,\alpha}({\bf a}).$
\subsubsection{Second line graph experiment.}
Again we explore Example \ref{ex:HF}, this time with the input ${\bf a} = (5,1.1,1,1.05,15)$ to a persistent homology computation that builds a filtration
on a line graph with five vertices $v_1, \ldots v_5$. The function $F_{\bf a}$, whose graph is on the left of Figure \ref{fig:FlatGraph}, has a global min at $v_3$.
From the diagram in the middle, we see $h_3({\bf a}) = 15 -1 = 14$. Note that $h_i({\bf a}) = 0$ for $i \neq 3$, since only one component
is created during the entire filtration. On the right, we see convolved values of these functions, with notation and computation procedure exactly as above.
\begin{figure}[ht]
\begin{minipage}[b]{0.28\linewidth}
\centering
\includegraphics[width=\textwidth,scale=0.1]{FlatPath.pdf}
\end{minipage}
\begin{minipage}[b]{0.28\linewidth}
\centering
\includegraphics[width=\textwidth,scale=0.05]{FlatGraphZeroDiag.pdf}
\end{minipage}
\begin{minipage}[b]{0.28\linewidth}
\centering
\includegraphics[scale=0.3]{FlatPathConvolutionGraphs.pdf}
\end{minipage}
\caption{Input and results of the second line graph experiment. Left: a graph of the function $F_{\bf a}$ defined on a line graph with five vertices. Middle: the PD
$H({\bf a}) = \Dgm_0(F_{\bf a})$. We follow the extended persistence convention and pair the global min with the global max.
Right: results of the experiment. Moving from bottom to top: the values of $g_{2,\alpha}({\bf a})$, $g_{4,\alpha}({\bf a})$, $g_{3,\alpha}({\bf a})$, and their sum, all
plotted against $1000\,\alpha$.}
\label{fig:FlatGraph}
\end{figure}
\subsubsection{Distance-to-a-curve experiment.}
Next we reconsider Example \ref{ex:FDC}.
Let $C$ be the PL-curve with nine vertices on the left side of Figure \ref{fig:SmallBigDiamond}.
In our language, $C = C_{\bf a}$, where the input vector ${\bf a}$ specifies the coordinates of the nine vertices:
$v_1 = (0,0.1), v_2 = (1,1), v_3 = (2,0.12), v_4 = (7,5), v_5 = (12,0),
v_6 = (7,-5), v_7 = (2,-0.12), v_8 = (1,-1), v_9 = (0,-0.1).$
Following the vocabulary of Example \ref{ex:FDC}, this curve placement leads to an order-preserving function $f_{{\bf a}}$ on the abstract
full complex $K$ on nine vertices.
Its degree-one PD $H({\bf a}) = \Dgm_1(f_{{\bf a}})$, in the middle of the same figure, has only two off-diagonal points.
The first, at $(0.2,10)$, is created by the positive edge between $v_1$ and $v_9$, while the second, at $(0.23,2)$, comes
from the edge between $v_3$ and $v_7$.
Thus we have $h_{1,9}({\bf a}) = 9.8$ and $h_{3,7}({\bf a}) = 1.77$. As usual, these values are highly unstable to small perturbations in the vertex positions.
\begin{figure}[ht]
\begin{minipage}[b]{0.28\linewidth}
\centering
\includegraphics[width=\textwidth,scale=0.1]{SmallBigDiamond.pdf}
\end{minipage}
\begin{minipage}[b]{0.28\linewidth}
\centering
\includegraphics[width=\textwidth,scale=0.05]{SmallBigDiamondOneDiag.pdf}
\end{minipage}
\begin{minipage}[b]{0.28\linewidth}
\centering
\includegraphics[scale=0.3]{DiamondConvolutionGraphs.pdf}
\end{minipage}
\caption{Input and results of distance-to-curve experiment. Left: the PL plane curve $C_{{\bf a}}$ whose nine vertices are defined in the text. Middle: the PD
$H({\bf a}) = \Dgm_1(F_{{\bf a}})$.
Right: results of the experiment. The top graph shows the value of $g_{1,9,\alpha}({\bf a})$ plotted against $10000\,\alpha$, and the bottom graph shows $g_{3,7,\alpha}({\bf a})$ against $10000\,\alpha$.}
\label{fig:SmallBigDiamond}
\end{figure}
In very similar fashion to the last experiment, we then computed approximate values at ${\bf a}$ for the convolved functions
$g_{1,9,\alpha} = h_{1,9} * K_{\alpha}$ and $g_{3,7,\alpha} = h_{3,7} * K_{\alpha}$
where $K_{\alpha}$ was a nine-dimensional Gaussian kernel with bandwidth $\alpha$. This time we used $100$ evenly spaced values of $\alpha$, going
from $0.0001$ to $0.01$. The results appear on the right side of Figure \ref{fig:SmallBigDiamond}.
\subsection{Interpretations}
We now offer some possible interpretations one can draw from these results, and also suggest some potential uses of this technique in practice.
\subsubsection{Locating a point in the domain.}
Let $u = (9.9,20)$ be one of the high-persistence points in the diagram for our first experiment. It is strictly accurate to say that $u$ was created, for this specific persistent homology computation, by the addition of $v_5$ to the filtration. It is also a potentially misleading thing to say.
We propose that the difference between the persistence of $u$ and the values of the convolutions $g_{5,\alpha}({\bf a})$ might be seen as an indicator for how confidently one should locate $u$ at $v_5$. The graphs on the right side of Figure \ref{fig:PathGraph} tell us that this confidence should be low. On the other hand, the other high-persistence point $w = (1,20)$ is created by the addition of $v_7$.
It turns out that $g_{7,\alpha}({\bf a})$ remains very close to $19$ for all $\alpha$ within a reasonable range.
\subsubsection{Spreading out a point in the domain.}
Alternatively, one might choose to give $u$ a fuzzier location. A reasonable idea would be to spread out its location between vertices $v_5$ and $v_1$, since $v_1$ is responsible for creating the same component in a very nearby filtration. The graphs in Figure \ref{fig:PathGraph} bear this out: note that the sum of the two convolution values $g_{5,\alpha}({\bf a}) + g_{1,\alpha}({\bf a})$ is always very close to the sum of the persistences of the components created by $v_1$ and $v_5$.
Similarly, in the second experiment, it would be reasonable to smear the location of the only point throughout the immediate neighborhood of $v_3$.
\subsubsection{Convolved values as features.}
One could also use the values of $g_i$ or $g_{i,j}$ as features in a machine-learning scheme. That is, the vector $(g_{1,\alpha}({\bf a}), \ldots, g_{7,\alpha}({\bf a}))$ could be used as a summary feature of both the filtration created by ${\bf a}$ and the noise model $K_{\alpha}$. The stabilities offered by Theorems \ref{thm:stability1} and \ref{thm:choice} make this an appealing option.
\section{Discussion}
\label{sec:Disc}
Persistence diagrams have already been used to produce features for machine-learning and statistical methods. This paper takes a first step towards the extraction of stable features that describe much of the other information produced during a persistent homology computation.
We plan to demonstrate the utility of these new features in applications: for example, by exploring the locations of the distinguishing persistence points in the brain artery dataset from \cite{Bendich2015trees}.
It would also be nice to build a set of visualization tools. For example, one might want to compute a PD, click on a point, and have the possible location candidates shown on the domain, perhaps with some sort of heat map of likelihood.
Finally, we also hope to enrich the theory whose development has started here.
It would be nice to work instead in the category of topological spaces, to define
some versions of the functions $h_x$ and $g_x$, and to prove Lipschitz-continuity of the latter. We believe that the direction suggested by Example \ref{ex:regression} may be the right one.
\subparagraph*{Acknowledgments}
The authors would like to thank Justin Curry, Francis Motta, Chris Tralie, and Ulrich Bauer for helpful conversations. The first author would like to thank the University of Florida for hosting him during the initial research phase. The first author was partially supported by the NSF awards BIGDATA 1444791 and WBSE 3331753 and the Air Force Research Laboratory, under STTR \# FA8750-16-C-0220. The second author was partially supported by AFOSR award FA9550-13-1-0115.
\bibliographystyle{plain}
| {
"timestamp": "2017-05-01T02:02:13",
"yymm": "1512",
"arxiv_id": "1512.01700",
"language": "en",
"url": "https://arxiv.org/abs/1512.01700",
"abstract": "We propose a general technique for extracting a larger set of stable information from persistent homology computations than is currently done. The persistent homology algorithm is usually viewed as a procedure which starts with a filtered complex and ends with a persistence diagram. This procedure is stable (at least to certain types of perturbations of the input). This justifies the use of the diagram as a signature of the input, and the use of features derived from it in statistics and machine learning. However, these computations also produce other information of great interest to practitioners that is unfortunately unstable. For example, each point in the diagram corresponds to a simplex whose addition in the filtration results in the birth of the corresponding persistent homology class, but this correspondence is unstable. In addition, the persistence diagram is not stable with respect to other procedures that are employed in practice, such as thresholding a point cloud by density. We recast these problems as real-valued functions which are discontinuous but measurable, and then observe that convolving such a function with a suitable function produces a Lipschitz function. The resulting stable function can be estimated by perturbing the input and averaging the output. We illustrate this approach with a number of examples, including a stable localization of a persistent homology generator from brain imaging data.",
"subjects": "Computational Geometry (cs.CG); Algebraic Topology (math.AT)",
"title": "Stabilizing the unstable output of persistent homology computations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9732407168145568,
"lm_q2_score": 0.7279754489059775,
"lm_q1q2_score": 0.7084953477166523
} |
https://arxiv.org/abs/2212.03104 | On LC-subgroup of a periodic group | As a natural continuation of study $LCM$-groups, we explore other properties of $LCM$-groups and $LC$-series. We obtain some characterizations of finite groups which are not LCM-groups but all proper sections are $LCM$-groups. Also, for a $p$-group $G$, we prove that $G$ is a $LC$-nilpotent group and we obtain a bound for its $LC$-nilpotency. Finally, as an application, we prove that a finite supersolvable group, groups of order $pq, pq^2$ and $pqr$ are $LC$-nilpotent groups, where $p,q$ and $r$ are prime numbers. | \section{Introduction}
Let $G$ be a periodic group, and let
$LCM(G)$ be the set of all $x \in G$ such that
$o(x^ny)$ divides the least common multiple of $o(x^n)$ and $o(y)$ for all $y \in G$ and all integers $n$. The subgroup of $G$ generated by $LCM(G)$ is denoted by $LC(G)$. A group $G$ is said to be an $LCM$-group if $G=LCM(G)$.
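For intuition, the defining condition can be checked by brute force on a small group. The sketch below (an illustration, not code from \cite{mohsen}) represents $S_3$ by permutation tuples and finds that $LCM(S_3)$ consists of the identity and the two $3$-cycles, so that $LC(S_3)=A_3$ is abelian:

```python
from math import gcd
from itertools import permutations

def lcm(a, b):
    return a * b // gcd(a, b)

# S_3 as permutation tuples of (0, 1, 2); compose(p, q) applies q, then p.
G = list(permutations(range(3)))
identity = tuple(range(3))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))

def order(p):
    k, q = 1, p
    while q != identity:
        q = compose(q, p)
        k += 1
    return k

def in_LCM(x):
    # Collect all powers x^n; the group is finite, so this covers every n.
    powers, q = [identity], x
    while q != identity:
        powers.append(q)
        q = compose(q, x)
    # o(h y) must divide lcm(o(h), o(y)) for every power h of x and y in G.
    return all(lcm(order(h), order(y)) % order(compose(h, y)) == 0
               for h in powers for y in G)

LCM_S3 = [x for x in G if in_LCM(x)]
print(len(LCM_S3))  # 3: the identity and the two 3-cycles, i.e. A_3
```

A transposition $x$ fails the test because the product of two distinct transpositions is a $3$-cycle, whose order $3$ does not divide $lcm(2,2)=2$.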
In \cite{mohsen} the authors introduced the $LCM$-groups and $LC$-series. They prove many results about these groups; for instance:
\begin{thm} Let $G$ be a locally finite group.
Then
$ LC(G)$ is a locally nilpotent subgroup of $G$.
\end{thm}
Therefore, as a natural continuation, in this paper we explore some properties of the $LCM$-groups and $LC$-series defined in \cite{mohsen}.
A major motivation for studying how, in periodic groups, the orders of elements influence the structure of the group is certainly the well-known Burnside problem. Here our focus is to understand how the finite orders of products of elements can determine the structure of the group. In \cite{mohsen} it was proven that, for a periodic group $G$, $LCM(G)$ is an $Aut(G)$-invariant subgroup of $G$ (see Lemma 2.4 in \cite{mohsen}). The main ingredient in our proofs is the fact that the behavior of the $p$-elements in $LCM(G)$ suffices to determine all information about $LCM(G)$.
Let $G$ be a group.
As defined in \cite{mohsen}, set $LC_1(G)=LC(G)$ and for $i=2,3,\ldots$, define
$LC(\frac{G}{LC_{i-1}(G)})=\frac{LC_{i}(G)}{LC_{i-1}(G)}$.
We say the group $G$ is an $LC$-nilpotent group whenever
there exists a finite $LC$-series
\begin{equation}\label{eq11}
LC_0=1\leq LC_1(G)\leq LC_2(G)\leq\ldots\leq LC_k(G)=G
\end{equation}
such that $\frac{LC_i(G)}{LC_{i-1}(G)}$ is a nilpotent group for all $i=1,2,\ldots,k$. In this case the $LC$-series (\ref{eq11}) is called an $LC$-nilpotent series for $G$.
A subgroup series of a group $G$ is a finite chain of subgroups of $G$, each contained in the next. Subgroup series can simplify the study of a group to the study of simpler subgroups and their relations, and several subgroup series can be invariantly defined and are important invariants of groups. We show that any finite supersolvable group is an $LC$-nilpotent group (Corollary \ref{7}).
For any real number $r$, let $\lfloor r\rfloor$ be the greatest integer $n$ such that $n\leq r$. In this paper, we establish a connection between the nilpotency class of a finite $p$-group and its $LC$-nilpotency class, for $p$ a prime number.
\begin{thm}\label{222}
Let $G$ be a finite $p$-group with nilpotency class $t$.
Then $G$ is an $LC$-nilpotent group of class at most $\lfloor t/(p-1)\rfloor+1$.
\end{thm}
Note that this bound is sharp, because $G= C_p \wr C_p$ is an $LC$-nilpotent group of class $2=\lfloor p/(p-1)\rfloor+1$, for all prime numbers $p$.
In \cite{mohsen} the following question was posed:
\begin{que1}[Question 3.7]
What is the set of all LC-nilpotent groups of class two?
\label{questao}
\end{que1}
By Theorem \ref{222}, a finite $p$-group $G$ with nilpotency class less than $2(p-1)$ is an $LC$-nilpotent group of class two whenever $G\not\in CP2$.
For a $p$-group $G$ and an integer $n$, we denote the subgroups $\langle\{x\in G \mid x^{p^{n}}=1\}\rangle$ and $\langle\{x^{p^n} \mid x\in G\}\rangle$ by $\Omega_{n}(G)$ and $\mho_{n}(G)$, respectively.
Let $CP2$ be the class of finite groups $G$ such that $o(xy)\leq max\{o(x), o(y)\}$ for all $ x\neq y \in G$ \cite{Deb}.
A periodic group $G$ is called an $LCM$-group whenever $LCM(G)=G$.
In \cite{mohsen} authors prove that:
\begin{thm} \label{cp2} Let $G$ be a finite group.
Then $G$ is a $LCM$-group if and only if $G$ is a nilpotent group and all Sylow subgroups of $G$ are in $CP2$.
\end{thm}
The natural question is the following: what can we say about the structure of a finite group which is not an $LCM$-group but all of whose proper sections are $LCM$-groups?
We classify all such finite groups.
In what follows, we adopt the notation established in Isaacs' book on finite groups \cite{I}.
\section{$LCM_p$ of periodic groups}
We shall need the following results.
\begin{thm}(Theorem D in \cite{Deb}) \label{a} A finite group $G$ is contained in $CP2$ if and only if one of the following statements holds:
\begin{enumerate}
\item $G$ is a $p-$group and $\Omega_{n}(G)=\{x\in G\ | \ x^{p^{n}}=1\}$ for all integers $n$.
\item $G$ is a Frobenius group of order $p^{\alpha}q^{\beta}$, $p<q$, with kernel $F(G)$ of order $p^{\alpha}$ and cyclic complement.
\end{enumerate}
\end{thm}
\begin{thm}(Theorem 2.6 in \cite{mohsen})\label{12} Let $G$ be a finite group.
Then $G$ is an $LCM$-group if and only if $G$ is a nilpotent group and each Sylow subgroup of $G$ belongs to $CP2$.
\end{thm}
By Lemma 2.4 of \cite{mohsen}, if $G$ is a periodic group, then $LCM(G)$ is an $Aut(G)$-invariant subgroup of $G$.
Let $G$ be a periodic group and $p$ a prime number. We denote the set of all $p$-elements $x\in G$ such that
$o(hy)\mid lcm(o(h),o(y))$ for any $p$-element $y$ of $G$ and all $h\in \langle x\rangle$ by $LCM_p(G)$.
\begin{lem}\label{ces}
Let $G$ be a periodic group and $p$ a prime number.
(i) For any $\sigma\in Aut(G)$, we have $LCM_p(G)^{\sigma}=LCM_p(G)$.
(ii) The subgroup generated by $LCM_p(G)$ is a $p$-group.
\end{lem}
\begin{proof}{
(i) Let $x\in LCM_p(G)$ and let $\sigma\in Aut(G)$.
Let $y$ be a $p$-element of $G$, and let $h\in \langle x\rangle$.
Then there exists a $p$-element $z\in G$ such that $\sigma(z)=y$.
Then $$o(\sigma(h)y)=o(\sigma(hz))=o(hz)\mid lcm(o(h),o(z))=lcm(o(\sigma(h)),o(y)).$$
(ii) Let $x\in LCM_p(G)$.
Let $S$ be a subset of $x^G$ such that $F:=\langle x^G\rangle=\langle S\rangle$ but $F\neq \langle T\rangle$ for all proper subsets $T$ of $S$.
Any $w\in F$ is just a finite sequence $w=s_{1}\ldots s_{r}$ whose entries $ s_{1},\ldots ,s_{r}$ are elements of $S\cup S^{-1}$. The integer $r$ is called the length of the element $w$ and its norm $|w|$ with respect to the generating set $S$ is defined to be the shortest length of $w$ over $S$.
Let $y\in F.$ We claim that $o(y)\mid o(x)$.
We proceed by induction on $|y|$. The case $|y|=0$ is trivial.
So suppose that the result is true for all $a\in F$ with $|a|<|y|$. There are $s_1,\ldots,s_k\in S$, $\epsilon_i\in\{1,-1\}$ and positive integers $n_1,\ldots,n_k$ such that
$y=(s_1)^{\epsilon_1n_1}\ldots(s_k)^{\epsilon_kn_k}$.
We may assume that $n_1>0$.
By induction hypothesis, we have $o((s_1)^{\epsilon_1(n_1-1)}\ldots(s_k)^{\epsilon_kn_k})\mid o(x)$.
From part (i), $x^g,(x^g)^{-1}\in LCM_p(G)$ for all $g\in G$, so we have
$S\subseteq LCM_p(G)$, and so $s_1\in LCM_p(G)$. Consequently,
\begin{eqnarray*}
o(y)&=&o(s_1^{\epsilon_1}(s_1)^{\epsilon_1(n_1-1)}(s_2)^{\epsilon_2 n_2}\ldots(s_k)^{\epsilon_k n_k})\\
&\mid& lcm(o(s_1), o((s_1)^{\epsilon_1(n_1-1)}(s_2)^{\epsilon_2 n_2}\ldots(s_k)^{\epsilon_k n_k}))\\
&\vdots&\\
&\mid& lcm(o(s_1),o(x))\\
&=&o(x).
\end{eqnarray*}
Let $x_1,x_2\in LCM_p(G)$.
Since $\langle x_1^G,x_2^G\rangle=\langle x_1^G\rangle\langle x_2^G\rangle$, we have $o(x_1x_2)\mid lcm(o(x_1),o(x_2))$. It follows that $\langle LCM_p(G)\rangle$ is a $p$-group.
}
\end{proof}
The following proposition shows that $LCM_p(G)$ is a subset of $LCM(G)$ whenever $G$ is a periodic group.
\begin{prop}\label{equ}
Let $G$ be a periodic group, and let $g\in G$ be a $p$-element. Then $g\in LCM_p(G)$ if and only if
$g\in LCM(G)$.
\end{prop}
\begin{proof}
If $g\in LCM(G)$, then by definition of $LCM(G)$, we have
$o(hy)\mid lcm(o(h),o(y))$ for any $p$-element $y$ of $G$ and all $h\in \langle g\rangle$, so $g\in LCM_p(G)$.
So suppose that $g\in LCM_p(G)$.
Then for all $p$-elements $y\in G$, we have $o(hy)\mid lcm(o(h),o(y))$ for any $h\in \langle g\rangle$.
Let $z\in G$ and let $h\in \langle g\rangle$. Then $o(z)=p^ka$ where $p\nmid a$.
There exist $y,v\in G$ such that $z=yv=vy$ where $o(y)=p^k$ and
$o(v)=a$.
Then
$(zh)^{a}=h^zh^{z^2}...h^{z^a}z^a=h^zh^{z^2}...h^{z^a}y^a$.
From Lemma \ref{ces}, $ \langle h^G\rangle$ is a normal $p$-subgroup of $G$, and so $\langle h^G\rangle\langle y\rangle$ is a $p$-group. Hence
$h^{z^i}...h^{z^a}z^a$ is a $p$-element for $i=1,...,a$.
Consequently,
\begin{eqnarray*}
o((zh)^a)=o(h^zh^{z^2}...h^{z^a}y^a)&\mid& lcm(o(h^z),o(h^{z^2}...h^{z^a}y^a))\\&\mid&
lcm(o(h^z),o(h^{z^2}),o(h^{z^3}h^{z^4}...h^{z^a}y^a))\\&\mid&
\vdots
\\&\mid& lcm(o(h^z),\cdots, o(h^{z^a}),o(y))\\&=&
lcm(o(h),o(y)).
\end{eqnarray*}
Then $$o(zh)\mid a \cdot lcm(o(h),o(y))= lcm(o(h),a\cdot o(y))=lcm(o(h),o(z)),$$ and so $g\in LCM(G)$.
\end{proof}
\begin{lem}
Let $G$ be a periodic group, and let $x,y\in LCM(G)$ such that
$xy=yx$ and $gcd(o(x),o(y))=1$. Then $xy\in LCM(G).$
\end{lem}
\begin{proof}
Let $z\in\langle x\rangle \langle y\rangle$.
Then $z=x^ny^m$ for some integers $n$ and $m$. We have $o(z)= lcm(o(x^n),o(y^m)).$
Let $g\in G$.
We have $$o(zg)=o(x^ny^mg)\mid lcm(o(x^n),o(y^mg))\mid lcm(o(x^n),o(y^m),o(g))=lcm(o(x^ny^m),o(g)).$$
Therefore $xy\in LCM(G).$
\end{proof}
\begin{prop}\label{pro}
Let $G$ and $H$ be two periodic groups.
Then $LCM(G)\times LCM(H)\subseteq LCM(G\times H)$.
In addition, if
$gcd(exp(G),exp(H))=1$, then $LCM(G)\times LCM(H)=LCM(G\times H)$.
\end{prop}
\begin{proof}
Let $g\in \langle g_1\rangle\subseteq LCM(G)$ and $h\in \langle h_1\rangle \subseteq LCM(H)$.
Let $x\in G$ and $y\in H$.
Since $H\cap G=1$, we have $o(xy)=lcm(o(x),o(y))$ and $o(gh)=lcm(o(g),o(h)).$
We have
$$o(ghxy)\mid lcm(o(gx),o(hy))\mid lcm(o(g),o(x),o(h),o(y))=lcm(o(gh),o(xy)).$$
Hence $g_1h_1\in LCM(G\times H).$
Suppose that $gcd(exp(G),exp(H))=1$.
By the first part we need to prove that $LCM(G\times H)\subseteq LCM(G)\times LCM(H)$.
Let $xt\in LCM(G\times H)$ where $x\in G$ and $t\in H$, and let $n$ be an integer.
Then $(xt)^n=x^nt^n\in LCM(G\times H)$.
Set $x^n=g$ and $t^n=h$.
Let $y\in G$. Then $o(ghy)\mid lcm(o(gh),o(y))$.
Since $gh=hg$, we have $o(gh)\mid lcm(o(g),o(h))= o(g)o(h)$.
We have $(gh)^{o(g)}=g^{o(g)}h^{o(g)}=h^{o(g)}$.
Since $gcd(o(g),o(h))=1$, we have $o(h^{o(g)})=o(h)$.
Therefore $o(h)\mid o(gh)$. By a similar argument $o(g)\mid o(gh)$.
Then $o(g)o(h)=lcm(o(g),o(h))\mid o(gh).$
Since $gcd(exp(G),exp(H))=1$, we have
$o(gh)=o(g)o(h)$ and $o(ghy)=o(gyh)=o(gy)o(h)$. Therefore $$o(gy)o(h)=o(ghy)\mid lcm(o(gh),o(y))=lcm(o(g),o(y),o(h))=lcm(o(g),o(y))o(h).$$
Consequently, $o(gy)\mid lcm(o(g),o(y))$, and so $x\in LCM(G)$.
By the same argument as the above, $t\in LCM(H)$.
Therefore $xt\in LCM(G)\times LCM(H)$.
Hence $LCM(G\times H)\subseteq LCM(G)\times LCM(H)$.
\end{proof}
The following example shows that $LC(G)\times LC(H)$ can be a proper subgroup of $LC(G\times H)$.
\begin{exam} Let $T=\langle x,y\rangle\times \langle a,b\rangle$ where
$\langle x,y\rangle\cong\langle a,b\rangle\cong D_8$ and $o(x)=o(a)=4$ and
$o(y)=o(b)=2$. We have $LC(\langle x,y\rangle)=\langle x\rangle$ and $LC(\langle a,b\rangle)=\langle a\rangle$. Therefore $LC(\langle x,y\rangle)\times LC(\langle a,b\rangle)=\langle x\rangle \times \langle a\rangle$. But
$xya\in LCM(T)\setminus \langle x\rangle \times \langle a\rangle$.
\end{exam}
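The first claim in the example, $LC(\langle x,y\rangle)=\langle x\rangle$ for $D_8$, can be confirmed by brute force. The sketch below (illustrative code, not part of the paper) realizes $D_8$ as permutations of the four corners of a square and checks that the elements satisfying the $LCM$ condition are exactly the four rotations:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

n = 4
identity = tuple(range(n))
compose = lambda p, q: tuple(p[q[i]] for i in range(n))  # q first, then p

def order(p):
    k, q = 1, p
    while q != identity:
        q = compose(q, p)
        k += 1
    return k

def closure(gens):
    # Generate the whole group from a generating set by repeated products.
    G = {identity}
    frontier = set(gens)
    while frontier:
        G |= frontier
        frontier = {compose(a, b) for a in G for b in G} - G
    return G

r = (1, 2, 3, 0)          # rotation of the square, order 4 (plays x)
s = (0, 3, 2, 1)          # a reflection, order 2 (plays y)
D8 = closure({r, s})
assert len(D8) == 8

def in_LCM(G, x):
    powers, q = [identity], x
    while q != identity:
        powers.append(q)
        q = compose(q, x)
    return all(lcm(order(h), order(y)) % order(compose(h, y)) == 0
               for h in powers for y in G)

LCM_D8 = {x for x in D8 if in_LCM(D8, x)}
print(sorted(order(x) for x in LCM_D8))  # [1, 2, 4, 4]: exactly <r>
```

A reflection fails the condition because the product of two suitable reflections is a rotation of order $4$, which does not divide $lcm(2,2)=2$.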
Let $G$ be a periodic group and $N$ a normal subgroup of $G$. The following example shows that $\frac{LC(G)N}{N}$ is not necessarily a subgroup of $LC(\frac{G}{N})$.
\begin{exam} Let $G=\langle x,y\rangle\times \langle a,b\rangle$ where
$\langle x,y\rangle\cong\langle a,b\rangle\cong D_8$ and $o(x)=o(a)=4$ and
$o(y)=o(b)=2$. Clearly,
$xya\in LCM(G)$. Let $N=\langle a^2\rangle$.
Then $o(xyaN)=2$.
Since $o(xyay^{-1}N)=o(xaN)=4$ does not divide $lcm(o(xyaN),o(y^{-1}N))=2$, we have
$xyaN\not\in LC(\frac{G}{N})$. Hence $\frac{LC(G)N}{N}$ is not a subgroup of
$LC(\frac{G}{N})$.
\end{exam}
\begin{cor}\label{42}
Let $G$ be a finite group, and let $P\in Syl_p(Fit(G))$ and $Q\in Syl_p(G)$.
If $Q\in CP2$, then $P\subseteq LCM(G)$.
\end{cor}
\begin{proof}
Let $x\in P$ and $y$ be a $p$-element of order $p^m$. There exists $g\in G$ such that $y\in Q^g$. Since $P\leq Q^g$, we have $x^n,y\in Q^g$ for all integers $n$.
It follows from $Q^g\in CP2$ that
$o(x^ny)\mid lcm(o(x^n),o(y))$.
From Proposition \ref{equ}, $x\in LCM(G)$.
Then $P\subseteq LCM(G)$.
\end{proof}
Let $G$ be a periodic group, and let $x\in G$.
By Proposition \ref{equ}, to verify that $x\in LCM(G)$, it is enough to show that for any prime divisor $p$ of $o(x)$
the Sylow $p$-subgroup of $\langle x\rangle$ is a subset of $ LCM_p(G)$. Hence, to find any information about $LCM(G)$ we need to find some information about $LCM(H)$ where $H$ is a $p$-group.
By Theorem \ref{cp2}, a finite $p$-group is an $LCM$-group if and only if it belongs to $CP2$. The natural question is the following: what can we say about the structure of a finite $p$-group $G$ such that all proper sections of $G$ are $LCM$-groups but $G$ is not an $LCM$-group?
In this section, we prove that such groups form a subset of the minimal irregular $p$-groups.
Let $G$ be a periodic group, and let $p$ be a prime divisor of $exp(G)$. For simplicity, for any non-empty subset $H$ of $G$ we denote the set of all $x\in H$ such that $x^{p^i}=1$ by $\Omega_{(i)}(H)$, and the subgroup generated by $\Omega_{(i)}(H)$ is denoted by $\Omega_i(H)$.
\begin{lem}\label{a2}
Let $G$ be a finite group such that $\frac{G}{\Omega_1(G)}\in CP2$. If
$\Omega_1(G)=\Omega_{(1)}(G)$, then $G\in CP2$.
\end{lem}
\begin{proof} Let $N:=\Omega_1(G)$.
Let $i\geq 1$ be an integer. Since $\frac{G}{N}\in CP2$, from Theorem \ref{a}, $\Omega_{i}(\frac{G}{N})=\Omega_{(i)}(\frac{G}{N}).$ Because $N=\Omega_1(G)=\{z\in G: z^p=1\}$, we have
$\frac{\Omega_{i+1}(G)N}{N}\subseteq\Omega_{i}(\frac{G}{N})=\{xN: x\in \Omega_{(i+1)}(G)\}.$
Let $x\in \Omega_{i+1}(G)$.
Then $xN\in \Omega_{i}(\frac{G}{N})$, so $xN=gN$ where $g\in \Omega_{(i+1)}(G)$.
Therefore $x^{p^i}N=(xN)^{p^i}=(gN)^{p^i}=N$.
It follows that $x^{p^{i+1}}=1$, because $exp(N)=p$, so $x\in \Omega_{(i+1)}(G).$
Consequently, $\Omega_{i+1}(G)=\Omega_{(i+1)}(G)$ for $i=0,1,...$, and so from Theorem \ref{a}, $G\in CP2$.
\end{proof}
\begin{lem}\label{ccc}
Let $G$ be a $p$-group. Then $ \Omega_1(LCM(G))=\Omega_{(1)}(LCM(G))$.
\end{lem}
\begin{proof}
Let $z\in \Omega_1(LCM(G))\setminus \{1\}$.
Let $x_1,...,x_r\in \Omega_{(1)}(LCM(G))$ such that
$z=x_1\ldots x_r$ where $r$ is minimal. We have $o(x_i)=p$ for all $i=1,\ldots,r$.
Then $$o(z)=o(x_1\ldots x_r)\mid lcm(o(x_1),o(x_2\ldots x_r))\mid \ldots\mid lcm(o(x_1),\ldots,o(x_r))=p.$$ Hence $exp(\Omega_1(LCM(G)))=p$, and so
$\Omega_1(LCM(G))=\Omega_{(1)}(LCM(G))$.
\end{proof}
Let $G$ be a finite group. We say $G$ is a minimal $NLCM$-group whenever $G$ is not an $LCM$-group but
all proper sections of $G$ are $LCM$-groups.
From the following theorem, one can see that $NLCM$-groups have a restricted structure.
\begin{thm}\label{min}
Let $G$ be a finite $NLCM$-group.
(a) If $G$ is not a $p$-group, then
$G$ is a minimal non-nilpotent group.
(b) If $G$ is a $p$-group, then
\begin{enumerate}
\item $G$ has a maximal subgroup $H$ such that $exp(H)=p$ and $G=H\rtimes \langle u\rangle$ for some $u\in G$ of order $p$.
\item $G=\Omega_1(G)\neq \Omega_{(1)}(G).$
\item There exist $h,u\in G$ such that $G=\langle h,u\rangle$.
\item $|Z(G)|=|\mho_1(G)|=exp(\frac{G}{Z(G)})=p$.
\end{enumerate}
\end{thm}
\begin{proof}
(a) From Corollary 2.15 of \cite{mohsen}, $G$ is a minimal non-nilpotent group.
(b)
By Theorem \ref{12}, any proper section of $G$ belongs to $CP2$.
If $\Omega_{(1)}(G)=\Omega_{1}(G)$, then from Lemma \ref{a2}, $G\in CP2$, and so $G$ is an $LCM$-group, which is a contradiction. So $\Omega_{(1)}(G)\neq \Omega_{1}(G)$, and so $\Omega_{1}(G)=G$.
Let $H$ be a maximal subgroup of $G$ such that $|\Omega_1(H)|$ is maximal in the set of all maximal subgroups of $G$. Since $\Omega_{1}(G)=G$, there exists $u\in \Omega_{(1)}(G)\setminus H.$
If $exp(H)>p$, then $R:=\Omega_{(1)}(H)\langle u\rangle$ is a proper subgroup of $G$ such that
$|\Omega_{(1)}(R)|>|\Omega_{(1)}(H)|$, which is a contradiction. So $exp(H)=p$.
Since $exp(G)=p^2$, there exist $h\in H$ and an integer $i$ such that
$o(hu^i)=p^2$. If $\langle h,u\rangle\neq G$, then $\langle h,u\rangle\in CP2$, so $o(hu^i)=p$, which is a contradiction. Therefore $G=\langle h,u\rangle.$
Let $N\neq 1$ be a normal subgroup of $G$. We have $\frac{G}{N}\in CP2.$
If $exp(\frac{G}{N})>p$, then $\Omega_{1}(\frac{G}{N})=\frac{T}{N}$ is a proper subgroup of $\frac{G}{N}$.
Since $T\in CP2$, then $ G=\Omega_1(G)\subseteq \Omega_1(T)\neq G$, which is a contradiction.
So $exp(\frac{G}{N})=p.$
We claim that $Z(G)$ is a cyclic group.
Suppose for a contradiction that $G$ has two distinct normal minimal subgroups $N$ and $D$.
Then $exp(\frac{G}{N})=exp(\frac{G}{D})=p$.
Let $x\in G$. Then $(xN)^p=N$ and $x^pD=D$. It follows that
$x^p\in N\cap D=1$. Therefore $exp(G)=p$, which is a contradiction.
So $Z(G)$ is a cyclic group, as claimed.
If $|Z(G)|>p$, then $Z(G)\nleq H$.
Therefore $G=HZ(G)$, and so $G\in CP2$, which is a contradiction.
Therefore $|Z(G)|=p$.
Since $exp(\frac{G}{Z(G)})=p$, we have $\mho_1(G)\leq Z(G)$, and hence $\mho_1(G)=Z(G).$
\end{proof}
Note that there exists a minimal irregular $3$-group $G$ such that $G$ is an $LCM$-group; for example, the group \texttt{AllSmallGroups(81,IsAbelian,false)[5]} in GAP.
\section{$LC$-series}
\begin{lem}\label{zp}
Let $G$ be a finite $p$-group.
Then $Z_{p-1}(G)\leq LCM(G)$.
\end{lem}
\begin{proof}
Let $x\in Z_{p-1}(G)$ and $y\in G$.
Let $H=\langle x,y\rangle$, and let $z\in \langle x\rangle$.
Since the nilpotency class of $H$ is less than $p$, $H$ is a regular group. Therefore $o(zy)\mid lcm(o(z),o(y))$.
Since $y$ is an arbitrary element of $G$, we deduce that
$x\in LCM(G)$.
\end{proof}
\begin{thm}
Let $G$ be a finite $p$-group of nilpotency class $t$.
Then $G$ is a $LC$-nilpotent group of class at most $\lfloor t/(p-1)\rfloor+1$.
\end{thm}
\begin{proof}
We proceed by induction on $t$.
If $t< p$, then
by Lemma \ref{zp}, $G=LCM(G)$, and so $G$ is an $LC$-nilpotent group of class $1\leq\lfloor t/(p-1)\rfloor+1$. So suppose that $t\geq p$.
From Lemma \ref{zp}, $Z_{p-1}(G)\leq LC(G)$.
Then the nilpotency class of $\frac{G}{LC(G)}$ is at most
$t-p+1$. By induction hypothesis, $\frac{G}{LC(G)}$ is an $LC$-nilpotent group of class at most $\lfloor (t-p+1)/(p-1)\rfloor+1$.
Therefore $G$ is an $LC$-nilpotent group of class at most $\lfloor t/(p-1)\rfloor+1$.
\end{proof}
Now, we prove that any
finite supersolvable group is an $LC$-nilpotent group.
Let $H$ be a group and $G\lhd H$. We say that $G$ is
$H$-supersolvable if $G$ has a supersolvable series
$$1=G_0\leq G_1\leq \ldots\leq G_k=G$$ with $G_i\lhd H$, for all
$i=1,2,\ldots,k$.
\begin{thm}\label{5}
Let $H$ be a periodic group, and let $G$ be a normal $H$-supersolvable subgroup of $H$. Then
$G$ is a finite $LC$-nilpotent group and $LC_i(G)\leq LC_i(H)$ for all $i=1,2,\ldots$.
\end{thm}
\begin{proof}
Let $$1=N_0\leq N_1\leq \ldots\leq N_k=G$$
be a normal series for $G$ such that $\frac{N_{i+1}}{N_i}$ is a cyclic group of prime order and $N_i\lhd H$ for all $i=0,1,\ldots,k-1$.
We show that $N_i\leq LC_i(H)$ for all $i$. We proceed by induction on $i.$
First suppose that $i=1$.
Let $N_1=\langle a\rangle$ where $o(a)=p$. Suppose for a contradiction that there exists $y\in H$ such that $o(ya)\nmid lcm(o(y),o(a))$. Let $V=G\langle y\rangle$.
If $y\in C_V(a)$, then clearly, $o(ya)\mid lcm(o(y),o(a))$, which is a contradiction. So suppose that $y\not\in C_V(a)$.
If $p=2$, then $a\in Z(V)$, which is a contradiction.
So suppose that $p>2$.
If $\langle y\rangle \cap N_1\neq 1$, then $ay=ya$, which is a contradiction. So $\langle y\rangle \cap N_1=1$. There exists an integer $j$ such that $a^y=a^j$.
Let $o(y)=m$. First suppose that $p\mid m$.
Then there exists $P\in Syl_p(G)$ such that $y^{m/p}\in P$.
Since $N_1\leq Z(P)$, we have $y^{m/p}a=ay^{m/p}$.
Then $$(ya)^m=((ya)^{m/p})^p=(a^ya^{y^2}...a^{y^{m/p}}y^{m/p})^p=(a^{j+j^2+...+j^{m/p}}y^{m/p})^p=1,$$ and so
$o(ay)\mid lcm(o(y),o(a))=m$, which is a contradiction.
So suppose that $p\nmid m$.
Then $$((ya)^m)^p=(a^ya^{y^2}...a^{y^{m}}y^{m})^p=(a^{j+j^2+...+j^{m}})^p=1,$$
and so $(ya)^{mp}=1$.
Therefore $o(ya)\mid lcm(o(y),o(a))=pm$, which is a contradiction.
It follows that $N_1\leq LC_1(H)$.
Now suppose that $i>1.$
By induction hypothesis, $N_{i-1}\leq LC_{i-1}(H)$ and $\frac{N_iLC_{i-1}(H)}{LC_{i-1}(H)}\leq LC(\frac{H}{LC_{i-1}(H)})=\frac{LC_i(H)}{LC_{i-1}(H)}.$
Therefore $N_{i}\leq LC_{i}(H)$ for all $i=0,1,2,...,k$.
\end{proof}
\begin{cor}\label{7}
Let $G$ be a finite supersolvable group.
Then $G$ is an $LC$-nilpotent group.
\end{cor}
Let $A_4$ be the alternating group on 4 symbols. Then $A_4$ is not a supersolvable group, but
$LC_1(A_4)$ is a group of order $4$ and $LC_2(A_4)=A_4$.
Hence $A_4$ is an $LC$-nilpotent group which is not a supersolvable group.
Therefore the set of all $LC$-nilpotent groups is strictly bigger than the set of all finite supersolvable groups.
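The computation behind the $A_4$ example can be reproduced by brute force. The sketch below (an illustration, not part of the paper) lists $A_4$ as the even permutations of four points and checks that the elements satisfying the $LCM$ condition are exactly the identity and the three double transpositions, so that $LC_1(A_4)$ is the Klein four-group of order $4$:

```python
from math import gcd
from itertools import permutations

def lcm(a, b):
    return a * b // gcd(a, b)

n = 4
identity = tuple(range(n))
compose = lambda p, q: tuple(p[q[i]] for i in range(n))

def sign(p):
    # Parity via inversion count: +1 for even permutations.
    s = 1
    for i in range(n):
        for j in range(i + 1, n):
            if p[i] > p[j]:
                s = -s
    return s

def order(p):
    k, q = 1, p
    while q != identity:
        q = compose(q, p)
        k += 1
    return k

A4 = [p for p in permutations(range(n)) if sign(p) == 1]

def in_LCM(G, x):
    powers, q = [identity], x
    while q != identity:
        powers.append(q)
        q = compose(q, x)
    return all(lcm(order(h), order(y)) % order(compose(h, y)) == 0
               for h in powers for y in G)

LC1 = [x for x in A4 if in_LCM(A4, x)]
print(sorted(order(x) for x in LC1))  # [1, 2, 2, 2]: the Klein four-group
```

A $3$-cycle fails the test because the product of two suitable $3$-cycles is a double transposition, whose order $2$ does not divide $lcm(3,3)=3$.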
\begin{cor}\label{8}
Let $G$ be a finite group of order $pq$, $pq^2$ or $pqr$, where $p$, $q$ and $r$ are prime numbers.
Then $G$ is an $LC$-nilpotent group.
\end{cor}
\begin{proof}
If $|G|=pq$, then $G$ is a supersolvable group, and so $G$ is an $LC$-nilpotent group.
So suppose that $|G|=pq^2.$
If $p=q$, then $G$ is a $p$-group, and the proof is clear.
So suppose that $p\neq q$. Let $N$ be a normal minimal subgroup of $G$.
If $|N|=p$, then $G$ is a supersolvable group, and so $G$ is an $LC$-nilpotent group.
So suppose that $N\leq Q\in Syl_q(G)$.
Let $x\in N$ and $y\in Q^g$ for some $g\in G$.
Since $Q$ is an abelian group, we have $o(xy)\mid lcm(o(x),o(y))$.
From Proposition \ref{equ}, $x\in LCM(G)$.
Since $|\frac{G}{LC(G)}|\mid pq$, the quotient $\frac{G}{LC(G)}$ is an $LC$-nilpotent group, and hence so is $G$.
Now suppose that $|G|=pqr$. The case $p=q=r$ is trivial, and by the previous parts we may assume that $p<q<r$.
Then $|G|$ is squarefree, so $G$ is a supersolvable group, and hence $G$ is an $LC$-nilpotent group.
\end{proof}
For each finite group $G$ and each integer $d\geq 1$, let $L_G(d)=\{x\in G: x^d=1\}$. We say two finite groups $G$ and $H$ are of the same order type if and only if $|L_G(d)|=
|L_H(d)|$ for all $d=1,2,\ldots$. In the 1980s,
J.G. Thompson posed a problem about the solvability of finite groups.
\begin{que}
(J.G. Thompson, \cite{Kh}) Suppose $G$ and $H$ are groups of the same
order type. If $G$ is solvable, is it true that $H$ is also
a solvable group?
\end{que}
Thompson's problem has an affirmative answer
in the following cases:
\begin{itemize}
\item $G$ is a supersolvable group (see \cite{Xu}).
\item the number of elements of maximal order of $G$ is $2p$, $2p^2$ or $30$ (see \cite{3}, \cite{8} and \cite{6}).
\item the cardinality of the set of numbers of the same order elements of $G$
is not more than 2 (see \cite{13}).
\end{itemize}
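As a concrete illustration of the order-type data (not taken from the literature cited above), the counts $|L_G(d)|$ are easy to tabulate for a small group such as the dihedral group $D_8$, realized as permutations of the corners of a square:

```python
# |L_G(d)| = #{x in G : x^d = 1}, tabulated for the dihedral group D_8.
identity = (0, 1, 2, 3)
compose = lambda p, q: tuple(p[q[i]] for i in range(4))

def power(p, d):
    q = identity
    for _ in range(d):
        q = compose(q, p)
    return q

def closure(gens):
    # Generate the group from a generating set by repeated products.
    G = {identity}
    frontier = set(gens)
    while frontier:
        G |= frontier
        frontier = {compose(a, b) for a in G for b in G} - G
    return G

D8 = closure({(1, 2, 3, 0), (0, 3, 2, 1)})   # a rotation and a reflection
L = {d: sum(1 for x in D8 if power(x, d) == identity) for d in (1, 2, 3, 4)}
print(L)  # {1: 1, 2: 6, 3: 1, 4: 8}
```

So $|L_{D_8}(2)|=6$ counts the identity, the half-turn, and the four reflections.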
In view of this question, an answer to the following question would be interesting.
\begin{que}
Suppose $G$ and $H$ are groups of the same
order type. If $G$ is an $LC$-nilpotent group, is it true that $H$ is also
a solvable group?
\end{que}
| {
"timestamp": "2022-12-07T02:16:31",
"yymm": "2212",
"arxiv_id": "2212.03104",
"language": "en",
"url": "https://arxiv.org/abs/2212.03104",
"abstract": "As a natural continuation of study $LCM$-groups, we explore other properties of $LCM$-groups and $LC$-series. We obtain some characterizations of finite groups which are not LCM-groups but all proper sections are $LCM$-groups. Also, for a $p$-group $G$, we prove that $G$ is a $LC$-nilpotent group and we obtain a bound for its $LC$-nilpotency. Finally, as an application, we prove that a finite supersolvable group, groups of order $pq, pq^2$ and $pqr$ are $LC$-nilpotent groups, where $p,q$ and $r$ are prime numbers.",
"subjects": "Group Theory (math.GR)",
"title": "On LC-subgroup of a periodic group",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9732407168145568,
"lm_q2_score": 0.7279754489059775,
"lm_q1q2_score": 0.7084953477166523
} |
https://arxiv.org/abs/2102.03277 | Minimum projective linearizations of trees in linear time | The Minimum Linear Arrangement problem (MLA) consists of finding a mapping $\pi$ from vertices of a graph to distinct integers that minimizes $\sum_{\{u,v\}\in E}|\pi(u) - \pi(v)|$. In that setting, vertices are often assumed to lie on a horizontal line and edges are drawn as semicircles above said line. For trees, various algorithms are available to solve the problem in polynomial time in $n=|V|$. There exist variants of the MLA in which the arrangements are constrained. Iordanskii, and later Hochberg and Stallmann (HS), put forward $O(n)$-time algorithms that solve the problem when arrangements are constrained to be planar (also known as one-page book embeddings). We also consider linear arrangements of rooted trees that are constrained to be projective (planar embeddings where the root is not covered by any edge). Gildea and Temperley (GT) sketched an algorithm for projective arrangements which they claimed runs in $O(n)$ but did not provide any justification of its cost. In contrast, Park and Levy claimed that GT's algorithm runs in $O(n \log d_{max})$ where $d_{max}$ is the maximum degree but did not provide sufficient detail. Here we correct an error in HS's algorithm for the planar case, show its relationship with the projective case, and derive simple algorithms for the projective and planar cases that run without a doubt in $O(n)$ time. | \section{Introduction}
\label{sec:introduction}
A linear arrangement $\arr$ of a graph $G=(V,E)$ is a linear ordering of its vertices (it can also be seen as a permutation), i.e., the vertices lie on a horizontal line. In such an arrangement, the distance $d(u,v)$ between two vertices $u,v$ can be defined as $d(u,v)=|\arr(u) - \arr(v)|$, where $\arr$ maps the $n$ vertices to the $n$ distinct integers in $[1,n]$. The minimum linear arrangement problem (MLA) consists of finding an arrangement $\arr$ that minimizes the cost $D=\sum_{\{u,v\}\in E}d(u,v)$ \cite{Garey1976a,Chung1984a}. In arbitrary graphs, the problem is NP-hard \cite{Garey1976a}. For trees, various algorithms are available to solve the problem in polynomial time \cite{Goldberg1976a,Shiloach1979a,Chung1984a}. Goldberg and Klipker devised an $\bigO{n^3}$ algorithm \cite{Goldberg1976a}. Later, Shiloach contributed an $\bigO{n^{2.2}}$ algorithm \cite{Shiloach1979a}. Finally, Chung contributed two algorithms running in $\bigO{n^2}$ time and $\bigO{n^\lambda}$ time, respectively, where $\lambda$ is any real number satisfying $\lambda>\log 3/\log 2$ \cite{Chung1984a}. The latter is the best algorithm known.
There exist several variants of the MLA problem; two of them are the {\em planar} and the {\em projective} variants. In the {\em planar} variant, namely the MLA problem under the planarity constraint, the placement of the vertices of a free tree is constrained so that there are no edge crossings. These arrangements are known as {\em planar} arrangements \cite{Kuhlmann2006a}, and also one-page book embeddings \cite{Bernhart1974a}. Two undirected edges of a graph $\{s,t\},\{u,v\}\in E$ cross if $\arr(s) < \arr(u) < \arr(t) < \arr(v)$ when, without loss of generality, $\arr(s)<\arr(t)$, $\arr(u)<\arr(v)$ and $\arr(s)<\arr(u)$. To the best of our knowledge, the first $\bigO{n}$ algorithm was put forward by Iordanskii \cite{Iordanskii1987a}. Sixteen years later, Hochberg and Stallmann (HS) put forward another $\bigO{n}$-time algorithm \cite{Hochberg2003a}. However, their algorithm contains an error which is corrected in this paper.
In the {\em projective} variant, namely the MLA problem under the projectivity constraint, a rooted tree is arranged so that there are no edge crossings (i.e., the arrangement is planar) and the root is not covered. These arrangements are known as projective \cite{Kuhlmann2006a,Melcuk1988a}. A vertex $w$ is covered by an edge $\{u,v\}$ if $\arr(u)<\arr(w)<\arr(v)$ when, without loss of generality, $\arr(u)<\arr(v)$. Fig. \ref{fig:OPl:first_proj_gt_pla}(a) shows a projective arrangement, while Fig. \ref{fig:OPl:first_proj_gt_pla}(b) shows an arrangement that is projective if we take vertex 2 as the root but not if we take vertex 1 as the root. Gildea and Temperley (GT) \cite{Gildea2007a} sketched an algorithm to solve this variant. The tree shown in Fig. \ref{fig:OPl:first_proj_gt_pla} is the smallest tree for which there is a vertex that, when chosen as the root, makes the minimum cost under projectivity greater than that under planarity (there are no other 6-vertex trees where that happens). While GT claimed that their sketch runs in $\bigO{n}$ \cite[p. 2]{Gildea2007a}, Park and Levy (PL) argued that it runs in $\bigO{n \log d_{max}}$ time, where $d_{max}$ is the maximum degree. However, PL did not give enough detail to support their conclusion \cite{Park2009a}. In this article, we show that this is an overestimation of the actual complexity: the problem can actually be solved in $\bigO{n}$ time.
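The two constraints above reduce to simple positional predicates. The following Python sketch (our own illustrative rendering of the definitions, not code from the cited works) tests whether two edges cross and whether a vertex is covered by an edge in a given arrangement.

```python
# Illustrative predicates for the planarity and projectivity definitions.
# arr maps vertices to their positions in the linear arrangement.

def crosses(e1, e2, arr):
    """Do edges e1 and e2 cross? (Assumes four distinct endpoints.)"""
    a, b = sorted((arr[e1[0]], arr[e1[1]]))
    c, d = sorted((arr[e2[0]], arr[e2[1]]))
    if a > c:                      # w.l.o.g. e1 starts first
        a, b, c, d = c, d, a, b
    return a < c < b < d

def is_covered(w, edge, arr):
    """Is vertex w strictly between the endpoints of the edge?"""
    a, b = sorted((arr[edge[0]], arr[edge[1]]))
    return a < arr[w] < b
```

An arrangement is planar when no two edges cross, and projective when, in addition, no edge covers the root.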
The remainder of the article is organized as follows. Section \ref{sec:notation_review} introduces the notation and reviews HS's algorithm. Section \ref{sec:min_planar} corrects and completes HS's algorithm \cite{Hochberg2003a}. The error is located in a recursive subprocedure ({\tt embed\_branch}) of HS's algorithm. In Section \ref{sec:min_projective}, we present two detailed $\bigO{n}$-time algorithms for the projective case that stem from HS's algorithm. HS's algorithm already contained a `subalgorithm' for solving the projective case, although the authors did not identify it as such in their article \cite{Hochberg2003a}. Indeed, their algorithm can be reinterpreted as consisting of two main steps: finding a centroidal\footnote{ In this paper we follow the same terminology and notation as in \cite[Pages 35-36]{Harary1969a}. Therefore, we consider the {\em center} to be the set of {\em central} vertices, the vertices whose eccentricity is equal to the radius, and the {\em centroid} to be the set of {\em centroidal} vertices, the vertices whose weight, i.e., the size of the largest subtree, is minimum.} vertex (as in Iordanskii's algorithm \cite{Iordanskii1987a}) and then solving the projective case for the input tree rooted at that vertex. Hence the first algorithm for the projective case is obtained by extracting the relevant part from HS's original algorithm, completing and simplifying it and, critically, applying the correction indicated in Section \ref{sec:min_planar}. Our second algorithm for the projective case is a re-engineered version based on intervals that results in a more compact, clearer and simpler algorithm; it can also be used to solve the planar case and can be seen as a formal interpretation of GT's sketch. Indeed, Section \ref{sec:min_projective} unifies, in a sense, HS's algorithm and GT's sketch. 
Put differently, solving the minimization of $D$ on a tree under planarity is equivalent to solving the projective case for a tree rooted at a specific vertex. For instance, the minimum $D$ under planarity for the tree in Fig. \ref{fig:OPl:first_proj_gt_pla} is obtained when calculating the minimum $D$ under projectivity when the tree is rooted at the vertex marked with a square in Fig. \ref{fig:OPl:first_proj_gt_pla}(b). Section \ref{sec:conclusions} draws some general conclusions and indicates some future paths for research.
\begin{figure}
\centering
\includegraphics[scale=1]{figures-first_tree_proj_gt_planar.pdf}
\caption{Two different linear arrangements of the same free tree $\Ftree$. a) A minimum projective arrangement of $\Ftree$ rooted at $1$ with cost $D=7$; the circled dot denotes the root. b) A minimum planar arrangement of $\Ftree$ with cost $D=6$ under the planarity constraint; the squared dot denotes its (only) centroidal vertex.}
\label{fig:OPl:first_proj_gt_pla}
\end{figure}
\section{Notation and review}
\label{sec:notation_review}
Throughout this paper we use $\Ftree=(V,E)$ to denote a free tree, and $\Rtree=(V,E;\Root)$ to denote a tree $T$ rooted at a vertex $\Root$ where $n=|V|$. Free trees have undirected edges, and rooted trees have directed edges; we consider the edges of a rooted tree to be oriented away from the root. In rooted trees, we refer to the parent of a vertex $u$ as $p(u)$; in a directed edge $(u,v)$, $p(v)=u$. We use $\SubRtree{u}$ to denote a subtree of $\Rtree$ rooted at $u\in V$ (if $u=\Root$ then $\SubRtree{u}=\Rtree$), and $\neighs{u}$ to denote the set of neighbors of vertex $u$ in $T$. We call $\SubRtree{v}$ an {\em immediate subtree} of $\SubRtree{u}$ rooted at $v$ if $(u,v)\in E(\Rtree)$. Extending the notation in \cite{Hochberg2003a}, we use $\SortedSubRtree{1}[u], \cdots, \SortedSubRtree{k}[u]$ to denote the $k$ immediate subtrees of a subtree $\SubRtree{u}$ of $\Rtree$ sorted decreasingly by size. We also use $n_1\ge\cdots\ge n_k\ge 1$ to denote their sizes, i.e., $n_{i}$ denotes the size of $\SortedSubRtree{i}[u]$; we omit the vertex when referring to immediate subtrees of $\Rtree$. Henceforth assume, without loss of generality, that $k$ is even. Recall that $\arr(u)$ is the position of $u\in V$ in the linear arrangement.
Now we summarize the core ideas and tools derived by HS \cite{Hochberg2003a}. Firstly, using Lemmas 6, 11 in \cite{Hochberg2003a}, it is easy to see that an optimal projective arrangement of $\Rtree$ is obtained by arranging the immediate subtrees of $\Rtree$ inwards, decreasingly by size and on alternating sides, namely $\SortedSubRtree{1}, \SortedSubRtree{3}, \cdots, \Root, \cdots, \SortedSubRtree{4}, \SortedSubRtree{2}$ or $\SortedSubRtree{2}, \SortedSubRtree{4}, \cdots, \Root, \cdots, \SortedSubRtree{3}, \SortedSubRtree{1}$. Immediate subtrees of $\Rtree$ can be arranged in any of the two orders, whereas immediate subtrees of $\SubRtree{u}$, $u\neq\Root$ have to be placed according to the side in which $u$ is placed with respect to $p(u)$: if $u$ is placed to $p(u)$'s left then the optimal order is $\SortedSubRtree{1}[u], \SortedSubRtree{3}[u], \cdots, u, \cdots, \SortedSubRtree{4}[u], \SortedSubRtree{2}[u]$ (Fig. \ref{fig:OPr:embedding_subtrees}(a)), and if $u$ is placed to $p(u)$'s right the optimal order is $\SortedSubRtree{2}[u], \SortedSubRtree{4}[u], \cdots, u, \cdots, \SortedSubRtree{3}[u], \SortedSubRtree{1}[u]$ (Fig. \ref{fig:OPr:embedding_subtrees}(b)). Notice that the root is not covered in any of these planar arrangements, as required by the projectivity constraint \cite{Kuhlmann2006a,Melcuk1988a}.
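The alternating placement pattern can be spelled out explicitly. The sketch below (illustrative only; `'r'` stands for the subtree's root) lists, left to right, the indices $1,\dots,k$ of the immediate subtrees sorted decreasingly by size, as they appear in the first of the two orders above.

```python
# Alternating order of immediate subtrees around their root: odd indices
# on the left, even indices on the right, both decreasing outwards.

def alternating_order(k):
    """Left-to-right order of subtree indices 1..k around the root 'r'."""
    left = list(range(1, k + 1, 2))          # 1, 3, 5, ...
    right = list(range(2, k + 1, 2))[::-1]   # ..., 6, 4, 2
    return left + ['r'] + right

print(alternating_order(5))  # -> [1, 3, 5, 'r', 4, 2]
```

The mirrored order for a vertex placed on the other side of its parent is simply the reverse of this list.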
\begin{figure}
\centering
\includegraphics[scale=0.85]{figures-optimal_arrangements_respect_root.pdf}
\caption{a,b) Optimal arrangements of $\SubRtree{v}$ according to the relative position of $v$ with respect to $v$'s parent. c) Depiction of the directed edges $(p(u),u),(u,v)\in E$ in an optimal projective arrangement, divided into the anchor (the part of the edge $(u,v)$ to the left of the vertical line), and the coanchor (the part of the edge $(u,v)$ to the right). The length of the anchor of edge $(p(u),u)$ is the sum of $n_j$ over even $j\in[2,k]$.}
\label{fig:OPr:embedding_subtrees}
\end{figure}
Secondly \cite[Theorem 12]{Hochberg2003a}, an optimal planar arrangement of a free tree $\Ftree$ is obtained when $\Ftree$ is rooted at one of its centroidal vertices. Therefore, an optimal planar arrangement of a free tree $\Ftree$ is an optimal projective arrangement of $\Rtree[c]$, where $c$ denotes one of the (possibly two) centroidal vertices of $\Ftree$. For the calculation of a centroidal vertex, HS defined $s(u,v)$, which we call the {\em directional} size of subtrees. The directional size $s(u,v)$ in a free tree $\Ftree$, for $\{u,v\}\in E(\Ftree)$, is the size of $\SubRtree{v}[u]$ (Fig.~\ref{fig:OPr:directional_sizes}). Notice that $s(v,u)+s(u,v)=n$. They also outlined a way of calculating all of the $s(u,v)$ in $\bigO{n}$ time \cite[Section 6]{Hochberg2003a}, but did not provide any pseudocode; here we provide it in Algorithm \ref{algo:compute_suvs:free_tree}. Using the $s(u,v)$ for all edges in $T$, we can construct a sorted adjacency list of the tree, which we denote as $L$, with the pseudocode given in Algorithm \ref{algo:compute_sorted_L:free_tree}, and with it we calculate one of the centroidal vertices. Algorithm \ref{algo:compute_centroid} reports the pseudocode for the calculation of the centroidal vertex. All algorithms have $\bigO{n}$-time and $\bigO{n}$-space complexity.
We also need to consider the rooting of the list $L$ with respect to a given vertex $w$, denoted $L^w$. This operation is called $\textsc{root\_list}(L,w)$ in the pseudocode. It transforms the representation of an undirected tree into that of a directed tree, and consists of removing edges of the form $(u,p(u))$, where $u\neq w$, from $L$, starting at the given vertex $w$, which acts as a root. In other words, vertex $w$ induces an orientation of the edges towards the leaves (i.e., away from $w$), and we have to remove one of the two edges $(u,v),(v,u)$ from $L$ for every $\{u,v\}\in E$. Since this can be done fairly easily in linear time, we do not give the pseudocode for this operation.
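One possible linear-time realization of $\textsc{root\_list}$ is sketched below in Python (our own illustration; for brevity each list stores plain neighbors, whereas in our algorithms each entry of $L$ is a pair $(v, s(u,v))$, but the same traversal applies, and it preserves the relative order of the surviving entries, so a sorted list stays sorted).

```python
# Sketch of root_list: orient an undirected adjacency list away from a
# chosen root w by dropping every edge of the form (u, p(u)).

def root_list(L, w):
    Lw = {u: [] for u in L}
    parent = {w: None}
    stack = [w]
    while stack:                  # DFS away from w
        u = stack.pop()
        for v in L[u]:
            if v != parent[u]:    # keep (u, v); (v, u) is never emitted
                Lw[u].append(v)
                parent[v] = u
                stack.append(v)
    return Lw
```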
\begin{figure}
\centering
\includegraphics[scale=0.95]{figures-directional_sizes.pdf}
\caption{a) A free tree with $s(u,v)=7$, $s(v,u)=3$ and $s(v,w)=s(w,v)=5$. b) The free tree in a) rooted at $v$; $|V(\SubRtree{u}[v])|=s(v,u)$. Borrowed from \cite[Fig. 7]{Hochberg2003a}.}
\label{fig:OPr:directional_sizes}
\end{figure}
\begin{algorithm}
\caption{Calculation of directional sizes for free trees. Cost $\bigO{n}$ time, $\bigO{n}$ space.}
\label{algo:compute_suvs:free_tree}
\DontPrintSemicolon
\SetKwProg{Fn}{Function}{ is}{end}
\Fn{\textsc{compute\_s\_ft}$(\Ftree)$} {
\KwInTwo{$\Ftree$ free tree.}
\KwOutTwo{$S=\{(u,v,s(u,v)),(v,u,s(v,u)) \;|\; \{u,v\}\in E\}$.}
$S \gets \emptyset$ \;
$u_* \gets $ choose an arbitrary vertex \;
\For {$v\in \neighs{u_*}$} {
$(\_,S') \gets $\textsc{comp\_s\_ft\_rec}($\Ftree, (u_*,v)$) \;
$S \gets S \cup S'$
}
\Return $S$
}
\SetKwProg{Fn}{Function}{ is}{end}
\Fn{\textsc{comp\_s\_ft\_rec}$(\Ftree, (u,v))$} {
\KwInTwo{$\Ftree$ free tree, $(u,v)$ directing edge.}
\KwOutTwo{$s$ the size of $\SubRtree{v}[u_*]$ in vertices, $S = \{(u,v, s(u,v)), (v,u, s(v,u)) \;|\; \{u,v\}\in E(\SubRtree{v}[u_*])\}$.}
$s\gets 1$ \;
$S \gets \emptyset$ \;
\For {$w\in \neighs{v}$} {
\If {$w\neq u$} {
$(s', S') \gets$ \textsc{comp\_s\_ft\_rec}($\Ftree, (v,w)$) \;
$s \gets s + s'$ \;
$S \gets S \cup S'$ \;
}
}
\tcp{$s=s(u,v)$, $n-s=s(v,u)$}
\tcp{Append at end in $\bigO{1}$}
$S \gets S \cup \{(u,v, s), (v,u, n-s)\}$ \;
\Return $(s,S)$
}
\end{algorithm}
\begin{algorithm}
\caption{Calculation of the sorted adjacency list for free trees. Cost $\bigO{n}$ time, $\bigO{n}$ space.}
\label{algo:compute_sorted_L:free_tree}
\DontPrintSemicolon
\SetKwProg{Fn}{Function}{ is}{end}
\Fn{\textsc{sorted\_adjacency\_list\_ft}$(\Ftree)$} {
\KwInTwo{$\Ftree$ free tree.}
\KwOutTwo{$L$, the decreasingly-sorted adjacency list of $\Ftree$.}
\tcp{Algorithm \ref{algo:compute_suvs:free_tree}}
$S\gets \textsc{compute\_s\_ft}(\Ftree)$ \;
Sort the triples $(u,v,s)$ in $S$ decreasingly by $s$ using counting sort \cite{Cormen2001a} \label{algo:sorting_suvs:free_tree}\;
$L \gets \{\emptyset\}^n$ \;
\For {$(u,v,s)\in S$} {
\tcp{Append at end in $\bigO{1}$}
$L[u] \gets L[u] \cup \{(v,s)\}$ \;
}
\Return $L$
}
\end{algorithm}
\begin{algorithm}
\caption{Calculation of a centroidal vertex of a free tree. Cost $\bigO{n}$ time, $\bigO{n}$ space.}
\label{algo:compute_centroid}
\DontPrintSemicolon
\SetKwProg{Fn}{Function}{ is}{end}
\Fn{\textsc{find\_centroidal\_vertex}$(\Ftree)$} {
\KwInTwo{$\Ftree$ free tree.}
\KwOutTwo{A centroidal vertex of $\Ftree$.}
\tcp{Algorithm \ref{algo:compute_sorted_L:free_tree}}
$L \gets \textsc{sorted\_adjacency\_list\_ft}(\Ftree)$ \;
\Return \textsc{find\_centroidal\_vertex$(\Ftree, L)$}
}
\SetKwProg{Fn}{Function}{ is}{end}
\Fn{\textsc{find\_centroidal\_vertex}$(\Ftree, L)$} {
\KwInTwo{$\Ftree$ free tree, $L$ sorted adjacency list of $\Ftree$.}
\KwOutTwo{A centroidal vertex of $\Ftree$.}
$u\gets $ choose an arbitrary vertex \;
\While {true} {
\tcp{$\bigO{1}$ time since $L[u]$ is sorted}
$(v,s) \gets$ largest entry in $L[u]$ \;
\lIf {$s > n/2$} {
$u\gets v$
}
\lElse {
\Return $u$
}
}
}
\end{algorithm}
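For reference, the three routines above can be condensed into a single Python sketch (an illustrative transcription under our own naming, not the authors' code; for brevity it picks the heaviest neighbor with a {\tt max} per step instead of maintaining the counting-sorted list $L$, which still totals $\bigO{n}$ work since every vertex is visited at most once along the descent).

```python
# Illustrative sketch: directional sizes s(u, v) by iterative DFS, then
# a walk toward the heaviest side until a centroidal vertex is found.

def find_centroidal_vertex(adj, n):
    """adj maps each vertex of a free tree to its neighbor list."""
    s = {}                                    # directional sizes
    root = next(iter(adj))
    order = []                                # DFS preorder as (u, parent)
    stack = [(root, None)]
    while stack:
        u, p = stack.pop()
        order.append((u, p))
        for v in adj[u]:
            if v != p:
                stack.append((v, u))
    for u, p in reversed(order):              # children before parents
        size = 1 + sum(s[(u, v)] for v in adj[u] if v != p)
        if p is not None:
            s[(p, u)] = size                  # subtree on u's side
            s[(u, p)] = n - size
    u = root
    while True:                               # descend while one side > n/2
        v = max(adj[u], key=lambda w: s[(u, w)], default=None)
        if v is None or s[(u, v)] <= n / 2:
            return u
        u = v
```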
\section{Minimum planar linear arrangements}
\label{sec:min_planar}
In \cite{Hochberg2003a}, HS present an easy-to-understand algorithm to calculate a minimum planar linear arrangement for any free tree in linear time. The idea behind the algorithm was presented in Section \ref{sec:notation_review}. The implementation has two procedures, {\tt embed} and {\tt embed\_branch}, that perform a series of actions in the following order:
\begin{itemize}
\item Procedure {\tt embed} gets one centroidal vertex, $c$, uses it as a root and orders its immediate subtrees by size.
\item Procedure {\tt embed} puts immediate subtrees with an even index in one side of the arrangement and immediate subtrees with odd index in the other side (the bigger the subtree, the farther away from the centroidal vertex), calling procedure {\tt embed\_branch} for every subtree.
\item Procedure {\tt embed\_branch} calculates recursively a displacement of all nodes with respect to the placement of the centroidal vertex (of the whole tree) in the linear arrangement.
\item Procedure {\tt embed} calculates the centroidal vertex's position (the sum of sizes of trees on the left of the centroidal vertex) and applies the displacement to the rest of the nodes.
\end{itemize}
\begin{algorithm}
\caption{Step (5) from procedure \textsc{embed}.}
\label{algo:embed_branch-embed-5}
\DontPrintSemicolon
$\pi(c)\gets leftSum + 1$ \;
$relPos[c]\gets 0$ \;
\For {each vertex $v$} {
$\pi(v)\gets \pi(c) + relPos[v]$ \;
}
\end{algorithm}
From Algorithm \ref{algo:embed_branch-embed-5}, we can see that vector {\tt relPos} must contain the displacement of every node from the position of the centroidal vertex in the linear arrangement. Note that these are only the last lines of {\tt embed}. The problem lies in procedure {\tt embed\_branch}, which does not compute the displacement vector {\tt relPos} correctly. In Algorithm \ref{algo:embed_branch}, we give a corrected version of procedure {\tt embed\_branch}, where changes with respect to HS's version are marked in red. Lines \ref{algo:embed_branch:first} to \ref{algo:embed_branch:add_underanchor_base} are needed to calculate the correct displacement. For a vertex $u\neq c$, variable {\em under\_anchor} is the number of nodes of $\SubRtree{u}[c]$ between $u$ and $p(u)$. Adding {\em under\_anchor} to parameter {\em base} (line \ref{algo:embed_branch:add_underanchor_base}) yields the correct displacement. There is also a slight modification in the recursive calls (lines \ref{algo:embed_branch:recursive_call:even} and \ref{algo:embed_branch:recursive_call:odd}), namely the addition of all the parameters needed.
\begin{algorithm}
\caption{\textsc{embed\_branch} corrected}
\label{algo:embed_branch}
\DontPrintSemicolon
\SetKwProg{Fn}{Function}{ is}{end}
\Fn{\textsc{embed\_branch}$(L^c, v, base, dir, relPos)$} {
\KwInTwo{(Rooted) sorted adjacency list $L^c$ for $\Rtree[c]$ as described in Section \ref{sec:notation_review}; $v$ the root of the subtree to be arranged; $base$ the displacement for the starting position of the subtree arrangement; $dir$ whether or not $v$ is to the left or to the right of its parent.}
\KwOutTwo{$relPos$ contains the displacement from the centroidal vertex of all nodes of the subtree.}
\tcp{the children of $v$ decreasingly sorted by size}
$C_v \gets L^c[v]$ \;
$before\gets after\gets 0$ \;
\begingroup
\color{red}
$under\_anchor\gets 0$ \label{algo:embed_branch:first} \;
\For {$i = 1$ \bf{to} $\vert C_v\vert$ \bf{step} $2$}
{
\tcp{$v$'s $i$-th child, $|V(\SubRtree{v_i}[c])|$ its size}
$v_i, \Nvert{i} \gets C_v[i]$ \;
$under\_anchor\gets under\_anchor + \Nvert{i}$ \;
}
$base\gets base + dir*(under\_anchor + 1)$ \label{algo:embed_branch:add_underanchor_base} \;
\endgroup
\For {$i = $ $\vert C_v\vert$ \bf{downto} $1$} {
$v_i, \Nvert{i} \gets C_v[i]$\;
\If {$i$ is even} {
\begin{tabular}{@{\hspace*{0.0em}}l@{}}
\textsc{embed\_branch}$(L^c, v_i$, \\
$\quad base - dir*before, -dir, relPos)$
\end{tabular} \label{algo:embed_branch:recursive_call:even} \;
$before\gets before + n_i$ \;
}
\Else {
\begin{tabular}{@{\hspace*{0.0em}}l@{}}
\textsc{embed\_branch}$(L^c, v_i$, \\
$\quad base + dir*after, dir, relPos)$
\end{tabular} \label{algo:embed_branch:recursive_call:odd} \;
$after\gets after + n_i$ \;
}
}
$relPos[v]\gets \textcolor{red}{base} $\;
}
\end{algorithm}
We should note that {\tt embed} needs to calculate a sorted adjacency list $L$ to calculate a centroidal vertex $c$ for $\Rtree[c]$ (Algorithm \ref{algo:compute_centroid}). However, in order to calculate the arrangement, we need $L$ to be rooted at $c$, then we use $L^c$ (see Section \ref{sec:notation_review}, explanation of \textsc{root\_list}).
In Section \ref{sec:min_projective}, we give an even simpler algorithm that can be seen as a different interpretation of HS's algorithm as it uses the same idea for ordering the subtrees but instead of calculating displacements for nodes it only uses the interval of positions where a subtree must be arranged.
Prior to HS's work, Iordanskii \cite{Iordanskii1987a} presented an algorithm to solve the task of minimizing $D$ under the planarity constraint. He devised a different approach to solve the same problem: given a free tree, the algorithm roots the tree at its centroid, and then separates the tree into chains of vertices, which have to be arranged in such a way that a planar arrangement is produced. The proper selection of the chains, coupled with the proper labeling of their vertices, produces a minimum planar arrangement. An outline of the algorithm that is applied on $\Rtree[c]$ is as follows \cite{Iordanskii2014a}:
\begin{enumerate}
\item Select an arbitrary vertex $v_0$ in the current decomposition subtree (initial tree).
\item Go from vertex $v_0$ along the branches with the greatest number of vertices to some hanging vertex $v_i$.
\item Starting from vertex $v_i$, construct a chain along the branches with the largest number of vertices to some other hanging vertex $v_j$.
\item Assign the highest and lowest numbers to the vertices $v_i$ and $v_j$ from the range allocated for the current decomposition subtree ($1$ and $n$ for the initial tree).
\item Enumerate monotonically the chain connecting the vertices $v_i$ and $v_j$, leaving the corresponding ranges of numbers for each selected decomposition subtree.
\item The procedure recursively repeats until all vertices are numbered.
\end{enumerate}
The algorithm requires $\bigO{n}$ comparison operations and $\bigO{n\log{n}}$ additional memory.
Iordanskii's approach differs markedly from HS's algorithm, e.g., using chains instead of anchors; here we have focused on deriving a couple of algorithms for the projective case that stem from HS's algorithm for the planar case.
\section{Minimum projective linear arrangements}
\label{sec:min_projective}
The two algorithms for the projective case presented in this section have $\bigO{n}$-time and $\bigO{n}$-space complexity; hence our upper bound for the projective case is tighter than that given by PL \cite{Park2009a}. The first algorithm is derived from HS's for the planar case (Algorithm \ref{algo:HS_adaptation}). This algorithm is obtained after extracting the relevant part from HS's original algorithm, adapting it and simplifying procedure {\tt embed}. The simplifications have to do with reducing the computations that Algorithm \ref{algo:compute_suvs:free_tree} does, which are not necessary in the projective variant (Algorithms \ref{algo:compute_suvs:rooted_tree} and \ref{algo:compute_sorted_L:rooted_tree}). Algorithm \ref{algo:compute_suvs:rooted_tree} is the simplified version of Algorithm \ref{algo:compute_suvs:free_tree} that calculates only the sizes of the subtrees $\SubRtree{u}$ of $\Rtree$ for every vertex $u$ of $\Rtree$; Algorithm \ref{algo:compute_sorted_L:rooted_tree} constructs the rooted sorted adjacency list of a rooted tree $\Rtree$ with fewer calculations than Algorithm \ref{algo:compute_sorted_L:free_tree}. There is no equivalent to Algorithm \ref{algo:compute_centroid} for rooted trees because we do not need to look for any centroidal vertex. Finally, one has to use the corrected subprocedure {\tt embed\_branch} (Algorithm \ref{algo:embed_branch}). Algorithm \ref{algo:HS_adaptation} inherits the $\bigO{n}$-time and $\bigO{n}$-space complexity from HS's algorithm.
\begin{algorithm}
\caption{Calculation of size of subtrees for rooted trees. Cost $\bigO{n}$ time, $\bigO{n}$ space.}
\label{algo:compute_suvs:rooted_tree}
\DontPrintSemicolon
\SetKwProg{Fn}{Function}{ is}{end}
\Fn{\textsc{compute\_s\_rt}$(\Rtree)$} {
\KwInTwo{$\Rtree$ rooted tree.}
\KwOutTwo{$S = \{(u,v, s(u,v)) \;|\; (u,v)\in E\}$.}
$S \gets \emptyset$ \;
\For {$v\in \neighs{\Root}$} {
$(\_,S') \gets \textsc{comp\_s\_rt\_rec}(\Rtree, (\Root,v))$ \;
$S \gets S \cup S'$
}
\Return $S$
}
\SetKwProg{Fn}{Function}{ is}{end}
\Fn{\textsc{comp\_s\_rt\_rec}$(\Rtree, (u,v))$} {
\KwInTwo{$\Rtree$ rooted tree, $(u,v)$ directing edge.}
\KwOutTwo{$s$ the size of $\SubRtree{v}$ in vertices, $S = \{(u,v, s(u,v)) \;|\; (u,v)\in E(\SubRtree{v})\}$.}
$s\gets 1$ \;
$S \gets \emptyset$ \;
\tcp{Iterate on the out-neighbours of $v$}
\For {$w\in \neighs{v}$} {
$(s', S') \gets \textsc{comp\_s\_rt\_rec}(\Rtree, (v,w))$ \;
$s \gets s + s'$ \;
$S \gets S \cup S'$ \;
}
\tcp{$s=s(u,v)$}
\tcp{Append at end in $\bigO{1}$}
$S \gets S \cup \{(u,v, s)\}$ \;
\Return $(s,S)$
}
\end{algorithm}
\begin{algorithm}
\caption{Calculation of the sorted adjacency list for rooted trees. Cost $\bigO{n}$ time, $\bigO{n}$ space.}
\label{algo:compute_sorted_L:rooted_tree}
\DontPrintSemicolon
\SetKwProg{Fn}{Function}{ is}{end}
\Fn{\textsc{sorted\_adjacency\_list\_rt}$(\Rtree)$} {
\KwInTwo{$\Rtree$ rooted tree.}
\KwOutTwo{$L$ the decreasingly-sorted adjacency list of $\Rtree$.}
\tcp{Algorithm \ref{algo:compute_suvs:rooted_tree}}
$S\gets \textsc{compute\_s\_rt}(\Rtree)$ \;
Sort the triples $(u,v,s)$ in $S$ decreasingly by $s$ using counting sort \cite{Cormen2001a} \label{algo:sorting_suvs:rooted_tree}\;
$L \gets \{\emptyset\}^n$ \;
\For {$(u,v,s)\in S$} {
\tcp{Append at end in $\bigO{1}$}
$L[u] \gets L[u] \cup \{(v,s)\}$ \;
}
\Return $L$
}
\end{algorithm}
\begin{algorithm}
\caption{Adaptation of HS's main procedure for the projective case.}
\label{algo:HS_adaptation}
\DontPrintSemicolon
\SetKwProg{Fn}{Function}{ is}{end}
\Fn{\textsc{HS\_Projective}$(\Rtree)$} {
\KwInTwo{$\Rtree$ rooted tree at $\Root$.}
\KwOutTwo{An optimal projective arrangement $\arr$.}
\tcp{Steps 1 and 3 of HS's algorithm}
\tcp{Algorithm \ref{algo:compute_sorted_L:rooted_tree}}
$L^\Root \gets \textsc{sorted\_adjacency\_list\_rt}(\Rtree)$ \;
$relPos \gets \{0\}^n$ \;
$leftSum \gets rightSum \gets 0$ \;
\For {$i=k$ \textbf{downto} $1$} {
\If {$i$ is even} {
\tcp{Algorithm \ref{algo:embed_branch}}
\begin{tabular}{@{\hspace*{0.0em}}l@{}}
\textsc{embed\_branch}$(L^\Root, v_i,rightSum,1,$ \\
$\quad relPos)$
\end{tabular}
$rightSum \gets rightSum + n_i$ \;
}
\Else {
\tcp{Algorithm \ref{algo:embed_branch}}
\begin{tabular}{@{\hspace*{0.0em}}l@{}}
\textsc{embed\_branch}$(L^\Root, v_i,-leftSum,-1,$ \\
$\quad relPos)$
\end{tabular}
$leftSum \gets leftSum + n_i$ \;
}
}
$\arr \gets \{0\}^n$ \tcp{empty arrangement}
$\arr(\Root) \gets leftSum + 1$\;
$relPos[\Root] \gets 0$ \;
\lFor {each vertex $v$} {
$\arr(v) \gets \arr(\Root) + relPos[v]$
}
\Return $\arr$
}
\end{algorithm}
The second algorithm for the projective case is based on a different approach based on intervals (Algorithms \ref{algo:min_projective} and \ref{algo:arrange}). Although the pseudocode given can be regarded as a formal interpretation of GT's sketch \cite{Gildea2007a} its correctness stems largely from the theorems and lemmas given by HS \cite{Hochberg2003a} (summarized in Section \ref{sec:notation_review}). In Algorithm \ref{algo:min_projective} we give the main procedure that includes the call to the embedding recursive procedure, given in Algorithm \ref{algo:arrange}, which could be seen as a combination of HS's methods {\tt embed\_branch} and {\tt embed} excluding the calculation of one of the centroidal vertices \cite{Hochberg2003a}.
Algorithm \ref{algo:arrange} calculates the arrangement of the input tree $\Rtree$ using intervals of integers $[a,b]$, where $1\le a\le b\le n$, that indicate the first and the last position of the vertices of a subtree in the linear arrangement; an approach based on intervals (but using chains) was considered earlier by Iordanskii \cite{Iordanskii2014a}. For the case of $\Rtree$, the interval is obviously $[1,n]$, as seen in the first call to Algorithm \ref{algo:arrange} (line \ref{algo:min_projective:first_rec_call} of Algorithm \ref{algo:min_projective} and line \ref{algo:min_planar:first_rec_call} of Algorithm \ref{algo:min_planar}). The loop at line \ref{algo:arrange:for_loop} of Algorithm \ref{algo:arrange} is responsible for arranging all immediate subtrees of $\SubRtree{u}$ following the ordering described by HS (Section \ref{sec:notation_review}). Now, let $\SubRtree{u}$ ($u\neq\Root$) be a subtree of $\Rtree$ to be embedded in the interval $[a,b]$, where $u$, $a$ and $b$ are parameters of the recursive procedure. If one of the immediate subtrees of $\SubRtree{u}$, say $\SubRtree{v}$ with $\Nvert{v} = |V(\SubRtree{v})|$, is to be arranged in the available interval farthest to the left of its parent $u$, its interval is $[a, a + \Nvert{v} - 1]$ (lines \ref{algo:arrange:start_big_if_1}-\ref{algo:arrange:end_big_if_1}); when it is to be arranged in the available interval farthest to the right of $u$, its interval is $[b - \Nvert{v} + 1, b]$ (lines \ref{algo:arrange:start_big_else_1}-\ref{algo:arrange:end_big_else_1}). Notice that the side (with respect to $u$) on which subtree $\SubRtree{v}$ has to be arranged is decided by changing the value of the variable {\tt side}, whose initial value is given in either line \ref{algo:arrange:initial_side_right} or line \ref{algo:arrange:initial_side_left} depending on the side on which $u$ has been placed with respect to its parent (said side is given as the parameter $\tau$ to the recursive procedure). 
After $\SubRtree{v}$ is arranged, we need to update the left and right limits of the arrangement of $\SubRtree{u}$: if the subtree $\SubRtree{v}$ is arranged to the left of $u$, the left limit is to be increased by $\Nvert{v}$ (line \ref{algo:arrange:big_if_2}), and when it is arranged to the right of $u$, the right limit is to be decreased by $\Nvert{v}$ (line \ref{algo:arrange:big_else_2}). When all immediate subtrees of $\SubRtree{u}$ have been arranged (line \ref{algo:arrange:finish_recursive}), only node $u$ needs to be arranged, thus the remaining interval $[a,b]$ has one element, and then $a = b$ and $\pi(u)=a$.
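Our reading of this recursion can be rendered in a few lines of Python (an illustrative sketch under assumed inputs: {\tt children[u]} lists $u$'s children decreasingly sorted by subtree size and {\tt sizes[v]} is $|V(\SubRtree{v})|$; the variable {\tt side} plays the role of both $\tau$ and {\tt side}, since their initial values coincide).

```python
# Sketch of the interval-based recursion: children are placed on
# alternating sides of u, consuming the interval [a, b] from both ends
# inwards; when all children are placed, a == b and u takes position a.

def arrange(children, sizes, u, side, a, b, arr):
    for v in children[u]:
        if side == 'left':
            arrange(children, sizes, v, 'left', a, a + sizes[v] - 1, arr)
            a += sizes[v]
        else:
            arrange(children, sizes, v, 'right', b - sizes[v] + 1, b, arr)
            b -= sizes[v]
        side = 'right' if side == 'left' else 'left'
    arr[u] = a                   # a == b: only u remains to be placed

def arrange_optimal_projective(children, sizes, root, n):
    arr = {}
    arrange(children, sizes, root, 'right', 1, n, arr)  # 'right' arbitrary
    return arr
```

Note that for very deep trees (e.g., paths) the recursion depth is $\Theta(n)$, so a production Python version would raise the interpreter's recursion limit or use an explicit stack.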
Furthermore, using this recursive procedure, solving the planar variant is straightforward (Algorithms \ref{algo:min_planar} and \ref{algo:arrange}): given a free tree $\Ftree$, we simply have to find a centroidal vertex $c$ of $\Ftree$ (Algorithm \ref{algo:compute_centroid}) at which to root the tree and then supply $\Rtree[c]$ and $L^c$ as input to Algorithm \ref{algo:arrange}. This is due to the fact that an optimal planar arrangement for $\Ftree$ is an optimal projective arrangement for $\Rtree[c]$ \cite{Hochberg2003a}. Clearly, an optimal planar arrangement for $\Ftree$ need not be an optimal projective arrangement for $\Rtree$ for $\Root\neq c$, as $\Root$ might be covered. Fig. \ref{fig:OPl:first_proj_gt_pla}(a) shows an optimal projective arrangement of the rooted tree $\Rtree[1]$, which is not an optimal planar arrangement of $\Ftree$; Fig. \ref{fig:OPl:first_proj_gt_pla}(b) shows an arrangement that is both optimal planar for $\Ftree$ and optimal projective for $\Rtree[2]$.
\begin{algorithm}
\caption{Linear-time calculation of an optimal projective arrangement.}
\label{algo:min_projective}
\DontPrintSemicolon
\SetKwProg{Fn}{Function}{ is}{end}
\Fn{\textsc{arrange\_optimal\_projective}$(\Rtree)$} {
\KwInTwo{$\Rtree$ rooted tree at $\Root$.}
\KwOutTwo{An optimal projective arrangement $\arr$.}
\tcp{Algorithm \ref{algo:compute_sorted_L:rooted_tree}}
$L^\Root \gets \textsc{sorted\_adjacency\_list\_rt}(\Rtree)$ \label{algo:min_projective:adjacency_list} \;
$\arr \gets \{0\}^n$ \tcp{empty arrangement}
\tcp{The starting side `right' is arbitrary.}
\tcp{Algorithm \ref{algo:arrange}.}
\textsc{Arrange}$(L^\Root, \Root, \textrm{right}, 1, n, \arr)$ \label{algo:min_projective:first_rec_call} \;
\Return $\arr$
}
\end{algorithm}
\begin{algorithm}
\caption{Linear-time calculation of an optimal planar arrangement.}
\label{algo:min_planar}
\DontPrintSemicolon
\SetKwProg{Fn}{Function}{ is}{end}
\Fn{\textsc{arrange\_optimal\_planar}$(\Ftree)$} {
\KwInTwo{$\Ftree$ free tree.}
\KwOutTwo{An optimal planar arrangement $\arr$.}
\tcp{Algorithm \ref{algo:compute_sorted_L:free_tree}}
$L \gets \textsc{sorted\_adjacency\_list\_ft}(\Ftree)$ \;
\tcp{Algorithm \ref{algo:compute_centroid}}
$c \gets $\textsc{find\_centroidal\_vertex$(\Ftree, L)$} \;
\tcp{list $L$ rooted at $c$ (Section \ref{sec:notation_review})}
$L^c \gets \textsc{root\_list}(L,c)$ \;
$\arr \gets \{0\}^n$ \tcp{empty arrangement}
\tcp{The starting side `right' is arbitrary.}
\tcp{Algorithm \ref{algo:arrange}.}
\textsc{Arrange}$(L^c, c, \textrm{right}, 1, n, \arr)$ \label{algo:min_planar:first_rec_call} \;
\Return $\arr$
}
\end{algorithm}
\begin{algorithm}
\caption{Optimal arrangement of a tree according to its sorted adjacency list.}
\label{algo:arrange}
\DontPrintSemicolon
\SetKwProg{Fn}{Function}{ is}{end}
\Fn{\textsc{Arrange}$(L^\Root, u, \tau, a,b, \arr)$} { \label{algo:min_projective:recursive_procedure}
\KwInTwo{(Rooted) sorted adjacency list $L^\Root$ as described in Section \ref{sec:notation_review}; $u$ the root of the subtree to be arranged; $\tau$ position of $u$ with respect to its parent $p(u)$; $[a,b]$ interval of positions of the arrangement where to embed $\SubRtree{u}$; $\arr$ the partially-constructed arrangement.}
\KwOutTwo{$\arr$ updated with the optimal projective arrangement for $\SubRtree{u}$ in $[a,b]$.}
$C_u \gets L^\Root[u]$ \tcp{the children of $u$ decreasingly sorted by size}
\lIf {$\tau$ is $\mathrm{right}$} { side $\gets$ right \label{algo:arrange:initial_side_right} }
\lElse { side $\gets$ left \label{algo:arrange:initial_side_left} }
\For {$i$ from $1$ to $|C_u|$} { \label{algo:arrange:for_loop}
\tcp{the $i$-th child of $u$ and its size $\Nvert{v}=|V(\SubRtree{v})|$}
$v, \Nvert{v} \gets C_u[i]$ \;
\If {$\mathrm{side}$ is $\mathrm{left}$} {
$\tau_{\mathrm{next}} \gets$ left \label{algo:arrange:start_big_if_1} \;
$a_{\mathrm{next}}\gets a$ \;
$b_{\mathrm{next}}\gets a + \Nvert{v} - 1$ \label{algo:arrange:end_big_if_1}
}
\Else {
$\tau_{\mathrm{next}} \gets$ right \label{algo:arrange:start_big_else_1} \;
$a_{\mathrm{next}}\gets b - \Nvert{v} + 1$ \;
$b_{\mathrm{next}}\gets b$ \label{algo:arrange:end_big_else_1}
}
\textsc{Arrange}$(L^\Root, v, \tau_{\mathrm{next}}, a_{\mathrm{next}}, b_{\mathrm{next}}, \arr)$ \label{algo:arrange:rec_call} \;
\If {$\mathrm{side}$ is $\mathrm{left}$} { \label{algo:arrange:big_if_2}
$a \gets a + \Nvert{v}$
}
\Else { \label{algo:arrange:big_else_2}
$b \gets b - \Nvert{v}$
}
side $\gets$ opposite side \;
}
$\arr(u) \gets a$ \label{algo:arrange:finish_recursive} \;
}
\end{algorithm}
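For readers who prefer executable code, Algorithm \ref{algo:arrange} can be sketched in Python as follows. The representation is an assumption standing in for the rooted sorted adjacency list $L^\Root$: \texttt{children[u]} maps each vertex to its list of \texttt{(child, subtree\_size)} pairs, already sorted by decreasing subtree size.

```python
# Sketch of Algorithm "Arrange": recursive interval-based placement.
# children[u]: list of (child, subtree_size) pairs, sorted by
# decreasing subtree size; positions are 1-indexed as in the pseudocode.

def arrange(children, u, tau, a, b, arr):
    """Embed the subtree rooted at u into positions [a, b] of arr."""
    side = 'right' if tau == 'right' else 'left'
    for v, nv in children[u]:
        if side == 'left':
            tau_next, a_next, b_next = 'left', a, a + nv - 1
        else:
            tau_next, a_next, b_next = 'right', b - nv + 1, b
        arrange(children, v, tau_next, a_next, b_next, arr)
        if side == 'left':
            a += nv
        else:
            b -= nv
        side = 'left' if side == 'right' else 'right'
    arr[u] = a  # a == b here: u takes the one position left in [a, b]

def arrange_optimal_projective(children, root, n):
    """Driver mirroring the first recursive call of the pseudocode."""
    arr = {}
    arrange(children, root, 'right', 1, n, arr)
    return arr

# A star with center 0: the center ends up between its two leaves.
star = {0: [(1, 1), (2, 1)], 1: [], 2: []}
arr = arrange_optimal_projective(star, 0, 3)  # {1: 3, 2: 1, 0: 2}
```

Each call consumes exactly the interval $[a,b]$ of size $\Nvert{u}$, shrinking it by one subtree per iteration, so the root of each subtree lands in the single position left over, exactly as in line \ref{algo:arrange:finish_recursive} of the pseudocode.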
Algorithm \ref{algo:min_projective}'s time and space complexities are $\bigO{n}$. First, the sorted, already-rooted adjacency list $L^\Root$ of $\Rtree$ can be computed in $\bigO{n}$ time (line \ref{algo:min_projective:adjacency_list}). The running time of Algorithm \ref{algo:arrange} is also $\bigO{n}$: each iteration of the `for' loop (line \ref{algo:arrange:for_loop}) performs constant-time operations and a single recursive call, the loop runs $\degree{u}=|\neighs{u}|$ times for a vertex $u$, and every vertex is visited only once, so the total running time is $\bigO{\sum_{u\in V}\degree{u}}=\bigO{n}$. The space complexity is $\bigO{n}$: sorting and building the adjacency list $L^u$ requires $\bigO{n}$ space (for any $u$), and Algorithm \ref{algo:arrange} requires $\bigO{n}$ extra space for the recursion stack in the worst case (attained by path graphs). The same holds for Algorithm \ref{algo:min_planar}.
\section{Conclusions and future work}
\label{sec:conclusions}
To the best of our knowledge, our work is the first to highlight a relationship between the MLA problem under planarity and the same problem under projectivity. We have shown that HS's algorithm for planarity \cite{Hochberg2003a} contains a subalgorithm that solves the projective case. We suspect that Iordanskii's algorithm for planarity \cite{Iordanskii1987a} may also contain a subalgorithm for the projective case. We have corrected a few aspects of HS's algorithm (Algorithm \ref{algo:embed_branch}).
We provided two detailed algorithms for the projective case that run without a doubt in $\bigO{n}$ time. One that stems directly from HS's original algorithm for the planar case (Algorithms \ref{algo:HS_adaptation} and \ref{algo:embed_branch}), and another interval-based algorithm (Algorithms \ref{algo:min_projective} and \ref{algo:arrange}) that builds on HS's work but is less straightforward. The latter algorithm leads immediately to a new way to solve the planar case in $\bigO{n}$ time (Algorithms \ref{algo:min_planar} and \ref{algo:arrange}) thanks to the correspondence between the planar case and the projective case that we have uncovered in this article.
GT \cite{Gildea2007a} sketched an algorithm for the projective case and claimed that it runs in linear time. PL \cite{Park2009a} added some details, but not enough, and concluded that it runs in $\bigO{n \log d_{max}}$ time, which, as we have seen, overestimates the actual complexity. During the review of this paper, we became aware of a Master's thesis \cite{Bommasani2020a} that points out an error in GT's algorithm. That error does not affect our implementation.
A unified approach to planarity and projectivity might also be possible for the maximum linear arrangement problem \cite{Hassin2000a}. To the best of our knowledge, a polynomial-time algorithm for the unrestricted case is not forthcoming. An intriguing question is whether the maximum linear arrangement problem on trees can be solved in linear time for the projective and planar variants, as in the corresponding minimization problem.
\section*{Acknowledgements}
We are grateful to M. Iordanskii for helpful discussions. LAP is supported by Secretaria d’Universitats i Recerca de la Generalitat de Catalunya and the Social European Fund. RFC and LAP are supported by the grant TIN2017-89244-R from MINECO (Ministerio de Econom\'{i}a, Industria y Competitividad). RFC is also supported by the recognition 2017SGR-856 (MACDA) from AGAUR (Generalitat de Catalunya). JLE is funded by the grant PID2019-109137GB-C22 from MINECO.
\bibliographystyle{plain}
% Source: https://arxiv.org/abs/2011.08880
\title{Artifacts of Quantization in Distance Transforms}
\begin{abstract}
Distance transforms are a central tool in shape analysis, morphometry, and curve evolution problems. This work describes and investigates an artifact present in distance maps computed from sampled signals. Namely, sampling reflects through the distance transform, causing quantization in the resulting distance map. Gradients of the quantized distance map show banding, affecting the quality of subsequent processing. Furthermore, this error is independent of the sampling period of the signal and cannot be removed by modifying the number of samples across an object's width. Where needed, distance maps should be computed from representations other than binary images. In the case where exact representations are needed, a dithering and noise removal algorithm is proposed.
\end{abstract}
\section{Introduction}
The distance transform is a fundamental tool of image processing, used extensively in shape analysis~\cite{blum1967transformation,kimmel1995skeletonization,siddiqi2002hamilton,niblack1992generating}, morphometry~\cite{hildebrand1997new}, and curve evolution~\cite{osher1988fronts, caselles1993geometric, malladi1995shape}.
The distance signal is a single, non-parametric representation of shape and volume, able to represent objects of any dimension, of arbitrary topology, and under arbitrary definitions of distance.
These features make signed distance transforms appealing as a digital representation of spatial objects.
Many methods exist to digitally embed an object.
For binary images, the algorithm is simply called the distance transform~\cite{rosenfeld1966sequential,rosenfeld1968distance,danielsson1980euclidean,borgefors1986distance}.
Representation-specific methods are available for meshes~\cite{baerentzen2005signed} and parametric curves~\cite{pottmann2003geometry}.
This article is concerned with the fidelity of the distance transform of binary signals.
This study demonstrates that the distance transform of a sampled signal is equivalent to a quantized distance transform.
This leads to errors and artifacts in the gradients of the distance transform signal.
An algorithm is presented for removing the quantization and artifacts.
\section{Preliminaries}
\subsection{Distance Transform}
\label{sec:distance_transforms}
Define a metric space $(X, g)$ where $X$ is a set and $g$ a metric on the set.
Consider a subset $A \subset X$.
The distance transform assigns every $x \in X$ a positive real number corresponding to the minimum distance between $x$ and $A$.
\begin{equation}
d(x, A) = \inf_{a \in A} g(x, a)
\end{equation}
In many cases, an extension of the distance transform, called the signed distance transform, is used.
The boundary of the set $\partial A$ is defined as the set of points belonging to the closure of $A$ but not the interior.
The signed distance transform assigns every $x \in X$ the distance to the boundary with the added condition that the sign indicates if the point is inside or outside the set.
\begin{equation}
\label{eqn:sdt}
\phi(x) = \left\{
\begin{matrix}
+ d(x, A) & \text{if } x \in A^C \\
- d(x, A^C) & \text{if } x \in A
\end{matrix}
\right.
\end{equation}
This work uses the convention that the inside of the curve is negative.
The distance transform and signed distance transform of binary images are demonstrated in Figure~\ref{fig:fish} for the fish curve~\cite{lockwood1967book}.
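The definitions above translate directly into code. A minimal brute-force sketch in Python (the finite grid \texttt{X}, the set \texttt{A}, and the choice of the Euclidean metric are illustrative assumptions):

```python
import math

def dist(x, S):
    """d(x, S) = inf over a in S of g(x, a), with the Euclidean metric g."""
    return min(math.dist(x, a) for a in S)

def signed_dist(x, A, X):
    """Signed distance: +d(x, A) outside, -d(x, A^C) inside."""
    A_c = [p for p in X if p not in A]  # complement of A within X
    return -dist(x, A_c) if x in A else dist(x, A)

# Tiny example: X is a 5x5 grid, A a 2x2 block.
X = [(i, j) for i in range(5) for j in range(5)]
A = {(1, 1), (1, 2), (2, 1), (2, 2)}
assert signed_dist((1, 1), A, X) == -1.0  # inside, one step to A^C
assert signed_dist((4, 4), A, X) == math.dist((4, 4), (2, 2))
```

The brute-force minimum is quadratic in the number of samples; practical implementations use the linear-time algorithms cited above, but compute the same quantity.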
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\subfloat[$I(x)$]{
\includegraphics[width=0.4\linewidth]{fish_fish}
\label{fig:fish:fish}
} &
\subfloat[{$d(x, A)$}]{
\includegraphics[width=0.4\linewidth]{fish_backward}
\label{fig:fish:backward}
} \\
\subfloat[$d(x, A^C)$]{
\includegraphics[width=0.4\linewidth]{fish_forward}
\label{fig:fish:forward}
} &
\subfloat[{$\phi(x)$}]{
\includegraphics[width=0.4\linewidth]{fish_signed}
\label{fig:fish:signed}
}
\end{tabular}
\caption{For the fish curve (\ref{fig:fish:fish}), demonstration of the background distance transform (\ref{fig:fish:backward}), foreground distance transform (\ref{fig:fish:forward}), and signed distance transform (\ref{fig:fish:signed}).}
\label{fig:fish}
\end{figure}
\subsection{Sampling and Quantization}
From signal processing theory, digitization refers to the discretization of both the domain (termed sampling) and intensity (termed quantization) of a continuous signal.
The distinction is demonstrated in Figure~\ref{fig:sampling-quant}.
In this work, circular brackets are used to represent continuous signals and square brackets are used to represent sampled signals:
\begin{equation}
y(x) = y[nh]
\end{equation}
where $h$ is the sampling period and $n$ the sample number.
Similarly, $\tilde{y}(x)$ is used to denote a quantized signal.
A signal $\tilde{y}[nh]$ can be both sampled and quantized.
It will be seen that the distance transform of a sampled signal leads to quantization errors in the distance signal that produces structured artifacts in subsequent processing of the distance transform.
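The distinction between the two discretizations can be sketched as follows (the test signal and the step sizes are arbitrary choices):

```python
import math

def sample(y, h, n_samples):
    """Discretize the domain: y[nh] for n = 0, ..., n_samples - 1."""
    return [y(n * h) for n in range(n_samples)]

def quantize(v, q):
    """Discretize the intensity: snap v to the nearest multiple of q."""
    return q * round(v / q)

h, q = 0.5, 0.25
sampled = sample(math.sin, h, 5)               # y[nh]
quantized = [quantize(v, q) for v in sampled]  # y~[nh]: sampled and quantized
```

Sampling fixes *where* the signal is known; quantization fixes *which values* it may take. The artifact studied below arises because the first, applied to a binary image, silently induces the second in the distance map.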
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subfloat[$y(x)$]{
\includegraphics[width=0.44\linewidth]{sampling-quantization_real}
\label{fig:sampling-quant:real}
} &
\subfloat[{$y[nh]$}]{
\includegraphics[width=0.44\linewidth]{sampling-quantization_sampled}
\label{fig:sampling-quant:sampled}
} \\
\subfloat[$\tilde{y}(x)$]{
\includegraphics[width=0.44\linewidth]{sampling-quantization_quant}
\label{fig:sampling-quant:quantized}
} &
\subfloat[{$\tilde{y}[nh]$}]{
\includegraphics[width=0.44\linewidth]{sampling-quantization_sample_and_quant}
\label{fig:sampling-quant:sampled-and-quantized}
}
\end{tabular}
\caption{(\ref{fig:sampling-quant:real}) Continuous signal, (\ref{fig:sampling-quant:sampled}) sampled ($h = 1.0$), (\ref{fig:sampling-quant:quantized}) quantized, (\ref{fig:sampling-quant:sampled-and-quantized}) and both sampled and quantized.}
\label{fig:sampling-quant}
\end{figure}
\subsection{Finite Differences}
In many applications, gradients need to be computed in the image.
These require finite-difference approximations to the underlying derivatives.
Here, we use the following notation for forward, backward, and central difference:
\begin{eqnarray}
D^{+x}\phi &=& \frac{\phi(x + h) - \phi(x)}{h} \\
D^{-x}\phi &=& \frac{\phi(x) - \phi(x - h)}{h} \\
D^{0x}\phi &=& \frac{\phi(x + h) - \phi(x - h)}{2h}
\end{eqnarray}
One important property of signed distance transforms is that their magnitude gradient is $+1$ everywhere except at singularities (the medial axis).
Choosing a numerical approximation to the gradients, $D\phi$, the magnitude gradient can be computed at each point in an n-dimensional image:
\begin{equation}
\lvert\nabla\phi\rvert \approx |D\phi| = \sqrt{\sum_i^n \left(D^i \phi\right)^2}
\end{equation}
Similar approximations can be made for Laplacian, Hessian, and curvature by substituting finite difference approximations for infinitesimal differences.
These derivatives are used in specific image processing tasks such as active contours~\cite{caselles1993geometric, malladi1995shape} and skeletonization~\cite{kimmel1996sub,siddiqi2002hamilton}.
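These operators translate directly to code. A sketch of the three difference operators and the magnitude gradient, using the exact signed distance to a circle as a test signal (the circle parameters and the evaluation point are illustrative assumptions):

```python
import math

def d_plus(phi, x, h):     # D^{+x}: forward difference
    return (phi(x + h) - phi(x)) / h

def d_minus(phi, x, h):    # D^{-x}: backward difference
    return (phi(x) - phi(x - h)) / h

def d_central(phi, x, h):  # D^{0x}: central difference
    return (phi(x + h) - phi(x - h)) / (2 * h)

# Away from the center, the exact signed distance to a circle has
# |grad phi| = 1; central differences reproduce this closely.
def phi(x, y, x0=5.0, y0=5.0, r=2.25):
    return math.hypot(x - x0, y - y0) - r

h = 0.01
gx = d_central(lambda t: phi(t, 2.0), 3.0, h)
gy = d_central(lambda t: phi(3.0, t), 2.0, h)
mag = math.hypot(gx, gy)  # close to 1
```

Note that the unit-gradient property holds here because $\phi$ is evaluated exactly; the sections below show how it degrades once $\phi$ is reconstructed from a sampled binary image.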
\section{Quantization in the \\ Distance Transform}
It is demonstrated that the distance transform of a sampled signal produces a quantized distance signal.
Pedagogical examples are given in one and two dimensions.
Finally, the problem is synthesized based on discretizing the distance metric.
\label{section:quantization}
\subsection{1D Example}
For motivation, we consider the signed distance transform of a one-dimensional rectangular function.
The one-dimensional sphere defined by its radius $r$ and center $x_0$ is given by:
\begin{equation}
\phi_{1D}(x) = \lvert x - x_0\rvert - r
\end{equation}
where $\lvert \cdot \rvert$ is the absolute value function.
The binary signal can be reconstructed using the Heaviside function, $H$
\begin{equation}
I(x) = H(-\phi(x))
\end{equation}
and embedded numerically to produce the estimation to the signed distance transform, $\tilde{\phi}_{1D}$.
A tilde is used as it will be seen that $\tilde{\phi}_{1D}$ is a quantized representation of $\phi_{1D}$.
Figure~\ref{fig:example1d} plots the continuous signal, the sampled signal, the Heaviside of the sampled signal, and the resulting distance transform.
This mirrors the standard method of representing a continuous object in a digital distance transform image.
As seen in Figure~\ref{fig:example1d:dt_Heaviside}, the distance transform of a sampled signal produces a quantized representation of the underlying distance transform.
Additionally, the quantized signal is a constant offset from the true signal.
Appendix~\ref{app:sdt} details why these values are at half integers and not whole integers.
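This construction is easy to reproduce numerically. The sketch below uses a naive sample-to-sample signed distance transform, so the computed values land on whole multiples of $h$ rather than the half-integer values of Appendix~\ref{app:sdt}; the quantization itself is unaffected by that convention:

```python
# 1D sphere phi(x) = |x - x0| - r, sampled with period h, binarized,
# then re-embedded with a naive sample-to-sample distance transform.
x0, r, h, n = 5.0, 2.25, 1.0, 11
phi_true = [abs(k * h - x0) - r for k in range(n)]
inside = [k for k in range(n) if phi_true[k] < 0]    # I[nh] = 1
outside = [k for k in range(n) if phi_true[k] >= 0]  # I[nh] = 0

def sdt(k):
    if k in inside:
        return -h * min(abs(k - j) for j in outside)
    return h * min(abs(k - j) for j in inside)

phi_quant = [sdt(k) for k in range(n)]
# The computed values are integer multiples of h (quantized) ...
assert all(v == h * round(v / h) for v in phi_quant)
# ... while the true signal takes fractional values:
assert any(v != h * round(v / h) for v in phi_true)
```

The fractional part of the true signal ($r = 2.25$ is not a multiple of $h$) is irrecoverably lost in the binarization step, which is the source of both the quantization and the constant offset.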
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subfloat[$\phi(x)$]{
\includegraphics[width=0.44\linewidth]{quant-1d_real}
\label{fig:example1d:real}
} &
\subfloat[{$I(x) = H(-\phi(x))$}]{
\includegraphics[width=0.44\linewidth]{quant-1d_Heaviside}
\label{fig:example1d:Heaviside}
} \\
\subfloat[{$I[nh]$}]{
\includegraphics[width=0.44\linewidth]{quant-1d_sampled}
\label{fig:example1d:sampled}
} &
\subfloat[{$\tilde{\phi}[nh] = sdt(I[nh])$}]{
\includegraphics[width=0.44\linewidth]{quant-1d_dt_Heaviside}
\label{fig:example1d:dt_Heaviside}
}
\end{tabular}
\caption{(\ref{fig:example1d:real}) The signed distance transform of a 1D sphere ($x_0=5.0$, $r=2.25$), (\ref{fig:example1d:Heaviside}) binarized, (\ref{fig:example1d:sampled}) sampled ($h=1.0$), and (\ref{fig:example1d:dt_Heaviside}) the computed signed distance transform. The computed signed distance transform is quantized and biased compared to the continuous distance map.}
\label{fig:example1d}
\end{figure}
\subsection{2D Example}
The problem is now explored in two dimensions.
A circular signal is again used with the absolute value replaced with the $\ell^2$ norm and $x$, $x_0$ understood as 2D vectors.
\begin{equation}
\label{eqn:phi_2d_sphere}
\phi_{2D}(x) = \lVert x - x_0 \rVert_2 - r
\end{equation}
In Figure~\ref{fig:example2d}, the signed distance transform of the Heaviside of the sampled signal is compared to the original signal.
To demonstrate quantization, the numerical signal $\tilde{\phi}_{2D}[ih, jh]$ is plotted against $\phi_{2D}[ih, jh]$.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subfloat[{$\phi_{2D}[ih, jh]$}]{
\includegraphics[width=0.44\linewidth]{quant-2d_phi}
\label{fig:example2d:phi}
} &
\subfloat[{$I = H(-\phi_{2D})$}]{
\includegraphics[width=0.44\linewidth]{quant-2d_In}
\label{fig:example2d:In}
} \\
\subfloat[{$\tilde{\phi}_{2D}[ih, jh] = sdt(I)$}]{
\includegraphics[width=0.44\linewidth]{quant-2d_d}
\label{fig:example2d:d}
} &
\subfloat[Quantization]{
\includegraphics[width=0.44\linewidth]{quant-2d_regress}
\label{fig:example2d:regression}
}
\end{tabular}
\caption{Quantization in the signed distance transform of a sampled 2D sphere ($x_0=\left(5.0, 5.0\right)^T$, $r=2.25$, $h=1.0$). \ref{fig:example2d:phi} is the ideal signed distance transform, \ref{fig:example2d:In} the binarized image, and \ref{fig:example2d:d} the computed signed distance transform demonstrating quantization. \ref{fig:example2d:phi} is plotted against \ref{fig:example2d:d} in \ref{fig:example2d:regression} where dashed lines represent quantized levels and the solid line demonstrates the ideal relationship $y=x$.}
\label{fig:example2d}
\end{figure}
Comparing Figure~\ref{fig:example2d:phi} to Figure~\ref{fig:example2d:d}, the signed distance transform of the sampled signal has fewer gray levels.
From Figure~\ref{fig:example2d:regression}, quantization is seen as intensities aligning along discrete horizontal lines.
However, the quantization is not necessarily at integers: the values lie in a finite set of generally non-integer multiples of $h$.
Furthermore, as seen in the 1D case, the quantized signal is slightly lower (or biased) compared to the ideal signal.
\subsection{Analysis and Synthesis}
The problem is now generalized to vectors of arbitrary dimension.
Consider any element of the set $x \in X$ that can be represented as an integer vector multiplied by the sampling period.
\begin{equation}
x[i, j] = h \left(i, j\right)^T
\end{equation}
The Euclidean distance between any two points will be of the form
\begin{equation}
d(x, p) = h \sqrt{(i_x-i_p)^2 + (j_x - j_p)^2}
\end{equation}
The consequence is that $d$ can only take a finite set of values.
\begin{equation}
d \in \left\{h l \given l = \sqrt{i^2 + j^2}, i,j \in \Z \right\}
\end{equation}
This can be extended to arbitrary dimensions by considering the metric $g$ between all discrete samples of the space $X$ (see Section~\ref{sec:distance_transforms}).
For vectors, this set is the metric between the integer vectors and the zero vector.
\begin{equation}
\label{eqn:quant-metric}
d \in \left\{ hl \given l = g(z, 0) , z \in \Z^n \right\}
\end{equation}
Discretization of the metric is visualized in Figure~\ref{fig:discretized_metric}.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\linewidth]{discretization-of-g_contour}
\caption{Discretization of the $\ell^2$ metric. Grey lines denote contours of the continuous metric while points denote possible samples from quantization.}
\label{fig:discretized_metric}
\end{figure}
In summary, the distance transform of a sampled signal is a sampled and quantized representation of the true distance transform.
The samples are a finite set of integer and non-integer multiples of the sampling period.
For a Euclidean metric, these values will be square roots of the sum of square integers.
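This can be checked numerically: every value produced by a brute-force signed distance transform of the sampled disk is of the form $h\sqrt{i^2+j^2}$. The sketch below reuses the parameters of the 2D example above:

```python
import math

# Brute-force signed distance transform of the sampled 2D disk from
# the example above (x0 = (5, 5), r = 2.25, h = 1, 11 x 11 grid).
r, h, n = 2.25, 1.0, 11
grid = [(i, j) for i in range(n) for j in range(n)]
inside = [p for p in grid if math.dist(p, (5, 5)) < r]
outside = [p for p in grid if math.dist(p, (5, 5)) >= r]

def sdt(p):
    if p in inside:
        return -h * min(math.dist(p, q) for q in outside)
    return h * min(math.dist(p, q) for q in inside)

# Every computed value is of the form h * sqrt(i^2 + j^2):
for p in grid:
    l2 = (sdt(p) / h) ** 2
    assert abs(l2 - round(l2)) < 1e-9
```

Squaring the normalized values recovers integers exactly (up to floating-point error), which is precisely the discretized metric of Equation~\ref{eqn:quant-metric}.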
\section{Artifacts of Quantization}
The consequences of quantization are now explored.
It will be seen that quantization leads to artifacts in the gradients of the distance transform.
These artifacts arise because quantization renders the distance transform locally flat.
\subsection{Gradient of a 2D Sphere}
From Equation~\ref{eqn:phi_2d_sphere}, the gradient of the image can be computed.
\begin{equation}
\nabla \phi_{sphere} = \frac{x - x_0}{\lVert x - x_0 \rVert_2}
\end{equation}
As this equation holds in arbitrary dimensions, the reference to dimensions is dropped.
Note that the magnitude gradient is $+1$ as is expected of a signed distance transform.
Numerical gradients of the quantized signal are compared to the ideal gradients in Figure~\ref{fig:gradients}.
At the \textit{y} axis, the derivative in the \textit{x} direction is zero, as expected (Figure~\ref{fig:gradients:dc}).
However, just forward or backward of the \textit{y} axis, the derivative remains exactly zero.
For the same point in the magnitude gradient image (Figure~\ref{fig:gradients:mag_dfc}), the magnitude gradient is positive one.
The fact that the gradient for some points is flat, while the magnitude of the gradient is one, is important for reinitialization algorithms~\cite{sussman1994level,peng1999pde} where removal of the artifact will converge very slowly.
\begin{figure*}[h]
\centering
\begin{tabular}{cccc}
\multicolumn{4}{c}{ \begin{tabular}{ccc}
\subfloat[{$\phi[ih, jh]$}]{
\includegraphics[width=0.2\linewidth]{grad_Phi}
\label{fig:gradients:phin}
} &
\subfloat[{$I = H(-\phi)$}]{
\includegraphics[width=0.22\linewidth]{grad_I}
\label{fig:gradients:In}
} &
\subfloat[{$\tilde{\phi}[ih, jh]$}]{
\includegraphics[width=0.2\linewidth]{grad_Phi_approx}
\label{fig:gradients:approx_phi}
} \end{tabular}} \\
\subfloat[{$\nabla_x \phi$}]{
\includegraphics[width=0.2\linewidth]{grad_DX}
\label{fig:gradients:grad}
} &
\subfloat[{$D^{0x}\tilde{\phi}$}]{
\includegraphics[width=0.2\linewidth]{grad_Phi_approx_Dcx}
\label{fig:gradients:dc}
} &
\subfloat[{$\lVert\nabla\phi\rVert$}]{
\includegraphics[width=0.2\linewidth]{grad_Mag_grad}
\label{fig:gradients:mag_grad}
} &
\subfloat[{$\lVert D^0\tilde{\phi}\rVert$}]{
\includegraphics[width=0.2\linewidth]{grad_Phi_approx_DcM}
\label{fig:gradients:mag_dfc}
} \\
\subfloat[{$\nabla_x \phi(x, 1.5)$}]{
\includegraphics[width=0.2\linewidth]{grad_DX_1D}
\label{fig:grad-flat:del_phi}
} &
\subfloat[{$D^{0x} \tilde{\phi}(x, 1.5)$}]{
\includegraphics[width=0.2\linewidth]{grad_Phi_approx_Dcx_1D}
\label{fig:grad-flat:del_d}
} &
\subfloat[{$\lVert \nabla \phi(x, 1.5) \rVert$}]{
\includegraphics[width=0.2\linewidth]{grad_Mag_grad_1D}
\label{fig:grad-flat:mag_grad_phi}
} &
\subfloat[{$\lVert D^{0} \tilde{\phi}(x, 1.5) \rVert$}]{
\includegraphics[width=0.2\linewidth]{grad_Phi_approx_DcM_1D}
\label{fig:grad-flat:mag_grad_d}
}
\end{tabular}
\caption{Accuracy of numerical gradients of quantized distance transforms to represent true gradients of distance transforms using a 2D sphere ($x_0=\left(5.0, 5.0\right)^T$, $r=2.5$, $h=0.5$). (a-c) demonstrate the quantized signed distance transform. (d-e, h-i) demonstrate banding and flat \textit{x} gradients. (f-g, j-k) demonstrate a near ideal magnitude gradient.}
\label{fig:gradients}
\end{figure*}
\subsection{Interpretation from Voronoi Diagrams}
The Voronoi diagram~\cite{voronoi1908nouvelles} of the binary image is overlaid on the gradient and gradient magnitude in Figure~\ref{fig:voronoi}.
Parallel Voronoi lines correspond to bands of constant quantization in the distance transform.
Placing a finite difference stencil across the parallel Voronoi edges leads to gradients forced flat, visualized as banding in the gradient image.
However, the gradient magnitude remains close to one.
As the finite difference stencil increases, this error will not change until the stencil extends beyond parallel Voronoi edges.
\begin{figure*}[h]
\centering
\begin{tabular}{cccc}
\subfloat[Voronoi]{
\includegraphics[width=0.2\linewidth]{voronoi_voronoi}
\label{fig:voronoi:voronoi}
} &
\subfloat[{$\tilde{\phi}$}]{
\includegraphics[width=0.2\linewidth]{voronoi_phi}
\label{fig:voronoi:phi}
} &
\subfloat[{$D^{0x} \tilde{\phi}$}]{
\includegraphics[width=0.2\linewidth]{voronoi_d_c_x}
\label{fig:voronoi:d_x}
} &
\subfloat[{$\lVert D^{0} \tilde{\phi} \rVert$}]{
\includegraphics[width=0.2\linewidth]{voronoi_m_c}
\label{fig:voronoi:mag_grad}
}
\end{tabular}
\caption{Voronoi diagram (\ref{fig:voronoi:voronoi}) overlaid on the signed distance transform (\ref{fig:voronoi:phi}), finite difference in the x-direction (\ref{fig:voronoi:d_x}), and magnitude gradient (\ref{fig:voronoi:mag_grad}).}
\label{fig:voronoi}
\end{figure*}
\subsection{Higher Order Gradients}
Finally, higher order derivatives are explored.
The finite difference approximation to the second derivative is straightforward.
\begin{equation}
D^{xx} \phi = \frac{\phi(x+h) - 2\phi(x) + \phi(x-h)}{h^2}
\end{equation}
The mixed derivative $D^{yx}$ has a similar form.
From the first and second derivatives, the Laplacian can be computed.
\begin{equation}
\Delta \phi = \phi_{xx} + \phi_{yy}
\end{equation}
Similarly, the curvature can also be computed.
\begin{equation}
\kappa = \frac{\phi_y^2\phi_{xx} - 2 \phi_x \phi_y \phi_{xy} + \phi_x^2 \phi_{yy}}{\left(\phi_x^2 + \phi_y^2\right)^{3/2}}
\end{equation}
Again, infinitesimal differences are replaced by their respective finite difference approximations.
Since an analytic function exists for the distance transform, all these functions are known exactly.
These ideal images are now compared to the finite difference approximation of the quantized signal.
Figure~\ref{fig:higher_grads} plots errors in the second derivative, mixed derivative, Laplacian, and curvature of the signed distance signal.
Large errors are seen in all values, sometimes amplified and sometimes flat.
Structure is seen in the gradient along the diagonals.
Higher order derivatives experience banding and extremely large errors from quantization.
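The amplification is easy to exhibit in one dimension. A unit-slope ramp (the ideal profile of a distance map away from the medial axis) has zero second derivative, yet after quantization its second differences spike to $\pm q/h^2$, where $q$ is the quantization step. The numbers in this sketch are illustrative:

```python
# A unit-slope ramp has zero second derivative; quantizing it to
# multiples of q creates steps whose second differences spike.
h, q = 0.25, 0.5
xs = [k * h for k in range(20)]
quant = [q * round(x / q) for x in xs]  # quantized ramp, true phi'' = 0

d2 = [(quant[k + 1] - 2 * quant[k] + quant[k - 1]) / h**2
      for k in range(1, len(xs) - 1)]
# The quantized second difference spikes with magnitude q / h**2 = 8
# at the quantization steps, instead of being 0 everywhere.
assert max(abs(v) for v in d2) == q / h**2
```

Since the spike magnitude is $q/h^2$ and the quantization step $q$ is itself tied to $h$ through the metric, refining the grid does not tame these spikes, consistent with the $h$-independence discussed below.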
\begin{figure*}[h]
\centering
\begin{tabular}{ccccc}
\subfloat[$\phi$]{
\includegraphics[width=0.15\linewidth]{curvature_phi}
\label{fig:higher_grads:phi}
} &
\subfloat[{$d^2\phi/dx^2$}]{
\includegraphics[width=0.15\linewidth]{curvature_phi_xx}
\label{fig:higher_grads:phi_xx}
} &
\subfloat[{$d^2\phi/dxdy$}]{
\includegraphics[width=0.15\linewidth]{curvature_phi_xy}
\label{fig:higher_grads:phi_xy}
} &
\subfloat[{$\Delta \phi$}]{
\includegraphics[width=0.15\linewidth]{curvature_phi_lap}
\label{fig:higher_grads:phi_lap}
} &
\subfloat[{$\kappa$}]{
\includegraphics[width=0.15\linewidth]{curvature_phi_curve}
\label{fig:higher_grads:phi_curve}
} \\
\subfloat[{$\tilde{\phi}$}]{
\includegraphics[width=0.15\linewidth]{curvature_d}
\label{fig:higher_grads:d}
} &
\subfloat[{$D^{xx}\tilde{\phi}$}]{
\includegraphics[width=0.15\linewidth]{curvature_d_xx}
\label{fig:higher_grads:d_xx}
} &
\subfloat[{$D^{xy}\tilde{\phi}$}]{
\includegraphics[width=0.15\linewidth]{curvature_d_xy}
\label{fig:higher_grads:d_xy}
} &
\subfloat[{$\Delta \tilde{\phi}$}]{
\includegraphics[width=0.15\linewidth]{curvature_d_lap}
\label{fig:higher_grads:d_lap}
} &
\subfloat[{$\tilde{\kappa}$}]{
\includegraphics[width=0.15\linewidth]{curvature_d_curve}
\label{fig:higher_grads:d_curve}
} \\
&
\subfloat[Error]{
\includegraphics[width=0.15\linewidth]{curvature_xx}
\label{fig:higher_grads:error_xx}
} &
\subfloat[Error]{
\includegraphics[width=0.15\linewidth]{curvature_xy}
\label{fig:higher_grads:error_xy}
} &
\subfloat[Error]{
\includegraphics[width=0.15\linewidth]{curvature_lap}
\label{fig:higher_grads:error_lap}
} &
\subfloat[Error]{
\includegraphics[width=0.15\linewidth]{curvature_curve}
\label{fig:higher_grads:error_curve}
}
\end{tabular}
\caption{Errors in the finite difference approximation of second derivatives, and of functions thereof, computed from a quantized signed distance signal. The error signal is plotted along the path $y=1.0$. Columns (b, g, k) show second derivatives along \textit{x}; (c, h, l) the mixed derivative; (d, i, m) the Laplacian; (e, j, n) the curvature.}
\label{fig:higher_grads}
\end{figure*}
\subsection{Real Data}
Finally, this result is experimentally investigated using real data.
A $24 \times 24$ 2D slice of canine trabecular bone is used (isotropic nominal resolution of $34 \: \mu m$).
The image, Voronoi diagram, signed distance transform, and gradient computations are visualized in Figure~\ref{fig:real-data}.
Banding artifacts are seen in the y-gradient while the magnitude gradient is near $+1$.
Note that the magnitude gradient approaches zero near the medial axis of the object, consistent with the central difference operator.
\begin{figure*}[h]
\centering
\begin{tabular}{ccccc}
\subfloat[$I$]{
\includegraphics[width=0.17\linewidth]{real-data_I}
\label{fig:real-data:I}
} &
\subfloat[Voronoi]{
\includegraphics[width=0.17\linewidth]{real-data_voronoi}
\label{fig:real-data:voronoi}
} &
\subfloat[{$\tilde{\phi}$}]{
\includegraphics[width=0.17\linewidth]{real-data_phi}
\label{fig:real-data:phi}
} &
\subfloat[{$D^{0y} \tilde{\phi}$}]{
\includegraphics[width=0.17\linewidth]{real-data_d_c_y}
\label{fig:real-data:d_x}
} &
\subfloat[{$\lVert D^{0} \tilde{\phi} \rVert$}]{
\includegraphics[width=0.17\linewidth]{real-data_m_c}
\label{fig:real-data:mag_grad}
}
\end{tabular}
\caption{Voronoi diagram and gradients for a real trabecular bone image.}
\label{fig:real-data}
\end{figure*}
\subsection{Analysis and Synthesis}
The problem is now summarized.
Based on Equation~\ref{eqn:quant-metric}, the finite differences of the quantized transform can be derived.
\begin{eqnarray}
D^{+} \phi &=& \frac{h g(p,0) - h g(q,0)}{h} \\
\label{eqn:finite_diff_quant}
&=& g(p,0) - g(q,0)
\end{eqnarray}
This can be visualized by laying the finite difference stencil onto the origin of Figure~\ref{fig:discretized_metric}.
This implies that the finite differences of the quantized signed distance transform take on a finite set of values, given by differences of the metric between integer points.
\begin{equation}
D \phi \in \left\{g(p, 0) - g(q, 0) \given p,q \in \Z^n\right\}
\end{equation}
Of importance is that the finite difference approximation of the first order derivative (Equation~\ref{eqn:finite_diff_quant}) does not depend on the sample period, $h$, because the underlying signal is quantized to multiples of $h$.
This result is independent of the order of the finite difference approximation.
That is, only once the stencil extends beyond parallel Voronoi edges will the derivative not be flat.
The required size of the stencil could be arbitrarily large.
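The attainable set of first differences can be enumerated directly; note that $h$ does not appear anywhere in the computation (the truncation bound \texttt{N} on the integer offsets is an arbitrary choice):

```python
import math

# Enumerate the attainable first-difference values g(p, 0) - g(q, 0)
# for integer offsets up to N; the sampling period h never appears.
N = 3
offsets = [(i, j) for i in range(-N, N + 1) for j in range(-N, N + 1)]

def g(z):
    return math.hypot(z[0], z[1])  # Euclidean metric to the origin

diff_values = sorted({round(g(p) - g(q), 9)
                      for p in offsets for q in offsets})

assert 0.0 in diff_values  # flat stencils (p = q) are attainable
assert round(math.sqrt(2) - 1.0, 9) in diff_values
```

The presence of $0$ in this set is exactly the flat-gradient artifact: a stencil whose endpoints have equidistant generators reports a derivative of zero regardless of how fine the grid is.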
The origin of the error is that the binary image is flat compared to the unsampled signal.
A similar analysis can be applied to higher derivatives, which are seen to vary with some power of $h$.
However, if the first derivative is flat, so will be the second derivative.
Where the higher derivatives are not flat, quantization errors amplify into higher derivative errors.
As a result, the error cannot be changed by modifying the sampling period.
Whether the structure has 2 or 2000 samples across its width, the error will persist.
Finally, the gradient magnitude is also discretized.
\begin{equation}
\lVert D \phi \rVert \in \left\{\sqrt{\sum_i^n \left(g(p_i, 0) - g(q_i, 0)\right)^2} \:\middle|\: p_i, q_i \in \Z^n\right\}
\end{equation}
However, the quantization error is small, due to two factors.
First, the operator is a square root of a sum of squares, which takes on a finite but large set of values.
Second, while the gradient in one direction may be flat, the gradient in another is very close to one.
The result is a good estimation of gradient magnitude.
In summary, quantization leads to artifacts in finite difference calculations.
Gradients are flat at points where the Voronoi cell is aligned with the finite difference stencil.
Higher derivatives are either flat as well or end up amplifying the quantization noise.
This error is independent of sample period.
\section{Handling Quantization }
Attention is now placed on handling the structured artifacts introduced by the quantization of the distance transform.
The prior analysis made no assumptions on the metric $g$; the artifact is thus present for any distance transform.
For example, the Chamfer approximation to a Euclidian metric~\cite{montanari1968method} will also suffer artifacts due to quantization.
An important feature of the artifact is that obvious attempts to correct it are challenging.
For instance, any function of the embedding $f(\phi)$ will also be quantized.
Furthermore, by the chain rule, the spatial gradients will be flat.
The most obvious solution to quantization is to perform distance transforms based on a different representation.
Examples include distances from meshes~\cite{baerentzen2005signed}, parametric curves~\cite{pottmann2003geometry}, or other vector objects.
However, this is not possible in many applications where only the binary image is given.
Many possible methods could be designed for handling quantization.
Examples include weighted distance transforms, such as replacing the infimum with the $\ell^p$-norm expression for negative values of $p$~\cite{brunet2016generalized}\footnote{This is not a valid norm but is a valid smooth approximation to the minimum.}, or computing distance maps from greyscale signals~\cite{kimmel1996sub}.
In cases where some error in representation is allowed, these are exceptionally fast and robust algorithms.
However, it is not guaranteed that given a binary image, the properties of the embedding are preserved.
\subsection{Required Properties}
Two properties are required of the corrected embedding. The first is that the Heaviside function should recover the true signal:
\begin{equation}
H(-\phi) = I
\end{equation}
This can be equivalently stated as not allowing $\phi$ to change sign.
The second is that the magnitude gradient is $+1$ everywhere except at the medial axis.
\begin{equation}
\lVert \nabla \phi \rVert = +1
\end{equation}
\subsection{Proposed Methods}
As is commonly done to handle quantization errors, dithering is applied to remove dependence between the quantized samples~\cite{lipshitz1992quantization}.
First, noise will be added such that the sign of the distance transform does not change.
Second, a reinitialization algorithm is performed to smooth the noise.
Importantly, the reinitialization algorithm preserves the sign of the distance transform while gradually shifting the magnitude gradient to $+1$.
The limitations of the method are that it is computationally intensive and that it introduces a stochastic element into the analysis.
However, it guarantees that the embedding does not change sign and can guarantee the magnitude gradient is within some convergence criterion of $+1$.
\subsubsection{Model of Noise}
First, noise is added to the signed distance map.
Noise is drawn from a uniform distribution between $-1$ and $+1$ and added to the image.
The dithered signal is represented by a hat, $\hat{\phi}$.
However, the amplitude is modified such that the sign of the distance transform does not change
\begin{equation}
\hat{\phi}(x) = \tilde\phi(x) + A(x) \cdot \mathcal{U}(-1, 1)
\end{equation}
where $\mathcal{U}$ is the uniform distribution and $A$ the amplitude given by
\begin{equation}
A(x) = \min\left(\frac{h}{\alpha}, \lvert\tilde{\phi}(x)\rvert\right)
\end{equation}
where $\alpha > 1$ controls the amplitude relative to the sampling period.
Since the signal is quantized to values of $h$, selecting $\alpha$ smaller than one adds noise larger than the quantization error.
On the other hand, selecting $\alpha$ too large will not add enough noise to sufficiently dither the image.
Given that the noise amplitude is clipped to the magnitude of the signal, it is impossible for the dithered signal to change sign.
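A minimal sketch of this dithering step (function and variable names are ours, not the paper's):

```python
import numpy as np

def dither(phi, h, alpha=2.0, rng=None):
    """Add uniform noise whose amplitude is clipped to min(h/alpha, |phi|),
    so the dithered embedding can never cross zero."""
    rng = np.random.default_rng(0) if rng is None else rng
    A = np.minimum(h / alpha, np.abs(phi))
    return phi + A * rng.uniform(-1.0, 1.0, size=phi.shape)

# A 1D signed distance map quantized to h/2 + k*h with h = 0.5:
phi = np.array([-1.25, -0.75, -0.25, 0.25, 0.75, 1.25])
phi_hat = dither(phi, h=0.5, alpha=2.0)
```

The clipping of the amplitude $A$ is what guarantees sign preservation: the added noise never exceeds $|\tilde{\phi}|$ at any sample.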
An example of a dithered signal is given in Figure~\ref{fig:dither}.
\begin{figure*}[h]
\centering
\begin{tabular}{cccccc}
\subfloat[$\phi$]{
\includegraphics[width=0.12\linewidth]{dither_Phi}
\label{fig:dither:phi}
} &
\subfloat[{$I$}]{
\includegraphics[width=0.13\linewidth]{dither_I}
\label{fig:dither:I}
} &
\subfloat[$\tilde{\phi}$]{
\includegraphics[width=0.12\linewidth]{dither_d}
\label{fig:dither:phi_quant}
} &
\subfloat[{$\hat{\phi}$}]{
\includegraphics[width=0.12\linewidth]{dither_dither}
\label{fig:dither:dither}
} &
\subfloat[$A\cdot\mathcal{U}$]{
\includegraphics[width=0.12\linewidth]{dither_noise}
\label{fig:dither:noise}
} &
\subfloat[{$H(-\hat{\phi})$}]{
\includegraphics[width=0.13\linewidth]{dither_dither_I}
\label{fig:dither:h_dither}
}
\end{tabular}
\caption{Dithering of the quantized signal ($h = 0.5$, $\alpha = 2.0$). (\ref{fig:dither:h_dither}) The binary image does not change.}
\label{fig:dither}
\end{figure*}
\subsubsection{Reinitialization}
Next, the noise is removed.
Here, a reinitialization algorithm is used for smoothing out the noise~\cite{sussman1994level,peng1999pde}.
This is done by solving a partial differential equation that moves the embedding around until its magnitude gradient is $1$ everywhere:
\begin{equation}
\label{eqn:reinitialization}
\phi_t = \text{sgn}(\phi) \left(\lVert \nabla \phi \rVert - 1 \right)
\end{equation}
Here, $\phi_t$ is the partial derivative in time and $\text{sgn}\left(\cdot\right)$ is the sign function.
When implemented appropriately, this algorithm has some exceptional properties.
First, the algorithm works for very sharp surfaces such as would be seen in trabecular bone.
Second, the algorithm has fast convergence far away from the original surface as would be advantageous in transforms of large objects.
Finally, and most importantly, this algorithm does not change the sign of the distance transform during evolution.
Details on the implementation are available elsewhere~\cite{peng1999pde}.
For this work, Equation~\ref{eqn:reinitialization} is solved with a third-order total variation diminishing Runge-Kutta upwind scheme in time and a 5th order accurate weighted essentially non-oscillatory scheme in space~\cite{jiang2000weighted}.
A high order scheme was needed for accurate reinitialization.
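The scheme below is a deliberately simplified 1D illustration of the reinitialization equation, using explicit Euler with first-order Godunov upwinding rather than the TVD-RK3/WENO5 scheme of the paper, and assuming a grid that contains an exact zero crossing. Starting from an embedding with the wrong slope, it recovers a unit magnitude gradient without moving the zero crossing:

```python
import numpy as np

def reinit(phi, h, steps):
    """Iterate phi_t = -sgn(phi) * (|phi_x| - 1) in 1D with explicit
    Euler and first-order Godunov upwinding (dt = h/2 satisfies CFL)."""
    dt = 0.5 * h
    for _ in range(steps):
        s = np.sign(phi)
        dm = np.diff(phi, prepend=phi[:1]) / h  # backward difference
        dp = np.diff(phi, append=phi[-1:]) / h  # forward difference
        # Godunov approximation of |phi_x|, upwinded away from the zero set
        grad = np.where(
            s > 0,
            np.sqrt(np.maximum(np.maximum(dm, 0.0) ** 2, np.minimum(dp, 0.0) ** 2)),
            np.sqrt(np.maximum(np.minimum(dm, 0.0) ** 2, np.maximum(dp, 0.0) ** 2)),
        )
        phi = phi - dt * s * (grad - 1.0)
    return phi

h = 0.025
x = h * np.arange(-40, 41)           # grid containing an exact zero crossing
phi = reinit(2.0 * x, h, steps=400)  # initial slope 2: |phi_x| = 2
```

Corrections propagate outward from the interface at unit speed, so the number of iterations needed scales with the width of the domain in units of $h$, consistent with the slow convergence noted above.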
\subsection{Quantifying Error}
Three errors are defined for measuring the accuracy of the model.
The first is the error in representation as the maximum difference between the Heaviside function and the binary image.
\begin{equation}
\label{eqn:rep_error}
e_{R} = \lVert H(-\phi) - I \rVert_\infty
\end{equation}
The second is the difference in the magnitude gradient from $1$
\begin{equation}
\label{eqn:mag_grad_error}
e_{MG} = \frac{1}{N} \lVert \lVert \nabla \phi \rVert - 1 \rVert_2
\end{equation}
where $N$ is the number of voxels in the image.
Lastly, the error between the finite difference gradient and ideal gradient is defined.
\begin{equation}
\label{eqn:grad_error}
e_{D} = \frac{1}{N} \lVert \lVert D \phi - \nabla \phi \rVert_2 \rVert_2
\end{equation}
Equation~\ref{eqn:grad_error} should be interpreted as first taking the $\ell^2$ norm at each pixel of the difference in gradient vectors and measuring this error across the image using the $\ell^2$ norm again.
Finite differences are computed using 4th order accurate central differences.
All errors are normalized to the size of the image so that they can be interpreted as if they were errors in a pixel.
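The first two error metrics can be sketched as follows (an assumption on our part: \texttt{np.gradient}'s second-order central differences stand in for the fourth-order stencils used in the text; $e_D$ is omitted since it needs a separate ideal gradient):

```python
import numpy as np

def embedding_errors(phi, I, h=1.0):
    """Representation error e_R and magnitude-gradient error e_MG."""
    N = phi.size
    e_R = np.max(np.abs((phi < 0).astype(float) - I))  # H(-phi) vs. I
    grads = np.gradient(phi, h)
    mag = np.sqrt(sum(g ** 2 for g in grads))
    e_MG = np.linalg.norm(mag - 1.0) / N
    return e_R, e_MG

# Exact signed distance to a vertical plane: both errors vanish.
yy, xx = np.mgrid[0:32, 0:32]
phi = xx - 15.5
I = (phi < 0).astype(float)
e_R, e_MG = embedding_errors(phi, I)
```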
\subsection{Experiments}
\subsubsection{Convergence of the Algorithm}
First, the convergence of the algorithm is analyzed.
The three errors are plotted over 400 iterations in Figure~\ref{fig:convergence2}.
Changes in the magnitude gradient error inflect around 20 iterations, while the gradient error continues to improve until just before 300 iterations.
As guaranteed by the algorithm, the representation error is always zero (Figure~\ref{fig:convergence2:rep}).
Select iterations are visualized in Figure~\ref{fig:convergence}.
Banding in the gradient image is removed immediately, while noise in the gradient image is visually removed between iterations 50 and 100.
\begin{figure*}[h]
\centering
\begin{tabular}{ccc}
\subfloat[{$\lVert \nabla \phi \rVert - 1$}]{
\includegraphics[width=0.3\linewidth]{convergence_MagGrad}
\label{fig:convergence2:maggrad}
} &
\subfloat[{$\lVert D \phi - \nabla \phi\rVert$}]{
\includegraphics[width=0.3\linewidth]{convergence_Grad}
\label{fig:convergence2:grad}
} &
\subfloat[{$H(-\phi) - I$}]{
\includegraphics[width=0.3\linewidth]{convergence_rep}
\label{fig:convergence2:rep}
}
\end{tabular}
\caption{Log-plots of convergence measures over 400 iterations ($h=0.5$, $\alpha = 2.0$). The representation error (\ref{fig:convergence2:rep}) is zero.}
\label{fig:convergence2}
\end{figure*}
\begin{figure}[h]
\centering
\begin{tabular}{ccc}
\subfloat[{$\tilde{\phi}$}]{
\includegraphics[width=0.25\linewidth]{convergence_nonoise_phi}
\label{fig:convergence:nonoise:phi}
} &
\subfloat[{$|D\tilde{\phi}|$}]{
\includegraphics[width=0.25\linewidth]{convergence_nonoise_maggrad}
\label{fig:convergence:nonoise:maggrad}
} &
\subfloat[{$D^x \tilde{\phi}$}]{
\includegraphics[width=0.25\linewidth]{convergence_nonoise_dx}
\label{fig:convergence:nonoise:dx}
} \\
\subfloat[{$\hat{\phi}$}]{
\includegraphics[width=0.25\linewidth]{convergence_phi_0}
\label{fig:convergence:noise:phi:0}
} &
\subfloat[{$|D\hat{\phi}|$}]{
\includegraphics[width=0.25\linewidth]{convergence_maggrad_0}
\label{fig:convergence:noise:maggrad:0}
} &
\subfloat[{$D^x \hat{\phi}$}]{
\includegraphics[width=0.25\linewidth]{convergence_dx_0}
\label{fig:convergence:noise:dx:0}
} \\
\subfloat[{$\hat{\phi}_{10}$}]{
\includegraphics[width=0.25\linewidth]{convergence_phi_10}
\label{fig:convergence:noise:phi:10}
} &
\subfloat[{$|D\hat{\phi}_{10}|$}]{
\includegraphics[width=0.25\linewidth]{convergence_maggrad_10}
\label{fig:convergence:noise:maggrad:10}
} &
\subfloat[{$D^x \hat{\phi}_{10}$}]{
\includegraphics[width=0.25\linewidth]{convergence_dx_10}
\label{fig:convergence:noise:dx:10}
} \\
\subfloat[{$\hat{\phi}_{50}$}]{
\includegraphics[width=0.25\linewidth]{convergence_phi_50}
\label{fig:convergence:noise:phi:50}
} &
\subfloat[{$|D\hat{\phi}_{50}|$}]{
\includegraphics[width=0.25\linewidth]{convergence_maggrad_50}
\label{fig:convergence:noise:maggrad:50}
} &
\subfloat[{$D^x \hat{\phi}_{50}$}]{
\includegraphics[width=0.25\linewidth]{convergence_dx_50}
\label{fig:convergence:noise:dx:50}
} \\
\subfloat[{$\hat{\phi}_{100}$}]{
\includegraphics[width=0.25\linewidth]{convergence_phi_100}
\label{fig:convergence:noise:phi:100}
} &
\subfloat[{$|D\hat{\phi}_{100}|$}]{
\includegraphics[width=0.25\linewidth]{convergence_maggrad_100}
\label{fig:convergence:noise:maggrad:100}
} &
\subfloat[{$D^x \hat{\phi}_{100}$}]{
\includegraphics[width=0.25\linewidth]{convergence_dx_100}
\label{fig:convergence:noise:dx:100}
} \\
\subfloat[{$\hat{\phi}_{400}$}]{
\includegraphics[width=0.25\linewidth]{convergence_phi_400}
\label{fig:convergence:noise:phi:400}
} &
\subfloat[{$|D\hat{\phi}_{400}|$}]{
\includegraphics[width=0.25\linewidth]{convergence_maggrad_400}
\label{fig:convergence:noise:maggrad:400}
} &
\subfloat[{$D^x \hat{\phi}_{400}$}]{
\includegraphics[width=0.25\linewidth]{convergence_dx_400}
\label{fig:convergence:noise:dx:400}
}
\end{tabular}
\caption{Visualizing convergence for specific iterations ($h=0.5$, $\alpha = 2.0$). $\phi_n$ indicates iteration $n$.}
\label{fig:convergence}
\end{figure}
\subsubsection{Is Dithering Needed?}
Next, the value of dithering is investigated.
Reinitialization is performed without dithering and for dithering with varying values of $\alpha$.
The convergence of the gradient error and magnitude gradient error are displayed in Figure~\ref{fig:alpha2}.
The representation error was zero for all iterations and is not plotted.
A modest improvement is seen as $\alpha$ increases.
As $\alpha$ goes to infinity, the errors settle at those of the undithered signal.
Images for select iterations are displayed in Figure~\ref{fig:alpha}.
For small $\alpha$, incorrect gradients persist for more iterations.
For all examples, a large number of iterations ($>20$) is needed to get sufficiently smooth gradients.
In summary, dithering is not imperative, but its use slightly improves accuracy.
Since dithering is essentially computationally free compared to the reinitialization algorithm, it is recommended to add dithering with a large $\alpha$ value.
\begin{figure}[h]
\centering
\begin{tabular}{c}
\subfloat[{$\lVert \nabla \phi \rVert - 1$}]{
\includegraphics[width=0.75\linewidth]{alpha_error_maggrad}
\label{fig:alpha2:maggrad}
} \\
\subfloat[{$\lVert D \phi - \nabla \phi \rVert$}]{
\includegraphics[width=0.75\linewidth]{alpha_error_grad}
\label{fig:alpha2:grad}
}
\end{tabular}
\caption{Log-plots of convergence measures over 400 iterations for different $\alpha$ ($h=0.5$).}
\label{fig:alpha2}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{ccc}
\subfloat[{$D^{x}\tilde{\phi}_0$}]{
\includegraphics[width=0.25\linewidth]{alpha_nonoise_grad_0}
\label{fig:alpha:nonoise:0}
} &
\subfloat[{$D^{x}\hat{\phi}_0^2$}]{
\includegraphics[width=0.25\linewidth]{alpha_alpha_2.0_grad_0}
\label{fig:alpha:a2:0}
} &
\subfloat[{$D^{x}\hat{\phi}_0^{20}$}]{
\includegraphics[width=0.25\linewidth]{alpha_alpha_20.0_grad_0}
\label{fig:alpha:a20:0}
} \\
\subfloat[{$D^{x}\tilde{\phi}_{20}$}]{
\includegraphics[width=0.25\linewidth]{alpha_nonoise_grad_20}
\label{fig:alpha:nonoise:20}
} &
\subfloat[{$D^{x}\hat{\phi}_{20}^2$}]{
\includegraphics[width=0.25\linewidth]{alpha_alpha_2.0_grad_20}
\label{fig:alpha:a2:20}
} &
\subfloat[{$D^{x}\hat{\phi}_{20}^{20}$}]{
\includegraphics[width=0.25\linewidth]{alpha_alpha_20.0_grad_20}
\label{fig:alpha:a20:20}
} \\
\subfloat[{$D^{x}\tilde{\phi}_{50}$}]{
\includegraphics[width=0.25\linewidth]{alpha_nonoise_grad_50}
\label{fig:alpha:nonoise:50}
} &
\subfloat[{$D^{x}\hat{\phi}_{50}^{2}$}]{
\includegraphics[width=0.25\linewidth]{alpha_alpha_2.0_grad_50}
\label{fig:alpha:a2:50}
} &
\subfloat[{$D^{x}\hat{\phi}_{50}^{20}$}]{
\includegraphics[width=0.25\linewidth]{alpha_alpha_20.0_grad_50}
\label{fig:alpha:a20:50}
} \\
\end{tabular}
\caption{Visualizing convergence for specific iterations and $\alpha$ ($h=0.5$). $\phi_n^\alpha$ indicates iteration $n$ for noise parameter $\alpha$.}
\label{fig:alpha}
\end{figure}
\subsubsection{Quantization in Mean Curvature}
The ability of the method to remove quantization of the mean curvature is demonstrated.
A single L4 vertebra was imaged at an in-plane resolution of 0.7 mm and a slice thickness of 1.0 mm.
The image was embedded using a signed distance transform~\cite{danielsson1980euclidean}.
The signed distance map was dithered ($\alpha = 20$) and reinitialized for 100 iterations.
A surface was extracted from both embeddings using marching cubes~\cite{lorensen1987marching}.
Fourth-order accurate finite difference stencils were placed on the vertices of the mesh and interpolated into the embeddings to compute the mean curvature locally.
This allowed the mean curvature to be visualized on the surface of the mesh and histograms to be computed from the vertices.
Local mean curvature across the trabecular bone is visualized in Figure~\ref{fig:l4}.
Mean curvature appears quantized when computed from the distance map (\ref{fig:l4:sdt}).
After running the proposed algorithm, the mean curvature takes on sensible values and the surface appears smoother (\ref{fig:l4:corrected}).
We reiterate that the embedding did not change sign; only the quantization has been removed.
Finally, comparing the histogram of the distance map (\ref{fig:l4:mean}) to that of the corrected embedding (\ref{fig:l4:corrected_mean}), quantization of the mean curvature has been removed using the proposed method.
It should be noted that the algorithm showed no further improvements after 100 iterations and the representation cannot be made more accurate.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\subfloat[\small sdt]{
\includegraphics[width=0.44\linewidth]{L4}
\label{fig:l4:sdt}
} &
\subfloat[\small Proposed]{
\includegraphics[width=0.44\linewidth]{L4_corrected}
\label{fig:l4:corrected}
} \\
\subfloat[\small sdt histogram]{
\includegraphics[width=0.44\linewidth]{L4_mean}
\label{fig:l4:mean}
} &
\subfloat[\small Proposed histogram]{
\includegraphics[width=0.44\linewidth]{L4_corrected_mean}
\label{fig:l4:corrected_mean}
} \\
\end{tabular}
\caption{Visualization of mean curvature from an embedding. (\ref{fig:l4:sdt}) Traditional signed distance transform, (\ref{fig:l4:corrected}) dithered and reinitialized, (\ref{fig:l4:mean}) histogram of mean curvature in the traditional embedding, (\ref{fig:l4:corrected_mean}) histogram of mean curvature in the proposed correction.}
\label{fig:l4}
\end{figure}
\section{Consequences of Quantization}
\subsection{Morphological Image Processing}
Applications of the distance transform in morphological image processing include skeletonization~\cite{blum1967transformation,kimmel1995skeletonization,siddiqi2002hamilton}, shape matching~\cite{barrow1977parametric}, and thickness computation~\cite{hildebrand1997new}.
Many algorithms exist for each task, some making use of gradients and others not.
In gradient-free methods, quantization is unlikely to affect results.
In gradient-based methods~\cite{kimmel1995skeletonization,siddiqi2002hamilton}, these artifacts can affect results.
Most algorithms take explicit steps to improve gradient computation~\cite{kimmel1995skeletonization} or use integral rather than divergence representations~\cite{siddiqi2002hamilton}.
Quantization is therefore unlikely to have noticeable effects in morphology tasks.
\subsection{Curve Evolution}
Many curve evolution problems such as active contours use signed distance signals as an implicit representation of the curve~\cite{osher1988fronts,caselles1993geometric,malladi1995shape}.
These algorithms evolve the embedding according to gradients of the image, knowing the curve can be recovered as the zero level set of the embedding.
The findings of this work demonstrate that the initialization of an embedding from the signed distance transform of binary image data is no better than first-order accurate, $O(h)$, due to quantization.
This is important because curve evolution based on mean curvature flow requires at least second order accuracy initialization~\cite{coquerelle2016fourth}.
Since most curve evolution problems are implemented with total variation diminishing numerical methods, the noise will not amplify during evolution.
However, the error in the solution will be independent of sampling period.
Solving this issue with reinitialization is impractical because the reinitialization converges slowly, stopping before the representation is second order accurate.
This strongly motivates alternative embedding procedures.
\section{Conclusion}
Distance transforms of sampled signals produce a quantized approximation to the true distance signal.
Quantization leads to an artifact where numerical gradients are made flat.
This artifact is independent of sample period and manifests as banding in the gradient image.
If needed and where possible, the initial distance signal should be constructed from a representation other than a binary image.
However, given no other options, a dithering and reinitialization algorithm is proposed that removes artifacts in the gradients while preserving the representation of the binary signal and keeping the gradient magnitude nearly equal to $+1$.
\appendices
\section{Implementing the Signed Distance Transform}
\label{app:sdt}
Constructing a signed distance transform requires attention to a few details.
The authors prefer the following procedure:
\begin{equation}
\label{eqn:sdt_corrected}
\phi[nh] = \left\{
\begin{matrix}
-h/2 + d(nh, A) & \text{if } nh \in A^C \\
+h/2 - d(nh, A^C) & \text{if } nh \in A
\end{matrix}
\right.
\end{equation}
Justification is given in Figure~\ref{fig:sdt}.
By adding half the sampling period to the embedding, the resulting signed transform crosses zero between the two edge samples.
Furthermore, it keeps the magnitude gradient equal to one at the edge.
This is why the quantization aligns on half-integers in Figure~\ref{fig:example1d:dt_Heaviside}.
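A sketch of this construction built from SciPy's Euclidean distance transform (the function name \texttt{signed\_distance} is ours; the paper's convention makes $\phi$ positive outside the object and negative inside, so that $H(-\phi)=I$):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(I, h=1.0):
    """Half-sample-offset signed distance transform: positive outside
    the object, negative inside, with the zero crossing placed midway
    between the two edge samples."""
    inside = I.astype(bool)
    d_to_A = distance_transform_edt(~inside, sampling=h)   # d(nh, A)
    d_to_Ac = distance_transform_edt(inside, sampling=h)   # d(nh, A^C)
    return np.where(inside, h / 2 - d_to_Ac, -h / 2 + d_to_A)

I = np.array([0, 0, 1, 1, 1, 0, 0])
phi = signed_distance(I, h=1.0)
# phi = [ 1.5,  0.5, -0.5, -1.5, -0.5,  0.5,  1.5]
```

Note that the finite differences of this 1D example are exactly $\pm 1$ everywhere, including across the edge, which is the property the half-sample offset buys.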
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\multicolumn{2}{c}{
\subfloat[{$I[nh]$}]{
\includegraphics[width=0.45\linewidth]{sdt_In}
\label{fig:sdt:binary}
}} \\
\subfloat[{$d(nh, A^C)$}]{
\includegraphics[width=0.45\linewidth]{sdt_d_inside}
\label{fig:sdt:inside}
} &
\subfloat[{$d(nh, A)$}]{
\includegraphics[width=0.45\linewidth]{sdt_d_backwards}
\label{fig:sdt:outside}
} \\
\subfloat[Eqn~\ref{eqn:sdt}]{
\includegraphics[width=0.45\linewidth]{sdt_d_standard}
\label{fig:sdt:standard}
} &
\subfloat[Eqn~\ref{eqn:sdt_corrected}]{
\includegraphics[width=0.45\linewidth]{sdt_d_improved}
\label{fig:sdt:corrected}
} \\
\end{tabular}
\caption{Construction of the signed distance transform from two distance transforms. In~\ref{fig:sdt:standard}, the signal jumps from $+1$ to $-1$ at the zero crossing. This is resolved in~\ref{fig:sdt:corrected}.}
\label{fig:sdt}
\end{figure}
\bibliographystyle{IEEEtran}
arXiv:2011.08880, ``Artifacts of Quantization in Distance Transforms'' (eess.IV, 2020-11-19), https://arxiv.org/abs/2011.08880.

Abstract: Distance transforms are a central tool in shape analysis, morphometry, and curve evolution problems. This work describes and investigates an artifact present in distance maps computed from sampled signals. Namely, sampling reflects through the distance transform, causing quantization in the resulting distance map. Gradients of the quantized distance map show banding, affecting the quality of subsequent processing. Furthermore, this error is independent of the sampling period of the signal and cannot be removed by modifying the number of samples across an object's width. Where needed, distance maps should be computed from representations other than binary images. In the case where exact representations are needed, a dithering and noise removal algorithm is proposed.
https://arxiv.org/abs/2007.11133

Unsupervised Learning of Solutions to Differential Equations with Generative Adversarial Networks

Abstract: Solutions to differential equations are of significant scientific and engineering relevance. Recently, there has been a growing interest in solving differential equations with neural networks. This work develops a novel method for solving differential equations with unsupervised neural networks that applies Generative Adversarial Networks (GANs) to \emph{learn the loss function} for optimizing the neural network. We present empirical results showing that our method, which we call Differential Equation GAN (DEQGAN), can obtain multiple orders of magnitude lower mean squared errors than an alternative unsupervised neural network method based on (squared) $L_2$, $L_1$, and Huber loss functions. Moreover, we show that DEQGAN achieves solution accuracy that is competitive with traditional numerical methods. Finally, we analyze the stability of our approach and find it to be sensitive to the selection of hyperparameters, which we provide in the appendix. Code available at this https URL. Please address any electronic correspondence to dylanrandle@alumni.this http URL.

\section*{Appendix}
\label{appendix}
\subsection*{Description of Experiments}
\label{appendix:experiments}
A plot of the various classical loss functions is provided in Figure \ref{fig:huber_loss_l1_l2_simple}.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{figures/huber_l2_l1.png}
\caption[Comparison of Classic Loss Functions]{Comparison of $L_2$, $L_1$, and Huber loss functions. The Huber loss is equal to $L_2$ for $e \leq 1$ and to $L_1$ for $e > 1$.}
\label{fig:huber_loss_l1_l2_simple}
\end{figure}
\paragraph{Exponential Decay (EXP)} \label{sec:exp_decay}
Consider a model for population decay $x(t)$ given by the exponential differential equation
\begin{equation}
\dot{x}(t) + x(t) = 0,
\end{equation}
with $ x(0) = 1 $ and $t \in (0, 10)$. The ground truth solution $x(t) = e^{-t}$ can be obtained analytically. We reiterate, however, that our method is fully unsupervised and does not make use of this solution data during training. We simply use the ground truth solutions to report mean squared errors of predicted solutions.
To set up the problem for DEQGAN, we define $LHS = \dot{x} + x$ and $RHS = 0$. Figure \ref{fig:gan_exp} presents the results from training DEQGAN on this equation.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figures/gan_exp.png}
\caption[DEQGAN Training: Exponential Decay]{Visualization of DEQGAN training for the exponential decay problem. The left-most figure plots the mean squared error vs. iteration. To the right, we plot the value of the generator (G) and discriminator (D) losses at each iteration. Right of this we plot the prediction of the generator $\hat{x}$ and the true analytic solution $x$ as functions of time $t$. The right-most figure plots the absolute value of the residual of the predicted solution $\hat{F}$.}
\label{fig:gan_exp}
\end{figure}
\paragraph{Simple Harmonic Oscillator (SHO)}
Consider the motion of an oscillating body $x(t)$, which can be modeled by the simple harmonic oscillator differential equation
\begin{equation}
\ddot{x}(t) + x(t) = 0,
\end{equation}
with $x(0) = 0$, $ \dot{x}(0) = 1$, and $t \in (0, 2\pi)$. This differential equation can be solved analytically and has an exact solution $x(t) = \sin{t}$.
Here we set $LHS=\ddot{x} + x$ and $RHS = 0$. Figure \ref{fig:gan_sho} plots the results of training DEQGAN on this problem.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figures/gan_sho.png}
\caption[DEQGAN Training: Simple Oscillator]{Visualization of DEQGAN training for the simple harmonic oscillator problem. The left-most figure plots the mean squared error vs. step (iteration) count. To the right of this, we plot the value of the generator (G) and discriminator (D) losses for each step. Right of this we plot the prediction of the generator $\hat{x}$ and the true analytic solution $x$ as functions of time $t$. The right-most figure plots the absolute value of the residual of the predicted solution $\hat{F}$.}
\label{fig:gan_sho}
\end{figure}
\paragraph{Nonlinear Oscillator (NLO)}
Increasing the complexity further, consider a less idealized oscillating body subject to additional forces, whose motion $x(t)$ can be described by the nonlinear oscillator differential equation
\begin{equation}
\ddot{x}(t)+2 \beta \dot{x}(t)+\omega^{2} x(t)+\phi x(t)^{2}+\epsilon x(t)^{3} = 0,
\end{equation}
with $\beta=0.1, \omega=1, \phi=1, \epsilon=0.1$, $x(0) = 0$, $\dot{x}(0) = 0.5$, and $t \in (0, 4\pi)$. This equation does not admit an analytical solution. Instead, we use the high-quality solver provided by SciPy's \texttt{solve\_ivp} \cite{scipy_cite}.
We set $LHS=\ddot{x}+2 \beta \dot{x}+\omega^{2} x+\phi x^{2}+\epsilon x^{3}$ and $RHS=0$. Figure \ref{fig:gan_nlo} plots the results obtained from training DEQGAN on this equation.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figures/gan_nlo.png}
\caption[DEQGAN Training: Nonlinear Oscillator]{Visualization of DEQGAN training for the nonlinear oscillator problem. The left-most figure plots the mean squared error vs. step (iteration) count. To the right of this, we plot the value of the generator (G) and discriminator (D) losses for each step. Right of this we plot the prediction of the generator $\hat{x}$ and the ground truth solution $x$ as functions of time $t$. The right-most figure plots the absolute value of the residual of the predicted solution $\hat{F}$.}
\label{fig:gan_nlo}
\end{figure}
\paragraph{Non-Autonomous System (NAS)}
Consider the system of ordinary differential equations given by
\begin{equation}
\dot{x}(t) = -ty
\end{equation}
\begin{equation}
\dot{y}(t) = tx
\end{equation}
with $x(0)=1$, $y(0)=0$, and $t \in (0, 2\pi)$. This system has an exact analytical solution given by
\begin{equation}
x = \cos\left(\frac{t^2}{2}\right)
\end{equation}
\begin{equation}
y = \sin\left(\frac{t^2}{2}\right).
\end{equation}
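A quick numerical sanity check, not part of the method itself, that the stated closed form solves the system (central differences stand in for the exact derivatives):

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 20001)
x = np.cos(t ** 2 / 2.0)
y = np.sin(t ** 2 / 2.0)

# Central-difference derivatives of the candidate solution
dx = np.gradient(x, t)
dy = np.gradient(y, t)
```

At interior grid points, $dx \approx -t y$ and $dy \approx t x$ to within the truncation error of the differences.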
Here we set
\begin{equation}
LHS = \left[ \frac{dx}{dt} + ty, \frac{dy}{dt} - tx \right]^T
\end{equation}
and $RHS=\left[0, 0 \right]^T$. Figure \ref{fig:gan_coo} plots the result of training DEQGAN on this problem.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figures/gan_coo.png}
\caption[DEQGAN Training: Non-Autonomous System]{Visualization of DEQGAN training for the non-autonomous system of equations. The left-most figure plots the mean squared error vs. step (iteration) count. To the right of this, we plot the value of the generator (G) and discriminator (D) losses for each step. Right of this we plot the predictions of the generator $\hat{x}, \hat{y}$ and the true analytic solutions $x$, $y$ as functions of time $t$. The right-most figure plots the absolute value of the residuals of the predicted solution $\hat{F_j}$ for each equation $j$.}
\label{fig:gan_coo}
\end{figure}
\paragraph{SIR Epidemiological Model (SIR)}
Given the recent outbreak and pandemic of novel coronavirus (COVID-19) \cite{coronavirus}, we consider an epidemiological model of infectious disease spread given by a system of ordinary differential equations. Specifically, consider the Susceptible $S(t)$, Infected $I(t)$, Recovered $R(t)$ model for the spread of an infectious disease over time $t$. The model is defined by a system of three ordinary differential equations
\begin{equation} \label{eq:sir1}
\frac{dS}{dt} = - \beta \frac{IS}{N}
\end{equation}
\begin{equation} \label{eq:sir2}
\frac{dI}{dt} = \beta \frac{IS}{N} - \gamma I
\end{equation}
\begin{equation} \label{eq:sir3}
\frac{dR}{dt} = \gamma I
\end{equation}
where $\beta=3, \gamma=1$ are given constants related to the infectiousness of the disease, $N = S+I+R$ is the (constant) total population, $S(0)=0.99, I(0)=0.01, R(0)=0$, and $t \in (0, 10)$. Again we use SciPy's \texttt{solve\_ivp} solver \cite{scipy_cite} to obtain ground truth solutions.
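The ground-truth computation can be sketched as follows (tolerances are our choice, not taken from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 3.0, 1.0

def sir(t, u):
    """Right-hand side of the SIR system for state u = (S, I, R)."""
    S, I, R = u
    N = S + I + R
    return [-beta * I * S / N, beta * I * S / N - gamma * I, gamma * I]

sol = solve_ivp(sir, (0.0, 10.0), [0.99, 0.01, 0.0],
                rtol=1e-8, atol=1e-10, dense_output=True)
S, I, R = sol.sol(10.0)   # state at t = 10
```

Since $dS/dt + dI/dt + dR/dt = 0$, the total population is conserved along the trajectory, which makes a convenient correctness check on the solver output.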
We set $LHS$ to be the vector
\begin{equation}
LHS = \left[ \frac{dS}{dt} + \beta \frac{IS}{N}, \frac{dI}{dt} - \beta \frac{IS}{N} + \gamma I, \frac{dR}{dt} - \gamma I \right]^T
\end{equation}
and $RHS=\left[0, 0, 0 \right]^T$. We present the results of training DEQGAN to solve this system of differential equations in Figure \ref{fig:gan_sir}.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figures/gan_sir.png}
\caption[DEQGAN Training: SIR Model]{Visualization of DEQGAN training for the SIR system of equations. The left-most figure plots the mean squared error vs. step (iteration) count. To the right of this, we plot the value of the generator (G) and discriminator (D) losses for each step. Right of this we plot the predictions of the generator $\hat{S}, \hat{I}, \hat{R}$ and the ground truth solutions $S$, $I$, $R$ as functions of time $t$. The right-most figure plots the absolute value of the residuals of the predicted solution $\hat{F_j}$ for each equation $j$.}
\label{fig:gan_sir}
\end{figure}
\paragraph{Poisson Equation (POS)}
Consider the Poisson partial differential equation (PDE) given by \begin{equation}
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 2x(y-1)(y-2x+xy+2)e^{x-y}
\end{equation}
where $(x,y) \in [0, 1] \times [0, 1]$. The equation is subject to Dirichlet boundary conditions on the edges of the unit square \begin{equation} \begin{split}
u(x, y)\bigg|_{x=0} &= 0 \\
u(x, y)\bigg|_{x=1} &= 0 \\
u(x, y)\bigg|_{y=0} &= 0 \\
u(x, y)\bigg|_{y=1} &= 0. \\
\end{split} \end{equation} The analytical solution is \begin{equation}
u(x, y) = x(1-x)y(1-y)e^{x-y}.
\end{equation} We use the two-dimensional Dirichlet boundary adjustment formulae provided in \citet{feiyu_neurodiffeq}. To set up the problem for DEQGAN we let \begin{equation}
LHS = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} - 2x(y-1)(y-2x+xy+2)e^{x-y}
\end{equation}
and $RHS = 0$. We present the results of training DEQGAN on this problem in Figure \ref{fig:gan_pos}.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figures/gan_pos.png}
\caption[DEQGAN Training: Poisson Equation]{Visualization of DEQGAN training for the Poisson equation. The left-most figure plots the mean squared error vs. step (iteration) count. To the right of this, we plot the value of the generator (G) and discriminator (D) losses for each step. To the right of these, we plot the prediction of the generator $\hat{u}$ as a function of position $(x,y)$. The right-most figure plots the absolute value of the residual $\hat{F}$, as a function of $(x,y)$.}
\label{fig:gan_pos}
\end{figure}
\subsection*{DEQGAN Hyperparameters}
We performed $1000$ iterations of random search to tune the hyperparameters of DEQGAN for each differential equation. Table \ref{table:hyperparameters} summarizes the final hyperparameter settings used for DEQGAN.
\begin{table}
\caption{Hyperparameter Settings for DEQGAN}
\label{table:hyperparameters}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccccc}
\toprule
Hyperparameter & EXP & SHO & NLO & NAS & SIR & POS \\
\midrule
Num. Iterations & \num{2e3} & \num{1e4} & \num{2e4} & \num{5e4} & \num{3e4} & \num{4e3} \\
Num. Grid Points & $100$ & $400$ & $400$ & $800$ & $800$ & $32 \times 32$\\
$G$ Units/Layer & $30$ & $40$ & $40$ & $30$ & $40$ & $40$ \\
$G$ Num. Layers & $2$ & $4$ & $4$ & $3$ & $2$ & $4$ \\
$D$ Units/Layer & $20$ & $40$ & $30$ & $50$ & $20$ & $20$ \\
$D$ Num. Layers & $4$ & $2$ & $3$ & $2$ & $3$ & $4$ \\
Activations & $\tanh$ & $\tanh$ & $\tanh$ & $\tanh$ & $\tanh$ & $\tanh$ \\
$G$ Learning Rate & $0.008$ & $0.009$ & $0.006$ & $0.006$ & $0.010$ & $0.008$ \\
$D$ Learning Rate & $0.0005$ & $0.002$ & $0.0007$ & $0.001$ & $0.002$ & $0.002$ \\
$G$ $\beta_1$ (Adam) & $0.671$ & $0.444$ & $0.102$ & $0.706$ & $0.207$ & $0.410$ \\
$G$ $\beta_2$ (Adam) & $0.143$ & $0.633$ & $0.763$ & $0.861$ & $0.169$ & $0.447$ \\
$D$ $\beta_1$ (Adam) & $0.866$ & $0.271$ & $0.541$ & $0.538$ & $0.193$ & $0.593$ \\
$D$ $\beta_2$ (Adam) & $0.165$ & $0.142$ & $0.677$ & $0.615$ & $0.617$ & $0.915$ \\
Exponential LR Decay ($\gamma$) & $0.991$ & $0.998$ & $0.999$ & $0.9998$ & $0.9996$ & $0.996$ \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\end{table}
\subsection*{Non-GAN Hyperparameter Tuning}
\label{appendix:non-gan-tuning}
Table \ref{table:experimental_results_non_gan_tuning} presents results after tuning the hyperparameters of the alternative unsupervised neural network method with $L_1$, $L_2$, and Huber loss functions. We ran $1000$ random search iterations for each differential equation.
\begin{table}[h]
\caption{Experimental Results With Non-GAN Hyperparameter Tuning}
\label{table:experimental_results_non_gan_tuning}
\centering
\begin{tabular}{lccccccccc}
\toprule
& \multicolumn{5}{c}{Mean Squared Error} \\
\cmidrule(lr){2-6}
Key & $L_1$ & $L_2$ & Huber & DEQGAN & Traditional \\
\midrule
EXP & \num{1e-4} & \num{3e-8} & \num{1e-8} & \num{3e-16} & \num{2e-14} (RK4) \\
SHO & \num{2e-5} & \num{3e-9} & \num{1e-9} & \num{1e-12} & \num{1e-11} (RK4) \\
NLO & \num{2e-5} & \num{5e-10} & \num{6e-10} & \num{2e-12} & \num{4e-11} (RK4) \\
NAS & \num{5e-1} & \num{2e-4} & \num{7e-6} & \num{8e-9} & \num{2e-9} (RK4) \\
SIR & \num{2e-5} & \num{6e-10} & \num{3e-10} & \num{2e-10} & \num{5e-13} (RK4) \\
POS & \num{1e-5} & \num{3e-10} & \num{2e-10} & \num{8e-13} & \num{3e-10} (FD) \\
\bottomrule
\end{tabular}
\end{table}
\section{Introduction}
\label{intro}
In fields such as physics, chemistry, biology, engineering, and economics, differential equations are applied to the modeling of important and complex phenomena. While traditional methods for solving differential equations perform well, and the theory for their stability and convergence is well established, the recent success of deep learning techniques \cite{imagenet, seq2seq, attention, transformer, RL_atari, RL_dist, RL_robotic_manipulation, RL_Go} has inspired researchers to apply neural networks to solving differential equations \cite{physics_informed_nns, rnns_odesystem, denns_cosmos, mattheakis2019physical, conv_lstm_pdes, mattheakis2020hamiltonian, nn_highdim_pdes, highdim_nn_pde_forward_backward, dgm_highdim_pde_nn}.
Applying neural networks to solving differential equations can provide a range of benefits over traditional methods. By removing a reliance on finely crafted grids which suffer from the ``curse of dimensionality", neural networks can be more effective than traditional solvers in high-dimensional settings \cite{nn_highdim_pdes, highdim_nn_pde_forward_backward, dgm_highdim_pde_nn}. Furthermore, recent work has shown that neural network solutions can be more accurate in obeying certain physical constraints, such as conservation of energy \cite{mattheakis2020hamiltonian, physics_informed_nns}. Neural networks can also provide a more accurate interpolation scheme \cite{Lagaris_1998}. Finally, forward passes of neural networks are embarrassingly data-parallel, even in difficult-to-parallelize temporal dimensions, and can readily leverage parallel computing architectures.
Interest in research on solving differential equations with unsupervised neural networks is growing. However, since (to the best of our knowledge) there is no theoretical justification for any particular choice of loss function in this setting, we propose ``learning" the loss function with a Generative Adversarial Network (GAN) \cite{goodfellow2014generative}. For data following a known noise model, there is clear theoretical justification, based on the maximum likelihood principle, for fitting models with particular loss functions. For example, in the case of a Gaussian noise model
\begin{equation}
y = x + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma^2),
\end{equation}
the maximum likelihood estimate of the model parameters minimizes the squared error ($L_2$ norm) loss function. This follows because the negative log-likelihood of an observation,
\begin{equation}
-\log p(y \mid x) = \frac{(y - x)^2}{2\sigma^2} + \frac{1}{2}\log\left(2 \pi \sigma^2\right),
\end{equation}
depends on the model output $x$ only through the squared error. In the case of deterministic differential equations, however, there is no noise model and we lack formal justification for a particular choice of loss function among multiple options.
To circumvent this problem, we propose GANs for solving differential equations in a fully unsupervised manner. The discriminator model of a GAN can be thought of as learning the loss function used for optimizing the generator. Moreover, GANs have been shown to excel in scenarios where classic loss functions, such as the mean squared error, struggle due to their inability to capture complex spatio-temporal dependencies \cite{GAN_VAE, superresolution, styleGAN}.
Our main contribution is a novel method, which we call Differential Equation GAN (DEQGAN), for formulating the task of solving differential equations in a \emph{fully unsupervised} manner as a GAN training problem. DEQGAN works by separating the differential equation into left-hand side ($LHS$) and right-hand side ($RHS$), then training the generator to produce a $LHS$ that is indistinguishable to the discriminator from the $RHS$. Experimental results show that our method produces solutions which obtain multiple orders of magnitude lower mean squared errors (computed from known analytic or numerical solutions) than a comparable unsupervised neural network method with (squared) $L_2$, $L_1$, and Huber loss functions. Moreover, DEQGAN achieves solution accuracy that is competitive with traditional fourth-order Runge-Kutta and second-order finite difference methods.
\section{Related Work}
\citet{dissanayake1994neural} were among the first to develop a method for solving differential equations with neural networks. They showed that a neural network-based method could solve differential equations when transformed to unconstrained optimization problems. \citet{Lagaris_1998} extended this work by introducing analytical adjustments to the neural network output that exactly satisfied initial and boundary conditions. They showed that their method achieved lower interpolation error than the finite element method, while maintaining equal error on a fixed mesh. The authors expanded this work to consider arbitrarily-shaped domains in higher dimensions \cite{lagaris_arbitrary_boundary}, and applied neural networks to quantum mechanics \cite{Lagaris_quantum}.
Recent work has solved high-dimensional partial differential equations (PDEs) with neural networks in place of basis functions \cite{dgm_highdim_pde_nn} and by reformulating PDEs as backward stochastic differential equations \cite{nn_highdim_pdes}. To reduce the need to re-learn known physics, \citet{mattheakis2019physical} embedded physical symmetries into the structure of neural networks, and \citet{physics_informed_nns} regularized neural networks according to physical models described by nonlinear PDEs, leading to improved solution accuracy and training convergence over physics-agnostic counterparts. Leveraging recent advances in deep learning, \citet{rnns_odesystem} developed a supervised recurrent neural network method that uses measurement data to solve ordinary differential equations with unknown functional forms, and \citet{conv_lstm_pdes} presented a fully convolutional LSTM network that augments traditional finite difference/finite volume methods used to solve PDEs. \citet{nn4diffeq_survey} presented a survey of neural network and radial basis function methods for solving differential equations.
In parallel, \citet{goodfellow2014generative} introduced the idea of learning generative models with neural networks and an adversarial training algorithm, called Generative Adversarial Networks (GANs). To solve issues of GAN training instability, \citet{arjovsky2017wasserstein} introduced a formulation of GANs based on the Wasserstein distance, and \citet{gulrajani2017improved} added a gradient penalty to approximately enforce a Lipschitz constraint on the discriminator. \citet{gan_spectral_norm} introduced an alternative method for enforcing the Lipschitz constraint with a spectral normalization technique that outperforms the former method on some problems.
Further work has applied GANs to differential equations with solution data used for supervision. \citet{physicsinformedGAN} apply GANs to stochastic differential equations by using ``snapshots" of ground-truth data for semi-supervised training. A project by students at Stanford \cite{turbulence_enrichment} employed GANs to perform ``turbulence enrichment" of solution data in a manner akin to that of super-resolution for images proposed by \citet{superresolution}. Our work distinguishes itself from other GAN-based approaches for solving differential equations by being \emph{fully unsupervised}, and removing the dependence on using supervised training data (i.e. solutions of the equation).
\section{Background}
\label{sec:background}
\subsection{Unsupervised Neural Networks for Differential Equations}
Early work by \citet{dissanayake1994neural} proposed solving differential equations in an unsupervised manner with neural networks. Their paper considers general differential equations of the form
\begin{equation} \label{eq:lagaris_diffeq}
F(t, \Psi(t), \Delta \Psi(t), \Delta^2 \Psi(t), \ldots) = 0
\end{equation}
where $\Psi(t)$ is the desired solution, $\Delta$ and $\Delta^2$ represent the first and second derivatives, and the system is subject to certain boundary or initial conditions. The learning problem is then formulated as minimizing the sum of squared errors of the above equation
\begin{equation} \label{eq:lagaris_objective}
\min_{\theta}{\sum_{t \in \mathcal{D}}{F(t, \Psi_{\theta}(t), \Delta \Psi_{\theta}(t), \Delta^2 \Psi_{\theta}(t), \ldots)^2}}
\end{equation}
where $\Psi_{\theta}$ is a neural network parameterized by $\theta$, $\mathcal{D}$ is the domain of the problem, and we compute derivatives with automatic differentiation. This allows us to use backpropagation \cite{backprop} to train the parameters of the neural network to satisfy the differential equation. Note that this formalism can be trivially extended to handle spatial domains and multidimensional problems, which we do in our experiments and describe in the \hyperref[appendix]{Appendix}.
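As a hedged illustration of this objective (not the authors' code), the following PyTorch sketch minimizes the squared residual of the exponential-decay ODE $\dot{x}(t) + x(t) = 0$ with $x(0) = 1$, using automatic differentiation; the architecture and hyperparameters are illustrative placeholders:

```python
import torch

# Minimal sketch of the residual-minimization objective for the
# exponential-decay ODE x'(t) + x(t) = 0, x(0) = 1. Architecture and
# hyperparameters are illustrative, not the experimental settings.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 30), torch.nn.Tanh(),
    torch.nn.Linear(30, 30), torch.nn.Tanh(),
    torch.nn.Linear(30, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t = torch.linspace(0, 2, 100).reshape(-1, 1).requires_grad_(True)

losses = []
for step in range(1000):
    x = 1.0 + t * net(t)                       # x(0) = 1 holds exactly
    dx, = torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)
    loss = ((dx + x) ** 2).mean()              # squared residual of x' + x
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```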
\subsection{Generative Adversarial Networks}
Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} are a type of generative model that uses two neural networks to induce a generative distribution $p(x)$ of the data by formulating the inference problem as a two-player, zero-sum game.
The generative model first samples a latent variable $z \sim \mathcal{N}(0,1)$, which is used as input into the generator $G$ (e.g. a neural network). A discriminator $D$ is trained to classify whether its input was sampled from the generator (i.e. ``fake") or from a reference data set (i.e. ``real").
Informally, the process of training GANs proceeds by optimizing a minimax objective over the generator and discriminator such that the generator attempts to trick the discriminator to classify ``fake" samples as ``real". Formally, one optimizes
\begin{equation} \label{eq:gan_goodfellow}
\min_{G} \max_{D} V(D,G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log{D(x)}]
+ \mathbb{E}_{z \sim p_{z}(z)}[\log{\left(1 - D(G(z))\right)}]
\end{equation}
where $x \sim p_{\text{data}}(x)$ denotes samples from the empirical data distribution, and $z \sim p_z(z)$ denotes samples from the latent prior, here $\mathcal{N}(0,1)$. In practice, the optimization alternates between gradient ascent and descent steps for $D$ and $G$ respectively.
\subsubsection{Two Time-Scale Update Rule}
\citet{ttur_gan} proposed the two time-scale update rule (TTUR) for training GANs, a method in which the discriminator and generator are trained with separate learning rates. They showed that their method led to improved performance and proved that, in some cases, TTUR ensures convergence to a stable local Nash equilibrium. One intuition for TTUR comes from the potentially different loss surface curvatures of the discriminator and generator. Allowing learning rates to be tuned to a particular loss surface can enable more efficient gradient-based optimization. We make use of TTUR throughout this paper as an instrumental lever when tuning GANs to reach desired performance.
\subsubsection{Spectral Normalization}
Proposed by \citet{gan_spectral_norm}, Spectrally Normalized GAN (SN-GAN) is a method for controlling exploding discriminator gradients when optimizing Equation \ref{eq:gan_goodfellow} that leverages a novel weight normalization technique.
The key idea is to control the Lipschitz constant of the discriminator by constraining the spectral norm of each layer in the discriminator. Specifically, the authors propose dividing the weight matrices $W_i$ of each layer $i$ by their spectral norm $\sigma(W_i)$
\begin{equation}
W_{SN,i} = \frac{W_i}{\sigma(W_i)},
\end{equation}
where
\begin{equation}
\sigma(W_i) = \max_{\| h_i\|_2 \leq 1} \| W_i h_i \|_2
\end{equation}
and $h_i$ denotes the input to layer $i$. The authors prove that this normalization technique bounds the Lipschitz constant of the discriminator above by $1$, thus strictly enforcing the $1$-Lipschitz constraint on the discriminator. In our experiments, adopting the SN-GAN formulation leads to even better performance than WGAN-GP \cite{arjovsky2017wasserstein, gulrajani2017improved}.
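In PyTorch, this normalization is available as a built-in wrapper around each layer; the following is a minimal sketch with a toy discriminator, not our experimental architecture:

```python
import torch

# Sketch: applying spectral normalization to each linear layer of a
# small discriminator via torch.nn.utils.spectral_norm (which estimates
# the spectral norm by power iteration). Architecture is illustrative.
sn = torch.nn.utils.spectral_norm

D = torch.nn.Sequential(
    sn(torch.nn.Linear(1, 20)), torch.nn.Tanh(),
    sn(torch.nn.Linear(20, 20)), torch.nn.Tanh(),
    sn(torch.nn.Linear(20, 1)), torch.nn.Sigmoid(),
)

out = D(torch.randn(8, 1))   # classification scores in (0, 1)
```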
\subsection{Guaranteeing Initial \& Boundary Conditions} \label{physical_constraints}
\citet{Lagaris_1998} showed that it is possible to exactly satisfy initial and boundary conditions by adjusting the output of the neural network. For example, consider adjusting the neural network solution $\Psi(t)$ to satisfy the initial condition $\Psi(0) = x_0$. We can apply the transformation \begin{equation}
\Psi(t)' = x_0 + t \Psi(t)
\end{equation} which exactly satisfies the condition. \citet{mattheakis2019physical} proposed an augmented transformation
\begin{equation} \label{eq:adjustment}
\Psi(t)' = \Phi\left(\Psi(t)\right) = x_0 + \left(1 - e^{-(t-t_0)}\right) \Psi(t)
\end{equation}
that further improved training convergence. Intuitively, Equation \ref{eq:adjustment} adjusts the output of the neural network $\Psi(t)$ to be exactly $x_0$ when $t=t_0$, and decays this constraint exponentially in $t$.
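The two adjustments can be written as simple functions. A plain-Python sketch for a scalar condition $\Psi(t_0) = x_0$ (function names are illustrative):

```python
import math

# Sketch of the two initial-condition adjustments, for a scalar output
# psi = Psi(t) subject to Psi(t0) = x0. Names are illustrative.
def adjust_linear(psi, t, x0):
    # x0 + t * Psi(t): equals x0 exactly at t = 0
    return x0 + t * psi

def adjust_exponential(psi, t, x0, t0=0.0):
    # x0 + (1 - exp(-(t - t0))) * Psi(t): equals x0 exactly at t = t0,
    # with the constraint decaying exponentially away from t0
    return x0 + (1.0 - math.exp(-(t - t0))) * psi
```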
\subsection{Residual Connections}
\citet{residual_connections} showed that adding residual connections improved training of deep neural networks. We employ residual connections in our deep networks as they allow gradients to more easily flow through the models and thereby reduce numerical instability. Residual connections augment a typical activation with the identity operation
\begin{equation}
y = \mathcal{F}(x, W_i) + x
\end{equation}
where $\mathcal{F}$ is the nonlinear transformation applied by the unit, $x$ is the input to the unit, $W_i$ are the weights, and $y$ is the output of the unit. This acts as a ``skip connection", allowing inputs and gradients to bypass the nonlinear component.
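A minimal PyTorch sketch of such a block (layer sizes illustrative):

```python
import torch

# Sketch of a residual ("skip") connection around a fully connected
# unit: the input bypasses the nonlinear component and is added back.
class ResidualBlock(torch.nn.Module):
    def __init__(self, dim=30):
        super().__init__()
        self.fc = torch.nn.Linear(dim, dim)
        self.act = torch.nn.Tanh()

    def forward(self, x):
        # y = F(x, W) + x
        return self.act(self.fc(x)) + x

block = ResidualBlock(dim=30)
y = block(torch.zeros(4, 30))   # same shape as the input
```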
\section{Differential Equation GAN}
Here we present our method, Differential Equation GAN (DEQGAN), which trains a GAN to solve differential equations in a \emph{fully unsupervised} manner. To do this, we rearrange the differential equation such that the left-hand side ($LHS$) contains all of the terms which depend on the generator (e.g. $\hat{\Psi}$, $\Delta \hat{\Psi}$, $\Delta^2 \hat{\Psi}$, etc.), and the right-hand side ($RHS$) contains only constants (e.g. zero).
From here we sample points from the domain $t \sim \mathcal{D}$ and use them as input to a generator $G(t)$, which produces candidate solutions $\hat{\Psi}$. We adjust $\hat{\Psi}$ for initial or boundary conditions according to Equation~\ref{eq:adjustment}. Then we construct the $LHS$ from the differential equation $F$ using automatic differentiation
\begin{equation}
LHS = F\left(t, \hat{\Psi}(t), \Delta \hat{\Psi}(t), \Delta^2 \hat{\Psi}(t)\right)
\end{equation}
and set $RHS$ to its appropriate value (in our examples, $RHS=0$).
From here, training proceeds in a manner similar to traditional GANs. We update the weights of the generator $G$ and discriminator $D$ according to the gradients
\begin{equation}
\label{eq:generator_grad}
\eta_G = \nabla_{\theta_{g}} \frac{1}{m} \sum_{i=1}^{m} \log{ \left(1 - D \left( LHS^{(i)} \right) \right)},
\end{equation}
\begin{equation} \label{eq:discriminator_grad}
\eta_{D} = \nabla_{\theta_{d}} \frac{1}{m} \sum_{i=1}^{m} \left[ \log D \left( RHS^{(i)} \right) +
\log \left( 1 - D \left( LHS^{(i)} \right) \right) \right]
\end{equation}
where $LHS^{(i)}$ is the output of $G\left(t^{(i)}\right)$ after adjusting for initial or boundary conditions and constructing the $LHS$ from $F$. Note that we perform stochastic gradient \emph{descent} for $G$ (gradient steps $\propto -\eta_{G}$), and stochastic gradient \emph{ascent} for $D$ (gradient steps $\propto \eta_{D}$). We provide a schematic representation of DEQGAN in Figure \ref{fig:gan_diagram} and detail the training steps in Algorithm \ref{alg:gan_algo}.
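The update steps above can be sketched in PyTorch. The following is an illustrative toy for the first-order ODE $\dot{x} + x = 0$, $x(0)=1$, with placeholder architectures and learning rates; it is not the released implementation and omits spectral normalization and TTUR:

```python
import torch

# Illustrative DEQGAN sketch for x'(t) + x(t) = 0, x(0) = 1.
# LHS = x' + x is the "fake" sample; RHS = 0 is the "real" sample.
torch.manual_seed(0)
G = torch.nn.Sequential(torch.nn.Linear(1, 30), torch.nn.Tanh(),
                        torch.nn.Linear(30, 1))
D = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(),
                        torch.nn.Linear(20, 1), torch.nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
t = torch.linspace(0, 2, 64).reshape(-1, 1)

for step in range(200):
    # perturb the mesh, adjust for the initial condition, build LHS
    ts = (t + 0.01 * torch.randn_like(t)).requires_grad_(True)
    x = 1.0 + ts * G(ts)
    dx, = torch.autograd.grad(x, ts, torch.ones_like(x), create_graph=True)
    lhs = dx + x
    rhs = torch.zeros_like(lhs)

    # discriminator ascent: maximize log D(RHS) + log(1 - D(LHS))
    d_loss = -(torch.log(D(rhs) + 1e-8)
               + torch.log(1 - D(lhs.detach()) + 1e-8)).mean()
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # generator descent: minimize log(1 - D(LHS))
    g_loss = torch.log(1 - D(lhs) + 1e-8).mean()
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```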
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/DEQGAN_diagram.png}
\caption[DEQGAN Diagram]{Schematic representation of DEQGAN. We perturb points $t$ from the mesh and input them to a generator $G$, which produces candidate solutions $\hat{\Psi}$. Then we analytically adjust these solutions according to $\Phi$ and apply automatic differentiation to construct $LHS$ from the differential equation $F$. $RHS$ and $LHS$ are passed to a discriminator $D$, which is trained to classify them as ``real" and ``fake" respectively.}
\label{fig:gan_diagram}
\end{figure}
\begin{algorithm}
\caption{DEQGAN}
\label{alg:gan_algo}
\begin{algorithmic}
\STATE {\bfseries Input:} Differential equation $F$, generator $G(\cdot;\theta_g)$, discriminator $D(\cdot;\theta_d)$, mesh $t$ of $m$ points with spacing $\Delta t$, perturbation precision $\tau$, analytic adjustment function $\Phi$, total steps $N$, learning rates $\alpha_G, \alpha_D$, Adam optimizer \cite{adamoptimizer2014} parameters $\beta_{G1}, \beta_{G2}, \beta_{D1}, \beta_{D2}$ \\
\FOR{$i=1$ {\bfseries to} $N$}
\FOR{$j=1$ {\bfseries to} $m$}
\STATE Perturb $j$-th point in mesh $t_{s}^{(j)} = t^{(j)} + \epsilon, \epsilon \sim \mathcal{N}(0,\frac{\Delta t}{\tau})$
\STATE Forward pass $\hat{\Psi} = G(t_{s}^{(j)})$
\STATE Analytic adjustment $\hat{\Psi}' = \Phi(\hat{\Psi})$ (Equation \ref{eq:adjustment})
\STATE Compute $LHS^{(j)} = F(t_{s}^{(j)}, \hat{\Psi}', \nabla{\hat{\Psi}'}, \nabla^{2}{\hat{\Psi}'})$, set $RHS^{(j)} = 0$
\ENDFOR
\STATE Compute gradients $\eta_G, \eta_D$ (Equation \ref{eq:generator_grad} \& \ref{eq:discriminator_grad})
\STATE Update generator $\theta_g \leftarrow \texttt{Adam}(\theta_g, -\eta_G, \alpha_G, \beta_{G1}, \beta_{G2})$
\STATE Update discriminator $\theta_d \leftarrow \texttt{Adam}(\theta_d, \eta_D, \alpha_D, \beta_{D1}, \beta_{D2})$
\ENDFOR
\STATE {\bfseries Output:} $G$
\end{algorithmic}
\end{algorithm}
Informally, our algorithm trains a GAN by setting the ``fake" component to be the $LHS$ (in our formulation, the residuals of the equation), and the ``real" component to be the $RHS$ of the equation. This results in a GAN that learns to produce solutions that make $LHS$ indistinguishable from $RHS$, thereby approximately solving the differential equation.
An important note here is that training can be unstable if $LHS$ and $RHS$ are not chosen properly. Specifically, we find that training fails if $RHS$ is a function of the generator. For example, consider the equation $\ddot{x}+x=0$. If we set $LHS=\ddot{x}$ and $RHS=-x$, then $RHS$ depends on the generator and changes constantly as the generator is updated, making DEQGAN training exceedingly unstable. We can fix this, however, by simply setting $LHS=\ddot{x}+x$ and $RHS=0$.
Our intuition for this is that if $RHS$ depends on the outputs of the generator, the ``real" data distribution $p_{\text{data}}(x)$ (from Equation \ref{eq:gan_goodfellow}) changes as the generator weights are updated throughout training. If the distribution $p_{\text{data}}(x)$ is constantly changing, the discriminator will not have a reliable signal for learning to classify ``real" from ``fake", which violates a core assumption of traditional GANs. By setting $RHS=0$, we resolve the problem by effectively setting the ``real" distribution to be the fixed Dirac distribution centered at zero, $p_{\text{data}}(x) = \delta(x)$. For the examples in this paper, we move all terms of the differential equation to $LHS$ and set $RHS=0$.
\begin{table}
\caption{Summary of Experiments}
\label{table:summary_of_experiments}
\centering
\begin{tabular}{lccccc}
\toprule
Key & Equation & Class & Order & Linear & System \\
\midrule
EXP & $\dot{x}(t) + x(t) = 0$ & ODE & \nth{1} & Yes & No \\
SHO & $\ddot{x}(t) + x(t) = 0$ & ODE & \nth{2} & Yes & No \\
NLO & $\begin{aligned}
\ddot{x}(t)+2 \beta \dot{x}(t)+\omega^{2} x(t)
+\phi x(t)^{2}+\epsilon x(t)^{3} = 0
\end{aligned}$ & ODE & \nth{2} & No & No \\
NAS & $\begin{cases}
\dot{x}(t) = -ty \\
\dot{y}(t) = tx
\end{cases}$ & ODE & \nth{1} & Yes & Yes \\
SIR & $\begin{cases}
\dot{S}(t) &= - \beta I(t)S(t)/N \\
\dot{I}(t) &= \beta I(t)S(t)/N - \gamma I(t) \\
\dot{R}(t) &= \gamma I(t)
\end{cases}$ & ODE & \nth{1} & No & Yes \\
POS & $\begin{aligned} u_{xx} + u_{yy} = 2x(y-1)(y-2x+xy+2)e^{x-y}
\end{aligned}$ & PDE & \nth{2} & Yes & No \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Experimental Results}
\label{table:experimental_results}
\centering
\begin{tabular}{lccccccccc}
\toprule
& \multicolumn{5}{c}{Mean Squared Error} \\
\cmidrule(lr){2-6}
Key & $L_1$ & $L_2$ & Huber & DEQGAN & Traditional \\
\midrule
EXP & \num{1e-3} & \num{3e-6} & \num{1e-6} & \num{3e-16} & \num{2e-14} (RK4) \\
SHO & \num{2e-5} & \num{2e-9} & \num{8e-10} & \num{1e-12} & \num{1e-11} (RK4) \\
NLO & \num{6e-2} & \num{1e-9} & \num{4e-10} & \num{2e-12} & \num{4e-11} (RK4) \\
NAS & \num{6e-1} & \num{6e-5} & \num{2e-3} & \num{8e-9} & \num{2e-9} (RK4) \\
SIR & \num{7e-4} & \num{6e-9} & \num{3e-9} & \num{2e-10} & \num{5e-13} (RK4) \\
POS & \num{4e-6} & \num{5e-11} & \num{2e-11} & \num{8e-13} & \num{3e-10} (FD) \\
\bottomrule
\end{tabular}
\end{table}
\section{Experiments}
We conducted experiments on several differential equations of increasing complexity, comparing DEQGAN to an alternative unsupervised neural network method using squared $L_2$ (i.e. mean squared error\footnote{We use the term $L_2$ to avoid conflating the \emph{loss function} being used, which is the mean squared error on the unsupervised problem of minimizing the differential equation residuals, with the final \emph{evaluation metric}, which is the mean squared error of the predicted solution computed against the known ground truth.}), $L_1$, and Huber \cite{huber_loss} loss functions. We also report results obtained by the traditional fourth-order Runge-Kutta (RK4) and second-order finite differences (FD) methods for initial and boundary value problems, respectively. A detailed description of each experiment including exact problem specifications, hyperparameters, and a comparison of loss functions used is provided in the \hyperref[appendix]{Appendix}.
We report the mean squared errors of the solutions produced by each method, computed from known solutions obtained either analytically or with high-quality numerical solvers \cite{scipy_cite}, but we do not use these solution data for training. We add residual connections between neighboring layers of all models, and apply spectral normalization to the discriminator in all experiments. Results are obtained with hyperparameters tuned for DEQGAN. In the \hyperref[appendix]{Appendix}, we tune each alternative method for comparison but do not observe a meaningful difference.
We note that deterministic differential equations do not exhibit aleatoric uncertainty and that all errors we observe are therefore epistemic. Given that neural networks are, theoretically, universal function approximators \cite{universalapproximator}, one may initially expect to obtain arbitrarily low error. The reason we do not observe this is, first, that our models are finite-width and may lack representational capacity and, second, that optimizing the objective in Equation \ref{eq:gan_goodfellow} with stochastic gradient descent is unlikely to reach globally optimal solutions and is more likely to converge to local optima.
\begin{figure}
\centering
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/exp_rand_reps_big.jpg}
\caption{Exponential Decay (EXP)}
\label{fig:exp_comparison}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/sho_rand_reps_big.jpg}
\caption{Simple Harmonic Oscillator (SHO)}
\label{fig:sho_gan_vs_l2}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/nlo_rand_reps_big.jpg}
\caption{Nonlinear Oscillator (NLO)}
\label{fig:nlo_gan_vs_l2}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/coo_rand_reps_big.jpg}
\caption{Non-Autonomous System (NAS)}
\label{fig:coo_gan_vs_l2}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/sir_rand_reps_big.jpg}
\caption{SIR Disease Model (SIR)}
\label{fig:sir_gan_vs_l2}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.475\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/pos_rand_reps_big.jpg}
\caption{Poisson Equation (POS)}
\label{fig:pos_gan_vs_l2}
\end{subfigure}
\caption{Mean squared errors vs. iteration for DEQGAN, $L_2$, $L_1$, and Huber loss for various equations. We perform five randomized trials and plot the median (bold) and $(25, 75)$ percentile range (shaded). We smooth the values using a simple moving average with window size $50$.}
\label{fig:experiments_rand_reps}
\end{figure}
Table \ref{table:summary_of_experiments} summarizes the equations we study in our experiments, and Table \ref{table:experimental_results} reports the lowest mean squared errors obtained across five trials for each method. We see that DEQGAN obtains multiple orders of magnitude lower mean squared errors than the alternative unsupervised neural network method with $L_1$, $L_2$, and Huber loss functions across the differential equations studied. Moreover, DEQGAN achieves accuracy that is competitive with the traditional RK4 and FD numerical methods.
Figure \ref{fig:experiments_rand_reps} plots mean squared error against training iteration for DEQGAN and the alternative neural network method with $L_1$, $L_2$ and Huber loss functions. We observe that DEQGAN converges to lower mean squared errors than the alternative unsupervised neural network method, often by multiple orders of magnitude, across the equations studied. We note, however, that the mean squared error curves of DEQGAN are less smooth and exhibit greater variability than the normal unsupervised neural network method, an issue we study in the following section.
\section{Stability of DEQGAN Training}
\label{discussion}
A point that we have not addressed is the instability of the DEQGAN training algorithm. The instability of GANs is not a new problem and much work has been dedicated to improving the stability and convergence of GANs \cite{arjovsky2017wasserstein, gulrajani2017improved, BEGAN, mirza2014conditional, gan_spectral_norm}. In our experiments we find that the initial weights of the generator and discriminator can have a substantial impact on the final performance of DEQGAN. The solution that we adopt is to fix the initial model weights when tuning hyperparameters for DEQGAN and to keep the same weight initialization thereafter, which appears to significantly reduce the problem of instability.
To illustrate the relationship between performance, hyperparameters, and initial model weights, Figure \ref{fig:exp_seeds_mse} plots the results of $500$ DEQGAN experiments for the exponential decay equation. For each experiment, we uniformly at random select model weight initialization random seeds as integers from the range $[0,9]$, as well as separate learning rates for the discriminator and generator in the range $[10^{-6}, 10^{-2}]$. We then record the final mean squared error on the validation set after running DEQGAN training for $500$ steps. Each line represents a combination of model weight initialization random seed, learning rate hyperparameters, and final (log) mean squared error of a single experiment.
Notably, the results as a whole exhibit considerable variation in final mean squared error. However, when we filter on experiments achieving low mean squared errors ($\leq 10^{-8}$), we see that hyperparameter settings exist, for each of the model weight initialization seeds, that provide highly accurate DEQGAN solutions. We also observe a pattern in the hyperparameters which produce the low mean squared error experiments. We note that relatively high generator learning rates and low discriminator learning rates lead to the best DEQGAN performance across different model initialization seeds.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{figures/exp_seeds_mse.png}
\caption[DEQGAN, Initialization, and Hyperparameters: Best MSEs]{Parallel plot showing the results of $500$ DEQGAN experiments on the exponential decay equation. The colors represent the different random seeds used to initialize model weights. We filter the results (non-selected lines appear gray) to highlight experiments achieving mean squared error $\leq 10^{-8}$. The mean squared error is plotted on a $\log_{10}$ scale.}
\label{fig:exp_seeds_mse}
\end{figure}
\section{Conclusion}
\label{conclusion}
We have presented a novel method which leverages GAN-based adversarial training to ``learn" the loss function for solving differential equations with unsupervised neural networks. We have shown empirically that our method, which we call Differential Equation GAN (DEQGAN), can obtain multiple orders of magnitude lower mean squared errors than an alternative unsupervised neural network method with $L_2$, $L_1$, and Huber loss functions. Moreover, we show that DEQGAN achieves solution accuracy that is competitive with the traditional fourth-order Runge-Kutta and second-order finite difference methods. While our approach is sensitive to hyperparameters, we have shown that it is possible to train a GAN in a \emph{fully unsupervised} manner to achieve highly accurate solutions to differential equations.
\newpage
\section*{Broader Impact}
We hope that the broader impact of this work will be to advance the study of unsupervised neural network methods for solving differential equations. We do not believe that our work carries significant ethical or societal risks. We note, however, that our method does not provide theoretical guarantees of solution accuracy, and any critical application relying on it should exercise caution.
\begin{ack}
The authors would like to acknowledge helpful discussions with Marios Mattheakis, Cengiz Pehlevan, and Feiyu Chen.
\end{ack}
\bibliographystyle{apalike2}
https://arxiv.org/abs/2201.00450 | On randomized sketching algorithms and the Tracy-Widom law | There is an increasing body of work exploring the integration of random projection into algorithms for numerical linear algebra. The primary motivation is to reduce the overall computational cost of processing large datasets. A suitably chosen random projection can be used to embed the original dataset in a lower-dimensional space such that key properties of the original dataset are retained. These algorithms are often referred to as sketching algorithms, as the projected dataset can be used as a compressed representation of the full dataset. We show that random matrix theory, in particular the Tracy-Widom law, is useful for describing the operating characteristics of sketching algorithms in the tall-data regime when $n \gg d$. Asymptotic large sample results are of particular interest as this is the regime where sketching is most useful for data compression. In particular, we develop asymptotic approximations for the success rate in generating random subspace embeddings and the convergence probability of iterative sketching algorithms. We test a number of sketching algorithms on real large high-dimensional datasets and find that the asymptotic expressions give accurate predictions of the empirical performance. | \section{Introduction}
Sketching is a probabilistic data compression technique that makes use of random projection \citep{cormode_sketch_2011, mahoney_randomized_2011, woodruff_sketching_2014}. Suppose interest lies in an $n \times d$ dataset $\mat{A}$. When $n$ and/or $d$ are large, typical data analysis tasks will involve a heavy numerical computing load. This computational burden can be a practical obstacle for statistical learning with Big Data. When the sample size $n$ is the computational bottleneck, sketching algorithms use a linear random projection to create a smaller sketched dataset of size $k \times d$, where $k \ll n$. The random projection can be represented as a $k \times n$ random matrix $\mat{S}$, and the sketched dataset $\widetilde{\mat{A}}$ is generated through the linear embedding $\widetilde{\mat{A}}=\mat{S}\mat{A}$. The smaller sketched dataset $\widetilde{\mat{A}}$ is used as a surrogate for the full dataset $\mat{A}$ within numerical routines. Through a judicious choice of the distribution on the random sketching matrix $\mat{S}$, it is often possible to bound stochastically the error that is introduced into calculations when the randomized approximation $\widetilde{\mat{A}}$ is used in place of $\mat{A}$.
Distributions for the random sketching matrix $\mat{S}$ can be divided into two categories: data-oblivious sketches, where the distribution is not a function of the source data $\mat{A}$, and data-aware sketches, where the distribution is a function of $\mat{A}$. The majority of data-aware sketches perform weighted sampling with replacement, and are closely connected to finite population survey sampling methods \citep{ma_statistical_2015, quiroz_2018_subsampling}. The analysis of data-oblivious sketches requires different methods from those for data-aware sketches, as there are no clear ties to finite-population subsampling. In general, data-oblivious sketches generate a dataset of $k$ pseudo-observations, where each instance in the compressed representation $\widetilde{\mat{A}}$ has no exact counterpart in the original source dataset $\mat{A}$.
Three important data-oblivious sketches are the Gaussian sketch, the Hadamard sketch and the Clarkson-Woodruff sketch. The Gaussian sketch is the simplest of these, where each element in the $k \times n$ matrix $\mat{S}$ is an independent sample from a $N(0, 1/k)$ distribution. The Hadamard sketch uses structured elements for fast matrix multiplication, and the Clarkson-Woodruff sketch uses sparsity in $\mat{S}$ for efficient computation of the sketched dataset. The comparative performance of different distributions on $\mat{S}$ is of interest, as choosing the type of sketch involves a trade-off between the computational cost of calculating $\widetilde{\mat{A}}$ and the fidelity of the approximation $\widetilde{\mat{A}}$ with respect to the original $\mat{A}$. Our results help to establish guidelines for selecting the sketching distribution.
Sketching algorithms are typically framed using stochastic $(\delta, \epsilon)$ error bounds, where the algorithm is shown to attain $(1 \pm \epsilon)$ accuracy with probability at least $1-\delta$ \citep{woodruff_sketching_2014}. These notions are made more precise in Section \ref{sec:sketching}. Existing bounds are typically developed from a worst-case non-asymptotic viewpoint \citep{mahoney_randomized_2011, woodruff_sketching_2014, tropp_improved_2011}. We take a different approach, and use random matrix theory to develop asymptotic approximations to the success probability given the sketching distortion factor $\epsilon$.
Our main result is an asymptotic expression for the probability that a Gaussian based sketching algorithm satisfies general $(1 \pm \epsilon)$ probabilistic error bounds in terms of the Tracy-Widom law (Theorem \ref{thm:gaussian_embedding_tw_limit}), which describes the distribution of the extreme eigenvalues of large random matrices \citep{tracy_level_1994, johnstone_distribution_2001}. We then identify regularity conditions where other data-oblivious projections are expected to demonstrate the same limiting behavior (Theorem \ref{thm:data_oblivious_asymptotic_embedding}). If the motivation for using a sketching algorithm is data compression due to large $n$, the asymptotic approximations are of particular interest as they become more accurate as the computational benefits afforded by the use of a sketching algorithm increase in tandem. Empirical work has found that the quality of results can be consistent across the choice of random projections \citep{venkata_johnson_2011, le_2013_fastfood, dahiya_2018_empirical}, and our results shed some light on this issue. An application is to determine the convergence probability when sketching is used in iterative least-squares optimisation. We test the asymptotic theory and find good agreement on datasets with large sample sizes $n \gg d$. Our theoretical and empirical results show that random matrix theory has an important role in the analysis of data-oblivious sketching algorithms for data compression.
\section{Sketching}
\label{sec:sketching}
\subsection{Data-oblivious sketches}
\label{subsec:sketching}
As mentioned, a key component in a sketching algorithm is the distribution on $\mat{S}$. Four important random linear maps are:
\begin{itemize}
\item The uniform sketch implements subsampling uniformly with replacement followed by a rescaling step. The uniform projection can be represented as $\mat{S}=\sqrt{n/k}\Phi$. The random matrix $\Phi$ subsamples $k$ rows of $\mat{A}$ with replacement. Element $\Phi_{r,i}=1$ if observation $i$ in the source dataset is selected in the $r$th subsampling round $(r=1, \ldots, k; \ i=1, \ldots, n)$. The uniform sketch can be implemented in $O(k)$ time.
\item A Gaussian sketch is formed by independently sampling each element of $\mat{S}$ from a $N(0, 1/k)$ distribution. Computation of the sketched data is $O(ndk)$.
\item The Hadamard sketch is a structured random matrix \citep{ailon_fast_2009}. The sketching matrix is formed as $\mat{S} = \Phi\mat{H}\mat{D}/\sqrt{k}$, where $\Phi$ is a $k \times n$ matrix and $\mat{H}$ and $\mat{D}$ are both $n \times n$ matrices. The fixed matrix $\mat{H}$ is a Hadamard matrix of order $n$. A Hadamard matrix is a square matrix with elements that are either $+1$ or $-1$ and orthogonal rows. Hadamard matrices do not exist for all integers $n$; in such cases the source dataset can be padded with zeroes so that a conformable Hadamard matrix is available. The random matrix $\mat{D}$ is a diagonal matrix where each of the $n$ diagonal entries is an independent Rademacher random variable. The random matrix $\Phi$ subsamples $k$ rows of $\mat{H}$ with replacement. The structure of the Hadamard sketch allows for fast matrix multiplication, reducing calculation of the sketched dataset to $O(nd \log k)$ operations.
\item The Clarkson-Woodruff sketch is a sparse random matrix \citep{clarkson_low_2013}. The projection can be represented as the product of two independent random matrices, $\mat{S} = \mat{\Gamma}\mat{D}$, where $\mat{\Gamma}$ is a random $k \times n$ matrix and $\mat{D}$ is a random $n \times n$ matrix. The matrix $\mat{\Gamma}$ is initialized as a matrix of zeros. In each column, independently, one entry is selected and set to $+1$. The matrix $\mat{D}$ is a diagonal matrix where each of the $n$ diagonal entries is an independent Rademacher random variable. This results in a sparse $\mat{S}$, where there is only one nonzero entry per column. The sparsity of the Clarkson-Woodruff sketch speeds up matrix multiplication, dropping the complexity of generating the sketched dataset to $O(nd)$.
\end{itemize}
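The four projections above are straightforward to generate explicitly. The following is a minimal \texttt{numpy} illustration (the function names are ours; in practice the Hadamard and Clarkson-Woodruff sketches would be applied implicitly to achieve the stated running times, and the Sylvester construction below assumes $n$ is a power of two):

```python
import numpy as np

def gaussian_sketch(k, n, rng):
    # Each entry i.i.d. N(0, 1/k).
    return rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, n))

def uniform_sketch(k, n, rng):
    # Subsample k rows uniformly with replacement, rescaled by sqrt(n/k).
    S = np.zeros((k, n))
    S[np.arange(k), rng.integers(0, n, size=k)] = np.sqrt(n / k)
    return S

def clarkson_woodruff_sketch(k, n, rng):
    # Sparse embedding: one random +/-1 entry in each column of S.
    S = np.zeros((k, n))
    S[rng.integers(0, k, size=n), np.arange(n)] = rng.choice([-1.0, 1.0], size=n)
    return S

def hadamard_matrix(n):
    # Sylvester construction; n must be a power of two.
    H = np.ones((1, 1))
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_sketch(k, n, rng):
    # S = Phi H D / sqrt(k): random signs (D), then k rows of H sampled
    # with replacement (Phi).
    signs = rng.choice([-1.0, 1.0], size=n)
    rows = rng.integers(0, n, size=k)
    return hadamard_matrix(n)[rows] * signs / np.sqrt(k)
```

In all four cases $\mathbb{E}[\mat{S}^{\mathsf{T}}\mat{S}]=\mat{I}_{n}$, so sketched squared norms are unbiased; the sketches differ in how tightly the distortion concentrates and in the cost of forming $\mat{S}\mat{A}$.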
The Gaussian sketch was central to early work on sketching algorithms \citep{sarlos_improved_2006}. The drawback of the Gaussian sketch is that computation of the sketched data is quite demanding, taking $O(ndk)$ operations. As such, there has been work on designing more computationally efficient random projections.
Sketch quality is commonly measured using $\epsilon$-subspace embeddings (\citet[Chapter 2]{woodruff_sketching_2014}, \cite{meng_low-distortion_2013}, \cite{yang_implementing_2015}). These are defined below.
\begin{definition}{\emph{$\epsilon$-subspace embedding}}
\label{defn:epsilon_subspace_embedding}\newline
For a given $n \times d$ matrix $\mat{A}$, we call a $k \times n $ matrix $\mat{S}$ an $\epsilon$-subspace embedding for $\mat{A}$, if for all vectors $\vect{z} \in \mathbb{R}^{d}$
\begin{align*}
(1-\epsilon)|| \mat{A}\vect{z}||_{2}^{2} \le ||\mat{S} \mat{A}\vect{z}||_{2}^{2} \le (1+\epsilon)|| \mat{A}\vect{z}||_{2}^{2}.
\end{align*}
\end{definition}
An $\epsilon$-subspace embedding preserves the linear structure of the original dataset up to a multiplicative $(1 \pm \epsilon)$ factor. Broadly speaking, the covariance matrix of the sketched dataset $\widetilde{\mat{A}}=\mat{S}\mat{A}$ is similar to the covariance matrix of the source dataset $\mat{A}$ if $\epsilon$ is small. Mathematical arguments show that the sketched dataset is a good surrogate for many linear statistical methods if the sketching matrix $\mat{S}$ is an $\epsilon$-subspace embedding for the original dataset, with $\epsilon$ sufficiently small \citep{woodruff_sketching_2014}. Suitable ranges for $\epsilon$ depend on the task of interest and structural properties of the source dataset \citep{mahoney_structural_2016}.
The Gaussian, Hadamard and Clarkson-Woodruff projections are popular data-oblivious projections as it is possible to argue that they produce $\epsilon$-subspace embeddings with high probability for an arbitrary data matrix $\mat{A}$. It is considerably more difficult to establish universal worst case bounds for the uniform projection \citep{drineas_sampling_2006, ma_statistical_2015}. We include the uniform projection in our discussion as it is a useful baseline.
\begin{table}[h]
\centering
\begin{tabular}{@{} l l l l @{}}
\toprule
Sketch & Sketching time & Required sketch size $k$ \\
\midrule
Gaussian & $O(ndk) $ & $O((d+\log(1/\delta))/\epsilon^2) $\\
Hadamard & $O(nd \log k)$ & $O((\sqrt{d}+\sqrt{\log n})^2(\log (d/\delta))/ \epsilon^2)$\\
Clarkson-Woodruff & $O(nd)$& $O(d^2/(\delta\epsilon^2))$ \\
Uniform & $O(k)$ & $-$ \\
\bottomrule
\end{tabular}
\caption[Properties of different data-oblivious random projections.]{\label{tab:run_time}Properties of different data-oblivious random projections (see \citet{woodruff_sketching_2014} and the references therein). The third column refers to the necessary sketch size $k$ to obtain an $\epsilon$-subspace embedding for an arbitrary $n \times d$ source dataset with at least probability $(1-\delta)$.}
\end{table}
\subsection{Sketching algorithms}
Sketching algorithms have been proposed for key linear statistical methods such as low rank matrix approximation, principal components analysis, linear discriminant analysis and ordinary least squares regression \citep{mahoney_randomized_2011, woodruff_sketching_2014, erichson_randomized_2016, falcone_2021_matrix}. Sketching has also been investigated for Bayesian posterior approximation \citep{bardenet_note_2015, geppert_random_2017}. A common thread throughout these works is the reliance on the generation of an $\epsilon$-subspace embedding. In general, $\epsilon$ serves as an approximation tolerance parameter, with smaller $\epsilon$ guaranteeing higher fidelity to exact calculation with respect to some divergence measure.
An example application of sketching is ordinary least squares regression \citep{sarlos_improved_2006}. The sketched responses and predictors are defined as $\widetilde{\vect{y}}=\mat{S}\vect{y}, \widetilde{\mat{X}}=\mat{S}\mat{X}$. Let $\vect{\beta}_{F} = \argmin_{\vect{\beta}}\lVert \vect{y}-\mat{X}\vect{\beta} \rVert_{2}^{2}$, $\vect{\beta}_{S} = \argmin_{\vect{\beta}}\lVert \widetilde{\vect{y}}-\widetilde{\mat{X}}\vect{\beta} \rVert_{2}^{2}$, and $RSS_{F}=\lVert \vect{y}-\mat{X}\vect{\beta}_{F}\rVert_{2}^{2}$. It is possible to establish the concrete bound \citep{sarlos_improved_2006} that if ${\mat{S}}$ is an $\epsilon$-subspace embedding for $\mat{A}=(\vect{y}, {\mat{X}})$, then
\begin{align*}
\lVert \vect{\beta}_{S}- \vect{\beta}_{F}\rVert_{2}^{2} &\le \dfrac{\epsilon^2}{\sigma_{\text{min}}^2(\mat{X})}RSS_{F},
\end{align*}
where $\sigma_{\text{min}}(\mat{X})$ represents the smallest singular value of the design matrix $\mat{X}$. If $\epsilon$ is very small, then $\vect{\beta}_{S}$ is a good approximation to $\vect{\beta}_{F}$.
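This bound is easy to examine numerically. The snippet below (an illustrative sketch with simulated data; all variable names are ours) fits a small regression with a Gaussian sketch, computes the realized distortion $\epsilon$ of $\mat{S}$ on $\mat{A}=(\vect{y},\mat{X})$ from the singular values of $\mat{S}\mat{U}$, and compares $\lVert \vect{\beta}_{S}-\vect{\beta}_{F}\rVert_{2}^{2}$ with the bound:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 2000, 5, 400
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(size=n)

# Full-data and sketched least-squares solutions.
beta_F = np.linalg.lstsq(X, y, rcond=None)[0]
rss_F = np.sum((y - X @ beta_F) ** 2)
S = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, n))  # Gaussian sketch
beta_S = np.linalg.lstsq(S @ X, S @ y, rcond=None)[0]

# Realized distortion of S on A = (y, X), via the singular values of S U.
U = np.linalg.svd(np.column_stack([y, X]), full_matrices=False)[0]
sv2 = np.linalg.svd(S @ U, compute_uv=False) ** 2
eps = max(abs(1 - sv2.min()), abs(1 - sv2.max()))

# ||beta_S - beta_F||_2^2 versus the bound eps^2 * RSS_F / sigma_min(X)^2.
sigma_min = np.linalg.svd(X, compute_uv=False).min()
gap = np.sum((beta_S - beta_F) ** 2)
bound = eps ** 2 * rss_F / sigma_min ** 2
```

With $k/d$ large the realized $\epsilon$ is small and the sketched coefficients are close to the full-data solution; the bound typically holds with considerable slack, as it covers the worst-case alignment of the residual vector.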
Given the central role of $\epsilon$-subspace embeddings (Definition \ref{defn:epsilon_subspace_embedding}), the success probability,
\begin{align*}
\Pr (\mat{S} \text{ is an $\epsilon$-subspace embedding for $\mat{A}$})
\end{align*}
is thus an important descriptive measure of the uncertainty attached to the randomized algorithm. The probability statement is over the random sketching matrix $\mat{S}$ with the dataset $\mat{A}$ treated as fixed. The embedding probability is difficult to characterize precisely using existing theory \citep{venkata_johnson_2011}. The bounds in Table \ref{tab:run_time} only give qualitative guidance about the embedding probability. Users will benefit from more prescriptive results in order to choose the sketch size $k$, and the type of sketch for applications \citep{grellmann_random_2016, geppert_random_2017, ahfock_statistical_2020, falcone_2021_matrix}.
Another use for sketching is in iterative solvers for ordinary least squares regression. A sketch $\widetilde{\mat{X}} = \mat{S}\mat{X}$ can be used to generate a random preconditioner, $(\widetilde{\mat{X}}^{\mathsf{T}}\widetilde{\mat{X}})^{-1}$, that is then applied to the normal equations $\mat{X}^{\mathsf{T}}\mat{X}\vect{\beta}=\mat{X}^{\mathsf{T}}\vect{y}$. Given some initial value $\vect{\beta}^{(0)}$, the iteration is defined as
\begin{align}
\vect{\beta}^{(t+1)} &= \vect{\beta}^{(t)} + (\widetilde{\mat{X}}^{\mathsf{T}}\widetilde{\mat{X}})^{-1}\mat{X}^{\mathsf{T}}(\vect{y}-\mat{X}\vect{\beta}^{(t)}). \label{eq:basic_iteration}
\end{align}
If $\widetilde{\mat{X}}^{\mathsf{T}}\widetilde{\mat{X}}=\mat{X}^{\mathsf{T}}\mat{X}$ the iteration will converge in a single step. The degree of noise in the preconditioner will be influenced by the sketch size $k$. A sufficient condition for convergence of the iteration \eqref{eq:basic_iteration} is that $\mat{S}$ is an $\epsilon$-subspace embedding for $\mat{X}$ with $\epsilon < 0.5$ \citep{pilanci_iterative_2016}. As is typical with randomized algorithms, we accept some failure probability in order to relax the computational demands. It is of interest to develop expressions for the failure probability of the algorithm as a function of the sketch size $k$, as this can give useful guidelines in practice. It is possible to establish worst case bounds using the results in Table \ref{tab:run_time}; however, we aim to give a point estimate of the probability. Although it is possible to improve on the iteration \eqref{eq:basic_iteration} using acceleration methods \citep{meng_2014_lsrn, dahiya_2018_empirical, lacotte_2020_limiting}, we focus on the basic iteration to introduce our asymptotic techniques.
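The basic iteration \eqref{eq:basic_iteration} is equally short to implement. The following is a minimal illustration with a Gaussian sketch (function names are ours); with a generous sketch size the random preconditioner is close to $(\mat{X}^{\mathsf{T}}\mat{X})^{-1}$ and the iteration converges rapidly to $\vect{\beta}_{F}$:

```python
import numpy as np

def sketched_iterative_ols(X, y, k, n_iter=30, rng=None):
    """Basic sketched iteration: beta <- beta + (Xt^T Xt)^{-1} X^T (y - X beta),
    where Xt = S X is a Gaussian sketch used as a randomized preconditioner."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    S = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, n))
    Xt = S @ X
    P = np.linalg.inv(Xt.T @ Xt)          # random preconditioner
    beta = np.zeros(d)
    for _ in range(n_iter):
        beta = beta + P @ (X.T @ (y - X @ beta))
    return beta
```

If $\mat{S}$ is an $\epsilon$-subspace embedding for $\mat{X}$ with $\epsilon < 0.5$ the error contracts geometrically; otherwise the iteration can diverge, which is why the convergence probability is the key operating characteristic.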
\subsection{Operating characteristics}
Let the singular value decomposition of the source dataset be given by $\mat{A}=\mat{U}\mat{D}\mat{V}^{\mathsf{T}}$. Let $\sigma_{\text{min}}(\mat{M})$ and $\sigma_{\text{max}}(\mat{M})$ denote the minimum and maximum singular values, respectively, of a matrix $\mat{M}$. Likewise, let $\lambda_{\text{min}}(\mat{M})$ and $\lambda_{\text{max}}(\mat{M})$ denote the minimum and maximum eigenvalues of a matrix $\mat{M}$. It is possible to show that
\begin{align}
\Pr (\mat{S} \text{ is an $\epsilon$-subspace embedding for $\mat{A}$}) &= \Pr(\sigma_{\text{max}}(\mat{I}_{d}-\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U}) \le \epsilon), \label{eq:embedding_probability}
\end{align}
where $\mat{U}$ is the $n \times d$ matrix of left singular vectors of the source data matrix $\mat{A}$ \citep{woodruff_sketching_2014}. Now as
\begin{align}
\sigma_{\text{max}}(\mat{I}_{d}-\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U}) &= \text{max}(\lvert 1-\lambda_{\text{min}}(\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U}) \rvert, \lvert 1-\lambda_{\text{max}}(\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U}) \rvert) , \label{eq:max_identity}
\end{align}
the extreme eigenvalues of $\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U}$ are the critical factor in generating $\epsilon$-subspace embeddings. The convergence behavior of the basic iteration \eqref{eq:basic_iteration} is also tied to the eigenvalues of $\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U}$ where $\mat{A}=\mat{X}$. Providing that $(\widetilde{\mat{X}}^{\mathsf{T}}\widetilde{\mat{X}})$ is of rank $d$, the maximum eigenvalue satisfies
\begin{align*}
\lambda_{\text{max}}((\widetilde{\mat{X}}^{\mathsf{T}}\widetilde{\mat{X}})^{-1}\mat{X}^{\mathsf{T}}\mat{X}) &= \lambda_{\text{max}}((\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U})^{-1}).
\end{align*}
From standard results on iterative solvers \citep{hageman_2012_applied}, the iteration converges, that is $\underset{t \to \infty}{\lim}\lVert\vect{\beta}_{F} - \vect{\beta}^{(t)} \rVert_{2} = 0$, if and only if $ \lambda_{\text{max}}((\widetilde{\mat{X}}^{\mathsf{T}}\widetilde{\mat{X}})^{-1}\mat{X}^{\mathsf{T}}\mat{X}) < 2$. The probability of convergence can then be expressed as
\begin{align}
\Pr\left(\underset{t \to \infty}{\lim}\lVert\vect{\beta}_{F} - \vect{\beta}^{(t)} \rVert_{2} = 0\right) &= \Pr(\lambda_{\text{min}}(\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U}) > 0.5). \label{eq:convergence_probability}
\end{align}
Most existing results on the probabilities \eqref{eq:embedding_probability} and \eqref{eq:convergence_probability} are finite sample lower bounds \citep{tropp_improved_2011, nelson_osnap_2013, meng_randomized_2014}. Worst case bounds can be conservative in practice, and there is value in developing other methods to characterize the performance of randomized algorithms \citep{ halko_finding_2011,raskutti_statistical_2014, lopes_2018_error, dobriban_2018_new}. The embedding probability \eqref{eq:embedding_probability} and the convergence probability \eqref{eq:convergence_probability} are related to the extreme eigenvalues of $\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U}$. In Section \ref{sec:gaussian} we study this distribution for the Gaussian sketch and develop a Tracy-Widom approximation. The approximation is then extended to the Clarkson-Woodruff and Hadamard sketches in Section \ref{sec:data_oblivious}.
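Both operating characteristics therefore reduce to the spectrum of the $d \times d$ matrix $\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U}$, which is cheap to examine directly. A minimal illustration with a Gaussian sketch and simulated data (names ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, k = 2000, 10, 400
A = rng.normal(size=(n, d))
U = np.linalg.svd(A, full_matrices=False)[0]        # n x d left singular vectors

S = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, n))  # Gaussian sketch
lam = np.linalg.eigvalsh(U.T @ S.T @ S @ U)         # spectrum of U^T S^T S U

# Distortion factor: sigma_max(I_d - U^T S^T S U) = max(|1-lam_min|, |1-lam_max|).
eps = max(abs(1 - lam.min()), abs(1 - lam.max()))

# The basic iteration converges if and only if lam_min > 0.5.
converges = lam.min() > 0.5
```

For a sketch size of $k/d = 40$ the extreme eigenvalues sit comfortably inside $(0.5, 2)$, so the realized distortion is moderate and the iteration converges.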
\section{Gaussian sketch}
\label{sec:gaussian}
\subsection{Exact representations}
\citet[Section 2.3]{meng_randomized_2014} notes that when using a Gaussian sketch, it is instructive to consider directly the distribution of the random variable $\sigma_{\text{max}}(\mat{I}_{d}-\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U})$ in order to study the embedding probability \eqref{eq:embedding_probability}. Consider an arbitrary $n \times d$ data matrix $\mat{A}$. As $\mat{S}$ is a matrix of independent Gaussians with mean zero and variance $1/k$, it is possible to show that
\begin{align*}
\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U} &\sim \text{Wishart}\left(k, \mat{I}_{d}/k\right) ,
\end{align*}
as $\mat{U}^{\mathsf{T}}\mat{U}=\mat{I}_{d}$. The key term $\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U}$ is in some sense a pivotal quantity, as its distribution is invariant to the actual values of the data matrix $\mat{A}$. When using a Gaussian sketch, the probability of obtaining an $\epsilon$-subspace embedding has no dependence on the number of original observations $n$, or on the values in the data matrix $\mat{A}$. This is a useful property for a data-oblivious sketch, as it is possible to develop universal performance guarantees that will hold for any possible source dataset. This invariance property is also noted in \citet{meng_randomized_2014}, although the derivation is different.
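This invariance is easy to verify by simulation. Below we compare the average minimum eigenvalue of $\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U}$ for two very different source datasets against a direct $\text{Wishart}(k, \mat{I}_{d}/k)$ benchmark (an illustrative check, not taken from the paper; names ours):

```python
import numpy as np

rng = np.random.default_rng(5)
k, d, n, B = 200, 8, 300, 200

def min_eig_sketched(A, rng):
    # lambda_min(U^T S^T S U) for a fresh Gaussian sketch S of the dataset A.
    U = np.linalg.svd(A, full_matrices=False)[0]
    S = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, A.shape[0]))
    return np.linalg.eigvalsh(U.T @ S.T @ S @ U).min()

def min_eig_wishart(rng):
    # lambda_min of W ~ Wishart(k, I_d/k), simulated directly.
    G = rng.normal(size=(k, d)) / np.sqrt(k)
    return np.linalg.eigvalsh(G.T @ G).min()

# Two very different datasets: standard normal vs rescaled heavy-tailed.
A1 = rng.normal(size=(n, d))
A2 = 1e3 * rng.standard_t(df=3, size=(n, d))

m1 = np.mean([min_eig_sketched(A1, rng) for _ in range(B)])
m2 = np.mean([min_eig_sketched(A2, rng) for _ in range(B)])
m0 = np.mean([min_eig_wishart(rng) for _ in range(B)])
```

All three averages agree to Monte Carlo error, reflecting that the distribution of $\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U}$ does not depend on the values in $\mat{A}$.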
Let us define the random matrix $\mat{W} \sim \text{Wishart}(k, \mat{I}_{d}/k)$. The embedding probability of interest can then be expressed in terms of the extreme eigenvalues of the Wishart distribution:
\begin{align}
\Pr (\mat{S} \text{ is an $\epsilon$-subspace embedding for $\mat{A}$}) &=
\Pr\left( | 1- \lambda_{\text{min}}(\mat{W})| \le \epsilon, | 1- \lambda_{\text{max}}(\mat{W})| \le \epsilon \right), \label{eq:wishart_embedding_eigenvalues}
\end{align}
where we have made use of the expression for the maximum singular value \eqref{eq:max_identity}.
It is difficult to obtain a mathematically tractable expression for the embedding probability as it involves the joint distribution of the extreme eigenvalues \citep{chiani_2017_probability}. \citeauthor{meng_randomized_2014} forms a lower bound on the probability \eqref{eq:wishart_embedding_eigenvalues} using concentration results on the eigenvalues of the Wishart distribution.
The convergence probability \eqref{eq:convergence_probability}, can also be related to the eigenvalues of the Wishart distribution. Assuming $k \ge d$, the matrix $\widetilde{\mat{X}}^{\mathsf{T}}\widetilde{\mat{X}}$ has full rank with probability one. As such, using the same pivotal quantity $\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U}$ as before,
\begin{align}
\Pr\left(\underset{t \to \infty}{\lim}\lVert\vect{\beta}_{F} - \vect{\beta}^{(t)} \rVert_{2} = 0\right)&= \Pr(\lambda_{\text{min}}(\mat{W}) > 0.5), \label{eq:conv_prob}
\end{align}
where $\mat{W}\sim \text{Wishart}(k, \mat{I}_{d}/k)$. The convergence probability \eqref{eq:conv_prob} has no dependence on the specific response vector $\vect{y}$ or design matrix $\mat{X}$ under consideration. Problem invariance is a highly desirable property for a randomized iterative solver \citep{roosta_subsampled_2016, lacotte_2020_limiting}. Both the embedding probability and the convergence probability are related to the extreme eigenvalues of the Wishart distribution. The extreme eigenvalues of Wishart random matrices are a well studied topic in random matrix theory \citep{edelman_eigenvalues_1988}, and we can make use of existing results to analyse the operating characteristics of sketching algorithms. In the following section we develop approximations to the embedding probability and the convergence probability in the asymptotic regime:
\begin{align}
n,d,k \to \infty, \quad n > k, \quad d/k \to \alpha \in (0, 1]. \label{eq:big_data_regime}
\end{align}
The limit is asymptotic in $n$, $d$ and $k$, with the constraint that the number of variables to sketch size tends to a constant $\alpha$. This can be interpreted as a type of Big Data asymptotic, where we consider tall and wide datasets through the limit in $n$ and $d$, and increasing sketch sizes $k$ to cope with the expanding number of variables $d$. Although there is no explicit dependence on $n$ for the finite sample expressions \eqref{eq:embedding_probability} and \eqref{eq:conv_prob} for the Gaussian sketch, the asymptotic limit in $n$ is still used to emphasize that we are taking limits in the tall-data setting.
\citet{dobriban_2018_new} analyse the mean squared error of single-pass sketching algorithms for linear regression in this asymptotic framework under the assumption of a generative model. Our analysis is different as we are concerned with the embedding and convergence probabilities \eqref{eq:embedding_probability} and \eqref{eq:convergence_probability}, rather than the accuracy of population parameter estimates. In independent work, \citet{lacotte_2020_limiting} study the limiting empirical spectral distribution of the Hadamard sketch in the asymptotic regime \eqref{eq:big_data_regime}. Here we are concerned with the fluctuations of the extreme eigenvalues rather than the bulk of the spectrum.
\subsection{Random matrix theory}
Random matrix theory involves the analysis of large random matrices \citep{bai_spectral_2010}. The Tracy-Widom law is an important result in the study of extreme eigenvalue statistics \citep{tracy_level_1994}. \citet{johnstone_distribution_2001} showed that the Tracy-Widom law gives the asymptotic distribution of the maximum eigenvalue of a $\text{Wishart}(k, \mat{I}_{d}/k)$ matrix after appropriate centering and scaling. In subsequent work, \citet{ma_accuracy_2012} showed that the rate of convergence could be improved from ${O}(d^{-1/3})$ to ${O}(d^{-2/3})$ by using different centering and scaling constants than in \citet{johnstone_distribution_2001}. We build on the convergence result given by \citeauthor{ma_accuracy_2012}.
The \texttt{R} package \texttt{RMTstat} contains a number of functions for working with the Tracy-Widom distribution \citep{johnstone_2014_rmtstat}. The main application of the Tracy-Widom law to statistical inference has been its use in hypothesis testing in high-dimensional statistical models \citep{johnstone_high_2006, bai_spectral_2010}. To the best of our knowledge, the connection to sketching algorithms has not been explored in great depth. The Tracy-Widom law can be used to approximate the embedding probability \eqref{eq:embedding_probability}.
\begin{theorem}
\label{thm:gaussian_embedding_tw_limit}
Suppose we have an arbitrary $n \times d$ data matrix $\mat{A}$ where $n >d$ and $\mat{A}$ is of rank $d$. Furthermore assume we take a Gaussian sketch of size $k$. Consider the limit in $n, k$ and $d$, such that $d/k \to \alpha$ with $\alpha \in (0, 1]$. Define centering and scaling constants $\mu_{k,d}$ and $\sigma_{k,d}$ as
\begin{align*}
\mu_{k,d} &= k^{-1}(\sqrt{k-1/2}+\sqrt{d-1/2})^{2}, \quad \sigma_{k,d} = k^{-1}(\sqrt{k-1/2}+\sqrt{d-1/2})\left(\dfrac{1}{\sqrt{k-1/2}}+\dfrac{1}{\sqrt{d-1/2}}\right)^{1/3}.
\end{align*}
Set $Z \sim F_{1}$ where $F_{1}$ is the Tracy-Widom distribution. Let $\psi_{n,k,d}$ give the exact embedding probability and let $\widehat{\psi}_{n,k,d}$ give the asymptotic approximation to the embedding probability:
\begin{align*}
\psi_{n,k,d} &= \Pr \left(\mat{S} \emph{ is an $\epsilon$-subspace embedding for $\mat{A}$}\right), \quad \widehat{\psi}_{n,k,d} = \Pr \left( Z \le \dfrac{\epsilon +1-\mu_{k,d}}{\sigma_{k,d}}\right).
\end{align*}
Then asymptotically in $n, d$ and $k$, for any $\epsilon>0$,
\begin{align*}
\underset{n,d,k \to \infty}{\lim} \left\lvert \psi_{n,k,d} - \widehat{\psi}_{n,k,d} \right\rvert &=0.
\end{align*}
Furthermore, for even $d$, $ \left\lvert \psi_{n,k,d}-\widehat{\psi}_{n,k,d} \right\rvert = {O}(d^{-2/3})$.
\end{theorem}
The proof is given in the supplementary material.
The convergence probability of the iterative algorithm \eqref{eq:convergence_probability} can also be approximated using the Tracy-Widom law.
\begin{theorem}
\label{thm:gaussian_convergence_tw}
Suppose we have an arbitrary $n \times d$ data matrix $\mat{A}$ where $n >d$ and $\mat{A}$ is of rank $d$. Furthermore, assume we take a Gaussian sketch of size $k$. Consider the limit in $n, k$ and $d$, such that $d/k \to \alpha$ with $\alpha \in (0, 1]$. Set
\begin{align*}
\mu_{k,d} &= (\sqrt{k-1/2}-\sqrt{d-1/2})^{2},\\
\sigma_{k,d} &= (\sqrt{k-1/2}-\sqrt{d-1/2})\left(\dfrac{1}{\sqrt{k-1/2}}-\dfrac{1}{\sqrt{d-1/2}}\right)^{1/3},
\end{align*}
and define the following centering and scaling constants $ \tau_{k,d} = \sigma_{k,d}/\mu_{k,d}, \nu_{k,d} = \log(\mu_{k,d})-\log k -\tau_{k,d}^2/8$. Set $Z \sim F_{1}$, where $F_{1}$ is the Tracy-Widom distribution. Let $\gamma_{n,k,d}$ give the exact convergence probability, and $\widehat{\gamma}_{n,k,d}$ give the asymptotic approximation to the convergence probability:
\begin{align*}
\gamma_{n,k,d} &= \Pr \left(\underset{t \to \infty}{\lim}\lVert\vect{\beta}_{F} - \vect{\beta}^{(t)} \rVert_{2} = 0\right), \quad
\widehat{\gamma}_{n,k,d} = \Pr\left(Z \le \dfrac{\nu_{k,d}-\log (1/2)}{\tau_{k,d}}\right).
\end{align*}
Then for all starting values $\vect{\beta}^{(0)}$, asymptotically in $n, d$ and $k$,
\begin{align*}
\quad \underset{n,d,k \to \infty}{\lim} \left\lvert \gamma_{n,k,d}-\widehat{\gamma}_{n,k,d} \right\rvert = 0.
\end{align*}
Furthermore, for even $d$, $ \left\lvert \gamma_{n,k,d}-\widehat{\gamma}_{n,k,d} \right\rvert = {O}(d^{-2/3})$.
\end{theorem}
The proof is given in the supplementary material.
The embedding probability for the Gaussian sketch can be estimated by simulating $\mat{W} \sim \text{Wishart}(k, \mat{I}_{d}/k)$ and using the empirical distribution of the random variable $\sigma_{\text{max}}\left(\mat{I}_{d} - \mat{W}\right)$. To assess the accuracy of the approximation in Theorem \ref{thm:gaussian_embedding_tw_limit}, we generated $B=10,000$ random Wishart matrices $\mat{W}^{[1]}, \ldots, \mat{W}^{[B]}$. For each simulated matrix $\mat{W}^{[b]}$ we computed the distortion factor $\epsilon^{[b]} = \sigma_{\text{max}}(\mat{I}_{d}-\mat{W}^{[b]})$ for $b=1, \ldots, B$. The simulated distortion factors $\epsilon^{[1]}, \ldots, \epsilon^{[B]}$ were used to give a Monte Carlo estimate of the embedding probability:
\begin{align}
\widehat{\Pr}(\mat{S} \text{ is an } \epsilon\text{-subspace embedding for } \mat{A}) &= \dfrac{1}{B}\sum_{b=1}^{B}\mathbbm{1}(\epsilon^{[b]} \le \epsilon). \label{eq:monte_carlo_embedding}
\end{align}
We used the \texttt{ARPACK} library \citep{lehoucq_arpack_1998} to compute the maximum singular values $\sigma_{\text{max}}(\mat{I}_{d}-\mat{W}^{[b]})$. The estimated embedding probabilities are displayed in Figure \ref{fig:epsilon_ecdf} for different dimensions $d$. The sketch size to variables ratio, $k/d$, was held fixed at 20. The solid red line shows the empirical probability of obtaining an $\epsilon$-subspace embedding. The dashed black line gives the Tracy-Widom approximation given in Theorem \ref{thm:gaussian_embedding_tw_limit}. The agreement is consistently good over the dimensions $d$ and the range of sketch sizes $k$ considered.
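The Monte Carlo estimator \eqref{eq:monte_carlo_embedding} can be sketched as follows (for illustration only: $B$ is far smaller than the $10{,}000$ used above, a dense SVD replaces the \texttt{ARPACK} computation, and the Wishart draw uses $\mat{W}=\mat{G}^{\mathsf{T}}\mat{G}/k$ with $\mat{G}$ a $k\times d$ standard Gaussian matrix):

```python
import numpy as np

def embedding_prob(eps, d, k, B=200, seed=0):
    """Monte Carlo estimate of Pr(S is an eps-subspace embedding) for the
    Gaussian sketch, using W = G'G/k ~ Wishart(k, I_d/k)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(B):
        G = rng.standard_normal((k, d))
        W = G.T @ G / k
        # distortion factor: largest singular value of I_d - W
        hits += np.linalg.svd(np.eye(d) - W, compute_uv=False)[0] <= eps
    return hits / B
```

With $k/d=20$ the spectrum of $\mat{W}$ concentrates on roughly $[(1-\sqrt{d/k})^2,(1+\sqrt{d/k})^2]$, so the estimate is near one for $\epsilon$ around $0.9$ and near zero for very small $\epsilon$.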
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{Figures/gaussian_embedding.pdf}
\caption{Accuracy of Tracy-Widom approximation for embedding probability \eqref{eq:wishart_embedding_eigenvalues} for the Gaussian sketch. The dashed black line gives the asymptotic limit, the solid red line gives the empirical probability. When $d\ge 20$ the approximation given in Theorem \ref{thm:gaussian_embedding_tw_limit} is very accurate. }
\label{fig:epsilon_ecdf}
\end{figure*}
\section{Computationally efficient sketches}
\label{sec:data_oblivious}
\subsection{Asymptotics}
Asymptotic methods are useful to analyse data-oblivious sketches that do not admit interpretable finite sample distributions \citep{li_very_2006, ahfock_statistical_2020, lacotte_2020_limiting}. Here we describe the limiting behavior of the sketched algorithms for fixed $k$ and $d$ as the number of source observations $n$ increases.
Under an assumption on the limiting leverage scores of the source data matrix, we can establish a limit theorem for the Hadamard and Clarkson-Woodruff sketches. The leverage scores are an important structural property in sketching algorithms \citep{mahoney_structural_2016}.
\begin{assumption}
\label{assum:leverage}
Define the singular value decomposition of the $n \times d$ source dataset as $\mat{A}_{(n)}=\mat{U}_{(n)}\mat{D}_{(n)}\mat{V}_{(n)}^{\mathsf{T}}$. Let $\vect{u}_{(n)i}^{\mathsf{T}}$ give the $i$th row in $\mat{U}_{(n)}$. Assume that the maximum leverage score tends to zero, that is
\begin{align*}
\lim_{n \to \infty} \underset{i=1, \ldots, n}{\textnormal{max}} \lVert \vect{u}_{(n)i} \rVert_{2}^{2} = 0.
\end{align*}
\end{assumption}
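Leverage scores can be computed from any orthonormal basis of the column span of the data matrix. A small illustration on simulated data (the data, sizes and function name are ours, not from the paper):

```python
import numpy as np

def leverage_scores(A):
    """Row leverage scores of A: squared row norms of an orthonormal
    basis Q of the column space (here from a thin QR factorisation)."""
    Q, _ = np.linalg.qr(A)
    return np.sum(Q ** 2, axis=1)

rng = np.random.default_rng(1)
n, d = 5000, 8
lev = leverage_scores(rng.standard_normal((n, d)))
# lev sums to d, and its maximum is at least the average d/n
```

For i.i.d. Gaussian rows the maximum leverage score tends to zero as $n\to\infty$ with $d$ fixed, so such a sequence satisfies Assumption \ref{assum:leverage}.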
The asymptotic probability of obtaining an $\epsilon$-subspace embedding for the Hadamard and Clarkson-Woodruff sketches can be related to the Wishart distribution.
\begin{theorem}
\label{thm:data_oblivious_asymptotic_embedding}
Consider a sequence of arbitrary $n \times d$ data matrices $\mat{A}_{(n)}$, where each data matrix is of rank $d$, and $d$ is fixed. Let $\mat{A}_{(n)}=\mat{U}_{(n)}\mat{D}_{(n)}\mat{V}_{(n)}^{\mathsf{T}}$ represent the singular value decomposition of $\mat{A}_{(n)}$. Let $\mat{S}_{(n)}$ be a $k \times n$ Hadamard or Clarkson-Woodruff sketching matrix where $k$ is also fixed. Suppose that Assumption \ref{assum:leverage} is satisfied. Then as $n$ tends to infinity with $k$ and $d$ fixed,
\begin{align*}
\underset{n \to \infty}{\lim}\Pr\left( \mat{S}_{(n)} \emph{ is an $\epsilon$-subspace embedding for $\mat{A}_{(n)}$} \right) &= \Pr\left(\sigma_{\emph{max}}(\mat{I}_{d}-\mat{W}) \le \epsilon \right),
\end{align*}
where $\mat{W} \sim \emph{Wishart}(k, \mat{I}_{d}/k)$.
\end{theorem}
The proof is given in the supplementary material.
Theorem \ref{thm:data_oblivious_asymptotic_embedding} states that the embedding probability for the Hadamard and Clarkson-Woodruff sketches converges to that of the Gaussian sketch as $n \to \infty$. Therefore, Theorem \ref{thm:gaussian_embedding_tw_limit} can also be used to approximate the embedding probability. Empirical studies have shown that the Hadamard and Clarkson-Woodruff sketches can give similar quality results to the Gaussian projection \citep{venkata_johnson_2011,le_2013_fastfood, dahiya_2018_empirical}. Theorem \ref{thm:data_oblivious_asymptotic_embedding} helps to characterize situations where this phenomenon is expected to be observed.
\begin{remark}
The same line of proof used in Theorem \ref{thm:data_oblivious_asymptotic_embedding} can be used to show that the convergence probability of \eqref{eq:basic_iteration} using the Hadamard and Clarkson-Woodruff projections converges to that of the Gaussian sketch under Assumption \ref{assum:leverage}. Theorem \ref{thm:gaussian_convergence_tw} also gives an asymptotic approximation for the Hadamard and Clarkson-Woodruff sketches.
\end{remark}
It remains to establish a formal limit theorem in terms of the Tracy-Widom distribution for the Hadamard and Clarkson-Woodruff sketches. The proof of Theorem \ref{thm:data_oblivious_asymptotic_embedding} treats $k$ and $d$ as fixed, with only $n$ being taken to infinity. It is possible that Assumption \ref{assum:leverage} on the leverage scores will remain sufficient in the expanding dimension scenario. For any $d$, the maximum leverage score must be greater than the average leverage score,
\begin{align*}
\underset{i=1,\ldots, n}{\text{max}} \lVert \vect{u}_{(n)i} \rVert_{2}^{2} &\ge \dfrac{1}{n}\sum_{i=1}^{n} \lVert \vect{u}_{(n)i} \rVert_{2}^{2} = \dfrac{d}{n}.
\end{align*}
If we maintain that Assumption \ref{assum:leverage} holds on the leverage scores as $n,d,k \to \infty$, this implies that $d/n \to 0$. As we have assumed that our primary motivation for sketching is data compression when $n \gg d$, we feel that analysis in the asymptotic regime $d/n \to 0$ is reasonable for this use-case setting. The asymptotic approximations developed here are recommended for applications of sketching in tall-data problems $n \gg d$.
The key result is that the Hadamard and Clarkson-Woodruff sketches behave like the Gaussian projection for large $n$, with $k$ and $d$ fixed. If the Tracy-Widom approximation in Theorem \ref{thm:gaussian_embedding_tw_limit} is good for finite $k$ and $d$ with the Gaussian sketch, it should hold well for the Hadamard and Clarkson-Woodruff projections for $n$ sufficiently large.
\subsection{Uniform sketch}
It is considerably more difficult to approximate the embedding probability for the uniform sketch compared to the other data-oblivious projections. \citet{vershynin_2010_introduction} provides a bound for the uniform sketch that is useful for comparative purposes.
\begin{theorem}[\cite{vershynin_2010_introduction}, Theorem 5.1]
\label{thm:subsampling_finite}
Consider an $n \times d$ matrix $\mat{U}$ such that $\mat{U}^{\mathsf{T}}\mat{U}=\mat{I}_{d}$. Let $\vect{u}_{i}^{\mathsf{T}}$ represent the $i$-th row in $\mat{U}$ for $i=1, \ldots, n$. Let $m$ give an upper bound on the leverage scores, so
\begin{align*}
\underset{i=1, \ldots, n}{\max} \ \lVert \vect{u}_{i} \rVert_{2}^{2} \le m.
\end{align*}
Let ${\mat{S}}$ be a $k \times n$ uniform sketch of size $k$. Then for every $t \ge 0$, with probability at least $1-2d\exp(-ct^2)$ one has
\begin{align*}
1- t\sqrt{\dfrac{mn}{k}} \le \sigma_{\emph{min}}(\mat{S}\mat{U}) \le \sigma_{\emph{max}}(\mat{S}\mat{U}) \le 1+t\sqrt{\dfrac{mn}{k}}.
\end{align*}
\end{theorem}
Theorem \ref{thm:subsampling_finite} can be used to give a lower bound on the probability of obtaining an $\epsilon$-subspace embedding. Both Theorem \ref{thm:subsampling_finite} and Theorem \ref{thm:data_oblivious_asymptotic_embedding} involve the maximum leverage score. Holding $k$ and $d$ fixed, in order for the bound in Theorem \ref{thm:subsampling_finite} to remain controlled as the sample size $n$ increases, the maximum leverage score $m$ must decrease at a sufficient rate. In contrast, Assumption \ref{assum:leverage} does not enforce a rate of decay on the maximum leverage score, only that it eventually tends to zero as $n \to \infty$. This suggests that the uniform projection could be more sensitive to the maximum leverage score than the Gaussian, Hadamard and Clarkson-Woodruff projections. As mentioned earlier, it is very difficult to give a general expression for the embedding probability \eqref{eq:embedding_probability} when using the uniform sketch as it will be a complicated function of the source dataset $\mat{A}$. An advantage of the Gaussian, Hadamard and Clarkson-Woodruff projections is that a Tracy-Widom approximation can be motivated under mild regularity conditions.
\section{Data application}
\subsection{$\epsilon$-subspace embedding}
We tested the theory on a large genetic dataset of European ancestry participants in UK Biobank. The covariate data consists of genotypes at $p=1032$ genetic variants in the Protein Kinase C Epsilon (PKC$\varepsilon$) gene on $n=407,779$ subjects. Variants were filtered to have minor allele frequency of greater than one percent. The response variable was haemoglobin concentration adjusted for age, sex and technical covariates. The region was chosen as many associations with haemoglobin concentration were discovered in a genome-wide scan using univariable models; these associations were with variants with different allele frequencies, suggesting multiple distinct causal variants in the region. We also considered a subset of this dataset with $p=130$ representative markers identified by hierarchical clustering. When including the intercept and response, the PKC$\varepsilon$ subset has $n=407,779, d=132$, and the full PKC$\varepsilon$ dataset has $n=407,779, d=1034$.
The full PKC$\varepsilon$ dataset is of moderate size, so it was feasible to take the singular value decomposition of the full $n \times d$ dataset $\mat{A}=\mat{U}\mat{D}\mat{V}^{\mathsf{T}}$. Given the singular value decomposition we ran an oracle procedure to estimate the exact embedding probability. We generated $B$ sketching matrices $\mat{S}^{[1]}, \ldots, \mat{S}^{[B]}$. These were used to compute $
\epsilon^{[b]} = \sigma_{\text{max}}(\mat{I}_{d}-\mat{U}^{\mathsf{T}}\mat{S}^{[b]\mathsf{T}}\mat{S}^{[b]}\mat{U})$
for $b=1, \ldots, B$ and give an estimated embedding probability as in \eqref{eq:monte_carlo_embedding}. When working with the full PKC$\varepsilon$ dataset we simulated directly from the matrix normal distribution $\widetilde{\mat{U}} \sim \text{MN}(\mat{I}_{k}, \mat{I}_{d}/k)$ for the Gaussian sketch, rather than computing the matrix multiplication $\mat{S}\mat{U}$. We took $B=1,000$ sketches of the PKC$\varepsilon$ subset, and $B=100$ sketches of the full PKC$\varepsilon$ dataset using the uniform, Gaussian, Hadamard and Clarkson-Woodruff projections, with $k=20\times d$.
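The oracle procedure needs only $\mat{U}$ and the sketched matrix $\mat{S}\mat{U}$. The sketch below illustrates it on simulated data (not the UK Biobank pipeline; names and sizes are ours) with a Clarkson-Woodruff (CountSketch) projection, which hashes each row to one of $k$ buckets with a random sign and can therefore be applied in time proportional to the number of nonzeros without ever forming $\mat{S}$:

```python
import numpy as np

def clarkson_woodruff(X, k, seed=0):
    """Apply a k x n CountSketch (Clarkson-Woodruff) projection to the
    n x d matrix X: row i is added, with a random sign, to bucket h(i)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    buckets = rng.integers(0, k, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    SX = np.zeros((k, X.shape[1]))
    np.add.at(SX, buckets, signs[:, None] * X)   # unbuffered accumulation
    return SX

rng = np.random.default_rng(2)
A = rng.standard_normal((4000, 10))
U, _, _ = np.linalg.svd(A, full_matrices=False)
SU = clarkson_woodruff(U, k=200)
# oracle distortion factor for this sketch
eps = np.linalg.svd(np.eye(10) - SU.T @ SU, compute_uv=False)[0]
```

Since $\mathbb{E}[\mat{S}^{\mathsf{T}}\mat{S}]=\mat{I}_{n}$ for this construction, the distortion factor concentrates near the Gaussian-sketch value when the leverage scores are small.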
Figure \ref{fig:prkce_medium} shows the empirical and theoretical embedding probabilities for the PKC$\varepsilon$ subset $(n=407,779, d=132)$ for each type of sketch. The observed and theoretical curves match well for the Gaussian, Hadamard and Clarkson-Woodruff projections. The uniform projection performs worse than the other data-oblivious random projections, as larger values of $\epsilon$ indicate weaker approximation bounds. The uniform projection does not satisfy a central limit theorem for fixed $k$, so we do not necessarily expect the Tracy-Widom law to give a good approximation for the uniform projection.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{Figures/prkce_medium_ecdf.pdf}
\caption{Analysis of subset of PKC$\varepsilon$ dataset $(n=407,779, d=132)$ with $B=1,000$ sketches of size $k=20d$. The dashed black line and the solid red line gives the theoretical and empirical embedding probabilities respectively. The Tracy-Widom approximation is accurate for the Gaussian, Hadamard and Clarkson-Woodruff sketches.}
\label{fig:prkce_medium}
\end{figure}
Figure \ref{fig:prkce_large} shows the empirical and theoretical embedding probabilities for the full PKC$\varepsilon$ dataset $(n=407,779, d=1034)$ for each type of sketch. The Tracy-Widom approximation is accurate for the Gaussian sketch, but there are some deviations for the Hadamard and the Clarkson-Woodruff sketches. Interestingly, the empirical cdf for the Hadamard sketch (red) is to the left of the theoretical curve (black), indicating smaller values of $\epsilon$ than predicted. The distribution of $\epsilon$ has a longer right tail under the Clarkson-Woodruff sketch than is predicted by the Tracy-Widom law.
The deviation from the Tracy-Widom limit in Figure \ref{fig:prkce_large} could be because the finite sample approximation is poor. Theorem \ref{thm:data_oblivious_asymptotic_embedding} suggests that the Hadamard and Clarkson-Woodruff projections behave like the Gaussian sketch for $n$ sufficiently large with respect to $d$. To test this we bootstrapped the full PKC$\varepsilon$ dataset to be ten times its original size. The bootstrapped PKC$\varepsilon$ dataset has $n=4,077,790, d=1034$. We took one thousand sketches of size $k=20\times d$ using the Clarkson-Woodruff projection and ran the oracle procedure of computing $\epsilon^{[b]} = \sigma_{\text{max}}(\mat{I}_{d}-\mat{U}^{\mathsf{T}}\mat{S}^{[b]\mathsf{T}}\mat{S}^{[b]}\mat{U})$ for each sketch. Figure \ref{fig:prkce_bootstrap} compares the distribution of $\sigma_{\text{max}}(\mat{I}_{d}-\mat{U}^{\mathsf{T}}\mat{S}^{\mathsf{T}}\mat{S}\mat{U})$ using Clarkson-Woodruff projection on the original dataset and on the large bootstrapped dataset. As $n$ increases we expect the quality of the Tracy-Widom approximation to improve. Panel (a) of Figure \ref{fig:prkce_bootstrap} compares the theoretical to the simulation results on the original dataset. The Clarkson-Woodruff projection shows greater variance than expected. Panel (b) compares the theoretical to the simulation results on the bootstrapped dataset. In (b) there is very good agreement between the empirical distribution and the theoretical distribution. It seems that for this dataset $n \approx 400,000$ is not big enough for the large sample asymptotics to kick in. At $n \approx 4$ million the Tracy-Widom approximation is very good. As mentioned earlier, our motivation for using a sketching algorithm is to perform data compression with tall datasets $n \gg d$. This example highlights that the asymptotic approximations become more accurate as the sample size $n$ grows while the computational incentives for using sketching increase in parallel.
\begin{table}
\centering
\begin{tabular}{@{} l l l @{}}
\toprule
Projection & Subset $(d=132)$ & Full $(d=1034)$ \\
\midrule
Gaussian & 769 & - \\
Hadamard & 17.2& 156 \\
Clarkson-Woodruff & 1.33 & 21 \\
Uniform & 0.03& 2.8 \\
\bottomrule
\end{tabular}
\caption{Mean sketching time (seconds) over ten sketches for each dataset. The Gaussian sketch is considerably slower than the Hadamard and Clarkson-Woodruff sketches on the subset, as is expected from Table \ref{tab:run_time}.}
\label{tab:prkce_epsilon_medium_times}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{Figures/prkce_large_ecdf.pdf}
\caption{Analysis of full PKC$\varepsilon$ dataset $(n=407,779, d=1,034)$ with $B=100$ sketches of size $k=20d$. The $x$-axis is different in each panel. The dashed black line and the solid red line give the theoretical and empirical embedding probabilities, respectively. The uniform projection is much less successful at generating $\epsilon$-subspace embeddings than the other data-oblivious projections.}
\label{fig:prkce_large}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{Figures/prkce_bootstrap_ecdf.pdf}
\caption{Comparison of results on the original PKC$\varepsilon$ dataset ($n=407,779$) and the bootstrapped larger PKC$\varepsilon$ dataset ($n=4,077,790$). The dashed black line and the solid red line give the theoretical and empirical embedding probabilities, respectively. As expected from Theorem \ref{thm:data_oblivious_asymptotic_embedding}, the accuracy of the Tracy-Widom approximation increases with $n$.}
\label{fig:prkce_bootstrap}
\end{figure}
\subsection{Iterative optimisation}
We considered iterative least-squares optimisation using the song year dataset available from the UCI machine learning repository. The dataset has $n=515,344$ observations, $p=90$ covariates, and year of song release as the response. We assessed the convergence probability by running the iteration \eqref{eq:basic_iteration} with the sketched preconditioner. The initial parameter estimate $\vect{\beta}^{(0)}$ was a vector of zeros. The iteration was run for 2000 steps, with convergence being declared if the gradient norm condition $\lVert \mat{X}^{\mathsf{T}}(\vect{y}-\mat{X}\vect{\beta}^{(t)})\rVert_{2} < 10^{-6}$ was satisfied at any time step $t$. This convergence criterion was used instead of $\lVert\vect{\beta}_{F} - \vect{\beta}^{(t)} \rVert_{2}$ as $\vect{\beta}_{F}$ will not be known in practice. This was repeated one hundred times for each of the random projections discussed in Section \ref{subsec:sketching}, using different sketch sizes $k$. Figure \ref{fig:year_convergence} compares the empirical (black solid points) and theoretical convergence probabilities (dashed red line) against the sketch size $k$. The point-ranges represent 95\% confidence intervals. The Gaussian, Hadamard and Clarkson-Woodruff sketches show near-identical behavior, and the empirical convergence probabilities closely match the theoretical predictions using Theorem \ref{thm:gaussian_convergence_tw}. The uniform sketch was much less successful in generating preconditioners: the algorithm did not converge in any replication at any sketch size $k$. In this example, the additional computational cost of the Gaussian, Hadamard and Clarkson-Woodruff sketches compared to uniform subsampling has clear benefits.
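The exact form of \eqref{eq:basic_iteration} is not restated in this section; a standard sketch-preconditioned iteration of this type replaces $(\mat{A}^{\mathsf{T}}\mat{A})^{-1}$ by the sketched Hessian $((\mat{S}\mat{A})^{\mathsf{T}}\mat{S}\mat{A})^{-1}$ in a fixed-point update. The sketch below is a minimal illustration of such an update on simulated data (all sizes and names are ours, not the song year pipeline), with a Gaussian sketch of size $k=20d$ and the same gradient-norm stopping rule:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, k = 4000, 20, 400                       # k = 20 d
A = rng.standard_normal((n, d))
y = A @ rng.standard_normal(d) + rng.standard_normal(n)

S = rng.standard_normal((k, n)) / np.sqrt(k)  # Gaussian sketch
H = (S @ A).T @ (S @ A)                       # sketched Hessian A'S'SA

beta = np.zeros(d)                            # starting value beta^(0)
for t in range(2000):
    grad = A.T @ (y - A @ beta)
    if np.linalg.norm(grad) < 1e-6:           # gradient-norm stopping rule
        break
    beta = beta + np.linalg.solve(H, grad)    # preconditioned update
```

Convergence is governed by the spectrum of $\mat{I}-\mat{H}^{-1}\mat{A}^{\mathsf{T}}\mat{A}$, which is why the embedding quality of the sketch drives the convergence probability; with $k=20d$ the iteration contracts with high probability.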
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{Figures/year_convergence_plot.pdf}
\caption{Convergence probability on year dataset $(n=515,344, d=91)$. Black solid points show the empirical convergence probability over $B=100$ sketches. The red dashed line gives the theoretical convergence probability using Theorem \ref{thm:gaussian_convergence_tw}. The Tracy-Widom approximation is accurate for the Gaussian, Hadamard and Clarkson-Woodruff sketches. The uniform sketch fails to generate useful preconditioners. }
\label{fig:year_convergence}
\end{figure}
\section{Conclusion}
The analysis of the asymptotic behavior of common data-oblivious random projections revealed an important connection to the Tracy-Widom law. The probability of attaining an $\epsilon$-subspace embedding (Definition \ref{defn:epsilon_subspace_embedding}) is a key descriptive measure for many sketching algorithms. The asymptotic embedding probability can be approximated using the Tracy-Widom law for the Gaussian, Hadamard and Clarkson-Woodruff sketches. The Tracy-Widom law can also be used to estimate the convergence probability for iterative schemes with a sketched preconditioner. We have tested the predictions empirically and seen close agreement. The majority of existing results for sketching algorithms have been established using non-asymptotic tools. Asymptotic results are a useful complement that can provide answers to important questions that are difficult to address concretely in a finite dimensional framework.
There was a stark contrast between the performance of the basic uniform projection and the other data-oblivious projections (Gaussian, Hadamard and Clarkson-Woodruff) in the data application. The Hadamard and Clarkson-Woodruff projections are expected to behave like the Gaussian projection under mild regularity conditions on the maximum leverage score. We observed this phenomenon when $n/d$ was large, as is required by Theorem \ref{thm:data_oblivious_asymptotic_embedding}. The Hadamard and Clarkson-Woodruff projections are substantially more computationally efficient than the Gaussian projection (recall Table \ref{tab:run_time}), so their universal limiting behavior implies that the trade-off between computation time and performance guarantees is asymptotically negligible in the regime \eqref{eq:big_data_regime}.
The Tracy-Widom law has found many applications in high-dimensional statistics and probability \citep{edelman_2013_random}, and we have shown that it is useful for describing the asymptotic behavior of sketching algorithms. The asymptotic behaviour with respect to large $n$ is of practical interest, as this is the regime where sketching is attractive as a data compression technique. The universal behavior of high-dimensional random matrices has practical and theoretical consequences for randomized algorithms that use linear dimension reduction \citep{dobriban_2018_new, lacotte_2020_limiting}.
\bibliographystyle{spcustom}
% arXiv:math/0411166
\title{A context-free and a 1-counter geodesic language for a Baumslag-Solitar group}
\begin{abstract}
We give a language of unique geodesic normal forms for the Baumslag-Solitar group BS$(1,2)$ that is context-free and 1-counter. We discuss the classes of context-free, 1-counter and counter languages, and explain how they are inter-related.
\end{abstract}
\section{Introduction}
In this article we give a simple combinatorial description of a language
of normal forms for the solvable Baumslag-Solitar group BS$(1,2)$ with the standard generating set,
such that each normal form word is geodesic, each group element has a unique normal form
representative, and the language is accepted by a (partially blind)
1-counter automaton. It follows that the language is context-free.
Several authors have studied geodesic languages for the (solvable) Baumslag-Solitar
groups, including Brazil \cite{Brazil}, Collins, Edjvet and Gill
\cite{CollinsEG}, Freden and McCann \cite{Freden}, Groves
\cite{Groves}, Miller \cite{Miller}, and the author and Hermiller
\cite{EHerm}. It is well known that Baumslag-Solitar groups are asynchronously
automatic but not automatic \cite{Epstein}, and the asynchronous language
is not geodesic. Groves proved that no geodesic language of normal forms for a
solvable Baumslag-Solitar group with standard generating set can be regular \cite{Groves},
so we could say that context-free or 1-counter is the next best thing.
Collins, Edjvet and Gill proved that the growth function (the formal
power series whose $n$-th coefficient is the number of elements
having a geodesic representative of length $n$) of a solvable Baumslag-Solitar
group is rational \cite{CollinsEG}, and Freden and McCann have studied
growth functions for the non-solvable case \cite{Freden}.
If $G$ is a group with generating set $\mathcal G$, we say two words $u,v$
are equal in the group, or $u=_Gv$, if they represent the same group
element. We say $u$ and $v$ are identical if they are equal in the free
monoid, that is, they are equal in $\mathcal G^*$.
\begin{defn}[$G$-automaton]\label{defn:EE}
Let $G$ be a group and $\Sigma$ a finite set. A (non-deterministic)
{\em $G$-automaton} $A_G$ over $\Sigma$ is a finite directed graph
with a distinguished {\em start vertex} $q_0$, some distinguished
{\em accept vertices}, and with edges labeled by $(\Sigma^{\pm
1}\cup\{\epsilon\})\times G$. If $p$ is a path in $A_G$, the element
of $(\Sigma^{\pm 1})^*$ which is the first component of the label of
$p$ is denoted by $w(p)$, and the element of $G$ which is the second
component of the label of $p$ is denoted $g(p)$. If $p$ is the empty
path, $g(p)$ is the identity element of $G$ and $w(p)$ is the empty
word. $A_G$ is said to {\em accept} a word $w\in (\Sigma^{\pm 1})^*$
if there is a path $p$ from the start vertex to some accept vertex
such that $w(p)=w$ and $g(p)=_G 1$.
\end{defn}
\begin{defn}[Finite state automaton; Regular]
If $G$ is the trivial group, then $A_G$ is a (non-deterministic)
{\em finite state automaton}. A language is {\em regular} if it is the set
of strings accepted by a finite state automaton.
\end{defn}
\begin{defn}[Counter; 1-counter]
A language is {\em $k$-counter} if it is accepted by some
$\mathbb Z^k$-automaton. We call the generators of $\mathbb Z^k$ and their inverses
{\em counters}. A language is {\em counter} if it is $k$-counter for
some $k\geq 1$.
\end{defn}
For example, the language $\{a^nb^na^n \; | \; n\in \mathbb N\}$ is
accepted by the $\mathbb Z^2$-automaton in Figure \ref{fig:counterNotCF}, with
alphabet $a,b$ and counters $x_1,x_2$.
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=3cm]{pics/counter-anbnan.eps}
\end{center}
\caption{A counter automaton accepting $a^nb^na^n$.}
\label{fig:counterNotCF}
\end{figure}
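This automaton is easy to simulate directly. In the sketch below (our own rendering of the figure's control, not code from the paper), the finite control tracks which run of letters is being read: $x_1$ is incremented on the first $a$-run and decremented on the $b$-run, and $x_2$ is incremented on the $b$-run and decremented on the final $a$-run. Matching the partially blind condition, the counters are inspected only once the input is exhausted.

```python
def accepts_anbnan(word):
    """Simulate a Z^2-automaton for {a^n b^n a^n}: the finite control
    enforces the shape a*b*a*, and acceptance additionally requires
    both counters to be zero at the end (never tested mid-run)."""
    x1 = x2 = 0
    phase = 0                       # 0: first a-run, 1: b-run, 2: last a-run
    for c in word:
        if c == 'a' and phase == 0:
            x1 += 1                 # +1 on counter x1
        elif c == 'b' and phase <= 1:
            phase = 1
            x1 -= 1                 # -1 on x1, +1 on x2
            x2 += 1
        elif c == 'a' and phase >= 1:
            phase = 2
            x2 -= 1                 # -1 on x2
        else:
            return False            # no transition in the finite control
    return x1 == 0 and x2 == 0
```

For instance, $aabbaa$ is accepted, while $aabba$ ends with $x_2=1$ and is rejected.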
In the case of $\mathbb Z$-automata, we assume that the generator is $1$ and
the binary operation is addition, and we may insist without loss of
generality that each transition changes the counter by either $0$, $1$ or
$-1$. We can do this by adding states and transitions to the automaton
appropriately. That is, if some edge changes the counter by $k\neq
0,\pm 1$ then divide the edge into $|k|$ edges using more states. The
symbols $+,-$ indicate a change of $1,-1$ respectively on a
transition.
\begin{defn}[Pushdown automaton; Context-free]
A {\em pushdown automaton} is a 6-tuple $(Q,\Sigma,\Gamma, \tau, q_0,
A)$ where $Q,\Sigma, \Gamma$ and $A$ are all finite sets, and \begin{enumerate}
\item $Q$ is the set of states,
\item $\Sigma$ is the input alphabet together with the empty word $\epsilon$,
\item $\Gamma$ is the stack alphabet together with $\epsilon$ (the
empty symbol),
\item $\tau$ is the transition function,
\item $q_0$ is the start state,
\item $A\subseteq Q$ is the set of accept states.\end{enumerate}
The transition function takes as input a state and an input letter,
and outputs a state and a stack instruction of the form $\gamma \rightarrow
\beta$, which means pop $\gamma$ from the top of the stack then push
$\beta$ on the top of the stack. Note that $\epsilon \rightarrow \gamma$ means
push $\gamma$ onto the stack, $\gamma \rightarrow \epsilon$ means pop $\gamma$
off the stack, and $\epsilon \rightarrow \epsilon$ means do nothing (and in
this case will be omitted).
A word is accepted by the automaton if there is a sequence of
transitions, starting from the state $q_0$ with an empty stack and
pushing and popping stack symbols, that ends in an accept state. Note that you can
always push new symbols onto the stack, but you can only pop if the
correct symbol is on top of the stack.
A language is {\em context-free} if it is the language of some pushdown automaton.
\end{defn}
As an example, the language $\{a^nb^n \; | \; n\in \mathbb N\}$ is
accepted by the pushdown automaton\ in Figure \ref{fig:PDAanbn} with alphabet $a,b$
and stack symbols $\$,1$, and this language is not regular
\cite{HU},\cite{Sipser}.
\begin{figure}[ht!]
\begin{center}
\includegraphics[height=3cm]{pics/PDAanbn.eps}
\end{center}
\caption{Pushdown automaton accepting $a^nb^n$.}
\label{fig:PDAanbn}
\end{figure}
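The run of this pushdown automaton can be simulated with an explicit stack (our own rendering of the figure, using the stack symbols $\$$ and $1$ from the text): push $\$$ on entry, push a $1$ for each $a$, pop a $1$ for each $b$, and accept only when the $\$$ is all that remains.

```python
def accepts_anbn(word):
    """Simulate a pushdown automaton for {a^n b^n} with stack symbols $ and 1."""
    stack = ['$']                   # transition (epsilon, epsilon -> $)
    seen_b = False
    for c in word:
        if c == 'a' and not seen_b:
            stack.append('1')       # (a, epsilon -> 1)
        elif c == 'b' and stack[-1] == '1':
            seen_b = True
            stack.pop()             # (b, 1 -> epsilon)
        else:
            return False            # no applicable transition
    return stack == ['$']           # final move ($ -> epsilon) to accept
```

Here the pop $(b, 1 \rightarrow \epsilon)$ can only fire when a $1$ is on top of the stack, illustrating the remark above.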
Note that our definition of counter automata is not equivalent to that of a
pushdown automaton with a stack (with one type of token) for each
counter, since in our definition we cannot test the value of the
counter until we are done reading the input. For this reason, these
automata are sometimes referred to as ``partially blind'' or
vision-impaired counter automata, since they cannot ``see'' whether the
counter is non-zero except at the end.
\begin{defn}[Baumslag-Solitar\ group]
The group with presentation \\ $\langle a,t \; | \;
tat^{-1}=a^p\rangle$ is the {\em solvable Baumslag-Solitar\ group} BS$(1,p)$, for
\\ $p\in \mathbb Z, p\geq 2$.
\end{defn}
\noindent
In this article we will consider the group BS$(1,2)$. Let $\mathcal
G=\{a,a^{-1},t,t^{-1}\}$ be the inverse-closed generating set for BS$(1,2)$. We
give a picture of part of the Cayley graph for BS$(1,2)$ in Figure
\ref{fig:BS12}. From the side the Cayley graph looks like a binary tree. See
\cite{Epstein} for a detailed description of the Cayley graph.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=13cm]{pics/bsCG.eps}
\end{center}
\caption{Part of the Cayley graph\ for BS$(1,2)$. }
\label{fig:BS12}
\end{figure}
The paper is organised as follows. In Sections \ref{sect:prelim} and
\ref{sect:counternotcf} we examine the various definitions of formal
languages presented above, and establish their relative intersections
and inclusions, which we illustrate in Figure \ref{fig:sets}. In
particular we prove that 1-counter languages as defined are context-free. In
Section \ref{sect:nf} we define a normal form language for BS$(1,2)$\ and
prove that each normal form word is geodesic, and the language of
normal form words bijects to the set of group elements. In Section
\ref{sect:mainthm} we prove that this normal form language is
1-counter, which implies it is context-free. Then in the last section
we show that the language of all geodesics for BS$(1,2)$\ is not counter.
\section{1-counter languages}\label{sect:prelim}
\begin{lem}\label{lem:1countercf}
Every 1-counter language is context-free.
\end{lem}
\noindent
\textit{Proof.} Let $L$ be a 1-counter language accepted by a
1-counter machine $M$. We will construct a (non-deterministic) pushdown automaton\ $N$
that accepts the language $L$, with stack symbols $\$_+,\$_-$ and
$1$. Let $M_+$ be a copy of $M$ obtained by replacing transitions
$(a,+)$ by $(a,\epsilon\rightarrow 1)$ and $(a,-)$ by $(a,1\rightarrow \epsilon)$, and
let $M_-$ be a copy of $M$ obtained by replacing transitions $(a',-)$
by $(a',\epsilon \rightarrow 1)$ and $(a',+)$ by $(a',1\rightarrow \epsilon)$.
$N$ is constructed from these two automata $M_+$ and $M_-$ as follows.
The states of $N$ consist of two distinct states $q_+,q_-$ for each
state $q$ of $M$, plus a new start state $s_0$ and a new single accept
state $p$. There is a transition labelled $(\epsilon, \epsilon \rightarrow
\$_+)$ from $s_0$ to the former start state $(q_0)_+$ in $M_+$. For each
$q_+$ in $M_+$ there is a transition labelled $(\epsilon, \$_+ \rightarrow
\$_- )$ from $q_+$ to the corresponding state $q_-$ in $M_-$, and a
transition labelled $(\epsilon, \$_- \rightarrow \$_+ )$ from $q_-$ to $q_+$
in $M_+$.
Finally for every accept state $q$ in $M$ there is a transition
labelled $(\epsilon, \$_+\rightarrow \epsilon)$ from $q_+$ in $M_+$ to the
single accept state $p$, and $(\epsilon, \$_-\rightarrow \epsilon)$ from $q_-$
in $M_-$ to the single accept state $p$.
This new machine works by starting with an empty stack and pushing
$\$_+$ on the bottom. If the old machine increments the counter,
the new machine pushes a $1$ onto the stack. As long as the counter
value never dips below zero, the new machine stays in the $M_+$
states. However, if there is ever a ``pop 1'' instruction while the symbol on the
stack is $\$_+$, the machine passes over to $M_-$. The height of the stack then
represents the negative of the counter value, and the machine stays on this side
until the value of the counter comes back to zero, at which point it
can switch back.
It follows that the language of $N$ is precisely the language $L$ of
the 1-counter machine $M$. $\Box$
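As a minimal illustration of the counter model itself (not of the construction above), the following sketch simulates a deterministic 1-counter machine: the single counter may go negative, and a word is accepted exactly when the machine ends in an accept state with counter value zero. The transition-table encoding and the example machine for $\{a^nb^n\}$ are assumptions for illustration.

```python
# Minimal sketch of a (deterministic) 1-counter machine: one integer
# counter that may go negative; a word is accepted iff the machine ends
# in an accept state with counter value zero.  The encoding below is an
# illustrative assumption, not the formal definition in the text.

def run_counter_machine(transitions, start, accepts, word):
    """transitions: dict (state, letter) -> (next_state, counter_delta)."""
    state, counter = start, 0
    for letter in word:
        if (state, letter) not in transitions:
            return False          # no transition available: reject
        state, delta = transitions[(state, letter)]
        counter += delta
    return state in accepts and counter == 0

# Example machine for {a^n b^n : n >= 0}: increment on 'a', decrement
# on 'b', and never read 'a' again after the first 'b'.
anbn = {
    ("qa", "a"): ("qa", +1),
    ("qa", "b"): ("qb", -1),
    ("qb", "b"): ("qb", -1),
}

assert run_counter_machine(anbn, "qa", {"qa", "qb"}, "aaabbb")
assert not run_counter_machine(anbn, "qa", {"qa", "qb"}, "aab")
assert not run_counter_machine(anbn, "qa", {"qa", "qb"}, "abab")
```

Note that rejection can happen in two ways, mirroring the two conditions in the definition: either no transition applies, or the run ends with a nonzero counter.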
\begin{lem}\label{lem:not1counter2}
The language of strings of the form $a^mb^ma^nb^n$ is both counter and
context-free but not 1-counter.
\end{lem}
\textit{Proof.} The pushdown automaton\ and the $\mathbb Z^2$-automaton in Figure
\ref{fig:4counter}
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=3cm]{pics/PDAambmanbn.eps}\\
\\
\includegraphics[height=3cm]{pics/counter-ambmanbn.eps}
\end{tabular}
\end{center}
\caption{Pushdown automaton and 2-counter machine accepting $a^mb^ma^nb^n$}
\label{fig:4counter}
\end{figure}
both accept this language, so it is context-free\ and counter.
Suppose by way of contradiction that the language is 1-counter, and
let $M$ be a 1-counter machine for it with $p$ states. Assume without
loss of generality that each transition changes the counter by
either $0, -1$ or $1$.
Define $a_1=a^{p^2}, b_1=b^{p^2}, a_2=a^{p^2},b_2=b^{p^2}$, and
consider the word $s=a_1b_1a_2b_2$
which belongs to the language.
Consider the prefix $a_1=a^{p^2}$. Since this prefix is longer than the
number of states, the accepting path must visit some state twice while
reading it, so $a_1=x_0y_0z_0$ where $y_0$ represents a loop of length
at most $p$.
If going around $y_0$ causes a net change of zero in the value of the
counter, then going around it twice would give a new word that is
accepted by $M$, but not of the form $a^mb^ma^nb^n$. So assume the net
change is $k_0$ with $|k_0|\geq 1$.
Let $s_1=x_0z_0$ which has length at least $p^2-p$, so must go around
a loop in $M$. So $s_1=x_1y_1z_1$ with $y_1$ a loop of length at most
$p$. Again, if the net change in the counter going around $y_1$ is zero
then we can go around $y_1$ twice and have a word accepted by $M$ that
is not in the language.
If the net change is $k_1$ of the opposite sign to $k_0$ then there is
a word that goes $|k_1|$ times around the loop $y_0$ then $|k_0|$
times around $y_1$, which keeps the final value of the counter at
zero, so is accepted by $M$, but since we are pumping the $a^{p^2}$
prefix of $s$ we have a word that is not in the language.
Thus $y_1$ changes the counter by $k_1$ with $|k_1|\geq 1$ and having the
same sign of $k_0$. Let $s_2=x_1z_1$ with length at least $p^2-2p$.
Iteratively we can write $s_i=x_iy_iz_i$ with $y_i$ a loop which
changes the value of the counter by an amount $k_i$ of the same sign
as $k_0$, until there are no loops left in $x_iz_i$, which does not
happen until at least $p$ iterations (since $s_i$ has length at least
$p^2-ip$).
Since $x_iz_i$ has no loops, it has length at most $p-1$, so it changes
the value of the counter by some amount $l_1$ with $|l_1|<p$. The loops
$y_0,\ldots,y_i$, on the other hand, together change the value of the
counter by at least $p$ in absolute value, since there are at least $p$
of them and each contributes at least $1$ of the same sign.
Now repeat this analysis for the subwords $b_1,a_2$ and $b_2$.
If all the loops in each subword change the counter by the same sign,
then we have a contradiction: the loops together change the counter by
at least $4p$ in absolute value, whereas the four remaining $x_iz_i$
segments change it by less than $4p$ in total, so they cannot cancel
each other.
Thus at least two subwords have loops of opposite signs. If the loops
in $a_1$ have the same sign as the loops in $a_2$ and $b_2$, then the
loops in $b_1$ must have the opposite sign. So suppose that some loop
in $b_1$ changes the counter by $k$, and some loop in $a_2$ changes
the counter by $l$ of the opposite sign to $k$. Then pumping the first
loop by $|l|$ and the second by $|k|$ gives a word that is accepted by $M$
and not in the language.
Otherwise if the loops in $a_1$ have the opposite sign to the loops in
either $a_2$ or $b_2$, then take a loop in $a_1$ which changes the
counter by $k$ and a loop in $a_2$ or $b_2$ that changes the counter
by $l$ of the opposite sign to $k$. Then pumping the first loop by
$|l|$ and the second by $|k|$ gives a word that is accepted by $M$ and
not in the language. $\Box$
\begin{cor}
1-counter languages are not closed under concatenation or intersection.
\end{cor}
\noindent
\textit{Proof.} The language $C=\{a^nb^n \; | \; n \in \mathbb N \}$
is 1-counter but $CC$ is not 1-counter by the previous lemma (Lemma
\ref{lem:not1counter2}).
The languages $D=\{a^nb^nc^m \; | \; m,n \in \mathbb N \}$ and
$E=\{a^mb^nc^n \; | \; m,n \in \mathbb N \}$ are 1-counter, but $D\cap
E=\{a^nb^nc^n \; | \; n \in \mathbb N \}$ is not context-free
\cite{HU},\cite{Sipser} so by Lemma \ref{lem:1countercf} is not
1-counter. $\Box$
However, we have
\begin{lem}[Closure properties of $k$-counter languages]
\label{lem:closure1counter}
If $C,C'$ are $k$-counter for $k\geq 1$ and $L$ is regular, then
$C\cup C'$, $C\cap L$, $CL$ and $LC$ are all $k$-counter.
\end{lem}
\noindent
\textit{Proof.} Let $M,M'$ be $k$-counter automata for $C,C'$, with
start states $q_0,q_0'$, states $S,S'$, and accept states $A,A'$,
respectively. Then construct a $k$-counter automaton accepting $C\cup
C'$ with a new start state $p_0$ joined to $q_0,q_0'$ by two epsilon
transitions.
Let $N$ be a finite state automaton\ for $L$ with states $T$, start state $p_0$ and
accept states $B$. Construct a $k$-counter automaton accepting $C\cap
L$ having states $S\times T$, start state $(q_0,p_0)$, such that
$(q,p)$ is an accept state if $q\in A,p\in B$ (they are both accept
states), and if there are transitions from $q$ to $r$ in $M$ labelled
by $(a,g)$ and $p$ to $s$ in $N$ labelled by $a$ where $g\in \mathbb Z^k$,
then there is a transition from $(q,p)$ to $(r,s)$ labelled $(a,g)$.
Construct a $k$-counter automaton accepting $CL$ with start state
$q_0$ and accept states $B$ by adding an epsilon transition from each
accept state of $M$ to $p_0$.
Construct a $k$-counter automaton accepting $LC$ with start state
$p_0$ and accept states $A$ by adding an epsilon transition from each
accept state of $N$ to $q_0$. $\Box$
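The product construction for $C\cap L$ can be sketched concretely for $k=1$. Representing both machines by deterministic transition tables, and requiring counter value zero for acceptance, are simplifying assumptions for illustration.

```python
# Sketch of the product construction for C ∩ L with k = 1.  The counter
# automaton and the finite state automaton are given as deterministic
# transition tables (a simplifying assumption); states of the product
# are pairs, and counter deltas are inherited from the counter machine.

def intersect(ct, c_start, c_acc, dt, d_start, d_acc):
    prod = {}
    for (q, letter), (r, delta) in ct.items():
        for (p, letter2), s in dt.items():
            if letter == letter2:
                prod[((q, p), letter)] = ((r, s), delta)
    return prod, (c_start, d_start), {(q, p) for q in c_acc for p in d_acc}

def run(transitions, start, accepts, word):
    state, counter = start, 0
    for letter in word:
        if (state, letter) not in transitions:
            return False
        state, delta = transitions[(state, letter)]
        counter += delta
    return state in accepts and counter == 0

# C = {a^n b^n}; L = words of even length.  Every word of C has even
# length, so the product accepts exactly the words of C.
C = {("qa", "a"): ("qa", +1), ("qa", "b"): ("qb", -1), ("qb", "b"): ("qb", -1)}
L = {("e", "a"): "o", ("e", "b"): "o", ("o", "a"): "e", ("o", "b"): "e"}
prod, start, acc = intersect(C, "qa", {"qa", "qb"}, L, "e", {"e"})
assert run(prod, start, acc, "aabb") and not run(prod, start, acc, "aab")
```

The product runs both machines in lockstep on each letter, exactly as in the proof: the counter component tracks $M$ and the finite component tracks $N$.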
Iterating the union operation a finite number of times gives
\begin{cor}\label{cor:union}
The union of a finite number of $k$-counter languages is $k$-counter.
\end{cor}
\section{Context-free and not counter}\label{sect:counternotcf}
The language $\{a^nb^na^n\; | \; n\in \mathbb N\}$ accepted by the
$\mathbb Z^2$-automaton in Figure \ref{fig:counterNotCF} is not context-free\ by standard
results \cite{HU},\cite{Sipser}. In this section we show that
conversely, there is a language that is context-free but not counter.
Consider a string of letters $a,b,c$. We say a string contains a {\em
square} if it has a subword of the form $ww$. An interesting result
from combinatorics is that one can write out a square-free word in
$a,b,c$ of arbitrary length. This is due to Thue and Morse and
described in \cite{Lothaire} (Chapter 2). In particular we have
\begin{prop}[Thue-Morse]\label{prop:squarefree} Define a homomorphism
$f$ on $\{a,b,c\}$ by $f(a)=abc,f(b)=ac$ and $f(c)=b$. Then for
any $i\in \mathbb N$, $f^i(a)$ is square-free. \end{prop}
For example, to compute $f^3(a)$ we have \\
$a\rightarrow abc\rightarrow abcacb\rightarrow abcacbabcbac$.
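The iteration and the square-free property are easy to check by machine; the brute-force square test below is only an illustration, not a proof of the proposition.

```python
# Iterate the morphism f(a) = abc, f(b) = ac, f(c) = b and check by
# brute force that the iterates f^i(a) contain no square ww.

def f(word):
    return "".join({"a": "abc", "b": "ac", "c": "b"}[x] for x in word)

def has_square(word):
    n = len(word)
    return any(word[i:i + m] == word[i + m:i + 2 * m]
               for m in range(1, n // 2 + 1)
               for i in range(n - 2 * m + 1))

w = "a"
for _ in range(3):
    w = f(w)
print(w)  # abcacbabcbac, as in the example above
assert not has_square(w) and not has_square(f(f(w)))
```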
In order to show that a language is not counter we make use of the
following lemma.
\begin{lem}[Swapping Lemma] \label{lem:swap}
If $L$ is counter then there is a constant $s>0$, the ``swapping
length'', such that if $w\in L$ with length at least $2s+1$ then $w$
can be divided into four pieces $w=uxyz$ such that $|uxy|\leq 2s+1$,
$|x|,|y|>0$ and $uyxz\in L$.
\end{lem}
\noindent
\textit{Proof.} Let $s$ be the number of states in the counter
automaton, and let $p$ be a path in the $\mathbb Z^k$-automaton such that
$w(p)=w$. If $p$ visits each state at most twice then it cannot have
length more than $2s$, so $p$ visits some state at least three times.
Let $u$ be the first part of $w(p)$ until it first hits this state,
then $x$ a non-trivial loop returning to this state, $y$ a further
non-trivial loop returning to it, and $z$ the rest of $w$. So
$w(p)=uxyz$ ends at an
accept state, and the second component of $p$ equals
$g(u)g(x)g(y)g(z)=_{\mathbb Z^k} 1$. Switching the orders of $x$ and $y$,
the path $uyxz$ still takes you to the same accept state, and
$g(uyxz)=_{\mathbb Z^k} 1$ since all elements of $\mathbb Z^k$ commute, so $uyxz\in
L$. $\Box$
Note its similarity to the pumping lemmas for regular and context-free
languages \cite{HU},\cite{Sipser}. This lemma is only of use if the
word $w$ contains no squares; otherwise one can simply swap a square
and get the same word back (that is, $x=y$).
\begin{thm}\label{thm:cfnotcounter}
There is a language that is context-free but not counter.
\end{thm}
\noindent
\textit{Proof.} Consider the language of all strings in $a,b,c$ of the
form $ww^R$, where $w^R$ is the word obtained by reversing $w$. It is well
known that this is a context-free\ language \cite{HU},\cite{Sipser}, since it
is accepted by a pushdown automaton\ which uses the stack to store the first half of
the word, then checks the last half of the word matches.
Suppose by way of contradiction that this language is counter, with
swapping length $p$ as in Lemma \ref{lem:swap}. Let $w$ be a
square-free word from Proposition \ref{prop:squarefree} of length at
least $2p+1$. Then $ww^R$ can be split into four subwords $u,x,y,z$
such that $uxy$ falls within the prefix $w$. Since $w$ contains no
squares and $x,y$ are adjacent subwords, it must be that $x\neq y$. But
then $uyxz$ fails to be in the language, since its second half is no
longer the reverse of its first half. $\Box$
In Figure \ref{fig:sets} we have a diagram of sets of regular,
1-counter, context-free\ and counter languages, and by the above results we
have shown the given inclusions.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=13.5cm]{pics/sets4.eps}
\end{center}
\caption{Intersections of the formal languages}
\label{fig:sets}
\end{figure}
The fact that there are counter languages that are not context-free\ and vice
versa can be observed by considering word problems for various groups.
The word problem for a group $G$ with generating set $\mathcal G$ is the
set $WP(G)=\{w\in \mathcal G^*: \overline w=1\}$ of all words in the
generating set that evaluate to the identity element. By work of
Muller and Schupp \cite{MS}, the word problem for the group $\mathbb Z^2$ is
not a context-free\ language, whereas the word problem of the free group on two
(or more) generators is context-free. Elston and Ostheimer \cite{EO} proved
that a group has a deterministic counter word problem (with a so-called
inverse property) if and only if it is virtually abelian, so the word
problem for $\mathbb Z^2$ is counter. To see why $WP(F_2)$ is not counter,
consider a Thue-Morse word made up of an arbitrary number of subwords
$(aaa),(aba),(ab^{-1}a)$, followed by its ``reverse'' in the subwords
$(a^{-1}a^{-1}a^{-1}),(a^{-1}b^{-1}a^{-1}),(a^{-1}ba^{-1})$. This word
is in the word problem, but applying the Swapping lemma (Lemma
\ref{lem:swap}) gives a word that is non-trivial.
The first examples of languages that are counter but not context-free
were given by Mitrana and Stiebe in \cite{Mitrana}. Mitrana and Stiebe
give the following lemma, which they call the ``interchange lemma'',
which they use to show that the language of palindromes, and the
language $\{a^ib^i\;| i\geq 0\}^*$, are not counter. We include it here
for completeness, and to show how it differs from the Swapping Lemma
above.
\begin{lem}[Interchange Lemma \cite{Mitrana}]
If $L$ is the language of a $G$-automaton where $G$ is an abelian
group, then there is a constant $p$ such that for any word $x\in L$ of
length at least $p$, and for any given subdivision of $x$ into
subwords $v_1w_1v_2w_2 \ldots w_pv_{p+1}$ with $|w_i|\geq 1$, there
are some $r,s$ such that the word obtained from $x$ by interchanging
$w_r$ and $w_s$ is in $L$.
\end{lem}
\section{The normal form\ language}\label{sect:nf}
Recall that BS$(1,2)$\ $=\langle a,t \; | \; tat^{-1}=a^2\rangle$ with the
(standard) inverse closed generating set\ $\mathcal G=\{a,a^{-1},t,t^{-1}\}$. We
wish to describe geodesic words with respect to this generating set.
\begin{defn}[$E,N,P,X$]
A word is of the form $E$ if it is $a^i$. A word is of the form $N$ if
it has no $t$ letters and at least one $t^{-1}$ letter. A word is of
the form $P$ if it has no $t^{-1}$ letters and at least one $t$
letter.
A word is of the form $X$ if it is the concatenation of a $P$ word of
$t$-exponent $k$, followed by an $N$ word of $t$-exponent $(-k)$. That
is, an $X$ word is a word of type $PN$ with zero $t$-exponent.
\end{defn}
Benson Farb called words of type $X$ ``mesas'', since drawing an $X$
word in the Cayley graph\ resembles this land formation. See Figure
\ref{fig:mesa}.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=13.5cm]{pics/mesa.eps}
\end{center}
\caption{An $X$ word}
\label{fig:mesa}
\end{figure}
While the following fact is well known, we include an elementary proof
of it here for completeness.
\begin{lem}[Commutation]\label{lem:commutation}
If $u$ has zero $t$-exponent\ then $au=ua$ and $a^{-1}u=ua^{-1}$.
\end{lem}
\noindent
\textit{Proof.} If $u$ is type $X$ then $u=_{BS}a^i$ so
$au=a^{i+1}=ua$.
If $u$ is type $NP$ then let $u=vw$ where $v$ is type $N$ with $t$-exponent\
$-k$ and $w$ is type $P$ (so has $t$-exponent\ $k$). Each time we push $a^i$
past a $t^{-1}$ it becomes $a^{2i}$ since $at^{-1}=t^{-1}a^2$. Then
$au=avw=va^{2^k}w$. Each time we push $a^{2^i}$ past a $t$ it becomes
$a^{2^{i-1}}$ since $a^2t=ta$. So $au=avw=va^{2^k}w=vwa=ua$. Finally
if $u$ is any other form, first replace each occurrence of
$ta^it^{-1}$ in $u$ by $a^{2i}$. Then $u$ becomes a word of type $NP$
with zero $t$-exponent. We can pass $a$ through this word as in the previous
case, and then put $u$ back in its original form and we are done. $
\Box $
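Identities like these can also be checked mechanically in the faithful affine representation of BS$(1,2)$ given by $a\colon x\mapsto x+1$ and $t\colon x\mapsto 2x$ (so that $tat^{-1}\colon x\mapsto x+2$, as required). This representation is a convenient verification device and is not used in the proofs; in the sketch below, the letters \texttt{A} and \texttt{T} are assumed encodings of $a^{-1}$ and $t^{-1}$.

```python
from fractions import Fraction

# BS(1,2) acts faithfully on the line by a: x -> x + 1, t: x -> 2x
# (then t a t^{-1}: x -> x + 2 = a^2, matching the defining relation).
# An element is the affine map x -> m*x + c; "A" and "T" encode the
# inverses a^{-1} and t^{-1}.

GEN = {"a": (Fraction(1), Fraction(1)), "A": (Fraction(1), Fraction(-1)),
       "t": (Fraction(2), Fraction(0)), "T": (Fraction(1, 2), Fraction(0))}

def evaluate(word):
    m, c = Fraction(1), Fraction(0)
    for g in word:
        gm, gc = GEN[g]
        m, c = m * gm, m * gc + c   # compose x -> m*x + c with the generator
    return m, c

# The pushing identities used in the proof:
assert evaluate("aT") == evaluate("Taa")   # a t^{-1} = t^{-1} a^2
assert evaluate("aat") == evaluate("ta")   # a^2 t = t a

# Commutation: words of zero t-exponent commute with a.
for u in ["Tat", "taT", "TaatTat"]:
    assert evaluate("a" + u) == evaluate(u + "a")
```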
\begin{lem}[Miller \cite{Miller}]
\label{lem:Miller} Every geodesic word in $\mathcal G
^*$ is a subword of a word of type $NPN$ or $PNP$. \end{lem} See
Lemma $1$ of \cite{Groves} for a proof. We can use this lemma to
describe a subset of geodesic words that represent every group
element.
Define a type $NP_{\leq}$ word to be a word of type $NP$ with
non-positive $t$-exponent\ sum, and type $NP_>$ to be a word of type $NP$
with positive $t$-exponent\ sum.
\begin{lem}[Ten types]\label{lem:TenTypes} Every
element of BS$(1,2)$\ has a geodesic representative in $\mathcal G^*$ that is one
of ten types:\\ $E,X,N,XN,NP_{\leq}, XNP$ having $t$-exponent\ $\leq
0$, or \\ $P,PX,NP_>,NPX$ having $t$-exponent\ $>0$, such that no more than
three $a$ or $a^{-1}$ letters can occur in succession in the
geodesic.\end{lem}
Hermiller and the author used a similar characterisation in their work
on minimal almost convexity \cite{EHerm}.
\noindent
\textit{Proof.} Every group element can be represented by some
geodesic word in $\mathcal G^*$. If a geodesic word has no $t^{\pm 1}$
letters then it is type $E$. Otherwise by Lemma \ref{lem:Miller} it is
a word of type $N,P,NP,PN,NPN$ or $PNP$.
If the geodesic is type $NP$ then it either has non-positive
$t$-exponent sum, so is type $NP_{\leq}$, or positive $t$-exponent
sum, so is type $NP_>$.
If the geodesic is type $PN$ then it either has zero $t$-exponent sum,
so is type $X$, negative $t$-exponent sum, so is type $XN$, or
positive $t$-exponent sum, so is type $PX$.
Suppose the geodesic is a word $w$ of type $NPN$. If $w$ has positive
$t$-exponent sum it is type $NPX$. If $w$ has zero $t$-exponent sum,
then write it as $ux$ where $u$ is type $NP$ with zero $t$-exponent
sum and $x$ is type $X$. By Lemma \ref{lem:commutation} $w=_{BS}xu$
which has the same length and is type $XNP$. If $w$ has negative
$t$-exponent sum, then
$w=a^{\epsilon_1}t^{-1}uta^{\epsilon_2}txt^{-1}a^{\epsilon_3}t^{-1}v$
where $u$ is type $E$ or $NP$ with zero $t$-exponent sum, $x$ is type
$E$ or $X$, $v$ is type $E$ or $N$, and $\epsilon_i\in \mathbb Z$.
Then by Lemma \ref{lem:commutation}
\begin{eqnarray*}
w &=_{BS}& a^{\epsilon_1+\epsilon_2+\epsilon_3}(t^{-1}ut)(txt^{-1})t^{-1}v \\
&=_{BS}& a^{\epsilon_1+\epsilon_2+\epsilon_3}(txt^{-1})(t^{-1}ut)t^{-1}v
\end{eqnarray*}
which is not geodesic since we can cancel $tt^{-1}$ at the end.
Finally, suppose the geodesic is a word $w$ of type $PNP$. If $w$ has
negative or zero $t$-exponent sum it is type $XNP$. If $w$ has
positive $t$-exponent sum, then
$w=a^{\epsilon_1}txt^{-1}a^{\epsilon_2}t^{-1}uta^{\epsilon_3}tv$ where
$x$ is type $E$ or $X$, $u$ is type $E$ or $NP$ with zero $t$-exponent
sum, $v$ is type $E$ or $P$, and $\epsilon_i\in \mathbb Z$. Then by
Lemma \ref{lem:commutation}
\begin{eqnarray*}
w &=_{BS}& a^{\epsilon_1+\epsilon_2+\epsilon_3}(txt^{-1})(t^{-1}ut)tv \\
&=_{BS}& a^{\epsilon_1+\epsilon_2+\epsilon_3}(t^{-1}ut)(txt^{-1})tv
\end{eqnarray*}
which is not geodesic since we can cancel $t^{-1}t$ at the end.
The additional condition that no more than three $a$'s are allowed in
succession is obtained by observing that $a^6=_{BS}ta^3t^{-1}$, so any
power of $a$ greater than five is not geodesic, and since
$a^4=ta^2t^{-1}$ and $a^5=ta^2t^{-1}a=ata^2t^{-1}$ we choose to
replace $a$-exponents of $4$ or $5$ by subwords of the same length. An
identical argument eliminates powers of $a^{-1}$ greater than three.
$\Box$
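The power-of-$a$ identities invoked above can be confirmed numerically in the affine representation $a\colon x\mapsto x+1$, $t\colon x\mapsto 2x$ of BS$(1,2)$; this is an illustrative check, with \texttt{T} an assumed encoding of $t^{-1}$.

```python
from fractions import Fraction

# Check a^6 = t a^3 t^{-1}, a^4 = t a^2 t^{-1} and
# a^5 = t a^2 t^{-1} a = a t a^2 t^{-1} in the affine representation
# a: x -> x + 1, t: x -> 2x; "T" encodes t^{-1}.

def evaluate(word):
    gen = {"a": (Fraction(1), Fraction(1)),
           "t": (Fraction(2), Fraction(0)),
           "T": (Fraction(1, 2), Fraction(0))}
    m, c = Fraction(1), Fraction(0)
    for g in word:
        gm, gc = gen[g]
        m, c = m * gm, m * gc + c   # compose x -> m*x + c with the generator
    return m, c

assert evaluate("aaaaaa") == evaluate("taaaT")   # a^6, length 6 vs 5
assert evaluate("aaaa") == evaluate("taaT")      # a^4, both length 4
assert evaluate("aaaaa") == evaluate("taaTa") == evaluate("ataaT")  # a^5
```

The first assertion also exhibits the length drop ($6$ letters versus $5$) that makes $a^6$ non-geodesic.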
\begin{defn}[Run]
An {\em $N$-run} is a word of the form
$$a^{\epsilon_k}t^{-1}a^{\epsilon_{k-1}}t^{-1}\ldots
t^{-1}a^{\epsilon_1}t^{-1}a^{\epsilon_0}.$$ A {\em $P$-run} is a word
of the form $$a^{\epsilon_0}ta^{\epsilon_1}t\ldots
ta^{\epsilon_{k-1}}ta^{\epsilon_k}.$$ We can write a run in shorthand
by just writing the $a$-exponents.
For example,
$a^2t^{-1}at^{-1}a^0t^{-1}at^{-1}a^{-1}$ can be written as $2101(-1)$.
We call the $a$-exponents {\em entries} of the run. A run is {\em
non-trivial} if it has at least one non-zero entry. Note that a run
that has at least one $t$ or $t^{-1}$ letter will have at least two
entries, since by definition a run starts and ends with a power of $a$
(possibly $a^0$).
We say a geodesic has at most one non-trivial run if it can be
expressed as the concatenation of geodesic $N$- or $P$-runs such that
at most one factor is non-trivial. For example, the word
$t^2a^2t^{-1}at^{-2}$ can be written as $(t^2)(a^2t^{-1}at^{-2})$, so
has at most one non-trivial run. \end{defn}
Drawing the $N$-run represented by $2101(-1)$ in the Cayley graph\ we start to
see what behaviour is allowed in a geodesic. For instance, the
sub-runs $1(-1)$ and $(-1)1$ are not allowed since
$$at^{-1}a^{-1}\rightarrow t^{-1}a \;\;\;\;\;\;\; a^{-1}t^{-1}a\rightarrow
t^{-1}a^{-1}.$$
Also, if the $N$-run $2101(-1)$ were preceded by a $t^{-1}$ then we
would have $t^{-1}a^2$ which can be written as $at^{-1}$. In fact,
the only time you could ever see an entry that is not $0,1$ or $-1$ is
at the {\em start} of an $N$-run, or the {\em end} of a $P$-run.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=9cm]{pics/Nrun.eps}
\end{center}
\caption{The $N$-run $2101(-1)$}
\label{fig:ExampleRun}
\end{figure}
\begin{lem}[No $|i|\geq 6$] \label{lem:No6}
If a run represents a geodesic word and has an entry $i$ that is not
one of $1,0$ and $(-1)$, then $i$ must be one of
$2,3,4,5,(-2),(-3),(-4),(-5)$ and occurs at the {\em start} of an
$N$-run or the {\em end} of a $P$-run.
\end{lem}
\noindent
\textit{Proof.} If $i\geq 6$ occurs at any point in a run then
$a^6\rightarrow ta^3t^{-1}$ so the run is not geodesic.
For $N$-runs, if $i\geq 2$ occurs after the start of the run then
$t^{-1}a^2\rightarrow at^{-1}$ so the run is not geodesic. If $i\leq -2$
occurs after the start of the run then $t^{-1}a^{-2}\rightarrow a^{-1}t^{-1}$
so the run is not geodesic.
For $P$-runs, if $i\geq 2$ occurs before the end of the run then
$a^2t\rightarrow ta$ so the run is not geodesic. If $i\leq -2$ occurs before
the end of the run then $a^{-2}t\rightarrow ta^{-1}$ so the run is not
geodesic.
$\Box$
\begin{lem}[No consecutive $1(-1),(-1)1$] \label{lem:No-11}
A geodesic run cannot contain $1(-1)$ or $(-1)1$.
\end{lem}
\noindent
\textit{Proof.}
For an $N$-run:
\[
\begin{array}{rclcrlc}
1(-1)& \rightarrow& 01 & \hspace{1cm} & at^{-1}a^{-1} & \rightarrow &t^{-1}a\\ (-1)1
&\rightarrow &0(-1) & \hspace{1cm} & a^{-1}t^{-1}a & \rightarrow& t^{-1}a^{-1}.
\end{array}
\]
For a $P$-run:
\[
\begin{array}{rclcrcl}
1(-1) &\rightarrow &(-1)0 & \hspace{1cm} & ata^{-1}&\rightarrow &a^{-1}t\\ (-1)1 &\rightarrow
&10 & \hspace{1cm} & a^{-1}ta&\rightarrow& at.
\end{array}
\]
See Figure \ref{fig:brick}.
$\Box$
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{ccc}
\begin{tabular}{c}
\includegraphics[width=4cm]{pics/brickA.eps}
\end{tabular}& &
\begin{tabular}{c}
\includegraphics[width=4cm]{pics/brick.eps}
\end{tabular}
\end{tabular}
\end{center}
\caption{No $1(-1), (-1)1$ in a run }
\label{fig:brick}
\end{figure}
\begin{lem}[No consecutive $11,(-1)(-1)$]\label{lem:No11}
There exist rewrite rules which do not increase length which can be
applied to a geodesic run to eliminate all occurrences of consecutive
$11$ or $(-1)(-1)$ after the first two entries of an $N$-run and
before the last two entries of a $P$-run.
\end{lem}
\noindent
\textit{Proof.}
Let $i\in \mathbb Z$.\\ For an
$N$-run:
\[
\begin{array}{rclcrlc}
i11& \rightarrow& (i+1)0(-1) & \hspace{1cm} & a^it^{-1}at^{-1}a & \rightarrow &
a^{i+1}t^{-2}a^{-1}\\
i(-1)(-1) &\rightarrow &(i-1)01 & \hspace{1cm} & a^it^{-1}a^{-1}t^{-1}a^{-1}
& \rightarrow& a^{i-1}t^{-2}a.
\end{array}
\]
These moves are illustrated in Figure \ref{fig:brick1-1}.
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{ccc}
\begin{tabular}{c}
\includegraphics[width=6cm]{pics/brick1-1A.eps}
\end{tabular}& &
\begin{tabular}{c}
\includegraphics[width=6cm]{pics/brick1-1.eps}
\end{tabular}
\end{tabular}
\end{center}
\caption{No $i11, i(-1)(-1)$ in an $N$-run }
\label{fig:brick1-1}
\end{figure}
We can always perform these rewrites to get a word of the same length
or shorter. That is, suppose we have a geodesic $N$-run, which by Lemma
\ref{lem:No-11} contains no $1(-1)$ or $(-1)1$. Starting at the right end of the
$N$-run, if there is an $i11$, we know that $i\geq 0$. Replacing this
by $(i+1)0(-1)$ gives a word that is not geodesic if $i>0$, otherwise
gives $10(-1)$. Now if the preceding entry is $(-1)$ the word is not
geodesic, so is $0,1$ or we are at the start of the run. A similar
argument holds when we see $i(-1)(-1)$.
So iterate this procedure until the start of the run is reached. This
eliminates all occurrences of adjacent nonzero entries after the first
two entries. That is, if the $N$-run starts with $110$ for example,
the rules don't apply.
For a $P$-run:
\[
\begin{array}{rclcrcl}
11i &\rightarrow &(-1)0(i+1) & \hspace{1cm} & atata^i&\rightarrow & a^{-1}t^2a^{i+1}\\
(-1)(-1)i &\rightarrow &10(i-1) & \hspace{1cm} & a^{-1}ta^{-1}ta^i&\rightarrow & at^2a^{i-1}.
\end{array}
\]
Similarly we can always perform these rewrites to get a word of the
same length or shorter, this time starting at the left end of the word
and moving right, so we can eliminate all adjacent nonzero entries
except in the last two positions. $\Box$
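These rewrite rules, together with those of Lemma \ref{lem:No-11}, can be verified mechanically in the affine representation $a\colon x\mapsto x+1$, $t\colon x\mapsto 2x$ of BS$(1,2)$: each rule preserves the group element represented by the run and never increases word length. This is an illustrative device, not part of the proof; the shorthand-to-word encoding below follows the Run definition, with \texttt{A} and \texttt{T} assumed encodings of $a^{-1}$ and $t^{-1}$.

```python
from fractions import Fraction

# Verify that the N-run rewrite rules preserve the group element (in the
# affine representation a: x -> x+1, t: x -> 2x) without increasing
# length.  "A" encodes a^{-1} and "T" encodes t^{-1}.

def n_run_to_word(entries):
    """[e_k, ..., e_0]  ->  a^{e_k} t^{-1} a^{e_{k-1}} ... t^{-1} a^{e_0}."""
    pieces = [("a" if e >= 0 else "A") * abs(e) for e in entries]
    return "T".join(pieces)

def evaluate(word):
    gen = {"a": (Fraction(1), Fraction(1)), "A": (Fraction(1), Fraction(-1)),
           "t": (Fraction(2), Fraction(0)), "T": (Fraction(1, 2), Fraction(0))}
    m, c = Fraction(1), Fraction(0)
    for g in word:
        gm, gc = gen[g]
        m, c = m * gm, m * gc + c
    return m, c

def check(old, new):
    wo, wn = n_run_to_word(old), n_run_to_word(new)
    assert evaluate(wo) == evaluate(wn) and len(wn) <= len(wo)

check([1, -1], [0, 1])                  # 1(-1) -> 01        (Lemma No-11)
check([-1, 1], [0, -1])                 # (-1)1 -> 0(-1)
for i in range(-2, 3):
    check([i, 1, 1], [i + 1, 0, -1])    # i11 -> (i+1)0(-1)  (this lemma)
    check([i, -1, -1], [i - 1, 0, 1])   # i(-1)(-1) -> (i-1)01
```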
Next we will show that every geodesic of one of the ten types can be
``pushed'' into a geodesic word for the same group element that has at
most one non-trivial run. As an example, if
$w=a^{\epsilon_0}ta^{\epsilon_1}t\ldots
a^{\epsilon_k}ta^nt^{-1}a^{\eta_k}t^{-1}\ldots
t^{-1}a^{\eta_1}t^{-1}a^{\eta_0}$ is a geodesic $X$ word, then we can
push the inner subword $a^{\epsilon_k}ta^nt^{-1}a^{\eta_k}$ to
$ta^nt^{-1}a^{\epsilon_k+\eta_k}$, and iteratively push at each level
to get\\ $t^ka^nt^{-1}a^{\epsilon_k+\eta_k}t^{-1}\ldots
t^{-1}a^{\epsilon_1+\eta_1}t^{-1}a^{\epsilon_0+\eta_0}$.
We show this in Figure \ref{fig:pushRun}.
\begin{figure}[ht!]
\begin{tabular}{ccc}
\includegraphics[height=3.5cm]{pics/pushRun2.eps}
& &
\includegraphics[height=3.5cm]{pics/pushRun.eps}
\end{tabular}
\caption{Pushing an $X$ word to have one non-trivial $N$-run.}
\label{fig:pushRun}
\end{figure}
\begin{lem}[At most one run]\label{lem:OneRun}
Every group element is represented by some geodesic of one of the ten
types having at most one non-trivial run.
\end{lem}
\noindent
\textit{Proof.} By Lemma \ref{lem:TenTypes} each group element is
represented by some geodesic of one of the ten types. If the word is
type $E,N,P$ then there is at most one non-trivial run. If it is $X,
XN$ or $PX$ then by Lemma \ref{lem:commutation} we can push $a$
letters to one side of the $X$ word to get at most one non-trivial
run, as we did in the example above. For $NP_{\leq}$ words we have
$w=w_Nw_{NP}$ where $w_{NP}$ has zero $t$-exponent, so by Lemma
\ref{lem:commutation} we can push $a$ letters to the left of the $NP$
word to get at most one non-trivial run. For $XNP$ words we have
$w=w_Xw_Nw_{NP}$ where $w_{NP}$ has zero $t$-exponent, so by Lemma
\ref{lem:commutation} we can push $a$ letters to one side of the $X$
and $NP$ words to get at most one non-trivial run. For $NP_>$ words
we have $w=w_{NP}w_P$ where $w_{NP}$ has zero $t$-exponent, so by
Lemma \ref{lem:commutation} we can push $a$ letters to the right of
the $NP$ word to get at most one non-trivial run. For $NPX$ words we
have $w=w_Nw_{NP}w_X$ where $w_{NP}$ has zero $t$-exponent, so by
Lemma \ref{lem:commutation} we can push $a$ letters to one side of the
$X$ and $NP$ words to get at most one non-trivial run. $\Box$
Given that every word can be pushed into a word having at most one
non-trivial run, and we can choose which patterns are not allowed in a
run, we are ready to define the normal form language.
The only issue that remains is the prefix of each run. For example, a
geodesic of type $X$ can be pushed into a word with exactly one
$N$-run. The start of this run can be chosen to be either
$a^2t^{-1},a^3t^{-1}, a^{-2}t^{-1}$ or $a^{-3}t^{-1}$, for if the run
starts with $1$ then $tat^{-1}\rightarrow a^2$ so is not geodesic. If it
starts with $4$ or $5$ then by Lemma \ref{lem:TenTypes} $a^4\rightarrow
ta^2t^{-1}$ and $a^5\rightarrow ta^2t^{-1}a$ so we elect to write it starting
with a $2$ instead, and if the run starts with $i\geq 6$ then it is
not geodesic.
The next few entries could be any one of the following:\\
$200,201,210,300,301,30(-1),310$ or the negatives of these.
Note that the prefix $20(-1)$ is not allowed since $t^2a^2t^{-2}a^{-1}$
is not geodesic, whereas $30(-1)$ is allowed since $t^2a^3t^{-2}a^{-1}$
is geodesic. See Figure \ref{fig:prefix}.
\begin{figure}[ht!]
\begin{tabular}{ccc}
\includegraphics[height=4cm]{pics/prefix1.eps}
& &
\includegraphics[height=4cm]{pics/prefix2.eps}
\end{tabular}
\caption{Prefixes for $N$-runs of an $X$ word.}
\label{fig:prefix}
\end{figure}
Each case is treated separately in the following lemma. Then after
these prefixes (suffixes for $P$-runs) the run has only $0, 1,(-1)$
with no consecutive nonzero entries.
\begin{lem}[Prefixes/suffixes of runs]\label{lem:PrefSuff}
In this lemma we assume that each word has been pushed into a word
with at most one non-trivial run, and that each run has at least three
$t^{\pm 1}$ letters.
\begin{itemize}
\item The $N$-run in a geodesic word of type $X,XN,XNP$ with
non-positive $t$-exponent sum must start with one of\\
$200,201,210,300,301,30(-1),310$ or the negatives of these.
\item The $N$-run in a geodesic word of type $N,NP_{\leq}$ with
non-positive $t$-exponent sum must start with one of\\
$000,001,010,100,101,10(-1),110,200,201,20(-1),210,300,301,30(-1),310$
or the negatives of these.
\item The $P$-run in a geodesic word of type $P,NP_>$ with positive
$t$-exponent sum must end with one of \\
$000,100,010,001,101,(-1)01,011,002,102,(-1)02,012,003,103,(-1)03,013$
or the negatives of these.
\item The $P$-run in a geodesic word of type $PX,NPX$ with positive
$t$-exponent sum must end with one of\\
$002,102,012,003,103,(-1)03,013$ or the negatives of these. \end{itemize}
\end{lem}
\noindent
\textit{Proof.} If an $N$-run starts with $i11$ or $i(-1)(-1)$ then
by Lemma \ref{lem:No11} we can replace $i11$ by $(i+1)0(-1)$ and
$i(-1)(-1)$ by $(i-1)01$ without increasing length. Thus the first
three entries of an $N$-run will include a $0$.
If an $N$-run in a word of type $X,XN$ or $XNP$ starts with $i$ with
$|i| \geq 4$ then we can replace $ta^{4+j}t^{-1}$ by
$t^2a^2t^{-1}a^jt^{-1}$ to get a word of the same type and preserving
length. If an $N$-run in a word of type $X,XN$ or $XNP$ starts with
$i$ with $|i| \leq 1$ then we can replace $ta^it^{-1}$ by $a^{2i}$,
reducing length, contradicting the fact that the word is geodesic.
Thus an $N$-run in a word of type $X,XN$ or $XNP$ starts with
$2,3,(-2)$ or $(-3)$.
This gives the following possibilities for the first three entries:\\
$200,201,20(-1),210,2(-1)0,300,301,30(-1),310,3(-1)0$ or the negatives
of these. We can eliminate $2(-1)0$ and $3(-1)0$ since they encode
$a^it^{-1}a^{-1}t^{-1}=a^{i-1}t^{-1}at^{-1}$ for $i=2,3$ so are not
geodesic. We also observe that $20(-1)$ encodes $t^2a^2t^{-2}a^{-1}$
which is not geodesic (as seen in Figure \ref{fig:prefix}).
This leaves $200,201,210,300,301,30(-1),310$ (or their negatives) as
the possible prefixes to the $N$-run in a geodesic of type $X,XN$ or
$XNP$. It is easy to check that each of these prefixes is geodesic.
If the $N$-run in a word of type $N,NP_{\leq}$ starts with $i$ with
$|i| \geq 4$ then we can replace $ta^{4+j}t^{-1}$ by
$t^2a^2t^{-1}a^jt^{-1}$ preserving length. Note that they become words
of type $XN$ or $XNP$. If the $N$-run in a word of type $N,NP_{\leq}$
starts with $i$ with $|i| \leq 3$ then we can have prefixes of the
form $i0$, $i10$ when $i>0$ and $i(-1)0$ when $i<0$.
Explicitly, this gives\\
$000,001,010,100,101,10(-1),110,200,201,20(-1),210,300,301,30(-1),310$\\
or their negatives. It is easy to check that each of these prefixes is
geodesic. Note that in this case we cannot eliminate $20(-1)$ since
there are no preceding $t$'s.
The proof for $P$-runs follows a similar argument, and is omitted.
$\Box$
\begin{lem}[Short runs]\label{lem:short}
In this lemma we assume that each word has been pushed into a word
with at most one non-trivial run, and that each run has no more than
two $t^{\pm 1}$ letters.
\begin{itemize}
\item
The geodesics of type $X,XN$ and $XNP$ are the set $L_1$ of words of
the form
\[
\begin{array}{lrl}
ta^it^{-1}a^j, & ij= & (\pm 2)0, (\pm 3)0, 21,31,
(-2)(-1),(-3)(-1); \\
ta^it^{-1}a^jt^{-1}a^k, & ijk= & (\pm 2)00, (\pm 3)00, 201,(\pm 3)01,
(-2)0(-1), \\ && (\pm 3)0(-1), 210,310, (-2)(-1)0,(-3)(-1)0; \\
t^2a^it^{-1}a^jt^{-1}a^k, & ijk= &(\pm 2)00, (\pm 3)00, 201,(\pm
3)01, (-2)0(-1), \\ && (\pm 3)0(-1), 210,310, (-2)(-1)0,(-3)(-1)0; \\
ta^it^{-1}a^jt^{-1}a^kt, & ijk= & 201,(\pm 3)01, (-2)0(-1),(\pm
3)0(-1).
\end{array}
\]
\item
The geodesics of type $N$ and $NP_{\leq}$ are the set $L_2$ of words of
the form
\[
\begin{array}{lrl}
a^it^{-1}a^j, & ij= & 00,(\pm 1)0,(\pm 2)0, (\pm 3)0, 0(\pm 1),0(\pm
2), 0(\pm 3),\\ && 11,21,31, (-1)(-1),(-2)(-1),(-3)(-1); \\
a^it^{-1}a^jt, & ij= & 0(\pm 1),0(\pm 2),0(\pm 3),
11,21,31, \\ && (-1)(-1),(-2)(-1),(-3)(-1); \\
a^it^{-1}a^jt^{-1}a^k, & ijk= & 000,(\pm 1)00,(\pm 2)00, (\pm 3)00,
0(\pm 1)0,0(\pm 2)0, 0(\pm 3)0, \\ && 00(\pm 1), 00(\pm 2), 00(\pm 3), (\pm
1)0(\pm 1), (\pm 2)0(\pm 1), (\pm 3)0(\pm 1), \\ && 110, 210,310,
(-1)(-1)0,(-2)(-1)0,(-3)(-1)0; \\
a^it^{-1}a^jt^{-1}a^kt, & ijk= & 00(\pm 1),00(\pm 2), 00(\pm 3),(\pm
1)0(\pm 1), (\pm 2)0(\pm 1),(\pm 3)0(\pm 1);\\
a^it^{-1}a^jt^{-1}a^kt^2, & ijk= &00(\pm 1),00(\pm 2), 00(\pm 3),(\pm
1)0(\pm 1), (\pm 2)0(\pm 1),(\pm 3)0(\pm 1).
\end{array}
\]
\item
The geodesics of type $P$ and $NP_{>}$ are the set $L_3$ of words of
the form
\[
\begin{array}{lrl}
a^ita^j, & ij= & 00,0(\pm 1),0(\pm 2),0(\pm 3),(\pm 1)0,
11,12,13,\\ && (-1)(-1),(-1)(-2),(-1)(-3);\\
t^{-1}a^ita^j & ij= &(\pm 1)0, 11,12,13,(-1)(-1),(-1)(-2),(-1)(-3);\\
a^ita^jta^k, & ijk= & 000,00(\pm 1),00(\pm 2),00(\pm 3), (\pm 1)0(\pm
1), (\pm 1)0(\pm 2), (\pm 1)0(\pm 3),\\ && 0(\pm
1)0,011,012,013,0(-1)(-1),0(-1)(-2),0(-1)(-3);\\
t^{-1}a^ita^jta^k, & ijk= &(\pm 1)0(\pm 1),(\pm 1)0(\pm
2),(\pm 1)0(\pm 3);\\
t^{-2}a^ita^jta^k, & ijk= &(\pm 1)0(\pm 1),(\pm 1)0(\pm 2),(\pm
1)0(\pm 3).
\end{array}
\]
\item
The geodesics of type $PX$ and $NPX$ (must have positive $t$-exponent)
are the set $L_4$ of words of the form
\[
\begin{array}{lrl}
a^ita^jta^kt^{-1}, & ijk= & 00(\pm 2),00(\pm 3), 012,013, 0(-1)(-2),
0(-1)(-3),\\ && 102,10(\pm 3), (-1)0(-2),(-1)0(\pm 3).
\end{array}
\]
\end{itemize}
\end{lem}
\noindent
\textit{Proof.} The proof is by exhaustive search. For the first two
cases we have either one or two $t^{-1}$ letters, so we consider
$t^pa^it^{-1}a^jt^q$ and $t^pa^it^{-1}a^jt^{-1}a^kt^q$. The
$t$-exponent must be non-positive, so $p+q\leq 1$ in the first case
and $p+q\leq 2$ in the second case. For the $a$-exponents, $|i|\leq 3$
and $|j|,|k|\leq 1$. This gives a finite set of possibilities, so we
run through each and check if it gives a geodesic. Note that the
pattern $20(-1)$ is not geodesic if it appears in an $N$-run
preceded by a $t$, yet it is geodesic if it appears in an $N$ or
$NP_{\leq}$ word.
By Lemma \ref{lem:No11} we choose to reject runs of the form $(i,1,1)$
and $(i,-1,-1)$ in favour of $(i+1,0,-1)$ and $(i-1,0,1)$ respectively,
so that we never see three non-zero entries in a row, even at the
start of a run. The details of the exhaustive check are omitted.
For the third and fourth cases we have either one or two $t$ letters,
so we consider $t^{-p}a^ita^jt^{-q}$ and
$t^{-p}a^ita^jta^kt^{-q}$. The $t$-exponent must be positive, so
$p,q=0$ in the third case and $p+q\leq 1$ in the fourth. For the
$a$-exponents, $|j|\leq 3$ in the third case, $|k|\leq 3$ in the
fourth, and the remaining exponents have absolute value at most $1$.
This gives a finite set of possibilities, so we run through each and
check if it gives a geodesic.
By Lemma \ref{lem:No11} we choose to reject runs of the form $(1,1,i)$
and $(-1,-1,i)$ in favour of $(-1,0,i+1)$ and $(1,0,i-1)$ respectively,
so that we never see three non-zero entries in a row, even at the end
of a run. The details of the exhaustive check are omitted. $\Box$
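A quick numerical sanity check of these rewrites: if, as in Lemma \ref{lem:a-exp} below, a run digit written earlier carries a higher power of $2$, then $(i,1,1)$ and $(i+1,0,-1)$ represent the same power of $a$, and likewise for the negatives. A minimal sketch in Python (the helper name is ours):

```python
def run_value(digits):
    """Value of a run segment read high-to-low: digits (x_2, x_1, x_0)
    contribute x_2*4 + x_1*2 + x_0."""
    v = 0
    for d in digits:
        v = 2 * v + d
    return v

# (i, 1, 1) -> (i+1, 0, -1) and (i, -1, -1) -> (i-1, 0, 1) preserve the value.
for i in range(-3, 4):
    assert run_value((i, 1, 1)) == run_value((i + 1, 0, -1))
    assert run_value((i, -1, -1)) == run_value((i - 1, 0, 1))
```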
\begin{defn}[Normal form]\label{defn:nf}
There are ten distinct types of normal form\ words. \begin{itemize}
\item Type $\mathcal{NF}_E$ words are precisely $\epsilon, a^{\pm 1}, a^{\pm
2}, a^{\pm 3}$.
\item Type $\mathcal{NF}_X, \mathcal{NF}_{XN}$ and $\mathcal{NF}_{XNP}$, all with zero or
negative $t$-exponent, are the words:
$t^ka^{\epsilon_l}t^{-1}a^{\epsilon_{l-1}}t^{-1}\ldots
a^{\epsilon_1}t^{-1}a^{\epsilon_0}t^m$ such that $k>0$ and $l\geq
k+m$, $\epsilon_0 \neq 0$ if $m>0$, the $N$-run starts with one of
$200,201,210,300,301,30(-1),310$ or the negatives of these, and
after this has only $0,1,(-1)$ with no consecutive nonzero entries
(that is, no $1(-1),(-1)1,11$ or $(-1)(-1)$ in the run).
If there are less than three $t^{-1}$ letters in the run, then the
word is in the set $L_1$ of Lemma \ref{lem:short}.
\item Type $\mathcal{NF}_N$ and $ \mathcal{NF}_{NP_{\leq}}$, all with negative
$t$-exponent, are the words:\\
$a^{\epsilon_l}t^{-1}a^{\epsilon_{l-1}}t^{-1}\ldots
a^{\epsilon_1}t^{-1}a^{\epsilon_0}t^k$ such that $0\leq k\leq l$,
$\epsilon_0 \neq 0$ if $k>0$, the $N$-run starts with one of \\
$000,001,010,100,101,10(-1),110,200,201,20(-1),210,300,301,30(-1),310$\\
or the negatives of these, and after this has only $0,1,(-1)$ with no
consecutive nonzero entries.
If there are less than three $t^{-1}$ letters in the run, then the
word is in the set $L_2$ of Lemma \ref{lem:short}.
\item Type $\mathcal{NF}_P$ and $ \mathcal{NF}_{NP_>}$, all with positive $t$-exponent,
are the words:\\ $t^{-k}a^{\epsilon_0}ta^{\epsilon_{1}}t\ldots
a^{\epsilon_{l-1}}ta^{\epsilon_l}$\\ such that $0\leq k < l$,
$\epsilon_0 \neq 0$ if $k>0$, the $P$-run ends with one of \\
$000,100,010,001,101,(-1)01,011,002,102,(-1)02,012,003,103,(-1)03,013$\\
or the negatives of these, and before this has only $0,1,(-1)$ with no
consecutive nonzero entries.
If there are less than three $t$ letters in the run, then the word is
in the set $L_3$ of Lemma \ref{lem:short}.
\item Type $\mathcal{NF}_{PX}$ and $ \mathcal{NF}_{NPX}$, all with positive
$t$-exponent, are the words:\\
$t^{-k}a^{\epsilon_0}ta^{\epsilon_{1}}t\ldots
a^{\epsilon_{l-1}}ta^{\epsilon_l}t^{-m}$ such that $k>0, m\geq 0$
and $k+m< l$, $\epsilon_0 \neq 0$ if $k>0$, the $P$-run ends with one
of $002,102,012,003,103,(-1)03,013$ or the negatives of these, and
before this has only $0,1,(-1)$ with no consecutive nonzero entries.
The $P$-run must have at least two $t$ letters since the $t$-exponent
of the word is positive.
If there are less than three $t$ letters in the run, then the word is
in the set $L_4$ of Lemma \ref{lem:short}.
\end{itemize}
\end{defn}
\begin{lem}[The language of normal forms surjects to the group]
\label{lem:Surject}
Every group element is represented by a normal form word.
\end{lem}
\noindent
\textit{Proof.} By Lemma \ref{lem:OneRun} every group element is
represented by a geodesic having at most one run. Then by Lemma
\ref{lem:No11} we can remove any occurrences of $11$ and $(-1)(-1)$ in
the run (except possibly at the start of $N$ and $NP_{\leq}$ words and
the end of $P$ and $NP_>$ words) without lengthening the word. Then if
the resulting run does not start (or end) with one of the number
patterns given in Lemma \ref{lem:PrefSuff} relative to its type, it is
not geodesic, and if it does, the word is in normal form. $\Box$
\begin{defn}[HNN-extension]
\label{defn:hnn}
If $G$ is a group with presentation $\langle \mathcal G\;|\; \mathcal R\rangle$
and $\phi:A\rightarrow B$ is an isomorphism of subgroups $A,B\subseteq G$,
define the {\em HNN-extension} $G_{\phi}$ of $G$ by $\phi$ to be the group with
presentation $ \langle \mathcal G, t\;|\; \mathcal R, \{tat^{-1}=\phi(a) : a \in
A\}\rangle$. The generator $t$ is called the {\em stable letter} and
$A,B$ are called {\em associated subgroups}.
\end{defn}
The group BS$(1,2)$\ is an HNN-extension\ of $\langle a \rangle$ with the isomorphism
$\phi(a)=a^2$ between associated subgroups $\langle a \rangle$ and
$\langle a^2 \rangle$. The following fact about HNN-extension s can be read in
\cite{LS}. \begin{lem}[Britton's Lemma] \label{lem:Blemma} If $w$ is
a word containing a $t^{\pm 1}$ letter in an HNN-extension\ of $G_{\phi}$ with
associated subgroups $A,B$ and if $w=_{G_{\phi}}1$ then $w$ must
contain a subword (called a {\em pinch}) of the form $tat^{-1}$ or
$t^{-1}\phi(a)t$ for some element $a\in A$. \end{lem}
\begin{cor}[$t$-exponent]\label{cor:texp}
For each element $g\in$ BS$(1,2)$\ there is an integer $k$ such that every
word for $g$ has $t$-exponent $k$.
\end{cor}
\noindent \textit{Proof.} If $w$ represents the identity and has no
$t^{\pm 1}$ letters then its $t$-exponent sum is zero. If $w$
represents the identity and has $t^{\pm 1}$ letters then by
Britton's lemma it contains a pinch. Removing a pinch leaves the
$t$-exponent of $w$ unchanged, so either you can remove all $t^{\pm
1}$ letters, in which case the $t$-exponent sum was zero, or you
cannot remove all $t^{\pm 1}$ letters, in which case the word did not
represent the identity.
If $w$ and $u$ are two words for the same group element with
$t$-exponents $k$ and $l$ respectively, then $wu^{-1}=_{BS} 1$ and has
$t$-exponent $k-l=0$, so $w$ and $u$ have the same $t$-exponent.
$\Box$
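The corollary can also be seen concretely in the standard affine representation of BS$(1,2)$, in which $a$ acts on the reals as $x\mapsto x+1$ and $t$ as $x\mapsto 2x$: a word with $t$-exponent $k$ evaluates to an affine map with linear coefficient $2^k$, which depends only on the group element. A short sketch (we write \texttt{A} for $a^{-1}$ and \texttt{T} for $t^{-1}$; the helper names are ours):

```python
from fractions import Fraction

# a acts as x -> x + 1 and t as x -> 2x; an affine map x -> s*x + m
# is stored as the pair (s, m).  'A' stands for a^{-1}, 'T' for t^{-1}.
TABLE = {
    'a': (Fraction(1), Fraction(1)),
    'A': (Fraction(1), Fraction(-1)),
    't': (Fraction(2), Fraction(0)),
    'T': (Fraction(1, 2), Fraction(0)),
}

def affine(word):
    """Compose the affine maps of the letters of `word`, left letter outermost."""
    s, m = Fraction(1), Fraction(0)
    for letter in word:
        s2, m2 = TABLE[letter]
        s, m = s * s2, s * m2 + m  # (s,m) o (s2,m2): x -> s*(s2*x + m2) + m
    return s, m

def t_exponent(word):
    """t-exponent of a word, read off letter by letter."""
    return word.count('t') - word.count('T')

# The defining relation t a t^{-1} = a^2 holds in this representation:
assert affine('taT') == affine('aa')
# The linear coefficient records 2^(t-exponent), so any two words for the
# same group element have the same t-exponent:
assert affine('taT')[0] == 2 ** t_exponent('taT') == 1
```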
\begin{lem}[$a$-exponents]\label{lem:a-exp}
The $X$ word $w=t^ka^jt^{-1}a^{\epsilon_{k-1}}t^{-1}\ldots
t^{-1}a^{\epsilon_0}$ represents the element $a^N$ where
$$N=2^kj+ \sum_{i=0}^{k-1} 2^i\epsilon_i.$$
Moreover if each $|\epsilon_i|\leq 1$ for all $i\leq k-1$, $|j|\geq 2$ and
$\epsilon_{k-1}$ is zero or the same sign as $j$, then $|N|\geq 4$.
Also, the $X$ word $w=a^{\epsilon_0}ta^{\epsilon_1}t\ldots
ta^{\epsilon_{k-1}}ta^jt^{-k}$ represents the element $a^N$ where
$$N=2^kj+ \sum_{i=0}^{k-1} 2^i\epsilon_i,$$ and moreover if each
$|\epsilon_i|\leq 1$ for all $i\leq k-1$, $|j|\geq 2$ and
$\epsilon_{k-1}$ is zero or the same sign as $j$, then $|N|\geq 4$.
\end{lem}
\noindent
\textit{Proof.}
To prove the first assertion we will use induction on $k$. If $k=1$ we have\\
$w=ta^jt^{-1}a^{\epsilon_0}=a^{2j+\epsilon_0}$.
Assuming the statement holds for $k$, then\\
$w=t^{k+1}a^jt^{-1}a^{\epsilon_k}t^{-1}a^{\epsilon_{k-1}}t^{-1} \ldots
t^{-1}a^{\epsilon_0}$ \\
$=t^ka^{2j+\epsilon_k}t^{-1}a^{\epsilon_{k-1}}t^{-1} \ldots
t^{-1}a^{\epsilon_0}=a^N$ \\ where
$N=2^k(2j+\epsilon_k)+\sum_{i=0}^{k-1} 2^i\epsilon_i$.
The smallest possible value for $|N|$ is when $|j|=2$,
$\epsilon_{k-1}=0$ and each $\epsilon_i$ is $-\frac{j}{|j|}$. In this
case\\ $|N|\geq 2^k(2)+ 0 + \sum_{i=0}^{k-2} 2^i(-1)$ \\ $= 2^k(2) -
\sum_{i=0}^{k-2} 2^i$\\ $= 2^k(2) - (2^{k-1}-1)$\\ $\geq 2(2)-(1-1)=4$
since $k\geq 1$.
To prove the second assertion we will again use induction on $k$. If
$k=1$ we have\\ $w=a^{\epsilon_0}ta^jt^{-1}=a^{2j+\epsilon_0}$.
Assuming the statement holds for $k$, then\\
$w=a^{\epsilon_0}t\ldots ta^{\epsilon_{k-1}} ta^{\epsilon_k}ta^jt^{-k-1}$\\
$=a^{\epsilon_0}t\ldots ta^{\epsilon_{k-1}} ta^{2j+\epsilon_k}t^{-k}=a^N$\\
where $N=2^k(2j+\epsilon_k)+\sum_{i=0}^{k-1}2^i\epsilon_i$.\\
The smallest possible value for $|N|$ is when $|j|=2$,
$\epsilon_{k-1}=0$ and each $\epsilon_i$ is $-\frac{j}{|j|}$. In this
case\\ $|N|\geq 2^k(2)+ 0 + \sum_{i=0}^{k-2} 2^i(-1)$ \\ $= 2^k(2) -
\sum_{i=0}^{k-2} 2^i$\\ $= 2^k(2) - (2^{k-1}-1)$\\ $\geq 2(2)-(1-1)=4$
since $k\geq 1$. $\Box$
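The formula for $N$ and the bound $|N|\geq 4$ can be verified mechanically for small parameters via the affine representation $a\colon x\mapsto x+1$, $t\colon x\mapsto 2x$ of BS$(1,2)$; a sketch (helper names are ours; \texttt{A} is $a^{-1}$, \texttt{T} is $t^{-1}$):

```python
from fractions import Fraction
from itertools import product

def affine(word):
    """Evaluate a word over a, A, t, T as the affine map x -> s*x + m."""
    table = {'a': (Fraction(1), Fraction(1)), 'A': (Fraction(1), Fraction(-1)),
             't': (Fraction(2), Fraction(0)), 'T': (Fraction(1, 2), Fraction(0))}
    s, m = Fraction(1), Fraction(0)
    for c in word:
        s2, m2 = table[c]
        s, m = s * s2, s * m2 + m
    return s, m

def x_word(k, j, eps):
    """The X word t^k a^j t^{-1} a^{eps_{k-1}} ... t^{-1} a^{eps_0}."""
    assert len(eps) == k
    w = 't' * k + ('a' * j if j >= 0 else 'A' * -j)
    for e in reversed(eps):  # eps_{k-1} first, eps_0 last
        w += 'T' + ('a' * e if e >= 0 else 'A' * -e)
    return w

def n_value(k, j, eps):
    """The exponent N = 2^k j + sum_i 2^i eps_i from the lemma."""
    return 2 ** k * j + sum(2 ** i * e for i, e in enumerate(eps))

# Check the formula against the affine evaluation, and the bound |N| >= 4
# under the lemma's hypotheses, for all small admissible parameters.
for k in (1, 2, 3):
    for j in (-3, -2, 2, 3):
        for eps in product((-1, 0, 1), repeat=k):
            N = n_value(k, j, eps)
            assert affine(x_word(k, j, list(eps))) == (Fraction(1), Fraction(N))
            if eps[-1] == 0 or eps[-1] * j > 0:  # eps_{k-1} zero or same sign as j
                assert abs(N) >= 4
```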
\begin{lem}[Uniqueness for $\mathcal{NF}_E\cup \mathcal{NF}_X$]
\label{lem:Xunique}
If $w,u\in \mathcal{NF}_E\cup\mathcal{NF}_X$ and $w=_{BS}u$ then $w$ and $u$ are
identical.
\end{lem}
\noindent
\textit{Proof.} If $w,u\in \mathcal{NF}_E$ then $w=a^i$ and $u=a^j$ and
$a^i=_{BS}a^j$ means $a^{i-j}=1$, so $i=j$ and $w$ and $u$ are
identical.
If $w\in \mathcal{NF}_X$ then we can write
$w=t^ka^{\epsilon_k}t^{-1}a^{\epsilon_{k-1}}t^{-1}\ldots
a^{\epsilon_1}t^{-1}a^{\epsilon_0}$ with $k>0$, which evaluates to the
power $N$ with $|N|\geq 4$ by Lemma \ref{lem:a-exp}, so $w$ cannot be
equal to a word in $\mathcal{NF}_E$.
If $u\in \mathcal{NF}_X$ and $w=_{BS}u$ then we can write
$u=t^la^{\eta_l}t^{-1}a^{\eta_{l-1}}t^{-1}\ldots
t^{-1}a^{\eta_{k}}t^{-1}\ldots a^{\eta_1}t^{-1}a^{\eta_0}$, where
without loss of generality we are assuming that $k\leq l$. Since both
words evaluate to the same power of $a$ we have
\begin{eqnarray*}
\epsilon_k2^k+\epsilon_{k-1}2^{k-1}+\ldots +\epsilon_12+\epsilon_0 & =
&\eta_l2^l+\eta_{l-1}2^{l-1}+\ldots+\eta_{k}2^{k}+\ldots
+\eta_12+\eta_0.\end{eqnarray*}
Let $i\in \mathbb N$ such that $\epsilon_j=\eta_j$ for all $j<i$ and
$\epsilon_i\neq \eta_i$. Then cancelling and dividing through by $2^i$
we have
\begin{equation}\label{eqn1}
\epsilon_k2^{k-i}+\epsilon_{k-1}2^{k-1-i}+\ldots +\epsilon_i =
\eta_l2^{l-i}+\eta_{l-1}2^{l-1-i}+\ldots +\eta_i.
\end{equation}
If $i=k$ then $|\epsilon_k|=2$ or $3$ and we have
$\epsilon_k=\eta_l2^{l-k}+\eta_{l-1}2^{l-1-k}+\ldots +\eta_k$. If
$l=k$ then $\epsilon_k=\eta_k$ so $w$ and $u$ are identical. If
$l\geq k+1$ then
$|\epsilon_k|=|\eta_l2^{l-k}+\eta_{l-1}2^{l-k-1}+\ldots +\eta_k|\geq
4$ since $|\eta_l|\geq 2$ and $\eta_{l-1}$ is either $0$ or the same
sign as $\eta_l$, but $|\epsilon_k|\leq 3$ so this is a contradiction.
If $i<k$ then $\epsilon_i, \eta_i$ are either $0,\pm 1$ since they
occur in the middle of a run. By Equation \ref{eqn1} they must be of
the same parity, and they cannot both be zero so one is $1$ and one is
$(-1)$. If $i+1<k$ then $\epsilon_{i+1}=\eta_{i+1}=0$ and we
contradict the equation since one side is equal to $1 \mod 4$ and the
other is $(-1) \mod 4$.
So $i+1=k$, so the run in $w$ starts with $210$ or $310$ (or their
negatives). Then $w=t^ka^st^{-1}aw''$ and $u=t^ku't^{-1}a^{-1}w''$
with $s=2,3$ so $u'=_{BS}a^{s+1}$ so is $a^3$ or $a^4$, which by
Lemma \ref{lem:TenTypes} is written as $ta^2t^{-1}$ if it occurs
in a normal form word. Then the run in $u$ must start with either
$3(-1)$ or $20(-1)$, neither of which is allowed in a normal form\ word, so
$w$ and $u$ are identical. $\Box$
\begin{lem}[Uniqueness for $\mathcal{NF}_N\cup \mathcal{NF}_{XN}$]
\label{lem:Nunique}
If $w,u\in \mathcal{NF}_N\cup\mathcal{NF}_{XN}$ and $w=_{BS}u$ then $w$ and $u$ are
identical.
\end{lem}
\noindent
\textit{Proof.} If $w$ and $u$ are two normal form\ words representing the
same group element, then they have the same $t$-exponent by Corollary
\ref{cor:texp}. If $w,u\in \mathcal{NF}_{XN}$ with $t$-exponent $(-k)$ then
$t^kw,t^ku$ are in $\mathcal{NF}_{X}$ so by Lemma \ref{lem:Xunique} they are
identical. Note that $\mathcal{NF}_{XN}$ and $\mathcal{NF}_X$ words have the same
$N$-run structure, the only difference is the length of the $t^l$
prefix.
If $w\in \mathcal{NF}_N$ then let $w=a^{\epsilon_k}t^{-1}\ldots
t^{-1}a^{\epsilon_0}$ and let \\ $u=u't^{-1}a^{\eta_{k-1}}t^{-1}\ldots
t^{-1}a^{\eta_0}$ where $u'$ evaluates to $a^n$ and is type $X$ or
$E$. The words $t^kw$ and $t^ku$ evaluate to the same power of $a$,
which is $\epsilon_k2^k+\ldots
+\epsilon_0=n2^k+\eta_{k-1}2^{k-1}+\ldots +\eta_0$. Let $i\in \mathbb
N$ such that $\epsilon_j=\eta_j$ for all $j<i$ and $\epsilon_i\neq
\eta_i$. Then cancelling and dividing through by $2^i$ we get
\begin{equation}\label{eqn2}
\epsilon_k2^{k-i}+\ldots
+\epsilon_i=n2^{k-i}+\eta_{k-1}2^{k-i-1}+\ldots +\eta_i.
\end{equation}
If $i=k$ then $\epsilon_k=n$. Now $|\epsilon_k|\leq 3$ and $u'$ is an
$E$ or $X$ word with the same $a$-exponent. By Lemma \ref{lem:a-exp}
if $u'$ is type $X$ then it evaluates to $a^N$ with $|N|\geq 4$, so
$u'$ is type $E$, indeed it is exactly $a^{\epsilon_k}$, so $w$ and
$u$ are identical.
If $i<k$ then $\epsilon_i,\eta_i=\pm 1$ since they are in the middle
of a run, and have the same parity by Equation \ref{eqn2}. If $i+1<k$
then we have a contradiction since $\epsilon_{i+1}=\eta_{i+1}=0$ and
the equation has $4x+1$ on one side and $4y-1$ on the other for
integers $x,y$. So $i=k-1$ and
$2\epsilon_k+\epsilon_{k-1}=2n+\eta_{k-1}$, so $n=\epsilon_k\pm 1$
since $\epsilon_{k-1} -\eta_{k-1}=\pm 2$, and $\epsilon_{k-1}$ has the
same sign as $\epsilon_k$.
If $u'$ is type $X$ then $|n|\geq 4$ by Lemma \ref{lem:a-exp} but
$|\epsilon_k|\leq 3$, so the only chance for equality is when the run
in $w$ starts with $31$ and $\eta_{k-1}=-1$. Then $u'=_{BS}a^4$ which
is written as $ta^2t^{-1}$ in a normal form word, but then the run in
$u$ starts with $20(-1)$ which is not allowed. Thus $u$ is also in
$\mathcal{NF}_N$. Without loss of generality assume $\epsilon_k>0$ so
$\epsilon_{k-1}=1$ and $\eta_{k-1}=-1$. Then $n$ must be negative
since the run in $u$ starts with $n(-1)$, and we have a contradiction.
$\Box$
\begin{lem}[Uniqueness for $\mathcal{NF}_P\cup \mathcal{NF}_{PX}$]
\label{lem:Punique}
If $w,u\in \mathcal{NF}_P\cup\mathcal{NF}_{PX}$ and $w=_{BS}u$ then $w$ and $u$ are
identical.
\end{lem}
\noindent
\textit{Proof.} If $w,u\in \mathcal{NF}_P\cup\mathcal{NF}_{PX}$ then $w^{-1}$ and
$u^{-1}$ are in $\mathcal{NF}_N\cup\mathcal{NF}_{XN}$, so by Lemma \ref{lem:Nunique}
since $w^{-1}=_{BS}u^{-1}$ then $w^{-1}$ and $u^{-1}$ are identical,
and so $w$ and $u$ are identical. $\Box$
\begin{lem}[Uniqueness]\label{lem:Unique}
Every group element is represented by a unique normal form word.
\end{lem}
\noindent
\textit{Proof.} If $w$ and $u$ are two normal form\ words representing the
same group element, then they have the same $t$-exponent by Corollary
\ref{cor:texp}.
If $w$ and $u$ have zero $t$-exponent then they are of the form
$E,X,NP$ or $XNP$. If neither is $NP$ or $XNP$ then they are
identical by Lemma \ref{lem:Xunique}. If one is $NP$ or $XNP$ then
let $w=w't^{-1}a^{\epsilon_{k-1}}t^{-1}\ldots t^{-1}a^{\epsilon_0}t^k$
and $u=u't^{-1}a^{\eta_{l-1}}t^{-1}\ldots t^{-1}a^{\eta_0}t^l$ where
$w',u'$ evaluate to powers of $a$ and assume without loss of
generality that $k>0$ and $k\geq l$. Then
$wu^{-1}=w't^{-1}a^{\epsilon_{k-1}}t^{-1}\ldots
t^{-1}a^{\epsilon_0}t^{k-l}a^{-\eta_0}t\ldots
ta^{-\eta_{l-1}}(u')^{-1}$\\ $=_{BS}1$. Since $k>0$ then
$\epsilon_0=\pm 1$ so if we replace $w'$ and $u'$ by the corresponding
powers of $a$ (by pinching $ta^st^{-1}$ subwords) we have a word that
does not admit any pinches, contradicting Britton's Lemma. Thus
$k=l$. Then the words $wt^{-k}$ and $ut^{-k}$ are equal and in
$\mathcal{NF}_N\cup\mathcal{NF}_{XN}$ so by Lemma \ref{lem:Nunique} must be identical,
so $w$ and $u$ are identical.
If $w$ and $u$ have negative $t$-exponent then they are of the form
$N,XN,NP$ or $XNP$. If neither is $NP$ or $XNP$ then they are
identical by Lemma \ref{lem:Nunique}. If one is $NP$ or $XNP$ then
let $w=w't^{-1}a^{\epsilon_{k-1}}t^{-1}\ldots t^{-1}a^{\epsilon_0}t^l$
and let \\ $u=u't^{-1}a^{\eta_{p-1}}t^{-1}\ldots t^{-1}a^{\eta_0}t^q$
where $k>l,p>q$, and $w',u'$ evaluate to powers of $a$. Assume without
loss of generality that $l>0$ and $l\geq q$. Then\\
$wu^{-1}=w't^{-1}a^{\epsilon_{k-1}}t^{-1}\ldots
t^{-1}a^{\epsilon_0}t^{l-q}a^{-\eta_0}t\ldots
ta^{-\eta_{p-1}}(u')^{-1}=_{BS}1$. Since $l>0$ then $\epsilon_0=\pm 1$
so after replacing $w'$ and $u'$ by the corresponding powers of $a$,
we have a word that does not admit any more pinches, contradicting
Britton's Lemma. Thus $l=q$. Then the words $wt^{-l}$ and $ut^{-l}$
are equal and in $\mathcal{NF}_N\cup\mathcal{NF}_{XN}$ so by Lemma \ref{lem:Nunique}
must be identical, so $w$ and $u$ are identical.
If $w$ and $u$ have positive $t$-exponent then they are of the form
$P,PX,NP$ or $NPX$. If neither is $NP$ or $NPX$ then they are
identical by Lemma \ref{lem:Punique}. If one is $NP$ or $NPX$ then
let $w=t^{-k}a^{\epsilon_0}t\ldots t a^{\epsilon_{k-1}}w'$ and let
$u=t^{-p}a^{\eta_0}t \ldots ta^{\eta_{p-1}}tu'$ where $k<l,p<q$, and
$w',u'$ evaluate to powers of $a$. Assume without loss of generality
that $k>0$ and $k\geq p$. Then \\$u^{-1}w=
(u')^{-1}t^{-1}a^{-\eta_{p-1}}t^{-1}\ldots t^{-1}a^{-\eta_0}
t^{p-k}a^{\epsilon_0}t\ldots t a^{\epsilon_{k-1}}w'=_{BS}1$. Since
$k>0$ then $\epsilon_0=\pm 1$ so after replacing $w'$ and $u'$ by
their corresponding powers of $a$ we have a word that cannot be
pinched, contradicting Britton's Lemma. Thus $k=p$. Then the words
$t^kw$ and $t^ku$ are equal and in $\mathcal{NF}_P\cup\mathcal{NF}_{PX}$ so by Lemma
\ref{lem:Punique} must be identical, so $w$ and $u$ are identical.
$\Box$
\begin{lem}[Normal forms are geodesic]\label{lem:geodNF}
Each normal form\ word is a geodesic.
\end{lem}
\noindent
\textit{Proof.} Suppose that a word $w\in \mathcal{NF}$ is not
geodesic. Choose a geodesic word $u=_{BS}w$ that is one of the ten
types in Lemma \ref{lem:TenTypes}. By Lemma \ref{lem:OneRun} we can
move $u$ into a word $u'$ of the same length having one run.
If $u'$ is in normal form\ then, since $w$ and $u'$ are both normal form\ words
representing the same group element, they must be identical by Lemma
\ref{lem:Unique}, so $w$ is geodesic, a contradiction.
If $u'$ is not in normal form, it either violates the prefix rules (as in
Lemma \ref{lem:PrefSuff}) or has an adjacent pair of nonzero digits in
its run.
If the run in $u'$ has an occurrence of $1(-1)$ or $(-1)1$ then $u'$ is
not geodesic. If the run in $u'$ has an occurrence of $11$ or
$(-1)(-1)$ that is not at the start of an $N$-run or the end of a
$P$-run, then by Lemma \ref{lem:No11} we can perform a length
preserving rewrite to eliminate it. If this causes $u'$ to have a
$1(-1)$ then $u$ was not geodesic, and if it causes $u'$ to have a
$11$ or $(-1)(-1)$ then repeatedly applying Lemma \ref{lem:No11} from
right to left in an $N$-run, or left to right in a $P$-run, we can
eliminate all occurrences of pairs of nonzero digits.
Finally if the start or end of the run is not one of the patterns in Lemma
\ref{lem:PrefSuff} then either $u'$ is not geodesic (if the prefix is
$20(-1)$ for example), or $u'$ is equal to a normal form word of the same
length, which means that the original word $w$ is geodesic, contradicting
our assumption. $\Box$
\section{The main theorem}\label{sect:mainthm}
\begin{thm}
The language $\mathcal{NF}$ is a 1-counter language.
\end{thm}
\noindent
\textit{Proof.} The ten types of normal-form geodesics listed in
Definition \ref{defn:nf} break up into five cases. The set $\mathcal{NF}_E$ is
a 1-counter language\ since it is finite. We can describe a $\mathbb Z$-automaton
for each of the remaining four cases to accept the remaining nine
types.
Consider the set of normal form words of type $X,XN$ and $XNP$. The
language $L_1$ of Lemma \ref{lem:short} describes the set of normal
form words of these types with at most two $t^{-1}$ letters in the
$N$-run, and since $L_1$ is finite, it is a regular language.
Let $L_1'$ be the set of words of the form
$\{t^ka^it^{-2}at^{-1},t^ka^jt^{-2}a^{-1}t^{-1}\: | \; k=1,2,3, i=2, \pm
3, j=-2,\pm 3\}$. This is a finite set so is regular, and is the set
of $X$ (and $XN$) normal form words with three $t^{-1}$'s in the
$N$-run, that corresponds to the prefix $201,301,30(-1) $ and their
negatives.
The remaining $X,XN$ and $XNP$ normal form words (with an $N$-run of
$3$ or more $t^{-1}$ letters) are accepted by the automaton on the
left of Figure \ref{fig:aut1}. The edge labeled $\kappa$ stands for a
collection of paths labeled by
\[
\begin{array}{ll}
a^i(t^{-1},-)(t^{-1},-)(t^{-1},-), & i=\pm 2,\pm 3; \\
a^i(t^{-1},-)(t^{-1},-)a(t^{-1},-)(t^{-1},-), & i= 2, \pm 3;\\
a^i(t^{-1},-)(t^{-1},-)a^{-1}(t^{-1},-)(t^{-1},-), & i= -2, \pm 3;\\
a^i (t^{-1},-)a(t^{-1},-)(t^{-1},-), & i= 2, 3; \\
a^i (t^{-1},-)a^{-1}(t^{-1},-)(t^{-1},-), & i= -2, -3.
\end{array}
\]
The union of these three (regular and 1-counter) languages is 1-counter.
\begin{figure}[ht!]
\begin{tabular}{ccc}
\includegraphics[height=6.5cm]{pics/aut1.eps}& &
\includegraphics[height=6.5cm]{pics/aut2.eps}
\end{tabular}
\caption{Counter automata for normal form $X,XN,XNP$ words and
$N,NP_{\leq}$ words with $N$-run length at least $3$.}
\label{fig:aut1}
\end{figure}
Next, consider the set of normal form words of type $N$ and
$NP_{\leq}$. The language $L_2$ of Lemma \ref{lem:short} describes
the set of normal form words of these types with at most two $t^{-1}$
letters in the $N$-run, and since $L_2$ is finite, it is a regular
language.
Let $L_2'$ be the set of words of the form $\{a^it^{-2}a^{\pm
1}t^{-1}\: | \; i=0, \pm 1,\pm 2, \pm 3\}$. This is a
finite set so is regular, and is the set of $N$ (and $NP_{\leq}$) normal
form words with three $t^{-1}$'s in the $N$-run, that corresponds to
the prefix $00(\pm 1),10(\pm 1),20(\pm 1),30(\pm 1) $ and their
negatives.
The remaining $N$ and $NP_{\leq}$ normal form words (with an $N$-run
of $3$ or more $t^{-1}$ letters) are accepted by the automaton on the
right of Figure \ref{fig:aut1}. The edge labeled $\kappa'$ stands for
a collection of paths labeled by
\[
\begin{array}{ll}
a^i(t^{-1},-)(t^{-1},-)(t^{-1},-), & i=0, \pm 1, \pm 2,\pm 3; \\
a^i(t^{-1},-)(t^{-1},-)a(t^{-1},-)(t^{-1},-), & i= 0,\pm 1,\pm 2, \pm 3;\\
a^i(t^{-1},-)(t^{-1},-)a^{-1}(t^{-1},-)(t^{-1},-),& i= 0,\pm 1,\pm 2, \pm 3;\\
a^i (t^{-1},-)a(t^{-1},-)(t^{-1},-), & i= 0,1,2, 3; \\
a^i (t^{-1},-)a^{-1}(t^{-1},-)(t^{-1},-), & i= 0,-1,-2, -3.
\end{array}
\]
Next, consider the set of normal form words of type $P$ and $NP_>$. The
language $L_3$ of Lemma \ref{lem:short} describes the set of normal
form words of these types with at most two $t$ letters in the
$P$-run, and since $L_3$ is finite, it is a regular language.
Let $L_3'$ be the set of words of the form $\{ta^{\pm 1}t^2a^i \: | \;
i=0, \pm 1,\pm 2, \pm 3\}$. This is a finite set so is regular, and
is the set of $P$ (and $NP_>$) normal form words with three $t$'s in
the $P$-run, that corresponds to the suffix $(\pm 1)00,(\pm 1)01,(\pm
1)02,(\pm 1)03 $ and their negatives.
The remaining $P$ and $NP_>$ normal form words (with a $P$-run of $3$
or more $t$ letters) are accepted by the automaton on the left of
Figure \ref{fig:POS}. The edge labeled $\lambda$ stands for a
collection of paths labeled by
\[
\begin{array}{ll}
(t,+)(t,+)(t,+)a^i & i=0, \pm 1, \pm 2,\pm 3; \\
(t,+)(t,+)a(t,+)(t,+)a^i & i= 0,\pm 1,\pm 2, \pm 3;\\
(t,+)(t,+)a^{-1}(t,+)(t,+)a^i & i= 0,\pm 1,\pm 2, \pm 3;\\
(t,+)(t,+)a(t,+)a^i & i= 0,1,2, 3;\\
(t,+)(t,+)a^{-1}(t,+)a^i & i= 0,-1,-2, -3.
\end{array}
\]
\begin{figure}[ht!]
\begin{tabular}{ccc}
\includegraphics[height=5cm]{pics/aut3.eps}& &
\includegraphics[height=5cm]{pics/aut4.eps}
\end{tabular}
\caption{Counter automata for normal form $P,NP_>$ words and
$PX,NPX$ words with $P$-run length at least $3$.}
\label{fig:POS}
\end{figure}
Lastly, consider the set of normal form words of type $PX$ and
$NPX$. The language $L_4$ of Lemma \ref{lem:short} describes
the set of normal form words of these types with (at most) two $t$
letters in the $P$-run, and since $L_4$ is finite, it is a regular
language.
Let $L_4'$ be the set of words of the form
$\{tat^2a^it^{-k},ta^{-1}t^2a^jt^{-k}
\: | \; k=1,2,3, i=2, \pm
3, j=-2,\pm 3\}$. This is a finite set so is regular, and is the set
of $PX$ (and $NPX$) normal form words with three $t$'s in the
$P$-run, that corresponds to the suffix $102,103,(-1)03 $ and their
negatives.
The remaining $PX$ and $NPX$ normal form words (with a $P$-run of $3$ or more
$t$ letters) are accepted by the automaton on the right of Figure
\ref{fig:POS}. The edge labeled $\lambda'$ stands for a collection of
paths labeled by
\[
\begin{array}{ll}
(t,+)(t,+)(t,+) a^i & i=\pm 2,\pm 3; \\
(t,+)(t,+)a(t,+)(t,+)a^i & i= 2, \pm 3;\\
(t,+)(t,+)a^{-1}(t,+)(t,+)a^i & i= -2, \pm 3;\\
(t,+)(t,+)a(t,+)a^i & i= 2, 3; \\
(t,+)(t,+)a^{-1}(t,+)a^i & i= -2, -3.
\end{array}
\]
By Lemma \ref{lem:closure1counter} the union of a 1-counter and
a regular language is 1-counter so each of the ten types is 1-counter,
and by Corollary \ref{cor:union} the union of 1-counter languages is
1-counter. $\Box$
\begin{cor}
The language of normal forms for BS$(1,2)$ with the standard
generating set is context-free.
\end{cor}
\section{Full language of geodesics}
In this section we prove that the language of all geodesic words in
the standard generating set is not counter. To prove this we will
mimic the proof of Theorem \ref{thm:cfnotcounter}. Recall that in
that proof we constructed a word $ww^R$ on three symbols whose prefix
is square-free and suffix is its reverse, and applied the Swapping
Lemma (Lemma \ref{lem:swap}) to obtain a contradiction.
Let $w$ be a word in BS$(1,2)$\ with no $a^{-1}$ letters. Define the {\em
$t$-encoding} of $w$ to be a string of integers $n_1n_2\ldots n_k$
such that $w=t^{n_1}at^{n_2}\ldots at^{n_k}$. If $w$ starts (or
respectively ends) with an $a$ then $n_1=0$ (or respectively $n_k=0$).
As an example, the word
\begin{eqnarray*}
at^2a^2ta^3t^4at^{-9}at^2at^{-1} & = & t^0 a t^2 a t^0 a t a t^0
a t^0 a t^4 a t^{-9} a t^2 a t^{-1}
\end{eqnarray*}
is encoded as $0201004(-9)2(-1).$ Note that previously our encodings
have been of $a$-exponents, but this new encoding will be useful for
the argument to follow.
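Computing the $t$-encoding is mechanical; a small sketch, writing \texttt{T} for $t^{-1}$ (the helper name is ours):

```python
def t_encoding(word):
    """t-encoding of a word over {a, t, T} with no a^{-1} letters, where
    'T' stands for t^{-1}: the integers n_1,...,n_k such that
    word = t^{n_1} a t^{n_2} a ... a t^{n_k}."""
    assert set(word) <= {'a', 't', 'T'}
    blocks = word.split('a')
    return [b.count('t') - b.count('T') for b in blocks]

# The example from the text: a t^2 a^2 t a^3 t^4 a t^{-9} a t^2 a t^{-1}
w = 'a' + 'tt' + 'aa' + 't' + 'aaa' + 'tttt' + 'a' + 'T' * 9 + 'a' + 'tt' + 'a' + 'T'
assert t_encoding(w) == [0, 2, 0, 1, 0, 0, 4, -9, 2, -1]
```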
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=4cm]{pics/fsa-2.eps}
\end{tabular}
\end{center}
\caption{A finite state automaton accepting the language $L$ in the
proof of Theorem \ref{thm:BSFullnotcounter}}
\label{fig:FSA_tencodings}
\end{figure}
\begin{thm}\label{thm:BSFullnotcounter}
The language of all geodesic words in BS$(1,2)$\ with respect to the
generating set $\{a^{\pm 1}, t^{\pm 1}\}$ is not counter.
\end{thm}
\noindent
\textit{Proof.} Suppose that the full language is counter, and call
it $C$. Define $L$ to be the set of words in $\{a,t^{\pm 1}\}$
accepted by the finite state automaton\ in Figure \ref{fig:FSA_tencodings}. That is,
$L$ is the set of $PN$ words whose $t$-encodings are words of the form
$\{10,20,30\}\{10,20,30\}^*0\{-10,-20,-30\}\{-10,-20,-30\}^*.$
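Membership in $L$ is easily tested directly on $t$-encodings viewed as integer sequences; a small sketch (this is a plain membership test of ours, not the automaton of Figure \ref{fig:FSA_tencodings}):

```python
def in_L(encoding):
    """Check that a t-encoding has the shape
    {10,20,30}{10,20,30}* 0 {-10,-20,-30}{-10,-20,-30}*."""
    try:
        z = encoding.index(0)   # the single 0 separating the two halves
    except ValueError:
        return False
    pos, neg = encoding[:z], encoding[z + 1:]
    return (len(pos) >= 1 and len(neg) >= 1
            and all(n in (10, 20, 30) for n in pos)
            and all(n in (-10, -20, -30) for n in neg))

assert in_L([10, 20, 30, 0, -10, -30])
assert not in_L([10, 0, 10])    # negative half must use -10, -20, -30
assert not in_L([10, 20, -10])  # the 0 in the middle is required
```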
Since $L$ is regular, the intersection of $C$ and $L$ is counter. Let
$M$ be a counter automaton accepting $C\cap L$, with alphabet $a^{\pm
1},t^{\pm 1}$. We can construct a new counter automaton $M'$ which
accepts the set of $t$-encoded words of $C\cap L$ as follows.
The states, start state, accept states and counters are the same as
for $M$. The new alphabet is $\{0,\pm 10,\pm 20,\pm 30\}$.
The transitions are defined as follows.
If there is a path labelled by $t^ia$ in $M$ from $p$ to $q$, then add
an edge in $M'$ from $p$ to $q$ labeled by $i$, and the counters are
changed by the same amount as they were following the path $t^ia$ in
$M$. Thus a word is accepted by $M$ if and only if its encoding is
accepted by $M'$. Since $M$ accepts $C\cap L$, the only subwords of
the form $t^ia$ that appear in accepted words are for $i=0,\pm 10,\pm
20$ or $\pm 30$. Let $p$ be the swapping length for $M'$.
Next, take a Thue-Morse word in three symbols, which we choose to be
$10,20,30$, of length greater than $2p$. This word encodes a $P$ word
$u$ of some $t$-exponent $10c$. We wish to find some kind of
``reverse'' of $u$, as we did in the proof of Theorem
\ref{thm:cfnotcounter}. We find a word $v$ to act as the ``reverse''
by the following procedure.
\begin{enumerate}
\item Write $u$ as $t^{10}a^{\epsilon_1}t^{10}a^{\epsilon_2}\ldots
t^{10}a^{\epsilon_k}t^{10}$ where $\epsilon_i=0,1$.
\item Reverse this word.
\item Replace $a^0$ with $a^1$ and $a^1$ with $a^0$ in this word.
\item Replace $t^{10}$ with $t^{-10}$ in this word to get $v$. \end{enumerate}
For example, the Thue-Morse word $10,20,30,10,30,20,10,20,30,20,10,30$
encodes the word
\begin{eqnarray*}
u & = & t^{10}at^{20}at^{30}at^{10}at^{30}at^{20}at^{10}at^{20}a
t^{30}at^{20}at^{10}at^{30}
\end{eqnarray*}
\noindent
Step 1: Write $u$ as
\begin{eqnarray*}
u & = &
|a^{1}|a^0|a^{1}|a^0|a^0|a^{1}|a^{1}|a^0|a^0|a^{1}|a^0|
a^{1}|a^{1}|a^0|a^{1}|a^0|a^0|a^{1}|a^0|a^{1}|a^{1}|a^0|a^0|
\end{eqnarray*}
where the $t^{10}$ terms are replaced by bars $|$, to make it easier
to read.
\noindent
Step 2: Reversing this word gives
\begin{eqnarray*}
u^R & = &
|a^0|a^0|a^1|a^1|a^0|a^1|a^0|a^0|a^1|a^0|a^1|a^1|a^0|a^1|a^0|a^0|
a^1|a^1|a^0|a^0|a^1|a^0|a^1|.
\end{eqnarray*}
\noindent Step 3: Replacing $a^0$ by $a^1$ and vice versa gives
\begin{eqnarray*}
& & |a^1|a^1|a^0|a^0|a^1|a^0|a^1|a^1|a^0|a^1|a^0|a^0|a^1|a^0|a^1|a^1|
a^0|a^0|a^1|a^1|a^0|a^1|a^0|.
\end{eqnarray*}
\noindent
Step 4: Replacing $t^{10}$ by $t^{-10}$ gives
\begin{eqnarray*}
v & = & \dag a^1 \dag a^1 \dag a^0 \dag a^0 \dag a^1 \dag a^0 \dag a^1
\dag a^1 \dag a^0 \dag a^1 \dag a^0 \dag a^0 \dag a^1 \dag a^0 \dag
a^1 \dag a^1 \dag a^0 \dag a^0 \dag a^1 \dag a^1 \dag a^0 \dag a^1
\dag a^0 {\dag}\\ & = & t^{-10}at^{-10}at^{-30} at^{-20}at^{-10}a
t^{-20}a t^{-30}at^{-20}a t^{-10}at^{-30}a t^{-10}at^{-20}at^{-20}
\end{eqnarray*}
\noindent
where $\dag$ represents $t^{-10}$.
The $t$-encoding for $v$ is then
\begin{equation*}
(-10)(-10)(-30)(-20)(-10)(-20)(-30) (-20)(-10)(-30)(-10)(-20)(-20).
\end{equation*}
Note that $v$ does not have to be square-free. Note also that the
$t$-exponent of $v$ is $-10c$, where $10c$ is the $t$-exponent of
$u$.
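The four-step procedure can be carried out mechanically on $t$-encodings; the sketch below (helper name ours) reproduces the encoding of $v$ computed above from the encoding of $u$:

```python
def reverse_encoding(encoding):
    """Steps 1-4 of the text: expand the encoding of u into 0/1 slots
    between t^{10} blocks, reverse, flip 0 <-> 1, and negate the t's."""
    # Step 1: a symbol 10, 20 or 30 is one a preceded by 1, 2 or 3 blocks
    # of t^{10}; the slots record an a (1) or no a (0) after each block.
    slots = []
    for n in encoding[:-1]:
        slots += [0] * (n // 10 - 1) + [1]
    slots += [0] * (encoding[-1] // 10 - 1)
    # Steps 2 and 3: reverse, then swap a^0 and a^1.
    slots = [1 - e for e in reversed(slots)]
    # Step 4: read the flipped slots back off as a (negative) t-encoding.
    out, run = [], 10
    for e in slots:
        if e == 1:
            out.append(-run)
            run = 10
        else:
            run += 10
    out.append(-run)
    return out

u_enc = [10, 20, 30, 10, 30, 20, 10, 20, 30, 20, 10, 30]
assert reverse_encoding(u_enc) == [-10, -10, -30, -20, -10, -20, -30,
                                   -20, -10, -30, -10, -20, -20]
```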
Now to understand what motivated us to produce this $v$ from $u$,
consider the word $w=ua^2v=uat^0av$. This word is type $X$. Drawing
$w$ in a sheet of the Cayley graph\ we see that at every tenth level there is
an $a$ letter, either on the part going up the sheet (the $u$ part) or
the part going down (the $v$ part). See the left side of Figure
\ref{fig:sheet-tencodings}.
\begin{figure}[ht!]
\begin{tabular}{c}
\includegraphics[width=14.5cm]{pics/sheet_2.eps}
\end{tabular}
\caption{The word $w=ua^2v$ drawn in a sheet of the Cayley graph.}
\label{fig:sheet-tencodings}
\end{figure}
We will now show that $w$ is a geodesic. Consider the word $w'$
obtained from $w$ by commuting all $a$ letters to the right. Since
there is exactly one $a$ at every tenth level of $w$, we have
$w'=t^{10c}a^2t^{-10}(at^{-10})^{c-1}.$ Then $w'$ is a normal form $X$
word, since its $N$-run is of the form $200\ldots$ with no consecutive
non-zero entries. Thus by Lemma \ref{lem:geodNF} it is geodesic, and
since $w'$ has the same length as $w$ then $w$ is geodesic. So $w$ is
in $C\cap L$, it is accepted by the counter automaton $M$, and its
$t$-encoding is accepted by $M'$.
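The commuting step above uses the relation $at^{-1}=t^{-1}a^2$, a consequence of the defining relation $tat^{-1}=a^2$ of $BS(1,2)$. As an illustrative sketch (not part of the proof), these relations can be checked in the standard affine representation of $BS(1,2)$, in which an element $a^x t^k$ with $x\in\mathbb{Z}[1/2]$ is represented by the pair $(x,k)$; the function names below are our own.

```python
from fractions import Fraction

# BS(1,2) = <a, t | t a t^-1 = a^2>: represent a^x t^k, x in Z[1/2],
# by the pair (x, k), with (x1, k1)*(x2, k2) = (x1 + 2^k1 * x2, k1 + k2).
def mul(g, h):
    (x1, k1), (x2, k2) = g, h
    return (x1 + Fraction(2) ** k1 * x2, k1 + k2)

# Generators; capital letters denote inverses (A = a^-1, T = t^-1).
GEN = {'a': (Fraction(1), 0), 'A': (Fraction(-1), 0),
       't': (Fraction(0), 1), 'T': (Fraction(0), -1)}

def evaluate(word):
    g = (Fraction(0), 0)  # identity
    for letter in word:
        g = mul(g, GEN[letter])
    return g

# The defining relation t a t^-1 = a^2 ...
assert evaluate('taT') == evaluate('aa')
# ... and the rule used to commute a-letters to the right: a t^-1 = t^-1 a^2.
assert evaluate('aT') == evaluate('Taa')
```

Since the representation is faithful, equal pairs certify that two words represent the same group element, which is all the commuting argument needs.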
Applying the Swapping Lemma (Lemma \ref{lem:swap}) to the encoding of
$w$, we switch two adjacent subwords in the first half of $w$, that
is, in the $t$-encoding of $u$, which is square-free.
This new string is a $t$-encoding of some other word in the group,
which is an $X$ word, essentially the same as $w$ except that at some
level(s) we see a shift one step to the right in both sides of the
word (viewed in the sheet of the Cayley graph). See Figure
\ref{fig:sheet-swap}.
\begin{figure}[ht!]
\begin{tabular}{c}
\includegraphics[width=14.5cm]{pics/sheet_swap.eps}
\end{tabular}
\caption{Swapping two subwords in the $P$ part of $w$ leads to a
word with $t^{-1}a^2t^{-1}$.}
\label{fig:sheet-swap}
\end{figure}
When we commute $a$-letters to the right in this word, we will see
$t^{-1}a^2t^{-1}$ at some point(s) in the $N$-run, and thus the swapped
word is not a geodesic, so not in $C\cap L$, and this is a
contradiction. $\Box$
\section{Acknowledgements}
My sincere thanks to Bob Gilman, Ray Cho, Walter Neumann, Jon
McCammond, Susan Hermiller, Sarah Rees, Rick Thomas, Nik Ruskuc, Kim
Ruane, Mauricio Gutierrez, Sean Cleary, Jennifer Taback and Gretchen
Ostheimer for their help and suggestions that have all contributed to
this work. I wish to thank the reviewer of this paper for pointing out
that the normal form language described here is a 1-counter language,
as well as many other very useful suggestions and corrections. The
labels for the figures were produced using Andrew Rechnitzer's
\texttt{equation$\_$edit} program.
| {
"timestamp": "2004-11-08T16:07:59",
"yymm": "0411",
"arxiv_id": "math/0411166",
"language": "en",
"url": "https://arxiv.org/abs/math/0411166",
"abstract": "We give a language of unique geodesic normal forms for the Baumslag-Solitar group BS(1,2) that is context-free and 1-counter. We discuss the classes of context-free, 1-counter and counter languages, and explain how they are inter-related.",
"subjects": "Group Theory (math.GR)",
"title": "A context-free and a 1-counter geodesic language for a Baumslag-Solitar group",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.973240718366854,
"lm_q2_score": 0.727975443004307,
"lm_q1q2_score": 0.7084953431029406
} |
https://arxiv.org/abs/1702.08232 | Hajós-like theorem for signed graphs | The paper designs five graph operations, and proves that every signed graph with chromatic number $q$ can be obtained from all-positive complete graphs $(K_q,+)$ by repeatedly applying these operations. This result gives a signed version of the Hajós theorem, emphasizing the role of all-positive complete graphs played in the class of signed graphs, as played in the class of unsigned graphs. | \section{Introduction}
We consider a graph to be finite and simple, i.e., with no loops or multiple edges.
Let $G$ be a graph and $\sigma\colon\ E(G)\rightarrow \{1,-1\}$ be a mapping. The pair $(G,\sigma)$ is called a \emph{signed graph}. We say that $G$ is the \emph{underlying graph} of $(G,\sigma)$ and $\sigma$ is a \emph{signature} of $G$.
The \emph{sign} of an edge $e$ is the value of $\sigma(e)$, and the \emph{sign product} $sp(H)$ of a subgraph $H$ is defined as $sp(H)=\prod_{e\in E(H)}\sigma(e)$.
An edge is \emph{positive} if it has positive sign; otherwise, the edge is \emph{negative}.
A signature $\sigma$ is \emph{all-positive} (resp., \emph{all-negative}) if it has positive sign (resp., negative sign) on each edge.
A graph $G$ together with an all-positive signature is denoted by $(G,+)$ and similarly, $(G,-)$ denotes a signed graph where the signature is all-negative.
Throughout the paper, to distinguish it from ``a signed graph'' and from ``a multigraph'', ``a graph'' always refers to an unsigned simple graph.
Let $(G,\sigma)$ be a signed graph. For $v \in V(G)$, denote by $E(v)$ the set of edges incident to $v$.
A \emph{switching} at a vertex $v$ defines a signed graph $(G,\sigma')$ with $\sigma'(e) = -\sigma(e)$ if $e \in E(v)$, and $\sigma'(e) = \sigma(e)$ if $e \in E(G)\setminus E(v)$.
Two signed graphs $(G,\sigma)$ and $(G,\sigma^*)$ are {\em switch-equivalent} (briefly, \emph{equivalent}) if they can be obtained from each other by a sequence of switchings. We also say that
$\sigma$ and $\sigma^*$ are \emph{equivalent signatures} of $G$.
A signed graph $(G,\sigma)$ is \emph{balanced} if each circuit contains an even number of negative edges; otherwise, $(G,\sigma)$ is \emph{unbalanced}.
A signed graph $(G,\sigma)$ is \emph{antibalanced} if each circuit contains an even number of positive edges.
It is well known (see e.g. \cite{Raspaud_2011}) that $(G,\sigma)$ is balanced if and
only if $\sigma$ is switch-equivalent to an all-positive signature, and $(G,\sigma)$ is antibalanced
if and only if $\sigma$ is switch-equivalent to an all-negative signature.
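As a small illustration of these notions (a hedged sketch; the encoding of a signed graph as a list of triples $(u,w,\sigma(uw))$ is our own), switching at a vertex flips an even number of edges (zero or two) of any circuit, so the sign product of a circuit, and hence balance, is preserved:

```python
def switch(edges, v):
    # Negate the sign of every edge incident to v.
    return [(u, w, -s if v in (u, w) else s) for u, w, s in edges]

def is_balanced_cycle(edges):
    # For a single circuit, balance means the sign product is +1,
    # i.e. an even number of negative edges.
    sp = 1
    for _, _, s in edges:
        sp *= s
    return sp == 1

# An unbalanced 3-cycle: one negative edge.
tri = [(0, 1, 1), (1, 2, 1), (2, 0, -1)]
assert not is_balanced_cycle(tri)
# Switching at a vertex of the circuit flips exactly two of its edges,
# so the sign product is unchanged.
assert is_balanced_cycle(switch(tri, 0)) == is_balanced_cycle(tri)
```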
In the 1980s, Zaslavsky \cite{Zaslavsky_1982, Zaslavsky_1984} initiated the study on vertex colorings of signed graphs.
The natural constraints for a coloring $c$ of a signed graph $(G,\sigma)$ are that (1) $c(v) \not= \sigma(e) c(w)$ for each edge $e=vw$, and (2) the colors can be inverted under switching, i.e., equivalent signed graphs have the same chromatic number.
In order to guarantee these properties of a coloring, Zaslavsky \cite{Zaslavsky_1982} used
$2k+1$ ``signed colors" from the color set $\{-k, \dots, 0, \dots, k\}$ and studied the
interplay between colorings and zero-free colorings through the chromatic polynomial.
Recently, M\'a\v{c}ajov\'a, Raspaud and \v{S}koviera \cite{Raspaud_2014}
modified this approach. For $n = 2k+1$ let $M_n = \{0, \pm 1, \dots,\pm k\}$, and
for $n = 2k$ let $M_n = \{\pm 1, \dots,\pm k\}$.
A mapping $c$ from $V(G)$ to $M_n$ is a {\em signed $n$-coloring} of $(G,\sigma)$, if $c(v) \not= \sigma(e) c(w)$ for each edge $e=vw$.
They defined $\chi_{\pm}((G,\sigma))$ to be the smallest number $n$ such that $(G,\sigma)$ has a signed $n$-coloring, and called it the \textit{signed chromatic number}.
A distinct version of vertex colorings of signed graphs, defined by homomorphisms of signed graphs, was proposed in \cite{Edita_Sopena_2014}.
In \cite{KS_2015}, the authors studied circular colorings of signed graphs. The related integer $k$-coloring of a signed graph $(G,\sigma)$ is defined as follows.
Let $\mathbb{Z}_k$ denote the cyclic group of integers modulo $k$, and the inverse of an element $x$ is denoted by $-x$.
A function $c : V(G) \rightarrow \mathbb{Z}_k$ is a \emph{$k$-coloring} of $(G,\sigma)$ if $c(v) \not= \sigma(e) c(w)$ for each edge $e=vw$. Clearly,
such colorings satisfy the constraints (1) and (2) of a vertex coloring of signed graphs.
The {\em chromatic number $\chi((G,\sigma))$} of a signed graph $(G,\sigma)$ is the smallest $k$ such that $(G,\sigma)$ has a $k$-coloring.
As shown in \cite{KS_2015}, two equivalent signed graphs have the same chromatic number.
In this paper, we follow this version of vertex colorings of signed graphs.
Many questions concerning the colorings of signed graphs have been discussed. The signed chromatic number $\chi_{\pm}$ of signed graphs is studied in \cite{Raspaud_2014} and \cite{Stiebitz_2015}.
The chromatic spectrum and signed chromatic spectrum of signed graphs are given in \cite{Yingli_2015_00614}.
A few classical results concerning the choice number of graphs are generalized to signed graphs in \cite{Steffen_2015}.
This paper addresses an analogue of a well-known theorem of Haj\'{o}s for signed graphs.
In 1961, Haj\'{o}s proved a result on the chromatic number of graphs, which is one of the classical results in the field of graph colorings. This result has several equivalent formulations, one of which is given by the following two theorems.
\begin{theorem}[\cite{Hajos_1961}] \label{thm_closed}
The class of all graphs that are not $q$-colorable is closed under the following three operations:
\begin{enumerate}[(1)]
\setlength{\itemsep}{-0.1cm}
\item Add vertices or edges;
\item Identify two nonadjacent vertices;
\item Let $G_1$ and $G_2$ be two vertex-disjoint graphs with $a_1b_1\in E(G_1)$ and $a_2b_2\in E(G_2)$. Make a graph $G$ from $G_1\cup G_2$ by removing $a_1b_1$ and $a_2b_2$, identifying $a_1$ with $a_2$, and adding a new edge between $b_1$ and $b_2$ (see Figure \ref{oper_3}).
\end{enumerate}
\end{theorem}
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{oper_3.pdf} \\
\caption{Operation $(3)$} \label{oper_3}
\end{figure}
Operation (3) is known as the Haj\'{o}s construction in the literature.
\begin{theorem}[Haj\'{o}s theorem, \cite{Hajos_1961}] \label{thm_Hajos}
Every non-$q$-colorable graph can be obtained by Operations (1)-(3) from the complete graph $K_{q+1}$.
\end{theorem}
The Haj\'{o}s theorem has been generalized in several different ways, by considering colorings more general than vertex $k$-colorings of graphs.
The analogues of the Haj\'{o}s theorem are proposed for list-colorings \cite{Gravier_1996}, for weighted colorings \cite{Araujo_2013}, and for group colorings \cite{An_2010}.
However, all of these extensions are still restricted to unsigned graphs.
In this paper, we analogously establish a result on the chromatic number $\chi$ of signed graphs, which generalizes the result of Haj\'{o}s to signed graphs. Hence, this result is a signed version of the Haj\'{o}s theorem and is called the Haj\'{o}s-like theorem for signed graphs (briefly, the Haj\'{o}s-like theorem).
To prove this theorem, we consider signed multigraphs rather than signed simple graphs.
Indeed, for vertex colorings of signed multigraphs, it suffices to consider signed bi-graphs, a subclass of signed multigraphs in which no two edges of the same sign connect the same pair of vertices.
Clearly, signed bi-graphs contain signed simple graphs as a particular subclass.
Hence, the Haj\'{o}s-like theorem holds for signed bi-graphs and in particular, for signed graphs.
Moreover, the theorem shows that, for the class of signed bi-graphs, the complete graphs together with an all-positive signature play the same role as they play for the class of unsigned graphs.
The rest of the paper is organized as follows.
In section \ref{sec_complete}, we design five operations on signed bi-graphs and show that, for any given positive integer $q$, the class of non-$q$-colorable signed bi-graphs is closed under these operations.
Moreover, we establish some lemmas necessary for the proof of the Haj\'{o}s-like theorem.
In section \ref{sec_Hajos}, we present the proof of the Haj\'{o}s-like theorem.
\section{Graph operations on signed bi-graphs} \label{sec_complete}
\subsection{Signed bi-graphs}
A \emph{bi-graph} is a multigraph having no loops and having at most two edges between any two distinct vertices.
Let $G$ be a bi-graph, $u$ and $v$ be two distinct vertices of $G$.
Denote by $E(u,v)$ the set of edges connecting $u$ to $v$, and let $m(u,v)=|E(u,v)|$.
Clearly, $0 \leq m(u,v)\leq 2$.
A bi-graph $G$ is \emph{bi-complete} if $m(x,y)=2$ for any $x,y \in V(G)$, and is \emph{just-complete} if $m(x,y)=1$ for any $x,y \in V(G)$.
A \emph{signed bi-graph} $(G,\sigma)$ is a bi-graph $G$ together with a signature $\sigma$ of $G$ such that any two multiple edges have distinct signs.
A bi-complete signed bi-graph of order $n$ is denoted by $(K_n,\pm)$.
It is not hard to see that $\chi((K_n,\pm))=2n-2$ and $\chi((K_n,+))=n$.
The concepts of $k$-coloring, chromatic number and switching of signed graphs are naturally extended to signed bi-graphs, working in the same way, and the related notations are inherited.
Let $(G,\sigma)$ be a signed multigraph. Between each pair of vertices, remove all the multiple edges of the same sign but one. We thereby obtain a signed bi-graph $(G',\sigma')$. We can see that $c$ is a $k$-coloring of $(G,\sigma)$ if and only if $c'$ is a $k$-coloring of $(G',\sigma')$, where $c'$ is the restriction of $c$ into $(G',\sigma')$.
Therefore, for the vertex colorings of signed multigraphs, it suffices to consider signed bi-graphs.
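The values $\chi((K_n,\pm))=2n-2$ and $\chi((K_n,+))=n$ stated above can be confirmed by brute force for small $n$. The sketch below (illustrative only; the edge-list encoding of a signed bi-graph is our own) enumerates $\mathbb{Z}_k$-colorings directly from the definition $c(v)\neq\sigma(e)c(w)$:

```python
from itertools import product

def is_coloring(edges, colors, k):
    # edges: list of (u, v, sign); colors: tuple indexed by vertex.
    # A Z_k-coloring must satisfy c(u) != sign * c(v) (mod k) on every edge.
    return all(colors[u] != (s * colors[v]) % k for u, v, s in edges)

def chromatic_number(n, edges):
    k = 1
    while True:
        if any(is_coloring(edges, c, k) for c in product(range(k), repeat=n)):
            return k
        k += 1

def signed_complete(n, signs):
    # One edge per listed sign between every pair of vertices.
    return [(u, v, s) for u in range(n) for v in range(u + 1, n) for s in signs]

n = 3
assert chromatic_number(n, signed_complete(n, [1])) == n              # (K_n, +)
assert chromatic_number(n, signed_complete(n, [1, -1])) == 2 * n - 2  # (K_n, ±)
```

For $(K_3,\pm)$ the search fails for $k=3$ because $\mathbb{Z}_3$ has only two classes $\{x,-x\}$, and succeeds for $k=4$, matching $2n-2$.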
\subsection{Graph operations}
Let $k$ be a nonnegative integer. A signed bi-graph is \emph{$k$-thin} if it is a bi-complete signed bi-graph minus at most $k$ pairwise vertex-disjoint edges. Clearly, if a signed bi-graph is $0$-thin, then it is bi-complete.
\begin{theorem} \label{thm_closed_sb}
The class of all signed bi-graphs that are not $q$-colorable is closed under the following operations:
\begin{enumerate}[(sb1)]
\setlength{\itemsep}{-0.1cm}
\item Add vertices or signed edges.
\item Identify two nonadjacent vertices.
\item Let $(G_1,\sigma_1)$ and $(G_2,\sigma_2)$ be two vertex-disjoint signed bi-graphs. Let $v$ be a vertex of $G_1$ and $e$ be a positive edge of $G_2$ with ends $x$ and $y$. Make a graph $(G,\sigma)$ from $(G_1,\sigma_1)$ and $(G_2,\sigma_2)$ by splitting $v$ into two new vertices $v_1$ and $v_2$, removing $e$, and identifying $v_1$ with $x$ and $v_2$ with $y$ (see Figure \ref{oper_sb3}).
\item Switch at a vertex.
\item When $q$ is even, remove a vertex that has at most $\frac{q}{2}$ neighbors; when $q$ is odd, remove a negative edge whose ends are connected by no other edges, identify these two ends, and add signed edges so that the resulting signed bi-graph is
$\frac{q-3}{2}$-thin.
\end{enumerate}
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{oper_sb3.pdf} \\
\caption{Operation $(sb3)$} \label{oper_sb3}
\end{figure}
\end{theorem}
\begin{proof}
Since Operations $(sb1),(sb2),(sb4)$ neither make loops nor decrease the chromatic number, it follows that the class of non-$q$-colorable signed bi-graphs is closed under these operations.
For Operation $(sb3)$, suppose to the contrary that $(G,\sigma)$ is $q$-colorable.
Let $c$ be a $q$-coloring of $(G,\sigma)$.
Denote by $x'$ and $y'$ the vertices of $G$ obtained from $x$ and $y$, respectively.
If $c(x')=c(y')$, then the restriction of $c$ to $G_1$, where $v$ is assigned the same color as $x'$ and $y'$, gives a $q$-coloring of $(G_1,\sigma_1)$, contradicting the fact that $(G_1,\sigma_1)$ is not $q$-colorable.
Hence, we may assume that $c(x')\neq c(y')$.
Note that $e$ is a positive edge of $(G_2,\sigma_2)$.
Thus the restriction of $c$ to $G_2$ gives a $q$-coloring of $(G_2,\sigma_2)$, contradicting the fact that $(G_2,\sigma_2)$ is not $q$-colorable.
Therefore, the statement holds for Operation $(sb3)$.
It remains to verify the theorem for Operation $(sb5)$.
For $q$ even, suppose to the contrary that the removal of a vertex $u$ from a non-$q$-colorable signed bi-graph $(G,\sigma)$ yields a $q$-colorable signed bi-graph. Let $\phi$ be a $q$-coloring of $(G,\sigma)-u$ using colors from the set $S=\{0,\pm 1,\ldots,\pm (\frac{q}{2}-1),\frac{q}{2}\}$.
Notice that each neighbor of $u$ makes at most two colors unavailable for $u$. Since $u$ has at most $\frac{q}{2}$ neighbors, $S$ still has a color available for $u$.
Hence, we can extend $\phi$ to be a $q$-coloring of $(G,\sigma)$, a contradiction.
For the case that $q$ is odd, let $(H,\sigma_H)$ be obtained from a non-$q$-colorable signed bi-graph $(H',\sigma_H')$ by applying this operation to a negative edge $e'$. Let $z$ be the resulting vertex from the two ends of $e'$, say $x'$ and $y'$.
Suppose to the contrary that $(H,\sigma_H)$ is $q$-colorable. Let $\psi$ be a $q$-coloring of $(H,\sigma_H)$ using colors from the set $\{0,\pm 1,\ldots,\pm (\frac{q-1}{2})\}$.
If $\psi(z)\neq 0$, then by assigning $x'$ and $y'$ with the color $\psi(z)$, we complete a $q$-coloring of $(H',\sigma_H')$, a contradiction.
Hence, we may assume that $\psi(z)=0$.
For $0\leq i\leq \frac{q-1}{2}$, let $V_i=\{v\in V(H)\colon\ |\psi(v)|=i\}$.
Clearly, each $V_i$ is an antibalanced set and in particular, $V_0$ is an independent set.
Since $(H,\sigma_H)$ is $\frac{q-3}{2}$-thin, we can deduce that there exists $p\in \{1,\ldots,\frac{q-1}{2}\}$ such that $|V_p|=1$.
Exchanging the colors between $V_0$ and $V_p$, and then assigning $x'$ and $y'$ the same color as $z$, we obtain a $q$-coloring of $(H',\sigma_H')$ from $\psi$, a contradiction.
\end{proof}
\subsection{Useful lemmas}
Operation $(sb3)$ can be extended to the following one which works for signed bi-graphs.
\begin{enumerate}[(sb3')]
\item Let $(G_1,\sigma_1)$ and $(G_2,\sigma_2)$ be two vertex-disjoint signed bi-graphs.
For each $i\in \{1,2\}$, let $e_i$ be an edge of $G_i$ with ends $x_i$ and $y_i$.
Make a graph $(G,\sigma)$ from $G_1\cup G_2$ by removing $e_1$ and $e_2$, identifying $x_1$ with $x_2$, and adding a new edge $e$ between $y_1$ and $y_2$ with $\sigma(e)=\sigma_1(e_1)\sigma_2(e_2)$.
\end{enumerate}
\begin{lemma} \label{lem_comb}
Operation $(sb3')$ is a combination of Operations $(sb3)$ and $(sb4)$.
\end{lemma}
\begin{proof}
We use the notations in the statement of Operation $(sb3')$.
First assume that at least one of $e_1$ and $e_2$ is a positive edge.
Without loss of generality, say $e_1$ is positive.
We apply Operation $(sb3)$ to $(G_1,\sigma_1)$ and $(G_2,\sigma_2)$ where $e_1$ is removed, $x_2$ is split into two new vertices $x_2'$ and $x_2''$ with $y_2$ as the neighbor of $x_2'$ and all other neighbors of $x_2$ as the neighbors of $x_2''$, and then $x_2'$ is identified with $y_1$ and $x_2''$ is identified with $x_1$. The resulting signed bi-graph is exactly $(G,\sigma)$; we are done.
Hence, we may next assume that both $e_1$ and $e_2$ are negative edges.
Switch at $x_1$ in $(G_1,\sigma_1)$ and at $x_2$ in $(G_2,\sigma_2)$.
Since $e_1$ and $e_2$ are positive in the resulting signed bi-graphs, we may apply Operation $(sb3)$ similarly as above,
obtaining a signed bi-graph, which leads to $(G,\sigma)$ by switching again at $x_1$ and $x_2$.
\end{proof}
\begin{lemma}\label{lem_antibalanced}
A just-complete signed bi-graph is antibalanced if and only if the sign product on each triangle is $-1$,
and is balanced if and only if the sign product on each triangle is $1$.
\end{lemma}
\begin{proof}
For the first statement, since a just-complete signed bi-graph $(G,\sigma)$ is exactly a complete signed graph, $G$ is antibalanced if and only if the sign product on each circuit of length $k$ is $(-1)^k$. Hence, the proof for the necessity is trivial. Let us proceed to the sufficiency, which will be proved by induction on $k$.
Clearly, the statement holds for $k=3$ because of the assumption of the lemma. Assume that $k\geq 4$. Let $C$ be a circuit of length $k$.
Take any chord $e$ of $C$, which divides $C$ into two circuits $C_1$ and $C_2$. For $i\in \{1,2\}$, let $k_i$ denote the length of $C_i$. Thus, $k=k_1+k_2-2.$
By applying the induction hypothesis, we have $sp(C_i)=(-1)^{k_i}$. It follows that $sp(C)=sp(C_1)sp(C_2)=(-1)^k,$ the statement also holds.
The second statement can be argued in the same way as for the first one.
We only have to note the equivalence between $(G,\sigma)$ being balanced and the sign product on each circuit being $1$.
\end{proof}
A signed bi-graph of order $3r$ is \emph{$\triangledown$-complete} if it is $(K_{3r},\pm)$ minus $r$ pairwise vertex-disjoint all-positive triangles.
Clearly, a $\triangledown$-complete signed bi-graph is complete.
\begin{lemma} \label{lem_triangle-complete}
The $\triangledown$-complete signed bi-graph of order $3r$ can be obtained from $(K_{2r+1},+)$ by Operations (sb1)-(sb5).
\end{lemma}
\begin{proof}
Take $r+1$ copies of $(K_{2r+1},+)$, say $(H_i,+)$ of vertex set $\{v_i^0,\ldots,v_i^{2r}\}$ for $0\leq i\leq r$.
For each $j\in \{1,\ldots,r\}$, switch at $v_0^j$, and then apply Operation (sb3') to $H_0$ and $H_j$ so that $v_0^jv_0^{j+r}$ and $v_j^0v_j^{2j}$ are removed and that $v_0^j$ is identified with $v_j^0$, and finally identify $v_0^j$ with $v_0^{j+r}$.
The resulting signed bi-graph is denoted by $(G,\sigma)$.
By Theorem \ref{thm_closed_sb}, since $(K_{2r+1},+)$ is not $2r$-colorable,
$(G,\sigma)$ is not $2r$-colorable either.
Note that $v_0^0$ has precisely $r$ neighbors in $G$. We can apply Operation (sb5) to $v_0^0$, i.e., we remove $v_0^0$ from $(G,\sigma)$.
In the resulting signed bi-graph, for each $1\leq k\leq 2r$,
since $v_1^k,\ldots,v_r^k$ are pairwise nonadjacent, we can apply Operation (sb2) to identify them into one vertex.
Denote by $(H,\sigma_H)$ the resulting signed bi-graph.
We can see that $(H,\sigma_H)$ is of order $3r$ and moreover, for $1\leq j\leq r$, the signed bi-graph induced by $\{v_0^j,v_1^{2j},v_1^{2j-1}\}$ is an unbalanced triangle.
It follows that, by adding signed edges and switching if needed, we obtain the $\triangledown$-complete signed bi-graph of order $3r$ from $(H,\sigma_H)$.
\end{proof}
\begin{lemma}\label{lem_bicomplete}
$(K_r,\pm)$ can be obtained from $(K_{2r-2},+)$ by Operations (sb1)-(sb5).
\end{lemma}
\begin{proof}
Let $(G,\sigma)$ be a copy of $(K_{2r-2},+)$ of vertices $v_1,\ldots,v_{2r-2}$.
Clearly, $(G,\sigma)$ is not $(2r-3)$-colorable.
Switch at $v_1$ and apply Operation (sb5) to $v_1v_2$ so that each of $v_3v_4,v_5v_6,\ldots,v_{2r-5}v_{2r-4}$ has no multiple edges.
For each $i\in \{2,3,\cdots,r-2\}$, switch at $v_{2i}$ and apply Operation (sb5) to $v_{2i-1}v_{2i}$ so that no new signed edges are added.
The resulting signed bi-graph is exactly $(K_r,\pm)$.
\end{proof}
\section{Haj\'{o}s-like theorem}\label{sec_Hajos}
We will need the following definitions for the proof of the Haj\'{o}s-like theorem.
Let $(G,\sigma)$ be a signed bi-graph.
An \emph{antibalanced set} is a set of vertices that induce an antibalanced graph.
Let $c$ be a $k$-coloring of $(G,\sigma)$. The set of all vertices $v$ with the same value of $|c(v)|$ is called a \emph{partite set} of $(G,\sigma)$. Let $U$ and $V$ be two partite sets. They are \emph{completely adjacent} if $m(u,v)\geq 1$ for any $u\in U$ and $v\in V$, \emph{bi-completely adjacent} if $m(u,v)=2$ for any $u\in U$ and $v\in V$, and \emph{just-completely adjacent} if $m(u,v)=1$ for any $u\in U$ and $v\in V$.
Let $(G,\sigma)$ be a signed bi-graph. A sequence $(x,y,z)$ of three vertices of $G$ is a \emph{triple} if there exist three integers $a,b,c$ satisfying
the following three conditions:
\begin{enumerate}[(i)]
\setlength{\itemsep}{-0.1cm}
\item $a,b,c\in \{1,-1\}$,
\item $ab=c,$
\item $a\notin \{\sigma(e)\colon\ e\in E(x,y)\}$, $b\notin \{\sigma(e)\colon\ e\in E(x,z)\}$, and $c\in \{\sigma(e)\colon\ e\in E(y,z)\}$.
\end{enumerate}
The sequence $(a,b,c)$ is called a \emph{code} of $(x,y,z)$.
Note that a triple may have more than one code.
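Since the three conditions are purely mechanical, the codes of a candidate triple can be found by enumerating the four sign pairs $(a,b)$. The sketch below is an illustrative helper (not from the paper); its inputs are the sets of signs present on the edges between each pair of vertices:

```python
from itertools import product

def codes(signs_xy, signs_xz, signs_yz):
    # A code of (x, y, z) is (a, b, c) with a, b, c in {1, -1} and ab = c,
    # where a is absent from E(x,y), b is absent from E(x,z),
    # and c is present in E(y,z).
    return [(a, b, a * b) for a, b in product((1, -1), repeat=2)
            if a not in signs_xy and b not in signs_xz and a * b in signs_yz]

# As in Subcase 1.2 below: m(y,x)=1, m(y,z)=1, m(x,z)=2 makes
# (y, x, z) a triple (here with both single edges positive).
assert codes({1}, {1}, {1, -1}) == [(-1, -1, 1)]
# If both signs already appear between the first two vertices, no code exists.
assert codes({1, -1}, {1}, {1, -1}) == []
```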
\begin{theorem} \label{thm_Hajos_sb} (Haj\'{o}s-like theorem)
Every signed bi-graph with chromatic number $q$ can be obtained from $(K_q,+)$ by Operations (sb1)-(sb5).
\end{theorem}
\begin{proof}
Let $(G,\sigma)$ be a counterexample with minimum $|V(G)|$ and, subject to that, maximum $|E(G)|$.
We first claim that $(G,\sigma)$ is complete.
Suppose to the contrary that $G$ has two non-adjacent vertices $x$ and $y$.
Let $(G_1,\sigma_1)$ and $(G_2,\sigma_2)$ be obtained from a copy of $(G,\sigma)$ by identifying $x$ with $y$ into a new vertex $v$ and by adding a positive edge $e$ between $x$ and $y$, respectively.
Since $(G,\sigma)$ has chromatic number $q$, it follows from Theorem \ref{thm_closed_sb} that both $(G_1,\sigma_1)$ and $(G_2,\sigma_2)$ have chromatic number at least $q$. Note that $(K_i,+)$ can be obtained from $(K_j,+)$ by Operation (sb1) whenever $i>j$.
Thus by the minimality of $|V(G)|$, the graph $(G_1,\sigma_1)$ can be obtained from $(K_q,+)$ by Operations (sb1)-(sb5), and by the maximality of $|E(G)|$, so can $(G_2,\sigma_2)$.
We next show that $(G,\sigma)$ can be obtained from $(G_1,\sigma_1)$ and $(G_2,\sigma_2)$ by Operations $(sb2)$ and $(sb3)$, which contradicts the fact that $(G,\sigma)$ is a counterexample.
This contradiction completes the proof of the claim.
Apply Operation $(sb3)$ to $(G_1,\sigma_1)$ and $(G_2,\sigma_2)$ so that $e$ is removed and $v$ is split into $x$ and $y$.
In the resulting graph, identify each pair of vertices that corresponds to the same vertex of $G$ other than $x$ and $y$; we thereby obtain exactly $(G,\sigma)$.
We next claim that $(G,\sigma)$ has no triples. The proof of this claim is analogous to the one above. Suppose to the contrary that $(G,\sigma)$ has a triple, say $(x,y,z)$. Let $(a,b,c)$ be a code of $(x,y,z)$.
Take two copies of $(G,\sigma)$.
Add an edge $e_1$ with sign $a$ into one copy between $x$ and $y$, obtaining $(G',\sigma')$.
Add an edge $e_2$ with sign $b$ into the other copy between $x$ and $z$, obtaining $(G'',\sigma'')$.
Clearly, both $(G',\sigma')$ and $(G'',\sigma'')$ have chromatic number at least $q$. By the maximality of $|E(G)|$, they can be obtained by Operations (sb1)-(sb5) from $(K_q,+)$.
To complete the proof of the claim, it remains to show that $(G,\sigma)$ can be obtained from $(G',\sigma')$ and $(G'',\sigma'')$ by Operations (sb1)-(sb5).
Note that Operation (sb3') is a combination of Operations $(sb3)$ and $(sb4)$ by Lemma \ref{lem_comb}.
Apply Operation (sb3') to $(G',\sigma')$ and $(G'',\sigma'')$ so that $e_1$ and $e_2$ are removed, $x'$ is identified with $x''$, and an edge $e$ is added between $y'$ and $z''$.
We have $\sigma(e)=\sigma(e_1)\sigma(e_2)=ab=c$, which is the sign of an edge in $E(y,z)$.
By applying Operation (sb2) to each pair of vertices that are the copies of the same vertex of $G$ except $x$, we obtain $(G,\sigma)$.
We proceed with the proof of the theorem by distinguishing two cases according to the parity of $q$.
Case 1: $q$ is odd.
Since $\chi((G,\sigma))=q$, the vertex set $V(G)$ can be divided into $k$ partite sets $V_1,\ldots,V_{k}$, where $k=\frac{q+1}{2}$, so that $V_1$ is an independent set and all others are antibalanced sets but not independent.
It follows that $|V_i|\geq 2$ for all $i\in \{2,\ldots,k\}$. By the first claim, $|V_1|=1$, and the graphs induced by $V_2,\dots,V_k$ are just-complete and moreover,
every two partite sets are completely adjacent.
Subcase 1.1: every two partite sets are bi-completely adjacent.
Take the vertex in $V_1$ and two arbitrary vertices from each of $V_2,\ldots,V_k$.
Let $(H,\sigma_H)$ be the signed bi-graph induced by these vertices.
Clearly, $|V(H)|=q.$
By the first claim and the assumption, we can see that $(H,\sigma_H)$ is a bi-complete signed bi-graph minus disjoint edges.
Hence, $(H,\sigma_H)$ can be obtained from $(K_q,+)$ by switching at vertices and adding signed edges.
Therefore, $(G,\sigma)$ can be obtained from $(K_q,+)$ by Operations $(sb1)$ and $(sb4)$, a contradiction.
Subcase 1.2: there exist two partite sets $V_j$ and $V_l$ that are not bi-completely adjacent.
If $V_j$ and $V_l$ are not just-completely adjacent, then there always exist three vertices $x,y,z$, w.l.o.g., say $x\in V_j$ and $y,z\in V_l$, such that $m(x,y)=1$ and $m(x,z)=2.$
Note that $m(y,z)=1$. Thus, $(y,x,z)$ is a triple of $(G,\sigma)$, contradicting the second claim.
Hence, we may assume that $V_j$ and $V_l$ are just-completely adjacent.
Recall that both $V_j$ and $V_l$ induce just-complete signed bi-graphs.
Thus, $V_j\cup V_l$ induces a just-complete signed bi-graph as well, say $(Q,\sigma_Q)$.
Again by the second claim, every triangle in $(Q,\sigma_Q)$ has sign product $-1$.
Thus, $(Q,\sigma_Q)$ is antibalanced by Lemma \ref{lem_antibalanced}.
It follows that $1\in \{j,l\}$ since otherwise, the division of $V(G)$, obtained from $\{V_1,\ldots,V_{k}\}$ by merging $V_j$ with $V_l$,
yields $\chi((G,\sigma))\leq q-2$, a contradiction.
W.l.o.g., let $j=1$.
We next show that every other pair of partite sets is bi-completely adjacent.
Suppose to the contrary that there exist another two partite sets, say $V_s$ and $V_t$, that are not bi-completely adjacent.
By the same argument as above, $1\in\{s,t\}$. We may assume that $s=1$.
Let $u_1,u_l,u_t$ be a vertex of $V_1,V_l,V_t$, respectively, such that $m(u_1,u_l),m(u_1,u_t)\leq 1.$
Since $V_l$ and $V_t$ are bi-completely adjacent, we have $m(u_l,u_t)=2.$
It follows that $(u_1,u_l,u_t)$ is a triple of $(G,\sigma)$, contradicting the second claim.
Recall that $V_j\cup V_l$ is an antibalanced set. It follows that, except for $V_j$ and $V_l$, every other partite set contains at least 3 vertices since otherwise,
say $|V_r|=2$ with $r\notin \{j,l\}$, the division of $V(G)$, obtained from $\{V_1,\ldots,V_{k}\}$ by merging $V_j$ with $V_l$ and splitting $V_r$ into two independent sets, yields $\chi((G,\sigma))\leq q-1$, a contradiction.
Take a vertex from $V_j$, two vertices from $V_l$ and three vertices from each of the remaining partite sets.
Denote by $(H,\sigma_H)$ the signed bi-graph induced by these vertices. Clearly, $|V(H)|=\frac{3(q-1)}{2}$.
By the multiplicity of the edges in $(G,\sigma)$, we can see that $(H,\sigma_H)$ is a
$\triangledown$-complete signed bi-graph. By Lemma \ref{lem_triangle-complete}, $(H,\sigma_H)$ can be obtained from $(K_q,+)$ by Operations (sb1)-(sb5) and therefore, so can $(G,\sigma)$, a contradiction.
Case 2: $q$ is even.
Since $\chi((G,\sigma))=q$, the vertex set $V(G)$ can be divided into $k$ partite sets $V_1,\ldots,V_{k}$, where $k=\frac{q+2}{2}$, so that at least two of them are independent sets, say $V_1$ and $V_2$.
Subcase 2.1: every two partite sets are bi-completely adjacent.
Take a vertex from each partite set.
Clearly, these vertices induce $(K_\frac{q+2}{2},\pm)$.
Hence, $(G,\sigma)$ can be obtained from $(K_\frac{q+2}{2},\pm)$ by Operation (sb1).
By Lemma \ref{lem_bicomplete}, $(K_\frac{q+2}{2},\pm)$ can be obtained from $(K_q,+)$ by Operations (sb1)-(sb5), and therefore, so can $(G,\sigma)$, a contradiction.
Subcase 2.2: there exist two partite sets $V_j$ and $V_l$ that are not bi-completely adjacent. By a similar argument as in Subcase 1.2, we can deduce that $\{j,l\}=\{1,2\}$,
that every other pair of partite sets is bi-completely adjacent, and that $|V_3|,\ldots,|V_k|\geq 2.$ It follows from the first claim that $V_1\cup V_2$ induces two vertices with a negative edge between them.
Take a vertex from each of $V_1$ and $V_2$, and two vertices from each of the remaining partite sets.
We can see that the signed bi-graph, induced by these vertices, can be obtained from $(K_q,+)$ by adding signed edges and switching at vertices.
Therefore, $(G,\sigma)$ can be obtained from $(K_q,+)$ by Operations (sb1)-(sb5), a contradiction.
\end{proof}
\begin{corollary}
Every signed graph with chromatic number $q$ can be obtained from $(K_q,+)$ by Operations (sb1)-(sb5).
\end{corollary}
| {
"timestamp": "2017-02-28T02:11:24",
"yymm": "1702",
"arxiv_id": "1702.08232",
"language": "en",
"url": "https://arxiv.org/abs/1702.08232",
"abstract": "The paper designs five graph operations, and proves that every signed graph with chromatic number $q$ can be obtained from all-positive complete graphs $(K_q,+)$ by repeatedly applying these operations. This result gives a signed version of the Hajós theorem, emphasizing the role of all-positive complete graphs played in the class of signed graphs, as played in the class of unsigned graphs.",
"subjects": "Combinatorics (math.CO)",
"title": "Hajós-like theorem for signed graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9732407121576652,
"lm_q2_score": 0.7279754430043072,
"lm_q1q2_score": 0.7084953385828037
} |
https://arxiv.org/abs/2009.14226 | Union-Find Decoders For Homological Product Codes | Homological product codes are a class of codes that can have improved distance while retaining relatively low stabilizer weight. We show how to build union-find decoders for these codes, using a union-find decoder for one of the codes in the product and a brute force decoder for the other code. We apply this construction to the specific case of the product of a surface code with a small code such as a $[[4,2,2]]$ code, which we call an augmented surface code. The distance of the augmented surface code is the product of the distance of the surface code with that of the small code, and the union-find decoder, with slight modifications, can decode errors up to half the distance. We present numerical simulations, showing that while the threshold of these augmented codes is lower than that of the surface code, the low noise performance is improved. | \section{Review of Homological Product}
\label{review}
In this section, we review the homological product. The product we use is a ``multiple sector'' product rather than the ``single sector'' product used in \cite{bravyi2014homological}.
Throughout this paper, we consider CSS codes over qubits, meaning that all the vector spaces that we define are over $\mathbb{Z}_2$; this means that we may ignore signs throughout.
\subsection{Homology and cohomology}
We consider only $\mathbb{Z}_2$-linear chain complexes and $\mathbb{Z}_2$-homology.
A $D$-dimensional {\em chain complex} is a sequence of $\mathbb{Z}_2$-linear spaces ${\cal C}_i$
$$
\{0\}
\overset{\partial_{D+1}}{\longrightarrow}
{\cal C}_D
\overset{\partial_D}{\longrightarrow}
{\cal C}_{D-1}
\cdots
\overset{\partial_2}{\longrightarrow}
{\cal C}_1
\overset{\partial_1}{\longrightarrow}
{\cal C}_0
\overset{\partial_0}{\longrightarrow}
\{0\}
$$
equipped with $\mathbb{Z}_2$-linear maps $\partial_i$ called {\em boundary maps} such that
$
\partial_i \circ \partial_{i+1} = 0
$
for all $i=0, \dots, D$.
When no confusion is possible, we will omit the subscript $i$.
In the present work, all the spaces ${\cal C}_i$ will be finite dimensional. Vectors of ${\cal C}_i$ are called {\em $i$-chains}. We assume that a basis is fixed for each space ${\cal C}_i$ and we refer to the basis elements as {\em $i$-cells}.
An $i$-chain can be interpreted equivalently as a binary vector or as a subset of $i$-cells.
As a consequence, we can talk about the boundary of a cell or a set of cells.
Two subsets of the chain space play a central role in homology:
the space of {\em cycles} $\ker \partial_i$, denoted $\mathbf Z_i$,
and the space of {\em boundaries}, or trivial cycles, $\im \partial_{i+1}$, denoted $\mathbf B_i$.
The quotient $H_i = \mathbf Z_i / \mathbf B_i$ is the {\em homology} group. In our setting, it is a $\mathbb{Z}_2$-linear space.
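Since all spaces here are over $\mathbb{Z}_2$, these dimensions reduce to matrix ranks: $\dim H_i = \dim\ker\partial_i - \operatorname{rank}\partial_{i+1}$. The following toy sketch (ours, not from the paper) checks this for the cellulation of a circle with three vertices and three edges:

```python
import numpy as np

def gf2_rank(mat):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    m = np.array(mat, dtype=np.uint8) % 2
    rank = 0
    rows, cols = m.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]      # move pivot row up
        for r in range(rows):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]                  # clear the column
        rank += 1
    return rank

def homology_dim(d_i, d_ip1):
    """dim H_i = dim ker(partial_i) - rank(partial_{i+1}) over Z_2."""
    dim_ker = d_i.shape[1] - gf2_rank(d_i)       # dimension of the cycle space Z_i
    return dim_ker - gf2_rank(d_ip1)             # quotient by the boundaries B_i

# Circle with 3 vertices and 3 edges: each edge bounds its two endpoints.
d1 = np.array([[1, 0, 1],
               [1, 1, 0],
               [0, 1, 1]], dtype=np.uint8)       # C_1 -> C_0
d0 = np.zeros((0, 3), dtype=np.uint8)            # C_0 -> {0}
d2 = np.zeros((3, 0), dtype=np.uint8)            # {0} -> C_1
print(homology_dim(d0, d1), homology_dim(d1, d2))   # 1 1  (one component, one loop)
```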
We can obtain a new chain complex by replacing each boundary map $\partial_i$ by its transposed map $\partial^i$. Identifying each space ${\cal C}_i$ with its dual, we obtain the complex
$$
\{0\}
\overset{\partial^{-1}}{\longrightarrow}
{\cal C}_0
\overset{\partial^0}{\longrightarrow}
{\cal C}_{1}
\overset{\partial^1}{\longrightarrow}
\cdots
{\cal C}_{D-1}
\overset{\partial^{D-1}}{\longrightarrow}
{\cal C}_D
\overset{\partial^{D}}{\longrightarrow}
\{0\}
$$
The map $\partial^i$ will be called the {\em coboundary map}
and the space $\mathbf Z^i = \ker \partial^i$ is the space of {\em cocycles}
and $\mathbf B^i = \im \partial^{i-1}$ is the {\em coboundary} space.
The {\em cohomology group} is defined to be the quotient $H^{i} = \mathbf Z^i / \mathbf B^i$.
\subsection{General Products}
The product takes as input two codes.
We describe both codes in terms of chain complexes.
The large code will be defined by some chain complex ${\cal B}$ defined by a sequence of vector spaces ${\cal B}_j$ indexed by some integer $j$ and a boundary map $\partial_B$ from ${\cal B}_{j}$ to ${\cal B}_{j-1}$ such that $\partial_B^2=0$.
For some given $q$, we associate the qubits of the large code with $q$-cells, and we associate $X$- and $Z$-checks of the code with $(q-1)$-cells and $(q+1)$-cells, respectively, with the boundary operator $\partial$ defining the checks of the code in the usual way (original references include \cite{kitaev2003fault,freedman2001projective,bombin2007homological}; see \cite{bravyi2014homological} for a review).
For later use, let $L$ denote the collection of all cells.
The chain complex defining the large code may be obtained from some cellulation of a manifold, in which case we call it a {\it topological code}, but it need not be.
The fixed code $C$ is defined by some chain complex ${\cal C}$, with three vector spaces ${\cal C}_0,{\cal C}_1,{\cal C}_2$, with boundary operator $\partial_C$ which is a map from ${\cal C}_2$ to ${\cal C}_1$ and from ${\cal C}_1$ to ${\cal C}_0$ such that $\partial_C^2=0$. Qubits are associated with $1$-cells of ${\cal C}$ and $X$-checks and $Z$-checks are associated with ${\cal C}_0$ and ${\cal C}_2$ respectively. Assume that $C$ is an $[[n_C,k_C,d_C]]$ code with $n_X$ distinct $X$ checks and $n_Z$ distinct $Z$ checks. Assume further that there are no redundancies in the stabilizers of $C$ (i.e., no nontrivial product of stabilizers is the identity), so $n_X+n_Z+k_C=n_C$. Hence, the zeroth and second homology of ${\cal C}$ vanish.
The homological product code is given by taking the product of the complex defining the large code with the complex defining $C$, and then defining a code from that complex.
The product complex, which we denote ${\cal A}$, has spaces ${\cal A}_0,{\cal A}_1,\ldots$, with
$${\cal A}_i=\oplus_{j=0}^i ({\cal B}_j \otimes {\cal C}_{i-j}),$$
and the product complex has boundary operator
$$\partial=\partial_B \otimes I + I \otimes \partial_C.$$
Note that if the code is over qudits rather than qubits, there is a slightly more complicated sign structure for the boundary operator.
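The blockwise structure of $\partial=\partial_B \otimes I + I \otimes \partial_C$ can be made concrete with Kronecker products. The sketch below (ours, with arbitrary toy matrices) assembles the product boundary for two two-term complexes and checks that $\partial\circ\partial=0$, which holds because the two cross terms $\partial_B\otimes\partial_C$ cancel mod 2:

```python
import numpy as np

# Toy boundary maps over Z_2: dB for the large complex, dC for the fixed one.
dB = np.array([[1, 1, 0],
               [0, 1, 1]], dtype=np.uint8)       # B_1 -> B_0
dC = np.array([[1, 1]], dtype=np.uint8)          # C_1 -> C_0
nB1, nB0 = dB.shape[1], dB.shape[0]
nC1, nC0 = dC.shape[1], dC.shape[0]

I = lambda n: np.eye(n, dtype=np.uint8)

# A_2 = B_1(x)C_1, A_1 = (B_1(x)C_0) + (B_0(x)C_1), A_0 = B_0(x)C_0.
# d2 sends b(x)c to (b(x)dC.c, dB.b(x)c); d1 acts blockwise on A_1.
d2 = np.vstack([np.kron(I(nB1), dC),
                np.kron(dB, I(nC1))]) % 2
d1 = np.hstack([np.kron(dB, I(nC0)),
                np.kron(I(nB0), dC)]) % 2

print(d2.shape, d1.shape)        # (7, 6) (2, 7)
print(((d1 @ d2) % 2).any())     # False: the product is again a chain complex
```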
In the homological product code, qubits are identified with basis vectors of the space ${\cal A}_{q+1}$, i.e.,
$$({\cal B}_{q-1}\otimes {\cal C}_2) \oplus ({\cal B}_q \otimes {\cal C}_1) \oplus ({\cal B}_{q+1} \otimes {\cal C}_0).$$
Thus, for every $(q+1)$-cell of $L$, there are $n_X$ qubits, for every $q$-cell of $L$ there are $n_C$ qubits, and for every $(q-1)$-cell of $L$ there are $n_Z$ qubits.
The product code is an $[[n,k,d]]$ code where,
by the Kunneth formula, if the large code is an $[[n_B,k_B,d_B]]$ code, then given our assumption on the zeroth and second homology of ${\cal C}$, $$k=k_B k_C.$$
Logical $Z$ operators of the product are given by $(q+1)$-th homology classes (a particular choice of logical operator corresponds to a representative) and logical $X$ operators are given by $(q+1)$-th cohomology classes.
For a surface code on a square lattice, the number of $2$-cells is roughly $n_B/2$, depending on boundary conditions, and similarly for the number of $0$-cells.
So, in this case $$n\approx \frac{n_B (n_X+n_Z)}{2}+n_B n_C.$$
$X$-stabilizers of the product code are associated with basis vectors of ${\cal A}_q=$
$$({\cal B}_{q-2} \otimes {\cal C}_2) \oplus ({\cal B}_{q-1}\otimes {\cal C}_1) \oplus ({\cal B}_q \otimes {\cal C}_0),$$
so that for every $(q-2)$-cell there are $n_Z$ different stabilizers,
for every $(q-1)$-cell there are $k_C$ different stabilizers and for every $q$-cell there are $n_X$ different stabilizers.
Note that in many cases (for example, the surface code with $q=1$ or some LDPC codes), the first term
$({\cal B}_{q-2} \otimes {\cal C}_2)$ is absent.
The ``cells'' of the product complex will be basis vectors of spaces ${\cal A}_q$, and a cell $e$ of the product complex will be {\it associated} with some cell $f$ of $L$ if the basis vector corresponding to $e$ is some product of the basis vector corresponding to $f$ with some basis vector of a cell in the fixed code.
Given any vector $v$ in this space ${\cal A}_q$, and any $(q-1)$-cell $e\in L$, we define the {\it vector of coefficients} of $v$ on $e$ in the obvious way: it is given by the $n_C$ distinct coefficients of $v$ on the cells of the product complex which are associated to $e$.
\subsection{Simple Example}
As an example of this formalism, we give perhaps the simplest augmented surface code, the homological product of a surface code on a square lattice with a $[[4,2,2]]$ code with checks $XXXX$ and $ZZZZ$. Figure~\ref{fig:surface_code_x_422} shows the layout of the qubits and checks of the augmented surface code.
\begin{figure}
\centering
(a)
\includegraphics[scale=.45]{augmented_sc.pdf}
\bigskip
(b)
\includegraphics[scale=.45]{augmented_sc_stabilizers.pdf}
(c)
\includegraphics[scale=.45]{augmented_sc_logicals.pdf}
\caption{
(a) Homological product of a distance-three surface code with a $[[4, 2, 2]]$ code. Dashed lines represent the original lattice of the surface code.
Qubits are represented by circles and squares represent the $X$-checks. A check acts on its neighboring qubits. One can obtain a similar representation of $Z$-checks by considering the dual lattice.
(b) Two checks and their support. All the checks are local and have weight at most 6.
(c) Two distinct $Z$ logical operators for the two logical qubits of the product code.
}
\label{fig:surface_code_x_422}
\end{figure}
This code has $4$ qubits per edge, one qubit per vertex, and one qubit per plaquette. It has one $X$ stabilizer on each edge $e$ given by
$$(\prod_{a=1}^4 X_e^a) (\prod_{p, e\in \partial p} X_p),$$
where $X_e^a$ for $a=1,\ldots,4$ are Pauli operators on the four qubits on edge $e$, the product is over plaquettes $p$ such that $e$ is in the boundary of $p$, and $X_p$ is a Pauli operator on the qubit on plaquette $p$.
This stabilizer is weight $6$.
It has four $X$ stabilizers on each vertex $v$ given by (for $a=1,\ldots,4$)
$$(\prod_{e, v\in \partial e} X_e^a) X_v,$$
where $X_v$ is a Pauli operator on the qubit on vertex $v$.
This stabilizer is weight $5$.
There is also one $Z$ stabilizer on each edge $e$ given by
$$(\prod_{a=1}^4 Z_e^a) (\prod_{v\in \partial e} Z_v),$$
and four $Z$ stabilizers on each plaquette $p$ given by
$$(\prod_{e\in \partial p} Z_e^a) Z_p.$$
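As a sanity check of the stabilizers above (our sketch, using a $2\times 2$ torus with wraparound indexing), one can build their supports explicitly and verify that every $X$ stabilizer overlaps every $Z$ stabilizer on an even number of qubits, with weights $5$ and $6$ as claimed:

```python
from itertools import product

m = 2  # 2x2 torus
V = list(product(range(m), repeat=2))                      # vertices (i, j)
E = [(d, i, j) for d in 'hv' for i, j in product(range(m), repeat=2)]
P = list(product(range(m), repeat=2))                      # plaquettes (i, j)

def endpoints(e):
    d, i, j = e
    return [(i, j), ((i + 1) % m, j) if d == 'h' else (i, (j + 1) % m)]

def boundary(p):   # the four edges of plaquette (i, j)
    i, j = p
    return [('h', i, j), ('h', i, (j + 1) % m), ('v', i, j), ('v', (i + 1) % m, j)]

# Qubit labels: ('e', e, a) for the 4 copies on edge e; ('v', v); ('p', p).
x_stabs = [{('e', e, a) for a in range(4)}
           | {('p', p) for p in P if e in boundary(p)} for e in E]
x_stabs += [{('e', e, a) for e in E if v in endpoints(e)} | {('v', v)}
            for v in V for a in range(4)]
z_stabs = [{('e', e, a) for a in range(4)}
           | {('v', v) for v in endpoints(e)} for e in E]
z_stabs += [{('e', e, a) for e in boundary(p)} | {('p', p)}
            for p in P for a in range(4)]

print(sorted({len(s) for s in x_stabs}))   # [5, 6]
print(all(len(sx & sz) % 2 == 0 for sx in x_stabs for sz in z_stabs))  # True
```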
\begin{figure}
\centering
\includegraphics[scale=.5]{distance_vs_overhead}
\caption{
Qubit overhead $n/k$ as a function of the minimum distance for the toric code
and augmented toric code.
The toric code reaches $d=30$ with 900 physical qubits
per logical qubit while the augmented toric code requires only
562.5 physical qubits per logical qubit saving almost $40\%$
of the qubits.
}
\label{fig:distance_vs_overhead}
\end{figure}
For any cellulation, the code has twice as many logical qubits as the surface code does.
Based on an $m \times m$ square lattice on a torus, one can define a
$[[2m^2, 2, m]]$ toric code. Its product with the $[[4, 2, 2]]$ code yields a
$[[10m^2, 4, 2m]]$ code which uses more physical qubits but encodes
twice as many logical qubits with a doubled minimum distance (Section~\ref{distance}).
Overall, this strategy leads to a significant reduction of the qubit overhead required to
achieve a given minimum distance as shown in Figure~\ref{fig:distance_vs_overhead}.
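The $d=30$ comparison in Figure~\ref{fig:distance_vs_overhead} can be reproduced with a few lines of arithmetic (our sketch), using the parameters $[[2m^2,2,m]]$ and $[[10m^2,4,2m]]$ from above:

```python
def toric_overhead(d):
    # tc(m) = [[2m^2, 2, m]]: distance d needs m = d, so n/k = 2m^2/2 = m^2.
    return 2 * d * d / 2

def augmented_overhead(d):
    # tc(m) x [[4,2,2]] = [[10m^2, 4, 2m]]: distance d needs m = d/2 (d even).
    m = d / 2
    return 10 * m * m / 4

print(toric_overhead(30), augmented_overhead(30))       # 900.0 562.5
print(1 - augmented_overhead(30) / toric_overhead(30))  # 0.375, i.e. almost 40% saved
```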
\section{Decoding Augmented Surface Codes}
\label{sddecode}
We now give our decoding algorithm for the case where the large code is a surface code and the fixed code has no redundancies in its checks.
Since we consider CSS codes, we consider
just the correction of $Z$-errors using $X$-type measurements.
By duality, $X$-type errors can be corrected with the same procedure.
So, for us, the error patterns will be on $2$-cells of the product complex and
the syndrome of an error will be the set of 1-cells on its boundary.
Before giving a detailed, technical description of decoding, let us review the union-find decoder for the surface code at a high level. The reader should see \cite{delfosse2017almost} for more details. The algorithm works by growing {\it clusters} of cells. Initially, there is exactly one cluster for each error, with the cluster containing just the single cell on which that error occurs. As the algorithm runs, clusters grow by adding $1$-cells, and when clusters join together, they merge into a single, larger cluster. A cluster is {\it valid} if it contains an even number of errors, which means that the error pattern on that cluster can be produced just by errors on the edges in that cluster. Clusters are not valid if they contain an odd number of errors, in which case, to produce the observed error pattern, there must be at least one error on an edge leaving the cluster. This motivates the growth rules: only clusters which are not valid grow while clusters which are valid do not grow (though they can merge with a growing cluster which is not valid). Eventually, once growth stops and all clusters are valid, the decoder then applies a correction supported just on the clusters. The name ``union-find'' refers to a data structure used to keep track of the growing clusters. The decoder here is a generalization of the union-find decoder, using in particular a more general way to determine validity of clusters.
\subsection{Error and syndrome}
To define the surface code, consider a finite cellulation $(V, E, F)$ of a closed surface
with vertex set $V$, edge set $E$ and face set $F$.
The chain complex ${\cal B}$ of the surface code is the chain complex of the cellulation
defined over the spaces ${\cal B}_2 = \mathbb{Z}_2^F$,
${\cal B}_1 = \mathbb{Z}_2^E$ and ${\cal B}_0 = \mathbb{Z}_2^V$.
The fixed code $C$ is a $[[n_C, k_C]]$ CSS code defined by a pair of binary
matrices $\mathbf H_X, \mathbf H_Z$ with $n_C$ columns satisfying $\mathbf H_X \cdot \mathbf H_Z^T = 0$.
The $n_X$ rows of $\mathbf H_X$ correspond to the $X$ type stabilizer generators
and $n_Z$ rows of $\mathbf H_Z$ define the $Z$ type stabilizer generators of $C$.
Equivalently, the code $C$ can be described by the chain complex
${\cal C}_2 = \mathbb{Z}_2^{n_Z} \rightarrow {\cal C}_1 = \mathbb{Z}_2^{n_C} \rightarrow {\cal C}_0 = \mathbb{Z}_2^{n_X}$.
The matrices $\mathbf H_X$ and $\mathbf H_Z^T$ are the matrices of the
boundary maps ${\cal C}_1 \rightarrow {\cal C}_0$ and ${\cal C}_2 \rightarrow {\cal C}_1$
respectively.
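For the $[[4,2,2]]$ code with single checks $XXXX$ and $ZZZZ$, both matrices are a single row of ones, and the chain-complex condition (the composition of the two boundary maps vanishing, i.e. $\mathbf H_X \mathbf H_Z^T = 0$ mod 2) is just $4 = 0 \bmod 2$. A quick sketch (ours):

```python
import numpy as np

H_X = np.array([[1, 1, 1, 1]], dtype=np.uint8)   # matrix of C_1 -> C_0
H_Z = np.array([[1, 1, 1, 1]], dtype=np.uint8)   # H_Z^T is the matrix of C_2 -> C_1

print(((H_X @ H_Z.T) % 2 == 0).all())            # True: boundary of a boundary is 0

n_C, n_X, n_Z = 4, 1, 1
k_C = n_C - n_X - n_Z       # no check redundancies, so k = n - (#checks)
print(k_C)                  # 2 logical qubits, as for the [[4,2,2]] code
```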
Qubits of the augmented surface code are placed on the $2$-cells of the homological product
${\cal A} = {\cal B} \otimes {\cal C}$.
A $Z$ error, {\em i.e.} a 2-chain
$x \in {\cal A}_2 = ({\cal B}_0 \otimes {\cal C}_2) \oplus ({\cal B}_1 \otimes {\cal C}_1) \oplus ({\cal B}_2 \otimes {\cal C}_0)$
can be uniquely written in the standard form
\begin{equation} \label{eqn:error}
x = \sum_{v \in V} v \otimes x(v) + \sum_{e \in E} e \otimes x(e) + \sum_{f \in F} f \otimes x(f)
\end{equation}
where $x(v) \in {\cal C}_2, x(e) \in {\cal C}_1$ and $x(f) \in {\cal C}_0$.
The 2-chain associated with trivial vectors $x(v), x(e)$ and $x(f)$ for all the cells of
the surface is the trivial 2-chain.
The syndrome $s(x)$ of an error $x$ as in \eqref{eqn:error} is obtained by applying the
boundary map of the product complex:
\begin{equation} \label{eqn:syndrome_eval}
s(x) = \sum_{v \in V} v \otimes \mathbf H_Z^T x(v)^T
+ \sum_{e \in E} e \otimes \mathbf H_X x(e)^T + \sum_{e \in E} \partial(e) \otimes x(e)
+ \sum_{f \in F} \partial(f) \otimes x(f)
\end{equation}
One can write this syndrome using the standard form
\begin{equation} \label{eqn:syndrome}
s = \sum_{v \in V} v \otimes s(v) + \sum_{e \in E} e \otimes s(e)
\end{equation}
with $s(v) \in {\cal C}_1$ and $s(e) \in {\cal C}_0$. The first and third terms of \eqref{eqn:syndrome_eval}
contribute to the vectors $s(v)$ and the second and fourth terms contribute to
$s(e)$.
\subsection{Generalization of the Union-Find decoder}
In order to design a decoder for augmented surface codes, we
use a decoder $D_X: {\cal C}_0 \rightarrow {\cal C}_1$
and a decoder $D_Z^T: {\cal C}_1 \rightarrow {\cal C}_2$
for the linear codes with parity check matrices $\mathbf H_X$ and
$\mathbf H_Z^T$ respectively. For a small CSS code, these two decoders
can be given as a lookup table. We can assume that they provide
a minimum weight correction.
Pick a family $x_1, \dots, x_{k_C} \in {\cal C}_1$ representing ${k_C}$ independent $X$ logical operators of the fixed CSS code $C$ and let
$z_1, \dots, z_{k_C} \in {\cal C}_1$ be independent $Z$ logical operators of $C$
such that $(x_i | z_j) = \delta_{i, j} \pmod 2$.
The union-find decoder will grow connected clusters in the surface until
all the clusters can be erased and corrected independently given the syndrome
$s$.
Such a cluster is said to be a {\em valid cluster}.
To determine if a cluster $\kappa$ is valid, we compute the validity vector
\begin{align} \label{eqn:validity}
\val(\kappa) = \sum_{v \in \kappa} ( (s(v) | x_1), \dots, (s(v) | x_{k_C}) ) \in \mathbb{Z}_2^{k_C}
\end{align}
A cluster is valid iff its validity vector is trivial.
In Section~\ref{valid}, we will prove that a valid cluster contains a valid error pattern.
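For illustration, here is a minimal sketch (ours) of \eqref{eqn:validity} for the $[[4,2,2]]$ fixed code; the logical $X$ representatives $x_1 = X_1X_2$, $x_2 = X_1X_3$ are one standard choice:

```python
import numpy as np

x_logicals = np.array([[1, 1, 0, 0],     # x_1 = X_1 X_2 (one standard choice)
                       [1, 0, 1, 0]],    # x_2 = X_1 X_3
                      dtype=np.uint8)

def validity(node_syndromes):
    """val(kappa) = sum over nodes v of ((s(v)|x_1), ..., (s(v)|x_k)) mod 2."""
    val = np.zeros(len(x_logicals), dtype=np.uint8)
    for s_v in node_syndromes:
        val ^= (x_logicals @ s_v) % 2
    return val

# A single node with syndrome s_u is an invalid cluster ...
s_u = np.array([1, 0, 1, 0], dtype=np.uint8)
print(list(map(int, validity([s_u]))))        # [1, 0]: invalid, so the cluster grows
# ... but a two-node cluster whose syndromes differ by the Z stabilizer ZZZZ
# is valid: the inner products cancel pairwise.
s_v = (s_u + np.array([1, 1, 1, 1], dtype=np.uint8)) % 2
print(list(map(int, validity([s_u, s_v]))))   # [0, 0]: valid
```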
\begin{algorithm}
\caption{Union-Find decoder for augmented surface codes}
\label{algo:uf_decoder}
\begin{algorithmic}[1]
\REQUIRE The syndrome $s(x)$ as in \eqref{eqn:syndrome} of an error $x \in {\cal A}_2$.\\
\ENSURE An estimate $\tilde x \in {\cal A}_2$ of $x$.
\STATE Set $\tilde x = 0$ and ${\cal E} = \{ e \in E \ | \ s(e) \neq 0 \}$.
\STATE {\bf Edge cancellation:}
\STATE Run over all edges $e = \{ u, v \} \in E$ and do:
\STATE \hspace{1cm} Compute $\tilde x(e) = D_X(s(e))$.
\STATE \hspace{1cm} Add $e \otimes \tilde x(e)$ to $\tilde x$.
\STATE \hspace{1cm} Add $u \otimes \tilde x(e)$ and $v \otimes \tilde x(e)$ to $s$.
\STATE {\bf Union-Find growth:}
\STATE Initialize clusters with a single vertex $\kappa_{v} = \{v\}$.
\STATE Merge clusters connected by an edge of ${\cal E}$.
\STATE While there exists at least one invalid cluster do:
\STATE \hspace{1cm} Select an invalid cluster $\kappa$ with minimum boundary.
\STATE \hspace{1cm} Grow $\kappa$ by one half-edge.
\STATE \hspace{1cm} Update the validity vector of $\kappa$.
\STATE Set ${\cal E}'$ to be the set of all edges fully covered by the grown clusters.
\STATE {\bf Peeling:}
\STATE Construct a spanning forest ${\cal F}$ of the subgraph ${\cal E}'$ of the surface.
\STATE While ${\cal F} \neq \emptyset$ do:
\STATE \hspace{1cm} Select an edge $e = \{ u, v \}$ of ${\cal F}$ such that $u$ is a leaf.
\STATE \hspace{1cm} Add $e \otimes s(u)$ to $\tilde x$.
\STATE \hspace{1cm} Add $u \otimes s(u)$ and $v \otimes s(u)$ to $s$.
\STATE \hspace{1cm} For $i=1, \dots, k_C$ do:
\STATE \hspace{2cm} If $(s(u)|x_i) = 1$ do:
\STATE \hspace{3cm} Add $e \otimes z_i$ to $\tilde x$.
\STATE \hspace{3cm} Add $u \otimes z_i$ and $v \otimes z_i$ to $s$.
\STATE \hspace{1cm} Remove $e$ from ${\cal F}$
\STATE {\bf Residual Node Correction:}
\STATE For all $v \in V$ do:
\STATE \hspace{1cm} Compute $\tilde x(v) = D_Z^T(s(v))$.
\STATE \hspace{1cm} Add $v \otimes \tilde x(v)$ to $\tilde x$.
\STATE Return $\tilde x$.
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{algo:uf_decoder} works in four steps.
First, we eliminate the components of the syndrome on edges.
A syndrome component $e \otimes s(e) \in {\cal B}_1 \otimes {\cal C}_0$ is killed
(moved to the endpoints of $e$) by adding a $Z$ error on
the 2-cell $e \otimes D_X(s(e)) \in {\cal B}_1 \otimes {\cal C}_1$.
We will treat this edge set ${\cal E} \subset E$ as an erasure
in the surface.
The second step is a growth step similar to the union-find
decoder. One can erase and decode a cluster iff it is valid.
We apply the same growth procedure as in the union-find decoder
with erasure ${\cal E}$ and with the validity function defined above
based on the syndrome values $s(v)$ on nodes.
Once valid clusters are grown, they are erased and the peeling
decoder \cite{delfosse2020peeling} is used to identify a correction that explains the
syndrome inside each cluster.
This peeling step applies a
correction on the edges of a spanning tree of each cluster.
Over an edge $e = \{u, v\}$, two types of correction are applied.
(i) A correction $e \otimes s(u)$ moves the local syndrome $s(u)$
from $u$ to $v$.
(ii) A correction of the form $e \otimes z_i$ is used to cancel the
validity vector at node $u$.
As a result, after peeling, each node $v$ of the cluster, except the root $v_0$,
supports a trivial syndrome $v \otimes s(v)$ with $s(v) = 0$.
Moreover, the validity vector of each node of the cluster is trivial
because the cluster is valid.
Finally, the residual syndrome $v_0 \otimes s(v_0)$ on each cluster root
$v_0$ is eliminated using the decoder $D_Z^T$ by applying a correction
$v_0 \otimes D_Z^T(s(v_0))$. This local decoder can be applied
because $s(v_0)$ is a $Z$ stabilizer of $C$.
Indeed, we will show that $s(v_0)$ is orthogonal with $X$ stabilizers
and $X$ logical operators of $C$.
The root $v_0$ satisfies $\val(v_0) = 0$, which means that $s(v_0)$
is orthogonal with all the $X$ logical operators of $C$.
Moreover, since the cluster is valid before peeling, the restriction of the syndrome
to the cluster is a 1-boundary in ${\cal A}$, and this property is preserved after peeling.
As a consequence, $v_0 \otimes s(v_0)$ is a 1-boundary and it satisfies
$\partial( v_0 \otimes s(v_0) ) = 0$ (because $\partial \circ \partial = 0$).
This implies $\mathbf H_X s(v_0)^T = 0$, showing that $s(v_0)$ is orthogonal with
all $X$ stabilizers.
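For the $[[4,2,2]]$ fixed code this claim can be checked exhaustively (our sketch; the logical representatives are the same standard choice as before): the only vectors of $\mathbb{Z}_2^4$ orthogonal to $XXXX$ and to both $X$ logicals are $0000$ and $1111$, i.e. exactly the row space of $\mathbf H_Z$.

```python
import numpy as np

# Orthogonality constraints: the X check XXXX and X logicals X1X2, X1X3.
M = np.array([[1, 1, 1, 1],
              [1, 1, 0, 0],
              [1, 0, 1, 0]], dtype=np.uint8)

# Enumerate all 16 candidates for s(v_0) in C_1 = Z_2^4.
candidates = np.array(list(np.ndindex(2, 2, 2, 2)), dtype=np.uint8)
stabs = [list(map(int, v)) for v in candidates if not ((M @ v) % 2).any()]
print(stabs)   # [[0, 0, 0, 0], [1, 1, 1, 1]] -- the Z stabilizer group of [[4,2,2]]
```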
One can slightly improve the performance of Algorithm~\ref{algo:uf_decoder} by modifying the growth procedure. This variant of the decoder is described in Appendix~\ref{app:UF_modified}.
\subsection{Numerical results}
In this section, we report the result of numerical simulation of augmented toric codes.
We consider the product of a rotated toric code with different CSS codes.
We only simulate the correction of $Z$-errors since the codes considered are
self-orthogonal.
To sample from $Z$-errors, we start from the trivial 2-chain and
we flip each bit of each vector $x(v), x(e)$ and $x(f)$ independently
with probability $p$. For simplicity, we do not consider the case of erasure errors.
We assume perfect syndrome extraction circuit. Our numerical simulations are based on the variant of Algorithm~\ref{algo:uf_decoder} proposed in Appendix~\ref{app:UF_modified}.
In Figure~\ref{fig:distance_vs_overhead} we observed that augmented topological codes can achieve the same distance as traditional topological codes with a reduced qubit overhead.
To illustrate further the advantage of augmented codes, we compare numerically the performance of toric codes and their product with a $[[4, 2, 2]]$ code. In both cases, the correction is performed using a union-find decoder.
Denote by $\tc(m)$ the toric code defined on the $m \times m$ lattice.
In Figure~\ref{fig:atc_vs_tc}, we compare the product $\tc(m) \otimes [[4, 2, 2]]$ with all the toric codes $\tc(\ell)$ achieving a smaller or equal qubit overhead.
Our numerical results show that augmented toric codes with $m \geq 3$ outperform toric codes in the regime of low physical error rate. At higher physical error rates, the augmented toric codes do not perform as well, and pseudo-thresholds extracted from these figures (i.e., the point at which logical error rate is equal to physical) are worse for the augmented codes.
As an example, the augmented toric code $\tc(6) \otimes [[4, 2, 2]]$ consumes 90 physical qubits per logical qubit and at low physical error rates it achieves a lower logical error rate than the toric code $\tc(9)$ which has comparable overhead.
The origin of the superiority of augmented toric codes is their increased minimum distance which reaches 12 for the code $\tc(6) \otimes [[4, 2, 2]]$, whereas the traditional toric code is limited to distance 9 for the same overhead.
\begin{figure}
\centering
\includegraphics[scale=.45]{atc3_vs_tc}
\includegraphics[scale=.45]{atc4_vs_tc}
\includegraphics[scale=.45]{atc5_vs_tc}
\includegraphics[scale=.45]{atc6_vs_tc}
\caption{Comparison of augmented toric codes $\tc(m) \otimes [[4, 2, 2]]$
with toric codes with smaller or equal overhead for $m=3, 4, 5, 6$.
Toric codes are decoded with the standard union-find decoder and Algorithm~\ref{algo:uf_decoder_modified} is used to decode augmented toric codes.
Augmented toric codes achieve a better logical error rate than toric codes in the regime of low physical error rate.
The overhead reported in these plots is the number of physical qubits per logical qubit.
}
\label{fig:atc_vs_tc}
\end{figure}
\subsection{Distance}
The results of \cref{distance} show that the distance of an augmented surface code equals the product of the distances of the large and fixed codes. Here we show the following.
\begin{lemma}
If the fixed code has distance $2$, then the union-find decoder (in its simplest form where we do not have separate clusters for each logical operator) decodes errors with weight less than half the distance. (It is already known\cite{delfosse2017almost} that for the {\it unaugmented} surface code, the union-find decoder has this property.)
Further, a modification of the algorithm (detailed in the proof of this lemma and not studied elsewhere in the paper) decodes errors with weight less than half the distance for any distance of the fixed code.
\begin{proof}
First consider the case where the fixed code has distance $2$.
Suppose an error of weight $w$ occurs.
Let there be $m$
edges in ${\cal E}$ before edge cancellation; these edges have at least one error on them.
So, the sum over diameters of clusters in ${\cal E}$ is bounded by $m$.
After edge cancellation, there may be some edges that have an error on them but are not in ${\cal E}$. Call this set of edges ${\cal H}$, for ``hidden''. There must be at least $2$ errors on each of these edges and so the number of such edges is at most $(w-m)/2$.
Now consider the growth process of clusters. (This step of the proof is almost exactly the same as the proof\cite{delfosse2017almost} that the union-find decoder decodes up to half the distance for a surface code.) If a cluster is not valid, then there must be some edge in ${\cal H}$ leaving the cluster.
A single step of the growth will cover half of one of the edges in ${\cal H}$, and will increase the sum of cluster diameters by at most $1$ (it is possible that it joins two clusters and so greatly increases the largest cluster diameter).
Suppose the growth process terminates after $t$ steps, at which point it has covered at least $t/2$ edges in ${\cal H}$ (hence, $t\leq w-m$) and the
sum of cluster diameters is bounded by $m+t\leq w$.
It is possible at this point that not all edges in ${\cal H}$ are covered: there may be paths in ${\cal H}$ going from
a cluster to itself, and the length of these paths is then bounded by $(w-m)/2-t/2=(w-m-t)/2$.
So,
the sum of the cluster diameters plus the sum of the lengths of these paths is bounded by $m+t+(w-m-t)/2 \leq w$.
So, the algorithm replaces the actual error with clusters of erasures plus possibly some operator which commutes with the stabilizers (i.e., due to paths in ${\cal H}$ which are not covered), with these errors supported on sets of diameter at most $w$.
If $w$ is smaller than the surface code distance, it decodes correctly.
Remark: we see that the worst case is when all edges in ${\cal H}$ are covered.
Now consider the case where the fixed code has distance $d>2$. We modify the algorithm as follows.
Each edge is broken into $d$ subedges. Note that $d$ may be odd. Rather than keeping a set of half edges, we keep a set of these subedges.
Let $w(e)$ denote the error weight on each edge, so that $w=\sum_e w(e)$.
We first perform the edge cancellation step. Let $c(e)$ denote the weight of the correction applied on edge $e$ if that correction weight is $\leq d/2$; if the correction weight is $>d/2$ then $c(e)=d/2$.
If an error of weight $w(e)$ occurs, with $w(e)\geq d/2$, then $c(e)\geq d-w(e)$.
We use a minimum weight decoder so that $c(e) \leq w(e)$. If we fix an error on an edge $e$ with a correction of some weight $c(e)$, we add the $c(e)$ closest subedges to each vertex attached to that edge to the subedge set, so that the total number of subedges on that edge in the subedge set is $2c(e)$. Thus, in the case that the distance is $2$, any correction means adding both half edges, but for larger distance, it is possible that only part of each edge is added.
We grow so that subedges are added starting closest to the vertex and moving outwards. Note that if $d$ is odd, it is possible that two clusters are separated by an odd number (for example, $1$) subedge; to deal with this, we add subedges to clusters sequentially, rather than in parallel, or, alternatively, one may further subdivide each subedge into two subsubedges to run growth in parallel.
Let $s(e)$ denote the total number of subedges on edge $e$ which are in the subedge set. Thus, initially $s(e)=2c(e)$, but $s(e)$ may increase as the algorithm runs.
The sum of cluster diameters initially is $\leq (2/d)\sum_e c(e)$.
After applying edge cancellation, the error applied on each edge may be a nontrivial logical operator.
Let ${\cal L}$ denote the set of such edges with a nontrivial logical operator, and for an edge $e\in {\cal L}$, let $h(e)=d-s(e)$ denote the ``hidden weight'' on that edge.
Let ${\cal H}$ denote the set of edges $e$ with $h(e)>0$: these are edges in ${\cal L}$ on which not all subedges are in the subedge set. Let the total hidden weight $h=\sum_{e\in {\cal H}} h(e)$.
Before any growth, immediately
after edge cancellation, the hidden weight is bounded by $h_0 \equiv \sum_{e \, {\rm s.t.} \, w(e)\geq d/2} (d-2c(e))$.
This hidden weight $h$ reduces as a result of the growth process.
If a cluster is not valid, then there must be some edge in ${\cal H}$ leaving the cluster. So, a single step of growth (growing on any single subedge) will reduce the hidden weight by at least $1$ and will increase the sum of cluster diameters by at most $2/d$.
So, if the algorithm terminates after $t$ steps, the sum of cluster diameters is upper bounded by $(2/d)(\sum_e c(e)+t)$,
and $t \leq h_0$.
There may possibly be some hidden weight that remains at the end of the algorithm: this is due to edges with hidden weight $d$
forming paths from a cluster to itself. Thus, the sum of cluster diameters plus the length of these paths
is bounded by $(2/d)(\sum_e c(e)+t)+(h_0-t)/d$. This is maximized at $t=h_0$, so this is bounded by $(2/d) \Bigl(\sum_e c(e)+h_0\Bigr)= (2/d) \Bigl( \sum_{e \, {\rm s.t.} \, w(e)<d/2} c(e)+\sum_{e \, {\rm s.t.} \, w(e) \geq d/2} (d-c(e))
\Bigr).$
Note that $c(e)\leq w(e)$ for all $e$ and also $d-c(e)\leq w(e)$ for all $w(e)\geq d/2$.
Hence, the sum of cluster diameters plus path lengths is bounded by $(2/d) \sum_e w(e)=(2/d) w$.
\end{proof}
\end{lemma}
\section{Union-Find Decoder in General}
\label{general}
We now consider how to construct a decoder for more general choices of large and fixed codes.
As before, we assume that the large code has a union-find decoder, and we construct a union-find decoder for the homological product.
The initialization and growth aspects of the decoder can be chosen in various ways (for example, growing each cluster by a minimal amount). What we will be concerned with is the routines for testing if a cluster is valid and for decoding a valid cluster. We will assume that those two routines exist for any cluster of the large code, and we will construct them for the product code for any choice of fixed code in \cref{valid}, \cref{decode}, subject only to the requirement that the fixed code have no redundancies in its stabilizers.
\subsection{Syndrome Validation}
\label{valid}
We consider the question:
given some set of qubits and some observed syndrome on some set of $X$ checks, is there a $Z$ error pattern on those
qubits which can give rise to that syndrome on those checks, and no errors on other checks? That is, is the cluster consisting of those qubits and those checks {\it valid}?
Define a {\it sub-cell complex} to be some set of cells of the large code (i.e., some cells of the cellulation if it is a topological code, and some set of $X$- and $Z$-checks and qubits if it is not), such that if a cell is in the subcomplex, all cells in its boundary are also (in the case of a code that is not a topological code, we mean that if some $Z$-stabilizer is in the subcomplex, then all qubits in its support are, and if a qubit is in the subcomplex, then all $X$-stabilizers supported on it are, i.e., given a vector corresponding to some cell of the subcomplex, all cells in the support of the boundary of that vector are in the subcomplex).
We write such a sub-cell complex with regular capital letters such as $P$; corresponding to such a sub-cell complex there is a sub-chain complex which we write with calligraphic letters such as ${\cal P}$; this is a set of vector spaces ${\cal P}_j$ corresponding to $j$-cells of the sub-cell complex $P$, and we define the obvious boundary operator on the sub-chain complex. For brevity, we will refer to both $P$ and ${\cal P}$ as subcomplexes.
Then, the question at the start of this section is equivalent to the question: given some subcomplex $P$
(such that the given set of qubits is the set of $(q+1)$-cells in $P$, and such that the given set of checks is the set of $q$-cells in $P$), is there a vector supported on ${\cal P}_q$ whose boundary
is the given syndrome? Note that since we consider a subcomplex, we are guaranteed that a $Z$ error on a qubit in $P$ will not produce an error on any stabilizer which is not in the set of $(q-1)$-cells of the complex.
Using the language of homology, asking if a syndrome vector can be written as a boundary is equivalent to asking: is the syndrome a closed chain (i.e., does its boundary vanish) and is the syndrome homologically trivial?
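As a concrete (if inefficient) illustration of this criterion, the following GF(2) linear-algebra sketch tests whether a syndrome vector lies in the image of a boundary matrix, i.e., whether it is a boundary. The helper names and bitmask encoding are our own assumptions, not the paper's:

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask."""
    rows = list(rows)
    rank = 0
    for i in range(len(rows)):
        pivot = rows[i]
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot                  # lowest set bit of the pivot row
        for j in range(len(rows)):
            if j != i and rows[j] & low:
                rows[j] ^= pivot              # eliminate that bit elsewhere
    return rank

def is_boundary(boundary_columns, v):
    """True iff v is an XOR combination of the columns of the boundary map,
    i.e., rank(D) == rank([D | v])."""
    return gf2_rank(boundary_columns) == gf2_rank(list(boundary_columns) + [v])
```

The union-find decoder avoids this generic elimination; the point of this section is precisely that, for a product subcomplex, the membership test reduces to cheaper checks.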
This question can be answered in a simple way in one particular important case: that the subcomplex ${\cal P}$ is a homological product of some subcomplex ${\cal M}$ in the large code with some subcomplex ${\cal \tilde C}$ in the fixed code.
We will write $M$ to denote the sub-cell complex corresponding to ${\cal M}$.
If ${\cal C}$ is the $[[4,2,2]]$ erasure code, it makes sense to take ${\cal \tilde C}={\cal C}$ always, but if ${\cal C}$ is a larger code it may be useful in practice to take other choices for ${\cal \tilde C}$ and to ``grow" the subcomplex ${\cal \tilde C}$ in different clusters.
The subcomplex ${\cal \tilde C}$ itself defines some CSS code $\tilde C$ with some number $k_{\tilde C}$ of logical qubits.
Note that ${\cal \tilde C}$ has trivial second homology since ${\cal C}$ does.
We will further assume that ${\cal \tilde C}$ has trivial zeroth homology; this holds if ${\cal \tilde C}={\cal C}$ but may not hold in general.
The vector spaces ${\cal P}_j$ can be regarded as subspaces of ${\cal A}_j$: they are the subspaces where vectors vanish on entries which do not correspond to cells of $M$.
A syndrome on $M$ is some element of ${\cal A}_q$ which lies in this subspace ${\cal P}_q$.
Then, we claim that
the following algorithm will determine,
given some syndrome $v$ in ${\cal P}_q$, whether or not it is the boundary of some element $w$ of ${\cal P}_{q+1}$.
First (this step is done offline, before running the union-find decoder), for each $j=1,\ldots,k_{\tilde C}$, construct a vector $x_j$ in $\mathbb{Z}_2^{n_C}$ so that these vectors span all possible logical $X$-operators for code $\tilde C$. In the terminology of topology, the cohomology classes $[x_j]$ must be independent.
Then, for each
$j=1,\ldots,k_{\tilde C}$, define a vector $\tilde v_j$ in ${\cal M}_{q-1}$ such that, for each $(q-1)$-cell $e$ of $M$, the coefficient of $\tilde v_j$
on that cell is given by the inner product of the vector of coefficients of $v$ on $e$ with $x_j$.
We call this vector $\tilde v_j$ the {\it partial inner product} of $v$ with $x_j$ and we write it $\tilde v_j=(v,x_j)$ in an abuse of notation.
Then, $v$ is the boundary of some $w$ if and only if $\partial v=0$ and each $\tilde v_j$ is the boundary of some vector in ${\cal M}_q$. The question of whether $\tilde v_j$ is such a boundary, however, is precisely the question that a union-find decoder for the large code must solve, so by assumption we have an algorithm to do this; we will show that the question of whether $\partial v=0$ can always be answered in linear time.
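The partial inner product step can be sketched directly. In this illustrative Python fragment (the data layout is our own assumption), the syndrome $v$ is stored as a map from $(q-1)$-cells of $M$ to bit-vectors over the fixed code's qubits, and $x_j$ is the support of a logical $X$ operator:

```python
def partial_inner_product(v, x_j):
    """Compute (v, x_j): for each (q-1)-cell e, the GF(2) inner product
    of the coefficient vector of v on e with the logical-X support x_j."""
    return {cell: sum(a & b for a, b in zip(coeffs, x_j)) % 2
            for cell, coeffs in v.items()}
```

The result is the vector $\tilde v_j$ handed to the union-find decoder for the large code.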
Let us show that this algorithm is correct.
\begin{lemma}
Let ${\cal P}$ be a product ${\cal M} \otimes {\cal \tilde C}$ where ${\cal M},{\cal \tilde C}$ are subcomplexes of the large code and fixed code, respectively.
Assume that ${\cal \tilde C}$ has trivial zeroth and second homology.
Consider some syndrome $v\in {\cal A}_q$ such that $v$ is supported on ${\cal P}_q$.
Then $M$ is a valid cluster for $v$, meaning that $v$ is a boundary of some vector in ${\cal P}_{q+1}$, iff
$\partial_{\cal P} v = 0$ and, for all $j \in \{1,\ldots,k_{\tilde C}\}$, the vector $\tilde v_j$ is a boundary in ${\cal M}$, where $\tilde v_j$ is the partial inner product $\tilde v_j=(v,x_j)$.
Note that $\tilde v_j$ is a boundary in ${\cal M}$ iff its inner product with all $(q-1)$-th cohomology representatives of ${\cal M}$ vanishes; in the case that the large code is the surface code, there is one such representative and this gives precisely the $j$-th entry of the validity vector considered previously.
\end{lemma}
\begin{proof}
$v$ is a boundary (i.e., it represents trivial homology) iff $\partial v=0$ and its inner product with a basis of cohomology representatives vanishes. (This is a consequence of the universal coefficient theorem, though it can be proven more simply in the case of ${\mathbb Z}_2$ coefficients.)
By K\"{u}nneth, given that ${\cal \tilde C}$ has trivial zeroth and second homology,
a basis for those cohomology representatives can be given by the products of nontrivial $(q-1)$-th cohomology representatives of ${\cal M}$ with first cohomology representatives of ${\cal \tilde C}$, i.e., logical $X$ operators, namely the $x_j$.
Verifying that all these inner products vanish is equivalent to verifying that, for each $j=1,\ldots,k_{\tilde C}$,
the inner product of $\tilde v_j$ with all cohomology representatives of ${\cal M}$ vanishes, which in turn is equivalent to requiring that $\tilde v_j$ be a boundary in ${\cal M}$.
\end{proof}
Now consider how to efficiently check that $\partial v=0$.
The vector $\partial v$ lies in the space
$({\cal M}_{q-3} \otimes {\cal \tilde C}_2) \oplus ({\cal M}_{q-2} \otimes {\cal \tilde C}_1) \oplus ({\cal M}_{q-1} \otimes {\cal \tilde C}_0).$
The check can clearly be done efficiently if we have an efficient representation of the boundary operator $\partial$, simply by verifying the constraint on each $(q-3)$-, $(q-2)$-, or $(q-1)$-cell of $M$.
Further, in a union-find decoder there is no need to check this constraint $\partial v=0$ on cells $e$ in the interior of $M$ (meaning those cells that are not attached to any cell in $L\setminus M$), as the constraint is automatically satisfied on those cells by the assumption that the observed syndrome is the boundary of {\it some} error pattern on $L$.
So, one simply needs to check this constraint on cells at the boundary of $M$; in a union find decoder, this constraint then only needs to be checked once for each cell in $M$, when the cell is on the boundary.
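A sketch of this boundary-only check, with an illustrative data layout of our own (in practice the incidence map comes from the code's boundary operator): the constraint $(\partial v)|_e = 0$ is tested cell by cell, and only boundary cells of $M$ need testing, since interior cells are satisfied automatically.

```python
def check_closed_on_boundary(v_support, incidence, boundary_cells):
    """Check (partial v)|_e = 0 for boundary cells e only.
    incidence maps each check cell e to the cells whose boundary contains e;
    v_support is the set of cells on which v has coefficient 1."""
    return all(
        sum(1 for c in incidence[e] if c in v_support) % 2 == 0
        for e in boundary_cells)
```

In a union-find decoder this test is run incrementally, once per cell, as clusters grow.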
\subsection{Decoding}
\label{decode}
We have given an algorithm to determine if some vector $v\in {\cal P}_q$ is a boundary of some $w$. We now give an algorithm that, if the answer to the previous question is yes, will find such a $w$. As before, we assume that we have an algorithm to solve this problem for the large code.
Recall that $v\in ({\cal M}_{q-2} \otimes {\cal \tilde C}_2) \oplus ({\cal M}_{q-1}\otimes {\cal \tilde C}_1) \oplus ({\cal M}_q \otimes {\cal \tilde C}_0)$ and we wish to find
a $w\in ({\cal M}_{q-1} \otimes {\cal \tilde C}_2) \oplus ({\cal M}_{q}\otimes {\cal \tilde C}_1) \oplus ({\cal M}_{q+1} \otimes {\cal \tilde C}_0)$ with $v=\partial w$.
We will in fact find a $w$ which vanishes in the subspace ${\cal M}_{q+1} \otimes {\cal \tilde C}_0$.
The construction is slightly notationally laborious in general but is straightforward: we add different boundaries to $v$ to cancel various components of it on subspaces ${\cal M}_q \otimes {\cal \tilde C}_0$, ${\cal M}_{q-1} \otimes {\cal \tilde C}_1$, and ${\cal M}_{q-2} \otimes {\cal \tilde C}_2$ in turn.
The first cancellation and the last cancellation are straightforward using the vanishing of zeroth and second homology of ${\cal \tilde C}$ (in fact the last cancellation happens ``automatically" as a result of a previous cancellation). The second cancellation is a little trickier due to nontrivial first homology of ${\cal \tilde C}$, but is reduced to the problem for the large code.
We begin with the first cancellation; this step is precisely the edge cancellation step of \cref{sddecode}.
Let $v_0$ be the component of $v$ in subspace ${\cal M}_q \otimes {\cal \tilde C}_0$.
Let $\partial_M$ be the boundary operator on ${\cal M}$. Define $``\partial_{\tilde C}^{-1}"$ to be an operator such that
$\partial_{\tilde C} ``\partial_{\tilde C}^{-1}" y=y$ for any vector $y\in {\cal \tilde C}_0$; in words, given a syndrome for $\tilde C$, $``\partial_{\tilde C}^{-1}"$ computes an error pattern that gives that observed syndrome; this operator is precisely the decoder $D_Z$ used in \cref{sddecode}. Such an operator exists because the zeroth homology of $\tilde C$ is trivial.
Let $w_0=``\partial_{\tilde C}^{-1}" v_0$.
Then, $\partial w_0=v_0+\partial_M ``\partial_{\tilde C}^{-1}" v_0.$
So, the sum $v+\partial w_0$ vanishes in subspace ${\cal M}_q \otimes {\cal \tilde C}_0 $.
Replace $v$ with $v+\partial w_0$. We have reduced to the problem of finding $w$ such that $v=\partial w$ for $v$ vanishing in subspace ${\cal M}_q \otimes {\cal \tilde C}_0$.
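One generic (brute-force) way to realise an operator such as $``\partial_{\tilde C}^{-1}"$ is GF(2) Gaussian elimination over the columns of the boundary map; for a small fixed code this is perfectly adequate. The sketch below is our own, not the paper's decoder: given the columns of a boundary matrix as bitmasks, it returns, for a solvable syndrome, a bitmask indicating which columns (qubits) to flip.

```python
def gf2_solve(columns, y):
    """Return a bitmask S with XOR of {columns[i] : bit i of S} == y,
    or None if y is not in the span (no error pattern exists)."""
    basis = []                               # fully reduced; unique pivot bits
    for i, col in enumerate(columns):
        vec, combo = col, 1 << i
        for bvec, bcombo, bpiv in basis:     # reduce against existing pivots
            if vec & bpiv:
                vec ^= bvec
                combo ^= bcombo
        if vec:
            piv = vec & -vec                 # new pivot: lowest set bit
            basis = [(bv ^ vec, bc ^ combo, bp) if bv & piv else (bv, bc, bp)
                     for bv, bc, bp in basis]
            basis.append((vec, combo, piv))
    vec, combo = y, 0
    for bvec, bcombo, bpiv in basis:         # reduce the target syndrome
        if vec & bpiv:
            vec ^= bvec
            combo ^= bcombo
    return combo if vec == 0 else None
```

For example, with the boundary map of a 3-cycle graph (columns `0b011, 0b110, 0b101` over three vertices), the syndrome `0b011` is solvable while the odd-weight syndrome `0b001` is not.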
Now, take vector $v$ and, for each $j=1,\ldots,k_{\tilde C}$, compute a vector $\tilde v_j$ as above. Then, apply the union-find decoder for the large code to find a $\tilde w_j$ such that $\partial_B \tilde w_j=\tilde v_j$. For each of the $X$ logical operators $x_j$ find corresponding $Z$ logical operators $z_j$, so that $(z_j,x_k)=\delta_{j,k}$. Then, consider vector
$v'=v+\partial(\sum_j \tilde w_j \otimes z_j).$
Since $z_j$ is a $Z$ logical operator, $\partial_{\tilde C} z_j=0$, so $v'$ still vanishes in subspace ${\cal M}_q \otimes {\cal \tilde C}_0$.
Let $v'_1$ be the component of $v'$ in subspace ${\cal M}_{q-1} \otimes {\cal \tilde C}_1$.
Since $\partial v'=0$, we have that for every $(q-1)$-cell, the boundary (using $\partial_{\tilde C}$) of the vector of coefficients of $v'$ on that cell vanishes, i.e., that vector of coefficients is a cycle. However, the term $\partial(\sum_j \tilde w_j \otimes z_j)$ in the sum for $v'$ then guarantees that this vector represents trivial homology. So, it is a boundary.
So, for each $(q-1)$-cell $e$, there is some vector $w_e\in{\cal \tilde C}_2$ such that $\partial_{\tilde C} w_e$ gives the vector of coefficients of $v'$ on cell $e$.
So, consider vector $v''=v'+\partial(\sum_e 1_e\otimes w_e),$ where $1_e$ denotes the basis vector corresponding to cell $e$.
Then, $v''$ vanishes on $({\cal M}_{q} \otimes {\cal \tilde C}_0 )\oplus ({\cal M}_{q-1} \otimes {\cal \tilde C}_1 )$.
So, the only nonvanishing component of $v''$ is on ${\cal M}_{q-2} \otimes {\cal \tilde C}_2$. Then, since $\partial v''=0$, by computing $\partial v''$ projected onto the subspace ${\cal M}_{q-2} \otimes {\cal \tilde C}_{1}$, and using that the second homology of ${\cal \tilde C}$ is trivial (so that the only vector in ${\cal \tilde C}_2$ with vanishing boundary is the zero vector), we find that $v''=0$.
\section{Distance}
\label{distance}
The distance $d$ of a homological product obeys the upper bound $d\leq d_B d_C$ by the K\"{u}nneth formula.
It is known\cite{bravyi2014homological} that this bound is not necessarily tight. However, we now show that in many cases, including many choices of a topological large code, this bound indeed is tight.
We have the following trivial lower bound for the distance $d_B$ of the large code: if some equivalence class $[x]$ for $q$-th cohomology has at least $m$ distinct representatives, $x_1,\ldots,x_m$, such that the support of $x_i$ is disjoint from the support of $x_j$ for $i\neq j$, then any representative of an equivalence class $[z]$ for $q$-th homology which has nontrivial inner product with $[x]$ will have weight at least $m$, since, of course, each representative must have some intersection with each $x_i$. If this holds for all equivalence classes of $q$-th cohomology, then the $Z$ distance of the code is at least $m$.
As an example of this, consider a toric code on a torus; assume the torus is $L$-by-$L$ and call the two directions ``vertical" and ``horizontal". One choice of logical $Z$ operator is a string of Pauli $Z$ running in the vertical direction. There are $L$ different such strings of minimal length, corresponding to different locations of the string in the horizontal direction. So, any logical $X$ operator which anti-commutes with it must have weight at least $L$.
If some equivalence class $[x]$ has this property of having $m$ distinct representatives with disjoint support, we say that class has property $(*)$.
This bound has a trivial extension (we use $(q+1)$-th rather than $q$-th homology now since we intend to apply this bound to the product code, but of course this bound is true for any $q$):
if some equivalence class $[x]$ for $(q+1)$-th cohomology has at least $m$ distinct representatives, $x_1,\ldots,x_m$, such that the support of $x_i$ is disjoint from the support of $x_j$ for $i\neq j$,
and such that any representative
of
an equivalence class $[z]$ for $(q+1)$-th homology which has nontrivial inner product with $x_i$ for any $i$ must have weight at least $d_C$ on the support of $x_i$, then any representative of $[z]$ must have weight at least $md_C$.
If some class $[x]$ has this property of having $m$ distinct representatives with disjoint support and with lower bound $d_C$ on the weight of the intersection, we say that this class has property $(**)$.
We now show that if the class $[x]$ for the large code has property $(*)$, then the class $[x \otimes \ell]$ has property $(**)$ if $\ell$ represents an $X$ logical operator of fixed code $C$. Proof: let $z$ be some representative of a class with nontrivial inner product with $[x\otimes \ell]$. Fix some $i$ to choose a representative $x_i$ of $[x]$.
Let us say, given a vector $v$ in some subspace ${\cal B}_{q}\otimes {\cal C}_1$ or ${\cal B}_q \otimes {\cal C}_0$ that the partial inner product of that vector with some other vector $u$ in ${\cal B}_q$ is a vector in either ${\cal C}_1$ or ${\cal C}_0$, respectively, defined in the obvious way: regard $v$
as being a vector (of length ${\rm dim}({\cal C}_1)$ or ${\rm dim}({\cal C}_0)$, respectively) of vectors in ${\cal B}_q$, and take the inner product of each such vector with $u$.
Consider the projection of $z$ onto subspace
${\cal B}_{q}\otimes {\cal C}_1$ and take its partial inner product with $x_i$ to get a vector $u$ in $\mathbb{Z}_2^{n_C}$. We have $(u,\ell)=(z,x\otimes \ell)=1$. We will next show that $\partial u=0$. This will show that $u$ represents a $Z$ logical operator for $C$ and so has weight at least $d_C$, proving the desired result.
To show $\partial u=0$, consider the projection of $\partial z$ onto subspace ${\cal B}_q \otimes {\cal C}_0$; call this $s$. Of course, since $\partial z=0$ we have $s=0$.
Take the partial inner product of $s$ with $x_i$ to get
a vector $t$; again $t=0$.
However, $t$ is the sum of two contributions; the first is the partial inner product with $x_i$ of the boundary of the projection of $z$ into ${\cal B}_{q+1}\otimes {\cal C}_0$, but this term vanishes since $x_i$ is a cocycle and so has vanishing inner product with any boundary. The second term is exactly equal to $\partial u$, so indeed $\partial u=0$.
Then, with these results, for many cases such as the large code being a toric code on a torus or more generally any topological code on a torus, the distance of the homological product code equals the product $d_B d_C$. We leave open the question of the distance of the product code when the large code is a topological code with some boundaries.
[End of arXiv:2009.14226, ``Union-Find Decoders For Homological Product Codes'', Quantum Physics (quant-ph), https://arxiv.org/abs/2009.14226]

Abstract: Homological product codes are a class of codes that can have improved distance while retaining relatively low stabilizer weight. We show how to build union-find decoders for these codes, using a union-find decoder for one of the codes in the product and a brute force decoder for the other code. We apply this construction to the specific case of the product of a surface code with a small code such as a $[[4,2,2]]$ code, which we call an augmented surface code. The distance of the augmented surface code is the product of the distance of the surface code with that of the small code, and the union-find decoder, with slight modifications, can decode errors up to half the distance. We present numerical simulations, showing that while the threshold of these augmented codes is lower than that of the surface code, the low noise performance is improved.
[arXiv:2211.12204, ``Online size Ramsey numbers: Path vs $C_4$'', https://arxiv.org/abs/2211.12204]

Abstract: Given two graphs $G$ and $H$, a size Ramsey game is played on the edge set of $K_\mathbb{N}$. In every round, Builder selects an edge and Painter colours it red or blue. Builder's goal is to force Painter to create a red copy of $G$ or a blue copy of $H$ as soon as possible. The online (size) Ramsey number $\tilde r(G,H)$ is the number of rounds in the game provided Builder and Painter play optimally. We prove that $\tilde r(C_4,P_n)\le 2n-2$ for every $n\ge 8$. The upper bound matches the lower bound obtained by J. Cyman, T. Dzido, J. Lapinskas, and A. Lo, so we get $\tilde r(C_4,P_n)=2n-2$ for $n\ge 8$. Our proof for $n\le 13$ is computer assisted. The bound $\tilde r(C_4,P_n)\le 2n-2$ also solves the ``all cycles vs. $P_n$'' game for $n\ge 8$: it implies that it takes Builder $2n-2$ rounds to force Painter to create a blue path on $n$ vertices or any red cycle.

\section{Introduction}
Let $G$ and $H$ be finite graphs. Consider the following game $\tilde R(G,H)$ played on the infinite board $K_\mathbb N$ (i.e. the board is a complete graph with the vertex set $\mathbb N$). In every round, Builder chooses a previously unselected edge of $K_\mathbb N$ and Painter colours it red or blue. The game ends when there is a red copy of $G$ or a blue copy of $H$ on the board. The players have opposite goals: Builder aims to finish the game as soon as possible, while Painter tries to prolong it. The game, for $H=G=K_n$, was introduced by Beck \cite{beck}.
By $\tilde{r}(G,H)$ we denote the number of rounds in the game $\tilde R(G, H)$, provided both players play optimally; this number is called the online size Ramsey number. Let us emphasise that in the literature online size Ramsey numbers are also called online Ramsey numbers. The online size Ramsey numbers $\tilde{r}(G, H)$ are less intensively studied than classic Ramsey numbers or size Ramsey numbers $\hat{r}(H, G)$, i.e.~the minimum number of edges in a graph with the property that every two-colouring of its edges results in a red copy of $H$ or a blue copy of $G$. There are very few exact results for $\tilde{r}(G, H)$, even if each of the graphs $G$, $H$ is a path or a cycle. It is known that $\tilde{r}(P_3,P_n)=\lceil 5(n-1)/4\rceil$ and $\tilde{r}(P_3,C_n)=\lceil 5n/4\rceil$ \cite{lo}. Nonetheless, there are quite a few papers where paths and cycles are involved, for example \cite{ga}, \cite{gryt}, \cite{ind}, \cite{lo}, \cite{ms}. There are also computer-assisted results on selected graphs on up to $9$ vertices: \cite{pral}, \cite{pral1}, \cite{pral2}.
In this paper we study the game $\tilde R(C_4,P_n)$. As for small paths, it is known that $\tilde{r}(C_4,P_3)=6$, $\tilde{r}(C_4,P_4)=8$ (\cite{lo}), $\tilde{r}(C_4,P_5)=9$ (\cite{dzido}), and $\tilde{r}(C_4,P_6)=11$ (\cite{ml}). In general, we have $\tilde{r}(C_4,P_n)\ge 2n-2$ for $n\ge 2$; this follows from a stronger result by Cyman, Dzido, Lapinskas and Lo \cite{lo}, who proved that $\tilde{r}(C_k,H)\ge |V(H)|+|E(H)|-1$ for every $k\ge 3$ and any connected graph $H$. The best upper bound known so far for $n\ge 7$ is $\tilde{r}(C_4,P_n)\le 3n-5$ by Dybizba\'nski, Dzido and Zakrzewska (\cite{dzido}). We improve this upper bound, which is the main result of our paper.
\begin{theorem}\label{main}
$\tilde{r}(C_4,P_n)\le 2n-2$ for every $n\ge 8$.
Furthermore, $\tilde{r}(C_4,P_7)\le 13$.
\end{theorem}
In view of the above mentioned lower bound $\tilde{r}(C_4,P_n)\ge 2n-2$ as well as the known bound $\tilde{r}(C_4,P_7)\ge 13$ (\cite{dzido}), our theorem implies the following new exact results.
\begin{cor}
If $n\ge 8$, then $\tilde{r}(C_4,P_n)=2n-2$.
Furthermore, $\tilde{r}(C_4,P_7)=13$.
\end{cor}
Theorem \ref{main} also solves another problem related to games with paths and cycles. Schudrich \cite{ms} studied a Builder-Painter game on $K_\mathbb N$ in which Builder wants to force Painter to create a red cycle or a blue path $P_n$ as soon as possible, while Painter wants to keep the red graph acyclic and the blue graph free of $P_n$ as long as possible. Let $\tilde{r}({\mathcal C},P_n)$ denote the number of rounds provided Builder and Painter play optimally.
Schudrich proved that $\tilde{r}({\mathcal C}, P_n)\le 2.5 n+5$ for every $n$. Theorem \ref{main} improves this bound for $n\ge 8$ and, in view of the bound $\tilde{r}({\mathcal C}, P_n)\ge 2n-2$ obtained by the authors of \cite{lo}, we get the following result.
\begin{cor}
If $n\ge 8$, then $\tilde{r}({\mathcal C}, P_n)= 2n-2$.
\end{cor}
It is interesting that the corresponding size Ramsey number $\hat{r}({\mathcal C}, P_n)$ asymptotically does not differ very much from $\tilde{r}({\mathcal C}, P_n)$ in the sense of the multiplicative constants. More precisely, in view of Theorem \ref{main} and the bounds on $\hat{r}({\mathcal C}, P_n)$ proved by Bal and Schudrich \cite{bal}, we have
$$
1.03+o(1)\le \frac{\hat{r}({\mathcal C}, P_n)}{\tilde{r}({\mathcal C}, P_n)}\le 1.98 +o(1).
$$
As the authors of \cite{bal} note, the upper bound $\hat{r}({\mathcal C}, P_n)< 3.947 n +O(1)$ has a computer assisted proof.
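As a quick sanity check of the stated constants (our own arithmetic, using only the bounds quoted above): with $\hat{r}({\mathcal C},P_n)\le 3.947n+O(1)$ and $\tilde{r}({\mathcal C},P_n)=2n-2$, the limiting ratio of the upper bounds is $3.947/2=1.9735\ldots$, consistent with the $1.98+o(1)$ upper bound; the $O(1)$ term is ignored here.

```python
def ratio_upper(n):
    """Ratio of the upper bound 3.947 n to the exact value 2n - 2."""
    return 3.947 * n / (2 * n - 2)

# The ratio tends to 3.947 / 2 = 1.9735 as n grows.
limit = 3.947 / 2
```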
The proof of Theorem \ref{main} is based on induction. Our inductive argument, technically quite complicated, holds for $n\ge 14$, and the next two sections are devoted mainly to that argument.
For $7\le n\le 13$ we analyse the games with the help of a computer. The idea of the algorithm is described in Section \ref{algorithm} and the C++ code is enclosed in the appendix.
\section{Preliminaries}\label{prelim}
For a graph $H$ we put $v(H)=|V(H)|$ and $e(H)=|E(H)|$. If $A\subseteq V(H)$, then we denote by $N_H(A)$ the set of all edges of $H$ with at least one end in the set $A$, and we put $\deg_H(u)=|N_H(\{u\})|$.
We say that a graph $H$ is coloured if each of its edges is blue or red. A graph is red (or blue) if all of its edges are red (blue, respectively).
Let $c_1,c_2,\ldots,c_k\in\{b,r\}$ be consecutive edge colours of a coloured path $P_{k+1}$.
Then the coloured path is called a $c_1c_2\ldots c_k$-path.
We say a vertex of $K_\mathbb N$ is free at a moment of the game if it is not incident to any edge selected by Builder so far.
Given $n\ge 2$, and a coloured graph $H$ (it may be empty), consider the following auxiliary game $\text{RR}(C_4,P_n,H)$.
The board of the game is
$K_\mathbb N$, with exactly $e(H)$ edges coloured, inducing a copy of $H$ in $K_\mathbb N$. The rules of selecting and colouring edges
by Builder and Painter are the same as in the standard game, however, Painter is not allowed to colour an edge red if that
would create a red $C_4$. Builder wins if $e(H)\le 2n-2$ and after at most $2n-2-e(H)$ rounds
there is a blue $P_n$ at the board and exactly $n-1$ edges are blue; otherwise Painter is the winner.
The game ends after $2n-2-e(H)$ rounds or sooner; we assume that it stops before the round $2n-2-e(H)$
if more than $n-1$ edges are blue or there is a blue $P_n$. If $H$ is an empty graph then we write
$\text{RR}(C_4,P_n)$ instead of $\text{RR}(C_4,P_n,H)$.
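Painter's restriction in $\text{RR}(C_4,P_n,H)$, never completing a red $C_4$, has a simple combinatorial form: colouring an edge $uv$ red is forbidden exactly when the red graph already contains a path $u$-$a$-$b$-$v$ of length 3 on four distinct vertices. A small Python helper detecting this (our own sketch, not part of the paper's program):

```python
def creates_red_c4(red_adj, u, v):
    """True iff colouring edge uv red would close a red 4-cycle u-a-b-v-u.
    red_adj maps each vertex to the set of its red neighbours."""
    for a in red_adj.get(u, set()):
        if a == v:
            continue
        for b in red_adj.get(a, set()):
            if b != u and b != v and v in red_adj.get(b, set()):
                return True
    return False
```

For instance, on a red path $1$-$2$-$3$-$4$, the edge $14$ must be coloured blue, while the edge $13$ may still be red.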
It is clear that the first part of Theorem \ref{main} follows immediately from the following theorem.
\begin{theorem}\label{main2}
Suppose that $G$ is one of the coloured graphs: an empty graph, a $b$-path, a $br$-path, a $brb$-path, a $brr$-path, or a $brrb$-path.
Then for every $n\ge 8$ Builder has a winning strategy in $\text{RR}(C_4,P_n,G)$.
\end{theorem}
We will prove the above theorem in the next section. In the remaining part of this section, we prepare tools for the inductive argument of the proof. We begin with a lemma needed for the induction base cases.
\begin{lemma}\label{warpocz}
Suppose that $H$ is one of the coloured graphs: a $b$-path, a $br$-path, a $brb$-path, a $brr$-path, or a $brrb$-path.
Then for every $7\le n\le 13$ Builder has a winning strategy in $\text{RR}(C_4,P_n,H)$.
Furthermore, for every $8\le n\le 13$ Builder has a winning strategy in $\text{RR}(C_4,P_n)$.
\end{lemma}
The proof of this lemma is based on computer calculations. We present the algorithm and its analysis in Section \ref{algorithm}.
For the analysis of the game $\text{RR}(C_4,P_n,H)$ we need some additional definitions.
We say that Builder in $\text{RR}(C_4,P_n,H)$ forces a blue path of length $k\ge 1$ if for $k$ consecutive rounds he selects
edges that must be coloured blue according to the rules of the game and such that these $k$ edges induce a path of length $k$.
Suppose that $n,k\in\mathbb N$, $H$ is a coloured graph and contains a blue path $u_0u_1\ldots u_{k+1}$. We say that $H'$ is the graph obtained by contraction of the path $u_0u_1\ldots u_{k+1}$ to the edge $u_0u_{k+1}$
if $V(H')=V(H)\setminus \{u_1,u_2,\ldots,u_k\}$, $E(H')=(E(H)\setminus N_H(\{u_1,u_2,\ldots,u_k\}))\cup\{u_0u_{k+1}\}$,
the edge $u_0u_{k+1}$ is blue (if it was red in $H$, we change its colour) and the colours of the remaining edges in $H'$ are the same as in $H$.
For $x,y\in V(H)$ we define $\delta_H(x,y)=1$ if $x,y$ are adjacent in $H$ and $\delta_H(x,y)=0$ otherwise.
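The contraction operation can be stated compactly in code. In this sketch (our own data layout, purely illustrative), a coloured graph is a dict mapping an edge, stored as a frozenset of its two ends, to its colour `'b'` or `'r'`:

```python
def contract_path(edges, path):
    """Contract the blue path u0 u1 ... u_{k+1} (given as a vertex list)
    to the blue edge u0 u_{k+1}: drop every edge meeting an inner vertex,
    i.e. N_H({u1,...,uk}), then add u0 u_{k+1} coloured blue
    (recolouring it if it was red)."""
    inner = set(path[1:-1])
    contracted = {e: c for e, c in edges.items() if not (e & inner)}
    contracted[frozenset({path[0], path[-1]})] = 'b'
    return contracted
```

For example, contracting the blue path $0$-$1$-$2$-$3$ in a graph that also has a red edge $14$ removes that red edge, since it lies in $N_H(\{u_1,u_2\})$.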
The following lemma provides the key step of the inductive argument used in the next section.
\begin{lemma}\label{sciag}
Suppose that $n,k\in\mathbb N$, $H$ is a coloured graph such that it contains a blue path $P=u_0u_1\ldots u_{k+1}$ and $u_0u_{k+1}\notin E(H)$ or $u_0u_{k+1}$ is red.
Furthermore, suppose that no edge from the set $N_H(\{u_1,u_2,\ldots,u_k\})\setminus E(P)$ is blue and
$$|N_H(\{u_1,u_2,\ldots,u_k\})|+\delta_H(u_0,u_{k+1})\le 2k+1.$$
Let $H'$ be the graph obtained from $H$ by contraction of the path $P$ to the edge $u_0u_{k+1}$.
If Builder has a winning strategy in $\text{RR}(C_4,P_{n-k},H')$, then he has a winning strategy in $\text{RR}(C_4,P_n,H)$.
\end{lemma}
\begin{proof}
Let $H$, $P$ and $H'$ be as in the assumption of the lemma.
Suppose that $K_\mathbb N$ contains the coloured graph $H$ and $K'_\mathbb N$ is the complete graph arising after the
contraction of the path $P=u_0u_1\ldots u_{k+1}$ to the edge $u_0u_{k+1}$. Consider two games:
$\text{RR}(C_4,P_n,H)$ played on $K_\mathbb N$ and $\text{RR}(C_4,P_{n-k},H')$ played on $K'_\mathbb N$.
We assumed that no edge from the set $N_H(\{u_1,u_2,\ldots,u_k\})\setminus E(P)$ is blue and $u_0u_{k+1}$ is not blue at the start of
the game $\text{RR}(C_4,P_n,H)$, so at the beginning of both games the sets $B_0$ and $B'_0$ of all blue edges in
$\text{RR}(C_4,P_n,H)$ and $\text{RR}(C_4,P_{n-k},H')$, respectively, satisfy the condition $B_0=(B'_0\setminus\{u_0u_{k+1}\})\cup E(P)$
and $|B_0|=|B'_0|-1+k+1=|B'_0|+k$.
Builder in $\text{RR}(C_4,P_n,H)$ uses Builder's winning strategy in $\text{RR}(C_4,P_{n-k},H')$
by selecting the same edge as Builder in $\text{RR}(C_4,P_{n-k},H')$ in every round.
Observe that this way Builder in $\text{RR}(C_4,P_n,H)$ never selects $u_0u_{k+1}$ nor any edge incident to any of
vertices $u_1,u_2,\ldots,u_k$.
We assumed that Builder wins $\text{RR}(C_4,P_{n-k},H')$ so after at most $2(n-k-1)-e(H')$ rounds of this game
the set $B'$ of all blue edges has exactly $n-k-1$ elements and there is a blue path $P'$ with $n-k-1$ edges.
Thus $E(P')=B'$ and it contains the blue edge $u_0u_{k+1}$.
In the corresponding game $\text{RR}(C_4,P_n,H)$ after at most $2(n-k-1)-e(H')$ rounds
the set $B$ of blue edges at the board differs from $B'$ in the same way as $B_0$ differs from $B'_0$ so
$B=(B'\setminus\{u_0u_{k+1}\})\cup E(P)$ and $|B|=|B'|+k$. We infer that $|B|=n-1$ and the blue edges $(E(P')\setminus\{u_0u_{k+1}\})\cup E(P)$ form a path on $n$ vertices.
Finally, let us estimate the number of rounds. It follows from the definition of contraction that
$$e(H')=e(H)-|N_H(\{u_1,u_2,\ldots,u_k\})|+1-\delta_H(u_0,u_{k+1}).$$
Thus, in view of the assumptions of the lemma, we verify that
$$e(H')\ge e(H)-(2k+1)+1=e(H)-2k.$$
We know that the number of rounds in both games is the same and not greater than
$2(n-k-1)-e(H')$, so the game $\text{RR}(C_4,P_n,H)$ lasts not more than
$$2(n-k-1)-e(H')\le 2(n-k-1)-e(H)+2k=2n-2-e(H)$$
rounds.
We conclude that Builder wins the game $\text{RR}(C_4,P_n,H)$.
\end{proof}
We need two more lemmata, about graphs called butterflies.
The following graph will be called an $(s,t)$-butterfly with centers $(u_0,u_1)$.
\begin{center}
\begin{tikzpicture}
\jezk{6}{3}{4};
\node [below] at (a3) {$u_0$};
\node [below] at (a4) {$u_1$};
\node [above] at (b1) {$x_1$};
\node [above] at (b2) {$y_1$};
\wierz{x2}{$(b1)-(0.5,0)$};
\wierz{x3}{$(b1)-(1.0,0)$};
\wierz{xs}{$(b1)-(2.0,0)$};
\node [above] at (x2) {$x_2$};
\node [above] at (x3) {$x_3$};
\node [above] at (xs) {$x_s$};
\node at ($(x3)-(0.5,0)$) {$\ldots$};
\krk{a3}{x2};\krk{a3}{x3};\krk{a3}{xs};
\wierz{y2}{$(b2)+(0.5,0)$};
\wierz{y3}{$(b2)+(1.0,0)$};
\wierz{yt}{$(b2)+(2.0,0)$};
\node [above] at (y2) {$y_2$};
\node [above] at (y3) {$y_3$};
\node [above] at (yt) {$y_t$};
\node at ($(y3)+(0.5,0)$) {$\ldots$};
\krk{a4}{y2};\krk{a4}{y3};\krk{a4}{yt};
\end{tikzpicture}
\end{center}
The subgraph of the butterfly, consisting of edges $u_0x_i$ with $i=1,2,\ldots,s$ and the subgraph consisting of edges $u_1y_i$ with $i=1,2,\ldots,t$, are called the $u_0$-wing and $u_1$-wing of the butterfly, respectively.
\begin{lemma}\label{motyle}
Let $s\ge 1$ and $H$ be a red $(s,s)$-butterfly with centers $(u_0,u_1)$.
Then:
\begin{enumerate}[label=(\roman*)]
\item \label{bb}
Builder in $\text{RR}(C_4,P_n,H)$ can force a blue path on the vertex set $V(H)$, with ends $u_0,u_1$.
\item \label{br}
If $s\ge 2$, Builder in $\text{RR}(C_4,P_n,H)$ can force a blue path on the vertex set $V(H)\setminus\{u_1\}$,
with ends $u_0,y$ for a vertex $y$ of the $u_1$-wing.
\item \label{rr}
If $s\ge 2$, Builder in $\text{RR}(C_4,P_n,H)$ can force a blue path on the vertex set $V(H)\setminus\{u_0,u_1\}$,
with ends $x,y$ for a vertex $x$ of the $u_0$-wing and a vertex $y$ of the $u_1$-wing.
\end{enumerate}
\end{lemma}
\begin{proof}
Let us consider the following red $(s,s)$-butterfly $H$, with $s\ge 1$.
\begin{center}
\begin{tikzpicture}
\jez{6}{3}{4};
\node [below] at (a3) {$u_0$};
\node [below] at (a4) {$u_1$};
\node [below] at (a2) {$u_2$};
\node [below] at (a5) {$u_3$};
\node [below] at (a1) {$u_4$};
\node [below] at (a6) {$u_5$};
\node [above] at (b1) {$x_1$};
\node [above] at (b2) {$y_1$};
\wierz{x2}{$(b1)-(0.5,0)$};
\wierz{x3}{$(b1)-(1.0,0)$};
\wierz{xs}{$(b1)-(2.0,0)$};
\node [above] at (x2) {$x_2$};
\node [above] at (x3) {$x_3$};
\node [above] at (xs) {$x_s$};
\node at ($(x3)-(0.5,0)$) {$\ldots$};
\kr{a3}{x2};\kr{a3}{x3};\kr{a3}{xs};
\wierz{y2}{$(b2)+(0.5,0)$};
\wierz{y3}{$(b2)+(1.0,0)$};
\wierz{yt}{$(b2)+(2.0,0)$};
\node [above] at (y2) {$y_2$};
\node [above] at (y3) {$y_3$};
\node [above] at (yt) {$y_s$};
\node at ($(y3)+(0.5,0)$) {$\ldots$};
\kr{a4}{y2};\kr{a4}{y3};\kr{a4}{yt};
\end{tikzpicture}
\end{center}
Let $V_1=\{x_1,x_2,\ldots,x_s,u_2,u_5\}$ and $V_2=\{y_1,y_2,\ldots,y_s,u_3,u_4\}$. Consider the (uncoloured) graph $G$ on the vertex set $V(H)\setminus\{u_0,u_1\}$ such that two vertices $w,w'$ are adjacent in $G$ if and only if there is a path of length 3 in $H$ with ends $w,w'$. It is not hard to observe that $G$ is a copy of the graph $K_{s+2,s+2}-P_4$ such that the sets $V_1$ and $V_2$ are the parts of its bipartition: $G$ is obtained from the complete bipartite graph $K_{s+2,s+2}$ by deleting the three edges of the path $u_2u_4u_5u_3$.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\wierz{x1}{$(0,0)$};
\node [below] at (x1) {$x_1$};
\wierz{y1}{$(0,1)$};
\node [above] at (y1) {$y_1$};
\node at ($(0.5,0)$) {$\ldots$};
\node at ($(0.5,1)$) {$\ldots$};
\wierz{xs}{$(1,0)$};
\node [below] at (xs) {$x_s$};
\wierz{ys}{$(1,1)$};
\node [above] at (ys) {$y_s$};
\wierz{u2}{$(2,0)$};
\node [below] at (u2) {$u_2$};
\wierz{u3}{$(2,1)$};
\node [above] at (u3) {$u_3$};
\wierz{u5}{$(3,0)$};
\node [below] at (u5) {$u_5$};
\wierz{u4}{$(3,1)$};
\node [above] at (u4) {$u_4$};
\wierz{u1}{$(4,0)$};
\node [below] at (u1) {$u_1$};
\wierz{u0}{$(4,1)$};
\node [above] at (u0) {$u_0$};
\krk{x1}{y1};
\krk{x1}{ys};
\krk{x1}{u3};
\krk{x1}{u4};
\krk{xs}{y1};
\krk{xs}{ys};
\krk{xs}{u3};
\krk{xs}{u4};
\krk{u2}{y1};
\krk{u2}{ys};
\krk{u2}{u3};
\krk{u5}{y1};
\krk{u5}{ys};
\krk{u5}{u0};
\krk{u1}{u4};
\end{tikzpicture}
\caption{Let $J$ be the graph of all edges that Builder can force to be blue in $H$. Lemma \ref{motyle} is equivalent to the fact that if $s\ge s_0$, then $X$ has a hamiltonian path from $a$ to $b$, where $(X,a,b,s_0)\in \{(J,u_0,u_1,1),$ $(J-\{u_1\},u_0,y_i,2),(J-\{u_0,u_1\},x_i,y_j,2)\}$.}
\label{tab:H3}
\end{figure}
One can verify that if $s\ge 2$, then for every $w_1\in V_1, w_2\in V_2$ there is a hamiltonian path with ends $w_1, w_2$ in $G=K_{s+2,s+2}-P_4$ (in other words, $G$ is Hamilton-laceable). Furthermore, if $s=1$, then there is a hamiltonian path with ends $u_4, u_5$ in $G$.
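For small $s$ the Hamilton-laceability claim above can also be confirmed by an exhaustive search; the following brute-force sketch (the function names are ours, chosen for illustration, and the check is not part of the proof) verifies the cases $s=2$ and $s=1$.

```python
from itertools import permutations

def link_graph(s):
    """Build K_{s+2,s+2} - P_4 with parts V1, V2 as in the text;
    the deleted P_4 is the path u2 u4 u5 u3."""
    V1 = [f"x{i}" for i in range(1, s + 1)] + ["u2", "u5"]
    V2 = [f"y{i}" for i in range(1, s + 1)] + ["u3", "u4"]
    removed = {("u2", "u4"), ("u5", "u4"), ("u5", "u3")}
    edges = {(a, b) for a in V1 for b in V2 if (a, b) not in removed}
    return V1, V2, edges

def has_ham_path(V1, V2, edges, a, b):
    """Brute-force test for a hamiltonian path with ends a and b."""
    adj = lambda u, v: (u, v) in edges or (v, u) in edges
    inner = [v for v in V1 + V2 if v not in (a, b)]
    return any(
        all(adj(p, q) for p, q in zip((a,) + perm, perm + (b,)))
        for perm in permutations(inner)
    )

# s >= 2: a hamiltonian path between every pair w1 in V1, w2 in V2.
V1, V2, E = link_graph(2)
assert all(has_ham_path(V1, V2, E, w1, w2) for w1 in V1 for w2 in V2)

# s = 1: there is at least a hamiltonian path with ends u4, u5.
V1, V2, E = link_graph(1)
assert has_ham_path(V1, V2, E, "u5", "u4")
```

For $s=1$ the restriction to the ends $u_4,u_5$ is essential: $u_5$ has only one neighbour in $G$, so it must be an end of any hamiltonian path.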
Note that every edge $e\in E(G)$, if coloured red, creates a red $C_4$ in $H\cup\{e\}$.
Thus if Builder selects all edges of a hamiltonian path $P\subseteq G$ in the game $\text{RR}(C_4, P_n, H)$, then he forces a blue path on the vertex set $V(H)\setminus\{u_0,u_1\}$.
It follows from the above observations that
Builder can force a blue path on the vertex set $V(H)\setminus\{u_0,u_1\}$ with ends $u_4,u_5$ and, if $s\ge 2$, he can force a blue path on the vertex set $V(H)\setminus\{u_0,u_1\}$ with any ends
$w_1\in V_1$, $w_2\in V_2$. We will use this fact in all three parts of the lemma.
Part \ref{rr} is an immediate consequence of the above observation and the fact that vertices $x_i$ and $y_i$ are in different bipartition sets.
In order to prove \ref{bb}, we consider $s\ge 1$ and a blue path $P$ on the vertex set $V(H)\setminus\{u_0,u_1\}$, with ends $u_4,u_5$, forced by Builder. Then Builder selects edges $u_5u_0$ and $u_4u_1$. Painter has to colour them blue, otherwise a red $C_4$ is created. Then $P$ extended by $u_5u_0$, $u_4u_1$ forms a blue path on the vertex set $V(H)$, with ends $u_0,u_1$.
In order to prove \ref{br}, we consider $s\ge 2$ and a blue path $P$ on the vertex set $V(H)\setminus\{u_0,u_1\}$, with ends $u_5\in V_1$ and $y_s\in V_2$, forced by Builder. Then Builder selects the edge $u_5u_0$ and Painter has to colour it blue.
Thereby a blue path on the vertex set $V(H)\setminus\{u_1\}$ is formed, with ends $u_0,y_s$.
\end{proof}
\begin{lemma}\label{malo}
Let $s\ge 2$, $s'\in\{s,s+1\}$ and $H$ be a coloured $(s',s)$-butterfly with centers $(c_1,c_2)$.
Suppose that every wing has at least two red edges, exactly one or two edges in the $c_1$-wing are blue,
and the $c_2$-wing has at most as many blue edges as the $c_1$-wing.
Furthermore, if every wing has two blue edges, then $s'=s$.
Then Builder has a winning strategy in $\text{RR}(C_4,P_{v(H)},H)$.
\end{lemma}
\begin{proof}
Given $u\in V(H)$, let $\deg_r(u)$ and $\deg_b(u)$ be the number of red and blue edges of $H$, respectively, incident to $u$. Note that $\deg_r(u)+\deg_b(u)=\deg_H(u)$. It is not hard to verify that if $H$ fulfills the assumption of the lemma, then $\deg_H(c_1)=s'+2$, $\deg_H(c_2)=s+2$, and it satisfies one
of the following conditions.
\begin{enumerate}[label=(\roman*)]
\item \label{malobb}
$\deg_b(c_1)=1, \deg_b(c_2)\le 1$ and $\deg_r(c_1)=\deg_r(c_2)$;
\item \label{malorr}
$\deg_b(c_1)=2, \deg_b(c_2)\in \{1,2\}$ and $\deg_r(c_1)=\deg_r(c_2)$;
\item \label{malorb}
$\deg_b(c_1)\in\{1,2\}, \deg_b(c_2)\le 1$ and $\deg_r(c_1)=\deg_r(c_2)-1$;
\item \label{malobb2}
$\deg_b(c_1)=\deg_b(c_2)=1$ and $\deg_r(c_1)=\deg_r(c_2)+1$;
\item \label{malorb2}
$\deg_b(c_1)=2, \deg_b(c_2)=0$ and $\deg(c_1)=\deg(c_2)$.
\end{enumerate}
In order to verify that the above cases cover all possible values of $s'-s=\deg(c_1)-\deg(c_2)$, $\deg_b(c_1)$, $\deg_b(c_2)$, we summarize them in Table \ref{tab:cases}.
\begin{table}[h]
\centering
\begin{tabular}{l | l | l | l}
$s'-s$ & $\deg_b(c_1)$ & $\deg_b(c_2)$ & case\\
\hline
0&1&0& \ref{malorb}\\
0&1&1& \ref{malobb}\\
0&2&0& \ref{malorb2}\\
0&2&1& \ref{malorb}\\
0&2&2& \ref{malorr} \\
1&1&0& \ref{malobb} \\
1&1&1& \ref{malobb2}\\
1&2&0& \ref{malorb}\\
1&2&1& \ref{malorr}
\end{tabular}
\caption{All possible values of $s'-s,\deg_b(c_1),\deg_b(c_2)$ in Lemma \ref{malo}.}
\label{tab:cases}
\end{table}
We consider every case separately. In every case we denote by $P(c_i)$ the path (of length at most two, possibly trivial)
induced by all blue edges incident to $c_i$, for $i=1,2$, and by $a_i,b_i$ the ends of $P(c_i)$.
We assume that $b_1\neq c_1$ and, if $P(c_2)$ is nontrivial, then $b_2\neq c_2$. Let $v(H)=n$.
Our goal is to present Builder's strategy such that after at most $2n-2-e(H)=n-1$ rounds of $\text{RR}(C_4,P_{v(H)},H)$
there is a blue path on $n$ vertices and it contains all blue edges of the board.
By $B$ we denote the red butterfly spanned by all red edges of $H$.
In case \ref{malobb} we apply Lemma \ref{motyle}\ref{bb} to the butterfly $B$: Builder in $\text{RR}(C_4,P_{n},H)$,
playing as Builder in $\text{RR}(C_4,P_{v(B)},B)$,
can force a blue path $P$ with $V(P)=V(B)$ and ends $c_1,c_2$.
This path, extended by $P(c_1)$ and $P(c_2)$ (both of length at most one), forms a blue path on $n$ vertices
and clearly contains all blue edges of the board. Builder achieves this goal within $e(P)\le e(H)=n-1$ rounds,
so he wins the game.
In case \ref{malorr} let $x_i,y_i$ be any two vertices of the $c_i$-wing of the butterfly $B$, for $i=1,2$.
Builder starts with selecting the edge $b_1x_1$ and, if Painter colours it red, then Builder selects
$b_1y_1$ and it must be coloured blue. So after at most two rounds exactly one of the edges $b_1x_1$,
$b_1y_1$ is blue. Builder continues in the same way with edges $b_2x_2$, $b_2y_2$ and obtains
another blue edge. Without loss of generality, we assume that edges $b_1y_1$ and $b_2y_2$ are blue.
At most 4 rounds are made in the game so far.
Then we apply Lemma \ref{motyle}\ref{rr} to the butterfly $B$: Builder in $\text{RR}(C_4,P_{n},H)$,
playing as Builder in $\text{RR}(C_4,P_{v(B)},B)$,
can force a blue path $P$ with $V(P)=V(B)\setminus\{c_1,c_2\}$ and ends $y_1,y_2$.
The edge set
$$E(P(c_1))\cup\{b_1y_1\}\cup E(P)\cup\{b_2y_2\}\cup E(P(c_2))$$
forms a blue path on the vertex set $V(H)$ and this path contains all blue edges of the board.
Builder achieves that goal within at most
$$4+e(P)=3+v(B)-2=2+e(B)=2+e(H)-e(P(c_1))-e(P(c_2))\le e(H)=n-1$$
rounds so he wins the game.
Consider case \ref{malorb}.
Let $x,y$ be any two vertices of the $c_2$-wing of the butterfly $B$.
Builder starts with selecting the edge $b_1x$ and, if Painter colours it red, then Builder selects
$b_1y$ and it must be coloured blue. So after at most two rounds exactly one of the edges $b_1x$,
$b_1y$ is blue and we assume that $b_1y$ is blue. Let $B'=B\setminus\{c_2y\}$. Then $B'$
is an $(s,s)$-butterfly with $s\ge 2$.
Based on Lemma \ref{motyle}\ref{br}, Builder in $\text{RR}(C_4,P_{v(B')},B')$,
and hence also Builder in $\text{RR}(C_4,P_{n},H)$, can force a blue path $P$ with $V(P)=V(B')\setminus\{c_1\}$
and ends $c_2,z$, where $z$ is some vertex of the $c_1$-wing.
Builder forces such a path, then selects $yz$. Painter has to colour it blue since otherwise a red cycle $c_1c_2yz$
would appear. Observe that the edge set $E(P(c_2))\cup E(P)\cup \{zy, yb_1\}\cup E(P(c_1))$
forms a blue path on the vertex set $V(H)$ and this path contains all blue edges of the board.
Builder achieves that goal within at most
$$2+e(P)+1=2+v(B')-1=2+e(B')=2+e(H)-1-e(P(c_1))-e(P(c_2))\le e(H)=n-1$$
rounds so he wins the game.
The analysis in case \ref{malobb2} is similar to the previous one, so we omit it.
We proceed to the last case. Here $P(c_1)$ is a $bb$-path with ends $a_1,b_1\neq c_1$ and $P(c_2)$ is trivial.
Furthermore, the butterfly $B$ spanned by all red edges of $H$ has its
$c_2$-wing larger by two than its $c_1$-wing.
The strategy of Builder is similar to the one in case \ref{malorb}.
Builder begins by obtaining a blue edge $b_1y$ for a vertex $y$ of the $c_2$-wing of the butterfly $B$
and it takes him at most two rounds. Since in case \ref{malorb2} the $c_2$-wing has at least four edges,
Builder can get another blue edge $a_1x$ for a vertex $x\neq y$ of the $c_2$-wing, within
at most two rounds.
Let $B'=B\setminus\{c_2x, c_2y\}$. Then $B'$ is an $(s,s)$-butterfly with $s\ge 2$.
We apply Lemma \ref{motyle}\ref{br}: Builder in $\text{RR}(C_4,P_{n},H)$ forces
a blue path $P$ with $V(P)=V(B')\setminus\{c_1\}$
and ends $c_2,z$, where $z$ is some vertex of the $c_1$-wing.
Then Builder selects $yz$ and Painter has to colour it blue.
Now the edge set $E(P)\cup \{zy, yb_1\}\cup E(P(c_1))\cup \{a_1x\}$
forms a blue path on the vertex set $V(H)$ and this path contains all blue edges of the board.
Builder achieves that goal within at most
$$4+e(P)+1=4+e(B')=4+e(H)-2-e(P(c_1))= e(H)=n-1$$ rounds.
In all five cases Builder wins the game $\text{RR}(C_4,P_{n},H)$.
\end{proof}
\section{Proof of Theorem \ref{main2}}
We prove the theorem by induction on $n$. If $8\le n\le 13$, the assertion follows from Lemma \ref{warpocz}.
Suppose that $n\ge 14$ and assume that the theorem is true for the game $\text{RR}(C_4,P_{n'},G)$ for every $n'< n$
and every $G$ described in the theorem. We begin with the analysis of the game $\text{RR}(C_4,P_{n})$, i.e.~the case where $G$ is empty.
\subsection{$G$ is empty}\label{gpusty}
The strategy of Builder depends on the moment the first blue edge is coloured by Painter.
As long as Painter colours all edges red, Builder selects the edges in the following order.
\begin{itemize}
\item Stage 1.
First, the edges of a $(1,1)$-butterfly with centers $(u_0,u_1)$ are selected in order, given by edge labels:
\begin{center}
\begin{tikzpicture}
\foreach \X in {1,...,6}{\wierz{a\X}{$(\X-1,0)$};}
\wierz{b1}{$(a3)+(0,1)$};
\wierz{b2}{$(a4)+(0,1)$};
\krke{a1}{a2}{6};
\krke{a2}{a3}{2};
\krke{a3}{a4}{1};
\krke{a4}{a5}{4};
\krke{a5}{a6}{7};
\krke{a3}{b1}{3};
\krke{a4}{b2}{5};
\node [below] at (a3) {$u_0$};
\node [below] at (a4) {$u_1$};
\end{tikzpicture}
\end{center}
\item Stage 2.
Then (as long as Painter colours all edges red) in every even round $j$ ($j\ge 8$) Builder selects an edge incident to the butterfly center $u_0$
and any free vertex, denoted by $u_j$, while in every odd round $j$ ($j\ge 9$) he selects an edge incident to the other butterfly center $u_1$ and any free vertex $u_j$.
This stage ends if either $n$ is even and we have $n-1$ coloured edges, or $n$ is odd and we have $n-2$ coloured edges.
Thus, for even $n$ after Stage 2, if all edges are red, they induce an $((n-6)/2,(n-6)/2)$-butterfly:
\begin{center}
\begin{tikzpicture}
\motyl;
\node [below] at (a3) {$u_0$};
\node [below] at (a4) {$u_1$};
\node [above] at (u8) {$u_8$};
\node [above] at (u10) {$u_{10}$};
\node [above] at (upost) {$u_{n-2}$};
\node [above] at (u9) {$u_9$};
\node [above] at (u11) {$u_{11}$};
\node [above] at (uost) {$u_{n-1}$};
\end{tikzpicture}
\end{center}
In case of odd $n$, if after Stage 2 all edges are red, they induce an $((n-7)/2,(n-7)/2)$-butterfly:
\begin{center}
\begin{tikzpicture}
\motyl;
\node [below] at (a3) {$u_0$};
\node [below] at (a4) {$u_1$};
\node [above] at (u8) {$u_8$};
\node [above] at (u10) {$u_{10}$};
\node [above] at (upost) {$u_{n-3}$};
\node [above] at (u9) {$u_9$};
\node [above] at (u11) {$u_{11}$};
\node [above] at (uost) {$u_{n-2}$};
\end{tikzpicture}
\end{center}
\item Stage 3.
This stage only takes place if $n$ is odd (and all edges are red). Builder selects an edge incident to $u_{n-2}$ and any
free vertex $u_{n-1}$. Thus, after Stage 3, if all edges are red, they induce a $((n-7)/2,(n-7)/2)$-butterfly with the additional edge $u_{n-2}u_{n-1}$:
\begin{center}
\begin{tikzpicture}
\motyl;
\node [below] at (a3) {$u_0$};
\node [below] at (a4) {$u_1$};
\node [above] at (u8) {$u_8$};
\node [above] at (u10) {$u_{10}$};
\node [above] at (upost) {$u_{n-3}$};
\node [above] at (u9) {$u_9$};
\node [above] at (u11) {$u_{11}$};
\node [above] at (uost) {$u_{n-2}$};
\wierz{un}{$(uost)+(0,1)$};
\node [above] at (un) {$u_{n-1}$};
\kr{uost}{un};
\end{tikzpicture}
\end{center}
\end{itemize}
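Let us verify the parameters above by a direct count. Stage 1 produces $7$ red edges on $8$ vertices, with one pendant vertex at each center. For even $n$, Stage 2 occupies rounds $8,\dots,n-1$ and attaches $(n-8)/2$ further pendant vertices to each center, so each wing has
\[
1+\frac{n-8}{2}=\frac{n-6}{2}
\]
pendant vertices, and there are $8+(n-8)=n$ vertices and $7+(n-8)=n-1$ edges in total. For odd $n$, Stage 2 occupies rounds $8,\dots,n-2$, so each wing has $1+\frac{n-9}{2}=\frac{n-7}{2}$ pendant vertices, and Stage 3 contributes the edge $u_{n-2}u_{n-1}$: again $n-1$ edges on $n$ vertices.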
We will analyse the play based on Painter's choices; Builder's strategy depends on the time the first blue edge appears (if any) in Stages 1, 2, 3.
\subsubsection{No blue edges in Stages 1, 2 and 3}
If $n$ is even and there is no blue edge after round $n-1$,
then the $n-1$ edges induce a red $(s,s)$-butterfly on $n$ vertices and, in view of Lemma \ref{motyle}\ref{bb},
Builder can force a blue path $P_n$. After $2n-2$ rounds $n-1$ edges are red and $n-1$ edges are blue.
If $n$ is odd and there is no blue edge after round $n-1$,
then we have a red $(s,s)$-butterfly on $n-1$ vertices, with centers $u_0$, $u_1$, and a red edge $u_{n-2}u_{n-1}$ as
described in Stage 3. Then in view of Lemma \ref{motyle}\ref{bb} Builder can force
a blue path $P_{n-1}$ on the vertex set of the $(s,s)$-butterfly, with ends $u_0$, $u_1$. Then he selects
the edge $u_0u_{n-1}$ and it must be coloured blue since vertices $u_0,u_{n-1}$ are ends of a red path of length 3.
This edge extends the blue path $P_{n-1}$.
Thus after $2n-2$ rounds $n-1$ edges are red and $n-1$ edges are blue and the blue edges induce a path $P_n$.
In both cases Builder wins $\text{RR}(C_4,P_{n})$.
\subsubsection{First blue edge in Stage 3}
Then $n\ge 14$ is odd, and after $n-1$ rounds we have the following coloured graph $H$ on the board.
\begin{center}
\begin{tikzpicture}
\motyl;
\node [below] at (a3) {$u_0$};
\node [below] at (a4) {$u_1$};
\node [below] at (a2) {$u_2$};
\node [above] at (b1) {$u_3$};
\node [below] at (a5) {$u_4$};
\node [above] at (b2) {$u_5$};
\node [below] at (a1) {$u_6$};
\node [below] at (a6) {$u_7$};
\node [above] at (u8) {$u_8$};
\node [above] at (u10) {$u_{10}$};
\node [above] at (upost) {$u_{n-3}$};
\node [above] at (u9) {$u_9$};
\node [above] at (u11) {$u_{11}$};
\node [above] at (uost) {$u_{n-2}$};
\wierz{un}{$(uost)+(0,1)$};
\node [above] at (un) {$u_{n-1}$};
\krn{uost}{un};
\end{tikzpicture}
\end{center}
Builder selects $u_{n-1}u_{n-3}$.
Suppose Painter colours it red. Then the red graph induced on the vertex set $\{u_0,u_1,\ldots, u_{n-4}\}$
is an $(s,s)$-butterfly with centers $u_0$, $u_1$ and Lemma \ref{motyle}\ref{bb} implies that
Builder can force a blue path $P_{n-3}$ with ends $u_0$, $u_1$, on the set $\{u_0,u_1,\ldots, u_{n-4}\}$.
Then he selects the edges $u_1u_{n-1}$ and $u_{n-2}u_{n-3}$; both edges must be coloured blue
so that a red $C_4$ is avoided. The blue path $u_1u_{n-1}u_{n-2}u_{n-3}$ extends the blue path $P_{n-3}$
to a blue path $P_n$.
The number of rounds made so far is $e(H)+1+e(P_{n-3})+2=2n-2$ and exactly $n-1$ edges are blue
so Builder wins $\text{RR}(C_4,P_{n})$.
Suppose now that the edge $u_{n-1}u_{n-3}$ selected in round $n$ is blue.
Then the red graph induced on the vertex set $\{u_0,u_1,\ldots, u_{n-2}\}$
is an $(s,s)$-butterfly with centers $u_0$, $u_1$, $s\ge 2$, and Builder
can force a blue path on the set $V(H)$ (with ends $u_0,u_1$) in a way similar to the one in the proof of
Lemma \ref{motyle}\ref{bb}. Builder can force a blue path from $u_0$ to $u_1$ through all vertices of $\{u_2,u_3,\ldots, u_{n-2}\}$ in which $u_{n-3}$ and $u_{n-2}$ are adjacent. He selects all edges of this path except $u_{n-2}u_{n-3}$, since these two vertices have already been connected by the blue path
$u_{n-2}u_{n-1}u_{n-3}$. Thus a blue path $P_n$ is created after
$2n-3$ rounds of $\text{RR}(C_4,P_{n})$.
\subsubsection{First blue edge in Stage 2}
We consider a few cases, depending on the time Painter colours an edge blue for the first time.
\paragraph{}
First blue edge in round $n-j$, where either $j\in\{2,3\}$ or $n$ is even and $j\in\{1,4\}$.
\medskip\noindent
Then for the next $j-1\le 3$ rounds (if any) Builder selects edges as in Stage 2, starting from an edge incident to $u_1$,
no matter how Painter responds. Thus the wings of the coloured butterfly remain almost equal.
By case analysis we infer that after these $j-1$ rounds the obtained coloured graph $H$
is, up to symmetry, an $(s',s)$-butterfly on $n$ vertices and fulfills the assumptions of Lemma \ref{malo} (recall that $n\ge 14$).
Further in the game $\text{RR}(C_4,P_n)$ Builder plays according to a winning strategy of Builder in $\text{RR}(C_4,P_{v(H)},H)$
and thereby he wins $\text{RR}(C_4,P_n)$.
\paragraph{}
First blue edge in round $t$ for $10\le t\le n-5$ or in an odd round $t=n-4$.
\medskip\noindent
Let $H$ be the graph induced by $t$ coloured edges.
Suppose that $t$ is even. Then $10\le t\le n-5$, the $t-1$ red edges of $H$ induce a red $(s,s)$-butterfly $B$ with
centers $u_0,u_1$, $s\ge 2$ and there is one blue edge, say $u_0x$, selected in round $t$.
Based on Lemma \ref{motyle}\ref{br}, Builder forces a blue path $P_{t-1}$ on the vertex set $V(B)\setminus\{u_1\}$,
with ends $u_0$, $y$ for a vertex $y$ from the $u_1$-wing.
The blue edge $u_0x$ extends $P_{t-1}$ to a blue path $P_{t}$. Let $A$ be the set of internal vertices of $P_{t}$, i.e.\
$A=V(H)\setminus\{y,x,u_0\}$. Then $|A|=t-2$, the set $N_H(A)$ consists of the $t-1$ blue edges of the path $P_{t}$ and
the $t-2$ red edges of $E(B)\setminus\{u_1y\}$, and the ends of $P_{t}$ are not adjacent. Thus
$|N_H(A)|+\delta_H(u_1,x)=2t-3=2|A|+1$.
The graph $H'$ obtained from $H$ by contracting $P_{t}$ to an edge $xy$ is a $br$-path.
Note that $n-|A|=n-t+2\ge 7$. If $n-|A|= 7$, we apply Lemma \ref{warpocz};
if $n-|A|\ge 8$, we apply the inductive hypothesis;
in both cases we infer that Builder has a winning strategy in $\text{RR}(C_4,P_{n-|A|},H')$. Then, by Lemma \ref{sciag},
we conclude that Builder has a winning strategy in $\text{RR}(C_4,P_n,H)$.
Suppose now that $t$ is odd. Then $11\le t\le n-4$ and the $t-1$ red edges of $H$ induce a red $(s+1,s)$-butterfly $B$ with
centers $u_0,u_1$, $s\ge 2$, and there is one blue edge, say $u_1x$, selected in round $t$. Let $B'$ be an $(s,s)$-butterfly
obtained from $B$ by deleting an edge $u_0z$ of the $u_0$-wing.
Based on Lemma \ref{motyle}\ref{br}, Builder forces a blue path $P_{t-2}$ on the vertex set $V(B')\setminus\{u_0\}$,
with ends $u_1$, $y$ for a vertex $y$ from the $u_0$-wing.
The blue edge $u_1x$ extends $P_{t-2}$ to a blue path $P_{t-1}$. Let $A$ be the set of internal vertices of $P_{t-1}$, i.e.\
$A=V(H)\setminus\{u_0,x,y,z\}$. Then $|A|=t-3$, the set $N_H(A)$ consists of the $t-2$ blue edges of the path $P_{t-1}$ and
the $t-3$ red edges of $E(B')\setminus\{u_0y\}$, and the ends of $P_{t-1}$ are not adjacent. Thus
$|N_H(A)|+\delta_H(u_0,x)=2t-5=2|A|+1$.
The graph $H'$ obtained from $H$ by contracting $P_{t-1}$ to an edge $xy$ is a $brr$-path $x y u_0 z$.
Note that $n-|A|=n-t+3\ge 7$. If $n-|A|= 7$, we apply Lemma \ref{warpocz};
if $n-|A|\ge 8$, we apply the inductive hypothesis;
in both cases we infer that Builder has a winning strategy in $\text{RR}(C_4,P_{n-|A|},H')$.
In view of Lemma \ref{sciag}, Builder has a winning strategy in $\text{RR}(C_4,P_n,H)$.
\paragraph{}
First blue edge in round $t\in\{8,9\}$.
\medskip\noindent
The analysis is very similar to the argument in the previous case, but we use Lemma \ref{motyle}\ref{bb}
instead of Lemma \ref{motyle}\ref{br}, and the result of the contraction is a shorter path.
For the sake of completeness, we present the details.
Let $H$ be the graph induced by $t$ coloured edges.
Suppose that $t=8$. Then $t\le n-6$ since $n\ge 14$. The $t-1$ red edges of $H$ induce a red $(1,1)$-butterfly with
centers $u_0,u_1$ and there is one blue edge, say $u_0x$, selected in round $t$.
Based on Lemma \ref{motyle}\ref{bb}, Builder forces a blue path $P_t$ on the vertex set of the $(1,1)$-butterfly $B$, with ends $u_0$, $u_1$.
The blue edge $u_0x$ extends $P_t$ to a blue path $P_{t+1}$. Let $A$ be the set of internal vertices of $P_{t+1}$, i.e.
$A=V(H)\setminus\{u_1,x\}$. Then $|A|=t-1$, $N_H(A)$ consists of $t$ blue edges of the path $P_{t+1}$ and $t-1$
red edges of $B$, and the ends of $P_{t+1}$ are not adjacent. Thus
$|N_H(A)|+\delta_H(u_1,x)=2t-1=2|A|+1$.
The graph $H'$ obtained from $H$ by contracting $P_{t+1}$ to an edge $u_1x$ is clearly a $b$-path.
Note that $n-|A|=n-t+1\ge 7$. If $n-|A|= 7$, we apply Lemma \ref{warpocz}; if $n-|A|\ge 8$, we apply the inductive hypothesis; in
both cases we infer that Builder has a winning strategy in $\text{RR}(C_4,P_{n-|A|},H')$. Then, it follows from Lemma \ref{sciag},
that Builder has a winning strategy in $\text{RR}(C_4,P_n,H)$.
Suppose now that $t=9$. Then $t\le n-5$. The $t-1$ red edges of $H$ induce a red $(s+1,s)$-butterfly $B$ with
centers $(u_0,u_1)$ and there is one blue edge, say $u_1x$, selected in round $t$. Let $B'$ be an $(s,s)$-butterfly
obtained from $B$ by deleting an edge $u_0y$ of the $u_0$-wing.
Based on Lemma \ref{motyle}\ref{bb}, Builder forces a blue path $P_{t-1}$ on the vertex set of $B'$,
with ends $u_0$, $u_1$.
The blue edge $u_1x$ extends $P_{t-1}$ to a blue path $P_{t}$. Let $A$ be the set of internal vertices of $P_{t}$, i.e.
$A=V(H)\setminus\{u_0,x\}$. Then $|A|=t-2$, $N_H(A)$ consists of $t-1$ blue edges of the path $P_{t}$ and
$t-2$ red edges of $B'$, and the ends of $P_{t}$ are not adjacent. Thus
$|N_H(A)|+\delta_H(u_0,x)=2t-3=2|A|+1$.
The graph $H'$ obtained from $H$ by contracting $P_{t}$ to an edge $u_0x$ is a $br$-path $y u_0 x$.
We have $n-|A|=n-t+2\ge 7$. If $n-|A|= 7$, we apply Lemma \ref{warpocz}; if $n-|A|\ge 8$, we apply the inductive hypothesis; in
both cases Builder has a winning strategy in $\text{RR}(C_4,P_{n-|A|},H')$. Then by Lemma \ref{sciag}
we finish the argument.
So far we have described a strategy of Builder in the game $\text{RR}(C_4, P_n)$ provided the first blue edge appears
in Stage 2, Stage 3, or later. In the next subsection, we analyse the game when Painter colours an edge blue sooner.
\subsubsection{Blue edge in Stage 1}\label{stage1}
\paragraph{}\label{nieb7}
First blue edge in round 7.
\begin{center}
\begin{tikzpicture}
\jezobc{7};
\end{tikzpicture}
\end{center}
Builder forces a blue path $u_4u_2u_5u_3u_6u_1$, which together with the blue edge $u_7u_4$ forms a blue path
$P_7$ with ends $u_7,u_1$. Let $H$ be the graph induced by all coloured edges at that moment.
One can verify that for $A=\{u_4,u_2,u_5,u_3,u_6\}$ we have
$|N_H(A)|+\delta_H(u_7,u_1)=11=2|A|+1$.
The graph $H'$ obtained from $H$ by contracting $P_7$ to an edge $u_7u_1$ is a $br$-path $u_7u_1u_0$.
In short, we present the argument in the following pictures.
\begin{tikzpicture}
\jezobc{7};
\end{tikzpicture}
\quad$\to$\quad
\begin{tikzpicture}
\jezobc{7};
\sciezkan{u4}{u2,u5,u3,u6,u1};
\end{tikzpicture}
\quad$\to$\quad
\begin{tikzpicture}
\sciez{3};
\krn{a1}{a2};
\node [below] at (a1) {$u_7$};
\node [below] at (a2) {$u_1$};
\node [below] at (a3) {$u_0$};
\end{tikzpicture}
Note that $n-|A|= n-5\ge 9$. By the inductive hypothesis we infer that Builder has a winning strategy in $\text{RR}(C_4,P_{n-|A|},H')$. It follows from Lemma \ref{sciag} that Builder has a winning strategy in $\text{RR}(C_4,P_n,H)$.
\paragraph{}\label{nieb6}
First blue edge in round 6.
\medskip\noindent
The analysis is similar to that in case \ref{nieb7}, so we present only the pictures and the updated calculations.
\begin{tikzpicture}
\jezobc{6};
\end{tikzpicture}
\quad$\to$\quad
\begin{tikzpicture}
\jezobc{6};
\sciezkan{u2}{u4,u3,u5};
\end{tikzpicture}
\quad$\to$\quad
\begin{tikzpicture}
\sciez{4};
\krn{a1}{a2};
\node [below] at (a1) {$u_6$};
\node [below] at (a2) {$u_5$};
\node [below] at (a3) {$u_1$};
\node [below] at (a4) {$u_0$};
\end{tikzpicture}
For $A=\{u_2,u_4,u_3\}$ we have
$|N_H(A)|+\delta_H(u_6,u_5)=7=2|A|+1$
and $n-|A|= n-3\ge 11$.
\paragraph{}\label{nieb5}
First blue edge in round 5.
\begin{center}
\begin{tikzpicture}
\jezobc{5};
\end{tikzpicture}
\end{center}
Here (and in the following cases) the argument is analogous to that in the previous cases.
However, before the contraction, we consider a few more rounds.
Builder's move in round 6 and all possible Painter's choices are
shown in the table below.
First Builder forces some blue edges; the obtained coloured graph, shown in the table, is denoted by $H$.
Only then do we make a path contraction.
The next-to-last column shows the vertices of the contracted path and the graph $H'$ obtained from $H$ by
contraction of the path. The last column contains the number
$$m=|N_H(A)|+\delta_H(x,y),$$
where $A$ is the set of internal vertices of the contracted path and $x,y$ are its ends.
This notation is also used in the remaining cases.
\medskip
\begin{tabular}{|c|c|c|c|}
\hline
after round 6& $H$ & contracted path and $H'$ & $m$\\
\hline
\begin{tikzpicture}
\jezobc{5};
\wierz{u6}{$(u2)-(1,0)$};
\node [below] at (u6) {$u_6$};
\kr{u2}{u6};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{5};
\wierz{u6}{$(u2)-(1,0)$};
\node [below] at (u6) {$u_6$};
\kr{u2}{u6};
\sciezkan{u1}{u6,u3,u4,u2};
\end{tikzpicture}
&
\begin{tabular}{c}
$u_5u_1u_6u_3u_4u_2$\\[10pt]
\begin{tikzpicture}
\sciez{3};
\krn{a1}{a2};
\node [below] at (a1) {$u_5$};
\node [below] at (a2) {$u_2$};
\node [below] at (a3) {$u_0$};
\end{tikzpicture}\\
\end{tabular}
&9\\
\hline
\begin{tikzpicture}
\jezobc{5};
\wierz{u6}{$(u2)-(1,0)$};
\node [below] at (u6) {$u_6$};
\krn{u2}{u6};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{5};
\wierz{u6}{$(u2)-(1,0)$};
\node [below] at (u6) {$u_6$};
\krn{u2}{u6};
\sciezkan{u2}{u4,u3};
\end{tikzpicture}
&
\begin{tabular}{c}
$u_6u_2u_4u_3$\\[10pt]
\begin{tikzpicture}
\sciez{5};
\krn{a1}{a2};
\krn{a4}{a5};
\node [below] at (a1) {$u_6$};
\node [below] at (a2) {$u_3$};
\node [below] at (a3) {$u_0$};
\node [below] at (a4) {$u_1$};
\node [below] at (a5) {$u_5$};
\end{tikzpicture}\\
\end{tabular}
&5\\
\hline
\end{tabular}
\medskip
One can easily verify that $m\le 2|A|+1$ and $n-|A|\ge 8$. Therefore by the inductive hypothesis Builder has a winning strategy in $\text{RR}(C_4,P_{n-|A|},H')$. Then based on Lemma \ref{sciag} we conclude that Builder has a winning strategy in $\text{RR}(C_4,P_n,H)$.
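For the record, the verification of $m\le 2|A|+1$ reads as follows: in the first row the contracted path is $u_5u_1u_6u_3u_4u_2$, so $A=\{u_1,u_6,u_3,u_4\}$ and $m=9=2\cdot 4+1=2|A|+1$, with $n-|A|=n-4\ge 10$; in the second row the contracted path is $u_6u_2u_4u_3$, so $A=\{u_2,u_4\}$ and $m=5\le 2\cdot 2+1=2|A|+1$, with $n-|A|=n-2\ge 12$.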
\paragraph{}\label{nieb4}
First blue edge in round 4.
\begin{center}
\begin{tikzpicture}
\jezobc{4};
\end{tikzpicture}
\end{center}
Builder selects an edge $u_1u_5$, where $u_5$ is any free vertex.
If Painter colours it red, then we obtain a coloured graph isomorphic to the graph after round 5, considered in case \ref{nieb5}.
Thus we can assume that Painter colours $u_1u_5$ blue and the resulting graph is
\begin{center}
\begin{tikzpicture}
\jezobc{4};
\wierz{u5}{$(u1)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u1}{u5};
\end{tikzpicture}
\end{center}
Then Builder selects $u_3u_5$. If Painter colours it blue, we obtain a coloured graph $H$ on 6 vertices with the blue path $u_4u_1u_5u_3$
such that $|N_H(\{u_1,u_5\})|+\delta_H(u_4,u_3)=4$. After the contraction of this path, we get the graph $H'$, which is a $brr$-path.
If Painter colours $u_3u_5$ red, Builder forces a blue edge $u_5u_2$ in the next round, and the resulting coloured graph $H$ on 6 vertices
has the blue path $u_4u_1u_5u_2$
such that $|N_H(\{u_1,u_5\})|+\delta_H(u_4,u_2)=5$. After the contraction of this path, we get the graph $H'$, which is a $brr$-path.
In both cases we have $m\le 2|A|+1$ and $n-|A|\ge 8$ so one can argue as before that Builder wins the game.
\paragraph{}\label{nieb3}
First blue edge in round 3.
\begin{center}
\begin{tikzpicture}
\jezobc{3};
\end{tikzpicture}
\end{center}
Builder selects two edges $u_2u_4$ and $u_2u_5$, where $u_4,u_5$ are any free vertices. If Painter colours both edges red, we obtain a coloured graph isomorphic to the graph after round 5, considered in case \ref{nieb5}. Therefore, in what follows we assume that at least one of the edges $u_2u_4$, $u_2u_5$ is blue.
Consider two possibilities.
Suppose first that exactly one of the two edges is blue, say $u_2u_5$. Then Builder selects $u_2u_6$ with any free vertex $u_6$.
The further play, split into cases depending on Painter's choice in rounds 6, 7 and 8, is presented in the table below.
Let us recall that the graph $H$ shown in the table may contain additional blue edges, forced by Builder after
the rounds presented step by step. The notation {\em n/a} in the table means that we do not present Builder's move
in this round.
\medskip
\scalebox{0.74}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
after round 6& after round 7& after round 8 & $H$ & \makecell{contracted path\\ and $H'$} & $m$\\
\hline
\multirow{2}{*}{
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\wierz{u6}{$(u2)-(0,1)$};
\node [left] at (u6) {$u_6$};
\krn{u2}{u6};
\end{tikzpicture}
}
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\wierz{u6}{$(u2)-(0,1)$};
\node [left] at (u6) {$u_6$};
\krn{u2}{u6};
\krn{u0}{u5};
\end{tikzpicture}
&
\makecell[b]{n/a\\ \ }
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\wierz{u6}{$(u2)-(0,1)$};
\node [left] at (u6) {$u_6$};
\krn{u2}{u6};
\krn{u0}{u5};
\end{tikzpicture}
&
\makecell[b]{
$u_3u_0u_5u_2u_6$\\
\\
\begin{tikzpicture}
\sciez{2};
\krn{a1}{a2};
\node [below] at (a1) {$u_3$};
\node [below] at (a2) {$u_6$};
\end{tikzpicture}
}
&7\\
\cline{2-6}
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\wierz{u6}{$(u2)-(0,1)$};
\node [left] at (u6) {$u_6$};
\krn{u2}{u6};
\kr{u0}{u5};
\end{tikzpicture}
&
\makecell[b]{n/a\\ \ }
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\wierz{u6}{$(u2)-(0,1)$};
\node [left] at (u6) {$u_6$};
\krn{u2}{u6};
\kr{u0}{u5};
\sciezkan{u5}{u4,u1};
\end{tikzpicture}
&
\makecell[b]{
$u_6u_2u_5u_4u_1$\\
\\
\begin{tikzpicture}
\sciez{4};
\krn{a1}{a2};
\krn{a3}{a4};
\node [below] at (a1) {$u_6$};
\node [below] at (a2) {$u_1$};
\node [below] at (a3) {$u_0$};
\node [below] at (a4) {$u_3$};
\end{tikzpicture}
}
&7\\
\hlineB{3}
\multirow{3}{*}{
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\wierz{u6}{$(u2)-(0,1)$};
\node [left] at (u6) {$u_6$};
\kr{u2}{u6};
\end{tikzpicture}
}
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\wierz{u6}{$(u2)-(0,1)$};
\node [left] at (u6) {$u_6$};
\kr{u2}{u6};
\wierz{u7}{$(u6)-(0,1)$};
\node [left] at (u7) {$u_7$};
\kr{u6}{u7};
\end{tikzpicture}
&
\makecell[b]{n/a\\ \ }
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\wierz{u6}{$(u2)-(0,1)$};
\node [left] at (u6) {$u_6$};
\kr{u2}{u6};
\wierz{u7}{$(u6)-(0,1)$};
\node [left] at (u7) {$u_7$};
\kr{u6}{u7};
\sciezkan{u6}{u1,u4,u7,u0};
\end{tikzpicture}
&
\makecell[b]{
$u_3u_0u_7u_4u_1u_6$\\
\\
\begin{tikzpicture}
\sciez{4};
\krn{a1}{a2};
\krn{a3}{a4};
\node [below] at (a1) {$u_3$};
\node [below] at (a2) {$u_6$};
\node [below] at (a3) {$u_2$};
\node [below] at (a4) {$u_5$};
\end{tikzpicture}
}
&9\\
\clineB{2-6}{3}
&
\multirow{2}{*}{
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\wierz{u6}{$(u2)-(0,1)$};
\node [left] at (u6) {$u_6$};
\kr{u2}{u6};
\wierz{u7}{$(u6)-(0,1)$};
\node [left] at (u7) {$u_7$};
\krn{u6}{u7};
\end{tikzpicture}
}
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\wierz{u6}{$(u2)-(0,1)$};
\node [left] at (u6) {$u_6$};
\kr{u2}{u6};
\wierz{u7}{$(u6)-(0,1)$};
\node [left] at (u7) {$u_7$};
\krn{u6}{u7};
\kr{u5}{u0};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\wierz{u6}{$(u2)-(0,1)$};
\node [left] at (u6) {$u_6$};
\kr{u2}{u6};
\wierz{u7}{$(u6)-(0,1)$};
\node [left] at (u7) {$u_7$};
\krn{u6}{u7};
\kr{u5}{u0};
\sciezkan{u6}{u1,u4,u5};
\end{tikzpicture}
&
\makecell[b]{
$u_7u_6u_1u_4u_5u_2$\\
\\
\begin{tikzpicture}
\sciez{4};
\krn{a1}{a2};
\krn{a3}{a4};
\node [below] at (a1) {$u_7$};
\node [below] at (a2) {$u_2$};
\node [below] at (a3) {$u_0$};
\node [below] at (a4) {$u_3$};
\end{tikzpicture}
}
&9\\
\cline{3-6}
&
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\wierz{u6}{$(u2)-(0,1)$};
\node [left] at (u6) {$u_6$};
\kr{u2}{u6};
\wierz{u7}{$(u6)-(0,1)$};
\node [left] at (u7) {$u_7$};
\krn{u6}{u7};
\krn{u5}{u0};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\wierz{u6}{$(u2)-(0,1)$};
\node [left] at (u6) {$u_6$};
\kr{u2}{u6};
\wierz{u7}{$(u6)-(0,1)$};
\node [left] at (u7) {$u_7$};
\krn{u6}{u7};
\krn{u5}{u0};
\sciezkan{u6}{u1,u4};
\end{tikzpicture}
&
\makecell[b]{
$u_7u_6u_1u_4$\\
then\\
$u_3u_0u_5u_2$\\
\\
\begin{tikzpicture}
\sciez{4};
\krn{a1}{a2};
\krn{a3}{a4};
\node [below] at (a1) {$u_7$};
\node [below] at (a2) {$u_4$};
\node [below] at (a3) {$u_2$};
\node [below] at (a4) {$u_3$};
\end{tikzpicture}
}
&
\makecell[b]{5\\ then\\ 4}\\
\hline
\end{tabular}
}
\medskip
Let us comment on the last row, i.e.~the case when Painter colours blue the edges $u_6u_7$ and $u_0u_5$ selected in rounds 7 and 8, respectively.
After round 8 Builder forces the blue path $u_6u_1u_4$. The blue path $u_7u_6u_1u_4$ has non-adjacent ends
and 5 coloured edges incident to its internal vertices.
So $N_H(\{u_6,u_1\})+\delta_H(u_7,u_4)=5\le 2|\{u_6,u_1\}|+1$.
We contract the path $u_7u_6u_1u_4$ and obtain a new coloured graph, let us denote it by $H''$, in which there is
the blue path $u_3u_0u_5u_2$ with non-adjacent ends and 4 coloured edges
incident to its internal vertices. Thus $N_{H''}(\{u_0,u_5\})+\delta_{H''}(u_3,u_2)=4\le 2|\{u_0,u_5\}|+1$.
We contract the path $u_3u_0u_5u_2$; the result is the graph $H'$ shown in the table.
By the inductive hypothesis, Builder has a winning strategy in $\text{RR}(C_4,P_{v(H')},H')$.
Then Lemma \ref{sciag} implies that Builder has a winning strategy in $\text{RR}(C_4,P_{v(H'')},H'')$.
Finally, we apply Lemma \ref{sciag} again and conclude that Builder wins $\text{RR}(C_4,P_{v(H)},H)$.
The analysis of the other cases presented in the table above is simpler, so we omit it.
Suppose now that both edges $u_2u_4$ and $u_2u_5$ selected by Builder in rounds 4 and 5 are coloured blue.
Builder's further moves, and the cases depending on Painter's choices in rounds 6 and 7, are presented in the table below.
\medskip
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c|c|}
\hline
after round 6& after round 7& $H$ & \makecell{contracted path\\ and $H'$} & $m$\\
\hline
\multirow{2}{*}{
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\kr{u1}{u3};
\end{tikzpicture}
}
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\kr{u1}{u3};
\kr{u5}{u3};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\kr{u1}{u3};
\kr{u5}{u3};
\krn{u0}{u5};
\end{tikzpicture}
&
\makecell[b]{
$u_4u_2u_5u_0u_3$\\
\\
\begin{tikzpicture}
\sciez{3};
\krn{a1}{a2};
\node [below] at (a1) {$u_4$};
\node [below] at (a2) {$u_3$};
\node [below] at (a3) {$u_1$};
\end{tikzpicture}
}
&
7\\
\cline{2-5}
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\kr{u1}{u3};
\krn{u3}{u5};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\kr{u1}{u3};
\krn{u3}{u5};
\end{tikzpicture}
&
\makecell[b]{
$u_4u_2u_5u_3u_0$\\
\\
\begin{tikzpicture}
\sciez{3};
\krn{a1}{a2};
\node [below] at (a1) {$u_4$};
\node [below] at (a2) {$u_0$};
\node [below] at (a3) {$u_1$};
\end{tikzpicture}
}
&6\\
\hlineB{3}
\multirow{2}{*}{
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\krn{u1}{u3};
\end{tikzpicture}
}
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\kr{u0}{u5};
\krn{u1}{u3};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\kr{u0}{u5};
\krn{u1}{u3};
\end{tikzpicture}
&
\makecell[b]{
$u_4u_2u_5$\\
then\\
$u_0u_3u_1$\\
\\
\begin{tikzpicture}
\sciez{4};
\krn{a1}{a2};
\krn{a3}{a4};
\node [below] at (a1) {$u_4$};
\node [below] at (a2) {$u_5$};
\node [below] at (a3) {$u_0$};
\node [below] at (a4) {$u_1$};
\end{tikzpicture}
}
&
\makecell[b]{3\\ then\\ 3}\\
\cline{2-5}
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\krn{u0}{u5};
\krn{u1}{u3};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{3};
\wierz{u4}{$(u2)-(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u2}{u4};
\wierz{u5}{$(u2)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u2}{u5};
\krn{u0}{u5};
\krn{u1}{u3};
\end{tikzpicture}
&
\makecell[b]{
$u_4u_2u_5u_0u_3u_1$\\
\\
\begin{tikzpicture}
\sciez{2};
\krn{a1}{a2};
\node [below] at (a1) {$u_4$};
\node [below] at (a2) {$u_1$};
\end{tikzpicture}
}
&7\\
\hline
\end{tabular}
}
\medskip
From this point we argue analogously to the previous cases, so we skip the details.
\paragraph{}\label{nieb2}
First blue edge in round 2.
\begin{center}
\begin{tikzpicture}
\jezobc{2};
\end{tikzpicture}
\end{center}
Then in round 3 Builder selects an edge incident to $u_1$ and any free vertex $u_3$. After Painter's colour choice,
the board contains either a $brb$-path or a $brr$-path.
Suppose that $u_2u_0u_1u_3$ is a $brb$-path. The edges selected by Builder in rounds 4 and 5, together with their possible colours,
are presented in the table below. We use the notation of the previous cases and skip the details of the argument, which is
analogous.
\medskip
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c|c|}
\hline
after round 4& after round 5& $H$ & \makecell{contracted path\\ and $H'$} & $m$\\
\hline
\multirow{2}{*}{
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\krn{u1}{u3};
\wierz{u4}{$(u1)+(0,1)$};
\node [above] at (u4) {$u_4$};
\kr{u1}{u4};
\end{tikzpicture}
}
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\krn{u1}{u3};
\wierz{u4}{$(u1)+(0,1)$};
\node [above] at (u4) {$u_4$};
\kr{u1}{u4};
\kr{u4}{u3};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\krn{u1}{u3};
\wierz{u4}{$(u1)+(0,1)$};
\node [above] at (u4) {$u_4$};
\kr{u1}{u4};
\kr{u4}{u3};
\sciezkan{u0}{u3};
\end{tikzpicture}
&
\makecell[b]{
$u_2u_0u_3u_1$\\
\\
\begin{tikzpicture}
\sciez{3};
\krn{a1}{a2};
\node [below] at (a1) {$u_2$};
\node [below] at (a2) {$u_1$};
\node [below] at (a3) {$u_4$};
\end{tikzpicture}
}
&
5\\
\cline{2-5}
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\krn{u1}{u3};
\wierz{u4}{$(u1)+(0,1)$};
\node [above] at (u4) {$u_4$};
\kr{u1}{u4};
\krn{u4}{u3};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\krn{u1}{u3};
\wierz{u4}{$(u1)+(0,1)$};
\node [above] at (u4) {$u_4$};
\kr{u1}{u4};
\krn{u4}{u3};
\end{tikzpicture}
&
\makecell[b]{
$u_1u_3u_4$\\
\\
\begin{tikzpicture}
\sciez{4};
\krn{a1}{a2};
\krn{a3}{a4};
\node [below] at (a1) {$u_2$};
\node [below] at (a2) {$u_0$};
\node [below] at (a3) {$u_1$};
\node [below] at (a4) {$u_4$};
\end{tikzpicture}
}
&3\\
\hlineB{3}
\multirow{2}{*}{
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\krn{u1}{u3};
\wierz{u4}{$(u1)+(0,1)$};
\node [above] at (u4) {$u_4$};
\krn{u1}{u4};
\end{tikzpicture}
}
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\krn{u1}{u3};
\wierz{u4}{$(u1)+(0,1)$};
\node [above] at (u4) {$u_4$};
\krn{u1}{u4};
\kr{u0}{u4};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\krn{u1}{u3};
\wierz{u4}{$(u1)+(0,1)$};
\node [above] at (u4) {$u_4$};
\krn{u1}{u4};
\kr{u0}{u4};
\end{tikzpicture}
&
\makecell[b]{
$u_4u_1u_3$\\
\\
\begin{tikzpicture}
\sciez{4};
\krn{a1}{a2};
\krn{a3}{a4};
\node [below] at (a1) {$u_2$};
\node [below] at (a2) {$u_0$};
\node [below] at (a3) {$u_4$};
\node [below] at (a4) {$u_3$};
\end{tikzpicture}
}
&
3\\
\cline{2-5}
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\krn{u1}{u3};
\wierz{u4}{$(u1)+(0,1)$};
\node [above] at (u4) {$u_4$};
\krn{u1}{u4};
\krn{u0}{u4};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\krn{u1}{u3};
\wierz{u4}{$(u1)+(0,1)$};
\node [above] at (u4) {$u_4$};
\krn{u1}{u4};
\krn{u0}{u4};
\end{tikzpicture}
&
\makecell[b]{
$u_2u_0u_4u_1u_3$\\
\\
\begin{tikzpicture}
\sciez{2};
\krn{a1}{a2};
\node [below] at (a1) {$u_2$};
\node [below] at (a2) {$u_3$};
\end{tikzpicture}
}
&5\\
\hline
\end{tabular}
}
\medskip
Assume now that $u_2u_0u_1u_3$ is a $brr$-path. The further play is presented below; the cases depend on
Painter's choices in rounds 4, 5 and 6.
\medskip
\hskip-25pt
\scalebox{0.7}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
after round 4& after round 5& after round 6 & $H$ & \makecell{contracted path\\ and $H'$} & $m$\\
\hline
\multirow{3}{*}{
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\kr{u1}{u3};
\wierz{u4}{$(u3)+(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u3}{u4};
\end{tikzpicture}
}
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\kr{u1}{u3};
\wierz{u4}{$(u3)+(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u3}{u4};
\krnwyg{u4}{u1};
\end{tikzpicture}
&
\makecell[b]{n/a\\ \ }
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\kr{u1}{u3};
\wierz{u4}{$(u3)+(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u3}{u4};
\krnwyg{u4}{u1};
\end{tikzpicture}
&
\makecell[b]{
$u_1u_4u_3$\\
\\
\begin{tikzpicture}
\sciez{4};
\krn{a1}{a2};
\krn{a3}{a4};
\node [below] at (a1) {$u_2$};
\node [below] at (a2) {$u_0$};
\node [below] at (a3) {$u_1$};
\node [below] at (a4) {$u_3$};
\end{tikzpicture}
}
&3\\
\clineB{2-6}{3}
&
\multirow{2}{*}{
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\kr{u1}{u3};
\wierz{u4}{$(u3)+(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u3}{u4};
\krwyg{u4}{u1};
\end{tikzpicture}
}
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\kr{u1}{u3};
\wierz{u4}{$(u3)+(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u3}{u4};
\krwyg{u4}{u1};
\wierz{u5}{$(u3)+(0,1)$};
\node [above] at (u5) {$u_5$};
\kr{u3}{u5};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\kr{u1}{u3};
\wierz{u4}{$(u3)+(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u3}{u4};
\krwyg{u4}{u1};
\wierz{u5}{$(u3)+(0,1)$};
\node [above] at (u5) {$u_5$};
\kr{u3}{u5};
\sciezkan{u4}{u5,u0};
\end{tikzpicture}
&
\makecell[b]{
$u_2u_0u_5u_4u_3$\\
\\
\begin{tikzpicture}
\sciez{3};
\krn{a1}{a2};
\node [below] at (a1) {$u_2$};
\node [below] at (a2) {$u_3$};
\node [below] at (a3) {$u_1$};
\end{tikzpicture}
}
&7\\
\cline{3-6}
&
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\kr{u1}{u3};
\wierz{u4}{$(u3)+(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u3}{u4};
\krwyg{u4}{u1};
\wierz{u5}{$(u3)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u3}{u5};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\kr{u1}{u3};
\wierz{u4}{$(u3)+(1,0)$};
\node [below] at (u4) {$u_4$};
\krn{u3}{u4};
\krwyg{u4}{u1};
\wierz{u5}{$(u3)+(0,1)$};
\node [above] at (u5) {$u_5$};
\krn{u3}{u5};
\end{tikzpicture}
&
\makecell[b]{
$u_4u_3u_5$\\
\\
\begin{tikzpicture}
\sciez{5};
\krn{a1}{a2};
\krn{a4}{a5};
\node [below] at (a1) {$u_2$};
\node [below] at (a2) {$u_0$};
\node [below] at (a3) {$u_1$};
\node [below] at (a4) {$u_4$};
\node [below] at (a5) {$u_5$};
\end{tikzpicture}
}
&
3\\
\hlineB{3}
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\kr{u1}{u3};
\wierz{u4}{$(u3)+(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u3}{u4};
\end{tikzpicture}
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\kr{u1}{u3};
\wierz{u4}{$(u3)+(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u3}{u4};
\krnwyg{u4}{u0};
\end{tikzpicture}
&
\makecell[b]{n/a\\ \ }
&
\begin{tikzpicture}
\jezobc{2};
\wierz{u3}{$(u1)+(1,0)$};
\node [below] at (u3) {$u_3$};
\kr{u1}{u3};
\wierz{u4}{$(u3)+(1,0)$};
\node [below] at (u4) {$u_4$};
\kr{u3}{u4};
\krnwyg{u4}{u0};
\end{tikzpicture}
&
\makecell[b]{
$u_2u_0u_4$\\
\\
\begin{tikzpicture}
\sciez{4};
\krn{a1}{a2};
\node [below] at (a1) {$u_2$};
\node [below] at (a2) {$u_4$};
\node [below] at (a3) {$u_3$};
\node [below] at (a4) {$u_1$};
\end{tikzpicture}
}
&3\\
\hline
\end{tabular}
}
\medskip
\paragraph{}\label{nieb1}
First blue edge in round 1.
\medskip\noindent
In this case we have a blue edge $u_0u_1$. In the second round, Builder selects an edge incident to $u_1$ and some
free vertex $u_2$. If Painter colours $u_1u_2$ red, we obtain a coloured graph considered in the previous case \ref{nieb2}.
If Painter colours $u_1u_2$ blue, we obtain a blue path $u_0u_1u_2$, which can be contracted to a blue edge; by
Lemma \ref{sciag} and the inductive hypothesis, we conclude that Builder wins the game.
\subsection{$G$ is not empty}
It remains to prove the statement if $G$ is a $b$-, $br$-, $brb$-, $brr$-, or $brrb$-path.
For a $b$-path, the game was analysed in Section \ref{stage1}, case \ref{nieb1}.
The cases of $br$-, $brb$- and $brrb$-paths were analysed
in Section \ref{stage1}, case \ref{nieb2}. There we also verified that
Builder wins starting from a $brrr$- or a $brrb$-path, and hence he wins starting from a $brr$-path as well.
\section{Algorithm for small instances of $\text{RR}(C_4,P_n,H)$}
\label{algorithm}
We consider a slightly modified version of the game $\text{RR}(C_4,P_n,H)$ introduced in Section \ref{prelim}, which will be denoted by $\text{RRC}(C_4,P_n,H,v,e)$, where $v,e\in\mathbb N$.
The board of the game is
$K_\mathbb N$, with exactly $e(H)$ edges coloured, inducing a copy of $H$ in $K_\mathbb N$. The rules of selecting and colouring edges
by Builder and Painter are the same as in the standard game $\tilde R(C_4,P_n)$. The additional rules are:
\begin{enumerate}
\item[(1)]
Builder wins $\text{RRC}(C_4,P_n,H,v,e)$ if after at most $e-e(H)$ rounds there is a red $C_4$ or a blue $P_n$ on the board, and he loses otherwise.
\item[(2)]
At any moment of the game, the graph induced by coloured edges (including the edges of $H$) has at most $v$ vertices.
\item[(3)]
At any moment of the game, the graph induced by coloured edges is connected.
\item[(4)]
In every round, Builder selects an edge $x$ such that the graph induced by all blue edges and $x$ is contained in a path on $n$ vertices.
\end{enumerate}
We will also write $rc(C_4,P_n,H,v,e)=1$ if Builder has a winning strategy in $\text{RRC}(C_4,P_n,H,v,e)$ and $rc(C_4,P_n,H,v,e)=0$ otherwise.
Note that the essential difference between $\text{RRC}(C_4,P_n,H,\infty,2n-2)$ and $\text{RR}(C_4,P_n,H)$ is the connectivity condition. This condition makes the game potentially harder for Builder. Moreover, if $v<v'$, then $rc(G_1,G_2,H,v,e)\le rc(G_1,G_2,H,v',e)$, since extra vertices cannot harm Builder. Our program, described below, computes the values $rc(C_4,P_n,H,n+1,2n-2)$ for $n\in\{7,8,\ldots,13\}$ and every $H$ that is one of the following coloured graphs: the empty graph, a $b$-path, a $br$-path, a $brb$-path, a $brr$-path, or a $brrb$-path.
As a result of the computation, we obtained
$rc(C_4,P_7,\emptyset,8,12)=0$ and $rc(C_4,P_n,H,n+1,2n-2)=1$ in all other cases. This proves Lemma \ref{warpocz}, since the condition $rc(C_4,P_n,H,n+1,2n-2)=1$ implies that Builder also has a winning strategy in $\text{RR}(C_4,P_n,H)$.
Using the program, we also computed that $rc(C_4,P_7,\emptyset,8,13)=1$, which proves the second part of Theorem \ref{main}.
\subsection{Algorithm outline}
The main goal of the algorithm is to find a suitable move for Builder in every position arising from every possible Painter strategy. By a position we mean the coloured graph induced by all edges coloured in the game so far.
We analyse the game tree of $\text{RRC}(C_4,P_n,H,v,e)$ recursively, with a simple version of standard alpha-beta pruning. Here is the outline.
Consider a position $H'$. Create the list of all edges Builder may select as his next move without violating rules (2), (3) and (4) of the game. Consider the first move, say edge $x$, on the list, and consider every possible colouring of $x$.
For the obtained coloured graph $H'+x$, check whether it is isomorphic to a position already considered before.\footnote{In fact, no polynomial-time algorithm for the graph isomorphism problem is known, so the program looks for isomorphic graphs but does not try too hard. The idea is to sort all vertices by their blue and red degrees and then look for identical graphs. The visited graphs are stored as a combination of two adjacency matrices: the lower triangle represents blue edges and the upper triangle represents red ones. This makes it possible to store a coloured graph in $|V(G)|^2$ bits.} If so, use the known value $rc(C_4,P_n,H'+x,v,e)$ and prune the subtree below it. Otherwise, call the recursive procedure to find $rc(C_4,P_n,H'+x,v,e)$. If $rc(C_4,P_n,H'+x,v,e)=1$, prune all other branches of the game tree starting from the position $H'$; otherwise consider the next edge on Builder's move list in position $H'$ and continue the analysis.
The algorithm stops if there is a red $C_4$ or a blue $P_n$ on the board, or if there are $e-e(H)$ coloured edges. Evaluating a final position $H'$
is obvious: we put $rc(C_4,P_n,H',v,e)=1$ if $H'$ contains a red $C_4$ or a blue $P_n$; otherwise we put $rc(C_4,P_n,H',v,e)=0$.
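The exists/forall skeleton of this search (Builder needs \emph{some} move such that \emph{every} Painter colouring leads to a Builder win) can be illustrated on a toy game. The following sketch, including the toy game itself and all names, is our own illustration and not the actual program from Appendix \ref{app:program}; here Builder's choice of edge is trivial, so only the universal quantification over Painter's colours and the memoization of visited positions remain.

```python
from functools import lru_cache

# Toy game with the same exists/forall shape as RRC: in each round Builder
# offers an edge and Painter colours it blue or red; Builder wins if he
# collects B blue or R red edges within `rounds` moves.
@lru_cache(maxsize=None)
def builder_wins(blue, red, rounds, B=4, R=3):
    if blue >= B or red >= R:
        return True          # final position: Builder has already won
    if rounds == 0:
        return False         # out of rounds: Painter wins
    # Painter picks the colour, so Builder wins only if *both*
    # colourings of the offered edge lead to a Builder win.
    return (builder_wins(blue + 1, red, rounds - 1, B, R)
            and builder_wins(blue, red + 1, rounds - 1, B, R))

# Pigeonhole: Painter can stall for (B-1)+(R-1) rounds, but no longer.
assert not builder_wins(0, 0, 5)   # 5 < 4 + 3 - 1
assert builder_wins(0, 0, 6)       # B + R - 1 rounds suffice
```

In the real program Builder additionally maximizes over all admissible edges, with alpha-beta pruning cutting the remaining branches as soon as one winning move is found.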
\subsection{Implementation}
We implemented the above algorithm in C++ and present the code in Appendix \ref{app:program}.
Let us add that, in order to shorten the running time, we create the list of all possible Builder moves in a position $H'$
of the game
$\text{RRC}(C_4,P_{n[t]},H,v[t],e[t])$ in such a way that if $x$ is a winning move in position $H'$ in the game $\text{RRC}(C_4,P_{n[t-1]},H,v[t-1],e[t-1])$, then $x$ comes first on the list while analysing position $H'$ of $\text{RRC}(C_4,P_{n[t]},H,v[t],e[t])$ (here $n[t],v[t],e[t]$ denote the parameters of the $t$-th game analysed by the program). On a standard PC, the program evaluated all the games listed in Lemma \ref{warpocz} within 8 minutes and 20 seconds.
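The packing of a coloured graph into $|V(G)|^2$ bits, with blue edges in the lower triangle and red edges in the upper triangle of a single adjacency matrix as described in the footnote of the algorithm outline, can be sketched as follows. This is our own minimal Python illustration of the storage idea, not the C++ code from the appendix.

```python
def encode(n, blue_edges, red_edges):
    """Pack a coloured graph on vertices 0..n-1 into n*n bits:
    bit n*i+j with i > j stores a blue edge, with i < j a red edge."""
    bits = 0
    for (u, v) in blue_edges:                  # lower triangle
        i, j = max(u, v), min(u, v)
        bits |= 1 << (n * i + j)
    for (u, v) in red_edges:                   # upper triangle
        i, j = min(u, v), max(u, v)
        bits |= 1 << (n * i + j)
    return bits

def decode(n, bits):
    blue, red = set(), set()
    for i in range(n):
        for j in range(n):
            if bits >> (n * i + j) & 1:
                (blue if i > j else red).add((min(i, j), max(i, j)))
    return blue, red

b, r = {(0, 1), (1, 2)}, {(0, 2)}
assert decode(4, encode(4, b, r)) == (b, r)    # lossless round trip
```

A single integer key of this form makes it cheap to store and look up visited positions.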
\section{Proof of Theorem \ref{main}}
The first part of the main theorem is a consequence of the fact that Builder has a winning strategy in the game $\text{RR}(C_4,P_n)$, as stated in Theorem \ref{main2}. The second part of Theorem \ref{main} follows from the computation $rc(C_4,P_7,\emptyset,8,13)=1$ mentioned in Section \ref{algorithm}, which implies that in $\tilde R(C_4,P_7)$ Builder can force Painter to create a red $C_4$ or a blue $P_7$ within at most 13 rounds.
\bibliographystyle{amsplain}
% arXiv:solv-int/9605010: A new integrable system related to the Toda lattice
\section{Introduction}
In this paper we introduce a new integrable lattice system:
\begin{equation}\label{new}
\ddot{x}_k=\dot{x}_k\Big(\exp(x_{k+1}-x_k)-\exp(x_k-x_{k-1})\Big),
\end{equation}
along with two integrable discretizations thereof. In the difference equations
below $x_k=x_k(t)$ are supposed to be functions of the discrete time
$t\in h{\Bbb Z}$, and $\widetilde{x}_k=x_k(t+h)$, $\undertilde{x_k}=x_k(t-h)$. The first
of our integrable discretizations is implicit with respect to the updates
$\widetilde{x}_k$'s:
\begin{equation}\label{dnew1}
\frac{\exp(\widetilde{x}_k-x_k)-1}{\exp(x_k-\undertilde{x_k})-1}=
\frac{1+h\exp(\undertilde{x_{k+1}}-x_k)}{1+h\exp(x_k-\widetilde{x}_{k-1})},
\end{equation}
and the other is explicit:
\begin{equation}\label{dnew2}
\frac{\exp(\widetilde{x}_k-x_k)-1}{\exp(x_k-\undertilde{x_k})-1}=
\frac{1+h\exp(x_{k+1}-x_k)}{1+h\exp(x_k-x_{k-1})}.
\end{equation}
The system (\ref{new}) closely resembles the usual Toda lattice:
\begin{equation}\label{Toda}
\ddot{x}_k=\exp(x_{k+1}-x_k)-\exp(x_k-x_{k-1}),
\end{equation}
and in fact turns out to be closely related to it by means of a sort of
B\"acklund transformation. To the author's knowledge, this system has not
appeared in the literature, despite its beauty and possible physical
applications.
It is by now well known that the Toda lattice admits several apparently different
integrable discretizations (which are, in fact, closely connected with each
other, but these connections are rather nontrivial). Two of the discretizations
are explicit with respect to $\widetilde{x}_k$'s, namely Hirota's one \cite{H}:
\begin{equation}\label{Hir}
\exp(\widetilde{x}_k-x_k)-\exp(x_k-\undertilde{x_k})=
h^2\Big(\exp(x_{k+1}-x_k)-\exp(x_k-x_{k-1})\Big),
\end{equation}
and a standard--like one \cite{S1}:
\begin{equation}\label{stand}
\exp(\widetilde{x}_k-2x_k+\undertilde{x_k})=\frac{1+h^2\exp(x_{k+1}-x_k)}
{1+h^2\exp(x_k-x_{k-1})},
\end{equation}
which can be also presented in the form
\begin{equation}\label{Sur1}
\exp(\widetilde{x}_k-x_k)-\exp(x_k-\undertilde{x_k})=
h^2\Big(\exp(x_{k+1}-\undertilde{x_k})-\exp(\widetilde{x}_k-x_{k-1})\Big).
\end{equation}
Another two discretizations are implicit with respect to $\widetilde{x}_k$'s. They
were introduced in \cite{S3} and read:
\begin{equation}\label{Sur2}
\exp(\widetilde{x}_k-x_k)-\exp(x_k-\undertilde{x_k})=
h^2\Big(\exp(\undertilde{x_{k+1}}-x_k)-\exp(x_k-\widetilde{x}_{k-1})\Big)
\end{equation}
and
\begin{equation}\label{Sur3}
\exp(\widetilde{x}_k-x_k)-\exp(x_k-\undertilde{x_k})=
h^2\Big(\exp(\undertilde{x_{k+1}}-\undertilde{x_k})-\exp(\widetilde{x}_k-\widetilde{x}_{k-1})\Big).
\end{equation}
We discuss their algebraic structure, as well as their relations to each
other further on.
We shall demonstrate that (\ref{dnew1}) is related to (\ref{new}) in just
the same way as (\ref{Sur2}) is related to (\ref{Toda}), and shall also
elaborate on the algebraic structure of the discretization (\ref{dnew2}).
All the systems above (continuous and discrete time ones) may be considered
either on an infinite lattice ($k\in{\Bbb Z}$), or on a finite one
($1\le k\le N$). In the last case one of the two types of boundary conditions
may be imposed: open--end ($x_0=\infty$, $x_{N+1}=-\infty$) or periodic
($x_0\equiv x_N$, $x_{N+1}\equiv x_1$). We shall be concerned only with the
finite lattices here, consideration of the infinite ones being to a large
extent similar.
\setcounter{equation}{0}
\section{Newtonian equations of motion:\newline
Lagrangian and Hamiltonian formulations}
All the equations introduced in the previous section, both continuous--
and discrete--time, are written in the Newtonian form:
\[
\ddot{x}_k=\Phi_k(\dot{x},x)\quad{\rm or}\quad \Psi_k(\widetilde{x},x,\undertilde{x})=0,
\]
respectively. They all turn out to admit a Lagrangian formulation.
Recall that in the continuous time case Lagrangian equations are given by
\begin{equation}\label{Lagr}
\frac{d}{dt}\frac{\partial{\cal L}}{\partial\dot{x}_k}-
\frac{\partial{\cal L}}{\partial x_k}=0,
\end{equation}
while their discrete time analog is given by
\begin{equation}\label{d Lagr}
\partial\Big(\Lambda(\widetilde{x},x)+\Lambda(x,\undertilde{x})\Big)/\partial x_k=0.
\end{equation}
Finally, recall that a Lagrangian formulation also allows one to introduce
a Hamiltonian one. Namely, in the continuous time case one
defines momenta $p_k$ canonically conjugated to the coordinates $x_k$ by
\begin{equation}
p_k=\partial{\cal L}/\partial\dot{x}_k.
\end{equation}
Then the flow defined by (\ref{Lagr}), being expressed in terms of $(x,p)$,
preserves the standard symplectic form $\sum dx_k\wedge dp_k$ on the phase space
${\Bbb R}^{2N}(x,p)$.
Moreover, this flow may be written in a canonical form
\begin{equation}\label{Ham}
\dot{x}_k=\partial H/\partial p_k,\quad \dot{p}_k=-\partial H/\partial x_k,
\end{equation}
the Hamiltonian function $H(x,p)$ being given by
\begin{equation}\label{L to H}
H=\sum_{k=1}^N \dot{x}_kp_k-{\cal L}.
\end{equation}
Analogously, in the discrete time case the momenta $p_k$ canonically conjugated
to $x_k$ are given by
\begin{equation}\label{dLagr:x to p}
p_k=\partial\Lambda(x,\undertilde{x})/\partial x_k.
\end{equation}
Then the map $(x,\undertilde{x})\mapsto(\widetilde{x},x)$ induces a symplectic map
$(x,p)\mapsto(\widetilde{x},\widetilde{p})$ of the phase space ${\Bbb R}^{2N}(x,p)$, i.e. a map
preserving the standard symplectic form $\sum dx_k\wedge dp_k$. Note that
(\ref{dLagr:x to p}) implies that the equations (\ref{d Lagr}) may be
presented as
\begin{equation}\label{dLagr:p}
p_k=-\partial\Lambda(\widetilde{x},x)/\partial x_k,
\end{equation}
\begin{equation}\label{dLagr:wp}
\widetilde{p}_k=\partial\Lambda(\widetilde{x},x)/\partial \widetilde{x}_k.
\end{equation}
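The symplecticity of the map induced by (\ref{dLagr:p}), (\ref{dLagr:wp}) can be checked numerically on the simplest example. The sketch below is our own illustration: we take the discrete harmonic-oscillator Lagrangian $\Lambda(\widetilde{x},x)=(\widetilde{x}-x)^2/(2h)-hx^2/2$, chosen purely for demonstration, and verify that the induced map $(x,p)\mapsto(\widetilde{x},\widetilde{p})$ has Jacobian determinant 1, which in one degree of freedom is exactly the preservation of $dx\wedge dp$.

```python
h = 0.1

# Discrete Lagrangian Lambda(y, x) = (y - x)^2/(2h) - h*x^2/2.
# Equations p = -dLambda/dx and p~ = dLambda/dy yield an explicit map:
def step(x, p):
    xn = x + h * (p - h * x)    # solve p = (xn - x)/h + h*x for xn
    pn = (xn - x) / h           # p~ = dLambda/d(xn)
    return xn, pn

# central-difference Jacobian determinant of the map at a sample point
def jacobian_det(x, p, eps=1e-6):
    xa, pa = step(x + eps, p); xb, pb = step(x - eps, p)
    dxdx, dpdx = (xa - xb) / (2 * eps), (pa - pb) / (2 * eps)
    xa, pa = step(x, p + eps); xb, pb = step(x, p - eps)
    dxdp, dpdp = (xa - xb) / (2 * eps), (pa - pb) / (2 * eps)
    return dxdx * dpdp - dxdp * dpdx

assert abs(jacobian_det(0.7, -0.3) - 1.0) < 1e-8   # area is preserved
```

Here the map is linear, so the determinant $\,(1-h^2)\cdot 1 - h\cdot(-h) = 1\,$ can also be read off by hand.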
\setcounter{equation}{0}
\section{Simplest flow of the Toda hierarchy\newline
and its bi--Hamiltonian structure}
Both lattices (\ref{new}) and (\ref{Toda}) arise from the simplest flow of
the Toda hierarchy under two different parametrizations of the
relevant variables $(a,b)$ (called Flaschka variables) by the canonically
conjugated variables $(x,p)$.
The simplest flow of the Toda hierarchy is:
\begin{equation}\label{TL}
\dot{a}_k=a_k(b_{k+1}-b_k), \quad \dot{b}_k=a_k-a_{k-1}.
\end{equation}
It may be considered either under open--end boundary conditions
($a_0=a_N=0$), or under periodic ones (all the subscripts are
taken (mod $N$), so that $a_0\equiv a_N$, $b_{N+1}\equiv b_1$).
It is easy to see that the flow (\ref{TL}) is Hamiltonian with respect to two
different compatible Poisson brackets. The first of them is linear:
\begin{equation}\label{l br}
\{a_k,b_k\}_1=-\{a_k,b_{k+1}\}_1=a_k
\end{equation}
(only the non--vanishing brackets are written down), and a Hamiltonian function
generating the flow (\ref{TL}) in this bracket is equal to
\begin{equation}\label{H 1}
H^{(1)}=\frac{1}{2}\sum_{k=1}^Nb_k^2+\sum_{k=1}^Na_k.
\end{equation}
The second Poisson bracket is given by:
\begin{equation}\label{q br}
\{b_{k+1},b_k\}_2=a_k, \quad \{a_{k+1},a_k\}_2=a_{k+1}a_k, \quad
\{b_k,a_k\}_2=-b_ka_k, \quad \{b_{k+1},a_k\}_2=b_{k+1}a_k,
\end{equation}
the corresponding Hamiltonian function being
\begin{equation}\label{H 2}
H^{(2)}=\sum_{k=1}^N b_k.
\end{equation}
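As a quick numerical sanity check, an ordinary integration of the open-end flow (\ref{TL}) should conserve $H^{(1)}$ and $H^{(2)}$ up to the integrator's accuracy. The following sketch is our own code, with an arbitrary step size and arbitrary initial data; it uses a classical fourth-order Runge--Kutta step in pure Python.

```python
def rhs(a, b):
    # open-end simplest Toda flow: a_k' = a_k(b_{k+1}-b_k), b_k' = a_k - a_{k-1};
    # a has length N-1 (the boundary values a_0 = a_N = 0 are dropped)
    N = len(b)
    da = [a[k] * (b[k + 1] - b[k]) for k in range(N - 1)]
    db = [(a[k] if k < N - 1 else 0.0) - (a[k - 1] if k > 0 else 0.0)
          for k in range(N)]
    return da, db

def rk4_step(a, b, h):
    k1a, k1b = rhs(a, b)
    k2a, k2b = rhs([x + h/2*d for x, d in zip(a, k1a)],
                   [x + h/2*d for x, d in zip(b, k1b)])
    k3a, k3b = rhs([x + h/2*d for x, d in zip(a, k2a)],
                   [x + h/2*d for x, d in zip(b, k2b)])
    k4a, k4b = rhs([x + h*d for x, d in zip(a, k3a)],
                   [x + h*d for x, d in zip(b, k3b)])
    a = [x + h/6*(p + 2*q + 2*r + s)
         for x, p, q, r, s in zip(a, k1a, k2a, k3a, k4a)]
    b = [x + h/6*(p + 2*q + 2*r + s)
         for x, p, q, r, s in zip(b, k1b, k2b, k3b, k4b)]
    return a, b

def H1(a, b):                       # (1/2) sum b_k^2 + sum a_k
    return 0.5 * sum(x * x for x in b) + sum(a)

def H2(a, b):                       # sum b_k
    return sum(b)

a, b = [0.3, 0.7, 0.2], [0.1, -0.4, 0.5, 0.2]      # N = 4, arbitrary data
h1, h2 = H1(a, b), H2(a, b)
for _ in range(1000):
    a, b = rk4_step(a, b, 0.01)
assert abs(H1(a, b) - h1) < 1e-6
assert abs(H2(a, b) - h2) < 1e-9   # linear invariant, exact up to rounding
```

Since $H^{(2)}$ is linear in the phase variables, Runge--Kutta preserves it exactly up to rounding, while $H^{(1)}$ drifts only at the order of the integration error.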
An integrable discretization of the flow (\ref{TL}) is given by the difference
equations \cite{PGR}, \cite{S3}
\begin{equation}\label{dTL}
\widetilde{a}_k=a_k\frac{\beta_{k+1}}{\beta_k},\quad
\widetilde{b}_k=b_k+h\left(\frac{a_k}{\beta_k}-\frac{a_{k-1}}{\beta_{k-1}}\right),
\end{equation}
where $\beta_k=\beta_k(a,b)$ is defined as a unique set of functions
satisfying the recurrent relation
\begin{equation}\label{recur}
\beta_k=1+hb_k-\frac{h^2a_{k-1}}{\beta_{k-1}}
\end{equation}
together with an asymptotic relation
\begin{equation}\label{as beta}
\beta_k=1+hb_k+O(h^2).
\end{equation}
In the open--end case, due to $a_0=0$, we obtain from (\ref{recur}) the
following finite continued-fraction expressions for $\beta_k$:
\[
\beta_1=1+hb_1;\quad
\beta_2=1+hb_2-\frac{h^2a_1}{1+hb_1};\quad\ldots\quad;
\]
\[
\beta_N=1+hb_N-\frac{h^2a_{N-1}}{1+hb_{N-1}-
\displaystyle\frac{h^2a_{N-2}}{1+hb_{N-2}-
\parbox[t]{1.0cm}{$\begin{array}{c}\\ \ddots\end{array}$}
\parbox[t]{2.2cm}{$\begin{array}{c}
\\ \\-\displaystyle\frac{h^2a_1}{1+hb_1}\end{array}$}}}.
\]
In the periodic case, (\ref{recur}) and (\ref{as beta}) uniquely define the
$\beta_k$'s as $N$-periodic infinite continued fractions. It can be
proved that for $h$ small enough these continued fractions converge and their
values satisfy (\ref{as beta}).
It can be proved \cite{S3} that the map (\ref{dTL}) is Poisson with respect
to both brackets (\ref{l br}) and (\ref{q br}), and hence with respect
to their arbitrary linear combination.
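In the open-end case the map (\ref{dTL}) is also easy to probe numerically. The following sketch, our own code with arbitrary data, computes the $\beta_k$ from the recurrence (\ref{recur}) (with $\beta_1=1+hb_1$, since $a_0=0$), applies one step of the map, and checks that the spectral invariants $H^{(1)}$ and $H^{(2)}$ are preserved exactly, up to rounding.

```python
def toda_map(a, b, h):
    """One step of the discrete-time map (dTL), open-end case.
    b has length N; a has length N-1 (a_0 = a_N = 0 are dropped)."""
    N = len(b)
    beta = [1 + h * b[0]]                       # beta_1 = 1 + h b_1
    for k in range(1, N):
        beta.append(1 + h * b[k] - h * h * a[k - 1] / beta[k - 1])
    na = [a[k] * beta[k + 1] / beta[k] for k in range(N - 1)]
    nb = [b[k] + h * ((a[k] / beta[k] if k < N - 1 else 0.0)
                      - (a[k - 1] / beta[k - 1] if k > 0 else 0.0))
          for k in range(N)]
    return na, nb

a, b, h = [0.3, 0.7, 0.2], [0.1, -0.4, 0.5, 0.2], 0.05   # arbitrary data
na, nb = toda_map(a, b, h)

# spectral invariants of T are preserved exactly by the map:
assert abs(sum(nb) - sum(b)) < 1e-12                           # H^(2)
assert abs(0.5 * sum(x * x for x in nb) + sum(na)
           - (0.5 * sum(x * x for x in b) + sum(a))) < 1e-12   # H^(1)
```

Unlike the Runge--Kutta check for the continuous flow, here there is no drift at all: the conservation is exact, reflecting the conjugation (\ref{dLax}).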
Let us recall also the Lax representations of the flow (\ref{TL}) and of the
map (\ref{dTL}). They are given in terms of the $N\times N$ Lax matrix $T$
depending on the phase space coordinates
$a_k, b_k$ and (in the periodic case) on the additional parameter $\lambda$:
\begin{equation}\label{T}
T(a,b,\lambda) = \sum_{k=1}^N b_kE_{kk}+\lambda\sum_{k=1}^N E_{k+1,k}+
\lambda^{-1}\sum_{k=1}^N a_kE_{k,k+1}.
\end{equation}
Here $E_{jk}$ stands for the matrix whose only nonzero entry, at the intersection
of the $j$th row and the $k$th column, is equal to 1. In the periodic case we
have $E_{N+1,N}=E_{1,N}, E_{N,N+1}=E_{N,1}$; in the open--end case we set
$\lambda=1$, and $E_{N+1,N}=E_{N,N+1}=0$.
The flow (\ref{TL}) is equivalent to
the following matrix differential equation:
\begin{equation}\label{Lax}
\dot{T}=\left[ T,B\right],
\end{equation}
where
\begin{equation}\label{B}
B(a,b,\lambda) = \sum_{k=1}^Nb_kE_{kk}+\lambda\sum_{k=1}^N E_{k+1,k},
\end{equation}
and the map (\ref{dTL}) is equivalent to the following matrix difference
equation:
\begin{equation}\label{dLax}
\widetilde{T}={\rm\bf B}^{-1}T{\rm\bf B},
\end{equation}
where
\begin{equation}\label{bB}
{\rm\bf B}(a,b,\lambda)=\sum_{k=1}^N\beta_kE_{kk}
+h\lambda\sum_{k=1}^NE_{k+1,k}.
\end{equation}
The spectral invariants of the matrix $T(a,b,\lambda)$ serve as
integrals of motion for the flow (\ref{TL}), as well as for the map
(\ref{dTL}).
In particular, it is easy to see that the Hamiltonian functions (\ref{H 1}),
(\ref{H 2}) are spectral invariants of the Lax matrix:
\[
H^{(1)}=\frac{1}{2}{\rm tr}(T^2),\quad H^{(2)}={\rm tr}(T).
\]
\setcounter{equation}{0}
\section{Reminding the Toda lattice case}
The Toda lattice (\ref{Toda}) admits a Lagrangian formulation with a
Lagrange function
\begin{equation}\label{Toda Lagr}
{\cal L}^{(1)}(x,\dot{x})
=\frac{1}{2}\sum_{k=1}^N\dot{x}_k^2-\sum_{k=1}^N\exp(x_k-x_{k-1}).
\end{equation}
The standard Legendre transformation gives the momenta
\[
p_k=\partial {\cal L}^{(1)}/\partial\dot{x}_k=\dot{x}_k,
\]
so that the corresponding Hamiltonian function is
\begin{equation}\label{H 1 in xp}
H^{(1)}=\frac{1}{2}\sum_{k=1}^Np_k^2+\sum_{k=1}^N\exp(x_k-x_{k-1}),
\end{equation}
and the flow (\ref{TL}) takes the form of canonical equations of motion:
\begin{eqnarray*}
\dot{x}_k & = & \partial H^{(1)}/\partial p_k=p_k,\\
\dot{p}_k & = & -\partial H^{(1)}/\partial x_k=
\exp(x_{k+1}-x_k)-\exp(x_k-x_{k-1}).
\end{eqnarray*}
One sees immediately that this coincides with the flow (\ref{TL}), if
the Flaschka variables $(a,b)$ are introduced according to the formulas
\begin{equation}\label{l par}
a_k=\exp(x_{k+1}-x_k),\quad b_k=p_k.
\end{equation}
Obviously, this leads immediately to the linear Poisson brackets (\ref{l br}).
Let us turn now to the discrete time case. Consider first the equations of
motion (\ref{Sur2}). It is easy to see that they admit a Lagrangian formulation
with the Lagrange function
\begin{equation}\label{Sur2:Lagr}
\Lambda_1(\widetilde{x},x)=\sum_{k=1}^N\phi_1(\widetilde{x}_k-x_k)-h\sum_{k=1}^N\exp(x_k-\widetilde{x}_{k-1}),
\end{equation}
where $\phi_1(\xi)=(\exp(\xi)-1-\xi)/h$. Hence they are equivalent to the
symplectic map $(x,p)\mapsto(\widetilde{x},\widetilde{p})$ with
\begin{equation}\label{Sur2:p}
hp_k = \exp(\widetilde{x}_k-x_k)-1+h^2\exp(x_k-\widetilde{x}_{k-1}),
\end{equation}
\begin{equation}\label{Sur2:wp}
h\widetilde{p}_k = \exp(\widetilde{x}_k-x_k)-1+h^2\exp(x_{k+1}-\widetilde{x}_k).
\end{equation}
We demonstrate now that they may be put in the form (\ref{dTL}).
{\bf Proposition 1.} {\it If the variables $a_k$, $b_k$ are defined by}
(\ref{l par}), {\it and
\begin{equation}\label{Sur2:beta}
\beta_k=\exp(\widetilde{x}_k-x_k),
\end{equation}
then} (\ref{Sur2:p}), (\ref{Sur2:wp}) {\it imply} (\ref{dTL}), (\ref{recur}).
{\bf Proof.} The first equation of motion in (\ref{dTL}) follows immediately
from the definitions of $a_k=\exp(x_{k+1}-x_k)$, $\beta_k=\exp(\widetilde{x}_k-x_k)$.
The recurrent relation (\ref{recur}) is just a reformulation of (\ref{Sur2:p})
in the variables $a_k$, $b_k$, $\beta_k$. Finally, the second equation in
(\ref{dTL}) follows immediately from (\ref{Sur2:wp}) and (\ref{Sur2:p}). \rule{3mm}{3mm}
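Proposition 1 lends itself to a quick numerical check. In the sketch below (Python/NumPy; illustrative, using periodic indices, and assuming that (\ref{recur}) reads $\beta_k=1+hb_k-h^2a_{k-1}/\beta_{k-1}$ and that the first equation of (\ref{dTL}) reads $\widetilde{a}_k=a_k\beta_{k+1}/\beta_k$, as the continued fraction (\ref{as beta}) suggests), the relations hold identically for arbitrary $x$, $\widetilde{x}$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, h = 5, 0.1
x = rng.normal(size=N)
xt = rng.normal(size=N)                    # x-tilde: an arbitrary updated configuration
xtm = np.roll(xt, 1)                       # xtilde_{k-1}, periodic indices

# momenta from (Sur2:p): h p_k = exp(xt_k - x_k) - 1 + h^2 exp(x_k - xt_{k-1})
p = (np.exp(xt - x) - 1 + h**2 * np.exp(x - xtm)) / h

b = p                                      # Flaschka variables (l par)
a = np.exp(np.roll(x, -1) - x)             # a_k = exp(x_{k+1} - x_k)
beta = np.exp(xt - x)                      # (Sur2:beta)

# recurrent relation, assumed form: beta_k = 1 + h b_k - h^2 a_{k-1}/beta_{k-1}
assert np.allclose(beta, 1 + h * b - h**2 * np.roll(a, 1) / np.roll(beta, 1))

# first equation of the map, assumed form: a_k -> a_k beta_{k+1}/beta_k
at = np.exp(np.roll(xt, -1) - xt)
assert np.allclose(at, a * np.roll(beta, -1) / beta)
```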
Note that this proposition implies immediately that the map (\ref{dTL})
is Poisson with respect to the linear bracket (\ref{l br}).
A very remarkable circumstance was found in \cite{S3}: an apparently different
discretization (\ref{Sur3}) is in fact only another parametrization of the
same map (\ref{dTL}). It is easy to see that (\ref{Sur3}) admits a Lagrangian
formulation with a Lagrange function
\begin{equation}\label{Sur3:Lagr}
\Lambda_2(\widetilde{x},x)=\sum_{k=1}^N\frac{1}{2h}(\widetilde{x}_k-x_k)^2-
\sum_{k=1}^{N}\phi_2(x_k-\widetilde{x}_{k-1}),
\end{equation}
where $\phi_2(\xi)=h^{-1}\int_0^\xi\log(1+h^2\exp(\eta))d\eta$.
(It is easy to see that the corresponding equations (\ref{d Lagr}) read:
\[
\widetilde{x}_k-2x_k+\undertilde{x_k}=
\log\Big(1+h^2\exp(\undertilde{x_{k+1}}-x_k)\Big)-
\log\Big(1+h^2\exp(x_k-\widetilde{x}_{k-1})\Big),
\]
which is equivalent to (\ref{Sur3})). Hence an equivalent form of writing
(\ref{Sur3}) in canonically conjugated variables $(x,p)$ is:
\begin{equation}\label{Sur3:p}
\exp(hp_k)=\exp(\widetilde{x}_k-x_k)\Big(1+h^2\exp(x_k-\widetilde{x}_{k-1})\Big)=
\exp(\widetilde{x}_k-x_k)+h^2\exp(\widetilde{x}_k-\widetilde{x}_{k-1}),
\end{equation}
\begin{equation}\label{Sur3:wp}
\exp(h\widetilde{p}_k)=\exp(\widetilde{x}_k-x_k)\Big(1+h^2\exp(x_{k+1}-\widetilde{x}_k)\Big)=
\exp(\widetilde{x}_k-x_k)+h^2\exp(x_{k+1}-x_k).
\end{equation}
{\bf Proposition 2.} {\it If the variables $a_k$, $b_k$ are defined by}
\begin{equation}\label{mod l par}
a_k=\exp(x_{k+1}-x_k+hp_k),\quad 1+hb_k=\exp(hp_k)+h^2\exp(x_k-x_{k-1}),
\end{equation}
{\it and
\begin{equation}\label{Sur3:beta}
\beta_k=\exp(hp_k),
\end{equation}
then} (\ref{Sur3:p}), (\ref{Sur3:wp}) {\it imply} (\ref{dTL}), (\ref{recur}).
{\bf Proof.} From (\ref{Sur3:p}), (\ref{Sur3:wp}), and the first equation in
(\ref{mod l par}) it follows that the first equation of motion in (\ref{dTL})
is satisfied, if
\[
\beta_k=\exp(\widetilde{x}_k-x_k)\Big(1+h^2\exp(x_k-\widetilde{x}_{k-1})\Big),
\]
which is just (\ref{Sur3:beta}). Now (\ref{recur}) is a reformulation of the
second equation in (\ref{mod l par}), if one takes into account that
$a_k/\beta_k=\exp(x_{k+1}-x_k)$. The second equation of motion in (\ref{dTL})
follows directly from the second equation in (\ref{mod l par}), (\ref{Sur3:wp}),
and (\ref{Sur3:p}). \rule{3mm}{3mm}
It is easy to calculate that a Poisson bracket for the variables $a_k$, $b_k$
resulting from the parametrization (\ref{mod l par}) reads:
\begin{equation}
\begin{array}{c}
\{b_{k+1},b_k\}=ha_k,\quad \{a_{k+1},a_k\}=ha_{k+1}a_k,\\ \\
\{b_{k+1},a_k\}=a_k+hb_{k+1}a_k,\quad \{b_k,a_k\}=-a_k-hb_ka_k,
\end{array}
\end{equation}
which is exactly the linear combination $\{\cdot,\cdot\}_1+h\{\cdot,\cdot\}_2$.
We conclude this section by noting that both Lagrange functions
(\ref{Sur2:Lagr}) and (\ref{Sur3:Lagr}) serve as difference approximations
to the continuous time Lagrange function (\ref{Toda Lagr}).
\setcounter{equation}{0}
\section{A new lattice}
We turn now to the system (\ref{new}). First of all, one sees readily that
it admits a Lagrangian formulation with
\begin{equation}\label{new Lagr}
{\cal L}^{(2)}(x,\dot{x})=\sum_{k=1}^N[\dot{x}_k\log(\dot{x}_k)-\dot{x}_k]-
\sum_{k=1}^N\exp(x_k-x_{k-1}).
\end{equation}
The momenta $p_k$ are introduced by
\[
p_k=\partial{\cal L}^{(2)}/\partial\dot{x}_k=\log(\dot{x}_k),
\]
so that the corresponding Hamiltonian function is equal to
\begin{equation}\label{H 2 in xp}
H^{(2)}=\sum_{k=1}^N \exp(p_k)+\sum_{k=1}^N\exp(x_k-x_{k-1}),
\end{equation}
and the canonical form of the equations of motion is:
\begin{eqnarray*}
\dot{x}_k &=&\partial H^{(2)}/\partial p_k=\exp(p_k),\\
\dot{p}_k &=&-\partial H^{(2)}/\partial x_k=
\exp(x_{k+1}-x_k)-\exp(x_k-x_{k-1}).
\end{eqnarray*}
It is now easy to see that if one introduces variables $a_k$, $b_k$
according to
\begin{equation}\label{q par}
a_k=\exp(x_{k+1}-x_k+p_k), \quad b_k=\exp(p_k)+\exp(x_k-x_{k-1}),
\end{equation}
then their evolution induced by the flow above just coincides with
(\ref{TL}).
It can be readily checked that (\ref{q par}) leads to Poisson brackets
(\ref{q br}), and that the notation $H^{(2)}$ for the function
(\ref{H 2 in xp}) is consistent with (\ref{H 2}).
So, the equations (\ref{new}) admit a Lax representation (\ref{Lax})
with the matrices (\ref{T}), (\ref{B}), for the entries of which one has the
formulas (\ref{q par}), which is equivalent also to
\[
a_k=\dot{x}_k\exp(x_{k+1}-x_k),\quad b_k=\dot{x}_k+\exp(x_k-x_{k-1}).
\]
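The claim that (\ref{q par}) turns the canonical flow of $H^{(2)}$ into (\ref{TL}) can be tested numerically. The sketch below (Python/NumPy; illustrative, assuming that (\ref{TL}) has the standard Flaschka form $\dot a_k=a_k(b_{k+1}-b_k)$, $\dot b_k=a_k-a_{k-1}$) differentiates $a_k$, $b_k$ along the flow by the chain rule.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=5)
p = rng.normal(size=5)

# canonical flow of H^{(2)}: xdot_k = exp(p_k), pdot_k = exp(x_{k+1}-x_k) - exp(x_k-x_{k-1})
def xdot(k): return np.exp(p[k])
def pdot(k): return np.exp(x[k + 1] - x[k]) - np.exp(x[k] - x[k - 1])

# parametrization (q par)
def a(k): return np.exp(x[k + 1] - x[k] + p[k])
def b(k): return np.exp(p[k]) + np.exp(x[k] - x[k - 1])

k = 2
# chain rule: time derivatives of a_k and b_k along the flow
adot = a(k) * (xdot(k + 1) - xdot(k) + pdot(k))
bdot = np.exp(p[k]) * pdot(k) + (xdot(k) - xdot(k - 1)) * np.exp(x[k] - x[k - 1])

# assumed Flaschka form of the flow (TL)
assert np.isclose(adot, a(k) * (b(k + 1) - b(k)))
assert np.isclose(bdot, a(k) - a(k - 1))
```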
Turning to the discrete time system (\ref{dTL}), we find the following
results. It admits a Lagrangian formulation with
\begin{equation}\label{dnew1 Lagr}
\Lambda_3(\widetilde{x},x)=\sum_{k=1}^N\phi(\widetilde{x}_k-x_k)-
\sum_{k=1}^{N}\psi(x_k-\widetilde{x}_{k-1}),
\end{equation}
where the two functions $\phi(\xi), \psi(\xi)$ are defined by
\begin{equation}\label{phipsi}
\phi(\xi)=\int_0^{\xi}\log\left|\frac{\exp(\eta)-1}{h}\right|d\eta,\quad
\psi(\xi)=\int_0^{\xi}\log(1+h\exp(\eta))d\eta.
\end{equation}
Hence a symplectic map $(x,p)\mapsto(\widetilde{x},\widetilde{p})$ generated by (\ref{dnew1})
may be defined by the following relations:
\begin{eqnarray}
h\exp(p_k) & = & \Big(\exp(\widetilde{x}_k-x_k)-1\Big)\Big(1+h\exp(x_k-\widetilde{x}_{k-1})\Big),
\label{dnew1:p}\\
h\exp(\widetilde{p}_k) & = &
\Big(\exp(\widetilde{x}_k-x_k)-1\Big)\Big(1+h\exp(x_{k+1}-\widetilde{x}_k)\Big).\label{dnew1:wp}
\end{eqnarray}
It is very remarkable that this map can be again reduced to (\ref{dTL})!
{\bf Proposition 3.} {\it If the variables $a_k$, $b_k$ are defined by}
(\ref{q par}), {\it and
\begin{equation}\label{dnew1:beta}
\beta_k=\exp(\widetilde{x}_k-x_k)\Big(1+h\exp(x_k-\widetilde{x}_{k-1})\Big),
\end{equation}
then} (\ref{dnew1:p}), (\ref{dnew1:wp}) {\it imply} (\ref{dTL}), (\ref{recur}).
{\bf Proof.} The first equation of motion in (\ref{dTL}) follows immediately
from $a_k=\exp(x_{k+1}-x_k+p_k)$ and (\ref{dnew1:p}), (\ref{dnew1:wp}),
(\ref{dnew1:beta}). The recurrent relation (\ref{recur}) follows from
(\ref{dnew1:beta}), (\ref{dnew1:p}), if one takes into account that
\begin{equation}\label{dnew1:aux}
h^2a_k/\beta_k=h\Big(\exp(x_{k+1}-x_k)-\exp(x_{k+1}-\widetilde{x}_k)\Big),
\end{equation}
and hence
\[
1+h\exp(x_k-x_{k-1})-\frac{h^2a_{k-1}}{\beta_{k-1}}=1+h\exp(x_k-\widetilde{x}_{k-1}).
\]
The second equation of motion follows from (\ref{q par}), (\ref{dnew1:p}),
and (\ref{dnew1:wp}) with the help of (\ref{dnew1:aux}). \rule{3mm}{3mm}
\setcounter{equation}{0}
\section{Discretizations related to \newline
relativistic Toda lattice}
We now turn to the discretizations (\ref{Hir}), (\ref{Sur1}).
A simple observation shows that these models are equivalent to
(\ref{Sur2}), (\ref{Sur3}), respectively, when considered as equations on
the lattice with the coordinates $(t,k)$. More precisely, the equations of
motion (\ref{Sur2}), (\ref{Sur3}) are recovered
from (\ref{Hir}), (\ref{Sur1}) after renaming $x_k(t)$ to $x_k(t-kh)$. However,
such renaming mixes the ``spatial'' and ``temporal'' variables, and this
dramatically changes the properties of the {\it initial value problem} with
which we are concerned.
First of all, from a practical point of view, we remark
that Hirota's and the standard--like models are explicit
with respect to $\widetilde{x}_k$, while the models (\ref{Sur2}), (\ref{Sur3})
require solving certain nonlinear algebraic equations (or, equivalently,
evaluating continued fractions) in order to obtain $\widetilde{x}_k$.
Another important difference between our new models and the old ones
lies in their algebraic, $r$--matrix structure. According to the
observation in \cite{PGR}, Hirota's and the standard--like models are in
essence equivalent. More precisely, they both arise from the following
system of difference equations:
\begin{equation}\label{dRTL}
\widetilde{d}_k+h^2\widetilde{c}_{k-1}=d_k+h^2c_k,\quad \widetilde{d}_{k+1}c_k=d_k\widetilde{c}_k,
\end{equation}
if the variables $(c,d)$ are parametrized by canonically conjugated variables
$(x,p)$ in two different ways. An equivalent form of equations (\ref{dRTL})
may be obtained, if one resolves for $(\widetilde{c}_k,\widetilde{d}_k)$:
\begin{equation}\label{res dRTL}
\widetilde{d}_k=d_{k-1}\frac{d_k+h^2c_k}{d_{k-1}+h^2c_{k-1}},\quad
\widetilde{c}_k=c_k\frac{d_{k+1}+h^2c_{k+1}}{d_k+h^2c_k}.
\end{equation}
The map defined by these difference equations is Poisson with respect to two
different compatible Poisson brackets: a linear one,
\begin{equation}\label{r l br}
\{c_k,d_{k+1}\}_1=-c_k, \quad \{c_k,d_k\}_1=c_k,\quad \{d_k,d_{k+1}\}_1=h^2c_k,
\end{equation}
and a quadratic one,
\begin{equation}\label{r q br}
\{c_k,c_{k+1}\}_2=-c_kc_{k+1}, \quad \{c_k,d_{k+1}\}_2=-c_kd_{k+1}, \quad
\{c_k,d_k\}_2=c_kd_k.
\end{equation}
The Lax representation for the map (\ref{dRTL}) may be given in terms of the
$N\times N$ matrices depending on the dynamical variables $(c,d)$ and an
additional parameter $\lambda$:
\begin{eqnarray}
L(c,d,\lambda) & = & \sum_{k=1}^N d_kE_{kk}+h\lambda\sum_{k=1}^N E_{k+1,k},\\
U(c,d,\lambda) & = & \sum_{k=1}^N E_{kk}-h\lambda^{-1}\sum_{k=1}^N
c_kE_{k,k+1}.
\end{eqnarray}
It is easy to check that the difference equations (\ref{dRTL}) are
equivalent to the matrix equation
\begin{equation}\label{LU=UL}
U\widetilde{L}=L\widetilde{U},\quad{\rm or}\quad
\widetilde{L}\widetilde{U}^{-1}=U^{-1}L.
\end{equation}
In terms of the Lax matrix
\begin{equation}\label{r T}
T(c,d,\lambda)=L(c,d,\lambda)U^{-1}(c,d,\lambda)
\end{equation}
the equation (\ref{LU=UL}) takes the form
\begin{equation}
\widetilde{T}=U^{-1}TU=L^{-1}TL,
\end{equation}
which implies, in particular, that the spectral invariants of the matrix $T$
are integrals of motion for the map (\ref{dRTL}).
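The conservation of the spectral invariants can be observed directly. The sketch below (Python/NumPy; illustrative) iterates (\ref{res dRTL}) with periodic indices and checks that the traces of the first few powers of $T=LU^{-1}$, evaluated on the slice $\lambda=1$, are preserved.

```python
import numpy as np

rng = np.random.default_rng(3)
N, h = 6, 0.1
c = rng.uniform(0.5, 1.5, N)
d = rng.uniform(0.5, 1.5, N)

def step(c, d):
    # the map (res dRTL), with periodic indices
    s = d + h**2 * c
    return c * np.roll(s, -1) / s, np.roll(d, 1) * s / np.roll(s, 1)

def lax(c, d):
    # T = L U^{-1} on the slice lambda = 1, periodic case
    L = np.diag(d) + h * np.roll(np.eye(N), 1, axis=0)    # h on the (periodic) subdiagonal
    U = np.eye(N) - h * np.roll(np.diag(c), 1, axis=1)    # -h c_k on the (periodic) superdiagonal
    return L @ np.linalg.inv(U)

T0 = lax(c, d)
for _ in range(10):
    c, d = step(c, d)
T1 = lax(c, d)

# tr T^m are spectral invariants, hence integrals of motion
for m in (1, 2, 3):
    assert np.isclose(np.trace(np.linalg.matrix_power(T0, m)),
                      np.trace(np.linalg.matrix_power(T1, m)))
```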
As observed in \cite{S1}, \cite{S2}, the matrix $T$ from (\ref{r T}) just coincides
with the Lax matrix of the {\it relativistic Toda hierarchy} (which is also
bi--Hamiltonian with respect to both brackets (\ref{r l br}), (\ref{r q br})).
We first recall how the equations (\ref{Hir}), (\ref{Sur1}) can be reduced
to (\ref{dRTL}), and then show that the same is true for (\ref{dnew2}).
We start with (\ref{Hir}). It is easy to find a Lagrangian formulation of these
equations with a Lagrange function
\begin{equation}\label{Hir:Lagr}
\Lambda_4(\widetilde{x},x)=\sum_{k=1}^N\phi_1(\widetilde{x}_k-x_k)-
h\sum_{k=1}^N\exp(\widetilde{x}_k-\widetilde{x}_{k-1}),
\end{equation}
(where, as in the previous section, $\phi_1(\xi)=(\exp(\xi)-1-\xi)/h$). Hence
the equations (\ref{Hir}) are equivalent to a symplectic map
$(x,p)\mapsto(\widetilde{x},\widetilde{p})$ with
\begin{eqnarray}
hp_k & = & \exp(\widetilde{x}_k-x_k)-1,\label{Hir:p}\\
h\widetilde{p}_k & = & \exp(\widetilde{x}_k-x_k)-1+
h^2\exp(\widetilde{x}_{k+1}-\widetilde{x}_k)-h^2\exp(\widetilde{x}_k-\widetilde{x}_{k-1}).\label{Hir:wp}
\end{eqnarray}
{\bf Proposition 4.} {\it Let the coordinates $(c,d)$ be parametrized by the
canonically conjugated variables $(x,p)$ according to the formulas
\begin{equation}\label{r l par}
c_k=\exp(x_{k+1}-x_k),\quad d_k=1+hp_k-h^2\exp(x_{k+1}-x_k).
\end{equation}
Then} (\ref{Hir:p}), (\ref{Hir:wp}) {\it imply} (\ref{dRTL}).
{\bf Proof.} Obviously, we have from (\ref{Hir:p}), (\ref{Hir:wp}), and
(\ref{r l par}):
\[
d_k+h^2c_k=\exp(\widetilde{x}_k-x_k),\qquad \widetilde{d}_k+h^2\widetilde{c}_{k-1}=\exp(\widetilde{x}_k-x_k).
\]
Comparing these expressions, we get the first equation of motion in
(\ref{dRTL}), and the first of the expressions above together with
$c_k=\exp(x_{k+1}-x_k)$ implies the second equation in (\ref{res dRTL}).
\rule{3mm}{3mm}
It is important to notice that the parametrization (\ref{r l par}) results
in the linear Poisson bracket (\ref{r l br}), which proves independently
that the map (\ref{res dRTL}) is Poisson with respect to this bracket.
Turning now to (\ref{Sur1}), we find a Lagrangian formulation of these equations
(in the form (\ref{stand})) with
\begin{equation}\label{Sur1:Lagr}
\Lambda_5(\widetilde{x},x)=\sum_{k=1}^N\frac{1}{2h}(\widetilde{x}_k-x_k)^2-
\sum_{k=1}^{N}\phi_2(x_k-x_{k-1}),
\end{equation}
where, as in the previous section, $\phi_2(\xi)=
h^{-1}\int_0^\xi\log(1+h^2\exp(\eta))d\eta$.
Hence the expressions for the momenta $p_k$ and their updates, equivalent to
(\ref{Sur1}), are:
\begin{eqnarray}
\exp(hp_k) & = &
\exp(\widetilde{x}_k-x_k)\;\frac{1+h^2\exp(x_k-x_{k-1})}{1+h^2\exp(x_{k+1}-x_k)},
\label{Sur1:p}\\
\exp(h\widetilde{p}_k) & = & \exp(\widetilde{x}_k-x_k).\label{Sur1:wp}
\end{eqnarray}
{\bf Proposition 5.} {\it Let the coordinates $(c,d)$ be parametrized by the
canonically conjugated variables $(x,p)$ according to the formulas
\begin{equation}\label{r q par}
c_k=\exp(x_{k+1}-x_k+hp_k), \quad d_k=\exp(hp_k).
\end{equation}
Then} (\ref{Sur1:p}), (\ref{Sur1:wp}) {\it imply} (\ref{dRTL}).
{\bf Proof.} It is easy to see that (\ref{r q par}) allows one to rewrite the
second equation in (\ref{dRTL}) as $\exp(x_{k+1}-x_k+h\widetilde{p}_{k+1})=
\exp(\widetilde{x}_{k+1}-\widetilde{x}_k+h\widetilde{p}_k)$. This equality is an obvious consequence of
(\ref{Sur1:wp}). Further, (\ref{r q par}) allows one to rewrite the first equation
in (\ref{res dRTL}) as
\[
\exp(h\widetilde{p}_k)=\exp(hp_k)\frac{1+h^2\exp(x_{k+1}-x_k)}{1+h^2\exp(x_k-x_{k-1})},
\]
which follows immediately from (\ref{Sur1:p}), (\ref{Sur1:wp}). \rule{3mm}{3mm}
This time we notice that (\ref{r q par}) results (up to the factor $h$)
in the quadratic Poisson bracket (\ref{r q br}), which proves independently
that the map (\ref{res dRTL}) is Poisson with respect to this bracket.
It remains to perform analogous considerations for an explicit discretization
(\ref{dnew2}) of our new lattice (\ref{new}). Remarkably, this system turns
out to be still another realization of the same map (\ref{res dRTL})!
To demonstrate this, note that (\ref{dnew2}) admits a Lagrangian formulation
with
\begin{equation}\label{dnew2 Lagr}
\Lambda_6(\widetilde{x},x)=\sum_{k=1}^N\phi(\widetilde{x}_k-x_k)-
\sum_{k=1}^{N}\psi(x_k-x_{k-1}),
\end{equation}
where $\phi(\xi), \psi(\xi)$ are defined by (\ref{phipsi}).
Hence a Hamiltonian formulation of this system is given by:
\begin{eqnarray}
h\exp(p_k) & = &
\Big(\exp(\widetilde{x}_k-x_k)-1\Big)\;\frac{1+h\exp(x_k-x_{k-1})}{1+h\exp(x_{k+1}-x_k)},
\label{dnew2:p}\\
h\exp(\widetilde{p}_k) & = & \Big(\exp(\widetilde{x}_k-x_k)-1\Big).\label{dnew2:wp}
\end{eqnarray}
{\bf Proposition 6.} {\it Let the coordinates $(c,d)$ be parametrized by the
canonically conjugated variables $(x,p)$ according to the formulas
\begin{equation}\label{new par}
c_k=\exp(x_{k+1}-x_k+p_k),\quad d_k=1+h\exp(p_k)+h\exp(x_k-x_{k-1}).
\end{equation}
Then} (\ref{dnew2:p}), (\ref{dnew2:wp}) {\it imply} (\ref{dRTL}).
{\bf Proof.} From (\ref{new par}) and (\ref{dnew2:p}) it follows:
\begin{eqnarray}
d_k+h^2c_k & = & 1+h\exp(x_k-x_{k-1})+h\exp(p_k)(1+h\exp(x_{k+1}-x_k))
\nonumber\\
& = & \exp(\widetilde{x}_k-x_k)\Big(1+h\exp(x_k-x_{k-1})\Big).
\label{dnew2:aux1}
\end{eqnarray}
Analogously, from (\ref{new par}) and (\ref{dnew2:wp}) it follows:
\begin{eqnarray}
\widetilde{d}_k+h^2\widetilde{c}_{k-1} & = & 1+h\exp(\widetilde{p}_k)+h\exp(\widetilde{x}_k-\widetilde{x}_{k-1})
(1+h\exp(\widetilde{p}_{k-1}))\nonumber\\
& = & \exp(\widetilde{x}_k-x_k)\Big(1+h\exp(x_k-x_{k-1})\Big).\label{dnew2:aux2}
\end{eqnarray}
Comparing these expressions, we get the first equation of motion in (\ref{dRTL}).
The second equation in (\ref{res dRTL}) is a direct consequence of
(\ref{dnew2:p}), (\ref{dnew2:wp}), and (\ref{dnew2:aux1}). \rule{3mm}{3mm}
It is easy to calculate that the parametrization (\ref{new par})
generates the following Poisson bracket:
\begin{equation}
\begin{array}{c}
\{c_{k+1},c_k\}=c_{k+1}c_k,\quad \{d_{k+1},d_k\}=h^2c_k,\\ \\
\{d_k,c_k\}=c_k-d_kc_k,\quad \{d_{k+1},c_k\}=-c_k+d_{k+1}c_k.
\end{array}
\end{equation}
This is, obviously, a linear combination of the brackets (\ref{r l br})
and (\ref{r q br}), namely $\{\cdot,\cdot\}_2-\{\cdot,\cdot\}_1$. Of course,
the Poisson property of the map (\ref{res dRTL}) with respect to this
bracket follows from the previous results, but Proposition 6 gives
an alternative way to prove this.
\setcounter{equation}{0}
\section{Conclusion}
Identifying the variables $(a,b)$ in (\ref{l par}) and in (\ref{q par}), we
get a transformation between two sets of variables $(x,p)$ (and, consequently,
between two sets of variables $(\dot{x},x)$). This is exactly
the B\"acklund transformation between the lattice (\ref{new}) and the Toda
lattice (\ref{Toda}).
For each of these systems one has different integrable discretizations. Some
of them share the Lax matrix with the continuous time prototype. These
discretizations generate Newtonian equations implicit with respect to the
updates $\widetilde{x}_k$. Other discretizations have Lax representations with the
Lax matrix defining the {\it relativistic} Toda hierarchy. These discretizations
turn out to be explicit. All three apparently different implicit discretizations
turn out to be connected by B\"acklund transformations. An underlying fact is
that all three appear from one and the same integrable map, if the relevant
variables $(a,b)$ are parametrized by canonically conjugated ones $(x,p)$ in
three different ways, generating three different Poisson brackets on the
set of $(a,b)$ (and hence on the set of Lax matrices). Exactly the same holds
true for the three explicit discretizations.
We would like to note here that all the Poisson brackets on the sets of Lax
matrices were given an $r$--matrix interpretation in \cite{S2}.
[End of arXiv:solv-int/9605010, ``A new integrable system related to the Toda lattice'' (nlin.SI). Abstract: A new integrable lattice system is introduced, and its integrable discretizations are obtained. A B\"acklund transformation between this new system and the Toda lattice, as well as between their discretizations, is established.]

arXiv:0711.4325, ``On Three Different Notions of Monotone Subsequences''

Abstract: We review how the monotone pattern compares to other patterns in terms of enumerative results on pattern avoiding permutations. We consider three natural definitions of pattern avoidance, give an overview of classic and recent formulas, and provide some new results related to limiting distributions.

\section{Introduction}
Monotone subsequences in a permutation $p=p_1p_2\cdots p_n$ have been the
subject of vigorous research for over sixty years. In this paper, we will
review three different lines of work. In all of them, we will consider
increasing subsequences of a permutation of length $n$ that have a
{\em fixed} length $k$. This is in contrast to another line of work,
started by Ulam more than sixty years ago, in which the distribution of
the {\em longest} increasing subsequence of a random permutation has been
studied. That direction of research has recently reached a high point in
the article \cite{Deift} of Baik, Deift and Johansson.
The three directions we consider are distinguished by their definition
of monotone subsequences. We can simply require that $k$ entries of a
permutation increase from left to right, or we can in addition require that
these $k$ entries be in consecutive positions, or we can even require that
they be consecutive integers {\em and} be in consecutive positions.
\section{Monotone Subsequences with No Restrictions}
The classic definition of pattern avoidance for permutations is as follows.
Let $p=p_1p_2\cdots p_n$ be a permutation, let $k<n$, and let $q=q_1q_2\cdots
q_k$ be another permutation. We say that $p$ {\em contains} $q$ as a pattern
if there exists a subsequence $1\leq i_1<i_2<\cdots <i_k\leq n$
so that for all indices
$j$ and $r$, the inequality $q_j<q_r$ holds if and only if the inequality
$p_{i_j}<p_{i_r}$ holds. If $p$ does not contain $q$, then we say
that $p$ {\em avoids} $q$. In other words, $p$ contains $q$ if $p$ has
a subsequence of entries, not necessarily in consecutive positions, which
relate to each other the same way as the entries of $q$ do.
\begin{example} The permutation 3174625 contains the pattern 123. Indeed,
consider the first, fourth, and seventh entries.
\end{example}
In particular, $p$ contains the monotone pattern $\alpha_k=12\cdots k$ if
and only if $p$ contains an increasing subsequence of length $k$. The
elements of this increasing subsequence do not have to be in consecutive
positions.
The enumeration of permutations avoiding a given pattern is a fascinating
subject. Let $S_n(q)$ denote the number of permutations of length $n$
(or, in what follows, $n$-permutations) that
avoid the pattern $q$.
\subsection{Patterns of Length Three}
Among patterns of length three, there is no difference
between the monotone pattern and other patterns as far as $S_n(q)$ is
concerned. This is the content of our first theorem.
\begin{theorem} \label{allsix}
Let $q$ be any pattern of length three, and let $n$ be any positive integer.
Then $S_n(q)=C_n={2n\choose n}/(n+1)$. In other words, $S_n(q)$ is the $n$th
Catalan number.
\end{theorem}
\begin{proof} If $p$ avoids $q$, then the reverse of $p$ avoids the reverse
of $q$, and the complement of $p$ avoids the complement of $q$. Therefore,
$S_n(123)=S_n(321)$ and $S_n(132)=S_n(231)=S_n(213)=S_n(312)$.
The fact that $S_n(132)=S_n(123)$ is proved using the well-known
Simion-Schmidt bijection \cite{Simion}. In a permutation,
let us call an entry a {\em left-to-right} minimum if it is smaller than
every entry on its left. For instance, the left-to-right minima of
4537612 are the entries 4, 3, and 1.
Take an $n$-permutation $p$ that
avoids 132, keep its left-to-right minima fixed, and arrange all other entries
in decreasing order in the positions that do not belong to left-to-right
minima, to get the permutation $f(p)$. For instance, if $p=34125$, then
$f(p)=35142$. Then $f(p)$ is a union of two decreasing sequences, so it is
123-avoiding. Furthermore, $f$ is a bijection between the two relevant sets
of permutations. Indeed, if $r$ is a permutation counted by $S_n(123)$, then
$f^{-1}(r)$ is obtained by keeping the left-to-right minima of $r$ fixed,
and rearranging the remaining entries so that moving from left to right,
each slot is filled by the smallest remaining entry that is larger than the
closest left-to-right minimum on the left of that position.
In order to prove that $S_n(132)=C_n$, just note that in a 132-avoiding
$n$-permutation, any entry to the left of $n$ must be larger than any entry
to the right of $n$. Therefore, if $n$ is in the $i$th position, then there
are $S_{i-1}(132)S_{n-i}(132)$ permutations of length $n$ that avoid 132
and have $n$ in the $i$th position. Summing over all $i$, we get the recurrence
\[S_{n}(132)= \sum_{i=1}^{n}S_{i-1}(132)S_{n-i}(132),\]
which is the well-known recurrence for the Catalan numbers (with $S_0(132)=1$).
\end{proof}
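For small $n$ the statement is easy to verify by brute force. The sketch below (Python; illustrative) counts pattern-avoiding permutations directly from the definition and compares with the Catalan numbers.

```python
from itertools import combinations, permutations
from math import comb

def contains(p, q):
    # True if the permutation p contains the pattern q
    k = len(q)
    return any(
        all((q[a] < q[b]) == (vals[a] < vals[b])
            for a in range(k) for b in range(a + 1, k))
        for vals in ([p[i] for i in idx] for idx in combinations(range(len(p)), k))
    )

def S(n, q):
    # number of n-permutations avoiding the pattern q
    return sum(not contains(p, q) for p in permutations(range(1, n + 1)))

catalan = [comb(2 * n, n) // (n + 1) for n in range(1, 7)]
for q in [(1, 2, 3), (1, 3, 2), (2, 3, 1), (3, 2, 1)]:
    assert [S(n, q) for n in range(1, 7)] == catalan
```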
\subsection{Patterns of Length Four}
When we move to longer patterns, the situation becomes much more complicated
and less well understood. In his doctoral thesis \cite{West},
Julian West published the
following numerical evidence.
\begin{itemize}
\item for $S_n(1342)$, and $n=1,2,\cdots ,8$,
we have 1, 2, 6, 23, 103, 512, 2740, 15485
\item for $S_n(1234)$, and $n=1,2,\cdots ,8$,
we have 1, 2, 6, 23, 103, 513, 2761, 15767
\item for $S_n(1324)$, and $n=1,2,\cdots ,8$,
we have 1, 2, 6, 23, 103, 513, 2762, 15793.
\end{itemize}
These data are startling for at least two reasons. First, the numbers
$S_n(q)$ are no longer independent of $q$; there
are some patterns of length four that are easier to avoid than others.
Second, the monotone pattern 1234, special as it is,
does not provide the minimum or the
maximum value for $S_n(q)$. We point out that for
each $q$ of the other 21 patterns of length four, it is known that the
sequence $S_n(q)$ is identical to one of the three sequences $S_n(1342)$,
$S_n(1234)$, and $S_n(1324)$. See \cite{bona}, Chapter 4, for more
details.
Exact formulas are known for two of the above three sequences.
For the monotone pattern, Ira Gessel gave a formula using symmetric functions.
\begin{theorem} \cite{GesselF}, \cite{GesselP}
For all positive integers $n$, the identity
\begin{eqnarray} S_n(1234) & = &
2\cdot\sum_{k=0}^n {2k\choose k}{n\choose k}^2
\frac{3k^2+2k+1-n-2nk}{(k+1)^2 (k+2) (n-k+1) } \\
& = & \frac{1}{(n+1)^2(n+2)} \sum_{k=0}^n {2k\choose k}{n+1\choose k+1}
{n+2\choose k+1}.\end{eqnarray}
\end{theorem}
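The second form of Gessel's formula can be evaluated with exact integer arithmetic; the sketch below (Python; illustrative) reproduces the values of $S_n(1234)$ for $n\leq 8$ listed in the previous subsection.

```python
from math import comb

def S_1234(n):
    # second form of Gessel's formula
    total = sum(comb(2 * k, k) * comb(n + 1, k + 1) * comb(n + 2, k + 1)
                for k in range(n + 1))
    return total // ((n + 1) ** 2 * (n + 2))

assert [S_1234(n) for n in range(1, 9)] == [1, 2, 6, 23, 103, 513, 2761, 15767]
```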
The formula for $S_n(1342)$ is due to the present author \cite{Bona1342},
and is quite surprising.
\begin{theorem} \label{exactf} For all positive integers $n$, we have
\begin{eqnarray*} S_n(1342) & = & (-1)^{n-1} \cdot \frac{(7n^2-3n-2)}{2} \\
& + & 3\sum_{i=2}^n (-1)^{n-i} \cdot
2^{i+1}\cdot \frac{(2i-4)!}{i!(i-2)!}\cdot
{{n-i+2\choose 2}}
. \end{eqnarray*}
\end{theorem}
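The formula of Theorem \ref{exactf} can be checked against West's data in the same way. The sketch below (Python; illustrative) evaluates it with exact rational arithmetic, since the individual summands are not integers.

```python
from fractions import Fraction
from math import comb, factorial

def S_1342(n):
    total = Fraction((-1) ** (n - 1) * (7 * n * n - 3 * n - 2), 2)
    for i in range(2, n + 1):
        total += (3 * (-1) ** (n - i) * 2 ** (i + 1) * comb(n - i + 2, 2)
                  * Fraction(factorial(2 * i - 4), factorial(i) * factorial(i - 2)))
    return int(total)

assert [S_1342(n) for n in range(1, 9)] == [1, 2, 6, 23, 103, 512, 2740, 15485]
```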
This result is unexpected for two reasons. First, it shows that
$S_n(1342)$ is not simply less than $S_n(1234)$ for every $n\geq 6$; it is
{\em much less}, in a sense that we will explain in Subsection \ref{swlimits}.
For now, we simply state that while $S_n(1234)$ is ``roughly'' $9^n$,
the value of $S_n(1342)$ is ``roughly'' $8^n$. Second, the formula is, in
some sense, simpler than that for $S_n(1234)$. Indeed, it follows
from Theorem \ref{exactf} that the ordinary generating
function of the sequence $S_n(1342)$ is
\[H(x)=\sum_{i\geq 0}F^i(x)=\frac{1}{1-F(x)}=
\frac{32x}{-8x^2+20x+1-(1-8x)^{3/2}}.\]
This is an {\em algebraic} power series. On the other hand,
it is known (Problem Plus 5.10 in \cite{bona}) that the
ordinary generating
function of the sequence $S_n(1234)$ is {\em not} algebraic.
So permutations avoiding the
monotone pattern are not even the {\em nicest} among permutations avoiding
a given pattern, in terms of the generating functions that count them.
There is no known formula for the third sequence, that of the numbers
$S_n(1324)$. However, the following inequality is known \cite{bonathesis}.
\begin{theorem} For all integers $n\geq 7$, the inequality
\[S_n(1234) < S_n(1324) \]
holds.
\end{theorem}
\begin{proof} Let us call an entry of a permutation a {\em
right-to-left maximum} if it is larger than all entries on its right.
Let us say that two $n$-permutations are in the same class
if they have the same left-to-right minima, in the same
positions, and the same right-to-left maxima, also in the same
positions.
For example, $51234$ and $51324$ are in the same
class, but $z=24315$ and
$v=24135$ are not, as the third entry of $z$ is not a
left-to-right minimum, whereas that of $v$ is.
It is straightforward to see that each non-empty class contains exactly one
1234-avoiding permutation, the one in which the subsequence of entries that
are neither left-to-right minima nor right-to-left maxima is decreasing.
It is less obvious that each class contains {\em at least one} 1324-avoiding
permutation. Note that if a permutation contains a 1324-pattern,
then we can choose such a pattern so that its first element is a left-to-right
minimum and its last element is a right-to-left maximum. Take a permutation
that contains a 1324-pattern, and take one of its 1324-patterns of the kind
described in the previous sentence. Interchange its second and third elements.
Observe that this will keep the permutation within its original class. Repeat
this procedure as long as possible. The procedure will stop after a finite
number of steps since each step decreases the number of inversions of the
permutation. When the procedure stops, the permutation at hand avoids 1324.
This shows that $ S_n(1234) \leq S_n(1324)$ for all $n$. If $n\geq 7$, then
the equality cannot hold since there is at least one class that contains more
than one 1324-avoiding permutation. For $n=7$, this is the class
$3*1*7*5$, which contains 3612745 and 3416725. For larger $n$, this class
can be prepended by $n(n-1)\cdots 8$ to get a suitable class.
\end{proof}
It turns out again that $S_n(1324)$ is {\em much} larger than $S_n(1234)$.
We will give the details in Subsection \ref{swlimits}.
\subsection{Patterns of Any Length}
For general $k$, there are some good estimates known
for the value of $S_n(\alpha_k)$. The first one can be proved by
an elementary method.
\begin{theorem} \label{1rank}
For all positive integers $n$ and $k>2$, we have
\[S_n(123\cdots k)\leq (k-1)^{2n}.\]
\end{theorem}
\begin{proof}
Let us say that an entry $x$ of a permutation is of rank $i$ if it is
the end of an increasing subsequence of length $i$, but there is no increasing
subsequence of length $i+1$ that ends in $x$. Then for all $i$,
elements of rank $i$ must form a decreasing subsequence. Therefore, a
$123\cdots k$-avoiding permutation can be decomposed into the union of $k-1$
decreasing subsequences. Clearly, there are at most
$(k-1)^{n}$ ways to partition
our $n$ entries into $k-1$ blocks. Then we have to place these blocks of
entries
somewhere in our permutation. There are at most $(k-1)^{n}$ ways
to assign each position of the permutation
to one of these blocks, completing the
proof.
\end{proof}
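The rank statistic in the proof is easy to compute greedily. A minimal Python illustration (our own code): the rank of an entry is the length of the longest increasing subsequence ending at that entry, and entries of equal rank always form a decreasing subsequence.

```python
def ranks(p):
    """rank of each entry: length of the longest increasing
    subsequence of p ending at that entry."""
    r = []
    for i, x in enumerate(p):
        r.append(1 + max((r[j] for j in range(i) if p[j] < x), default=0))
    return r

p = (3, 6, 1, 2, 7, 4, 5)
r = ranks(p)
# entries of any fixed rank form a decreasing subsequence
for i in range(len(p)):
    for j in range(i + 1, len(p)):
        if r[i] == r[j]:
            assert p[i] > p[j]
print(r)
```

A $123\cdots k$-avoiding permutation has no entry of rank $k$, which is exactly the decomposition into $k-1$ decreasing subsequences used in the proof.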
Indeed,
Theorem \ref{1rank} has a stronger version, obtained by Amitai Regev
\cite{Regev}. It needs heavy analytic
machinery, and therefore will not be proved here.
We mention the result, however, as it shows that no matter what $k$ is,
the constant $(k-1)^2$ in Theorem \ref{1rank} cannot
be replaced by a smaller number, so the elementary estimate
of Theorem \ref{1rank} is optimal in some strong sense. We remind the
reader that functions
$f(n)$ and $g(n)$ are said to be {\em asymptotically equal} if
$\lim_{n\rightarrow \infty} \frac{f(n)}{g(n)}=1$.
\begin{theorem} \label{regev} \cite{Regev}
\label{monoton} As $n\rightarrow \infty$, $S_n(1234\cdots k)$
is asymptotically equal to \[\lambda_k \frac{(k-1)^{2n}}{n^{(k^2-2k)/2}} .\] Here
\[ \lambda_k=\gamma_k^2
\int\!\!\!\!\!\int\limits_{x_1\,\geq\,x_2\,\geq\,\cdots\,\geq\,x_k}
\!\!\!\!\!\!\!\!\!\!\!\!\!\cdots\int
[D(x_1,x_2,\cdots ,x_k)\cdot
e^{-(k/2)(x_1^2+x_2^2+\cdots +x_k^2)} ] ^2 dx_1 dx_2\cdots dx_k, \]
where $D(x_1,x_2,\cdots ,x_k)=\Pi_{i<j} (x_i-x_j)$, and
$\gamma_k= (1/\sqrt{2\pi})^{k-1}\cdot k^{k^2/2}. $
\end{theorem}
\subsection{Stanley-Wilf Limits} \label{swlimits}
The following
celebrated result of Adam Marcus and G\'abor Tardos \cite{Marcus} shows
that in general, it is very difficult to avoid any given pattern $q$.
\begin{theorem} \cite{Marcus}
For all patterns $q$, there exists a constant $c_q$ so that
\begin{equation} S_n(q)\leq c_q^n.\end{equation}
\end{theorem}
It is not difficult to show using Fekete's lemma that the sequence
$\left(S_n(q) \right)^{1/n}$ is monotone increasing. The previous
theorem shows that it is bounded from above, leading to the following.
\begin{corollary} \label{limits}
For all patterns $q$, the limit
\[L(q)=\lim_{n\rightarrow \infty} \left(S_n(q) \right)^{1/n}
\]
exists.
\end{corollary}
The real number $L(q)$ is called the {\em Stanley-Wilf} limit, or
{\em growth rate} of the pattern $q$. In this terminology, Theorem
\ref{regev} implies that $L(\alpha_k)=(k-1)^2$. In particular,
$L(1234)=9$, while Theorem \ref{exactf} implies that $L(1342)=8$. So
it is not simply easier to avoid 1234 than 1342, it is {\em exponentially}
easier to do so.
Numerical evidence suggests that in the multiset of $k!$ real numbers
$S_n(q)$, the numbers $S_n(\alpha_k)$ are much closer to the maximum than
to the minimum. This led to the plausible conjecture that
for any pattern $q$ of length $k$, the inequality $L(q)\leq (k-1)^2$
holds. This would mean that while there are patterns of length $k$
that are easier to avoid
than $\alpha_k$, there are none that are much easier to avoid, in the sense
of Stanley-Wilf limits.
However, this conjecture has been disproved by the following
result of Michael Albert et al.
\begin{theorem} \cite{albert}
The inequality $L(1324)\geq 11.35$ holds.
\end{theorem}
In other words, it is not simply harder to avoid 1234 than 1324,
it is {\em exponentially} harder to do so.
\subsection{Asymptotic Normality} \label{asymptotics}
In this section we change direction and prove that the distribution of
the number of
copies of $\alpha_k$ in a randomly selected $n$-permutation converges
in distribution to a normal distribution. (For the rest of this
paper, when we say random permutation of length $n$, we always assume
that each $n$-permutation is selected with probability $1/n!$.)
Note that in the special case
of $k=2$, this is equivalent to
the classic result that the distribution of inversions
in random permutations is asymptotically normal. See \cite{fulman} and its
references for various proofs of that result, or \cite{ngendes} for a
generalization.
We need to introduce some notation for transforms of the random variable
$Z$. Let $\bar{Z}=Z-E(Z)$, let $\tilde{Z}=\bar{Z}/\sqrt{\hbox {Var}( Z)}$, and let
$Z_n\rightarrow N(0,1)$ mean that $Z_n$ converges in distribution to the
standard normal variable.
Our main tool in this section will be a theorem of Svante Janson
\cite{janson}. In order to be able to state that theorem, we need the
following definition.
\begin{definition}
Let $\{Y_{n,k}|k=1,2,\cdots ,N_n\}$ be an array of
random variables.
We say that a graph $G$ is
a {\em dependency graph} for $\{Y_{n,k}|k=1,2,\cdots ,N_n\}$
if the following
two conditions are satisfied:
\begin{enumerate}
\item There exists a bijection between the random variables $Y_{n,k}$ and
the vertices of $G$, and
\item If $V_1$ and $V_2$ are two disjoint sets of vertices of $G$ so that
no edge of $G$ has one endpoint in $V_1$ and another one in $V_2$, then
the corresponding sets of random variables are independent.
\end{enumerate}
\end{definition}
Note that the dependency graph of a
family of variables is not unique. Indeed if $G$ is a dependency graph
for a family and $G$ is not a complete graph,
then we can get other dependency graphs for the family
by simply adding new edges to $G$.
Now we are in a position to state Janson's theorem, the famous
{\em Janson dependency criterion}.
\begin{theorem} \cite{janson} \label{janson}
Let $Y_{n,k}$ be an array of random variables such that for all $n$, and
for all $k=1,2,\cdots ,N_n$, the inequality $|Y_{n,k}|\leq A_n$ holds for
some real number $A_n$, and that the maximum degree of a dependency
graph of $\{Y_{n,k} | k=1,2,\cdots ,N_n \}$ is $\Delta_n$.
Set $Y_n=\sum_{k=1}^{N_n} Y_{n,k}$ and $\sigma_n^2= \hbox {Var} ( Y_n)$. If there
is a natural number $m$ so that
\begin{equation} \label{jansencond}
N_n\Delta_n^{m-1} \left (\frac{A_n}{\sigma_n} \right )^m \rightarrow 0,
\end{equation}
as $n$ goes to infinity, then \[ \tilde{Y}_n \rightarrow N(0,1) .\]
\end{theorem}
Let us order the ${n\choose k}$ subwords of length $k$ of the permutation
$p_1p_2\cdots p_n$ linearly in some way.
For $1\leq i\leq {n\choose k}$, let $X_{n,i}$
be the indicator random
variable of the event that in a randomly selected permutation of length $n$,
the $i$th subword of length $k$ in the permutation $p=p_1p_2\cdots p_n$
is a $12\cdots k$-pattern. We will now verify that the family of the
$X_{n,i}$ satisfies all conditions of the Janson Dependency Criterion.
First, $|X_{n,i}|\leq 1$ for all $i$ and all $n$, since the $X_{n,i}$ are
indicator random variables. So we can set $A_n=1$. Second, $N_n={n\choose k}$,
the total number of subwords of length $k$ in $p$. Third, if $a\neq b$, then
$X_{n,a}$ and $X_{n,b}$ are independent unless the corresponding subwords intersect.
For that, the $b$th subword must intersect the $a$th subword in $j$ entries,
for some $1\leq j\leq k-1$. For a fixed $a$th subword, the number of
ways that can happen is $\sum_{j=1}^{k-1} {k\choose j}{n-k\choose k-j}=
{n\choose k}-{n-k \choose k}-1$, where we used
the well-known Vandermonde identity to compute the sum.
Therefore,
\begin{equation} \label{maxdegree}
\Delta_n \leq {n\choose k}-{n-k \choose k}-1.
\end{equation}
In particular, note that (\ref{maxdegree}) provides an upper bound for
$\Delta_n$ in terms of a polynomial function of $n$ that is
of degree $k-1$ since terms of degree
$k$ will cancel.
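The Vandermonde computation behind (\ref{maxdegree}) is easy to confirm numerically; here is a quick Python check (our own, for illustration only).

```python
from math import comb

# sum_{j=1}^{k-1} C(k,j) C(n-k,k-j) = C(n,k) - C(n-k,k) - 1,
# the bound on the maximum degree Delta_n of the dependency graph
for n in range(8, 30):
    for k in range(2, 7):
        lhs = sum(comb(k, j) * comb(n - k, k - j) for j in range(1, k))
        assert lhs == comb(n, k) - comb(n - k, k) - 1
print("identity verified")
```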
There remains the task of finding a lower bound for $\sigma_n$ that
we can then use in applying Theorem \ref{janson}. Let $X_n=
\sum_{i=1}^{n\choose k} X_{n,i}$. We will show the following.
\begin{proposition} \label{varprop}
There exists a positive constant $c$ so that
for all $n$, the inequality
\[\hbox {Var}(X_n)\geq cn^{2k-1}\]
holds.
\end{proposition}
\begin{proof}
By linearity of expectation, we have
\begin{eqnarray} \label{variance}
\hbox {Var} (X_n) & = & E(X_n^2) - (E(X_n))^2 \\
& = & E \left (\left( \sum_{i=1}^{{n\choose k}} X_{n,i} \right )^2 \right )
- \left (E \left (\sum_{i=1}^{{n\choose k}} X_{n,i} \right ) \right )^2 \\
& = & E \left (\left( \sum_{i=1}^{{n\choose k}} X_{n,i} \right )^2 \right )
- \left( \sum_{i=1}^{{n\choose k}} E(X_{n,i}) \right )^2 \\
\label{lastone} & = & \sum_{i_1, i_2}
E(X_{n,i_1}X_{n,i_2}) - \sum_{i_1, i_2}
E(X_{n,i_1})E(X_{n,i_2}).
\end{eqnarray}
Let $I_1$ (resp. $I_2$) denote the $k$-element subword of $p$ indexed
by $i_1$, (resp. $i_2$). Clearly, it suffices to show that
\begin{equation} \label{simplified} \sum_{|I_1\cap I_2| \leq 1}
E(X_{n,i_1}X_{n,i_2}) - \sum_{i_1, i_2}
E(X_{n,i_1})E(X_{n,i_2}) \geq cn^{2k-1},\end{equation}
since the left-hand side of (\ref{simplified}) is obtained from
(\ref{lastone}) by removing the sum of some nonnegative terms, that is,
the sum of all $E(X_{n,i_1}X_{n,i_2})$ where $|I_1\cap I_2| >1$.
As $E(X_{n,i})=1/k!$ for each $i$, the sum with negative sign in
(\ref{lastone}) is
\[ \sum_{i_1, i_2}
E(X_{n,i_1})E(X_{n,i_2}) ={n\choose k}^2 \cdot \frac{1}{k!^2},\]
which is a polynomial function
in $n$, of degree $2k$ and of leading coefficient
$\frac{1}{k!^4}$. As far as the summands in (\ref{lastone}) with a positive
sign go, {\em most} of them are also equal to $\frac{1}{k!^2}$. More
precisely, $E(X_{n,i_1}X_{n,i_2})=\frac{1}{k!^2}$ when
$I_1$ and $I_2$ are disjoint, and that happens for
${n\choose k}{n-k\choose k}$ ordered pairs $(i_1,i_2)$
of indices. The sum of these
summands is
\begin{equation}
\label{disjoint} d_n={n\choose k}{n-k\choose k} \frac{1}{k!^2},
\end{equation}
which is again a polynomial function in $n$, of degree $2k$ and with leading
coefficient
$\frac{1}{k!^4}$. So summands
of degree $2k$ will cancel out in (\ref{lastone}). (We will see in the next
paragraph that the summands we have not yet considered add up to a polynomial
of degree $2k-1$.)
In fact, considering the two types of summands we studied in
(\ref{lastone}) and (\ref{disjoint}), we see that they add up to
\begin{eqnarray}
{n\choose k}{n-k\choose k} \frac{1}{k!^2}-{n\choose k}^2 \frac{1}{k!^2}
& = & n^{2k-1} \frac{2{k\choose 2}-{2k\choose 2}}{k!^4}+O(n^{2k-2}) \\
\label{theeasy} & = & n^{2k-1} \frac{-k^2}{k!^4} +O(n^{2k-2}) .
\end{eqnarray}
Next we look at ordered pairs of indices $(i_1,i_2)$ so that the corresponding
subwords $I_1$ and $I_2$ intersect in exactly one entry, the entry
$x$. Let us say that counting
from the left, $x$ is the $a$th
entry in $I_1$, and the $b$th entry in $I_2$. See Figure
\ref{subwords} for an illustration.
\begin{figure}[ht]
\begin{center}
\epsfig{file=subwords.eps}
\caption{In this example, $k=11$, $a=7$, and $b=5$. }
\label{subwords}
\end{center}
\end{figure}
Observe that
$X_{i_1}X_{i_2}=1$ if and only if all of the following independent
events hold.
\begin{itemize}
\item In the $(2k-1)$-element set of entries that belong to $I_1\cup I_2$,
the entry $x$ is the $(a+b-1)$th smallest. This happens with
probability $1/(2k-1)$.
\item The $a+b-2$ entries on the left of $x$ in $I_1\cup I_2$ are all smaller
than the $2k-a-b$ entries on the right of $x$ in $I_1\cup I_2$.
This happens with probability $\frac{1}{{2k-2\choose a+b-2}}$.
\item The subwords of $I_1$ on the left of $x$ and on the right of $x$,
and the subwords of $I_2$ on the left of $x$ and on the right of $x$
are all monotone increasing. This happens with probability
$\frac{1}{(a-1)!(b-1)!(k-a)!(k-b)!}$.
\end{itemize}
Therefore, if $|I_1\cap I_2|=1$, then
\begin{eqnarray} \label{oneprob}
P(X_{i_1}X_{i_2}=1) & = &
\frac{1}{(2k-1){2k-2\choose a+b-2}(a-1)!(b-1)!(k-a)!(k-b)!} \\
& = & \frac{1}{(2k-1)!}\cdot{a+b-2 \choose a-1}{2k-a-b\choose k-a}
.\end{eqnarray}
How many such ordered pairs $(I_1,I_2)$ are there? There are ${n\choose 2k-1}$
choices for the underlying set $I_1\cup I_2$. Once that choice is made,
the $(a+b-1)$st smallest entry of $I_1\cup I_2$ will be $x$. Then
the number of choices for the set of entries other than $x$
that will be part of $I_1$ is ${a+b-2\choose a-1}{2k-a-b\choose k-a}$.
Therefore, summing over all $a$ and $b$ and
recalling (\ref{oneprob}),
\begin{eqnarray} \label{contribution}
p_n & = & \sum_{|I_1\cap I_2|=1} P(X_{i_1}X_{i_2}=1) \\
& = &
\frac{1}{(2k-1)!}{n\choose 2k-1}\sum_{a,b}{a+b-2 \choose a-1}^2
{2k-a-b\choose k-a}^2.
\end{eqnarray}
The expression we just obtained is a polynomial of degree $2k-1$, in the
variable $n$. We claim that its leading coefficient is
larger than $k^2/k!^4$. If we can show that, the proposition will be proved
since (\ref{theeasy}) shows that the summands not included in
(\ref{contribution}) contribute about $-\frac{k^2}{k!^4}n^{2k-1}$ to
the left-hand side of (\ref{simplified}).
Recall that by the Cauchy-Schwarz inequality, if $t_1,t_2,\cdots, t_m$
are non-negative real numbers, then
\begin{equation}\label{schwarz}
\frac{\left(\sum_{i=1}^m t_i\right)^2}{m} \leq \sum_{i=1}^m t_i^2,
\end{equation}
where equality holds if and only if all the $t_i$ are equal.
Let us apply this inequality with the numbers ${a+b-2 \choose a-1}
{2k-a-b\choose k-a}$ playing the role of the $t_i$, where $a$ and $b$
range from 1 to $k$, so that $m=k^2$.
We get that
\begin{equation}
\label{cauchy} \sum_{1\leq a,b \leq k}{a+b-2 \choose a-1}^2
{2k-a-b\choose k-a}^2 > \frac{\left (
\sum_{1\leq a,b\leq k} {a+b-2 \choose a-1}
{2k-a-b\choose k-a} \right)^2}{k^2}. \end{equation}
We will use Vandermonde's identity to compute the right-hand side. To that
end, we first compute the sum of summands with a {\em fixed} $h=a+b$.
We obtain
\begin{eqnarray}
\sum_{1\leq a,b\leq k} {a+b-2 \choose a-1}
{2k-a-b\choose k-a} & = & \sum_{h=2}^{2k} \sum_{a=1}^k
{h-2\choose a-1}{2k-h\choose k-a} \\
& = & \sum_{h=2}^{2k} {2k-2\choose k-1} \\
& = & (2k-1) \cdot {2k-2\choose k-1}.
\end{eqnarray}
Substituting the last expression into the right-hand side of (\ref{cauchy})
yields
\begin{equation} \label{estimate} \sum_{1\leq a,b \leq k}{a+b-2 \choose a-1}^2
{2k-a-b\choose k-a}^2 > \frac{1}{k^2} \cdot (2k-1)^2 \cdot
{2k-2\choose k-1}^2.\end{equation}
\noindent Therefore, (\ref{contribution}) and (\ref{estimate}) imply that
\[p_n>\frac{1}{(2k-1)!}{n\choose 2k-1}\frac{(2k-1)^2}{k^2}
{2k-2\choose k-1}^2.\]
As we pointed out after (\ref{contribution}), $p_n$ is a polynomial of
degree $2k-1$ in the variable $n$. The last displayed inequality shows that
its leading coefficient is larger than
\[ \frac{1}{(2k-1)!^2} \cdot \frac{1}{k^2} \cdot \frac{(2k-2)!^2}{(k-1)!^4}
=\frac{k^2}{k!^4} \] as claimed.
Comparing this with (\ref{theeasy})
completes the proof of our Proposition.
\end{proof}
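Both the Vandermonde evaluation and the Cauchy--Schwarz step in the proof above can be sanity-checked numerically; a small Python sketch (our own):

```python
from math import comb

for k in range(1, 12):
    s1 = sum(comb(a + b - 2, a - 1) * comb(2 * k - a - b, k - a)
             for a in range(1, k + 1) for b in range(1, k + 1))
    # the Vandermonde evaluation used in the proof
    assert s1 == (2 * k - 1) * comb(2 * k - 2, k - 1)
    s2 = sum((comb(a + b - 2, a - 1) * comb(2 * k - a - b, k - a)) ** 2
             for a in range(1, k + 1) for b in range(1, k + 1))
    # the Cauchy-Schwarz estimate (strict for k >= 2)
    assert s2 * k * k >= s1 * s1
print("both steps verified")
```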
We can now return to the application of Theorem \ref{janson} to our
variables $X_{n,i}$. By Proposition \ref{varprop}, there is a positive
constant $C$, depending only on $k$, so that $\sigma_n>Cn^{k-0.5}$ for all $n$.
So (\ref{jansencond}) will be satisfied if we show that
there exists a positive integer $m$ so that
\[{n\choose k} (dn^{k-1})^{m-1} \cdot (C^{-1}n^{-k+0.5})^m \leq
d'n^{1-0.5m}
\rightarrow 0,\]
where $\Delta_n\leq dn^{k-1}$ by (\ref{maxdegree}) and $d'$ depends only on
$k$ and $m$. Clearly, any integer $m\geq 3$ is a good choice. So we have proved the
following theorem.
\begin{theorem} Let $k$ be a fixed positive integer,
and let $X_n$ be the random variable counting
occurrences of $\alpha_k$ in permutations of length $n$.
Then $\tilde{X}_n\rightarrow N(0,1)$. In other words, $X_n$ is asymptotically
normal.
\end{theorem}
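A brute-force sanity check (our own code) of the first moment used above: summed over all $n$-permutations, the number of increasing subsequences of length $k$ is exactly $n!\binom{n}{k}/k!$, matching $E(X_{n,i})=1/k!$ for each of the $\binom{n}{k}$ subwords.

```python
from itertools import combinations, permutations
from math import comb, factorial

def rising_k_subseqs(p, k):
    """Number of increasing subsequences of length k in p."""
    return sum(1 for pos in combinations(range(len(p)), k)
               if all(p[pos[i]] < p[pos[i + 1]] for i in range(k - 1)))

n, k = 6, 3
total = sum(rising_k_subseqs(p, k) for p in permutations(range(n)))
# E(X_n) = C(n,k)/k!  =>  total = n! * C(n,k) / k!
assert total * factorial(k) == factorial(n) * comb(n, k)
print("mean of X_n equals C(n,k)/k!")
```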
\section{Monotone Subsequences with Entries in Consecutive Positions}
In 2001, Sergi Elizalde and Marc Noy \cite{elizalde}
considered similar problems using another definition of pattern containment.
Let us say that the permutation $p=p_1p_2\cdots p_n$ {\em tightly}
contains the permutation $q=q_1q_2\cdots q_k$ if there exists an index
$0\leq i\leq n-k$ so that $q_j<q_r$ if and only if $p_{i+j}<p_{i+r}$.
(We point out that this definition is a very special case of the one
introduced
by Babson and Steingrimsson in \cite{babson} and called {\em generalized
pattern avoidance}, but we will not need that much more general concept in
this paper.)
\begin{example}
While permutation 246351 contains 132 (take the second, third,
and fifth entries), it does not {\em tightly contain}
132 since there are no three entries in consecutive positions in 246351
that would form a 132-pattern.
\end{example}
If $p$ does not tightly contain $q$, then we say that $p$ {\em
tightly avoids} $q$. Let $T_n(q)$ denote the number of $n$-permutations
that tightly avoid $q$. An intriguing conjecture of Elizalde and Noy
\cite{elizalde} is the following.
\begin{conjecture} \label{enoy}
For any pattern $q$ of length $k$ and for any positive
integer $n$, the inequality
\[T_n(q)\leq T_n(\alpha_k)\]
holds.
\end{conjecture}
This is in stark contrast with the situation for traditional patterns, where,
as we have seen in the previous section,
the monotone pattern is neither the easiest nor the hardest to avoid, even in
the sense of growth rates.
\subsection{Tight Patterns of Length Three}
Conjecture \ref{enoy}
is proved in \cite{elizalde} in the special case of $k=3$.
As it is clear by taking reverses and complements that $T_n(123)=T_n(321)$ and
that $T_n(132)=T_n(231)=T_n(213)=T_n(312)$, it suffices to show that
$T_n(132)<T_n(123)$ if $n\geq 4$. The authors achieve that by a simple
injection.
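For small $n$, the inequality is easy to test by exhaustive search; the following Python sketch (our own code) verifies $T_n(132)<T_n(123)$ for $4\leq n\leq 7$, with equality at $n=3$.

```python
from itertools import permutations

def tightly_contains(p, q):
    """True if some len(q) consecutive entries of p form a q-pattern."""
    k = len(q)
    return any(
        all((p[i + s] < p[i + t]) == (q[s] < q[t])
            for s in range(k) for t in range(s + 1, k))
        for i in range(len(p) - k + 1))

def T(n, q):
    """Number of n-permutations tightly avoiding q."""
    return sum(1 for p in permutations(range(1, n + 1))
               if not tightly_contains(p, q))

for n in range(3, 8):
    print(n, T(n, (1, 2, 3)), T(n, (1, 3, 2)))
```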
It turns out that the numbers $T_n(123)$ are not simply larger than the
numbers $T_n(132)$; they are larger even in the sense of logarithmic
asymptotics. The following results contain the details.
\begin{theorem} \label{tight123} \cite{elizalde}
Let $A_{123}(x)=\sum_{n\geq 0} T_n(123)\frac{x^n}{n!}$ be the
exponential generating function of the sequence $\{T_n(123)\}_{n\geq 0}$.
Then
\[A_{123}(x)=\frac{\sqrt{3}}{2} \cdot \frac{e^{x/2}}{\cos \left (
\frac{\sqrt{3}}{2}x+\frac{\pi}{6}\right )}.\] Furthermore,
\[T_n(123)\sim \gamma_1 \cdot (\rho_1)^n \cdot n!,\]
where $\rho_1=\frac{3\sqrt{3}}{2\pi}$ and
$\gamma_1=\frac{3\sqrt{3}}{2\pi}\, e^{\pi/(3\sqrt{3})}$.
\end{theorem}
\begin{theorem} \label{tight132} \cite{elizalde}
Let $A_{132}(x)=\sum_{n\geq 0} T_n(132)\frac{x^n}{n!}$ be the
exponential generating function of the sequence $\{T_n(132)\}_{n\geq 0}$.
Then
\[A_{132}(x)=\frac{1}{1-\int_0^x e^{-t^2/2} dt}.\]
Furthermore,
\[T_n(132)\sim \gamma_2 \cdot (\rho_2)^n \cdot n!,\]
where $\rho_2^{-1}$ is the unique positive root of the equation
$\int_0^x e^{-t^2/2} dt=1$, and $\gamma_2= \rho_2\, e^{\rho_2^{-2}/2}$.
\end{theorem}
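The constant $\rho_2$ can be computed from Theorem \ref{tight132} in a few lines of Python; since $\int_0^x e^{-t^2/2}\,dt=\sqrt{\pi/2}\,\operatorname{erf}(x/\sqrt{2})$, a simple bisection suffices (the numerical value below is our own computation).

```python
from math import erf, pi, sqrt

def F(x):
    # \int_0^x e^{-t^2/2} dt, written via the error function
    return sqrt(pi / 2) * erf(x / sqrt(2))

lo, hi = 1.0, 2.0          # F(1) < 1 < F(2)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if F(mid) < 1 else (lo, mid)

rho2 = 1 / lo
print(rho2)   # approximately 0.7839
```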
\subsection{Tight Patterns of Length Four}
For tight patterns, the case of length four is even more complex than it is
for traditional patterns. Indeed,
it is not true that
each of the 24 sequences $T_n(q)$, where $q$ is a tight pattern of length
four, is identical to one of $T_n(1342)$, $T_n(1234)$, and $T_n(1324)$.
In fact, in \cite{elizalde}, Elizalde and Noy showed that there are exactly
seven distinct sequences of this kind. They have also proved the following
results.
\begin{theorem} We have
\begin{enumerate}
\item $T_n(1342)\sim \gamma_1 (\rho_1)^n \cdot n!$,
\item $T_n(1234)\sim \gamma_2 (\rho_2)^n \cdot n!$, and
\item $T_n(1243)\sim \gamma_3 (\rho_3)^n \cdot n!$,
\end{enumerate}
where $\rho_1^{-1}$ is the smallest positive root $z$ of the equation
$\int_0^z e^{-t^3/6}dt=1$, $\rho_2^{-1}$ is the smallest positive root
of $\cos z -\sin z +e^{-z}=0$, and $\rho_3$ is the solution of a certain
equation involving Airy functions.
The approximate values of these constants are
\begin{itemize}
\item $\rho_1=0.954611$, $\gamma_1=1.8305194$,
\item $\rho_2=0.963005$, $\gamma_2=2.2558142$,
\item $\rho_3=0.952891$, $\gamma_3=1.6043282$.
\end{itemize}
\end{theorem}
These results are interesting for several reasons. First, we see that
again, $T_n(\alpha_4)$ is larger than the other $T_n(q)$, even in the
asymptotic sense. Second, $T_n(1234)\neq T_n(1243)$, in contrast to
the traditional case, where $S_n(1234)=S_n(1243)$. Third, the tight
pattern 1342 is {\em not} the hardest to avoid, unlike in the traditional
case, where $S_n(1342)\leq S_n(q)$ for any pattern $q$ of length four.
\subsection{Longer Tight Patterns}
For tight patterns that are longer than four, the only known results concern
monotone patterns. They have been found by Richard Warlimont, and,
independently, also by Sergi Elizalde and Marc Noy.
\begin{theorem} \label{longer1}
\cite{elizalde}, \cite{warlimont1}, \cite{warlimont2}
For all integers $k\geq 3$, the identity
\[\sum_{n\geq 0}T_n(\alpha_k)\frac{x^n}{n!}=\left(\sum_{i\geq 0}
\frac{x^{ik}}{(ik)!} - \sum_{i\geq 0} \frac{x^{ik+1}}{(ik+1)!}
\right )^{-1}
\] holds.
\end{theorem}
\begin{theorem} \label{longer2} \cite{warlimont2}
Let $k\geq 3$, let $f_k(x)=\sum_{i\geq 0}
\frac{x^{ik}}{(ik)!} - \sum_{i\geq 0} \frac{x^{ik+1}}{(ik+1)!}$, and
let $\omega_k$ denote the smallest positive root of the equation $f_k(x)=0$.
Then
\[\omega_k=1+\frac{1}{k!}\left(1+o(1)\right)\]
as $k\rightarrow \infty$,
and
\[\frac{T_n(\alpha_k)}{n!} \sim c_k\,\omega_k^{-n}\]
for some constant $c_k$ depending only on $k$.
\end{theorem}
\subsection{Growth Rates}
The form of the results in Theorems \ref{tight123} and \ref{tight132} is
not an accident. They are special cases of the following general theorem.
\begin{theorem} \cite{sergi} \label{sergi}
For all patterns $q$, there exists a constant $w_q$ so that
\[\lim_{n\rightarrow \infty} \left ( \frac{T_n(q)}{n!} \right )^{1/n}
=w_q.\]
\end{theorem}
Compare this with the result of Corollary \ref{limits}. That Corollary
and the fact that the sequence $\left(S_n(q)\right)^{1/n}$ is increasing show
that the numbers $S_n(q)$ are roughly as large as $L(q)^n$, for some
constant $L(q)$. Clearly, it is much easier to avoid a tight pattern than
a traditional pattern. However, Theorem \ref{sergi} shows just how much
easier it is. Indeed, this time it is not simply the {\em number} of
pattern-avoiding permutations that is exponential; it is their {\em ratio} to
the number of all permutations that is exponentially small.
The fact that $T_n(q)/n! < C_q^n$ for {\em some} $C_q$ is straightforward.
Indeed, $T_n(q)/n! < \left(\frac{k!-1}{k!}\right)^{\lfloor n/k \rfloor}$
by simply looking at $ \lfloor n/k \rfloor$ distinct subwords of $k$
consecutive entries. Interestingly, Theorem \ref{sergi} shows that this
straightforward estimate is optimal in some (weak) sense. Note that there
is no known way to get a result similarly close to the truth for traditional
patterns.
\subsection{Asymptotic Normality} \label{tightass}
Our goal now is to prove that the distribution of the number of tight copies
of $\alpha_k$ is asymptotically
normal in randomly selected permutations of length $n$. Note that
in the special case of $k=2$, our problem is
reduced to the classic result stating that descents of permutations
are asymptotically normal. (Just as in the previous section, see
\cite{fulman} and its references for various proofs of this fact, or
\cite{ngendes} for a generalization.) Our method is
very similar to the one we used in Subsection \ref{asymptotics}. For
fixed $n$ and $1\leq i\leq n-k+1$, let $Y_{n,i}$
denote the indicator random variable of the event that in
$p=p_1p_2\cdots p_n$, the subsequence $p_ip_{i+1}\cdots p_{i+k-1}$ is
increasing. Set $Y_n=\sum_{i=1}^{n-k+1}Y_{n,i}$.
We want to use Theorem \ref{janson}.
Clearly, $|Y_{n,i}|\leq 1$ for every $i$, and $N_n=n-k+1$. Furthermore,
the graph with vertex set $\{1,2,\cdots ,n-k+1\}$ in which there is an edge
between $i$ and $j$ if and only if $|i-j|\leq k-1$ is a dependency graph
for the family $\{Y_{n,i}|1\leq i\leq n-k+1\}$. In this graph,
$\Delta_n=2k-2$. We will prove the following estimate for $\hbox {Var}(Y_n)$.
\begin{proposition} \label{tightupper}
There exists a positive constant $c$ so that $\hbox {Var}(Y_n)\geq cn$ for all $n$.
\end{proposition}
\begin{proof}
By linearity of expectation, we have
\begin{eqnarray} \label{tvariance}
\hbox {Var} (Y_n) & = & E(Y_n^2) - (E(Y_n))^2 \\
& = & E \left (\left( \sum_{i=1}^{n-k+1} Y_{n,i} \right )^2 \right )
- \left (E \left (\sum_{i=1}^{n-k+1} Y_{n,i} \right ) \right )^2 \\
& = & E \left (\left( \sum_{i=1}^{n-k+1} Y_{n,i} \right )^2 \right )
- \left( \sum_{i=1}^{n-k+1} E(Y_{n,i}) \right )^2 \\
\label{tlastone} & = & \sum_{i_1, i_2}
E(Y_{n,i_1}Y_{n,i_2}) - \sum_{i_1, i_2}
E(Y_{n,i_1})E(Y_{n,i_2}).
\end{eqnarray}
In (\ref{tlastone}), all the $(n-k+1)^2$ summands with a negative
sign are equal to $1/k!^2$. Among the summands with a positive sign,
the $(n-2k+1)(n-2k+2)$ summands in which $|i_1-i_2|\geq k$ are equal
to $1/k!^2$, the $n-k+1$ summands in which $i_1=i_2$ are equal to
$1/k!$, and the $2(n-k)$ summands in which $|i_1-i_2|=1$ are
equal to $1/(k+1)!$. All remaining summands are non-negative. This
shows that
\begin{eqnarray*} \hbox {Var}(Y_n) & \geq & \frac{n(1-2k)+3k^2-4k+1}{k!^2}
+\frac{n-k+1}{k!} + \frac{2(n-k)}{(k+1)!} \\
& \geq & \left(\frac{1}{k!}+\frac{2}{(k+1)!} - \frac{2k-1}{k!^2}\right) n
+d_k,\end{eqnarray*}
where $d_k$ is a constant that depends only on $k$. As the coefficient
$\frac{1}{k!}+\frac{2}{(k+1)!} - \frac{2k-1}{k!^2}$ of $n$ in the
last expression is positive for
all $k\geq 2$, our claim is proved.
\end{proof}
The main theorem of this subsection is now immediate.
\begin{theorem} Let $Y_n$ denote the random variable counting tight copies
of $\alpha_k$ in a randomly selected permutation of length $n$. Then
$\tilde{Y}_n\rightarrow N(0,1)$.
\end{theorem}
\begin{proof} Use Theorem \ref{janson} with $m=3$, and let $c$ be the
constant of Proposition \ref{tightupper}. Then the left-hand side of
(\ref{jansencond}) is at most
\[(n-k+1)\cdot (2k-2)^2 \cdot \frac{c^{-3/2}}{n^{1.5}},\] which converges
to 0 as $n$ goes to infinity.
\end{proof}
\section{Consecutive Entries in Consecutive Positions}
Let us take the idea of Elizalde and Noy
one step further, by restricting the notion of pattern containment
as follows. Let $p=p_1p_2\cdots p_n$ be a permutation, let $k<n$, and let
$q=q_1q_2\cdots
q_k$ be another permutation. We say that $p$ {\em very tightly} contains
$q$ if there is an index $0\leq i\leq n-k$ and an integer $0\leq a\leq
n-k$ so that $q_j<q_r$ if and only if $p_{i+j}<p_{i+r}$, and,
\[\{p_{i+1},p_{i+2},\cdots ,p_{i+k}\}=\{a+1,a+2,\cdots ,a+k\} .\]
That is, $p$ very tightly contains $q$ if $p$ tightly contains $q$ and
the entries of $p$ that form a copy of $q$ are not just in consecutive
positions, but they are also consecutive as integers (in the sense that
their set is an interval). We point out that this definition was used
by A. Myers \cite{myers} who called it {\em rigid} pattern avoidance.
However, in order to keep continuity with our previous definitions, we
will refer to it as very tight pattern avoidance.
For example, 15324 tightly contains 132 (consider the first three entries),
but does not very tightly contain 132. On the other hand, 15324 very tightly
contains 213, as can be seen by considering the last three entries. If $p$
does not very tightly contain $q$, then we will say that $p$ {\em very tightly
avoids} $q$.
\subsection{Enumerative Results}
Let $V_n(q)$ be the number of permutations of length $n$ that very tightly
avoid the pattern $q$. The following early results on $V_n(\alpha_k)$ are due
to David Jackson et al. They generalize earlier work by Riordan
\cite{Riordan} concerning the special case of $k=3$.
\begin{theorem} \label{jackson} \cite{jackson}, \cite{jackreid}
For all positive integers $n$, and any $k\leq n$, the value of
$V_n(\alpha_k)$ is equal to
the coefficient of $x^n$ in the formal power series
\[\sum_{m\geq 0} m!x^m \left(\frac{1-x^{k-1}}{1-x^k}\right)^m.\]
\end{theorem}
Note that in particular, this implies that for $k\leq n<2k$, that is, for
$n=k+r$ with $0\leq r<k$, the number of
permutations of length $n$ {\em containing} a very tight copy of $\alpha_k$
is $r!(r^2+r+1)$.
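This count is easy to confirm by brute force for small parameters; the following Python sketch (our own code) checks the value $r!(r^2+r+1)$ for $k=3$ and $0\leq r<k$.

```python
from itertools import permutations
from math import factorial

def very_tightly_contains(p, q):
    """q occurs in consecutive positions with consecutive values."""
    k = len(q)
    return any(
        max(p[i:i + k]) - min(p[i:i + k]) == k - 1
        and all((p[i + s] < p[i + t]) == (q[s] < q[t])
                for s in range(k) for t in range(s + 1, k))
        for i in range(len(p) - k + 1))

k = 3
alpha = tuple(range(1, k + 1))
for r in range(k):
    n = k + r
    count = sum(1 for p in permutations(range(1, n + 1))
                if very_tightly_contains(p, alpha))
    assert count == factorial(r) * (r * r + r + 1)
print("counts 1, 3, 14 confirmed for k = 3")
```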
\subsection{An Extremal Property of the Monotone Pattern}
Recall that we have seen in Section 2 that in the multiset of
the $k!$ numbers $S_n(q)$ where $q$ is of length $k$,
the number $S_n(\alpha_k)$ is neither minimal nor maximal. Also recall
that in Section 3 we mentioned that
in the multiset of
the $k!$ numbers $T_n(q)$, where $q$ is of length $k$,
the number $T_n(\alpha_k)$ is {\em conjectured}
to be maximal. While we cannot prove that in the
multiset of
the $k!$ numbers $V_n(q)$, where $q$ is of length $k$,
the number $V_n(\alpha_k)$ is maximal, in this Subsection we prove
that for almost all patterns $q$ of length $k$, the inequality
$V_n(q)\leq V_n(\alpha_k)$ does hold.
\subsubsection{An Argument Using Expectations} \label{outline}
Let $q$ be any pattern of length $k$.
For a fixed positive integer $n$, let $X_{n,q}$ be the random
variable counting the very tight copies
of $q$ in a randomly selected
$n$-permutation. It is straightforward to see that by linearity of
expectation,
\begin{equation} \label{equal}
E(X_{n,q})=\frac{(n-k+1)^2}{{n\choose k}k!}.\end{equation}
In particular, $E(X_{n,q})$ does not depend on $q$, just on the length $k$
of $q$.
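Indeed, there are $(n-k+1)^2$ (position, value-offset) pairs, and each one carries a very tight copy of $q$ with probability $(n-k)!/n!=1/\left({n\choose k}k!\right)$. A brute-force Python check of (\ref{equal}) (our own code) for $n=6$, $k=3$ and three choices of $q$:

```python
from itertools import permutations
from math import comb, factorial

def very_tight_copies(p, q):
    """Number of very tight copies of q in p."""
    k = len(q)
    return sum(
        1 for i in range(len(p) - k + 1)
        if max(p[i:i + k]) - min(p[i:i + k]) == k - 1
        and all((p[i + s] < p[i + t]) == (q[s] < q[t])
                for s in range(k) for t in range(s + 1, k)))

n, k = 6, 3
for q in ((1, 2, 3), (1, 3, 2), (2, 1, 3)):
    total = sum(very_tight_copies(p, q)
                for p in permutations(range(1, n + 1)))
    # E(X_{n,q}) = (n-k+1)^2 / (C(n,k) k!), independently of q
    assert total * comb(n, k) * factorial(k) == (n - k + 1) ** 2 * factorial(n)
print("expectation formula verified")
```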
Let $p_{n,i,q}$
be the probability that a randomly selected $n$-permutation
contains {\em exactly} $i$ very tight copies of $q$, and let $P(n,i,q)$ be the
probability that a randomly selected $n$-permutation contains {\em at least}
$i$ very tight
copies of $q$. Note that $V_n(q)=(1-P(n,1,q))n!$, for any given pattern
$q$.
Now note that by the definition of expectation, if $m$ denotes the largest
possible number of very tight copies of $q$ in an $n$-permutation, then
\begin{eqnarray*} \label{atleast}
E(X_{n,q}) & = & \sum_{i= 1}^{m} ip_{n,i,q} \\
& = & \sum_{j=0}^{m-1} \sum_{i=0}^j p_{n,m-i,q} \\
& = & p_{n,m,q} + (p_{n,m,q}+p_{n,m-1,q}) + \cdots +
(p_{n,m,q}+\cdots +p_{n,1,q}) \\
& = & \sum_{i=1}^m P(n,i,q) .
\end{eqnarray*}
We know from (\ref{equal}) that $E(X_{n,q})=E(X_{n,\alpha_k})$,
and then the previous displayed equation implies that
\begin{equation}\label{bigequal}
\sum_{i=1}^m P(n,i,q) = \sum_{i=1}^m P(n,i,\alpha_k) .\end{equation}
So if we can show that for $i\geq 2$, the inequality
\begin{equation} \label{toprove} P(n,i,q)\leq P(n,i,\alpha_k)
\end{equation} holds, then (\ref{bigequal}) will imply
that $P(n,1,q) \geq P(n,1,\alpha_k)$, which is equivalent to $V_n(q)\leq
V_n(\alpha_k)$, which we set out to prove.
\subsubsection{Extendible and Non-extendible Patterns}
Now we are going to describe the set of patterns $q$ for which we will
prove that $V_n(q)\leq
V_n(\alpha_k)$.
Let us assume that the permutation $p=p_1p_2\cdots p_n$ very tightly contains
two {\em non-disjoint} copies of the pattern $q=q_1q_2\cdots q_k$.
Let these two copies be $q^{(1)}$ and
$q^{(2)}$, so that $q^{(1)}=p_{i+1}p_{i+2}\cdots p_{i+k}$ and
$q^{(2)}=p_{i+j+1}p_{i+j+2}\cdots p_{i+j+k}$ for some $j\in [1,k-1]$.
Then $|q^{(1)}\cap q^{(2)}|=k-j=:s$. Furthermore, since the set of entries
of $q^{(1)}$ is an interval, and the set of entries of $q^{(2)}$ is an
interval, it follows that the set of entries of $q^{(1)}\cap q^{(2)}$ is
also an interval. So the rightmost
$s$ entries of $q$, and the leftmost $s$ entries
of $q$ must form identical patterns, and the respective sets of these entries
must both be intervals.
If $q'$ is the reverse of the pattern $q$, then clearly $V_n(q)=V_n(q')$.
Therefore, we can assume without loss of generality
that the first entry of $q$ is less than the last entry of $q$.
For shortness, we will call such patterns {\em rising} patterns.
We claim that if $p$ very tightly
contains two non-disjoint copies $q^{(1)}$ and
$q^{(2)}$ of the rising pattern $q$, and $s$ is defined as above, then
the {\em rightmost} $s$ entries of $q$ must also be the {\em largest} $s$
entries of $q$. This can be seen by considering $q^{(1)}$. Indeed,
the set of these entries of $q^{(1)}$ is the
intersection of two intervals of the same length, and therefore, must
be an ending segment of
the interval that starts on the left of the other. An analogous argument,
applied for $q^{(2)}$, shows that the leftmost $s$ entries of $q$ must
also be the {\em smallest} $s$ entries of $q$.
So we have proved the following.
\begin{proposition} \label{conditions}
Let $p$ be a permutation that very tightly contains copies $q^{(1)}$ and
$q^{(2)}$ of the pattern $q=q_1q_2\cdots q_k$. Let us assume without
loss of generality that
$q$ is rising. Then
$q^{(1)}$ and $q^{(2)}$ are disjoint unless all of the following hold.
There exists a positive integer $s\leq k-1$ so that
\begin{enumerate}
\item the rightmost $s$ entries of $q$ are also the largest $s$ entries
of $q$,
and the leftmost $s$ entries of $q$ are also the smallest $s$ entries of $q$,
and
\item the pattern of the leftmost $s$ entries of $q$ is identical to the
pattern of the rightmost $s$ entries of $q$.
\end{enumerate}
\end{proposition}
If $q$ satisfies both of these criteria, then two
very tightly contained copies of $q$ in $p$ may indeed intersect.
For example, the pattern $q=2143$ satisfies both of the above criteria with
$s=2$, and indeed, 214365 very tightly contains two intersecting copies of
$q$, namely 2143 and 4365.
The following definition is similar to one in \cite{myers}.
\begin{definition}
Let $q=q_1q_2\cdots q_k$ be a rising pattern
that satisfies both conditions of Proposition
\ref{conditions}.
Then we say that $q$ is {\em extendible}.
If $q$ is rising and not extendible, then we say that $q$ is non-extendible.
\end{definition}
Note that the notions of extendible and non-extendible patterns are only
defined for rising patterns here.
\begin{example} The extendible patterns of length four are as follows:
\begin{itemize} \item 1234, 1324 (here $s=1$),
\item 2143 (here $s=2$). \end{itemize}
\end{example}
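Both conditions of Proposition \ref{conditions} are purely mechanical, so the classification above can be double-checked by brute force. The following Python sketch (the helper names are ours) enumerates the rising patterns of length four and tests extendibility:

```python
from itertools import permutations

def pattern(seq):
    """Relative-order pattern of a sequence of distinct entries (1 = smallest)."""
    ranks = sorted(seq)
    return tuple(ranks.index(x) + 1 for x in seq)

def is_rising(q):
    # the first entry is less than the last entry
    return q[0] < q[-1]

def is_extendible(q):
    """Check the two conditions of Proposition `conditions` for a rising pattern q."""
    k = len(q)
    for s in range(1, k):
        right, left = q[-s:], q[:s]
        largest = set(sorted(q)[-s:])
        smallest = set(sorted(q)[:s])
        if set(right) == largest and set(left) == smallest \
           and pattern(left) == pattern(right):
            return True
    return False

rising = [q for q in permutations(range(1, 5)) if is_rising(q)]
extendible = [q for q in rising if is_extendible(q)]
print(extendible)   # -> [(1, 2, 3, 4), (1, 3, 2, 4), (2, 1, 4, 3)]
```

Running it reproduces exactly the three extendible patterns listed in the example above.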
Now we are in a position to prove the main result of this subsection.
\begin{theorem} \label{almostall}
Let $q$ be any pattern of length $k$ so that either $q$ or
its reverse $q'$ is non-extendible. Then
for all positive integers $n$, \[V_n(q)\leq V_n(\alpha_k).\]
\end{theorem}
\begin{proof} We have seen in Subsubsection \ref{outline} that it suffices
to prove (\ref{toprove}).
On the one hand,
\begin{equation} \label{lowerbound}
\frac{(n-k-i+2)!}{n!} \leq P(n,i,\alpha_k),\end{equation} since the number
of $n$-permutations very tightly
containing $i$ copies of
$\alpha$ is at least as large as the number of $n$-permutations very tightly
containing
the pattern $12\cdots (i+k-1)$. The latter is at least as large as the number
of $n$-permutations that very tightly contain
a $12\cdots (i+k-1)$-pattern in their first $i+k-1$ positions.
On the other hand,
\begin{equation} \label{upperbound}
P(n,i,q)\leq {n-i(k-1)\choose i}^2(n-ik)!\frac{1}{n!}.
\end{equation}
This can be proved by noting that if $S$ is the $i$-element set of
starting positions of $i$ (necessarily disjoint)
very tight copies of $q$ in an $n$-permutation, and $A_S$ is the event
that in a random permutation $p=p_1\cdots p_n$,
the subsequence $p_jp_{j+1}\cdots p_{j+k-1}$ is a very tight
$q$-subsequence for
all $j\in S$, then $P(A_S)={n-i(k-1)\choose i}(n-ik)!
\frac{1}{n!}$. The details can be found in \cite{rules}.
Comparing (\ref{lowerbound}) and (\ref{upperbound}), the claim of
the theorem follows.
Again, the reader is invited to consult \cite{rules} for details.
\end{proof}
It is not difficult to show \cite{rules} that the ratio of extendible
patterns of length $k$ among all patterns of length $k$ converges
to 0 as $k$ goes to infinity. So
Theorem \ref{almostall} covers almost all patterns of length $k$.
\subsection{The Limiting Distribution of the Number of Very Tight Copies}
In the previous two sections, we have seen that the limiting distribution
of the number of copies of $\alpha_k$, as well as the
limiting distribution
of the number of tight copies of $\alpha_k$, is normal. Very tight
copies behave differently. We will discuss the special case
of $k=2$, that is, the case of the very tight pattern 12.
\begin{theorem} \label{poisson}
Let $Z_n$ be the random variable that counts very tight
copies of 12 in a randomly selected permutation of length $n$.
Then $Z_n$ converges in distribution to a Poisson distribution with parameter $\lambda=1$.
\end{theorem}
A version of this
result was proved, in a slightly different setup, by Wolfowitz
in \cite{wolfowitz}
and by Kaplansky in \cite{kaplansky}. They used the {\em method of moments},
which is the following.
\begin{lemma} \cite{rucinski} Let $U$ be a random variable so that
\begin{enumerate} \item for every positive integer $k$,
the moment $E(U^k)$ exists, and
\item the variable $U$ is completely determined
by its moments, that is, there is no other variable with the same sequence
of moments.
\end{enumerate}
Let $U_1,U_2,\cdots $ be a sequence of random variables, and let us assume
that for all positive integers $k$,
\[\lim_{n\rightarrow \infty} E(U_n^k) = E(U^k).\]
Then $U_n \rightarrow U$ in distribution.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{poisson}]
It is well-known \cite{vonmises}
that the Poisson distribution (with any parameter)
is determined by its
moments, so the method of moments can be applied to prove convergence
to a Poisson distribution. Let $Z_{n,i}$ be the indicator random variable of
the event that in a randomly selected $n$-permutation $p=p_1p_2\cdots p_n$,
the equality $p_{i}+1=p_{i+1}$ holds. Then $E(Z_{n,i})=1/n$, and the probability
that $p$ has a very tight copy of $\alpha_k$ for $k>2$ is $O(1/n)$. Therefore,
we have
\begin{equation} \label{longone}
\lim_{n\rightarrow \infty} E(Z_n^j)=\lim_{n\rightarrow \infty}
E\left(\left (\sum_{i=1}^{n-1} Z_{n,i}\right )^j\right)
=\lim_{n\rightarrow \infty}
E\left(\left (\sum_{i=1}^{n-1} V_{n,i}\right )^j\right),\end{equation}
where the $V_{n,i}$ are {\em independent}
random variables and each of them takes value 0 with probability $(n-1)/n$,
and value 1 with probability $1/n$. (See
\cite{wolfowitz} for more details.)
The rightmost limit in the above displayed equation
is not difficult to compute. Let $t$ be a fixed
non-negative integer. Then the probability that exactly
$t$ variables $V_{n,i}$ take value 1 is ${n-1\choose t}n^{-t}
\left(\frac{n-1}{n}\right)^{n-1-t} \sim \frac{e^{-1}}{t!}$. Once we know the $t$-element
set of the $V_{n,i}$ that take value 1, each of the $t^j$
strings of length $j$ formed from those $t$ variables contributes 1 to
$\left(\sum_{i=1}^{n-1} V_{n,i}\right)^j$. Summing over all $t$, this proves that
\[\lim_{n\rightarrow \infty}
E\left(\left (\sum_{i=1}^{n-1} V_{n,i}\right )^j\right) =e^{-1}\sum_{t\geq 0}
\frac{t^j}{t!}.
\]
On the other hand, it is well-known that $e^{-1}\sum_{t\geq 0}
\frac{t^j}{t!}$, the $j$th Bell number, is also the $j$th moment of the
Poisson distribution with parameter 1. Comparing this to (\ref{longone}),
we see that the sequence $E(Z_n^j)$ converges to the $j$th moment of
the Poisson distribution with parameter 1. Therefore, by the method of
moments, our claim is proved.
\end{proof}
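For small $n$ the distribution of $Z_n$ can be computed exactly by enumerating all $n!$ permutations. This confirms the exact mean $E(Z_n)=\frac{n-1}{n}$, shows the (slow) convergence of $P(Z_n=0)$ to $e^{-1}$, and checks numerically that the truncated series $e^{-1}\sum_t t^j/t!$ reproduces the first few Bell numbers $1,2,5,15$. A Python sketch (all names are ours):

```python
from itertools import permutations
from math import exp, factorial

def very_tight_12_copies(p):
    """Number of positions i with p_i + 1 = p_{i+1}."""
    return sum(p[i] + 1 == p[i + 1] for i in range(len(p) - 1))

n = 8
counts = {}
for p in permutations(range(1, n + 1)):
    z = very_tight_12_copies(p)
    counts[z] = counts.get(z, 0) + 1
total = factorial(n)

# E(Z_n) = (n-1)/n exactly: each of the n-1 positions succeeds with probability 1/n.
mean = sum(z * c for z, c in counts.items()) / total
print(mean)                      # -> 0.875 = 7/8

# P(Z_n = 0) approaches e^{-1} = 0.3678..., but slowly.
print(counts[0] / total, exp(-1))

# Dobinski-type identity: the j-th moment of Poisson(1) is the j-th Bell number.
def bell(j, terms=60):
    return exp(-1) * sum(t ** j / factorial(t) for t in range(terms))
print([round(bell(j)) for j in range(1, 5)])   # -> [1, 2, 5, 15]
```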
| {
"timestamp": "2007-11-27T19:42:28",
"yymm": "0711",
"arxiv_id": "0711.4325",
"language": "en",
"url": "https://arxiv.org/abs/0711.4325",
"abstract": "We review how the monotone pattern compares to other patterns in terms of enumerative results on pattern avoiding permutations. We consider three natural definitions of pattern avoidance, give an overview of classic and recent formulas, and provide some new results related to limiting distributions.",
"subjects": "Combinatorics (math.CO); Probability (math.PR)",
    "title": "On Three Different Notions of Monotone Subsequences"
} |
https://arxiv.org/abs/1704.00451 | Characterization of minimizers of an anisotropic variant of the Rudin-Osher-Fatemi functional with $L^1$ fidelity term | In this paper we study an anisotropic variant of the Rudin-Osher-Fatemi functional with $L^1$ fidelity term of the form \[ E(u) = \int_{\mathbb{R}^n} \phi(\nabla u) + \lambda \| u -f \|_{L^1(\mathbb{R}^n)}. \] We will characterize the minimizers of $E$ in terms of the Wulff shape of $\phi$ and the dual anisotropy. In particular we will calculate the subdifferential of $E$. We will apply this characterization to the special case $\phi = |\cdot|_1$ and $n=2$, which has been used in the denoising of 2D bar codes. In this case, we determine the shape of a minimizer $u$ when $f$ is the characteristic function of a circle. | \section{Introduction}
We analyze an anisotropic variant of the Rudin-Osher-Fatemi functional (anisotropic $\TV$-$L^1$ energy in the following)
\[
E(u) = \int_{\R^n} \phi(\nabla u) + \lambda \| u -f \|_{L^1(\R^n)},
\]
which has been proposed by Choksi et al.~in \cite{Choksi2011} for the special case of $\phi = |\cdot|_1$ and $n=2$ for the denoising of 2D bar codes. Given a noisy input $f \in L^1(\R^n)$ we seek
for a minimizer $u$ of $E$.
The famous Rudin-Osher-Fatemi functional \cite{Rudin1992} is given by
\[
E_\text{ROF}(u) = \int_{\Omega} |\nabla u|_2 + \lambda \|u-f\|_{L^2(\Omega)}^2,
\]
where the noisy input is modeled by a function $f \in L^2(\Omega)$ typically defined on a bounded domain $\Omega \subset \R^2$.
The first term measures the regularity of $u$ with respect to the Euclidean metric on $\R^2$. We call this term \textit{isotropic total variation}.
The second term measures the distance to the original signal and is called \textit{fidelity term}.
This model is not contrast invariant: plugging in a rescaled noisy image $cf$ for some rescaling parameter $c > 0$ does not result in a rescaled minimizer $cu$. Furthermore the minimization of $E_\text{ROF}$ is not faithful to any $f \ne 0$, which means that there is no $f \ne 0$ such that $f$ is the minimizer of $E_\text{ROF}$.
In \cite{Chan2005} Chan and Esedo\=glu analyze the isotropic $\TV$-$L^1$ model. This model has a different fidelity term, where the squared $L^2$ distance to the measured signal $f$ is replaced by the $L^1$ distance to $f$. The authors prove that this modification yields a contrast invariant minimization problem. They show that $u$ is a minimizer if and only if the level sets of $u$ solve a shape optimization problem. This motivates the study of the geometric properties of minimizers. They also show that the $\TV$-$L^1$ model is faithful to every characteristic function of a bounded domain with $C^2$ boundary, provided $\lambda$ is larger than some domain-dependent constant. A negative result is that the characteristic function of a square cannot appear as a minimizer of this model.
Therefore, when working with images that consist of shapes with some anisotropic feature, a modified total variation must be considered.
This has been done by Esedo\=glu and Osher in \cite{Esedoglu2004}
for the classical Rudin-Osher-Fatemi model. They consider an energy
\[
E_\text{EO}(u) = \int_{\Omega} \phi(\nabla u) + \lambda \|u-f\|_{L^2(\Omega)}^2
\]
where $\phi$ is a norm on $\R^2$. They prove that this energy prefers the corresponding Wulff shape
\[
W_\phi = \left\{ x \in \R^2 \,\middle| \, x\cdot y \le \phi(y) ~\forall y \in \R^2 \right\}.
\]
In particular they show that for large $\lambda$ the unique minimizer for $f = \chi_{W_\phi}$ is given by $cf$ for some constant $c > 0$.
In this work we combine the techniques used in \cite{Chan2005} and \cite{Esedoglu2004} to analyze the functional $E$. In his PhD thesis \cite{Duval2011} Duval starts with an excellent overview of the $\TV$-$L^1$ model. He extends the geometric point of view of \cite{Chan2005} and proposes two algorithms based on these results. At the end of the first part of \cite{Duval2011} he mentions
the generalization to anisotropies described by a norm $\phi$. If in addition $\phi$ is crystalline, that means that the Wulff shape is a polytope, he characterizes the minimizers for $f = \chi_C$ where $C$ is a bounded convex set. In this work we will allow a slightly more general class of anisotropies described by a Finsler metric $\phi$.
The main result in this work is \Cref{thm:main}, which gives a dual characterization of minimizers $u \in BV(\R^n)$. We will use this result in \Cref{ex} to show that for $\phi = |\cdot|_1$
and $f = \chi_{\{ |x|_2 \le 1 \}}$ the minimizer is given as intersection of the circle with a properly scaled square.
In \cite{Choksi2011} Choksi et al. prove one implication of \Cref{thm:main} for this special case and deduce that $E$ is faithful to every 2D bar code as long as $\lambda$ is larger than some constant which depends on the size of the bar code. We conclude by \Cref{ex} that the converse conclusion is wrong, namely that there are faithful binary signals which are not given by 2D bar codes.
\section{Anisotropic total variation}
We consider an anisotropy given by a convex and nonnegative, positively 1-homogeneous function $\phi : \R^n \rightarrow [0, \infty)$,
which means that $\phi$ satisfies $\phi(\alpha y) = \alpha \phi(y)$ for all $\alpha > 0$, $y \in \R^n$ and $\phi(y) = 0$ if and only if $y = 0$. A function $\phi$ like this is sometimes also called a gauge function (see \cite{Freund1987}, \cite{Friedlander2014}) or a Finsler metric (see \cite{Kawohl2008}). The Wulff shape associated with $\phi$ is given by
\[ W_\phi = \left\{ x \in \R^n \,\middle| \, -x\cdot y \le \phi(y) ~\forall y \in \R^n \right\} \]
and coincides with the polar set (sometimes also called the one-sided polar set) of
\[ B_\phi = \left\{ y \in \R^n \,\middle| \, \phi(-y) \le 1\right\}. \]
Then $W_\phi$ is a convex and compact set with $0 \in \interior W_\phi$.
\begin{remark} Note that we need to introduce a minus sign in the definition of the Wulff shape, which differs from the classical setting. This is due to the fact that we want to allow anisotropies which are not even functions. This modification ensures that the Wulff shape is still the optimal shape for the anisotropic total variation we will define in \Cref{defn:aniso_tv}. This technical aspect occurs several times in this work. If $\phi$ is an even function, we can skip the minus sign.
\end{remark}
The gauge dual $\phi^\circ$ of $\phi$ is given by
\[ \phi^\circ(x) \coloneqq \max_{\phi(y) \le 1} x\cdot y \]
and can also be understood as the Minkowski function of $-W_\phi$,
\[\phi^\circ(x) = \inf \{ \lambda > 0 \mid x \in -\lambda W_\phi \}. \]
Since $\phi^{\circ\circ} = \phi$ (see \cite[Proposition 2.1]{Friedlander2014}), we have $B_\phi = W_{\phi^\circ}$ and $W_\phi = B_{\phi^\circ}$. The gauge functions $\phi$ and $\phi^\circ$ satisfy the Cauchy-Schwarz inequality in the sense that
\begin{equation}
\label{eq:csi}
x\cdot y \le \phi^\circ(x) \phi(y) \text{ for all } x,y \in \R^n.
\end{equation}
In the particular case $\phi = |\cdot|_p$ for some $p \in [1,\infty]$ we have $\phi^\circ = |\cdot|_q$, where $q$ is given by
$\frac{1}{p} + \frac{1}{q} = 1$ in the usual sense. In this case the sets $W_\phi$ and $B_\phi$ are given by
\begin{align*}
W_\phi &= \{ x \in \R^n \mid |x|_q \le 1 \} \text{ and } \\
B_\phi &= \{ x \in \R^n \mid |x|_p \le 1 \}.
\end{align*}
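The identity $\phi^\circ = |\cdot|_q$ can be illustrated numerically: sampling directions on the unit sphere of $\phi = |\cdot|_p$ and maximizing $x \cdot y$ approximates $\phi^\circ(x)$ from below, while \eqref{eq:csi} guarantees that the sampled maximum never exceeds $|x|_q$. A short Python sketch (the sampling scheme and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 3.0, 1.5                       # conjugate exponents, 1/p + 1/q = 1

# Sample directions and scale them onto the unit sphere of phi = |.|_p.
y = rng.standard_normal((100_000, 3))
y /= np.linalg.norm(y, ord=p, axis=1)[:, None]

x = np.array([1.0, -2.0, 0.5])
numeric_dual = float((y @ x).max())   # approximates phi°(x) = max_{phi(y) <= 1} x·y
exact_dual = float(np.linalg.norm(x, ord=q))
print(numeric_dual, exact_dual)       # close, with numeric_dual <= exact_dual
```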
We need the following property, which is a generalization of the well known isometry of the embedding $L^\infty(\R^n ; \R^n) \hookrightarrow (L^1(\R^n ; \R^n))^*$ into the dual space of $L^1(\R^n ; \R^n)$.
\begin{lemma}
\label{eq:hoelder_extremal}
Let $f \in L^\infty(\R^n ; \R^n)$ satisfy
\[ \int_{\R^n} f \cdot g \le \int_{\R^n} \phi(g) ~\forall g \in L^1(\R^n ; \R^n). \]
Then $\phi^\circ(f) \le 1$ almost everywhere in $\R^n$, that means $f \in -W_\phi$ almost everywhere in $\R^n$.
\begin{proof}
Choose $\eta_{\phi^\circ}(x) \in \argmax_{\phi(y) \le 1} x \cdot y$ such that $\eta_{\phi^\circ}(x) \cdot x = \phi^\circ(x)$ and $\phi(\eta_{\phi^\circ}(x)) = 1$ for $x \in \R^n$. This can also be understood as $\eta_{\phi^\circ}(x) \in \partial \phi^\circ(x)$, where $\partial \phi^\circ(x)$ is the subdifferential of $\phi^\circ$ at $x \in \R^n$.
Let $g \in L^1(\R^n)$. Then, using \eqref{eq:csi}
\begin{align*}
\int_{\R^n} \phi^\circ(f) g &\le \int_{\R^n} \phi^\circ(f) |g| \\
&= \int_{\R^n} f \cdot \eta_{\phi^\circ}(f) |g|
\le \int_{\R^n} \phi\left(\eta_{\phi^\circ}(f) |g|\right)
= \int_{\R^n} |g|.
\end{align*}
Since $g$ was arbitrary we conclude $\phi^\circ(f) \le 1$ almost everywhere in $\R^n$.
\end{proof}
\end{lemma}
\begin{definition}[Anisotropic total variation, see \cite{Esedoglu2004}]
\label{defn:aniso_tv}
The anisotropic total variation of an $\R^n$-valued Radon measure $\mu \in [\mathcal{M}(\R^n)]^n$ is
\[
\int_{\R^n} \phi(\mathrm{d}\mu) \coloneqq \sup \left\{ \int_{\R^n} \varphi ~\mathrm{d}\mu \,\middle| \, \varphi \in C^1_c(\R^n ; \R^n), \varphi(x) \in -W_\phi ~ \forall x \in \R^n \right\}.
\]
The anisotropic total variation of a function $u \in BV(\R^n)$ is given by
\[ \TV_\phi(u) \coloneqq \int_{\R^n} \phi(\mathrm{d} Du ) \]
where $Du$ is the total variation measure of $u$.
\emph{Remark:} For $u \in BV(\R^n)$ we have
\[ \TV_\phi(u) = \sup \left\{ -\int_{\R^n} u \nabla \cdot \varphi \,\middle| \, \varphi \in C^1_c(\R^n ; \R^n), \varphi(x) \in -W_\phi ~ \forall x \in \R^n \right\}. \]
The isotropic total variation is given by $\TV = \TV_{|\cdot|_2}$.
\end{definition}
The supremum in the definition of the anisotropic total variation can equivalently be taken over vector fields that have a weak divergence in $L^\infty(\R^n)$.
\begin{lemma}
For $u \in BV(\R^n)$ we have
\[
\begin{aligned}
\TV_\phi(u) = \sup \bigg\{ -\int_{\R^n} u \nabla \cdot v \,\bigg|\, &v \in L^\infty(\R^n ; \R^n), \\
&\nabla \cdot v \in L^\infty(\R^n), v \in -W_\phi \text{ a.e.~in } \R^n \bigg\}.
\end{aligned}
\]
\begin{proof}
The proof is an easy adaptation of \cite[A.4]{Choksi2011}.
\end{proof}
\end{lemma}
\begin{example}
If $\Omega \subset \R^n$ is a bounded domain with Lipschitz continuous boundary, the anisotropic total variation equals
\[ \TV_\phi(\Omega) \coloneqq \TV_\phi(\chi_\Omega) = \int_{\partial \Omega} \phi(-\nu). \]
Here $\nu$ denotes the outward unit normal which is well defined almost everywhere on $\partial \Omega$.
\begin{proof}
We can prove this result by approximating the function
$\eta_\phi \circ (-\nu)$ (see \Cref{eq:hoelder_extremal}) by smooth functions and applying the Gauss formula.
\end{proof}
\end{example}
\begin{corollary}
\label{cor:tv_wulff}
The total variation of the Wulff shape is given by
\[
\TV_\phi(W_\phi) = n |W_\phi|.
\]
\begin{proof}
Since $W_\phi$ is convex, it has a Lipschitz continuous boundary, see \cite[1.2.2.3]{Grisvard1985}. Furthermore we know that at almost every point $x \in \partial W_\phi$ the Wulff shape lies on one side of the hyperplane $\{ y \in \R^n \mid y \cdot \nu(x) = x \cdot \nu(x) \}$. In particular we get
\[
\phi(-\nu(x)) = \phi^{\circ\circ}(-\nu(x))
= \sup_{\phi^\circ(y) \le 1} -\nu(x) \cdot y
= \sup_{y \in B_{\phi^\circ}=W_\phi} \nu(x) \cdot y = x \cdot \nu(x). \]
Then, using the Gauss formula
\[
\TV_\phi(W_\phi) = \int_{\partial W_\phi} \phi(-\nu) = \int_{\partial W_\phi} \nu(x) \cdot x ~\mathrm{d}x = n |W_\phi|.
\]
\end{proof}
\end{corollary}
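For $\phi = |\cdot|_1$ and $n = 2$, both the boundary-integral representation of $\TV_\phi$ and \Cref{cor:tv_wulff} can be checked by direct computation: the Wulff shape is the square $[-1,1]^2$ with $\TV_\phi = 8 = n|W_\phi|$, and for the unit disk $\int_{\partial B}|\nu|_1 = 8$ as well. A Python sketch (illustrative only):

```python
import numpy as np

# phi = |.|_1 (an even gauge, so phi(-nu) = |nu|_1); Wulff shape W_phi = [-1,1]^2.
phi = lambda v: float(np.abs(np.asarray(v)).sum())

# Square [-1,1]^2: four sides of length 2 with outward normals (+-1,0), (0,+-1).
tv_square = sum(2.0 * phi(nu) for nu in [(1, 0), (-1, 0), (0, 1), (0, -1)])
print(tv_square, 2 * 4.0)             # TV_phi(W_phi) = 8 = n * |W_phi|

# Unit disk: integrate |nu|_1 = |cos t| + |sin t| along the boundary (midpoint rule).
m = 1_000_000
t = (np.arange(m) + 0.5) * 2 * np.pi / m
tv_disk = float(np.sum(np.abs(np.cos(t)) + np.abs(np.sin(t))) * 2 * np.pi / m)
print(tv_disk)                        # -> 8.0 (the isotropic TV would be 2*pi)
```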
\begin{proposition}
\label{prop:equi}
There are constants $c, C > 0$ satisfying
\[ c \TV_\phi(u) \le \TV(u) \le C \TV_{\phi}(u) ~\forall u \in BV(\R^n). \]
\begin{proof}
This follows directly from the fact that $W_\phi$ is compact and contains $0$ in its interior.
\end{proof}
\end{proposition}
We state some results about the anisotropic total variation. All proofs are easy adaptations of the results \cite[5.2 and 5.5]{Evans1992} for the isotropic case.
\begin{proposition}[Lower semi-continuity]
\label{prop:lsc}
Let $(u_k)_{k\in\N} \subset BV(\R^n)$, $u \in BV(\R^n)$ with $u_k \to u$ in $L^1_\text{loc}(\R^n)$, then $\TV_\phi(u) \le \liminf_{k\to\infty} \TV_\phi(u_k)$.
\end{proposition}
\begin{proposition}[Approximation]
For every $u \in BV(\R^n)$ there is a sequence $(u_k)_{k \in \N} \subset BV(\R^n) \cap C^\infty(\R^n)$ satisfying
\begin{enumerate}
\item $u_k \to u$ in $L^1(\R^n)$ and
\item $\TV_\phi(u_k) \to \TV_\phi(u)$ for $k \to \infty$.
\end{enumerate}
\end{proposition}
\begin{proposition}[Coarea formula]
Let $u \in BV(\R^n)$. Define the level set $\Sigma(t) \coloneqq \{ x \in \R^n \mid u(x) > t \}$. Then we have
\[ \TV_\phi(u) = \int_{-\infty}^\infty \TV_\phi(\Sigma(t)) ~\mathrm{d}t. \]
\end{proposition}
\section{Anisotropic $\TV$-$L^1$ model}
\begin{definition}
For a given function $f \in L^1(\R^n)$ and $\lambda > 0$ we consider the functional
\[ E(u) \coloneqq E(u ; f, \lambda) \coloneqq \TV_\phi(u) + \lambda \| u - f \|_{L^1(\R^n)} ~,~ u \in BV(\R^n) \]
and seek for a minimizer $u \in BV(\R^n)$ of $E$.
\end{definition}
Using \Cref{prop:equi}, the isotropic compactness result in \cite[5.2 Theorem 4]{Evans1992} and \Cref{prop:lsc}, we can apply the direct method of the Calculus of Variations to deduce the existence of a minimizer.
Since $E$ is not strictly convex, the minimizer is not necessarily unique.
\begin{example}
Let $n=2$, $\phi = |\cdot|_2$, $B = B_\phi = \{ |x|_2 \le 1 \}$, $f = \chi_{B}$ and $\lambda = 2$. From what we will prove in this work, we infer that $f$ is a minimizer of $E$. For this special choice of $\lambda$ we note that $\alpha f$ for $\alpha \in [0,1]$ is also a minimizer of $E$ because
$E$ is convex and
\[ E(f) = \TV(f) = 2 \pi = 2\|f\|_{L^1} = E(0). \]
\end{example}
For small $\lambda$ and functions $f$ with compact support, we only have the trivial minimizer.
\begin{proposition}
\label{prop:trivial_minimizer}
Let $f \in L^1(\R^n)$ and $R > 0$ with $\support f \subset R\,W_\phi = \{ Rx \mid x \in W_\phi \}$.
The zero function is the unique minimizer of $E$ for all $0 < \lambda < \lambda_0 = \frac{n}{R}$.
\begin{proof}
This proof is similar to the proof of \cite[Lemma 2.2]{Choksi2011}, but we will additionally prove that $\lambda_0$ can be chosen as $\lambda_0 = \frac{n}{R}$.
We know that the Wulff shape is the shape that minimizes the anisotropic total variation with prescribed area.
We refer to \cite{Fonseca1991a} for the proof. Using a scaling argument we can deduce that
\begin{equation}
\label{eq:iso_ineq}
|A|^\frac{n-1}{n} \le C \TV_\phi(A)
\end{equation}
for all bounded sets $A \subset \R^n$ with finite perimeter. The constant $C$ is given by
\[
C = \frac{|W_\phi|^\frac{n-1}{n}}{\TV_\phi(W_\phi)} = n^{-1}|W_\phi|^{-\frac{1}{n}},
\]
where we applied \Cref{cor:tv_wulff}.
Now let $u \in BV(\R^n)$ and $0 < \lambda < \lambda_0 = \frac{n}{R}$. We have that
\begin{equation}
\label{eq:splitting}
\TV_\phi(u) = \TV_\phi(u_+) + \TV_\phi(-u_-)
\end{equation}
where $u_+$ and $u_-$ are the positive and negative parts of $u$. Since we are working with an anisotropy $\phi$ which is not necessarily an even function, we need to be careful
when comparing the anisotropic total variation of positive and negative functions. We know that
\[
\int_{\R^n} -u \nabla \cdot \varphi = \int_{\R^n} u(x) \nabla \cdot (\varphi(-\cdot))(-x) ~\mathrm{d} x
= \int_{\R^n} u(-x) \nabla \cdot (\varphi(-\cdot))(x) ~\mathrm{d}x
\]
for all $\varphi \in C^1_c(\R^n)$. Therefore the anisotropic total variation of the negative of $u$ is given by
\begin{equation}
\label{eq:negativetv}
\TV_\phi(-u) = \TV_\phi(u(-\cdot)).
\end{equation}
Plugging \eqref{eq:negativetv} into \eqref{eq:splitting} and using the coarea formula together with the isoperimetric inequality \eqref{eq:iso_ineq} gives
\begin{equation}
\begin{aligned}
\label{eq:tv_l1}
\TV_\phi(u) &= \TV_\phi(u_+) + \TV_\phi(u_-(-\cdot)) \\
&= \int_0^\infty \TV_\phi(\{u_+ > t\}) + \TV_\phi(\{u_-(-\cdot) > t\}) ~\mathrm{d}t \\
&\ge C^{-1} \int_0^\infty |\{u_+ > t\}|^\frac{n-1}{n} + |\{u_- > t\}|^\frac{n-1}{n} ~\mathrm{d}t \\
&\ge C^{-1} \int_0^\infty |\{ |u| > t\}|^\frac{n-1}{n} ~\mathrm{d}t \\
&\ge C^{-1} \int_0^\infty |\{ |u| > t\}\cap R\,W_\phi|^\frac{n-1}{n} ~\mathrm{d}t \\
&\ge C^{-1} \int_0^\infty |\{ |u| > t\}\cap R\,W_\phi||R\,W_\phi|^{-\frac{1}{n}} ~\mathrm{d}t
= \lambda_0 \| u \|_{L^1(R\,W_\phi)}.
\end{aligned}
\end{equation}
We conclude
\begin{equation}
\label{eq:e_zero}
\begin{aligned}
E(u) &\ge \lambda_0 \| u \|_{L^1(R\,W_\phi)} + \lambda \| u -f \|_{L^1(\R^n)} \\
&\ge \lambda \|f\|_{L^1(\R^n)} = E(0).
\end{aligned}
\end{equation}
It is easy to see that the last inequality in \eqref{eq:e_zero} or the second to last inequality in \eqref{eq:tv_l1} is strict if $u \ne 0$.
\end{proof}
\end{proposition}
\begin{remark}
The constant $\lambda_0$ in \Cref{prop:trivial_minimizer} is optimal for $f = \chi_{R\,W_\phi}$.
\end{remark}
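For $\phi = |\cdot|_1$, $n = 2$ and $f = \chi_{W_\phi}$ (so $R = 1$ and $\lambda_0 = 2$), the threshold is already visible along the one-parameter family $u = c\,\chi_{W_\phi}$, $c \in [0,1]$, for which $E(u) = 8c + 4\lambda(1-c)$. The following Python sketch illustrates (but of course does not prove) the sharpness of $\lambda_0$:

```python
import numpy as np

# f = chi_W with W = [-1,1]^2, phi = |.|_1, n = 2, R = 1, hence lambda_0 = n/R = 2.
# Along u = c * chi_W:  TV_phi(u) = 8c  and  ||u - f||_{L^1} = 4(1 - c).
def energy(c, lam):
    return 8.0 * c + 4.0 * lam * (1.0 - c)

c = np.linspace(0.0, 1.0, 1001)
for lam in (1.9, 2.1):
    best = c[np.argmin(energy(c, lam))]
    print(lam, best)    # c = 0 is optimal below lambda_0, c = 1 above
```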
\begin{proposition}
Let $f \in L^1(\R^n)$, $\lambda > 0$ and $u \in BV(\R^n)$. Then we have
\[ E(u ; f, \lambda) = \int_{-\infty}^\infty E(\chi_{\Sigma_u(t)} ; \chi_{\Sigma_f(t)}, \lambda) ~\mathrm{d}t. \]
Note that $\chi_{\Sigma_u(t)} \not \in L^1(\R^n)$ for $t < 0$, but the measure of the symmetric difference,
$\|\chi_{\Sigma_u(t)} - \chi_{\Sigma_f(t)} \|_{L^1(\R^n)} = | \Sigma_u(t) \,\triangle \, \Sigma_f(t) |$, is finite for almost every $t \in \R$.
\begin{proof}
We can follow the proof of \cite[Proposition 5.1]{Chan2005} since the anisotropic total variation satisfies the coarea formula.
\end{proof}
\end{proposition}
\begin{remark}
If $f \in L^1(\R^n; \{ 0, 1\})$ is binary, it is a well known consequence of the coarea formula that there is a binary minimizer $u \in BV(\R^n; \{ 0, 1 \})$, see \cite[Theorem 5.2]{Chan2005}. In this special case the energy $E$ equals \[E(u) = \TV_\phi(u) + \lambda \| u -f \|_{L^2(\R^n)}^2,\] which is the anisotropic ROF model analyzed in \cite{Esedoglu2004}. Therefore everything that is proven in \cite[4.2]{Esedoglu2004} about the regularity of domains, that may appear as minimizer of the binary anisotropic ROF model, transfers to our situation.
\end{remark}
For $u \in BV(\R^n)$ we define the set (c.f. \cite{Choksi2011})
\[
\mathcal{V}(u) \coloneqq \{ v \in L^\infty(\R^n ; \R^n) \mid v \text{ satisfies (i) - (iii)} \}
\]
with
\begin{enumerate}[label=(\roman*)]
\item $v \in -W_\phi$ almost everywhere in $\R^n$,
\item has a weak divergence $\nabla \cdot v \in L^\infty(\R^n)$ and
\item $\TV_\phi(u) = - \int_{\R^n} u \nabla \cdot v$.
\end{enumerate}
The following theorem is our main result.
\begin{theorem}
\label{thm:main}
Let $f \in L^1(\R^n)$ and $\lambda > 0$. Then $u_0 \in BV(\R^n)$ is a minimizer of $E$ if and only if there is a $v \in \mathcal{V}(u_0)$ which additionally satisfies:
\begin{enumerate}[label=(\alph*)]
\item $\| \nabla \cdot v \|_{L^\infty(\R^n)} \le \lambda $,
\item $\nabla \cdot v = \lambda$ almost everywhere in $\{ u_0 > f \}$ and
\item $\nabla \cdot v = -\lambda$ almost everywhere in $\{ u_0 < f \}$.
\end{enumerate}
\begin{proof}
This proof is an extension of the proof of \cite[Lemma 3.1]{Choksi2011}.
Let $v \in \mathcal{V}(u_0)$ be a function with the properties (a) -- (c). For every $h \in BV(\R^n)$ we have
\begin{align*}
E(u_0+h) &= \TV_\phi(u_0+h) + \lambda \| u_0 + h - f \|_{L^1(\R^n)} \\
&\ge -\int_{\R^n} (u_0 + h) \nabla \cdot v + \lambda \| u_0 + h - f \|_{L^1(\R^n)} \\
&= E(u_0) - \int_{\R^n} h \nabla \cdot v - \lambda \| u_0 - f \|_{L^1(\R^n)} + \lambda \|u_0 + h - f \|_{L^1(\R^n)} \\
&= E(u_0) - \int_{\R^n} (h + u_0 - f) \nabla \cdot v + \lambda \|u_0 + h -f \|_{L^1(\R^n)} \\
&\ge E(u_0).
\end{align*}
This proves that $u_0$ is a minimizer of $E$.
Now let $u_0$ be a minimizer of $E$. We want to prove the existence of a $v \in \mathcal{V}(u_0)$ satisfying (a) -- (c).
For that reason we define
\[ \begin{array}{rl}
F : BV(\R^n) \rightarrow [\mathcal{M}(\R^n)]^n ,~& u \mapsto Du, \\
G : [\mathcal{M}(\R^n)]^n \rightarrow \R ,~& \mu \mapsto \int_{\R^n} \phi(\mathrm{d} \mu), \\
H : BV(\R^n) \rightarrow \R ,~& u \mapsto \| u - f \|_{L^1(\R^n)}.
\end{array} \]
We have $E = G\circ F + \lambda H$.
Since $F$, $G$ and $H$ are continuous we can apply subdifferential calculus (see \cite[I.5.6, I.5.7]{Ekeland1999}) and obtain
\[ \partial E(u_0) = F^\ast \partial G(F u_0) + \lambda \partial H(u_0), \]
where $F^\ast : ([\mathcal{M}(\R^n)]^n)^\ast\rightarrow BV(\R^n)^\ast$ is the transpose mapping of $F$.
Since $u_0$ is a minimizer of $E$ we know that $0 \in \partial E(u_0)$. Therefore there are $\Phi \in \partial G(Fu_0) \subset ([\mathcal{M}(\R^n)]^n)^\ast$ and $\Psi \in \partial H(u_0) \subset BV(\R^n)^\ast$ with
\begin{equation}
\label{eq:subdiff_zero}
0 = F^\ast \Phi + \lambda \Psi.
\end{equation}
By definition we have
\begin{equation}
\label{eq:psi_norm}
\langle \Psi, h \rangle \le H(h+u_0) - H(u_0) \le \| h \|_{L^1(\R^n)}
\end{equation}
for all $h \in BV(\R^n)$. Since $BV(\R^n)$ is dense in $L^1(\R^n)$ and $\Psi$ is a bounded operator on $BV(\R^n)$ with respect to the $L^1(\R^n)$ norm, we can extend $\Psi$ to $\Psi \in L^1(\R^n)^\ast = L^\infty(\R^n)$ with operator norm
\begin{equation}
\label{eq:opnorm}
\| \Psi \|_{L^\infty(\R^n)} \le 1.
\end{equation}
By continuity, inequality \eqref{eq:psi_norm} remains valid for all $h \in L^1(\R^n)$. Testing this inequality with $\chi_{\{u_0 > f\}}(f-u_0)$ and $\chi_{\{u_0 < f\}}(f-u_0)$ we conclude
\begin{equation}
\label{eq:psi_exact}
\begin{array}{ll}
\Psi = 1 & \text{almost everywhere in } \{u_0 > f \} \text{ and } \\
\Psi = -1 & \text{almost everywhere in } \{ u_0 < f \}.
\end{array}
\end{equation}
Plugging $2Du_0$ and $0$ into the subdifferential inequality
\[
\langle \Phi, \mu - Du_0 \rangle \le G(\mu) - G(Du_0) ~\forall \mu \in [\mathcal{M}(\R^n)]^n
\]
gives
\begin{equation}
\label{eq:v_exact} \langle \Phi, Du_0 \rangle = \TV_\phi(u_0)
\end{equation}
and so
\begin{equation}
\label{eq:phi_norm}
\langle \Phi, \mu \rangle \le G(\mu) ~\forall \mu \in [\mathcal{M}(\R^n)]^n.
\end{equation}
By restricting $\Phi$ to the space $L^1(\R^n ; \R^n)$, which is isometrically embedded in $[\mathcal{M}(\R^n)]^n$, we get $v = \restr{\Phi}{L^1(\R^n;\R^n)} \in (L^1(\R^n ; \R^n))^\ast = L^\infty(\R^n ; \R^n)$. Inequality \eqref{eq:phi_norm} yields
\[
\int_{\R^n} v \cdot g \le \int_{\R^n} \phi(g) ~ \forall g \in L^1(\R^n ; \R^n)
\]
and from \Cref{eq:hoelder_extremal} we can conclude that
\begin{equation}
\label{eq:v_in_wphi}
v \in -W_\phi \text{ almost everywhere in } \R^n.
\end{equation}
Using equation \eqref{eq:subdiff_zero} we can deduce that the weak divergence of $v$ is given by
\[ \nabla \cdot v = \lambda \Psi. \]
We conclude from \eqref{eq:v_exact} and \eqref{eq:v_in_wphi} that $v \in \mathcal{V}(u_0)$ and from \eqref{eq:opnorm} and \eqref{eq:psi_exact} the properties (a) -- (c).
\end{proof}
\end{theorem}
\begin{remark}
\Cref{thm:main} can also be obtained by the theory developed in \cite{Duval2011} if $\phi$ is even. We prefer to give a self-contained proof.
\end{remark}
\begin{corollary}\
\begin{enumerate}
\item We have that $u_0 \in BV(\R^n)$ is a minimizer of $E(\cdot ; u_0, \lambda)$ for $\lambda > 0$ if and only if there is a $v \in \mathcal{V}(u_0)$ satisfying $\| \nabla \cdot v \|_{L^\infty(\R^n)} \le \lambda$. If $\| \nabla \cdot v \|_{L^\infty(\R^n)} < \lambda$, then $u_0$ is the unique minimizer of $E(\cdot ; u_0, \lambda)$.
\item If $u_0 \in BV(\R^n)$ is a minimizer of $E$ for some $f \in L^1(\R^n)$ and $\lambda >0$, then $u_0$ is a minimizer of $E(\cdot ; u_0, \lambda)$. In that sense the minimization of $E$ is an idempotent operation.
\end{enumerate}
\begin{proof}
\begin{enumerate}
\item The first part is a simple conclusion from \Cref{thm:main}. We can repeat the first part of the proof of \Cref{thm:main} to show that $u_0$ is the unique minimizer if $\| \nabla \cdot v \|_{L^\infty(\R^n)} < \lambda$.
\item If $u_0$ is a minimizer of $E$ for some $f$, then we can apply \Cref{thm:main} to deduce the existence of a vector field $v \in \mathcal{V}(u_0)$ with $\| \nabla \cdot v \|_{L^\infty(\R^n)} \le \lambda$.
\end{enumerate}
\end{proof}
\end{corollary}
\begin{example}
\label{ex}
\begin{figure}[h!]
\centering
\begin{tikzpicture}[baseline=0, scale=3]
\draw[gray, dashed] (0,0) circle [radius=1cm];
\draw[gray, dashed] (-0.9,-0.9) rectangle (0.9,0.9);
\filldraw[darkgray] (0,0) circle(0.02cm);
\draw[darkgray] (0,0) -- node [right] {$h$} (0, 0.9)
-- node [below] {$s$} (0.43588989cm, 0.9cm)
-- node [right] {$1$} (0,0);
\draw[very thick] (0, 0.9cm) -- (0.43588989cm, 0.9cm) arc [radius=1, start angle=64.158, end angle=25.8419]
-- (0.9cm, -0.43588989cm) arc [radius=1, end angle=-64.158, start angle=-25.8419]
-- (-0.43588989cm, -0.9cm) arc [radius=1, start angle=244.158, end angle=205.8419]
-- (-0.9cm, 0.43588989cm) arc [radius=1, end angle=115.842, start angle=154.1581] -- cycle;
\end{tikzpicture}
\caption{The situation in \Cref{ex}.}
\end{figure}
In this example we consider the anisotropy $\phi = |\cdot|_1$ in dimension $n = 2$. The dual anisotropy is given by $\phi^\circ = |\cdot|_\infty$ and the Wulff shape is given by
\[ W_\phi = [-1,1]^2. \]
We are going to calculate an optimal shape for a circle of radius $1$, which means $B = \{ |x|_2 \le 1 \}$ and $f = \chi_B$.
It is known by \cite[Proposition 7.2.2]{Duval2011} that an optimal shape for an arbitrary bounded convex set can be expressed as an opening with a rescaled Wulff shape in the sense of mathematical morphology. In our case we get an optimal shape $U$ such that $u = \chi_U$ is a minimizer of $E$: either $U = \emptyset$ or, for $s = \frac{1}{\lambda}$,
\[
U = B_s \coloneqq \bigcup \left\{ x + sW_\phi \,\middle|\, x \in B \text{ such that } x + sW_\phi \subset B \right\}.
\]
We can rewrite $B_s = B \cap \sqrt{1 - s^2}W_\phi = B \cap[-h,h]^2$ with $h = \sqrt{1 - s^2}$ if $h \in [\frac{1}{\sqrt{2}},1]$. We have
\begin{align*}
\TV_\phi (U) &= 8\sqrt{1 - \frac{1}{\lambda^2}}, \\
|U| &= \pi - 4\left(\sin^{-1}\left(\frac{1}{\lambda}\right) - \frac{1}{\lambda}\sqrt{1 - \frac{1}{\lambda^2}}\right), \\
E(\chi_U) &= 8h + 4 \lambda \left(\sin^{-1}\left(\frac{1}{\lambda}\right) - \frac{1}{\lambda}\sqrt{1 - \frac{1}{\lambda^2}}\right).
\end{align*}
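The closed-form expressions above are easy to cross-check numerically. The following sketch (standard library only; the sample value $\lambda = 3$ is an assumption for illustration) compares the formula for $|U|$ with a midpoint Riemann sum for the area of $B \cap [-h,h]^2$, and the formula $\TV_\phi(U) = 8h$ with a direct integration of $|\nu|_1$ over the four straight edges and four arcs of $\partial U$.

```python
import math

lam = 3.0                      # sample value of the weight lambda
s = 1.0 / lam                  # s = 1/lambda
h = math.sqrt(1.0 - s * s)     # h = sqrt(1 - s^2)

# closed form: |U| = pi - 4*(asin(1/lam) - (1/lam)*sqrt(1 - 1/lam^2))
area_formula = math.pi - 4.0 * (math.asin(s) - s * h)

# direct computation: |U| = 4 * int_0^h min(sqrt(1 - x^2), h) dx
N = 200_000
dx = h / N
area_numeric = 4.0 * sum(
    min(math.sqrt(1.0 - x * x), h) * dx
    for x in ((i + 0.5) * dx for i in range(N))
)

# anisotropic perimeter: four straight edges of length 2s (|nu|_1 = 1)
# plus four arcs, on which |nu|_1 = cos(t) + sin(t) in the first quadrant
M = 200_000
t0, t1 = math.asin(s), math.asin(h)
dt = (t1 - t0) / M
tv_numeric = 8.0 * s + 4.0 * sum(
    (math.cos(t) + math.sin(t)) * dt
    for t in ((t0 + (j + 0.5) * dt) for j in range(M))
)

print(area_formula, area_numeric)  # both approx 3.0393
print(tv_numeric, 8.0 * h)         # both approx 7.5425
```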
From \cite[Proposition 7.2.2]{Duval2011} we conclude that $U$ is an optimal shape as long as
\[
\frac{\TV_\phi(U)}{|U|} \le \lambda,
\]
which is equivalent to
\[
4 \lambda \sin^{-1}\left(\frac{1}{\lambda}\right) + 4\sqrt{1 - \frac{1}{\lambda^2}} \le \lambda \pi.
\]
This inequality is true as long as $\lambda \ge 2.4754\dotsc$.
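The critical value $2.4754\dotsc$ can be recovered numerically; a short bisection sketch locating the $\lambda$ at which equality holds between the two sides of the displayed inequality:

```python
import math

def gap(lam: float) -> float:
    """Difference (lambda * pi) - (4*lam*asin(1/lam) + 4*sqrt(1 - 1/lam^2))."""
    return lam * math.pi - 4.0 * lam * math.asin(1.0 / lam) \
        - 4.0 * math.sqrt(1.0 - 1.0 / lam ** 2)

# gap changes sign between lambda = 2 and lambda = 3; bisect for the root
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if gap(mid) < 0.0:
        lo = mid
    else:
        hi = mid

print(lo)  # approx 2.4754
```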
In the following we prove that $U$ is an optimal shape for $\lambda > 2\sqrt{2}$. This is strictly weaker than the result deducible from \cite[Proposition 7.2.2]{Duval2011}, but the proof we give is shorter and hopefully more accessible. Moreover, the result is still strong enough to conclude that, in the sense of \cite{Choksi2011}, $E$ may be faithful to domains which are not clean 2D bar codes. To this end we construct a vector field $v$ which
satisfies the conditions of \Cref{thm:main}.
\begin{proof}
Set
\[
w(x_1, x_2) \coloneqq \left\{ \begin{array}{cr}
\min \{ 1, \max \{ -1, \frac{x_1}{s}\} \} & |x_2| \ge \frac{1}{\sqrt{2}} \\
\min \{ 1, \max \{ -1, \sqrt{2}x_1 \} \} & |x_2| < \frac{1}{\sqrt{2}}
\end{array}\right.
\]
and
\[
v(x) \coloneqq \left( \begin{array}{c}
-w(x_1, x_2) \\
-w(x_2, x_1)
\end{array}\right).
\]
The construction implies $v(z) \cdot \nu(z) = -|\nu(z)|_1$ for $z \in \partial U$ and therefore
\[
\TV_{|\cdot|_1}(u) = \int_{\partial U} |\nu|_1 = -\int_{\R^n} u \nabla \cdot v. \]
From \Cref{thm:main} we conclude that $u$ is a minimizer of $E$.
\end{proof}
\end{example}
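As a sanity check on the vector field $v$ constructed in the proof above, one can evaluate its piecewise-constant divergence on a grid. The derivative formulas in the sketch below are worked out by hand from the definition of $w$ (valid away from the kink sets, which the grid avoids), and the sample value $\lambda = 3 > 2\sqrt{2}$ is an assumption for illustration; the check confirms $\sup |\nabla \cdot v| \le \lambda$.

```python
import math

lam = 3.0                     # sample value with lam > 2*sqrt(2)
s = 1.0 / lam
R2 = 1.0 / math.sqrt(2.0)

def d1w(a: float, b: float) -> float:
    """Derivative of w(a, b) with respect to the first argument,
    valid away from the kink sets of w."""
    if abs(b) >= R2:          # branch min{1, max{-1, a/s}}
        return lam if abs(a) < s else 0.0   # 1/s = lam
    # branch min{1, max{-1, sqrt(2) * a}}
    return math.sqrt(2.0) if abs(a) < R2 else 0.0

def div_v(x1: float, x2: float) -> float:
    # v(x) = (-w(x1, x2), -w(x2, x1)), hence div v = -d1w(x1,x2) - d1w(x2,x1)
    return -d1w(x1, x2) - d1w(x2, x1)

# grid over [-1.5, 1.5]^2, shifted slightly to avoid the kink lines
pts = [-1.5 + 0.0075 * i + 1e-4 for i in range(400)]
sup_div = max(abs(div_v(a, b)) for a in pts for b in pts)
print(sup_div <= lam)   # True: |div v| <= lambda at all sampled points
```

The two branches with large slopes never combine at a single point (since $s < 1/\sqrt{2}$), so the pointwise bound is $\max\{1/s,\, 2\sqrt{2}\} = \lambda$ for $\lambda > 2\sqrt{2}$.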
\section*{Acknowledgments} This paper is an extension of parts of my master's thesis, which I finished in October 2016 at the Technische Universität Dortmund. I thank my advisor, Prof. Dr. Matthias Röger, for drawing my attention to this interesting topic and for his guidance during the writing of the thesis and of this work.
| {
"timestamp": "2017-04-04T02:10:58",
"yymm": "1704",
"arxiv_id": "1704.00451",
"language": "en",
"url": "https://arxiv.org/abs/1704.00451",
"abstract": "In this paper we study an anisotropic variant of the Rudin-Osher-Fatemi functional with $L^1$ fidelity term of the form \\[ E(u) = \\int_{\\mathbb{R}^n} \\phi(\\nabla u) + \\lambda \\| u -f \\|_{L^1(\\mathbb{R}^n)}. \\] We will characterize the minimizers of $E$ in terms of the Wulff shape of $\\phi$ and the dual anisotropy. In particular we will calculate the subdifferential of $E$. We will apply this characterization to the special case $\\phi = |\\cdot|_1$ and $n=2$, which has been used in the denoising of 2D bar codes. In this case, we determine the shape of a minimizer $u$ when $f$ is the characteristic function of a circle.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "Characterization of minimizers of an anisotropic variant of the Rudin-Osher-Fatemi functional with $L^1$ fidelity term"
} |
https://arxiv.org/abs/1305.2584 | On Borsuk's conjecture for two-distance sets | In this paper we answer Larman's question on Borsuk's conjecture for two-distance sets. We find a two-distance set consisting of 416 points on the unit sphere in the dimension 65 which cannot be partitioned into 83 parts of smaller diameter. This also reduces the smallest dimension in which Borsuk's conjecture is known to be false. Other examples of two-distance sets with large Borsuk's numbers will be given. | \section{Introduction}
\label{intro}
For each $n\in{\mathbb N}$ the Borsuk number $b(n)$ is the minimal number such that any bounded set in
$\mathbb{R}^{n}$ consisting of at least 2 points can be partitioned into $b(n)$ parts of smaller diameter.
In 1933 Karol Borsuk~\cite{Bor} conjectured that $b(n)=n+1$. The conjecture was disproved by Kahn and Kalai~\cite{KK}
who showed that in fact $b(n)>1.2^{\sqrt{n}}$ for large $n$. In particular, their construction implies that $b(n)>n+1$ for $n=1325$ and for all
$n>2014$. This result attracted a substantial amount of attention from many mathematicians; see for example~\cite{A},~\cite{BMP}, and~\cite{R}.
Improvements on the smallest dimension $n$ such that $b(n)>n+1$ were obtained by Nilli~\cite{N} ($n = 946$), Raigorodskii~\cite{R1} ($n = 561$), Wei\ss bach~\cite{W} ($n = 560$), Hinrichs~\cite{H} ($n = 323$), and Pikhurko~\cite{Pikh} ($n = 321$). Currently the best known result is that Borsuk's conjecture is false for $n\ge 298$; see~\cite{HR}. On the other hand, many related problems are still unsolved. Borsuk's conjecture might fail even in dimension $4$: only the estimate $b(4)\le 9$ is known; see~\cite{La}.
In the 1970s Larman asked whether Borsuk's conjecture is true for two-distance sets; see also~\cite{K} and~\cite{R}.
Denote by $b_2(n)$ the Borsuk number for two-distance sets in dimension $n$, that is, the minimal number such that any two-distance set in $\mathbb{R}^{n}$ can be partitioned into $b_2(n)$ parts of smaller diameter. The aim of this paper is to construct two-distance sets with large Borsuk numbers. Two basic constructions follow from Euclidean representations of the $G_2(4)$ and $Fi_{23}$ strongly regular graphs. First we prove
\begin{theorem}
There is a two-distance subset $\{x_1,\ldots,x_{416}\}$ of the unit sphere $S^{64}\subset\mathbb{R}^{65}$
such that $\langle x_i,x_j \rangle=1/5$ or $-1/15$ for $i\neq j$ which cannot be partitioned into $83$ parts of smaller diameter.
\end{theorem}
Hence $b(65)\ge b_2(65)\ge 84$. We also prove the following
\begin{theorem}
There is a two-distance subset $\{x_1,\ldots,x_{31671}\}$ of the unit sphere $S^{781}$
such that $\langle x_i,x_j\rangle =1/10$ or $-1/80$ for $i\neq j$ which cannot be partitioned into $1376$ parts of smaller diameter.
\end{theorem}
Then, using the configurations from Theorem 1 and Theorem 2 we prove
\begin{corollary}
For integers $n\ge 1$ and $k\ge 0$ we have
\begin{equation}
\label{car1}
b_2(66n+k)\ge 84n+k+1,
\end{equation}
and
\begin{equation}
\label{car2}
b_2(783n+k)\ge 1377n+k+1.
\end{equation}
\end{corollary}
Finally, using again the configuration from Theorem~2 we prove slightly better estimates for $b_2(781)$, $b_2(780)$, and $b_2(779)$ than what can be obtained by~\eqref{car1}.
\begin{corollary}
The following inequalities hold:
$$
b_2(781)\ge 1225,\quad b_2(780)\ge 1102,\quad\text{and}\quad b_2(779)\ge 1002.
$$
\end{corollary}
The paper is organized as follows. First, in Section~\ref{sec:1} we describe Euclidean representations of a strongly regular graph by
two-distance sets and then in Section~\ref{sec:2} we prove our main results.
\section{Euclidean representations of strongly regular graphs}
\label{sec:1}
A strongly regular graph $\Gamma$ with parameters
$(v,k,\lambda,\mu)$ is an undirected regular graph on $v$ vertices
of valency $k$ such that each pair of adjacent vertices
has $\lambda$ common neighbors, and each pair of nonadjacent
vertices has $\mu$ common neighbors. The adjacency matrix $A$ of
$\Gamma$ has the following properties:
$$
AJ = kJ
$$
and
$$
A^2 + (\mu - \lambda)A + (\mu - k)I = \mu J,
$$
where $I$ is the identity matrix and $J$ is the matrix with all
entries equal to~$1$ of appropriate sizes. These conditions imply
that
\begin{equation}
\label{par} (v - k - 1)\mu = k(k - \lambda - 1).
\end{equation}
Moreover, the matrix $A$ has only 3 eigenvalues: $k$ of multiplicity
$1$, one positive eigenvalue
$$
r=\frac 12\left(\lambda-\mu+\sqrt{(\lambda-\mu)^2+4(k-\mu)}\right)
$$
of multiplicity
\begin{equation}
\label{f}
f=\frac 12
\left(v-1-\frac{2k+(v-1)(\lambda-\mu)}{\sqrt{(\lambda-\mu)^2+4(k-\mu)}}\right),
\end{equation}
and one negative eigenvalue
$$
s=\frac 12\left(\lambda-\mu-\sqrt{(\lambda-\mu)^2+4(k-\mu)}\right)
$$
of multiplicity
\begin{equation*}
\label{g} g=\frac 12
\left(v-1+\frac{2k+(v-1)(\lambda-\mu)}{\sqrt{(\lambda-\mu)^2+4(k-\mu)}}\right).
\end{equation*}
Clearly, both $f$ and $g$ must be integers. This together
with~\eqref{par} gives a collection of feasible parameters
$(v,k,\lambda,\mu)$ for strongly regular graphs.
Let $V$ be the set of vertices of $\Gamma$. Consider the columns $\{y_i:
i\in V\}$ of the matrix $A-sI$ and put $x_i:=z_i/\|z_i\|$, where
$$
z_i=y_i-\frac 1{v}\sum_{j\in V}y_j, \quad i\in V.
$$
Note that while the vectors $x_i$ lie in $\mathbb{R}^v$, they span
at most an $f$-dimensional vector space. Thus for convenience we
consider them to lie in $\mathbb{R}^f$. By easy calculations
$$
\langle x_i,x_j\rangle=
\begin{cases}1, & \mbox{if }i=j, \\
p, & \mbox{if }i\mbox{ and }j\mbox{ are adjacent},\\
q, & \mbox{otherwise},
\end{cases}
$$
where
\begin{equation}
\label{pq}
p=\frac{\lambda-2s-\beta}{s^2+k-\beta},\quad q=\frac{\mu-\beta}{s^2+k-\beta},\quad\beta=\frac 1v(s^2+k+k(\lambda-2s)+(v-k-1)\mu).
\end{equation}
Denote by $\Gamma_f$ the configuration $x_i$, $i\in V$. Similarly, we can define the configuration $\Gamma_g$ in $\mathbb{R}^g$. The configurations $\Gamma_f$ and $\Gamma_g$ were also considered in~\cite{Cam} and have many other fascinating properties. For example, they are spherical $2$-designs.
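The formulas of this section are easy to check by machine. The sketch below (exact rational arithmetic; the function name and structure are illustrative, not from the paper) evaluates the spectrum, multiplicities, and inner products for the two parameter sets used in the next section.

```python
from fractions import Fraction
from math import isqrt

def srg_data(v, k, lam, mu):
    """Eigenvalues r, s with multiplicities f, g, and the inner products p, q
    of the configuration Gamma_f, for a strongly regular graph with
    parameters (v, k, lam, mu).  Assumes (lam - mu)^2 + 4(k - mu) is a
    perfect square, which holds for the graphs considered here."""
    D = (lam - mu) ** 2 + 4 * (k - mu)
    d = isqrt(D)
    assert d * d == D
    r = Fraction(lam - mu + d, 2)
    s = Fraction(lam - mu - d, 2)
    m = Fraction(2 * k + (v - 1) * (lam - mu), d)
    f = (v - 1 - m) / 2
    g = (v - 1 + m) / 2
    beta = (s * s + k + k * (lam - 2 * s) + (v - k - 1) * mu) / v
    p = (lam - 2 * s - beta) / (s * s + k - beta)
    q = (mu - beta) / (s * s + k - beta)
    return r, s, f, g, p, q

# G_2(4) graph, parameters (416, 100, 36, 20)
print(srg_data(416, 100, 36, 20))       # f = 65, p = 1/5, q = -1/15
# Fi_23 graph, parameters (31671, 3510, 693, 351)
print(srg_data(31671, 3510, 693, 351))  # f = 782, p = 1/10, q = -1/80
```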
\section{Proof of main results}
\label{sec:2}
For any vertex $v\in V$ of a strongly regular graph $\Gamma$, let $N(v)$ be the set of all neighbors of $v$
and let $N'(v)$ be the set of non-neighbors of $v$, i.e.
$N'(v)=V\setminus(\{v\}\cup N(v))$.\\
\noindent
{\em Proof of Theorem 1}
We consider the configuration $\Gamma_f$ of the well-known strongly regular graph $\Gamma=G_2(4)$ with parameters $(416,100,36,20)$. By~\eqref{f} we have $f=65$. Moreover, by~\eqref{pq}, $p=1/5$ and $q=-1/15$. Therefore the diameter of $\Gamma_f$ is attained at pairs $x_i$, $x_j$ with $i$ and $j$ nonadjacent. Hence, the configuration cannot be partitioned into fewer than $v/m$ parts of smaller diameter, where $m$ is the size of the largest clique in $\Gamma$.
To prove Theorem 1 it is enough to show that $\Gamma$ contains no $6$-clique.
We will use the following theorem consisting of four independent results that can be found in~\cite{B}.\\
\noindent
{\bf Theorem A}
{\it \begin{itemize}
\item[(i)] For each $u\in V$ the subgraph of $\Gamma$
induced on $N(u)$ is a strongly regular graph with parameters $(100,36,14,12)$ (the Hall-Janko graph).
In other words the Hall-Janko graph is the first subconstituent of $\Gamma$.
\item[(ii)]The first subconstituent of the Hall-Janko graph is the $U_3(3)$ strongly regular graph with parameters $(36,14,4,6)$.
\item[(iii)] The first subconstituent of $U_3(3)$ is a $4$-regular graph on $14$ vertices (the co-Heawood graph).
\item[(iv)] The co-Heawood graph has no triangles.
\end{itemize}
}
Parts (i)-(iii) are folklore. They follow from D.G. Higman's theory of rank 3 permutation groups (see also~\cite{G} and~\cite{L}).
Part (iv) follows from the fact that the co-Heawood graph is a subgraph of the Gewirtz graph with parameters (56,10,0,2); see also~\cite{BCD}.
Now, for vertices $u,v,w\in V$ forming a triangle, (i)-(iii) imply that $$|N(u)\cap N(v)\cap N(w)|=14.$$ Moreover, the subgraph induced on $N(u)\cap N(v)\cap N(w)$ is the co-Heawood graph. Therefore by (iv) the cliques in $\Gamma$ have size at most $5$.
\qed
\noindent
{\em Proof of Theorem 2}
Consider the configuration $\Gamma_f$ of the $Fi_{23}$ graph with parameters $(31671,3510,693,351)$.
We have $f=782$, $p=1/10$, and $q=-1/80$. Hence, the diameter of $\Gamma_f$ is the distance between nonadjacent vertices.
Therefore $\Gamma_f$ cannot be partitioned into fewer than $v/m$ parts, where $m$ is the size of the largest clique in $\Gamma$. We will use the well-known fact (see~\cite{P}) that the first subconstituent of $\Gamma$ is the strongly regular graph with parameters $(3510,693,180,126)$ and the second subconstituent of $\Gamma$ is the strongly regular graph $G$ with parameters $(693,180,51,45)$. Now we estimate from above the size of a clique in $G$. To this end consider the complement graph $\bar G$ with parameters $(693,512,376,384)$. For the configuration $\bar G_f$ we have $f=440$, $p=1/64$, and $q=-1/20$. Therefore, the size of a clique $K$ in $G$ cannot be larger than $21$; otherwise the vector
$$\sum_{i\in K}x_i,\qquad x_i\in\bar G_f,$$
would have negative squared norm. Thus, the size of a clique in $\Gamma$ is at most $23$, and hence $\Gamma_f$ cannot be partitioned into fewer than $31671/23=1377$ parts of smaller diameter.
\qed
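The negative-norm argument in the proof of Theorem 2 is an instance of a simple general bound: $m$ unit vectors with pairwise inner product $q < 0$ satisfy $0 \le \|\sum_i x_i\|^2 = m + m(m-1)q$, i.e.\ $m \le 1 - 1/q$. A minimal sketch (the function name is illustrative):

```python
from fractions import Fraction

def relative_clique_bound(q: Fraction) -> int:
    """Largest m for which m unit vectors with pairwise inner product q < 0
    can exist: ||x_1 + ... + x_m||^2 = m + m(m-1)q >= 0 forces m <= 1 - 1/q."""
    assert q < 0
    return int(1 - 1 / q)

# cliques of G seen inside the configuration bar{G}_f (q = -1/20)
print(relative_clique_bound(Fraction(-1, 20)))  # 21
```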
\noindent
{\em Proof of Corollary 1}
Let us first prove~\eqref{car1} for $k=0$. Fix $n\in\mathbb{N}$ and put $m=66n$.
Consider the following coordinate representation of a vector $y\in\mathbb{R}^{m}$:
$$
y=(y_1,\ldots,y_n|a_1,\ldots,a_n),
$$
where $y_k\in\mathbb{R}^{65}$ and $a_k\in\mathbb{R}$, $k=1,\ldots, n$.
Now we take the following set of unit vectors in $\mathbb{R}^{m}$:
$Y=\{v_{ik},\,i=1,\ldots, 416,\,k=1,\ldots, n\}$, where
$$
v_{ik}=(0,\ldots,0,\frac{\sqrt{15}}4x_i,0,\ldots,0\,|\,0,\ldots,0,\frac 14,0,\ldots,0),\quad i=1,\ldots,416,\ k=1,\ldots, n.
$$
Here each $v_{ik}$ has only two nonzero components $y_k$ and $a_k$, and the vectors $x_i$ are as in Theorem 1.
Clearly, $\langle v_{ik},v_{jl}\rangle=0$ if $k\neq l$. Moreover,
$$
\langle v_{ik},v_{jk}\rangle=
\begin{cases}1, & \mbox{if }i=j, \\
1/4, & \mbox{if }i\mbox{ and }j\mbox{ are adjacent},\\
0, & \mbox{otherwise}.
\end{cases}
$$
Therefore, $Y$ is a two-distance set consisting of $416n$ vectors.
Now, by Theorem~1, this set cannot be partitioned into fewer than $84n$ parts of smaller diameter. Adding the vector
$v$, which is at distance $\sqrt{2}$ from each vector of~$Y$,
$$
v=(0,\ldots,0\,|\,\alpha,\ldots,\alpha),\quad \alpha=\frac{1+\sqrt{1+16n}}{4n}
$$
($\alpha$ is a solution of the equation $(\alpha-1/4)^2+(n-1)\alpha^2=17/16$) we obtain that $b_2(m)\ge 84n+1$.
Finally we note that all these $416n+1$ vectors are at the same distance $R$ from the vector $(0,\ldots,0\,|\,\gamma,\ldots,\gamma)$,
where
$$
\gamma=\frac{\alpha}{4n\alpha-1}\,\text{ and }\, R=\frac{4\sqrt{n}}{\sqrt{16n+1}}<1
$$
($\gamma$ is a solution of the equation $(\gamma-1/4)^2+(n-1)\gamma^2+15/16=n(\alpha-\gamma)^2$).
Hence we can add a new vector at the diameter distance $\sqrt{2}$ from each of these $416n+1$ vectors
to obtain a new set of $416n+2$ vectors in $\mathbb{R}^{m+1}$, which shows that $b_2(m+1)\ge 84n+2$.
We can also rescale this new set to lie on the sphere $S^{m}$. Now inductive application of this procedure immediately gives~\eqref{car1}. This procedure was also described in~\cite[Lemma 9]{HR}.
Similarly, Theorem 2 implies~\eqref{car2}.
\qed
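The identities used in this proof for $\alpha$, $\gamma$, and $R$ can be verified numerically for small $n$; only the two nonzero blocks of each $v_{ik}$ enter the distance computations, so the vectors $x_i$ themselves are not needed. A sketch:

```python
import math

for n in range(1, 6):
    alpha = (1.0 + math.sqrt(1.0 + 16.0 * n)) / (4.0 * n)
    # alpha solves (alpha - 1/4)^2 + (n - 1) * alpha^2 = 17/16, so that
    # ||v - v_ik||^2 = 15/16 + 17/16 = 2, i.e. v is at distance sqrt(2) from Y
    assert abs((alpha - 0.25) ** 2 + (n - 1) * alpha ** 2 - 17.0 / 16.0) < 1e-12

    gamma = alpha / (4.0 * n * alpha - 1.0)
    R = 4.0 * math.sqrt(n) / math.sqrt(16.0 * n + 1.0)
    # squared distances to the centre (0, ..., 0 | gamma, ..., gamma)
    d_vik = 15.0 / 16.0 + (gamma - 0.25) ** 2 + (n - 1) * gamma ** 2
    d_v = n * (alpha - gamma) ** 2
    assert abs(d_vik - d_v) < 1e-12    # all 416n + 1 points are equidistant ...
    assert abs(d_vik - R * R) < 1e-12  # ... at distance R
    assert R < 1.0
print("checks passed for n = 1, ..., 5")
```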
\noindent
{\em Proof of Corollary 2}
Let $\Gamma$ be the $Fi_{23}$ graph. For a vertex $u\in V$, consider the subset $\left
\{x_i:\, i\in N'(u)\right\}$ of the configuration $\Gamma_f$.
This subset lies in the hyperplane $\langle x_u,x\rangle=-1/80$ and consists of $31671-3510-1=28160$ vectors. Hence,
$b_2(781)>[28160/23]=1224$.
Similarly, for adjacent vertices $u$ and $v$, the subset $\left
\{x_i:\, i\in N'(u)\cap N'(v)\right\}$ consists of $31671-2\times3510+693=25344$ vectors. This subset lies
in the hyperplane $\{x\in\mathbb{R}^{782}:\,\langle x_u,x\rangle=-1/80\text{ and }\langle x_v,x\rangle=-1/80\}$, and hence $b_2(780)>[25344/23]=1101$.
Finally, consider a subset $\left
\{x_i:\,i\in N'(u)\cap N'(v)\cap N'(w)\right\}$ such that the vertices $u$, $v$, $w$ form a triangle. This subset consists of $31671-3\times3510+3\times693-180=23040$ vectors, and hence $b_2(779)>[23040/23]=1001$.
\qed\\
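The vertex counts in this proof follow from inclusion-exclusion over the neighbourhoods, using the $Fi_{23}$ parameters and the fact that a triangle has $180$ common neighbours; a quick arithmetic check:

```python
v, k, lam = 31671, 3510, 693
t = 180   # common neighbours of the three vertices of a triangle

n1 = v - k - 1                       # non-neighbours of one vertex
n2 = v - 2 * k + lam                 # non-neighbours of an adjacent pair
n3 = v - 3 * k + 3 * lam - t         # non-neighbours of a triangle
print(n1, n2, n3)                    # 28160 25344 23040
print(n1 // 23, n2 // 23, n3 // 23)  # 1224 1101 1001
```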
{\bf Acknowledgements}
The author thanks Danylo Radchenko and Kristian Seip for several fruitful discussions.
The author also wishes to thank the Centre for Advanced Study at the Norwegian Academy of Science and Letters in Oslo, and the Mathematisches Forschungsinstitut Oberwolfach, for their hospitality during the preparation of this manuscript and for providing a stimulating
atmosphere for research.
| {
"timestamp": "2013-08-30T02:07:12",
"yymm": "1305",
"arxiv_id": "1305.2584",
"language": "en",
"url": "https://arxiv.org/abs/1305.2584",
"abstract": "In this paper we answer Larman's question on Borsuk's conjecture for two-distance sets. We find a two-distance set consisting of 416 points on the unit sphere in the dimension 65 which cannot be partitioned into 83 parts of smaller diameter. This also reduces the smallest dimension in which Borsuk's conjecture is known to be false. Other examples of two-distance sets with large Borsuk's numbers will be given.",
"subjects": "Metric Geometry (math.MG)",
"title": "On Borsuk's conjecture for two-distance sets"
} |
https://arxiv.org/abs/1503.03188 | Optimal prediction for sparse linear models? Lower bounds for coordinate-separable M-estimators | For the problem of high-dimensional sparse linear regression, it is known that an $\ell_0$-based estimator can achieve a $1/n$ "fast" rate on the prediction error without any conditions on the design matrix, whereas in absence of restrictive conditions on the design matrix, popular polynomial-time methods only guarantee the $1/\sqrt{n}$ "slow" rate. In this paper, we show that the slow rate is intrinsic to a broad class of M-estimators. In particular, for estimators based on minimizing a least-squares cost function together with a (possibly non-convex) coordinate-wise separable regularizer, there is always a "bad" local optimum such that the associated prediction error is lower bounded by a constant multiple of $1/\sqrt{n}$. For convex regularizers, this lower bound applies to all global optima. The theory is applicable to many popular estimators, including convex $\ell_1$-based methods as well as M-estimators based on nonconvex regularizers, including the SCAD penalty or the MCP regularizer. In addition, for a broad class of nonconvex regularizers, we show that the bad local optima are very common, in that a broad class of local minimization algorithms with random initialization will typically converge to a bad solution. | \section{Introduction}
The classical notion of minimax risk, which plays a central role in
decision theory, allows for the statistician to implement any possible
estimator, regardless of its computational cost. For many problems,
there are a variety of estimators, which can be ordered in terms of
their computational complexity. Given that it is usually feasible
only to implement polynomial-time methods, it has become increasingly
important to study computationally-constrained analogues of the
minimax estimator, in which the choice of estimator is restricted to a
subset of computationally efficient estimators~\cite{Wai14_ICM}. A
fundamental question is when such computationally-constrained forms of
minimax risk estimation either coincide or differ in a fundamental way
from their classical counterpart.
The goal of this paper is to explore such gaps between classical and
computationally practical minimax risks, in the context of prediction
error for high-dimensional sparse regression. Our main contribution
is to establish a fundamental gap between the classical minimax
prediction risk and the best possible risk achievable by a broad class
of $M$-estimators based on coordinate-separable regularizers, one
which includes various nonconvex regularizers that are used in
practice.
In more detail, the classical linear regression model is based on a
response vector $\yvec \in \real^\numobs$ and a design matrix $\Xmat
\in \real^{\numobs \times \usedim}$ that are linked via the relationship
\begin{align}
\label{eqn:standard-linear-model}
\yvec & = \Xmat \thetastar + w,
\end{align}
where the vector $w \in \real^\numobs$ is a random noise vector.
Our goal is to estimate the unknown regression vector $\thetastar \in
\real^\usedim$. Throughout this paper, we focus on the
standard Gaussian model, in which the entries
of the noise vector $w$ are i.i.d.~$N(0, \sigma^2)$ variates, and the
case of deterministic design, in which the matrix $\Xmat$ is viewed as
non-random. In the sparse variant of this model, the regression
vector is assumed to have a small number of non-zero coefficients. In
particular, for some positive integer $\kdim < \usedim$, the vector
$\thetastar$ is said to be $\kdim$-sparse if it has at most $\kdim$
non-zero coefficients. Thus, the model is parameterized by the triple
$(\numobs, \usedim, \kdim)$ of sample size $\numobs$, ambient
dimension $\usedim$, and sparsity $\kdim$. We use $\Ball_0(\kdim)$ to
denote the $\ell_0$-``ball'' of all $\usedim$-dimensional vectors
with at most $\kdim$ non-zero entries.
An estimator $\thetahat$ is a measurable function of the pair $(y,
X)$, taking values in $\real^\usedim$, and its quality can be assessed
in different ways. In this paper, we focus on its \emph{fixed design
prediction error}, given by $\Exs \big[ \frac{1}{\numobs} \|X(
\thetahat - \thetastar)\|_2^2 \big]$, a quantity that measures how
well $\thetahat$ can be used to predict the vector $X \thetastar$ of
noiseless responses. The worst-case prediction error of an estimator
$\thetahat$ over the set $\Ball_0(\kdim)$ is given by
\begin{align}
\MSE(\thetahat; \Xmat) & \defn \sup_{\thetastar\in \Ball_0(\kdim)}
\frac{1}{\numobs}\E[\ltwos{\Xmat (\thetahat - \thetastar)}^2]
\end{align}
Given that $\thetastar$ is $\kdim$-sparse, the most direct approach
would be to seek a $\kdim$-sparse minimizer to the least-squares cost
$\|\yvec - \Xmat \theta\|_2^2$, thereby obtaining the $\ell_0$-based
estimator
\begin{align}
\label{EqnDefnEllZeroEstimator}
\thetazero & \in \arg \min_{\theta\in \Ball_0(\kdim)} \|\yvec - \Xmat
\theta\|_2^2.
\end{align}
The $\ell_0$-based estimator $\thetazero$ is
known~\cite{BunWegTsyb07,raskutti2011minimax} to satisfy a bound of
the form
\begin{align}
\label{eqn:l0-optimal-rate}
\MSE(\thetazero; \Xmat) \precsim \frac{\sigma^2 \, \kdim \log
\usedim}{\numobs},
\end{align}
where $\precsim$ denotes an inequality up to constant factors
(independent of the triple $(\numobs, \usedim, \kdim)$ as well as the
standard deviation $\sigma$). However, it is not tractable to compute
this estimator in a brute force manner, since there are ${\usedim
\choose \kdim}$ subsets of size $\kdim$ to consider.
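For intuition, the estimator~\eqref{EqnDefnEllZeroEstimator} can be computed by exhaustive search over supports when $\usedim$ is tiny. The sketch below (synthetic data; NumPy assumed available; not from the paper) does exactly this, and recovers the true support on noiseless data.

```python
import itertools
import numpy as np

def l0_estimator(y, X, k):
    """Brute-force best-subset least squares: minimize ||y - X theta||_2^2
    over k-sparse theta by trying all (d choose k) supports."""
    n, d = X.shape
    best_rss, best_theta = np.inf, None
    for S in itertools.combinations(range(d), k):
        cols = list(S)
        coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        rss = float(np.sum((y - X[:, cols] @ coef) ** 2))
        if rss < best_rss:
            theta = np.zeros(d)
            theta[cols] = coef
            best_rss, best_theta = rss, theta
    return best_theta

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 8))
theta_star = np.zeros(8)
theta_star[[2, 5]] = [1.0, -2.0]
y = X @ theta_star                 # noiseless responses, so the fit is exact
theta_hat = l0_estimator(y, X, k=2)
print(np.flatnonzero(theta_hat))   # [2 5]: the support is recovered
```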
The computational intractability of the $\ell_0$-based estimator has
motivated the use of various heuristic algorithms and approximations,
including the basis pursuit method~\cite{chen1998atomic}, the Dantzig
selector~\cite{candes2007dantzig}, as well as the extended family of
Lasso estimators~\cite{tibshirani1996regression,
chen1998atomic,zou2006adaptive,belloni2011square}. Essentially,
these methods are based on replacing the $\ell_0$-constraint with its
$\ell_1$-equivalent, in either a constrained or penalized form. There
is now a very large body of work on the performance of such methods,
covering different criteria including support recovery, $\ell_2$-norm
error and prediction error (e.g., see the book~\cite{BuhVan11} and
references therein).
For the case of fixed design prediction error that is the primary
focus here, such $\ell_1$-based estimators are known to achieve the
bound~\eqref{eqn:l0-optimal-rate} only if the design matrix $\Xmat$
satisfies certain conditions, such as the restricted eigenvalue (RE)
condition or compatibility
condition~\cite{bickel2009simultaneous,van2009conditions} or the
stronger restricted isometry property~\cite{candes2007dantzig}; see
the paper~\cite{van2009conditions} for an overview of these various
conditions, and their inter-relationships. Without such conditions,
the best known guarantees for $\ell_1$-based estimators are of the
form
\begin{align}
\label{eqn:l1-achievable-rate}
\MSE(\thetahat_{\ell_1}; \Xmat) \precsim \sigma \, R \,
\sqrt{\frac{\log \usedim}{\numobs}},
\end{align}
a bound that is valid without any RE conditions on the design matrix
$\Xmat$ whenever the $\kdim$-sparse regression vector $\thetastar$ has
$\ell_1$-norm bounded by $R$ (e.g., see the
papers~\cite{BunWegTsyb07,Nem00,raskutti2011minimax}.)
The substantial gap between the ``fast''
rate~\eqref{eqn:l0-optimal-rate} and the ``slow''
rate~\eqref{eqn:l1-achievable-rate} leaves open a fundamental
question: is there a computationally efficient estimator attaining the
bound~\eqref{eqn:l0-optimal-rate} for general design matrices? In the
following subsections, we provide an overview of the currently known
results on this gap, and we then provide a high-level statement of
the main result of this paper.
\subsection{Lower bounds for Lasso}
Given the gap between the fast rate~\eqref{eqn:l0-optimal-rate} and
the Lasso's slower rate~\eqref{eqn:l1-achievable-rate}, one
possibility might be that existing analyses of prediction error are
overly conservative, and $\ell_1$-based methods can actually achieve
the bound~\eqref{eqn:l0-optimal-rate}, without additional constraints
on $\Xmat$. Some past work has given negative answers to this
question. Foygel and Srebro~\cite{FoySre11} constructed a 2-sparse
regression vector and a random design matrix for which the Lasso
prediction error with any choice of regularization parameter
$\regparn$ is lower bounded by $1/\sqrt{\numobs}$. In particular,
their proposed regression vector is $\thetastar
=(0,\dots,0,\frac{1}{2},\frac{1}{2})$. In their design matrix, the
columns are randomly generated with distinct covariances, and
moreover, such that the rightmost column is strongly correlated with
the other two columns on its left. With this particular regression
vector and design matrix, they show that Lasso's prediction error is
lower bounded by $1/\sqrt{\numobs}$ for \emph{any} choice of Lasso
regularization parameter $\lambda$. This construction is explicit for
Lasso, and thus does not apply to more general M-estimators. Moreover,
for this particular counterexample, there is a one-to-one
correspondence between the regression vector and the design matrix, so
that one can identify the non-zero coordinates of $\thetastar$ by
examining the design matrix. Consequently, for this construction, a
simple reweighted form of the Lasso can be used to achieve the fast
rate. In particular, the reweighted Lasso estimator
\begin{align}
\label{eqn:general-reweighed-lasso}
\thetahat_{\rm wl} \in \arg \min_{\theta\in \R^d} \left \{ \|\yvec -
\Xmat \theta\|_2^2 + \lambda \sum_{j=1}^d \alpha_j |\theta_j| \right
\},
\end{align}
with $\lambda$ chosen in the usual manner ($\lambda \asymp \sigma
\sqrt{\frac{\log \usedim}{\numobs}}$), weights $\alpha_{d-1} =
\alpha_d = 1$, and the remaining weights $\{\alpha_1, \ldots,
\alpha_{d-2} \}$ chosen to be sufficiently large, has this property.
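The counterexample matrices themselves are not reproduced here, but the weighted-Lasso objective~\eqref{eqn:general-reweighed-lasso} is straightforward to minimize by proximal gradient descent (ISTA). The following generic sketch on synthetic data (NumPy assumed; data and parameter choices are illustrative only) shows the mechanics:

```python
import numpy as np

def weighted_lasso_ista(y, X, lam, alpha, n_iter=500):
    """Minimize ||y - X theta||_2^2 + lam * sum_j alpha_j * |theta_j|
    by proximal gradient descent (ISTA) with a fixed step size."""
    L = 2.0 * np.linalg.norm(X, 2) ** 2   # Lipschitz constant of the gradient
    t = 1.0 / L
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = theta - t * 2.0 * X.T @ (X @ theta - y)
        # coordinate-wise soft thresholding with thresholds t * lam * alpha_j
        theta = np.sign(z) * np.maximum(np.abs(z) - t * lam * alpha, 0.0)
    return theta

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 10))
theta_star = np.zeros(10)
theta_star[:2] = [3.0, -3.0]
y = X @ theta_star + 0.1 * rng.standard_normal(50)
alpha = np.ones(10)                        # uniform weights: ordinary Lasso
theta_hat = weighted_lasso_ista(y, X, lam=1.0, alpha=alpha)

obj = lambda th: np.sum((y - X @ th) ** 2) + np.sum(alpha * np.abs(th))
print(obj(theta_hat) < obj(np.zeros(10)))  # True: the objective decreased
```

Reweighting amounts to passing a non-uniform `alpha`, e.g.\ large weights on coordinates one wishes to suppress, as in the construction above.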
Dalalyan et al.~\cite{dalalyan2014prediction} construct a stronger
counter-example, for which the prediction error of Lasso is again
lower bounded by $1/\sqrt{n}$. For this counterexample, there is no
obvious correspondence between the regression vector and the design
matrix. Nevertheless, as we show in
Appendix~\ref{sec:fast-rate-dalalyan}, the reweighted Lasso
estimator~\eqref{eqn:general-reweighed-lasso} with a proper choice of
the regularization coefficients still achieves the fast rate on this
example. Another related piece of work is by Cand{\`e}s and
Plan~\cite{candes2009near}. They construct a design matrix for which
the Lasso estimator, when applied with the usual choice of
regularization parameter $\lambda \asymp \sigma (\frac{\log
\usedim}{\numobs})^{1/2}$, has sub-optimal prediction error. Their
matrix construction is similar in spirit to ours, but the
theoretical analysis is limited to the Lasso for a particular choice
of regularization parameter. It does not rule out the possibility that
other choices of regularization parameters, or other polynomial-time
estimators can achieve the fast rate. In contrast, our hardness result
applies to general $M$-estimators based on coordinatewise separable
regularizers, and it allows for arbitrary regularization parameters.
\subsection{Complexity-theoretic lower bound for polynomial-time sparse estimators}
In our own recent work~\cite{zhang2014lower}, we have provided a
complexity-theoretic lower bound that applies to a very broad class of
polynomial-time estimators. The analysis is performed under a
standard complexity-theoretic condition---namely, that the class $\np$
is not a subset of the class $\ppoly$---and shows that there is no
polynomial-time algorithm that returns a $\kdim$-sparse vector that
achieves the fast rate. The lower bound is established as a function
of the restricted eigenvalue of the design matrix. Given sufficiently
large $(\numobs,\kdim,\usedim)$ and any $\gamma > 0$, a design matrix
$X$ with restricted eigenvalue $\gamma$ can be constructed, such that
every polynomial-time $k$-sparse estimator $\thetahat_{\rm poly}$ has
its minimax prediction risk lower bounded as
\begin{align}
\label{eqn:polytime-paper-lower-bound}
\MSE(\thetahat_{\rm poly}; \Xmat) \succsim \frac{\sigma^2 \,
\kdim^{1-\delta} \log \usedim}{\gamma \numobs},
\end{align}
where $\delta > 0$ is an arbitrarily small positive scalar. Note that
the fraction $\kdim^{-\delta}/\gamma$, which characterizes the gap between the fast rate and the rate~\eqref{eqn:polytime-paper-lower-bound}, could be arbitrarily large. The lower bound has the following consequence:
any estimator that achieves the fast rate must either not be polynomial-time,
or must return a regression vector that is not $k$-sparse.
The condition that the estimator is $k$-sparse is essential in the
proof of lower bound~\eqref{eqn:polytime-paper-lower-bound}. In
particular, the proof relies on a reduction between estimators with
low prediction error in the sparse linear regression model, and
methods that can solve the 3-set covering
problem~\cite{natarajan1995sparse}, a classical problem that is known
to be NP-hard. The 3-set covering problem takes as input a list of
3-sets, which are subsets of a set $\mathcal{S}$ whose cardinality is
$3k$. The goal is to choose $k$ of these subsets in order to cover the
set $\mathcal{S}$. The lower
bound~\eqref{eqn:polytime-paper-lower-bound} is established by showing
that if there is a $k$-sparse estimator achieving better prediction
error, then it provides a solution to the 3-set covering problem, as
every non-zero coordinate of the estimate corresponds to a chosen
subset. This hardness result does not eliminate the possibility of
finding a polynomial-time estimator that returns dense vectors
satisfying the fast rate. In particular, it is possible that a dense
estimator cannot be used to recover a good solution to the 3-set
covering problem, implying that it is not possible to use the hardness
of $3$-set covering to assert the hardness of achieving low prediction
error in sparse regression.
At the same time, there is some evidence that better prediction error
can be achieved by dense estimators. For instance, suppose that we
consider a sequence of high-dimensional sparse linear regression
problems, such that the restricted eigenvalue $\gamma =
\gamma_\numobs$ of the design matrix $X \in \real^{\numobs \times
\usedim}$ decays to zero at the rate $\gamma_\numobs = 1/\numobs^2$.
For such a sequence of problems, as $\numobs$ diverges to infinity,
the lower bound~\eqref{eqn:polytime-paper-lower-bound}, which applies
to $\kdim$-sparse estimators, goes to infinity, whereas the Lasso
upper bound~\eqref{eqn:l1-achievable-rate} converges to zero.
Although this behavior is somewhat mysterious, it is not a
contradiction. Indeed, what makes Lasso's performance better than the
lower bound~\eqref{eqn:polytime-paper-lower-bound} is that it allows
for non-sparse estimates. In this example, truncating the Lasso's
estimate to be $k$-sparse will substantially hurt the prediction
error. In this way, we see that proving lower bounds for non-sparse
estimators---the problem to be addressed in this paper---is a
substantially more challenging task than proving lower bounds for estimators
that must return sparse outputs.
\subsection{Main results of this paper}
With this context in place, let us now turn to a high-level statement
of the main results of this paper. More precisely, our contribution
is to provide additional evidence against the polynomial achievability
of the fast rate~\eqref{eqn:l0-optimal-rate}, in particular by showing
that the slow rate~\eqref{eqn:l1-achievable-rate} is a lower bound for
a broad class of M-estimators, namely those based on minimizing a
least-squares cost function together with a coordinate-wise
decomposable regularizer. In particular, we consider estimators that
are based on an objective function of the form $L(\theta; \lambda) =
\frac{1}{\numobs} \|y - X \theta\|_2^2 + \lambda \, \ensuremath{\reg}(\theta)$,
for a weighted regularizer $\ensuremath{\reg}: \real^\usedim \rightarrow \real$
that is coordinate-separable. See Section~\ref{SecCoord} for a
precise definition of this class of estimators. Our first main result
(Theorem~\ref{theorem:main-lower-bound}) establishes that there is
always a matrix $\Xmat\in \R^{\numobs\times \usedim}$ such that for
any coordinate-wise separable function~$\reg$ and for any choice of
weight $\lambda \geq 0$, the objective $L$ always has at least one
local optimum $\thetahat_\lambda$ such that
\begin{align}
\label{eqn:intro-main-result}
\sup_{\thetastar\in \Ball_0(\kdim)} \E \Big[ \frac{1}{\numobs}
\ltwos{\Xmat (\thetahat_\lambda - \thetastar)}^2 \Big]
\succsim \frac{\sigma}{\sqrt{\numobs}}.
\end{align}
Moreover, if the regularizer $\ensuremath{\reg}$ is convex, then this lower
bound applies to all global optima of the convex criterion $L$. This
lower bound is applicable to many popular estimators, including the
ridge regression estimator~\cite{hoerl1970ridge}, the basis pursuit
method~\cite{chen1998atomic}, the Lasso
estimator~\cite{tibshirani1996regression}, the weighted Lasso
estimator~\cite{zou2006adaptive}, the square-root Lasso
estimator~\cite{belloni2011square}, and least squares based on
nonconvex regularizers such as the SCAD penalty~\cite{fan2001variable}
or the MCP penalty~\cite{zhang2010nearly}.
In the nonconvex setting, it is impossible (in general) to guarantee
anything beyond local optimality for any solution found by a
polynomial-time algorithm~\cite{ge2015strong}. Nevertheless, to play
the devil's advocate, one might argue that the assumption that an
adversary is allowed to pick a bad local optimum could be overly
pessimistic for statistical problems. In order to address this
concern, we prove a second result
(Theorem~\ref{theorem:gradient-descent-lower-bound}) that demonstrates
that bad local solutions are difficult to avoid. Focusing on a class
of local descent methods, we show that given a random isotropic
initialization centered at the origin, the resulting stationary points
have poor mean-squared error---that is, they can only achieve the slow
rate. In this way, this paper shows that the gap between the fast and
slow rates in high-dimensional sparse regression cannot be closed via
standard application of a very broad class of methods. In conjunction
with our earlier complexity-theoretic paper~\cite{zhang2014lower}, it
adds further weight to the conjecture that there is a fundamental gap
between the performance of polynomial-time and exponential-time
methods for sparse prediction.
The remainder of this paper is organized as follows. We begin in
Section~\ref{SecBackground} with further background, including a
precise definition of the family of $M$-estimators considered in this
paper, some illustrative examples, and discussion of the prediction
error bound achieved by the Lasso. Section~\ref{sec:main-result} is
devoted to the statements of our main results, along with discussion
of their consequences. In Section~\ref{SecProofs}, we provide the
proofs of our main results, with some technical lemmas deferred to the
appendices. We conclude with a discussion in
Section~\ref{SecDiscussion}.
\section{Background and problem set-up}
\label{SecBackground}
As previously described, an instance of the sparse linear regression
problem is based on observing a pair $(\Xmat, \yvec)\in
\R^{\numobs\times \usedim} \times \R^\numobs$ of instances that are
linked via the linear model~\eqref{eqn:standard-linear-model}, where
the unknown regressor $\thetastar$ is assumed to be $\kdim$-sparse,
and so belongs to the $\ell_0$-ball $\Ball_0(\kdim)$. Our goal is to
find a good predictor, meaning a vector $\thetahat$ such that the
mean-squared prediction error $\frac{1}{\numobs} \ltwos{\Xmat(
\thetahat -\thetastar)}^2$ is small.
\subsection{Least squares with coordinate-separable regularizers}
\label{SecCoord}
The analysis of this paper applies to estimators that are based on
minimizing a cost function of the form
\begin{align}
\label{EqnLoss}
L(\theta; \lambda) & = \frac{1}{\numobs} \|y - X \theta\|_2^2 +
\lambda \, \ensuremath{\reg}(\theta),
\end{align}
where $\ensuremath{\reg}:\real^\usedim \rightarrow \real$ is a
\emph{regularizer}, and $\lambda \geq 0$ is a regularization weight.
We consider the following family $\family$ of coordinate-separable
regularizers:
\begin{enumerate}[(i)]
\item The function $\ensuremath{\reg}: \real^\usedim \rightarrow \real$ is
coordinate-wise decomposable, meaning that $\ensuremath{\reg}(\theta) =
\sum_{j=1}^\usedim \reg_j(\theta_j)$ for some univariate functions
$\rho_j:\real \rightarrow \real$.
\item Each univariate function satisfies $\rho_j(0)=0$ and is
symmetric around zero (i.e., $\rho_j(t) = \rho_j(-t)$ for all $t\in
\R$).
\item On the nonnegative real line $[0,+\infty)$, each function
$\rho_j$ is nondecreasing.
\end{enumerate}
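As an informal sanity check (our own illustration, not part of the formal development), conditions (i)--(iii) for a candidate univariate component $\rho_j$ can be probed numerically on a finite grid; a passing result is evidence of membership in $\family$, not a proof:

```python
def satisfies_family_conditions(rho, grid=None, tol=1e-12):
    """Numerically probe conditions (i)-(iii) for a univariate penalty rho:
    rho(0) = 0, symmetry rho(t) = rho(-t), and monotonicity on [0, +inf).
    A True result on a finite grid is only evidence, not a proof."""
    if grid is None:
        grid = [0.1 * i for i in range(1, 51)]  # sample points in (0, 5]
    if abs(rho(0.0)) > tol:
        return False
    if any(abs(rho(t) - rho(-t)) > tol for t in grid):
        return False
    vals = [rho(t) for t in grid]
    return all(v2 >= v1 - tol for v1, v2 in zip(vals, vals[1:]))

# Lasso component |t| and ridge component t^2 both belong to the family;
# a function that is decreasing on [0, inf) does not.
assert satisfies_family_conditions(abs)
assert satisfies_family_conditions(lambda t: t * t)
assert not satisfies_family_conditions(lambda t: -abs(t))
```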
\noindent Let us consider some examples to illustrate this definition.\\
\paragraph{Bridge regression:} The family of bridge regression estimates~\cite{frank1993statistical}
take the form
\begin{align*}
\thetahat_{\tiny{\mbox{bridge}}} \in \arg \min_{ \theta \in
  \real^\usedim} \Big \{ \frac{1}{\numobs} \|\yvec - \Xmat
\theta\|_2^2 + \lambda \sum_{i=1}^\usedim |\theta_i|^\gamma \Big \}.
\end{align*}
Note that this is a special case of the objective
function~\eqref{EqnLoss} with $\rho_j(\cdot) = |\cdot|^\gamma$ for
each coordinate. The choices $\gamma = 1$ and $\gamma = 2$ correspond to the
Lasso estimator and the ridge regression estimator, respectively. The
analysis of this paper provides lower bounds for both estimators,
uniformly over the choice of $\lambda$.
\paragraph{Weighted Lasso:} The weighted Lasso estimator~\cite{zou2006adaptive}
uses a weighted $\ell_1$-norm to regularize the empirical risk, and
leads to the estimator
\begin{align*}
\thetahat_{\tiny{\mbox{wl}}} \in \arg \min_{ \theta \in \real^\usedim}
\Big \{ \frac{1}{\numobs} \|\yvec - \Xmat \theta\|_2^2 + \lambda
\sum_{i=1}^\usedim \alpha_i|\theta_i| \Big \}.
\end{align*}
Here $\alpha_1, \dots, \alpha_\usedim$ are weights that can be
adaptively chosen with respect to the design matrix $\Xmat$. The
weighted Lasso can perform better than the ordinary Lasso,
which corresponds to the special case in which the $\alpha_j$ are all
equal. For instance, on the counter-example proposed by Foygel and
Srebro~\cite{FoySre11}, for which the ordinary Lasso estimator
achieves only the slow $1/\sqrt{\numobs}$ rate, the weighted Lasso
estimator achieves the $1/\numobs$ convergence rate. Nonetheless, the
analysis of this paper shows that there are design matrices for which
the weighted Lasso, even when the weights are chosen adaptively with
respect to the design, has prediction error at least a constant
multiple of $1/\sqrt{\numobs}$.
\paragraph{Square-root Lasso:} The square-root Lasso
estimator~\cite{belloni2011square} is defined by minimizing the
criterion
\begin{align*}
\thetahat_{\tiny{\mbox{sqrt}}} \in \arg \min_{ \theta \in
\real^\usedim} \Big \{ \frac{1}{\sqrt{\numobs}} \|\yvec - \Xmat
\theta\|_2 + \lambda \lone{\theta} \Big \}.
\end{align*}
This criterion is slightly different from our general objective
function~\eqref{EqnLoss}, since it involves the square root of the
least-squares error. Relative to the Lasso, its primary advantage is
that the optimal setting of the regularization parameter does not
require the knowledge of the standard deviation of the noise. For the
purposes of the current analysis, it suffices to note that by
Lagrangian duality, every square-root Lasso estimate
$\thetahat_{\tiny{\mbox{sqrt}}}$ is a minimizer of the least-squares
criterion $\|\yvec - \Xmat \theta\|_2$, subject to $\|\theta\|_1 \leq R$,
for some radius $R \geq 0$ depending on $\lambda$. Consequently, as
the weight $\lambda$ is varied over the interval $[0,\infty)$, the
square-root Lasso yields the same solution path as the Lasso. Since
our lower bounds apply to the Lasso for any choice of $\lambda \geq
0$, they also apply to all square-root Lasso solutions.
\begin{figure}
\centering
\includegraphics[width = 0.65\textwidth]{penalty-func}
\caption{Plots of the SCAD and MCP penalty functions with regularization
  weight $\lambda = 1$, and parameters $a = 3.7$ for SCAD, and $b = 2.7$
  for MCP.}
\label{fig:penalty-func}
\end{figure}
\paragraph{SCAD penalty or MCP regularizer:}
Due to the intrinsic bias induced by $\ell_1$-regularization, various
forms of nonconvex regularization are widely used. Two of the most
popular are the SCAD penalty, due to Fan and
Li~\cite{fan2001variable}, and the MCP penalty, due to
Zhang~\cite{zhang2010nearly}. The family of SCAD penalties takes the
form
\begin{align*}
\phi_\lambda(t) \defeq \frac{1}{\lambda} \begin{cases} \lambda |t| &
\mbox{for $|t|\leq \lambda$},\\ -(t^2-2 a \lambda |t| +
\lambda^2)/(2a-2) & \mbox{for $\lambda < |t| \leq
a\lambda$},\\ (a+1)\lambda^2/2 & \mbox{for $|t| \geq a\lambda$},
\end{cases}
\end{align*}
where $a > 2$ is a fixed parameter. When used with the least-squares
objective, it is a special case of our general set-up with
$\reg_j(\theta_j) = \phi_\lambda(\theta_j)$ for each coordinate $j =
1, \ldots, \usedim$. Similarly, the MCP penalty takes the form
\begin{align*}
\reg_\lambda(t) \defeq \int_0^{|t|} \left( 1 - \frac{z}{\lambda
b}\right)_+ d z,
\end{align*}
where $b > 0$ is a fixed parameter. It can be verified that both the
SCAD penalty and the MCP regularizer belong to the function class
$\family$ previously defined. See Figure~\ref{fig:penalty-func} for
a graphical illustration of the SCAD penalty and the MCP regularizer.
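To make the two penalties concrete, the following is a direct transcription of the displays above into code (with the parameters $\lambda = 1$, $a = 3.7$, $b = 2.7$ used in Figure~\ref{fig:penalty-func}); the MCP integral is evaluated in closed form:

```python
def scad(t, lam=1.0, a=3.7):
    """SCAD penalty phi_lambda(t) as displayed above (note the 1/lambda prefactor)."""
    x = abs(t)
    if x <= lam:
        return x  # (1/lam) * lam * |t|
    if x <= a * lam:
        return -(x * x - 2 * a * lam * x + lam * lam) / ((2 * a - 2) * lam)
    return (a + 1) * lam / 2.0

def mcp(t, lam=1.0, b=2.7):
    """MCP penalty: closed form of the integral of (1 - z/(lam*b))_+ over [0, |t|]."""
    x = abs(t)
    if x <= lam * b:
        return x - x * x / (2 * lam * b)
    return lam * b / 2.0

# Both penalties vanish at zero, are continuous at their knots,
# and are constant beyond them.
assert scad(0.0) == 0.0 and mcp(0.0) == 0.0
assert abs(scad(1.0) - 1.0) < 1e-12            # boundary |t| = lambda
assert abs(scad(3.7) - scad(100.0)) < 1e-12    # flat beyond a * lambda
assert abs(mcp(2.7) - 2.7 / 2.0) < 1e-12       # flat beyond lambda * b
```

Both functions satisfy conditions (i)--(iii) of the family $\family$: they vanish at the origin, are symmetric, and are nondecreasing on $[0,+\infty)$.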
\subsection{Prediction error for the Lasso}
We now turn to a precise statement of the best known upper bounds for
the Lasso prediction error. We assume that the design matrix satisfies
the column normalization condition. More precisely, letting $X_j \in
\real^\numobs$ denote the $j^{th}$ column of the design matrix
$\Xmat$, we say that it is $1$-column normalized if
\begin{align}
\label{eqn:column-normalization-condition}
\frac{\ltwos{X_j}}{\sqrt{\numobs}} \leq 1 \qquad \mbox{for $j = 1,2,
\ldots,\usedim$.}
\end{align}
Our choice of the constant $1$ is to simplify notation; the more
general notion allows for an arbitrary constant $C$ in this bound.
In addition to the column normalization condition, if the design matrix further satisfies a restricted eigenvalue (RE)
condition~\cite{bickel2009simultaneous,van2009conditions}, then the
Lasso is known to achieve the fast rate~\eqref{eqn:l0-optimal-rate}
for prediction error. More precisely, restricted eigenvalues are defined
in terms of subsets $\PlainSset$ of the index set $\{1, 2, \ldots,
\usedim \}$, and a cone associated with any such subset. In
particular, letting $\PlainSbar$ denote the complement of
$\PlainSset$, we define the cone
\begin{align*}
\ConeSet(\PlainSset) & \defn \big \{ \theta \in \real^\usedim \, \mid
\, \|\theta_{\PlainSbar}\|_1 \leq 3 \|\theta_{\PlainSset}\|_1 \big \}.
\end{align*}
Here $\|\theta_{\PlainSbar}\|_1 \defn \sum_{j \in \PlainSbar}
|\theta_j|$ corresponds to the $\ell_1$-norm of the coefficients
indexed by $\PlainSbar$, with $\|\theta_{\PlainSset}\|_1$ defined
similarly. Note that any vector $\thetastar$ supported on
$\PlainSset$ belongs to the cone $\ConeSet(\PlainSset)$; in addition,
it includes vectors whose $\ell_1$-norm on the ``bad'' set
$\PlainSbar$ is small relative to their $\ell_1$-norm on $\PlainSset$.
Given a triplet $(\numobs, \usedim, \kdim)$, the matrix $\Xmat \in
\real^{\numobs \times \usedim}$ is said to satisfy a $\RECON$-RE
condition (also known as a compatibility condition) if
\begin{align}
\label{EqnDefnRE}
\frac{1}{\numobs} \|X \theta\|_2^2 & \geq \RECON \|\theta\|_2^2 \qquad
\mbox{for all $\theta \in \bigcup
\limits_{|\PlainSset| = \kdim} \ConeSet(\PlainSset)$.}
\end{align}
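Computing the exact RE constant requires minimizing over the cone, which is itself hard in general. The following rough sketch (our own illustration, not from the formal development) instead samples vectors from $\ConeSet(\PlainSset)$ and returns the smallest observed ratio, which is an upper bound on any valid RE constant $\gamma$ for the given support:

```python
import random

def re_upper_bound(X, support, trials=2000, seed=0):
    """Sample vectors theta in the cone C(S) = {||theta_{S^c}||_1 <= 3 ||theta_S||_1}
    and return the minimum over samples of ||X theta||_2^2 / (n ||theta||_2^2),
    an upper bound on any valid RE constant gamma for this support."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    best = float("inf")
    for _ in range(trials):
        theta = [0.0] * d
        for j in support:                       # free coordinates on S
            theta[j] = rng.gauss(0.0, 1.0)
        s_l1 = sum(abs(theta[j]) for j in support)
        off = [j for j in range(d) if j not in support]
        raw = [rng.gauss(0.0, 1.0) for _ in off]
        raw_l1 = sum(abs(r) for r in raw) or 1.0
        # scale off-support part so ||theta_{S^c}||_1 <= 3 ||theta_S||_1
        scale = rng.uniform(0.0, 3.0) * s_l1 / raw_l1
        for j, r in zip(off, raw):
            theta[j] = scale * r
        Xt = [sum(X[i][j] * theta[j] for j in range(d)) for i in range(n)]
        num = sum(v * v for v in Xt) / n
        den = sum(v * v for v in theta)
        best = min(best, num / den)
    return best

# For the identity design in dimension 4, the ratio is exactly 1/n = 0.25.
I4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
assert abs(re_upper_bound(I4, {0}) - 0.25) < 1e-9
```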
The following
result~\cite{bickel2009simultaneous,negahban2012unified,BuhVan11}
provides a bound on the prediction error for the Lasso estimator:
\begin{proposition}[Prediction error for Lasso with RE condition]
\label{PropLassoThresh}
Consider the standard linear model for a design matrix $\Xmat$
satisfying the column normalization
condition~\eqref{eqn:column-normalization-condition} and the
$\gamma$-RE condition. Then the Lasso estimator $\thetahat_{\lambda_n}$ with
$\regparn = 4 \sigma \sqrt{\frac{\log \usedim}{\numobs}}$ satisfies
\begin{align}
\label{EqnLassoThreshBound}
\frac{1}{\numobs} \| \Xmat \thetahat_{\lambda_n} - \Xmat
\thetastar\|_2^2 & \leq \frac{64\sigma^2 \kdim \log
\usedim}{\gamma^2\numobs} \qquad \mbox{for any $\thetastar \in
\Ball_0(\kdim)$}
\end{align}
with probability at least $1 - c_1 e^{-c_2 n \lambda_n^2}$.
\end{proposition}
The Lasso rate~\eqref{EqnLassoThreshBound} will match the optimal
rate~\eqref{eqn:l0-optimal-rate} if the RE constant~$\gamma$ is
bounded away from zero. If $\gamma$ is close to zero, then the Lasso
rate could be arbitrarily worse than the optimal rate. It is known
that the RE condition is necessary for recovering the true vector
$\thetastar$~\cite[see, e.g.][]{raskutti2011minimax}, but minimizing
the prediction error should be easier than recovering the true vector.
In particular, strong correlations between the columns of $X$, which
lead to violations of the RE conditions, should have no effect on the
intrinsic difficulty of the prediction problem. Recall that the
$\ell_0$-based estimator $\thetazero$ satisfies the prediction error
upper bound~\eqref{eqn:l0-optimal-rate} without any constraint on the
design matrix. Moreover, Raskutti et al.~\cite{raskutti2011minimax}
show that many problems with strongly correlated columns are actually
easy from the prediction point of view.
In the absence of RE conditions, $\ell_1$-based methods are known to
achieve the slow $1/\sqrt{\numobs}$ rate, with the only constraint on
the design matrix being a uniform column
bound~\cite{bickel2009simultaneous}:
\begin{proposition}[Prediction error for Lasso without RE condition]
\label{PropLasso}
Consider the standard linear model for a design matrix $\Xmat$
satisfying the column normalization
condition~\eqref{eqn:column-normalization-condition}. Then for any
vector $\thetastar\in \Ball_0(\kdim)\cap \Ball_1(\radius)$, the Lasso
estimator $\thetaregparn$, with $\regparn = 4 \sigma \sqrt{\frac{2\log
\usedim}{\numobs}}$, satisfies the bound
\begin{align}
\label{EqnLassoBound}
\PREDERRSQ{\thetaregparn}{\thetastar} & \leq \UNICON\; \sigma \radius
\Big ( \sqrt{\frac{2\log \usedim}{\numobs}} + \delta \Big),
\end{align}
with probability at least $1 - c_1 \usedim \, e^{-c_2 \numobs
\delta^2}$.
\end{proposition}
\noindent Combining the bounds of
Proposition~\ref{PropLassoThresh} and Proposition~\ref{PropLasso}, we
have
\begin{align}\label{eqn:combined-upper-bound}
\MSE(\thetahat_{\ell_1}; \Xmat) \leq \UNICON'
\min\Big\{\frac{\sigma^2 k \log d}{\gamma^2 n}, \sigma \radius
\sqrt{\frac{ \log \usedim}{\numobs}}\Big\}.
\end{align}
If the RE constant $\gamma$ is sufficiently close to zero, then the
second term on the right-hand side will dominate the first term. In
that case, the $1/\sqrt{\numobs}$ achievable rate is substantially
slower than the $1/\numobs$ optimal rate for reasonable ranges of
$(\kdim, \radius)$. One might wonder whether the analysis leading to the
bound~\eqref{eqn:combined-upper-bound} could be sharpened so as to
obtain the fast rate. Among other consequences, our first main result
(Theorem~\ref{theorem:main-lower-bound} below) shows that no
substantial sharpening is possible.
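As a concrete numerical illustration of this dominance (with made-up but representative values), take $\sigma = 1$, $\kdim = 5$, $\radius = 1$, $\usedim = \numobs$, and an RE constant decaying as $\gamma_\numobs = \numobs^{-1/2}$; the fast term of the combined bound then grows like $\kdim \log \numobs$, so the slow $1/\sqrt{\numobs}$ term is the binding one:

```python
import math

def combined_upper_bound(n, d, k=5, sigma=1.0, R=1.0, gamma=None):
    """Evaluate the two terms of the combined Lasso bound
    min{ sigma^2 k log d / (gamma^2 n), sigma R sqrt(log d / n) }
    (absolute constants dropped)."""
    if gamma is None:
        gamma = 1.0 / math.sqrt(n)  # illustrative decaying RE constant
    fast = sigma ** 2 * k * math.log(d) / (gamma ** 2 * n)
    slow = sigma * R * math.sqrt(math.log(d) / n)
    return fast, slow

fast, slow = combined_upper_bound(n=10_000, d=10_000)
# With gamma = n^{-1/2}, the "fast" term equals k log d and dwarfs the
# slow term, so the minimum in the combined bound is the 1/sqrt(n) rate.
assert slow < fast
```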
\section{Main results}
\label{sec:main-result}
We now turn to statements of our main results, and discussion of their
consequences.
\subsection{A general lower bound}
Our analysis applies to the set of local minima of the objective
function $L$ defined in equation~\eqref{EqnLoss}. More precisely, a
vector $\thetatil$ is a local minimum of the function $\theta \mapsto
L(\theta;\lambda)$ if there is an open ball $\Ball$ centered at
$\thetatil$ such that $\thetatil \in \arg \min \limits_{\theta\in
\Ball} L(\theta;\lambda)$. We then define the set
\begin{align}
\label{eqn:define-solution-path}
\Thetahat_\lambda \defeq \Big \{ \theta \in \real^\usedim \, \mid \,
\mbox{$\theta$ is a local minimum of the function $\theta \mapsto
L(\theta;\lambda)$} \Big\},
\end{align}
an object that depends on the triplet $(\Xmat,\yvec,\rho)$ as well as
the choice of regularization weight $\lambda$. Since the function
$\reg$ might be non-convex, the set $\Thetahat_\lambda$ may contain
multiple elements.
At best, a typical descent method applied to the objective $L$ can be
guaranteed to converge to some element of $\Thetahat_\lambda$. The
following theorem provides a lower bound, applicable to any method
that always returns some local minimum of the objective
function~\eqref{EqnLoss}.
\begin{theorem}
\label{theorem:main-lower-bound}
For any pair $(\numobs, \usedim)$ such that \mbox{$\usedim \geq
\numobs \geq 4$,} any sparsity level \mbox{$\kdim \geq 2$} and any
radius \mbox{$\radius \geq 8{\sigma}/{\sqrt{\numobs}}$,} there is a
design matrix $\Xmat \in \R^{\numobs\times \usedim}$ satisfying the
column normalization
condition~\eqref{eqn:column-normalization-condition} such that for any
coordinate-separable penalty, we have
\begin{subequations}
\begin{align}
\label{eqn:main-lower-bound}
\sup_{\thetastar\in \Ball_0(\kdim)\cap \Ball_1(\radius)} \E \left[
\inf_{\lambda\geq 0} \sup_{\theta\in \Thetahat_\lambda}
\PREDERRSQ{\theta}{\thetastar} \right] & \geq \UNICON \min\left\{
\sigma^2, \frac{\sigma\radius}{\sqrt{\numobs}}\right\}.
\end{align}
Moreover, for any convex coordinate-separable penalty, we have
\begin{align}
\label{eqn:main-lower-bound-for-convex-penalty}
\sup_{\thetastar\in \Ball_0(\kdim)\cap \Ball_1(\radius)} \E \left[
\inf_{\lambda\geq 0} \inf_{\theta\in \Thetahat_\lambda}
\PREDERRSQ{\theta}{\thetastar} \right] \geq \UNICON \min\left\{
\sigma^2, \frac{\sigma\radius}{\sqrt{\numobs}}\right\}.
\end{align}
\end{subequations}
\end{theorem}
\noindent In both of these statements, the constant $\UNICON$ is universal,
independent of $(\numobs, \usedim, \kdim, \sigma, \radius)$ as well as
the design matrix. See Section~\ref{sec:prove-main-result} for the
proof.
In order to interpret the lower bound~\eqref{eqn:main-lower-bound},
consider any estimator $\thetahat$ that takes values in the set
$\Thetahat_\lambda$, corresponding to local minima of $L$. The result
is of a game-theoretic flavor: the statistician is allowed to
adaptively choose $\lambda$ based on the observations $(y, X)$,
whereas nature is allowed to act adversarially in choosing a local
minimum for every execution of $\thetahat_\lambda$. Under this
setting, Theorem~\ref{theorem:main-lower-bound} implies that
\begin{align}
\label{eqn:concretized-lower-bound}
\sup_{\thetastar\in \Ball_0(\kdim)\cap \Ball_1(\radius)}
\frac{1}{\numobs} \E \left[ \ltwos{\Xmat\thetahat_\lambda -
\Xmat\thetastar}^2 \right] &\geq \UNICON \min\left\{ \sigma^2,
\frac{\sigma\radius}{\sqrt{\numobs}}\right\}.
\end{align}
For any convex regularizer (such as the $\ell_1$-penalty underlying
the Lasso estimate),
equation~\eqref{eqn:main-lower-bound-for-convex-penalty} provides a
stronger lower bound, one that holds uniformly over all choices of
$\lambda \geq 0$ and all (global) minima. For the Lasso estimator,
the lower bound of Theorem~\ref{theorem:main-lower-bound} matches the
upper bound~\eqref{EqnLassoBound} up to the logarithmic term
$\sqrt{\log \usedim}$, showing that the lower bound is almost tight.\\
It is possible that lower bounds of this form hold only for extremely
ill-conditioned design matrices, which would render the consequences
of the result less broadly applicable. In particular, it is natural
to wonder whether it is also possible to prove a non-trivial lower
bound when the restricted eigenvalues are bounded away from zero. Recall
that under the RE condition with a positive constant~$\gamma$, the
Lasso will achieve a mixture rate~\eqref{eqn:combined-upper-bound},
consisting of a scaled fast rate $1/(\gamma^2 n)$ and the slow rate
$1/\sqrt{n}$. The following result shows that this mixture rate cannot
be improved to match the fast rate.
\begin{corollary}
\label{coro:lower-bound}
For any sparsity level $\kdim \geq 1$, any integers $\usedim = \numobs
\geq 4k$, any radius \mbox{$\radius \geq 8{\sigma}/{\sqrt{\numobs}}$}
and any constant $\gamma \in (0,1]$, there is a design matrix $\Xmat
\in \R^{\numobs\times \usedim}$ satisfying the column normalization
condition~\eqref{eqn:column-normalization-condition} and the
$\gamma$-RE condition, such that for any coordinate-separable penalty,
we have
\begin{subequations}
\begin{align}
\label{eqn:coro-lower-bound}
\sup_{\thetastar\in \Ball_0(2\kdim)\cap \Ball_1(\radius)} \E \left[
\inf_{\lambda\geq 0} \sup_{\theta\in \Thetahat_\lambda}
\PREDERRSQ{\theta}{\thetastar} \right] & \geq \UNICON \min\left\{
\sigma^2, \frac{k \sigma^2}{\gamma n},
\frac{\sigma\radius}{\sqrt{\numobs}}\right\}.
\end{align}
Moreover, for any convex coordinate-separable penalty, we have
\begin{align}
\label{eqn:coro-lower-bound-for-convex-penalty}
\sup_{\thetastar\in \Ball_0(2\kdim)\cap \Ball_1(\radius)} \E \left[
\inf_{\lambda\geq 0} \inf_{\theta\in \Thetahat_\lambda}
\PREDERRSQ{\theta}{\thetastar} \right] \geq \UNICON \min\left\{
\sigma^2, \frac{k \sigma^2}{\gamma n},
\frac{\sigma\radius}{\sqrt{\numobs}}\right\}.
\end{align}
\end{subequations}
\end{corollary}
Since none of the three terms on the right-hand side of
inequalities~\eqref{eqn:coro-lower-bound}
and~\eqref{eqn:coro-lower-bound-for-convex-penalty} matches the
optimal rate~\eqref{eqn:l0-optimal-rate}, the corollary implies that
the optimal rate is not achievable even if the restricted eigenvalues
are bounded away from zero. Comparing this lower bound to the upper
bound~\eqref{eqn:combined-upper-bound}, there are two factors that are
not perfectly matched. First, the upper bound depends on $\log
\usedim$, but there is no such dependence in the lower bound. Second,
the upper bound has a term that is proportional to $1/\gamma^2$, but
the corresponding term in the lower bound is proportional
to~$1/\gamma$. Proving a sharper lower bound that closes this gap
remains an open problem.
We remark that Corollary~\ref{coro:lower-bound} follows by a
refinement of the proof of Theorem~\ref{theorem:main-lower-bound}. In
particular, we first show that the design matrix underlying
Theorem~\ref{theorem:main-lower-bound}---call it
$\ensuremath{X_{\mbox{\tiny{bad}}}}$---satisfies the $\gamma_n$-RE condition, where the quantity
$\gamma_n$ converges to zero as a function of sample size~$n$. In
order to prove Corollary~\ref{coro:lower-bound}, we construct a new
block-diagonal design matrix such that each block corresponds to a
version of $\ensuremath{X_{\mbox{\tiny{bad}}}}$. The sizes of these blocks are then chosen so that,
given a predefined quantity~$\gamma >0$, the new matrix satisfies the
$\gamma$-RE condition. We then lower bound the prediction error of
this new matrix, using Theorem~\ref{theorem:main-lower-bound} to lower
bound the prediction error of each of the blocks. We refer the reader
to Section~\ref{sec:proof-coro-lower-bound} for the full proof.
\subsection{Lower bounds for local descent methods}
For any least-squares cost with a coordinate-wise separable
regularizer, Theorem~\ref{theorem:main-lower-bound} establishes the
existence of at least one ``bad'' local minimum such that the
associated prediction error is lower bounded by $1/\sqrt{n}$. One
might argue that this result could be overly pessimistic, in that the
adversary is given too much power in choosing local minima. Indeed,
the mere existence of bad local minima need not be a practical concern
unless it can be shown that a typical optimization algorithm will
frequently converge to one of them.
Steepest descent is a standard first-order algorithm for minimizing a
convex cost function~\cite{Bertsekas_nonlin,Boyd04}. However, for
non-convex and non-differentiable loss functions, it is known that the
steepest descent method does not necessarily yield convergence to a
local minimum~\cite{dem1990introduction,wolfe1975method}. Although there
exist provably convergent first-order methods for general non-convex
optimization
(e.g.,~\cite{mifflin1982modification,kiwiel1983aggregate}), the paths
defined by their iterations are difficult to characterize, and it is
also difficult to predict the point to which the algorithm eventually
converges.
In order to address a broad class of methods in a unified manner, we
begin by observing that most first-order methods can be seen as
iteratively and approximately solving a local minimization
problem. For example, given a stepsize parameter $\eta > 0$, the
method of steepest descent iteratively approximates the minimizer of
the objective over a ball of radius $\eta$. Similarly, the
convergence of algorithms for non-convex optimization is
based on the fact that they guarantee decrease of the function value
in the local neighborhood of the current
iterate~\cite{mifflin1982modification,kiwiel1983aggregate}. We thus
study an iterative local descent algorithm taking the form:
\begin{align}
\label{eqn:local-descent-update-formula}
\theta^{t+1} \in \arg \min_{\theta \in \ensuremath{\Ball_2(\stepsize; \theta^t)}} L(\theta; \lambda),
\end{align}
where $\eta > 0$ is a given parameter, and $\Ball_2(\eta;
\theta^t) \defn \{ \theta \in \real^\usedim \, \mid \, \ltwos{\theta -
\theta^t} \leq \eta \}$ is the ball of radius $\eta$
around the current iterate. If there are multiple points achieving
the optimum, the algorithm chooses the one that is closest to
$\theta^t$, resolving any remaining ties by randomization. The
algorithm terminates when there is a minimizer belonging to the
interior of the ball $\ensuremath{\Ball_2(\stepsize; \theta^t)}$---that is, exactly when
$\theta^{t+1}$ is a local minimum of the loss function.
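For intuition only, here is a crude randomized stand-in for the update~\eqref{eqn:local-descent-update-formula} (our own sketch, not the algorithm analyzed in the paper): rather than computing the exact ball-constrained minimizer, it approximates it by sampling candidate points in the ball $\Ball_2(\eta; \theta^t)$ and stops once a full round of samples fails to improve on the current iterate:

```python
import math
import random

def local_descent(L, theta0, eta=0.5, samples=400, max_iter=200, seed=0):
    """Randomized approximation to the ball-constrained update
    theta^{t+1} in argmin_{||theta - theta^t||_2 <= eta} L(theta).
    Stops when no sampled candidate in the ball improves on theta^t."""
    rng = random.Random(seed)
    theta = list(theta0)
    for _ in range(max_iter):
        best, best_val = theta, L(theta)
        for _ in range(samples):
            # uniform random direction, scaled by a radius in [0, eta]
            g = [rng.gauss(0.0, 1.0) for _ in theta]
            norm = math.sqrt(sum(v * v for v in g)) or 1.0
            r = eta * rng.random()
            cand = [t + r * v / norm for t, v in zip(theta, g)]
            val = L(cand)
            if val < best_val:
                best, best_val = cand, val
        if best is theta:   # no improvement found: approximate local minimum
            return theta
        theta = best
    return theta

# Smooth convex test objective: the iterates approach the minimizer (1, -2).
L_quad = lambda th: (th[0] - 1.0) ** 2 + (th[1] + 2.0) ** 2
sol = local_descent(L_quad, [5.0, 5.0])
assert L_quad(sol) < 1e-2
```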
It should be noted that the
algorithm~\eqref{eqn:local-descent-update-formula} defines a powerful
algorithm---one that might not be easy to implement in polynomial
time---since it is guaranteed to return the global minimum of a
nonconvex program over the ball $\ensuremath{\Ball_2(\stepsize; \theta^t)}$. In a certain sense, it
is more powerful than any first-order optimization method, since it
will always decrease the function value at least as much as a descent
step with stepsize related to $\eta$. Since we are proving lower
bounds, these observations only strengthen our result. We
impose two additional conditions on the regularizers:
\begin{enumerate}[(i)]
\item[(iv)] Each component function $\reg_j$ is continuous at the
origin.
\item[(v)] There is a constant $\curbound$ such that
$|\reg'_j(x)-\reg'_j(\ensuremath{\tilde{x}})|\leq \curbound |x - \ensuremath{\tilde{x}}|$ for any
pair $x, \ensuremath{\tilde{x}} \in (0,\infty)$.
\end{enumerate}
Assumptions (i)-(v) are more restrictive than assumptions (i)-(iii),
but they are satisfied by many popular penalties. As illustrative
examples, for the $\ell_1$-norm, we have $\curbound = 0$. For the SCAD
penalty, we have $\curbound = 1/(a-1)$, whereas for the MCP
regularizer, we have $\curbound = 1/b$. Finally, in order to prevent the
update~\eqref{eqn:local-descent-update-formula} from being so powerful that
it reaches the global minimum in one single step, we impose an
additional condition on the stepsize, namely that
\begin{align}
\label{eqn:beta-constraint}
\eta \leq \min \Big\{ B, \frac{B}{\lambda \curbound}\Big\}, \quad
\mbox{where $B \defeq \frac{\sigma}{4\sqrt{\numobs}}$.}
\end{align}
It is reasonable to assume that the stepsize is bounded by a time-invariant
constant, as we can always partition a single-step update into a finite
number of smaller steps, increasing the algorithm's time complexity by a
multiplicative constant. On the other hand, the $\order(1/\sqrt{n})$
stepsize is adopted by popular first-order methods. Under these assumptions,
we have the following theorem, which applies to any regularizer $\reg$ that
satisfies Assumptions (i)-(v).
\begin{theorem}
\label{theorem:gradient-descent-lower-bound}
For any pair $(\numobs, \usedim)$ such that $\usedim\geq \numobs \geq
4$, integer $\kdim \geq 2$ and any scalars $\ensuremath{\gamma} \geq 0$ and
$\radius \geq {\sigma}/{\sqrt{\numobs}}$, there is a design matrix
$\Xmat \in \R^{\numobs\times \usedim}$ satisfying the column
normalization condition~\eqref{eqn:column-normalization-condition}
such that
\begin{enumerate}[(a)]
\item The update~\eqref{eqn:local-descent-update-formula} terminates
after a finite number of steps $\ensuremath{T}$ at a vector $\ensuremath{\widehat{\theta}} =
\theta^{\ensuremath{T}+1}$ that is a local minimum of the loss function.
\item Given a random initialization $\theta^0\sim N(0, \ensuremath{\gamma}^2
I_{d\times d})$, the local minimum satisfies the lower bound
\end{enumerate}
\begin{align*}
\sup_{\thetastar\in \Ball_0(\kdim)\cap \Ball_1(\radius)} \E \left[
\inf_{\lambda\geq 0}\frac{1}{\numobs} \ltwos{\Xmat\ensuremath{\widehat{\theta}} -
\Xmat\thetastar}^2 \right] \geq \UNICON \min\{ \radius, \sigma\}
\frac{\sigma}{\sqrt{\numobs}}.
\end{align*}
\end{theorem}
\noindent Theorem~\ref{theorem:gradient-descent-lower-bound} shows
that local descent methods based on a random initialization do not
lead to local optima that achieve the fast rate. This conclusion
provides stronger negative evidence than
Theorem~\ref{theorem:main-lower-bound}, since it shows that bad local
minima not only exist, but are difficult to avoid.
\subsection{Simulations}
\label{SecSimulations}
In the proof of Theorem~\ref{theorem:main-lower-bound} and
Theorem~\ref{theorem:gradient-descent-lower-bound}, we construct
specific design matrices to make the problem hard to solve. In this
section, we apply several popular algorithms to the solution of the sparse linear
regression problem on these ``hard'' examples, and compare their
performance with the $\ell_0$-based
estimator~\eqref{EqnDefnEllZeroEstimator}. More specifically,
focusing on the special case $\numobs = \usedim$, we perform
simulations for the design matrix $\Xmat \in \R^{\numobs \times
\numobs}$ used in the proof of
Theorem~\ref{theorem:gradient-descent-lower-bound}. It is given by
\begin{align*}
\Xmat = \Big[ \mbox{blkdiag} \underbrace{\big \{ \sqrt{\numobs} A,
\sqrt{\numobs} A, \ldots, \sqrt{\numobs} A \big
\}}_{\mbox{$\numobs/2$ copies}}\Big],
\end{align*}
where the sub-matrix $A$ takes the form
\begin{align*}
A = \begin{bmatrix} \cos(\alpha) & -\cos(\alpha)\\ \sin(\alpha) &
\sin(\alpha)
\end{bmatrix}, \quad \mbox{where} \quad \alpha = \arcsin(n^{-1/4}).
\end{align*}
Given the $2$-sparse regression vector $\thetastar = \big(0.5, 0.5, 0,
\ldots, 0 \big)$, we form the response vector \mbox{$\yvec = \Xmat
\thetastar + w$,} where $w \sim N(0, I_{\numobs \times \numobs})$.
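The two claimed properties of this construction, column normalization and the value of the noiseless prediction seminorm at $\thetastar$, can be checked directly (a pure-Python sketch; the indexing conventions are ours):

```python
import math

def build_design(n):
    """Block-diagonal matrix with n/2 copies of sqrt(n) * A on the diagonal,
    where A = [[cos a, -cos a], [sin a, sin a]] and a = arcsin(n^{-1/4})."""
    assert n % 2 == 0
    alpha = math.asin(n ** -0.25)
    A = [[math.cos(alpha), -math.cos(alpha)],
         [math.sin(alpha), math.sin(alpha)]]
    X = [[0.0] * n for _ in range(n)]
    for b in range(n // 2):
        for i in range(2):
            for j in range(2):
                X[2 * b + i][2 * b + j] = math.sqrt(n) * A[i][j]
    return X

n = 100
X = build_design(n)
# Column normalization: ||X_j||_2 / sqrt(n) = 1 for every column.
for j in range(n):
    col_norm = math.sqrt(sum(X[i][j] ** 2 for i in range(n)))
    assert abs(col_norm / math.sqrt(n) - 1.0) < 1e-9
# For theta* = (0.5, 0.5, 0, ..., 0): (1/n) ||X theta*||^2 = sin^2(alpha),
# which equals 1/sqrt(n), matching the prediction error of thetahat = 0.
theta = [0.5, 0.5] + [0.0] * (n - 2)
Xt = [sum(X[i][j] * theta[j] for j in range(n)) for i in range(n)]
assert abs(sum(v * v for v in Xt) / n - 1.0 / math.sqrt(n)) < 1e-9
```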
We compare the $\ell_0$-based estimator, referred to as the
\emph{baseline estimator}, with three other methods: the Lasso
estimator~\cite{tibshirani1996regression}, the estimator based on the
SCAD penalty~\cite{fan2001variable} and the estimator based on the MCP
penalty~\cite{zhang2010nearly}. In implementing the $\ell_0$-based
estimator, we provide it with the knowledge that $k = 2$, since the
true vector $\thetastar$ is $2$-sparse. For Lasso, we adopt the
\texttt{MATLAB} implementation~\cite{matlabLasso}, which generates a
Lasso solution path evaluated at $100$ different regularization parameters,
and we choose the estimate that yields the smallest prediction error.
For the SCAD penalty, we choose $a = 3.7$ as suggested by Fan and
Li~\cite{fan2001variable}. For the MCP penalty, we choose $b = 2.7$,
so that the maximum concavity of the MCP penalty matches that of the
SCAD penalty. For the SCAD penalty and the MCP penalty (and recalling
that $\usedim = \numobs$), we studied choices of the regularization
weight of the form $\lambda = C\sqrt{\frac{\log n}{n}}$ for a
pre-factor $C$ to be determined. As shown in past work on non-convex
regularizers~\cite{loh2013regularized}, such choices of $\lambda$ lead
to low $\ell_2$-error. By manually tuning the parameter $C$ to
optimize the prediction error, we found that $C=0.1$ is a reasonable
choice. We used routines from the GIST package~\cite{gong2013gist} to
optimize these non-convex objectives.
\begin{figure}
\centering
\includegraphics[width = 0.7\textwidth]{simulation}
\caption{Problem scale $n$ versus the prediction error
$\Exs[\frac{1}{\numobs}\ltwos{\Xmat(\thetahat - \thetastar)}^2]$.
The expectation is computed by averaging $100$ independent runs of
the algorithm. Both the sample size $\numobs$ and the prediction
error are plotted on a logarithmic scale.}
\label{fig:simulation}
\end{figure}
By varying the sample size over the range $10$ to $1000$, we obtained
the results plotted in Figure~\ref{fig:simulation}, in which the
prediction error $\Exs[\frac{1}{\numobs}\ltwos{\Xmat(\thetahat -
\thetastar)}^2]$ and sample size $\numobs$ are both plotted on a
logarithmic scale. The performance of the Lasso, SCAD-based estimate,
and MCP-based estimate are all similar. For all of the three methods,
the prediction error scales as $1/\sqrt{n}$, as confirmed by the
slopes of the corresponding lines in Figure~\ref{fig:simulation},
which are very close to~$0.5$. In fact, by examining the estimators'
outputs, we find that in many cases, all three estimators output
$\thetahat = 0$, leading to the prediction error
$\frac{1}{n}\ltwos{\Xmat(0 - \thetastar)}^2 = \frac{1}{\sqrt{n}}$.
Since the regularization parameters have been chosen to optimize the
prediction error, this scaling is the best rate that the three
estimators are able to achieve, and it matches the theoretical
prediction of Theorem~\ref{theorem:main-lower-bound} and
Theorem~\ref{theorem:gradient-descent-lower-bound}.
In contrast, the $\ell_0$-based estimator achieves a substantially
better error rate. The slope of the corresponding line in
Figure~\ref{fig:simulation} is very close to $1$. This means that the
prediction error of the $\ell_0$-based estimator scales as $1/n$,
thereby matching the theoretically-predicted
scaling~\eqref{eqn:l0-optimal-rate}.
\section{Proofs}
\label{SecProofs}
We now turn to the proofs of our theorems and corollary. In each
case, we defer the proofs of more technical results to the appendices.
\subsection{Proof of Theorem~\ref{theorem:main-lower-bound}}
\label{sec:prove-main-result}
For a given triple $(\numobs, \sigma, \radius)$, we define the angle
$\alpha \defeq \arcsin\left(
\frac{\sqrt{\sigma}}{\numobs^{1/4}\sqrt{32 \radius}} \right)$, and the
two-by-two matrix
\begin{subequations}
\begin{align}
\label{eqn:define-matrix-A}
A = \begin{bmatrix} \cos(\alpha) & -\cos(\alpha)\\ \sin(\alpha) &
\sin(\alpha)
\end{bmatrix}.
\end{align}
Using the matrix $A \in \real^{2 \times 2}$ as a building block, we
construct a design matrix $\Xmat \in \real^{\numobs \times
\usedim}$. Without loss of generality, we may assume that $\numobs$
is divisible by two. (If $\numobs$ is odd, we may construct an
$(\numobs-1) \times \usedim$ design matrix and append a row of zeros;
this changes the result only by a constant factor.) We then
define
\begin{align}
\label{eqn:define-design-matrix-X}
\Xmat = \Big[ \mbox{blkdiag} \underbrace{\big \{ \sqrt{\numobs} A,
\sqrt{\numobs} A, \ldots, \sqrt{\numobs} A \big
\}}_{\mbox{$\numobs/2$ copies}}~~{\bf 0}\Big] \in
\real^{\numobs\times \usedim},
\end{align}
\end{subequations}
where the all-zeroes matrix on the right side has dimensions
$\numobs\times (\usedim-\numobs)$. It is easy to verify that the
matrix $\Xmat$ defined in this way satisfies the column normalization
condition~\eqref{eqn:column-normalization-condition}.
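For later reference, the choice of $\alpha$ immediately gives the following identity (a one-line check from the definition, recorded here because it underlies the arithmetic in Case 1 below):
\begin{align*}
\sin^2(\alpha) = \frac{\sigma}{32 \radius \sqrt{\numobs}},
\qquad \mbox{and hence} \qquad
\sin^2(\alpha) \, \radius = \frac{\sigma}{32\sqrt{\numobs}}.
\end{align*}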
Next, we prove the lower bound~\eqref{eqn:main-lower-bound}. For any
integers $i,j \in [\usedim]$ with $i < j$, let $\theta_i$ denote the
$i^{th}$ coordinate of $\theta$, and let $\theta_{i:j}$ denote the
subvector with entries $\{\theta_i, \ldots, \theta_j\}$. Since the
matrix $A$ appears in diagonal blocks of $\Xmat$, we have
\begin{align}
\label{eqn:decompose-prediction-error}
\inf_{\lambda \geq 0} \sup_{\theta\in \Thetahat_\lambda}
\PREDERRSQ{\theta}{\thetastar} & = \inf_{\lambda \geq 0}
\sup_{\theta\in \Thetahat_\lambda} \sum_{i=1}^{\numobs/2}\norms{A
\big( \theta_{(2i-1):2i} - \opt_{(2i-1):2i} \big)}_2^2
\end{align}
and it suffices to lower bound the right-hand side of the above
equation.
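The block-diagonal decomposition above can be checked numerically. The following is a hedged \texttt{numpy} sketch (assuming, as in the caption of Figure~\ref{fig:simulation}, that the prediction error denotes $\frac{1}{\numobs}\ltwos{\Xmat(\theta - \thetastar)}^2$; the $\sqrt{\numobs}$ factors in the blocks cancel the $1/\numobs$ normalization):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # even; only the first n columns matter (the zero block adds nothing)
alpha = np.arcsin(n ** (-0.25))  # any fixed angle works for this identity
A = np.array([[np.cos(alpha), -np.cos(alpha)],
              [np.sin(alpha),  np.sin(alpha)]])
X = np.kron(np.eye(n // 2), np.sqrt(n) * A)

theta, theta_star = rng.normal(size=n), rng.normal(size=n)
delta = theta - theta_star

# (1/n) ||X delta||^2 equals the sum of per-block errors ||A delta_block||^2.
lhs = np.linalg.norm(X @ delta) ** 2 / n
rhs = sum(np.linalg.norm(A @ delta[2 * i:2 * i + 2]) ** 2
          for i in range(n // 2))
assert np.isclose(lhs, rhs)
```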
For the sake of simplicity, we introduce the shorthand $B \defeq \frac{4\sigma}{\sqrt{\numobs}}$,
and define the scalars
\begin{align}
\label{eqn:define-gamma-i}
\gamma_i = \min\{\reg_{2i-1}(B), \reg_{2i}(B)\} \qquad \mbox{for
each $i = 1, \ldots, \numobs/2$.}
\end{align}
Furthermore, we define
\begin{align}
\label{EqnAdefinition}
a_i \defeq \left\{\begin{array}{ll}
(\cos \alpha, \sin \alpha) & \mbox{if }\gamma_i = \reg_{2i-1}(B)\\
(-\cos \alpha, \sin \alpha) & \mbox{if }\gamma_i = \reg_{2i}(B)
\end{array}\right.
\quad \mbox{and} \quad \ensuremath{w'}_i \defeq
\frac{\inprod{a_i}{w_{(2i-1):2i}}}{\sqrt{\numobs}}.
\end{align}
Without loss of generality, we may assume that \mbox{$\gamma_1 = \max
\limits_{i\in[\numobs/2]}\{\gamma_i\}$} and \mbox{$\gamma_i =
\reg_{2i -1}(B)$} for all $i \in [\numobs/2]$. If this condition
does not hold, we can simply re-index the columns of $\Xmat$ to make
these properties hold. Note that when we swap the columns $2i-1$ and
$2i$, the value of $a_i$ doesn't change; it is always associated with
the column whose regularization term is equal to $\gamma_i$.
Finally, we define the regression vector $\thetastar = \begin{bmatrix}
\frac{\radius}{2} & \frac{\radius}{2} & 0 & \cdots & 0
\end{bmatrix} \in \real^\usedim$.
Given these definitions, the following lemma lower bounds each term on
the right-hand side of
equation~\eqref{eqn:decompose-prediction-error}.
\begin{lemma}
\label{LEMMA:DECOMPOSED-TERM-LOWER-BOUND}
For any $\lambda\geq 0$, there is a local minimum $\thetahat_\lambda$
of the objective function $L(\theta;\lambda)$ such that
$\PREDERRSQ{\ensuremath{\thetahat_\lambda}}{\thetastar} \geq \ensuremath{T}_1 + \ensuremath{T}_2$,
where
\begin{subequations}
\begin{align}
\ensuremath{T}_1 & \defeq \indicator\Big[\lambda\gamma_1 > 4B\Big(\sin^2(\alpha)
\radius + \frac{\ltwos{w_{1:2}}}{\sqrt{\numobs}}\Big)\Big]
\sin^2(\alpha)(\radius -2B)_+^2 \quad \mbox{and} \\
\ensuremath{T}_2 & \defeq \sum_{i=2}^{\numobs/2} \indicator\Big[B/2 \leq
\ensuremath{w'}_i \leq B\Big] \Big( \frac{B^2}{4} - \lambda \gamma_1 \Big).
\end{align}
\end{subequations}
Moreover, if the regularizer $\reg$ is convex, then every minimizer
$\thetahat_\lambda$ satisfies this lower bound.
\end{lemma}
\noindent See Appendix~\ref{AppDecomposedOne} for the proof of this
claim.
Using Lemma~\ref{LEMMA:DECOMPOSED-TERM-LOWER-BOUND}, we can now
complete the proof of the theorem. It is convenient to condition on
the event $\Event \defn \{ \ltwo{w_{1:2}} \leq \frac{\sigma}{32} \}$.
Since $\ltwo{w_{1:2}}^2/\sigma^2$ follows a chi-square distribution
with two degrees of freedom, we have $\mprob[\Event] > 0$.
Conditioned on this event, we now consider two separate cases:
\paragraph{Case 1:} First, suppose that
$\lambda\gamma_1 > \sigma^2/\numobs$. In this case, we have
\begin{align*}
4 B \Big \{ \sin^2(\alpha) \radius +
\frac{\ltwos{w_{1:2}}}{\sqrt{\numobs}} \Big \} & \leq
\frac{16\sigma}{\sqrt{\numobs}} \left(
\frac{\sigma}{32\sqrt{\numobs}} + \frac{\sigma}{32\sqrt{\numobs}}
\right) = \frac{\sigma^2}{\numobs} < \lambda\gamma_1,
\end{align*}
and consequently
\begin{subequations}
\begin{align}
\label{EqnCaseOneBound}
\ensuremath{T}_1 + \ensuremath{T}_2 \geq \ensuremath{T}_1 = \sin^2(\alpha)(\radius -2B)_+^2 =
\frac{\sigma}{32\sqrt{\numobs}\radius} \left(\radius -
\frac{4\sigma}{\sqrt{\numobs}}\right)^2 \geq \frac{\sigma
\radius}{128\sqrt{\numobs}},
\end{align}
where the last inequality holds since we have assumed that $\radius
\geq 8 \sigma / \sqrt{\numobs}$.
\paragraph{Case 2:}
Otherwise, we may assume that $\lambda\gamma_1 \leq \sigma^2/\numobs$.
In this case, we have
\begin{align}
\label{EqnCaseTwoBound}
\ensuremath{T}_1 + \ensuremath{T}_2 & \geq \ensuremath{T}_2 = \sum_{i=2}^{\numobs/2}
\indicator\left(B/2 \leq \ensuremath{w'}_i \leq B\right)
\frac{3\sigma^2}{\numobs}.
\end{align}
\end{subequations}
\vspace*{.1in}
Combining the two lower bounds~\eqref{EqnCaseOneBound}
and~\eqref{EqnCaseTwoBound}, we find
\begin{align*}
&\E\left[ \inf_{\lambda \geq 0} \sup_{\theta\in \Thetahat_\lambda}
\frac{1}{\numobs} \ltwos{\Xmat\theta - \Xmat\thetastar}^2 \Big|
\event \right] \\
& \qquad \geq \underbrace{\E\left[ \min\left\{ \frac{\sigma
\radius}{128 \sqrt{\numobs}}, \sum_{i=2}^{\numobs/2}
\indicator\left[B/2 \leq \ensuremath{w'}_i \leq B\right]
\frac{3\sigma^2}{\numobs} \right\}\right]}_{\ensuremath{T}_3},
\end{align*}
where we have used the fact that $\{\ensuremath{w'}_i\}_{i=2}^{\numobs/2}$ are
independent of the event $\Event$. Using the inequality
$\min\left\{a, \sum_{i=2}^{n/2} b_i\right\} \geq \sum_{i=2}^{n/2}
\min\{2 a/n, b_i\}$, valid for nonnegative scalars $a$ and
$\{b_i\}_{i=2}^{\numobs/2}$, we see that
\begin{align*}
\ensuremath{T}_3 & \geq \sum_{i=2}^{\numobs/2} \mprob\left[ \frac{2
\sigma}{\sqrt{\numobs} } \leq \ensuremath{w'}_i \leq \frac{4
\sigma}{\sqrt{\numobs}} \right] \min\left\{ \frac{\sigma
\radius}{128 \numobs \sqrt{\numobs}}, \frac{3\sigma^2}{\numobs}
\right\},
\end{align*}
where we have used the fact that $\Exs\Big[\indicator \big[B/2 \leq
\ensuremath{w'}_i \leq B \big] \Big] = \mprob[B/2 \leq \ensuremath{w'}_i \leq B]$,
and the definition $B \defeq \frac{4 \sigma}{\sqrt{\numobs}}$.
Since $\ensuremath{w'}_i \sim N(0,\sigma^2/\numobs)$, the probability $\mprob[
{2\sigma}/{\sqrt{\numobs}} \leq \ensuremath{w'}_i \leq
{4\sigma}/{\sqrt{\numobs}} ]$ is bounded away from zero
independently of all problem parameters. Hence, there is a universal
constant $c_2 > 0$ such that $\ensuremath{T}_3 \geq c_2 \min\left\{
\frac{\sigma \radius}{\sqrt{\numobs}}, \; \sigma^2 \right\}$. Putting
together the pieces, we have shown that
\begin{align*}
\E\left[ \inf_{\lambda \geq 0} \sup_{\theta\in \Thetahat_\lambda}
\frac{1}{\numobs} \ltwos{\Xmat\theta - \Xmat\thetastar}^2 \right]
&\geq \: \mprob[\event] \, \ensuremath{T}_3 \; \geq c_2' \min \left \{ \frac{
\sigma \radius}{\sqrt{\numobs}}, \: \sigma^2 \right\},
\end{align*}
which completes the proof of the theorem.
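The elementary inequality $\min\{a, \sum_{i=2}^{n/2} b_i\} \geq \sum_{i=2}^{n/2} \min\{2a/n, b_i\}$ used in the argument above (it holds for nonnegative scalars, since the right-hand sum is at most $(n/2-1)\cdot 2a/n \leq a$ and also at most $\sum_i b_i$) can be sanity-checked numerically; this is an illustrative sketch, not part of the original argument:

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    n = 2 * int(rng.integers(2, 50))      # even n >= 4
    a = rng.exponential()                 # nonnegative scalar
    b = rng.exponential(size=n // 2 - 1)  # b_2, ..., b_{n/2}, nonnegative
    lhs = min(a, b.sum())
    rhs = np.minimum(2 * a / n, b).sum()
    # sum of mins <= (n/2 - 1) * (2a/n) <= a, and also <= sum(b)
    assert lhs >= rhs - 1e-12
```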
\subsection{Proof of Corollary~\ref{coro:lower-bound}}
\label{sec:proof-coro-lower-bound}
Here we provide a detailed proof of
inequality~\eqref{eqn:coro-lower-bound}. We note that
inequality~\eqref{eqn:coro-lower-bound-for-convex-penalty} follows by
an essentially identical series of steps, so that we omit the details.
Let $m$ be an even integer and let $X_m \in \R^{m\times m}$ denote the
design matrix constructed in the proof of
Theorem~\ref{theorem:main-lower-bound}. In order to avoid confusion,
we rename the parameters $(n,d,R)$ in the
construction~\eqref{eqn:define-design-matrix-X} by $(n',d',R')$, and
set them equal to
\begin{align}
\label{eqn:m-m-radius}
(n',d',R') \defeq \Big(m, m, \min \Big \{ \frac{R \sqrt{n}}{k
\sqrt{m}}, \frac{\sigma}{16 \gamma \sqrt{m}} \Big\} \Big),
\end{align}
where the quantities $(k,m,n,R,\sigma)$ are defined in the statement
of Corollary~\ref{coro:lower-bound}. Note that $X_m$ is a square
matrix, and according to equation~\eqref{eqn:define-design-matrix-X},
all of its singular values are lower bounded by
$(\frac{m^{1/2}\sigma}{16R'})^{1/2}$. By
equation~\eqref{eqn:m-m-radius}, this quantity is lower bounded by
$\sqrt{m\gamma}$.
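This spectral lower bound on $X_m$ can be checked numerically. The following \texttt{numpy} sketch instantiates the construction~\eqref{eqn:define-design-matrix-X} with $(n', d') = (m, m)$; the values of $\sigma$ and $R'$ are arbitrary illustrative choices (chosen so that $\sin(\alpha) \leq 1$), not the ones from the corollary:

```python
import numpy as np

m, sigma, Rp = 64, 1.0, 0.5   # illustrative values with sin(alpha) <= 1
alpha = np.arcsin(np.sqrt(sigma) / (m ** 0.25 * np.sqrt(32 * Rp)))
A = np.array([[np.cos(alpha), -np.cos(alpha)],
              [np.sin(alpha),  np.sin(alpha)]])
X_m = np.kron(np.eye(m // 2), np.sqrt(m) * A)

# The singular values of sqrt(m) A are sqrt(2m) cos(alpha) and
# sqrt(2m) sin(alpha); the smaller one equals (sqrt(m) sigma / (16 R'))^(1/2).
smin = np.linalg.svd(X_m, compute_uv=False).min()
assert np.isclose(smin, np.sqrt(np.sqrt(m) * sigma / (16 * Rp)))
```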
Using the matrix $X_m$ as a building block, we now construct a larger
design matrix \mbox{$X \in \R^{n \times n}$} that we then use to prove
the corollary. Let $m$ be the greatest integer divisible by two such
that $k m \leq n$. By the assumption that $n \geq 4 k$, we have $m
\geq 4$. Consequently, we may construct the $n \times n$ dimensional
matrix
\begin{align}
\label{eqn:define-coro-design-matrix-X}
\Xmat \defeq \mbox{blkdiag} \Big \{\underbrace{ \sqrt{\numobs/m} X_m,
\ldots, \sqrt{\numobs/m} X_m}_{\mbox{$k$ copies}}, \sqrt{n} I_{n-km}
\Big\} \in \real^{\numobs\times \numobs},
\end{align}
where $I_{n-km}$ is the $(n-km)$-dimensional identity matrix. It is
easy to verify that the matrix $X$ satisfies the column normalization
condition. Since all singular values of $X_m$ are lower bounded by
$\sqrt{m\gamma}$, we are guaranteed that all singular values of $X$ are
lower bounded by $\sqrt{n\gamma}$. Thus, the matrix $X$ satisfies the
$\gamma$-RE condition.
It remains to prove a lower bound on the prediction error, and in
order to do so, it is helpful to introduce some shorthand
notation. Given an arbitrary vector $u \in \R^n$, for each integer $i
\in \{1, \ldots, k\}$, we let $u_{(i)} \in \real^m$ denote the
sub-vector consisting of the $((i-1)m+1)$-th to the $(im)$-th elements
of vector $u$, and we let $u_{(k+1)}$ denote the sub-vector consisting
of the last $n-km$ elements. We also introduce similar notation for
the function $\rho(x) = \rho_1(x_1) + \dots +\rho_n(x_n)$;
specifically, for each $i \in \{1, \ldots, k\}$, we define the
function $\rho_{(i)}: \R^m\to \R$ via $\rho_{(i)}(\theta) \defeq
\sum_{j=1}^m \rho_{(i-1)m+j}(\theta_j)$.
Using this notation, we may rewrite the cost function as:
\begin{align*}
L(\theta;\lambda) = \frac{1}{n}\sum_{i=1}^k\Big(\ltwos{\sqrt{n/m}X_m
\theta_{(i)} - y_{(i)}}^2 + n \lambda \rho_{(i)}(\theta_{(i)}) \Big)
+ h(\theta_{(k+1)}),
\end{align*}
where $h$ is a function that only depends on $\theta_{(k+1)}$. If we
define $\theta'_{(i)} \defeq \sqrt{n/m}\theta_{(i)}$ and
$\rho'_{(i)}(\theta) \defeq \frac{n}{m}\rho_{(i)}(\sqrt{m/n}\theta)$,
then substituting them into the above expression, the cost function
can be rewritten as
\begin{align*}
G(\theta'; \lambda) & \defeq \frac{m}{n} \sum_{i=1}^k\Big(\frac{1}{m}
\ltwos{X_m \theta'_{(i)} - y_{(i)}}^2 + \lambda
\rho'_{(i)}(\theta'_{(i)}) \Big) + h(\sqrt{m/n} \; \theta'_{(k+1)}).
\end{align*}
Note that if the vector $\thetahat$ is a local minimum of the function
$\theta \mapsto L(\theta;\lambda)$, then the rescaled vector
$\thetahat' \defeq \sqrt{n/m}\;\thetahat$ is a local minimum of the
function $\theta' \mapsto G(\theta';\lambda)$. Consequently, the
sub-vector $\thetahat'_{(i)}$ must be a local minimum of the function
\begin{align}
\label{eqn:coro-split-local-minimum}
\frac{1}{m}\ltwos{X_m \theta'_{(i)} - y_{(i)}}^2 +
\rho'_{(i)}(\theta'_{(i)}).
\end{align}
Thus, the sub-vector $\thetahat'_{(i)}$ is the solution of a
regularized sparse linear regression problem with design matrix
$X_m$.
Defining the rescaled true regression vector $(\thetastar)' \defeq
\sqrt{n/m}\;\thetastar$, we can then write the prediction error as
\begin{align}
\frac{1}{n} \ltwos{X(\thetahat - \thetastar)}^2 & =
\frac{1}{n}\sum_{i=1}^k \Big(\ltwos{X_m(\thetahat'_{(i)} -
(\thetastar)'_{(i)})}^2 \Big) + \ltwos{\thetahat_{(k+1)} -
\thetastar_{(k+1)}}^2 \nonumber \\
\label{eqn:coro-split-prediction-error}
& \geq \frac{m}{n}\sum_{i=1}^k \Big( \frac{1}{m}
\ltwos{X_m(\thetahat'_{(i)} - (\thetastar)'_{(i)})}^2 \Big).
\end{align}
Consequently, the overall prediction error is lower bounded by a
scaled sum of the prediction errors associated with the design
matrix~$X_m$. Moreover, each term $\frac{1}{m}
\ltwos{X_m(\thetahat'_{(i)} - (\thetastar)'_{(i)})}^2$ can be bounded
by Theorem~\ref{theorem:main-lower-bound}.
More precisely, let $\mathcal{Q}(X,2k,R)$ denote the left-hand side of
inequality~\eqref{eqn:coro-lower-bound}. The above analysis shows that
the sparse linear regression problem on the design matrix $X$ and the
constraint $\thetastar\in \Ball_0(2\kdim)\cap \Ball_1(\radius)$ can be
decomposed into smaller-scale problems on the design matrix $X_m$ and
constraints on the scaled vector $(\thetastar)'$. By the rescaled
definition of $(\thetastar)'$, the constraint $\thetastar\in
\Ball_0(2\kdim)\cap \Ball_1(\radius)$ holds if and only if
$(\thetastar)' \in \Ball_0(2\kdim)\cap \Ball_1(\sqrt{n/m}\radius)$.
Recalling the definition of the radius $R'$ from
equation~\eqref{eqn:m-m-radius}, we can ensure that $(\thetastar)' \in
\Ball_0(2\kdim)\cap \Ball_1(\sqrt{n/m}\radius)$ by requiring that
$(\thetastar)'_{(i)}\in \Ball_0(2)\cap \Ball_1(R')$ for each
\mbox{index $i \in \{1,\dots, k\}$.} Combining
expressions~\eqref{eqn:coro-split-local-minimum}
and~\eqref{eqn:coro-split-prediction-error}, the quantity
$\mathcal{Q}(X,2k,R)$ can be lower bounded by the sum
\begin{subequations}
\begin{align}
\label{eqn:bound-q-X2kR}
\mathcal{Q}(X,2k,R) \geq \frac{m}{n}\sum_{i=1}^k
\mathcal{Q}(X_m,2,R').
\end{align}
By Theorem~\ref{theorem:main-lower-bound}, we have
\begin{align}
\label{eqn:bound-q-Xm2Rp}
\mathcal{Q}(X_m,2,R') \geq c \min \left\{ \sigma^2,
\frac{\sigma\radius'}{\sqrt{m}}\right\} = c \min \left\{\sigma^2,
\frac{\sigma^2}{16\gamma m}, \frac{\sigma \radius \sqrt{n}}{k m}
\right\},
\end{align}
\end{subequations}
where the second equality follows from our choice of $R'$ from
equation~\eqref{eqn:m-m-radius}. Combining the lower
bounds~\eqref{eqn:bound-q-X2kR} and~\eqref{eqn:bound-q-Xm2Rp}
completes the proof.
\subsection{Proof of Theorem~\ref{theorem:gradient-descent-lower-bound}}
The proof of Theorem~\ref{theorem:gradient-descent-lower-bound} is
conceptually similar to the proof of
Theorem~\ref{theorem:main-lower-bound}, but differs in some key
details. We begin with the altered definitions
\begin{align*}
\alpha \defeq \arcsin\left( \frac{\sqrt{\sigma}}{\numobs^{1/4}
\sqrt{\altradius}} \right) \quad \mbox{and} \quad B \defeq
\frac{\sigma}{4\sqrt{\numobs}}, \qquad \mbox{where } \altradius
\defeq \min\{\radius,\sigma\}.
\end{align*}
Given our assumption $R\geq \sigma/\sqrt{\numobs}$, note that we are
guaranteed that the inequality \mbox{$2B = \sigma/(2\sqrt{\numobs})
\leq r/2$} holds. We then define the matrix $A\in \R^{2\times2}$
and the matrix $\Xmat\in \R^{\numobs\times \usedim}$ by
equations~\eqref{eqn:define-matrix-A}
and~\eqref{eqn:define-design-matrix-X}.
\subsubsection{Proof of part (a)}
Let $\{\theta^t\}_{t=0}^\infty$ be the sequence of iterates generated
by equation~\eqref{eqn:local-descent-update-formula}. We proceed via
proof by contradiction, assuming that the sequence does not terminate
finitely, and then deriving a contradiction. We begin with a lemma.
\begin{lemma}
\label{LemAuxiliary}
If the sequence of iterates $\{\theta^t\}_{t=0}^\infty$ is not
finitely convergent, then it is unbounded.
\end{lemma}
We defer the proof of this claim to the end of this section. Based on
Lemma~\ref{LemAuxiliary}, it suffices to show that, in fact, the
sequence $\{\theta^t\}_{t=0}^\infty$ is bounded. Partitioning the
full vector as \mbox{$\theta \defeq \big(\theta_{1:n}, \theta_{n+1:d}
\big)$,} we control the two sequences $\{\theta^t_{1:n}
\}_{t=0}^\infty$ and $\{ \theta^t_{n+1:d} \}_{t=0}^\infty$. \\
Beginning with the former sequence, notice that the objective function
can be written in the form
\begin{align*}
L(\theta;\lambda) = \frac{1}{\numobs} \ltwos{y -
X_{1:\numobs}\theta_{1:n}}^2 + \sum_{i=1}^\usedim \lambda
\rho_i(\theta_i),
\end{align*}
where $\Xmat_{1:\numobs}$ represents the first $\numobs$ columns of
matrix $\Xmat$. The definitions~\eqref{eqn:define-matrix-A}
and~\eqref{eqn:define-design-matrix-X} guarantee that the Gram matrix
$X_{1:n}^T X_{1:n}$ is positive definite, which implies that the
quadratic function \mbox{$\theta_{1:n} \mapsto \ltwos{y -
X_{1:n}\theta_{1:n}}^2$} is strongly convex. Thus, if the
sequence $\{\theta_{1:n}^t\}_{t=0}^\infty$ were unbounded, then the
associated cost sequence $\{L(\theta^t; \lambda)\}_{t=0}^\infty$ would
also be unbounded. But this is not possible since $L(\theta^t;
\lambda) \leq L(\theta^0; \lambda)$ for all iterations $t = 1, 2,
\ldots$. Consequently, we are guaranteed that the sequence
$\{\theta^t_{1:n}\}_{t=0}^\infty$ must be bounded.
It remains to control the sequence
$\{\theta_{n+1:d}^t\}_{t=0}^\infty$. We claim that for any $i \in
\{n+1, \ldots, \usedim\}$, the sequence
$\{|\theta_i^t|\}_{t=0}^\infty$ is non-increasing, which implies the
boundedness condition. Proceeding via proof by contradiction, suppose
that $|\theta^t_i| < |\theta^{t+1}_i|$ for some index $i \in \{n+1,
\ldots, \usedim \}$ and iteration number $t \geq 0$. Under this
condition, define the vector
\begin{align*}
\widetilde \theta^{t+1}_j \defeq \begin{cases} \theta^{t+1}_j
& \mbox{if $j\neq i$}\\ \theta^{t}_j & \mbox{if $j=
i$.} \end{cases}
\end{align*}
Since $\rho_j$ is a monotonically non-decreasing function of $|x|$, we
are guaranteed that $L(\widetilde \theta^{t+1};\lambda) \leq
L(\theta^{t+1};\lambda)$, which implies that $\widetilde \theta^{t+1}$
is also a constrained minimum point over the ball $\ensuremath{\Ball_2(\stepsize; \theta^t)}$. In
addition, we have
\begin{align*}
\ltwos{\widetilde \theta^{t+1} - \theta^{t}}^2 = \ltwos{\theta^{t+1} -
\theta^{t}}^2 - |\theta^t_i - \theta^{t+1}_i|^2 < \eta^2,
\end{align*}
so that $\widetilde \theta^{t+1}$ is strictly closer to $\theta^t$.
This contradicts the specification of the algorithm, in that it
chooses the minimum closest to $\theta^t$.
\paragraph{Proof of Lemma~\ref{LemAuxiliary}:}
The final remaining step is to prove Lemma~\ref{LemAuxiliary}. We
first claim that $\ltwos{\theta^s - \theta^t} \geq \eta$ for all
pairs $s < t$. If not, we could find some pair $s < t$ such that
$\ltwos{\theta^s - \theta^t} < \eta$. But since $t > s$, we are
guaranteed that $L(\theta^t;\lambda) \leq L(\theta^{s+1};\lambda)$.
Since $\theta^{s+1}$ is a global minimum over the ball
$\Ball_2(\eta; \theta^s)$ and $\ltwos{\theta^s - \theta^t} <
\eta$, the point $\theta^t$ is also a global minimum, and this
contradicts the definition of the algorithm (since it always chooses
the constrained global minimum closest to the current iterate).
Using this property, we now show that $\{\theta^t\}_{t=0}^\infty$ is
unbounded. For each \mbox{iteration} \mbox{$t = 0, 1, 2 \ldots$,} we use
$\Ball^t = \Ball_2(\eta/3; \theta^t)$ to denote the Euclidean
ball of radius $\eta/3$ centered at $\theta^t$. Since
$\ltwos{\theta^s - \theta^t} \geq \eta$ for all $s \neq t$, the
balls $\{\Ball^t\}_{t=0}^\infty$ are all disjoint, and hence there is
a numerical constant $C > 0$ such that for each $T \geq 1$, we have
\begin{align*}
{\rm vol}\Big( \cup_{t=0}^T \Ball^t \Big) = \sum_{t=0}^T {\rm
vol}(\Ball^t) = C \sum_{t=0}^T\eta^d.
\end{align*}
Since this volume diverges as $T \rightarrow \infty$, we conclude that
the set $\Ball \defn \cup_{t=0}^\infty \Ball^t$ must be unbounded. By
construction, any point in $\Ball$ is within $\eta/3$ of some
element of the sequence $\{\theta^t\}_{t=0}^\infty$, so this sequence
must be unbounded, as claimed.
\subsubsection{Proof of part (b)}
We now prove a lower bound on the prediction error corresponding to the
local minimum to which the algorithm converges, as claimed in part (b)
of the theorem statement. In order to do so, we begin by introducing
the shorthand notation
\begin{align}
\label{eqn:define-gamma-i-descent}
\gamma_i = \min\{\sup_{u\in (0,B]}\reg'_{2i-1}(u), \sup_{u\in (0,B]}
\reg'_{2i}(u)\} \qquad \mbox{for each $i = 1, \ldots, \numobs/2$}.
\end{align}
Then we define the quantities $a_i$ and $\ensuremath{w'}_i$ by
equations~\eqref{EqnAdefinition}. Similar to the proof of
Theorem~\ref{theorem:main-lower-bound}, we assume (without loss of
generality, re-indexing as needed) that $\gamma_i = \sup_{u\in
(0,B]}\reg'_{2i-1}(u)$ and that $\gamma_1 = \max_{i\in[\numobs/2]}
\{\gamma_i\}$.
Consider the regression vector $\thetastar \defn \begin{bmatrix}
\frac{r}{2} & \frac{r}{2} & 0 & \cdots & 0
\end{bmatrix}$.
Since the matrix $A$ appears in diagonal blocks of $\Xmat$, the
algorithm's output $\ensuremath{\widehat{\theta}}$ has error
\begin{align}
\label{eqn:gradient-decomposable-m-estimator}
\inf_{\lambda\geq 0} \PREDERRSQ{\ensuremath{\widehat{\theta}}}{\thetastar} & =
\inf_{\lambda\geq 0} \sum_{i=1}^{\numobs/2} \norms{A
\big(\ensuremath{\widehat{\theta}}_{(2i-1):2i} - \opt_{(2i-1):2i} \big)}_2^2.
\end{align}
Given the random initialization $\thetait{0}$, we define the events
\begin{align*}
\Event_0 \defeq \Big \{ \max\{ \theta_{1}^0, \theta_{2}^0 \} \leq 0\Big\}
\quad \mbox{and} \quad \event_1 \defeq \Big\{
\lambda\gamma_1 \geq 2\sin^2(\alpha)\altradius +
\frac{2 \ltwos{w_{1:2}}}{\sqrt{\numobs}} + 3B \Big \},
\end{align*}
as well as the (random) subsets
\begin{align*}
\Sset_1 & \defeq \Big \{ i \in \big \{ 2, \ldots, \numobs/2 \big \} \,
\mid \, \lambda \gamma_1 \leq 2\ensuremath{w'}_i - 4 B \Big \}, \quad
\mbox{and} \\
\Sset_2 & \defeq \Big\{ i \in \{2, \ldots, \numobs/2 \} \, \mid \,
\mbox{$2\sin^2(\alpha)\altradius + \frac{2
\ltwos{w_{1:2}}}{\sqrt{\numobs}} + 3B \leq 2\ensuremath{w'}_i - 4 B$}
\Big\}.
\end{align*}
Here the reader should recall the definition of $\ensuremath{w'}$ from
equation~\eqref{EqnAdefinition}.
Given these definitions, the following lemma provides lower bounds on
the decomposition~\eqref{eqn:gradient-decomposable-m-estimator} for
the vector $\ensuremath{\widehat{\theta}}$ after convergence.
\begin{lemma}
\label{LEMMA:GRADIENT-DESCENT-TERMWISE-LOWER-BOUND}
\begin{enumerate}[(a)]
\item If $\ensuremath{\Event_0 \cap \Event_1}$ holds, then $\norms{A \, \big(\ensuremath{\widehat{\theta}}_{1:2} -
\opt_{1:2}\big)}_2^2 \geq \frac{\sigma \altradius}{4\sqrt{n}}$.
\item
For any index $i \in \Sset_1$, we have
$\norms{A \, \big(\ensuremath{\widehat{\theta}}_{2i-1:2i} - \opt_{2i-1:2i} \big)}_2^2 \geq
\frac{\sigma \altradius}{8\numobs^{3/2}}$.
\item We have $\mprob[\event_0] = 1/4$, and moreover $\min
\limits_{i \in \{2, \ldots, \numobs/2 \}} \mprob[i\in \ensuremath{\Sset_2}]
\geq c$ for some numerical constant $c > 0$.
\end{enumerate}
\end{lemma}
\noindent See Appendix~\ref{AppDecomposedTwo} for the proof of this
claim. \\
Conditioned on event $\event_0$, for any index $i \in \Sset_2$,
either the event $\ensuremath{\Event_0 \cap \Event_1}$ holds, or we have
\begin{align*}
\lambda \gamma_1 < 2\sin^2(\alpha)\altradius + \frac{2
\ltwos{w_{1:2}}}{\sqrt{\numobs}} + 3 B \leq 2\ensuremath{w'}_i - 4 B,
\end{align*}
which means that $i\in
\Sset_1$ holds. Applying
Lemma~\ref{LEMMA:GRADIENT-DESCENT-TERMWISE-LOWER-BOUND} yields the
lower bound
\begin{align*}
\inf_{\lambda\geq 0} \sum_{i=1}^{\numobs/2} \norms{A
\ensuremath{\widehat{\theta}}_{(2i-1):2i} - A \opt_{(2i-1):2i}}_2^2 &\geq
\indicator[\Event_0] \; \min\left\{ \frac{\sigma \altradius}{4
\sqrt{n}}, \: \; \frac{\sigma \altradius}{8\numobs^{3/2}} \;
\sum_{i=2}^{\numobs/2} \indicator[i \in \ensuremath{\Sset_2}] \right\}\\ &=
\indicator[\Event_0]\; \frac{\sigma \altradius}{8\numobs^{3/2}} \;
\sum_{i=2}^{\numobs/2} \indicator[i \in \ensuremath{\Sset_2}],
\end{align*}
where the last equality holds since $\frac{\sigma
\altradius}{8\numobs^{3/2}} \; \sum_{i=2}^{\numobs/2} \indicator[i
\in \Sset_2] \leq \frac{\sigma \altradius}{8\numobs^{3/2}}\;(n/2-1)
< \frac{\sigma \altradius}{4 \sqrt{n}}$. Since the event $\event_0$
is independent of the event $\{i \in \ensuremath{\Sset_2}, i = 2, \ldots,
\numobs/2 \}$, we have
\begin{align*}
\Exs \Big[ \inf_{\lambda\geq 0} \sum_{i=1}^{\numobs/2} \norms{A
\ensuremath{\widehat{\theta}}_{(2i-1):2i} - A \opt_{(2i-1):2i}}_2^2 \Big] & \geq
\mprob[\event_0] \; \frac{\sigma \altradius}{8\numobs^{3/2}}
\sum_{i=2}^{\numobs/2} \mprob[i \in \ensuremath{\Sset_2}] \\
& \stackrel{(i)}{\geq} \frac{1}{4} \; \frac{\sigma
\altradius}{8\numobs^{3/2}} \;c(n/2-1)\\
& \geq c' \min \{\radius, \sigma \} \, \frac{\sigma}{\sqrt{\numobs}},
\end{align*}
where step (i) uses the equality $\mprob[\event_0] = 1/4$ and the
lower bound $\mprob[i\in \ensuremath{\Sset_2}] \geq c$, both from
Lemma~\ref{LEMMA:GRADIENT-DESCENT-TERMWISE-LOWER-BOUND}. Combined
with the decomposition~\eqref{eqn:gradient-decomposable-m-estimator},
the proof is complete.
\section{Discussion}
\label{SecDiscussion}
In this paper, we have demonstrated a fundamental gap in sparse linear
regression: the best prediction risk achieved by a class of
$M$-estimators based on coordinate-wise separable regularizers is
strictly larger than the classical minimax prediction risk,
achieved for instance by minimization over the $\ell_0$-ball. This gap
applies to a range of methods used in practice, including the Lasso in
its ordinary and weighted forms, as well as estimators based on
nonconvex penalties such as the MCP and SCAD penalties.
Several open questions remain, and we discuss a few of them here.
When the penalty function $\rho$ is convex, the M-estimator minimizing
function~\eqref{EqnLoss} can be understood as a particular convex
relaxation of the $\ell_0$-based
estimator~\eqref{EqnDefnEllZeroEstimator}. It would be interesting to
consider other forms of convex relaxations for the $\ell_0$-based
problem. For instance, Pilanci et al.~\cite{PilWaiElg15} show how a
broad class of $\ell_0$-regularized problems can be reformulated
exactly as optimization problems involving convex functions in Boolean
variables. This exact reformulation allows for the direct application
of many standard hierarchies for Boolean polynomial programming,
including the Lasserre hierarchy~\cite{lasserre2001explicit} as well
as the Sherali-Adams hierarchy~\cite{Sherali90}. Other relaxations
are possible, including those that are based on introducing auxiliary
variables for the pairwise interactions (e.g., $\gamma_{ij} = \theta_i
\theta_j$), and so incorporating these constraints as polynomials in
the constraint set. We conjecture that for any fixed natural number
$t$, if the $t$-th level Lasserre (or Sherali-Adams) relaxation is
applied to such a reformulation, it still does not yield an estimator
that achieves the fast rate~\eqref{eqn:l0-optimal-rate}. Since a
$t^{th}$-level relaxation involves $\order(\usedim^t)$ variables, this
would imply that these hierarchies do not contain polynomial-time
algorithms that achieve the classical minimax risk. Proving or
disproving this conjecture remains an open problem.
Finally, when the penalty function $\rho$ is concave, concurrent work
by Ge et al.~\cite{ge2015strong} shows that finding the global minimum
of the loss function~\eqref{EqnLoss} is strongly NP-hard. This result
implies that no polynomial-time algorithm computes the global minimum
unless ${\bf NP}={\bf P}$. The result given here is complementary in
nature: it shows that bad local minima exist, and that local descent
methods converge to these bad local minima. It would be interesting to
extend this algorithmic lower bound to a broader class of first-order
methods. For instance, we suspect that any algorithm that relies on an
oracle giving first-order information will inevitably converge to a
bad local minimum for a broad class of random initializations.
\subsection*{Acknowledgements}
This work was partially supported by grants NSF grant DMS-1107000, NSF
grant CIF-31712-23800, Air Force Office of Scientific Research Grant
AFOSR-FA9550-14-1-0016, and Office of Naval Research MURI
N00014-11-1-0688.
% Source: https://arxiv.org/abs/1109.0322

\title{Bayesian nonparametric multivariate convex regression}

\begin{abstract}
In many applications, such as economics, operations research and reinforcement learning, one often needs to estimate a multivariate regression function $f$ subject to a convexity constraint. For example, in sequential decision processes the value of a state under optimal subsequent decisions may be known to be convex or concave. We propose a new Bayesian nonparametric multivariate approach based on characterizing the unknown regression function as the maximum of a random collection of unknown hyperplanes. This specification induces a prior with large support in a Kullback--Leibler sense on the space of convex functions, while also leading to strong posterior consistency. Although we assume that $f$ is defined over $\mathbb{R}^p$, we show that this model has a convergence rate of $\log(n)^{-1}\, n^{-1/(d+2)}$ under the empirical $L_2$ norm when $f$ actually maps a $d$-dimensional linear subspace to $\mathbb{R}$. We design an efficient reversible jump MCMC algorithm for posterior computation and demonstrate the methods through application to value function approximation.
\end{abstract}

\section{Introduction}
Consider the problem of estimating the function $f$ for the model
$$y = f(\mathbf{x}) + \epsilon,$$
where $\mathbf{x} \in \mathcal{X} \subset \mathbb{R}^p$, $y \in \mathbb{R}$, $f:\mathbb{R}^p \rightarrow \mathbb{R}$ is a mean regression function and $\epsilon\sim N(0,\sigma^2).$ Given the observations $(\mathbf{x}_1,y_1),\dots,(\mathbf{x}_n,y_n)$, we would like to estimate $f$ subject to the convexity constraint,
\begin{equation}\label{eq:convexity}
f(\mathbf{x}_2) \geq f(\mathbf{x}_1) + \nabla f(\mathbf{x}_1)^T (\mathbf{x}_2-\mathbf{x}_1),
\end{equation} for every $\mathbf{x}_1,\mathbf{x}_2 \in \mathcal{X}$, where $\nabla f(\mathbf{x})$ is the gradient of $f$ at $\mathbf{x}$. This is called the convex regression problem. Convex regression can easily be modified to allow concave regression by multiplying all of the values by negative one.
Convex regression problems are common in economics, operations research and reinforcement learning. In economics, production functions~\citep{Sk78} and consumer preferences~\citep{MePr68} are often convex, while in operations research and reinforcement learning, value functions for stochastic optimization problems can be convex~\citep{ShDeRu09}. If a problem is known to be convex, a convex regression estimate provides advantages over an unrestricted estimate. First, convexity is a powerful regularizer: it places strong conditions on the derivatives---and hence smoothness---of a function. Convexity constraints can substantially reduce overfitting and lead to more accurate predictions. Second, maintaining convexity allows the use of convex optimization solvers when the regression estimate is used in an objective function of an optimization problem.
Multivariate convex regression has received relatively little attention in the literature. The oldest method is the least squares estimator (LSE)~\citep{Hi54,Dy83,BoVa04,SeSe11},
\begin{align}\label{eq:old}
\min_{\hat{y}_{1:n}, \mathbf{g}_{1:n}} & \sum_{i=1}^n \left(y_i - \hat{y}_i\right)^2 \\\notag
\mathrm{subject \ to \ } & \hat{y}_j \geq \hat{y}_i + \mathbf{g}_i^T(\mathbf{x}_j - \mathbf{x}_i), \ \ \ i,j = 1,\dots,n.
\end{align}The resulting function is piecewise linear, generated by taking the maximum over the supporting hyperplanes, $\mathbf{g}_{1:n}$. However, Equation (\ref{eq:old}) has $n^2$ constraints, making solution infeasible for more than a few thousand observations. Recently, there has been interest in multivariate convex regression beyond the LSE. \citet{HePa09} proposed a method that generates a regression estimator via a weighted kernel estimate subject to conditions on the Hessian of the estimator; solutions are found using sequential quadratic programming. Convexity is guaranteed only at points where the Hessian condition is enforced and the method does not scale well to high dimensions or large datasets. \citet{HaDu11b} proposed a method, Convex Adaptive Partitioning (CAP), that adaptively splits the dataset and fits linear estimates within each of the subsets. Like the least squares estimator, the CAP estimator is formed by taking the maximum over hyperplanes; unlike previous methods, it produces a sparse estimator that scales well to large datasets and large numbers of covariates. However, it has theoretical guarantees only in the univariate case.
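The quadratic growth of the constraint set in Equation (\ref{eq:old}) can be made concrete with a short sketch (illustrative pure-Python code on hypothetical data, not part of any cited implementation) that checks all $n^2$ supporting-hyperplane constraints for a candidate fit:

```python
def count_violations(x, y_hat, g, tol=1e-9):
    """Count violated LSE constraints y_hat[j] >= y_hat[i] + g[i].(x[j]-x[i])."""
    n = len(x)
    bad = 0
    for i in range(n):          # the double loop is exactly the n^2 constraint set
        for j in range(n):
            rhs = y_hat[i] + sum(gi * (xj - xi)
                                 for gi, xj, xi in zip(g[i], x[j], x[i]))
            if y_hat[j] < rhs - tol:
                bad += 1
    return bad

# Exact fit of the convex function f(x) = x^2 with its true subgradients g = 2x
xs = [(-1.0,), (0.0,), (1.0,), (2.0,)]
y_hat = [xi[0] ** 2 for xi in xs]
g = [(2.0 * xi[0],) for xi in xs]
print(count_violations(xs, y_hat, g))  # 0: all constraints hold
```

An exact fit of a convex function with its true subgradients satisfies every constraint; the same double loop shows why the program becomes infeasible to solve beyond a few thousand observations.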
Piecewise planar models, like the LSE and CAP, are poor when used in the objective function of an optimization problem. The minima of piecewise planar functions occur at a vertex where $p+1$ hyperplanes intersect. The location of vertices is sensitive to the number of hyperplanes and the hyperplane parameters. The parameters are in turn sensitive to noise and observation design. Bayesian models could reduce these problems: prior distributions on parameters reduce design sensitivity and model averaging produces a smoother estimate.
Bayesian models have been used for convex regression, but only in the univariate case. In this setting, methods rely on the ordering implicit to the real line: a positive semi-definite Hessian translates into an increasing derivative function in one dimension. \citet{RaLaSm93} discretized the covariate space and placed a Dirichlet prior over the normalized integral of the slope parameters between those points. \citet{ChChLi07} used Bernstein polynomials as a basis by placing a prior on the number of polynomials and then sampling from a restricted set of coefficients. \citet{ShWaDa11} used fixed knot and free-knot splines with a prior that placed an order restriction on the coefficients for each basis function. In a single dimension, Bayesian convex regression is closely related to Bayesian isotonic regression~\citep{LaMo95,NeDu04,ShSaWa09}. In multiple dimensions, however, convexity constraints become combinatorially difficult to enforce through projections.
We take an entirely different approach to modeling convex functions. Instead of creating an estimator based on a set of restricted parameters or projecting an unconstrained estimate back into the space of convex functions, we place a prior over a smaller set of functions that are guaranteed to be convex: piecewise planar functions. The number of hyperplanes and their parameters are random; we define the function to be the maximum over the set of hyperplanes. We efficiently sample from the posterior distribution with reversible jump Markov chain Monte Carlo (RJMCMC). We call this approach Multivariate Bayesian Convex Regression (MBCR). Although the set of piecewise planar functions does not include all convex functions, it is dense over that space and we show strong ($L_1$) consistency for MBCR. If $f(\mathbf{x}) = g(\mathbf{A}\mathbf{x})$ for some $d \times p$ matrix $\mathbf{A} $ and function $g$, we show convergence rates for MBCR with respect to the $L_2$ norm to be $\log(n)^{-1} n^{-1/(d+2)}$. The dimension of the linear subspace, $d$, determines the convergence rate, not the dimension of the full space, $p$.
In numerical experiments, we show that MBCR produces estimates that are competitive with LSE and CAP in terms of traditional metrics, like mean squared error, and can outperform them in objective function approximation. Through examples on toy problems, we show that MBCR has the potential to produce regression estimates that are much better suited to objective function approximation than piecewise planar methods.
\section{Multivariate Bayesian Convex Regression}\label{sec:model}
Convexity is defined by the set of supporting hyperplane constraints in Equation (\ref{eq:convexity}): the supporting hyperplane of $f$ at $\mathbf{x}_1$, evaluated at any other point $\mathbf{x}_2$, lies at or below $f(\mathbf{x}_2)$. For twice-differentiable $f$, this is equivalent to $f$ having a positive semi-definite Hessian. In multiple dimensions, it is difficult to project onto the set of functions that satisfy these constraints. Instead of placing a prior over an unconstrained set of functions and then restricting the parameters to meet convexity conditions, we place a prior over a smaller set of functions that automatically meet the conditions. Specifically, for all $\mathbf{x}$ in a compact set $\mathcal{X}$, we place a prior over all functions that are the maximum over a set of $K$ hyperplanes, $(\alpha_1,\beta_1),\dots,(\alpha_K,\beta_K)\in \mathbb{R}^{p+1}$,
\begin{equation}\label{eq:max}
f(\mathbf{x}) = \max_{k \in \{1,\dots,K\} } \alpha_k + \beta^T_k \mathbf{x},
\end{equation}where $K$ is unknown. This set of functions can approximate any convex function $f$ arbitrarily well while maintaining straightforward inference.
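As an illustration of Equation (\ref{eq:max}), the following sketch (with hyperplanes chosen purely for the example) evaluates a max-of-hyperplanes function and verifies midpoint convexity numerically:

```python
def f_max(x, hyperplanes):
    """Evaluate f(x) = max_k alpha_k + beta_k . x over a list of (alpha, beta)."""
    return max(a + sum(bj * xj for bj, xj in zip(b, x)) for a, b in hyperplanes)

# Three hyperplanes whose maximum approximates f(x) = |x| in one dimension
planes = [(0.0, (-1.0,)), (0.0, (1.0,)), (-0.5, (0.0,))]

# Midpoint convexity check: f((u+v)/2) <= (f(u) + f(v)) / 2
u, v = (-2.0,), (3.0,)
mid = ((u[0] + v[0]) / 2,)
print(f_max(mid, planes) <= 0.5 * (f_max(u, planes) + f_max(v, planes)))  # True
```

The maximum of affine functions is convex by construction, so the check holds for any choice of hyperplanes, not just these.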
Assuming $f(\mathbf{x})$ follows Equation (\ref{eq:max}), we let
\begin{equation}\label{eq:fTheta}
Y_i = f(\mathbf{x}_i ; \theta) + \epsilon_i , \quad \epsilon_i \sim N(0,\sigma^2),
\end{equation}where the unknown parameters are $$\theta =\{ K, \alpha = (\alpha_1,\dots,\alpha_K)^T, \beta = (\beta_1,\dots, \beta_K)^T, \sigma^2 \}.$$ The prior $\Pi$ over $\{K,\alpha,\beta, \sigma^2\}$ is factored as,
\begin{equation}\notag
\Pi(K,\alpha,\beta,\sigma^2) = \Pi_{\sigma}(\sigma^2) \Pi_K(K) \prod_{k=1}^K \Pi_{\theta}(\alpha_{k},\beta_k).
\end{equation}The prior for the variance parameter, $\sigma^2$, is defined as $\Pi_\sigma$, and the prior for the number of hyperplanes, $K$, is $\Pi_K.$ The hyperplane parameters, $\theta_k = (\alpha_k,\beta_k)$, are given the prior $\Pi_{\theta}.$ These yield the model,
\begin{align}\notag
K & \sim \Pi_K, &
\sigma^2 & \sim \Pi_{\sigma}, &
\theta_k \, | \, K & \sim \Pi_{\theta}, & k = 1,\dots,K.
\end{align}
MBCR is similar to Bayesian adaptive regression spline (BARS) models~\citep{DeMaSm98,DiGeKa01,ShSaWa09,ShWaDa11} in that the method places a prior over a finite set of locally parametric models, with the prior accommodating uncertainty in the number of models, their locations and their parameters. Indeed, we use the same inference method: reversible jump Markov chain Monte Carlo (RJMCMC). In both cases, RJMCMC works by adaptively adding and removing local models while updating the model-specific parameters. However, while BARS explicitly introduces random changepoints or knots within a region, in MBCR regions are implicitly defined as corresponding to locations across which a particular hyperplane dominates. Let $\{A_1,\dots,A_K\}$ be a partition of $\mathcal{X}$ where $$A_k = \{ \mathbf{x} \in \mathcal{X} \, : \, k = \arg \max_{j \in \{1,\dots, K\} } \alpha_j + \beta_j^T \mathbf{x} \}.$$As in the local knot search of \citet{DiGeKa01}, we use these regions to produce an efficient proposal distribution for the RJMCMC. We discuss implementation details for MBCR in Section \ref{sec:implementation}, but first we show consistency and rate of convergence for MBCR in Section \ref{sec:theory}.
\section{Theoretical Results}\label{sec:theory}
Posterior consistency occurs if the posterior assigns probability converging to one in arbitrarily small neighborhoods of the true function $f_0$ as the number of samples $n$ grows. The rate of convergence is the rate at which the neighborhood size can contract with respect to $n$ while still maintaining consistency. Despite the longstanding interest in shape-restricted estimators, relatively little work has explored their asymptotic properties---particularly in multivariate and Bayesian settings. In the frequentist framework, \citet{HaPl76} showed consistency of the univariate LSE for convex regression; \citet{GrJoWe01} showed it has a local convergence rate of $n^{-2/5}$. More recently, \citet{SeSe11} showed consistency for the multivariate LSE.
There is also a recent literature on the related topic of multivariate convex-transformed density estimation. \citet{CuSaSt10} showed consistency for the MLE log-concave density estimator; \citet{SeWe10} showed consistency for the MLE of convex-transformed density estimators and gave a lower minimax bound on the convergence rate of $n^{-2/(p+4)}$. Bayesian shape-restricted asymptotics have received even less attention. \citet{ShSaWa09} showed consistency for monotone regression estimation with free knot splines in the univariate case; this was extended to univariate convex regression estimation by \citet{ShWaDa11}.
Let $\theta \in \Theta$ be the set of parameters to be estimated. Let $\Pi$ be the prior induced on $f$ by
\begin{align}\label{eq:modelThm}
K-1 & \sim Poisson(\lambda), &
\sigma^2 & \sim \Pi_{\sigma}, &
\theta_k \, | \, K & \sim \Pi_{\theta}, & k = 1,\dots,K,
\end{align}where $\Pi_{\sigma}$ is defined in Assumption {\bf B2} and $\Pi_{\theta}$ in Assumptions {\bf B3} and {\bf B4}.
We consider strong, or $L_1$, consistency. That is, let $$L_{\epsilon} = \left\{ (f,\sigma) : \int_{\mathcal{X}} \left| f(\mathbf{x}) - f_0(\mathbf{x}) \right| dx < \epsilon, \ \left| \frac{\sigma}{\sigma_0} - 1\right| < \epsilon\right\},$$ where the data-generating model is
\begin{equation}\notag
Y_i = f_0(\mathbf{x}_i) + \epsilon_i, \quad \epsilon_i \sim N(0,\sigma_0^2).
\end{equation} We would like $\Pi(L_{\epsilon}^C | (X_i,Y_i)_{i=1}^n ) \rightarrow 0$ as $n \rightarrow \infty$, almost surely $\mathbb{P}_{f_0,\sigma_0}^{\infty}$, where $\mathbb{P}_{f_0,\sigma_0}^{\infty}$ is the product measure under the true distribution. Throughout the rest of this paper, we use lower case $\mathbf{x}_i$ and $y_i$ to denote known or observed quantities, while $\mathbf{X}_i$ and $Y_i$ denote random variables. We show that MBCR is strongly consistent under a general set of conditions.
Bayesian rates of convergence are slightly different from their frequentist counterparts. A series $(\epsilon_n)_{n=1}^{\infty}$ where $\epsilon_n \rightarrow 0 $ is a rate of convergence under a metric $d(\theta,\theta_0)$ if
\begin{equation}\notag
\mathbb{P}_{f_0,\sigma_0}^{\infty} \, \Pi(\theta \in \Theta \, : \, d(\theta,\theta_0 ) \geq H_n \epsilon_n | (X_i,Y_i)_{i=1}^n ) \rightarrow 0
\end{equation} for every $H_n \rightarrow \infty$. We examine convergence rates with respect to the empirical $L_2$ norm. Moreover, if $f_0$ actually maps a $d$-dimensional linear subspace of $\mathbb{R}^p$ to $\mathbb{R}$, then the convergence rate is determined by the dimensionality of the subspace, $d$, rather than the full dimensionality, $p$.
\subsection{Consistency}\label{sec:consistency}
We consider two design cases for consistency: fixed design and random design. We place a series of assumptions on the true function, the prior and the design. Some of the assumptions on the prior are specific to the design type. In both cases, we assume that $f_0$ is uniformly bounded:
\begin{enumerate}
\item[{\bf B1.}] The function $f_0$ is uniformly bounded on the compact set $\mathcal{X}$.
\end{enumerate}Without loss of generality, we assume that $\mathcal{X} = [0,1]^p$.
For both design types, we need to define the priors $\Pi_{\sigma}$ and $\Pi_{\theta}$ in Equation (\ref{eq:modelThm}). First, we assume that the prior on $\sigma^2$ has compact support bounded away from zero. This is not a restrictive assumption in practice since zero measurement error is unlikely to occur and an upper bound can easily be chosen to cover a wide range of plausible values. Second, in the case of fixed design, we assume compact support of the prior for the hyperplane parameters; again, a wide range of plausible values can be chosen. Truncated normal and inverse-gamma distributions provide a convenient choice.
\begin{enumerate}
\item[{\bf B2.}] Let $\Pi_{\sigma}$ be the prior on $\sigma$; $\Pi_{\sigma}$ is non-atomic and only has support over $[\underline{\sigma}, \bar{\sigma}]$ with $0 < \underline{\sigma} < \sigma_0 < \bar{\sigma}< \infty.$
\item[{\bf B3.}] Let $\Pi_{\theta} = N_{p+1}(\mu_{\alpha,\beta},V_{\alpha, \beta})$ be the prior on $\theta_k$, where $N_{p+1}$ is the $p+1$ dimensional Gaussian distribution.
\item[{\bf B4.}] Let $\Pi^*_{\theta} = N_{p+1}(\mu_{\alpha,\beta},V_{\alpha, \beta})$. Let $L$ be a constant such that $L > || \frac{\partial}{\partial x_j} f_0(\mathbf{x}) ||_{\infty}$ and for some $V > \frac{1}{\sqrt{p}}L$, let $$ \Omega = \left\{ (\alpha, \beta) \, : \, \max \{\alpha,\beta_1,\dots,\beta_p\} \leq V\right\}.$$ Set $\Pi_{\theta} = \Pi^*_{\theta}(\cdot \cap \Omega) / \Pi^*_{\theta}(\Omega)$ and let $\theta_k \sim \Pi_{\theta}$.
\end{enumerate}
For both design cases, we need to ensure that the covariate space is sufficiently well-sampled.
\begin{enumerate}
\item[{\bf B5.}] For each hypercube $H$ in $\mathcal{X}$, let $\lambda(H)$ be the Lebesgue measure. Suppose that there exists a constant $K_p$ with $0 < K_p \leq 1$ such that whenever $\lambda(H) \geq \frac{\lambda(\mathcal{X})}{K_p n}$, $H$ contains at least one design point for sufficiently large $n$.
\item[{\bf B6.}] Let $Q$ be the density of the random design points; $Q$ is non-atomic and $Q(x) > 0$ for every $x \in \mathcal{X}$.
\end{enumerate}
With these assumptions, we now give consistency results.
\begin{thm}\label{thm:L1Fixed}
Assume that $\mathcal{X}$ is compact, the covariate design is fixed and that $f_0$ is convex with continuous first order partial derivatives. Suppose that conditions {\bf B1}, {\bf B2}, {\bf B4} and {\bf B5} hold. Then for every $\epsilon > 0$, $$ \mathbb{P}_{f_0,\sigma_0}^{\infty} \, \Pi \left( L_{\epsilon}^C \, | \, Y_1,\dots, Y_n , \mathbf{x}_1, \dots, \mathbf{x}_n \right) \rightarrow 0.$$
\end{thm}
In the stochastic design case, assumptions {\bf B4} and {\bf B5} are replaced by {\bf B3} and {\bf B6}, respectively. We note that for random design, $L_1$ convergence follows directly from convergence in probability for a uniformly bounded function $f_0$.
\begin{thm}\label{thm:L1Random}
Assume that $\mathcal{X}$ is compact, the covariate design is random and that $f_0$ is convex with continuous first order partial derivatives. Suppose that conditions {\bf B1}-{\bf B3} and {\bf B6} hold. Then for every $\epsilon > 0$, $$\mathbb{P}_{f_0,\sigma_0}^{\infty}\, \Pi \left( L_{\epsilon}^C \, | \, Y_1,\dots, Y_n , \mathbf{X}_1, \dots, \mathbf{X}_n \right) \rightarrow 0.$$
\end{thm}
To prove Theorems \ref{thm:L1Fixed} and \ref{thm:L1Random}, we use the consistency results for Bayesian nonparametric regression of \citet{ChSc07}. We show that the prior $\Pi$ satisfies the following assumptions,
\begin{enumerate}
\item[{\bf A1.}] The prior $\Pi$ puts positive measure on the neighborhood $$B_{\delta} = \left\{ (f,\sigma) : || f - f_0 ||_{\infty} < \delta, \, \left| \frac{\sigma}{\sigma_0} - 1\right| < \delta \right\}$$ for every $\delta > 0$.
\item[{\bf A2.}] Set $\Theta_n = \Theta_{1n} \times \mathbb{R}_+$, where $$\Theta_{1n} = \left\{f : ||f||_{\infty} < M_n, \, \left| \left| \frac{\partial}{\partial x_j} f \right| \right|_{\infty} < M_n, \, j = 1,\dots,p\right\},$$ and $M_n = \mathcal{O}(n^{\alpha})$ with $\frac{1}{2} < \alpha < 1$. Then there exists $C_1,c_1 > 0$ such that $\Pi(\Theta^C_n) \leq C_1 e^{-c_1n}$.
\end{enumerate}
\citet{ChSc07} modifies the consistency theorem of \citet{Sc65} for non-i.i.d. observations; the requirements are prior positivity on a set of variationally close Kullback-Leibler neighborhoods and the existence of exponentially consistent tests separating the desired posterior region from the rest. Assumption {\bf A1} satisfies the prior positivity while assumption {\bf A2} constructs a sieve that is used to create the exponentially consistent tests. Assumptions {\bf A1} and {\bf A2} generate pointwise convergence in the empirical and in-$Q$-probability metrics for the fixed and random design cases, respectively. Assumptions {\bf B1} to {\bf B6} are then used to extend consistency under these metrics to consistency under the $L_1$ metric. See Appendix A for details.
\subsection{Rate of Convergence}\label{sec:rate}
We determine the rate of convergence of MBCR with respect to the empirical $L_2$ norm,
\begin{equation}\notag
||f||_n = \left(\frac{1}{n} \sum_{i=1}^n f(\mathbf{x}_i)^2 \right)^{1/2}.
\end{equation}For both the fixed design and random design cases, we make the following assumptions:
\begin{enumerate}
\item[{\bf B7.}] The model variance $\sigma_0^2$ is known.
\item[{\bf B8.}] There exists a convex function $g_0 : \mathbb{R}^d \rightarrow \mathbb{R}$ and a matrix $\mathbf{A} \in \mathbb{R}^{p \times d}$ with rank $d$ where $d \leq p$ such that $f_0( \mathbf{x} ) = g_0(\mathbf{A} \mathbf{x})$.
\end{enumerate}We make assumption {\bf B7} for convenience; it can be loosened with sufficient algebraic footwork. Assumption {\bf B8} says that $f_0$ actually lives on a $d$-dimensional subspace; this is not restrictive as it is possible for $\mathbf{A} = I_p$. However, many situations arise when $d \ll p$. For example, there may be extraneous covariates or the mean function may be a function of a linear combination of the covariates---in effect, $d=1$. It is not required that $\mathbf{A}$ is known {\it a priori}, but simply that it exists. We also keep assumption {\bf B4}, which truncates the tails of the Gaussian priors for the hyperplane slopes and intercepts; this is done to bound the prior probability of the complement of the sieve.
\begin{thm}\label{thm:fixedRate}
Assume that $\mathcal{X}$ is compact and that $f_0$ is convex, has continuous first order partial derivatives and suppose that conditions {\bf B1}, {\bf B4}, {\bf B7} and {\bf B8} hold. For both random covariates and fixed covariates and sufficiently large $V$,
\begin{equation}\notag
\mathbb{P}_{f_0}^{\infty}\, \Pi\left( f \, : \, ||f - f_0||_n \geq H_n \epsilon_n \, | \, Y_1,\dots, Y_n , \mathbf{x}_1, \dots, \mathbf{x}_n \right)\rightarrow 0
\end{equation}for any $H_n \rightarrow \infty$, where $\epsilon_n^{-1} = \log( n) \, n^{1/(d+2)}$.
\end{thm}Theorem \ref{thm:fixedRate} is proven by showing that the conditions for Theorem 3 of \citet{GhVa07b} are satisfied. Details are given in Appendix B.
We note that the rates achieved in Theorem \ref{thm:fixedRate} are within a log term of global minimax rates for general nonparametric convergence, $\epsilon_n = n^{-1/(p+2)}$, assuming $\mathbf{A} = I_p$. However, the $\epsilon$-metric entropy of the set of bounded convex functions with respect to the $|| \cdot ||_{\infty}$ metric scales like $\epsilon^{-p/2}$~\citep{VaWe96}, leaving open the possibility of convergence rates of $\epsilon_n = n^{-2/(p+4)}$ for bounded convex functions. In certain settings of convex-transformed density estimation that rate has been obtained~\citep{SeWe10}. We, however, do not believe that MBCR achieves this rate in a general setting.
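For concreteness, the empirical $L_2$ norm $\|f\|_n$ used in Theorem \ref{thm:fixedRate} can be sketched directly (illustrative code, with an arbitrary test function):

```python
import math

def empirical_l2(f, xs):
    """||f||_n = ( (1/n) sum_i f(x_i)^2 )^{1/2}, the empirical L2 norm."""
    return math.sqrt(sum(f(x) ** 2 for x in xs) / len(xs))

xs = [0.0, 1.0, 2.0, 3.0]
print(empirical_l2(lambda x: x, xs))  # sqrt((0 + 1 + 4 + 9) / 4) = sqrt(3.5)
```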
\section{Implementation}\label{sec:implementation}
In this section, we extend MBCR to a model that can accommodate heteroscedastic data and provide a reversible jump MCMC sampler.
\subsection{Heteroscedastic Model}
The model in Section \ref{sec:model} assumes a global variance parameter, $\sigma^2$. While this is often a reasonable assumption, it can lead to particularly poor results when it is violated in a shape-restricted setting: locally chasing outliers in high-variance regions can lead to globally poor prediction due to the highly constrained nature of convex regression.
To accommodate heteroscedasticity, we consider the following model,
\begin{equation}\notag
Y_i = f(\mathbf{x}_i; \theta ) + \epsilon_i, \quad \epsilon_i \sim N(0,g(\mathbf{x}_i)),
\end{equation}where $g : \mathbb{R}^p \rightarrow \mathbb{R}_+$. Specifically, to induce a flexible prior on $g$, we introduce a separate variance term for each hyperplane and modify the model to let
\begin{align}\notag
Y_i & = \max_{k \in \{1,\dots,K\}} \alpha_k + \beta^T_k \mathbf{x}_i + \epsilon_i, \quad \epsilon_i \sim N(0,\sigma^2_k),\\\notag
(\theta_k,\sigma^2_k) & \sim N_{p+1}IG(\mu_{\alpha,\beta},V_{\alpha,\beta},a,b), \quad k = 1,\dots, K,\\\notag
K - 1 & \sim Poisson(\lambda).
\end{align}Here $N_{p+1}IG$ denotes the normal inverse gamma distribution with a $p+1$ dimensional normal. We choose a Poisson prior for the number of components, although we note that the model is generally not sensitive to the prior on the number of components. Due to the adaptable nature of the heteroscedastic model and its resistance to variance misspecification, we use it for all numerical work.
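A minimal sketch of drawing one function from the heteroscedastic prior follows. The hyperparameters here are placeholders (standard-normal hyperplane coefficients, $IG(2,1)$ variances), not the values used in our experiments; Python's standard library has no Poisson sampler, so Knuth's method is used:

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's method for a Poisson(lam) draw."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_mbcr_prior(p, lam, rng):
    """Draw (K, hyperplanes, variances) from a simplified MBCR prior:
    K - 1 ~ Poisson(lam), hypothetical N(0,1) coefficients, IG(2,1) variances."""
    K = 1 + sample_poisson(lam, rng)  # K - 1 ~ Poisson(lam), so K >= 1
    planes = [(rng.gauss(0, 1), [rng.gauss(0, 1) for _ in range(p)])
              for _ in range(K)]
    sigma2 = [1.0 / rng.gammavariate(2.0, 1.0) for _ in range(K)]
    return K, planes, sigma2

rng = random.Random(0)
K, planes, sigma2 = sample_mbcr_prior(p=3, lam=2.0, rng=rng)
print(K >= 1 and len(planes) == K and all(s > 0 for s in sigma2))  # True
```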
\subsection{Posterior Inference}
To sample from the posterior distribution, we use RJMCMC with the marginal posterior distribution of $\{K, \alpha, \beta, \sigma^2\}$ as the stationary distribution. Similar methods have been used for posterior inference on free-knot spline models by \citet{DeMaSm98} and \citet{DiGeKa01}.
RJMCMC works by proposing a candidate model, $\{K^*, \alpha^*, \beta^*, {\sigma^2}^*\}$, and determining whether or not to move to that new model based on a Metropolis-Hastings type acceptance probability,
\begin{align}\label{eq:acceptanceP}
a(K^*, \alpha^*,\beta^*,{\sigma^2}^* & \, | \, K, \alpha, \beta, \sigma^2) = \min\left\{ 1 , \frac{p(Y \, | \, \mathbf{x}, K^*, \alpha^*, \beta^*,{\sigma^2}^*)}{p(Y \, | \, \mathbf{x}, K, \alpha, \beta,\sigma^2)}\right.\\\notag
& \times \left. \frac{\Pi(K^*, \alpha^*, \beta^*,{\sigma^2}^*)}{\Pi(K, \alpha, \beta,\sigma^2)} \frac{q(K, \alpha, \beta,\sigma^2 \, | \, K^*, \alpha^*, \beta^*, {\sigma^2}^*)}{q(K^*, \alpha^*, \beta^* , {\sigma^2}^* \, | \, K, \alpha, \beta, \sigma^2)}\right\}.
\end{align}Here $p(Y \, | \, \mathbf{x}, K^*, \alpha^*, \beta^*, {\sigma^2}^*)/p(Y \, | \, \mathbf{x}, K, \alpha, \beta,\sigma^2)$ is the likelihood ratio of the data conditioned on the models, $\Pi(K^*, \alpha^*, \beta^*,{\sigma^2}^*)/\Pi(K, \alpha, \beta,\sigma^2)$ is the prior ratio of the models and $$q(K, \alpha, \beta, \sigma^2 \, | \, K^*, \alpha^*, \beta^*, {\sigma^2}^*)/q(K^*, \alpha^*, \beta^*,{\sigma^2}^* \, | \, K, \alpha, \beta,\sigma^2)$$ is an asymmetry correction for the proposal distribution. Candidate models are entirely new models: all parameters are updated as a block. If only individual parameters or hyperplanes are updated, acceptance rates for parameters in the most constrained areas are orders of magnitude lower than those in the relatively unconstrained regions on the boundary of the function. Without block updates, there is poor mixing. There are three types of candidate models: hyperplane relocations, deletions and additions. All candidate models are generated from proposal distributions, which significantly impact the efficiency of the RJMCMC algorithm.
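The acceptance step in Equation (\ref{eq:acceptanceP}) is best computed in log space for numerical stability; a hedged sketch (the log-density arguments are placeholders, not the actual MBCR likelihood):

```python
import math
import random

def accept(log_lik_new, log_lik_old, log_prior_new, log_prior_old,
           log_q_back, log_q_fwd, rng):
    """Metropolis-Hastings-type acceptance: a = min(1, likelihood ratio
    * prior ratio * proposal-asymmetry correction), evaluated in log space."""
    log_a = ((log_lik_new - log_lik_old)
             + (log_prior_new - log_prior_old)
             + (log_q_back - log_q_fwd))
    return math.log(rng.random()) < min(0.0, log_a)

# A strictly better candidate under a symmetric proposal is always accepted
rng = random.Random(1)
print(accept(-10.0, -50.0, -1.0, -1.0, 0.0, 0.0, rng))  # True: log ratio = 40
```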
To generate proposal distributions we use the covariate partition induced by the current model $\{K,\alpha,\beta,\sigma^2\}$ to create a set of basis regions. Basis regions are determined by partitioning the set of training data. For example, suppose that a partition of the observations, $(\mathbf{x}_1,y_1),\dots,(\mathbf{x}_n,y_n)$, has $K$ subsets. Let $C = \{C_1,\dots,C_{K}\}$, where $
C_k = \left\{ i \, : \, i \mathrm{\ in \ subset \ } k \right\}.$ We can use $C$ to produce a set of basis regions for generating $(\alpha^*,\beta^*)$ with $K$ components,
\begin{align}\label{eq:basisRegion}
V_k^* & = \left(\tilde{V}_{\alpha,\beta}^{-1} + \mathbf{x}_{[k]}^T \mathbf{x}_{[k]}\right)^{-1},\\\notag
\mu_k^* & = V_{k}^*\left(\tilde{V}_{\alpha,\beta}^{-1}\tilde{\mu}_{\alpha,\beta} + \mathbf{x}_{[k]}^T\mathbf{y}_{[k]}\right),\\\notag
a^*_k & = \tilde{a} + \frac{n_k}{2},\\\notag
b^*_k & = \tilde{b} + \frac{1}{2}\left(\tilde{\mu}_{\alpha,\beta}^T \tilde{V}_{\alpha,\beta}^{-1} \tilde{\mu}_{\alpha,\beta} + \mathbf{y}_{[k]}^T \mathbf{y}_{[k]} - {\mu_{k}^*}^T {V_{k}^*}^{-1}{\mu_{k}^*}\right),\\\notag
(\alpha^*_k,\beta^*_k,{\sigma_k^2}^*) & \sim N_{p+1}IG\left(\mu_{k}^*,V_{k}^*, a_k^*,b_k^* \right), \quad k = 1,\dots,K.
\end{align}Here, $\mathbf{x}_{[k]} = \{[1, \mathbf{x}_i] \, : \, i \in C_k\}$, $\mathbf{y}_{[k]} = \{y_i \, : \, i \in C_k\}$ and $n_k$ is the number of elements in subset $k$. The hyperparameters for the proposal distributions, $(\tilde{\mu}_{\alpha,\beta},\tilde{V}_{\alpha,\beta},\tilde{a},\tilde{b})$, are not necessarily the same as those for the prior. Often the variance parameters are smaller to produce higher acceptance rates. The current set of hyperplanes, $\{K,\alpha,\beta\}$, are used to create the partitions that define the basis regions,
\begin{equation}\notag
C_k = \left\{ i \, : \, k = \arg \max_{j \in \{1, \dots, K\} } \alpha_j + \beta_{j}^T \mathbf{x}_i \right\}.
\end{equation}
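The partition $C_k$ above is computed directly from the current hyperplanes; an illustrative sketch:

```python
def partition(xs, planes):
    """Assign each observation index i to the region C_k whose hyperplane k
    attains the max in f(x_i) = max_j alpha_j + beta_j . x_i."""
    C = {k: [] for k in range(len(planes))}
    for i, x in enumerate(xs):
        vals = [a + sum(bj * xj for bj, xj in zip(b, x)) for a, b in planes]
        C[vals.index(max(vals))].append(i)
    return C

# Two hyperplanes of f(x) = |x|: beta = -1 dominates for x < 0, beta = +1 for x > 0
planes = [(0.0, (-1.0,)), (0.0, (1.0,))]
xs = [(-2.0,), (-0.5,), (1.0,), (3.0,)]
print(partition(xs, planes))  # {0: [0, 1], 1: [2, 3]}
```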
For a relocation proposal distribution, the $K$ basis regions are generated by the covariate partition of the current model. The removal proposal distribution is a mixture with $K$ components. Each component is generated by removing the hyperplane $k$ for $k = 1,\dots, K$ and using the remaining $K-1$ hyperplanes to create a set of basis regions. Proposal distributions for additions are less straightforward. The addition proposal distribution is a mixture with $K L M$ components. Beginning with the subsets defined by the current model, $\{K,\alpha,\beta,\sigma^2\}$, each subset $j = 1,\dots,K$ is searched along a random direction $m$ for $m = 1,\dots, M$. On each of those random directions, the subset $j$ is divided according to a knot $a_{\ell}^j$ into the set of observations less than $a_{\ell}^j$ in direction $m$ and those greater. This is done for $\ell = 1,\dots, L$ knots for each subset $j$ and direction $m$. An example is shown in Figure \ref{fig:additions}. Full implementation details are given in Appendix C.
We note that the sampler for MBCR does not behave like a typical MCMC sampler. Convergence and mixing are extremely fast. Unlike most MCMC samplers, the MBCR sampler converges once the ``right'' number of components has been reached, typically within zero to four of the mean number of components. This is due to the way the proposal distributions are constructed and the strict requirements of convexity. Block updating ensures that autocorrelation drops to near zero rapidly. Numerical results suggest this generally happens after about three samples. While convexity endows the sampler with properties like fast convergence, it can also lead to situations where the restrictions are too rigid for the sampler to function. For example, if the noise level is very low, the number of observations is more than a few thousand, or the number of dimensions is moderate to high, the region of admissible models becomes very small and the acceptance rates rapidly drop to zero. Approximate inference methods seem to be required in these situations.
\begin{figure}[t]
\begin{center}
\includegraphics[width=5in,viewport=65 270 680 550]{BasisRegionsHor.pdf}
\end{center}
\caption{Basis regions for one covariate combination when $L = 2$ and $K = 2$. (A) shows the original partition; (B) shows the partition when the region induced by the first hyperplane is split; (C) shows the partition when the region induced by the second hyperplane is split.}
\label{fig:additions}
\end{figure}
\section{Applications}\label{sec:numbers}
In Section \ref{sec:synthetic}, we compare the performance of MBCR to other regression methods on a set of synthetic problems. We show that convexity constraints can produce better estimates than their unconstrained counterparts and that MBCR is competitive with state of the art convex regression methods with respect to mean squared error. In Section \ref{sec:convex}, we analyze the behavior of MBCR, CAP and LSE when approximating an objective function for convex optimization. We show that MBCR produces estimates that are more suited to objective function approximation than those produced by CAP or LSE.
\subsection{Synthetic Problems}\label{sec:synthetic}
In this subsection, we create a set of synthetic problems designed to show off the strength of convexity constraints. Problem 1 is highly non-linear and has moderate dimensionality (5); Problems 2 and 3 also have moderate dimensions in the covariate space (6 and 4, respectively), but both actually reside in a univariate subspace.
\paragraph{Problem 1.}
Let $\mathbf{x} \in \mathbb{R}^5$. Set $$y = \left(x_1+.5x_2+x_3\right)^2 - x_4+.25x_5^2 + \epsilon,$$
where $\epsilon \sim N(0,1)$. The covariates are drawn from a 5 dimensional standard Gaussian distribution, $N_5(0,I)$.
\paragraph{Problem 2.}
Let $\mathbf{x} \in \mathbb{R}^6$. Set $$y = \left(x_1+x_2\right)^2 + \epsilon,$$
where $\epsilon \sim N(0,.5^2)$. The covariates are drawn from a 6 dimensional uniform distribution, $x_{j} \sim Unif[-1,1]$ for $j=1,\dots,6$.
\paragraph{Problem 3.}
Let $\mathbf{x} \in \mathbb{R}^4$. Set
\begin{align}\notag
y & = \left| \mathbf{a}^T\mathbf{x} \right| + \epsilon, & \mathbf{a}^T & = \left[0.8262,0.9305,1.6361,0.6072\right]
\end{align}
where $\epsilon \sim N(0,1^2)$. The covariates are drawn from a 4 dimensional uniform distribution, $x_{j} \sim Unif[-4,4]$ for $j=1,\dots,4$.
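The three data-generating processes above can be reproduced in a few lines of NumPy; the function names below are ours, and only Problem 1's sampling wrapper is shown in full.

```python
import numpy as np

# Noiseless mean functions for the three synthetic problems
# (names f1, f2, f3 and the sampling wrapper are ours, not from the paper).
def f1(x):
    # Problem 1: (x1 + 0.5*x2 + x3)^2 - x4 + 0.25*x5^2
    return (x[0] + 0.5 * x[1] + x[2]) ** 2 - x[3] + 0.25 * x[4] ** 2

def f2(x):
    # Problem 2: (x1 + x2)^2
    return (x[0] + x[1]) ** 2

A = np.array([0.8262, 0.9305, 1.6361, 0.6072])

def f3(x):
    # Problem 3: |a^T x|
    return abs(A @ x)

def sample_problem1(n, rng):
    X = rng.standard_normal((n, 5))                              # x ~ N_5(0, I)
    y = np.apply_along_axis(f1, 1, X) + rng.standard_normal(n)   # eps ~ N(0, 1)
    return X, y
```

Problems 2 and 3 are sampled analogously, with uniform covariates on $[-1,1]^6$ and $[-4,4]^4$ and noise scales $0.5$ and $1$, respectively.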
\begin{table}
\caption{\label{tab:synthetic}Mean squared error on Problems 1, 2 and 3.}
\centering
\fbox{%
\begin{tabular}{l | r@{.}l | r@{.}l | r@{.}l | r@{.}l}
\multicolumn{9}{c}{Problem 1}\\\hline
Method & \multicolumn{2}{c |}{$n = 100$} & \multicolumn{2}{c |}{$n = 200$} & \multicolumn{2}{c |}{$n = 500$} & \multicolumn{2}{c }{$n = 1,000$}\\\hline
MBCR & {\bf1} &{\bf0373} & {\bf 0} & {\bf3679} & {\bf 0} & {\bf2784} & 0 & 2180 \\
CAP & 1 & 6878 & 1 & 5336 & 0 & 3646 & {\bf 0} & {\bf1500} \\
LSE & 4 & 0174 & 1 & 4370 & 13 & 3398 & 1 & 8434 \\
GP & 7 & 6612 & 6 & 2974 & 4 & 4793 & 3 & 5518 \\\hline
\multicolumn{9}{c}{Problem 2}\\\hline
Method & \multicolumn{2}{c |}{$n = 100$} & \multicolumn{2}{c |}{$n = 200$} & \multicolumn{2}{c |}{$n = 500$} & \multicolumn{2}{c }{$n = 1,000$}\\\hline
MBCR & {\bf0} & {\bf 0943} & {\bf0} & {\bf0720} & {\bf0} & {\bf0155} & 0 & 0182 \\
CAP & 0 & 1191 & 0 & 0887 & 0 & 0205 & {\bf 0} & {\bf 0129} \\
LSE & 4 & 6521 & 2 & 8926 & 1 & 9979 & 5 & 4998 \\
GP & 0 & 3555 & 0 & 3932 & 0 & 3598 & 0 & 2174 \\\hline
\multicolumn{9}{c}{Problem 3}\\\hline
Method & \multicolumn{2}{c |}{$n = 100$} & \multicolumn{2}{c |}{$n = 200$} & \multicolumn{2}{c |}{$n = 500$} & \multicolumn{2}{c }{$n = 1,000$}\\\hline
MBCR & {\bf 0 }& {\bf 1399} & {\bf 0} & {\bf 0775} & {\bf 0} & {\bf 0138} & {\bf 0} & {\bf 0102} \\
CAP & 0 & 1886 & 0 & 1308 & 0 & 0192 & 0 & 0164 \\
LSE & 4 & 7537 & 2 & 0210 & 1 & 4801 & 7 & 6638 \\
GP & 2 & 0351 & 3 & 4649 & 2 & 9349 & 3 & 2026 \\
\end{tabular}}
\end{table}
\subsubsection{Results.}
On all of these problems, MBCR is compared to CAP, LSE and Gaussian process (GP) priors~\citep{RaWi06}, a widely and successfully used Bayesian method for regression and classification; we use the Matlab \texttt{gpml} package for its implementation. All methods were implemented in Matlab; the least squares estimate (LSE) was found using the \texttt{cvx} optimization package. LSE took 5 to 6 minutes to run with 500 observations and 50 to 60 minutes with 1,000. The tolerance parameter for CAP was chosen through five-fold cross-validation. MBCR was implemented with component-specific variances and run for 1,000 iterations, with the first 500 discarded as burn-in.
Due to the highly constrained nature of the model and block updating, convergence of the sampler was extremely fast in all settings. The model was generally insensitive to the hyperparameter for the number of hyperplanes, $\lambda$; it was varied over three orders of magnitude and set to 20 for all tests. In lower dimensions, however, the choice of $\lambda$ was more important. Likewise, the variance hyperparameters were tested over three orders of magnitude with little sensitivity. Distributions were not placed over the variance hyperparameters because of the delicate relationship between the proposal distributions and the hyperparameters. All mean hyperparameters were set to 0.
MBCR and CAP dramatically outperformed other methods on all of the problems. LSE had relatively poor performance although it includes convexity constraints. This is due to overfitting, particularly in boundary regions. MBCR and CAP performed comparably on Problems 2 and 3, which both reside in a univariate subspace of the general covariate space. However, MBCR outperformed CAP on the more complex Problem 1, particularly when there were few observations available.
\subsection{Objective Function Approximation for Stochastic Optimization}\label{sec:convex}
Stochastic optimization methods are used to solve optimization problems with uncertain outcomes. The traditional objective is to minimize expected loss. There are many problems in this class, ranging from stochastic search~\citep{Sp03} to sequential decision problems~\citep{SuBa98,Po07}. In this section, we study the use of convex regression to compute response surfaces. A response surface is an approximation of an objective function based on a collection of noisy samples. Once a response surface has been created, it is searched to estimate the minimizer or maximizer of a function. Convex representations are desirable. First, the resulting approximation will likely be closer to the true objective function than an unconstrained approximation. Second, and more importantly, the surrogate objective function is now convex as well and can be easily searched with a commercial solver.
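To illustrate the second point: a convex piecewise-linear surrogate $\hat{f}(\mathbf{x}) = \max_k (\alpha_k + \beta_k^T \mathbf{x})$, the form produced by LSE, CAP, and individual MBCR posterior samples, can be minimized over a box exactly via its epigraph linear program. The sketch below assumes SciPy is available; the function name is ours.

```python
import numpy as np
from scipy.optimize import linprog

def minimize_max_affine(alpha, beta, lo, hi):
    """Minimize f(x) = max_k (alpha[k] + beta[k]^T x) over the box [lo, hi]^p.

    Epigraph LP: min t subject to alpha_k + beta_k^T x <= t for all k,
    which is how a piecewise-linear convex surrogate can be searched
    with an off-the-shelf solver.
    """
    K, p = beta.shape
    # Decision vector z = (x_1, ..., x_p, t); objective is t.
    c = np.zeros(p + 1)
    c[-1] = 1.0
    # alpha_k + beta_k^T x - t <= 0  ->  [beta_k, -1] z <= -alpha_k
    A_ub = np.hstack([beta, -np.ones((K, 1))])
    b_ub = -np.asarray(alpha, dtype=float)
    bounds = [(lo, hi)] * p + [(None, None)]  # t is unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:p], res.x[-1]
```

For example, the four hyperplanes $\pm x_1, \pm x_2$ give the surrogate $\max(|x_1|,|x_2|)$, whose minimizer over $[-1,1]^2$ is the origin.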
Consider the following problem: we would like to minimize an unknown function $f(\mathbf{x})$ with respect to $\mathbf{x}$ given $n$ noisy observations $(\mathbf{x}_i,y_i)_{i=1}^n$, where $y_i = f(\mathbf{x}_i) + \epsilon_i$:
\begin{equation}\label{eq:so}
\min_{\mathbf{x} \in \mathcal{X}} \mathbb{E}\left\{f(\mathbf{x}) \, | \, (\mathbf{x}_i,y_i)_{i=1}^n\right\}.
\end{equation}
To solve Equation (\ref{eq:so}), we approximate $\mathbb{E}\left\{f(\mathbf{x}) \, | \, (\mathbf{x}_i,y_i)_{i=1}^n\right\}$ with three different methods for regression: least squares, CAP and MBCR. Let $\hat{f}_n(\mathbf{x})$ be the estimate of the mean function given $ (\mathbf{x}_i,y_i)_{i=1}^n$. Unlike CAP and LSE, MBCR is a Bayesian method; it places a distribution over functions rather than producing a single function estimate. Let $\hat{f}^{(m)}_n(\mathbf{x})$ be a sample from the posterior; the Bayes estimate of the mean function can be approximated by the average of $M$ samples from the posterior,$$\hat{f}_n(\mathbf{x}) \approx \frac{1}{M} \sum_{m=1}^M \hat{f}^{(m)}_n(\mathbf{x}).$$We demonstrate the empirical differences between the objective functions produced by MBCR, CAP and LSE by solving a small stochastic optimization problem.
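This averaging can be sketched in a few lines (NumPy assumed; each posterior sample is represented by its hyperplane parameters $(\alpha,\beta)$, and the function names are ours).

```python
import numpy as np

def max_affine(alpha, beta, X):
    """Evaluate f(x) = max_k (alpha_k + beta_k^T x) at the rows of X."""
    return (alpha + X @ np.asarray(beta).T).max(axis=1)

def bayes_estimate(samples, X):
    """Average M posterior samples, each an (alpha, beta) pair, at the rows of X.

    The average of max-affine functions is still convex but generally not
    piecewise linear with few pieces, which is why the Bayes estimate
    looks close to smooth.
    """
    return np.mean([max_affine(a, b, X) for a, b in samples], axis=0)
```

Averaging, say, the sample $f(x) = x$ with the sample $f(x) = |x|$ yields $x$ for $x \ge 0$ and $0$ for $x < 0$.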
\paragraph{Example.} Set
\begin{equation}\label{eq:optExample}
Y_i = \mathbf{x}_i \mathbf{Q} \mathbf{x}_i^T + \epsilon_i,\quad \mathbf{Q} =
\left[
\begin{array}{cc}
1 & 0.2 \\
0.2 & 1
\end{array}
\right], \quad \epsilon_i \sim N(0,0.1).
\end{equation}
The constraint set is $-1 \leq x_j \leq 1$ for $j = 1,2$. Observations were sampled randomly from a uniform distribution, $\mathbf{x}_i \sim Unif[-1,1]^2$. We used LSE, CAP and MBCR to approximate the objective function. To examine the stability of these methods for objective function approximation, we drew 50 independent samples of 100 observations each from Equation (\ref{eq:optExample}). Approximations of the objective functions for one sample are shown in Figure \ref{fig:objective}.
\begin{figure}
\begin{center}
\includegraphics[width=3in,viewport= 70 200 560 650]{optimizeFunctionCompare1.pdf}
\end{center}
\caption{Objective functions for LSE, CAP and MBCR for Equation (\ref{eq:optExample}) given 100 observations. Both LSE and CAP produce piecewise-linear functions; CAP produces a sparser function than LSE. MBCR averages over piecewise-linear functions to produce an estimate that is much closer to smooth.}
\label{fig:objective}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=3in,viewport= 110 220 500 580]{contour.pdf}
\end{center}
\caption{Minima from the objective functions created by LSE, CAP and MBCR for Equation (\ref{eq:optExample}) given 100 observations; contours are from the true function. The observations were sampled 50 times; selections made when the objective function was approximated by MBCR are much more concentrated around the true minimum than those chosen using LSE or CAP.}
\label{fig:optMinima}
\end{figure}
\subsubsection{Results}We compared MBCR, CAP and LSE across 50 samples of 100 observations. The minima of piecewise planar models, like CAP and LSE, are on one of the vertices (or occasionally along one of the edges); this makes the minima of such models highly sensitive to model parameters such as number of hyperplanes and the value of their coefficients. MBCR, however, places a distribution over piecewise planar models. The Bayes estimate averages those models to produce something that is close to smooth and hence is relatively robust to observation design. Figure \ref{fig:objective} highlights these differences. The minima of both piecewise planar methods were sensitive to the observation design while the minima of MBCR proved more robust. Locations of minima are shown in Figure \ref{fig:optMinima}.
\subsubsection{Discussion}
Many methods for solving stochastic optimization problems, including response surface methods~\citep{BaMe06,Li10}, Q-learning~\citep{ErGeWe05} and approximate dynamic programming~\citep{Po07}, involve functional approximations that are then searched to find a solution that minimizes or maximizes the approximate reward. Current solution methods for these problems use either unconstrained regression methods~\citep{LaPa03,ErGeWe05} or additive approximations with univariate convex functions~\citep{PoRuTo04,NaPo09}. Robust multivariate convex regression methods could allow efficient solution of a broad set of stochastic optimization problems, including resource allocation, portfolio optimization and inventory management.
\section{Conclusions and Future Work}\label{sec:conclusions}
In this article, we introduced a novel fully Bayesian, nonparametric model for multivariate convex regression and showed strong posterior consistency along with convergence rates. We presented an efficient RJMCMC sampler for posterior inference. Our model was used to approximate objective functions for stochastic optimization and showed improvement over existing frequentist methods.
While this work represents a large advancement for convex regression, much remains to be done. First, we need to develop sampling methods that scale to large problems. Second, MBCR needs to be tested on a variety of stochastic optimization problems. Third, MBCR can be combined with other Bayesian methods to produce a class of semi-convex estimators.
Currently, the RJMCMC sampling method only scales well to moderate dimensionality and problem size: its limits are about 8 to 10 dimensions and a few thousand observations. Approximate inference methods, such as variational Bayes, could allow MBCR to solve problems an order of magnitude larger. Implementation, however, is not a straightforward extension of existing methods.
In stochastic optimization, MBCR is an extremely promising tool for value function approximation. Many solution methods for sequential decision problems include value function approximation, such as point-based value iteration~\citep{PiGoTh03}, fitted Q-iteration~\citep{ErGeWe05} and approximate dynamic programming~\citep{Po07}. All of these methods involve iterative searches of an approximate value function over sets of feasible actions. In many problems, such as resource allocation, the value function is known to be convex. Robust multivariate convex regression methods would allow a wider variety of problems to be solved, including those with large action spaces and non-separable objective functions.
Perhaps the most intriguing feature of MBCR is that it is a Bayesian model and can easily be combined with other Bayesian models to produce estimators that are convex in some dimensions, but not all. For example, it is well known that consumer preferences for bundled products tend to be convex. However, it is likely that other covariates like consumer age, gender, income and education influence the preference function---and the function is not convex in these covariates. This set of functions could be well-modeled by a combination of MBCR and Bayesian mixture models like Dirichlet processes~\citep{Fe73,An74} or hierarchical Dirichlet processes~\citep{TeJoBe06}. Such flexible models would be of great value to an assortment of fields, including economics, operations research and reinforcement learning.
\section*{Acknowledgements}
This research was partially supported by grant R01ES17240 from the National Institute of Environmental Health Sciences (NIEHS) of the National Institutes of Health (NIH). Lauren A. Hannah is partially supported by the Duke Provost's Postdoctoral Fellowship.
\section*{Appendix A}\label{app:proofs}
Appendix A contains the proofs for Section \ref{sec:consistency}.
To show pointwise convergence, we use Theorems 1 to 3 of \citet{ChSc07}; they are condensed for this paper. For the fixed design case, let $Q_n$ be the empirical density of the design points, $Q_n(\mathbf{x}) = n^{-1} \sum_{i=1}^n \mathbf{1}_{\{\mathbf{x}_i\}}(\mathbf{x}).$ The empirical density is used to define the following neighborhood, $$W_{\epsilon,n} = \left\{ (f,\sigma) : \int \left| f(\mathbf{x}) - f_0(\mathbf{x}) \right| dQ_n(\mathbf{x}) < \epsilon, \, \left| \frac{\sigma}{\sigma_0} - 1 \right| < \epsilon\right\}.$$
\begin{thm}\label{thm:pointwiseFixed}\citep{ChSc07}
Let $\mathbb{P}_{f_0,\sigma_0}^{\infty}$ denote the joint conditional distribution of $\{Y_i\}_{i=1}^{\infty}$ given the covariates, assuming that $f_0$ is the true mean function and $\sigma_0^2$ is the true variance. If assumptions {\bf A1}, {\bf A2}, {\bf B1} and {\bf B2} are satisfied, then for every $\epsilon > 0$, $$\mathbb{P}_{f_0,\sigma_0}^{\infty} \Pi\left\{ (f,\sigma) \in W^C_{\epsilon,n} \, | \, Y_1,\dots,Y_n,\mathbf{x}_1,\dots,\mathbf{x}_n\right\} \rightarrow 0.$$
\end{thm}
For the random design case, let $Q$ be the density of the random design points. Let $$U_{\epsilon} = \left\{ (f,\sigma) : \inf \left\{ \delta > 0 : Q\left(\{ \mathbf{x} : |f(\mathbf{x}) - f_0(\mathbf{x})| > \delta \} \right) < \delta \right\} < \epsilon, \, \left| \frac{\sigma}{\sigma_0} - 1 \right| < \epsilon\right\}$$be the neighborhood based on the in-probability metric.
\begin{thm}\label{thm:pointwiseRandom}\citep{ChSc07}
Let $\mathbb{P}_{f_0,\sigma_0}^{\infty}$ denote the joint conditional distribution of $\{Y_i\}_{i=1}^{\infty}$ given the covariates, assuming that $f_0$ is the true mean function and $\sigma_0^2$ is the true variance. If assumptions {\bf A1}, {\bf A2}, {\bf B1} and {\bf B2} are satisfied, then for every $\epsilon > 0$, $$\mathbb{P}_{f_0,\sigma_0}^{\infty} \Pi\left\{ (f,\sigma) \in U^C_{\epsilon} \, | \, Y_1,\dots,Y_n,\mathbf{X}_1,\dots,\mathbf{X}_n\right\} \rightarrow 0.$$
\end{thm}
We now show that the prior satisfies assumptions {\bf A1} and {\bf A2} for Theorems \ref{thm:pointwiseFixed} and \ref{thm:pointwiseRandom}.
\begin{lem}\label{prop:priorKL}
For every $\delta > 0$, the prior $\Pi$ from {\bf B2} and {\bf B3} or {\bf B4} puts positive measure on the neighborhood $$B_{\delta} = \left\{ (f,\sigma^2) : || f - f_0 ||_{\infty} < \delta, \, \left| \frac{\sigma}{\sigma_0} - 1\right| < \delta \right\}.$$
\end{lem}
\begin{proof}Fix $\delta > 0$. Break $B_{\delta}$ into two parts, $B_{\delta}(\beta) = \left\{ f : || f - f_0 ||_{\infty} < \delta\right\}$ and $B_{\delta}(\sigma^2) = \left\{\sigma^2 : \left| \frac{\sigma}{\sigma_0} - 1\right| < \delta \right\}.$ Under a truncated inverse gamma prior,
\begin{equation}\notag
\Pi(B_{\delta}(\sigma^2)) = \Pi((\sigma_0^2(1-\delta)^2,\sigma_0^2(1+\delta)^2)) > 0.
\end{equation}
To show prior positivity on $B_{\delta}(\beta)$, we create a sufficiently fine mesh over $\mathcal{X}$. On each section of the mesh, we show that there exists a collection of hyperplanes that 1) do not intersect with $f_0$, and 2) have an $\ell_{\infty}$ distance from $f_0$ of less than $\delta$ in that section. Since $f_0$ is convex and bounded on $\mathcal{X}$, it is Lipschitz continuous with some parameter $L$. A mesh size parameter $\gamma > 0$, depending on $\delta$, can be chosen to produce a $\gamma$-mesh over $\mathcal{X}$ with the following requirements.
Number regions $r = 1,\dots, R$; call the subsets of the covariate space defined by the regions $M_r^{\gamma}$. Because $f_0$ is Lipschitz continuous, an $\eta > 0$ can be found such that for every region $r$, one can find $\alpha_{r^*}$ and $\beta_{r^*}$ where for every $\alpha_r \in [\alpha_{r^*} - \eta, \alpha_{r^*} + \eta]$ and $\beta_r \in [\beta_{r^*} - \eta \mathbf{1}, \beta_{r^*} + \eta \mathbf{1}]$,
\begin{align}\notag
\alpha_r + \beta_r^T \mathbf{x} & < f_0(\mathbf{x}), & f_0(\mathbf{x}) - \alpha_r - \beta_r^T \mathbf{x} & < \delta
\end{align}for every $\mathbf{x} \in M_r^{\gamma}$.
We create a function $f_{\delta}(\mathbf{x})$ to approximate $f_0$ by taking the maximum over the set of $R$ hyperplanes; using the above, we can bound the distance between $f_0$ and $f_{\delta}$,
\begin{align}\notag
\sup_{\mathbf{x} \in \mathcal{X}} \left|f_0(\mathbf{x}) - f_{\delta}(\mathbf{x}) \right| & = \sup_{\mathbf{x} \in \mathcal{X}} \left|f_0(\mathbf{x}) - \max_{r \in \{1,\dots,R\}} \left(\alpha_r + \beta_r^T \mathbf{x}\right)\right|,\\\notag
& \leq \max_{r = 1,\dots,R} \sup_{\mathbf{x} \in M^{\gamma}_r} \left|f_0(\mathbf{x}) - \alpha_r - \beta_r^T\mathbf{x}\right|,\\\notag
& < \delta.
\end{align}
To complete the proof, we note that $\Pi(K = R) > 0$ and $\Pi([\alpha_{r^*} - \eta, \alpha_{r^*} + \eta], [\beta_{r^*} - \eta \mathbf{1}, \beta_{r^*} + \eta \mathbf{1}]) > 0 $ for $r = 1,\dots, R$.\qed \end{proof}
\begin{lem}\label{lem:priorSieve}
Define the prior $\Pi$ as in {\bf B2} and {\bf B3}. There exist constants $C_1>0$ and $c_1 > 0$ such that $\Pi(\Theta^C_n) \leq C_1 e^{-c_1n}$.
\end{lem}
\begin{proof}
Without loss of generality, assume $\mathcal{X} = [0,1]^p$. Note that
\begin{align}\notag
\Theta_{1n}^c & = \Theta\setminus \left\{||f||_{\infty} < M_n, \, \left| \left| \frac{\partial}{\partial x_j} f \right| \right|_{\infty} < M_n, \, j = 1,\dots,p\right\},\\\label{eq:subset}
& \subseteq \bigcup_{k=1}^{\infty} \bigcup_{j=1}^k \Bigg( \bigcup_{\ell=1}^p \left\{f(\cdot; \theta) : K = k, |\beta_{j,\ell}| \geq \frac{M_n}{2\sqrt{p}}\right\}\\\notag
& \quad \quad \quad \cup \left\{f(\cdot; \theta) : K = k, |\alpha_{j}| \geq \frac{M_n}{2\sqrt{p}}\right\}\Bigg).
\end{align}Taking the probability of the right hand side of Equation (\ref{eq:subset}),
\begin{align}\notag
\Pi(\Theta_{1n}^C) & \leq \sum_{k = 1}^{\infty} \Pi_{K} (K = k) \sum_{j=1}^k\left\{\Pi \left(|\alpha_j| \geq \frac{M_n}{2\sqrt{p}}\right)+ \sum_{\ell = 1}^p \Pi\left(|\beta_{j,\ell}| \geq \frac{M_n}{2\sqrt{p}}\right)\right\},\\\notag
& \leq 2 \mathbb{E}_{\Pi}[K] (p+1) \int_{c_0M_n}^{\infty}\frac{1}{\sqrt{2\pi }} e^{-\frac{1}{2} x^2}dx,\\\notag
& \leq C_1 e^{-c_1 n^{2\alpha}}.
\end{align}\qed \end{proof}
Note that under the bounded prior assumption {\bf B4}, $\Pi(\Theta_n^c) = 0$ for sufficiently large $n$. Theorems \ref{thm:L1Fixed} and \ref{thm:L1Random} follow directly from Theorems 4 and 6, respectively, of \citet{ChSc07} and Theorems \ref{thm:pointwiseFixed} and \ref{thm:pointwiseRandom}. In the random design case, $L_1$ convergence is equivalent to in-probability convergence under assumptions {\bf B1} and {\bf B6}; the fixed design case requires more care. See \citet{ChSc07} for details.
\section*{Appendix B}\label{app:rateProofs}
Appendix B contains the proofs for Section \ref{sec:rate}. Theorem \ref{thm:fixedRate} relies on verifying the conditions of Theorem 3 of \citet{GhVa07b},
\begin{thm}[\citet{GhVa07b}]\label{thm:GhVa07b}
Let $\mathbb{P}_{f}^n$ be a product measure and $d_n$ a semimetric and let $\Theta$ be the space of all $\{K,\alpha,\beta\}$-tuples with positive measure under $\Pi$. Suppose that for a sequence $\epsilon_n \rightarrow 0$ such that $n\epsilon_n^2$ is bounded away from zero, all sufficiently large $j$ and sets $\Theta_n \subset \Theta$, the following conditions hold:
\begin{enumerate}[(i)]
\item $\sup_{\epsilon > \epsilon_n} \log N(\epsilon/18, \{f \in \Theta_n: d_n(f,f_0) < \epsilon\},d_n) \leq n\epsilon_n^2;$
\item There exist tests $\Phi_n$ such that $\mathbb{E}_{f_0}^n \Phi_n \leq e^{-\frac{1}{2}nd_n^2(f_0,f_1)}$ and $\mathbb{E}_{f}^n (1 - \Phi_n) \leq e^{-\frac{1}{2}nd_n^2(f_0,f_1)}$ for all $f \in \Theta$ such that $d_n(f,f_1) \leq \frac{1}{18}d_n(f_0,f_1);$
\item $\frac{\Pi(\Theta_n^C) }{\Pi(B_n^*(f_0,\epsilon_n))} = o\left(e^{-2n\epsilon_n^2}\right);$
\item $\frac{\Pi(f \in \Theta_n \, : \, j\epsilon_n < d_n(f,f_0) \leq 2j\epsilon_n) }{\Pi(B_n^*(f_0,\epsilon_n))} \leq e^{n\epsilon_n^2j^2/4},$
\end{enumerate}where $B_n^*(f_0,\epsilon_n) = \left\{ f \in \Theta \, : \, \frac{1}{n} \sum_{i=1}^n K_i(f_0,f) \leq \epsilon_n^2, \ \frac{1}{n} \sum_{i=1}^n V_i(f_0,f) \leq C \epsilon_n^2\right\}$. Then, $\mathbb{P}_{f_0}^{\infty} \Pi(f \, : \, d_n(f,f_0) \geq H_n \epsilon_n \, | \, (X_i,Y_i)_{i=1}^n ) \rightarrow 0$ for every $H_n \rightarrow \infty$.
\end{thm}
The distance metric $d_n$ that we use is the $|| \cdot ||_{n}$ norm. Note that the $||\cdot||_n$ norm is bounded by the $||\cdot ||_{\infty}$ norm; we do metric entropy computations with respect to the $||\cdot ||_{\infty}$ norm. The values $K_i(f_0,f)$ and $V_i(f_0,f)$ denote $\int f_0 \log(f_0/f) d\mu$ and $\int f_0 (\log(f_0/f))^2 d\mu$, respectively. The quantity in condition {\it (i)} is the log of the covering number of the sieve under the supremum norm. To show that conditions {\it (i)} to {\it (iv)} of Theorem \ref{thm:GhVa07b} are met, we check them one at a time while working in the linearly transformed space, $\tilde{\mathcal{X}} = \{ \mathbf{y} \, : \, \mathbf{y} = \mathbf{A} \mathbf{x}, \, \mathbf{x} \in \mathcal{X}\}.$
\begin{lem}\label{lem:metricEntropy}
Define $\Theta_n = \Theta_{1n}$ and suppose {\bf B4} holds. Then,
\begin{equation}\notag
\sup_{\epsilon > \epsilon_n} \log N(\epsilon/18, \{f \in \Theta_n: ||f-f_0||_{\infty} < \epsilon\},|| \cdot ||_{\infty}) \leq C \epsilon_n^{-d/2}.
\end{equation}
\end{lem}
\begin{proof}Working in the transformed space, assumption {\bf B4} places bounds on the supremum and partial derivatives of every $f \in \Theta$; the result then follows directly from Theorem 2.7.10 of \citet{VaWe96} or Theorem 6 of \citet{Br76} for $V = 1$. By setting $\tilde{\epsilon} = \epsilon/ V$, $\tilde{f} = f / V$ and $\tilde{f}_0 = f_0 / V$, and calculating the metric entropy with respect to $\tilde{\epsilon}$, $\tilde{f}$ and $\tilde{f}_0$, the result holds for general $V$. This covering needs to be repeated at most $\epsilon^{-p}$ times to cover the original space; taking the log, $p\log(1/\epsilon)$ can be bounded by a constant times $\epsilon^{-d/2}$.\qed \end{proof}
\begin{lem}\label{lem:piBounds}
Define $\Pi$ by {\bf B4} and {\bf B7}. Let $$B_n^*(f_0,\epsilon_n) = \left\{ f \in \Theta \, : \, \frac{1}{n} \sum_{i=1}^n K_i(f_0,f) \leq \epsilon_n^2, \ \frac{1}{n} \sum_{i=1}^n V_i(f_0,f) \leq C \epsilon_n^2\right\}.$$ Then there exist $C_1$ and $c_1>0$ such that
\begin{equation}\notag
\Pi\left(B_n^*(f_0,\epsilon_n) \right) \geq C_1 e^{c_1 \epsilon_n^{-d} \log \epsilon_n}.
\end{equation}
\end{lem}
\begin{proof}
By simple calculations,
\begin{align}\notag
K_i(f_0,f) & = \frac{1}{2\sigma_0^2} \left(f_0(\mathbf{x}_i)-f(\mathbf{x}_i)\right)^2, & V_i(f_0,f) & = \frac{1}{\sigma_0^2} \left(f_0(\mathbf{x}_i)-f(\mathbf{x}_i)\right)^2.
\end{align}To place a lower bound on the prior measure of $B_n^*(f_0,\epsilon_n),$ we construct a subset and place prior bounds on that.
Let $\beta \in \mathbb{R}^p$; a truncated Gaussian prior on $\beta$ induces a truncated Gaussian prior on $\tilde{\beta} = \mathbf{A}\beta$, the slope parameters in the transformed space. WLOG, take $\tilde{\mathcal{X}} = [0,1]^d.$ Set $\delta = \frac{1}{8\sqrt{d}\sigma_0^2}\epsilon_n$; let $\mathbf{y}_1,\dots, \mathbf{y}_m$ be a $\delta-$net over $\tilde{\mathcal{X}}$. The net can be chosen such that $m \leq K_m / \epsilon_n^d$ for some constant $K_m$ that depends only on $d$ and $\sigma_0^2$. Let $$\left(\alpha_{k}^*,\tilde{\beta}_{k,1}^*,\dots,\tilde{\beta}_{k,d}^*\right)= \left(g_0(\mathbf{y}_k),\frac{\partial}{\partial{x_1}}g_0(\mathbf{y}_k),\dots,\frac{\partial}{\partial{x_d}}g_0(\mathbf{y}_k)\right).$$Then with a sufficiently large truncation parameter $V$, for every $k \in \{1,\dots,m\}$,
\begin{align}\notag
\Pi_{\tilde{\theta}}\left((\alpha_k,\beta_{k,1},\dots,\beta_{k,d}) \in \left(\alpha_{k}^*,\tilde{\beta}_{k,1}^*,\dots,\tilde{\beta}_{k,d}^*\right) \pm \frac{1}{8\sigma_0^2}\epsilon_n \right) & \geq K_a \epsilon_{n}^{d+1},
\end{align} for some $K_a >0$ that depends on $d$, $\sigma_0^2$, $\mathbf{A}$ and $g_0$. Set $g(\mathbf{y}) = \max_{k \in \{1,\dots,m\}} \left(\alpha_k + \tilde{\beta}^T_k \mathbf{y}\right).$ Then $ \frac{1}{2\sigma_0^2} \left(f_0(\mathbf{x}_i)-g(\tilde{\mathbf{x}}_i)\right)^2 \leq \epsilon_n^2,$ so
\begin{align}\notag
\Pi\left(B_n^*(f_0,\epsilon_n) \right) & \geq \Pi_K(K= m) \\\notag
& \quad \times \prod_{k=1}^m \Pi_{\tilde{\theta}}\left((\alpha_k,\beta_{k,1},\dots,\beta_{k,d}) \in \left(\alpha_{k}^*,\tilde{\beta}_{k,1}^*,\dots,\tilde{\beta}_{k,d}^*\right) \pm \frac{1}{8\sigma_0^2}\epsilon_n \right),\\\notag
& \geq C_1 e^{c_1 \epsilon_n^{-d} \log \epsilon_n},
\end{align}for some constants $C_1, c_1 > 0$.\qed \end{proof}
We can use Lemma \ref{lem:piBounds} to check conditions {\it (iii)} and {\it (iv)} of Theorem \ref{thm:GhVa07b}.
\begin{lem}\label{lem:conds3and4}
Define $\Pi$ by {\bf B4} and {\bf B7}. Then for every large $j$,
\begin{align}\notag
\frac{\Pi(\Theta_n^C) }{\Pi(B_n^*(f_0,\epsilon_n))} & \leq C_2 e^{-c_2n - c_1 \epsilon_n^{-d} \log \epsilon_n},\\\notag
\frac{\Pi(f \in \Theta_n \, : \, j\epsilon_n < ||f-f_0||_{\infty} \leq 2j\epsilon_n) }{\Pi(B_n^*(f_0,\epsilon_n))} & \leq C_1 e^{- c_1 \epsilon_n^{-d} \log \epsilon_n}.
\end{align}
\end{lem}
\begin{proof}
The first bound follows from Lemmas \ref{lem:priorSieve} and \ref{lem:piBounds}; the second follows by bounding the numerator by 1 and applying Lemma \ref{lem:piBounds}. \qed
\end{proof}
Now we use this collection of Lemmas and Theorem \ref{thm:GhVa07b} to prove Theorem \ref{thm:fixedRate}.
\begin{proof}[Proof of Theorem \ref{thm:fixedRate}]
We begin by checking the conditions of Theorem \ref{thm:GhVa07b}. Condition {\it (i)} follows from Lemma \ref{lem:metricEntropy}. Setting $\epsilon_{n}^{-1} = \log(n) \, n^{1/(d+2)}$, conditions {\it (iii)} and {\it (iv)} follow from Lemma \ref{lem:conds3and4}. Finally, \citet{Birge06} shows that the likelihood ratio test for $f_0$ versus $f_1$ satisfies condition {\it (ii)} relative to the $||\cdot||_n$ norm under both fixed and random design. Therefore, the main result follows directly from Theorem \ref{thm:GhVa07b}.\qed \end{proof}
\section*{Appendix C}\label{sec:inference}
The RJMCMC algorithm is similar to the ones proposed by \citet{DeMaSm98} and \citet{DiGeKa01} for BARS. Jumps in the chain can take three forms: additions, deletions and relocations. The probabilities of additions, deletions and relocations must satisfy detailed balance equations,
\begin{align}\label{eq:detailedBalance}
& \Pi (K+1,\alpha_{1:K+1}^*,\beta_{1:K+1}^*,{\sigma^2_{1:K+1}}^*) \\\notag
& \quad \times p(K,\alpha_{1:K},\beta_{1:K}, {\sigma^2_{1:K}} \, | \, K+1,\alpha_{1:K+1}^*,\beta_{1:K+1}^*,{\sigma^2_{1:K+1}}^*) \\\notag
& = \Pi(K,\alpha_{1:K},\beta_{1:K},\sigma^2_{1:K}) p(K+1,\alpha_{1:K+1}^*,\beta_{1:K+1}^*,{\sigma^2_{1:K+1}}^* \, | \, K,\alpha_{1:K},\beta_{1:K},\sigma^2_{1:K}).
\end{align}\citet{DiGeKa01} shows that Equation (\ref{eq:detailedBalance}) is satisfied if additions, deletions and relocations are attempted with the following probabilities, respectively,
\begin{align}\notag
b_k & = c \min \left\{1, \frac{p(k+1)}{p(k)}\right\},& d_k & = c \min \left\{1, \frac{p(k-1)}{p(k)}\right\}, & r_k & = 1 - b_k - d_k,
\end{align}where $p(k)$ is the prior probability of $k$ hyperplanes and $c$ is a constant; we set $c = 0.4$.
\paragraph{Additions.}Given the current state $(K, \alpha, \beta)$, a new state with $K+1$ hyperplanes, $(K+1, \alpha^*,\beta^*)$, is proposed with the jump probability,
\begin{equation}\notag
q(K+1,\alpha_{1:K+1}^*,\beta_{1:K+1}^*,{\sigma^2_{1:K+1}}^*\, | \, K,\alpha_{1:K},\beta_{1:K},\sigma^2_{1:K}) = b_K h_b(\alpha^*,\beta^*,{\sigma^2}^*\, | \, \alpha, \beta,\sigma^2).
\end{equation}Here $b_K$ is the addition probability given $K$ hyperplanes and $h_b$ is the proposal distribution for additions.
\paragraph{Deletions.} A new state with $K-1$ hyperplanes, $(K-1,\alpha^*,\beta^*)$ is proposed with the jump probability,
\begin{equation}\notag
q(K-1,\alpha_{1:K-1}^*,\beta_{1:K-1}^*,{\sigma^2_{1:K-1}}^* \, | \, K, \alpha_{1:K},\beta_{1:K},\sigma_{1:K}^2) = d_K h_d(\alpha^*,\beta^*,{\sigma^2}^*\, | \, \alpha,\beta,\sigma^2).
\end{equation}Here $d_K$ is the deletion probability given $K$ hyperplanes and $h_d$ is the proposal distribution for deletions.
\paragraph{Relocations.}
A new state with $K$ hyperplanes, $(K,\alpha^*,\beta^*)$ is proposed with the jump probability,
\begin{equation}\notag
q(K, \alpha_{1:K}^*,\beta_{1:K}^*,{\sigma^2_{1:K}}^*\, | \, K, \alpha_{1:K},\beta_{1:K},\sigma_{1:K}^2) = r_K h_r(\alpha^*,\beta^*,{\sigma^2}^* \, | \, \alpha, \beta,\sigma^2).
\end{equation}Here $r_K$ is the relocation probability given $K$ hyperplanes and $h_r$ is the proposal distribution for relocations. The full RJMCMC algorithm is given in Algorithm \ref{alg:RJMCMC}.
\begin{algorithm}[t]
\caption{Reversible Jump MCMC for MBCR}
\label{alg:RJMCMC}
\begin{algorithmic}
\STATE Initialize $(K,\alpha,\beta,\sigma^2)$: set $K = 1$, draw $(\alpha_1, \beta_1,\sigma^2_1)$ from posterior
\LOOP
\STATE Draw a new $(K^*,\alpha^*,\beta^*,{\sigma^2}^*)$ from the proposal distribution
\STATE Set $(K,\alpha,\beta,\sigma^2)$ to $(K^*,\alpha^*,\beta^*,{\sigma^2}^*)$ with probability $a(K^*, \alpha^*,\beta^* , {\sigma^2}^*\, | \, K, \alpha, \beta, \sigma^2)$
\ENDLOOP
\end{algorithmic}
\end{algorithm}
\subsection*{Proposal Distributions}
Posterior inference for our model is particularly sensitive to the choice of proposal distributions. The space of potential hyperplanes is quite large and grows with $p$. In order to efficiently search that space, we use a collection of {\it basis regions} to create $h_b$, $h_d$ and $h_r$. Basis regions are determined by partitioning the set of training data, as described in Equation (\ref{eq:basisRegion}). We show how this is done for relocations, deletions and additions.
\paragraph{Relocations.} In a relocation step, the proposal distribution is generated by the basis regions created by the current $(\alpha,\beta)$. That is, $C^r = \{C^r_1,\dots,C^r_K\}$, where
\begin{equation}\notag
C_k^r = \left\{ i \, : \, k = \arg \max_{j=\{1,\dots,K\}} \alpha_j + \beta_j^T \mathbf{x}_i\right\}.
\end{equation}Then the proposal distribution is created using this partition as in Equation (\ref{eq:basisRegion}).
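A minimal sketch of the basis-region computation (NumPy assumed; the function name is ours): each observation is assigned to the hyperplane attaining the maximum at that point.

```python
import numpy as np

def basis_regions(alpha, beta, X):
    """Partition observation indices by the hyperplane attaining the max.

    Returns a list C with C[k] = indices i for which hyperplane k attains
    max_j (alpha_j + beta_j^T x_i), i.e., the basis regions used to build
    the relocation proposal.
    """
    K = len(alpha)
    assign = np.argmax(alpha + X @ np.asarray(beta).T, axis=1)
    return [np.where(assign == k)[0] for k in range(K)]
```

For instance, with the two hyperplanes $f_1(x) = x$ and $f_2(x) = -x$ in one dimension, a point at $x = 2$ falls in the first region and a point at $x = -1$ in the second.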
\paragraph{Deletions.} In a deletion step, the proposal distribution is a mixture of distributions generated by basis regions,
\begin{equation}\notag
h_d(\alpha^*,\beta^*,{\sigma^2}^*\, | \, \alpha,\beta,\sigma^2) = \sum_{j=1}^K p_d(j) h_{d}^*(\alpha^*,\beta^*,{\sigma^2}^* \, | \, \alpha_{-j},\beta_{-j},\sigma_{-j}^2),
\end{equation}where $\sum_{j=1}^K p_d(j) = 1$, and the $-j$ subscript denotes the indices $\{1,\dots,j-1,j+1,\dots,K\}$. For $j = 1,\dots, K$, the distribution $h_d^*(\alpha^*,\beta^*,{\sigma^2}^* \, | \, \alpha_{-j},\beta_{-j},{\sigma}_{-j}^2)$ is made by creating the following partition, $C^{d,j} = \{C^{d,j}_1,\dots,C^{d,j}_{K-1}\}$, where
\begin{equation}\notag
C_k^{d,j} = \left\{ i \, : \, k = \arg \max_{\ell=\{1,\dots,j-1,j+1,\dots,K\}} \alpha_{\ell} + \beta_{\ell}^T \mathbf{x}_i\right\}.
\end{equation}Likewise, the distribution $h^*_d(\alpha^*,\beta^*,{\sigma^2}^* \, | \, \alpha_{-j},\beta_{-j},\sigma_{-j}^2)$ is defined by Equation (\ref{eq:basisRegion}). The component probability $p_d(j)$ is set to be proportional to $1/|C^r_j|$, the inverse of the number of observations supported by hyperplane $j$. If hyperplane $j$ currently supports no observations, we set $p_d(j) \propto 1/0.25$.
\paragraph{Additions.} As in the deletion step, the proposal distribution is a mixture of distributions generated by basis regions. Unlike deletions, it is not obvious how to add a hyperplane in a way that will result in a high quality proposal. In an addition step, MBCR starts with a set of hyperplanes, $(\alpha,\beta)$, and adaptively adds an additional hyperplane in the following manner.
The current hyperplanes $(\alpha, \beta)$ define a partition over the observation space, $C = \{C_1,\dots,C_K\}$, where
\begin{equation}\notag
C_k = \left\{ i \in \{1,\dots,n\} \, : \, k = \arg \max_{j\in\{1,\dots,K\}} \alpha_j + \beta_{j}^T \mathbf{x}_i\right\}.
\end{equation}MBCR splits each element $ j = 1,\dots,K$ of $C$ in turn along a direction defined by a linear combination of covariates, producing a collection of new covariate partitions. A number of knots, $L$, is chosen {\it a priori}, along with $M$ random linear combinations, $(g_1^{m},\dots,g_{p}^m)_{m=1}^M$. Then, for each direction $m = 1,\dots, M$, and each knot $\ell = 1,\dots, L$, a covariate partition is generated in the following manner,
\begin{align}\notag
C_k^{b, j,\ell,m} & = \left\{ i \, : \, k = \arg \max_{k'\in\{1,\dots,K\}} \alpha_{k'} + \beta_{k'}^T \mathbf{x}_i\right\} \quad \text{for } k \neq j,\\\notag
C_{j^-}^{b, j,\ell,m} & = \left\{ i \, : \, j = \arg \max_{k'\in\{1,\dots,K\}} \alpha_{k'} + \beta_{k'}^T \mathbf{x}_i, \ {\mathbf{g}^m}^T\mathbf{x}_i \leq a_{\ell}^{j,m}\right\},\\\notag
C_{j^+}^{b, j,\ell,m} & = \left\{ i \, : \, j = \arg \max_{k'\in\{1,\dots,K\}} \alpha_{k'} + \beta_{k'}^T \mathbf{x}_i, \ {\mathbf{g}^m}^T \mathbf{x}_i > a_{\ell}^{j,m}\right\},
\end{align}where the knots $a_{\ell}^{j,m}$ are chosen to produce $L+1$ intervals between $\min\{{\mathbf{g}^m}^T \mathbf{x}_{i} \, : \, i \in C_j\}$ and $\max\{{\mathbf{g}^m}^T \mathbf{x}_{i} \, : \, i \in C_j \}$. Set $$C^{b,j,\ell,m} = \left\{C_1^{b,j,\ell,m},\dots,C^{b,j,\ell,m}_{j-1},C^{b,j,\ell,m}_{j^-},C^{b,j,\ell,m}_{j^+},C^{b,j,\ell,m}_{j+1},\dots,C^{b,j,\ell,m}_{K}\right\}.$$
Often it is convenient to choose the cardinal directions, that is, $\mathbf{g}^m = \mathbf{e}_m$ for $m=1,\dots,M$ with $M = p$, as the linear combinations for the covariates. However, when $p$ is large and a sparse underlying structure is assumed, it is useful to choose each $\mathbf{g}^m$ to be a random Gaussian vector, with $M < p$.
The observation partitions are used to produce the following mixture model for the addition proposal distribution,
\begin{equation}\notag
h_b(\alpha^*,\beta^*,{\sigma^2}^* \, | \, \alpha, \beta,\sigma^2) = \sum_{j=1}^K \sum_{\ell=1}^L \sum_{m=1}^M p_b(j,\ell,m) h^*_b(\alpha^*,\beta^*,{\sigma^2}^* \, | \, C^{b,j,\ell,m}),
\end{equation}where the distribution $h^*_b(\alpha^*,\beta^*,{\sigma^2}^* \, | \, C^{b,j,\ell,m})$ is defined by Equation (\ref{eq:basisRegion}). The weights $p_b(j,\ell,m)$ are set to be proportional to
\begin{equation}\notag
p_b(j,\ell,m) \propto n_{j^-}^{j,\ell,m}n_{j^+}^{j,\ell,m},
\end{equation}where $n_{j^-}^{j,\ell,m}= |C_{j^-}^{b,j,\ell,m}|$ and $n_{j^+}^{j,\ell,m}= |C_{j^+}^{b,j,\ell,m}|$. This gives higher weight to partitions that split a large number of observations fairly evenly.
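The knot placement, candidate splits, and weights of the addition step can be sketched together as follows (a schematic re-implementation with our own names; the actual proposal then fits new hyperplane parameters to each candidate partition via the basis-region construction):

```python
def addition_candidates(C, X, g_dirs, L):
    """Enumerate candidate splits (j, l, m) with unnormalized weights.

    For component j and direction g^m, project the member covariates onto
    g^m, place L equally spaced interior knots (giving L+1 intervals),
    split C_j at each knot, and weight the candidate by n_minus * n_plus,
    favoring splits that divide many observations fairly evenly.
    """
    weights = {}
    for j, comp in enumerate(C):
        if len(comp) < 2:
            continue  # nothing to split
        for m, g in enumerate(g_dirs):
            proj = {i: sum(gi * xi for gi, xi in zip(g, X[i])) for i in comp}
            lo, hi = min(proj.values()), max(proj.values())
            for l in range(1, L + 1):
                a = lo + (hi - lo) * l / (L + 1)  # L knots -> L+1 intervals
                n_minus = sum(1 for i in comp if proj[i] <= a)
                n_plus = len(comp) - n_minus
                weights[(j, l, m)] = n_minus * n_plus
    return weights
```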
\bibliographystyle{agsm}
% Source: arXiv:1109.0322, "Bayesian nonparametric multivariate convex regression" (2011).
% Source: arXiv:1610.09587.
% Title: A Counting Lemma for Binary Matroids and Applications to Extremal Problems.
\begin{abstract}
In graph theory, the Szemerédi regularity lemma gives a decomposition of the indicator function for any graph $G$ into a structured component, a uniform part, and a small error. This result, in conjunction with a counting lemma that guarantees many copies of a subgraph $H$ provided a copy of $H$ appears in the structured component, is used in many applications to extremal problems. An analogous decomposition theorem exists for functions over $\mathbb{F}_p^n$. Specializing to $p=2$, we obtain a statement about the indicator functions of simple binary matroids. In this paper we extend previous results to prove a corresponding counting lemma for binary matroids. We then apply this counting lemma to give simple proofs of some known extremal results, analogous to the proofs of their graph-theoretic counterparts, and discuss how to use similar methods to attack a problem concerning the critical numbers of dense binary matroids avoiding a fixed submatroid.
\end{abstract}
\section{Introduction}
In this paper, the term \emph{matroid} refers to a simple binary matroid. A \emph{simple binary matroid} $M$ is, for our purposes, a full-rank subset of $\mathbb{F}_2^r\setminus \{0\}$ for some positive integer $r=r(M)$ called the \emph{rank} of $M$. The \emph{critical number} $\chi(M)$ is the smallest $c$ such that there is a copy of $\mathbb{F}_2^{r-c}$ in $\mathbb{F}_2^r\setminus M$, or equivalently such that $M$ is contained in a union $A_1\cup\cdots\cup A_c$, where each $A_i$ is the complement $\mathbb{F}_2^r\setminus H_i$ of a hyperplane $H_i\cong\mathbb{F}_2^{r-1}$.
Basic examples of matroids include the following.
\begin{itemize}
\item The \emph{projective geometry} of rank $r$, $PG(r-1,2)\coloneqq \mathbb{F}_2^r\setminus \{0\}$, which has rank $r$ and critical number $r$.
\item The \emph{affine geometry} of rank $r$, $AG(r-1,2)\coloneqq \mathbb{F}_2^r\setminus \mathbb{F}_2^{r-1}$, which has rank $r$ and critical number $1$.
\item The \emph{Bose-Burton geometry}, $\BB(r,c)=\mathbb{F}_2^r\setminus \mathbb{F}_2^{r-c}$, a generalization of both examples above, which has rank $r$ and critical number $c$.
\end{itemize}
There is a direct connection between graphs and matroids: For any graph $G$ we can define its \emph{cycle matroid} $M(G)$, whose elements correspond to edges of $G$, where a set of elements of $M(G)$ is linearly independent if and only if the corresponding edges of $G$ contain no cycle. A matroid is called \emph{graphic} if it is the cycle matroid of some graph. It is easy to see that $\chi(M(G))=\lceil \log_2(\chi(G))\rceil$, where $\chi(G)$ is the chromatic number of $G$.
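Both invariants are easy to brute-force for tiny ranks, which makes the connection concrete: for the triangle $K_3$, $\chi(K_3)=3$ while $\chi(M(K_3))=\lceil\log_2 3\rceil=2$. In the illustrative sketch below (our own throwaway code, exponential in everything), vectors of $\mathbb{F}_2^r$ are encoded as integers with XOR as addition:

```python
from itertools import combinations, product

def critical_number(M, r):
    """chi(M): least c such that some (r-c)-dimensional subspace of F_2^r
    avoids the matroid M (a set of nonzero vectors encoded as ints)."""
    M = set(M)
    for c in range(r + 1):
        k = r - c
        for gens in combinations(range(1, 2 ** r), k):
            span = {0}
            for g in gens:
                span |= {s ^ g for s in span}
            if len(span) == 2 ** k and not (span & M):
                return c

def chromatic_number(edges, nverts):
    """chi(G) by brute force over all colorings."""
    for q in range(1, nverts + 1):
        for col in product(range(q), repeat=nverts):
            if all(col[u] != col[v] for u, v in edges):
                return q

# M(K_3) = {e1, e2, e1 + e2} = {1, 2, 3} in F_2^2.
```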
The critical number of a matroid is analogous to the chromatic number of a graph. Just as the chromatic number plays a large role in many extremal problems in graph theory, the critical number plays a large role in extremal problems on matroids, which are often motivated by analogous problems for graphs. As there is a notion of a graph $G$ containing a copy of a subgraph $H$, there is a corresponding notion for matroids: a matroid $M$ \emph{contains} a copy of a matroid $N$ if there is a linear injection $\iota:\mathbb{F}_2^{r(N)}\to \mathbb{F}_2^{r(M)}$ such that $\iota(N)\subseteq M$. We often simply write this as $N\subseteq M$. Many extremal problems pose questions about criteria for the containment or avoidance of a fixed matroid $N$ in a matroid $M$.
One example of such an extremal problem is to determine the critical threshold of a matroid, another concept inspired by a graph-theoretical analogue.
\begin{defn}
Given a matroid $N$, the \emph{critical threshold} $\theta(N)$ is the infimum of all $\alpha>0$ for which there exists $c<\infty$ such that $|M|\geq \alpha 2^{r(M)}$ implies either $N\subseteq M$ or $\chi(M)\leq c$.
\end{defn}
The conjecture below is the extremal problem that motivates our work in this paper.
\begin{conj}[Geelen, Nelson, {\cite[Conj~1.7]{main}}]
\label{conj:mainsimp}
If $\chi(N)=c$, then $\theta(N)=1-i2^{-c}$, where $i\in \{2,3,4\}$.
\end{conj}
A more precise and technical version of this conjecture is stated as Conjecture~\ref{conj:main}. The technical details, and previous work towards solving the conjecture, are discussed in Section~\ref{sec:matrBack}.
The graph-theoretic analog of Conjecture~\ref{conj:mainsimp} is Theorem~\ref{thm:chromThresh}, which was proven in \cite{chromThresh}. The proof there makes use of the Szemerédi regularity lemma and a corresponding counting lemma. Roughly speaking, the regularity lemma states that, for any desired degree of uniformity $\varepsilon$, the vertices of a sufficiently large graph $G$ can be partitioned into a bounded number of parts of approximately the same size such that most (all but $\varepsilon$-fraction) pairs of parts $(X,Y)$ are $\varepsilon$-uniform, meaning that the edge density $d(X',Y')$ between large enough subsets $X',Y'$ of $X,Y$ does not differ too much from the edge density $d(X,Y)$ between $X$ and $Y$. Given such a partition $\Pi$ of $G$, we can construct a ``reduced graph'' $R=R_{\varepsilon,\delta}(\Pi)$ whose vertices are the parts in $\Pi$, with an edge between a pair $(X,Y)$ if and only if $(X,Y)$ is $\varepsilon$-uniform and $d(X,Y)\geq \delta$. The counting lemma states that for any graph $H$ contained as a subgraph in $R$, many copies of it are contained in $G$.
It is natural to consider approaching the critical threshold problem using analogous methods. Various regularity results analogous to the Szemerédi regularity lemma have been shown in the matroid setting, usually framed in terms of the indicator function for a matroid $M$ decomposing into several parts. The main such result we use, Theorem~\ref{thm:decomp}, is stated in Section~\ref{sec:decomps}. The statement of this theorem involves some technical terminology relating to nonclassical polynomial factors and Gowers norms, for which a brief introduction is given in Section~\ref{sec:decomps}.
In this paper, we build on work in \cite{VeryCountingMaybe} and \cite{hatamiRegCount} to develop a corresponding counting lemma for matroids, Theorem~\ref{thm:counting}. Again, stating this Counting Lemma in a precise form requires building up technical definitions for concepts like the reduced matroid, based on the results of applying Theorem~\ref{thm:decomp}. The Counting Lemma and its proof can be found in Section~\ref{sec:count}.
In Section~\ref{sec:apply} we demonstrate a few simple applications of this counting lemma, giving short new proofs for the matroid analogues of the Removal Lemma and the Erd\H{o}s-Stone Theorem in graph theory. Finally, we discuss an approach to applying our counting lemma and related techniques to Conjecture~\ref{conj:mainsimp} in Section~\ref{sec:work}, giving a conditional result on a special case as a demonstration. Along the way, we prove the following technical result, a generalization of the Bose-Burton theorem stated in Section~\ref{sec:matrBack} which may be useful in other settings as well.
\begin{prop}
\label{prop:extBB}
Let $n,c$ be positive integers, let $k_1,\dots,k_n$ be nonnegative integers, and let $G = \bigoplus_{i=1}^n \frac{1}{2^{k_i+1}}\mathbb{Z}/\mathbb{Z}$. Let $H$ be a subgroup of $G$. Let $M_1,\dots,$ $M_{2^c-1}$ be subsets of $G$. Then there exist $H_1,\dots,H_c\in G/H$, cosets of $H$, such that for $1\leq i\leq c$,
\begin{equation}
\frac{1}{|H|}\sum_{x\in\{0,1\}^{i-1}} \left | M_{2^{i-1}+\sum_{j=1}^{i-1} x_j 2^{j-1}}\cap \left( H_i + \sum_{j=1}^{i-1} x_j H_j \right ) \right | \geq \sum_{j=2^{i-1}}^{2^i-1}\frac{|M_j|}{|G|}.\tag{$*$}
\end{equation}
\end{prop}
\section{Extremal Problems on Graphs and Matroids}
We start by looking at a basic extremal problem on graphs, that of avoiding a fixed subgraph $H$.
\begin{defn}
The \emph{extremal number} for a graph $H$ and integer $n$ is defined by
$$\ex(H,n)=\max\{|E(G)| \mid |G|=n, H\not\subseteq G\}.$$
\end{defn}
The following theorem is a classical result.
\begin{thm}[Erd\H{o}s-Stone]
\label{thm:ES}
\[\lim_{n\rightarrow \infty} \frac{\ex(H,n)}{\binom{n}{2}}=1-\frac{1}{\chi(H)-1}.\]
\end{thm}
The special case where $H=K_m$ is a form of \emph{Turán's theorem}.
We can analyze the situation more carefully by looking for density thresholds above which, though graphs $G$ avoiding $H$ may exist, they are constrained by properties like a bounded chromatic number. It turns out that for graphs, the appropriate notion of density to consider here is the minimum degree $\delta(G)$ of a graph $G$.
\begin{defn}
\label{def:chromthresh}
Given a graph $H$, the \emph{chromatic threshold} $\theta(H)$ is the infimum of all $\alpha>0$ for which there exists $c<\infty$ such that $\delta(G)\geq \alpha |G|$ implies either $H\subseteq G$ or $\chi(G)\leq c$.
\end{defn}
The definition of the critical threshold of a matroid was motivated in analogy to Definition~\ref{def:chromthresh}.
The chromatic threshold was first determined for complete graphs in \cite{KnFree}, with an explicit sharp bound on the chromatic number involved.
\begin{thm}[Goddard, Lyle, {\cite[Thm~11]{KnFree}}]
If $\delta(G)>(2r-5)n/(2r-3)$ and $K_r\not\subseteq G$, then $\chi(G)\leq r+1$. In particular, $\theta(K_r)\leq \frac{2r-5}{2r-3}$.
\end{thm}
The chromatic threshold of a general graph $H$ was determined in the general case by Allen et al. in \cite{chromThresh}. To state the result, we first need to make the following definitions.
\begin{defn}
The \emph{decomposition family} $\mathcal{M}(H)$ of an $r$-partite graph $H$ is the set of bipartite graphs obtained by deleting all but $2$ color classes in some $r$-coloring of $H$.
\end{defn}
\begin{defn}
A graph $H$ is \emph{r-near-acyclic} if $\chi(H)=r$ and deleting all but $3$ color classes in some $r$-coloring of $H$ yields a graph $H'$ that can be partitioned into a forest $F$ and an independent set $S$ such that every odd cycle in $H'$ meets $S$ in at least $2$ vertices.
\end{defn}
Now we can state the main result of \cite{chromThresh}.
\begin{thm}[Allen, et al., {\cite[Thm~2]{chromThresh}}]
\label{thm:chromThresh}
If $\chi(H)=r$, then $\theta(H)=1-\frac{1}{r-\frac{i}{2}}$, where $i=2$ if and only if $\mathcal{M}(H)$ contains no forest, $i=4$ if and only if $H$ is $r$-near-acyclic, and $i=3$ otherwise.
\end{thm}
\subsection{Extremal Problems on Matroids}
\label{sec:matrBack}
We can ask the same extremal questions for matroids.
\begin{defn}
The \emph{extremal number} for a matroid $N$ and integer $n$ is defined by
$$\ex(N,n)=\max\{|M| \mid r(M)=n, N\not\subseteq M\}.$$
\end{defn}
Note that if $\chi(N)=c$, then $N$ is contained in $BB(n,c)$ for some $n$. Geelen and Nelson prove the following analogue of the Erd\H{o}s-Stone theorem in \cite{gES}.
\begin{thm}[Geometric Erd\H{o}s-Stone, {\cite[Thm~1.3]{gES}}]
\label{thm:gES}
\[\lim_{n\rightarrow \infty} \frac{\ex(N,n)}{2^n-1}=1-2^{1-\chi(N)}.\]
\end{thm}
The $\chi(N)=1$ case is known as the \emph{Binary Density Hales-Jewett} theorem. In this case, Geelen and Nelson in fact show that $\ex(AG(k,2),n)<2^{\alpha_k n + 1}$, where $\alpha_k=1-2^{-(k-1)}$ \cite{gDHJ}.
The special case where $N=PG(c-1,2)$ is a form of the \emph{Bose-Burton theorem}, which has a more precise statement as follows.
\begin{thm}
\label{thm:BB}
If $M$ does not contain a copy of $PG(c-1,2)$, then $|M|\leq 2^{r(M)}-2^{r(M)-c+1}$.
\end{thm}
Note that taking $G=\mathbb{F}_2^{r(M)}$, $H$ the trivial subgroup, and $M_1=\cdots=M_{2^c-1}=M$ in Proposition~\ref{prop:extBB} immediately yields Theorem~\ref{thm:BB}.
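For $c=2$, Theorem~\ref{thm:BB} says that a subset of $\mathbb{F}_2^r\setminus\{0\}$ containing no copy of $PG(1,2)$, i.e. no triple $\{a,b,a+b\}$, has at most $2^r-2^{r-1}$ elements, a bound attained by $\BB(r,1)$. This is small enough to confirm exhaustively for tiny $r$ (our own throwaway code, exponential in $2^r$):

```python
from itertools import combinations

def max_triangle_free(r):
    """Max size of a subset of F_2^r \\ {0} containing no {a, b, a^b}.

    Vectors are ints with XOR as addition; brute force over all subsets.
    """
    nonzero = list(range(1, 2 ** r))
    # each projective line {a, b, a^b} is listed once via the filter a^b > b
    lines = [(a, b, a ^ b) for a, b in combinations(nonzero, 2) if a ^ b > b]
    best = 0
    for mask in range(2 ** len(nonzero)):
        S = {v for i, v in enumerate(nonzero) if mask >> i & 1}
        if all(not {a, b, c} <= S for a, b, c in lines):
            best = max(best, len(S))
    return best
```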
In \cite{tidor}, Tidor proved a result on the chromatic thresholds of projective geometries, analogous to Goddard and Lyle's result for complete graphs.
\begin{thm}[Tidor, {\cite[Thm~1.4]{tidor}}]
\label{thm:tidor}
If $|M|>(1-3\cdot 2^{-t})2^{r(M)}$ and $PG(t-1,2)\not\subseteq M$, then $\chi(M)\in \{t-1,t\}$. In particular, $\theta(PG(t-1,2))\leq 1-3\cdot 2^{-t}$.
\end{thm}
To formulate the precise version of Conjecture~\ref{conj:mainsimp}, we make the following definition.
\begin{defn}
A matroid $M$ is \emph{c-near-independent} if $\chi(M)=c$ and for some $(c-2)$-codimensional subspace $H$ with $\chi(M\cap H)=2$, $H$ has a 1-codimensional subspace $S$ such that $M\cap S$ is linearly independent, and every odd circuit in $M\cap H$ contains at least four elements of $H\setminus S$.
\end{defn}
Now we state the precise form of the conjecture.
\begin{conj}[Geelen, Nelson, {\cite[Conj~5.2]{main}}]
\label{conj:main}
If $\chi(N)=c$, then $\theta(N)=1-i2^{-c}$, where $i=2$ if and only if no $(c-1)$-codimensional subspace $S$ exists such that $S\cap N$ is a set of linearly independent vectors, $i=4$ if and only if $N$ is $c$-near-independent, and $i=3$ otherwise.
\end{conj}
In \cite{main}, Geelen and Nelson show that the conjectured expression is a valid lower bound.
\begin{thm}[Geelen, Nelson, {\cite[Thm~5.4]{main}}]
\label{thm:lower}
If $\chi(N)=c$, then $\theta(N)\geq 1-i2^{-c}$, where $i=2$ if and only if no $(c-1)$-codimensional subspace $S$ exists such that $S\cap N$ is a set of linearly independent vectors, and $i=4$ if and only if $N$ is $c$-near-independent, and $i=3$ otherwise.
\end{thm}
Combined with the trivial upper bound of $\theta(N)\leq 1 - 2^{1-c}$ that follows immediately from Theorem~\ref{thm:gES}, it remains to show that $\theta(N)\leq 1 - 3\cdot 2^{-c}$ when $N$ has a $(c-1)$-codimensional subspace $S$ such that $S\cap N$ is linearly independent, and that $\theta(N)\leq 1 - 4\cdot 2^{-c}$ when $N$ is $c$-near-independent.
For $\ell\geq c+k-1$, $c>1$, define $N_{\ell,c,k}$ to be the rank $\ell$ matroid consisting of the union of $BB(\ell,c-1)$ with $k$ linearly independent vectors contained inside the complement of $BB(\ell,c-1)$ in $\mathbb{F}_2^\ell$. This represents the most general maximal case of matroids of critical number $c$ for which $i$ is conjectured to be $3$. In Section~\ref{sec:work}, we verify Conjecture~\ref{conj:main} for $N_{\ell,2,1}$ and discuss, using a conditional result as a demonstration, how the tools in this paper could be applied to the general $N_{\ell,c,1}$ case.
\section{Regularity and Counting}
A \emph{regularity} or \emph{decomposition} result, in general, splits a generic object (e.g. a graph, a subset of an abelian group, or a function) into a structured part, a uniform part, and possibly a small error. A corresponding \emph{counting lemma} then guarantees that the number of copies of a suitable subobject contained in this object can be well-approximated by the number of copies contained in the structured part. The most well-known example of such a pair of results is the Szemerédi regularity lemma and the corresponding counting lemma for subgraphs contained in the reduced graph. As mentioned in the introduction, the use of this pair of lemmas is key to the argument used in \cite{chromThresh} to prove Theorem~\ref{thm:chromThresh}.
A simple example of an analogous regularity result for matroids is Green's regularity lemma (specialized to $\mathbb{F}_2^n$). To state it, we make a few preliminary definitions.
\begin{defn}
Let $V=\mathbb{F}_2^n$. A set $X\subset V$ is \emph{linearly $\varepsilon$-uniform} in $V$ if $|\widehat{1_X}(\xi)|\leq \varepsilon$ for all nonzero $\xi\in \hat{V}=V$, or equivalently if for each hyperplane $H\leq V$,
\[||X\cap H|-|X\setminus H||\leq \varepsilon |V|.\]
\end{defn}
\begin{defn}
Let $X\subseteq V=\mathbb{F}_2^n$. A subspace $W\leq V$ is \emph{linearly $\varepsilon$-regular} with respect to $X$ if for all but $\varepsilon |V|$ values of $v\in V$, the set $(X-v)\cap W$ is linearly $\varepsilon$-uniform in $W$.
\end{defn}
Green's regularity result is the following.
\begin{thm}[Geometric Regularity Lemma, {\cite[Thm~2.1]{gReg}}]
\label{thm:greenReg}
For any $\varepsilon\in(0,\frac{1}{2})$ there is a $T>0$ such that for any $V=\mathbb{F}_2^n$ and any subset $X\subset V$ there is a subspace $W\subseteq V$ of codimension at most $T$ that is linearly $\varepsilon$-regular with respect to $X$.
\end{thm}
This notion of regularity readily yields a counting lemma for triangles (and indeed, all odd circuits) in matroids, which Geelen and Nelson use in their proofs that $\theta(PG(1,2))\leq \frac{1}{4}$ \cite{main} and that odd circuits have critical threshold $0$ \cite{geelenOdd}. In Section~\ref{sec:vernm1}, we use a method along the same lines as their proof to verify Conjecture~\ref{conj:main} for $N_{\ell,2,1}$.
Unfortunately, the linear Fourier-analytic notion of regularity provided by Theorem~\ref{thm:greenReg} is not strong enough for a counting lemma to hold for general submatroids $N$. In attempting to translate the ideas of \cite{chromThresh} into tools for the matroid threshold problem, we therefore need a stronger regularity statement, one that admits a corresponding, more general counting lemma.
\subsection{Regularity on Matroids} \label{sec:decomps}
After the inverse conjecture for the Gowers norm over finite fields of low characteristic was established \cite{tao2012inverse}, stronger regularity results, using regularity with respect to the Gowers norms, came within reach. The primary regularity result that we will use is stated in \cite{VeryCountingMaybe} as a decomposition theorem for bounded functions on $\mathbb{F}^n$. To work with this result, we will first need to introduce a few technical concepts from higher-order Fourier analysis.
We will only be concerned with the field $\mathbb{F}=\mathbb{F}_2$, though most of the concepts below also extend to prime-ordered fields $\mathbb{F}_p$ in general. Given a function $f:\mathbb{F}^n\rightarrow\{0,1\}$, in our case usually the indicator function of some matroid $M\subseteq \mathbb{F}^n\setminus\{0\}$, the decomposition theorem will split it into a sum of three parts: a structured part, a uniform part, and a small error. Here we will address the technical issues that arise in working with the first two parts. In the sections below, we largely quote the terminology and notation used in \cite{VeryCountingMaybe} and \cite{hatamiRegCount}.
\subsubsection{The Gowers norm and nonclassical polynomials}
\begin{defn}
Given a function $f:\mathbb{F}^n\rightarrow\mathbb{C}$ and an integer $d\geq 1$, the \emph{Gowers norm of order $d$} for $f$ is
\[\|f\|_{U^d}=\left|\E_{h_1,\dots,h_d,x\in\mathbb{F}^n}\left[\prod_{i_1,\dots,i_d\in\{0,1\}}\mathcal{C}^{i_1+\cdots+i_d}f\left(x + \sum_{j=1}^d i_j h_j\right)\right]\right|^{1/2^d},\]
where $\mathcal{C}$ denotes the conjugation operator.
\end{defn}
It is easy to see that $\|f\|_{U^d}$ is increasing in $d$ and is indeed a norm for $d\geq 2$, and that $\|f\|_{U^1}=|\E[f]|$ and $\|f\|_{U^2}=\|\hat{f}\|_{l^4}$. So, the Gowers norm of order $2$ is related to the Fourier bias used in Green's regularity lemma, a measure of correlation with exponentials of linear polynomials: $\|f\|_{U^2}$ is large if and only if $\sup_{\xi\neq 0}|\hat{f}(\xi)|$ is large, i.e. if and only if $f$ is strongly correlated with the exponential of some linear polynomial. It is natural to expect the Gowers norm of order $d+1$ to be similarly related to polynomials of degree $d$; conjectures that a large Gowers-$(d+1)$ norm implies correlation with the exponential of a degree $d$ polynomial, in various settings, were known as \emph{inverse conjectures} for the Gowers norms.
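The identity $\|f\|_{U^2}=\|\hat{f}\|_{l^4}$ is easy to verify numerically for real-valued $f$: the fourth power of the left side is the four-point average defining the $U^2$ norm, and the right side comes from the Walsh-Fourier coefficients. The sketch below (illustrative code, exponential in $n$) computes both sides directly on $\mathbb{F}_2^3$:

```python
from itertools import product

def u2_norm(f, n):
    """||f||_{U^2} from the 4-point average over x, h1, h2 in F_2^n."""
    N = 2 ** n
    total = sum(f[x] * f[x ^ h1] * f[x ^ h2] * f[x ^ h1 ^ h2]
                for x, h1, h2 in product(range(N), repeat=3))
    return (total / N ** 3) ** 0.25

def fourier_l4(f, n):
    """l^4 norm of the transform f_hat(xi) = E_x f(x) (-1)^(xi . x)."""
    N = 2 ** n
    sign = lambda xi, x: -1 if bin(xi & x).count("1") % 2 else 1
    coeffs = [sum(f[x] * sign(xi, x) for x in range(N)) / N
              for xi in range(N)]
    return sum(c ** 4 for c in coeffs) ** 0.25
```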
For large $d$ over fields of small characteristic, it turns out that the inverse conjectures are not true as stated; the right notion to consider, over which an inverse theorem for the Gowers norms actually holds, is that of a \emph{nonclassical polynomial}.
\begin{defn}
Let $\mathbb{T}=\mathbb{R}/\mathbb{Z}$. Given an integer $d\geq 0$, a function $P:\mathbb{F}^n\rightarrow\mathbb{T}$ is called a \emph{(non-classical) polynomial of degree at most $d$} if for all $h_1,\dots,h_{d+1},x\in\mathbb{F}^n$,
\[\sum_{i_1,\dots,i_{d+1}\in\{0,1\}}(-1)^{i_1+\cdots+i_{d+1}}P\left(x + \sum_{j=1}^{d+1} i_j h_j\right)=0.\]
\end{defn}
Since we will be working mostly with non-classical polynomials, it should be assumed that any use of the word ``polynomial'' refers to a possibly non-classical polynomial unless otherwise specified.
Let $\expo{x}=e^{2\pi i x}$. It follows from definition that $\|f\|_{U^{d+1}}=1$ if and only if $f=\expo{P}$ for some non-classical polynomial $P$ of degree at most $d$, and $\|f\cdot \expo{P}\|_{U^{d+1}}=\|f\|_{U^{d+1}}$ for any function $f$ and non-classical polynomial $P$ of degree at most $d$. Non-classical polynomials can be characterized in terms of classical ones by the following lemma of Tao and Ziegler \cite{tao2012inverse}.
\begin{lem}[{\cite[Lemma~1.7]{tao2012inverse}}]\label{lem:nonclassrep}
Let $|\cdot|$ denote the standard map from $\mathbb{F}_p$ to $\{0,1,\dots,p-1\}$. A function $P: \mathbb{F}_p^n \to \mathbb{T}$ is a polynomial of degree at most $d$ if and
only if $P$ can be represented as
$$P(x_1,\dots,x_n) = \alpha + \sum_{\substack{0\leq d_1,\dots,d_n< p; k \geq 0:
\\ 0 < \sum_i d_i \leq d - k(p-1)}} \frac{ c_{d_1,\dots, d_n,
k} |x_1|^{d_1}\cdots |x_n|^{d_n}}{p^{k+1}} \mod 1,
$$
for a unique choice of $c_{d_1,\dots,d_n,k} \in \{0,1,\dots,p-1\}$
and $\alpha \in \mathbb{T}$. The element $\alpha$ is called the {\em
shift} of $P$, and the largest integer $k$ such that there
exist $d_1,\dots,d_n$ for which $c_{d_1,\dots,d_n,k} \neq 0$ is called
the {\em depth} of $P$. A depth-$k$ polynomial $P$ takes values in a coset of the subgroup $\mathbb{U}_{k+1}\coloneqq \frac{1}{p^{k+1}} \mathbb{Z}/\mathbb{Z}$. Classical polynomials correspond to
polynomials with $0$ shift and $0$ depth.
\end{lem}
For convenience we will assume henceforth that all polynomials have shift $0$, so that all polynomials of depth $k$ take values in $\mathbb{U}_{k+1}$; this will not affect our arguments.
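As a concrete sanity check: over $\mathbb{F}_2$ with $n=1$, the function $P(x)=|x|/4$, taking the values $\{0,\tfrac14\}\subseteq\mathbb{U}_2$, is a non-classical polynomial of degree exactly $2$ and depth $1$ by Lemma~\ref{lem:nonclassrep}. The sketch below (our own illustration) tests the defining condition that the $(d+1)$-fold alternating difference of a degree-$\leq d$ polynomial vanishes in $\mathbb{T}$:

```python
from fractions import Fraction
from itertools import product

def is_degree_at_most(P, n, d):
    """True iff every (d+1)-fold alternating sum of P: F_2^n -> T vanishes.

    P is a table of Fractions indexed by x in {0, ..., 2^n - 1}, read
    modulo 1; points of F_2^n are ints with XOR as addition.
    """
    N = 2 ** n
    for x in range(N):
        for hs in product(range(N), repeat=d + 1):
            total = Fraction(0)
            for bits in product((0, 1), repeat=d + 1):
                pt = x
                for b, h in zip(bits, hs):
                    if b:
                        pt ^= h
                total += (-1) ** sum(bits) * P[pt]
            if total % 1 != 0:   # must be an integer, i.e. 0 in T
                return False
    return True
```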
\subsubsection{Polynomial factors and rank}
\begin{defn}
A \emph{polynomial factor} $\mathcal{B}$ of $\mathbb{F}^n$ is a partition of $\mathbb{F}^n$ into finitely many pieces, called \emph{atoms}, such that for some polynomials $P_1,\dots,P_C$, each atom is defined as the solution set $\{x : P_i(x)=b_i \text{ for all } i\in\{1,\dots,C\}\}$ for some $(b_1,\dots,b_C)\in \mathbb{T}^C$. The \emph{complexity} of $\mathcal{B}$ is the number of defining polynomials $|\mathcal{B}|=C$, and the \emph{degree} is the highest degree among $P_1,\dots,P_C$. If $P_i$ has depth $k_i$, the \emph{order} of $\mathcal{B}$ is $\|\mathcal{B}\|=\prod_{i=1}^C p^{k_i+1}$, an upper bound on the number of atoms in $\mathcal{B}$.
\end{defn}
\begin{defn}
The \emph{$d$-rank} $\rank_d(P)$ of a polynomial $P$ is the smallest integer $r$ such that $P$ can be expressed as a function of $r$ polynomials of degree at most $d-1$. The \emph{rank} of a polynomial factor defined by $P_1,\dots,P_C$ is the least integer $r$ for which there is a tuple $(\lambda_1,\dots,\lambda_C)\in\mathbb{Z}^C$, with $(\lambda_1 \mod p^{k_1+1}, \ldots, \lambda_C \mod p^{k_C+1})\neq 0^C$, such that $\rank_d(\sum_{i=1}^C \lambda_iP_i) \leq r$, where $d=\max_i \deg(\lambda_i P_i)$.
Given a polynomial factor $\mathcal B$ and a function $r:\mathbb{Z}_{>0}\rightarrow \mathbb{Z}_{>0}$, we say that $\mathcal B$ is \emph{$r$-regular} if $\mathcal B$ is of rank larger than $r(|\mathcal B|)$.
\end{defn}
We can also define an analytic notion of uniformity for polynomial factors.
\begin{defn}
For $\varepsilon>0$, we say that $\mathcal B$ is \emph{$\varepsilon$-uniform} if for all $(\lambda_1,\dots,\lambda_C)\in\mathbb{Z}^C$ with $(\lambda_1 \mod p^{k_1+1}, \ldots, \lambda_C \mod p^{k_C+1})\neq 0^C$,
\[\left\|\expo{\sum_i \lambda_i P_i}\right\|_{U^d}<\varepsilon.\]
\end{defn}
The following fact, noted in \cite{hatamiRegCount}, follows from Theorem~1.20 of \cite{tao2012inverse}.
\begin{prop}\label{prop:regUni}
For every $\varepsilon>0,d\in\mathbb{Z}_{>0}$ there exists an integer $r=r(d,\varepsilon)$ such that every $r$-regular degree $d$ polynomial factor $\mathcal B$ is $\varepsilon$-uniform.
\end{prop}
\subsubsection{The Strong Decomposition Theorem}
We can finally state the main regularity result we will use, the strong decomposition theorem of \cite{VeryCountingMaybe}.
\begin{thm}[Strong Decomposition Theorem, {\cite[Theorem~5.1]{VeryCountingMaybe}}]\label{thm:decomp}
Suppose $\delta > 0$ is a real number and $d \geq 1$ is an integer. Let $\eta: \mathbb{N}
\to \mathbb{R}^+$ be an arbitrary non-increasing function and $r: \mathbb{N} \to \mathbb{N}$ be an arbitrary
non-decreasing function. Then there exist $N =
N(\delta, \eta, r, d)$ and $C =
C(\delta,\eta,r,d)$ such that the following holds.
Given $f: \mathbb{F}^n \to \{0,1\}$ where $n > N$, there
exist three functions $f_1, f_2, f_3: \mathbb{F}^n \to
\mathbb{R}$ and a polynomial factor $\mathcal B$ of
degree at most $d$ and complexity at most $C$ such that the following conditions hold:
\begin{itemize}
\item[(i)]
$f=f_1+f_2+f_3$.
\item[(ii)]
$f_1 = \E[f|\mathcal B]$, the expected value of $f$ on an atom of $\mathcal B$.
\item[(iii)]
$\|f_2\|_{U^{d+1}} \leq \eta(|\mathcal B|)$.
\item[(iv)]
$\|f_3\|_2 \leq \delta$.
\item[(v)]
$f_1$ and $f_1 + f_3$ have range $[0,1]$; $f_2$ and $f_3$ have range $[-1,1]$.
\item[(vi)]
$\mathcal B$ is $r$-regular.
\end{itemize}
\end{thm}
In analogy with the terminology for the Szemerédi regularity lemma, we will call a decomposition of $f=1_M$ with the properties given by Theorem~\ref{thm:decomp} for parameters $\delta, \eta, r, d$ a \emph{$(\delta,\eta,r,d)$-regular partition of $f$ (or of $M$)}, and we say that $\mathcal B$ is its \emph{corresponding factor}. Similarly, an \emph{$(\eta,r,d)$-regular partition of $f$ (or of $M$)} is the same thing with an unspecified value for $\delta$.
Theorem~\ref{thm:decomp} is strong enough to help us prove a corresponding counting lemma for general binary matroids.
\subsection{The Counting Lemma} \label{sec:count}
Before stating our Counting Lemma, we first define the notion of a \emph{reduced matroid}, in analogy to the reduced graph used in the graph counting lemma.
\begin{defn}[Reduced Matroid]
Given a matroid $M\subseteq \mathbb{F}^n\setminus \{0\}$ and an $(\eta,r,d)$-regular partition $f_1+f_2+f_3$ of $M$ with corresponding factor $\mathcal B$, for any $\varepsilon,\zeta>0$ define the $(\varepsilon,\zeta)$-\emph{reduced matroid} $R=R_{\varepsilon, \zeta}$ to be the subset of $\mathbb{F}^n$ whose indicator function $F$ is constant on each atom $b$ of $\mathcal B$ and equals $1$ if and only if
\begin{enumerate}
\item $\E[|f_3(x)|^2\mid x\in b]\leq \varepsilon^2$, and
\item $\E[f(x)\mid x\in b]\geq \zeta$.
\end{enumerate}
\end{defn}
So, $R$ gives the atoms of the decomposition in which $M$ has high density and the $L^2$ error term is small.
As in the counting lemma for graphs, it will turn out that we do not need the hypothesis of having a \emph{copy} of $N$ in $R_{\varepsilon,\zeta}$, i.e. an \emph{injective} linear map sending $N$ inside $R_{\varepsilon,\zeta}$. Rather, we only need there to be a \emph{homomorphism} from $N$ to $R_{\varepsilon,\zeta}$, i.e. any linear map $\iota$ such that $\iota(N)\subseteq R_{\varepsilon,\zeta}$.
\begin{thm}[Counting Lemma]\label{thm:counting}
For every matroid $N$, positive real number $\zeta$, and integer $d\geq |N|-2$, there exist positive real numbers $\beta$ and $\varepsilon_0$, a positive nonincreasing function $\eta:\mathbb{Z}^+\to \mathbb{R}^+$, and positive nondecreasing functions $r,\nu:\mathbb{Z}^+\to \mathbb{Z}^+$ such that the following holds for all $\varepsilon\leq \varepsilon_0$. Let $M\subseteq \mathbb{F}^n\setminus \{0\}$ be a matroid with an $(\eta,r,d)$-regular partition $f_1+f_2+f_3$ with corresponding factor $\mathcal B$. If $n\geq \nu(|\mathcal B|)$ and there exists a homomorphism from $N$ to the reduced matroid $R_{\varepsilon,\zeta}$, then there exist at least $\beta \frac{(2^n)^{r(N)}}{\|\mathcal B\|^{|N|}}$ copies of $N$ in $M$.
\end{thm}
The case where $N$ is an affine matroid (i.e. $\chi(N)=1$) is proved in \cite{VeryCountingMaybe} in the context of property testing; here we prove the lemma in full generality using a very similar argument. The basic idea is to obtain a lower bound for the probability that a linear map $\iota:\mathbb{F}^{r(N)}\to \mathbb{F}^{n}$ chosen uniformly at random sends $N$ to a set contained entirely within $M$, by splitting $f=1_M$ into its three parts and expanding the resulting product inside the expectation. Letting $N=\{N_1,\dots,N_m\}$ and $r(N)=\ell$, we have
\[{\bf {Pr}}_{\iota:\mathbb{F}^{\ell}\to \mathbb{F}^{n}}[\iota(N)\subseteq M]=\E_{\iota}\left[\prod_{i=1}^m f(\iota(N_i))\right]=\E_{\iota}\left[\sum_{(j_1,\dots,j_m)\in [1,3]^m}\prod_{i=1}^m f_{j_i}(\iota(N_i))\right].\]
For ease of notation in the proof that follows, we will introduce the concept of a linear form, as used in \cite{VeryCountingMaybe} and \cite{hatamiRegCount}.
\begin{defn}
A \emph{linear form} on $k$ variables is a linear map $L:(\mathbb{F}^n)^k\to \mathbb{F}^n$. If it is given by $L(x_1,\dots,x_k)=\sum_{i=1}^k \ell_i x_i$, where $ \ell_i\in \mathbb{F} ~ \forall i$, we write $L=(\ell_1,\dots,\ell_k)$.
\end{defn}
Note that a linear form on $k$ variables can be thought of as a vector in $\mathbb{F}^k$. Conversely, we can think of each element $N_j$ of $N$ as a linear form $L_j$ on $\ell$ variables. Each linear map $\iota: \mathbb{F}^{\ell}\to \mathbb{F}^{n}$ corresponds to a point $X=(x_1,\dots,x_{\ell})\in (\mathbb{F}^{n})^{\ell}$ such that $\iota(N_j)=L_j(X)$. So, instead of taking an expectation over linear maps, we can take the more intuitive approach of taking an expectation over tuples of points. The expression from before is the same as
\begin{align*}
{\bf {Pr}}_{X\in (\mathbb{F}^{n})^{\ell}}[L_j(X)\in M ~ \forall j \in [1,m]]&=\E_{X}\left[\prod_{j=1}^m f(L_j(X))\right]\\
&=\E_{X}\left[\sum_{(i_1,\dots,i_m)\in [1,3]^m}\prod_{j=1}^m f_{i_j}(L_j(X))\right].
\end{align*}
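As a toy illustration of this viewpoint (not part of the proof; the helper names here are ours), one can brute-force the count over $\mathbb{F}_2$ for $N=PG(1,2)$, whose three elements correspond to the linear forms $(1,0),(0,1),(1,1)$ on two variables:

```python
from itertools import product

# Elements of F_2^n are bitmask ints; a linear form L = (l_1, ..., l_ell)
# applied to X = (x_1, ..., x_ell) is the XOR of the x_i with l_i = 1.
def apply_form(form, X):
    out = 0
    for coeff, x in zip(form, X):
        if coeff:
            out ^= x
    return out

# PG(1,2) = {e1, e2, e1+e2} viewed as linear forms on ell = 2 variables.
forms = [(1, 0), (0, 1), (1, 1)]

n = 3
M = set(range(1, 2 ** n))  # every nonzero vector of F_2^3

# Count tuples X in (F_2^n)^ell with L_j(X) in M for all j, i.e. linear
# maps iota with iota(N) contained in M (homomorphisms from N to M).
homs = sum(
    all(apply_form(f, X) in M for f in forms)
    for X in product(range(2 ** n), repeat=2)
)
print(homs)       # 42: x_1, x_2 must be nonzero and distinct
print(homs // 6)  # 7 copies, dividing by |GL_2(F_2)| = 6
```

Here every homomorphism happens to be injective, and the $42/6=7$ copies of $PG(1,2)$ are exactly the seven two-dimensional subspaces of $\mathbb{F}_2^3$ with $0$ removed.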
The terms involving the Gowers uniform part $f_2$ are the easiest to deal with; we will simply invoke the following result.
\begin{lemma}[{\cite[Lemma~3.12]{hatamiRegCount}}]
\label{lem:gowerscount}
Let $f_1,\ldots,f_m:\mathbb{F}^n \to \mathbb{D}$. Let $N=\{L_1,\dots,L_m\}$ be a system of linear forms in $\ell$ variables. Then for $s\geq m-2$,
$$
\left| \E_{X\in(\mathbb{F}^{n})^{\ell}} \left[ \prod_{j=1}^{m} f_j (L_j(X)) \right]\right| \le \min_{1 \le j \le m} \|f_j\|_{U^{s+1}}.
$$
\end{lemma}
To deal with the remaining terms, we will use a near-orthogonality theorem from \cite{hatamiRegCount}. To state this near-orthogonality theorem, we first need to introduce the notion of \emph{consistency} as defined in \cite{hatamiRegCount}. By a \emph{homogeneous} polynomial over $\mathbb{F}_p$ we mean a polynomial $P$ such that for all $c\in\mathbb{F}_p$ there exists a $c'\in\mathbb{F}_p$ such that $P(cx)\equiv c'P(x)$. In the case of $\mathbb{F}=\mathbb{F}_2$, this restriction is equivalent to simply requiring $P(0)=0$.
\begin{definition}[Consistency]\label{def:consistent}
Let $N=\{L_1,\dots,L_m\}$ be a system of linear forms in $\ell$ variables. A vector $(\beta_1, \dots, \beta_m) \in \mathbb{T}^m$ is said to be {\em $(d,k)$-consistent with $N$} if there exists a homogeneous polynomial $P$ of degree $d$ and depth $k$ and a point $X\in (\mathbb{F}^n)^\ell$ such that $P(L_j(X))=\beta_j$ for every $j \in [m]$. Let $\Phi_{d,k}(N)$ denote the set of all such vectors. This is a subgroup of $\mathbb{U}_{k+1}^m$; we define
\begin{align*}
&\Phi_{d,k}(N)^\perp \\
& := \left\{(\lambda_1,\ldots,\lambda_m) \in [0,2^{k+1}-1]^m \ : \ \forall (\beta_1,\ldots,\beta_m) \in \Phi_{d,k}(N), \ \sum_{j=1}^m \lambda_j \beta_j = 0 \right\},
\end{align*}
\noindent the set of all $(\lambda_1,\ldots,\lambda_m) \in [0,2^{k+1}-1]^m$ such that $\sum_{j=1}^m \lambda_j P(L_j(X)) \equiv 0$ for every homogeneous polynomial $P$ of degree $d$ and depth $k$ and every point $X$. Call $\Phi_{d,k}(N)^\perp$ the \emph{$(d,k)$-dependency set} of $N$.
\end{definition}
\begin{thm}[Near Orthogonality, {\cite[Theorem~3.7]{hatamiRegCount}}]\label{thm:nearOrtho}
Let $N=\{L_1,\dots,L_m\}$ be a system of linear forms in $\ell$ variables, and let $\mathcal B=(P_1,\dots,P_C)$ be an $\varepsilon$-uniform polynomial factor for some $\varepsilon\in (0,1]$ defined only by homogeneous polynomials. For every tuple $\Lambda$ of integers $(\lambda_{i,j})_{i\in [C], j\in [m]}$, define
$$
P_{\Lambda}(X)= \sum_{i\in [C], j\in [m]} \lambda_{i,j} P_i(L_j(X)).
$$
Then one of the following two statements holds:
\begin{itemize}
\item $P_\Lambda \equiv 0,$ and furthermore for every $i \in [C]$, we have $(\lambda_{i,j})_{j \in [m]} \in \Phi_{d_i,k_i}(N)^\perp$, where $d_i,k_i$ are the degree and depth of $P_i$, respectively.
\item $P_\Lambda$ is non-constant and $\left| \E_{X\in (\mathbb{F}^n)^\ell} [\expo{P_\Lambda}] \right|< \varepsilon$.
\end{itemize}
\end{thm}
\begin{rem}
Over a general prime-ordered field $\mathbb{F}_p$, the restriction of homogeneity is dealt with by modifying the decomposition theorem to only use homogeneous polynomials in the factor $\mathcal B$. The situation is even simpler over $\mathbb{F}=\mathbb{F}_2$, where any polynomial factor can be rewritten in terms of homogeneous polynomials simply by shifting each polynomial by a constant.
\end{rem}
Directly applying Theorem~\ref{thm:nearOrtho} yields the following result, which estimates the number of copies of $N$ with each element in a specified atom of $\mathcal B$.
\begin{theorem}[Near-equidistribution, {\cite[Theorem~3.10]{hatamiRegCount}}]\label{thm:equidist}
Given $\varepsilon > 0$, let $\mathcal B$ be an $\varepsilon$-uniform polynomial factor of
degree $d > 0$ and complexity $C$ that is defined by a tuple of homogeneous
polynomials $P_1, \dots, P_C: \mathbb{F}^n
\to \mathbb{T}$ having respective degrees $d_1, \dots, d_C$ and
depths $k_1, \dots, k_C$.
Let $N=\{L_1,\dots,L_m\}$ be a system of linear forms on $\ell$ variables.
Suppose $(\beta_{i,j})_{i \in [C], j \in [m]} \in \mathbb{T}^{C \times m}$ is such that $(\beta_{i,1},\ldots,\beta_{i,m}) \in \Phi_{d_i,k_i}(N)$ for every $i \in [C]$. Then
$$
\left|{\bf {Pr}}_{X\in (\mathbb{F}^n)^\ell}\left[P_i(L_j(X)) = \beta_{i,j}~
\forall i \in [C],j \in [m] \right] - \frac{1}{K}\right| \leq \varepsilon,
$$
where $K= \prod_{i=1}^C |\Phi_{d_i,k_i}(N)|$.
\end{theorem}
In particular, taking $N$ to have a single element yields an estimate on the size of each atom.
\begin{cor}[Size of atoms, {\cite[Lemma~3.2]{VeryCountingMaybe}}]
\label{cor:atomSize}
Given $\varepsilon > 0$, let $\mathcal B$ be an $\varepsilon$-uniform polynomial factor of
degree $d > 0$ and complexity $C$ that is defined by a tuple of homogeneous
polynomials $P_1, \dots, P_C: \mathbb{F}^n
\to \mathbb{T}$ having respective
depths $k_1, \dots, k_C$.
Suppose $b=(b_1,\dots,b_C) \in \mathbb{T}^{C}$ is such that $b_i \in \mathbb{U}_{k_i+1}$ for every $i \in [C]$. Then
$$
\left|{\bf {Pr}}_{x}\left[P_i(x) = b_i~
\forall i \in [C] \right] - \frac{1}{\|\mathcal B\|}\right| \leq \varepsilon.
$$
\end{cor}
Now we proceed with the proof of the Counting Lemma.
\begin{proof}[Proof of Theorem~\ref{thm:counting}]
Let $\ell=r(N)$, $m=|N|$, and $\alpha(C)=2^{-2dCm}$. We set $r(C)$ to be the integer $r(d,\alpha(C))$ given by Proposition~\ref{prop:regUni} such that every $r$-regular degree $d$ polynomial factor is $\alpha(C)$-uniform. Let $\mathcal B=(P_1,\dots,P_C)$. So, $\mathcal B$ is $\alpha(C)$-uniform. Notice that $\|\mathcal B\|=\prod_{i=1}^C 2^{k_i+1}\leq 2^{dC}$.
We want a lower bound for the probability that for a linear map $\iota:\mathbb{F}^\ell\to \mathbb{F}^n$ chosen uniformly at random, each element of $N$ is sent inside $M$. Since degenerate maps (ones where the image of $N$ is of lower rank than $N$) are sparse, this will give us that a constant fraction of the copies of $N$ in $\mathbb{F}^n$ (depending on $\|\mathcal B\|$ in an appropriate way) are contained in $M$.
As before, we represent $N$ as a system $\{L_1,\dots,L_m\}$ of linear forms on $\ell$ variables. The probability we want to bound is then
\begin{align*}
&{\bf {Pr}}_{X\in (\mathbb{F}^{n})^\ell}[L_j(X)\in M ~ \forall j \in [1,m] ]=\E_{X}\left[\prod_{j=1}^m f(L_j(X))\right]\\
&=\sum_{(i_1,\dots,i_m)\in [1,3]^m}\E_{X}\left[\prod_{j=1}^m f_{i_j}(L_j(X))\right].
\end{align*}
There are $3^m$ terms in the sum. One of these, the one that will turn out to be the main term, involves only $f_1$. Of the rest, $2^m-1$ involve only $f_1$ and $f_3$ (with $f_3$ appearing at least once), and the other $3^m-2^m$ terms involve $f_2$.
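For concreteness, the $m=2$ instance of this splitting reads
\[
\E_{X}\big[f(L_1(X))f(L_2(X))\big]
=\E_{X}\big[f_1(L_1(X))f_1(L_2(X))\big]
+\sum_{\substack{(i_1,i_2)\in \{1,3\}^2\\ (i_1,i_2)\neq (1,1)}}\E_{X}\big[f_{i_1}(L_1(X))f_{i_2}(L_2(X))\big]
+\sum_{\substack{(i_1,i_2)\in [1,3]^2\\ 2\in\{i_1,i_2\}}}\E_{X}\big[f_{i_1}(L_1(X))f_{i_2}(L_2(X))\big],
\]
with $1$, $2^2-1=3$, and $3^2-2^2=5$ terms, respectively.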
If one of the $i_j$ is $2$, then since $d\geq m-2$, by Lemma~\ref{lem:gowerscount} we have
\[\left|\E_{X}\left[\prod_{j=1}^m f_{i_j}(L_j(X))\right]\right|\leq \min_{1\leq j\leq m} \|f_{i_j}\|_{U^{d+1}}\leq \|f_2\|_{U^{d+1}}\leq \eta(|\mathcal B|).
\]
We can choose $\eta$ later to make all of these terms sufficiently small. Our probability is thus at least
\[\sum_{(i_1,\dots,i_m)\in \{1,3\}^m}\E_{X}\left[\prod_{j=1}^m f_{i_j}(L_j(X))\right]-3^m \eta(|\mathcal B|).\]
To deal with the remaining terms, we restrict to counting within certain ``good'' atoms of the decomposition. Specifically, let $Z$ be a point such that $L_1(Z),\dots,L_m(Z)\in R_{\varepsilon,\zeta}$; such a $Z$ exists because, by hypothesis, there is a homomorphism from $N$ to $R_{\varepsilon,\zeta}$. Let $\beta_{i,j}=P_i(L_j(Z))$, and let $b_j=(\beta_{1,j},\dots,\beta_{C,j})$ be the atom of $\mathcal B$ containing $L_j(Z)$. We restrict to counting across tuples $X$ such that $L_j(X)\in b_j$ for all $j$. Since $f_1+f_3$ is always nonnegative,
\begin{align*}
& \sum_{(i_1,\dots,i_m)\in \{1,3\}^m}\E_{X}\left[\prod_{j=1}^m f_{i_j}(L_j(X))\right] \\
& \geq\sum_{(i_1,\dots,i_m)\in \{1,3\}^m}\E_{X}\left[\prod_{j=1}^m f_{i_j}(L_j(X))1_{[\mathcal B(L_j(X))=b_j]}\right].
\end{align*}
We next deal with the main term. Since $f_1\geq 0$, applying Theorem~\ref{thm:equidist} with uniformity parameter $\alpha(C)$ gives
\begin{align*}
& \E_{X}\left[\prod_{j=1}^m f_{1}(L_j(X)) 1_{[\mathcal B(L_j(X))=b_j]}\right] \\
&= {\bf {Pr}}_{X\in (\mathbb{F}^n)^\ell}\left[P_i(L_j(X)) = \beta_{i,j}~
\forall i \in [C],j \in [m] \right]\cdot\\
&\qquad \qquad \E_{X}\left[\prod_{j=1}^m f_{1}(L_j(X))\middle| \forall j\in[m], \mathcal B(L_j(X))=b_j \right] \\
&\geq \left(\frac{1}{K} - \alpha(C)\right) \zeta^m.
\end{align*}
Now we deal with the terms involving $f_3$, following the argument used in the corresponding part of the proof of Theorem~5.10 in \cite{VeryCountingMaybe}.
Take such a term $\E_{X}\left[\prod_{j=1}^m f_{i_j}(L_j(X))1_{[\mathcal B(L_j(X))=b_j]}\right]$, and suppose $i_k=3$. Without loss of generality (after relabeling the forms and applying a linear change of coordinates) we may assume $k=1$ and $L_1(x_1,\dots,x_\ell)=x_1$. Then, since $|f_1|,|f_3|\leq 1$, we have
\begin{align*}
&\left|\E_{X}\left[\prod_{j=1}^m f_{i_j}(L_j(X))1_{[\mathcal B(L_j(X))=b_j]}\right]\right| \leq \E_{X}\left[ \left|f_{3}(x_1)\right|\prod_{j=1}^m 1_{[\mathcal B(L_j(X))=b_j]}\right] \\
&=\E_{x_1}\left[\left|f_{3}(x_1)\right|1_{[\mathcal B(x_1)=b_1]}\E_{x_2,\dots,x_\ell}\left[ \prod_{j=1}^m 1_{[\mathcal B(L_j(X))=b_j]}\right]\right].
\end{align*}
By Cauchy-Schwarz,
\begin{align*}
& \left(\E_{X}\left[ \left|f_{3}(x_1)\right|\prod_{j=1}^m 1_{[\mathcal B(L_j(X))=b_j]}\right]\right)^2 \\
& \leq \E_{x_1}\left[\left|f_{3}(x_1)\right|^2 1_{[\mathcal B(x_1)=b_1]}\right]\E_{x_1}\left(\E_{x_2,\dots,x_\ell}\left[ \prod_{j=1}^m 1_{[\mathcal B(L_j(X))=b_j]}\right]\right)^2.\label{eqn:csterm} \tag{*}
\end{align*}
By Corollary~\ref{cor:atomSize}, ${\bf {Pr}}_{x_1}[\mathcal B(x_1)=b_1]\leq \frac{1}{\|\mathcal B\|}+\alpha(C)$, and $\alpha(C)=2^{-2dCm}\leq \frac{1}{\|\mathcal B\|}$. Thus by condition (1) in the definition of the reduced matroid,
\begin{align*}
& \E_{x_1}\left[\left|f_{3}(x_1)\right|^2 1_{[\mathcal B(x_1)=b_1]}\right]=\E_{x_1}\left[\left|f_{3}(x_1)\right|^2 \mid x_1\in b_1\right] {\bf {Pr}}_{x_1}[\mathcal B(x_1)=b_1] \\
& \leq \varepsilon^2 \left(\frac{1}{\|\mathcal B\|}+\alpha(C)\right) \leq \frac{2\varepsilon^2}{\|\mathcal B\|}.
\end{align*}
Let $Y=(y_2,\dots,y_\ell)\in(\mathbb{F}^n)^{\ell-1}$, so that $(x_1,Y)$ is another input to the linear forms $L_j$, with $L_1(x_1,Y)=L_1(X)=x_1$. The second term on the right-hand side of (\ref{eqn:csterm}) expands as
\begin{align*}
& \E_{x_1}\left(\E_{x_2,\dots,x_\ell}\left[ \prod_{j=1}^m 1_{[\mathcal B(L_j(X))=b_j]}\right]\right)^2
= \E_{x_1}\left(\E_{x_2,\dots,x_\ell}\left[ \prod_{\substack{i\in [C]\\ j\in [m]}} 1_{[P_i(L_j(X))=\beta_{i,j}]}\right]\right)^2 \\
& = \E_{x_1}\left(\E_{x_2,\dots,x_\ell}\left[ \prod_{\substack{i\in [C]\\ j\in [m]}} \frac{1}{2^{k_i+1}}\sum_{\lambda_{i,j}=0}^{2^{k_i+1}-1}\expo{\lambda_{i,j}(P_i(L_j(X))-\beta_{i,j})}\right]\right)^2 \\
& = \frac{1}{\|\mathcal B\|^{2m}}\E_{x_1}\left(\sum_{\substack{(\lambda_{i,j})\in\\ \prod_{i,j}[0,2^{k_i+1}-1]}}\expo{-\sum_{\substack{i\in [C]\\ j\in [m]}}\lambda_{i,j}\beta_{i,j}}\E_{x_2,\dots,x_\ell} \expo{\sum_{\substack{i\in[C]\\ j\in[m]}} \lambda_{i,j}P_i(L_j(X))}\right)^2 \\
& \leq \frac{1}{\|\mathcal B\|^{2m}}\sum_{\substack{(\lambda_{i,j}),(\tau_{i,j})\in\\ \prod_{i,j}[0,2^{k_i+1}-1]}}\left |\E_{\substack{X,Y}}\left[\expo{\sum_{\substack{i\in[C]\\ j\in[m]}} \lambda_{i,j}P_i(L_j(X))}\expo{\sum_{\substack{i\in[C]\\ j\in[m]}} \tau_{i,j}P_i(L_j(x_1,Y))}\right]\right|.
\end{align*}
To bound this expression, we will interpret it as an expectation of the form for which Theorem~\ref{thm:nearOrtho} gives upper bounds, for some system of linear forms $N'$ that we construct. Specifically, let $N'$ be a system of linear forms $\tilde L_1,\dots,\tilde L_{2m-1}$ on $2\ell-1$ variables, where $\tilde L_i(X,Y)=L_i(X)$ for $i\in\{1,\dots,m\}$ and $\tilde L_i(X,Y)=L_{i-m+1}(x_1,Y)$ for $i\in\{m+1,\dots,2m-1\}$. That is, $N'$ corresponds to a rank-$(2\ell-1)$ matroid consisting of the union of two copies of $N$ with $L_1=N_1$ as the only shared element.
\begin{lem}
\label{lem:equivConsistent}
For each $i$, $|\Phi_{d_i,k_i}(N')^\perp|=|\Phi_{d_i,k_i}(N)^\perp|^2$.
\end{lem}
\begin{proof}
The proof is essentially the same as that of Lemma~5.13 in \cite{VeryCountingMaybe}, except that a key step now uses relevant polynomials being homogeneous instead of relevant linear forms being affine.
Consider the map $\varphi: \Phi_{d_i,k_i}(N)^\perp \times \Phi_{d_i,k_i}(N)^\perp \to \Phi_{d_i,k_i}(N')^\perp$ given by
\[\varphi((\lambda_1,\dots,\lambda_m),(\tau_1,\dots,\tau_m))=(\lambda_1 + \tau_1 ,\lambda_2,\dots,\lambda_m, \tau_2,\dots,\tau_m).\]
If $\lambda=(\lambda_1,\dots,\lambda_m),\tau=(\tau_1,\dots,\tau_m)\in \Phi_{d_i,k_i}(N)^\perp$, then $\sum_{j=1}^m \lambda_j P(L_j(X)) = \sum_{j=1}^m \tau_j P(L_j(X)) = 0$ for all $P$ and $X$. So, for all $(X,Y)$,
\[\sum_{j=1}^{2m-1} \varphi(\lambda,\tau)_j P(\tilde{L}_j(X,Y))=\sum_{j=1}^m \lambda_j P(L_j(X)) + \sum_{j=1}^m \tau_j P(L_j(x_1,Y)) = 0.\]
Thus $\varphi$ indeed maps $\Phi_{d_i,k_i}(N)^\perp \times \Phi_{d_i,k_i}(N)^\perp$ to $\Phi_{d_i,k_i}(N')^\perp$. We claim that $\varphi$ is a bijection, which then implies the desired equality.
Suppose $\lambda\in \Phi_{d_i,k_i}(N)^\perp$. By the definition of a linear form, for each $j$ either $L_j(x_1,0,\dots,0)\equiv 0$ or $L_j(x_1,0,\dots,0)\equiv x_1$. Let $S$ be the set of $j$ such that $L_j(x_1,0,\dots,0)\equiv x_1$. Note that $1\in S$. Setting $x_2=\cdots=x_\ell=0$ and choosing $P,x_1$ such that $P$ is a linear polynomial with $P(x_1)=1,P(0)=0$ gives
\[0=\left(\sum_{j\in S} \lambda_j\right) P(x_1) + \left(\sum_{j\notin S} \lambda_j\right) P(0)=\sum_{j\in S} \lambda_j,\]
so $\sum_{j\in S} \lambda_j = 0$, meaning $\lambda_1=-\sum_{\substack{j\in S \\ j\neq 1}} \lambda_j$. Thus $\lambda$ is uniquely determined given $\lambda_2,\dots,\lambda_m$, and so both $\lambda$ and $\tau$ are uniquely determined given $\varphi(\lambda,\tau)$, meaning $\varphi$ is injective.
Now suppose $(\lambda_1,\dots,\lambda_m,\tau_2,\dots,\tau_m)\in \Phi_{d_i,k_i}(N')^\perp$, so $\sum_{j=1}^m \lambda_j Q(L_j(X))+\sum_{j=2}^m \tau_j Q(L_j(x_1,Y))=0$ for every $Q$ and $(X,Y)$. For convenience, let $\tau_1=\lambda_1$. Similarly to before, setting $x_2=\cdots=x_\ell=0$ gives
\[\sum_{j\in S} \lambda_j Q(x_1)+\sum_{j=2}^m \tau_j Q(L_j(x_1,Y))=0,\]
for any $x_1,Y$ and any homogeneous $Q$, while setting $y_2=\cdots=y_\ell=0$ gives
\[\sum_{j=2}^m \lambda_j Q(L_j(X))+\sum_{j\in S} \tau_j Q(x_1)=0,\]
for any $X$ and any homogeneous $Q$. Here we have used the fact that $Q(0)=0$.
In particular, setting $x_2=\cdots=x_\ell=y_2=\cdots=y_\ell=0$ and setting $Q,x_1$ such that $Q$ is a linear polynomial with $Q(x_1)=1,Q(0)=0$ gives $0 = \sum_{j\in S} \lambda_j + \sum_{j\in S} \tau_j - \lambda_1 = \sum_{\substack{j\in S \\ j\neq 1}} (\lambda_j+\tau_j) + \lambda_1$. So, fixing $X$,
\begin{align*}
& 0=\sum_{j=2}^m \lambda_j Q(L_j(X))+\sum_{j\in S} \tau_j Q(x_1) = \sum_{j=2}^m \lambda_j Q(L_j(X)) - \sum_{j\in S} \lambda_j Q(x_1) + \lambda_1 Q(x_1) \\
& = \left( - \sum_{\substack{j\in S\\ j\neq 1}} \lambda_j\right) Q(x_1) + \sum_{j=2}^m \lambda_j Q(L_j(X)).
\end{align*}
Thus $\left(- \sum_{\substack{j\in S\\ j\neq 1}} \lambda_j,\lambda_2,\dots,\lambda_m\right)\in \Phi_{d_i,k_i}(N)^\perp$, and $\left(- \sum_{\substack{j\in S\\ j\neq 1}} \tau_j,\tau_2,\dots,\tau_m\right)\in \Phi_{d_i,k_i}(N)^\perp$ likewise. But
\[
\varphi\left(\left(- \sum_{\substack{j\in S\\ j\neq 1}} \lambda_j,\lambda_2,\dots,\lambda_m\right),\left(- \sum_{\substack{j\in S\\ j\neq 1}} \tau_j,\tau_2,\dots,\tau_m\right)\right)=(\lambda_1,\dots,\lambda_m,\tau_2,\dots,\tau_m).
\]
So $\varphi$ is surjective, and thus bijective as desired.
\end{proof}
Now, for each $i\in [C]$ write $\lambda_i=(\lambda_{i,1},\dots,\lambda_{i,m})$ and $\tau_i=(\tau_{i,1},\dots,\tau_{i,m})$, and let $\mu_i = \varphi(\lambda_i,\tau_i)$. We have
\begin{align*}
& \sum_{\substack{(\lambda_{i,j}),(\tau_{i,j})\in\\ \prod_{i,j}[0,2^{k_i+1}-1]}}\left |\E_{\substack{X,Y}}\left[\expo{\sum_{\substack{i\in[C]\\ j\in[m]}} \lambda_{i,j}P_i(L_j(X))}\expo{\sum_{\substack{i\in[C]\\ j\in[m]}} \tau_{i,j}P_i(L_j(x_1,Y))}\right]\right|\\
& = \left(\prod_{i=1}^C 2^{k_i+1} \right)\sum_{\substack{(\mu_{i,j})\in\\ \prod_{i\in [C], j\in [2m-1]}[0,2^{k_i+1}-1]}}\left |\E_{X,Y}\left[\expo{\sum_{\substack{i\in[C]\\ j\in[2m-1]}} \mu_{i,j}P_i(\tilde{L}_j(X,Y))}\right]\right|\\
& \leq \left(\prod_{i=1}^C 2^{k_i+1}\right)^{2m} \alpha(C) + \left(\prod_{i=1}^C 2^{k_i+1}\right)\prod_{i=1}^C |\Phi_{d_i,k_i}(N')^\perp| \\
& = \|\mathcal B\|^{2m}\left(\alpha(C)+\frac{\|\mathcal B\|}{K^2}\right),
\end{align*}
\noindent where $K=\prod_{i=1}^C |\Phi_{d_i,k_i}(N)|$, and the third line is an application of Theorem~\ref{thm:nearOrtho}.
Thus,
\begin{align*}
& \left(\E_{X}\left[ \left|f_{3}(x_1)\right|\prod_{j=1}^m 1_{[\mathcal B(L_j(X))=b_j]}\right]\right)^2 \leq \varepsilon^2 \left(\frac{2}{\|\mathcal B\|}\right) \left(\alpha(C)+\frac{\|\mathcal B\|}{K^2}\right)\\
& \leq 2\varepsilon^2 \left(\alpha(C)+\frac{1}{K^2}\right),
\end{align*}
so each term in our original sum that involves $f_3$ has magnitude at most
\[ \sqrt{2\varepsilon^2 \left(\alpha(C)+\frac{1}{K^2}\right)}\leq \varepsilon \sqrt{2\left(\alpha(C)+\frac{1}{K^2}\right)}.\]
Finally, we bring everything together. Note that $K= \prod_{i=1}^C |\Phi_{d_i,k_i}(N)|\leq \|\mathcal B\|^m \leq 2^{dCm}$, so $\alpha(C)\leq \frac{1}{K^2}$. Setting $\eta(C) = \left(\frac{\zeta}{3}\right)^m 2^{-dCm-3}$, $\varepsilon_0=\frac{1}{16}\left(\frac{\zeta}{2}\right)^m$, we have
\begin{align*}
& \E_{X}\left[\prod_{j=1}^m f(L_j(X))\right] \geq \left(\frac{1}{K} - \alpha(C)\right) \zeta^m - 2^{m+\frac{1}{2}} \varepsilon \sqrt{\alpha(C)+\frac{1}{K^2}} - 3^m \eta(|\mathcal B|) \\
& \geq \frac{1}{2K} \zeta^m - 2^{m+1} \varepsilon \frac{1}{K} - \frac{1}{8K}\zeta^m \\
& \geq \zeta^m \frac{1}{4K} \geq \frac{\zeta^m}{4} \frac{1}{\|\mathcal B\|^m}.
\end{align*}
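Here the choice of $\eta$ is exactly what makes the Gowers term negligible: since $K\leq 2^{dCm}$,
\[
3^m\,\eta(C)=3^m\left(\frac{\zeta}{3}\right)^m 2^{-dCm-3}=\frac{\zeta^m}{8\cdot 2^{dCm}}\leq \frac{\zeta^m}{8K},
\]
which is absorbed in the second inequality above.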
So, there are at least $\frac{\zeta^m}{4} \frac{(2^n)^\ell}{\|\mathcal B\|^m}$ linear maps $\iota$ such that $\iota(N)\subseteq M$. At most $\ell (2^n)^{\ell-1}2^{\ell-1}$ of these maps fail to be injective. When $n$ is sufficiently large compared to $\ell$ and $C$, this is negligible compared to the count above, so for a suitable choice of $\nu$ the number of injections $\iota$ taking $N$ into $M$ is at least $\frac{\zeta^m}{5} \frac{(2^n)^\ell}{\|\mathcal B\|^m}$. Since $N$ has at most $2^{\ell^2}$ automorphisms, there are at least $\frac{\zeta^m}{5\cdot 2^{\ell^2}} \frac{(2^n)^\ell}{\|\mathcal B\|^m}$ distinct copies of $N$ in $M$. Taking $\beta=\frac{\zeta^m}{5\cdot 2^{\ell^2}}$ concludes the proof of the Counting Lemma.
\end{proof}
\section{Basic Applications} \label{sec:apply}
\subsection{The Removal Lemma}
As a first application of our Counting Lemma, we give a simple proof of a removal lemma for matroids. This removal lemma is a special case of a removal lemma for linear equations that appears in \cite{hyperRemoval} and \cite{MatroidRemoval}, in both cases proven using a hypergraph removal lemma.
We say that a matroid $M\subseteq \mathbb{F}^{r(M)}\setminus \{0\}$ is \emph{$\varepsilon$-far from being $N$-free} if for any matroid $M'\subseteq \mathbb{F}^{r(M)}\setminus \{0\}$ that does not contain a copy of $N$, $\E_x[|1_M-1_{M'}|]\geq \varepsilon$.
\begin{thm}[Removal Lemma]
For any $\zeta>0$ and matroid $N$ there is an $\alpha>0$ such that if a matroid $M$ of sufficiently high rank $r(M)$ is $\zeta$-far from being $N$-free, then $M$ contains at least $\alpha (2^{r(M)})^{r(N)}$ copies of $N$.
\end{thm}
\begin{proof}
Let $\zeta'=\frac{\zeta}{4}$, and let $d=|N|-2$. By the Counting Lemma, there exist $\beta, \eta, \varepsilon, r, \nu$ such that if the reduced matroid $R=R_{\varepsilon,\zeta'}$ given by an $(\eta,r,d)$-regular partition of a matroid $M$ with corresponding factor $\mathcal B$ contains a copy of $N$ and $r(M)\geq \nu(|\mathcal B|)$, then $M$ contains at least $\beta \frac{(2^{r(M)})^{r(N)}}{\|\mathcal B\|^{|N|}}$ copies of $N$. Fix such a choice of $\beta,\eta,\varepsilon,r,\nu$.
Let $M$ be a matroid of rank $n$. Suppose that $M$ is $\zeta$-far from being $N$-free.
By Theorem~\ref{thm:decomp}, we have an $(\varepsilon \zeta'^{1/2},\eta,r,d)$-regular partition of $M$, $1_M=f_1+f_2+f_3$, whose corresponding factor $\mathcal B$ has complexity at most $C$, where $C$ depends only on $\zeta',\varepsilon, \eta,r,d$, i.e. only on $\zeta$ and $N$. Let $M'=M\cap R_{\varepsilon,\zeta'}$.
Every element of $M\setminus M'$ is either
\begin{itemize}
\item[(i)] in an atom $b$ of $\mathcal B$ such that $\E[|f_3(x)|^2\mid x\in b]>\varepsilon^2$, or
\item[(ii)] in an atom $b$ of $\mathcal B$ such that $\E[f(x)\mid x\in b]<\zeta'$.
\end{itemize}
Let $S$ be the subset of $\mathbb{F}^{n}$ contained in atoms $b$ of $\mathcal B$ such that $\E[|f_3(x)|^2\mid x\in b]>\varepsilon^2$. Then by condition (iv) of Theorem~\ref{thm:decomp},
\[ \varepsilon^2 \zeta' \geq \|f_3\|_2^2=\E_x[|f_3(x)|^2] \geq \frac{|S|}{2^n} \varepsilon^2, \]
so $|S|\leq \zeta' 2^n$. Likewise, let $T$ be the subset of $\mathbb{F}^n$ contained in atoms $b$ of $\mathcal B$ such that $\E[f(x)\mid x\in b]<\zeta'$. Then $|T\cap M|< \zeta' |T|\leq \zeta' 2^n$.
So, $\E_x[|1_M-1_{M'}|]\leq \frac{|S|+|T\cap M|}{2^n} < 2\zeta'=\frac{\zeta}{2}$. Since $M$ is $\zeta$-far from being $N$-free, $M'$ cannot be $N$-free, so $M'$ contains a copy of $N$. If we take $n\geq \nu(C)$, then the Counting Lemma argument above yields that $M$ contains at least $\beta \frac{(2^{n})^{r(N)}}{\|\mathcal B\|^{|N|}}\geq \frac{\beta}{2^{dC|N|}}(2^{n})^{r(N)}$ copies of $N$, so we are done by taking $\alpha = \frac{\beta}{2^{dC|N|}}$.
\end{proof}
\subsection{The Doubling Lemma and the Geometric Erd\texorpdfstring{\H{o}}{o}s-Stone Theorem}
We extend the argument in the proof of the removal lemma to prove a simple result we will call the \emph{doubling lemma}. To state it, we define the double of a matroid.
\begin{defn}
Let $N$ be a matroid of rank $\ell$. Define its \emph{double} $2N$ to be the matroid of rank $\ell+1$ consisting of the union of $N$ with $\{x+v \mid x\in N\}$, where $v$ is any element not contained in the span of the elements of $N$. So, for example, $2BB(n,c)=BB(n+1,c)$. The matroid $2^k N$ is the result of starting with $N$ and doubling $k$ times.
\end{defn}
The matroid $2^k N$ corresponds to the result of replacing each element of $N$ with an affine cube of dimension $k$. Note that if there is a homomorphism from $N$ to $R$, then for any $k\geq 1$ there is a homomorphism from $2^k N$ to $R$, because we can first contract each of the affine hypercubes to a point.
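As a quick sanity check (a toy computation, not used in the proofs; the helper functions are ours), one can verify over $\mathbb{F}_2$ that doubling doubles the number of elements and raises the rank by one, taking $v$ to be a new basis vector:

```python
# Elements of F_2^ell are bitmask ints; e_{ell+1} is the bit 1 << ell.
def double(N, ell):
    v = 1 << ell  # a new basis vector, outside the span of N
    return N | {x ^ v for x in N}

def f2_rank(vectors):
    # Gaussian elimination over F_2: maintain a basis keyed by pivot bit.
    basis = {}
    for w in vectors:
        while w:
            p = w.bit_length() - 1
            if p not in basis:
                basis[p] = w
                break
            w ^= basis[p]
    return len(basis)

N = {0b01, 0b10, 0b11}       # PG(1,2), rank 2
N2 = double(N, 2)            # its double
print(len(N2), f2_rank(N2))  # 6 3: twice the elements, rank 2 + 1
```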
\begin{lem}[Doubling Lemma]
For any $\alpha>0$ and matroid $N$ there exists $\alpha'>0$ such that for sufficiently large $n$, if a matroid $M\subseteq \mathbb{F}^n\setminus \{0\}$ contains at least $\alpha (2^n)^{r(N)}$ copies of $N$, then it contains at least $\alpha'(2^n)^{r(N)+1}$ copies of $2N$.
\end{lem}
\begin{proof}
Let $M$ be a matroid of rank $n$, and suppose that $M$ contains at least $\alpha (2^n)^{r(N)}$ copies of $N$. Any element $v\in M$ can be contained in at most $|N|(2^n)^{r(N)-1}$ copies of $N$, so if $M'$ is a subset of $M$ with $|M'| < \frac{\alpha}{|N|}2^n$, then $M\setminus M'$ contains a copy of $N$. So $M$ is $\frac{\alpha}{|N|}$-far from being $N$-free. Set $\zeta\coloneqq \frac{\alpha}{|N|}$.
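Explicitly, deleting any such $M'$ destroys at most
\[
|M'|\cdot |N|\,(2^n)^{r(N)-1} < \frac{\alpha}{|N|}\,2^n\cdot |N|\,(2^n)^{r(N)-1}=\alpha\,(2^n)^{r(N)}
\]
copies of $N$, which is fewer than the total, so a copy survives in $M\setminus M'$.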
Let $\zeta'=\frac{\zeta}{4}$, $d=2|N|-2$. By the Counting Lemma, there exist $\beta, \eta, \varepsilon, r, \nu$ such that if there exists a homomorphism from $2N$ to the reduced matroid $R=R_{\varepsilon,\zeta'}$ given by an $(\eta,r,d)$-regular partition of a matroid $M$ with corresponding factor $\mathcal B$, and $r(M)\geq \nu(|\mathcal B|)$, then $M$ contains at least $\beta \frac{(2^{r(M)})^{r(N)+1}}{\|\mathcal B\|^{|2N|}}$ copies of $2N$. Fix such a choice of $\beta,\eta,\varepsilon,r,\nu$.
By Theorem~\ref{thm:decomp}, we have an $(\varepsilon \zeta'^{1/2},\eta,r,d)$-regular partition of $M$, $1_M=f_1+f_2+f_3$, whose corresponding factor $\mathcal B$ has complexity at most $C$, where $C$ depends only on $\zeta',\varepsilon, \eta,r,d$, i.e. only on $\zeta$ and $N$. Let $M'=M\cap R_{\varepsilon,\zeta'}$.
By the same argument as in the proof of the removal lemma, $M'$ contains a copy of $N$. Thus there is a homomorphism from $2N$ to $R_{\varepsilon,\zeta'}$.
So, if we take $n\geq \nu(C)$, by the Counting Lemma argument above, $M$ contains at least $\beta \frac{(2^{n})^{r(N)+1}}{\|\mathcal B\|^{|2N|}}\geq \frac{\beta}{2^{2dC|N|}}(2^{n})^{r(N)+1}$ copies of $2N$, so we are done by taking $\alpha' = \frac{\beta}{2^{2dC|N|}}$.
\end{proof}
\begin{rem}
We could also have directly gotten a (nondegenerate) copy of $2N$ in $R_{\varepsilon,\zeta'}$ by using an extension of the Chevalley-Warning Theorem to prime power moduli, such as Theorem~B in \cite{powerCW}.
\end{rem}
One particular special case of this result is of interest.
\begin{cor}\label{cor:doublePG}
For any $\alpha>0$ and positive integers $s,t$ there exists $\alpha'>0$ such that for sufficiently large $n$, if a matroid $M\subseteq \mathbb{F}^n\setminus \{0\}$ contains at least $\alpha (2^n)^{s}$ copies of $PG(s-1,2)$, then it contains at least $\alpha'(2^n)^{s+t}$ copies of $BB(s+t,s)$.
\end{cor}
Corollary~\ref{cor:doublePG} should be compared with the following analogous graph-theoretic lemma, used in the proof of results on the chromatic threshold in \cite{chromThresh}.
\begin{lem}[{\cite[Lemma~7]{doublingKn}}]
For every $r, s$ and $\varepsilon > 0$ there exists $\delta=\delta_{r,s}(\varepsilon) > 0$ such that the following holds for sufficiently large $n$. If the $n$-vertex graph $G$ contains at least $\varepsilon n^r$ copies of $K_r$, then $G$ contains at least $\delta_{r,s}(\varepsilon)n^{rs}$ copies of $K_r(s)$.
\end{lem}
Along the same lines, our Counting Lemma gives a short proof of the Geometric Erd\H{o}s-Stone theorem, Theorem~\ref{thm:gES}, using the Bose-Burton theorem, Theorem~\ref{thm:BB}, analogous to the proof of the Erd\H{o}s-Stone theorem using Turán's theorem.
\begin{proof}[Proof of Theorem~\ref{thm:gES}]
Let $r(M)=n$, $\chi(N)=c$, and $r(N)=\ell$, and suppose $|M|\geq (1-2^{1-c}+\zeta)2^n$. It suffices to show that if $n$ is sufficiently large, $M$ must contain a copy of $BB(\ell,c)$, so without loss of generality $N=BB(\ell,c)$.
Let $\zeta'=\frac{\zeta}{4}$, and let $d=|N|-2$. By the Counting Lemma, there exist $\beta, \eta, \varepsilon, r, \nu$ such that if there is a homomorphism from $N$ to the reduced matroid $R=R_{\varepsilon,\zeta'}$ given by an $(\eta,r,d)$-regular partition of a matroid $M$ with corresponding factor $\mathcal B$, and $r(M)\geq \nu(|\mathcal B|)$, then $M$ contains at least $\beta \frac{(2^{n})^{\ell}}{\|\mathcal B\|^{|N|}}>0$ copies of $N$. Fix such a choice of $\beta,\eta,\varepsilon,r,\nu$.
By Theorem~\ref{thm:decomp}, we have an $(\varepsilon \zeta'^{1/2},\eta,r,d)$-regular partition of $M$, $1_M=f_1+f_2+f_3$, whose corresponding factor $\mathcal B$ has complexity at most $C$, where $C$ depends only on $\zeta',\varepsilon, \eta,r,d$, i.e. only on $\zeta$ and $N$. Let $M'=M\cap R_{\varepsilon,\zeta'}$. As in the proof of the Removal Lemma, we see that $\E_x[|1_M-1_{M'}|] < \frac{\zeta}{2},$ so that $|M'|\geq (1-2^{1-c}+\frac{\zeta}{2})2^n$. By the Bose-Burton theorem, $M'\subseteq R_{\varepsilon,\zeta'}$ contains a copy of $PG(c-1,2)$. So there is a homomorphism from $N=2^{\ell-c}PG(c-1,2)$ to $R_{\varepsilon,\zeta'}$, and thus the Counting Lemma gives us at least one copy of $N$ in $M$, as desired.
\end{proof}
\section{Applications to the Critical Threshold Problem} \label{sec:work}
The strong decomposition theorem and our Counting Lemma allow us to extend the arguments using Green's regularity lemma in \cite{main} and \cite{geelenOdd} to address more general cases of Conjecture~\ref{conj:main}. To illustrate the approach we take, we first extend the argument in \cite{main} to the case of $N_{\ell,2,1}$. This case is simple enough that it suffices to only use Green's regularity lemma, but we phrase it in terms of the $d=1$ case of the strong decomposition theorem to highlight the similarity with our approach to a more general case, in the next section.
\subsection{Verifying the Conjecture for \texorpdfstring{$N_{\ell,2,1}$}{N\_(l,2,1)}}
\label{sec:vernm1}
\begin{prop}
\label{prop:nm1}
$\theta(N_{\ell,2,1})=\frac{1}{4}.$
\end{prop}
\begin{proof}
Fix $\delta>0$.
Let $\zeta=\frac{\delta}{2}$, and let $d=1$. By the Counting Lemma, there exist $\beta, \eta, \varepsilon, r, \nu$ such that if the reduced matroid $R=R_{\varepsilon,\zeta}$ given by an $(\eta,r,d)$-regular partition of a matroid $M$ with corresponding factor $\mathcal B$ contains a copy of $PG(1,2)$ and $r(M)\geq \nu(|\mathcal B|)$, then $M$ contains at least $\beta \frac{(2^{r(M)})^{2}}{\|\mathcal B\|^{3}}$ copies of $PG(1,2)$. Fix such a choice of $\beta,\eta,\varepsilon,r,\nu$.
Let $M$ be a matroid of sufficiently large rank $n$ with $|M|\geq (\frac{1}{4}+\delta)2^n$. By Theorem~\ref{thm:decomp}, we have an $(\varepsilon \zeta^{1/2},\eta,r,1)$-regular partition of $M$, $1_M=f_1+f_2+f_3$, whose corresponding factor $\mathcal B$ has complexity at most $C$, where $C$ depends only on $\zeta,\varepsilon,\eta,r,d$, i.e. only on $\delta$; in particular, we may take $n\geq \nu(C)\geq \nu(|\mathcal B|)$. Let $R=R_{\varepsilon,\zeta}$ and let $D=R_{\varepsilon,\frac{1}{2}+\zeta}$.
We have two cases, depending on whether $D$ is nonempty.
\emph{Case 1}: $D$ is nonempty; that is, for some atom $b$ of $\mathcal B$, $|b\cap M|\geq (\frac{1}{2}+\zeta)|b|$ and $\E[|f_3(x)|^2\mid x\in b]\leq \varepsilon^2$. Since $\mathcal B$ is a factor defined by linear polynomials, if $h$ is any element of the atom $b_0$ containing $0$, then shifting by $h$ preserves each atom. If $\chi(M)>C$, then there exists such an element $h$ in $M\cap b_0$. Consider the submatroid $M_h=\{w\in M \mid w+h\in M\}$. Since $|M\cap b|\geq (\frac{1}{2}+\zeta)|b|$, we get $|M_h\cap b|\geq \zeta |b|\geq \frac{\zeta}{\|\mathcal B\|} 2^n$. By the Density Hales-Jewett Theorem, for sufficiently large $n$, $M_h$ contains a copy $A$ of $AG(\ell-1,2)$. Then $A\cup (A+h)\cup \{h\}$ contains a copy of $N_{\ell,2,1}$. So, either $\chi(M)\leq C$ or $M$ contains a copy of $N_{\ell,2,1}$, as desired.
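The density transfer from $M\cap b$ to $M_h\cap b$ above is inclusion-exclusion; the following toy sketch (our own illustration, with the atom taken to be all of $\mathbb{F}_2^6$ and a random dense $M$) checks the bound $|M_h\cap b|\geq 2|M\cap b|-|b|$:

```python
import random

random.seed(0)
n = 6
b = list(range(2 ** n))  # toy: take the atom b to be all of F_2^n
# a random M of density 3/4 > 1/2 inside b
M = set(random.sample(b, 3 * 2 ** n // 4))
h = random.choice(sorted(M))  # over F_2, the shift w + h is w XOR h

# M_h = {w in M : w + h in M}; since M and M + h both sit inside b,
# inclusion-exclusion gives |M_h| = |M ∩ (M + h)| >= 2|M| - |b|.
M_h = {w for w in M if (w ^ h) in M}
print(len(M_h) >= 2 * len(M) - len(b))  # True
```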
\emph{Case 2}: $D$ is empty.
Then for each atom $b$ of $\mathcal B$, either $\E[|f_3(x)|^2\mid x\in b]> \varepsilon^2$ or $|b\cap M| < (\frac{1}{2}+\zeta)|b|$. Since $\|f_3\|_{2}\leq \varepsilon\zeta^{1/2}$, less than a $\zeta$ fraction of $\mathbb{F}^n$ lies in atoms of the former kind. We can use this to give a lower bound on the size of $R$. Indeed, we have
\[(\frac{1}{4}+\delta)2^n\leq |M|< (\zeta 2^n + |D|) \cdot 1 + (|R|-|D|)\cdot (\frac{1}{2}+\zeta) + (2^n-|R|)\cdot \zeta,\]
so $|R|> (\frac{1}{2}+2(\delta-2\zeta)) 2^{n}=\frac{1}{2}2^n$, using $\zeta=\frac{\delta}{2}$.
By Theorem~\ref{thm:BB}, $R$ contains a copy of $PG(1,2)$. So, by the Counting Lemma, $M$ contains at least $\beta \frac{(2^{n})^{2}}{\|\mathcal B\|^{3}}$ copies of $PG(1,2)$. Then some element $h$ is part of at least $\beta \frac{2^{n}}{\|\mathcal B\|^{3}}$ copies of $PG(1,2)$, so $M_h$, as defined in Case 1, has density at least $\frac{\beta}{\|\mathcal B\|^{3}}$. Applying the Density Hales-Jewett Theorem again gives a copy of $AG(\ell-1,2)$ in $M_h$, and thus a copy of $N_{\ell,2,1}$ in $M$, as desired.
\end{proof}
\subsection{Conditional Result on \texorpdfstring{$N_{\ell,c,1}$}{N\_(l,c,1)}}
The case $c>2$ is more difficult to address because using the Counting Lemma now requires polynomial factors of higher degrees, with which the construction of $M_h$ from before does not interact in as simple a manner. To make progress on this case, we will need to make additional assumptions on the polynomial factors that we obtain.
Here, we will illustrate our techniques by verifying Conjecture~\ref{conj:main} in the case where $N=N_{\ell,c,1}$, assuming the following conjecture on the regularity of factors formed by intersecting two shifts of a given polynomial factor.
\begin{conj}
\label{conj:polydifreg}
For any nondecreasing function $r':\mathbb{Z}_{\geq 0}\to \mathbb{Z}_{\geq 0}$ and parameters $\delta,\eta,r,d$, there exists a constant $K$ such that the polynomial factor $\mathcal B$ of $V=\mathbb{F}^n$ given by Theorem~\ref{thm:decomp} can be chosen to satisfy the following property. Let $\mathcal B$ be defined by polynomials $P_1,\dots,P_C$, let $\Delta_h P_i(x)=P_i(x+h)-P_i(x)-P_i(h)$ for $1\leq i\leq C$, and let $S$ be the set of values $h\in V$ for which the factor defined by polynomials $P_1,\dots,P_C,\Delta_h P_1, \dots, \Delta_h P_C$ is $r'$-regular. Then $S$ contains a subspace of $V$ of codimension at most $K$.
\end{conj}
We remark that we are not confident in the truth of this conjecture. However, we believe that the arguments that follow are useful in demonstrating some new techniques that seem promising for future work on the critical threshold problem.
\begin{thm}
\label{thm:conditional}
Assuming Conjecture~\ref{conj:polydifreg} holds, $\theta(N_{\ell,c,1})=1-3\cdot 2^{-c}$ for all $c\geq 2$.
\end{thm}
\begin{proof}
Assume $c\geq 3$, since the case $c=2$ is Proposition~\ref{prop:nm1}. Fix $\delta>0$.
Let $N=N_{\ell,c,1}$, and let $N^*=BB(\ell-1,c-1)$, so $N=N^*\cup (N^*+v)\cup \{v\}$ for some element $v$. Let $\zeta=\frac{\delta}{2}$ and let $d=|N|-2$. By the Counting Lemma, there exist $\beta_1, \eta_1, \varepsilon_1, r_1, \nu_1$ such that if the reduced matroid $R=R_{\varepsilon_1,\zeta}$ given by an $(\eta_1,r_1,d)$-regular partition of a matroid $M$ with corresponding factor $\mathcal B$ contains a copy of $N$ and $r(M)\geq \nu_1(|\mathcal B|)$, then $M$ contains at least $\beta_1 \frac{(2^{r(M)})^{r(N)}}{\|\mathcal B\|^{|N|}}$ copies of $N$. Let $\beta_2,\eta_2,\varepsilon_2,r_2,\nu_2$ be parameters we will pick later, and define $\beta=\min(\beta_1,\beta_2)$, $\eta(C)=\min(\eta_1(C),\eta_2(C))$, $\varepsilon=\min(\varepsilon_1,\varepsilon_2)$, $r(C)=\max(r_1(C),r_2(C))$, $\nu(C)=\max(\nu_1(C),\nu_2(C))$. Let $\varepsilon'=\frac{1}{2}\varepsilon \zeta^{1/2}$.
Let $M$ be a matroid of rank $n\geq \nu(|\mathcal B|)$ with $|M|\geq (1-3\cdot 2^{-c}+\delta)2^n$. By Theorem~\ref{thm:decomp}, we have a $(\frac{1}{2}\varepsilon'\zeta^{1/2},\eta,r,d)$-regular partition of $M$, $1_M=f_1+f_2+f_3$, whose corresponding factor $\mathcal B$ has complexity at most $C$, where $C$ depends only on $\zeta,\varepsilon,\eta,r,d$, i.e., depends only on $\delta$ and $|N|$. Assuming Conjecture~\ref{conj:polydifreg}, for some $K,r_2$ depending only on $N$ and $d$, we can pick $\mathcal B$ such that the set $S$ of elements $h$ for which $\mathcal B_h$ is $r_2'$-regular contains a subspace $W$ of $V=\mathbb{F}^n$ of codimension at most $K$ as long as $\mathcal B$ is $r_2$-regular, where $r_2'$ is another parameter to be determined later. Let $R=R_{\varepsilon',\zeta}$ and let $D=R_{\varepsilon',\frac{1}{2}+\zeta}$.
We have two cases, depending on the density of $D$.
\emph{Case 1}: $|D|\leq (1-2^{2-c}+\zeta)2^n$.
For each atom $b$ of $\mathcal B$, either $\E[|f_3(x)|^2\mid x\in b]> \varepsilon'^2$, $b\subseteq D$, or $|b\cap M| < (\frac{1}{2}+\zeta)|b|$. Since $\|f_3\|_{L^2}\leq \frac{1}{2}\varepsilon'\zeta^{1/2}$, the atoms $b$ for which the first alternative holds contain in total less than a fraction $\frac{\zeta}{4}$ of the elements of $\mathbb{F}^n$. We can use this to give a lower bound on the size of $R$. Indeed, we have
\[(1-3\cdot 2^{-c}+\delta)2^n\leq |M|< (\frac{\zeta}{4} 2^n + |D|) \cdot 1 + (|R|-|D|)\cdot (\frac{1}{2}+\zeta) + (2^n-|R|)\cdot \zeta,\]
so
\begin{align*}
|R| &> 2\left((1-3\cdot 2^{-c}+\delta)2^n - |D|\left(\frac{1}{2}-\zeta\right)-\frac{5}{4}\zeta 2^n\right) \\
&\geq \left(1-2^{1-c}+2(\delta -\frac{5}{4}\zeta)-\zeta + 2\zeta(1-2^{2-c}+\zeta)\right)2^n\\
&\geq (1-2^{1-c}+\zeta)2^n.
\end{align*}
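For the last inequality, substituting $\zeta=\frac{\delta}{2}$ into the surplus term gives
\[
2\Big(\delta-\frac{5}{4}\zeta\Big)-\zeta+2\zeta\big(1-2^{2-c}+\zeta\big)
=\Big(\frac{5}{4}-2^{2-c}\Big)\delta+\frac{\delta^2}{2}\;\geq\;\frac{\delta}{2}=\zeta,
\]
since $2^{2-c}\leq\frac{1}{2}$ for $c\geq 3$.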
By the Bose-Burton Theorem, $R$ contains a copy of $PG(c-1,2)$, and thus there exists a homomorphism from $2^{\ell-c} PG(c-1,2)$ to $R$. Since $N$ is a submatroid of $2^{\ell-c} PG(c-1,2)$, there is a homomorphism from $N$ to $R$. So, since $R\subseteq R_{\varepsilon,\zeta}$, by the Counting Lemma, $M$ contains a positive number of copies of $N$, as desired.
\emph{Case 2}: $|D|> (1-2^{2-c}+\zeta)2^n$.
We will introduce a few new technical tools to deal with this case.
Assume without loss of generality that $|\mathcal B|=C$. Given an element $h\in V$, let $\mathcal B_h$ be the factor defined by the polynomials $P_1,\dots,P_C,\Delta_h P_1,\dots,\Delta_h P_C$. Note that the indicator function for $D\cap (D+h)$ is constant on each atom of $\mathcal B_h$.
We represent $N^*$ as a system of linear forms on $\ell-1$ variables, $\{L_1,\dots,L_m\}$, where without loss of generality $\{L_1,\dots,L_{2^{c-1}-1}\}$ forms a copy of $PG(c-1,2)$. Let $M_h=\{w\in M \mid w+h\in M\}$. For an appropriately chosen $h$, we will derive a lower bound for the number of copies of $N^*$ contained in $M_h$. If $h\in M$, we will then end up with a copy of $N$ in $M$.
Let $g(x)=f(x)f(x+h)$ be the indicator function for $M_h$, so the expression we wish to give a lower bound for is
\[\E_{X\in (\mathbb{F}^n)^{\ell-1}}\left[ \prod_{j=1}^m f(L_j(X))f(L_j(X)+h)\right].\]
Note that
\begin{align*}
f(x)f(x+h)&=(1-f(x))(1-f(x+h))+(f(x)+f(x+h)-1) \\
&= ((1-f(x))(1-f(x+h))+f_1(x)+f_1(x+h)-1) \\
& + (f_2(x)+f_2(x+h))+(f_3(x)+f_3(x+h)).
\end{align*}
Let $g_1(x)=(1-f(x))(1-f(x+h))+f_1(x)+f_1(x+h)-1$, $g_2(x)=f_2(x)+f_2(x+h)$, and $g_3(x)=f_3(x)+f_3(x+h)$. So, $g(x)=g_1(x)+g_2(x)+g_3(x)$.
Similarly to the approach in the proof of the Counting Lemma, we will restrict to counting within certain ``good'' atoms of $\mathcal B_h$. Specifically, suppose we have a point $Y\in (\mathbb{F}^n)^{\ell-1}$ such that $L_1(Y),\dots,L_m(Y)$ are in atoms $\tilde b_1,\dots,\tilde b_m$ of $\mathcal B_h$ contained in $D\cap (D+h)$ such that $\E[|f_3(x)|^2 \mid x\in \tilde b_j],\E[|f_3(x+h)|^2 \mid x\in \tilde b_j]$ are at most $\varepsilon^2$ for $1\leq j\leq m$. Then
\begin{align*}
& \E_{X}\left[ \prod_{j=1}^m f(L_j(X))f(L_j(X)+h)\right]\geq \E_{X}\left[ \prod_{j=1}^m f(L_j(X))f(L_j(X)+h) 1_{[\mathcal B_h(L_j(X))=\tilde b_j]}\right]\\
&= \sum_{(i_1,\dots,i_m)\in [1,3]^m}\E_X \left[ \prod_{j=1}^m g_{i_j}(L_j(X)) 1_{[\mathcal B_h(L_j(X))=\tilde b_j]} \right].
\end{align*}
To handle the terms where $i_j=2$ for some $j$, we use the following lemma.
\begin{lem}
\label{lem:gowerspoly}
Let $s\geq 1$. For any (nonclassical) polynomial $P$ of degree at most $s$, constant $\beta \in \mathbb{T}$, and function $g:\mathbb{F}^n \rightarrow \mathbb{C}$, we have
\[\|g1_{P(x)=\beta}\|_{U^{s+1}}\leq \|g\|_{U^{s+1}}.\]
\end{lem}
\begin{proof}
Let $P$ have depth $k$. We have
\[g(x)1_{P(x)=\beta}=\frac{1}{2^{k+1}}\sum_{\lambda=0}^{2^{k+1}-1}g(x)\expo{\lambda(P(x)-\beta)}.\]
Since $s+1\geq 2$, the triangle inequality holds for the Gowers norm, so
\begin{align*}
\|g1_{P(x)=\beta}\|_{U^{s+1}}&=\left\|\frac{1}{2^{k+1}}\sum_{\lambda=0}^{2^{k+1}-1}g(x)\expo{\lambda(P(x)-\beta)}\right\|_{U^{s+1}} \\
&\leq \frac{1}{2^{k+1}}\sum_{\lambda=0}^{2^{k+1}-1}\left\|g(x)\expo{\lambda(P(x)-\beta)}\right\|_{U^{s+1}}= \|g\|_{U^{s+1}},
\end{align*}
since $\left\|g(x)\expo{\lambda(P(x)-\beta)}\right\|_{U^{s+1}}=\|g\|_{U^{s+1}}$ when $P$ has degree at most $s$.
\end{proof}
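As a sanity check on the Fourier expansion used in this proof, the following sketch (the concrete depth $k=2$ is an arbitrary choice of ours, purely for illustration) verifies numerically that averaging $\expo{\lambda(P(x)-\beta)}$ over $\lambda=0,\dots,2^{k+1}-1$ recovers the indicator $1_{P(x)=\beta}$ for values in $\frac{1}{2^{k+1}}\mathbb{Z}/\mathbb{Z}$:

```python
import cmath

# Depth-k values live in (1/2^{k+1})Z/Z; k = 2 here is an arbitrary test case.
k = 2
m = 2 ** (k + 1)

def e(t):
    """The character e(t) = exp(2*pi*i*t) on the torus T = R/Z."""
    return cmath.exp(2j * cmath.pi * t)

# For each candidate value P(x) = a/m and each target beta = b/m, the average
# over lambda is 1 exactly when a == b and 0 otherwise (geometric series).
for a in range(m):
    for b in range(m):
        avg = sum(e(lam * (a - b) / m) for lam in range(m)) / m
        assert abs(avg - (1 if a == b else 0)) < 1e-9
```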
Repeated application of Lemma~\ref{lem:gowerspoly}, followed by the triangle inequality, gives that
\[\|g_2(x) 1_{[\mathcal B_h(x)=\tilde b_j]}\|_{U^{d+1}}\leq \|g_2\|_{U^{d+1}}\leq 2\|f_2\|_{U^{d+1}},\]
\noindent for $1\leq j\leq m$. So, since $\max_x |g_i(x)|\leq 2$ for $1\leq i\leq 3$, applying Lemma~\ref{lem:gowerscount} to $\{\frac{1}{2}g_{i_j}\}_{j=1}^m$ gives
\begin{align*}
& \left|\E_{X}\left[\prod_{j=1}^m g_{i_j}(L_j(X))1_{[\mathcal B_h(L_j(X))=\tilde b_j]}\right]\right|\leq 2^m \min_{1\leq j\leq m} \left\|\frac{1}{2} g_{i_j}(x)1_{[\mathcal B_h(x)=\tilde b_j]}\right\|_{U^{d+1}} \\
& \leq 2^m \left\|\frac{1}{2} g_2(x)1_{[\mathcal B_h(x)=\tilde b_j]}\right\|_{U^{d+1}}\leq 2^{m} \|f_2\|_{U^{d+1}} \leq 2^{m} \eta(|\mathcal B|),
\end{align*}
\noindent for each term where at least one of the $i_j$ is $2$.
Our probability is thus at least
\[\sum_{(i_1,\dots,i_m)\in \{1,3\}^m}\E_{X}\left[\prod_{j=1}^m g_{i_j}(L_j(X))1_{[\mathcal B_h(L_j(X))=\tilde b_j]}\right]-6^m \eta(|\mathcal B|).\]
When $x\in \tilde b_j$, we have $f_1(x),f_1(x+h)\geq \frac{1}{2}+\zeta$, so $g_1(x)=(1-f(x))(1-f(x+h))+f_1(x)+f_1(x+h)-1 \geq 2\zeta$. The argument after this is exactly as in the proof of the Counting Lemma, assuming that $\mathcal B_h$ is $r'$-regular for $r'$ sufficiently large. We get a lower bound on the main term by applying Theorem~\ref{thm:equidist}, and use Cauchy-Schwarz followed by Lemma~\ref{lem:equivConsistent} and Theorem~\ref{thm:nearOrtho} to deal with the terms involving $g_3$ but not $g_2$. By that argument, we are guaranteed at least $\beta \frac{(2^n)^{\ell-1}}{\|\mathcal B\|^{|N^*|}}$ copies of $N^*$ in $M_h$ as long as $\beta\leq \beta_2$, $\eta(|\mathcal B_h|)\leq \eta_2(|\mathcal B_h|),$ $\varepsilon\leq \varepsilon_2,$ $r'(|\mathcal B_h|)\geq r_2'(|\mathcal B_h|)$, and $n\geq \nu_2(|\mathcal B_h|)$ for some parameters $\beta_2,\eta_2,\varepsilon_2,r_2',\nu_2$ depending only on $N,d$. At most $O_\ell((2^n)^{\ell-2})$ of these copies intersect their shifts by $h$, so without loss of generality $\nu_2$ is large enough that a copy $A$ exists where $A$ is disjoint from $A+h$, and so $A\cup (A+h)\cup \{h\}$ forms a copy of $N$ in $M$.
It remains to show that, when $\chi(M)$ is sufficiently large, there exists an $h\in M$ where $\mathcal B_h$ is $r_2'$-regular and a point $Y\in (\mathbb{F}^n)^{\ell-1}$ such that $L_1(Y),\dots,L_m(Y)$ are in atoms $\tilde b_1,\dots,\tilde b_m$ of $\mathcal B_h$ contained in $D\cap (D+h)$ such that $\E[|f_3(x)|^2 \mid x\in \tilde b_j],\E[|f_3(x+h)|^2 \mid x\in \tilde b_j]$ are at most $\varepsilon^2$ for $1\leq j\leq m$. By Proposition~\ref{prop:regUni} we can, without loss of generality, require $r_2, r_2'$ to be large enough (depending only on $\zeta,d,C$) so that $\mathcal B$ and $\mathcal B_h$ are $\frac{\zeta}{16\|\mathcal B\|}$-uniform. In this case, by Corollary~\ref{cor:atomSize}, every potential atom of $\mathcal B_h$ consistent with the defining polynomials is nonempty, and in fact has size at least $\left(\frac{1}{\|\mathcal B_h\|}-\frac{\zeta}{16\|\mathcal B_h\|}\right)2^n$.
Recall that $W$ is a subspace of $V=\mathbb{F}^n$ of codimension at most $K$ such that $\mathcal B_h$ is $r_2'$-regular for all $h\in W$. For any $h\in W$ and $i\in[1,C]$, by our choice of $r_2'$, $\Delta_h P_i$ attains all values in $\frac{1}{2^{k_i'+1}}\mathbb{Z}/\mathbb{Z}$, where $k_i'$ is the depth of $\Delta_h P_i$. So, $P_i(x+h)-P_i(x)=\Delta_h P_i(x)+P_i(h)$ attains all values in a coset of $\frac{1}{2^{k_i'+1}}\mathbb{Z}/\mathbb{Z}$ in $\frac{1}{2^{k_i+1}}\mathbb{Z}/\mathbb{Z}\subset \mathbb{T}$. We will use the following proposition to narrow down our choice of $h$ to a slightly smaller subspace, in order to ensure that $P_i(h)\in \frac{1}{2^{k_i'+1}}\mathbb{Z}/\mathbb{Z}$.
\begin{prop}
Let $P$ be a (nonclassical) polynomial of depth at most $k$ on $V=\mathbb{F}^n$ and let $W$ be a subspace of $V$ such that for all $h\in W$, $\Delta_h P$ attains all values in $\frac{1}{2^{k_h'+1}}\mathbb{Z}/\mathbb{Z}$, where $k_h'$ is the depth of $\Delta_h P$. Then there exists a subspace $W'$ of $W$ with codimension at most $k+1$ in $W$, such that for $h\in W'$, $P(h)\in \frac{1}{2^{k_h'+1}}\mathbb{Z}/\mathbb{Z}$.
\end{prop}
\begin{proof}
Without loss of generality let $P$ have depth $k$, so for each $h$, $\Delta_h P$ has depth $k_h \leq k$. We use induction on $i$ to show that, for $0\leq i\leq k+1$, the set of $h$ for which $P(x+h)-P(x)$ attains no value in $\frac{1}{2^{k+1-i}}\mathbb{Z}/\mathbb{Z}$ is contained in the complement of a subspace of $W$ of codimension $i$. This gives a subspace of codimension at most $k+1$ where $P(x+h)-P(x)$ attains the value $0$, which implies $P(h)\in \frac{1}{2^{k_h'+1}}\mathbb{Z}/\mathbb{Z}$.
The statement is trivial for $i=0$ since by assumption, $\Delta_h P$ attains some value in $\frac{1}{2^{k_h'+1}}\mathbb{Z}/\mathbb{Z}\subseteq \frac{1}{2^{k+1}}\mathbb{Z}/\mathbb{Z}$ for all $h\in W$.
Assume the statement is true for $i=j\leq k$, so there is some subspace $W_j\leq W$ of codimension $j$ such that for all $h\in W_j$, $P(x+h)-P(x)$ attains some value in $\frac{1}{2^{k+1-j}}\mathbb{Z}/\mathbb{Z}$. Let $Z$ be the subset of $W_j$ consisting of the values of $h$ for which $P(x+h)-P(x)$ attains no value in $\frac{1}{2^{k+1-(j+1)}}\mathbb{Z}/\mathbb{Z}$. Since the values attained by $P(x+h)-P(x)$ form a coset of some subgroup $\frac{1}{2^{k_h'+1}}\mathbb{Z}/\mathbb{Z}$ of $\mathbb{T}$, and this coset must contain some element of $\frac{1}{2^{k+1-j}}\mathbb{Z}/\mathbb{Z}$ but avoid $0$, we must have $k_h'<k-j$. So, for all $h\in Z$, all values attained by $P(x+h)-P(x)$ are contained in $\frac{1}{2^{k+1-j}}+\frac{1}{2^{k-j}}\mathbb{Z}/\mathbb{Z}$.
Assume for the sake of contradiction that $Z$ contains an odd circuit, so for some $h_1,\dots,h_t\in Z$, $\sum_{s=1}^t h_s =0$, where $t$ is odd. But
\[0=P\left(x+\sum_{s=1}^t h_s\right)-P(x)=\sum_{s=1}^t \left(P\left(x+\sum_{u=1}^s h_u\right)-P\left(x+\sum_{u=1}^{s-1} h_u\right)\right),\]
which must take only values in $\sum_{s=1}^t \left(\frac{1}{2^{k+1-j}}+\frac{1}{2^{k-j}}\mathbb{Z}/\mathbb{Z}\right)=\frac{1}{2^{k+1-j}}+\frac{1}{2^{k-j}}\mathbb{Z}/\mathbb{Z}$. This is a contradiction. So $Z$ contains no odd circuits, and is thus contained in the complement of a hyperplane $W_{j+1}$ of $W_j$. Thus the statement is true for $i=j+1$ as well, and by induction is true for all $j\leq k+1$. This proves the proposition.\end{proof}
So, for each $P_i$ we have a subspace $W_i'$ of codimension at most $k_i+1$ in $W$ on which $P_i(h)\in \frac{1}{2^{k_i'+1}}\mathbb{Z}/\mathbb{Z}$. Intersecting all of these gives a subspace $W'$ of $W$ of codimension at most $K+\sum_{i=1}^C (k_i+1)\leq K+dC$ on which $P_i(h)\in \frac{1}{2^{k_i'+1}}\mathbb{Z}/\mathbb{Z}$ for all $i$.
If $\chi(M)>K+dC$, we can pick a fixed $h\in M\cap W'$, and let $k_i'$ be the depth of $\Delta_h P_i$ for each $i$. Since $h\in W'$, and $\mathcal B_h$ is sufficiently regular, the range of $P_i(x+h)-P_i(x)$ is $\frac{1}{2^{k_i'+1}}\mathbb{Z}/\mathbb{Z}$, with each value attained approximately equally often.
Let $D'$ be the union of the atoms $b$ of $\mathcal B$ that are contained in $D$ and satisfy $\E[|f_3(x+h)|^2 \mid x\in b]\leq \varepsilon'^2$. Since $\|f_3\|_{L^2}\leq \frac{1}{2}\varepsilon'\zeta^{1/2}$, at most a fraction $\frac{\zeta}{4}$ of elements of $\mathbb{F}^n$ are contained in an atom $b$ of $\mathcal B$ that does not satisfy $\E[|f_3(x+h)|^2 \mid x\in b]\leq \varepsilon'^2$. Thus $|D'|\geq |D|-\frac{\zeta}{4}2^n > (1-2^{2-c}+\frac{3\zeta}{4})2^n$.
We will now prove and use the following proposition, first stated in the introduction, which has a technical statement but a simple proof.
\begin{repprop}{prop:extBB}
Let $n,c$ be positive integers, let $k_1,\dots,k_n$ be nonnegative integers, and let $G = \bigoplus_{i=1}^n \frac{1}{2^{k_i+1}}\mathbb{Z}/\mathbb{Z}$. Let $H$ be a subgroup of $G$. Let $M_1,\dots,$ $M_{2^c-1}$ be subsets of $G$. Then there exist $H_1,\dots,H_c\in G/H$, cosets of $H$, such that for $1\leq i\leq c$,
\begin{equation}
\label{eq:extBB}
\frac{1}{|H|}\sum_{x\in[0,1]^{i-1}} \left | M_{2^{i-1}+\sum_{j=1}^{i-1} x_j 2^{j-1}}\cap \left( H_i + \sum_{j=1}^{i-1} x_j H_j \right ) \right | \geq \sum_{j=2^{i-1}}^{2^i-1}\frac{|M_j|}{|G|}.\tag{$*$}
\end{equation}
\end{repprop}
\begin{proof}
We will choose $H_1,\dots,H_c$ in order, greedily. For $1\leq i_0\leq c$, suppose $H_1,\dots,H_{i_0-1}$ have already been chosen such that \eqref{eq:extBB} holds for $1\leq i \leq i_0-1$.
Consider a uniformly random choice of $H_{i_0}\in G/H$. Taking an expectation gives
\begin{align*}
& \E_{H_{i_0}\in G/H}\frac{1}{|H|}\sum_{x=(x_1,\dots,x_{i_0-1})\in[0,1]^{i_0-1}} \left | M_{2^{i_0-1}+\sum_{j=1}^{i_0-1} x_j 2^{j-1}}\cap \left( H_{i_0} + \sum_{j=1}^{i_0-1} x_j H_j \right ) \right | \\
&= \frac{1}{|H|}\sum_{x=(x_1,\dots,x_{i_0-1})\in[0,1]^{i_0-1}} \E_{H'\in G/H}\left | M_{2^{i_0-1}+\sum_{j=1}^{i_0-1} x_j 2^{j-1}}\cap H' \right | \\
&= \sum_{j=2^{i_0-1}}^{2^{i_0}-1}\frac{|M_j|}{|G|},
\end{align*}
so for some choice of $H_{i_0}$, the inequality \eqref{eq:extBB} holds. Continuing in this fashion, we can successfully pick $H_i$ for all $i\in [1,c]$, as desired.
\end{proof}
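The greedy averaging in this proof is concrete enough to run. The sketch below uses a toy instance of our own devising ($G=\mathbb{Z}/4\oplus\mathbb{Z}/2$, $H=\{(0,0),(2,0)\}$, $c=2$, random sets $M_j$; none of this data comes from the text): it picks the cosets step by step and checks that \eqref{eq:extBB} holds at every step.

```python
import itertools
import random
from fractions import Fraction

# Toy instance (our illustration): G = Z/4 (+) Z/2, H = {(0,0),(2,0)}, c = 2,
# so the proposition involves sets M_1, M_2, M_3.
G = [(a, b) for a in range(4) for b in range(2)]
H = [(0, 0), (2, 0)]
c = 2

def add(u, v):
    return ((u[0] + v[0]) % 4, (u[1] + v[1]) % 2)

def coset(rep):
    return frozenset(add(rep, h) for h in H)

reps = sorted({min(coset(g)) for g in G})  # one representative per coset in G/H

random.seed(0)
M = {j: set(random.sample(G, random.randint(1, len(G)))) for j in range(1, 2 ** c)}

def lhs(i, chosen):
    """Left-hand side of (*) at index i, for coset representatives chosen[0..i-1]."""
    total = 0
    for x in itertools.product((0, 1), repeat=i - 1):
        idx = 2 ** (i - 1) + sum(x[j] * 2 ** j for j in range(i - 1))
        shift = chosen[i - 1]
        for j in range(i - 1):
            if x[j]:
                shift = add(shift, chosen[j])
        total += len(M[idx] & coset(shift))
    return Fraction(total, len(H))

def rhs(i):
    return sum(Fraction(len(M[j]), len(G)) for j in range(2 ** (i - 1), 2 ** i))

# Greedy step: the average of lhs over a uniformly random coset H_i equals
# rhs(i), so the maximizing coset attains at least rhs(i).
chosen = []
for i in range(1, c + 1):
    chosen.append(max(reps, key=lambda r: lhs(i, chosen + [r])))
    assert lhs(i, chosen) >= rhs(i)
```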
Let $G=\bigoplus_{i=1}^C \frac{1}{2^{k_i+1}}\mathbb{Z}/\mathbb{Z}$, and let $H$ be the subgroup of $G$ isomorphic to $\bigoplus_{i=1}^C \frac{1}{2^{k_i'+1}}\mathbb{Z}/\mathbb{Z}$, where each term is a subgroup of the corresponding term in $G$. To see how this relates to the problem at hand, consider a fixed tuple $b=(b_1,\dots,b_m)\in G^m$, where $b_j=(b_{j,1},\dots,b_{j,C})$. When $\mathcal B_h$ is sufficiently regular, the set of tuples $b'=(b_1',\dots,b_m')\in G^m$ for which there is a point $X\in (\mathbb{F}^n)^{\ell-1}$ satisfying $P_i(L_j(X))=b_{j,i}$, $P_i(L_j(X)+h)=b_{j,i}'$ for all $i,j$ is exactly the set of those for which $b_j,b_j'$ are in the same coset of $H$ for all $j$.
Let $S$ be the subset of $G$ consisting of the points $g=(g_1,\dots,g_C)$ such that the atom where $P_i=g_i$ for all $i$ is contained in $D'$.
By Corollary~\ref{cor:atomSize} and our choice of $r_2$, every atom of $\mathcal B$ has size at most $(\frac{1}{\|\mathcal B\|}+\frac{\zeta}{16\|\mathcal B\|})2^n$. So, since $|D'|>(1-2^{2-c}+\frac{3\zeta}{4})2^n$ and $|G|=\|\mathcal B\|$, we have $\frac{|S|}{|G|}> \frac{1-2^{2-c}+\frac{3\zeta}{4}}{1+\frac{\zeta}{16}}>1-2^{2-c}+\frac{5}{8}\zeta.$
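The last numerical step here is elementary but worth recording: using $\frac{1}{1+x}\geq 1-x$ for $x\geq 0$,
\[
\frac{1-2^{2-c}+\frac{3\zeta}{4}}{1+\frac{\zeta}{16}}
\geq\Big(1-2^{2-c}+\frac{3\zeta}{4}\Big)\Big(1-\frac{\zeta}{16}\Big)
\geq 1-2^{2-c}+\frac{11\zeta}{16}-\frac{3\zeta^{2}}{64}
>1-2^{2-c}+\frac{5\zeta}{8},
\]
where the final inequality uses $\zeta<\frac{4}{3}$.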
By Proposition~\ref{prop:extBB} with $M_1=\dots=M_{2^{c-1}-1}=S$, we have cosets $H_1,\dots,$ $H_{c-1}$ of $H$ for which \eqref{eq:extBB} holds for $1\leq i \leq c-1$. So, for $1\leq i\leq c-1$,
\[\frac{1}{|H|}\sum_{x\in[0,1]^{i-1}} \left | S\cap \left( H_i + \sum_{j=1}^{i-1} x_j H_j \right ) \right | \geq 2^{i-1}\frac{|S|}{|G|} > 2^{i-1}(1-2^{2-c}+\frac{5}{8}\zeta).\]
Pick representatives $h_1,\dots,h_{c-1}$ for the cosets $H_1,\dots,H_{c-1}$. For nonzero $x\in [0,1]^{c-1}$, let $M_{\sum_{j=1}^{c-1} x_j 2^{j-1}}=(S\cap (\sum_{j=1}^{c-1} x_j H_j)) - \sum_{j=1}^{c-1} x_j h_j$. Applying Proposition~\ref{prop:extBB} again with $H$ as the group, the trivial group as the subgroup, and the $\{M_j\}_{j=1}^{2^{c-1}-1}$ just defined, we get points $e_1,\dots,e_{c-1}\in H$ such that for $1\leq i\leq c-1$,
\[
\sum_{x\in[0,1]^{i-1}} \left | M_{2^{i-1}+\sum_{j=1}^{i-1} x_j 2^{j-1}}\cap \left \{ e_i + \sum_{j=1}^{i-1} x_j e_j \right \} \right | \geq \sum_{j=2^{i-1}}^{2^i-1}\frac{|M_j|}{|H|}.
\]
The right hand side is
\begin{align*}
& \sum_{x\in [0,1]^{i-1}}\frac{|M_{2^{i-1} +\sum_{j=1}^{i-1} x_j 2^{j-1}}|}{|H|} \\
&= \sum_{x\in [0,1]^{i-1}}\frac{|S\cap (H_i+\sum_{j=1}^{i-1} x_j H_j)|}{|H|} \\
&> 2^{i-1}\left(1-2^{2-c}+\frac{5}{8}\zeta\right),
\end{align*}
where the last inequality is from the first application of Proposition~\ref{prop:extBB}. Since $i\leq c-1$, this is strictly greater than $2^{i-1}-1$.
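Explicitly,
\[
2^{i-1}\Big(1-2^{2-c}+\frac{5\zeta}{8}\Big)=2^{i-1}-2^{i+1-c}+\frac{5\zeta}{8}\,2^{i-1}>2^{i-1}-1,
\]
since $2^{i+1-c}\leq 1$ when $i\leq c-1$.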
On the other hand, the left hand side is
\begin{align*}
& \sum_{x\in[0,1]^{i-1}} \left | M_{2^{i-1}+\sum_{j=1}^{i-1} x_j 2^{j-1}}\cap \left \{ e_i + \sum_{j=1}^{i-1} x_j e_j \right \} \right | \\
& = \sum_{x\in[0,1]^{i-1}} \left | \left(\left(S\cap \left(H_i+\sum_{j=1}^{i-1} x_j H_j\right)\right) - \left(h_i + \sum_{j=1}^{i-1} x_j h_j\right)\right) \cap \left \{ e_i + \sum_{j=1}^{i-1} x_j e_j \right \} \right | \\
& = \left |\left\{x\in [0,1]^{i-1} \mid (h_i+e_i) + \sum_{j=1}^{i-1} x_j (h_j + e_j) \in \left(S\cap \left(H_i+\sum_{j=1}^{i-1} x_j H_j\right)\right)\right\}\right |,
\end{align*}
\noindent which is an integer in the interval $[0,2^{i-1}]$. So this integer must be $2^{i-1}$, meaning that for each nonzero $x\in [0,1]^{c-1}$, we have $\sum_{j=1}^{c-1} x_j (h_j + e_j) \in M_{\sum_{j=1}^{c-1} x_j 2^{j-1}}\subseteq S$.
Let $b_{\sum_{j=1}^{c-1} x_j 2^{j-1}}=\sum_{j=1}^{c-1} x_j (h_j + e_j)$ for each nonzero $x\in [0,1]^{c-1}$. We claim that, for $1\leq i\leq C$, the atoms of $G$ corresponding to $b_1,\dots,b_{2^{c-1}-1}$ are $(d_i,k_i)$-consistent with the system of linear forms $\{L_1,\dots,L_{2^{c-1}-1}\}$ corresponding to $PG(c-1,2)$. Indeed, letting $v_j=h_j+e_j=(v_{j,1},\dots,v_{j,C})$, the function $Q_i(X)=\sum_{j=1}^{c-1} v_{j,i}|x_j|$ is a polynomial of depth at most $k_i$ and degree at most $k_i+1\leq d_i$. Thus, there is a copy of $PG(c-1,2)$ in $D'$ whose elements lie in the atoms $b_1,\dots,b_{2^{c-1}-1}$ of $\mathcal B$.
The next step is to find appropriate atoms $\tilde b_1,\dots,\tilde b_{2^{c-1}-1}$ of $\mathcal B_h$ contained in these atoms of $\mathcal B$. If we fix points $g_1,\dots,g_{2^{c-1}-1}\in H$, where $g_j=(g_{j,1},\dots,g_{j,C})$, then for $j\in [1,2^{c-1}-1]$ we can let $\tilde b_j$ be the atom of $\mathcal B_h$ on which $P_i = b_{j,i}$, $\Delta_h P_i = g_{j,i}+b_{j,i}$ for all $i\in [1,C]$. For $j\in [1,2^{c-1}-1]$, let $S_j$ be the subset of $H$ consisting of the points $g_j$ for which $b_j+g_j\in S$ and the atom $\tilde b_j$ so defined satisfies $\E[|f_3(x)|^2 \mid x\in \tilde b_j],\E[|f_3(x+h)|^2 \mid x\in \tilde b_j]\leq \varepsilon^2$.
Since for $b_j\in S$ we have $\E[|f_3(x)|^2 \mid x\in b_j],\E[|f_3(x+h)|^2 \mid x\in b_j]\leq \varepsilon'^2=\frac{1}{4}\varepsilon^2 \zeta$, at most a fraction $\frac{\zeta}{4}$ of the elements of $b_j$ are in an atom $\tilde b_j$ of $\mathcal B_h$ satisfying $\E[|f_3(x)|^2 \mid x\in \tilde b_j]>\varepsilon^2$, and similarly at most a fraction $\frac{\zeta}{4}$ of the elements of $b_j$ are in an atom $\tilde b_j$ of $\mathcal B_h$ satisfying $\E[|f_3(x+h)|^2 \mid x\in \tilde b_j]>\varepsilon^2$. By our choice of $r_2$ and $r_2'$, every atom of $\mathcal B$ has size at most $(\frac{1}{\|\mathcal B\|}+\frac{\zeta}{16\|\mathcal B\|})2^n$ and
every atom of $\mathcal B_h$ has size at least $(\frac{1}{\|\mathcal B_h\|}-\frac{\zeta}{16\|\mathcal B_h\|})2^n$, so the number of atoms of $\mathcal B_h$ for which one of the two bounds is exceeded is at most $\frac{\zeta}{2}\frac{\left(\frac{1}{\|\mathcal B\|}+\frac{\zeta}{16\|\mathcal B\|}\right)2^n}{\left(\frac{1}{\|\mathcal B_h\|}-\frac{\zeta}{16\|\mathcal B_h\|}\right)2^n}=\frac{\zeta}{2}\frac{1+\frac{\zeta}{16}}{1-\frac{\zeta}{16}}|H|\leq \frac{17}{30}\zeta|H|$. That is,
$|S_j|\geq |(S-b_j)\cap H|-\frac{17}{30}\zeta|H|$. Applying Proposition~\ref{prop:extBB} with $H$ for the group, the trivial group for the subgroup, and sets $S_1,\dots,S_{2^{c-1}-1}$ gives points $p_1,\dots,p_{c-1}\in H$ such that for $1\leq i\leq c-1$,
\[
\sum_{x\in[0,1]^{i-1}} \left | S_{2^{i-1}+\sum_{j=1}^{i-1} x_j 2^{j-1}}\cap \left \{ p_i + \sum_{j=1}^{i-1} x_j p_j \right \} \right | \geq \sum_{j=2^{i-1}}^{2^i-1}\frac{|S_j|}{|H|}.
\]
The right hand side is
\begin{align*}
& \sum_{x\in[0,1]^{i-1}} \frac{\left | S_{2^{i-1}+\sum_{j=1}^{i-1} x_j 2^{j-1}} \right |}{|H|} \\
& \geq \sum_{x\in[0,1]^{i-1}} \frac{|(S-b_{2^{i-1}+\sum_{j=1}^{i-1} x_j 2^{j-1}})\cap H|-\frac{17}{30}\zeta|H|}{|H|} \\
& = \sum_{x\in[0,1]^{i-1}} \frac{\left | S\cap \left( H_i + \sum_{j=1}^{i-1} x_j H_j \right ) \right | - \frac{17}{30}\zeta|H|}{|H|} \\
& > 2^{i-1} \left(1-2^{2-c}+\frac{5}{8}\zeta - \frac{17}{30}\zeta \right ) > 2^{i-1}(1-2^{2-c}).
\end{align*}
This is greater than $2^{i-1}-1$. So, by the same argument as before, if for all nonzero $x\in [0,1]^{c-1}$ we let $g_{\sum_{j=1}^{c-1} x_j 2^{j-1}} = \sum_{j=1}^{c-1} x_j p_j$, then $g_j\in S_j$ for all $j$. Also as before, for $i\in [1,C]$, $(g_{1,i},\dots,g_{2^{c-1}-1,i})$ is $(d_i',k_i')$-consistent with the system of linear forms $\{L_1,\dots,L_{2^{c-1}-1}\}$. So as long as $r_2'$ is sufficiently large, by Theorem~\ref{thm:equidist}, there is a copy of $PG(c-1,2)$ whose elements are contained in the atoms $\tilde b_1,\dots, \tilde b_{2^{c-1}-1}$ of $\mathcal B_h$, respectively. Thus there is a homomorphism from $N^*$ into the union of these atoms, so by the Counting Lemma, if $n\geq \nu_2$ is sufficiently large, there is a copy of $N^*$ in the union of these atoms. By construction, these atoms satisfy the conditions on $f_1$ and $f_3$ that we were looking for, so applying the first part of our argument finishes the proof of our conditional result.
\end{proof}
\section{Future Steps}
The Counting Lemma has potential for giving simple proofs to other extremal results on matroids whose graph theory analogues are proven using the Szemerédi regularity lemma and its associated counting lemma, including giving short new proofs for known results (as for Theorem~\ref{thm:gES}). A search through more such results in extremal graph theory which yield matroid analogues may well prove fruitful.
In terms of the critical threshold problem, if Conjecture~\ref{conj:polydifreg} is shown to be true, then Theorem~\ref{thm:conditional} constitutes a proof of a somewhat more general subcase of the $i=3$ case of Conjecture~\ref{conj:main} than that given by Theorem~\ref{thm:tidor}. Regardless of whether Conjecture~\ref{conj:polydifreg} is true (or simple to prove), the methods used in the proof of Theorem~\ref{thm:conditional} seem to offer a new approach to the critical threshold problem, that of using a strong form of regularity and attempting to analyze constructions like $M\cap (M+h)$ through counting lemma-type arguments. The same types of arguments seem equally suited for some special subcases of the $i=4$ case of Conjecture~\ref{conj:main}, and could perhaps be adapted for more general cases as well. Finally, Proposition~\ref{prop:extBB}, as a simple generalization of the Bose-Burton theorem that makes its proof completely transparent, may be applicable to computing critical thresholds independently of the more sophisticated machinery we have developed.
\section*{Acknowledgements}
This research was conducted at the University of Minnesota Duluth REU and was supported by NSF grant 1358659 and NSA grant H98230-16-1-0026. The author thanks Joe Gallian for suggesting the problem and for helpful comments on the manuscript.
\bibliographystyle{acm}